Additionally, for switches with Cisco IOS Software, these messages appear for link up or down situations (in this example, on the FastEthernet0/1 interface):
%LINK-3-UPDOWN: Interface FastEthernet0/1, changed state to up
%LINK-3-UPDOWN: Interface FastEthernet0/1, changed state to down
In an environment using Category 3 wiring, the maintenance crew installs a new air conditioning system that introduces new EMI sources into the environment. In an environment using Category 5 wiring, cabling is run too close to an elevator motor. Poor cable management puts strain on RJ-45 connectors, causing one or more wires to break. New applications can change traffic patterns.
Something as simple as a user connecting a hub to a switch port in an office in order to connect a second PC can cause an increase in collisions. Damaged wiring and EMI commonly show up as excessive collisions and noise. Changes in traffic patterns and the installation of a hub show up as collisions and runt frames. These symptoms are best viewed using the show interface command. To display information about a specific Fast Ethernet interface, use the show interfaces fastethernet 0/0 command. The resulting output varies, depending on the network for which an interface has been configured. The key fields in the output are described below.

Interface status: Indicates whether the interface hardware is currently active or whether an administrator has disabled it. If the interface is shown as "disabled," the device has received more than 5000 errors in a keepalive interval, which is 10 seconds by default.

Line protocol: If the line protocol is shown as "down" or "administratively down," the software processes that manage the line protocol consider the interface unusable (because of unsuccessful keepalives) or an administrator has disabled the interface.

Input errors, including cyclic redundancy check (CRC) errors and framing errors: Total number of errors that are related to no buffer, runt, giant, CRC, frame, overrun, ignored, and abort issues. Other input-related errors can also increment the count, so this sum might not balance with the other counts.

Overruns: Number of times that the receiver hardware was unable to hand received data to a hardware buffer because the input rate exceeded the ability of the receiver to process the data.

Output errors: Sum of all errors that prevented the final transmission of datagrams out of the interface.

Collisions: Number of messages that are retransmitted because of an Ethernet collision, which is usually the result of an overextended LAN. LANs can become overextended when an Ethernet or transceiver cable is too long or when there are more than two repeaters between stations. Duplex mismatch is one of the most common reasons for collisions.

Interface resets: Number of times an interface has been completely reset.
Local EMI is commonly known as noise. There are four types of noise that are most significant to data networks:
Impulse noise, which is caused by voltage fluctuations or current spikes that are induced on the cabling.
Random (white) noise, which is generated by many sources, such as FM radio stations, police radios, building security systems, and avionics for automated landing.
Alien crosstalk, which is noise that is induced by other cables in the same pathway.
Near-end crosstalk, which is noise originating from crosstalk from other adjacent cables, or noise from nearby electric cables, devices with large electric motors, or anything that includes a transmitter more powerful than a cell phone.
When you are troubleshooting issues that are related to excessive noise, three steps are suggested to help isolate and resolve the issue:
1. Use the show interface fastethernet EXEC command to determine the status of the Fast Ethernet interfaces of the device. The presence of many CRC errors but not many collisions is an indication of excessive noise.
2. Inspect the cables for damage.
3. If you are using 100BASE-TX, make sure that you are using Category 5 cabling.
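The heuristic in step 1 — many CRC errors alongside few collisions — can be expressed as a quick check when reviewing counters from several interfaces. A minimal sketch; the ratio threshold is an illustrative assumption, not a Cisco-defined value:

```python
def noise_suspected(crc_errors, collisions, ratio=10):
    """Heuristic from step 1: CRC errors far outnumbering collisions
    points to excessive noise rather than a collision problem."""
    return crc_errors > ratio * max(collisions, 1)

# Example counters as read from show interface output
print(noise_suspected(250, 3))  # True: CRC errors dominate, suspect noise
print(noise_suspected(5, 40))   # False: collisions dominate instead
```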
Collision domain problems affect the local medium and disrupt communications to Layer 2 or Layer 3 infrastructure devices, local servers, or services. Collisions are normally a more significant problem on shared media than on switch ports. Average collision counts on shared media should generally be below 5 percent, although that number is conservative. Be sure that judgments are based on the average and not on a peak or spike in collisions.

Collision-based problems may often be traced back to a single source. It may be a bad cable to a single station, a bad uplink cable or port on a hub, or a link that is exposed to external electrical noise. A noise source near a cable or hub can cause collisions even when there is no apparent traffic to cause them. If collisions get worse in direct proportion to the level of traffic, if the number of collisions approaches 100 percent, or if there is no good traffic at all, the cable system may have failed. When you are troubleshooting issues that are related to excessive collisions, three steps are suggested to help isolate and resolve the issue:
1. Use the show interface ethernet command to check the rate of collisions. The total number of collisions, compared to the total number of output packets, should be 0.1 percent or less.
2. Use a time domain reflectometer (TDR), a device that sends signals through a network medium to check cable continuity and other attributes, to find any unterminated Ethernet cables.
3. Look for a jabbering transceiver that is attached to a host. Jabber occurs when a device that is experiencing a circuitry or logic failure continuously sends random (garbage) data. This issue might require host-by-host inspection or the use of a protocol analyzer.
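The 0.1 percent rule is easy to script when auditing counters across many interfaces. A minimal sketch; the counter values are illustrative:

```python
def collisions_acceptable(collisions, output_packets, threshold=0.001):
    """Return True when collisions are 0.1% of output packets or less."""
    if output_packets == 0:
        return True  # nothing transmitted, nothing to judge
    return collisions / output_packets <= threshold

# Example counters as reported by show interface
print(collisions_acceptable(12, 100000))   # True: 0.012% collision rate
print(collisions_acceptable(500, 100000))  # False: 0.5% exceeds the threshold
```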
A late collision is one that occurs after the first 512 bits (64 bytes) of the frame have been transmitted. The most common cause of late collisions is that the Ethernet cable segments are too long for the speed at which you are transmitting. When you are troubleshooting issues that are related to excessive late collisions, two steps are suggested to help isolate and resolve the issue. You can use the show interfaces command to check for FCS and late collision errors, as shown in the example below:
SwitchX#show interfaces fastethernet0/1
FastEthernet0/1 is up, line protocol is up (connected)
  Hardware is Fast Ethernet, address is 0022.91c4.0e01 (bia 0022.91c4.0e01)
  MTU 1500 bytes, BW 100000 Kbit, DLY 100 usec,
     reliability 255/255, txload 1/255, rxload 1/255
<output omitted>
     0 output errors, 0 collisions, 1 interface resets
     0 babbles, 0 late collision, 0 deferred
     0 lost carrier, 0 no carrier, 0 PAUSE output
     0 output buffer failures, 0 output buffers swapped out
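When many switches must be checked, counters like these can be scraped from the CLI text. A rough Python sketch; the trimmed sample text and the regular expression are illustrative assumptions, not a Cisco-provided API:

```python
import re

# Trimmed sample in the style of show interfaces output
sample = """0 output errors, 0 collisions, 1 interface resets
0 babbles, 0 late collision, 0 deferred"""

def parse_counters(text):
    """Extract '<count> <counter name>' pairs from show-interfaces style text."""
    counters = {}
    for count, name in re.findall(r"(\d+) ([A-Za-z ]+?)(?:,|$)", text, re.M):
        counters[name.strip()] = int(count)
    return counters

counters = parse_counters(sample)
print(counters["interface resets"])  # 1
print(counters["late collision"])    # 0
```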
WLANs use Carrier Sense Multiple Access with Collision Avoidance (CSMA/CA) instead of Carrier Sense Multiple Access with Collision Detection (CSMA/CD), which is used by Ethernet LANs. Collision detection is not possible in WLANs, because a sending station cannot receive at the same time that it is transmitting and, therefore, cannot detect a collision. Instead, WLANs use the Request to Send (RTS) and Clear to Send (CTS) protocols to avoid collisions.
WLANs use a different frame format than wired Ethernet LANs. WLANs require additional information in the Layer 2 header of the frame.
Radio waves cause problems in WLANs that are not found in wired LANs:
Connectivity issues occur in WLANs because of coverage problems, RF transmission, multipath distortion, and interference from other wireless services or other WLANs.
Privacy issues occur because radio frequencies can reach outside the facility.
In WLANs, mobile clients connect to the network through an access point, which is roughly the equivalent of a wired Ethernet hub (although an access point has some Layer 2 features that also give it characteristics of a switch):
Mobile clients do not have a physical connection to the network. Mobile devices are often battery-powered, as opposed to plugged-in LAN devices.
WLANs must meet country-specific RF regulations. The aim of standardization is to make WLANs available worldwide. Because WLANs use radio frequencies, they must follow country-specific regulations of RF power and frequencies. This requirement does not apply to wired LANs.
RF Transmission
Radio frequencies range from the AM radio band to frequencies used by cell phones. Radio frequencies are radiated into the air by antennas that create radio waves. When radio waves are propagated through objects, they may be absorbed (for instance, by walls) or reflected (for instance, by metal surfaces). This absorption and reflection may cause areas of low signal strength or low signal quality. The transmission of radio waves is influenced by the following factors:
Reflection: Occurs when RF waves bounce off objects (for example, metal or glass surfaces).
Scattering: Occurs when RF waves strike an uneven surface (for example, a rough surface) and are reflected in many directions.
Absorption: Occurs when RF waves are absorbed by objects (for example, walls).
The following rules apply for data transmission over radio waves:
Higher data rates have a shorter range because the receiver requires a stronger signal with a better signal-to-noise ratio (SNR) to retrieve the information.
Higher transmit power results in a greater range. To double the range, the power has to be increased by a factor of 4.
Higher data rates require more bandwidth. Increased bandwidth is possible with higher frequencies or more-complex modulation.
Higher frequencies have a shorter transmission range because they suffer greater degradation and absorption. This problem can be addressed with more-efficient antennas.
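The range-versus-power rule follows from free-space (inverse-square) propagation and can be checked numerically. A small sketch; the power and range values are illustrative:

```python
def required_power_mw(base_power_mw, base_range_m, target_range_m):
    """Inverse-square rule: required power grows with the
    square of the range ratio."""
    return base_power_mw * (target_range_m / base_range_m) ** 2

# Doubling the range from 100 m to 200 m requires 4 times the power
print(required_power_mw(25, 100, 200))  # 100.0 mW
```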
900-MHz band: 902 to 928 MHz.
2.4-GHz band: 2.400 to 2.483 GHz. (In Japan, this band extends to 2.495 GHz.)
5-GHz band: 5.150 to 5.350 GHz and 5.725 to 5.825 GHz, with some countries supporting middle bands between 5.350 and 5.725 GHz. Not all countries permit IEEE 802.11a, and the available spectrum varies widely. The list of countries that permit 802.11a is changing.
Next to the WLAN frequencies in the spectrum are other wireless services such as cellular phones and narrowband Personal Communications Service (PCS). The frequencies that are used for WLANs are ISM bands. A license is not required to operate wireless equipment on unlicensed frequency bands. However, no user has exclusive use of any frequency. For example, the 2.4-GHz band is used for WLANs, video transmitters, Bluetooth, microwave ovens, and portable phones. Unlicensed frequency bands offer best-effort use, and interference and degradation are possible.

Even though these three frequency bands do not require a license to operate equipment, they are still subject to local country code regulations. Countries regulate areas such as transmitter power, antenna gain (which increases the effective power), and the sum of transmitter power, cable loss, and antenna gain.

Note: The number of channels that are available and the transmission parameters are regulated by country regulations. Each country allocates radio spectrum channels to various services. Refer to the country regulations and product documentation for specific details for each regulatory domain.

Effective Isotropic Radiated Power (EIRP) is the final unit of measurement that is monitored by local regulatory agencies. EIRP is the radiated power from the device, including the antenna, cables, and other components of the WLAN system that are attached to it. By changing the antenna, cables, or transmitter power, the EIRP can change and exceed the allowed value. Therefore, caution should be used when replacing a component of wireless equipment; for example, when adding or upgrading an antenna to increase the range. The possible result could be a WLAN that is illegal under local codes.

EIRP = transmitter power + antenna gain - cable loss

Note: Only use antennas and cables that are supplied by the original manufacturer and listed for the specific access point implementation.
Only use qualified technicians who understand the many requirements of the RF regulatory codes for that country.
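The EIRP formula works in decibel units, so the terms add and subtract directly. A minimal sketch; the component values are illustrative:

```python
def eirp_dbm(tx_power_dbm, antenna_gain_dbi, cable_loss_db):
    """EIRP = transmitter power + antenna gain - cable loss (decibel units)."""
    return tx_power_dbm + antenna_gain_dbi - cable_loss_db

# A 15-dBm transmitter with a 6-dBi antenna and 2 dB of cable loss
print(eirp_dbm(15, 6, 2))  # 19 dBm

# Upgrading to a 9-dBi antenna raises EIRP and may exceed local limits
print(eirp_dbm(15, 9, 2))  # 22 dBm
```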
By design, the standard does not address the upper layers of the OSI model. The original IEEE 802.11 standard was defined using Direct Sequence Spread Spectrum (DSSS). DSSS uses just one channel that spreads the data across all frequencies that are defined by that channel. IEEE 802.11 divided the 2.4-GHz ISM band into 14 channels, but local regulatory agencies such as the FCC designate which channels are allowed; for example, channels 1 through 11 are allowed in the United States. Each channel in the 2.4-GHz ISM band is 22 MHz wide with 5-MHz separation between channel centers, resulting in overlap with the channels before and after a defined channel. Therefore, a separation of five channels is needed to ensure unique, nonoverlapping channels. For example, using the 11 FCC channels, there are three nonoverlapping channels: 1, 6, and 11. Remember that wireless uses half-duplex communication, so the basic throughput is only about half of the data rate.

Because of this limitation, the main development goal of IEEE 802.11b was to achieve higher data rates within the 2.4-GHz ISM band, and thereby to continue to grow the Wi-Fi consumer market and encourage consumer acceptance of Wi-Fi. IEEE 802.11b defined the usage of DSSS with a newer encoding or modulation, Complementary Code Keying (CCK), for higher data rates of 5.5 and 11 Mb/s, while retaining the coding for 1 and 2 Mb/s. IEEE 802.11b still uses the same 2.4-GHz ISM band as the prior 802.11 standard, making it backward-compatible with that standard and its associated data rates of 1 and 2 Mb/s.

The same year that the 802.11b standard was adopted, IEEE developed another standard, known as 802.11a. This standard was motivated by the goal of increasing data rates by using a different spread spectrum and modulation technology, orthogonal frequency-division multiplexing (OFDM), and by using the less-crowded 5-GHz UNII frequencies. The 2.4-GHz ISM band was widely used by many wireless devices, such as Bluetooth, cordless phones, monitors, video, and home gaming consoles.
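The five-channel separation rule follows from the channel geometry: 2.4-GHz channel centers sit 5 MHz apart while each channel is 22 MHz wide. A small sketch, using the standard center-frequency formula for channels 1 through 13 (channel 1 = 2412 MHz):

```python
def center_mhz(channel):
    """Center frequency of a 2.4-GHz ISM channel (channel 1 = 2412 MHz)."""
    return 2407 + 5 * channel

def channels_overlap(a, b, width_mhz=22):
    """Two channels overlap when their centers are closer
    than one channel width."""
    return abs(center_mhz(a) - center_mhz(b)) < width_mhz

print(channels_overlap(1, 6))   # False: 25-MHz separation, non-overlapping
print(channels_overlap(1, 5))   # True: only 20-MHz separation
```

This is why 1, 6, and 11 are the three non-overlapping choices among the 11 FCC channels.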
The 802.11a standard was not as widely accepted because materials that were needed to manufacture chips that supported 802.11a were less readily available, which initially resulted in higher cost. Most applications satisfied their requirements for wireless support by following the cheaper and more accessible 802.11b standard.

Continued development by IEEE maintained usage of the 802.11 MAC and obtained higher data rates in the 2.4-GHz ISM band. The IEEE 802.11g amendment uses the newer OFDM from 802.11a for higher speeds, yet is backward-compatible with 802.11b using DSSS, which was already using the same ISM frequency band. DSSS data rates of 1, 2, 5.5, and 11 Mb/s are supported, as are OFDM data rates of 6, 9, 12, 18, 24, 36, 48, and 54 Mb/s.

The most recent development by IEEE is the completed 802.11n standard, an upgrade to the 802.11 protocol. The project was a multiyear effort to standardize and upgrade beyond the 802.11g standard. IEEE 802.11n provides a new set of capabilities that dramatically improve the reliability of communications, the predictability of coverage, and the overall throughput of devices. The 802.11n protocol has several enhancements in the physical layer and the MAC sublayer that provide exceptional benefits to wireless deployments. The four key features are as follows:
Multiple-input, multiple-output (MIMO): MIMO uses the diversity and duplication of signals across multiple transmit and receive antennas.
40-MHz operation: Bonds adjacent channels, combined with some of the reserved channel space between the two, to more than double the data rate.
Frame aggregation: Reduces the overhead of 802.11 by coalescing multiple packets together.
Backward compatibility: Makes it possible for 802.11a/b/g and 802.11n devices to coexist, allowing customers to phase in their access point and client migrations over time.
The 802.11n standard supports the 2.4- and 5-GHz frequency bands and adopted an OFDM modulation method. Both 20-MHz and 40-MHz channel bandwidths are supported; 20-MHz bandwidth is used for backward compatibility.

IEEE 802.11n continues the modulation evolution. IEEE 802.11n uses OFDM like the 802.11a and 802.11g standards. However, 802.11n increases the number of subcarriers in each 20-MHz channel from 48 to 52. IEEE 802.11n provides a selection of eight data rates for a transmitter, including a data rate using 64 quadrature amplitude modulation (QAM) with a rate-5/6 encoder. Together, these changes increase the data rate to a maximum of 72.2 Mb/s for a single-transmit radio. Via spatial division multiplexing, 802.11n also increases the number of transmitters allowable to four. Two transmitters give a maximum data rate of 144 Mb/s, three provide a maximum data rate of 216 Mb/s, and the maximum of four transmitters can deliver 288 Mb/s. When using 40-MHz channels, 802.11n increases the number of subcarriers available to 108, providing maximum data rates of 150, 300, 450, and 600 Mb/s for one through four transmitters, respectively. The data rates depend on the OFDM mode of operation.

IEEE 802.11n has the ability to dramatically increase the capacity of a WLAN, the effective throughput of every client, and the reliability of the networking experience for the client.
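The rate figures above scale linearly with the number of spatial streams, which is easy to verify. A short sketch using the per-stream peaks given in the text:

```python
# Peak per-stream PHY rates from the text: 72.2 Mb/s in a 20-MHz channel,
# 150 Mb/s in a 40-MHz channel
PER_STREAM_MBPS = {20: 72.2, 40: 150.0}

def peak_rate_mbps(streams, channel_mhz):
    """802.11n peak rate for 1-4 spatial streams."""
    return streams * PER_STREAM_MBPS[channel_mhz]

print(peak_rate_mbps(4, 40))            # 600.0 Mb/s
print(round(peak_rate_mbps(3, 20), 1))  # 216.6 Mb/s, quoted as 216 in the text
```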
Wi-Fi Certification
Even after the 802.11 standards were established, there was a need to ensure interoperability among 802.11 products. The Wi-Fi Alliance is a global, nonprofit industry trade association that is devoted to promoting the growth and acceptance of WLANs. One of the primary benefits of the Wi-Fi Alliance is to ensure interoperability among 802.11 products that are offered by various vendors. The Wi-Fi Alliance provides a certification for each product as proof of interoperability. Certified vendor interoperability provides a comfort zone for purchasers. Certification includes all three IEEE 802.11 RF technologies, as well as early adoption of pending IEEE drafts, such as the one addressing security. The Wi-Fi Alliance adapted the IEEE 802.11i draft security as WPA and then revised it to Wi-Fi Protected Access 2 (WPA2) after the final release of IEEE 802.11i.
Here is the IEEE 802.11 Standards Comparison Chart.
Course: Cisco ICND1 1.1: Implementing Wireless LANs Topic: Wireless Transmissions, Standards, and Certification
At worst, a rogue access point could be configured to gain access to servers and files. A simple and common version of a rogue access point is one installed by employees without authorization. Employee access points that are intended for home use and are configured without the necessary security can create a security risk in the enterprise network.
Authentication, to ensure that legitimate clients and users access the network via trusted access points.
Encryption, to provide privacy and confidentiality.
Intrusion detection systems (IDSs) and intrusion prevention systems (IPSs), to protect against security risks and preserve availability.
The fundamental solution for wireless security is authentication and encryption to protect the wireless data transmission. These two wireless security solutions can be implemented in varying degrees; however, both apply to small office, home office (SOHO) networks and to large enterprise wireless networks. Larger enterprise networks need the additional security that is offered by an IPS monitor. Current IPSs not only detect wireless network attacks but also provide basic protection against unauthorized clients and access points. Many enterprise networks use IPSs for protection not primarily against outside threats, but mainly against unintentional access points that are installed by employees who desire the mobility and benefits of wireless.
Disabling the SSID broadcast was an early technique intended to make it more difficult for hackers to find the WLAN. To allow the client to learn the access point SSID, 802.11 allows wireless clients to use a null value (that is, no value is entered in the SSID field), thereby requesting that the access point broadcast its SSID. However, this technique renders the security effort ineffective because hackers need only send a null string until they find an access point. Access points also supported filtering based on MAC address. Tables are manually constructed on the access point to allow clients based on their physical hardware address. However, MAC addresses are easily spoofed, and MAC address filtering is not considered a security feature.

While 802.11 committees began the process of upgrading WLAN security, enterprise customers needed wireless security immediately to enable deployment. Driven by customer demand, Cisco introduced early proprietary enhancements to RC4-based WEP encryption. Cisco implemented Cisco Temporal Key Integrity Protocol (CKIP) per-packet keying or hashing and Cisco Message Integrity Check (Cisco MIC) to protect WEP keys. Cisco also adapted 802.1X wired authentication protocols to wireless, with dynamic keys, using Cisco Lightweight Extensible Authentication Protocol (Cisco LEAP) against a centralized database. This approach is based on the IEEE 802.11 Task Group i end-to-end framework using 802.1X and the Extensible Authentication Protocol (EAP) to provide this enhanced functionality. Cisco has incorporated 802.1X and EAP into its WLAN solution, the Cisco Wireless Security Suite.

Numerous EAP types are available today for user authentication over wired and wireless networks. Current EAP types include the following:
EAP-Cisco Wireless (LEAP)
EAP-Transport Layer Security (EAP-TLS)
Protected EAP (PEAP)
EAP-Tunneled TLS (EAP-TTLS)
EAP-Subscriber Identity Module (EAP-SIM)
In the Cisco SAFE wireless architecture, LEAP, EAP-TLS, and PEAP were tested and documented as viable mutual authentication EAP protocols for WLAN deployments. Soon after the Cisco wireless security implementation, the Wi-Fi Alliance introduced Wi-Fi Protected Access (WPA) as an interim solution. WPA was a subset of the expected IEEE 802.11i security standard for WLANs, using 802.1X authentication and improvements to WEP encryption. The newer key-hashing protocol, TKIP, has security implementations like those provided by Cisco CKIP and Cisco MIC, but these implementations are not compatible with one another. Today, 802.11i has been ratified, and the Advanced Encryption Standard (AES) has replaced WEP as the latest and most secure method of encrypting data. Wireless IDSs are available to identify attacks and protect the WLAN from them. The Wi-Fi Alliance certifies 802.11i devices under Wi-Fi Protected Access 2 (WPA2).
In the client association process, access points send out beacons announcing one or more SSIDs, data rates, and other information. The client scans all of the channels and listens for beacons and responses from the access points. The client associates to the access point that has the strongest signal. If the signal becomes low, the client repeats the scan to associate with another access point. This process is called "roaming." During association, the SSID, MAC address, and security settings are sent from the client to the access point and then checked by the access point.

The association of a wireless client to a selected access point is actually the second step in a two-step process. First authentication, then association, must occur before an 802.11 client can pass traffic through the access point to another host on the network. Client authentication in this initial process is not the same as network authentication (entering a username and password to gain access to the network). Client authentication is simply the first step (followed by association) between the wireless client and access point, and it merely establishes communication.

The 802.11 standard specifies only two methods of authentication: open authentication and shared-key authentication. Open authentication is simply the exchange of four hello-type packets with no client or access point verification, to allow ease of connectivity. Shared-key authentication uses a static WEP key that is known to both the client and the access point for verification. This same key may be used to encrypt the actual data passing between a wireless client and an access point.
WPA2 (the 802.11i standard) uses the same authentication architecture, key distribution, and key renewal technique as WPA. However, WPA2 added stronger encryption: AES-Counter with CBC-MAC Protocol (AES-CCMP). AES-CCMP combines two cryptographic techniques, counter mode and CBC-MAC, and provides a robust security protocol between the wireless client and the wireless access point.

Note: AES is a cryptographic cipher that uses a block length of 128 bits and key lengths of 128, 192, or 256 bits.

Counter mode is a mode of operation that uses a number, called the counter, that changes with each block of text encrypted. The counter is encrypted with the cipher, and the result goes into the ciphertext. Because the counter changes for each block, the ciphertext does not repeat.

Cipher Block Chaining-Message Authentication Code (CBC-MAC) is a message integrity method that uses a block cipher such as AES. The first block of cleartext is encrypted with the cipher; an exclusive OR (XOR) operation is then conducted between that result and the next block before it, too, is encrypted, and so on through the message.
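The CBC-MAC chaining described above can be sketched with a toy block cipher standing in for AES. This is an illustration only: real AES-CCMP uses AES with 128-bit blocks plus additional CCMP framing, and the toy cipher here is an assumption with no cryptographic strength:

```python
def toy_cipher(block, key):
    """Toy stand-in for the AES block cipher (NOT cryptographically secure)."""
    return bytes((b + key) % 256 for b in block)

def cbc_mac(blocks, key):
    """Chain each block into the previous result with XOR, then encrypt;
    the final output is the integrity tag."""
    tag = bytes(len(blocks[0]))  # all-zero starting value
    for block in blocks:
        chained = bytes(t ^ b for t, b in zip(tag, block))
        tag = toy_cipher(chained, key)
    return tag

tag = cbc_mac([b"wire", b"less"], key=7)
# Flipping a single byte anywhere in the message changes the tag
print(tag != cbc_mac([b"wirf", b"less"], key=7))  # True
```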
Enterprise Mode
"Enterprise mode" is a term that is used for products that are tested to be interoperable in both PSK and 802.1X Extensible Authentication Protocol (EAP) modes of operation for authentication. When 802.1X is used, an authentication, authorization, and accounting (AAA) server is required to perform authentication as well as key and user management. Enterprise mode is targeted to enterprise environments.
Personal Mode
"Personal mode" is a term that is used for products that are tested to be interoperable in the PSK-only mode of operation for authentication. It requires manual configuration of a PSK on the access point and clients. The PSK authenticates users via a password, or identifying code, on both the client station and the access point. No authentication server is needed. Personal mode is targeted to SOHO environments.

Encryption is the process of transforming plaintext information to make it unreadable to anyone except those possessing the key. The algorithm that is used to encrypt information is called a cipher, and the result is called ciphertext. Encryption is now commonly used to protect information within WLAN implementations. Encryption is also used to protect data in transit, which might otherwise be intercepted.
WEP keys were the first solution for encrypting and decrypting WLAN transmitted data. Several research papers and articles have highlighted the vulnerabilities of static WEP keys. An improvement to static WEP keys was dynamic WEP keys in combination with 802.1X authentication. However, hackers have ready access to tools for cracking WEP keys.

Several enhancements to WEP keys were provided: TKIP, support for MIC, per-packet key hashing, and broadcast key rotation. TKIP is a set of software enhancements to RC4-based WEP. Cisco initially had a proprietary implementation of TKIP, sometimes referred to as Cisco TKIP. In 2002, 802.11i finalized the specification for TKIP, and the Wi-Fi Alliance announced that it was making TKIP a component of WPA. Cisco TKIP and WPA TKIP both include per-packet keying and a message integrity check. A weakness exists in TKIP, however, that can allow an attacker to decrypt packets under certain circumstances.

An enhancement over TKIP is the Advanced Encryption Standard (AES). AES is a stronger alternative to the RC4 encryption algorithm and has been deemed acceptable for the U.S. government to encrypt both unclassified and classified data. AES is currently the highest standard for encryption and replaces WEP. AES was developed to replace the Data Encryption Standard (DES). AES offers a larger key size, while ensuring that the only known approach to decrypt a message is for an intruder to try every possible key. AES has a variable key length; the algorithm can specify a 128-bit key (the default), a 192-bit key, or a 256-bit key.

The use of WPA2 with AES is recommended whenever possible. However, it is more resource-consuming and requires newer hardware compared to simple WEP or TKIP implementations.
If a client does not support WPA2 with AES because of the age of the hardware or lack of driver compatibility, a VPN may be a good solution for securing over-the-air client connections. IP Security (IPsec) and Secure Sockets Layer (SSL) VPNs provide a similar level of security as WPA2. IPsec VPNs use the services that are defined within IPsec to ensure confidentiality, integrity, and authenticity of data communications across public networks, such as the Internet. IPsec also has a practical application for securing WLANs by overlaying IPsec on top of cleartext 802.11 wireless traffic. IPsec provides for confidentiality of IP traffic, as well as authentication and anti-replay capabilities. Confidentiality is achieved through encryption using a variant of DES, called Triple DES (3DES), or the newer AES.
Ad hoc mode: Independent Basic Service Set (IBSS) is the ad hoc topology mode. Mobile clients connect directly, without an intermediate access point. Operating systems such as Windows have made this peer-to-peer network easy to set up. This configuration can be used in a small office (or home office) to allow a laptop to connect to the main PC or to let several people simply share files. The coverage is limited; everyone must be able to hear everyone else. An access point is not required. A drawback of peer-to-peer networks is that they are difficult to secure.
Infrastructure mode: In infrastructure mode, clients connect through an access point.
There are two infrastructure sub-topologies. These are the topologies defined by the original 802.11 standard. Topologies such as repeaters, bridges, and workgroup bridges are vendor-specific extensions.
It is recommended that ESA cells have 10 to 15 percent overlap to allow remote users to roam without losing RF connections. For wireless voice networks, an overlap of 15 to 20 percent is recommended. Bordering cells should be set to different nonoverlapping channels for best performance.

Extending the coverage with more access points must be properly designed. WLAN coverage outside of the office or home area provides easy access to your network for anybody, including attackers. Once the coverage is extended and the number of users increases, the performance of the network devices becomes important. The access points, which provide access to multiple users, must ensure that all of the users get enough bandwidth and the required quality of service. At the same time, the increased number of users requires additional throughput on the wired network and the WLAN. A sufficient number of access points must be implemented, and the network capacity must be taken into account.
Higher data rates require stronger signals at the receiver. Therefore, lower data rates have a greater range. Wireless clients always try to communicate with the highest possible data rate. The client will reduce the data rate only if transmission errors and transmission retries occur.
This approach provides the highest total throughput within the wireless cell. The same concept applies to 802.11a, 802.11g, or 802.11n data rates. The difference is in the distance and coverage area of the wireless cell.

The performance, throughput, and distance (range) depend on the topology, the installation, obstacles in the path, and the configuration of the WLAN equipment. The topology and the installation can significantly change the performance of the WLAN network. Installation without a line of sight, and placement near metal objects, can significantly decrease the distance as well as the throughput and data rate of the WLAN network. When obstacles are in the path between two wireless devices, the absorption of the signal can limit the performance and the distance. Water, cardboard, and metal can significantly impact the coverage.

Additionally, the configuration of the WLAN devices with different parameters is important. In order to limit the coverage to a particular area, the transmit power can be decreased and antennas with lower gain can be used. Lowering the transmit power and antenna gain reduces the coverage area. There is no single answer for how far the wireless signal will travel or how high the data rate can be.
The whole WLAN network must be observed and tests must be performed in order to define the real coverage area and data rate.
The basic approach to wireless implementation, as with any basic networking, is to gradually configure and incrementally test. Before implementing any wireless network, verify the existing network and Internet access for the wired hosts. Implement the wireless network with only a single access point and a single client, without wireless security. Verify that the wireless client has received a DHCP IP address and can ping the local wired default router, and then browse to the external Internet. Before the installation, perform a site survey to identify the position and the configuration parameters for all the required WLAN equipment. Correct WLAN coverage and throughput in the WLAN network must be ensured. Finally, configure wireless security with WPA or WPA2. Use WEP only if the hardware does not support WPA. Use WPA2 if possible because AES encryption support provides a higher level of security. Once the configuration is completed, verify the WLAN operation.
Wireless Clients
Currently, there are many form factors available to add wireless capabilities to laptops. The most common are Universal Serial Bus (USB) devices with self-contained fixed antennas and wireless supplicant software. Both of them enable wireless hardware usage and provide security options for authentication and encryption. Most new laptops contain some form of wireless capability. The availability of wireless technology has increased the wireless market and improved ease of use. Newer Microsoft Windows operating systems have a basic wireless supplicant client (Wireless Zero Configuration, or WZC) to enable wireless plug-and-play. This functionality is performed by discovering SSIDs that are being broadcast and allowing the user to simply enter the matching security credentials or keys for WEP or WPA, for example. The basic features of WZC are satisfactory for simple small office, home office (SOHO) environments. Large enterprise networks require more-advanced wireless client features than those of native operating systems. In 2000, Cisco started a program of value-added feature enhancements through a royalty-free certification program. More than 95 percent of Wi-Fi-enabled laptops that are shipped today are compliant with Cisco Compatible Extensions.
Examples of these value-added feature topics include Call Admission Control (CAC) with voice metrics, and Management Frame Protection (MFP) with client reporting.
Until Cisco offered a full-featured supplicant for both wired and wireless clients (called Cisco Secure Services Client), enterprise networks were managing one set of wired clients and another set of wireless clients separately. The benefit to users is a single client for wired or wireless connectivity and security.
Wireless Troubleshooting
If you follow the recommended steps for implementing a wireless network, the incremental method of configuration will most likely lead you to the probable cause of an issue. These issues are the most common causes of configuration problems:
- Configuring a defined SSID on the client (as opposed to the method of discovering the SSID) that does not match the access point (including case sensitivity)
- Configuring incompatible security methods
The wireless client and access point must match in authentication method (Extensible Authentication Protocol [EAP] or PSK) and encryption method (TKIP or AES). Other common problems can result from the initial RF installation, such as:
- Is the radio enabled on both the access point and the client for the correct RF (2.4-GHz ISM or 5-GHz UNII)?
- Is an external antenna connected and facing in the correct direction?
- Is the antenna location too high or too low relative to the wireless clients, preferably within 20 vertical feet (6 vertical m) of the client?
- Is a metal object in the room reflecting RF and causing poor performance?
- Are you attempting to reach too great a distance?
The first step in troubleshooting a suspected wireless issue is to separate the environment into wired network versus wireless network. The second step is to further divide the wireless network into configuration versus RF issues. Begin by verifying the proper operation of the existing wired infrastructure and associated services. Verify that existing Ethernet-attached hosts can renew their DHCP addresses and reach the Internet. Then colocate the access point and wireless client to verify the configuration and eliminate the possibility of RF issues. Always start the wireless client on open authentication and establish connectivity. Then implement the desired wireless security.
If the wireless client is operational at this point, then only RF-related issues remain. First, consider whether metal obstructions exist. If so, move the obstruction or change the location of the access point. If the distance is too great, consider adding another access point using the same SSID but on a unique RF channel. Lastly, consider the RF environment. Just as a wired network can become congested with traffic, so can RF for 2.4 GHz (more often than 5 GHz). Check for other sources of wireless devices using 2.4 GHz. Performance issues that seem to relate to time of day would indicate RF interference from a device. An example would be slow performance at lunchtime in an office that is located near a microwave oven that is used by employees. Although most microwaves will jam RF channel 11, some microwaves will jam all of the 2.4 GHz RF channels. Another cause of problems could be RF devices that hop frequencies, such as the Frequency Hopping Spread Spectrum (FHSS) that is used in cordless phones. Since there can be many sources of RF interference, always start by colocating the access point and wireless client, and then move the wireless client until you can reproduce the problem. Most wireless clients have supplicant software that helps you troubleshoot issues by presenting relative RF signal strength and quality.
VoIP Requirements
VoIP Requirements in Switched Networks
Modern networks can support converged services where video and voice traffic is merged with data traffic. When implementing VoIP in the network, all network requirements, including power and capacity planning, must be examined. Several VoIP network devices are required to implement a VoIP solution. Special attention is required for VoIP phones because network engineers might meet them on a daily basis. In order to support the VoIP solution, network engineers must be aware of the parameters and requirements of the VoIP solution. When connecting the VoIP phone to the network, two options exist:
- A wired VoIP phone that is connected directly to the switch
- A wireless VoIP phone that is connected to the switch via an access point
It is a common solution that end-user PCs are connected to the VoIP phone, and the VoIP phone then provides the connectivity toward the switched network. The Cisco VoIP solution offers many benefits and, in order to perform a proper installation, network engineers must take many parameter settings into consideration. Network engineers work with network equipment on a daily basis, but they are not necessarily VoIP and WLAN professionals. In order to support their LAN and WLAN environments, they must be aware of VoIP requirements. One of the important parameters is delay. For data traffic, delay is not critical. For VoIP users, however, delay is unacceptable. In addition to delay, the concerns are jitter (variable delay) and guaranteed bandwidth. All these factors are best described with the term quality of service (QoS). Voice traffic usually generates a smooth demand on bandwidth and has minimal impact on other traffic, as long as voice traffic is properly managed. VoIP traffic requirements are as follows:
- Guaranteed bandwidth
- Transmission priority over other types of network traffic, or the ability to be routed around congested areas on the network
- End-to-end delay of less than 150 ms across the network
Additionally, administrators must provide power to the VoIP devices. An uninterruptible power supply (UPS) is a great solution for these devices, in order to provide uninterrupted power. Wired IP phones are best implemented with Power over Ethernet (PoE). WLAN IP phones are usually battery-powered. PoE power that is supplied to the wired IP phones is implemented directly from Cisco Catalyst switches with inline power capabilities. If the PoE switch is not available in the network, a Cisco Catalyst Inline Power Patch Panel or adapter must be used. When data and voice traffic is mixed in the network, the best solution is to separate these two traffic types. If the user PCs and the IP phones are on the same VLAN, each of them will try to use the available bandwidth without considering the other device. The simplest method to avoid a conflict is to use separate VLANs for IP telephony traffic and data traffic. Some Cisco Catalyst switches offer a unique feature that is called a voice VLAN, which lets you overlay a voice topology onto a data network. You can segment phones into separate logical networks, even though the data and voice infrastructure are physically the same. The voice VLAN feature places the phones into their own VLANs without any end-user intervention. The user simply plugs the phone into the switch, and the switch provides the phone with the necessary VLAN information.
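As a sketch, the voice VLAN feature described above is typically enabled per access port on a Catalyst switch. The interface and VLAN numbers below are hypothetical example values, not taken from this text:

```
SwitchX(config)#interface FastEthernet0/4
SwitchX(config-if)#switchport mode access
SwitchX(config-if)#switchport access vlan 10
SwitchX(config-if)#switchport voice vlan 110
```

Here VLAN 10 carries the data traffic of the attached PC, while VLAN 110 carries the IP phone's voice traffic, keeping the two traffic types logically separate on the same physical port.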
Routers have these components, which are also found in computers and switches:
- CPU : The CPU, or processor, is the chip that is installed on the motherboard that carries out the instructions of a computer program. CPUs process all of the information that is gathered from other routers and sent to other routers.
- Motherboard : The motherboard is the central circuit board, which holds critical electronic components of the system. The motherboard provides connections to other peripherals and interfaces.
- Memory : There are two types of memory, RAM and ROM. RAM is memory on the motherboard that stores data during CPU processing. It is a volatile type of memory in that the information is lost after the power is switched off. ROM is read-only memory on the motherboard. As opposed to RAM, the content of the ROM is not lost after the power is switched off. Modern types of ROM are EPROM and EEPROM, which can be erased and reprogrammed multiple times.

Routers have network adapters to which IP addresses are assigned. Network adapters are used to connect routers to other devices in the network. Routers can have these types of ports:
- Console/AUX port : The router uses a console port to attach to a terminal that is used for management, configuration, and control. A console port may not exist on all routers. The AUX interface is used for remote management of the router. Typically, a modem is connected to the AUX interface for dial-in access. From a security standpoint, enabling the option to connect remotely to a network device carries with it the responsibility of maintaining vigilant device management.
- Network port : The router has a number of network ports, including different LAN or WAN media ports, which may be copper or fiber cable.

The router uses its routing table to determine the best path on which to forward the packet.
When the router receives a packet, it examines the destination IP address and searches for the best match to a network address in the routing table.
Routers are devices that gather routing information from the network. The information that is processed locally goes into the routing table. The routing table contains a list of all destinations known to the router and provides information regarding how to reach them. Routers have the following two important functions:
- Path determination : Routers must maintain their own routing tables and ensure that other routers know about changes in the network. Routers use a routing protocol to communicate network information to other routers. A routing protocol distributes the information from the local routing table on the router. Different protocols use different methods to populate the routing table. The first letter in each line of the routing table indicates which protocol was the source for the information. It is possible to statically populate the routing tables. Statically populating the routing tables does not scale and leads to problems when the network topology changes. Design changes and outages also result in some problems.
- Packet forwarding : Routers use the routing table to determine where to forward packets. Routers forward packets through a network interface toward the destination network. Each line of the routing table indicates which network interface is used to forward a packet. The destination IP address in the packet defines the packet destination. Routers use their local routing table and compare the entries to the destination IP address of the packet. The result is which outgoing interface to use to send the packet out of the router. If routers do not have a matching entry in their routing tables, the packets are dropped.
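The forwarding lookup can be sketched in a few lines of Python. This is an illustrative model only, not router code: the networks and interface names are hypothetical, and the lookup uses the standard longest-prefix-match rule (the most specific matching entry wins).

```python
import ipaddress

# Hypothetical routing table: destination prefix -> outgoing interface
routing_table = {
    ipaddress.ip_network("10.0.0.0/8"): "Serial0/0",
    ipaddress.ip_network("10.1.1.0/24"): "FastEthernet0/1",
}

def forward(dst):
    """Return the outgoing interface for a destination IP, or None (drop)."""
    addr = ipaddress.ip_address(dst)
    matches = [net for net in routing_table if addr in net]
    if not matches:
        return None  # no matching entry: the router drops the packet
    # The most specific (longest) matching prefix wins
    best = max(matches, key=lambda net: net.prefixlen)
    return routing_table[best]
```

A destination such as 10.1.1.5 matches both entries, and the more specific /24 entry is chosen; a destination outside 10.0.0.0/8 has no match and is dropped, mirroring the behavior described above.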
Path Determination
During the path determination portion of transmitting data over a network, routers evaluate the available paths to remote destinations. The routing table holds only one entry per network, but more than one source of information about a particular destination might exist. The routing process that runs on the router must be able to evaluate all the sources and select the best one to populate the routing table. Multiple sources come from having multiple dynamic routing protocols running, and from static and default information being available. The routing protocols use different metrics to measure the distance and desirability of a path to a destination network. When multiple routing protocols are running at the same time, the routers must be able to select the best source of information. Administrative distance is the feature that routers use in order to select the best path when there are two or more different routes to the same destination from two different routing protocols. Administrative distance defines the reliability of a routing protocol. Each routing protocol is prioritized in order of most to least reliable (believable) with the help of an administrative distance value.
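The selection rule reduces to picking the source with the lowest administrative distance. A minimal sketch, using the commonly cited Cisco default values (not given in this text):

```python
# Default administrative distances (lower = more trusted)
ADMIN_DISTANCE = {
    "connected": 0,
    "static": 1,
    "eigrp": 90,
    "ospf": 110,
    "rip": 120,
}

def best_source(sources):
    """Among route sources offering the same prefix, pick the most trusted."""
    return min(sources, key=ADMIN_DISTANCE.get)
```

For example, if both OSPF and RIP learn a route to the same network, the OSPF route (distance 110) is installed in the routing table rather than the RIP route (distance 120).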
Routing Tables
As part of the path determination procedure, the routing process builds a routing table that identifies known networks and how to reach them. Routers forward packets using the information in the routing table. Each router has its own local routing table. The routing table is populated from different sources. Routing metrics vary depending on the routing protocol that is running in the router.
The routing table consists of an ordered list of known network addresses. Network addresses can be learned dynamically by the routing process or can be statically configured. All directly connected networks are added to the routing table automatically. Routing tables also include information regarding destinations and next-hop associations. These associations tell a router that a particular destination is either directly connected to the router or that it can be reached via another router. This router is the next-hop router and is on the path to the final destination. When a router receives an incoming packet, it uses the destination address and searches the routing table to find the best path. If no entry can be found, the router discards the packet after sending an Internet Control Message Protocol (ICMP) message to the source address of the packet.
- Directly connected networks : This entry comes from having router interfaces that are directly attached to network segments. This method is the most certain method of populating a routing table. If the interface fails or is administratively shut down, the entry for that network will be removed from the routing table. The administrative distance is "0" and, therefore, will preempt all other entries for that destination network. Entries with the lowest administrative distance are the best, most-trusted sources.
- Static routes : A system administrator manually enters static routes directly into the configuration of a router. The default administrative distance for a static route is "1"; therefore, static routes will be included in the routing table unless there is a direct connection to that network. Static routes can be an effective method for small, simple networks that do not change frequently. For bigger and unstable networks, the solution with static routes does not scale.
- Dynamic routes : The router learns dynamic routes automatically when the routing protocol is configured and a neighbor relationship to other routers is established. The information is responsive to changes in the network and updates constantly. There is, however, always a lag between the time that a network changes and the time that all of the routers become aware of the change. The time delay for a router to match a network change is called convergence time. A shorter convergence time is better for users of the network.
Different routing protocols perform differently in this regard. Larger networks require the dynamic routing method because there are usually many addresses and constant changes. These changes require updates to routing tables across all routers in the network, or connectivity is lost.
- Default routes : A default route is an optional entry that is used when no explicit path to a destination is found in the routing table. The default route can be manually inserted or it can be populated from a dynamic routing protocol.
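As a sketch, a static route and a default route are both entered with the ip route global configuration command. The addresses below are hypothetical example values:

```
RouterX(config)#ip route 172.16.1.0 255.255.255.0 10.1.1.2
RouterX(config)#ip route 0.0.0.0 0.0.0.0 10.1.1.2
```

The first line is a static route to the 172.16.1.0/24 network via the next-hop address 10.1.1.2; the second line, with an all-zeros network and mask, is the default route used when no more specific entry matches.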
The show ip route command is used to show the contents of the routing table in a router. The first part of the output explains the codes, presenting the letters and the associated source of the entries in the routing table. Letter "C," which is reserved for directly connected networks, labels the second and third entries. Letter "S," which is reserved for static routes, labels the last two entries. Letter "R," which is reserved for Routing Information Protocol (RIP), labels the first entry. Letter "O," which is reserved for the Open Shortest Path First (OSPF) routing protocol, labels the fourth entry. Letter "D," which is reserved for Enhanced Interior Gateway Routing Protocol (EIGRP), labels the fifth entry.
Routing Metrics
When a routing protocol updates a routing table, the primary objective of the protocol is to determine the best information to include in the table. The routing algorithm generates a number, called the metric value, for each path through the network. Sophisticated routing protocols can base route selection on multiple metrics, combining them into a single metric. Typically, the smaller the metric number, the better the path. Metrics can be based on either a single characteristic or on several characteristics of a path. The metrics that are most commonly used by routing protocols are as follows:
- Bandwidth : The data capacity of a link (the connection between two network devices).
- Delay : The length of time that is required to move a packet along each link from the source to the destination. The delay depends on the bandwidth of intermediate links, port queues at each router, network congestion, and physical distance.
- Hop count : The number of routers that a packet must travel through before reaching its destination.
- Cost : An arbitrary value that is assigned by a network administrator, usually based on bandwidth, administrator preference, or another measurement, such as load or reliability.
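A bandwidth-based metric can be illustrated with the OSPF-style cost formula: reference bandwidth divided by interface bandwidth, with a minimum cost of 1. This is a sketch; the 100-Mbit/s reference value is the widely documented default, not a figure from this text.

```python
REFERENCE_BW_KBPS = 100_000  # 100 Mbit/s reference bandwidth, in kbit/s

def cost(interface_bw_kbps):
    """OSPF-style cost: reference bandwidth / interface bandwidth, min 1."""
    return max(1, REFERENCE_BW_KBPS // interface_bw_kbps)
```

With this formula, a 10-Mbit/s Ethernet link has cost 10, a 1544-kbit/s T1 link has cost 64, and any link at or above the reference bandwidth has cost 1, so faster links are preferred.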
Routing Methods
Many routing protocols are designed around one of the following routing methods:
- Distance vector routing : In distance vector routing, a router does not have to know the entire path to every network segment. The router only has to know the direction, or vector, in which to send the packet. The distance vector routing approach determines the direction (vector) and distance (hop count) to any network in the internetwork. Distance vector algorithms periodically (for example, every 30 seconds) send all or portions of their routing table to their adjacent neighbors. Routers that are running a distance vector routing protocol send periodic updates even if there are no changes in the network. By receiving the routing table of a neighbor, a router can verify all the known routes and make changes to its local routing table. The router changes its routing table based on updated information that is received from the neighboring router. This process is also known as "routing by rumor," because a router's understanding of the network topology is based on the perspective of the neighboring router's routing table. An example of a distance vector protocol is RIP, a commonly used routing protocol that uses hop count as its routing metric.
- Link-state routing : In link-state routing, each router tries to build its own internal map of the network topology. Each router sends messages into the network when it first becomes active. These messages list the routers to which it is directly connected and provide information about whether the link to each router is active. The other routers use this information to build a map of the network topology and then use the map to choose the best destination. Link-state routing protocols respond quickly to network changes. Triggered updates are sent when a network change occurs. Periodic updates (link-state refreshes) are sent at longer time intervals, such as every 30 minutes.
When a link state changes, the device that detected the change creates an update message concerning that link (route). This update message is propagated to all routers that are running the same routing protocol. Each router takes a copy of the update message, updates its routing tables, and forwards the update message to all neighboring routers. This flooding of the update message is required to ensure that all routers update their databases before creating an updated routing table that reflects the new topology. Examples of link-state routing protocols are OSPF and Intermediate System-to-Intermediate System (IS-IS). Note Cisco developed EIGRP, which combines the best features of distance vector and link-state routing protocols.
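The distance vector ("routing by rumor") update step can be sketched as a single merge: a router takes a neighbor's advertised table and keeps any route that becomes cheaper after adding the cost of the link to that neighbor. This is an illustrative model with a RIP-style hop-count metric, not actual protocol code:

```python
def merge_update(local, advertised, link_cost=1):
    """Merge a neighbor's advertised table into the local routing table.

    Both tables map network prefix -> hop count. Each advertised route
    costs one extra hop (the link to the neighbor). Returns True if the
    local table changed.
    """
    changed = False
    for net, hops in advertised.items():
        candidate = hops + link_cost
        if net not in local or candidate < local[net]:
            local[net] = candidate  # found a new or shorter path
            changed = True
    return changed
```

Running this merge whenever a periodic update arrives, and re-advertising the local table to neighbors, is the essence of the distance vector approach: each router's view of the topology is built entirely from its neighbors' perspectives.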
- Runs the power-on self-test (POST) to test the hardware
- Finds and loads the Cisco IOS Software that the router uses for its operating system
- Finds and applies the configuration statements about router-specific attributes, protocol functions, and interface addresses
When a Cisco router powers up, it performs a POST. During the POST, the router executes diagnostics to verify the basic operation of the CPU, memory, and interface circuitry. After verifying the hardware functions, the router proceeds with the software initialization. During the software initialization, the router finds and loads the Cisco IOS image. When the Cisco IOS image is loaded, the router finds and loads the configuration file, if one exists. This table lists the steps that are required for the initial startup of a Cisco router.

Startup of Cisco Routers
Step 1. Before starting the router, verify the following:
- All network cable connections are secure.
- Your terminal is connected to the console port.
- Your console terminal application, such as HyperTerminal, is selected.
Step 2. Push the power switch to "on."
Step 3. Observe the boot sequence of the router on the console. The Cisco IOS Software output text appears on the console.
contents even when power is turned off. If the router has a configuration file in NVRAM, the user-mode prompt appears. When starting a new Cisco router or when starting a Cisco router without a configuration in NVRAM, there will be no configuration file. If no valid configuration file exists in NVRAM, the operating system executes a question-driven initial configuration routine that is referred to as the system configuration dialog or setup mode. Setup mode is not intended for entering complex protocol features in the router. Use setup mode to bring up a minimal configuration. Rather than using setup mode, you can use other various configuration modes to configure the router. When using the setup mode and after you complete the configuration process for all of the installed interfaces on the router, the setup command shows the configuration command script that was created. Depending on the software revision, the router asks if the configuration that was created can be used or the setup command offers the following three choices:
- [0]: Go to the EXEC prompt without saving the created configuration.
- [1]: Go back to the beginning of the setup without saving the created configuration.
- [2]: Accept the created configuration, save it to NVRAM, and exit to the EXEC mode.
If you choose [2], the configuration is executed and saved to NVRAM, and the system is ready to use. To modify the configuration, you must reconfigure it manually. The script file that is generated by the setup command is additive. You can turn on features with the setup command, but not off. In addition, the setup command does not support many of the advanced features of the router or those features that require a more-complex configuration.
- User mode : Typical tasks include checking the router status.
- Privileged mode : Typical tasks include changing the router configuration.
When you first log in to the router, a user-mode prompt is displayed. The EXEC commands that are available in user mode are a subset of the EXEC commands that are available in privileged mode. These commands provide a means to display information without changing the configuration settings of the router. To access the complete set of commands, you must enable the privileged mode with the enable command and supply the enable password, if it is configured. Note The enable password is displayed in cleartext by using the show run command. The secret password is encrypted, so it is not displayed in cleartext. If both the enable and secret passwords are configured, the secret password overrides the enable password. The EXEC prompt is displayed as a pound sign (#) while in privileged mode. From the privileged level, you can access global configuration mode and the other specific configuration modes, such as interface, subinterface, line, router, route-map, and several others. Use the disable command to return to the user EXEC mode from the privileged EXEC mode. Use the exit or logout command to end the current session. Enter a question mark (?) at the user-mode prompt or at the privileged-mode prompt to display a list of commands that are available in the current mode. Note The available commands vary with different Cisco IOS Software versions. Notice "-- More --" at the bottom of the sample display. This output indicates that multiple screens are available as output. You can perform any of the following tasks:
- Press the Spacebar to display the next available screen.
- Press the Enter key to display the next line.
- Press any other key to return to the prompt.
Enter the enable user-mode command to access the privileged EXEC mode. Normally, if an enable password has been configured, you must also enter the enable password before you can access the privileged EXEC mode. Enter the ? command at the privileged-mode prompt to display a list of the available privileged EXEC commands. Note The available commands vary with different Cisco IOS Software versions.
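An illustrative session sketch of moving between these modes (prompts follow the conventions above; the password prompt appears only if one is configured):

```
Router>enable
Password:
Router#configure terminal
Router(config)#interface serial 0
Router(config-if)#exit
Router(config)#exit
Router#disable
Router>
```

The prompt suffix shows the current mode at each step: ">" for user EXEC, "#" for privileged EXEC, and "(config)" or "(config-if)" for the configuration modes.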
After logging in to a Cisco router, the router hardware and software status can be verified by using the following router status commands: show version, show running-config, and show startup-config. Use the show version EXEC command to display the configuration of the system hardware, the software version, the memory size, and the configuration register setting. The I/O memory is used for holding packets while they are in the process of being routed. The router has two Fast Ethernet interfaces and two serial interfaces. This output is useful for confirming that the expected interfaces are recognized at startup and are functioning, from a hardware perspective. The router has 239 KB used for startup configuration storage in the NVRAM and 62,720 KB of flash storage for the Cisco IOS Software image. The show version command displays information about the currently loaded software version, along with hardware and device information. Some of the information that is shown from this command is as follows:
- Software version : Cisco IOS Software version (stored in flash)
- Bootstrap version : Bootstrap version (stored in boot ROM)
- System uptime : Time since the last reboot
- System restart info : Method of restart (such as power cycle or crash)
- Software image name : Cisco IOS filename that is stored in flash
- Router type and processor type : Router model number and processor type
- Memory type and allocation (shared and main) : Main processor RAM and shared packet I/O buffering
- Software features : Supported protocols or feature sets
- Hardware interfaces : Interfaces that are available on the router
- Configuration register : Sets bootup specifications, console speed setting, and related parameters
The show running-config command, which is used in privileged EXEC mode, displays the current running configuration that is stored in RAM. With a few exceptions, all configuration commands that were used will be entered into the running-config and implemented immediately by Cisco IOS Software. The show startup-config command displays the startup configuration file that is stored in NVRAM. This is the configuration that the router will use on the next reboot. This configuration does not change unless the current running configuration is saved to NVRAM.
- Interface : Supports commands that configure operations on a per-interface basis.
- Subinterface : Supports commands that configure multiple virtual interfaces on a single physical interface.
- Controller : Supports commands that configure controllers (for example, E1 and T1 controllers).
- Line : Supports commands that configure the operation of a terminal line (for example, the console or the vty ports).
- Router : Supports commands that configure an IP routing protocol.
If you enter the exit command, the router will go back one level. You can enter the exit command from one of the specific configuration modes to return to global configuration mode. Press Ctrl-Z to leave the configuration mode completely and return the router to the privileged EXEC mode. In terminal configuration mode, an incremental compiler is used. Each configuration command that is entered is parsed as soon as the Enter key is pressed. If there are no syntax errors, the command is executed and stored in the running configuration, and it is effective immediately. Commands that affect the entire router are called global commands. The hostname and enable password commands are examples of global commands.
Commands that point to or indicate a process or interface that will be configured are called major commands. When they are entered, major commands cause the CLI to enter a specific configuration mode. Major commands have no effect unless a subcommand that supplies the configuration entry is immediately entered. For example, the major command interface serial 0 has no effect unless it is followed by a subcommand that tells what is to be done to that interface. The following are examples of some major commands and the subcommands that go with them:
Router(config)#interface serial 0 (major command)
Router(config-if)#shutdown (subcommand)
Router(config-if)#line console 0 (major command)
Router(config-line)#password cisco (subcommand)
Router(config-line)#router rip (major command)
Router(config-router)#network 10.0.0.0 (subcommand)
Entering a major command switches from one configuration mode to another. It is not necessary to return to the global configuration mode first before entering another configuration mode. After you enter the commands to configure the router, the running configuration is changed. You must save the running configuration to NVRAM. If the configuration is not saved to NVRAM and the router is reloaded, the configuration will be lost and the router will revert to the last configuration saved in NVRAM. Use the copy running-config startup-config command to save the running configuration to the startup configuration in NVRAM.
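A sketch of saving the configuration as described above (the exact console output may vary by platform and software version):

```
RouterX#copy running-config startup-config
Destination filename [startup-config]?
Building configuration...
[OK]
```

Pressing Enter at the filename prompt accepts the default destination, and the running configuration is written to NVRAM so that it survives a reload.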
The logging synchronous line-configuration command is useful when console messages are being displayed while you are attempting to input EXEC or configuration commands. Instead of the console messages being interspersed with the input, the input is redisplayed on a single line at the end of each console message that interrupts it. This functionality makes reading the input and the messages much easier. The following example shows how console messages interrupt the entry of the interface serial 0/0 command.
RouterX(config)#interface ser
*Jan 9 00:26:44.887: %LINK-5-CHANGED: Interface Serial0/0, changed state to administratively down
*Mar 9 00:26:45.887: %LINEPROTO-5-UPDOWN: Line protocol on Interface Serial0/0, changed state to downial 0/0
The following example shows the same situation except that this time the logging synchronous console-line command is used. Now the input is redisplayed on a single line.
RouterX(config)#line console 0
RouterX(config-line)#logging synchronous
RouterX(config-line)#interface ser
*Jan 9 00:26:44.887: %LINK-5-CHANGED: Interface Serial0/0, changed state to administratively down
*Mar 9 00:26:45.887: %LINEPROTO-5-UPDOWN: Line protocol on Interface Serial0/0, changed state to down
RouterX(config)#interface Serial 0/0
An interface in a Cisco 2800 or 3800 Series Integrated Services Router, or other modular router, is specified by the physical slot in the router and the port number on the module in that slot, as follows:
RouterX(config)#interface fa 1/0
You can add a description to an interface to help remember specific information about that interface. Two common descriptions might be the network that is serviced by that interface or the customer that is connected to that interface. This description is meant solely as a comment to help identify how the interface is being used. To add a description to an interface configuration, use the description command in interface configuration mode. To remove the description, use the no form of this command. A Serial 0 interface in the RouterX router is connected to the Router1 router. The following example shows the commands that are used to add the description on the Serial 0 interface:
RouterX(config)#interface Serial 0
RouterX(config-if)#description Link to Router1
The description will appear in the output when the configuration information that exists in the memory of the router is displayed. The same text will appear in the show interfaces command display output, as follows:
RouterX#show interfaces
<output omitted>
Serial0/0/0 is administratively down, line protocol is down (disabled)
Hardware is HD64570
Description: Link to Router1
MTU 1500 bytes, BW 1544 Kbit, DLY 20000 usec,
reliability 255/255, txload 1/255, rxload 1/255
<output omitted>
To quit interface configuration mode and to move into global configuration mode, enter the exit command at the RouterX(config-if)# prompt as follows:
RouterX(config-if)#exit
You may want to disable an interface to perform hardware maintenance on a specific interface or a segment of a network. You may also want to disable an interface if a problem exists on a specific segment of the network and you must isolate that segment from the rest of the network. The shutdown subcommand administratively turns off an interface. To reinstate the interface, use the no shutdown subcommand. When an interface is first configured, except in setup mode, you must administratively enable the interface before it can be used to transmit and receive packets. Use the no shutdown subcommand to allow Cisco IOS Software to use the interface.
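A minimal sketch of this procedure (the interface name is chosen for illustration):

RouterX(config)#interface serial 0
RouterX(config-if)#shutdown
RouterX(config-if)#no shutdown

The shutdown subcommand places the interface in the administratively down state; no shutdown returns it to service.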
Each interface on a Cisco router must have its own IP address to uniquely identify it on the network. Unique IP addressing is required for the communication between the hosts and other network devices. Each router link to each LAN is associated with a dedicated and unique subnet. The router needs to have an IP address configured on each of its links to each LAN. The routers determine the path to the destination based on the destination IP address, which is written in the IP header. To configure an interface on a Cisco router, complete these steps:
Step 1: Enter global configuration mode by using the configure terminal command. This command displays a new prompt: Router(config)#
Router#configure terminal
Step 2: Identify the specific interface that requires an IP address by using the interface type slot/port command. This command displays a new prompt, for example: Router(config-if)#
Router(config)#interface serial 0
Step 3: Set the IP address and subnet mask for the interface by using the ip address ip-address mask command. This command configures the IP address and subnet mask for the selected interface.
Router(config-if)#ip address 172.18.0.1 255.255.0.0
Step 4: Enable the interface to change the state from administratively down to up by using the no shutdown command. This command enables the current interface.
Router(config-if)#no shutdown
Step 5: Exit configuration mode for the interface by using the exit command. This command displays the global configuration mode prompt: Router(config)#
Router(config-if)#exit
The following example shows how to configure the IP address on the Serial 0 interface on RouterX:
RouterX#configure terminal
RouterX(config)#interface serial 0
RouterX(config-if)#ip address 172.18.0.1 255.255.0.0
One of the most important elements of the show interfaces command output is the display of the line and data-link protocol status. For other types of interfaces, the meanings of the status line may be slightly different. The first parameter refers to the hardware layer and, essentially, reflects whether the interface is receiving the carrier detect signal from the other end (the DCE if using serial connection). The second parameter refers to the data link layer, and reflects whether the data link layer protocol keepalives are being received. Based on the output of the show interfaces command, possible problems can be fixed as follows:
If the interface is up and the line protocol is down, a problem exists. Some possible causes include the following:
o No keepalives
o Mismatch in encapsulation type
o Clock rate issue
If the line protocol and the interface are both down, a cable might never have been attached when the router was powered up, or some other interface problem exists. For example, in a back-to-back connection, the other end of the connection may be administratively down. If the interface is administratively down, it has been manually disabled (the shutdown command has been issued) in the active configuration.
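As a hedged illustration (the interface name is arbitrary), the first line of show interfaces output for each of these cases typically reads as follows:

Serial0/0 is up, line protocol is up (operational)
Serial0/0 is up, line protocol is down (connection problem)
Serial0/0 is down, line protocol is down (interface problem)
Serial0/0 is administratively down, line protocol is down (disabled)

The parenthetical labels are annotations for this example, not part of the router output.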
After configuring a serial interface, use the show interfaces serial command to verify the changes. In this example, the show interfaces serial 0/0/0 command is used. Note that the line and protocol are up and that the bandwidth is 64 kb/s.
Layer 3 Addressing
Layer 3 addresses are assigned to end devices such as hosts and to network devices that provide Layer 3 functions. The router has its own Layer 3 address on each interface. Each network device that provides a Layer 3 function maintains a Layer 3 address table.
uses the standard ARP process to obtain the mapping. The host sends an ARP request to the router. The user has programmed the IP address of 192.168.3.2 as the default gateway. Host 192.168.3.1 sends out the ARP request, and the router receives it. The ARP request contains information about the host, and the router adds that information to its ARP table. The router processes the ARP request like any other host and sends the ARP reply with its own information. The host receives the ARP reply and enters the information in its local ARP table. Now the Layer 2 frame with the application data can be sent to the default gateway.

Note that ARP reports a mapping of the destination IP address (192.168.4.2) to the MAC address of the default gateway instead of the actual destination MAC address. The pending frame is sent with the local host IP address and MAC address as the source. However, while the destination IP address is that of the remote host, the destination MAC address is that of the default gateway.

The router receives the frame, recognizes its MAC address, and processes the frame. At Layer 3, the router sees that the destination IP address is not its address. A host Layer 3 device would discard the frame. However, because this device is a router, it passes all packets that are for unknown destinations to the routing process, which determines where to send the packet.

The routing process looks up the destination IP address in its routing table. In this example, the destination segment is directly connected, so the routing process can pass the packet directly to Layer 2 for the appropriate interface. Layer 2 uses the ARP process to obtain the mapping for the IP address and the MAC address. The router asks for the Layer 2 information in the same way as hosts.
An ARP request for the destination Layer 3 address is sent to the link. The destination receives and processes the ARP request. The host receives the frame that contains the ARP request and passes the request to the ARP process. The ARP process takes the information about the router from the ARP request and places the information in its local ARP table. The ARP process generates the ARP reply and sends it back to the router. The router receives the ARP reply and takes the information that is required for forwarding the packet to the next hop. The router populates its local ARP table and starts the packet forwarding process.
Syntax Description:
o (Optional) Hostname
o (Optional) 48-bit MAC address
o (Optional) Interface type and number; only ARP entries learned via this interface are displayed.
Usage Guidelines
ARP establishes correspondence between network addresses (an IP address, for example) and LAN hardware addresses (Ethernet addresses). A record of each correspondence is kept in a cache for a predetermined amount of time and then discarded. The table describes the fields in the following sample output from the show ip arp command:
RouterX#show ip arp
Protocol  Address      Age (min)  Hardware Addr   Type  Interface
Internet  192.168.3.1  -          0800.0222.2222  ARPA  FastEthernet0/0
Internet  192.168.4.2  -          0800.0222.1111  ARPA  FastEthernet0/1
Field Descriptions:
Protocol: The protocol for the network address in the Address field.
Address: The network address that corresponds to the hardware address.
Age (min): The age in minutes of the cache entry. A hyphen (-) means that the address is local.
Hardware Addr: The LAN hardware address of a MAC address that corresponds to the network address.
Type: Indicates the encapsulation type that Cisco IOS Software is using for the network address in this entry. Possible values include the following:
o Advanced Research Projects Agency (ARPA)
o Subnetwork Access Protocol (SNAP)
o Service Access Point (SAP)
Interface: The interface associated with this ARP entry.
Syntax Description:
host-name: Hostname of the system to ping. If a host-name or system-address is not specified at the command line, it will be required in the ping system dialog.
system-address: Address of the system to ping. If a host-name or system-address is not specified at the command line, it will be required in the ping system dialog.
This example represents a simple network with two routers. The RouterX router uses the ping 10.0.0.2 command to check the reachability of the neighboring router interface. By default, five Internet Control Message Protocol (ICMP) packets are sent, and five replies are required for a perfectly successful test. The RouterX router receives all five replies. The following ping command output represents a perfectly successful test:
RouterX#ping 10.0.0.2
Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 10.0.0.2, timeout is 2 seconds:
!!!!!
Success rate is 100 percent (5/5), round-trip min/avg/max = 4/6/8 ms
To determine the routes that packets will actually take when traveling to their destination address, you can use the traceroute command in user EXEC or privileged EXEC mode as follows:
traceroute [vrf vrf-name] [protocol] destination
Syntax Description:
destination: (Optional in privileged EXEC mode; required in user EXEC mode) The destination address or hostname for which you want to trace the route. The software determines the default parameters for the appropriate protocol, and the tracing action begins.
This example represents a network with four routers. The RouterX router uses the traceroute 192.168.1.4 command to verify the path that packets will take to the RouterW router. The RouterX router receives replies from all the hops. The following traceroute command output represents the path from RouterX to RouterW:
RouterX#traceroute 192.168.1.4
Type escape sequence to abort.
Tracing the route to 192.168.1.4
1 10.1.1.2 4 msec 4 msec 4 msec
2 172.16.1.3 20 msec 16 msec 16 msec
3 192.168.1.4 16 msec * 16 msec
Course: The Packet Delivery Process, Router Security, and Remote Access Topic: 5
Hardware threats: Threats of physical damage to the router or router hardware.
Environmental threats: Threats such as temperature extremes (too hot or too cold) or humidity extremes (too wet or too dry).
Electrical threats: Threats such as voltage spikes, insufficient supply voltage (brownouts), unconditioned power (noise), and total power loss.
Maintenance threats: Threats such as poor handling of key electrical components (ESD), lack of critical spare parts, poor cabling, poor labeling, and so on.
configure the additional vty lines, use the line vty 5 15 command, followed by the login and password subcommands. You can use the login local command to enable password checking on a per-user basis, using the username and password that is specified with the username global configuration command. The username command establishes username authentication with encrypted passwords.

The enable password global configuration command restricts access to the privileged EXEC mode. You can assign an encrypted form of the enable password command, called the enable secret password. The enable secret command with the desired password at the global configuration mode prompt is required for this functionality. If the enable secret password is configured, it is used rather than the enable password, not in addition to it.

You can also add a further layer of security, which is particularly useful for passwords that cross the network or are stored on a TFTP server. Cisco provides a feature that allows the use of encrypted passwords. To set password encryption, enter the service password-encryption command in global configuration mode. Passwords that are displayed or set after you configure the service password-encryption command will be encrypted. To disable a command, enter no before the command. For example, use the no service password-encryption command to disable password encryption.

Cisco AutoSecure is a Cisco IOS security CLI command feature. You can deploy one of these two modes, depending on your needs:
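A minimal sketch of the commands described above (all usernames and passwords are placeholders):

RouterX(config)#line vty 5 15
RouterX(config-line)#login
RouterX(config-line)#password cisco
RouterX(config-line)#exit
RouterX(config)#username admin password sanfran
RouterX(config)#enable secret sanfran
RouterX(config)#service password-encryption

With service password-encryption configured, the line and username passwords are stored in encrypted form in the configuration file.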
Interactive mode: Prompts the user with options to enable and disable services and other security features.
Noninteractive mode: Automatically executes a Cisco AutoSecure command with the recommended Cisco default settings.
Caution Cisco AutoSecure attempts to ensure maximum security by disabling the services most commonly used by hackers to attack a router. However, some of these services may be needed for successful operation in your network. For this reason, you should not use the Cisco AutoSecure feature until you fully understand its operations and the requirements of your network. Cisco AutoSecure performs the following functions:
Disables the following global services:
o Finger
o Packet assembler/disassembler (PAD)
o Small servers
o Bootstrap Protocol (BOOTP) servers
o HTTP service
o Identification service
o Cisco Discovery Protocol
o Network Time Protocol (NTP)
o Source routing
Enables the following global services:
o Password encryption service
o Tuning of scheduler interval and allocation
o TCP synwait-time
o TCP keepalive messages
o Security policy database (SPD) configuration
o Internet Control Message Protocol (ICMP) unreachable messages
Disables the following services per interface:
o ICMP
o Proxy Address Resolution Protocol (ARP)
o Directed broadcast
o Maintenance Operation Protocol (MOP) service
o ICMP unreachables
o ICMP mask reply messages
Provides logging for security, including the following functions:
o Enables sequence numbers and time stamps
o Provides a console log
o Sets log buffered size
o Provides an interactive dialog to configure the logging server IP address
Secures access to the router, including the following functions:
o Checking for a banner and providing the ability to add text for automatic configuration
o Login and password
o Transport input and output
o exec-timeout commands
o Local authentication, authorization, and accounting (AAA)
o Secure Shell (SSH) timeouts and ssh authentication-retries commands
o Enabling only SSH and Secure Copy Protocol (SCP) for access and file transfers to and from the router
o Disabling Simple Network Management Protocol (SNMP) if not being used
Secures the forwarding plane, including the following functions:
o Enabling Cisco Express Forwarding or distributed Cisco Express Forwarding on the router, when available
o Antispoofing
o Blocking all Internet Assigned Numbers Authority (IANA)-reserved IP address blocks
o Blocking private address blocks, if the customer desires
o Installing a default route to Null0, if a default route is not being used
o Configuring a TCP Intercept for a connection timeout, if the TCP Intercept feature is available and the user desires it
o Starting an interactive configuration for Context-Based Access Control (CBAC) on interfaces facing the Internet, when using a Cisco IOS Firewall image
o Enabling NetFlow on software forwarding platforms
To enable and test authentication with SSH, you must add the login local command to the previous statements:
RouterX(config-line)#login local
Then test SSH from the PC and UNIX stations. The following configuration enables SSH and disables Telnet access:
RouterX(config)#ip domain-name cisco.com
RouterX(config)#crypto key generate rsa
The name for the keys will be: RouterX.cisco.com
Choose the size of the key modulus in the range of 360 to 2048 for your General Purpose Keys. Choosing a key modulus greater than 512 may take a few minutes.
How many bits in the modulus [512]: 1024
% Generating 1024 bit RSA keys, keys will be non-exportable...[OK]
*Mar 16 20:32:15.613: %SSH-5-ENABLED: SSH 1.99 has been enabled
RouterX(config)#ip ssh version 2
RouterX(config)#line vty 0 4
RouterX(config-line)#login local
RouterX(config-line)#transport input ssh
If you want to prevent non-SSH connections, the transport input ssh command limits the router to SSH connections only. Straight (non-SSH) Telnet connections are refused. The following configuration enables SSH connections only:
RouterX(config)#line vty 0 4
RouterX(config-line)#transport input ssh
Test to ensure that non-SSH users cannot use Telnet to connect to the router.
If you have a router that does not have Cisco SDM installed, and you would like to use Cisco SDM, you must download it from http://www.Cisco.com and install it on your router. Ensure that your router contains enough flash memory to support your existing flash file structure and the Cisco SDM files.

Cisco Configuration Professional is a GUI-based device management tool for Cisco IOS Software-based access routers, including Cisco integrated services routers, Cisco 7200VXR Series Routers, and the Cisco 7301 router. Cisco Configuration Professional simplifies router, firewall, IPS, VPN, unified communications, WAN, and basic LAN configuration through easy-to-use wizards. With Cisco Configuration Professional, you can remotely configure and monitor Cisco routers without using the Cisco IOS Software CLI.

Cisco Configuration Professional is an alternative to Cisco SDM. Like Cisco SDM, Cisco Configuration Professional assumes a general understanding of networking technologies and terms, but assists individuals unfamiliar with the Cisco CLI. Cisco Configuration Professional is currently supported on Windows platforms only. Cisco Configuration Professional is included on a CD at no additional cost with several integrated services routers. It is also available as a free download from http://www.cisco.com/. Always consult the latest information regarding Cisco Configuration Professional router and Cisco IOS Software release support at http://www.cisco.com/.
You can install and run Cisco SDM on a router that is already in use without disrupting network traffic, but you must ensure that a few configuration settings are present in the router configuration file. Access the CLI using SSH or the console connection to modify the existing configuration before installing Cisco SDM on your router. Step 1 Enable the HTTP and HTTPS servers on your router by entering the following commands in global configuration mode:
Router#configure terminal
Enter configuration commands, one per line. End with CNTL/Z.
Router(config)#ip http server
Router(config)#ip http secure-server
Router(config)#ip http authentication local
Router(config)#ip http timeout-policy idle 600 life 86400 requests 10000
Note If the router supports HTTPS, the HTTPS server will be enabled. If not, the HTTP server will be enabled. HTTPS is supported in all images that support the cryptography IPsec feature set, starting from Cisco IOS Release 12.25(T).
Step 2 Create a user account that is defined with privilege level 15 (enable privileges). Enter the following command in global configuration mode, replacing username and password with the strings that you want to use:
Router(config)#username username privilege 15 secret 0 password
For example, if you chose the username "tomato" and the password "vegetable", you would enter:
Router(config)#username tomato privilege 15 secret 0 vegetable
You will use this username and password to log in to Cisco SDM. Step 3 Configure SSH and Telnet for local login and privilege level 15. Use the following commands:
Router(config)#line vty 0 4
Router(config-line)#privilege level 15
Router(config-line)#login local
Router(config-line)#transport input telnet ssh
Router(config-line)#exit
Host Name: This hostname is the configured name of the router.
About Your Router: This area shows basic information about your router hardware and software, and contains the fields that are shown in these tables.

Hardware:
o Model Type: The router model number
o Available/Total Memory: Available RAM and total RAM
o Total Flash Capacity: Total flash memory capacity

Software:
o IOS Version: The version of Cisco IOS Software that is currently running on the router
o Cisco SDM Version: The version of Cisco SDM software that is currently running on the router
o Feature Availability: The features available in the Cisco IOS image that the router is using are designated by a check. The features that Cisco SDM looks for are IP, firewall, VPN, and IPS.
Hardware Details: In addition to the information presented in the About Your Router window, this tab displays information about the following:
o Where the router boots from (flash memory or the configuration file)
o Whether the router has accelerators, such as VPN accelerators
o A diagram of the hardware configuration
Software Details: In addition to the information presented in the About Your Router section, this tab displays information about the feature sets included in the Cisco IOS image.
Configuration Overview
This section of the home page summarizes the configuration settings that have been made. If you want to view the running configuration, click View Running Config.
Up: The number of connections that are up.
Down: The number of connections that are down.
Double arrow: Click to display or hide details.
Total Supported LAN: Shows the total number of LAN interfaces that are present in the router.
Total Supported WAN: The number of WAN interfaces that are present on the router and that are supported by Cisco SDM.
Configured LAN Interface: The number of supported LAN interfaces that are currently configured on the router.
Total WAN Connections: The total number of WAN connections that are present on the router and that are supported by Cisco SDM.
DHCP Server: Configured and not configured.
DHCP Pool (Detail View): If one pool is configured, this area shows the starting and ending address of the DHCP pool. If multiple pools are configured, it shows a list of configured pool names.
Number of DHCP Clients (Detail View): Current number of clients leasing addresses.
Interface: Name of the configured interface.
o Type: Interface type
o IP Mask: IP address and subnet mask
o Description: Description of the interface
Firewall Policies
This area shows the following information:
Active: A firewall is in place.
Inactive: No firewall is in place.
Trusted: The number of trusted (inside) interfaces.
Untrusted: The number of untrusted (outside) interfaces.
DMZ: The number of demilitarized zone (DMZ) interfaces.
Double arrow: Click to display or hide details.
Interface: The name of the interface to which a firewall has been applied.
Firewall icon: Whether the interface is designated as an inside or an outside interface.
NAT: The name or number of the Network Address Translation (NAT) rule that is applied to this interface.
Inspection Rule: The names or numbers of the inbound and outbound inspection rules.
Access Rule: The names or numbers of the inbound and outbound access rules.
Double arrow: Click to display or hide details.
IPsec (Site-to-Site): The number of configured site-to-site VPN connections.
GRE over IPsec: The number of configured Generic Routing Encapsulation (GRE) over IPsec connections.
XAUTH Login Required: The number of Cisco Easy VPN connections awaiting an Extended Authentication (XAUTH) login.
Note Some VPN servers or concentrators authenticate clients using XAUTH. This functionality shows the number of VPN tunnels awaiting an XAUTH login. If any Cisco Easy VPN tunnel is waiting for an XAUTH login, a separate message panel is shown with a Login button. Click Login to enter the credentials for the tunnel. If XAUTH has been configured for a tunnel, it will not begin to function until the login and password have been supplied. There is no timeout after which it will stop waiting; it will wait indefinitely for this information.
Easy VPN Remote: The number of configured Cisco Easy VPN Remote connections.
Number of DMVPN Clients: If the router is configured as a Dynamic Multipoint VPN (DMVPN) hub, the number of DMVPN clients.
Number of Active VPN Clients: If the router is functioning as a Cisco Easy VPN Server, the number of Cisco Easy VPN Clients with active connections.
Interface: The name of the interface with a configured VPN connection.
IPsec Policy: The name of the IPsec policy that is associated with the VPN connection.
Routing
Number of Static Routes: The number of static routes that are configured on the router.
Dynamic Routing Protocols: List of any dynamic routing protocols that are configured on the router.
Intrusion Prevention
Active Signatures: The number of active signatures that the router is using. These signatures may be built-in, or they may be loaded from a remote location.
Number of IPS-Enabled Interfaces: The number of router interfaces on which IPS has been enabled.
Interfaces and Connections: This menu contains several wizards that are designed to help you configure how the router connects to the network. You can access a LAN wizard to configure the LAN interfaces with a static or DHCP-assigned IP address. You can also access a WAN wizard to configure PPP, Frame Relay, and High-Level Data Link Control (HDLC) WAN interfaces. Additionally, you can configure the router as a DHCP server.
Firewall wizard: This wizard is used to configure the firewall features. You can access a basic firewall setup, which consists of predefined access control lists (ACLs) for standard services, and an advanced firewall setup, where you can define each rule manually.
VPN wizard: This wizard is used to configure the VPN features. You can configure your router as a VPN client for a site-to-site VPN, or as a VPN server for Cisco IOS WebVPN or Cisco Easy VPN.
Security Audit wizards: There are these two options, as follows:
o The router security audit wizard
o An easy one-step router security lockdown wizard
Quality of Service wizard: This wizard is used to configure a basic QoS policy for outgoing traffic on WAN interfaces and IP Security (IPsec) tunnels.
Note At the end of each wizard procedure, all changes are automatically delivered to the router using Cisco SDM-generated CLI commands. You can choose whether to preview the commands to be sent. The default is to not preview the commands.
Automatic allocation: DHCP assigns a permanent IP address to a client.
Dynamic allocation: DHCP assigns an IP address to a client for a limited time (or until the client explicitly relinquishes the address).
Manual allocation: A client IP address is assigned by the network administrator, and DHCP is used simply to convey the assigned address to the client.
Dynamic allocation is the only one of the three mechanisms that allows automatic reuse of an address that is no longer needed by the client to which it was assigned. Dynamic allocation is particularly useful for assigning an address to a client that will be connected to the network only temporarily, or for sharing a limited pool of IP addresses among a group of clients that do not need permanent IP addresses. Dynamic allocation may also be a good choice for assigning an IP address to a new client that is being permanently connected to a network in which IP addresses are so scarce that it is important to reclaim them when old clients are retired.
DHCPDISCOVER
When a client boots up for the first time, it transmits a DHCPDISCOVER message on its local physical subnet. Because the client has no way of knowing the subnet to which it belongs, the DHCPDISCOVER message is an all-subnets broadcast (destination IP address of 255.255.255.255). The client does not have a configured IP address, so the source IP address of 0.0.0.0 is used.
DHCPOFFER
A DHCP server that receives a DHCPDISCOVER message may respond with a DHCPOFFER message, which contains initial configuration information for the client. For example, the DHCP server provides the requested IP address. The DHCPOFFER message also contains an Options field that is used to provide additional information such as the subnet mask or the default gateway ("router"). This Options field can also be used to specify several other values, including the IP address lease time, renewal time, domain name server, and NetBIOS Name Service (Microsoft Windows Internet Name Service [Microsoft WINS]). This DHCPOFFER message is sent to the client MAC address at Layer 2. The destination IP address is the address offered by the server.
DHCPREQUEST
After the client receives a DHCPOFFER message, it responds with a DHCPREQUEST message, indicating its intent to accept the parameters in the DHCPOFFER. The DHCPREQUEST is sent to the broadcast address (at Layer 2 and Layer 3), because the client is not yet sure that this address can safely be used (or whether another DHCP client is also going to try to use it).
DHCPACK
After the DHCP server receives the DHCPREQUEST message, it acknowledges the request with a unicast DHCPACK message, thus completing the initialization process.
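On a Cisco IOS DHCP server, the resulting lease can then be verified with the show ip dhcp binding command, which lists the assigned IP address, client hardware address, and lease expiration for each binding (the exact output format varies by Cisco IOS release):

Router#show ip dhcp binding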
DHCP Pool Name: A character string that identifies the DHCP pool.
DHCP Pool Network and Subnet Mask: The IP addresses that the DHCP server assigns are drawn from a common pool that you configure by specifying the starting IP address in the range and the ending address in the range. The address range that you specify should be within the following private address ranges:
o 10.1.1.1 to 10.255.255.255
o 172.16.1.1 to 172.31.255.255
o 192.168.0.0 to 192.168.255.255
The address range that you specify must also be in the same subnet as the IP address of the LAN interface. The range can represent a maximum of 254 addresses. The following examples are valid ranges:
o 10.1.1.1 to 10.1.1.254 (assuming that the LAN IP address is in the 10.1.1.0 subnet)
o 172.16.1.1 to 172.16.1.254 (assuming that the LAN IP address is in the 172.16.1.0 subnet)
Cisco SDM configures the router to automatically exclude the LAN interface IP address from the pool. You must not use the following reserved addresses in the range of addresses that you specify:
o The network or subnetwork IP address
o The broadcast address on the network
Starting IP: Enter the beginning of the range of IP addresses for the DHCP server to use in assigning addresses to devices on the LAN. This IP address is the lowest-numbered IP address in the range.
Ending IP: Enter the highest-numbered IP address in the range of IP addresses.
Lease Length: The amount of time that the client may use the assigned address before it must be renewed.
DHCP Options: Use this pane to configure DHCP options that will be sent to hosts on the LAN that request IP addresses from the router. These are not options for the router that you are configuring; these are parameters that will be sent to the requesting hosts on the LAN. To set these properties for the router, click Additional Tasks on the Cisco SDM category bar, click DHCP, and configure these settings in the DHCP Pool window.
DNS Server1: The DNS server is typically a server that maps a known device name with its IP address. If you have a DNS server that is configured for your network, enter the IP address for the server here.
DNS Server2: If there is an additional DNS server on the network, you can enter the IP address for that server in this field.
Domain Name: The DHCP server that you are configuring on this router will provide services to other devices within this domain. Enter the name of the domain here.
WINS Server1: Some clients may require Microsoft WINS to connect to devices on the Internet. If there is a Microsoft WINS server on the network, enter the IP address for the server in this field.
WINS Server2: If there is an additional Microsoft WINS server on the network, enter the IP address for the server in this field.
Default Router: The IP address that will be provided to the client for use as the default gateway.
Import All DHCP Options into the DHCP Server Database: Select this check box to allow the DHCP options to be imported from a higher-level server. This import is typically used with an Internet DHCP server.
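The SDM address-range rules described above can be sketched as a small validation routine. This is a hypothetical illustration only; `validate_dhcp_range` and its result strings are not part of Cisco SDM:

```python
import ipaddress

def validate_dhcp_range(lan_ip, start, end):
    """Check a proposed DHCP range against the rules described above:
    the range must fall inside the LAN interface's subnet, must not
    include the network or broadcast address, and may cover at most
    254 addresses. (Illustrative sketch, not Cisco SDM logic.)"""
    net = ipaddress.ip_interface(lan_ip).network
    lo, hi = ipaddress.ip_address(start), ipaddress.ip_address(end)
    if lo not in net or hi not in net:
        return "range outside LAN subnet"
    if lo == net.network_address or hi == net.broadcast_address:
        return "range includes reserved address"
    if int(hi) - int(lo) + 1 > 254:
        return "range exceeds 254 addresses"
    return "ok"

# The valid example range from the text, with the LAN IP in 10.1.1.0/24:
print(validate_dhcp_range("10.1.1.1/24", "10.1.1.1", "10.1.1.254"))
```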
DHCP server configuration is supported through Cisco SDM or the Cisco IOS command-line interface (CLI). The Cisco SDM GUI tool offers an easier way to configure DHCP for users who are not familiar with the Cisco IOS CLI. For more experienced users, the Cisco IOS CLI provides additional DHCP configuration options and faster configuration. To configure Cisco IOS DHCP, follow these steps:
Step 1 Using the ip dhcp pool global configuration command, create a DHCP IP address pool for the IP addresses you want to use. The configuration mode changes to DHCP pool configuration mode.
Step 2 Using the network command, specify the network and the subnet to use.
Step 3 Using the domain-name command, define the DNS domain name.
Step 4 Using the dns-server command, define the primary and secondary DNS servers.
Step 5 Using the default-router command, define the default gateway.
Step 6 Using the lease command, specify the lease duration for the addresses that are provided from the DHCP server. The example shows a seven-day lease: lease 7.
Step 7 Using the exit command, exit DHCP pool configuration mode.
Step 8 Using the ip dhcp excluded-address global configuration command, exclude addresses in the pool range that you do not want to assign to clients.
The following example shows a configured Cisco IOS DHCP server on a router:
Router(config)#ip dhcp pool mydhcppool
Router(dhcp-config)#network 10.10.10.0 /24
Router(dhcp-config)#domain-name mydhcpdomain.com
Router(dhcp-config)#dns-server 10.10.10.98 10.10.10.99
Router(dhcp-config)#default-router 10.10.10.1
Router(dhcp-config)#lease 7
Router(dhcp-config)#exit
Router(config)#ip dhcp excluded-address 10.10.10.0 10.10.10.99
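As a cross-check, the following sketch computes which addresses the pool above leaves assignable, assuming the intended pool network is 10.10.10.0/24 with 10.10.10.0 through 10.10.10.99 excluded:

```python
import ipaddress

# Assumed pool network and the excluded range from the example above.
network = ipaddress.ip_network("10.10.10.0/24")
excluded = {ipaddress.ip_address("10.10.10.0") + i for i in range(100)}  # .0 through .99

# hosts() yields .1-.254 for a /24; removing the excluded block leaves
# the addresses the DHCP server can actually hand out.
assignable = [h for h in network.hosts() if h not in excluded]
print(assignable[0], assignable[-1], len(assignable))
```

With these assumptions, clients would be offered addresses from 10.10.10.100 through 10.10.10.254.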
The server uses ping to detect conflicts. The client uses gratuitous Address Resolution Protocol (GARP) to detect conflicts. If an address conflict is detected, the address is removed from the pool, and it is not assigned until an administrator resolves the conflict. The following example displays the detection method and detection time for all IP addresses offered by the DHCP server that conflict with other devices.
RouterX#show ip dhcp conflict
IP address     Detection Method   Detection time
172.16.1.32    Ping               Feb 16 2007 12:28 PM
172.16.1.64    Gratuitous ARP     Feb 23 2007 08:12 AM
Field Descriptions for the show ip dhcp conflict Command
Field              Description
IP address         The IP address of the host, as recorded on the DHCP server.
Detection Method   The manner in which the IP address of the host was found on the DHCP server; this field can be Ping or Gratuitous ARP.
Detection Time     The date and time when the conflict was found.
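The conflict listing can be post-processed if needed. This is an illustrative parser for output shaped like the example above; the column layout is an assumption based on that example, not a Cisco-documented format:

```python
import re

# Sample output, shaped like the show ip dhcp conflict example above.
output = """\
IP address        Detection Method   Detection time
172.16.1.32       Ping               Feb 16 2007 12:28 PM
172.16.1.64       Gratuitous ARP     Feb 23 2007 08:12 AM
"""

def parse_conflicts(text):
    """Turn each data row into a dict; skip the header row."""
    rows = []
    for line in text.splitlines()[1:]:
        m = re.match(r"(\S+)\s+(Ping|Gratuitous ARP)\s+(.+)", line)
        if m:
            rows.append({"ip": m.group(1), "method": m.group(2), "time": m.group(3)})
    return rows

conflicts = parse_conflicts(output)
print(conflicts[0]["ip"], conflicts[1]["method"])
```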
Network administrators can connect to a router or switch locally or remotely. As companies get bigger, and as the number of routers and switches in the network grows, the workload of connecting to all of the devices locally can become overwhelming. Telnet and SSH are virtual terminal protocols that are part of the TCP/IP suite. These protocols allow connections and remote console sessions from one network device to one or more other remote devices. Remote administrative access is more convenient than local access for administrators who have many devices to manage. However, if it is not implemented securely, an attacker could collect valuable confidential information. For example, implementing remote administrative access using Telnet is insecure because Telnet forwards all network traffic in cleartext. An attacker could capture network traffic while an administrator is logged in remotely to a router and sniff the administrator passwords or router configuration information. Therefore, remote administrative access must be configured with additional security precautions. Telnet on Cisco routers varies slightly from Telnet on most Cisco Catalyst switches. To log on to a host that supports Telnet, use the telnet user EXEC command. As shown in this example, IP address 10.2.2.2 is used to establish a Telnet session:
RouterA#telnet 10.2.2.2
The SSH feature has an SSH server and an SSH integrated client, which are applications that run on the switch. You can use any SSH client running on a PC or the Cisco SSH client running on the switch to connect to a switch running the SSH server. To start an encrypted session with a remote networking device, use the ssh user EXEC command, where the -l cisco option specifies a username that is used to access the SSH-enabled switch SwitchB:
RouterA#ssh -l cisco 10.2.2.2
With Cisco IOS Software installed on a router, the IP address or hostname of the target device is all that is required to establish a Telnet connection. The telnet command that is placed before the target IP address or hostname is used to open a Telnet connection from a Cisco Catalyst switch. For routers and switches, a prompt for console login signifies a successful Telnet connection if login is enabled on the vty ports on the remote device. Once you are logged in to the remote device, the console prompt indicates which device is active on the console. The console prompt uses the hostname of the device. Use the show sessions command on the originating router or switch to verify Telnet connectivity and to display a list of hosts to which a connection has been established. This command displays the hostname, the IP address, the byte count, the amount of time that the device has been idle, and the connection name that is assigned to the session. If multiple sessions are in progress, the asterisk (*) indicates the last session, to which the user will return if the Enter key is pressed. Use the show users command to learn whether the console port is active and to list all active Telnet or SSH sessions with the IP address or IP alias of the originating host on the local device. In the show users output, the "con" line represents the local console, and the "vty" line represents a remote connection. The "11" next to the vty value in the example indicates the vty line number, not its port number. If there are multiple users, the asterisk (*) denotes the current terminal session user. To display the status of SSH server connections, use the show ssh command in privileged EXEC mode.
Press the Enter key.
Enter the resume command if there is only one session. (Entering resume without the session number argument resumes the last active session.)
Enter the resume session-number command to reconnect to a specific Telnet session. (Enter the show sessions command to find the session number.)
From a remote device, use the exit or logout command to log out of the console session and return the session to the local device. From the local device, use the disconnect command (when there are multiple sessions) or the disconnect session session number command to disconnect a single session.
If a Telnet session from a remote user is causing bandwidth or other types of problems, you should close the session. Alternatively, network staff can terminate the session from their console. To close a Telnet session from a foreign host, use the clear line linenumber command. The linenumber option corresponds to the vty port of the incoming Telnet session. In this example, the line number is 11. Use the show sessions command to determine the linenumber variable. At the other end of the connection, the user gets a notice that the connection was "closed by a foreign host."
The ping command verifies network connectivity. Ping reports the minimum, average, and maximum times that it takes for ping packets to reach the specified system and return. This information can validate the reliability of the path to a specified system.
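A minimal sketch of the min/avg/max summary that ping reports, computed from made-up RTT samples (the values below are invented for illustration):

```python
# Hypothetical round-trip times, in milliseconds, for five ping packets.
rtts = [4, 1, 2, 2, 1]

def rtt_summary(samples):
    """Return the (min, avg, max) round-trip times, as ping summarizes them."""
    return min(samples), sum(samples) // len(samples), max(samples)

print("round-trip min/avg/max = %d/%d/%d ms" % rtt_summary(rtts))
```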
This table lists the possible output characters from the ping command output.
Ping Command Output
Character   Description
!           Receipt of a reply
.           The network server timed out while waiting for a reply
U           A destination unreachable protocol data unit (PDU) was received
Character   Description
Q           Source quench (destination too busy)
M           A router along the path could not fragment the transmitted packet because it was larger than the maximum transmission unit (MTU) of the affected link
?           Unknown packet type
&           Lifetime of the packet was exceeded
The traceroute command shows the actual routes that the packets take between network devices. A device, such as a router or switch, sends out a sequence of User Datagram Protocol (UDP) datagrams to an invalid port address at the remote host. Three datagrams are sent, each with a Time to Live (TTL) field value that is set to 1. The TTL value of 1 causes the datagram to time out as soon as it reaches the first router in the path. The router then responds with an Internet Control Message Protocol (ICMP) Time Exceeded Message (TEM) indicating that the datagram has expired. Another three UDP messages are then sent, each with the TTL value set to 2, which causes the second router to return ICMP TEMs. Traceroute then progressively increments the TTL field (3, 4, 5, and so on) for each sequence of messages. This sequence provides traceroute with the address of each hop as the packets time out further down the path. The TTL field continues to be increased until the destination is reached or it is incremented to a predefined maximum. Once the final destination is reached, the host responds with either an ICMP port unreachable message or an ICMP echo-reply message instead of the ICMP TEM. The purpose is to record the source of each ICMP TEM in order to provide a trace of the path that the packet took to reach the destination.
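The TTL-increment logic described above can be sketched as follows. The hop addresses are invented for illustration; a real traceroute sends UDP probes and listens for ICMP replies rather than walking a list:

```python
# A hypothetical path: each entry is the router that answers when the
# probe's TTL expires at that hop; the last entry is the destination.
path = ["10.0.0.1", "10.0.1.1", "10.0.2.1", "192.168.5.9"]

def trace(hops, max_ttl=30):
    """Model the traceroute sequence: a probe with TTL=n expires at hop n,
    which reveals that hop's address via an ICMP Time Exceeded Message.
    The destination answers with port unreachable instead, ending the trace."""
    discovered = []
    for ttl in range(1, max_ttl + 1):
        hop = hops[ttl - 1]         # the router where TTL reaches zero
        discovered.append(hop)
        if hop == hops[-1]:         # destination reached: ICMP port unreachable
            break
    return discovered

print(trace(path))
```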
This table lists the characters that can appear in the traceroute command output.
Traceroute Command Output
Character   Description
nn msec     For each node, the round-trip time (RTT) in milliseconds for the specified number of probes
*           The probe timed out
A           Administratively prohibited (for example, by an access list)
Q           Source quench (destination too busy)
I           User-interrupted test
Character   Description
U           Port unreachable
H           Host unreachable
N           Network unreachable
P           Protocol unreachable
T           Timeout
?           Unknown packet type
Note: If IP domain name lookup is enabled, the router attempts to resolve each IP address to a name, which can slow down the traceroute command.
WANs generally connect devices that are separated by a broader geographical area than a LAN can serve. WANs use the services of carriers, such as telephone companies, cable companies, satellite systems, and network providers. WANs use serial connections of various types to provide access to bandwidth over large geographic areas.
People in the regional or branch offices of an organization need to be able to communicate and share data. Organizations often want to share information with other organizations across large distances. For example, software manufacturers routinely communicate product and promotion information to distributors that sell their products to end users. Employees who travel on company business frequently need to access information that resides on their corporate networks.
In addition, home computer users need to send and receive data across increasingly larger distances. Here are some examples:
It is now common in many households for consumers to communicate with banks, stores, and various providers of goods and services via computers. Students do research for classes by accessing library indexes and publications that are located in other parts of their country and in other parts of the world.
Because it is obviously not feasible to connect computers across a country or around the world in the same way that computers are connected in a LAN environment with cables, different technologies have evolved to support this need. Increasingly, the Internet is being used as an inexpensive alternative to using an enterprise WAN for some applications. New technologies are available to businesses to provide security and privacy for their Internet communications and transactions. WANs used by themselves, or in concert with the Internet, allow organizations and individuals to meet their wide-area communication needs.
The organization has the responsibility of installing and managing the infrastructure. Ethernet is the most common technology that is used. The LAN connects users and provides support for localized applications and server farms. Connected devices are usually in the same local area, such as a building or a campus.
Connected sites are usually geographically dispersed. Connectivity to the WAN requires a device such as a modem or CSU/DSU to put the data in a form that is acceptable to the network of the service provider. WAN services include T1 to T3 lines, or E1 to E3 lines, DSL, Cable, Frame Relay, and ATM. The ISP has the responsibility of installing and managing the WAN infrastructure. The edge devices modify the Ethernet encapsulation to a serial WAN encapsulation.
Some WANs are privately owned; however, because the development and maintenance of a private WAN is expensive, only very large organizations can afford to maintain a private WAN. Most companies purchase WAN connections from a service provider or ISP. The ISP is then responsible for maintaining the back-end network connections and network services between the LANs. When an organization has many global sites, establishing WAN connections and service can be complex. For example, the major ISP for the organization may not offer service in every location or country in which the organization has an office. As a result, the organization must purchase services from multiple ISPs. Using multiple ISPs often leads to differences in the quality of
services that are provided. In many emerging countries, for example, network designers will find differences in equipment availability, WAN services that are offered, and encryption technology for security. To support an enterprise network, it is important to have uniform standards for equipment, configuration, and services.
WAN Devices
There are several devices that operate at the physical layer in a WAN. WANs use numerous types of devices that are specific to WAN environments, including the following:
Modem: Modulates an analog carrier signal to encode digital information, and also demodulates the carrier signal to decode the transmitted information. A voiceband modem, such as one used in a dial-up connection, converts the digital signals that are produced by a computer into voice frequencies that can be transmitted over the analog lines of the public telephone network. On the other side of the connection, another modem converts the sounds back into a digital signal for input to a computer or network connection. Faster modems, such as cable modems and DSL modems, transmit using higher broadband frequencies.
CSU/DSU: Digital lines, such as T1 or T3 carrier lines, require a CSU and a DSU. The two are often combined into a single piece of equipment that is called the CSU/DSU. The CSU provides termination for the digital signal and ensures connection integrity through error correction and line monitoring. The DSU converts the T-carrier line frames into frames that the LAN can interpret, and vice versa.
Access server: Centralizes dial-in and dial-out user communications. An access server may have a mixture of analog and digital interfaces, and may support hundreds of simultaneous users.
WAN switch: A multiport internetworking device that is used in carrier networks. These devices typically switch traffic such as Frame Relay, ATM, or X.25, and operate at the data link layer of the OSI reference model. Public switched telephone network (PSTN) switches may also be used within the cloud for circuit-switched connections such as ISDN or analog dialup.
Router: Provides internetworking and WAN access interface ports that are used to connect to the service provider network. These interfaces may be serial connections or other WAN interfaces. With some types of WAN interfaces, an external device such as a CSU/DSU or modem (analog, cable, or DSL) is required to connect the router to the local point of presence (POP) of the service provider.
Core router: A router that resides within the middle or backbone of the WAN, rather than at its periphery. To fulfill this role, a router must be able to support multiple telecommunications interfaces at the highest speed in use in the WAN core, and it must be able to forward IP packets at wire speed on all of those interfaces. The router must also support the routing protocols being used in the core.
WAN Cabling
WAN physical layer protocols describe how to provide electrical, mechanical, operational, and functional connections for WAN services. The WAN physical layer also determines the interface between the DTE and the DCE. The DTE and DCE interfaces on Cisco routers use various physical layer protocols, including the following:
EIA/TIA-232: This protocol allows signal speeds of up to 64 kb/s on a 25-pin D-connector over short distances. It was formerly known as RS-232. The ITU-T V.24 specification is effectively the same.
EIA/TIA-449, -530: This protocol is a faster (up to 2 Mb/s) version of EIA/TIA-232. It uses a 36-pin D-connector and is capable of longer cable runs. There are several versions. This standard is also known as RS-422 and RS-423.
EIA/TIA-612, -613: This standard describes the High-Speed Serial Interface (HSSI) protocol, which provides access to services at up to 52 Mb/s on a 60-pin D-connector.
V.35: This is the ITU-T standard for synchronous communications between a network access device (NAD) and a packet network. Originally specified to support data rates of 48 kb/s, it now supports speeds of up to 2.048 Mb/s using a 34-pin rectangular connector.
X.21: This protocol is an ITU-T standard for synchronous digital communications. It uses a 15-pin D-connector.
These protocols establish the codes and electrical parameters that the devices use to communicate with each other. The method of facilitation that is used by the service provider largely determines the choice of protocol. When you order the cable, you receive a shielded serial transition cable that has the appropriate connector for the standard that you specify. The router end of the shielded serial transition cable
has a DB-60 connector, which connects to the DB-60 port on a serial WAN interface card (WIC). Because five different cable types are supported with this port, the port is sometimes called a five-in-one serial port. The other end of the serial transition cable is available with the connector that is appropriate for the standard that you specify. The documentation for the device to which you want to connect should indicate the standard for that device. Your CPE, in this case a router, is the DTE. The DCE, commonly a modem or a CSU/DSU, is the device that is used to convert the user data from the DTE into a form acceptable to the WAN service provider. The synchronous serial port on the router is configured as DTE or DCE (except EIA/TIA-530, which is DTE only), depending on the attached cable, which is ordered as either DTE or DCE to match the router configuration. If the port is configured as DTE (which is the default setting), it requires external clocking from the DCE device. Note: To support higher densities in a smaller form factor, Cisco introduced the Smart Serial cable. The serial end of the Smart Serial cable is a 26-pin connector that is much smaller than the DB-60 connector used to connect to a five-in-one serial port. These transition cables support the same five serial standards, are available in either DTE or DCE configuration, and are used with two-port serial connections as well as with two-port asynchronous and synchronous WICs.
run a terminal emulation program to provide a text-based session with the router, which enables you to manage the device.
HDLC
PPP
Frame Relay
ATM
Metro Ethernet
Multiprotocol Label Switching (MPLS)
ISP service to home networks is currently often delivered over Ethernet at LAN-type speeds. Within metro areas (and beyond), companies are connecting using 1 Gigabit Ethernet and 10 Gigabit Ethernet, which is sometimes leased from a telecommunication company and sometimes company-owned. One of the most widely deployed technologies that Cisco supports is Metro Ethernet. Another data link layer protocol is the MPLS protocol. Service providers increasingly deploy MPLS to provide an economical solution to carry circuit-switched as well as packet-switched network traffic. It can operate over any existing infrastructure, such as IP, Frame Relay, ATM, or Ethernet. It sits between Layer 2 and Layer 3, and is sometimes referred to as a Layer 2.5 protocol.
Metro Ethernet
Metro Ethernet is a rapidly maturing networking technology that extends Ethernet to the WAN services that are offered by telecommunications companies. IP-aware Ethernet switches enable service providers to offer enterprises converged voice, data, and video services such as IP telephony, video streaming, imaging, and data storage. By extending Ethernet to the metropolitan area, companies can provide their remote offices with reliable access to applications and data on the corporate headquarters LAN. Benefits of Metro Ethernet include the following:
Reduced expenses and administration: Metro Ethernet provides a switched, high-bandwidth Layer 2 network that is capable of managing data, voice, and video all on the same infrastructure. This characteristic increases bandwidth and eliminates expensive conversions to ATM and Frame Relay. The technology enables businesses to inexpensively connect numerous sites in a metropolitan area to each other and to the Internet.
Easy integration with existing networks: Metro Ethernet connects easily to existing Ethernet LANs, thus reducing installation costs and time.
Enhanced business productivity: Metro Ethernet enables businesses to take advantage of productivity-enhancing IP applications that are difficult to implement on time-division multiplexing (TDM) or Frame Relay networks, such as hosted IP communications, VoIP, and streaming and broadcast video.
Dedicated communication links: When permanent dedicated connections are required, point-to-point lines are used, with various capacities that are limited only by the underlying physical facilities and the willingness of users to pay for these dedicated lines. A point-to-point link provides a pre-established WAN communications path from the customer premises through the provider network to a remote destination. Point-to-point lines are usually leased from a carrier and are also called leased lines.
Switched communication links: Switched communication links can be either circuit-switched or packet-switched.
Internet WAN connection links are through broadband services such as DSL, cable modem, and broadband wireless, and are combined with VPN technology to provide privacy across the Internet. Broadband connection options are typically used to connect telecommuting employees to a corporate site over the Internet.
Connectionless systems, such as the Internet, carry complete addressing information in each packet. Each switch must evaluate the address to determine where to send the packet. Connection-oriented systems predetermine the route for a packet, and each packet only has to carry an identifier. In the case of Frame Relay, these are called data-link connection identifiers (DLCIs). The switch determines the onward route by looking up the identifier in tables that are held in memory. The set of entries in the tables identifies a particular route or circuit through the system. If this circuit is only physically in existence while a packet is traveling through it, it is called a virtual circuit (VC). The circuit, or pathway, between the source and destination is often a preconfigured link, but it is not an exclusive link. When the customer does not use the complete bandwidth on its VC, the carrier, through statistical multiplexing, can make that unused bandwidth available to another customer.
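The identifier-lookup behavior described above can be modeled as a small switching table. The ports and DLCI values here are invented for illustration; a real Frame Relay switch holds such mappings per interface in memory:

```python
# Each switch maps an incoming (port, DLCI) pair to the outgoing pair,
# so a frame needs to carry only its identifier, not full addressing.
switching_table = {
    ("s0", 102): ("s1", 201),   # frames arriving on s0 with DLCI 102 leave s1 as DLCI 201
    ("s1", 201): ("s0", 102),   # and the reverse direction of the same virtual circuit
}

def forward(in_port, dlci):
    """Look up the onward port and outgoing DLCI for a received frame.
    Returns None when no virtual circuit is configured for the pair."""
    return switching_table.get((in_port, dlci))

print(forward("s0", 102))
```

The set of such entries across the switches along the path is what defines one virtual circuit through the provider network.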
Because the internal links between the switches are shared between many users, the costs of packet switching are lower than the costs of circuit switching. Delays (latency) and variability of delay (jitter) are greater in packet-switched than in circuit-switched networks. This behavior is because the links are shared, and packets must be entirely received at one switch before moving to the next. Despite the latency and jitter inherent in shared networks, modern technology allows satisfactory transport of voice and even video communications on these networks.
Packet switching enables you to reduce the number of links to the network. It allows the carrier to make more efficient use of its infrastructure, so the overall cost is generally lower than with discrete point-to-point lines, or leased lines.
Transceiver: Connects the computer of the teleworker to the DSL line. Usually the transceiver is a DSL modem that is connected to the computer using a Universal Serial Bus (USB) or Ethernet cable. Newer DSL transceivers can be built into small routers with multiple 10/100 switch ports that are suitable for home office use.
DSLAM: Located at the CO of the carrier, the DSLAM combines individual DSL connections from users into one high-capacity link to an ISP and to the Internet.
DSL availability is far from universal and there is a wide variety of types, standards, and emerging standards. DSL is now a popular choice for enterprise IT departments to support home workers. Generally, a subscriber cannot choose to connect to an enterprise network directly. The subscriber must first connect to an ISP and then an IP connection is made through the Internet to the enterprise. Security risks are incurred in this process.
Asymmetric DSL (ADSL): Provides higher download bandwidth than upload bandwidth.
Symmetric DSL (SDSL): Provides the same capacity of bandwidth in both directions.
All forms of DSL service are categorized as asymmetric or symmetric, but there are several varieties of each type. ADSL includes the following forms:
ADSL, ADSL2, ADSL2+
Consumer DSL, also called G.Lite or G.992.2
Very-high-data-rate DSL (VDSL, VDSL2)
SDSL includes the following forms:
SDSL
High-data-rate DSL (HDSL)
ISDN DSL (IDSL)
Symmetric high-bit-rate DSL (G.shdsl)
Current DSL technologies use sophisticated coding and modulation techniques to achieve high data rates. ADSL reaches greater distances than other DSL types, but the achievable speed of ADSL transmissions degrades as the distance increases. The maximum distance is limited to approximately 18,000 feet (5.5 km) from the CO. ADSL2 and ADSL2+ are enhancements to basic ADSL and provide a downstream bandwidth of up to 24 Mb/s and an upstream bandwidth of up to 1.5 Mb/s. VDSL2 offers the highest operational speed but has the shortest achievable distance. VDSL2 deteriorates quickly from a theoretical maximum of 250 Mb/s at the source to 100 Mb/s at 1640 feet (0.5 km) and 50 Mb/s at 3280 feet (1 km).
DSL Considerations
DSL service can be added incrementally in any area. A service provider can upgrade the bandwidth to coincide with a growth in the number of subscribers. DSL is also backwardcompatible with analog voice and makes good use of the existing local loop, which means that it is easy to use DSL service simultaneously with normal phone service. Another advantage that DSL has over cable technology is that DSL is not a shared medium. Each user has a separate direct connection to the DSLAM. Adding users does not impede performance unless the DSLAM Internet connection to the ISP, or the Internet, becomes saturated. However, DSL suffers from distance limitations. Most DSL service offerings currently require the customer to be within 18,000 feet (5.5 km) of the CO location of the provider, and the older, longer loops present problems. Additionally, upstream (upload) speed is usually considerably slower than the downstream (download) speed. The always-on technology of DSL can also present security risks, because potential hackers have greater access.
Cable
Another technology that has become increasingly popular as a WAN communications access option is the IP-over-Ethernet Internet service that is delivered by cable networks. Coaxial cable is widely used in urban areas to distribute television signals. Network access is available from some cable television networks. This technology allows for greater bandwidth than the conventional telephone local loop. Cable modems provide an always-on connection and a simple installation. A subscriber connects a computer or LAN router to the cable modem, which translates the digital signals into the broadband frequencies that are used for transmitting on a cable television network. The local cable TV office, which is called the cable headend, contains the computer system and databases that are needed to provide Internet access. The most important component that is located at the headend is the cable modem termination system (CMTS). It sends and receives digital cable modem signals on a cable network and is necessary for providing Internet services to cable subscribers. Cable modem subscribers must use the ISP that is associated with the service provider. All the local subscribers share the same cable bandwidth. As more users join the service, available bandwidth may be below the expected rate.
In 1972, the first email messaging software was developed so that ARPANET developers could more easily communicate and coordinate on projects. Later that year, a program that allowed users to read, file, forward, and respond to messages was developed. Throughout the 1970s and 1980s, the network expanded as technology became more sophisticated. In 1984, the Domain Name System (DNS) was introduced and gave the world domain suffixes (such as .edu, .com, .gov, and .org) as well as a series of country codes. This system made the Internet more manageable. Without DNS, users had to remember the IP address of every Internet site they wanted to visit, a long series of numbers, instead of a string of words. In 1989, Timothy Berners-Lee began work on a means to better facilitate communication among physicists around the world, based on the concept of hypertext, which would allow electronic documents to be linked directly to each other. The eventual result of linking documents was the World Wide Web. Standard formatting languages, such as HTML and its variants, allow web pages to display formatted text, graphics, and multimedia. A web browser can read and display HTML documents, and can access and download related files and software. The web was popularized by the 1993 release of a graphical, easy-to-use browser called Mosaic. Although the web began as only one component of the Internet, it is clearly the most popular, and the two are now nearly synonymous. Throughout the 1990s, PCs became more powerful and less expensive, allowing millions of people to buy them for their homes and offices. ISPs, such as America Online (AOL), CompuServe, and many local providers, began offering affordable dialup connections to the Internet. To accommodate the need for increased speed, cable service providers began to offer access through cable network facilities and technologies.
Today, the Internet has grown into the largest network on the earth, providing access to information and communication for business and home users. The Internet can be seen as a network of networks that consists of a worldwide mesh of hundreds of thousands of networks. Millions of companies and individuals all over the world, all connected to thousands of ISPs, own and operate these networks. These ISP networks connect to each other to provide access for millions of users all over the world. Ensuring effective communication across this diverse infrastructure requires the application of consistent and commonly recognized technologies and protocols as well as the cooperation of many network administration agencies.
telephone. The client calls the main number to your office, which is the only number that the client knows. When the client tells the receptionist who they are looking for, the receptionist checks a lookup table that matches your name to your extension. The receptionist knows that you requested this call; therefore, the receptionist forwards the caller to your extension. NAT operates on a Cisco router and is designed for IP address simplification and conservation. Usually, NAT connects two networks together and translates the private (inside local) addresses in the internal network into public addresses (inside global) before packets are forwarded to another network. You can configure NAT to advertise only one address for the entire network to the outside world. Advertising only one address effectively hides the internal network from the world, thus providing additional security. Any device that sits between an internal network and the public network, such as a firewall, a router, or a computer, uses NAT, which is defined in RFC 1631. In NAT terminology, the "inside network" is the set of networks that are subject to translation. The "outside network" refers to all other addresses. Usually, these are valid addresses that are located on the Internet. Cisco defines the following NAT terms:
Inside local address: The IP address that is assigned to a host on the inside network. The inside local address is likely not an IP address that is assigned by the Network Information Center (NIC) or service provider.
Inside global address: A legitimate IP address that is assigned by the NIC or service provider that represents one or more inside local IP addresses to the outside world.
Outside local address: The IP address of an outside host as it appears to the inside network. Not necessarily legitimate, the outside local address is allocated from an address space that is routable on the inside.
Outside global address: The IP address that is assigned to a host on the outside network by the host owner. The outside global address is allocated from a globally routable address or network space.
One of the main features of NAT is Port Address Translation (PAT), which is also referred to as overload in a Cisco IOS configuration. Several internal addresses can be translated into just one or a few external addresses by using PAT. Most home routers operate in this manner. Your ISP assigns one address to your router, but several members of your family can simultaneously access the Internet. With NAT overloading, multiple addresses can be mapped to one or to a few addresses, because a port number tracks each private address. When a client opens a TCP/IP session, the NAT router assigns a port number to its source address. NAT overload ensures that clients use a different TCP port number for each client session with a server on the Internet. When a response comes back from the server, the source port number, which becomes the destination port number on the return trip, determines to which client the router routes the packets. It also validates that the incoming packets were requested, thus adding a degree of security to the session.
PAT uses unique source port numbers on the inside global IP address to distinguish between translations. Because the port number is encoded in 16 bits, the total number of internal addresses that NAT can translate into one external address is, theoretically, as many as 65,536 addresses. PAT attempts to preserve the original source port. If the source port is already allocated, PAT attempts to find the first available port number. It starts from the beginning of the appropriate port group, which is either 0 to 511, 512 to 1023, or 1024 to 65535. If PAT does not find a port that is available from the appropriate port group and if more than one external IP address is configured, PAT will move to the next IP address and try to allocate the original source port again. PAT continues trying to allocate the original source port until it runs out of available ports and external IP addresses.
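As a sketch, the overload behavior described above might be configured as follows; the ACL number, interface names, and address range here are illustrative assumptions, not values taken from the surrounding text:

```
Router(config)#access-list 1 permit 192.168.1.0 0.0.0.255
Router(config)#interface fastethernet 0/0
Router(config-if)#ip nat inside
Router(config-if)#exit
Router(config)#interface serial 0/0/0
Router(config-if)#ip nat outside
Router(config-if)#exit
Router(config)#ip nat inside source list 1 interface serial 0/0/0 overload
```

The overload keyword is what enables port-based translation; without it, NAT performs one-to-one dynamic translation.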
To begin configuring the DHCP client interface, click the Interfaces and Connections tab. Check the Ethernet (PPPoE or Unencapsulated Routing) radio button, and then click the Create New Connection button. If the ISP uses PPP over Ethernet (PPPoE), check the Enable PPPoE encapsulation check box, and then click Next. Check the Dynamic (DHCP Client) radio button and enter the hostname. Check Port Address Translation and choose the inside interface from the drop-down list. Note: Cisco routers can also be manually configured as DHCP clients. To configure an interface as a DHCP client, the ip address dhcp command must be configured on the interface.
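For example, the manual CLI configuration mentioned in the note might look like this (the interface name is illustrative):

```
Router(config)#interface fastethernet 0/1
Router(config-if)#ip address dhcp
Router(config-if)#no shutdown
```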
show ip nat translations: Displays active translations
clear ip nat translation *: Clears all dynamic address translation entries from the NAT translation table
After you have configured NAT, verify that it is operating as expected by using the clear and show commands. By default, dynamic address translations time out from the NAT and PAT translation tables after a period of nonuse. When port translation is not configured, translation entries time out after 24 hours unless you reconfigure the timeout with the ip nat translation timeout command. You can clear the entries before the timeout by using one of the commands that are listed in the table.
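For example, the timeout for dynamic translations can be adjusted, and the table then cleared and inspected, as follows (the 7200-second value is an arbitrary illustration):

```
Router(config)#ip nat translation timeout 7200
Router(config)#end
Router#clear ip nat translation *
Router#show ip nat translations
```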
Alternatively, you can use the show run command and look for NAT, access control list (ACL), interface, or pool sections with the required values.
Identify the destination address: Determine the destination (or address) of the packet that needs to be routed.
Identify sources of routing information: Determine from which sources (other routers) the router can learn the paths to the given destinations.
Identify routes: Determine the initial possible routes or paths to the intended destination.
Select routes: Select the best path to the intended destination.
Maintain and verify routing information: Determine if the known paths to the destination are the most current.
The routing information that a router obtains from other routers is placed in its routing table. The routing table is used to find the best match and path between the destination IP of a packet and a network address in the routing table. The routing table will ultimately determine the exit interface to forward the packet, and the router will encapsulate that packet in the appropriate data-link frame for that outgoing interface. The routing table stores information about connected and remote networks. Connected networks are directly attached to one of the router interfaces. These interfaces are the gateways for the hosts on different local networks. If the destination network is directly connected, the router already knows which interface to use when forwarding packets. Remote networks are networks that are not directly connected to the router. Routes to these networks can be determined in one of the following ways:
Manually configured on the router by the network administrator Learned automatically using the dynamic routing process
Static routes: Routes to remote networks with an associated next hop can be manually configured on the router. These routes are known as static routes. The administrator must manually update a static route entry whenever an internetwork topology change requires an update. Static routes are user-defined routes that specify the path that packets take when moving between a source and a destination. These administrator-defined routes allow very precise control over the routing behavior of the IP internetwork. Dynamic routes: Remote networks can also be added to the routing table by using a dynamic routing protocol. The router dynamically learns routes after an administrator configures a routing protocol that helps determine routes. Unlike the situation with static routes, after the network administrator enables dynamic routing, the routing process automatically updates route knowledge whenever new topology information is received. The router learns and maintains routes to the remote destinations by exchanging routing updates with other routers in the internetwork.
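As a sketch, a static route to a remote network might be configured as follows; the network and next-hop addresses are illustrative:

```
Router(config)#ip route 192.168.2.0 255.255.255.0 10.0.0.2
```

Here, packets destined for the 192.168.2.0/24 network are forwarded to the next-hop router at 10.0.0.2.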
A network consists of only a few routers. Using a dynamic routing protocol in such a case does not present any substantial benefit. On the contrary, dynamic routing may add more administrative overhead. A network is connected to the Internet only through a single ISP. There is no need to use a dynamic routing protocol across this link because the ISP represents the only exit point to the Internet. A large network is configured in a hub-and-spoke topology. A hub-and-spoke topology consists of a central location (the hub) and multiple branch locations (the spokes), with each spoke having only one connection to the hub. Using dynamic routing would be unnecessary because each branch has only one path to a given destination: through the central location.
Routers in an enterprise network use bandwidth, memory, and processing resources to provide NAT and PAT, packet filtering, and other services. Static routing provides forwarding services without the overhead that is associated with most dynamic routing protocols.
Static routing provides more security than dynamic routing, because no routing updates are required. A hacker could intercept a dynamic routing update to gain information about a network. However, static routing is not without problems. It requires time and accuracy from the network administrator, who must manually enter routing information. A simple typographical error in a static route can result in network downtime and packet loss. When a static route changes, the network may experience routing errors and problems during manual reconfiguration. For these reasons, static routing is impractical for general use in a large enterprise environment.
A default static route is used in the following situations:
When no other route in the routing table matches the destination IP address of the packet, that is, when a more specific match does not exist. A common use for a default static route is when connecting an edge router of a company to the ISP network.
When a router has only one other router to which it is connected. This condition is known as a stub router.
The syntax for a default static route is like any other static route, except that the network address is 0.0.0.0 and the subnet mask is 0.0.0.0.
Router(config)#ip route 0.0.0.0 0.0.0.0 exit-interface
or
Router(config)#ip route 0.0.0.0 0.0.0.0 ip-address
A route with a network address of 0.0.0.0 and a mask of 0.0.0.0 is called a "quad-zero" route.
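For example, an edge router might point its default route at an ISP next hop (the next-hop address here is illustrative):

```
Router(config)#ip route 0.0.0.0 0.0.0.0 203.0.113.1
```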
data backup can result in lower off-peak tariffs (line charges). Tariffs are based on the distance between the endpoints, the time of day, and the duration of the call. There are a number of advantages to using the PSTN, including the following:
Simplicity: Other than a modem, there is no additional equipment that is required, and analog modems are easy to configure.
Availability: Because a public telephone network is available almost everywhere, it is easy to locate a telephone service provider. The maintenance of the telephone system is of high quality, with few instances in which lines are not available.
There are also some disadvantages to using the PSTN, including the following:
Low data rates: Because the telephone system was designed to transmit voice, the transmission rate for large data files is noticeably slow.
Relatively long connection setup time: Because connecting to the PSTN requires a dialup activity, the time that is required to establish a WAN connection is long compared to other connection types.
customer equipment interface from the DSU and terminate the channelized transport media of the carrier on the CSU. The CSU also provides diagnostic functions such as a loopback test. Most T1 or E1 time-division multiplexing (TDM) interfaces on current routers include approved CSU/DSU capabilities. Leased lines provide permanent dedicated capacity and are used extensively for building WANs. They have been the traditional connection of choice but have a number of disadvantages. Leased lines have a fixed capacity; however, WAN traffic is often variable and leaves some of the capacity unused. In addition, each endpoint needs a separate physical interface on the router, which increases equipment costs. Any change to the leased line generally requires a site visit by the carrier personnel.
Bandwidth
Bandwidth refers to the rate at which data is transferred over the communication link. The underlying carrier technology depends on the bandwidth that is available. There is a difference in bandwidth points between the North American (T-carrier) specification and the European (E-carrier) system. Both of these systems are based on the Plesiochronous Digital Hierarchy (PDH) that is supported in their networks. Optical networks use a different bandwidth hierarchy, which again differs between North America and Europe. In the United States, the Optical Carrier (OC) defines the bandwidth points. In Europe, the Synchronous Digital Hierarchy (SDH) defines the bandwidth points. In North America, the bandwidth is usually expressed as a digital signal level number (DS0, DS1, and so on), which refers to the rate and format of the signal. The most fundamental line speed is 64 kb/s, or DS0, which is the bandwidth that is required for an uncompressed, digitized phone call. Serial connection bandwidths can be incrementally increased to accommodate the need for faster transmission. For example, 24 DS0s can be bundled to get a DS1 line (also called a T1 line) with a speed of 1.544 Mb/s. Also, 28 DS1s can be bundled to get a DS3 line (also called a T3 line) with a speed of 44.736 Mb/s. Leased lines are available in different capacities and are generally priced based on the bandwidth that is required and the distance between the two connected points.
Note E1 (2.048 Mb/s) and E3 (34.368 Mb/s) are European standards like T1 and T3, but they possess different bandwidths and frame structures.
Serial interfaces are used to connect WANs to routers at a remote site or ISP. To configure a serial interface, follow these steps:
Step 1. Enter global configuration mode with the configure terminal command.
Step 2. Enter interface configuration mode; in this example, use the interface serial 0/0/0 command.
Step 3. Enter the specified bandwidth for the interface. The bandwidth command provides a minimum bandwidth guarantee during congestion. It overrides the default bandwidth that is displayed in the show interfaces command and is used by some routing protocols, such as the Interior Gateway Routing Protocol (IGRP), for routing metric calculations. The router also uses the bandwidth for other types of calculations, such as those required for the Resource Reservation Protocol (RSVP). The bandwidth that is entered has no effect on the actual speed of the line.
Step 4. If a DCE cable is attached, enter the clock rate bps command with the desired speed. Use the clock rate interface configuration command to configure the clock rate for the hardware connections on serial interfaces, such as network interface modules (NIMs) and interface processors, to an acceptable bit rate. Typically, the clock rate is configured in a lab environment. Be sure to enter the complete clock speed; for example, a clock rate of 64000 cannot be abbreviated to 64. On serial links, one side of the link acts as the DCE and the other side acts as the DTE. By default, Cisco routers are DTE devices, but they can be configured as DCE devices. In a "back-to-back" router configuration in which a modem is not used, one of the interfaces must be configured as the DCE to provide a clocking signal. You must specify the clock rate for each DCE interface that is configured in this type of environment. Acceptable clock rates in bits per second are as follows: 1200, 2400, 4800, 9600, 19200, 38400, 56000, 64000, 72000, 125000, 148000, 500000, 800000, 1000000, 1300000, 2000000, and 4000000.
Step 5. By default, interfaces are disabled. The interface must be activated with the no shutdown command. This activation is like powering on the interface.
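Putting the steps together, a serial interface on the DCE side of a back-to-back lab link might be configured as follows; the IP address is an illustrative assumption, and the clock rate command applies only to the DCE end:

```
Router#configure terminal
Router(config)#interface serial 0/0/0
Router(config-if)#ip address 10.1.1.1 255.255.255.252
Router(config-if)#bandwidth 64
Router(config-if)#clock rate 64000
Router(config-if)#no shutdown
```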
The interface must also be connected to another device (such as a hub, a switch, or another router) for the physical layer to be active. If an interface needs to be disabled for maintenance or troubleshooting, use the shutdown command. Serial interfaces require a clock signal to control the timing of the communications. In most environments, a DCE device such as a CSU/DSU will provide the clock. By default, Cisco routers are DTE devices, but they can be configured as DCE devices. Note The serial cable that is attached determines the DTE or DCE mode of the Cisco router. Choose the cable to match the network requirement. Each connected serial interface must have an IP address and subnet mask to route IP packets. Note A common misconception for students new to networking and Cisco IOS Software is to assume that the bandwidth command will change the physical bandwidth of the link. The bandwidth command only modifies the bandwidth metric that is used by routing protocols such
as Enhanced Interior Gateway Routing Protocol (EIGRP) and Open Shortest Path First (OSPF). Sometimes, a network administrator will change the bandwidth value in order to have more control over the chosen outgoing interface. The show controller command displays information about the physical interface itself. This command is useful with serial interfaces to determine the type of cable that is connected without the need to physically inspect the cable itself. The information that is displayed is determined when the router initially starts and represents only the type of cable that was attached when the router was started. If the cable type is changed after startup, the show controller command will not display the cable type of the new cable.
Simplicity: Point-to-point communication links require minimal expertise to install and maintain.
Quality: Point-to-point communication links usually offer a high quality of service, if they have adequate bandwidth. The dedicated capacity results in minimal latency and jitter between the endpoints.
Availability: Constant availability is essential for some applications, such as e-commerce, and point-to-point communication links provide permanent, dedicated capacity that is always available.
There are also some disadvantages to this type of WAN access, including the following:
Cost: Point-to-point links are generally the most expensive type of WAN access, and this cost can become significant when they are used to connect many sites. In addition, each endpoint requires an interface on the router, which increases equipment costs.
Limited flexibility: WAN traffic is often variable, and leased lines have a fixed capacity, so the bandwidth of the line is seldom exactly what is needed. Any change to the leased line generally requires a site visit by the ISP or carrier personnel to adjust capacity.
Step 1. Enter the interface configuration mode of the serial interface.
Step 2. Enter the encapsulation hdlc command to specify the encapsulation protocol on the interface.
Example: Verifying HDLC Encapsulation Configuration
Use the show interface command to verify proper configuration. When HDLC is configured, "Encapsulation HDLC" should be reflected in the output of the show interface command.
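The two configuration steps and the verification might look like this on a serial interface (the interface name is illustrative):

```
Router(config)#interface serial 0/0/0
Router(config-if)#encapsulation hdlc
Router(config-if)#end
Router#show interface serial 0/0/0
```

Because HDLC is the default encapsulation on Cisco serial interfaces, the encapsulation hdlc command is mainly used to restore the default after another encapsulation, such as PPP, has been configured.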
Point-to-Point Protocol
Cisco HDLC is a point-to-point protocol that can be used on leased lines between two Cisco devices. For communicating with a device from another vendor, synchronous PPP is a better option. PPP originally emerged as an encapsulation protocol for transporting IP traffic over point-to-point links. PPP also established a standard for the assignment and management of IP addresses, asynchronous (start and stop bit) and bit-oriented synchronous encapsulation, network protocol multiplexing, link configuration, link quality testing, error detection, and option negotiation for such capabilities as network layer address negotiation and data-compression negotiation. PPP provides router-to-router and host-to-network connections over both synchronous and asynchronous circuits. An example of an asynchronous connection is a dialup connection. An example of a synchronous connection is a leased line. PPP provides a standard method for transporting multiprotocol datagrams (packets) over point-to-point links. There are many advantages to using PPP, including the fact that it is not proprietary. Moreover, it includes many features not available in HDLC, including the following:
The link quality management feature monitors the quality of the link. If too many errors are detected, PPP takes down the link. PPP supports Password Authentication Protocol (PAP) and Challenge Handshake Authentication Protocol (CHAP) authentication.
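As a sketch, CHAP authentication might be enabled on a PPP link as follows; the peer hostname and password are illustrative, and the password must match on both routers:

```
RouterA(config)#username RouterB password cisco123
RouterA(config)#interface serial 0/0/0
RouterA(config-if)#encapsulation ppp
RouterA(config-if)#ppp authentication chap
```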
Developers designed PPP to provide connections over point-to-point links. PPP, described in RFCs 1661 and 1332, encapsulates network layer protocol information over point-to-point links. RFC 2153, PPP Vendor Extensions, updated RFC 1661. You can configure PPP on the following types of physical interfaces:
A method for encapsulating multiprotocol datagrams.
An extensible Link Control Protocol (LCP) to establish, configure, and test the WAN data-link connection.
A family of Network Control Protocols (NCPs) for establishing and configuring different network layer protocols. PPP allows the simultaneous use of multiple network layer protocols. Some of the more common NCPs are IP Control Protocol (IPCP), AppleTalk Control Protocol, Novell IPX Control Protocol, Cisco Systems Control Protocol, Systems Network Architecture (SNA) Control Protocol, and Compression Control Protocol.
LCP provides sufficient versatility and portability to a wide variety of environments. LCP is used to automatically determine the encapsulation format option, to manage varying limits on sizes of packets, and to detect a loopback link and terminate the link. Other optional facilities that are provided are authentication of the identity of its peer on the link and determination of when a link is functioning properly or failing. The authentication phase of a PPP session is optional. After the link has been established and the authentication protocol is chosen, the peer can be authenticated. If the authentication option is used, authentication takes place before the network layer protocol configuration phase begins. The authentication options require that the calling side of the link enters authentication information to help ensure that the user has permission from the network administrator to make the call. Peer routers exchange authentication messages. To enable PPP encapsulation, enter interface configuration mode. Use the encapsulation ppp interface configuration command to specify PPP encapsulation on the interface. To set PPP as the encapsulation method to be used by a serial or ISDN interface, use the encapsulation ppp interface configuration command. The following example enables PPP encapsulation on serial interface 0/0/0 on RouterA:
RouterA#configure terminal
RouterA(config)#interface serial 0/0/0
RouterA(config-if)#encapsulation ppp
RouterA(config-if)#bandwidth 64
RouterA(config-if)#no shutdown
The encapsulation ppp command has no arguments, but you must first configure the router with an IP routing protocol to use PPP encapsulation. Remember that if you do not configure PPP on a Cisco router, the default encapsulation for serial interfaces is HDLC. In this example, the bandwidth value is set to 64 kb/s.
A routing protocol is a set of processes, algorithms, and messages that is used to exchange routing information and populate the routing table with the choice of best paths for the routing protocol. Routing protocols are a set of rules by which routers dynamically share their routing information. As routers become aware of changes to the networks for which they act as the gateway, or changes to links between routers, this information is passed on to other routers. When a router receives information about new or changed routes, it updates its own routing table and, in turn, passes the information to other routers. In this way, all routers have accurate routing tables that are updated dynamically and can learn about routes to remote networks that are many hops away. Further examples of the information that routing protocols determine are as follows:
How updates are conveyed
What knowledge is conveyed
When to convey knowledge
How to locate recipients of the updates
Discovery of remote networks
Maintaining up-to-date routing information
Choosing the best path to destination networks
Ability to find a new best path if the current path is no longer available
All routing protocols have the same purpose: to learn about remote networks and to quickly adapt whenever there is a change in the topology. The method that a routing protocol uses to accomplish this purpose depends upon the algorithm that it uses and the operational characteristics of that protocol. The operations of a dynamic routing protocol vary depending on the type of routing protocol and on the routing protocol itself. In general, the operations of a dynamic routing protocol can be described as follows:
The router sends and receives routing messages on its interfaces. The router shares routing messages and routing information with other routers that are using the same routing protocol. Routers exchange routing information to learn about remote networks. When a router detects a topology change, the routing protocol can advertise this change to other routers.
Although routing protocols provide routers with up-to-date routing tables, there are costs that put additional demands on the memory and processing power of the router. First, the exchange of route information adds overhead that consumes network bandwidth. This overhead can be a problem, particularly for low-bandwidth links between routers. Second, after the router receives the route information, protocols such as Enhanced Interior Gateway Routing Protocol (EIGRP) and Open Shortest Path First (OSPF) process it extensively to make routing table entries. This means that routers that are employing these protocols must have sufficient processing capacity to implement the algorithms of the protocol as well as to perform timely packet routing and forwarding. An autonomous system (AS), otherwise known as a routing domain, is a collection of routers under a common administration. Typical examples are an internal network of a company and a network of an ISP. Because the Internet is based on the AS concept, the following two types of routing protocols are required:
Interior gateway protocols (IGPs): These routing protocols are used to exchange routing information within an AS. RIP, EIGRP, Intermediate System-to-Intermediate System (IS-IS), and OSPF are examples of IGPs.
Exterior gateway protocols (EGPs): These routing protocols are used to route between autonomous systems. Border Gateway Protocol (BGP) is the EGP of choice in networks today.
Note The Internet Assigned Numbers Authority (IANA) assigns AS numbers for many jurisdictions. Use of IANA numbering is required if your organization plans to use BGP. However, it is good practice to be aware of private versus public AS numbering schema. IGPs are used for routing within a routing domain, which consists of those networks within the control of a single organization. An AS is commonly comprised of many individual networks that belong to companies, schools, and other institutions. An IGP is used to route within the AS and it is also used to route within the individual networks themselves. For example, a fictitious organization operates an AS that includes schools, colleges, and universities. This organization uses an IGP to route within its AS in order to interconnect all of these institutions. Each of the educational institutions also uses an IGP of their own choosing to route within its own individual network. The IGP that is used by each entity provides best-path determination within its own routing domains, just as the IGP that is used by the organization provides best-path routes within the AS itself. EGPs, on the other hand, are designed for use between different autonomous systems that are under the control of different administrations. BGP is the only currently viable EGP and is the routing protocol that is used by the Internet. BGP is a path vector protocol that can use many different attributes to measure routes. At the ISP level, there are often more important issues than
just choosing the fastest path. BGP is typically used between ISPs and sometimes between a company and an ISP. Within an AS, most IGP routing can be classified as conforming to one of the following algorithms:
Distance vector: The distance vector routing approach determines the direction (vector) and distance (such as hop count) to any link in the internetwork. Some distance vector protocols periodically send complete routing tables to all of the connected neighbors. In large networks, these routing updates can become enormous, causing significant traffic on the links. Distance vector protocols use routers as signposts along the path to the final destination. The only information that a router knows about a remote network is the distance, or metric, to reach that network and which path or interface to use to get there. Distance vector routing protocols do not have an actual map of the network topology. RIP is an example of a distance vector routing protocol.
Link state: The link-state approach, which uses the Shortest Path First (SPF) algorithm, creates an abstraction of the exact topology of the entire internetwork, or at least of the partition in which the router is situated. Using the analogy of signposts, a link-state routing protocol is like having a complete map of the network topology. The signposts along the way from the source to the destination are not necessary, because all link-state routers are using an identical "map" of the network. A link-state router uses the link-state information to create a topology map and to select the best path to all destination networks in the topology. OSPF and IS-IS are examples of link-state routing protocols.
Advanced distance vector or balanced hybrid: The advanced distance vector approach combines aspects of the link-state and distance vector algorithms. EIGRP is an example of an advanced distance vector routing protocol.
There is no single best routing algorithm for all internetworks. All routing protocols provide the information differently.
mask. Routers that are running a classful routing protocol perform automatic route summarization across network boundaries. Upon receiving a routing update packet, a router that is running a classful routing protocol performs one of the following actions to determine the network portion of the route:
If the routing update information contains the same major network number as is configured on the receiving interface (for example /26 for both the update and the receiving interface), the router applies the subnet mask that is configured on the receiving interface. If the routing update information contains a major network that is different from the network that is configured on the receiving interface, the router applies the default classful mask (by address class) as follows:
For Class A addresses, the default classful mask is 255.0.0.0.
For Class B addresses, the default classful mask is 255.255.0.0.
For Class C addresses, the default classful mask is 255.255.255.0.
Classless routing protocols can be considered second-generation protocols because they are designed to address some of the limitations of the earlier classful routing protocols. A serious limitation in a classful network environment is that the subnet mask is not exchanged during the routing update process, thus requiring the same subnet mask to be used on all subnetworks within the same major network. Another limitation of the classful approach is the need to automatically summarize to the classful network number at all major network boundaries. In the classless environment, the summarization process is controlled manually and can usually be invoked at any bit position within the address. Because subnet routes are propagated throughout the routing domain, manual summarization may be required to keep the size of the routing tables manageable. Classless routing protocols include Routing Information Protocol version 2 (RIPv2), EIGRP, OSPF, and IS-IS.
The direction or interface in which packets should be forwarded
The distance or how far it is to the destination network
Distance vector routing protocols call for the router to periodically broadcast the entire routing table to each of its neighbors. The periodic routing updates that most distance vector routing protocols generate are addressed only to directly connected routing devices. The addressing scheme that is most commonly used is a logical broadcast. Routers that are running a distance vector routing protocol send periodic updates even if there are no changes in the network. In a pure distance vector environment, the periodic routing update includes a complete routing table. Upon receiving a complete routing table from its neighbor, a router can verify all known routes and make changes to the local routing table based on this updated information. This process is also known as "routing by rumor," because the router's understanding of the network is based on the neighboring router's perspective of the network topology.
RIP Features
Over the years, routing protocols have evolved to meet the increasing demands of complex networks. The first protocol used was RIP. RIP still enjoys popularity because of its simplicity and widespread support. The key characteristics of RIP include the following:
RIP is a distance vector routing protocol.
Hop count is used as the metric for path selection.
The maximum allowable hop count is 15.
By default, routing updates are broadcast every 30 seconds.
RIP is capable of load balancing over as many as six equal-cost paths. The default is four equal-cost paths. Defining the maximum number of parallel paths that are allowed in a routing table enables RIP load balancing. With RIP, the paths must be equal-cost paths. If the maximum number of paths is set to one, load balancing is disabled.
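These characteristics map directly onto configuration. As a minimal sketch (the 10.0.0.0 network number is assumed for illustration), RIP could be enabled with the maximum of six equal-cost paths as follows:

```
Router(config)#router rip
Router(config-router)#network 10.0.0.0
! Allow up to six equal-cost paths (the default is four; a value of 1 disables load balancing)
Router(config-router)#maximum-paths 6
```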
Includes the subnet mask in the routing updates, making it a classless routing protocol
Has an authentication mechanism to secure routing table updates
Supports variable-length subnet mask (VLSM)
Uses multicast addresses instead of broadcast
Supports manual route summarization
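On a router already running RIP, these RIPv2 behaviors are typically enabled under the RIP routing process; a sketch:

```
Router(config)#router rip
! Switch the process to RIP version 2 (multicast updates, subnet masks included)
Router(config-router)#version 2
! Disable automatic summarization at classful boundaries so subnet routes are advertised
Router(config-router)#no auto-summary
```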
Connectivity
Addressing
Media types
Devices
Rack layouts
Card assignments
Cable routing
Cable identification
Termination points
Power information
Circuit identification information
Graphical representation of a network illustrates how each device in a network is connected and its logical architecture. A topology diagram shares many of the same components as the network configuration table. Each network device should be represented on the diagram with consistent notation or a graphical symbol. Also, each logical and physical connection should be represented using a simple line or other appropriate symbol. Routing protocols can also be shown. Maintaining accurate network topology documentation is important to successful configuration management. To create an environment where topology documentation maintenance can occur, the information must be available for updates. Cisco strongly recommends updating topology documentation whenever a network change occurs.
types of devices that are connected, the router interfaces to which they are connected, the interfaces that are used to make the connections, and the model numbers of the devices. A Cisco device frequently has other Cisco devices as neighbors on the network. Information that is gathered from other devices can assist you in making network design decisions, troubleshooting, and making changes to equipment. Cisco Discovery Protocol can be used as a network discovery tool, helping you to build a logical topology of a network when such documentation is missing or lacking in detail. Cisco Discovery Protocol runs over the data link layer, connecting the physical media to the upper-layer protocols (ULPs). Because Cisco Discovery Protocol operates at the data link layer, two or more Cisco network devices, such as routers that support different network layer protocols (for example, IP and Novell IPX), can learn about each other. Physical media that connect Cisco Discovery Protocol devices must support Subnetwork Access Protocol (SNAP) encapsulation. These media can include all LANs, Frame Relay, other WANs, and ATM networks. When a Cisco device boots, Cisco Discovery Protocol starts by default and automatically discovers neighboring Cisco devices that are running Cisco Discovery Protocol, regardless of which protocol suite is running. The exception to this is Frame Relay interfaces, where Cisco Discovery Protocol must manually be enabled.
Device identifiers : For example, the configured hostname of the switch.
Address list : Up to one network layer address for each protocol that is supported.
Port identifier : The name of the local port and remote port, in the form of an ASCII character string such as ethernet0.
Capabilities list : Supported features, such as whether this device is a router or a switch.
Platform : The hardware platform of the device, such as Cisco 2800 Series Router.
To obtain Cisco Discovery Protocol information about this upper router from the console of the administrator, network staff could use Telnet to connect to a switch that is connected directly to this target device. Cisco Discovery Protocol version 2 is the most recent release of the protocol and provides more intelligent device-tracking features. These features include a reporting mechanism that allows for quicker error tracking, therefore reducing costly downtime. Reported error messages can be sent to the console or to a logging server.
different levels of detail. It is designed and implemented as a simple, low-overhead protocol. A Cisco Discovery Protocol packet can be as small as 80 octets, mostly made up of the ASCII strings that represent information. Cisco Discovery Protocol functionality is enabled by default on all interfaces (except for Frame Relay multipoint subinterfaces), but can be disabled at the device level. However, some interfaces, such as ATM interfaces, do not support Cisco Discovery Protocol. To prevent other Cisco Discovery Protocol-capable devices from accessing information about a specific device, the no cdp run global configuration command is used.
Router(config)#no cdp run
If you want to use Cisco Discovery Protocol but need to stop Cisco Discovery Protocol advertisements on a particular interface, use the following command:
Router(config-if)#no cdp enable
To enable Cisco Discovery Protocol on an interface, the cdp enable interface configuration command is used.
Router(config-if)#cdp enable
Neighbor device ID
Local interface
Holdtime value, in seconds
Neighbor device capability code
Neighbor hardware platform
Neighbor remote port ID
The holdtime value indicates how long the receiving device should hold the Cisco Discovery Protocol packet before discarding it. The format of the show cdp neighbors output varies between different types of devices, but the available information is generally consistent across devices. The show cdp neighbors command can be used on a Cisco Catalyst switch to display the Cisco Discovery Protocol updates that were received on the local interfaces. Note that on a switch, the local interface is referred to as the local port.
The show cdp neighbors detail command also reveals the IP address of a neighboring device. Cisco Discovery Protocol will reveal the IP address of the neighbor even if you cannot ping the neighbor. This command is very helpful when two Cisco routers cannot route across their shared data link. The show cdp neighbors detail command will help determine if one of the Cisco Discovery Protocol neighbors has an IP configuration error. For network discovery situations, knowing the IP address of the Cisco Discovery Protocol neighbor is often all the information that is needed in order to use Telnet to connect to that device. With an established Telnet session, information can be gathered about directly connected Cisco devices. In this fashion, you can use Telnet to navigate around a network and build a logical topology. The output from the show cdp neighbors detail command is identical to the output that is produced by the show cdp entry * command.
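The two commands discussed above can be compared side by side; a sketch:

```
! Summary view: device ID, local interface, holdtime, capability, platform, and port ID
Router#show cdp neighbors
! Detailed view: adds Layer 3 addresses and the Cisco IOS version of each neighbor
Router#show cdp neighbors detail
```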
Neighbor device ID
Layer 3 protocol information
Device platform
Device capabilities
Local interface type and outgoing remote port ID
Holdtime value, in seconds
Cisco IOS Software type and release (Cisco IOS Software, 2800 Software
The output from this command includes all the Layer 3 addresses of the neighboring device interfaces (up to one Layer 3 address per protocol). The show cdp traffic command displays information about interface traffic. It shows the number of Cisco Discovery Protocol packets that are sent and received. It also displays the number of errors for the following error conditions:
Syntax error
Checksum error
Failed encapsulations
Out of memory
Invalid packets
Fragmented packets
Number of Cisco Discovery Protocol version 1 packets sent
Number of Cisco Discovery Protocol version 2 packets sent
The show cdp interface command displays the following interface status and configuration information about the local device:
Line and data-link status of the interface
Encapsulation type for the interface
Frequency at which Cisco Discovery Protocol packets are sent (default is 60 seconds)
Holdtime value, in seconds (default is 180 seconds)
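The update frequency and holdtime listed above are tunable from global configuration mode; a sketch that sets the default values explicitly:

```
! Send Cisco Discovery Protocol updates every 60 seconds (the default)
Router(config)#cdp timer 60
! Tell neighbors to keep this device's information for 180 seconds (the default)
Router(config)#cdp holdtime 180
Router(config)#end
! Verify the interface status, encapsulation, and timer settings
Router#show cdp interface
```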
Cisco Discovery Protocol is limited to gathering information about directly connected Cisco neighbors. Other tools, such as Telnet, are available for gathering information about remote devices that are not directly connected.
Central processing unit (CPU)
Synchronous dynamic random-access memory (SDRAM)
Read-only memory (ROM): Boot ROM
Data storage: Flash, CompactFlash
Interfaces: Console, Fast Ethernet
Operating system: Cisco IOS Software
Although there are several different types and models of routers, every router has the same general hardware components. Depending on the model, those components are located in different places inside the router. The major components of a router are mostly hardware.
CPU : The CPU executes operating system instructions, such as system initialization, routing functions, and switching functions.
RAM : RAM stores the instructions and data that the CPU needs to execute. This read/write memory contains the software and data structures that allow the router to function. RAM is volatile memory and loses its content when the router is powered down or restarted. However, the router also contains permanent storage areas, such as ROM, flash, and NVRAM. RAM is used to store the following components:

o Operating system : Cisco IOS Software is copied into RAM during bootup.
o Running configuration file : This file stores the configuration commands that Cisco IOS Software is currently using on the router. With few exceptions, all commands that are configured on the router are stored in the running configuration file, which is also known as "running-config."
o IP routing table : This file stores information about directly connected and remote networks. It is used to determine the best path to forward the packet.
o Address Resolution Protocol (ARP) cache : This cache contains the IP version 4 (IPv4) address to MAC address mappings, like the ARP cache on a PC. The ARP cache is used on routers that have LAN interfaces such as Ethernet interfaces.
o Packet buffer : Packets are temporarily stored in a buffer when they are received on an interface or before they exit an interface.

ROM : ROM is a form of permanent storage. This type of memory contains microcode for basic functions to start and maintain the router. The ROM contains the ROM monitor, which is used for router disaster recovery functions, such as password recovery. ROM is nonvolatile; it maintains the memory contents even when the power is off.

Flash memory : Flash memory is nonvolatile computer memory that can be electrically stored and erased. Flash is used as permanent storage for the operating system. In most models of Cisco routers, the IOS is permanently stored in flash memory and copied into RAM during the bootup process, where the CPU then executes it. Some older models of Cisco routers run the IOS directly from flash. Flash consists of SIMMs or Personal Computer Memory Card International Association (PCMCIA) cards, which can be upgraded to increase the amount of flash memory. Flash memory does not lose its contents when the router loses power or is restarted.

NVRAM : NVRAM does not lose its information when power is turned off. Cisco IOS Software uses NVRAM as permanent storage for the startup configuration file (startup-config). All configuration changes are stored in the running configuration file in RAM, and with few exceptions, Cisco IOS Software implements them immediately. To save those changes in case the router is restarted or loses power, the running configuration must be copied to NVRAM, where it is stored as the startup configuration file.

Configuration register : The configuration register is used to control how the router boots. The configuration register is part of the NVRAM.

Interfaces : Interfaces are the physical connections to the external world for the router, and include the following types, among others:
o Ethernet, Fast Ethernet, and Gigabit Ethernet
o Asynchronous and synchronous serial
o Token Ring
o Fiber Distributed Data Interface (FDDI)
o ATM
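Because the running configuration in RAM is volatile, changes survive a restart only after they are copied to NVRAM, as described above:

```
! Copy the running configuration from RAM to the startup configuration file in NVRAM
Router#copy running-config startup-config
```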
There are three major areas of microcode that are generally contained in ROM.
Bootstrap code : The bootstrap code is used to bring up the router during initialization. It reads the configuration register to determine how to boot and then, if instructed to do so, loads the Cisco IOS Software.
Power-on self-test (POST) : POST is the microcode that is used to test the basic functionality of the router hardware and determine which components are present.
ROM monitor (ROMMON) : This area includes a low-level operating system that is normally used for manufacturing, testing, troubleshooting, and password recovery. In ROM monitor mode, the router has no routing or IP capabilities.
Note: Depending on the specific Cisco router platform, the components that are listed may be stored in flash memory or in bootstrap memory to allow for field upgrade to later versions.
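If the router does come up in ROM monitor mode, it can be booted manually with the boot command; a sketch (the image filename is hypothetical):

```
rommon 1 > boot flash:c1841-ipbase-mz.123-14.T7.bin
```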
Step 3. Find the Cisco IOS Software : The bootstrap code determines where the Cisco IOS Software image to load is located. In some cases, a scaled-down version of the Cisco IOS Software is copied from ROM into RAM. This version of Cisco IOS Software is used to help diagnose any problems and can be used to load a complete version of the Cisco IOS Software into RAM.

Step 4. Load the Cisco IOS Software : Once the bootstrap code has found the proper image, it then loads that image into RAM and starts the Cisco IOS Software. Some older routers do not load the Cisco IOS Software image into RAM, but execute it directly from flash memory instead.

Step 5. Find the configuration : After the Cisco IOS Software is loaded, the bootstrap program searches for the startup configuration file (startup-config) in NVRAM.

Step 6. Load the configuration : If a startup configuration file is found in NVRAM, the Cisco IOS Software loads it into RAM as the running configuration and executes the commands in the file, one line at a time. The running-config file contains interface addresses, starts routing processes, configures router passwords, and defines other characteristics of the router. If no configuration exists, the router will enter the setup utility or attempt an AutoInstall to look for a configuration file from a TFTP server.

Step 7. Run the configured Cisco IOS Software : When the prompt is displayed, the router is running the Cisco IOS Software with the current running configuration file. The network administrator can now begin using Cisco IOS commands on this router.
as well, such as selection of the console baud rate and whether to use the saved configuration file (startup-config) in NVRAM. It is possible to change the configuration register and, therefore, change where the router looks for the Cisco IOS image and the startup configuration file during the bootup process. For example, a configuration register value of 0x2102 (the "0x" indicates that the digits that follow are in hexadecimal notation) has a boot field value of 0x2. The right-most digit in the register value is 2 and represents the lower 4 bits of the register. 2. If the boot field value of the configuration register is from 0x2 to 0xF, the bootstrap code parses the startup configuration file in NVRAM for the boot system commands that specify the name and location of the Cisco IOS Software image to load. Several boot system commands can be entered in sequence to provide a fault-tolerant boot plan. The boot system command is a global configuration command that allows you to specify the source for the Cisco IOS Software image to load. Some of the syntax options available include the following:
Router(config)#boot system tftp://192.168.7.24/cs3-rx.90-1 Router(config)#boot system tftp://192.168.7.19/cs3-rx.83-2 Router(config)#boot system rom
3. If there are no boot system commands in the configuration, the router defaults to loading the first valid Cisco IOS image in flash memory and running it.
4. If no valid Cisco IOS image is found in flash memory, the router attempts to boot from a network TFTP server using the boot field value as part of the Cisco IOS image filename.
Note : Booting from a network TFTP server is a seldom-used method of loading a Cisco IOS Software image. Not every router has a boothelper image, so Steps 5 and 6 do not always follow.
5. By default, if booting from a network TFTP server fails after five tries, the router will boot the boothelper image (the Cisco IOS subset) from ROM. The user can also set bit 13 of the configuration register to 0 to tell the router to try to boot from a TFTP server continuously without booting the Cisco IOS subset from ROM after five unsuccessful tries.
6. If there is no boothelper image or if it is corrupted, the router will boot the ROM monitor from ROM.

When the router locates a valid Cisco IOS image file in flash memory, the Cisco IOS image is normally loaded into RAM to run. Some routers, including the Cisco 2500 Series Routers, do not have sufficient RAM to hold the Cisco IOS image and, therefore, run the Cisco IOS image directly from flash memory. If the image needs to be loaded from flash memory into RAM, it must first be decompressed. After the file is decompressed into RAM, it is started. When the Cisco IOS Software begins to load, you may see a string of pound signs (#), as shown in this example, while the image decompresses:
System Bootstrap, Version 12.1(3r)T2, RELEASE SOFTWARE (fc1) Copyright (c) 2000 by cisco Systems, Inc. cisco 2811 (MPC860) processor (revision 0x200) with 60416K/5120K bytes of memory Self decompressing the image : ##############################################################
Cisco IOS images that are run from flash memory are not compressed. The show version command can be used to help verify and troubleshoot some of the basic hardware and software components of the router. The show version command displays information about the version of the Cisco IOS Software that is currently running on the router, the version of the bootstrap program, and information about the hardware configuration, including the amount of system memory. The output from the show version command includes the following:
Cisco IOS Software, 1841 Software (C1841-ADVIPSERVICESK9-M), Version 12.4(15)T7, RELEASE SOFTWARE (fc3)
This line from the example output shows the version of the Cisco IOS Software in RAM that the router is using.
This line from the example output shows the version of the system bootstrap software, which is stored in ROM memory and was initially used to boot up the router.
This line from the example output shows where the Cisco IOS image is located and loaded, as well as its complete filename.
The first part of this line displays the type of CPU on this router. The last part of this line displays the amount of DRAM. Some series of routers, like the Cisco 2600 Series Routers, use a fraction of DRAM as packet memory. Packet memory is used for buffering packets. To determine the total amount of DRAM on the router, add both numbers. In this example, the Cisco 1841 router has 236,544 KB (kilobytes) of free DRAM that is used for temporarily storing the Cisco IOS Software and other system processes. The other 25,600 KB is dedicated for packet memory. The sum of these numbers is 262,144 KB of total DRAM.
Interfaces
This section of the output displays the physical interfaces on the router. In this example, the Cisco 2621 router has two Fast Ethernet interfaces and two low-speed serial interfaces.
Amount of NVRAM
This line from the example output shows the amount of NVRAM on the router.
Amount of flash
This line from the example output shows the amount of flash memory on the router.
Configuration register
The last line of the show version command displays the current configured value of the software configuration register in hexadecimal format. This value indicates that the router will attempt to load a Cisco IOS Software image from flash memory and load the startup configuration file from NVRAM. After the Cisco IOS Software image is loaded and started, the router must be configured to be useful. If there is an existing saved configuration file (startup-config) in NVRAM, it is executed. If there is no saved configuration file in NVRAM, the router either begins AutoInstall or enters the setup utility. If the startup configuration file does not exist in NVRAM, the router may search for a TFTP server. If the router detects that it has an active link to another configured router, it sends a broadcast searching for a configuration file across the active link. This condition will cause the router to pause, but you will eventually see a console message like the following one: <The router pauses here while it broadcasts for a configuration file across an active link.>
%Error opening tftp://255.255.255.255/network-confg(Timed out) %Error opening tftp://255.255.255.255/cisconet.cfg (Timed out)
The setup utility prompts the user at the console for specific configuration information to create a basic initial configuration on the router, as shown in this example:
<output omitted>
If you require further assistance please contact us by sending email to export@cisco.com.

Cisco IOS Software, 2800 Software (C2800NM-ADVIPSERVICESK9-M), Version 12.4(15)T1, RELEASE SOFTWARE (fc2)
Technical Support: http://www.cisco.com/techsupport
Copyright (c) 1987 by Cisco Systems, Inc.
Compiled Wed 18-Jul-07 06:21 by pt_rel_team

cisco 2811 (MPC860) processor (revision 0x200) with 60416K/5120K bytes of memory
Processor board ID JAD05190MTZ (4292891495)
M860 processor: part number 0, mask 49
2 FastEthernet/IEEE 802.3 interface(s)
239K bytes of non-volatile configuration memory.
62720K bytes of ATA CompactFlash (Read/Write)

--- System Configuration Dialog ---
Continue with configuration dialog? [yes/no]: no
The show running-config and show startup-config commands are among the most common Cisco IOS Software EXEC commands, because they allow you to see the current running configuration in RAM on the router or the startup configuration commands in the startup configuration file in NVRAM that the router will use on the next restart. If the words "Current configuration" are displayed, the active running configuration from RAM is being displayed. If there is a message at the top indicating how much nonvolatile memory is being used ("Using 924 out of 196600 bytes" in this example), the startup configuration file from NVRAM is being displayed.
Configuration Register
The configuration register includes information that specifies where to locate the Cisco IOS Software image. You can examine the register with the show version command, and you can change the register value with the config-register global configuration command. Before altering the configuration register, you should determine how the router is currently loading the software image. The show version command will display the current configuration register value. The last line of the display contains the configuration register value. You can change the default configuration register setting with the config-register global configuration command. The configuration register is a 16-bit register. The lowest 4 bits of the configuration register (bits 3, 2, 1, and 0) form the boot field. A hexadecimal number is used as the argument to set the value of the configuration register. The default value of the configuration register is 0x2102.
The boot field is set to 0 to enter ROM monitor mode automatically. This value sets the boot field bits to 0000. In ROM monitor mode, the router displays the ">" or "rommon>" prompt, depending on the router processor type. From ROM monitor mode, you can use the boot command to manually boot the router.

The boot field is set to 1 to configure the system to boot the Cisco IOS subset automatically from ROM. This value sets the boot field bits to 0001. The router displays the "Router(boot)>" prompt in this mode.

The boot field is set to any value from 0x2 to 0xF to configure the system to use the boot system commands in the startup configuration file in NVRAM. The default is 0x2. These values set the boot field bits to 0010 through 1111.
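As a sketch of changing the boot field (the value 0x2100 sets the boot field to 0x0 while leaving the upper bits of the default 0x2102 intact, an assumption made for this illustration):

```
! Boot field 0x0: enter ROM monitor mode on the next reload
Router(config)#config-register 0x2100
! Boot field 0x2 (the default): use boot system commands, then flash
Router(config)#config-register 0x2102
Router(config)#end
! Verify; the new value takes effect at the next reload
Router#show version
```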
The show version command is used to verify changes in the configuration register setting. The new configuration register value takes effect when the router reloads. In this example, the show version command indicates that the configuration register setting of 0x2102 will be used during the next router reload. Note: When using the config-register command, all 16 bits of the configuration register are set. Be careful to modify only the bits that you are trying to change, such as the boot field, and leave the other bits as they are. Remember that the other configuration register bits perform functions that include the selection of the console baud rate and whether to use the saved configuration in NVRAM. The show flash command is an important tool to gather information about the router memory and image file. It can determine the following information:
Total amount of flash memory on the router Amount of flash memory that is available Names of all the files that are stored in the flash memory
In this example, the line at the bottom tells how much flash memory is available. Some of it might already be in use. On platforms that run the Cisco IOS image directly from flash memory, flash is read-only during normal operation.
Flash memory file systems
Network file systems (NFSs): TFTP, Remote Copy Protocol (RCP), and FTP
Any other endpoint for reading or writing data (such as NVRAM, the running configuration in RAM, and so on)
An important feature of the Cisco IFS is the use of the URL convention to specify files on network devices and the network. The URL prefix specifies the file system. The output of the show file systems command lists all of the available file systems on a Cisco 1841 router. The command provides insightful information, such as the amount of available and free memory, and the type of file system and its permissions. Permissions include read only (as indicated by the "ro" flag), write only (as indicated by the "wo" flag), and read and write (as indicated by the "rw" flag). The flash file system has an asterisk preceding it, which indicates the current default file system. The bootable Cisco IOS Software is located in flash; therefore, the pound symbol (#) that is appended to the flash listing indicates a bootable disk. This table contains some commonly used URL prefixes for Cisco network devices:

bootflash: : Bootflash memory
flash: : Flash memory. This prefix is available on all platforms. For platforms that do not have a device named flash, the flash: prefix is aliased to slot0. Therefore, the flash: prefix can be used to refer to the main flash memory storage area on all platforms.
flh: : Flash load helper log files
ftp: : FTP network server
nvram: : NVRAM
rcp: : The RCP network server
slot0: : The first Personal Computer Memory Card International Association (PCMCIA) flash memory card
slot1: : The second PCMCIA flash memory card
system: : Contains the system memory, including the current running configuration
tftp: : TFTP network server
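Using this URL convention, a file operation can name both file systems in a single command; a sketch (the server address and filename are assumed for illustration):

```
! Back up the running configuration to a TFTP server using Cisco IFS URL syntax
Router#copy system:running-config tftp://192.168.1.10/router-confg
```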
Step 4 Create the destination file to receive the upload, if required. This step is dependent on the network server operating system. The show flash command is an important tool to gather information about the router memory and image file. The show flash command can determine the following information:
Total amount of flash memory on the router Amount of flash memory that is available Names of all the files that are stored in the flash memory
The Cisco IOS image file is based on a special naming convention. The name for the Cisco IOS image file contains multiple parts, each with a specific meaning. It is important that you understand this naming convention when upgrading and selecting a Cisco IOS Software image.
The first part (c1841) identifies the platform on which the image runs. In this example, the platform is a Cisco 1841 router.

The second part (ipbase) specifies the feature set. In this case, "ipbase" refers to the basic IP internetworking image. Other feature set possibilities include the following:

o i : Designates the IP feature set.
o j : Designates the enterprise feature set (all protocols).
o s : Designates a Plus feature set (extra queuing, manipulation, or translations).
o 56i : Designates 56-bit IP Security (IPsec) Data Encryption Standard (DES) encryption.
o 3 : Designates the firewall or intrusion detection system (IDS).
o k2 : Designates the 3DES IPsec encryption (168 bit).

The third part (mz) indicates where the image runs and if the file is compressed. In this example, "mz" indicates that the file runs from RAM and is compressed.

The fourth part (12.3-14.T7) is the version number.

The final part (bin) is the file extension. This extension indicates that this file is a binary executable file.
The Cisco IOS Software naming conventions, field meaning, image content, and other details are subject to change.
A good practice for maintaining system availability is to ensure that you always have backup copies of the startup configuration files and Cisco IOS image files. The Cisco IOS Software copy command is used to move configuration files from one component or device to another, such as RAM, NVRAM, or a TFTP server. A software backup image file is created by copying the image file from a router to a network TFTP server. To copy the current system image file from the router to the network TFTP server, use the following command in the privileged EXEC mode:
Router#copy flash tftp
The copy flash tftp command requires that you enter the IP address of the remote host and the name of the source and destination system image files. The exclamation points (!) indicate the copying process from the flash memory of the router to the TFTP server. Each exclamation point means that one User Datagram Protocol (UDP) segment has successfully transferred. Before updating the flash memory with a new Cisco IOS image, you should back up the current Cisco IOS image to a TFTP server. Backing up provides a fallback in case there is only sufficient space to store one image in the flash memory. Upgrading a system to a newer software version requires loading a different system image file on the router. Use the following command to download the new image from the network TFTP server:
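A hedged sample of the resulting dialog (the server address and image filename are illustrative, not from the text):

```
Router#copy flash tftp
Source filename []? c1841-ipbase-mz.123-14.T7.bin
Address or name of remote host []? 192.168.1.10
Destination filename [c1841-ipbase-mz.123-14.T7.bin]?
!!!!!!!!!!!!!!!!
```

Each exclamation point in the output represents one successfully transferred UDP segment.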
Router#copy tftp flash
This command prompts you for the IP address of the remote host and the name of the source and destination system image files. Enter the appropriate filename of the update image just as it appears on the server. After these entries are confirmed, the "erase flash" prompt appears. Erasing flash memory makes room for the new image. Erase flash memory if there is not sufficient flash memory for more than one Cisco IOS image. If no free flash memory is available, the erase routine is required before new files can be copied. The system informs you of these conditions and prompts for a response. Each exclamation point means that one UDP segment has successfully transferred. Note : Make sure that the Cisco IOS image that is loaded is appropriate for the router platform. If the wrong Cisco IOS image is loaded, the router could be made unbootable, requiring ROM monitor intervention.
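A session might look like the following sketch. The server address, filenames, and amount of output are illustrative only, not taken from a real device, and the exact prompts vary by Cisco IOS release:

```
RouterX# copy tftp flash
Address or name of remote host []? 10.1.1.1
Source filename []? c1841-ipbase-mz.123-14.T7.bin
Destination filename [c1841-ipbase-mz.123-14.T7.bin]?
Accessing tftp://10.1.1.1/c1841-ipbase-mz.123-14.T7.bin...
Erase flash: before copying? [confirm]
Loading c1841-ipbase-mz.123-14.T7.bin from 10.1.1.1: !!!!!!!!!!!!!!!!
[OK]
```

Each exclamation point represents one successfully transferred UDP segment, as described above.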
The running configuration is stored in RAM. The startup configuration is stored in NVRAM.
You can copy configuration files from the router to a file server using FTP, RCP, or TFTP. For example, you can copy configuration files to back up a current configuration file to a server before changing its contents, therefore allowing the original configuration file to be restored from the server. The protocol that is used depends on which type of server is used. You can copy configuration files from a TFTP, RCP, or FTP server to the running configuration in RAM or to the startup configuration file in NVRAM of the router for one of the following reasons:
o To restore a backed-up configuration file.
o To use the configuration file for another router. For example, you may add another router to the network and want it to have a similar configuration to the original router. By copying the file to the network server and editing it to reflect the configuration requirements of the new router, you can save time by not recreating the entire file.
o To load the same configuration commands onto all of the routers in the network so that all of the routers have similar configurations.
For example, the copy running-config tftp command copies the running configuration in RAM to a TFTP server. Use the copy running-config startup-config command after a configuration change is made in RAM and must be saved to the startup configuration file in NVRAM. Similarly, copy the startup configuration file in NVRAM back into RAM with the copy startup-config running-config command. Notice that you can abbreviate the commands (for example, copy run start).
Similar commands exist for copying between a TFTP server and either NVRAM or RAM. The following examples show common copy command usage. The examples list two methods to accomplish the same tasks. The first example is a simple syntax and the second example provides a more explicit syntax.
Copy the running configuration from RAM to the startup configuration in NVRAM, overwriting the existing file:
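A sketch of the two equivalent forms, the simple syntax and the explicit Cisco IFS syntax:

```
RouterX# copy running-config startup-config
RouterX# copy system:running-config nvram:startup-config
```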
Copy the running configuration from RAM to a remote location, overwriting the existing file:
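A sketch of the two equivalent forms; the server address and destination filename are hypothetical:

```
RouterX# copy running-config tftp
RouterX# copy system:running-config tftp://10.1.1.1/routerx-confg
```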
Copy a configuration from a remote source to the running configuration, merging the new content with the existing file:
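A sketch of the two equivalent forms; the server address and source filename are hypothetical:

```
RouterX# copy tftp running-config
RouterX# copy tftp://10.1.1.1/routerx-confg system:running-config
```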
Copy a configuration from a remote source to the startup configuration, overwriting the existing file:
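A sketch of the two equivalent forms; the server address and source filename are hypothetical:

```
RouterX# copy tftp startup-config
RouterX# copy tftp://10.1.1.1/routerx-confg nvram:startup-config
```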
Use the configure terminal command to interactively create configurations in RAM from the console or Remote Terminal. Use the erase startup-config command to delete the saved startup configuration file in NVRAM. Note : When a configuration is copied into RAM from any source, the configuration merges with, or overlays, any existing configuration in RAM, rather than overwriting it. New configuration parameters are added, and changes to existing parameters overwrite the old parameters. Configuration commands that exist in RAM, for which there is no corresponding command in NVRAM, remain unaffected. Copying the running configuration from RAM into the startup configuration file in NVRAM will overwrite the startup configuration file in NVRAM. With Cisco IOS Release 12.0 and later, commands that are used to copy and transfer configuration and system files are changed to integrate the Cisco IFS specifications. Refer to Cisco.com for details.
You can use the TFTP servers to store configurations in a central place, allowing centralized management and updating. Regardless of the size of the network, there should always be a copy of the current running configuration online as a backup. The copy running-config tftp command allows the current configuration to be uploaded and saved to a TFTP server. The IP address or name of the TFTP server and the destination filename must be supplied. A series of exclamation marks in the display shows the progress of the upload. The copy tftp running-config command downloads a configuration file from the TFTP server to the running configuration of the RAM. Again, the address or name of the TFTP server and the source and destination filename must be supplied. In this case, because you are copying the file to the running configuration, the destination filename should be running-config. This process is a merge process, not an overwrite process.
show: Lists the configured parameters and their values, and provides a snapshot of problems with interfaces, media, or network performance. debug: Allows you to trace the execution of a process and to check the flow of protocol traffic for problems, protocol bugs, or misconfigurations.
The table describes the major differences between the show and debug commands:

show: Provides a static collection of information about the status of a network device, neighboring devices, and network performance. Use show commands when gathering facts for isolating problems in an internetwork, including problems with interfaces, nodes, media, servers, clients, or applications.

debug: Provides a flow of information about the traffic being seen (or not seen) on an interface, error messages that are generated by nodes on the network, protocol-specific diagnostic packets, and other useful troubleshooting data. Use debug commands when operations on the router or network must be viewed to determine if events or packets are working properly.
Use debug commands to isolate problems, not to monitor normal network operation. The following are some considerations when using debug commands:
o Be aware that debug commands may generate a large amount of data that is of little use for a specific problem. Typically, knowledge of the protocol or protocols being debugged is required to properly interpret the debug output.
o Because the high CPU overhead of debug commands can disrupt network device operation, use debug commands only when looking for specific types of traffic or problems, and when those problems have been narrowed to a likely subset of causes.
o Be aware that output formats vary with each protocol. Some debug commands generate a single line of output per packet, whereas others generate multiple lines of output per packet. Some generate large amounts of output, whereas others generate only occasional output. Some generate lines of text, and others generate information in field format.
o Use debug commands to obtain information about network traffic and router status, but use them with great care. If you are not sure about the impact of a debug command, refer to Cisco.com for details or consult with a technical support representative.
This table lists commands that are useful when working with the debug command:

service timestamps: Adds a time stamp to debug or log messages. This feature can provide valuable information about when debug events occurred and the duration of time between events.

show processes: Displays the CPU utilization for each process. This data can influence decisions about using a debug command if it indicates that the production system is already too heavily loaded to add a debug command.

undebug all: Disables all debug commands. This command can free up system resources after you finish debugging. The no debug all command can also be used to disable all debugging.

terminal monitor: Displays debug output and system error messages for the current terminal and session. When you use Telnet to connect to a device and issue a debug command, you will not see output unless this command is entered.
Because the problem condition is an abnormal situation, you may be willing to temporarily trade off switching efficiency for the opportunity to rapidly diagnose and correct the problem. To effectively use debugging tools, you must consider the following:
o The impact that a troubleshooting tool has on router performance
o The most selective and focused use of the diagnostic tool
o How to minimize the impact of troubleshooting on other processes that compete for resources on the network device
o How to stop the troubleshooting tool when diagnosis is complete so that the router can resume its most efficient switching
It is one thing to use debug commands to troubleshoot a lab network that lacks end-user application traffic. It is another thing to use debug commands on a production network that users depend on for data flow. Without proper precautions, the impact of a broadly focused debug command could make matters worse. With proper, selective, and temporary use of debug commands, you can easily obtain potentially useful information without needing a protocol analyzer or other third-party tool. Other considerations for using debug commands are as follows:
Ideally, it is best to use debug commands during periods of lower network traffic and fewer users. Debugging during these periods reduces the effect on other users. When the information you need from the debug command is interpreted and the debug process (and any other related configuration setting, if any) is stopped, the router can resume its faster switching. Problem-solving can be resumed, a better-targeted action plan can be created, and the network problem can be resolved.
All debug commands are entered in privileged EXEC mode, and most debug commands take no arguments. Caution : Do not use the debug all command, because this command can cause a system to crash. To list and see a brief description of all the debugging command options, enter the debug ? command in privileged EXEC mode. By default, the network server sends the output from debug commands and system error messages to the console. When using this default, you should monitor the debugging output using a virtual terminal connection rather than the console port. To redirect debugging output, you should use the logging command options within configuration mode. Possible destinations include the console, vty, internal buffer, and UNIX hosts running a syslog server. The syslog format is compatible with 4.3 Berkeley Software Distribution (4.3 BSD) UNIX and its derivatives. Caution : It is important to turn off debugging when you have finished troubleshooting.
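A minimal sketch of these precautions, assuming a current Cisco IOS release (the buffer size is an arbitrary example):

```
! Time-stamp debug output and buffer it instead of flooding the console
RouterX(config)# service timestamps debug datetime msec
RouterX(config)# logging buffered 16384
RouterX(config)# exit
! Run a narrowly focused debug, then turn all debugging off
RouterX# debug ip icmp
RouterX# undebug all
```

The buffered output can then be reviewed at your convenience with the show logging command.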
o User EXEC: Allows access to only a limited number of basic monitoring commands.
o Privileged EXEC: Allows access to all device commands, such as those used for configuration and management, and can be password-protected to allow only authorized users to access the device.
o Interface: Supports commands that configure operations on a per-interface basis
o Subinterface: Supports commands that configure multiple virtual interfaces on a single physical interface
o Controller: Supports commands that configure controllers (for example, E1 and T1 controllers)
o Line: Supports commands that configure the operation of a terminal line; for example, the console or the vty ports
o Router: Supports commands that configure the parameters for one of the routing protocols
If you enter the exit command, the router backs out one level, eventually logging out. In general, you enter the exit command from one of the specific configuration modes to return to global configuration mode. Press Ctrl-Z or enter the end command to leave configuration mode completely and return to the privileged EXEC mode. Commands that affect the entire device are called global commands. The hostname and enable secret commands are examples of global commands. Commands that point to or indicate a process or interface that will be configured are called major commands. When entered, major commands cause the CLI to enter a specific configuration mode. Major commands have no effect unless a subcommand that supplies the configuration entry is immediately entered. For example, the major command interface serial 0 has no effect unless a subcommand is used. The subcommand configures a selected interface. The following are examples of some major commands and subcommands that go with them: Major command:
RouterX(config)#interface serial 0
Subcommand:
RouterX(config-if)#shutdown
Major command:
RouterX(config-if)#line console 0
Subcommand:
RouterX(config-line)#password cisco
Major command:
RouterX(config-line)#router rip
Subcommand:
RouterX(config-router)#network 10.0.0.0
Notice that entering a major command switches from one configuration mode to another. Note: You do not need to return to global configuration mode before entering another configuration mode.
Word help: Displays a list of commands or keywords that start with a specific character or characters. Enter the ? command to get word help for a list of commands that begin with a particular character sequence. Enter the character sequence and follow it immediately by the question mark. Do not include a space before the question mark. The router displays a list of commands that begin with the characters you entered. For example, enter sh? to get a list of commands that begin with the character sequence "sh."
Note: You can use context-sensitive help to get a list of available commands. This functionality can be used when you are unsure of the name for a command or if you want to see if the Cisco IOS CLI supports a particular command in a particular mode. For example, to list the commands available at the user EXEC level, type ? at the Router > prompt.
Command syntax help: Used to determine which options, keywords, or arguments are matched with a specific command. Enter the ? command to get command syntax help for completing a command. Enter a question mark in place of a keyword or argument. Include a space before the question mark. The network device then displays a list of available command options. "<cr>" represents a carriage return.
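For example, entering a question mark after a partial command displays the arguments that the command expects. The following transcript is illustrative; the exact help text varies by Cisco IOS release:

```
RouterX# clock set ?
  hh:mm:ss  Current Time
```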
If you submit a command by pressing the Enter key and the interpreter cannot understand the command, it will provide feedback describing what is wrong with the command. There are three different types of error messages:
o Ambiguous command: The Cisco IOS Software returns an error message because not enough characters were entered for the command interpreter to recognize a unique command.
o Incomplete command: The Cisco IOS Software returns an error message to indicate that required keywords or arguments were left off the end of the command.
o Incorrect command: The Cisco IOS Software returns a caret (^) underneath the command to indicate the point at which the command interpreter cannot decipher the command.
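These error messages look similar to the following illustrative transcript (the exact wording can vary by Cisco IOS release):

```
RouterX# show con
% Ambiguous command: "show con"
RouterX# clock set
% Incomplete command.
RouterX# clock set 19:50:00 25 6
                              ^
% Invalid input detected at '^' marker.
```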
The Cisco IOS Software buffers past command lines (by default, 10) so that entries can be recalled. The buffer is useful for re-entering commands without retyping. By using the Up Arrow and Down Arrow keys, you can view previous commands.
o Failure domains: One of the most important reasons to implement an effective network design is to minimize the extent of problems when they occur. When Layer 2 and Layer 3 boundaries are not clearly defined, failure in one network area can have a far-reaching effect.
o Broadcast domains: Broadcasts exist in every network. Many applications and network operations require broadcasts to function properly; therefore, it is not possible to eliminate them completely. In the same way that avoiding failure domains involves clearly defining boundaries, broadcast domains should also have clear boundaries. They should also include an optimal number of devices to minimize the negative impact of broadcasts.
o Large amount of unknown MAC unicast traffic: Cisco Catalyst switches limit unicast frame forwarding to ports that are associated with the specific unicast address recorded in the MAC address table of the switch. However, when there is no entry corresponding to the frame's destination MAC address, this unicast frame, as is the case with broadcast frames, will be sent to all forwarding ports within the respective VLAN except the port where the frame originally arrived. This behavior is called "unknown MAC unicast flooding." Because this type of flooding causes excessive traffic on switch ports, network interface cards (NICs) must contend with a larger number of frames on the wire. When data is propagated on a wire for which it was not intended, security could be compromised.
o Multicast traffic on ports where not intended: IP multicast is a technique that allows IP traffic to be propagated from one source to a multicast group that is identified by a single MAC destination group address or a single IP and MAC destination group-address pair. Like unicast flooding and broadcasting, multicast frames are flooded out of all of the switch ports within the respective VLAN. A proper design allows for the containment of multicast frames while allowing them to be functional.
o Difficulty in management and support: A poorly designed network may be disorganized, poorly documented, and lack easily identified traffic flows, which can make support, maintenance, and problem resolution time-consuming and arduous.
o Possible security vulnerabilities: A switched network that has been designed with little attention to security requirements at the access layer can compromise the integrity of the entire network.
A poorly designed network always has a negative impact and becomes a support and cost burden for any organization.
VLAN Overview
Network performance can affect productivity in an organization and its reputation for delivering as promised. VLANs contribute to network performance by separating large broadcast domains into smaller segments. A VLAN allows a network administrator to create logical groups of network devices. These devices act as if they were on their own independent network, even if they share a common infrastructure with other VLANs. A VLAN is a logical broadcast domain that can span multiple physical LAN segments. Within the switched internetwork, VLANs provide segmentation and organizational flexibility. You can design a VLAN structure that lets you group stations that are segmented logically by functions, project teams, and applications without regard to the physical location of the users. VLANs allow you to implement access and security policies to particular groups of users. You can assign each switch port to only one VLAN, which adds a layer of security (if the port is operating as an access port). Ports in the same VLAN share broadcasts, whereas ports in different VLANs do not share broadcasts. Containing broadcasts within a VLAN improves the overall performance of the network. A VLAN can exist on a single switch or span multiple switches. VLANs can include stations in a single building or multiple-building infrastructures. VLANs can also connect across WANs. A process of forwarding network traffic from one VLAN to another VLAN using a router is called inter-VLAN routing. VLANs are associated with unique IP subnets on the network. This subnet configuration facilitates the routing process in a multi-VLAN environment. When using a router to facilitate inter-VLAN routing, the router interfaces can be connected to separate VLANs. Devices on those VLANs send traffic through the router to reach other VLANs.
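One common way to connect router interfaces to separate VLANs over a single physical link is to use 802.1Q subinterfaces (often called "router-on-a-stick"). The following is a sketch only; the VLAN numbers and IP addresses are hypothetical:

```
RouterX(config)# interface FastEthernet0/0.10
RouterX(config-subif)# encapsulation dot1Q 10
RouterX(config-subif)# ip address 10.1.10.1 255.255.255.0
RouterX(config-subif)# interface FastEthernet0/0.20
RouterX(config-subif)# encapsulation dot1Q 20
RouterX(config-subif)# ip address 10.1.20.1 255.255.255.0
```

Hosts in each VLAN then use the corresponding subinterface address as their default gateway.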
o Ease of management and troubleshooting: A hierarchical addressing scheme groups network addresses contiguously. Because a hierarchical IP addressing scheme makes problem components easier to locate, network management and troubleshooting are more efficient.
o Fewer errors: Orderly network address assignment can minimize errors and duplicate address assignments.
o Reduced routing table entries: In a hierarchical addressing plan, routing protocols are able to perform route summarization, allowing a single routing table entry to represent a collection of IP network numbers. Route summarization makes routing table entries more manageable and provides these benefits:
  o Fewer CPU cycles when recalculating a routing table or sorting through the routing table entries to find a match
  o Reduced router memory requirements
o Design the IP addressing scheme in blocks of 2^n contiguous network numbers (such as 4, 8, 16, 32, 64, and so on). These blocks of IP addresses can be assigned to the subnets in a given building distribution and access switch block. This approach lets you summarize each switch block into one large address block. For example, the four contiguous subnets 10.1.4.0/24 through 10.1.7.0/24 can be summarized as the single entry 10.1.4.0/22.
o At the building distribution layer, continue to assign network numbers contiguously out to the access layer devices.
o Have a single IP subnet correspond to a single VLAN. Each VLAN is a separate broadcast domain.
o When possible, subnet at the same binary value on all network numbers to avoid variable-length subnet masks. This approach helps minimize errors and confusion when troubleshooting or configuring new devices and segments.
Many different types of network management traffic can be present on the network. Some examples are bridge protocol data units (BPDUs), Cisco Discovery Protocol updates, Simple Network Management Protocol (SNMP) traffic, and Remote Monitoring (RMON) traffic. To make network troubleshooting easier, some designers assign a separate VLAN to carry certain types of network management traffic.
IP telephony
There are two types of IP telephony traffic: signaling information between end devices (IP phones and softswitches, such as Cisco Unified Communications Manager) and the data packets of the voice conversation itself. Designers often configure the data to and from the IP phones on a separate VLAN designated for voice traffic. Quality of service (QoS) measures can be applied to these VLANs to give high priority to voice traffic.
IP multicast
IP multicast traffic is sent from a particular source address to a multicast group that is identified by a single IP and MAC destination-group address pair. Examples of applications that generate this type of traffic are Cisco IP/TV broadcasts and imaging software that is used to quickly configure workstations and servers. Multicast traffic can produce a large amount of data streaming across the network. For example, video traffic from online training, security applications, Cisco Unified MeetingPlace, and Cisco TelePresence is proliferating on some networks. Switches must be configured to keep this traffic from flooding to devices that have not requested it. Routers must be configured to ensure that multicast traffic is forwarded to the network areas where it is requested.
Normal data
Normal data traffic is typical application traffic that is related to file and print services, email, Internet browsing, database access, and other shared network applications. Depending on the volume of each type, this data may need to be treated the same way or differently in different parts of the network. Examples of this type of traffic are Server Message Block (SMB), NetWare Core Protocol (NCP), Simple Mail Transfer Protocol (SMTP), Structured Query Language (SQL), and HTTP.
Scavenger class
Scavenger class includes all traffic with protocols or patterns that exceed their normal data flows. This type of traffic is used to protect the network from exceptional traffic flows that may be the result of malicious programs executing on end-system PCs. Scavenger class is also used for "less than best effort" traffic, such as peer-to-peer traffic.
Voice traffic has the following requirements:

o Assured bandwidth and voice quality
o Transmission priority over other types of network traffic
o Ability to be routed around congested areas on the network
o One-way overall delay of less than 150 ms across the network
If both the user PCs and the IP phones are on the same VLANs, each will try to use the available bandwidth without considering the other device. The simplest method to avoid a conflict is to use separate VLANs for IP telephony traffic and data traffic.
Some Cisco Catalyst switches offer a unique feature that is called a voice VLAN, which lets you overlay a voice topology onto a data network. You can segment phones into separate logical networks, even though the data and voice infrastructure are physically the same. The voice VLAN feature places the phones into their own VLANs without any end-user intervention. The user simply plugs the phone into the switch, and the switch provides the phone with the necessary VLAN information.

There are several advantages to using voice VLANs. You can seamlessly maintain these VLAN assignments, even if the phones move to new locations. By placing phones into their own VLANs, you gain the advantages of network segmentation and control. You can also preserve your existing IP topology for the data end stations and easily assign IP phones to different IP subnets using standards-based DHCP operation. In addition, with the phones in their own IP subnets and VLANs, you can more easily identify and troubleshoot network problems and create and enforce QoS or security policies. With the voice VLAN feature, you have all of the advantages of the physical infrastructure convergence, while maintaining separate logical topologies for voice and data terminals. This configuration creates the most effective way to manage a multiservice network.
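A typical access-port configuration for a port with an attached IP phone might look like the following sketch; the interface and VLAN numbers are hypothetical:

```
SwitchX(config)# interface FastEthernet0/4
SwitchX(config-if)# switchport mode access
SwitchX(config-if)# switchport access vlan 10
SwitchX(config-if)# switchport voice vlan 110
```

Here VLAN 10 carries the data traffic from the PC attached to the phone, and VLAN 110 carries the voice traffic from the phone itself.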
VLAN Operation
A Cisco Catalyst switch operates in a network like a traditional bridge. Each VLAN that you configure on the switch implements address learning, forwarding, filtering decisions, and loop avoidance mechanisms, as if the VLAN were a separate physical bridge. The Cisco Catalyst switch implements VLANs by restricting traffic forwarding to destination ports that are in the same VLAN as the originating ports. When a frame arrives on a switch port, the switch must retransmit the frame to only the ports that belong to the same VLAN. In essence, a VLAN that is operating on a switch limits transmission of unicast, multicast, and broadcast traffic. Traffic originating from a particular VLAN floods to only the other ports in that VLAN. A port normally carries only the traffic for the single VLAN to which it belongs. For a VLAN to span across multiple switches, a trunk is required to connect two switches. A trunk can carry traffic for multiple VLANs.
o Dynamic VLAN: Cisco Catalyst switches support dynamic VLANs using a VLAN Membership Policy Server (VMPS). Some Cisco Catalyst switches can be designated as the VMPS, or you can designate an external server. The VMPS contains a database that maps MAC addresses to VLAN assignments. When a frame arrives at a dynamic port on the Cisco Catalyst access switch, the switch queries the VMPS for the VLAN assignment, which is based on the source MAC address of the arriving frame. A dynamic port can belong to only one VLAN at a time. Multiple hosts can be active on a dynamic port only if they all belong to the same VLAN. This mode is not commonly used in today's networks and is beyond the scope of this course.
o Voice VLAN: A voice VLAN port is an access port that is attached to a Cisco IP phone. The Cisco IP phone must be configured to use one VLAN for voice traffic and another VLAN for data traffic. That data traffic is received from a device that is attached to the phone.
When Ethernet frames are placed on a trunk, they need additional information about the VLANs that they belong to. This task is accomplished by using the 802.1Q encapsulation header. IEEE 802.1Q uses an internal tagging mechanism that inserts a 4-byte tag field into the original Ethernet frame between the Source Address and Type or Length fields. Because 802.1Q alters the frame, the trunking device recomputes the frame check sequence (FCS) on the modified frame. It is the responsibility of the Ethernet switch to look at the 4-byte tag field and determine where to deliver the frame. A small part of the 4-byte tag field (3 bits, to be exact) is used to specify the priority of the frame. The details are specified in the IEEE 802.1p standard. The 802.1Q header contains the 802.1p field, so you must have 802.1Q to have 802.1p.
802.1Q Native VLAN
An 802.1Q trunk and its associated trunk ports have a native VLAN value. When configuring an 802.1Q trunk, a matching native VLAN must be defined on each end of the trunk link. 802.1Q does not tag frames for the native VLAN. Therefore, ordinary stations can read the native untagged frames but cannot read any other frame because the frames are tagged. Note: The default native VLAN is VLAN 1.
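On a switch port, the native VLAN of an 802.1Q trunk can be changed from the default of VLAN 1. The following is a sketch; VLAN 99 is an arbitrary choice, and it must match on the switch at the other end of the trunk link:

```
SwitchX(config)# interface FastEthernet0/11
SwitchX(config-if)# switchport mode trunk
SwitchX(config-if)# switchport trunk native vlan 99
```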
VTP Modes
VTP operates in one of three modes: server, transparent, or client. You can complete different tasks depending on the VTP operation mode. The characteristics of the three VTP modes are as follows:
o Server: The default VTP mode is server mode, but VLANs are not propagated over the network until a management domain name is specified or learned. When you make a change to the VLAN configuration on a VTP server, the change is propagated to all switches in the VTP domain. VTP messages are transmitted out of all the trunk connections.
o Transparent: When you make a change to the VLAN configuration in VTP transparent mode, the change affects only the local switch and does not propagate to other switches in the VTP domain. VTP transparent mode does forward VTP advertisements that it receives within the domain.
o Client: A VTP client behaves like a VTP server in that it transmits and receives VTP updates on its trunks, but you cannot create, change, or delete VLANs on a VTP client. VLANs are configured on another switch in the domain that is in server mode.
Cisco IOS VTP servers and clients save VLANs to the vlan.dat file in flash memory, causing them to retain the VLAN table and revision number. Switches that are in VTP transparent mode display the VLAN and VTP configurations in the show running-config command output because this information is also stored in the configuration text file. Caution: The erase startup-config command does not affect the vlan.dat file on Cisco IOS switches. VTP clients with a higher configuration revision number can overwrite VLANs on a VTP server in the same VTP domain. Delete the vlan.dat file and reload the switch to clear the VTP and VLAN information. See documentation for your specific switch model to determine how to delete the vlan.dat file.
VTP Operation
VTP advertisements are flooded throughout the management domain. VTP advertisements are sent every five minutes or whenever there is a change in VLAN configurations. Advertisements are transmitted (untagged) over the native VLAN (VLAN 1 by default) using a multicast frame. A configuration revision number is included in each VTP advertisement. A higher configuration revision number indicates that the VLAN information being advertised is more current than the stored information. One of the most critical components of VTP is the configuration revision number. Each time a VTP server modifies its VLAN information, the VTP server increments the configuration revision number by one. The server then sends out a VTP advertisement with the new configuration revision number. If the configuration revision number being advertised is higher than the number stored on the other switches in the VTP domain, the switches overwrite their VLAN configurations with the new information that is being advertised. The configuration revision number in VTP transparent mode is always zero.
Note: In the overwrite process, if the VTP server deleted all of the VLANs and had the higher revision number, the other devices in the VTP domain would also delete their VLANs. A device that receives VTP advertisements must check various parameters before incorporating the received VLAN information. First, the management domain name and password in the advertisement must match those values that are configured on the local switch. Next, if the configuration revision number indicates that the message was created after the configuration currently in use, the switch incorporates the advertised VLAN information. On many Cisco Catalyst switches, you can change the VTP domain to another name and then change it back to reset the configuration revision number, or alternatively, change the mode to transparent and then back to the previous setting.
VTP Pruning
VTP pruning prevents unnecessary flooding of broadcast information from one VLAN across all trunks in a VTP domain. VTP pruning permits switches to negotiate which VLANs are assigned to ports at the other end of a trunk. At the same time, the VLANs that are not assigned to ports on the remote switch are pruned. By default, a trunk connection carries traffic for all VLANs in the VTP management domain. It is not unusual for some switches in an enterprise network to not have ports that are configured in each VLAN. VTP pruning increases available bandwidth by restricting flooded traffic to those trunk links that the traffic must use to access the appropriate network devices. You can enable pruning only on Cisco Catalyst switches that are configured for VTP servers, and not on clients. The default setting for VTP pruning depends on the model of the Cisco Catalyst Switch.
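VTP pruning is enabled with a single global configuration command on a VTP server, and the change propagates through the management domain. A minimal sketch:

```
Switch(config)# vtp pruning
Switch(config)# end
Switch# show vtp status
! "VTP Pruning Mode" should now show Enabled
```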
Step 2: Enable trunking on the interswitch connections.
Step 3: Create the VLANs on a VTP server and have those VLANs propagate to the other switches.
Step 4: Assign switch ports to a VLAN using static or dynamic assignment.
Step 5: Execute adds, moves, and changes of ports.
Step 6: Save the VLAN configuration.
Note: The steps do not necessarily need to be performed in the order listed above. Depending on the configuration, some steps might be optional. It may also take up to 5 minutes for the changes to be reflected in the network.
VTP Configuration
When creating VLANs, you must decide whether to use VTP in your network. With VTP, you can make configuration changes on one or more switches, and those changes are automatically communicated to all other switches in the same VTP domain. Default VTP configuration values depend on the switch model and the software version. The default values for Cisco Catalyst switches are as follows:
VTP domain name: <Null>
VTP mode: Server
VTP password: None
VTP pruning: Enabled/Disabled (model specific)
VTP version: Version 1
Configuration revision: 0
The VTP domain name can be specified or learned. By default, the domain name is <Null>. You can set a password for the VTP management domain. However, if you do not assign the same password to each switch in the domain, VTP will not function properly. Note: The domain name cannot be reset to <Null> unless the VLAN database is deleted. VTP pruning eligibility is one VLAN parameter that the VTP protocol advertises. Enabling or disabling VTP pruning on a VTP server propagates the change throughout the management domain. Use the vtp global configuration command to modify the VTP configuration, including the domain name, mode, password, pruning, interface, and storage filename. Use the no form of this command to remove the filename or to return to the default settings. When the VTP mode is transparent, you can save the VTP configuration in the switch configuration file by entering the copy running-config startup-config privileged EXEC command.
Note: The domain name and password are case sensitive. You cannot remove a domain name after it is assigned; you can only reassign it.
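A minimal VTP configuration sketch (the domain name ICND and the password cisco are example values, not requirements):

```
Switch(config)# vtp domain ICND
Switch(config)# vtp mode server
Switch(config)# vtp password cisco
Switch(config)# end
Switch# show vtp status
Switch# show vtp password
```

Remember that both the domain name and the password are case sensitive and must match on every switch in the domain.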
Ensure that the native VLAN for an 802.1Q trunk is the same on both ends of the trunk link. If they are different, spanning-tree loops might result. Ensure that native VLAN frames are untagged.
Note: If 802.1Q trunk configuration is not the same on both ends, the Cisco IOS will report error messages. Use the switchport mode interface configuration command to set a Fast Ethernet or Gigabit Ethernet port to trunk mode. Many Cisco Catalyst switches support the Dynamic Trunking Protocol (DTP), which manages automatic trunk negotiation. Dynamic Trunking Protocol (DTP) is a Cisco proprietary protocol. Switches from other vendors do not support DTP. DTP is automatically enabled on a switch port when certain trunking modes are configured on the switch port. DTP manages trunk negotiation only if the port on the other switch is configured in a trunk mode that supports DTP. There are four options for the switchport mode command:
Trunk: Configures the port into permanent 802.1Q trunk mode and negotiates with the connected device to convert the link to trunk mode.
Access: Disables port trunk mode and negotiates with the connected device to convert the link to nontrunk.
Dynamic desirable: Triggers the port to negotiate the link from nontrunk to trunk mode. The port negotiates to a trunk port if the connected device is in the trunk, desirable, or auto state. Otherwise, the port becomes a nontrunk port.
Dynamic auto: Enables a port to become a trunk only if the connected device has the state set to trunk or desirable. Otherwise, the port becomes a nontrunk port.
The switchport nonegotiate interface command specifies that DTP negotiation packets are not sent on the Layer 2 interface. The switch does not engage in DTP negotiation on this interface. This command is valid only when the interface switchport mode is access or trunk (configured by using the switchport mode access or the switchport mode trunk interface configuration command). This command returns an error if you attempt to execute it in dynamic
(auto or desirable) mode. Use the no form of this command to return to the default setting. When you configure a port with the switchport nonegotiate command, the port trunks only if the other end of the link is specifically set to trunk. The switchport nonegotiate command does not form a trunk link with ports in either dynamic desirable or dynamic auto mode. To verify a trunk configuration on many Cisco Catalyst switches, use the show interfaces switchport and show interfaces trunk commands. These two commands display the trunk parameters and VLAN information of the port.
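The trunking commands described above can be combined as follows (interface FastEthernet0/11 is an arbitrary example):

```
Switch(config)# interface FastEthernet0/11
Switch(config-if)# switchport mode trunk
! Suppress DTP negotiation frames on this port
Switch(config-if)# switchport nonegotiate
Switch(config-if)# end
Switch# show interfaces FastEthernet0/11 switchport
Switch# show interfaces FastEthernet0/11 trunk
```

Because switchport nonegotiate disables DTP, the port at the other end of the link must be manually configured as a trunk for the trunk to form.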
Note: When the switch is in VTP transparent mode and the enhanced software image is installed, you can also create extended-range VLANs (VLANs with IDs from 1006 to 4094). These VLANs are not saved in the VLAN database. Configurations for VIDs 1 to 1005 are written to the vlan.dat file (VLAN database), which is stored in flash memory. You can display the VLANs by entering the show vlan privileged EXEC command. To add an Ethernet VLAN, you must specify at least a VLAN number. If no name is entered for the VLAN, the default name is the word VLAN appended with the VLAN number padded to four digits; for example, VLAN0004 is the default name for VLAN 4. After you configure the VLAN, validate the parameters for that VLAN. Use the show vlan id vlan-number or the show vlan name vlan-name command to display information about a particular VLAN. Use the show vlan command to display information on all configured VLANs. The show vlan command displays the switch ports that are assigned to each VLAN. Other VLAN parameters that are displayed include the type, the security association ID (SAID), the maximum transmission unit (MTU), the STP state, and other parameters that are used for Token Ring or FDDI VLANs. The default type is Ethernet. The SAID is used for FDDI trunks.
Alternatively, use the show interfaces switchport privileged EXEC command to display the VLAN information for a particular interface.
Note: Before deleting a VLAN, be sure to first reassign all member ports to a different VLAN. Any ports that are not moved to an active VLAN are unable to communicate with other stations after you delete the VLAN, because the switchport access vlan vlan-id command is still present on those ports even though the VLAN no longer exists in the VLAN database. To reassign a port to the default VLAN (VLAN 1), use the no switchport access vlan command in interface configuration mode.
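Creating a VLAN and assigning an access port might look like this (the VLAN number, name, and interface are illustrative):

```
Switch(config)# vlan 4
Switch(config-vlan)# name Engineering
Switch(config-vlan)# exit
Switch(config)# interface FastEthernet0/2
Switch(config-if)# switchport mode access
Switch(config-if)# switchport access vlan 4
Switch(config-if)# end
Switch# show vlan id 4
```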
A number of technologies are available to interconnect devices in a switched network. The interconnection technology that you select depends on the amount of traffic the link must carry. You will likely use a mixture of copper and fiber-optic cabling that is based on distances, noise immunity requirements, security, and other business requirements. Some of the more common technologies are as follows:
Fast Ethernet (100 Mb/s Ethernet): This LAN specification (IEEE 802.3u) operates at 100 Mb/s over twisted-pair cable. The Fast Ethernet standard raises the speed of Ethernet from 10 to 100 Mb/s with only minimal changes to the existing cable structure. A switch that has ports that function at both 10 and 100 Mb/s can move frames between ports without Layer 2 protocol translation.
Gigabit Ethernet: An extension of the IEEE 802.3 Ethernet standard, Gigabit Ethernet increases speed tenfold over that of Fast Ethernet, to 1000 Mb/s, or 1 Gb/s. IEEE 802.3z specifies operations over fiber optics, and IEEE 802.3ab specifies operations over twisted-pair cable.
10-Gigabit Ethernet: 10-Gigabit Ethernet (IEEE 802.3ae) was formally ratified as an 802.3 Ethernet standard in June 2002. This technology is the next step for scaling the performance and functionality of an enterprise. With the deployment of Gigabit Ethernet becoming more common, 10-Gigabit Ethernet will become typical for uplinks.
EtherChannel: This feature provides link aggregation of bandwidth over Layer 2 links between two switches. EtherChannel bundles individual Ethernet ports into a single logical port or link. All interfaces in each EtherChannel bundle must be configured with the same speed, duplex, and VLAN membership.
There are four objectives in the design of any high-performance network: security, availability, scalability, and manageability. This list describes the equipment and cabling decisions that you should consider when altering the infrastructure:
Replace hubs and legacy switches with new switches at the building access layer. Select equipment with the appropriate port density at the access layer to support the current user base while preparing for growth. Some designers begin by planning for about 30 percent growth. If budget allows, use modular access switches to accommodate future expansion. Consider planning for the support of inline power and quality of service (QoS) if you think you might implement IP telephony in the future. When building the cable plant from the building access layer to the building distribution layer devices, remember that these links will carry aggregate traffic from the end nodes at the access
layer to the building distribution switches. Ensure that these links have adequate bandwidth capability. You can use EtherChannel bundles here to add bandwidth as necessary. At the distribution layer, select switches with adequate performance to manage the load of the current access layer. In addition, plan some port density for adding trunks later to support new access layer devices. The devices at this layer should be multilayer (Layer 2 and Layer 3) switches that support routing between the workgroup VLANs and network resources. Depending on the size of the network, the building distribution layer devices can be fixed chassis or modular. Plan for redundancy in the chassis and in the connections to the access and core layers, as business objectives dictate. The campus backbone equipment must support high-speed data communications between other submodules. Be sure to size the backbone for scalability and plan for redundancy.
EtherChannel Overview
The increasing deployment of switched Ethernet to the desktop can be attributed to the proliferation of bandwidth-intensive applications. Any-to-any communications of new applications, such as video to the desktop, interactive messaging, and collaborative whiteboarding, are increasing the need for scalable bandwidth. At the same time, mission-critical applications call for resilient network designs. With the wide deployment of faster switched Ethernet links in the campus, organizations must either aggregate their existing resources or upgrade the speed of their uplinks and core to scale performance across the network backbone. EtherChannel is a technology that Cisco originally developed as a LAN switch-to-switch technique of inverse multiplexing multiple Fast Ethernet or Gigabit Ethernet switch ports into one logical channel. The benefit of EtherChannel is that it is effectively cheaper than higher-speed media while using existing switch ports. The following are advantages of EtherChannel:
It allows for the creation of a very high-bandwidth logical link.
It load-shares among the physical links involved.
It provides automatic failover.
It simplifies subsequent logical configuration (configuration is per logical link instead of per physical link).
EtherChannel technology provides bandwidth scalability within the campus by providing the following aggregate bandwidth:
Fast Ethernet: Up to 800 Mb/s
Gigabit Ethernet: Up to 8 Gb/s
10-Gigabit Ethernet: Up to 80 Gb/s
Each of these connection speeds can vary in amounts equal to the speed of the links used (100 Mb/s, 1 Gb/s, or 10 Gb/s). Even in the most bandwidth-demanding situations, EtherChannel technology helps aggregate traffic and keeps oversubscription to a minimum, while providing effective link-resiliency mechanisms.
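As a sketch, two Fast Ethernet ports can be bundled into one EtherChannel as shown below (the interface range and channel-group number are example values; the mode keyword depends on the negotiation protocol in use, with desirable indicating PAgP negotiation):

```
Switch(config)# interface range FastEthernet0/1 - 2
! Bundle both ports into logical channel group 1 using PAgP
Switch(config-if-range)# channel-group 1 mode desirable
Switch(config-if-range)# end
Switch# show etherchannel summary
```

All ports in the bundle must share the same speed, duplex, and VLAN configuration, or the bundle will not form.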
Redundant designs can eliminate the possibility of a single point of failure causing a loss of function for the entire switched or bridged network. At the same time, you must consider problems that redundant designs can cause. Some of the problems that can occur with redundant links and devices in switched or bridged networks are as follows:
Broadcast storms: Without some loop-avoidance process in operation, each switch or bridge floods broadcasts endlessly. This situation is commonly called a broadcast storm.
Multiple frame transmission: Multiple copies of unicast frames may be delivered to destination stations. Many protocols expect to receive only a single copy of each transmission. Multiple copies of the same frame can cause unrecoverable errors.
MAC database instability: Instability in the content of the MAC address table results from copies of the same frame being received on different ports of the switch. Data forwarding can be impaired when the switch consumes the resources that are coping with instability in the MAC address table.
Layer 2 LAN protocols, such as Ethernet, lack a mechanism to recognize and eliminate endlessly looping frames. Some Layer 3 protocols implement a Time to Live (TTL) mechanism that limits the number of times a Layer 3 networking device can retransmit a packet. Lacking such a mechanism, Layer 2 devices continue to retransmit looping traffic indefinitely. A loop-avoidance mechanism is required to solve each of these problems. The Spanning Tree Protocol (STP) was developed to address these issues.
Switches process broadcast and multicast frames differently from how they process unicast frames. Because broadcast and multicast frames may be of interest to all stations, the switch or bridge normally floods broadcast and multicast frames to all ports except the originating port. A switch or bridge never learns a broadcast or multicast address because broadcast and multicast addresses never appear as the source address of a frame. This flooding of broadcast and multicast frames can potentially cause a problem in a redundant switched topology.
Broadcast Storms
A broadcast storm occurs when each switch on a redundant network floods broadcast frames endlessly. Switches flood broadcast frames to all ports except the port on which the frame was received. A broadcast storm can disrupt normal traffic flow. It can also disrupt all the devices on the switched or bridged network because the CPU in each device on the segment must process the broadcast. In this way, a broadcast storm can lock up the PCs and servers that are trying to process all the broadcast frames.
A loop-avoidance mechanism eliminates this problem by preventing one of the redundant interfaces from transmitting frames during normal operation, therefore breaking the loop.
Multiple Frame Transmissions
In a redundant topology, multiple copies of the same frame can arrive at the intended host, potentially causing problems with the receiving protocol. Most protocols are not designed to recognize or cope with duplicate transmissions. In general, protocols that make use of a sequence-numbering mechanism assume that many transmissions have failed and that the sequence number has recycled. Other protocols attempt to hand the duplicate transmission to the appropriate upper-layer protocol (ULP), with unpredictable results.
MAC Database Instability
MAC database instability results when multiple copies of a frame arrive on different ports of a switch. This subtopic describes how MAC database instability can arise and the problems that can result.
STP allows Layer 2 devices to communicate with each other to discover physical loops in the network. STP forces certain ports into a standby state so that they do not listen to, forward, or flood data frames. The overall effect is that there is only one path to each network segment that is active at any time. In other words, STP creates a tree structure of loopfree leaves and branches that spans the entire Layer 2 network. If there is a problem with connectivity to any of the segments within the network, STP reestablishes connectivity by automatically activating a previously inactive path, if one exists.
Spanning-Tree Operation
STP executes an algorithm called the spanning-tree algorithm. The spanning-tree algorithm chooses a reference point, called a root bridge, and then determines the available paths to that reference point. If more than one path exists, the spanning-tree algorithm picks the best path and blocks the rest. STP performs three steps to provide a loop-free logical network topology:
1. Elects one root bridge: STP has a process to elect a root bridge. Only one bridge can act as the root bridge in a given network. On the root bridge, all ports are designated ports. Designated ports are normally in the forwarding state. When in the forwarding state, a port can send and receive traffic.
2. Selects the root port on each non-root bridge: STP establishes one root port on each non-root bridge. The root port is the lowest-cost path from the non-root bridge to the root bridge. Root ports are normally in the forwarding state. Spanning-tree path cost is an accumulated cost that is calculated based on the bandwidth of the links.
3. Selects the designated port on each segment: On each segment, STP establishes one designated port. The designated port is selected on the bridge that has the lowest-cost path to the root bridge. Designated ports are normally in the forwarding state, forwarding traffic for the segment. Nondesignated ports are normally in the blocking state to logically break the loop topology. When a port is in the blocking state, it is not forwarding traffic but can still receive traffic.
Blocking the redundant paths is critical to preventing loops on the network. The physical paths still exist to provide redundancy, but these paths are disabled to prevent the loops from occurring. If a blocked path is ever needed to compensate for a network cable or switch failure, STP recalculates the paths and unblocks the necessary ports to allow the redundant path to become active. Switches and bridges running the spanning-tree algorithm exchange configuration messages with other switches and bridges at regular intervals (every 2 seconds by default). Switches and bridges exchange these messages using a frame that is called the bridge protocol data unit (BPDU).
One of the pieces of information that is included in the BPDU is the bridge ID (BID). STP requires that a unique BID be assigned to each switch or bridge. Typically, the BID comprises a priority value (2 bytes) and the bridge MAC address (6 bytes). The default priority, in accordance with IEEE 802.1D, is 32,768 (1000 0000 0000 0000 in binary, or 0x8000 in hex format), which is the midrange value. The root bridge is the bridge with the lowest BID. Note: A Cisco Catalyst switch uses one of its MAC addresses from a pool of MAC addresses that are assigned either to the backplane or to the supervisor module. The selection depends on the switch model. There are five STP port states: blocking, listening, learning, forwarding, and disabled.
Note: The disabled state is not strictly part of STP; a port enters this state when a network administrator manually disables it.
When STP is enabled, every bridge in the network goes through the blocking state and the transitory states of listening and learning when powering up. If properly configured, the ports then stabilize to the forwarding or blocking state. Forwarding ports provide the lowest-cost path to the root bridge. During a topology change, a port temporarily implements the listening and learning states. All bridge ports initially start in the blocking state, from which they listen for BPDUs. When the bridge first boots, it functions like the root bridge and transitions to the listening state. An absence of BPDUs for a certain period is called the maximum age (max_age), which has a default of 20 seconds. If a port is in the blocking state and does not receive a new BPDU within the max_age, the bridge transitions from the blocking state to the listening state. When a port is in the transitional listening state, it is able to send and receive BPDUs to determine the active topology. At this point, the switch is not passing any user data. During the listening state, the bridge performs these three steps:
Selects the root bridge
Selects the root ports on the non-root bridges
Selects the designated ports on each segment
The time that it takes for a port to transition from the listening state to the learning state or from the learning state to the forwarding state is called the forward delay. The forward delay has a default value of 15 seconds. The learning state reduces the amount of flooding required when data forwarding begins. If a port is still a designated or root port at the end of the learning state, the port transitions to the forwarding state. In the forwarding state, a port is capable of sending and receiving user data. Ports that are not the designated or root ports transition back to the blocking state. A port normally transitions from the blocking state to the forwarding state in 30 to 50 seconds. You can tune the spanning-tree timers to adjust the timing, but these timers are meant to be set to the default value. The default values are put in place to give the network enough time to gather all the correct information about the network topology. Note: For switch ports that connect only to end-user stations (not to another switch or bridge), you should enable a Cisco Catalyst switch feature called PortFast. A switch port that has PortFast enabled automatically transitions from the blocking state to the forwarding state when it first comes up. This behavior is acceptable because no loops can be formed through the port, since no other switches or bridges are connected to it.
PVST+ (Per-VLAN Spanning Tree Plus): Based on the 802.1D standard and includes Cisco proprietary extensions, such as BackboneFast, UplinkFast, and PortFast.
PVRST+ (Rapid PVST+): Based on the 802.1w standard and has faster convergence than 802.1D.
MSTP (802.1s, Multiple STP): Combines the best aspects of PVST+ and the IEEE standards.
Describing PortFast
PortFast is a Cisco technology. When a switch port configured with PortFast is configured as an access port, that port transitions from the blocking to the forwarding state immediately, bypassing the typical STP listening and learning states. You can use PortFast on access ports, which are connected to a single workstation or to a server, to allow those devices to connect to the network immediately rather than waiting for spanning tree to converge. In a valid PortFast configuration, configuration BPDUs should never be received, because receiving a BPDU indicates that another bridge or switch is connected to the port, potentially causing a spanning-tree loop. Cisco switches support a feature called BPDU Guard, which, when enabled, puts the port in an errdisable state (effectively shut down) on receipt of a BPDU. Note: Because the purpose of PortFast is to minimize the time that access ports, connecting to user equipment and servers, must wait for spanning tree to converge, it should be used only on access ports. If you enable PortFast on a port connecting to another switch, you risk creating a spanning-tree loop. The spanning-tree portfast interface command configures PortFast on an interface. The spanning-tree portfast default global configuration command enables PortFast on all nontrunking interfaces. The show running-config interface fa0/1 command shows the configuration on the FastEthernet 0/1 interface, including the spanning-tree portfast configuration, if it exists.
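On many Catalyst switches, PortFast and BPDU Guard are enabled per interface as shown below (the interface is an example; exact behavior may vary by platform and software version):

```
Switch(config)# interface FastEthernet0/1
Switch(config-if)# switchport mode access
! Skip the listening and learning states on this access port
Switch(config-if)# spanning-tree portfast
! Err-disable the port if a BPDU is ever received
Switch(config-if)# spanning-tree bpduguard enable
Switch(config-if)# end
Switch# show running-config interface FastEthernet0/1
```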
In a network running a single spanning-tree instance, these statements are true:
No load sharing is possible; one uplink must block for all VLANs.
The CPU is spared; only one instance of spanning tree must be computed.
PVST+ defines a spanning-tree protocol that has several spanning-tree instances running for the network (one instance of STP per VLAN). In a network running several spanning-tree instances, these statements are true:
Optimum load sharing can result.
One spanning-tree instance for each VLAN maintained can mean a considerable waste of CPU cycles for all the switches in the network (in addition to the bandwidth used for each instance to send its own BPDUs). This would be problematic only if a large number of VLANs were configured.
PVST+ Operation
In a Cisco PVST+ environment, you can tune the spanning-tree parameters so that half of the VLANs forward on each uplink trunk. The correct configuration must be applied to the network: it must define a different root bridge for each half of the VLANs. Providing different STP root switches per VLAN creates a more redundant network. Spanning-tree operation requires that each switch have a unique BID. In the original 802.1D standard, the BID was composed of the bridge priority and the MAC address of the switch, and a CST represented all VLANs. PVST+ requires that a separate instance of spanning tree run for each VLAN, so the BID field must carry VLAN ID (VID) information. This functionality is accomplished by reusing a portion of the Priority field as the extended system ID to carry a VID. To accommodate the extended system ID, the original 802.1D 16-bit bridge priority field is split into two fields. The BID includes the following fields:
Bridge priority: A 4-bit field that is still used to carry the bridge priority. Because only the four most significant bits of the original 16-bit field are available, the priority is conveyed in discrete values in increments of 4096 rather than increments of 1. In binary: priority 0 = [0000|<sys-id-ext>], priority 4096 = [0001|<sys-id-ext>], and so on. Increments of 1 could be used only if the complete 16-bit field were available. The default priority, in accordance with IEEE 802.1D, is 32,768, which is the midrange value.
Extended system ID: A 12-bit field carrying, in this case, the VID for PVST+.
MAC address: A 6-byte field with the MAC address of a single switch.
By virtue of the MAC address, a BID is always unique. When the priority and extended system ID are prepended to the switch MAC address, each VLAN on the switch can be represented by a unique BID. For example, the default BID for VLAN 2 is: Bridge ID Priority 32770 (priority 32768 sys-id-ext 2). If no priority has been configured, every switch has the same default priority, and the election of the root for each VLAN is based on the MAC address. This method is essentially a random means of selecting the root bridge. For this reason, it is recommended to assign a lower priority to the switch that should serve as the root bridge.
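To make a specific switch the root for a VLAN deterministically, you can set its priority directly; the value must be a multiple of 4096 (VLAN 2 and priority 24576 are example values):

```
Switch(config)# spanning-tree vlan 2 priority 24576
Switch(config)# end
! The displayed Bridge ID Priority is 24576 plus the sys-id-ext (24578 for VLAN 2)
Switch# show spanning-tree vlan 2
```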
The RSTP (802.1w) standard uses CST, which assumes only one spanning-tree instance for the entire switched network, regardless of the number of VLANs. Per-VLAN Rapid Spanning Tree Plus (PVRST+) defines a spanning-tree protocol that has one instance of RSTP per VLAN.
Spanning-Tree Recalculation
When there is a topology change because of a bridge or link failure, spanning tree adjusts the network topology to ensure connectivity by placing blocked ports in the forwarding state.
STP Convergence
Convergence in STP is a state in which all the switch and bridge ports have transitioned to either the forwarding or the blocking state. Convergence is necessary for normal network operations. For a switched or bridged network, a key issue is the amount of time that is required for convergence when the network topology changes.
Fast convergence is a desirable network feature because it reduces the amount of time that bridge and switch ports are in transitional states and not sending any user traffic. The normal convergence time is 30 to 50 seconds for 802.1D STP.
Root: A port that is elected for the spanning-tree topology.
Designated: A port that is elected for every switched LAN segment.
Alternate: An alternate path to the root bridge that is different from the path that the root port takes.
Backup: A backup path that provides a redundant (but less desirable) connection to a segment that is already connected to another port on the same switch (which is the designated port for that segment). Backup ports can exist only where two ports are connected in a loopback by a point-to-point link, or on a bridge with two or more connections to a shared LAN segment.
Disabled: A port that has no role within the operation of spanning tree.
Root and designated port roles include the port in the active topology. Alternate and backup port roles exclude the port from the active topology.
Configuring RSTP
To implement PVRST+, perform these steps:
Step 1: Enable PVRST+.
Step 2: Designate and configure a switch to be the root bridge (optional).
Step 3: Designate and configure a switch to be the secondary (backup) root bridge (optional).
Step 4: Verify the configuration.
The spanning-tree mode command is used to configure PVRST+ on Cisco Catalyst switches. The show spanning-tree vlan 2 command is used to verify the spanning-tree configuration for VLAN 2. The debug spanning-tree pvst+ command is used to display PVRST+ event debug messages.
If all the switches in a network are enabled with the default spanning-tree settings, the switch with the lowest MAC address becomes the root bridge. However, the default root bridge might not be the ideal root bridge, because of traffic patterns, the number of forwarding interfaces, or link types. Before you configure STP, select a switch to be the root of the spanning tree. This switch does not need to be the most powerful switch, but it should be the most centralized switch on the network. All data flow across the network occurs from the perspective of this switch. The distribution layer switches often serve as the spanning-tree root because these switches typically do not connect to end stations. In addition, moves and changes within the network are less likely to affect these switches. Increasing the priority (lowering the numerical value) of the preferred switch makes it the root bridge. The change in priority forces spanning tree to perform a recalculation, which reflects the new topology with the preferred switch as the root. The switch with the lowest BID becomes the root bridge for spanning tree for a VLAN. You can use specific configuration commands to help determine which switch will become the root bridge.
A Cisco Catalyst switch running PVST+ or PVRST+ maintains an instance of spanning tree for each active VLAN that is configured on the switch. A unique BID is associated with each instance. For each VLAN, the switch with the lowest BID becomes the root bridge for that VLAN. Whenever the bridge priority changes, the BID also changes. This change results in the recomputation of the root bridge for the VLAN. To configure a switch to become the root bridge for VLAN 1, use the command spanning-tree vlan 1 root primary. With this command, the switch checks the priority of the root switches for VLAN 1. Because of the extended system ID support, the switch sets its own priority to 24576 for the specified VLAN if this value will cause the switch to become the root for this VLAN. If there is another switch for the specified VLAN that has a priority lower than 24576, then the switch on which you are configuring the spanning-tree vlan vlan-ID root primary command sets its own priority for the specified VLAN to 4096 less than the lowest switch priority. Note: Spanning-tree commands take effect immediately, so network traffic is interrupted while reconfiguration occurs. A secondary root is a switch that can become the root bridge for a VLAN if the primary root bridge fails. To configure a switch as the secondary root bridge for the VLAN, use the command spanning-tree vlan 1 root secondary. With this command, the switch priority is modified from the default value of 32768 to 28672. Assuming that the other bridges in the VLAN retain their default STP priority, this switch becomes the root bridge if the primary root bridge fails. You can execute this command on more than one switch to configure multiple backup root bridges. To verify the root and secondary root bridges, use the show spanning-tree command or related commands.
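The PVRST+ implementation steps above can be sketched as follows (VLAN 1 is an example; run the secondary command on the intended backup root switch):

```
! On the intended primary root switch:
Switch(config)# spanning-tree mode rapid-pvst
Switch(config)# spanning-tree vlan 1 root primary
Switch(config)# end
Switch# show spanning-tree vlan 1

! On the intended backup root switch:
Switch(config)# spanning-tree mode rapid-pvst
Switch(config)# spanning-tree vlan 1 root secondary
```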
using a router to facilitate inter-VLAN routing, the router interfaces can be connected to separate VLANs. Devices on those VLANs send traffic through the router to reach other VLANs. However, not all inter-VLAN routing configurations require multiple physical interfaces. Some router software permits configuring router interfaces as trunk links. Trunk links open up new possibilities for inter-VLAN routing. A "Router-on-a-Stick" is a type of router configuration in which a single physical interface routes traffic between multiple VLANs on a network.
Multilayer Switching
Some switches can perform Layer 3 functions, replacing the need for dedicated routers to perform basic routing on a network. Multilayer switches (MLS) are capable of performing inter-VLAN routing. Traditionally, a switch makes forwarding decisions by looking at the Layer 2 header, whereas a router makes forwarding decisions by looking at the Layer 3 header. A multilayer switch combines the functionality of a switch and a router into one device. It switches traffic when the source and destination are in the same VLAN and routes traffic when the source and destination are in different VLANs (that is, on different IP subnets). To enable a multilayer switch to perform routing functions, the VLAN interfaces on the switch need to be properly configured. You must use the appropriate IP addresses that match the subnet that the VLAN is associated with on the network. The multilayer switch must also have IP routing enabled. Multilayer switching is beyond the scope of this course. To support 802.1Q trunking, you must subdivide the physical Fast Ethernet interface of the router into multiple, logical, addressable interfaces, one per VLAN. The resulting logical interfaces are called subinterfaces. Without this subdivision, you would have to dedicate a separate physical interface to each VLAN.
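A minimal router-on-a-stick sketch follows; the interface, VLAN IDs, and IP addresses are assumptions for illustration:

```
Router(config)# interface FastEthernet0/0
Router(config-if)# no shutdown
! One subinterface per VLAN, with 802.1Q tagging on each
Router(config-if)# interface FastEthernet0/0.10
Router(config-subif)# encapsulation dot1q 10
Router(config-subif)# ip address 192.168.10.1 255.255.255.0
Router(config-subif)# interface FastEthernet0/0.20
Router(config-subif)# encapsulation dot1q 20
Router(config-subif)# ip address 192.168.20.1 255.255.255.0
```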
The show vlans command displays the information about the Cisco IOS VLAN subinterfaces. The show ip route command displays the current state of the routing table.
Network security vulnerabilities include loss of privacy, data theft, impersonation, and loss of data integrity. You should take basic security measures on every network to mitigate adverse effects of user negligence or acts of malicious intent. Recommended practices dictate that you should follow these general steps whenever placing new equipment in service: Step 1: Follow established organizational security policies. Step 2: Secure switch protocols.
Provides a process for auditing existing network security
Provides a general security framework to implement network security
Defines behaviors toward electronic data that are not allowed
Determines which tools and procedures are needed for the organization
Communicates consensus among a group of key decision makers and defines responsibilities of users and administrators
Defines a process for managing network security incidents
Enables an enterprise-wide, all-site security implementation and enforcement plan
Secure physical access to the switch: Physical security is the first step in securing the devices. Securing physical access provides anti-theft protection and prevents intruders from using local ports to access and change the device configuration. The console port is typically used to configure the devices, so physical access security is important. Securing physical access alone is not enough, because networking devices allow remote access as well. Set system passwords: Setting passwords is one of the most basic rules of security. Passwords prevent unauthorized access to the device configuration when physical access to the device is possible. Secure remote access:
Use SSH when possible: SSH gives the same type of access as Telnet. Additionally, SSH provides secure communication between the SSH client and the SSH server: the communication is encrypted, whereas Telnet traffic is transmitted in plaintext. Not all devices support the SSH protocol; where it is supported, SSH version 2 is recommended. Secure access via Telnet: Cisco networking devices support remote access for manipulating the device configuration. Telnet is a common software tool, and securing access via Telnet (for example, with username and password checking) is an important recommended practice for switch security. Disable HTTP, enable HTTPS: Cisco IOS Software provides an integrated HTTP server for management. To avoid unauthorized access, it is highly recommended that you disable the HTTP server. If HTTP access to the switch is required, use basic ACLs to control the access, or use HTTPS, the secure version of HTTP, in which the transmitted traffic is encrypted rather than sent in plaintext. Configure system warning banners: The Cisco IOS command set allows you to configure messages that anyone logging onto the switch sees. Configuring a warning banner to display before login is a convenient and effective way to reinforce security and general usage policies. Disable unneeded services: Cisco devices implement multiple default TCP and UDP servers to facilitate management and integration into existing environments. For most installations, these services are typically not needed and can be disabled to reduce overall security exposure. Use syslog if available: Cisco devices offer a logging facility to assist and simplify troubleshooting and security investigations.
You can assign an encrypted form of the enable password, called the enable secret password, by entering the enable secret command with the desired password at the global configuration mode prompt. If the enable secret password is configured, it is used instead of the enable password, not in addition to it. Because the enable secret command simply implements a Message Digest 5 (MD5) hash on the configured password, that password remains vulnerable to dictionary attacks. Therefore, apply standard practices in selecting a feasible password. Try to pick passwords that contain letters and numbers in addition to special characters. For example, choose "$pecia1$" instead of "specials," in which each "s" has been replaced with "$," and the "l" has been replaced with "1" (one). You can perform all configuration options directly from the console or through remote access. Access to the console requires local physical access to the device. Once physical access is available, the only security that is left is a password-protected console port. Without this protection, a malicious user could compromise the switch configuration. To secure the console port from unauthorized access, set a password on the console port. Use the line console 0 command to switch from global configuration mode to line configuration mode for console 0, where a password can be applied. Use the password command in line configuration mode to set the line password on the console port. Console 0 is the console port on Cisco switches and routers. When a console password is defined, the device does not use it until login checking is enabled on the line. To ensure that a user on the console port is required to enter the line password, use the login command. The virtual terminal or vty ports on a Cisco switch allow you to access the device remotely. Any user with network access to the switch can establish a vty remote terminal connection, because no physical access is needed to reach the vty ports. 
You can perform all configuration options using the vty ports. To secure the vty ports from unauthorized access, you can set a vty line password that is required before access is granted. To set the password on the vty ports, you must be in line configuration mode. There can be many vty ports available on a Cisco switch. Multiple ports permit more than one administrator to connect to and manage the switch. To secure all vty lines, make sure that a password is set and login is enforced on all lines. Leaving some lines unsecured compromises security and allows unauthorized users access to the switch. Note: If the switch has more vty lines available, adjust the range to secure them all. For example, a Cisco 2960 has lines 0 through 15 available. Password protection is not the only protection for Telnet access. Minimum recommended steps for securing Telnet access are as follows:
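The console and vty password configuration described above can be sketched as follows (the password strings are placeholders):

```
! Protect the console port
Switch(config)# line console 0
Switch(config-line)# password <console-password>
Switch(config-line)# login

! Protect all vty lines (a Cisco 2960 has lines 0 through 15)
Switch(config)# line vty 0 15
Switch(config-line)# password <vty-password>
Switch(config-line)# login
```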
Configure a line password for all configured vty lines. Apply a basic ACL for in-band access to all vty lines.
If supported by the installed Cisco IOS Software, use the Secure Shell (SSH) protocol instead of Telnet to access the device remotely.
By default in the Cisco IOS CLI, all passwords, except for the enable secret password, are stored in cleartext format within the startup-config and running-config. As a common practice, passwords should be encrypted and not stored in cleartext format. The Cisco IOS command service password-encryption enables password encryption. When the service password-encryption command is entered from global configuration mode, all system passwords in cleartext form are immediately converted to encrypted passwords. Note: Before you complete the switch configuration, remember to save the running configuration file to the startup configuration. There are two choices for remotely accessing a vty on a Cisco switch: Telnet and SSH. Telnet is a popular protocol that is used for terminal access. Most current operating systems come with a Telnet client built in, and historically Telnet was the way to manage Cisco switches from a vty. However, Telnet is an insecure way of accessing a network device, because it sends all communications across the network in cleartext. Because of the security concerns of the Telnet protocol, SSH has become the preferred protocol for remotely accessing virtual terminal lines on a Cisco device. SSH gives the same type of access as Telnet with the added benefit of security: communication between the SSH client and the SSH server is encrypted. Several versions of SSH exist. Cisco devices currently support both SSHv1 and SSHv2 (recommended). The ip ssh version 2 command forces all SSH sessions to use SSH version 2; the default on SSH-2-capable devices is to support both versions. The SSH feature has an SSH server and an SSH integrated client, which are applications that run on the switch. You can use any SSH client running on a PC, or the Cisco SSH client running on the switch, to connect to a switch running the SSH server. 
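A typical SSH setup on a Catalyst switch follows this pattern; the hostname, domain name, and key size shown are assumptions:

```
! SSH requires a hostname, a domain name, and an RSA key pair
Switch(config)# hostname SW1
SW1(config)# ip domain-name example.com
SW1(config)# crypto key generate rsa
! When prompted, choose a modulus of at least 1024 bits

! Require SSH version 2 only
SW1(config)# ip ssh version 2
```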
If you want to prevent non-SSH connections like Telnet, add the transport input ssh command in line configuration mode to limit the switch to SSH connections only. If the transport input all line configuration command is used, all transport protocols are permitted. The SSH access to the switch as well as Telnet access is allowed. Telnet is the default vty-supported protocol on Cisco switches and there is no need to configure Telnet support on a Cisco switch. If you have switched the transport protocol on the vty lines to permit only SSH, you need to enable the Telnet protocol to permit Telnet access manually. If you need to re-enable the Telnet protocol on a Cisco 2960 switch, use the transport input telnet line configuration command. Although Cisco IOS Software provides an integrated HTTP server for management, it is highly recommended that you disable it to minimize overall exposure. If HTTP access to the switch is
required, use basic ACLs to permit access only from trusted subnets. The no ip http server global configuration command disables the HTTP server. The Cisco IOS command set includes a feature that allows you to configure messages that anyone logging onto the switch sees. These messages are called login banners and message of the day (MOTD) banners. For both legal and administrative purposes, configuring a system warning banner to display before login is a convenient and effective way to reinforce security and general usage policies. By clearly stating the ownership, usage, access, and protection policies before a login, you provide better support for potential prosecution. You can define a customized banner to be displayed before the username and password login prompts by using the banner login command in global configuration mode. Enclose the banner text in quotation marks, or use a delimiter different from any character appearing in the banner string. The MOTD banner displays on all connected terminals at login and is useful for sending messages that affect all network users (such as impending device maintenance). The MOTD banner displays before the login banner if it is configured. By default, Cisco devices implement multiple TCP and User Datagram Protocol (UDP) servers to facilitate management and integration into existing environments. For most installations, these services are typically not required, and disabling them can greatly reduce overall security exposure. Any network device that has UDP, TCP, BOOTP, or finger services should be protected by a firewall or have the services disabled to protect against denial of service attacks. TCP and UDP small servers are servers (daemons, in Unix parlance) that run in the router and are useful for diagnostics. The TCP and UDP small servers are enabled by default on Cisco IOS Software Version 11.2 and earlier. They may be disabled using the commands no service tcp-small-servers and no service udp-small-servers. 
They are disabled by default on Cisco IOS Software Versions 11.3 and later. It is recommended that you do not enable these services unless absolutely necessary. These services could be exploited indirectly to gain information about the target system, or directly, as is the case with the fraggle attack, which uses UDP echo. The finger service allows remote users to view the output equivalent to the show users command. When IP finger is configured, the router responds to a telnet a.b.c.d finger command from a remote host by immediately displaying the output of the show users command and then closing the connection. As with all minor services, the finger service should be disabled on your system if you do not need it in your network. The no service finger command disables the finger service. Usually, the service config command is used with the boot host or boot network command, although it can be used without them as well. The service config command enables the router to automatically configure the system from the file that is specified by the boot host or boot network command. To disable autoloading of configuration files from a network server, use the no service config global configuration command.
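The banner and service hardening described above can be sketched as follows; the banner text and delimiter are placeholders:

```
! Display a warning before the login prompt; $ is the delimiter here
Switch(config)# banner login $ Authorized access only. $

! Disable the integrated HTTP server
Switch(config)# no ip http server

! Disable minor TCP/UDP services, finger, and config autoloading
Switch(config)# no service tcp-small-servers
Switch(config)# no service udp-small-servers
Switch(config)# no service finger
Switch(config)# no service config
```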
To assist and simplify both problem troubleshooting and security investigations, monitor the switch subsystem information that is received from the logging facility. To render the on-system logging useful, increase the default buffer size by using the logging buffered size command. Do not make the buffer size too large, because the router could run out of memory for other tasks. You can use the show memory command to view the free processor memory on the router; the buffer size should not approach the maximum available size that this command reports. The default logging buffered command resets the buffer size to the default for the platform. To re-enable message logging after it has been disabled, use the logging on global configuration command. Additionally, the logging process logs messages to the console and the various destinations after the processes that generated them have completed. When the logging process is disabled, messages are displayed on the console as soon as they are produced, often appearing in the middle of command output. Use the logging console command to send system logging (syslog) messages to all available tty lines and limit messages based on severity. The console keyword indicates all available tty lines: a console terminal attached to the router's tty line, a dialup modem connection, or a printer.
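For example, the logging buffer might be sized as follows; the 16384-byte size is an assumption, so check free memory first:

```
! Check free processor memory before enlarging the buffer
Switch# show memory

! Enable logging and set the local buffer to 16384 bytes
Switch(config)# logging on
Switch(config)# logging buffered 16384

! Review the buffered messages
Switch# show logging
```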
Cisco Discovery Protocol: Cisco Discovery Protocol does not reveal security-specific information, but it is possible for an attacker to exploit this information in a reconnaissance attack, whereby an attacker learns device and IP address information to launch other types of attacks. You should follow two practical guidelines for Cisco Discovery Protocol:
If Cisco Discovery Protocol is not required, or if the device is located in an unsecured environment, disable Cisco Discovery Protocol globally on the device. If Cisco Discovery Protocol is required, disable Cisco Discovery Protocol on a per interface basis on ports that are connected to untrusted networks. Because Cisco Discovery Protocol is a link-level protocol, it is not transient across a network, unless a Layer 2 tunneling mechanism is in place. Limit it to run only between trusted devices, and disable it everywhere else. However, Cisco Discovery Protocol is required on any access port where you are attaching a Cisco IP phone to establish a trust relationship.
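The two Cisco Discovery Protocol guidelines can be sketched as follows (the interface number is an assumption):

```
! Disable Cisco Discovery Protocol globally
Switch(config)# no cdp run

! Or disable it only on ports facing untrusted networks
Switch(config)# interface FastEthernet0/24
Switch(config-if)# no cdp enable
```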
Secure the spanning-tree topology: It is important to protect the Spanning Tree Protocol (STP) process of the switches that form the infrastructure. Inadvertent or
malicious introduction of STP bridge protocol data units (BPDUs) could potentially overwhelm a device or pose a denial of service (DoS) attack. The first step in stabilizing a spanning-tree installation is to identify the intended root bridge in the design and hard-set the STP bridge priority of that bridge to an acceptable root value. Do the same for the designated backup root bridge. These actions protect against inadvertent shifts in STP, which can happen when a new switch is introduced without control. On some platforms, the BPDU guard feature may be available. If so, enable it on access ports together with the PortFast feature to protect the network from unwanted BPDU traffic injection. Upon receipt of a BPDU, the BPDU guard feature automatically disables the port.
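BPDU guard on a PortFast access port might be configured like this (the interface is illustrative):

```
Switch(config)# interface FastEthernet0/5
Switch(config-if)# switchport mode access
Switch(config-if)# spanning-tree portfast
Switch(config-if)# spanning-tree bpduguard enable

! Alternatively, enable it on all PortFast ports at once
Switch(config)# spanning-tree portfast bpduguard default
```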
Proactively configure unused router and switch ports: Execute the shutdown command on all unused ports and interfaces. Place all unused ports in a "parking-lot" VLAN, which is dedicated to grouping unused ports until they are proactively placed into service. Configure all unused ports as access ports, disallowing automatic trunk negotiation. Disable automatic negotiation of trunk capabilities: By default, Cisco Catalyst switches that are running Cisco IOS Software are configured to automatically negotiate trunking capabilities. This situation poses a serious hazard to the infrastructure, because an unsecured third-party device can be introduced to the network as a valid infrastructure component. Potential attacks include interception of traffic, redirection of traffic, DoS, and more. To avoid this risk, disable automatic negotiation of trunking and manually enable it on links that require it. Ensure that trunks use a native VLAN that is dedicated exclusively to trunk links. Physical device access: You should closely monitor physical access to the switch to avoid rogue device placement in wiring closets with direct access to switch ports. Access port-based security: Specific measures should be taken on every access port of any switch that is placed into service. Ensure that a policy is in place outlining the configuration of unused switch ports in addition to those ports that are in use. For ports that will connect to end devices, you can use a macro called switchport host. When you execute this command on a specific switch port, the switch port mode is set to access, spanning-tree PortFast is enabled, and channel grouping is disabled. Note: The switchport host macro disables EtherChannel and trunking, and enables STP PortFast.
The switchport host command is a macro that executes several configuration commands. You cannot revoke the effect of the switchport host command by using the no form of the command because it does not exist. To return an interface to its default configuration, use the default interface interface-id global configuration command. This command returns all interface configurations to their defaults.
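For example, applying and later reverting the switchport host macro might look like this (the interface number is an assumption):

```
! Apply the macro: access mode, PortFast on, channeling off
Switch(config)# interface FastEthernet0/10
Switch(config-if)# switchport host

! There is no "no switchport host"; reset the interface instead
Switch(config)# default interface FastEthernet0/10
```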
Dynamic: You specify how many different MAC addresses are permitted to use a port at one time. You use the dynamic approach when you care only about how many rather than which specific MAC addresses are permitted. Depending on how you configure the switch, these dynamically learned addresses age out after a certain period, and new addresses are learned, up to the maximum that you have defined. Static: You statically configure which specific MAC addresses are permitted to use a port. Any source MAC addresses that you do not specifically permit are not allowed to source frames to the port. A combination of static and dynamic learning: You can choose to specify some of the permitted MAC addresses and let the switch learn the rest of the permitted MAC addresses. For example, if the number of MAC addresses is limited to four, this number becomes the maximum number. You statically configure two MAC addresses and the switch dynamically learns the next two MAC addresses that it receives on that port. Port access is limited to these four addresses, two static and two dynamically learned addresses. The two statically configured addresses do not age out, but the two dynamically learned addresses can, depending on the switch configuration. "Sticky learning": When this feature is configured on an interface, the interface converts dynamically learned addresses to "sticky secure" addresses. This feature adds the dynamically learned addresses to the running-configuration as if they were statically configured using the switchport port-security mac-address command. "Sticky learned" addresses do not age out.
Scenario
Process
Imagine five individuals whose laptops are allowed to connect to a specific switch port when they visit an area of the building. You want to restrict switch port access to the MAC addresses of those five laptops and allow no addresses to be learned dynamically on that port.
Step 1: Port security is configured to allow only five connections on that port, and one entry is configured for each of the five allowed MAC addresses.
Step 2: This step populates the MAC address table with five entries for that port and allows no additional entries to be learned dynamically.
Step 3: Allowed frames are processed.
Step 4: When frames arrive on the switch port, their source MAC address is checked against the MAC address table. If the source MAC address matches an entry in the table for that port, the frames are forwarded to the switch to be processed like any other frames on the switch.
Step 5: New addresses are not allowed to create new MAC address table entries.
Step 6: When frames with an unauthorized MAC address arrive on the port, the switch determines that the address is not in the current MAC address table and does not create a dynamic entry for that new MAC address.
Step 7: The switch takes action in response to unauthorized frames.
The switch disallows access to the port and takes one of these configuration-dependent actions: (a) the entire switch port can be shut down; (b) access can be denied for only that MAC address and a log error message is generated; (c) access can be denied for that MAC address but no log message is generated. Note: You cannot apply port security to trunk ports because addresses on trunk links might change frequently. Implementations of port security vary depending on which Cisco Catalyst switch is in use. Check documentation to determine whether and how particular hardware supports this feature.
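The five-laptop scenario above might be configured as follows; the interface and MAC addresses are placeholders:

```
Switch(config)# interface FastEthernet0/7
Switch(config-if)# switchport mode access
Switch(config-if)# switchport port-security
Switch(config-if)# switchport port-security maximum 5
! One static entry per permitted laptop (placeholder addresses;
! repeat for the remaining three laptops)
Switch(config-if)# switchport port-security mac-address 0000.1111.0001
Switch(config-if)# switchport port-security mac-address 0000.1111.0002
Switch(config-if)# switchport port-security violation shutdown
```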
The IEEE 802.1X standard defines a port-based access control and authentication protocol that restricts unauthorized workstations from connecting to a LAN through publicly accessible switch ports. The authentication server authenticates each workstation that is connected to a switch port before making available any services offered by the switch or the LAN. Until the workstation is authenticated, 802.1X access control allows only Extensible Authentication Protocol over LAN (EAPOL), CDP and STP traffic through the port to which the workstation is connected. After authentication succeeds, normal traffic can pass through the port. With 802.1X port-based authentication, the devices in the network have specific roles, as follows:
Client: The device (workstation) that requests access to the LAN and switch services, and responds to requests from the switch. The workstation must be running 802.1X-compliant client software, such as that offered in the Microsoft Windows XP operating system. The client running the 802.1X software is referred to as the 'supplicant' in the IEEE 802.1X specification. Authentication server: Performs the actual authentication of the client. The authentication server validates the identity of the client and notifies the switch whether the client is authorized to access the LAN and switch services. Because the switch acts as the proxy, the authentication service is transparent to the client. The RADIUS security system with Extensible Authentication Protocol (EAP) extensions is the only supported authentication server.
Switch (also called the authenticator): Controls physical access to the network based on the authentication status of the client. The switch acts as an intermediary (proxy) between the client (supplicant) and the authentication server. The switch requests identifying information from the client, verifies that information with the authentication server, and relays a response to the client. The switch uses a RADIUS software agent, which is responsible for encapsulating and decapsulating the EAP frames and interacting with the authentication server.
The switch port state determines whether the client is granted access to the network. The port starts in the unauthorized state. While in this state, the port disallows all ingress and egress traffic except for 802.1X protocol packets. When a client is successfully authenticated, the port transitions to the authorized state, allowing all traffic for the client to flow normally. If the switch requests the client identity (authenticator initiation) and the client does not support 802.1X, the port remains in the unauthorized state. The client is not granted access to the network. An 802.1X-enabled client connects to a port and initiates the authentication process (supplicant initiation) by sending an EAPOL-start frame to a switch. If the switch is not running 802.1X, no response is received. After that unsuccessful authentication process, the client begins sending frames as if the port is in the authorized state. If the client is successfully authenticated (receives an 'ACCEPT' frame from the authentication server), the port state changes to authorized. After successful authentication all frames from the authenticated client are allowed through the port. If the authentication fails, the port remains in the unauthorized state, but authentication can be retried. If the authentication server cannot be reached, the switch can retransmit the request. If no response is received from the server after the specified number of attempts, authentication fails, and network access is not granted. When a client logs out, it sends an EAPOL-logout message, causing the switch port to transition to the unauthorized state.
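A basic 802.1X configuration on the authenticator switch follows this outline; the RADIUS server address and key are assumptions, and the exact command syntax varies by Cisco IOS release:

```
! Enable AAA and point 802.1X authentication at RADIUS
Switch(config)# aaa new-model
Switch(config)# radius-server host 10.1.1.50 key <shared-secret>
Switch(config)# aaa authentication dot1x default group radius
Switch(config)# dot1x system-auth-control

! Require authentication on an access port
Switch(config)# interface FastEthernet0/3
Switch(config-if)# switchport mode access
Switch(config-if)# dot1x port-control auto
```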
planned, and their impact on the organization can be severe. If an unexpected network outage occurs, administrators must be able to troubleshoot and bring the network back into complete production. There are many ways to troubleshoot a switch. Developing a troubleshooting approach or test plan works much better than using a random approach. Here are some suggestions to make troubleshooting more effective:
Become familiar with normal switch operation: The Cisco website (Cisco.com) provides useful technical information about Cisco switches and how they operate. The configuration guides in particular are very helpful. For more complex situations, have an accurate physical and logical map of the network on hand: A physical map shows how the devices and cables are connected. A logical map shows the segments (VLANs) that exist in the network and which routers provide routing services to these segments. A spanning-tree map is also very useful for troubleshooting complex issues. Because a switch can create different segments by implementing VLANs, the physical connections alone do not provide all of the necessary information. You must know how the switches are configured to determine which segments (VLANs) exist and how they are logically connected. Always keep the documentation up to date, and remember to update it when changes are made. Have a plan. Some problems and solutions are obvious; others are not. The symptoms that you see in the network can be the result of problems in another area or layer. Before drawing conclusions, try to verify in a structured way what is working and what is not. Because networks can be complex, it is helpful to isolate possible problem domains. One way to isolate domains is to use the Open Systems Interconnection (OSI) seven-layer model. For example: check the physical connections involved (Layer 1), check connectivity issues within the VLAN (Layer 2), check connectivity issues across different VLANs (Layer 3), and so on. Assuming that the switch is configured correctly, many of the problems you encounter will be related to physical layer issues (physical ports and cabling). Do not assume that a component is working; first, verify that it is. If a PC is not able to log into a server across the network, it could be due to any number of things. 
Do not assume that basic components are working correctly without testing them first; someone else may have altered their configurations and not informed you of the change. It usually takes only a minute to verify the basics (for example, that the ports are correctly connected and active), and it can save you much valuable time.
trunking and EtherChannel. However, do not overlook the other ports; they are also significant because they connect users in the network.
Hardware Issues
Hardware issues can be one of the reasons that a switch has connectivity problems. To rule out hardware issues, verify the following:
The port status for both ports that are involved in the link: Ensure that neither is shut down. The administrator may have manually shut down one or both ports, or the switch software may have shut down one of the ports because of a configuration error. If one side is shut down and the other is not, the status on the enabled side will be "notconnected" (because it does not sense a neighbor on the other side of the wire). The status on the shutdown side will say something like "disabled" or "err-disabled" (depending on what actually shut down the port). The link will not be active unless both ports are enabled. The type of cable that is used for the connection: Use at least a Category 5 cable for 100-Mb/s connections, and a Category 5e cable for 1 Gb/s or faster. Use a straight-through RJ-45 cable for end stations, routers, or servers to connect to a switch or hub. Use an Ethernet crossover cable for switch-to-switch or hub-to-switch connections. The maximum distance for Ethernet or Fast Ethernet copper wires is 100 m (109.36 yd). A software process disables a port: A solid orange light on the port indicates that the switch software has shut down the port, either by way of the user interface or by internal processes such as spanning-tree bridge protocol data unit (BPDU) guard, root guard, or port security violations.
Configuration Issues
Configuration of the port is another possible reason that the port is experiencing connectivity problems. Some of the common configuration issues are as follows:
The VLAN to which the port belongs has disappeared. Each port in a switch belongs to a VLAN. If the VLAN is deleted, then the port becomes inactive. Autonegotiation is enabled: Autonegotiation is an optional function of the Fast Ethernet (IEEE 802.3u) standard. It enables devices to automatically exchange information about speed and duplex abilities over a link. Do not use autonegotiation for ports that support network infrastructure devices, such as switches, routers, or other nontransient end systems like servers and printers. Autonegotiating speed and duplex settings is the typical default behavior on switch ports that have this capability. However, you should always configure ports that connect to fixed devices for the correct speed and duplex setting, rather than allowing them to autonegotiate these settings. This configuration eliminates any potential negotiation issues and ensures that you always know exactly how the ports should be operating.
The native VLAN that is configured on each end of an IEEE 802.1Q trunk must be the same. Remember that a switch receiving an untagged frame assigns the frame to the native VLAN of the trunk. If one end of the trunk is configured for native VLAN 1 and the other end is configured for native VLAN 2, a frame that is sent from VLAN 1 on one side is received on VLAN 2 on the other. VLAN 1 traffic "leaks" into the VLAN 2 segment. There is no reason that this behavior would be required, and connectivity issues will occur in the network if a native VLAN mismatch exists. Such a configuration error generates console notifications, causes control and management traffic to be misdirected, and poses a security risk.
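As a sketch of the speed and duplex advice above, hardcoding a port that connects to a fixed device might look like the following (the interface name and values are examples only, not taken from this text):

```
Switch(config)# interface FastEthernet0/1
Switch(config-if)# speed 100
Switch(config-if)# duplex full
```

With both sides of the link configured identically in this way, no negotiation takes place and the operating mode of the port is always known.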
Trunk Mode Mismatches
You should statically configure trunk links whenever possible. One of the most common configuration errors is when one trunk port is configured with trunk mode "off" and the other with trunk mode "on." This configuration error causes the trunk link to stop working. However, Cisco Catalyst switch ports run Dynamic Trunking Protocol (DTP) by default, which tries to automatically negotiate a trunk link. This Cisco proprietary protocol can determine an operational trunking mode and protocol on a switch port when it is connected to another device that is also capable of dynamic trunk negotiation.
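A minimal sketch of a statically configured 802.1Q trunk, combining the trunk mode and native VLAN points above (the interface name and VLAN number are hypothetical; some Catalyst platforms that also support ISL additionally require the switchport trunk encapsulation dot1q command before the mode can be set):

```
Switch(config)# interface FastEthernet0/24
Switch(config-if)# switchport mode trunk
Switch(config-if)# switchport nonegotiate
Switch(config-if)# switchport trunk native vlan 10
```

The switchport nonegotiate command disables DTP on the port, so the trunk no longer depends on dynamic negotiation with the neighbor. The same native VLAN must be configured on the other end of the trunk.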
VLANs and IP Subnets
Each VLAN must correspond to a unique IP subnet. Two devices in the same VLAN should have addresses in the same subnet. With intra-VLAN traffic, the sending device recognizes the destination as local and sends an Address Resolution Protocol (ARP) broadcast to discover the MAC address of the destination. Two devices in different VLANs should have addresses in different subnets. With inter-VLAN traffic, the sending device recognizes the destination as remote and sends an ARP broadcast for the MAC address of the default gateway.
You should also check to see if the list of allowed VLANs on a trunk has been updated with the current VLAN trunking requirements. In this situation, unexpected traffic or no traffic is being sent over the trunk.
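A sketch of restricting the allowed VLAN list on a trunk (interface and VLAN numbers are hypothetical):

```
Switch(config)# interface FastEthernet0/24
Switch(config-if)# switchport trunk allowed vlan 10,20,30
```

You can verify the allowed and active VLANs on each trunk with the show interfaces trunk command.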
Inter-VLAN Connectivity
Inter-VLAN connectivity issues are most often the result of user misconfiguration. For example, if you incorrectly configure router-on-a-stick or multilayer switching, packets from one VLAN may not reach another VLAN. To avoid misconfiguration and to troubleshoot efficiently, you should understand the mechanism that the Layer 3 forwarding device uses. If you are sure that the equipment is properly configured, yet hardware switching is not taking place, then a software bug or hardware malfunction may be the cause. Another type of misconfiguration that affects inter-VLAN routing is misconfiguration on end-user devices such as PCs. A common situation is a misconfigured PC default gateway.
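For reference, a minimal router-on-a-stick sketch for two VLANs might look like the following (all interface names, VLAN numbers, and addresses are hypothetical):

```
Router(config)# interface FastEthernet0/0.10
Router(config-subif)# encapsulation dot1Q 10
Router(config-subif)# ip address 192.168.10.1 255.255.255.0
Router(config-subif)# interface FastEthernet0/0.20
Router(config-subif)# encapsulation dot1Q 20
Router(config-subif)# ip address 192.168.20.1 255.255.255.0
```

In this sketch, hosts in VLAN 10 must use 192.168.10.1 as their default gateway and hosts in VLAN 20 must use 192.168.20.1; a PC pointing at the wrong gateway address is exactly the kind of end-device misconfiguration described above.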
Troubleshooting VTP
Unable to See VLAN Details in the show vlan Command Output
VTP client and server systems require that VTP updates from other VTP servers be saved immediately without user intervention. A VLAN database was introduced into Cisco IOS Software as a method to immediately save VTP updates for VTP clients and servers. In some versions of the software, this VLAN database is in the form of a separate file in Flash, called the vlan.dat file. You can view VTP and VLAN information that is stored in the vlan.dat file for the VTP client or VTP server if you use the show vtp status command. VTP server and client mode switches do not save the entire VTP and VLAN configuration to the startup-config file in NVRAM when you use the copy running-config startup-config command on these systems. The configuration is saved in the vlan.dat file instead. This behavior does not apply to systems that run in VTP transparent mode. VTP transparent switches save the entire VTP and VLAN configuration to the startup-config file in NVRAM when you use the copy running-config startup-config command. For example, if you delete the vlan.dat file on a VTP server or client mode switch after you have configured VLANs, and then reload the switch, VTP is reset to the default settings (all user-configured VLANs are deleted). But if you delete the vlan.dat file on a VTP transparent mode switch, and then reload the switch, it retains the VTP configuration, because a transparent-mode switch also stores it in the startup-config file. You can configure normal-range VLANs (2 through 1000) when the switch is in either VTP server or transparent mode. But on the Cisco Catalyst 2960 Switch, you can configure extended-range VLANs (1025 through 4094) only on VTP-transparent switches.
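On platforms that store the file as flash:vlan.dat, resetting the VTP and VLAN configuration described above could be sketched as follows (use with care, since all user-configured VLANs are lost on a server or client mode switch):

```
Switch# delete flash:vlan.dat
Switch# reload
```

After the reload, the show vtp status command can confirm that the switch has returned to the default VTP settings.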
Cisco Catalyst Switches Do Not Exchange VTP Information
There are several reasons why VTP fails to exchange the VLAN information. Verify these items if switches that run VTP fail to exchange VLAN information:
VTP information passes only through a trunk port. Ensure that all ports that interconnect switches are configured as trunks and are actually trunking.
Ensure that the VLANs are active on all of the VTP server switches.
One of the switches must be a VTP server in the VTP domain. All VLAN changes must be made on this switch in order to have them propagated to the VTP clients.
The VTP domain name must match, and it is case-sensitive. For example, "COMPANY" and "company" are different domain names.
Ensure that no password is set between the server and client. If any password is set, ensure that the password is the same on both sides. The password is also case-sensitive.
Every switch in the VTP domain must use the same VTP version. VTP version 1 (VTPv1) and VTP version 2 (VTPv2) are not compatible on switches in the same VTP domain. Do not enable VTPv2 unless every switch in the VTP domain supports version 2.
Note: VTPv2 is disabled by default on VTPv2-capable switches. When you enable VTPv2 on a switch, every VTPv2-capable switch in the VTP domain enables version 2. You can configure the version only on switches in VTP server or transparent mode.
A switch that is in VTP transparent mode and uses VTPv2 propagates all VTP messages, regardless of the VTP domain that is listed. However, a switch running VTPv1 propagates only VTP messages that have the same VTP domain as the domain that is configured on the local switch; VTP transparent mode switches that are using VTPv1 drop VTP advertisements that are not in the same VTP domain. Extended-range VLANs are not propagated, so you must configure extended-range VLANs manually on each network device. A client does not accept the updates from a VTP server if the client already has a higher VTP configuration revision number. In addition, the client does not propagate the VTP updates to its downstream VTP neighbors if the client has a higher revision number than the one that the VTP server sends.
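The checklist items above (matching domain name, matching password, matching version) could be configured as follows on each switch; the domain name and password shown here are hypothetical examples:

```
Switch(config)# vtp domain COMPANY
Switch(config)# vtp password Secr3t
Switch(config)# vtp version 2
Switch(config)# vtp mode client
```

Both the domain name and the password are case-sensitive, so they must be entered identically on every switch in the domain. The show vtp status command verifies the resulting domain, mode, and version.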
A newly installed switch can cause problems in the network when all of the switches in the network are in the same VTP domain, and you add a switch into the network that does not have the default VTP and VLAN configuration. If the configuration revision number of the switch that you insert into the VTP domain is higher than the configuration revision number on the existing switches of the VTP domain, your recently introduced switch overwrites the VLAN database of the domain with its own VLAN database. This overwriting happens whether the switch is a VTP client or a VTP server. A VTP client can overwrite and in many cases effectively erase VLAN information on a VTP server. A typical indication that this issue has happened is when many of the ports in your network go into an inactive state but continue to be assigned to a nonexistent VLAN. To prevent this problem from occurring, always ensure that the configuration revision number of all switches that you insert into the VTP domain is lower than the configuration revision number of the switches that are already in the VTP domain.
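One common way to lower the configuration revision number of a switch before inserting it into the domain is to toggle the switch through VTP transparent mode, which resets the revision number to zero:

```
Switch(config)# vtp mode transparent
Switch(config)# vtp mode client
```

Verify with show vtp status that the configuration revision reads 0 before connecting the switch to the production network.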
Switch ports move to the inactive state when they are members of VLANs that do not exist in the VLAN database. A common problem is that all of the ports move to this inactive state after a power cycle. Generally, you see this problem when the switch is configured as a VTP client with the uplink trunk port on a VLAN other than VLAN1. Because the switch is in VTP client mode, when the switch resets, it loses its VLAN database and causes the uplink port and any other ports that were not members of VLAN1 to become inactive. To solve this problem, complete these steps:
Step 1: Temporarily change the VTP mode to transparent.
Step 2: Add the VLAN to which the uplink port is assigned to the VLAN database.
Step 3: Change the VTP mode back to client after the uplink port begins forwarding.
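The three recovery steps above could be sketched as follows, assuming (hypothetically) that the uplink port is assigned to VLAN 100:

```
Switch(config)# vtp mode transparent
Switch(config)# vlan 100
Switch(config-vlan)# exit
Switch(config)# vtp mode client
```

Once the uplink trunk begins forwarding again, the switch can receive the VLAN database from the VTP server as usual.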
Use the Cisco IOS show commands to verify STP port states
Before you troubleshoot a bridging loop, you must be aware of the following:
The topology of the bridge network
The location of the root bridge
The location of the blocked ports and the redundant links
Before you can determine what to fix in the network, you must know how the network looks when it is functioning correctly. Most of the troubleshooting steps simply use show commands to try to identify error conditions. Knowledge of the network helps you to focus on the critical ports on the key devices.
During a bridging loop, even a single host, such as a server, can bring down a network through broadcasts. The best way to identify a bridging loop is to capture the traffic on a saturated link and verify that you see similar packets multiple times. Realistically, however, if all users in a certain bridge domain have connectivity issues at the same time, you can already suspect a bridging loop. Check the port utilization on your devices to determine whether there are abnormal values.
Very often, information about the location of the spanning-tree root bridge is not available at troubleshooting time. Do not let STP decide which switch becomes the root bridge. For each VLAN, you can usually identify which switch can best serve as the root bridge. Which switch would make the best root bridge depends on the design of the network. Generally, choose a powerful switch in the middle of the network. If you put the root bridge in the center of the network with direct connection to the servers and routers, you reduce the average distance from the clients to the servers and routers. For each VLAN, hardcode which switches will serve as the root bridge and the backup (secondary) root bridge.
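Hardcoding the root bridge per VLAN, as recommended above, might be sketched as follows (the VLAN number is a hypothetical example):

```
Switch1(config)# spanning-tree vlan 10 root primary
Switch2(config)# spanning-tree vlan 10 root secondary
```

The root primary and root secondary keywords set the bridge priority low enough for the switch to win (or be runner-up in) the root election for that VLAN, so the root location is deterministic rather than left to default STP behavior.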
Static: The router learns routes when an administrator manually configures the static route. The administrator must manually update this static route entry whenever an internetwork topology change requires an update. Static routes are user-defined routes that specify the path that packets take when moving between a source and a destination. These administrator-defined routes allow very precise control over the routing behavior of the IP internetwork.
Dynamic: The router dynamically learns routes after an administrator configures a routing protocol that helps determine routes. Unlike the situation with static routes, after the network administrator enables dynamic routing, the routing process automatically updates route knowledge whenever new topology information is received. The router learns and maintains routes to remote destinations by exchanging routing updates with other routers in the internetwork. In this way, all routers have accurate routing tables that are updated dynamically and can learn about routes to remote networks that are many hops away.
Routed protocol: Any network protocol that provides enough information in its network layer address to allow a packet to be forwarded from one host to another host.
Forwarding is based on the addressing scheme, without knowing the entire path from source to destination. Packets generally are conveyed from end system to end system. IP is an example of a routed protocol.
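The static route described above might be configured as follows (the destination network and next-hop address are hypothetical examples):

```
Router(config)# ip route 172.16.1.0 255.255.255.0 192.168.1.2
```

This entry tells the router to forward packets destined for 172.16.1.0/24 to the next-hop router at 192.168.1.2; if the topology changes, the administrator must update the entry manually.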
Routing protocol: Facilitates the exchange of routing information between networks, allowing routers to build routing tables dynamically. Traditional IP routing stays simple because it uses next-hop (next-router) routing. In next-hop routing, the router needs to consider only where it sends the packet; there is no need to consider the subsequent path of the packet on the remaining hops (routers). A routing protocol defines the following:
How updates are conveyed
What knowledge is conveyed
When to convey the knowledge
How to locate recipients of the updates
Routing protocols perform these functions:
Discovering remote networks: There is no need to manually define the available destinations (routes). The routing protocols discover the remote networks and update the internal routing table of the router.
Maintaining up-to-date routing information: The routing table contains the entries about remote networks. When changes happen in the network, routing tables are automatically updated.
Choosing the best path to destination networks: Routing protocols discover the remote networks. More paths to the destinations are possible, and the best paths enter the routing table.
Finding a new best path if the current path is no longer available: The routing table is constantly updated, and new paths may also be added. When the current best path is not available or better paths are found, the routing protocol selects a new best path.
Interior gateway protocols (IGPs): These routing protocols are used to exchange routing information within an autonomous system. Routing Information Protocol version 2 (RIPv2), Enhanced Interior Gateway Routing (EIGRP), and Open Shortest Path First (OSPF) are examples of IGPs. Exterior gateway protocols (EGPs): These routing protocols are used to route between autonomous systems. Border Gateway Protocol (BGP) is the EGP of choice in networks today.
Distance vector: The distance vector routing approach determines the direction (vector) and distance (a metric, such as hop count in the case of RIP) to any link in the internetwork. Pure distance vector protocols periodically send complete routing tables to all connected neighbors; this mode of operation is key in defining what a distance vector routing protocol is. In large networks, these routing updates can become enormous, causing significant traffic on the links. The only information that a router knows about a remote network is the distance (metric) to reach that network and which path or interface to use to get there. Different distance vector routing protocols may use different kinds of metrics. Distance vector routing protocols do not have an actual map of the network topology; rather, a router's view of the network is based on the information provided by its neighbors.
Link-state: The link-state approach, which uses the Shortest Path First (SPF) algorithm, creates an abstraction of the exact topology of the entire internetwork, or at least of the partition in which the router is situated. A link-state router uses the link-state information to create a topology map and to select the best path to all destination networks in the topology. All link-state routers use an identical "map" of the network and calculate the shortest paths to the destination networks based on where they are on that map. Unlike their distance vector counterparts, link-state protocols do not exchange complete routing tables periodically; instead, event-based "triggered" updates containing only specific link-state information are sent. Small, efficient periodic keepalives, in the form of "hello" messages, are exchanged between directly connected neighbors to establish and maintain reachability.
Advanced distance vector: The advanced distance vector approach combines aspects of the link-state and distance vector algorithms.
Hop count: The number of times that a packet passes through the output port of one router.
Bandwidth: The data capacity of a link; for instance, a 10-Mb/s Ethernet link is preferable to a 64-kb/s leased line.
Delay: The length of time that is required to move a packet from its source to its destination.
Load: The amount of activity on a network resource, such as a router or link.
Reliability: Usually refers to the bit error rate of each network link.
Cost: A configurable value that, on Cisco routers, is based by default on the bandwidth of the interface.
Administrative Distance
Route Source                 Default Distance
Connected interface          0
Static route                 1
External BGP                 20
Internal EIGRP               90
OSPF                         110
IS-IS                        115
RIPv2                        120
External EIGRP               170
Internal BGP                 200
Unknown                      255
If nondefault values are necessary, you can use Cisco IOS Software to configure administrative distance values on a per-router, per-protocol, and per-route basis (with the exception of directly connected networks).
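One common use of a nondefault administrative distance is a floating static route: a backup static route whose distance is deliberately set higher than that of the dynamically learned primary route, so it is installed only when the primary disappears. A sketch, with hypothetical networks and next hops:

```
Router(config)# ip route 172.16.1.0 255.255.255.0 192.168.1.2
Router(config)# ip route 172.16.1.0 255.255.255.0 10.0.0.2 200
```

The first route (distance 1) is preferred; the second, with its distance raised to 200, enters the routing table only if the first next hop becomes unreachable.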
As the name implies, distance vector means that routes are advertised as distance and direction. Distance is defined in terms of a metric, such as hop count (for RIP), and direction is simply the next-hop router or exit interface. A router using a distance vector routing protocol does not know the entire path to a destination network. The router knows only the following information:
The direction or interface in which packets should be forwarded
The distance (how far it is) to the destination network
Distance vector routing protocols call for each router to periodically advertise its entire routing table to each of its neighbors. The periodic routing updates are addressed only to directly connected routing devices; the addressing scheme that is most commonly used is a logical broadcast or multicast. Routers that are running a pure distance vector routing protocol send full periodic updates, which include the complete routing table, even if there are no changes in the network. Upon receiving a full routing table from its neighbor, a router can verify all known routes and make changes to the local routing table based on the updated information. This process is also known as "routing by rumor," because the router's understanding of the network is based on the neighboring router's perspective of the network topology. Distance vector protocols traditionally have also been classful protocols. RIPv2 and EIGRP are examples of more modern distance vector protocols that exhibit classless behavior; EIGRP, which is considered an advanced distance vector protocol, also exhibits some link-state characteristics. As the distance vector network discovery process continues, routers discover the best path to destination networks that are not directly connected, based on accumulated metrics from each neighbor. Neighboring routers provide information for routes that are not directly connected.
Routing updates are triggered by topology changes such as the following:
Failure of a link
Introduction of a new link
Failure of a router
Change of link parameters
When a router receives an update from a neighboring router, the router compares the update with its own routing table. To establish the new metric, the router adds the cost of reaching the neighboring router to the path cost reported by the neighbor. If the router learns from its neighbor of a better route (a smaller total metric) to a network, it updates its own routing table. Each routing-table entry includes the following information:
Information about the total path cost, which is defined by the routing-table metric
The logical address of the first router on the path to each network that the routing table knows about (the next hop)
The age of routing information in a routing table is defined and refreshed each time that an update is received. Therefore, information in the routing table can be maintained when there is a topology change.
When a router receives an update from a neighbor indicating that a previously accessible network is now inaccessible, the router marks the route as "possibly down" and starts a hold-down timer. If an update arrives from a neighboring router with a better metric than originally recorded for the network, the router marks the network as "accessible" and removes the hold-down timer.
If, at any time before the hold-down timer expires, an update is received from a different neighboring router with a poorer or the same metric, the update is ignored. Ignoring an update with a poorer or the same metric when a hold-down timer is in effect allows more time for the change to propagate through the entire network. During the hold-down period, routes appear in the routing table as possibly down. The router will still attempt to route packets to the possibly down network (in case the network is having only intermittent connectivity problems, which is referred to as flapping).
Routing loops can be caused by erroneous information that was calculated as a result of inconsistent updates, slow convergence, and timing. Slow convergence problems can also occur if routers wait for their regularly scheduled updates before notifying neighboring routers of network changes. Routing-table updates normally are sent to neighboring routers at regular intervals. A triggered update is a routing-table update that is sent immediately in response to some change. The detecting router immediately sends an update message to adjacent routers, which, in turn, generate triggered updates notifying their neighbors of the change. This wave of notifications propagates throughout that portion of the network where routes went through the specific link that changed. Triggered updates would be sufficient if there were a guarantee that the wave of updates would reach every appropriate router immediately. However, there are two problems:
Packets containing the update message can be dropped or corrupted by a link in the network. The triggered updates do not happen instantaneously. It is possible that a router that has not yet received the triggered update will issue a regular update at just the wrong time. Wrong timing will cause the bad route to be reinserted in a neighbor that had already received the triggered update.
Coupling triggered updates with hold-down timers is designed to prevent these problems. The hold-down rule specifies that for a specified period, no new route with the same or a worse metric than a route that is in hold-down (possibly down) will be accepted for the same destination as the hold-down route. This mechanism gives the triggered update time to propagate throughout the network.
Link-state routing protocols are also known as shortest path first protocols and are built around Edsger Dijkstra's Shortest Path First (SPF) algorithm. Examples of link-state routing protocols include OSPF and Intermediate System-to-Intermediate System (IS-IS). Link-state routing protocols collect routing information from all other routers in the network or within a defined area of the network. After all of the information is collected, each router, independently of the other routers, calculates the best paths to all destinations in the network. Because each router maintains its own view of the network, it is less likely to propagate incorrect information that is provided by any one neighboring router. A link is like an interface on a router. The state of the link is a description of that interface and of its relationship to its neighboring routers. An example description of the interface would include the IP address of the interface, the mask, the type of network to which it is connected, the routers that are connected to that network, and so on. The collection of link states forms a link-state (or topological) database, which is used to calculate the best paths through the network. Link-state routers find the best paths to destinations by applying Dijkstra's algorithm against the link-state database to build the SPF tree. The best paths are then selected from the SPF tree and placed in the routing table. Link-state routing protocols have the reputation of being much more complex than their distance vector counterparts. However, the basic functionality and configuration of link-state routing protocols is not complex at all. To maintain routing information, link-state routing uses link-state advertisements (LSAs), a topological database, the SPF algorithm, the resulting SPF tree, and a routing table of paths and ports to each network.
OSPF's major advantages over RIP are its fast convergence and its scalability to much larger network implementations. The ability of link-state routing protocols, such as OSPF, to divide one large autonomous system into smaller groupings of routers (called areas) is referred to as hierarchical routing. Link-state routing protocols use the concept of areas for scalability. Topological databases contain information about every router and its associated interfaces, which in large networks can be resource-intensive. Arranging routers into areas effectively partitions this potentially large database into smaller, more manageable databases. With hierarchical routing, routing still occurs between the areas (called interarea routing). At the same time, many of the minute internal routing operations, such as recalculating the database, are kept within an area. When a failure occurs in the network, such as a neighbor becoming unreachable, link-state protocols flood LSAs, using a special multicast address, throughout an area. Each link-state router takes a copy of the LSA, updates its link-state (topological) database, and forwards the LSA to all neighboring devices. LSAs cause every router within the area to recalculate routes. Because LSAs must be flooded throughout an area and all routers within that area must recalculate their routing tables, the number of link-state routers that can be in an area should be limited.
Dividing the network into areas provides these benefits:
Reduced frequency of SPF calculations
Smaller routing tables
Reduced link-state update overhead
Link-state protocols use cost metrics to choose paths through the network. The cost metric reflects the capacity of the links on those paths. Routing updates are less frequent. Link-state protocols usually scale to larger networks than distance vector protocols do, particularly the traditional distance vector protocols, such as RIPv2. The network can be segmented into area hierarchies, limiting the scope of route changes. Link-state protocols send only updates of a topology change. By using triggered, flooded updates, link-state protocols can immediately report changes in the network topology to all routers in the network. This immediate reporting generally leads to fast convergence times. Because each router has a complete and synchronized picture of the network, it is very difficult for routing loops to occur. Because LSAs are sequenced and aged, routers always base their routing decisions on the most recent set of information. With careful network design, the link-state database sizes can be minimized, leading to smaller Dijkstra's algorithm calculations and faster convergence.
In addition to the routing table, link-state protocols require a topology database and an adjacency database. Using all of these databases can require a significant amount of memory in large or complex networks. Dijkstra's algorithm requires CPU cycles to calculate the best paths through the network. If the network is large or complex, the calculation is complex as well; the same is true if the network is unstable, in which case the Dijkstra's algorithm calculation runs frequently. For these reasons, link-state protocols can require a significant amount of CPU power. Creating an area hierarchy can cause problems because areas must remain contiguous at all times. The routers in an area must always be capable of contacting and receiving LSAs from all other routers in their area. In a multiarea design, an area router must always have a path to the backbone, or the router will have no connectivity to the rest of the network. In addition, the backbone area must remain contiguous at all times to avoid some areas becoming isolated (partitioned). If the network design is complex, the operation of the link-state protocol may have to be tuned to accommodate it. Configuring a link-state protocol in a large network can be challenging.
Interpreting the information that is stored in the topology, neighboring databases, and the routing table requires a good understanding of the concepts of link-state routing.
More efficient use of IP addresses: Without the use of VLSMs, companies must implement a single subnet mask within an entire Class A, B, or C network number. For example, consider the 192.168.1.0/24 network address that is divided into subnetworks using /27 masking. One of the subnetworks in this range, 192.168.1.160/27, is further divided into smaller subnetworks using /30 masking. This creates subnets with only two usable host addresses each, to be used on the WAN links. The /30 subnets range from 192.168.1.160/30 to 192.168.1.188/30.
Greater capability to use route summarization: VLSM allows more hierarchical levels within an addressing plan and thus allows better route summarization within routing tables.
Isolation of topology changes from other routers: Another advantage to using route summarization in a large, complex network is that it can isolate topology changes from other routers. For example, if a specific link in the 192.168.1.128/27 domain is rapidly fluctuating between being active and inactive (called flapping), the summary route does not change. Therefore, no router that is external to the domain needs to keep modifying its routing table because of this flapping activity.
The rapid growth of the Internet has caused a dramatic increase in the number of routes to networks around the world. This growth has resulted in heavy loads on Internet routers. A VLSM addressing scheme allows for route summarization, which reduces the number of routes advertised. Route summarization groups contiguous subnets or networks under a single address. Route summarization is also known as route aggregation. Summarization decreases the number of entries in routing updates and lowers the number of entries in routing tables. It also reduces bandwidth utilization for routing updates and results in faster routing table lookups. Route summarization is synonymous with the term supernetting. Supernetting is the opposite of subnetting: it joins multiple smaller contiguous networks together. Efficient summarization requires a good addressing plan that fits the network design. Route summarization is most effective within a subnetted environment in which the network addresses are in contiguous blocks in powers of 2. For example, a single routing entry can represent address blocks of 4, 8, 16, 32, 64, 128, 256, 512, and so on. This is because, like subnet masks, summary masks are binary, so summarization must take place on binary boundaries (powers of 2).
Classful routing is a consequence of the fact that older distance vector routing protocols were designed before subnetting was widely used and hence do not advertise subnet masks in the routing advertisements that they generate. When a classful routing protocol (such as RIPv1) is used, all subnetworks of the same major network (Class A, B, or C) must use the same subnet mask, in other words, a fixed-length subnet mask (FLSM). Routers that are running a classful routing protocol perform automatic route summarization across network boundaries.
Upon receiving a routing update packet, a router that is running a classful routing protocol does one of the following things to determine the network portion of the route:
If the routing update information contains the same major network number as is configured on the receiving interface, the router applies the subnet mask that is configured on the receiving interface.
If the routing update information contains a major network that is different from the one that is configured on the receiving interface, the router applies the default classful mask (by address class) as follows:
With the rapid depletion of IPv4 addresses, the Internet Engineering Task Force (IETF) developed classless interdomain routing (CIDR). CIDR uses IPv4 address space more efficiently and allows network address aggregation or summarization, which reduces the size of routing tables.
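As a sketch of the aggregation that CIDR makes possible, the following uses Python's ipaddress module to collapse four hypothetical contiguous /24 networks into a single /22 summary (the address block is made up for illustration):

```python
import ipaddress

# Four contiguous /24 networks: a block of 4, a power of 2,
# so they align on a binary boundary and summarize cleanly
subnets = [ipaddress.ip_network(f"172.16.{i}.0/24") for i in range(4)]

summary = list(ipaddress.collapse_addresses(subnets))
print(summary)  # one /22 entry replaces four /24 routing entries
```

A single 172.16.0.0/22 advertisement now stands in for all four subnets, which is exactly how summarization shrinks routing tables and updates.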
The use of CIDR requires a classless routing protocol, such as RIPv2, EIGRP, or OSPF, or static routing. To CIDR-compliant routers, address class is meaningless. The network subnet mask determines the network portion of the address. This value is also known as the network prefix, or prefix length. The class of the address no longer determines the network address. ISPs assign blocks of IP addresses to a network based on the requirements of the customer, ranging from a few hosts to hundreds or thousands of hosts. With CIDR and VLSM, ISPs are no longer limited to using prefix lengths of /8, /16, or /24. Classless routing protocols can be considered second-generation protocols because they are designed to address some of the limitations of the earlier classful routing protocols. One of the most serious limitations in a classful network environment is that the subnet mask is not exchanged during the routing update process. This limitation requires the use of the same subnet mask on all subnetworks within the same major network, in other words, a fixed-length subnet mask (FLSM) scheme, which means that IP addresses are often wasted, especially on point-to-point links. Another limitation of the classful approach is the automatic summarization to the classful network boundary at major network boundaries. In the classless environment, the summarization process is controlled manually and can usually be invoked at any bit position within the address. Because subnet routes propagate throughout the routing domain, you may need to perform manual summarization to keep the size of the routing tables manageable. Classless routing protocols include RIPv2, EIGRP, and OSPF.
Subnets are not advertised to a different major network. Discontiguous subnets are not visible to each other.
Discontiguous networks cause unreliable or suboptimal routing. To avoid this condition, an administrator can do the following:
Modify the addressing scheme, if possible
Use a classless routing protocol, such as RIPv2, OSPF, or EIGRP
Turn automatic summarization off
Manually summarize at the classful boundary
OSPF
RIPv2 with the no auto-summary command
EIGRP with the no auto-summary command
Area: An area is a grouping of contiguous networks. Areas are logical subdivisions of the autonomous system.
Autonomous system: The largest entity within the hierarchy. An autonomous system consists of a collection of networks under a common administration that share a common routing strategy. An autonomous system, sometimes called a domain, can be logically subdivided into multiple areas.
OSPF uses the concept of areas for scalability. Within each autonomous system, a contiguous backbone area (Area 0) must be defined. All other nonbackbone areas are connected off the backbone area. The backbone area is the transit area because all other areas communicate with each other through it. The operation of OSPF within an area is different from its operation between that area and the backbone area. Summarization of network information usually occurs between areas; it is not on by default and must be manually configured. This functionality helps
to decrease the size of routing tables in the backbone. Summarization also isolates changes and unstable, or flapping, links to a specific area in the routing domain. If summarization is used, when there is a change in the topology, only those routers in the affected area receive the LSA and run the SPF algorithm. For OSPF, the nonbackbone areas can additionally be configured as special area types, such as stub areas, totally stubby areas, or not-so-stubby areas (NSSAs). These advanced techniques help reduce the link-state database and routing table size. Routers that operate within the two-layer network hierarchy have different routing entities and different functions in OSPF.
Router ID: The router ID is a 32-bit number, expressed in dotted decimal format, that uniquely identifies the router. It does not have to be an actual IP address of the router, although using one is preferable for administrative purposes. By default, the highest IP address on an active interface is chosen, unless a loopback interface exists or the router ID is configured explicitly. For example, the IP address 192.168.1.1 would be chosen over 172.16.1.1 because it is numerically greater. This identification is important in establishing and troubleshooting neighbor relationships and coordinating route exchanges, so it is recommended to use a loopback address or an explicit router ID for stability.
Hello and dead intervals: The hello interval specifies the frequency, in seconds, at which a router sends hello packets. The default hello interval on multiaccess networks is 10 seconds. The dead interval is the time, in seconds, that a router waits to hear from a neighbor before declaring the neighboring router out of service. By default, the dead interval is four times the hello interval. These timers must be the same on neighboring routers; otherwise, an adjacency will not be established.
Neighbors: The Neighbors field lists the adjacent routers with established bidirectional communication. This bidirectional communication is indicated when the router recognizes itself listed in the Neighbors field of the hello packet from the neighbor.
Area ID: To communicate, two routers must share a common segment, and their interfaces must belong to the same OSPF area on that segment. The neighbors must also share the same subnet and mask. These routers will all have the same link-state information.
Router priority: The router priority is an 8-bit number that indicates the priority of a router. OSPF uses the priority to select a DR and BDR.
DR and BDR IP addresses: These addresses are the IP addresses of the DR and BDR for the specific network, if they are known.
Authentication password: If router authentication is enabled, two routers must exchange the same password. Authentication is not required, but if it is enabled, all peer routers must have the same password.
Stub area flag: A stub area is a special area. Two routers must agree on the stub area flag in the hello packets. Designating a stub area is a technique that reduces routing updates by replacing them with a default route.
SPF Algorithm
The SPF algorithm places each router at the root of a tree and calculates the shortest path to each node, using Dijkstra's algorithm. Dijkstra's algorithm is based on the cumulative cost that is required to reach that destination. The cost, or metric, of an interface is an indication of the overhead that is required to send packets across a certain interface. The cost of an interface is inversely proportional to the bandwidth of that interface, so a higher bandwidth indicates a lower cost. A higher overhead, higher cost, and more time delay are involved in crossing a T1 serial line than in crossing a 10-Mb/s Ethernet line. The formula that is used to calculate OSPF cost is cost = reference bandwidth / interface bandwidth (in bits per second). The default reference bandwidth is 10^8, which is 100,000,000, or the equivalent of the bandwidth of Fast Ethernet. Therefore, the default cost of a 10-Mb/s Ethernet link is 10^8 / 10^7 = 10, and the cost of a T1 link is 10^8 / 1,544,000 = 64.
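The cost arithmetic above can be verified with a short Python sketch (the helper name ospf_cost is ours, not an IOS feature; OSPF truncates the result to an integer, with a minimum cost of 1):

```python
REFERENCE_BW = 10**8  # default reference bandwidth: 100,000,000 b/s

def ospf_cost(interface_bw_bps: int) -> int:
    # Integer division mirrors OSPF's truncation; cost never drops below 1
    return max(1, REFERENCE_BW // interface_bw_bps)

print(ospf_cost(10_000_000))   # 10  (10-Mb/s Ethernet)
print(ospf_cost(1_544_000))    # 64  (T1)
print(ospf_cost(100_000_000))  # 1   (Fast Ethernet)
```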
Using this equation presents a problem with link speeds of 100 Mb/s or greater, such as Fast Ethernet and Gigabit Ethernet (both calculate to a cost of 1). To adjust the reference bandwidth for links with bandwidths greater than Fast Ethernet, use the auto-cost reference-bandwidth router configuration command. Each router determines its own cost to each destination in the topology. In other words, each router calculates the SPF algorithm and determines the cost from its own perspective. LSAs are flooded throughout the area using a reliable algorithm, which ensures that all routers in an area have the same topological database. As a result of the flooding process, each router learns the link-state information for every router in its routing area. Each router uses the information in its topological database to calculate a shortest-path tree, with itself as the root. The SPF tree is then used to populate the IP routing table with the best paths to each network. The shortest path is not necessarily the path with the fewest hops. Each router has its own view of the topology, even though all of the routers build a shortest-path tree using the same link-state database.
all routers. Even if no areas are specified, there must be an Area 0. In a single-area OSPF environment, the area is always 0.
The configuration of a loopback interface does not change the OSPF router ID automatically. The OSPF process must be cleared in order to derive a new router ID. The clear ip ospf process command is used to clear the OSPF process.
Note: Modifying a router ID with a new loopback or physical interface IP address may require reloading the router.
The OSPF router-id command takes precedence over loopback and physical interface IP addresses for determining the router ID. The command syntax is as follows:
Router(config)#router ospf 100
Router(config-router)#router-id 172.16.17.5
The router ID is selected when OSPF is configured with its first OSPF network command. If the OSPF router-id command or the loopback address is configured after the OSPF network command, the router ID will be derived from the interface with the highest active IP address. The router ID can be modified with the IP address from a subsequent OSPF router-id command by reloading the router or by using the following command:
RouterX#clear ip ospf process
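The selection order described above (explicit router-id command, then highest loopback address, then highest active interface address) can be sketched as a small Python helper; the function and its inputs are hypothetical, for illustration only:

```python
import ipaddress

def select_router_id(configured_id=None, loopback_ips=(), interface_ips=()):
    """Sketch of OSPF router ID precedence:
    router-id command > highest loopback IP > highest active interface IP."""
    if configured_id:
        return configured_id
    for pool in (loopback_ips, interface_ips):
        if pool:
            # "Highest" means numerically greatest address
            return str(max(ipaddress.ip_address(ip) for ip in pool))
    return None

print(select_router_id(interface_ips=["172.16.1.1", "192.168.1.1"]))
print(select_router_id(configured_id="172.16.17.5",
                       interface_ips=["192.168.1.1"]))
```

With no router-id or loopback configured, 192.168.1.1 wins over 172.16.1.1 because it is numerically greater; an explicit router-id overrides both.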
chosen as the router ID. This command also displays the timer intervals, including the hello interval, and shows the neighbor adjacencies. The show ip ospf neighbor command displays OSPF neighbor information on a per-interface basis.
OSPF Authentication
Types of Authentication
OSPF neighbor authentication (also called "neighbor router authentication" or "route authentication") can be configured such that routers can participate in routing based on predefined passwords.
When you configure neighbor authentication on a router, the router authenticates the source of each routing update packet that it receives. This authentication is accomplished by the exchange of an authenticating key (sometimes referred to as a password) that is known to both the sending and receiving routers. Note that the actual routing update is not encrypted; privacy is not the goal. The objective of authentication is to verify that the information came from a trusted source. By default, OSPF uses null authentication, which means that routing exchanges over a network are not authenticated. OSPF supports two other authentication methods:
Plaintext (simple) password authentication
MD5 authentication
A more secure method of authentication is MD5. It requires a key and a key ID on each router. The router uses a one-way algorithm that processes the key, the OSPF packet, and the key ID to generate a message digest (hash value). Each OSPF packet includes this digest. A packet sniffer cannot be used to obtain the actual key, because the key is never transmitted; only the digest is sent. The receiving OSPF router, which has the same key configured, computes its own digest and compares it to the one included with the update; if the results match, the sender must have used the same key. OSPF MD5 authentication includes a nondecreasing sequence number in each OSPF packet to protect against replay attacks.
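The keyed-digest idea can be illustrated with a simplified Python sketch using hashlib. This shows only the concept of both sides computing and comparing a digest over packet-plus-key; it is not the exact OSPF packet format or keyed-MD5 procedure, and the key and packet contents are made up:

```python
import hashlib

SHARED_KEY = b"s3cret"  # hypothetical key, configured on both routers

def keyed_digest(packet: bytes, key: bytes) -> str:
    # Simplified: hash the packet together with the shared key.
    # The key itself never appears in the packet; only the digest does.
    return hashlib.md5(packet + key).hexdigest()

packet = b"OSPF-UPDATE:area0:seq=1001"
sent_digest = keyed_digest(packet, SHARED_KEY)          # computed by sender

# Receiver recomputes with its own copy of the key and compares
assert keyed_digest(packet, SHARED_KEY) == sent_digest  # keys match: accept
assert keyed_digest(packet, b"wrong") != sent_digest    # key mismatch: reject
```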
For plaintext password authentication, use the ip ospf authentication command with no parameters. Before using this command, configure a password for the interface using the ip ospf authentication-key command. The ip ospf authentication command was introduced in Cisco IOS Release 12.0. For backward compatibility, the authentication type for an area is still supported. If the authentication type is not specified for an interface, the authentication type for the area is used (the area default is null authentication). To enable plaintext authentication for an OSPF area 0, use the area 0 authentication router configuration command.
To determine whether any of these hello packet options do not match, use the debug ip ospf adj command.
O: OSPF intra-area route from a router within the same OSPF area
O IA: OSPF interarea route from a router in a different OSPF area
O E1 or O E2: An external OSPF route from another autonomous system
The network command that you configure under the OSPF routing process also indicates which interfaces are running OSPF.
The IP subnet masks for the routers on the same network do not match.
The OSPF hello interval for the router does not match the OSPF hello interval that is configured on a neighbor.
The OSPF dead interval for the router does not match the OSPF dead interval that is configured on a neighbor.
If a router that is configured for OSPF routing is not seeing an OSPF neighbor on an attached network, perform the following tasks:
Ensure that both routers have been configured in the same IP subnet, have the same subnet mask, and that the OSPF hello and dead intervals match on both routers.
Ensure that both neighbors are part of the same area number and area type.
Note Debug commands, especially the debug all command, should be used sparingly. These commands can disrupt router operations. Debug commands are useful when configuring or troubleshooting a network; however, they can make intensive use of CPU and memory resources. It is recommended that you run as few debug processes as necessary and disable them
immediately when they are no longer needed. Debug commands should be used with caution on production networks because they can affect the performance of the device.
Rapid convergence: EIGRP uses the Diffusing Update Algorithm (DUAL) to achieve rapid convergence. As the computational engine that runs EIGRP, DUAL resides at the center of the routing protocol, guaranteeing loop-free paths and backup paths throughout the routing domain. A router that is using EIGRP stores all available backup routes for destinations so that it can quickly adapt to alternate routes. If the primary route in the routing table fails, the best backup route is immediately added to the routing table. If no appropriate route or backup route exists in the local routing table, EIGRP queries its neighbors to discover an alternate route.
Reduced bandwidth usage: EIGRP uses the terms partial and bounded when referring to its updates. EIGRP does not make periodic updates. Partial means that the update only includes information about the route changes. EIGRP sends these incremental updates when the state of a destination changes, instead of sending the entire contents of the routing table. Bounded refers to the propagation of partial updates that are sent only to those routers that the changes affect. By sending only the routing information that is needed and only to those routers that need it, EIGRP minimizes the bandwidth that is required to send EIGRP updates.
Multiple network layer support: EIGRP supports AppleTalk, IP version 4 (IPv4), IP version 6 (IPv6), and Novell Internetwork Packet Exchange (IPX), all of which use protocol-dependent modules (PDMs). PDMs are responsible for protocol requirements that are specific to the network layer.
Classless routing: Because EIGRP is a classless routing protocol, it advertises a routing mask for each destination network. The routing mask feature enables EIGRP to support discontiguous subnetworks and variable-length subnet masks (VLSMs).
Less overhead: EIGRP uses multicast and unicast rather than broadcast. Multicast EIGRP packets use the reserved multicast address of 224.0.0.10. As a result, end stations are unaffected by routing updates and requests for topology information.
Load balancing: EIGRP supports unequal metric load balancing as well as equal metric load balancing, which allows administrators to better distribute traffic flow in their networks.
Easy summarization: EIGRP allows administrators to create summary routes anywhere within the network rather than rely on the traditional distance vector approach of performing classful route summarization only at major network boundaries. In OSPF, route summarization can only be configured at specific points in the network: at the ABR or the ASBR.
Note: The term hybrid routing protocol is sometimes used to define EIGRP. However, this term is misleading because EIGRP is not a hybrid between distance vector and link-state routing protocols. It is in essence a distance vector routing protocol. Therefore, Cisco no longer uses the term hybrid to refer to EIGRP. Each EIGRP router maintains a neighbor table. This table includes a list of directly connected EIGRP routers that have an adjacency with this router. Neighbor relationships are used to track the status of these neighbors. EIGRP uses a lightweight Hello protocol to monitor connection status with its neighbors. Each EIGRP router maintains a topology table for each routed protocol configuration. The topology table includes route entries for every destination that the router learns from its directly connected EIGRP neighbors. EIGRP chooses the best routes to a destination from the topology table and places these routes in the routing table. To determine the best route (successor) and any backup routes (feasible successors) to a destination, EIGRP uses the following two parameters:
Advertised distance: The EIGRP metric for an EIGRP neighbor to reach a particular network. It is also sometimes referred to as the reported distance.
Feasible distance: The advertised distance for a particular network that is learned from an EIGRP neighbor, plus the EIGRP metric to reach that neighbor. This sum provides an end-to-end metric from the router to that remote network.
A router compares all feasible distances to reach a specific network and then selects the lowest feasible distance and places it in the routing table. The feasible distance for the chosen route becomes the EIGRP routing metric to reach that network in the routing table.
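The successor selection described above can be sketched in Python; the neighbor names and metric values here are hypothetical:

```python
# Feasible distance = advertised distance (neighbor's metric to the network)
#                   + this router's metric to reach that neighbor
neighbors = {
    "R2": {"cost_to_neighbor": 256, "advertised_distance": 2816},
    "R3": {"cost_to_neighbor": 512, "advertised_distance": 2304},
}

feasible = {n: v["cost_to_neighbor"] + v["advertised_distance"]
            for n, v in neighbors.items()}

# The lowest feasible distance wins and becomes the routing-table metric
successor = min(feasible, key=feasible.get)
print(feasible)   # per-neighbor end-to-end metrics
print(successor)  # the successor route installed in the routing table
```

Here R3's path (512 + 2304 = 2816) beats R2's (256 + 2816 = 3072), so R3 becomes the successor even though R2 is the cheaper neighbor to reach.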
The router eigrp global configuration command enables EIGRP. Use the router eigrp and network commands to create an EIGRP routing process. Note that EIGRP requires an autonomous system (AS) number. The autonomous system parameter is a number between 1 and 65535 that is chosen by the network administrator, and it does not have to be registered.
Note: Although EIGRP refers to the parameter as an AS number, it actually functions as a process ID. This number is not associated with the autonomous system numbers discussed previously and can be assigned any 16-bit value. Unlike the OSPF process ID, however, the EIGRP AS number must match on all routers that are involved in the same EIGRP process: all routers in the EIGRP routing domain must use the same number to exchange routing information with each other. Typically, only a single process ID of any routing protocol is configured on a router. The network command in EIGRP has the same function as in other IGP routing protocols:
The network command defines a major network number to which the router is directly connected.
Any interface on this router that matches the network address in the network command will be enabled to send and receive EIGRP updates.
The EIGRP routing process looks for interfaces that have an IP address that belongs to the networks that are specified with the network command. The EIGRP process begins on these interfaces.
This network (or subnet) will be included in EIGRP routing updates.
The network command is used in router configuration mode. To configure EIGRP to advertise specific subnets only, use the wildcard-mask option with the network command. Think of the wildcard mask as the inverse of a subnet mask. The inverse of the subnet mask 255.255.255.240 is 0.0.0.15, which is used in the following example to advertise the 192.168.1.0/28 subnet:
RouterC(config-router)#network 192.168.1.0 0.0.0.15
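A quick way to confirm that 0.0.0.15 is the wildcard (inverse) mask for a /28 subnet is Python's ipaddress module, which exposes the inverse mask as hostmask:

```python
import ipaddress

net = ipaddress.ip_network("192.168.1.0/28")
print(net.netmask)   # the subnet mask: 255.255.255.240
print(net.hostmask)  # its inverse, i.e. the wildcard mask for the command
```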
The show ip route eigrp command displays the current EIGRP entries in the routing table. EIGRP has a default administrative distance of 90 for internal routes and 170 for routes that are imported from an external source, such as default routes. When compared to other interior gateway protocols (IGPs), EIGRP is the most preferred by the Cisco IOS because it has the lowest administrative distance.
EIGRP automatically summarizes routes at the classful boundary, by default. In some cases, you may not want automatic summarization to occur. For example, if you have discontiguous networks, you need to disable automatic summarization to minimize router confusion. To disable automatic summarization, use the no auto-summary command in the EIGRP router configuration mode. DUAL takes down all neighbor adjacencies and then re-establishes them so that the effect of the no auto-summary command can be fully realized. All EIGRP neighbors will immediately send out a new round of updates that will not be automatically summarized. The show ip protocols command displays the parameters and current state of the active routing protocol process. This command shows the EIGRP autonomous (AS) number. It also displays filtering and redistribution numbers, neighbor, and administrative distance information. Use the show ip eigrp interfaces command to determine on which interfaces EIGRP is active, and to learn information about EIGRP that relates to those interfaces. If you specify an interface (for example, show ip eigrp interfaces Fa0/0), only that interface is displayed. Otherwise, all interfaces on which EIGRP is running are displayed. If you specify a process ID (AS) (for example, show ip eigrp interfaces 100), only the routing process for the specified process ID (AS) is displayed. Otherwise, all EIGRP processes are displayed. Use the show ip eigrp neighbors command to display the neighbors that EIGRP discovered and to determine when neighbors become active and inactive. The command is also useful for debugging certain types of transport problems. The show ip eigrp topology command displays the EIGRP topology table, the active or passive state of routes, the number of successors, and the feasible distance to the destination. Use the show ip eigrp topology all-links command to display all paths, even those that are not feasible. 
The show ip eigrp traffic command displays the number of packets that are sent and received. The debug ip eigrp privileged EXEC command helps you analyze the EIGRP packets that an interface sends and receives. Because the debug ip eigrp command generates a substantial amount of output, use it only when traffic on the network is light.
The following criteria can be used but are not recommended, because they typically result in frequent recalculation of the topology table:
Reliability: This value represents the worst reliability between the source and destination, based on keepalives.
Load: This value represents the worst load on a link between the source and destination, computed based on the packet rate and the configured bandwidth of the interface.
EIGRP calculates the metric value using a composite metric formula. The formula uses values K1 through K5, known as the EIGRP metric weights. By default, K1 and K3 are set to 1, and K2, K4, and K5 are set to 0. The result is that only the bandwidth and delay values are used in the computation of the default composite metric. The metric calculation method (K values), as well as the AS number, must match between neighbors.
Note: Although the maximum transmission unit (MTU) is exchanged in EIGRP packets between neighbor routers, MTU is not factored into the EIGRP metric calculation.
By using the show interface command, you can examine the actual values that are used for bandwidth, delay, reliability, and load in the computation of the routing metric.
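With the default K values, the composite metric reduces to the widely documented form metric = 256 × (10^7 / slowest bandwidth in kb/s + cumulative delay in tens of microseconds). The following Python sketch uses hypothetical path values; the helper name is ours:

```python
def eigrp_metric(min_bw_kbps: int, total_delay_usec: int,
                 k1: int = 1, k3: int = 1) -> int:
    """Default EIGRP composite metric (K2 = K4 = K5 = 0):
    only bandwidth and delay contribute."""
    bw = 10**7 // min_bw_kbps       # scaled slowest-link bandwidth (kb/s)
    delay = total_delay_usec // 10  # cumulative delay in tens of microseconds
    return 256 * (k1 * bw + k3 * delay)

# Hypothetical path: slowest link 1544 kb/s (T1), total delay 40,000 usec
print(eigrp_metric(1544, 40_000))
```

For this path, 10^7 // 1544 = 6476 and 40,000 / 10 = 4000, so the metric is 256 × (6476 + 4000) = 2,681,856.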
You can configure EIGRP neighbor authentication (also known as neighbor router authentication or route authentication) such that routers can participate in routing based on predefined passwords. By default, no authentication is used for EIGRP packets. EIGRP can be configured to use Message Digest 5 (MD5) authentication. Without neighbor authentication, unauthorized or deliberately malicious routing updates could compromise the security of your network traffic. A security compromise could occur if any unfriendly party interferes with your network. For example, an unauthorized router could launch a fictitious routing update to convince your router to send traffic to an incorrect destination. When you configure neighbor authentication on a router, the router authenticates the source of each routing update packet that it receives. For EIGRP MD5 authentication, you must configure an authenticating key and a key ID on both the sending and the receiving router. The key is sometimes referred to as a password. By default, no authentication is used for routing protocol packets. The MD5 keyed digest in each EIGRP packet prevents the introduction of unauthorized or false routing messages from unapproved sources. Each key has its own key ID, which the router stores locally. The combination of the key ID and the interface that is associated with the message uniquely identifies the authentication algorithm and the MD5 authentication key in use. EIGRP allows you to manage keys by using key chains. Each key definition within the key chain can specify a time interval for which that key is activated (its lifetime). Then, during the lifetime of a given key, routing update packets are sent with this activated key. Only one authentication packet is sent, regardless of how many valid keys exist. The software examines the key numbers in order from lowest to highest, and it uses the first valid key that it encounters. 
Keys cannot be used during time periods for which they are not activated. Therefore, it is recommended that for a given key chain, key activation times overlap to avoid any period for which no key is activated. If there is a time during which no key is activated, neighbor authentication cannot occur, and therefore, routing updates fail.
If authentication is not successful, the routers will not process EIGRP packets and will not form a neighbor relationship. As a result, the routers will not build the EIGRP tables or populate the IP routing table with EIGRP routes.
You can use the debug eigrp packets command for troubleshooting MD5 authentication. However, to identify potential problems using this command, you should recognize and understand the output of a correctly configured MD5 first.
EIGRP neighbor relationships: Check whether the routing protocol establishes an adjacency with each neighbor and whether there are any problems with the routers forming neighbor relationships.
EIGRP routes in the routing table: Check the routing table for anything unexpected, such as missing routes or unexpected routes. Use debug commands to view routing updates and routing table maintenance.
EIGRP authentication: If authentication is not successful, the routers will not process EIGRP packets and will not form the neighbor adjacency. As a result, the routers will not build the EIGRP tables or populate the IP routing table with EIGRP routes.
Are both routers configured with the same EIGRP process ID?
Is the directly connected network included in the EIGRP network statements?
Is the passive-interface command configured to prevent EIGRP hello packets on the interface?
Is authentication configured properly?
For EIGRP routers to form a neighbor relationship, both routers must share a directly connected IP subnet. A log message that says that EIGRP neighbors are "not on common subnet" indicates that there is an improper IP address on one of the two EIGRP neighbor interfaces. Use the show ip interface command to verify the IP addresses. The network command that is configured under the EIGRP routing process indicates which router interfaces will participate in EIGRP. The "Routing for Networks" section of the show ip protocols command indicates which networks have been configured; any interfaces in those networks participate in EIGRP. Remember, the process ID must be the same on all routers for EIGRP to establish neighbor adjacencies and share routing information. The show ip eigrp interfaces command can quickly indicate on which interfaces EIGRP is enabled and how many neighbors can be found on each interface. EIGRP routers create a neighbor relationship by exchanging hello packets. There are certain fields in the hello packets that must match before an EIGRP neighbor relationship is established:
The autonomous system (AS) number
The K values (metric weights)
You can use the debug eigrp packets command to troubleshoot when hello packet information does not match.
Access control lists (ACLs) provide filtering for different protocols, and they may affect the exchange of routing protocol messages, causing routes to be missing from the routing table. The show ip protocols command shows whether any filter lists are applied to EIGRP. By default, EIGRP is classful and performs automatic network summarization. Automatic network summarization causes connectivity problems in discontiguous networks. The show ip protocols command confirms whether automatic network summarization is in effect.
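If automatic summarization is the culprit in a discontiguous network, it can be turned off under the EIGRP process. A minimal sketch (AS number 100 is illustrative):

```
Router(config)# router eigrp 100
Router(config-router)# no auto-summary
```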
Limit network traffic to improve network performance. For example, if corporate policy does not allow video traffic on the network, ACLs that block video traffic could be
configured and applied. This mechanism would greatly reduce the network load and improve network performance. Provide traffic flow control. ACLs can restrict the delivery of routing updates. If updates are not required because of network conditions, bandwidth is preserved.
Filtering
Provide a basic level of security for network access. ACLs can allow one host to access a part of the network and prevent another host from accessing the same area. For example, access to the Human Resources network can be restricted to select users. Decide which types of traffic to forward or block at the router interfaces. For example, an ACL can permit email traffic, but block all Telnet traffic. Control which areas a client can access on a network. Screen hosts to permit or deny them access to network services. ACLs can permit or deny a user to access file types, such as FTP or HTTP.
ACLs inspect network packets that are based on criteria such as source address, destination address, protocols, and port numbers. In addition to either permitting or denying traffic, an ACL can classify traffic to enable priority processing down the line. This capability is like having a special ticket to a concert or sporting event. The ticket gives selected guests privileges that are not offered to general-admission ticket holders, such as being able to enter a restricted area and to be escorted to their box seats. Access control presents new challenges due to the increase in the use of the Internet and in the number of router connections to outside networks. Network administrators face the dilemma of how to deny unwanted traffic while allowing appropriate access. For example, you can use an ACL as a filter to keep the rest of your network from accessing sensitive data on the finance subnet.
Classification
Routers also use ACLs to identify particular traffic. Once an ACL has identified and classified traffic, you can configure the router with instructions on how to manage that traffic. For example, you can use an ACL to identify the executive subnet as the traffic source and then give that traffic priority over other types of traffic on a congested WAN link. ACLs offer an important tool for controlling traffic on the network. Packet filtering, sometimes called static packet filtering, helps control packet movement through the network by analyzing the incoming and outgoing packets and passing or halting them based on stated criteria. A router acts as a packet filter when it forwards or denies packets according to filtering rules. When a packet arrives at the packet-filtering router, the router extracts certain information from the packet header. It then decides, according to the filter rules, whether the packet can pass through or be discarded.
The crossing of packets to or from specified router interfaces (traffic going through the router)
Telnet traffic into or out of the router vty ports (traffic to and from the router, for router administration)
By default, all IP traffic is permitted in and out of all of the router interfaces. When the router discards packets, some protocols return a special packet to notify the sender that the destination is unreachable. For the IP protocol, an ACL discard results in a "Destination unreachable" (U.U.U.) response to a ping and an "Administratively prohibited" (!A * !A) response to a traceroute. IP ACLs can classify and differentiate traffic. Classification enables you to assign special handling (such as the following) for traffic that is defined in an ACL:
Identify the type of traffic to be encrypted across a virtual private network (VPN) connection. Identify the routes that are redistributed from one routing protocol to another. Use with route filtering to identify which routes are included in the routing updates between routers. Use with policy-based routing to identify the type of traffic that is routed across a designated link. Use with Network Address Translation (NAT) to identify which addresses are translated.
ACL Operation
An ACL is a router configuration script that controls whether a router permits packets to pass, based on criteria found in the packet header. ACLs express the set of rules that give added control over packets that enter inbound interfaces, packets that relay through the router, and packets that exit outbound interfaces of the router. Outbound ACL rules are not applied to traffic that the router itself generates. For example, if an outbound ACL denies Telnet, a Telnet session from the router to an outside host is still permitted even with that outbound ACL applied. ACLs operate in two ways.
Inbound ACLs: Incoming packets are processed before they are routed to an outbound interface. An inbound ACL is efficient because it saves the overhead of routing lookups if the packet will be discarded after the filtering tests deny it. If the tests permit the packet, it is then processed for routing. Outbound ACLs: Incoming packets are routed to the outbound interface, and then they are processed through the outbound ACL.
ACL statements operate in sequential, logical order. They evaluate packets from the top down, one statement at a time. If a packet header and an ACL statement match, the rest of the
statements in the list are skipped. The packet is then permitted or denied as determined by the matched statement. If a packet header does not match an ACL statement, the packet is tested against the next statement in the list. This matching process continues until the end of the list is reached. A final implied statement covers all packets for which conditions did not test true. This final test condition matches all other packets and results in a "deny" instruction. Instead of proceeding into or out of an interface, the router drops all of these remaining packets. This final statement is often referred to as the "implicit deny any statement." Because of this statement, an ACL should have at least one permit statement in it; otherwise, the ACL blocks all traffic. You can apply an ACL to multiple interfaces. However, a general rule for applying ACLs on a router can be recalled by remembering the three Ps. You can configure one ACL per protocol, per direction, per interface:
One ACL per protocol: To control traffic flow on an interface, an ACL must be defined for each protocol that is enabled on the interface (for example, IP or IPX). One ACL per direction: ACLs control traffic in one direction at a time on an interface. Two separate ACLs must be created to control inbound and outbound traffic. One ACL per interface: ACLs control traffic for an interface; for example, the FastEthernet 0/0 interface.
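As a minimal illustration of the three Ps, the following defines a hypothetical standard IP ACL and applies it inbound on a single interface: one ACL for one protocol (IP), one direction (in), one interface (the addresses are illustrative):

```
Router(config)# access-list 10 permit 192.168.1.0 0.0.0.255
Router(config)# interface FastEthernet0/0
Router(config-if)# ip access-group 10 in
```

Any traffic not matched by the permit statement is dropped by the implicit deny at the end of the list.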
Every interface can have multiple protocols and directions defined. The router, for example, can have two interfaces that are configured for IP, AppleTalk, and IPX. This router could possibly require 12 separate ACLs: one ACL for each protocol, times two for each direction, times two for the number of interfaces.
Standard ACLs: Standard IP ACLs check the source addresses of packets that can be routed. The result either permits or denies the output for an entire protocol suite, which is based on the source network, subnet, or host IP address. Extended ACLs: Extended IP ACLs check both the source and destination packet addresses. They can also check for specific protocols, port numbers, and other parameters, which allow administrators more flexibility and control.
There are two methods that you can use to identify standard and extended ACLs:
Numbered ACLs use a number for identification. Named ACLs use a descriptive name or number for identification.
Identifying ACLs
When you create numbered ACLs, you enter an ACL number as the first argument of the global ACL statement. The test conditions for an ACL vary depending on whether the number identifies a standard or extended ACL. You can create many ACLs for a protocol. Select a different ACL number for each new ACL within a given protocol. However, on an interface you can apply only one ACL per protocol, per direction. Specifying an ACL number from 1 to 99 or 1300 to 1999 instructs the router to accept numbered standard IPv4 ACL statements. Specifying an ACL number from 100 to 199 or 2000 to 2699 instructs the router to accept numbered extended IPv4 ACL statements. The named ACL feature allows you to identify IP standard and extended ACLs with an alphanumeric string (name) instead of the numeric representations. Named IP ACLs give you more flexibility in working with the ACL entries. Named ACLs have a large advantage over numbered ACLs in that they are easier to edit, and a name can be chosen to better describe what the ACL does. Since Cisco IOS Software Release 12.3(2)T, it has been possible to delete individual entries in a specific ACL. You can use sequence numbers to insert statements anywhere in a named or numbered ACL. There are two benefits to IP access-list entry-sequence numbering:
You can edit the order of ACL statements. You can remove individual statements from an ACL.
If you are using an earlier Cisco IOS Software version, you can add statements only at the bottom of the named ACL. Because you can delete individual entries, you can modify your ACL without having to delete and then reconfigure the entire ACL. Well-designed and well-implemented ACLs add an important security component to your network. Follow these general principles to ensure that the ACLs you create have the intended results:
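A sketch of editing a named extended ACL with sequence numbers (the name, networks, and sequence numbers are illustrative):

```
Router(config)# ip access-list extended BRANCH-FILTER
Router(config-ext-nacl)# 10 permit tcp 10.1.1.0 0.0.0.255 any eq 80
Router(config-ext-nacl)# 20 deny ip any any log
! Later, insert a statement between the existing entries...
Router(config-ext-nacl)# 15 permit tcp 10.1.1.0 0.0.0.255 any eq 443
! ...or remove a single entry by its sequence number
Router(config-ext-nacl)# no 20
```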
Based upon the test conditions, choose a standard or extended, numbered or named ACL. Only one ACL per protocol, per direction, and per interface is allowed. Multiple ACLs are permitted per interface, but each must be for a different protocol (e.g. IP, IPX, MAC address) or different direction (in or out). Your ACL should be organized to allow processing from the top down. Organize your ACL so that the more specific references to a network or subnet appear before ones that are more general. Place conditions that occur more frequently before conditions that occur less frequently to optimize how the router processes the traffic through the ACL. Your ACL always contains an implicit "deny any" statement at the end.
Unless you end your ACL with an explicit "permit any" statement, by default, the ACL denies all traffic that fails to match any of the ACL lines. Every ACL should have at least one permit statement; otherwise, all traffic is denied. You should create the ACL before applying it to an interface. With most versions of Cisco IOS Software, an interface that has an empty ACL applied to it permits all traffic. Depending on how you apply the ACL, the ACL filters traffic either going through the router or going to and from the router, such as traffic to or from the vty lines.
Every ACL should be placed where it has the greatest impact on efficiency. The basic rules are as follows:
Locate extended ACLs as close as possible to the source of the traffic to be denied. This way, undesirable traffic is filtered without crossing the network infrastructure. Because standard ACLs do not specify destination addresses, place them as close to the destination as possible.
In today's networks, extended ACLs tend to be commonly used for traffic filtering and standard ACLs more for classification. Prior to implementing ACLs, plan them properly and select the type which best fits your network requirements.
Dynamic ACLs (lock-and-key): Users who want to traverse the router are blocked until they use Telnet to connect to the router and are authenticated.
Reflexive ACLs: Allow outbound traffic and limit inbound traffic in response to sessions that originate inside the router.
Time-based ACLs: Allow for access control that is based on the time of day and week.
Dynamic ACLs
Dynamic ACLs depend on Telnet connectivity, authentication (local or remote), and extended ACLs. Lock-and-key configuration starts with the application of an extended ACL to block traffic through the router. The extended ACL blocks users who want to traverse the router until they use Telnet to connect to the router and are authenticated. The Telnet connection is then dropped, and a single-entry dynamic ACL is added to the existing extended ACL. This entry permits traffic for a particular time period; idle and absolute timeouts are possible.
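A minimal lock-and-key sketch, assuming local authentication (the username, addresses, ACL name, and timeouts are illustrative):

```
Router(config)# username remoteuser password s3cret
Router(config)# access-list 101 permit tcp any host 172.16.1.1 eq telnet
Router(config)# access-list 101 dynamic LOCKKEY timeout 120 permit ip any any
Router(config)# interface Serial0/0
Router(config-if)# ip access-group 101 in
Router(config)# line vty 0 4
Router(config-line)# login local
Router(config-line)# autocommand access-enable host timeout 10
```

The static entry permits only Telnet to the router itself; once the user authenticates, access-enable installs the temporary dynamic entry that opens the path through the router.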
When you want a specific remote user or group of remote users to access a host within your network, connecting from their remote hosts via the Internet. Lock-and-key authenticates the user and then permits limited access through your firewall router for a host or subnet for a finite period.
When you want a subset of hosts on a local network to access a host on a remote network that is protected by a firewall. With lock-and-key, you can enable access to the remote host only for the desired set of local hosts. Lock-and-key requires the users to authenticate through a AAA, TACACS+ server, or other security server before it allows their hosts to access the remote hosts.
Use of a challenge mechanism to authenticate individual users Simplified management in large internetworks In many cases, reduction of the amount of router processing that is required for ACLs Reduction of the opportunity for network break-in by network hackers Creation of dynamic user access through a firewall, without compromising other configured security restrictions
Reflexive ACLs
Reflexive ACLs allow IP packets to be filtered based on upper-layer session information. They are generally used to allow outbound traffic and limit inbound traffic in response to sessions that originate from a network inside the router. Reflexive ACLs contain only temporary entries. These entries are automatically created when a new IP session begins (for example, with an outbound packet), and the entries are automatically removed when the session ends. Reflexive ACLs are not applied directly to an interface but are "nested" within an extended named IP ACL that is applied to the interface. Reflexive ACLs can be defined only with extended named IP ACLs. They cannot be defined with numbered or standard named ACLs or with other protocol ACLs. Reflexive ACLs can be used with other standard and static extended ACLs.
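The nesting described above can be sketched as follows, using hypothetical ACL names on a serial interface:

```
Router(config)# ip access-list extended OUTBOUND-FILTER
Router(config-ext-nacl)# permit tcp any any reflect TCP-TRAFFIC
Router(config)# ip access-list extended INBOUND-FILTER
Router(config-ext-nacl)# evaluate TCP-TRAFFIC
Router(config)# interface Serial0/0
Router(config-if)# ip access-group OUTBOUND-FILTER out
Router(config-if)# ip access-group INBOUND-FILTER in
```

Outbound TCP sessions create temporary TCP-TRAFFIC entries, which the inbound list evaluates so that only the corresponding return traffic is admitted.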
Help secure your network against network hackers and can be included in a firewall defense. Provide a level of security against spoofing and certain denial of service (DoS) attacks. Reflexive ACLs are more difficult to spoof because more filter criteria must match before a packet is permitted through. For example, source and destination addresses and port numbers (not just ACK and RST bits) are checked. Are simple to use and, compared to basic ACLs, provide greater control over which packets enter your network.
Time-Based ACLs
Time-based ACLs are like extended ACLs in function, but they allow for access control that is based on time. To implement time-based ACLs, you create a time range that defines specific times of the day and week. The time range is identified by a name and then referenced by a function. Therefore, the time restrictions are imposed on the function itself.
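A sketch, assuming the goal is to permit web traffic only during business hours (the range name and times are illustrative):

```
Router(config)# time-range WORKHOURS
Router(config-time-range)# periodic weekdays 8:00 to 17:00
Router(config)# access-list 101 permit tcp any any eq www time-range WORKHOURS
```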
The network administrator has more control over permitting or denying a user access to resources. One resource could be, for example, an application. This application could be identified by an IP address and mask pair and port number, or by policy routing, or by an on-demand link (which would be identified as interesting traffic to the dialer). Network administrators can set time-based security policies such as the following: Perimeter security using the Cisco IOS Firewall Feature Set or ACLs Data confidentiality with Cisco Encryption Technology or IP Security (IPsec) Policy-based routing and queuing functions are enhanced. When provider access rates vary by time of day, it is possible to automatically reroute traffic cost effectively. Service providers can dynamically change a committed access rate (CAR) configuration to support the quality of service (QoS) service level agreements (SLAs) that are negotiated for certain times of day. Network administrators can control logging messages. ACL entries can log traffic at certain times of the day, but not constantly. Therefore, administrators can simply deny access without analyzing the many logs that are generated during peak hours.
Restricting vty access is primarily a technique for increasing network security and defining which addresses are allowed Telnet access to the router EXEC process. Filtering Telnet traffic is typically considered an extended IP ACL function because it filters a higher-level protocol. However, because you use the access-class command to filter incoming or outgoing Telnet sessions by source address and apply the filtering to vty lines, you can use standard IP ACL statements to control vty access.
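A sketch restricting vty access to a single management subnet (the subnet is illustrative):

```
Router(config)# access-list 12 permit 192.168.1.0 0.0.0.255
Router(config)# line vty 0 4
Router(config-line)# access-class 12 in
```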
Comments, also known as remarks, are ACL statements that are not processed. They are simple descriptive statements that you can use to better understand and troubleshoot either named or numbered ACLs. Each remark line is limited to 100 characters. A remark can go before or after a permit or deny statement, but you should be consistent about where you put it so that it is clear which remark describes which permit or deny statement. It would be confusing to have some remarks before the associated permit or deny statements and some remarks after the associated statements. To add a comment to a named IP ACL, use the remark command in access-list configuration mode. To add a comment to a numbered IP ACL (for example, ACL 101), use the access-list 101 remark command. The remark keyword is used for documentation and makes access lists easier to understand. When reviewing the ACL in the configuration, the remark is also displayed. Using ACLs requires attention to detail and great care. Mistakes can be costly in terms of downtime, troubleshooting efforts, and poor network service. Before starting to configure an ACL, basic planning is required.
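A sketch of both remark styles (the ACL name, number, and networks are illustrative):

```
! Named ACL
Router(config)# ip access-list extended SERVER-ACCESS
Router(config-ext-nacl)# remark Allow only mail traffic to the mail server
Router(config-ext-nacl)# permit tcp any host 10.1.1.25 eq smtp
! Numbered ACL
Router(config)# access-list 101 remark Block Telnet to the server farm
Router(config)# access-list 101 deny tcp any 10.1.1.0 0.0.0.255 eq telnet
```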
Inside local address: The IP address that is assigned to a host on the inside network. The inside local address is likely not an IP address that the Network Information Center (NIC) or service provider assigned. Inside global address: A legitimate IP address that the NIC or service provider assigned that represents one or more inside local IP addresses to the outside world. Outside local address: The IP address of an outside host as it appears to the inside network. Not necessarily legitimate, the outside local address is allocated from an address space routable on the inside. In most situations, this address will be identical to the outside global address of that outside device. Outside global address: The IP address that is assigned to a host on the outside network by the host owner. The outside global address is allocated from a globally routable address or network space.
A good way to remember which address is local and which is global is to add the word "visible": an address that is locally visible normally implies a private IP address, and an address that is globally visible normally implies a public IP address. After that, the rest is simple: "inside" means internal to your network, and "outside" means external to your network. So, for example, an inside global address means that the device is physically inside your network and has an address that is visible from the Internet; this could be a web server, for instance. NAT has many forms and can work in the following ways:
Static NAT: Manually entered by the network administrator and maps an unregistered IPv4 address to a registered IPv4 address (one-to-one). Static NAT is particularly useful when a device must be accessible from outside the network.
Dynamic NAT: Maps an unregistered IPv4 address to a registered IPv4 address from a pool of registered IPv4 addresses (many-to-many).
NAT overloading: A form of dynamic NAT that maps multiple unregistered IPv4 addresses to a single registered IPv4 address (many-to-one) by using different ports. Overloading is also known as PAT (Port Address Translation).
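The static and dynamic forms can be sketched as follows (all addresses, pool names, and interfaces are illustrative):

```
! Static NAT: one-to-one
Router(config)# ip nat inside source static 10.1.1.10 203.0.113.10
! Dynamic NAT: many-to-many, from a pool
Router(config)# ip nat pool PUBLIC-POOL 203.0.113.20 203.0.113.30 netmask 255.255.255.0
Router(config)# access-list 1 permit 10.1.1.0 0.0.0.255
Router(config)# ip nat inside source list 1 pool PUBLIC-POOL
! In every case, mark the inside and outside interfaces
Router(config)# interface FastEthernet0/0
Router(config-if)# ip nat inside
Router(config)# interface Serial0/0
Router(config-if)# ip nat outside
```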
NAT provides many benefits and advantages. However, there are some drawbacks to using NAT, including the lack of support for some types of traffic. The benefits of using NAT include the following:
NAT conserves the legally registered addressing scheme by allowing the privatization of intranets. NAT conserves addresses through application port-level multiplexing. With NAT overload, internal hosts can share a single public IP address for all external communications. In this type of configuration, very few external addresses are required to support many internal hosts.
NAT increases the flexibility of connections to the public network. Multiple pools, backup pools, and load-balancing pools can be implemented to ensure reliable public network connections. NAT provides consistency for internal network addressing schemes. On a network without private IP addresses and NAT, changing public IP addresses requires the renumbering of all hosts on the existing network. The costs of renumbering hosts can be significant. NAT allows the existing scheme to remain while supporting a new public addressing scheme. This means that an organization could change ISPs without having to change any of its inside clients. NAT enhances network security. Because private networks do not advertise their addresses or internal topology, they remain reasonably secure when used with NAT to gain controlled external access. However, NAT does not replace firewalls.
NAT does have some drawbacks. The fact that hosts on the Internet appear to communicate directly with the NAT device, rather than with the actual host inside the private network, creates a number of issues. In theory, a single globally unique IP address can represent many privately addressed hosts. This has advantages from a privacy and security point of view, but in practice, there are drawbacks. The first disadvantage affects performance. NAT increases switching delays because the translation of each IP address within the packet headers takes time. The first packet is process-switched, meaning that it always goes through the slower path. The router must look at every packet to decide whether it needs translation. The router needs to alter the IP header, and possibly alter the TCP or UDP header. Remaining packets go through the fast-switched path if a cache entry exists; otherwise, they too are delayed. Many IP protocols and applications depend on end-to-end functionality, with unmodified packets forwarded from the source to the destination. By changing end-to-end addresses, NAT blocks some applications that use IP addressing. For example, some security applications, such as digital signatures, fail because the source IP address changes. Applications that use IP addresses instead of a fully qualified domain name do not reach destinations that are translated across the NAT router. Sometimes, this problem can be avoided by implementing static NAT mappings. End-to-end IP traceability is also lost. It becomes much more difficult to trace packets that undergo numerous address changes over multiple NAT hops, so troubleshooting is challenging. On the other hand, hackers who want to determine the source of a packet find it difficult to trace or obtain the original source or destination address.
Using NAT also complicates tunneling protocols such as IPsec, because NAT modifies values in the packet headers, which interferes with the integrity checks done by IPsec and other tunneling protocols. Services that require the initiation of TCP connections from the outside network, or stateless protocols such as those using UDP, can be disrupted. Unless the NAT router makes a specific effort to support such protocols, incoming packets cannot reach their destination. Some protocols can accommodate one instance of NAT between participating hosts (passive mode FTP, for example), but fail when both systems are separated from the Internet by NAT. One of the main forms of NAT is Port Address Translation (PAT), which is also referred to as "overload" in Cisco IOS configuration. Several inside local addresses can be translated into just one or a few inside global addresses by using PAT. Most home routers operate in this manner: your ISP assigns one address to your router, yet several members of your family can simultaneously surf the Internet. With NAT overloading, multiple addresses can be mapped to one or to a few addresses because a TCP or UDP port number tracks each private address. When a client opens a TCP/IP session, the NAT router assigns a port number to its source address. NAT overload ensures that clients use a different TCP or UDP port number for each session with a server on the Internet. When a response comes back from the server, the source port number (which becomes the destination port number on the return trip) determines to which client the router routes the packets. It also validates that the incoming packets were requested, thus adding a degree of security to the session.
PAT uses unique source port numbers on the inside global IPv4 address to distinguish between translations. Because the port number is encoded in 16 bits, the total number of internal addresses that NAT can translate into one external address is, theoretically, as many as 65,536. PAT attempts to preserve the original source port. If the source port is already allocated, PAT attempts to find the first available port number. It starts from the beginning of the appropriate port group, 0 to 511, 512 to 1023, or 1024 to 65535. If PAT does not find an available port from the appropriate port group and if more than one external IPv4 address is configured, PAT moves to the next IPv4 address and tries to allocate the original source port again. PAT continues trying to allocate the original source port until it runs out of available ports and external IPv4 addresses.
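A minimal PAT (overload) sketch, translating an illustrative inside subnet to the address of the outside interface:

```
Router(config)# access-list 1 permit 10.1.1.0 0.0.0.255
Router(config)# ip nat inside source list 1 interface Serial0/0 overload
```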
Use the command show ip nat translations in EXEC mode to display active translation information.
Step 2 NAT statements are parsed so that the interface serial 0 IPv4 address can be used in overload mode. PAT creates a source address to use.
Step 3 The router encapsulates the packet and sends it out on interface serial 0.
Step 4 The NAT outside-to-inside address translation process works in sequence.
Step 5 NAT statements are parsed. The router looks for an existing translation and identifies the appropriate destination address.
Step 6 The packet goes to the routing table, and the next-hop interface is determined.
Step 7 The packet is encapsulated and sent out the local interface.
No internal addresses are visible during this process. As a result, hosts do not expose an external public address of their own, so security is improved. By default, translation entries time out after 24 hours, unless the timers have been reconfigured with the ip nat translation timeout command in global configuration mode. It is sometimes useful to clear the dynamic entries sooner than the default, especially when testing the NAT configuration. To clear dynamic entries before the timeout has expired, use the clear ip nat translation global command. You can be very specific about which translation to clear, or you can clear all translations from the table using the clear ip nat translation * global command. Only the dynamic translations are cleared from the table. Static translations cannot be cleared from the translation table.
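The timeout and clearing commands described above can be sketched as follows (the one-hour timeout is illustrative):

```
Router(config)# ip nat translation timeout 3600   ! seconds; the default is 86400 (24 hours)
Router# clear ip nat translation *                ! remove all dynamic translations
Router# show ip nat translations                  ! verify that only static entries remain
```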
Step 3 Verify whether the translation is occurring by using the show and debug commands.
Step 4 Review in detail what is happening to the translated packet, and verify that routers have the correct routing information for the translated address to move the packet.
Step 5 If the appropriate translations are not in the translation table, verify the following items:
No inbound ACLs are denying the packets entry into the NAT router.
The ACL associated with the NAT command is permitting all necessary networks.
There are enough addresses in the NAT pool.
The router interfaces are appropriately defined as NAT inside or NAT outside.
In a simple network environment, it is useful to monitor NAT statistics with the show ip nat statistics command. The show ip nat statistics command displays information about the total number of active translations, NAT configuration parameters, how many addresses are in the pool, and how many have been allocated. However, in a more complex NAT environment with several translations taking place, this show command may not clearly identify the issue. In this case, it may be necessary to run debug commands on the router. The debug ip nat command displays information about every packet that the router translates, which helps you verify the operation of the NAT feature. The debug ip nat detailed command generates a description of each packet that is considered for translation. This command also outputs information about certain errors or exception conditions, such as the failure to allocate a global address. The debug ip nat detailed command generates more overhead than the debug ip nat command, but it can provide the detail that you need to troubleshoot the NAT problem. Always remember to turn off debugging when finished. When decoding the debug output, note what the following symbols and values indicate:
*: The asterisk next to NAT indicates that the translation is occurring in the fast-switched path. The first packet in a conversation is always process-switched, which is slower. The remaining packets go through the fast-switched path if a cache entry exists.
s=: Refers to the source IP address.
a.b.c.d->w.x.y.z: Indicates that source address a.b.c.d is translated to w.x.y.z.
d=: Refers to the destination IP address.
[xxxx]: The value in brackets is the IP identification number. This information may be useful for debugging in that it enables correlation with other packet traces from protocol analyzers.
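For instance, a single line of debug ip nat output for a fast-switched packet might resemble the following, where inside address 10.1.1.1 is translated to 172.16.1.5 on its way to destination 192.168.2.2. The addresses and IP identification number here are hypothetical:

NAT*: s=10.1.1.1->172.16.1.5, d=192.168.2.2 [1025]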
What the NAT configuration is supposed to accomplish
That the NAT entry exists in the translation table and that it is accurate
That the translation is actually taking place by monitoring the NAT process or statistics
That the NAT router has the appropriate route in the routing table if the packet is going from inside to outside
That all necessary routers have a return route back to the translated address
You can verify if any translations have ever taken place and identify the interfaces between which translation should be occurring. Use the show ip nat statistics command to determine this information. After you correctly define the NAT inside and outside interfaces, generate another ping from host A to host B. If the ping still fails, troubleshoot the problem by using the show ip nat translations and show ip nat statistics commands again. Next, use the show access-list command to verify whether the ACL that the NAT command references is permitting all of the necessary networks.
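This verification sequence can be summarized as the following series of commands, run in order while the test traffic is being generated:

RouterX#show ip nat translations
RouterX#show ip nat statistics
RouterX#show access-list
RouterX#debug ip nat
RouterX#undebug all

Remember to disable debugging (undebug all) when finished.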
Because of this rapid growth, the finite pool of globally unique IPv4 addresses has almost run out. The growth of the Internet, matched by increasing computing power, has extended the reach of IP-based applications. Some reports predict IPv4 address exhaustion by 2012, and others by 2014. IPv4 address exhaustion has been threatening for more than 15 years. Over the years, the American Registry for Internet Numbers (ARIN) and the Internet Assigned Numbers Authority (IANA) have allocated address blocks that were previously "unused" or "reserved." This delayed the depletion by several years but did not solve the problem. The result of the depletion will be that no new IPv4 addresses will be available; existing IPv4 addresses will, of course, still be usable. The major impact over the next several years will be on Internet presence (websites, e-commerce, and email). While many enterprises have enough address space (public or private) to manage their intranet needs for the next few years, the length of time needed to transition to IPv6 demands that administrators and managers consider the issue well in advance. The largest enterprises may need to act sooner rather than later to ensure sufficient enterprise connectivity. The change from IPv4 to IPv6 has already begun, particularly in Europe, Japan, and the Asia-Pacific region. These areas are exhausting their allotted IPv4 addresses, which makes IPv6 even more necessary. Some countries, such as Japan, are aggressively adopting IPv6. Others, such as those in the European Union, are moving toward IPv6, and China is building new networks dedicated to IPv6; the 2008 Beijing Olympics ran on the China Next Generation Internet (CNGI), an IPv6 network. As of October 1, 2003, the U.S. Department of Defense mandated that all new equipment purchased be IPv6 capable. Given the huge installed base of IPv4 in the world, it is easy to appreciate that transitioning from IPv4 to IPv6 is a challenge.
However, various techniques, including an autoconfiguration option, can make the transition easier. The transition mechanism that you use depends on the needs of your network. Some people argue that IPv6 would not exist if there were no recognized depletion of available IPv4 addresses. However, beyond the increased IP address space, the development of IPv6 presented an opportunity to apply lessons learned from the limitations of IPv4 and create a protocol with new and improved features. A simplified header architecture and protocol operation translate into reduced operational expenses. Built-in security features mean easier security practices that are sorely lacking in many current networks. Perhaps the most significant IPv6 improvement, however, is address autoconfiguration. The Internet is rapidly evolving from a collection of stationary devices to a fluid network of mobile devices. IPv6 allows mobile devices to quickly acquire and transition between addresses as they move among foreign networks, with no need for a foreign agent. (A foreign agent is a router that can function as the point of attachment for a mobile device when it roams from its home network to a foreign network.)
Address autoconfiguration also means more robust plug-and-play network connectivity. Autoconfiguration supports consumers who can have any combination of computers, printers, digital cameras, digital radios, IP phones, Internet-enabled household appliances, and robotic toys connected to their home networks. Many manufacturers already integrate IPv6 into their products. IPv6 is a powerful enhancement to IPv4, and several of its features offer functional improvements. Some of these features are available in IPv4 only through additional extensions; in IPv6, they are built in. What IP developers learned from using IPv4 suggested changes to better suit current and probable future network demands:
Enhanced IP addressing: The larger address space includes several enhancements:
o Improved global reachability and flexibility.
o Better aggregation of IP prefixes that are announced in routing tables.
o Multihomed hosts. Multihoming is a technique that increases the reliability of the Internet connection of an IP network. With IPv6, a host can have multiple IP addresses over one physical upstream link. For example, a host can connect to several ISPs.
o Autoconfiguration that can include data link layer addresses in the address space.
o More plug-and-play options for more devices.
o Public-to-private, end-to-end readdressing without address translation. This enhancement makes peer-to-peer networking more functional and easier to deploy.
o Simplified mechanisms for address renumbering and modification.
Simpler header: A simpler header offers several advantages over IPv4:
o Better routing efficiency for performance and forwarding-rate scalability.
o No broadcasts and thus no potential threat of broadcast storms.
o No requirement for processing checksums.
o Simpler and more efficient extension-header mechanisms.
o Flow labels for per-flow processing with no need to open the inner transport packet to identify the various traffic flows.
Mobility and security: Mobility and security help ensure compliance with Mobile IP and IP Security (IPsec) standards. Mobility enables people with mobile network devices, many with wireless connectivity, to move around in networks.
The Internet Engineering Task Force (IETF) Mobile IP standard is available for both IPv4 and IPv6. The standard enables mobile devices to move without breaks in established network connections. Mobile devices use a home address and a care-of address to achieve this mobility. With IPv4, these addresses are manually configured. With IPv6, the configurations are dynamic, giving IPv6-enabled devices built-in mobility. IPsec is available for both IPv4 and IPv6. Although the functionalities are essentially identical in both environments, IPsec is mandatory in IPv6, making the IPv6 Internet more secure.
Transition richness : IPv4 will not disappear overnight. Rather, it will coexist with IPv6, which will gradually replace it. For this reason, IPv6 was delivered with migration techniques to cover every conceivable IPv4 upgrade case. However, the technology community ultimately rejected many of them. There are several ways to incorporate existing IPv4 capabilities with the added features of IPv6:
Implement a dual-stack method, with both IPv4 and IPv6 configured on the interface of a network device.
Use tunneling, which will become more prominent as the adoption of IPv6 grows. There are various IPv6-over-IPv4 tunneling methods; some require manual configuration, while others are more automatic.
Use Network Address Translation-Protocol Translation (NAT-PT) between IPv6 and IPv4, included in Cisco IOS Release 12.3(2)T and later. This translation allows direct communication between hosts that use different versions of the IP protocol.
The leading zeros in a field are optional, so that 09C0 equals 9C0, and 0000 equals 0. Successive fields of zeros can be represented as "::" only once in an address. An unspecified address is written as "::" because it contains only zeros.
Using the "::" notation, also known as the double colon, greatly reduces the size of most addresses. For example, FF01:0:0:0:0:0:0:1 becomes FF01::1. This formatting is in contrast to the 32-bit dotted decimal notation of IPv4. Note: An address parser identifies the number of missing zeros by separating the two parts and entering 0s until the 128 bits are complete. If two "::" notations were placed in the address, there would be no way to identify the size of each block of zeros. Broadcasting in IPv4 can cause problems. Broadcasting generates a number of interrupts in every computer on the network and, in some cases, triggers malfunctions that can completely halt an entire network. This disastrous network event is known as a "broadcast storm." In IPv6, broadcasting does not exist. IPv6 replaces broadcasts with multicasts and anycasts. Multicast enables efficient network operation by using a number of functionally specific multicast groups to send requests to a limited number of computers on the network. The multicast groups prevent most of the problems that are related to broadcast storms in IPv4.
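The compression rules can be illustrated with a few addresses (the first is hypothetical):

2001:0DB8:0000:130F:0000:0000:087C:140B becomes 2001:DB8:0:130F::87C:140B
FF01:0000:0000:0000:0000:0000:0000:0001 becomes FF01::1
0000:0000:0000:0000:0000:0000:0000:0001 becomes ::1

Note that in the first address, only one run of zero fields can be replaced with "::"; the remaining zero field must still be written as 0.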
The range of multicast addresses in IPv6 is larger than in IPv4. For the near future, allocation of multicast groups is not being limited. IPv6 also defines a new type of address that is called an anycast address. An anycast address identifies a list of devices or nodes; therefore, an anycast address identifies multiple interfaces. Anycast addresses are like a cross between unicast and multicast addresses. Unicast sends packets to one specific device with one specific address, and multicast sends a packet to every member of a group. Anycast addresses send a packet to any one member of the group of devices with the anycast address assigned. For efficiency, a packet that is sent to an anycast address is delivered to the closest interface (as defined by the routing protocols in use) that is identified by the anycast address, so anycast can also be thought of as a "one-to-nearest" type of address. Anycast addresses are syntactically indistinguishable from global unicast addresses because anycast addresses are allocated from the global unicast address space. Note: Internet anycast addresses have not been widely used. Generally speaking, some known complications and hazards can develop when they are used. Until more experience has been gained and solutions have been agreed upon for those problems, the following restrictions are imposed on IPv6 anycast addresses: (1) An anycast address MUST NOT be used as the source address of an IPv6 packet. (2) An anycast address MUST NOT be assigned to an IPv6 host; that is, it may be assigned to an IPv6 router only.
Global Addresses
The IPv6 global unicast address is the equivalent of the IPv4 global unicast address. A global unicast address is an IPv6 address from the global unicast prefix. The structure of global unicast addresses enables the aggregation of routing prefixes, which limits the number of routing table entries in the global routing table. Global unicast addresses that are used on links are aggregated upward through organizations and eventually to the ISPs. A global routing prefix, a subnet ID, and an interface ID define global unicast addresses. The IPv6 unicast address space encompasses the entire IPv6 address range, except for FF00::/8 (1111 1111), which is used for multicast addresses. The current global unicast address range assigned by the IANA starts with binary value 001 (2000::/3), which is 1/8 of the total IPv6 address space and is the largest block of assigned addresses. Addresses with a prefix of 2000::/3 (001) through E000::/3 (111) are required to have 64-bit interface identifiers in the extended unique identifier (EUI-64) format.
The IANA is currently allocating the IPv6 address space in the range of 2001::/16 to the registries. The global unicast address typically consists of a 48-bit global routing prefix and a 16-bit subnet ID. Individual organizations can use a 16-bit subnet field called "Subnet ID" to create their own local addressing hierarchy and to identify subnets. This field allows an organization to use up to 65,535 individual subnets.
Reserved Addresses
The IETF reserved a portion of the IPv6 address space for various uses, both present and future. Reserved addresses represent 1/256th of the total IPv6 address space. Some of the other types of IPv6 addresses come from this block.
Link-Local Addresses
A block of IPv6 addresses is set aside for private addresses, just as is done in IPv4. These private addresses are local only to a particular link and are therefore never routed outside of a particular company network. Private addresses have a first octet value of "FE" in hexadecimal notation, with the next hexadecimal digit being 8. Link-local addresses are new to the concept of addressing with IP in the network layer. These addresses refer only to a particular physical link (physical network). Routers do not forward datagrams using link-local addresses at all, not even within the organization; they are only for local communication on a particular physical network segment. They are used for link communications such as automatic address configuration, neighbor discovery, and router discovery. Many IPv6 routing protocols also use link-local addresses. Link-local addresses typically begin with "FE80". The remaining digits can be defined manually; if they are not, the interface ID is derived from the interface MAC address, resulting in an address composed of the FE80::/64 prefix followed by an interface ID in EUI-64 format.
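As an illustration of the EUI-64 process, consider the hypothetical MAC address 00:0C:29:C4:12:34:

MAC address:                          00:0C:29:C4:12:34
Insert FFFE between the two halves:   000C:29FF:FEC4:1234
Flip the seventh bit (U/L bit) of
the first byte (00 becomes 02):       020C:29FF:FEC4:1234
Resulting link-local address:         FE80::20C:29FF:FEC4:1234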
Loopback Address
Just as in IPv4, a provision has been made for a special loopback IPv6 address for testing; datagrams that are sent to this address "loop back" to the sending device. However, in IPv6 there is just one address, not a whole block, for this function. The loopback address is 0:0:0:0:0:0:0:1, which is normally expressed (using zero compression) as "::1".
Unspecified Address
In IPv4, an IP address of all zeroes has a special meaning; it refers to the host itself, and is used when a device does not know its own address. In IPv6, this concept has been formalized, and the all-zeroes address (0:0:0:0:0:0:0:0) is named the "unspecified" address. It is typically used in the source field of a datagram that is sent by a device that seeks to have its IP address configured.
You can apply address compression to this address; because the address is all zeroes, the address becomes just "::".
Ethernet (Cisco supports this data link layer.)
PPP (Cisco supports this data link layer.)
High-Level Data Link Control (HDLC) (Cisco supports this data link layer.)
FDDI
Token Ring
Attached Resource Computer network (ARCnet)
Nonbroadcast multiaccess (NBMA)
ATM (Cisco supports only ATM permanent virtual circuit [PVC], not switched virtual circuit [SVC] or ATM LAN Emulation [LANE].)
Frame Relay (Cisco supports only Frame Relay PVC, not SVC.)
IEEE 1394
An RFC describes the behavior of IPv6 in each of these specific data link layers, but Cisco IOS Software does not necessarily support all of them. The data link layer defines how IPv6 interface identifiers are created and how neighbor discovery manages data link layer address resolution. Larger address spaces make room for large address allocations to ISPs and organizations. An ISP aggregates all of the prefixes of its customers into a single prefix and announces the single prefix to the IPv6 Internet. The increased address space is sufficient to allow organizations to define a single prefix for their entire network as well. Aggregation of customer prefixes results in an efficient and scalable routing table. Scalable routing is necessary to support broader adoption of network functions. Scalable routing also improves network bandwidth and functionality for user traffic that connects the various devices and applications. Internet usage, both now and in the future, can include the following elements:
A huge increase in the number of broadband consumers with high-speed connections that are always on.
Users who spend more time online and are generally willing to spend more money on communications services (such as downloading music) and high-value searchable offerings.
Home networks with expanded network applications such as wireless VoIP, home surveillance, and advanced services such as real-time video on demand (VoD).
Massively scalable games with global participants and media-rich e-learning, providing learners with on-demand remote labs or lab simulations.
Static assignment using a manual interface ID
Static assignment using an EUI-64 interface ID
Stateless autoconfiguration
DHCP for IPv6 (DHCPv6)
Stateless Autoconfiguration
Autoconfiguration, as the name implies, is a mechanism that automatically configures the IPv6 address of a node. In IPv6, it is assumed that non-PC devices, as well as computer terminals, will be connected to the network. The autoconfiguration mechanism was introduced to enable plug-and-play networking of these devices and to help reduce administration overhead.
DHCPv6 (Stateful)
DHCP for IPv6 enables DHCP servers to pass configuration parameters such as IPv6 network addresses to IPv6 nodes. It offers the capability of automatic allocation of reusable network addresses and additional configuration flexibility. This protocol is a stateful counterpart to IPv6 stateless address autoconfiguration (RFC 2462), and can be used separately or concurrently with IPv6 stateless address autoconfiguration to obtain configuration parameters.
Stateless Autoconfiguration
Stateless autoconfiguration is a key feature of IPv6. It enables serverless basic configuration of nodes as well as easy renumbering. Stateless autoconfiguration uses the information in router advertisement messages to configure the node. The prefix included in the router advertisement is used as the /64 prefix for the node address. The dynamically created interface identifier (which in the case of Ethernet is in the modified EUI-64 format) provides the other 64 bits. Routers periodically send router advertisements (RAs). When a node boots, it needs its address early in the boot process, and waiting for the next periodic router advertisement could take too long. Instead, a node sends a router solicitation (RS) message to the routers on the network, asking them to reply immediately with a router advertisement so that the node can immediately autoconfigure its IPv6 address. All of the routers respond with a normal router advertisement message, with the all-nodes multicast address as the destination address. Autoconfiguration enables plug-and-play configuration of an IPv6 device, which allows devices to connect themselves to the network without any configuration from an administrator and without any servers, such as DHCP servers. This key feature enables the deployment of new devices on the Internet, such as cellular phones, wireless devices, home appliances, and home networks.
DHCPv6 (Stateful)
DHCPv6 is an updated version of DHCP for IPv4. It supports the addressing model of IPv6 and benefits from new IPv6 features. DHCPv6 has the following characteristics:
Enables more control than serverless or stateless autoconfiguration
Can be used in an environment that uses only servers and no routers
Can be used concurrently with stateless autoconfiguration
Can be used for renumbering
Can be used for automatic domain name registration of hosts using the Dynamic Domain Name System (DDNS)
The process for acquiring configuration data for a DHCPv6 client is like the one in IPv4, with a few exceptions. The client must first detect the presence of routers on the link by using neighbor discovery messages. If at least one router is found, then the client examines the router advertisements to determine if DHCPv6 should be used. If the router advertisements enable the use of DHCPv6 on that link, or if no router is found, then the client starts a DHCP solicit phase to find a DHCP server. DHCPv6 uses multicast for many messages. When the client sends a solicit message, it sends the message to the ALL-DHCP-Agents multicast address with link-local scope (FF02::1:2). Agents include both servers and relays. When a DHCP relay forwards a message, it can forward the message to the All-DHCP-Servers multicast address with site-local scope (FF05::1:3). This forwarding means that you do not need to configure a relay with all of the static addresses of the DHCP servers, as in IPv4. If you want only specific DHCP servers to receive the messages, or if there is a problem forwarding multicast traffic to all of the network segments that contain a DHCP server, a relay can contain a static list of DHCP servers. You can configure different DHCPv6 servers, or the same server with different contexts, to assign addresses that are based on different policies. For example, you could configure one DHCPv6 server to give global addresses using a more restrictive policy, such as "do not give addresses to printers." You could then configure another DHCPv6 server, or the same server within a different context, to give site-local addresses using a more liberal policy, such as "give to anyone."
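As a sketch of a stateful DHCPv6 server configuration (the pool name, prefix, and DNS address below are hypothetical, and command availability depends on the Cisco IOS release):

RouterX(config)#ipv6 dhcp pool POOL-A
RouterX(config-dhcpv6)#address prefix 2001:DB8:1:1::/64
RouterX(config-dhcpv6)#dns-server 2001:DB8:1:1::10
RouterX(config-dhcpv6)#exit
RouterX(config)#interface fastethernet0/0
RouterX(config-if)#ipv6 dhcp server POOL-A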
Alternatively, you can manually configure the exact IPv6 address that should be assigned to a router interface by using the ipv6 address command in interface configuration mode. Note: Configuring an IPv6 address on an interface automatically configures the link-local address for that interface. To display the status of interfaces that are configured for IPv6, use the show ipv6 interface command.
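For example, to enable IPv6 routing, assign an address to an interface (the address shown is from the 2001:DB8::/32 documentation prefix), and verify it:

RouterX(config)#ipv6 unicast-routing
RouterX(config)#interface fastethernet0/0
RouterX(config-if)#ipv6 address 2001:DB8:1:1::1/64
RouterX(config-if)#end
RouterX#show ipv6 interface fastethernet0/0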
Is based on IPv4 Routing Information Protocol (RIP) version 2 (RIPv2) and is like RIPv2
Uses IPv6 for transport
Includes the IPv6 prefix and next-hop IPv6 address
Uses the multicast group FF02::9, the all-RIP-routers multicast group, as the destination address for RIP updates
Sends updates on User Datagram Protocol (UDP) port 521
Is supported by Cisco IOS Release 12.2(2)T and later
To enable RIPng routing on the router, use the ipv6 router rip name global configuration command. The name parameter is a tag that identifies the RIP process. This process name is used later when configuring RIPng on participating interfaces. The name is only locally significant to the router and does not have to be the same on all routers. For RIPng, instead of using the network command to identify which interfaces should run RIPng, you use the ipv6 rip v6process enable command in interface configuration mode to enable RIPng on an interface. The v6process name parameter of the ipv6 rip enable command must match the name parameter in the ipv6 router rip command (ipv6 router rip v6process). To verify the configuration of RIP, use the show ipv6 rip command or the show ipv6 route rip command. Note: Enabling RIP on an interface dynamically creates a "router rip" process, if necessary. Note: Most show commands support IPv6 and are usually used simply by adding the ipv6 keyword. For example, using show ipv6 route instead of show ip route shows the content of the IPv6 routing table.
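Putting these commands together, a minimal RIPng configuration (using the v6process tag from the text; the interface is hypothetical) might look like this:

RouterX(config)#ipv6 unicast-routing
RouterX(config)#ipv6 router rip v6process
RouterX(config-rtr)#exit
RouterX(config)#interface fastethernet0/0
RouterX(config-if)#ipv6 rip v6process enable
RouterX(config-if)#end
RouterX#show ipv6 rip
RouterX#show ipv6 route rip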
To specify an external DNS server to resolve IPv6 addresses, use the ip name-server command. The address can be an IPv4 or IPv6 address. You can specify up to six DNS servers with this command. For example, configure a DNS server (IPv6 address 2001:db8:1:1::10) to query with this command:
RouterX(config)#ip name-server 2001:db8:1:1::10
Configuring name resolution on a router is done for the convenience of a technician who uses the router to access other devices on the network by name. It does not affect the operation of the router and this DNS server name is not advertised to DHCP clients.
Data terminal equipment (DTE): Generally considered to be the terminating equipment for a specific network. DTE devices are typically located on the customer premises and may be owned by the customer. Examples of DTE devices are Frame Relay Access Devices (FRADs), routers, and bridges.
Data communications equipment (DCE): Carrier-owned internetworking devices. The purpose of DCE devices is to provide clocking and switching services in a network and to transmit data through the WAN. In most cases, the switches in a WAN are Frame Relay switches.
Frame Relay provides a means for statistically multiplexing many logical data conversations, referred to as virtual circuits (VCs), over a single physical transmission link by assigning connection identifiers to each pair of DTE devices. The service provider switching equipment constructs a switching table that maps the connection identifier to outbound ports. When a frame is received, the switching device analyzes the connection identifier and delivers the frame to the associated outbound port. The complete path to the destination is established prior to the transmission of the first frame. The following terms are used frequently in Frame Relay discussions and may be the same or slightly different from the terms your Frame Relay service provider uses.
Local access rate: Clock speed (port speed) of the connection (local loop) to the Frame Relay cloud. The local access rate is the rate at which data travels into or out of the network, regardless of other settings.
VC: Logical circuit, uniquely identified by a data-link connection identifier (DLCI), which is created to ensure bidirectional communication from one DTE device to another. A number of VCs can be multiplexed into a single physical circuit for transmission across the network. This capability can often reduce the complexity of the equipment and network that is required to connect multiple DTE devices. A VC can pass through any number of intermediate DCE devices (Frame Relay switches). A VC can be either a permanent virtual circuit (PVC) or a switched virtual circuit (SVC).
PVC: Provides permanently established connections that are used for frequent and consistent data transfers between DTE devices across the Frame Relay network. Communication across a PVC does not require the call setup and call teardown that is used with an SVC.
Switched VC (SVC): Provides temporary connections that are used in situations that require only sporadic data transfer between DTE devices across the Frame Relay network. SVCs are dynamically established on demand and are torn down when transmission is complete.
Data-link connection identifier (DLCI): Frame Relay virtual circuits are identified by DLCIs. The Frame Relay service providers (for example, telephone companies) typically assign DLCI values. A DLCI is a 10-bit number in the address field of the Frame Relay frame header that identifies the VC. DLCIs have local significance because the identifier references the point between the local router and the local Frame Relay switch to which the DLCI is connected. Therefore, devices at opposite ends of a connection can use different DLCI values to refer to the same virtual connection.
Committed information rate (CIR): Specifies the maximum average data rate that the network undertakes to deliver under normal conditions. When subscribing to a Frame Relay service, you specify the local access rate, for example, 56 kb/s or T1. Typically, you are also asked to specify a CIR for each DLCI. If you send information faster than the CIR on a given DLCI, the network marks some frames with a discard eligible (DE) bit. The network does its best to deliver all packets, but discards any DE packets first if there is congestion. Many inexpensive Frame Relay services are based on a CIR of zero. A CIR of zero means that every frame is a DE frame, and the network throws away any frame when it needs to. The DE bit is within the address field of the Frame Relay frame header.
Inverse Address Resolution Protocol (ARP): A method of dynamically associating the network layer address of the remote router with a local DLCI. Inverse ARP allows a router to automatically discover the network address of the remote DTE device that is associated with a VC.
Local Management Interface (LMI): A signaling standard between the router (DTE device) and the local Frame Relay switch (DCE device) that is responsible for managing the connection and maintaining status between the router and the Frame Relay switch. Basically, the LMI is a mechanism that provides status information about Frame Relay connections between the router (DTE) and the Frame Relay switch (DCE). Every 10 seconds or so, the end device polls the network, either requesting a dumb sequenced response or channel status information. If the network does not respond with the requested information, the user device may consider the connection to be down.
Forward explicit congestion notification (FECN): A bit in the address field of the Frame Relay frame header. The FECN mechanism is initiated when a DTE device sends Frame Relay frames into the network. If the network is congested, DCE devices (Frame Relay switches) set the FECN bit value of the frames to one. When these frames reach the destination DTE device, the address field with the FECN bit set indicates that these frames experienced congestion in the path from source to destination. The DTE device can relay this information to a higher-layer protocol for processing. Depending on the implementation, flow control may be initiated or the indication may be ignored.
Backward explicit congestion notification (BECN): A bit in the address field of the Frame Relay frame header. DCE devices set the value of the BECN bit to 1 in frames that travel in the opposite direction of frames that have their FECN bit set. Setting BECN bits to 1 informs the receiving DTE device that a particular path through the network is congested. The DTE device can then relay this information to a higher-layer protocol for processing. Depending on the implementation, flow control may be initiated or the indication may be ignored.
By default, a Frame Relay network provides nonbroadcast multiaccess (NBMA) connectivity between remote sites. An NBMA environment is treated like other broadcast media environments, such as Ethernet, where all the routers are on the same subnet. However, to reduce cost, NBMA clouds are usually built in a hub-and-spoke topology. With a hub-and-spoke topology, the physical topology does not provide the multiaccess capabilities that Ethernet does, so each router may not have separate PVCs to reach the other remote routers on the same subnet. Split horizon is one of the main issues you encounter when Frame Relay is running multiple PVCs over a single interface. Frame Relay allows you to interconnect your remote sites in various topologies, described as follows:
Star topology: Remote sites are connected to a central site that generally provides a service or an application. The star topology, also known as a hub-and-spoke configuration, is the most popular Frame Relay network topology. It is also the least expensive topology because it requires the fewest PVCs.
Full-mesh topology: All routers have VCs to all other destinations. A full-mesh topology, although costly, provides direct connections from each site to all other sites and allows for redundancy: when one link goes down, a router can reroute traffic through another site. As the number of nodes increases, a full-mesh topology can become very expensive. Use the formula n(n - 1) / 2 to calculate the total number of links that are required to implement a full-mesh topology, where n is the number of nodes. For example, to fully mesh a network of 10 nodes, 45 links are required: 10(10 - 1) / 2 = 45.
Partial-mesh topology: Not all sites have direct access to all other sites. Depending on the traffic patterns in your network, you may want additional PVCs to connect remote sites that have large data traffic requirements.
In any Frame Relay topology, when a single interface must be used to interconnect multiple sites, you can have reachability issues because of the NBMA nature of Frame Relay. The Frame Relay NBMA topology can cause the following two problems:
Routing update reachability: The split horizon rule reduces routing loops by preventing a routing update that is received on an interface from being forwarded out the same interface. In a hub-and-spoke Frame Relay topology, a remote (spoke) router sends an update to the headquarters (hub) router, which connects multiple PVCs over a single physical interface. The hub router receives the update on its physical interface but cannot forward it through the same interface to the other remote (spoke) routers. Split horizon is not a problem when there is a single PVC on a physical interface, because that arrangement behaves like a point-to-point connection.
Broadcast replication: On routers that support multipoint connections over a single interface that terminates many PVCs, the router must replicate broadcast packets, such as routing update broadcasts, on each PVC to the remote routers. These replicated broadcast packets consume bandwidth and cause significant latency variations in user traffic.
There are several methods to solve the routing update reachability issue.
One method for solving the reachability issues that are brought on by split horizon is to turn off split horizon. However, two problems exist with this solution. First, although most network layer protocols, such as IP, allow you to disable split horizon, not all of them do. Second, disabling split horizon increases the chance of routing loops in your network. Another method is to use a fully meshed topology; however, this topology increases the cost. The last method is to use subinterfaces. To enable the forwarding of broadcast routing updates in a hub-and-spoke Frame Relay topology, you can configure the hub router with logically assigned interfaces called subinterfaces, which are logical subdivisions of a physical interface. In split-horizon routing environments, routing updates that are received on one subinterface can be sent out another subinterface. In a subinterface configuration, each VC can be configured as a point-to-point connection, which allows each subinterface to act like a leased line. When you use Frame Relay point-to-point subinterfaces, each subinterface is on its own subnet.
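As a minimal sketch of the subinterface approach, the hub router below terminates two PVCs on separate point-to-point subinterfaces, each on its own subnet. The interface numbers, IP addresses, and DLCI values (102 and 103) are assumed for illustration only:

```
! Hub router: one point-to-point subinterface per PVC,
! each subinterface on its own subnet (addresses/DLCIs assumed)
interface Serial0/0
 encapsulation frame-relay
 no ip address
!
interface Serial0/0.102 point-to-point
 ip address 10.1.2.1 255.255.255.252
 frame-relay interface-dlci 102
!
interface Serial0/0.103 point-to-point
 ip address 10.1.3.1 255.255.255.252
 frame-relay interface-dlci 103
```

Because each subinterface behaves like a leased line, a routing update received on Serial0/0.102 can be forwarded out Serial0/0.103 without violating split horizon.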
A Frame Relay connection requires that, on a VC, the local DLCI must be mapped to a destination network layer address, such as an IP address. Routers can automatically discover their local DLCI from the local Frame Relay switch using the LMI protocol. On Cisco routers, the local DLCI can be dynamically mapped to the remote router network layer addresses with Inverse ARP. Inverse ARP associates a given DLCI to the next-hop protocol address for a specific connection. Inverse ARP is described in RFC 1293. Instead of using Inverse ARP to automatically map the local DLCIs to the remote router network layer addresses, you can manually configure a static Frame Relay map in the map table.
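If you use a static map instead of Inverse ARP, the mapping is configured as in this sketch, where the interface, IP addresses, and DLCI 101 are assumed values. The broadcast keyword permits broadcasts and multicasts, such as routing updates, to cross the PVC:

```
! Spoke router: static mapping of next-hop IP to local DLCI
! (replaces dynamic Inverse ARP mapping; values assumed)
interface Serial0/0
 encapsulation frame-relay
 ip address 10.1.1.2 255.255.255.0
 frame-relay map ip 10.1.1.1 101 broadcast
```

You can display the resulting map table, whether built statically or by Inverse ARP, with the show frame-relay map command.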
Cisco routers support the following three LMI types:
Cisco: LMI type developed jointly by Cisco, StrataCom, Northern Telecom (Nortel), and Digital Equipment Corporation
ANSI: Annex D of ANSI standard T1.617
Q.933A: Annex A of ITU-T standard Q.933
You can also manually configure the appropriate LMI type from the three supported types to ensure proper Frame Relay operation. When the router receives LMI information, it updates its VC status to one of the following three states:
Active: Indicates that the VC connection is active and that routers can exchange data over the Frame Relay network.
Inactive: Indicates that the local connection to the Frame Relay switch is working, but the remote router connection to the remote Frame Relay switch is not working.
Deleted: Indicates that either no LMI is being received from the Frame Relay switch or there is no service between the router and the local Frame Relay switch.
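To set the LMI type manually, apply the frame-relay lmi-type command on the physical interface, as in this sketch (the interface name and the choice of ANSI are assumptions for illustration):

```
! Manually select the LMI type to match the provider's switch
! (options: cisco, ansi, q933a)
interface Serial0/0
 encapsulation frame-relay
 frame-relay lmi-type ansi
```

You can confirm the LMI type in use and the exchange of status messages with the show frame-relay lmi command.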
A basic Frame Relay configuration assumes that you want to configure Frame Relay on one or more physical interfaces and that the routers support LMI and Inverse ARP.
Point-to-point: A single point-to-point subinterface is used to establish one PVC connection to another physical interface or subinterface on a remote router. In this case, each pair of point-to-point routers is on its own subnet, and each point-to-point subinterface has a single DLCI. In a point-to-point environment, because each subinterface acts like a point-to-point interface, update traffic is not subject to the split-horizon rule.
Multipoint: A single multipoint subinterface is used to establish multiple PVC connections to multiple physical interfaces or subinterfaces on remote routers. In this case, all the participating interfaces are in the same subnet. In this environment, because the subinterface acts like a regular NBMA Frame Relay interface, update traffic is subject to the split-horizon rule.
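The subinterface type is selected with the point-to-point or multipoint keyword when the subinterface is created. This sketch (interface number, subnet, and DLCIs 102 and 103 are assumed) shows a multipoint subinterface terminating two PVCs in a single subnet:

```
! Multipoint subinterface: both PVCs share one subnet, so
! routing updates across them remain subject to split horizon
interface Serial0/0
 encapsulation frame-relay
 no ip address
!
interface Serial0/0.1 multipoint
 ip address 10.1.1.1 255.255.255.0
 frame-relay interface-dlci 102
 frame-relay interface-dlci 103
```

Compare this with the point-to-point style, where each PVC would get its own subinterface and its own subnet.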
First, use the show controllers serial command to verify that the cable is present and recognized by the router. Next, you may need to troubleshoot the problem with a loopback test. Follow these steps to perform a loopback test:
Step 1. Set the serial line encapsulation to High-Level Data Link Control (HDLC) and the keepalive interval to 10 seconds. To do this, use the encapsulation hdlc and keepalive 10 commands in interface configuration mode on the interface you are troubleshooting.
Step 2. Place the CSU/DSU or modem in local-loop mode. Check the device documentation for how to do this. If the line protocol comes up when the CSU/DSU or modem is in local-loop mode, indicated by a "line protocol is up (looped)" message, the problem is probably occurring beyond the local CSU/DSU. If the status line does not change state, there could be a problem in the router, the connecting cable, the CSU/DSU, or the modem. In most cases, the problem is with the CSU/DSU or modem.
Step 3. While the CSU/DSU or modem is in local-loop mode, ping the IP address of the interface you are troubleshooting. There should not be any misses. An extended ping that uses a data pattern of 0x0000 is helpful in resolving line problems, because a T1 or E1 connection derives its clock from the data and requires a transition every 8 bits. A data pattern with many zeros helps to determine whether the transitions are appropriately forced on the trunk. A pattern with many ones appropriately simulates a high zero load in case there is a pair of data inverters in the path. The alternating pattern 0x5555 represents a "typical" data pattern. If your pings fail or you get cyclic redundancy check (CRC) errors, a bit error rate tester (BERT) with an appropriate analyzer from the telephone company (telco) is needed.
Step 4. When you are finished testing, ensure that you return the encapsulation of the interface to Frame Relay.
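Steps 1 and 4 can be sketched as follows on an assumed interface Serial0/0 (the data pattern for the extended ping in Step 3 is entered at the "Data pattern" prompt of the interactive extended ping dialog):

```
! Step 1: temporarily switch to HDLC with a 10-second keepalive
interface Serial0/0
 encapsulation hdlc
 keepalive 10
!
! Step 4: after testing, restore the original encapsulation
interface Serial0/0
 encapsulation frame-relay
```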
An incorrect statically defined DLCI on a subinterface may also cause the status of the subinterface to appear as "down/down", and the PVC status may appear as "deleted". To verify that the correct DLCI number has been configured, use the show frame-relay pvc command. The PVC STATUS field in the output of the show frame-relay pvc command reports the status of the PVC. The DCE device reports the status, and the DTE device receives the status. The PVC status is exchanged using the LMI protocol.
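If show frame-relay pvc reports the PVC as DELETED because of a mistyped DLCI, the fix is to replace the DLCI on the subinterface. In this sketch, the subinterface name and both DLCI numbers (the wrong 102 and the corrected 112) are assumed values:

```
! Remove the incorrect DLCI and configure the one the
! provider actually assigned (numbers assumed)
interface Serial0/0.102 point-to-point
 no frame-relay interface-dlci 102
 frame-relay interface-dlci 112
```

Rerun show frame-relay pvc afterward to confirm that the PVC status changes from DELETED to ACTIVE.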
ACTIVE state: Indicates a successful end-to-end (DTE to DTE) circuit.
INACTIVE state: Indicates a successful connection to the switch (DTE to DCE) without a DTE detected on the other end of the PVC. This can occur because of residual or incorrect configuration on the switch.
DELETED state: Indicates that the DTE is configured for a DLCI that the switch does not recognize as valid for that interface.
If the output of the show interface serial command displays a status of "interface up/line protocol down", this typically indicates a problem at Layer 2, the data link layer. If so, the serial interface may not be receiving LMI keepalives from the Frame Relay service provider. To verify that LMI messages are being sent and received, and that the router's LMI type matches the LMI type of the provider, use the show frame-relay lmi command.
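A verification sequence for a suspected LMI mismatch might look like the following sketch, with the interface name and the choice of the cisco LMI type assumed. The debug frame-relay lmi command shows the status enquiries and replies in real time:

```
! Check LMI counters and type, then watch the exchange live
Router# show frame-relay lmi
Router# debug frame-relay lmi
!
! If the type does not match the provider's, correct it
Router# configure terminal
Router(config)# interface Serial0/0
Router(config-if)# frame-relay lmi-type cisco
```

Remember to disable debugging with undebug all when you are done, because debug output can load a production router.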