Huawei Certification
HCDA-HNTD
Huawei Networking Technology and Device
HC Series
HUAWEI TECHNOLOGIES
Edition 1.6
Foreword
Outline
This guide supports the Huawei Certified Datacom Associate (HCDA) certification. It is intended for students preparing for the HCDA exam, and for anyone who wants to learn about the TCP/IP protocol stack, routers, switches, WANs, and Ethernet, and how to configure and use them on the VRP platform.
Content
The guide contains a total of six modules, starting from the basic knowledge of
data communications, the guide introduces the fields of routing, switching, WAN,
firewall and other basic knowledge, as well as configuration and implementation
using the VRP platform.
Module 1 systematically introduces the IP network infrastructure and the TCP/IP four-layer model to help the reader establish a basic framework for understanding data communications networks. By highlighting the functions and roles of the network, transport and application layers, this module helps readers understand how communication networks are realised in a variety of products.
Module 2 describes the basics and operation of Huawei's generic routing platform, the VRP, and takes a progressive approach to the basics of routing: static routing and dynamic routing protocols. This module helps readers understand the principles and basic process of data communication by highlighting RIP and OSPF, two IGP routing protocols.
Module 3 introduces the popular Ethernet technology, how Ethernet equipment works, and technologies widely used in LANs, such as VLAN, STP and VRRP, to help readers improve their LAN planning abilities.
Module 4 briefly describes the basic principles of WAN technologies such as HDLC, PPP and Frame Relay, and their configuration and implementation on the VRP, to help readers master WAN technologies and apply them flexibly.
Module 5 briefly describes the types and development of firewall technologies, the performance and basic functions of the Huawei Eudemon series firewalls, and their implementation on the VRP, to help readers understand and lay out network security policy.
Module 6 briefly describes hardware features, positioning and networking
applications of Huawei routers and switches. Through studying this part, readers
will develop a comprehensive understanding of Huawei data communications
products.
The guide enables readers to progress step by step from the basics of data communication to routing, switching, WAN, network security technologies, and Huawei products. Readers can also read selectively according to their own circumstances.
[Figure: common network device icons — IPv6 router, core router, edge switch, SOHO router, hub, cascade switch, access server, voice router, convergence switch, AP, audio gateway, low-end router, high-end router, socket switch, core switch, AP amplifier, firewall, wireless bridge, Internet telephony]
Table of Contents
Module 1 Network Fundamentals ...........................................................................................Page 9
Data refers to information in any format. The format used to encode any information must
follow agreed or standard rules before successful communication between a sender and
receiver is possible.
For example, a picture can be broken down into a number of dots referred to as pixels,
each pixel can then be represented by a number which can then be encoded ready for
transmission. The format used to encode the image data by the sender must be understood
by the receiver to enable them to decode and rebuild the picture.
Common types of data that can be encoded for transmission include text, numbers, pictures, audio, and video. Many standard ways of encoding the different types of data exist.
Data communication is the process of exchanging data between two devices through a
transmission medium, such as a wired or wireless network.
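The point about agreed encoding rules can be shown with a minimal sketch: here the sender and receiver both use UTF-8 as their agreed format (the text and values are purely illustrative).

```python
# A minimal sketch of encoding data for transmission: the sender and
# receiver must agree on the format (here, UTF-8 for text).
text = "Hi!"
encoded = text.encode("utf-8")     # sender encodes characters to bytes
print(list(encoded))               # [72, 105, 33]
decoded = encoded.decode("utf-8")  # receiver applies the same rules
assert decoded == text             # the picture/text is rebuilt intact
```

If the receiver tried to decode with different rules (say, UTF-16), the result would be garbage or an error, which is exactly why the format must be agreed in advance.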
There are three different ways in which two devices can communicate in data networking:
Simplex communication:
Simplex communication is in one direction. One device can only send
messages, the other one can only receive messages.
For example, a keyboard only sends data and a monitor only receives data; both use simplex communication.
Half-duplex communication:
Half-duplex communication is two way but only one device can be sending at
any time, the other must be receiving. Both devices are capable of sending and
receiving but communication can only be in one direction at a time.
Two-way radios, such as those used by police and taxis work in half-duplex mode.
Full-duplex communication:
Full-duplex communication is two way concurrently, both devices can send and
receive messages at the same time.
A motorway is full duplex as traffic is able to travel in both directions at the same time.
Telephony networks are also full duplex, however most humans can only either talk or
listen - not do both at the same time.
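The full-duplex case can be demonstrated with a small experiment: a connected socket pair carries data in both directions independently, so neither end has to wait its turn (this uses `socket.socketpair()`, available on POSIX systems and on Windows since Python 3.5).

```python
import socket

# Full-duplex demo: both ends of a connected socket pair can send
# and receive at the same time, without taking turns.
a, b = socket.socketpair()
a.sendall(b"hello from A")    # A -> B
b.sendall(b"hello from B")    # B -> A, without waiting for A's message
print(b.recv(1024).decode())  # hello from A
print(a.recv(1024).decode())  # hello from B
a.close()
b.close()
```

A half-duplex link would force B to receive A's message completely before being allowed to transmit its own.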
Huawei Networking Technology and Device Module 1 Part 1 IP Network Fundamental
A network is any group of people, things or places that are interconnected in some way.
Networks exist everywhere in our life, we have road, rail, telephone and postal networks
which we use on a daily basis.
A computer network consists of two or more computers and peripherals which are interconnected by communication lines.
The computers in a network can easily exchange and share information and resources.
Computer networks were developed to meet increasing requirements for exchanging
information and sharing resources.
In early computer networks, each computer was an independent device; there was little or no communication between systems.
As computer and communication technologies evolved, communication between different
systems was made possible.
Standard protocols understood by different systems made sharing resources and data
possible and improved resource utilisation.
The topology defines the organization of devices in a network. A LAN can adopt various topologies, such as the bus topology and star topology.
In the bus topology, all devices are connected to a linear network medium, which is called the bus. When a node transmits data in a network adopting the bus topology, the data reaches all nodes. Each node checks the data: if the data is not addressed to that node, the node discards it; if it is, the node accepts the data and passes it to the upper-layer protocol. A typical bus topology has a simple layout of lines. Such a layout uses little network media, so the expense on cables is low. However, this topology makes it difficult to diagnose and isolate faults; once a fault occurs, the entire network is affected. In addition, each device in the LAN sends data to all the other devices, which consumes a large amount of bandwidth and lowers network performance.
In the star topology, devices are connected to a central control point. A device communicates with another device through the point-to-point connection between it and the hub or switch. The star topology is easy to design and install, because network media simply connect the hub or switch to the workstations. It is also easy to maintain, because the network can be easily modified and network faults can be easily located. The star topology is extensively used in LAN construction. Of course, the star topology has its weaknesses: if the central control device becomes faulty, a single point of failure occurs. In addition, each network medium connects only one device, so a large amount of network media is needed and the LAN installation cost increases.
These topologies are logical structures and are not necessarily related to the physical structure of devices. For example, logical bus and ring topologies usually adopt the physical star structure. A WAN usually adopts the star, tree, full-meshed, or half-meshed topology.
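The cabling cost of these topologies can be compared with a quick count of point-to-point links: a star needs one link per node, while a full mesh needs a link for every pair of nodes (the functions below are a simple sketch of that arithmetic).

```python
# Point-to-point links needed to connect n nodes, per topology.
def star_links(n):
    # every node connects to one central hub or switch
    return n

def full_mesh_links(n):
    # every node pairs with every other node: n*(n-1)/2
    return n * (n - 1) // 2

print(star_links(8))       # 8
print(full_mesh_links(8))  # 28
```

The quadratic growth of the full mesh is why it is usually reserved for small backbone cores, while half-meshed designs trade some redundancy for fewer links.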
The Internet is a large network formed by networks and devices. Based on the
covered geographic scope, networks are classified into LAN, WAN, and
Metropolitan Area Network (MAN) whose size is between the LAN and WAN.
Local Area Network (LAN)
A LAN is formed by connecting communication devices in a small area, such as a room, a building, or an industrial park, and typically spans no more than several kilometers. It is a combination of computers, printers, modems, and other devices interconnected through various media.
Wide Area Network (WAN)
A WAN covers a larger geographic scope, such as a state or a continent. It provides the data communication service over a large area and is used to connect LANs that are far from each other. The China Packet Network (CHINAPAC), China Digital Data Network (CHINADDN), China Education and Research Network (CERNET), CHINANET, and China Next Generation Internet (CNGI) are all WANs.
Frame Relay (FR): a comparatively newer technology developed on the basis of X.25. The transmission speed is between 64 kbit/s and 2.048 Mbit/s. Frame Relay is flexible and implements point-to-multipoint connections. In addition, FR can transmit data at a speed that exceeds the Committed Information Rate (CIR) when a large amount of data needs to be transmitted, so it allows a certain amount of burst traffic. For these reasons, FR is a good choice for business subscribers.
Asynchronous Transfer Mode (ATM): a cell-switched network that features high speed, low delay, and guaranteed transmission quality. Most ATM networks use fiber as the connection medium. Fiber provides a high speed of over 1 Gbit/s, but the cost is also high. ATM is also a WAN protocol.
The WAN operates over a scope larger than that of the LAN. In the WAN, network access is implemented through various serial connections. Generally, enterprise networks are connected to the local ISP through WAN lines. The WAN provides full-time and part-time connections, and its serial interfaces can work at different speeds.
The following devices are used in the WAN:
Router: In the WAN, messages are sent to the destination according to the address. The process of looking for the transmission path is called routing. A router sends data to the destination by establishing routes between WANs and LANs according to their address information.
Modem: As the device used to transform signals between the end system and the communication system, a modem is an indispensable device in a WAN. Modems are classified into synchronous and asynchronous modems. The synchronous modem is connected to a synchronous serial interface and is used with leased lines, Frame Relay, and X.25. The asynchronous modem is connected to an asynchronous serial interface and is used with the PSTN.
ARPAnet solved the problem of network robustness: once a device or link fault occurs, data transmission must still be possible between any two nodes as long as they remain physically connected. This high self-healing ability allowed ARPAnet to meet wartime requirements. It originated from the Defense Advanced Research Projects Agency (DARPA).
In 1985, the National Science Foundation (NSF) established the NSFnet. NSF established a WAN consisting of regional networks and connected these regional networks to its supercomputer centers. In June 1990, the NSFnet took the place of the ARPAnet and became the backbone network of the Internet. Owing to the NSFnet, the Internet became open to the public, whereas previously it had been used only by computer science researchers and governments.
The second leap of the Internet was attributed to its commercialization in the early 1990s. As soon as commercial organizations entered the world of the Internet, they found its great potential in communications, information searching, and customer service. Numerous enterprises around the world then swarmed onto the Internet, which resulted in a new leap in its growth.
In 1995, NSFnet came to an end and it was replaced by a new Internet backbone
network operated by multiple private companies.
A standard is a set of rules and processes that are either widely adopted or defined by an official body. A standard describes the stipulations of a protocol and defines the minimum requirements for guaranteeing network communications. IEEE 802.X is the dominant family of LAN standards.
Electronic Industries Alliance/Telecommunications Industry Association (EIA/TIA)
They define the standards for network cables, for example, RS232, CAT5, HSSI, and V.24. They also define the standard for cabling, for example, EIA/TIA 568B.
International Telecommunication Union (ITU)
They define the standards for telecom networks working as WANs, for example, X.25 and Frame Relay.
Internet Engineering Task Force (IETF)
Founded at the end of 1985, the IETF is responsible for researching and
establishing technical specifications related to the Internet. Now IETF has
become the most authoritative research institute in the global Internet field.
A typical IP network is comprised of a backbone network, Metropolitan Area Network (MAN) and access network. The backbone network commonly interconnects networks from different countries and cities. The MAN sits between the backbone network and the access network, and is commonly comprised of a backbone layer, convergence layer and access layer. Access networks are used for terminal user access; an access network is usually the Layer 2 network below the service access point. Users can access the Internet via xDSL, Ethernet and so on.
The target network structure of the IP MAN is divided into:
IP MAN
The service access points (BRAS and service routers) and the upper-layer routers that compose the Layer 3 network. The IP MAN is comprised of a backbone layer, convergence layer and access layer.
Broadband access network
The Layer 2 access network below the service access point. Its structure is divided into the Layer 2 convergence network and the last-mile access network. On the service plane, the structure can be divided into a public access network plane and a major-account access network plane.
The Metropolitan Area Network (MAN) is located between the backbone network and the
access network, and interlinks different areas of a city.
The MAN provides the following services:
Internet access
There are two access modes: dialup access and private line access. In the dialup access mode, subscribers have different service attributes. In the private line access mode, subscribers in the same group have the same service attributes. The Asymmetric Digital Subscriber Line (ADSL) and Local Area Network (LAN) technologies are widely used for Internet access, and both support the dialup and private line access modes.
Virtual private network (VPN)
In recent years, enterprises have increasing requirements for diversified services. As such, VPN
technology has become more and more popular. VPN is a private network constructed within a
public network infrastructure with the help of Internet service providers (ISPs) and network
service providers (NSPs).
Based on the implementation layer, VPN can be classified into Layer 2 VPN (L2VPN), Layer 3 VPN
(L3VPN) and the Virtual Private Dial Network (VPDN). The VPDN provides network access to
mobile personnel in enterprises and small-sized ISPs using the dialup function of the public
network and the access network.
The common Internet access modes are ADSL, Ethernet, and leased line. Household users usually choose ADSL access, residential communities prefer Ethernet access, and enterprise users select leased line access. Normally, the access network uses Layer 2 devices,
such as digital subscriber line access multiplexers (DSLAM) and Ethernet switches, to provide the
access service for users. The access network does not perform any control on users and it simply
sets up Layer 2 connections to transparently transmit user information to upper-layer devices.
The access network refers to all devices at the access layer.
The access layer uses the broadband remote access server (BRAS) to manage users.
The convergence layer generally uses aggregation routers or Layer 3 switches. The convergence
layer aggregates traffic from the BRAS into the MAN devices and forwards this traffic through
routing functions.
The following shows the Internet access process:
A user sends an Internet access request. Layer 2 devices in the access network establish a Layer 2
connection and transparently transmit the request to the BRAS.
The BRAS performs user identity authentication and authorization, and allocates IP addresses to
the user.
The BRAS routes the user packets to devices at the convergence layer. The devices at the
convergence layer forward the packets through routing functions, to allow the user to have
access to the Internet.
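The steps above can be sketched as a toy simulation of what the BRAS does: authenticate the user, allocate an address from a pool, and only then route the user's packets. All names here (`USERS`, `POOL`, `bras_connect`) are purely illustrative, not a real BRAS API.

```python
# Hypothetical subscriber database and address pool (illustrative only).
USERS = {"alice": "secret"}
POOL = iter("10.0.0.%d" % i for i in range(2, 255))

def bras_connect(user, password):
    # Step 2: the BRAS authenticates and authorizes the user...
    if USERS.get(user) != password:
        raise PermissionError("authentication failed")
    # ...and allocates an IP address from its pool.
    ip = next(POOL)
    # Step 3: packets from this address can now be routed upstream.
    return ip

print(bras_connect("alice", "secret"))  # 10.0.0.2
```

The key design point the sketch illustrates is that the Layer 2 access network stays transparent: only the BRAS holds user state, so authentication and address management are centralized at the service access point.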
VPN services are classified into L3VPN, L2VPN and VPDN services. Here, we discuss the most common L3VPN services. L3VPN has multiple types, such as Internet Protocol Security VPN (IPSec VPN), Generic Routing Encapsulation VPN (GRE VPN) and Border Gateway Protocol/Multiprotocol Label Switching VPN (BGP/MPLS VPN).
The BGP/MPLS VPN model has three parts: customer edge (CE), provider edge (PE) and provider
(P).
CE: It is an edge device on the user network. A CE provides interfaces that are directly connected
to the service provider (SP) network. It can be a router, switch or a host.
PE: It is an edge router provided by the SP. A PE device is directly connected to the CE. On the
MPLS network, all VPN operations are performed in the PEs.
P: It is a backbone router on the SP network. A P device is not directly connected to the CE. The P
device forwards MPLS data, and does not maintain VPN information.
As shown in the figure on this slide, enterprise private line users A, B and C can communicate
with each other on the LAN by means of the BGP/MPLS VPN network.
Generally, the performance of the backbone network can be evaluated using the following
indicators:
High reliability
Devices on the backbone network must be stable, which is critical to the stable operation of the
entire network. Therefore, network architects should properly design the network architecture
and develop reliable network backup policies to ensure strong network self-healing capabilities.
Flexibility and scalability
To support future network services, it must be possible to expand and upgrade the network seamlessly while minimally affecting the network architecture and devices.
Flat networking
The number of network layers and hops should be minimized to facilitate network management.
Proper planning of quality of service (QoS)
In addition to carrying Internet access services, the IP network also supports voice over IP (VoIP), video, and key customer services, which place high requirements on service quality. Therefore, support for QoS is one of the necessary conditions for the transition of the IP network to a telecommunications network, and QoS should be properly planned.
Operability and manageability
Centralized monitoring, rights-based management, and unified allocation of bandwidth
resources are supported, which make the entire network controllable.
Since the 1960s, computer networks have undergone dramatic development. To take the leading position and gain a larger share of the communication market, manufacturers competed in advertising their own network architectures and standards, which included IBM's SNA, Novell's IPX/SPX, Apple's AppleTalk, DEC's DECnet, and TCP/IP, which remains the most widely used today. These companies enthusiastically pushed software and hardware using their protocols to the market. All these efforts promoted the fast development of network technology and the prosperity of the network device market. However, networking became more and more complicated due to the lack of compatibility between the various protocols.
To improve network compatibility, the International Organization for Standardization (ISO)
developed the Open System Interconnection Reference Model (OSI RM) which soon
became the model for network communications. The ISO followed these principles when designing the OSI reference model:
1. Each layer of the model has its own responsibilities, which should help it stand out as an independent layer.
2. There should be enough layers to avoid functions overlapping between layers.
The OSI reference model has the following advantages:
1. It simplifies network related operations.
2. It provides compatibility and standard interfaces for systems designed by
different institutions.
3. It enables all manufactures to be able to produce compatible network
devices, which facilitates the standardization of networks.
4. It breaks the complex concept of communications down into simpler and smaller problems, which facilitates understanding and operation.
5. It separates the whole network into areas, which guarantees changes in one
area will not affect other areas and networks in each area can be updated
quickly and independently.
The OSI reference model has seven layers. From bottom to top, they are physical layer,
data link layer, network layer, transport layer, session layer, presentation
layer and application layer.
The bottom three layers are usually called the lower layers, or media layers, which are responsible for transmitting data through the network. Networking devices often work at the lower layers, and network interconnection is achieved by the cooperation of software and hardware. Layers 5 to 7 form the upper layers, or host layers. The upper layers guarantee that data is transmitted correctly, which is achieved by software.
The functions of each layer of the OSI Reference Model are listed as follows:
Physical layer: providing a standardized interface to physical transmission media including
voltage, wire speed and pin-out of cables.
Data link layer: combines bits into bytes and bytes into frames. Provides access to media
using MAC address and error detection.
Network layer: providing logical addresses that routers use to select paths (path selection).
Transport layer: providing reliable or unreliable data transfer services and error correction
before retransmission.
Session layer: establishing, managing and terminating the connections between the local and remote applications. Service requests and responses of application programs in different devices form the communication at this layer. RPC, NFS and SQL belong to this layer.
Presentation layer: providing data encoding and translation, making sure that the data sent by the application layer of one system can be understood by the application layer of another system.
Application layer: providing network services as the closest layer to users among the
seven layers.
Since the OSI reference model and its protocols are comparatively complicated, they did not spread widely. TCP/IP, however, has been widely accepted for its openness and simplicity, and the TCP/IP stack has become the mainstream protocol suite for the Internet.
The TCP/IP model also takes a layered structure. The layers of the model are independent from each other but work together very closely.
The difference between the TCP/IP model and the OSI reference model is that in the former, the presentation layer and the session layer have been merged into the application layer. So the TCP/IP model has only five layers. From bottom to top, they are: physical layer, data link layer, network layer, transport layer and application layer.
Each layer of the TCP/IP model corresponds to different protocols. The TCP/IP protocol stack is a set of communication protocols, named after two of its most important protocols: the Transmission Control Protocol (TCP) and the Internet Protocol (IP). The TCP/IP protocol stack ensures communication between network devices. It is a set of rules that define how information is delivered over the network.
Each layer of the TCP/IP model uses a Protocol Data Unit (PDU) to exchange information and enable communication between network services. During encapsulation, each layer encapsulates the PDU that it receives from the layer above. At each stage of the process, the PDU has a different name to reflect its new form.
For example, the transport layer adds a TCP header to the PDU from the upper layer to generate the Layer 4 PDU, which is called a segment. Segments are then delivered to the network layer. They become packets after the network layer adds the IP header to those PDUs. The packets are transmitted to the data link layer, where data link layer headers are added to form frames. Finally, those frames are encoded into a bit stream to be transmitted through the network medium. This process, in which data is passed down the protocol suite from top to bottom while headers and trailers are added, is called encapsulation.
After encapsulation, the data is transmitted to the receiving device. The receiving device decodes the data to extract the original service data unit and decides how to pass the data up the protocol stack to the appropriate application program. This reverse process is called de-encapsulation. The corresponding layers, or peers, of different devices communicate through encapsulation and de-encapsulation.
As the figure above shows, Host A is communicating with Host B. Host A delivers data from an upper-layer protocol to the transport layer. The transport layer encapsulates the data within a segment and sends it to the network layer. The segment is then encapsulated within an IP packet, which adds another header, called the IP header. Next, the IP packet is sent to the data link layer, where it is encapsulated within a frame header and trailer. The physical layer then transforms the frame into a bit stream and sends it to Host B through the physical cable.
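The layering described above can be sketched with byte strings: each layer prepends its own header to the PDU it receives from the layer above, and the data link layer also appends a trailer. The header contents and lengths here are purely illustrative, not real protocol formats.

```python
# A simplified sketch of encapsulation: each layer prepends a header
# (the frame also carries a trailer). Header contents are illustrative.
data = b"GET /index.html"                 # application data
segment = b"[TCP hdr]" + data             # transport layer -> segment
packet = b"[IP hdr]" + segment            # network layer   -> packet
frame = b"[Eth hdr]" + packet + b"[FCS]"  # data link layer -> frame

# De-encapsulation reverses the process, layer by layer:
assert frame[9:-5] == packet   # strip the Ethernet header and trailer
assert packet[8:] == segment   # strip the IP header
assert segment[9:] == data     # strip the TCP header
```

Note how each layer only touches its own header: the TCP header survives unchanged inside the IP packet and the Ethernet frame, which is what lets peer layers on different hosts communicate.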
When Host B receives the bit stream, it sends it to its data link layer. The data link layer removes the frame header and trailer, then passes the packet to the upper layer, the network layer. The network layer removes the IP header from the packet and passes the segment to the transport layer. In a similar way, the transport layer extracts the original data and delivers it to the top layer, the application layer.
The process of encapsulation or de-encapsulation is done layer by layer. Each layer of the TCP/IP stack has to deal with data from both its upper and lower layers by adding or removing packet headers.
Physical layer media include coaxial cable, twisted pair, fiber and wireless radio. Coaxial cable is an electrical cable built around a round conducting wire. Coaxial cable can be grouped into thick and thin coaxial cable according to diameter. Thick coaxial cable is more suitable for large LANs since its transmission distance is longer and it is more reliable. Thick coaxial cable does not need to be cut, but transceivers must be installed for networks using it. Thin coaxial cable is easy to install and much cheaper, but it must be cut so that BNC (basic network) connectors can be fitted on both ends and inserted into T-shaped connectors during installation. So when there are many connectors, reliability suffers.
Twisted pair is the most widely used cable, made by twisting together pairs of insulated copper wires about 1 mm in diameter. Twisted pair has two types: Shielded Twisted Pair (STP) and Unshielded Twisted Pair (UTP). STP cabling includes metal shielding over each individual pair of copper wires, so it is very capable of keeping electromagnetic and radio interference at bay. STP is easy to install but its price is comparatively high. UTP is easy to install and cheaper; however, its anti-interference capability is not as strong as that of STP and its transmission distance is shorter.
Fiber consists of fiberglass and a shielding layer and is immune to electromagnetic interference. The transmission speed of fiber is fast and the transmission distance is long, but fiber is very expensive. Optical fiber connectors are connectors for light; they must be very smooth and should not have any cuts, and they are not easy to install.
Wireless radio enables communications without physical links. It refers to electromagnetic waves with frequencies within the radio band that are transmitted through space, including air and vacuum. When choosing a physical medium, we should take all aspects into consideration, such as distance, price, bandwidth requirements, and the cables the network devices support.
Repeaters and hubs are devices working at the physical layer, but with the development of networks, they are not used as much as in the past. We will not discuss them here.
The data link layer is the first logical layer above the physical layer. It provides physical addressing for terminals and helps network devices decide whether to pass data up the protocol stack. Some of its fields also indicate which upper-layer protocol the data should be delivered to, and the layer provides functions such as sequencing and flow control.
The data link layer has two sub-layers: Logical Link Control sublayer (LLC) and Media
Access Control sublayer (MAC) .
The LLC sublayer lies between the network layer and the MAC sublayer. This sublayer is responsible for identifying protocols and encapsulating data for transmission. The LLC sublayer performs most functions of the data link layer and some functions of the network layer, such as sending and receiving frames. When it sends a frame, it adds the address and CRC to the original data. When it receives a frame, it takes the frame apart and performs address identification and a CRC check. It also provides flow control, frame sequence checking, and error recovery. Besides these, it can perform some network functions, including datagram delivery, virtual links and multiplexing.
The MAC sublayer defines how data is transmitted through physical links. It communicates
with the physical layer, specifies physical addresses, network topology, and line standards
and performs error notification, sequence transmission and traffic control etc.
Data link layer protocols specify the frame encapsulation at the data link layer. A common data link layer protocol for LANs is IEEE 802.2 LLC.
Common data link layer protocols for WANs include High-level Data Link Control
(HDLC) , Point-to-Point Protocol (PPP) and Frame Relay (FR).
HDLC is a bit-oriented synchronous data link layer protocol developed by the
ISO. HDLC specifies data encapsulation for synchronous serial links with frame
characters and CRC.
PPP is defined in Request For Comments (RFC) 1661. PPP consists of the Link
Control Protocol (LCP), the Network Control Protocol (NCP), and other PPP
extension protocols. PPP is commonly used as a data link layer protocol over
synchronous and asynchronous circuits, and it supports multiple network layer
protocols. PPP is the default data link layer encapsulation on the serial ports of
VRP routers.
FR is a protocol that conforms to industry standards and is an example of
packet-switched technology. FR performs only simple error detection and leaves
error recovery to higher layers, which speeds up data transmission.
Ethernet switches are common network devices that work at the data link layer.
As every person is given a name for identification, each network device is labeled with a
physical address, namely, the MAC address. The MAC address of a network device is
globally unique. A MAC address consists of 48 binary digits and is usually written in
hexadecimal for human use. The first six hexadecimal digits (24 bits) are assigned to
vendors by the IEEE and the last six digits are chosen by the vendors themselves. For
example, the first six hexadecimal digits of the MAC addresses of Huawei's products are
0x00e0fc.
A Network Interface Card (NIC) has a fixed MAC address. Most NIC vendors burn the
MAC address into the ROM of the card. When an NIC is initialized, the MAC address in
the ROM is read into the RAM. When you insert a new NIC into a computer, the physical
address of the computer becomes the physical address of the new NIC. If you insert two
NICs into the computer, it has two MAC addresses, so a network device may have
multiple MAC addresses.
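The vendor/device split of a MAC address can be illustrated with a few lines of Python (a sketch; the sample addresses are made up around the Huawei OUI mentioned above):

```python
def oui(mac: str) -> str:
    """Return the 24-bit OUI (vendor part) of a MAC address string."""
    digits = mac.replace(":", "").replace("-", "").lower()
    assert len(digits) == 12, "a MAC address is 48 bits = 12 hex digits"
    return digits[:6]          # first 6 hex digits = IEEE-assigned vendor code

# 00e0fc is the Huawei OUI mentioned in the text.
print(oui("00:E0:FC:12:34:56"))
```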
The data link layer ensures that frames are forwarded between devices on the same
network, while the network layer is responsible for forwarding packets from
source to destination across networks. The functions of the network layer can be
summarized as follows:
Provide logical addresses for transmission across networks.
Routing: forward packets from one network to another.
The router is a common network device that works at the network layer. A router's main
function is forwarding packets among networks. In the above figure,
Host A and Host B reside on different networks or links. When the router on the same
network as Host A receives frames from Host A, it analyzes the frame header to confirm
that the frames are addressed to itself and then passes them to the network layer. The
network layer checks the destination address in the network layer header to decide
where the packets should go and forwards them to the next hop. The process repeats
until the packets reach Host B.
Common network layer protocols include the Internet Protocol (IP) , the Internet Control
Message Protocol (ICMP) , the Address Resolution Protocol (ARP) and the Reverse
Address Resolution Protocol (RARP) .
IP is the most important of the network layer protocols and its functions
represent the main functions of the network layer. The functions of IP include
providing logical addresses, routing, and encapsulating or de-encapsulating packets.
ICMP, ARP and RARP facilitate IP in achieving the network layer functions.
ICMP is a management protocol and it provides information for IP. ICMP information is
carried by IP packets.
ARP maps an IP address to a hardware address; it is the standard method for finding
a host's hardware address when only its network layer address is known.
RARP maps a hardware address to an IP address, that is, it obtains a host's IP
address from its hardware address.
The network layer address we mentioned here refers to the IP address. The IP address is
a logical address instead of a hardware address. The hardware address, such as the
MAC address, is burned on the NIC and is used for communication between devices on
the same link, whereas the IP address is used for communication between devices on
different networks.
An IP address is 4 bytes long and is made up of the network part and the host part.
It is often written in dotted decimal notation, for example, 10.8.2.48.
More information about the IP address will be introduced in later chapters.
The transport layer provides transparent transfer of data between hosts. It shields the
complexity of communications for the upper applications and is usually responsible for
end-to-end connection. The main functions of the transport layer involve:
Encapsulate data received from the application layer and decapsulate data received
from the network layer.
Create end-to-end connections to transmit data streams.
Send data segments from one host to another, perform error recovery, flow control, and
ensure complete data transfer.
Some transport layer protocols ensure that data is transmitted correctly, which means
data is not lost or changed during transmission and packets are delivered in the
same order in which they were sent.
Transport layer protocols mainly include the Transmission Control Protocol (TCP) and the
User Datagram Protocol (UDP) .
Although TCP and UDP are both protocols of the transport layer, their contributions
to the application layer differ greatly.
TCP provides connection-oriented and reliable transmission. Connection-oriented
transmission means that applications which use TCP as their transport layer
protocol need to create a TCP connection before they exchange data.
TCP provides reliable transmission services for the upper layer through mechanisms
such as error detection, acknowledgement, and reassembly. However, creating the
TCP connection and running these mechanisms adds overhead and increases the cost.
UDP does not guarantee reliability or ordering in the way that TCP does: datagrams
may arrive out of order, appear duplicated, or go missing without notice. UDP suits
applications that value transmission efficiency, such as SNMP and RADIUS. Take
SNMP as an example: it monitors networks and sends out warnings from time to time.
If SNMP had to create a TCP connection every time it sent a small amount of
information, transmission efficiency would undoubtedly suffer. So time-sensitive
applications like SNMP and RADIUS often use UDP as their transport layer protocol.
UDP is also appropriate for applications that provide their own reliability
mechanisms.
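The connectionless nature of UDP is visible in the socket API: no connection is created before data is sent. A minimal loopback sketch (the payload text is made up):

```python
import socket

# UDP needs no connection setup: each sendto() is an independent
# datagram. A receiver just binds a port and reads whatever arrives.
receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", 0))          # port 0 = let the OS pick a free port
port = receiver.getsockname()[1]

sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"trap: link down", ("127.0.0.1", port))   # no handshake first

data, addr = receiver.recvfrom(1024)     # the datagram arrives as one unit
print(data)
```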
The application layer has many protocols and the following protocols may help you use and
manage a TCP/IP network.
File Transfer Protocol (FTP) is used to transfer data from one computer to another over the
Internet, or through a network. It is often used for interactive user sessions.
Hypertext Transfer Protocol (HTTP) is a communication protocol used to transfer or
convey information on the World Wide Web.
TELNET is used to transmit data that carries Telnet control information. It provides
standards for interacting with terminal devices or terminal processes. Telnet supports
end-to-end connections and process-to-process distributed communications.
Simple Mail Transfer Protocol (SMTP) and Post Office Protocol 3 (POP3) are for
sending and receiving emails.
The Domain Name System (DNS) translates a domain name to an IP address and allows
decentralized management of domain resources.
Trivial File Transfer Protocol (TFTP ) is a very simple file transfer protocol. TFTP is
designed for high throughput file transfer for ordinary purposes.
Routing Information Protocol (RIP) is a protocol for routers to exchange routing
information over an IP network.
Simple Network Management Protocol (SNMP) collects network management information
and exchanges it between the network management console and network devices,
including routers, bridges and servers.
Remote Authentication Dial In User Service (RADIUS) performs user authentication,
authorization and accounting.
To illustrate the encapsulation process, imagine a network whose transport layer uses
TCP, whose network layer uses IP, and whose data link layer follows Ethernet standards.
The above figure shows the encapsulation of a TCP/IP packet on that network.
The original data is delivered to the transport layer. The transport layer adds a TCP
header to the data and passes it down to the network layer. The network layer prepends
the IP header to the segment and delivers it to the data link layer. The data link layer
adds the Ethernet header and trailer to the IP packet and passes it to the physical layer.
Finally, the physical layer sends the data onto the physical link as a bit stream. The
length of each header field is shown in the above figure.
Now we'll take a close look at the whole process from top to bottom.
The above shows a TCP segment encapsulated in an IP packet. The TCP segment
consists of the TCP header and the TCP data. The maximum length of a TCP header is
60 bytes; without the Options field, the header is normally 20 bytes long.
The structure of a TCP header is shown in the above figure. We explain only some of
the fields here; for more details, refer to the transport layer protocols.
Source Port: Indicates the source port number. TCP allocates source port numbers for
every application.
Destination Port: Indicates the destination port number.
Sequence Number: Indicates the sequence number which labels TCP data streams.
Port numbers are used to distinguish applications: 80 for HTTP, 23 for Telnet, 20 and
21 for FTP, and 53 for DNS.
Ack Num: Indicates the acknowledgement number, which is the next sequence number
that the sender of the acknowledgement expects to receive.
Option: Indicates the optional fields.
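Under the assumption of a 20-byte header with no options, the fixed fields can be unpacked with Python's struct module (the port and sequence values below are made up for illustration):

```python
import struct

def parse_tcp_header(segment: bytes) -> dict:
    """Unpack the fixed 20-byte part of a TCP header (no options)."""
    src, dst, seq, ack, off_flags, win, cksum, urg = struct.unpack(
        "!HHIIHHHH", segment[:20])
    return {"src_port": src, "dst_port": dst, "seq": seq, "ack": ack,
            "header_len": (off_flags >> 12) * 4,   # data offset in 32-bit words
            "window": win}

# A hypothetical segment from port 3514 to port 80 (HTTP), data offset 5
# (5 x 32-bit words = 20 bytes), window 8192, checksum left at 0:
hdr = struct.pack("!HHIIHHHH", 3514, 80, 1000, 2000, 5 << 12, 8192, 0, 0)
print(parse_tcp_header(hdr))
```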
The network layer adds the IP header to the TCP segment it receives from the transport
layer. Usually, the IP header has a fixed length of 20 bytes, not including IP options.
The IP header consists of the following fields:
Version: indicates the version of the IP protocol. At present, the version is 4; version
6 is the next-generation IP protocol.
IP header length: the number of 32-bit words forming the header, including options.
Since it is a 4-bit field, the maximum header length is 60 bytes.
TOS: 8 bits. It consists of a 3-bit Precedence field, a 4-bit TOS field and an unused
final bit. The 4 bits of the TOS field indicate minimum delay, maximum throughput,
highest reliability and minimum cost respectively.
Total length: indicates the length of the whole IP packet including the data. This field
is 16 bits long, so an IP packet can be at most 65535 bytes. Although an IP packet can
be up to 65535 bytes long, most data link layers require packets to be fragmented
before transmission. Furthermore, hosts are only required to accept packets of up to
576 bytes, and UDP applications traditionally limit their data to 512 bytes. However,
nowadays many applications allow IP datagrams of more than 8192 bytes to go through
the links, especially applications that support NFS.
Identification: identifies every datagram the host sends. The value increases by one
with each datagram sent.
Time to Live (TTL): indicates the number of routers a packet can travel through. The
value decreases by one every time the packet passes a router. When the value reaches 0,
the packet is discarded.
Protocol: indicates the next level protocol used in the data portion of the internet datagram.
It is similar to the port number. IP protocols use protocol number to mark upper layer
protocols. The protocol number of TCP is 6 and the protocol number of UDP is 17.
Header checksum: carries the checksum of the IP header so that the receiver can verify
that the header is intact.
The source IP address field and the destination IP address field give the IP addresses
of the source and the destination.
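As a sketch of how the header checksum works (the RFC 791 one's-complement sum; the sample header values, TTL 64 and protocol 6, are illustrative):

```python
import socket
import struct

def ip_checksum(header: bytes) -> int:
    """One's complement of the one's-complement sum of the header taken
    as 16-bit words, with the checksum field itself set to zero."""
    if len(header) % 2:
        header += b"\x00"
    total = sum(struct.unpack("!%dH" % (len(header) // 2), header))
    while total >> 16:                      # fold the carries back in
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF

# A minimal 20-byte header using the addresses from the capture example.
hdr = struct.pack("!BBHHHBBH4s4s",
                  0x45, 0, 20, 0, 0, 64, 6, 0,      # checksum field zeroed
                  socket.inet_aton("192.168.0.123"),
                  socket.inet_aton("202.109.72.70"))
cksum = ip_checksum(hdr)
full = hdr[:10] + struct.pack("!H", cksum) + hdr[12:]
# Re-running the sum over a header with a correct checksum yields 0.
print(hex(cksum), ip_checksum(full))
```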
The physical layer limits the length of the frame it can send at one time. Whenever the
network layer receives an IP datagram, it needs to decide through which interface the
datagram should be sent and check the MTU of that interface. IP uses a technique
called fragmentation to solve the problem of heterogeneous MTUs.
When a datagram is longer than the MTU of the network over which it must be sent, it is
divided into smaller fragments which are sent separately.
Fragmentation can be done on the source host or an intermediate router.
Fragments of an IP datagram are not reassembled until they arrive at the final
destination; the reassembly is performed by the IP layer at the destination.
A datagram can be fragmented more than once. The IP header provides enough
information for fragmentation and reassembly.
Flags: 3 bits, containing the following control bits:
Bit 0: reserved, must be 0.
Bit 1 (DF): 0 = may be fragmented, 1 = do not fragment.
Bit 2 (MF): 0 = last fragment, 1 = more fragments follow.
The DF and MF bits cannot both be 1 at the same time.
  0   1   2
+---+---+---+
|   | D | M |
| 0 | F | F |
+---+---+---+
Fragment offset: indicates the position of the fragment within the original datagram,
in units of 8 bytes. When an IP datagram is fragmented, each fragment becomes a packet
with its own IP header and is routed independently of other packets.
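The offset arithmetic can be sketched in a short Python model (the 4000-byte payload and 1500-byte MTU below are illustrative):

```python
def fragment(data_len: int, mtu: int, ihl: int = 20):
    """Split an IP payload of data_len bytes into fragments that fit the
    MTU. Every fragment except the last must carry a multiple of 8 data
    bytes, because the Fragment Offset field counts in 8-byte units."""
    max_data = (mtu - ihl) // 8 * 8          # largest 8-byte-aligned payload
    frags, offset = [], 0
    while offset < data_len:
        size = min(max_data, data_len - offset)
        frags.append({"offset_units": offset // 8,
                      "len": size,
                      "MF": 1 if offset + size < data_len else 0})
        offset += size
    return frags

for f in fragment(4000, 1500):               # e.g. crossing an Ethernet link
    print(f)
```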
The above is an example of a captured HTTP packet, which may help you understand
packet encapsulation. The bottom displays the raw data and the top shows the
information analyzed by the software.
This page illustrates data encapsulation at the data link layer. The encapsulation format
used here is Ethernet, which is mentioned earlier.
The figure above shows the DMAC first, then the SMAC, with the Type field listed last.
DMAC is 00d0:f838:43cf
SMAC is 0011:5b66:6666
Type field value is 0x0800, which indicates that it is an IP packet.
This page illustrates data encapsulation at the network layer. An IP packet is made up of
two parts, the IP header and the IP data. As described previously, the IP header consists
of many fields. In the above example, the value of the version field is 4, which indicates
the packet is an IPv4 packet. The packet header is 20-byte long. The protocol field is 0x06,
which tells us that the packet to be encapsulated is a TCP packet. The IP address of the
source is 192.168.0.123 and the IP address of the destination is 202.109.72.70.
This page illustrates data encapsulation at the transport layer. The transport layer here
uses TCP. The source port number is a random number, 3514, and the destination port
number is 80, which is the number assigned to the HTTP protocol. So the datagram is
sent from the source to visit the HTTP service of the destination host.
TCP uses IP as its network layer protocol, and a TCP segment is encapsulated into an
IP packet.
A TCP segment is made up of two parts, the TCP header and the TCP data. If there is
no Options field, the header length is 20 bytes.
The TCP header includes the fields shown in the slide. Explanations of some fields
follow:
16-bit source port number: TCP allocates a source port number for the source
application.
16-bit destination port number: the port number of the destination application.
Source and Destination Port: every TCP segment includes the source and destination
port numbers, which identify the sending and receiving applications. These two
numbers, together with the source and destination IP addresses in the IP header,
uniquely identify a TCP connection.
Sequence Number is a 32-bit number that identifies where the encapsulated data fits
within the data stream from the sender.
Acknowledgment Number is a 32-bit field that identifies the sequence number the
source next expects to receive from the destination; it equals the sequence number of
the last received byte plus one.
TCP is a reliable, connection-oriented, full-duplex transport protocol. The reliability
of TCP is guaranteed by several mechanisms, one of which is establishing the
connection before sending any data.
The TCP connection is established through a three-way handshake:
1. The requesting end (the client) sends a SYN segment with Initial Sequence Number
(ISN) a, indicating that it wants to connect to a port of the server.
2. The server replies with a SYN of sequence number b. At the same time, the
acknowledgement number is set to a+1 to acknowledge the client's SYN.
3. The client sends an acknowledgement with the acknowledgement number set to b+1 to
acknowledge the server's SYN. The TCP connection is then established.
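The handshake itself is performed by the operating system inside the ordinary socket calls, as this minimal loopback sketch shows (port 0 asks the OS for any free port):

```python
import socket

# The three-way handshake runs inside connect(): the kernel exchanges
# SYN, SYN+ACK, and ACK before the call returns.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))
server.listen(1)
port = server.getsockname()[1]

client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(("127.0.0.1", port))      # SYN ->, <- SYN+ACK, ACK ->
conn, addr = server.accept()             # the handshake is already complete
print("connected from", addr)
```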
As mentioned before, TCP is a full-duplex transport layer protocol. Full-duplex means
that the two ends of the connection can transmit and receive data at the same time;
thus, the two parties must terminate the connection individually.
The TCP connection is established through a three-way handshake, while it is
terminated through a four-way handshake:
1. The requesting end (the client) sends a FIN segment with sequence number a,
indicating that it wants to terminate the connection.
2. The server sets the acknowledgement number to a+1 to acknowledge the client's FIN.
3. The server then sends a FIN with sequence number b and acknowledgement number a+1.
4. The client sends an acknowledgement with the acknowledgement number set to b+1.
The TCP connection is then terminated.
Multiplexing means that the same transport layer connection is used by multiple
applications to transmit data. The transport layer divides the data into segments
according to the application, and the segments are sent in FIFO order. These segments
may have the same or different destinations.
Suppose two servers, www.huawei.com and ftp.huawei.com, are sending data packets to a
destination host at the same time. The following is the end-to-end communication
procedure of the transport layer. When the www and ftp applications are called, the
server allocates a port number for each application. (Note: this port number is
different from the physical port of network equipment; it is a virtual interface
between the application and the transport layer protocol.) The segments are then
created.
In the transport layer, a session connection is established between the server and the
host. (Note: it is a virtual connection rather than a physical one.) To begin the data
transmission, the applications on the server and the terminal host inform their
operating systems to initialize the connection. After the virtual end-to-end
connection is established, data transmission can begin.
During transmission, the server and the host continue to communicate through their
protocol software to check whether the data has been received correctly.
After the terminal equipment receives the data flow, it sorts the data so that the
transport layer can deliver the data flow to the application correctly.
After the data transmission is finished, the two parties negotiate to terminate the
virtual link.
MSS (Maximum Segment Size) indicates the maximum segment size that can be sent to the
other end of the connection. When a connection is established, each end advertises its
own MSS. The default value of the MSS is 536 bytes, so the resulting IP packet length
is 576 bytes (536 + 20-byte IP header + 20-byte TCP header).
Through MSS negotiation, network resources can be used more efficiently and network
performance can be improved.
TCP sliding window technology controls the data flow between two hosts by dynamically
changing the window size. Every TCP/IP host supports full-duplex data transmission, so
there are two sliding windows in TCP: one for receiving and one for sending. What's
more, TCP uses positive acknowledgement, in which the acknowledgement number refers to
the next byte expected.
The figure above is an example of sending in a single direction, which shows how the
sliding window achieves flow control.
The server sends the client four 1024-byte segments, and the sender's window size is
4096 bytes. The receiver acknowledges with ACK 4097 and changes the window size to
2048 bytes, which means the client (receiver) has only 2048 bytes of buffer space.
The sender therefore adjusts its sending rate and sends 2048-byte bursts that the
receiver can accommodate.
The sliding window mechanism provides reliable flow control for data transmission
between end devices. However, it takes effect only on the source and destination
devices; when congestion occurs on intermediate devices (like routers), the sliding
window does not help. Thus, the ICMP source quench mechanism can be used for
congestion management.
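The window-limited sending in the example can be modeled in a few lines (a toy simulation, not real TCP; the byte counts come from the example above):

```python
def send_with_window(total_bytes, advertised_windows):
    """Toy model of the flow control above: each burst the sender emits
    is capped by the window size the receiver advertised in its last ACK."""
    sent, bursts = 0, []
    for window in advertised_windows:
        if sent >= total_bytes:
            break
        burst = min(window, total_bytes - sent)   # never exceed the window
        bursts.append(burst)
        sent += burst
    return bursts

# 8192 bytes to send: the first ACK advertises 4096, later ACKs only 2048.
print(send_with_window(8192, [4096, 2048, 2048]))
```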
UDP, like TCP, uses IP as its network layer protocol; a UDP segment is encapsulated in
an IP packet. Since UDP does not provide reliable transmission like TCP, its segment
format is relatively simple.
The UDP header is made up of the following fields:
16-bit source port number: the port number of the source application.
16-bit destination port number: the port number of the destination application.
16-bit UDP length: the length of the UDP header plus the UDP data. The minimum value
is 8.
16-bit UDP checksum: this field provides the same function as the TCP checksum, but
it is optional.
As shown above, the figure compares TCP with UDP. The comparison shows that TCP is
suitable for services that require high reliability, while UDP is suitable for
speed-sensitive services.
As UDP provides a connectionless service, it requires the upper layer to provide
error detection and retransmission mechanisms.
The ping command is a common way to check the IP connectivity of the network and the
connection to the host. The ping command uses a series of Internet Control Message
Protocol (ICMP) messages to check whether the destination is reachable, the
communication delay, and the packet loss ratio. Ping is a process in which the device
sends a request and waits for a response. The device that runs the ping command sends
an Echo message to the destination and then waits for a response. If the Echo message
reaches the destination and an Echo Reply message is returned to the source within the
specified period, the device can ping through the peer. If the source does not receive the
Echo Reply message, the Request timed out message is displayed. In this example, the
following command is typed on the PC:
ping 1.1.1.1
To test the connectivity, send the Echo message to address 1.1.1.1.
Besides the basic command, the ping command provides various optional parameters, for
example -a and -i. -a source-ip-address: sets the source IP address of the ICMP
ECHO-REQUEST messages. -i interface-type interface-number: sets the interface that
sends the ICMP ECHO-REQUEST messages. In this example, the ping -a 1.1.1.2 1.1.1.1
command can also be used.
ICMP is an important part of the network layer. IP does not provide reliability, so a
device cannot obtain network fault information from IP itself. By using ICMP, the
device can obtain information about network faults.
ICMP can carry error, control, and query information. ICMP packets are encapsulated
in IP packets; the value of the protocol field is 1. Some upper layer applications,
for example ping and tracert, use the ICMP protocol.
An ICMP packet is carried in an IP packet with the basic 20-byte IP header. An ICMP
error message also carries the IP header of the datagram that triggered it and the
first 64 bits of that datagram's data, so that the source can match the error to the
packet that caused it.
The ICMP packet consists of the Type, Code, Checksum, and message-specific fields.
The formats of the messages vary with the message types. The details are omitted here.
Type: indicates the type of the ICMP message.
Code: in the same ICMP message type, the messages express different contents by using
the codes.
For example: The Destination Unreachable message of which the Type value is 3 contains
the following four types of messages:
0 = net unreachable
1 = host unreachable
2 = protocol unreachable
3 = port unreachable
Checksum: contains 16 bits and covers the ICMP message. The unused field must be set to 0.
ICMP provides various message types. The following are commonly used:
0 Echo Reply
3 Destination Unreachable
4 Source Quench
5 Redirect
8 Echo
11 Time Exceeded
12 Parameter Problem
13 Timestamp
14 Timestamp Reply
Some messages are used together. For example, the Echo Reply message is the response
to the Echo message. The messages of the same type contain different information. The
following describes the message types and formats.
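These type and code values can be captured in a small lookup table (a convenience sketch; only the common values listed above are included):

```python
# Common ICMP message types and the codes of the Destination Unreachable type.
ICMP_TYPES = {0: "Echo Reply", 3: "Destination Unreachable", 4: "Source Quench",
              5: "Redirect", 8: "Echo", 11: "Time Exceeded",
              12: "Parameter Problem", 13: "Timestamp", 14: "Timestamp Reply"}
UNREACHABLE_CODES = {0: "net unreachable", 1: "host unreachable",
                     2: "protocol unreachable", 3: "port unreachable"}

def describe(icmp_type: int, code: int) -> str:
    """Human-readable description of a type/code pair."""
    name = ICMP_TYPES.get(icmp_type, "unknown")
    if icmp_type == 3:
        name += " (" + UNREACHABLE_CODES.get(code, "code %d" % code) + ")"
    return name

print(describe(3, 3))
```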
Tracert is used to discover the path from the source node to the destination node.
Every router the packet traverses decrements the TTL value of the packet by 1; when
the TTL value reaches 0, the router reports a TTL timeout.
Tracert sends a packet of which the TTL value is 1, so the first hop returns an ICMP error
message to notify that the packet cannot be forwarded because the TTL times out. Then,
Tracert sends a packet of which the TTL is 2, and the second hop returns the same
message. Tracert continuously sends such packets until one packet can be sent to the
destination. The packet uses an invalid port number (33434 by default), so the destination
host returns an ICMP unreachable message to notify that the Tracert operation completes.
Tracert records the source address that sends the ICMP error message. Thus it can
provide the IP addresses of the gateways through which the user packets pass.
Tracert can also provide a function to test the connectivity. When a fault occurs on the
network, it can be located according to the path displayed by Tracert.
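The TTL-probing logic can be sketched as a toy model (pure simulation, since real probes need raw sockets and privileges; the hop list is the hypothetical path from this example):

```python
def tracert_sim(path_hops, max_ttl=30):
    """Toy model of tracert: probes go out with TTL 1, 2, 3, ...; the
    router at hop number `ttl` decrements the TTL to 0 and reports itself
    via ICMP Time Exceeded, until the destination itself answers."""
    discovered = []
    for ttl in range(1, max_ttl + 1):
        if ttl <= len(path_hops):
            discovered.append(path_hops[ttl - 1])   # this hop replies
        if ttl >= len(path_hops):                   # destination reached
            break
    return discovered

# The path from the example: two intermediate routers, then 3.3.3.3.
print(tracert_sim(["10.1.1.2", "10.2.2.2", "3.3.3.3"]))
```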
Ping and Tracert are taken as examples here. The two methods test whether RTA can
communicate with interface 3.3.3.3 of RTC.
As shown in the displayed information, the ping command directly shows whether RTC is
reachable, while tracert displays the forwarding path in detail: the packet reaches
10.1.1.2, then 10.2.2.2, and finally 3.3.3.3. In addition, the tracert command can
locate faults. In this example, if the displayed information is as follows, the packet
can be sent to next hop 10.1.1.2 but cannot be forwarded farther, so the fault lies
between that router and the destination.
[RTA]tracert 3.3.3.3
traceroute to 3.3.3.3 (3.3.3.3), 30 hops max, 40 bytes packet
 1 10.1.1.2 31 ms 31 ms 32 ms
 2 * * *
Telnet is used for the remote service. The user can log in to the remote server through Telnet.
The transport protocol used by Telnet is TCP and the port number is 23. The telnet command
is as follows:
telnet 192.168.1.22 23
192.168.1.22: IP address of the remote server (here a router).
23: port number. The default value is 23 and the value can be omitted. If the port
number is not 23, the user must enter it. For detailed operations related to
telnet-based access to a device, refer to the basic configuration of VRP.
FTP is an Internet standard for file transfer. It uses two TCP connections to transfer
a file: one is the control connection and the other is the data connection. FTP uses
different TCP ports according to the mode, Port or Passive. In the past, the default
client mode was Port; in recent years, the Passive mode has become widely used because
the Port mode is less secure (easier to attack). In Port mode, FTP uses two default
port numbers, 20 and 21: port 20 is used to transfer data and port 21 is used to
transfer commands.
VRP routers can act as an FTP client or an FTP server. In this example, the PC acts as
the FTP client and logs in to the FTP server through the FTP protocol. The PC runs the
FTP program (that is, enters the ftp 1.1.1.1 command). The system displays the login
dialog asking for a user name and password, and then the user can log in.
If a VRP router needs to download a file from a remote server, it can act as the FTP
client to access files on the FTP server. Enter ftp followed by the IP address of the
remote server in the VRP system view. The user is prompted to enter the user name and
password; the prompt then changes to [FTP], which indicates that the login succeeded.
Get and Put are two operations performed on files: Get downloads files from the
server, while Put uploads files to the server. In this example, the get vrpcfg.def
vrp1 command means that the client downloads the vrpcfg.def file and saves it as
vrp1.
To upload files to the FTP server, use the put command. In this example, the put
vrpcfg.def command uploads the local file vrpcfg.def to the authorized directory of
the FTP server without changing the file name. FTP server software varies; the
details are omitted here.
The Trivial File Transfer Protocol (TFTP) is used when files need to be transferred
between server and client without complex interaction. TFTP uses UDP and the port
number is 69. A VRP router can act only as a TFTP client to download files from a
TFTP server.
In TCP/IP protocols, each layer has its own addressing method: the data link layer
uses MAC addresses and the network layer uses IP addresses. Having covered the
functions of these layers, this course now introduces IP addressing at the network
layer, as well as packet forwarding between network layer devices, which is the basis
for routing.
This chapter introduces the network layer (layer 3) of the TCP/IP protocols. The main
function of the network layer is achieved through the IP protocol, which includes IP
addressing and IP routing.
The network layer receives data from the transport layer and adds the source and
destination addresses to the data. As learned in previous chapters, the data link
layer uses the physical (MAC) address, which is globally unique. When there is data to
send, the source network equipment queries the MAC address of the peer equipment and
sends the data out.
However, MAC addresses exist in a flat address space without hierarchical address
classification, so they are only suitable for communication within the same network
segment. Besides, the MAC address is fixed in hardware and offers poor flexibility.
Hence, communication between different networks is usually based on the
software-defined IP address, which provides better flexibility.
An IP address is composed of 32 bits, which are divided into four octets, or four
bytes.
An IP address can be represented in the following formats:
Dotted decimal format: 10.110.128.111
Binary format: 00001010.01101110.10000000.01101111
Hexadecimal format: 0a.6e.80.6f
Usually, IP addresses are written in dotted decimal format and seldom in hexadecimal
format. The hierarchical scheme of an IP address is composed of two parts, network
and host.
The hierarchical scheme of IP addresses is similar to that of telephone
numbering, which is also globally unique. For example, in the telephone number
010-82882484, 010 is the city code of Beijing, and 82882484 identifies a
telephone within Beijing. It is the same for IP addresses: the preceding network
part of an address represents a network segment, while the latter host portion
represents a device in that network segment. Because every network layer device
uses this hierarchical design, the network can be segmented. This mechanism
greatly decreases the number of routing table entries routers must hold, and
increases routing flexibility.
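The dotted-decimal, binary and hexadecimal forms above are just different renderings of the same 32 bits. A minimal sketch in Python (using only the standard-library ipaddress module) shows the conversions for the example address 10.110.128.111:

```python
import ipaddress

addr = ipaddress.IPv4Address("10.110.128.111")
octets = str(addr).split(".")

# Render each octet as 8 binary digits and as 2 hexadecimal digits
binary = ".".join(f"{int(o):08b}" for o in octets)
hexed = ".".join(f"{int(o):02x}" for o in octets)

print(binary)  # 00001010.01101110.10000000.01101111
print(hexed)   # 0a.6e.80.6f
```

All three strings describe the same 32-bit value; only the radix of each octet changes.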
page 119
Class A addresses are allocated to large organizations, class B addresses are allocated
to medium-sized companies, and class C addresses are allocated to small-sized companies.
With the fast development of the Internet, and also because of the waste of IP addresses,
available IP addresses are becoming insufficient.
page 121
page 122
When planning IP addresses, private IP addresses are usually used within a
company. Private IP addresses, reserved by InterNIC, can be freely used by
companies, but they cannot be used to access the Internet directly: private IP
addresses have no corresponding routes on the public network, and the same
private addresses may be reused by many networks, causing conflicts.
When a user with a private IP address needs access to the Internet, the private
IP address must be translated into a public address recognizable by the public
network through the Network Address Translation (NAT) technique. InterNIC
reserves the following network segments as private IP addresses:
class A: 10.0.0.0-10.255.255.255;
class B: 172.16.0.0-172.31.255.255;
class C: 192.168.0.0-192.168.255.255.
By using private IP addresses, enterprises reduce the cost of buying public
addresses, and public IP address space is conserved.
page 123
Subnet masks are used to distinguish the network and host bits. In a subnet mask, the 1
bits represent the network, and 0 for host. The subnet mask of class A network in dotted
decimal format by default is 255.0.0.0, the subnet mask of class B network is 255.255.0.0,
and the subnet mask of class C network is 255.255.255.0.
page 124
page 125
page 126
As this slide shows, for 11101001, weight each bit by its power of two and sum the
results to convert the binary number to its decimal value.
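The bit-by-bit calculation the slide describes can be written out directly; a minimal sketch:

```python
bits = "11101001"

# Weight each bit by its power of two and sum: 128 + 64 + 32 + 0 + 8 + 0 + 0 + 1
value = sum(int(b) * 2 ** (len(bits) - 1 - i) for i, b in enumerate(bits))

print(value)         # 233
print(int(bits, 2))  # same result via Python's built-in base-2 conversion
```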
page 127
page 128
page 129
page 130
After learning the conversion between binary and decimal, it is easy to understand the
corresponding relationship between an IP address and its subnet mask. In this slide, the
number of mask bits is 8+8+8+4=28; that is, the mask contains 28 consecutive 1s, so the
network portion of the address is 28 bits long. The subnet mask can also be written as
/28, indicating that the first 28 bits represent the network ID.
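The /28 notation maps directly to a mask of 28 consecutive 1 bits; a small sketch derives the dotted-decimal mask from the prefix length:

```python
import ipaddress

prefix = 28

# 28 ones followed by 32 - 28 = 4 zeros
mask_int = (0xFFFFFFFF << (32 - prefix)) & 0xFFFFFFFF
mask = str(ipaddress.IPv4Address(mask_int))

print(mask)  # 255.255.255.240
```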
page 131
As shown in the slide, the IP address and subnet mask are already known. The network
address is obtained from the AND operation between the IP address and the subnet
mask. The AND operation is 1&1=1, 1&0=0, and 0&0=0. Therefore, the calculation of the
AND operation for the example in this slide is as follows:
11000000, 10101000, 00000001, 00000111
&11111111, 11111111, 11111111, 11110000
11000000, 10101000, 00000001, 00000000
The calculation result is the network address.
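The same AND operation can be checked in code; a minimal sketch using the address and mask from the slide (192.168.1.7 with mask 255.255.255.240):

```python
import ipaddress

ip = int(ipaddress.IPv4Address("192.168.1.7"))
mask = int(ipaddress.IPv4Address("255.255.255.240"))

# Bitwise AND of the address and the mask yields the network address
network = ipaddress.IPv4Address(ip & mask)

print(network)  # 192.168.1.0
```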
page 132
The number of hosts is calculated from the subnet mask. First, identify how many 0 bits
there are in the subnet mask. As shown in the figure above, if there are N 0 bits, the
subnet contains 2^N addresses. The number of IP addresses that can be allocated to
hosts is 2^N - 2 (minus the network address, which is all 0s, and the broadcast address,
which is all 1s).
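For example, a /28 mask leaves N = 4 host bits, so there are 2^4 = 16 addresses and 16 - 2 = 14 assignable ones. A minimal sketch (192.168.1.0/28 is just an illustrative network):

```python
import ipaddress

net = ipaddress.ip_network("192.168.1.0/28")
host_bits = 32 - net.prefixlen   # N = 4 zero bits in the mask

total = 2 ** host_bits           # 16 addresses in the block
usable = total - 2               # minus network and broadcast addresses

print(total, usable)             # 16 14
assert total == net.num_addresses
```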
page 133
page 134
201.222.5.232~201.222.5.239
201.222.5.240~201.222.5.247
201.222.5.248~201.222.5.255
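The three ranges listed are the last /29 blocks of 201.222.5.0/24 (8 addresses per block); a minimal sketch enumerates them:

```python
import ipaddress

# Split the class C network into /29 subnets (8 addresses each)
subnets = list(ipaddress.ip_network("201.222.5.0/24").subnets(new_prefix=29))

print(len(subnets))          # 32 subnets in total
for net in subnets[-3:]:     # the last three blocks shown on the slide
    print(net.network_address, "~", net.broadcast_address)
```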
page 135
For a class B network, if 8 bits are used for the subnet, 256 subnets can be
provided, each containing 254 hosts.
Subnet bits  Subnet mask      Subnets  Hosts per subnet
1            255.255.128.0    2        32766
2            255.255.192.0    4        16382
3            255.255.224.0    8        8190
4            255.255.240.0    16       4094
5            255.255.248.0    32       2046
6            255.255.252.0    64       1022
7            255.255.254.0    128      510
8            255.255.255.0    256      254
9            255.255.255.128  512      126
10           255.255.255.192  1024     62
11           255.255.255.224  2048     30
12           255.255.255.240  4096     14
13           255.255.255.248  8192     6
14           255.255.255.252  16384    2
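Every row of the table follows from the same arithmetic: s subnet bits give 2^s subnets, and the remaining host bits h give 2^h - 2 hosts. A minimal sketch that reproduces the class B rows:

```python
import ipaddress

def class_b_row(subnet_bits):
    """Mask, subnet count and per-subnet host count for a class B network."""
    prefix = 16 + subnet_bits                   # class B default mask is /16
    mask = str(ipaddress.ip_network(f"0.0.0.0/{prefix}").netmask)
    subnets = 2 ** subnet_bits
    hosts = 2 ** (32 - prefix) - 2              # minus network and broadcast
    return mask, subnets, hosts

print(class_b_row(8))   # ('255.255.255.0', 256, 254)
print(class_b_row(14))  # ('255.255.255.252', 16384, 2)
```

Replacing 16 with 24 in the prefix calculation yields the class C table on the next slide.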
page 136
For a class C network, if 5 bits are used for the subnet, 32 subnets can be
provided, each containing 6 hosts.
Subnet bits  Subnet mask      Subnets  Hosts per subnet
1            255.255.255.128  2        126
2            255.255.255.192  4        62
3            255.255.255.224  8        30
4            255.255.255.240  16       14
5            255.255.255.248  32       6
6            255.255.255.252  64       2
page 137
A network can be divided into multiple subnets, each using a unique ID, but the
number of hosts in each subnet may differ. If the length of the subnet mask is
fixed so that every subnet holds the same number of IP addresses, many IP
addresses are wasted. In this case, the variable length subnet mask (VLSM)
technique can be used. If a subnet has many nodes, the subnet mask can be
shorter: an IP address with a shorter subnet mask represents fewer
networks/subnets, but more IP addresses can be allocated to hosts. If a subnet
has few nodes, the subnet mask can be longer: an IP address with a longer
subnet mask represents more logical networks/subnets, but fewer IP addresses
can be allocated to hosts. Such an addressing scheme saves many IP
addresses, which can be used in other subnets.
As shown in the figure above, a company plans its IP subnets with the class C
address 192.168.1.0. The company has bought five routers. One router, which
works as the gateway of the intranet, is connected to the local ISP. The other
four routers are connected to four branch offices. Each office has 20 PCs, so
each office needs 20 host addresses.
As shown in the figure above, 8 subnets are required. Each of the 4 offices
needs 21 IP addresses (including a router interface). Each of the 4 network
segments connected to the gateway needs 2 IP addresses. Because the segments
need different numbers of IP addresses, VLSM can be used. The four office
network segments adopt the subnet mask 255.255.255.224, with 3 bits for the
subnet and 5 bits for hosts, so each can hold at most 2^5-2=30 hosts. The four
network segments connecting the office routers and the gateway use 6 bits for
the subnet and 2 bits for hosts, so each can hold at most 2^2-2=2 hosts.
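The plan above can be sketched with the standard ipaddress module; the concrete subnet choices below are one possible layout under the slide's constraints, not the only one:

```python
import ipaddress

net = ipaddress.ip_network("192.168.1.0/24")

# Four /27 subnets (mask 255.255.255.224): 2**5 - 2 = 30 hosts each,
# enough for 20 PCs plus a router interface per office.
offices = list(net.subnets(new_prefix=27))[:4]

# Carve /30 point-to-point links (mask 255.255.255.252, 2 hosts each)
# for the office-to-gateway segments out of the next free /27 block.
spare = list(net.subnets(new_prefix=27))[4]
links = list(spare.subnets(new_prefix=30))[:4]

for o in offices:
    print("office:", o)   # 192.168.1.0/27 ... 192.168.1.96/27
for l in links:
    print("link:", l)     # 192.168.1.128/30 ... 192.168.1.140/30
```

The whole plan fits in one class C network with plenty of address space left over, which is exactly the saving VLSM provides.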
page 138
page 139
page 140
page 141
The function of Proxy ARP is to enable hosts or routers in different network segments to
communicate. Usually, when a router R receives an ARP Request, it checks whether the
requested destination address is its own: if so, an ARP Reply is sent; if not, the
request packet is discarded.
However, if Proxy ARP is enabled on router R, then when router R receives an ARP
Request and finds that the destination address is not its own, it does not discard the
packet immediately. Instead, router R looks up its routing table; if there is a route to
the destination, it sends its own MAC address to the requesting party. The requesting
party then sends packets for that destination to router R, and router R forwards them
onward.
page 142
Gratuitous ARP: The host sends an ARP Request for its own IP address. If no
other host in the network has the same IP address, the host receives no reply.
However, if the host does receive a reply, it indicates that another host in the
network is configured with the same IP address. In that case, an error message
is recorded in the host's log, indicating that a duplicate IP address is
configured.
Functions of Gratuitous ARP:
1. By sending Gratuitous ARP packets, a host can confirm whether there is an IP
address conflict in the network. If the requesting party receives a reply, a
device with a duplicate IP address exists.
2. Updating old hardware address information. If the host sending the Gratuitous
ARP has just changed its hardware address, for example by changing its network
card, the Gratuitous ARP can be used to update the old hardware address
information. When the receiving party receives the ARP Request and the
corresponding entry already exists in its ARP table, it must update the old
entry with the address information in the new ARP Request.
page 143
Sometimes, RARP ( Reverse Address Resolution Protocol) is needed when dealing with
diskless workstations. This equipment knows its own MAC address, and needs to obtain
IP address. In order to make RARP work properly, in the LAN, at least one host has to be
the RARP Server. In this example, the diskless workstation needs its own IP address. It
broadcasts the RARP Request in the network. The RARP Server receives this broadcast
request, and sends the reply. Thus, the diskless workstation will obtain the IP address.
Like ARP Requests, RARP Requests are sent as broadcasts, while ARP Replies and
RARP Replies are usually forwarded as unicast packets.
page 144
page 145
The main function of a router is to interconnect different networks and forward
data toward their destinations, including the Internet.
Data forwarding: A router must be able to forward data packets according to
their destination addresses.
Routing: In order to forward data packets, the router must be able to establish,
update and maintain its routing table.
Backup and traffic control: To guarantee the reliability of the network, a
router usually has the ability to switch to a backup link and to perform traffic
flow control.
Speed adaptation: Different interfaces run at different speeds; the router can
adapt between them using its buffers and flow control protocols.
Isolating networks: The router can isolate broadcast domains and prevent
broadcast storms. At the same time, it can apply flexible filter policies to
data packets to guarantee network security.
Interconnecting heterogeneous networks: A router that implements at least two
kinds of network protocols can interconnect heterogeneous networks. For
example, a router that supports both ATM and FR interfaces can interconnect
these heterogeneous networks.
page 146
page 147
The ability to forward data packets is due to the routing table. Every router
maintains a routing table, in which every route indicates the corresponding
physical port of the router through which the destination subnet or host could be
reached. In the routing table, the following key items are included:
Destination: It is used to identify the destination address or network of the IP
packet.
Mask: Together with the destination address, it is used to identify the network
segment address in which the destination host or router is located. After
implementing logical AND to the destination address and network mask, the
network segment address could be obtained in which the destination host or
router is located.
Interface: Indicates to the current router, through which interface the IP packet is
to be forwarded.
Next Hop: Indicates the interface address of the next router through which the IP
packet should pass.
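These four fields are enough to sketch how a lookup works. The sketch below uses a hypothetical routing table (the prefixes, next hops and interface names are made up for illustration) and picks the most specific match, i.e. the longest prefix:

```python
import ipaddress

# Hypothetical routing table: (destination/mask, next hop, interface)
routes = [
    ("10.0.0.0/8",  "10.1.2.2",  "E0"),
    ("10.4.1.0/24", "10.2.1.2",  "E1"),
    ("0.0.0.0/0",   "192.0.2.1", "S0"),   # default route
]

def lookup(destination, table):
    """Return (network, next hop, interface) of the longest matching prefix."""
    dest = ipaddress.ip_address(destination)
    best = None
    for prefix, next_hop, interface in table:
        net = ipaddress.ip_network(prefix)
        if dest in net and (best is None or net.prefixlen > best[0].prefixlen):
            best = (net, next_hop, interface)
    return best

net, next_hop, interface = lookup("10.4.1.9", routes)
print(net, next_hop, interface)  # 10.4.1.0/24 10.2.1.2 E1
```

Both 10.0.0.0/8 and 10.4.1.0/24 match 10.4.1.9, but the /24 wins because its mask identifies the destination segment more precisely.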
page 148
page 149
page 150
page 153
page 154
page 155
page 156
VRP is the network operating system used by Huawei routing and switching products.
VRP serves as the general software platform for Huawei's network devices and
provides TCP/IP routing services. Currently, version 5.7 is used on many products.
page 157
VRP adopts a componentized architecture and is made up of five planes: GCP, SCP, DFP,
SMP and SSP.
For example, GCP is the General Control Plane; it supports Internet protocols such as
IPv4 and IPv6. The protocols and functions that GCP supports include SOCKET, TCP/IP,
route management, routing protocols and so on. To fit different switch or router
functionality, VRP simply adds or removes the corresponding planes.
page 158
page 159
At present, Huawei's routers and switches support three configuration modes, two of
which are listed as follows:
Local configuration through the Console port
Local or remote configuration through Telnet
page 160
You can build a configuration environment only through the Console port in the following
two cases:
(1) The router is powered on for the first time and carries only the default configuration.
(2) You can connect to the device directly.
The procedure for configuring a router through the Console port is as follows:
Procedure 1: Connect the console cable.
(1) Connect the RJ45 connector to the Console port of the router.
(2) Connect the 9-pin or 25-pin RS232 connector to the serial port (COM) of the computer.
page 161
page 162
If it is not the first time the router has been powered on and you cannot connect to the
router's console port directly, it may be possible, depending on the current device
configuration, to use Telnet to access the device. There are two methods you may use to
configure the router: either Telnet to the router directly from a PC over the local
network, or use a console connection from a PC to one router (e.g. router1) and then
Telnet from that router to another router. A device running the VRP operating system
can serve as a Telnet client.
page 163
For the PC to use Telnet to reach the Telnet server, two conditions must be met:
(1) The client and the server must be able to communicate.
(2) The server must be configured to allow clients to establish a session using the
Telnet service.
In the example given, the configuration represents the router acting as the Telnet
server. The first step is to configure the router's Ethernet interface to make sure the
client and the server (router) can communicate. The second step is to configure the VTY
user interface, including selecting password mode as the Telnet authentication mode and
setting the user permission level.
page 164
page 165
After accessing the router, the user is given the prompt of the user view. From here the
user can switch to the system view by entering the system-view command. It is then
possible to enter the views of other services by running the corresponding commands in
the system view. The commands that can be run in the different views are listed in the
graphic.
page 166
When accessing the device for the first time, all users start in the user view, from
where they can switch to the system view using the system-view command. Entering the
quit command switches from the system view back to the user view. It is possible to
return to the user view from any view by entering the return command or pressing
<Ctrl+Z>.
For example
#Enter the system view from the user view.
<Huawei>system-view
Enter system view, return user view with Ctrl+Z
#Enter the interface view from the system view.
[Quidway]interface Serial 0/0/0
[Quidway-Serial0/0/0]
#Return to the system view from the interface view.
[Quidway-Serial0/0/0]quit
[Quidway]
#Return to the user view from the system view.
[Quidway]return
<Huawei>
page 167
In this example, the ? command is used to obtain a brief description of all the commands
available at a given level; every level supports this. The command can also be used for
completion based on a partial entry: if only the first letter of a command can be
recalled, the ? can be entered after it, as shown in the example above, to list all
commands beginning with that letter.
page 168
VRP supports two languages and allows users to enter the language-mode command to
switch between the two languages. The procedure is as follows:
<Huawei>language-mode ?
chinese Chinese environment
english English environment
<Huawei>language-mode chinese
Change language mode, confirm? [Y/N]y
Info:Switch to the Chinese mode.
<Huawei>
page 169
The command line interface automatically stores the commands input by users, so that
users can recall and reuse previously entered commands at any time. By default,
the command line interface keeps a record of up to 10 commands for each user.
display history-command:
To display the commands that a user has input.
Up-arrow key or <Ctrl+P>:
Display the earlier record if there is one; otherwise the alarm goes off.
down-arrow key or <Ctrl+N>:
Display the next record if there is one; otherwise, the command is cleared up and the alarm
goes off.
When you use the command record function, please note the following:
(1) The format of the command records kept by VRP complies with the format of the
commands input by users. If the format of a command input by a user is not intact,
the record kept by VRP is not intact either.
(2) If a command is run by a user many times, VRP keeps only one record of it. However,
if the same command is run in different formats, the runs are treated as different
commands. For example, if you run the display ip routing-table command several times,
VRP will keep only one record. If you run disp ip routing and display ip routing-table,
VRP will keep them as two records.
page 170
page 171
Some services require time synchronization with other devices, often as a security
measure; therefore the system time should be set correctly.
VRP supports setting the time zone and daylight saving time.
#Set the time.
<Huawei>clock datetime 10:19:30 2006/12/12
<Huawei>
<Huawei>display clock
10:19:36 UTC Tue 2006/12/12
<Huawei>
page 172
You can display the VRP version information by running the display version command.
<Router>display version
Huawei Versatile Routing Platform Software
VRP WVRP-CEN Software Version VRPV5R1B12D054
Copyright (c) 2003-2010 by VRP Team Beijing Institute Huawei Tech, Inc
page 173
VRP manages software and configuration files through the file system. The file system
manages the files and directories on the storage device, including creating the file
system, renaming files and directories, creating, deleting and modifying files and
directories, and displaying files. The two main functions of the file system are storage
device management and file management. A storage device is the hardware that keeps
information. At present, flash memory, hard disks and CF cards can be used by routers as
storage devices; different products use different devices to store information. The file
system is a mechanism for information storage and management. File directories are
mechanisms for organizing files; they are the logical vessels for keeping files.
Delete a file
<Huawei> delete flash:/test/test.txt
Delete flash:/test/test.txt?[Y/N]
<Huawei>
Restore the file that was deleted.
<Huawei> undelete sample.bak
Undelete flash:/test/sample.bak ?[Y/N]:y
% Undeleted file flash:/test/sample.bak
Delete files in the recycle bin.
<Huawei> reset recycle-bin
Display a file.
<Huawei> more test.txt
AppWizard has created this test application for you. This file contains a summary of what
you will find in each of the files that make up your test application.
Test.dsp
Copy a file.
<Huawei> copy hda1:/sample.txt flash:/
Copy hda1:/sample.txt to flash:/sample.txt ?[Y/N]:Y
% Copyed file hda1:/sample.txt to flash:/sample.txt
page 174
Create a directory
<Huawei> mkdir dd
Created dir dd.
Delete a directory
<Huawei> rmdir test
Rmdir test?[Y/N]:y
% Removed directory test
Display the current directory
<Huawei> pwd
flash:/test
page 175
page 176
When the router is powered on, it reads the configuration file from the default storage path
to initialize itself. The configuration in the configuration file is called
the initial configuration. If there are no configuration files in the default storage path, the
router will initialize itself with the default parameters. The configuration used when the
router is running is called the current configuration.
Users can change the current configuration of the router through the command line
interface. To make the current configuration the initial configuration the next time
the router is powered on, you need to save the current configuration in the default
storage path with the save command. You can view the saved configuration of the router
by running the display saved-configuration command.
You can view the current configuration of the router by running the display current-configuration command.
You can save the current configuration by running the save command. The detailed
procedure is as follows:
<Huawei>save
The current configuration will be written to the device.
Please make sure configuration recovery has been finished.
Are you sure?[Y/N]y
Now saving the current configuration to the device.....
Info:The current configuration was saved to the device successfully
You can erase the configuration file in the storage device by running the reset saved-configuration command. The detailed procedure is as follows:
<Huawei>reset saved-configuration
page 177
page 178
VRP can backup its software and configuration files through FTP, TFTP and XMODEM.
Here we will introduce the basic operations for routers or switches to obtain version files
through the three modes, which is the general knowledge about version update. For details
about version update methods and procedures, please refer to the update guidelines we
provide for a product or a specific version of a product.
FTP, TFTP and XMODEM are all file transport protocols for transporting files between
users and devices.
File Transfer Protocol (FTP) is based on TCP and uses the Server/Client model. VRP
can act both as an FTP server and as an FTP client. When it acts as the FTP server,
users can log in to the router and access files on it by running an FTP client program.
When VRP acts as the FTP client, users first connect to the router through a terminal
emulation program or Telnet, and can then run FTP commands to connect to a remote FTP
server and access files on the remote host.
Trivial File Transfer Protocol (TFTP), different from FTP, does not require any
authentication mechanisms, which is fit for an environment that does not involve much
interaction between clients and servers. TFTP is based on UDP and takes the mode of
Server/Client. TFTP transfer is initiated by the client. When there are
files to download, the client sends requests to the TFTP server for reading the files and
receives packets from the server and at last, it sends confirmation to the
server. When there are files to upload, the client sends requests to the TFTP server for
writing the files and sends packets to the server and at last, it sends confirmation to the
server. TFTP files have two modes, one is the binary mode that is used for program files
and the other is the ASCII mode that is for text files.
VRP can only act as the TFTP client and can transfer files only in the binary mode.
The XModem protocol transfers files through serial ports and is widely used for its
simplicity. VRP supports receiving programs through XModem on the AUX interface.
page 179
As the figure above illustrates, the PC and Router A are connected through serial ports,
and Router A and the FTP server are connected to the LAN. Router A, acting as the FTP
client, obtains version files from the FTP server. Set the username and password on the
FTP server to quidway and huawei respectively. Log in to Router A from the PC through
the terminal emulator and perform the following operations to obtain the version files.
#Log in to the FTP server from Router A.
<Router> ftp 172.16.104.110
Trying 172.16.104.110 ...
Connected to 172.16.104.110.
User(172.16.104.110:(none)):quidway
331 Give me your password, please
Password:
230 Logged in successfully
#Obtain the version file vrp.cc from the FTP server by running the get command.
[ftp] get vrp.cc
page 180
As the figure above illustrates, the PC and Router A are connected through serial ports,
and Router A and the FTP client are connected to the LAN. Router A is configured as the
FTP server so that it can obtain version files from the FTP client. Run the following
commands to configure Router A as the FTP server.
#Enable the FTP server on the router.
[Quidway]ftp server enable
#Enter the AAA view and configure the authentication and authorization of the FTP server.
Only users that pass the authentication and are authorized successfully can enjoy the
services offered by the FTP server.
[Quidway]aaa
#Create a local user named quidway.
[Quidway-aaa] local-user quidway
#Set the service type to FTP.
[Quidway-aaa] local-user quidway service-type ftp
#Configure the password to huawei.
[Quidway-aaa] local-user quidway password simple huawei
#Configure the authorization directory of FTP users on the FTP server.
[Quidway-aaa] local-user quidway ftp-directory flash:/ftp/quidway
page 181
As the above figure illustrates, the PC with the IP address of 10.111.16.160 runs the TFTP
software to act as the TFTP server and Router A obtains version software from the TFTP
server.
Run the following command on Router A to obtain version software.
#Run the tftp command to obtain the vrp.cc file and save it under cfcard:/.
<Huawei> tftp 10.111.16.160 get vrp.cc cfcard:/vrp.cc
Run the dir command to check whether the version file has been obtained and saved in
the specified directory.
<Huawei> dir
Directory of cfcard:/
0 -rw- 86211956 Jun 08 2006 15:20:14 v300r001b02ssp02.cc
1 -rw- 40 Jun 24 2006 09:30:40 private-data.txt
2 -rw- 396 May 19 2006 15:00:10 rsahostkey.dat
3 -rw- 540 May 19 2006 15:00:10 rsaserverkey.dat
4 -rw- 2718 Jun 21 2006 17:46:46 1.cfg
5 -rw- 14343 May 19 2006 15:00:10 paf.txt
6 -rw- 6247 May 19 2006 15:00:10 license.txt
7 -rw- 14343 May 16 2006 14:13:42 paf.txt.bak
8 -rw- 80975644 Jun 08 2006 14:50:20 v300r001b02msp06.cc
9 -rw- 86235884 Feb 05 2001 10:23:46 vrp.cc
508752 KB total (261112 KB free)
page 182
page 183
page 184
page 185
page 186
page 187
Each layer of the TCP/IP layered model has its own function and role, built around the
main principles of network layer addressing and routing. This section on Routing
Protocol Basics focuses on the details of IP address structure, address classification
and subnet planning. In addition, it further illustrates how data packets are carried
across the network and the principles of routers. Routing protocol basics is a
foundation course of great significance for understanding the different routing
protocols. Building on previous sections, this section focuses on how packets are
forwarded between routers and on the structure of the routing table.
page 188
page 189
page 190
Here we take the previous example to explain the process of IP routing. As the figure
above shows, RTA connects with network 10.3.1.0 on the left, and RTC connects with
network 10.4.1.0 on the right. Suppose a datagram is to be sent from network 10.3.1.0
to network 10.4.1.0. The process of IP routing is as follows:
1. The packet is sent to the E1 port of RTA, which directly connects with network
10.1.1.0. After receiving the packet, RTA looks up its routing table and finds that
the next hop to the destination is 10.1.2.2 and the egress is E0. The packet is then
sent out from E0 toward 10.1.2.2.
2. When the packet reaches the E0 port on network 10.1.2.2, RTB looks up its routing
table to find the route to the destination of the packet. The routing table shows that
the next hop to the destination is 10.2.1.2 and the egress is E1. The packet is then
sent out from E1 toward its next hop on network 10.2.1.2.
3. When the packet reaches the E0 port on network 10.2.1.2, RTC looks up its routing
table and finds that the destination of the packet is in a directly connected segment,
with next hop 10.4.1.1 and egress E1. The packet is then sent out from E1 to its
destination.
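The three steps can be replayed as a small simulation. The per-router tables below are reduced to the single route used in the walkthrough, and the mapping of next-hop addresses to routers is an assumption taken from the figure:

```python
# One route per router toward the destination network, as in the walkthrough
tables = {
    "RTA": ("10.1.2.2", "E0"),   # (next hop, egress interface)
    "RTB": ("10.2.1.2", "E1"),
    "RTC": ("direct",   "E1"),   # destination segment is directly attached
}

# Which router owns each next-hop address (assumed from the figure)
owner = {"10.1.2.2": "RTB", "10.2.1.2": "RTC"}

def forward(router, path=None):
    """Follow next hops until the destination segment is directly attached."""
    path = [] if path is None else path
    next_hop, egress = tables[router]
    path.append((router, egress, next_hop))
    if next_hop == "direct":
        return path
    return forward(owner[next_hop], path)

for hop in forward("RTA"):
    print(hop)
# ('RTA', 'E0', '10.1.2.2'), ('RTB', 'E1', '10.2.1.2'), ('RTC', 'E1', 'direct')
```

Each router makes an independent lookup; no router knows the full path, only its own next hop.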
page 191
The analysis of the process of IP routing shows us that data forwarding is totally dependent
on the information in the routing table. To function effectively and
efficiently, a router should:
1. Check the destination of a packet: Does the router have information about the destination
of the packet?
2. Find the source of the information: Where is the information about the route to the
destination from? Is it defined by the administrator statically? Or is it obtained from other
routers?
3. Search for possible routes to the destination: What are the possible routes to the
destination?
4. Select the best route: Which is the best route to the destination? Should the router
use a load-balancing mechanism to send the packet over multiple routes?
5. Verify and maintain routing information: Is a route valid? Is it the latest?
Routers have to verify and maintain routing information to ensure that the information is
correct.
page 192
Routers check the destination of the packets they receive; if the destination is not a
local interface, they look up their routing tables to find out through which port the
packets should be forwarded.
1. If the destination network connects with the router directly, the router knows to
which port the packet should be forwarded.
2. If the destination network does not connect with the router directly, the router must
find the possible routes to the destination and then select one of them to forward the
packet.
Routes in the routing table can be sorted into three categories according to their
sources:
1. Routes found by data-link layer protocols (interface routes or direct routes).
2. Static routes manually configured by network administrators.
3. Routes found by dynamic routing protocols.
page 193
The protocol field in the routing table indicates the source of the routes. Routes come from
three sources. The first source is those routes discovered by the data-link layer. When
data-link layer protocols are up, routes of this sort are generated and their protocol field
value in the routing table is shown as direct. Routes discovered by the data-link layer do
not need maintenance, which reduces the workload. However, the data-link layer can only
find routes to segments directly connected to its interfaces and cannot discover routes
that cross segments. Routes that cross segments can only be discovered by other means.
page 194
The second source is the statically configured routes. Static routes are configured by
administrators manually and they can also help to build connectivity between networks.
Static routes however cannot make adjustments automatically when networks fail. They must
be managed by administrators.
page 195
The last group of routes is discovered by dynamic routing protocols. Configuring routes
statically for a network with a complicated topology is a demanding task and easily
results in errors, so it is better to use dynamic routing protocols, which find and
update routes without manual maintenance. However, dynamic routing protocols come at a
cost: they consume bandwidth and processing resources to exchange routing information.
As the figure above shows, routes whose Proto field values are RIP and OSPF were
discovered by the RIP and OSPF dynamic routing protocols respectively. Details about
dynamic routing protocols will be given later.
page 196
As we mentioned just now, routes come from three sources. Here, we make a comparison
between static routes and dynamic routes.
1. Static routes must be defined by administrators. When the network topology changes,
administrators have to update static route configurations manually. Static routes are
therefore more suitable for simple, small networks. In a complicated network,
administrators may struggle with the complexity and work needed to manage numerous
static routes.
2. Routing protocols collect network information for dynamic routes. When the network
topology changes, routers update their information automatically without the help of
administrators.
According to the algorithms used, routing protocols can be divided into the following
categories:
Distance-Vector routing protocols: RIP and BGP. BGP is also called a Path-Vector
protocol.
Link-State routing protocols: OSPF and IS-IS.
The algorithms used by Distance-Vector and Link-State protocols differ in how they
discover and calculate routes. Distance-Vector routing protocols are concerned with the
number of hops to the destination, while Link-State protocols care more about the network
topology and the bandwidth of the links used to reach a given destination.
Routing protocols can be divided into unicast routing protocols and multicast routing
protocols according to their applications. Unicast is one of the data transmission modes. In
this mode, the destination of a datagram is unique, which can be a host or a device.
Multicast is another data transmission mode. In this mode, the destination address is a
multicast address, which means a group of hosts or devices can receive a datagram at the
same time. Here, we only focus on unicast routing protocols. For details about multicast
routing protocols, see references for multicast modules.
Routing tables play a key role in packet forwarding. Each router holds a routing table, and
every entry in the table tells the router through which physical port a packet should be sent
to reach a subnet or host, before the packet arrives at the next-hop router or its
destination.
A routing table contains the following items:
Destination: indicates the destination or the destination network of an IP packet.
Mask: We have already learned the structure and functions of the network mask in the
TCP/IP course. Network masks are equally important information in a routing table.
Performing a logical AND between an IP address and a network mask yields the network
segment. In the example here, the destination address is 8.0.0.0 and the mask is
255.0.0.0; after the logical AND, we know the segment is 8.0.0.0/8, a Class A network.
Another function of network masks is that when there are multiple route entries to the
same destination in a routing table, the router can choose the route with the longest mask.
Interface: indicates which interface an IP packet should be forwarded from.
Nexthop: indicates the IP address of the next-hop interface that an IP packet will go through.
Other fields in the routing table will be discussed later.
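The logical AND operation described above can be sketched in a few lines of Python; the helper name `network_of` is hypothetical, chosen for illustration only:

```python
import ipaddress

def network_of(ip: str, mask: str) -> str:
    """Bitwise-AND an IP address with its mask to obtain the segment."""
    ip_int = int(ipaddress.IPv4Address(ip))
    mask_int = int(ipaddress.IPv4Address(mask))
    return str(ipaddress.IPv4Address(ip_int & mask_int))

# 8.0.0.1 AND 255.0.0.0 -> the Class A segment 8.0.0.0
print(network_of("8.0.0.1", "255.0.0.0"))  # 8.0.0.0
```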
Routes to the same destination may come from different sources, so the next hops of those
routes may be the same or different. How do routers choose among such routes? Route
preference solves this problem.
In the figure above, there are two routes to the segment 10.0.0.0: R0 and R1. R0 is
discovered by RIP and R1 by OSPF. By default, OSPF routes are preferred over RIP routes,
so the router uses the route discovered by OSPF on this occasion and adds it to the global
routing table for packet forwarding.
The default route preferences on the VRP platform are shown in the table above. Preference
0 is for direct routes and 255 is for untrustworthy routes. Except for direct routes, the
preference of every dynamic routing protocol can be configured manually according to
customer requirements. Note that a preference normally applies to all routes of the protocol;
for example, all routes discovered by IS-IS share the preference 15. Static routes are an
exception, because each static route may have its own preference.
An operator can adjust preferences to control route selection. If a router receives two routes,
one from an IBGP source and one from an EBGP source, it selects the EBGP route
according to the BGP protocol's rules. 255 is the maximum preference value. Routes learned
from an IGP source are more credible than those from an EGP (BGP) source. By default, the
preference of both IBGP and EBGP is set to 255.
The route metric reflects the cost of a route to its destination. Route metrics are often
decided by factors including the delay, bandwidth, line occupation rate, line reliability, hops
and the maximum transmission unit. Different dynamic routing protocols choose different
factors to calculate a route cost. For example, RIP uses hops to calculate the route metric.
Route metrics make sense only for routes discovered by the same routing protocol. It is
meaningless to compare metrics calculated by different protocols, and there is no formula to
convert metrics between different routing protocols. The route metric of a static route is 0.
Router A learns routes to Router D from Router B and Router E with the same protocol. As
the figure above illustrates, the route metric of the route that Router A gets from Router B is
9. While the route metric of the route that Router A gets from Router E is 12. Obviously, the
route that Router A gets from Router B is better than the route Router A learns from Router
E. So Router A adds the first route to its routing table. Router B is the next hop for that route.
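The metric comparison Router A performs can be sketched minimally in Python; the router names follow the figure, while the tuple layout is an assumption for illustration:

```python
# Candidate routes to Router D learned by the same protocol,
# as (metric, next_hop). Among same-protocol routes, the lower
# metric wins and determines the next hop installed in the table.
candidates = [(9, "RouterB"), (12, "RouterE")]

metric, next_hop = min(candidates)
print(next_hop)  # RouterB
```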
If there are multiple routes to the same destination and their route metrics and route
preferences are the same, all these routes are added to the routing table. IP packets are
sent over these routes alternately, which realizes load balancing.
At present, routing protocols that support load balancing are RIP, OSPF, BGP and IS-IS.
The static route also supports load balancing.
In the routing table above, there are three routes to the network 10.1.1.1/32. The three
routes have the same preference, and it is the most preferred among all routes to that
destination, so all three routes are added to the routing table to balance the load.
Data packets are forwarded according to their destination IP addresses. When a packet
reaches a router, the router reads the packet's destination IP address and then looks up its
routing table, performing a logical AND between that address and the mask of each entry.
If the result matches the destination of the entry, the entry is a route to the packet's
destination; otherwise it is not. Among all matching entries, the router chooses the one with
the longest mask.
Imagine that a packet whose destination IP address is 9.1.2.1 reaches the router. The router
looks up its routing table and finds three matching routes there. They are:
0.0.0.0/0 whose matching length is 0 bit.
9.0.0.0/8 whose matching length is 8 bits.
9.1.0.0/16 whose matching length is 16 bits.
The last route has the longest mask length. So the router will choose this one to forward the
packet through serial 0/0.
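The longest-match lookup above can be sketched in Python. The table contents follow the example in the text; the `lookup` helper and the interfaces of the first two entries are hypothetical:

```python
import ipaddress

# Hypothetical routing table: (destination network, outgoing interface).
table = [
    ("0.0.0.0/0",  "Serial1/0"),
    ("9.0.0.0/8",  "Ethernet0/0"),
    ("9.1.0.0/16", "Serial0/0"),
]

def lookup(dest: str) -> str:
    """Among all matching entries, pick the one with the longest mask."""
    matches = [(ipaddress.ip_network(n), ifc) for n, ifc in table
               if ipaddress.ip_address(dest) in ipaddress.ip_network(n)]
    net, ifc = max(matches, key=lambda m: m[0].prefixlen)
    return ifc

print(lookup("9.1.2.1"))  # Serial0/0  (the /16 route wins)
```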
A routing loop is a network problem in which packets sent from a router return to that router
after travelling through the network for a while. When a routing loop occurs, packets circle
among several routers until they are discarded when their TTL reaches 0, which wastes
considerable network resources. Steps should be taken to keep routing loops at bay.
As the figure above shows, RTA has a packet heading for network N. The packet is
forwarded to RTC and its TTL is decremented by one. RTC forwards the packet to RTB,
which leads to a routing loop: the packet circles among the routers, its TTL decremented at
each hop, with RTA sending it to RTC again and again. This process continues until the
packet is discarded once its TTL reaches 0. Routing loops are very harmful to the network,
and care should be taken to avoid them.
The possible causes for a routing loop may be:
1. A temporary loop occurs when the network converges
2. Algorithm defect
3. Information that can prevent routing loops is lost when routes are imported to different
routing domains.
4. Configuration mistakes
1. What are the sources of routes, and what are their characteristics?
Routes come from three sources: direct routes discovered by the data-link layer, manually
configured static routes, and routes discovered by dynamic routing protocols. Routes found
by the data-link layer need no maintenance; they are discovered automatically when
data-link layer protocols come up. The disadvantage of this source is that it can only find
routes to directly connected segments; routes to other segments cannot be discovered.
Manually configured static routes need maintenance and cannot adapt automatically when
the network topology changes. Dynamic routing protocols can discover and modify routes
automatically without human interference, but their cost is high and the configuration
process is rather complicated.
2. What are the classifications for dynamic routing protocols?
Dynamic routing protocols can be grouped into IGPs and EGPs according to their working
areas, into Distance-Vector and Link-State protocols according to their algorithms, and into
unicast and multicast routing protocols according to their applications.
3. What are the values that can be found in a routing table?
The routing table includes factors like destination, mask, protocol, preference, metric,
nexthop and interface.
4. What does equal cost multi-path mean?
Equal cost multi-path refers to two or more routes heading for the same destination with
the same metric. When these routes also have the same preference, they are all added to
the routing table and IP packets are sent over them alternately, which provides load
balancing. The metric must be equal under the routing protocol in use: with RIP, the
number of hops to the destination must be equal; with OSPF, the cost between the source
and the destination over the two routes must be equal, based on the link type (e.g. serial
or Ethernet) and the bandwidth of such links in accordance with OSPF cost values.
A static route is a special route that is configured by a network administrator manually. The
disadvantage of static routes is that they cannot adapt to the change in a network automatically,
so network changes require manual reconfiguration. Static routes are fit for networks with
comparatively simple structures. It is not advisable to configure and maintain static routes for a
network with a complex structure.
If a static route is labeled with the reject attribute, all packets sent to the destination of the
route are discarded and an ICMP packet is sent to notify the source that the destination is
unreachable. When a static route is assigned the blackhole attribute, any packet heading
for the destination of the route is also discarded, but in this case no ICMP packet is sent to
notify the source.
In the example above, the two routers are connected by serial ports, and on RTB we can
configure a static route destined for the loopback segment of RTA. The command for
configuring the route can take one of the three forms below:
[RTB] ip route-static 10.1.1.1 255.255.255.255 1.1.1.1
[RTB] ip route-static 10.1.1.1 32 1.1.1.1
[RTB] ip route-static 10.1.1.1 32 Serial 0
In the first form, the mask is written in dotted decimal notation.
In the second form, the mask is given by its length.
In the last form, the gateway address is replaced by the outbound interface name.
You can query the routing table by running display ip routing-table command after the
static route is configured. The static route is displayed in the routing table as highlighted in
red here.
Load balancing: Packets are sent through several links alternately when there are multiple
paths to the destination of those packets with the same cost. Static routes support load
balancing.
As shown in the figure above, three routes are configured to the same destination, network
10.1.1.1/32, on RTB. The three static routes share the default preference value of 60, and
no route to this network has a better preference. In this case, the three routes are
equal-cost routes which share the load, and packets are sent over the three routes
alternately.
Looking up the routing table, you can see there are three routes destined to the network
10.1.1.1/32 which will share the load over each ECMP supported link.
Route backup: Multiple routes heading for the same destination are configured, among
which the one with the most preferred (numerically lowest) preference value acts as the
main route. The other routes, with less preferred preference values, become backup routes.
As the above figure shows, two static routes are configured, destined for the network
10.1.1.1/32 on RTB. One of the routes has the preference value of the default value 60
while the other static route is configured with a less preferred preference value of 100.
By looking up the routing table, you may find that there is only one route heading for the
network 10.1.1.1/32 which acts as the main route. The route with the preference value of 100
has not been added to the routing table. It will be added to the routing table only after the route
with the preference value of 60 becomes invalid.
After running the display ip routing-table protocol static command, you can see the route
whose preference value is 60 is active, which means it is the main route to forward packets
to the network 10.1.1.1/32.
The route whose preference value is 100 is inactive and acts as the backup route. It will not
be added to the routing table or used for forwarding packets until the route with a preference
of 60 is no longer available, or the preference of this route is changed to a value lower than
the currently preferred route.
Note: The routing table here is a global routing table.
display ip routing-table can only list the active routes at present.
display ip routing-table protocol static can list all the static routes, including the
active routes and the inactive routes.
A look up of the routing table after disabling a port for the active route with the shutdown
command will result in the backup route becoming the active route, and being added to the
routing table to forward packets in place of the lost route.
The default route is one kind of special route. Usually, default routes are configured by
administrators manually but they can also be generated by routing protocols such as OSPF
and IS-IS.
When a router receives a packet whose destination is not listed in the routing table, the
router will forward the packet to the next hop defined by the default route. You can run the
display ip routing-table command to see if a default route is configured.
A packet is forwarded along the default route if its destination matches no other route in
the routing table. If there is no default route either, the packet is discarded and an ICMP
message is sent to notify the source that the destination or network is unreachable.
A default route is configured by setting the destination address and the mask to be 0s
(0.0.0.0 0.0.0.0) when you run the ip route-static command to configure a static route.
In the routing table, you may see the destination address of the default route is set to be
0.0.0.0 and the mask length is 0.
The default route supports both load balancing and route backup. If multiple default routes
are configured with the same preference value, they share the load. If they have different
preference values, the most preferred one acts as the main route and the others are backup
routes.
As the table above shows, the two static routes highlighted in red share the load after being
configured with the same preference, the default value of 60.
What are the differences between load balancing and route backup for static routes?
Load balancing: Packets are sent through several links alternately when there are multiple
paths with the same metric to the destination.
Route backup: If there are multiple routes heading for the same destination, the most
preferred one acts as the main route, and the others, with less preferred preference values,
act as backup routes. A backup route is used only after the main route becomes invalid.
What is a default route?
The default route is a special route used for last-resort forwarding. Usually,
default routes are configured by administrators manually but they can also be generated
by routing protocols such as OSPF and IS-IS. A default route is the route whose network
address and mask are both 0s in the routing table.
Routing protocols are like languages that build bridges between routers for information
exchange. Information like the network status and its accessibility range is
shared among routers with the help of those routing protocols.
Dynamic routing protocols are not only responsible for selecting routes; they can also find
another best route to the destination when the original one becomes unavailable. This
capability matters most when the network topology changes, and it is the key advantage of
dynamic routing over static routing.
The common routing protocols in use at present are RIP, OSPF, IS-IS and BGP. RIP is
famous for its simple configuration and deployment; because it converges slowly, it is
designed for exchanging routing information within small to medium-sized networks.
Developed by the IETF, OSPF is a complicated but widely used protocol. IS-IS is a routing
protocol based on a simple design with good extensibility and is extensively applied in large
service provider networks.
BGP is used for communicating routing information between autonomous systems (ASs).
At present, the common dynamic routing protocols are RIP, OSPF, IS-IS and BGP. RIP is
simple to configure but converges slowly, so it is commonly used in small and medium-sized
networks. OSPF, developed by the IETF, is more complex in principle and is widely used.
IS-IS has a simple design and good scalability and is present in large service provider
network configurations.
BGP is used to exchange routing information between ASs.
A traditional definition for autonomous system (AS) is a collection of IP networks and routers
under the control of one entity that presents a common routing policy to the Internet. Now,
the definition of AS has developed into a collection of networks and routers that are
managed by multiple entities and adhere to several routing policies.
AS numbers are assigned by the IANA, and each AS is allocated a unique number to
differentiate it from others. AS numbers range from 1 to 65535 and are divided into two
ranges. The first range, 1 to 64511, contains public AS numbers, which may be used on the
Internet. AS numbers in the second range, 64512 to 65534, are known as private numbers
and can only be used internally within an organization.
Routing protocols can be divided into IGPs and EGPs according to their working area.
IGP (Interior Gateway Protocol): a set of routing protocols used within an autonomous
system, such as RIP and IS-IS. An IGP is mainly used to discover and calculate routes
within an autonomous system.
EGP (Exterior Gateway Protocol): used to connect different autonomous systems. An EGP,
such as BGP, controls the exchange of routing information between autonomous systems
with routing policies and route filtering mechanisms.
Routing protocols can be divided into Distance-vector protocols and Link-state protocols. RIP
and BGP are examples of Distance-vector protocols and OSPF and IS-IS fall in the group of
Link-state protocols. BGP is also called Path-vector protocol.
Distance-vector Routing Protocol
They use the Bellman-Ford algorithm to calculate paths. In Distance-Vector routing
protocols, each router sends its complete routing table to neighboring routers at fixed
intervals. What routers in a Distance-Vector network really care about is the metric, which
expresses the distance between the router and the destination network, and the vector,
which indicates the interface from which data is forwarded.
Advantages of Distance-vector protocol:
They are easy to configure and take up comparatively few resources of memory and CPU.
Disadvantages of Distance-vector protocol:
Poor extensibility; for example, the maximum hop count of RIP is limited to 16.
Link-state Routing Protocol
They are based on the Dijkstra algorithm which is sometimes called the Shortest Path First
(SPF) algorithm. This algorithm pays attention to the state of links or interfaces in the
network, including whether they are up or down and their IP addresses and masks. Routers
advertise the link-state information they know to the other routers in the area, so each router
builds a complete link-state database for the area. Every router then draws its own topology
map from the collected information, in the form of a graph showing which nodes are
connected to which other nodes.
The primary advantage of link-state routing is that it reacts more quickly, and in a bounded
amount of time, to connectivity changes. Routers send update information only when the link
state changes which saves the bandwidth of the links between routers. Some of the update
information only covers the information about the changes of link state instead of the whole
routing table.
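As an illustration of the link-state approach, here is a minimal Dijkstra (SPF) sketch in Python; the topology, node names and link costs are invented for the example:

```python
import heapq

def spf(graph, source):
    """Dijkstra shortest-path-first over a link-state database.
    graph: {node: {neighbor: link_cost}} -> {node: total_cost}."""
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry
        for v, cost in graph[u].items():
            nd = d + cost
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

# Hypothetical four-router topology with per-link costs
lsdb = {"A": {"B": 10, "C": 5}, "B": {"A": 10, "D": 1},
        "C": {"A": 5, "D": 20}, "D": {"B": 1, "C": 20}}
print(spf(lsdb, "A")["D"])  # 11 (via B, not the costlier C-D link)
```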
On some occasions, route information must be shared among different routing protocols.
For example, route information obtained from RIP may need to be imported into OSPF. The
process of exchanging route information between protocols is called route importation. The
process can be one-way, as when importing routes from RIP into OSPF, or two-way, with
RIP and OSPF learning routes from each other.
The costs used by different protocols cannot be compared, and there is no formula to
convert one protocol's cost into another's. So when importing routes from one protocol into
another, we must set the metric again (some protocols can use a default value set by the
system). Improper importation may burden routers or lead to loops, so it must be done with
care.
When a router starts (at t0), it generates an entry for each directly connected network
segment. Because the router is directly connected to the segment, the hop count is 0
and the next-hop router is represented as " " in the entry. The router then broadcasts
the routing information on all links.
At t1, routers receive and process the first update message. RTA receives the update
message from RTB and finds that RTB has a route to 10.1.3.0 with 0 hops. This route is
not contained in the routing table of RTA, so RTA adds this route to its routing table and
increases the hop count by 1. Thus RTA learns the route to 10.1.3.0 from the update
message sent by RTB. Similarly, RTB learns the route to 10.1.1.0 from the update
message sent by RTA and learns the route to 10.1.4.0 from the update message sent by
RTC. RTC learns the route to 10.1.2.0 from the update message sent by RTB.
At t2, the update period begins and new update packets are broadcast. RTA learns the
route to 10.1.4.0 from RTB. RTC learns the route to 10.1.1.0 from RTB. Through the
periodical update mechanism, each router obtains routes to all network segments. Finally,
the network converges.
The D-V algorithm requires that each router sends its routing table to adjacent
routers. When receiving the route update message, the router compares the new
routing information with the original routing information in its routing table. The
router then modifies the local routing table according to the comparison to keep
pace with the change of network.
The principles of updating the routing table are:
1. Adding new routes.
As shown in the figure, RTB receives the route update message from RTA. If a route entry
of RTA, for example the route to 10.1.1.0, does not exist in the routing table of RTB, RTB
adds this entry to its own routing table. In RTB's table, the destination network of this route
is 10.1.1.0; the metric (hop count) is RTA's metric for the entry plus 1; and the next-hop
address is the IP address of RTA's interface connected to RTB, namely 10.1.2.1.
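The update principle above, together with the 16-hop ceiling RIP uses to mean "unreachable", can be sketched in Python; the `rip_update` helper and the table layout are hypothetical names for illustration:

```python
INFINITY = 16  # RIP treats 16 hops as unreachable

def rip_update(table, neighbor_ip, advertised):
    """Apply a neighbor's advertised routes to the local table.
    table: {network: (hop_count, next_hop)}; advertised: {network: hop_count}."""
    for network, hops in advertised.items():
        metric = min(hops + 1, INFINITY)
        current = table.get(network)
        # Add unknown routes; always accept updates whose next hop is this
        # neighbor; otherwise keep an existing route with a better metric.
        if current is None or current[1] == neighbor_ip or metric < current[0]:
            table[network] = (metric, neighbor_ip)
    return table

# RTB starts with its direct routes, then learns 10.1.1.0 from RTA (10.1.2.1)
rtb = {"10.1.2.0": (0, "direct"), "10.1.3.0": (0, "direct")}
rip_update(rtb, "10.1.2.1", {"10.1.1.0": 0})
print(rtb["10.1.1.0"])  # (1, '10.1.2.1')
```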
When a network fault occurs, network convergence may slow down because the routes in
the routing table may be inconsistent with the actual network topology. In this case, a
routing loop may be generated. This figure provides a simple network structure to show
how a routing loop forms. Before a fault
occurs to network 11.4.0.0, all routers have correct and consistent routing tables
and the network is converged. In this example, route metric is represented by
hop count, so the metric of each link is 1. Router C is directly connected to
network 11.4.0.0, so the hop count is 0. Router B is connected to network
11.4.0.0 through Router C, so the hop count is 1. Router A is connected to
network 11.4.0.0 through Router B and Router C, so the hop count is 2. When a
fault occurs to network 11.4.0.0, route loop may be generated. The process is as
follows:
1. When a fault occurs on network 11.4.0.0, Router C receives the information about the
fault first. Router C then regards 11.4.0.0 as unreachable and waits until its update period
begins to advertise the route change to its adjacent routers. If the update period of Router
B begins earlier than that of Router C, Router C will learn a new route to 11.4.0.0 from
Router B. This learnt route is actually incorrect, so the routing table of Router C records a
wrong route. (The next hop is Router B; the destination is 11.4.0.0; the hop count increases
to 2.)
2. After learning the wrong route, Router C advertises it to Router B. Router B also records
a wrong route to 11.4.0.0, of which the next hop is Router C and the hop count increases
to 3.
3. Router B considers that network 11.4.0.0 is reachable through Router C, and
Router C considers that network 11.4.0.0 is reachable through Router B. Thus, a
loop is generated.
When a route loop occurs, the count of hops to network 11.4.0.0 keeps
increasing, and the network cannot converge. To avoid this problem, the RIP
protocol limits the maximum hop count to 16. In the figure, when the hop count
reaches 16, network 11.4.0.0 is considered unreachable. The router marks this
route unreachable in the routing table and does not update the route to 11.4.0.0
any more. By defining the maximum hop count, the distance-vector routing
protocol prevents the route metric from increasing infinitely when a routing loop occurs,
and incorrect route information is eventually corrected. However, the routing loop still exists
until the hop count reaches the maximum value; that is to say, this solution is only a
remedial measure that mitigates the damage caused by a routing loop rather than
preventing it. Therefore, designers of routing protocols provide other solutions to reduce
the probability of routing loops, for example split horizon and triggered update.
Route poisoning can avoid routing loops to some extent and can suppress network flapping
caused by interface resetting. When a fault occurs in the network or an interface is reset,
route poisoning suppresses the related route and starts a hold-down timer. Within the
hold-down time, the router does not update the routing table. In this way, the routing loop is
avoided and network flapping is suppressed.
As shown in the figure:
1. When a fault occurs in network 11.4.0.0, Router C suppresses the related
route entry in the routing table, that is, it sets the metric of the route to this
network to 16 or unreachable. At the same time, Router C starts a hold-down
timer. Within the hold-down time, if Router C receives a route reachable
message from the same neighbor (or the same direction), it marks the network
as reachable and stops the hold-down timer.
2. If Router C receives an update message from another neighbor advertising a route with
a higher weight, Router C updates the routing table by selecting the new route and stops
the hold-down timer.
3. Within the hold-down time, if Router C receives a route-reachable update message but
the weight of the new route is lower, Router C does not accept the new route. After the
hold-down timer expires, if Router C receives this update message again, it updates the
routing table.
page 256
Page 274
page 257
Page 275
Triggered update avoids the route loop to some extent, but it still cannot avoid
the following problems:
1. The packet containing the update message may be discarded or damaged.
2. If a router receives the periodically sent update message from the adjacent
router before receiving the triggered update messages, the router will learn the
wrong routing information.
The above problems can be solved when triggered update is combined with hold-down
timers. Within the hold-down time, the router does not update the route to the destination
network that has become unreachable. Combining triggered updates with hold-down timers
therefore ensures that the triggered update message has enough time to be transmitted
through the network. As shown in the figure, when Router C detects that network 11.4.0.0
is disconnected, it deletes the route to this network immediately. Then Router C sends a
triggered update message to Router B, setting the route metric to 11.4.0.0 to infinity (16) to
suppress this route. After receiving the triggered update message, Router B starts the
hold-down timer and marks the network as "may be disconnected." At the same time,
Router B sends a reverse update message to Router C and then sends a triggered update
message to Router A to advertise that network 11.4.0.0 is unreachable. Router A then
suppresses the route to 11.4.0.0 and sends a reverse update message to Router B.
When RIP is enabled, the initial routing table contains only direct routes. After RIP is
enabled on a router, the router broadcasts Request packets through all directly connected
interfaces. When an adjacent router receives a Request packet on an interface, it
broadcasts a Response packet to the network connected to that interface according to its
routing table. When the router receives the Response packet from the adjacent router, it
builds its routing table from the Response packet. Based on the characteristics of the D-V
algorithm, devices involved in RIP are classified into active and passive devices. An active
device actively broadcasts route update packets, while a passive device receives them
passively. Generally, a host is a passive device, and a router is both an active and a
passive device: it broadcasts route update packets and also receives D-V packets from
other active devices and updates its routing table.
Huawei Networking Technology and Device Module 2 Part 6 RIP Routing
Protocol
Confidential Information of Huawei. No Spreading without Permission
Copyright 2008 Huawei Technologies Co., Ltd. All rights reserved.
Based on RIP, a router broadcasts its routing table through the Response
packets every 30 seconds. After receiving the Response packet from the
neighbor, the router calculates the route metric in the packet through RIP. Then
the router compares the calculated metric with the metric of the route in the
routing table and updates the routing table. The route metric is calculated by the
following formula: metric = Min (metric + cost, 16). Here, "metric" is the metric in
the packet. Cost is the metric from the neighbor to the network where the packet
is received. The default value of cost is 1 (one hop). 16 means that the
destination network is unreachable. When the local router receives a route
update packet, it updates the routing table based on the following principles:
For an existing route entry in the routing table, if the next hop is the adjacent
router, the local router updates the entry (keeps the original metric and only
resets the aging timer), regardless of whether the metric in the route update
packet is larger or smaller. If the next hop is not the adjacent router, the local
router updates the route entry only when the metric in the route update packet is
smaller than the previous metric.
For a route entry that does not exist in the routing table, the router adds it to the
routing table if the metric is less than 16 (unreachable). Each entry in the routing
table has an aging timer. If a route entry is not updated within 180 seconds, the
aging timer times out and the metric of this route changes to 16 (unreachable).
After the metric of a route changes to 16 and the route has been advertised in four
successive Response packets (120 seconds), this route is deleted from the
routing table.
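The update rules above can be sketched as a small Python model (a simplified D-V update that always accepts the advertising neighbor's metric; the table layout and names are illustrative, not part of any RIP implementation):

```python
INFINITY = 16  # RIP's "unreachable" metric

def rip_update(table, neighbor, advertised):
    """Apply one Response packet from `neighbor` to `table`.

    `table` maps destination -> (next_hop, metric);
    `advertised` maps destination -> metric as received; cost is 1 per hop.
    """
    for dest, metric in advertised.items():
        # metric = Min(metric + cost, 16), with the default cost of 1
        new_metric = min(metric + 1, INFINITY)
        if dest in table:
            next_hop, old_metric = table[dest]
            if next_hop == neighbor:
                # Route learned from this neighbor: accept its metric
                # (real RIP also resets the aging timer, not modeled here).
                table[dest] = (neighbor, new_metric)
            elif new_metric < old_metric:
                # Better route via a different neighbor: replace the entry.
                table[dest] = (neighbor, new_metric)
        elif new_metric < INFINITY:
            # Unknown destination: install it only if it is reachable.
            table[dest] = (neighbor, new_metric)
    return table
```

For example, when the neighbor advertises a destination with metric 16, the local entry learned from that neighbor is also marked unreachable.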
RIP has two versions: RIPv1 and RIPv2. RIPv1 does not support Variable Length Subnet
Masks (VLSM). RIPv2 supports VLSM, route aggregation, and Classless Inter-Domain
Routing (CIDR). In addition, RIPv2 supports plain text authentication and MD5
authentication. In RIPv1, packets are transmitted in broadcast mode. RIPv2 supports two
transmission modes: broadcast and multicast. Multicast is adopted by RIPv2 by default.
The multicast address for RIPv2 is 224.0.0.9. An advantage of multicast transmission is
that the networks that do not support RIP will not receive the RIP packets. Also with
multicast, network segments that run RIPv1 will not receive or process the RIPv2 routes.
This figure shows the format of the RIPv1 packet. A RIPv1 packet contains a
command field, a version field, and multiple route entries (up to 25 entries). Each
route entry consists of the Address Family Identifier, reachable IP address, and
hop count (Metric). If a router needs to send more than 25 route entries, the
entries must be sent in multiple RIP packets. From this figure, you can see that
the RIP packet header takes four bytes, and each route entry takes 20 bytes.
Therefore, the maximum length of a RIP packet is 4 + 25 x 20 = 504 bytes. Counting
the 8-byte UDP header, the maximum length of the RIP packet (excluding the IP
header) is 512 bytes.
The values and functions of the fields in the RIP packet are as follows:
Command: The value can be only 1 or 2. 1 represents the Request packet; 2
represents the Response packet. A router or host sends the Request packet to
require routing information from the peer router. The peer router responds by the
Response packet. But in most cases, a router periodically sends Response
packets without waiting for the Request packet.
Version: For RIPv1, the value is 1.
Address Family Identifier (AFI): For the IP protocol, the value is 2.
IP address: indicates the destination address of the route. The value can be a
network address or the address of a host.
Metric: indicates the hop count. The value ranges from 1 to 16.
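For concreteness, the layout described above can be sketched with Python's struct module (field widths follow the format just described; the helper names are our own):

```python
import socket
import struct

def rip_header(command, version=1):
    """RIPv1 header (4 bytes): Command (1), Version (1), must-be-zero (2)."""
    return struct.pack("!BBH", command, version, 0)

def rip_entry(ip, metric, afi=2):
    """RIPv1 route entry (20 bytes): AFI (2), zero (2), IP address (4),
    zero padding (8), Metric (4)."""
    return struct.pack("!HH4s8xI", afi, 0, socket.inet_aton(ip), metric)

# A Response (Command = 2) carrying one route entry:
packet = rip_header(2) + rip_entry("10.0.0.0", 1)

# 4-byte header + 25 entries x 20 bytes = 504 bytes; adding the 8-byte
# UDP header gives the 512-byte maximum (excluding the IP header).
max_payload = len(rip_header(2)) + 25 * len(rip_entry("10.0.0.0", 1))
```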
Compared with the RIPv1 packet, the RIPv2 packet has the following new fields:
Route tag: 16 bits, used to mark the external route or the route redistributed to RIPv2
protocol.
Subnet mask: 32-bit mask, used to identify the network address and subnet address in
the IP address.
Next hop: 32-bit next-hop IP address.
The display rip command is used to display the running status and configuration
of RIP. Part of the display information is described in the following. Pay attention
to the contents marked in red.
RIP process: 1 indicates that the RIP process number is 1.
Public VPN-Instance indicates the public network VPN.
RIP version: RIP-2 indicates that the RIP version is 2.
Preference: 100 indicates that the precedence of the RIP protocol is 100.
Maximum number of balanced paths: 6 indicates that the maximum number of
equal-cost routes is 6.
Update time: 30 sec indicates that the route update interval is 30 seconds.
Age time: 180 sec indicates that the aging time of the RIP route is 180
seconds.
Suppress time: 0 sec indicates that the route suppression duration is 0
seconds.
Networks: 192.168.1.0 172.16.0.0 indicates the network where RIP is enabled.
The display rip route command is used to display all active and inactive routes, and the
timer of each route. Fields in the output:
Dest: destination IP address
Nexthop: next-hop IP address
Cost: route metric
Tag: route tag
Sec: remaining time of the route timer, in seconds
Using the undo summary command, you can disable classful route aggregation to allow
routing between subnets. In this case, routing information of the subnet is
advertised. Route aggregation reduces the routing information in the routing table. By
default, route aggregation is enabled in RIPv2. In this example, three IP addresses are
configured for three loopback interfaces on Router A. RIP is enabled on these IP
addresses. Route aggregation is disabled by the undo summary command. These IP
addresses are advertised to other routers. Viewing the routing table of Router B, you can
find three host routes with these IP addresses.
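The effect can be illustrated with Python's ipaddress module (the loopback addresses below are illustrative, not those of the example): with auto aggregation, the host routes collapse to the natural network; with undo summary, they are advertised individually.

```python
import ipaddress

def natural_network(ip):
    """Classful (natural) network of an IPv4 address:
    Class A -> /8, Class B -> /16, Class C -> /24."""
    first = int(ip.split(".")[0])
    prefix = 8 if first < 128 else 16 if first < 192 else 24
    return ipaddress.ip_network(f"{ip}/{prefix}", strict=False)

# Illustrative loopback addresses configured on Router A
loopbacks = ["192.168.10.1", "192.168.10.2", "192.168.10.3"]

# Default RIPv2 behavior (auto aggregation): one classful route is advertised.
summarized = {natural_network(ip) for ip in loopbacks}

# After `undo summary`: each host route is advertised individually,
# so Router B's routing table shows three /32 routes.
host_routes = [ipaddress.ip_network(ip + "/32") for ip in loopbacks]
```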
Using the rip summary-address ip-address mask command, you can configure
a RIP router to advertise an aggregated local IP address. This command is
supported
by VRP5.3.
ip-address: network address to be aggregated
mask: subnet mask
Using the undo rip summary-address ip-address mask command, you can
cancel the configuration. If both auto route aggregation and manual route
aggregation are enabled, the manually aggregated routes are integrated into the
automatically aggregated routes; that is, auto route aggregation takes effect. If
the mask length of the aggregated route is smaller than the natural mask length,
perform manual route aggregation and do not enable auto route aggregation at
the same time.
Each routing protocol has a preference. The preference influences the routing policy when
selecting the best route among routes learned through different protocols. The larger the
value, the lower the preference. You can set the preference of the RIP protocol
manually.
Set the preference of RIP protocol.
[Quidway-rip] preference value
Restore the preference of the RIP protocol to the default value.
[Quidway-rip] undo preference
By default, the preference of RIP protocol is 100.
RIP can import the routing information of other routing protocols into the
RIP routing table. You can set the default cost of the imported routes. Routes that
can be imported into the RIP routing table are: direct routes, static routes, OSPF
routes, BGP routes, and IS-IS routes. Enable RIP to import routes of other
routing protocols:
[Quidway-rip] import-route protocol [ allow-ibgp ] [ cost value ] [ route-policy
route-policy-name ]
Disable RIP from importing routes of other routing protocols:
[Quidway-rip] undo import-route protocol
By default, RIP does not import routes of other routing protocols. When protocol
is specified as BGP, the allow-ibgp keyword is optional. The import-route bgp
command configures RIP to import only EBGP routes. The import-route bgp
allow-ibgp command configures RIP to import both EBGP routes and IBGP
routes. This configuration may cause routing disorder, so use this command with
caution. If no cost is set for the imported routes, RIP takes the default cost,
which is 0. In this example, the route cost is set to 10. Therefore, the cost of an
imported route is the configured cost plus 1, so the cost of the routes received
by RTB is 11.
Using the rip metricin value command, you can set the metric increment for the
RIP route received on an interface.
value: specifies the metric increment for the RIP route received on an interface.
The value ranges from 0 to 15. By default, the value is 0.
When receiving a route, the router adds the RIP increment of the receiving
interface to the route, and then adds the route to the routing table. Thus, the
metric in the routing table is changed. Therefore, when the RIP metric of an
interface increases, the metric value of the RIP routes received on the interface
also increases.
When RTA receives the route 10.1.1.1/32 in a RIP update message, it calculates
the cost of 10.1.1.1/32. The metricin value of RTA's receiving interface is set to 5,
and the cost of 10.1.1.1/32 in the RIP message is 1. Therefore, the cost of
10.1.1.1/32 in RTA's routing table is 6 (5 + 1 = 6).
Using the rip metricout value command, you can set the increment of metric for the RIP
route sent from an interface.
value: specifies the metric increment for the RIP route sent from an interface. The value
ranges from 1 to 15. By default, the value is 1.
Before a route is advertised, the metric increment is added to this route. Therefore, when
the RIP metric of an interface increases, the metric value of the RIP routes
sent from the interface also increases. However, the metric in the routing table is not
changed. When RTB receives the routes 172.16.1.X in RIP update messages, the metric of
172.16.1.X in the update is 4, which is set by "rip metricout 4" on RTA's serial interface.
The default metricin of RTB's serial interface is 0, so RTB calculates 4 + 0 = 4, which
becomes the cost of 172.16.1.X in RTB's routing table.
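The metricin/metricout arithmetic of the two examples can be checked with a short sketch (a simplified model of the behavior described above; actual VRP behavior may differ in edge cases):

```python
def advertised_metric(table_metric, metricout):
    """Metric placed in the update sent from an interface: `rip metricout`
    adds its value before the route is advertised; the local routing
    table is NOT changed."""
    return min(table_metric + metricout, 16)

def installed_metric(received_metric, metricin):
    """Metric installed in the routing table for a route received on an
    interface configured with `rip metricin`."""
    return min(received_metric + metricin, 16)

# metricin example: cost 1 in the update, metricin 5 -> table cost 6.
print(installed_metric(1, 5))
# metricout example: assuming the directly connected route has cost 0,
# `rip metricout 4` advertises it with metric 4; with metricin 0 on the
# receiving side, the installed cost is 4.
print(installed_metric(advertised_metric(0, 4), 0))
```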
The Routing Information Protocol (RIP) is a relatively simple dynamic routing protocol. RIP
is a routing protocol based on the distance-vector (D-V) algorithm.
RIP exchanges routing information through UDP. Based on RIP, a router sends update
messages every 30 seconds. If a router does not receive the update
message from the peer router within 180 seconds, the router marks all routes learned from
the peer router as unreachable. If the router still does not receive
the update message from the peer router in the subsequent 120 seconds, it deletes these
routes from the routing table.
RIP represents the distance to the destination network by the hop count. In RIP, the hop
count between a router and a directly connected network is 0. If the network is reached
through one other router, the hop count is 1, and so on: the hop count increases with the
number of routers between the source router and the destination network. In RIP, the
metric is an integer ranging from 0 to 15. A hop count equal to or larger than 16 is
defined as infinite, that is, the destination network or host is unreachable.
RIP runs on top of UDP. RIP routing information is encapsulated in UDP datagrams, and
RIP uses UDP port 520 to exchange routing information. When a router receives a route
update message from a remote router, it notifies other routers of the changed route. In
this way, routes are synchronized on all routers in the network.
To improve the routing performance and avoid route loop, RIP supports split horizon,
poisoned reverse, and triggered update. In addition, RIP can import
routes learned through other routing protocols.
Network description:
RTA is connected to RTB through a serial interface. RTA and RTB are configured with two
loopback interfaces each. IP addresses of these interfaces are shown in the figure.
Fault description:
After the configuration, the routes learned through RIP are not found in the routing table.
Command lines in Part 7 (RIP troubleshooting) are based on VRP3.4.
The flowchart provides the main procedure for troubleshooting. When a router fails to receive
part of or all the routes, perform the following steps to locate the fault:
1. Check whether RIP is enabled on the incoming interface.
Use the network command to specify the network segment where RIP is enabled. An
interface can receive and send RIP routes only if the RIP protocol is enabled on this interface.
You can use the display current-configuration configuration rip command to view the
information about the RIP-enabled network segment and check whether the incoming
interface is included in this network segment. The specified network segment must be a
natural network segment.
2. Check whether the incoming interface works normally.
Use the display interface command to view the status of the incoming interface. If the
physical status of the interface is Down or Administratively Down, or the protocol status is
Down, RIP cannot function normally on the interface. Therefore, you must ensure that the
status of the incoming interface is normal.
3. Check whether the version of the RIP packets sent from the peer is the same
as the RIP version configured on the local interface. If the version of the received RIP
packets is different from the RIP version configured on the incoming interface, the RIP routes
may not be accepted correctly.
4. Check whether the undo rip input command is configured on the incoming interface. The
rip input command allows the specified interface to receive RIP packets; the undo rip
input command prevents the specified interface from receiving RIP packets. If undo
rip input is configured on the incoming interface, RIP packets received on this interface
cannot be processed, so RIP routes cannot be received.
Networking description:
RTA is connected to RTB through a serial interface. RTA and RTB are configured with two
loopback interfaces each. IP addresses of these interfaces are shown in
the figure.
Fault description:
After the configuration, the router does not send all or some of the routes.
The flowchart provides the main procedure for troubleshooting. When a router
fails to send part of or all routes, perform the following steps to locate the fault:
1. Check whether RIP is enabled on the outgoing interface.
Use the network command to enable RIP on the network segment of the outgoing interface.
An interface can receive and send RIP routes only if the RIP protocol is enabled
on this interface. You can use the display current-configuration configuration
rip command to view the information about the RIP-enabled network segment and
check whether the outgoing interface is included in this network segment. The
specified network segment must be a natural network segment.
2. Check whether the outgoing interface works normally.
Use the display interface command to view the status of the outgoing interface. If
the physical status of the interface is Down or Administratively Down, or the
protocol status is Down, RIP cannot function normally on the interface. Therefore,
you must ensure that the status of the outgoing interface is normal.
3. Check whether the silent-interface command is configured on the outgoing
interface.
The silent-interface command is used to suppress the interface from sending the
RIP packet.
The display current-configuration configuration rip command is used to check
whether the interface is suppressed from sending the RIP packet.
Enable the interface if it is disabled.
Fault description: RTA and RTB use different authentication keys, so they cannot receive
routes from each other.
Analysis: If a router cannot receive any route from the peer, check the following:
1. Whether RIP is enabled on the interfaces connecting the peer.
2. Whether the link between the routers is normal.
3. Whether the routing protocol is configured properly.
Using related commands, you can see that RIP is enabled on the interfaces and the link
between the routers is normal, but the configuration of RTA is different from that of RTB.
Comparing their configurations, you can see that password authentication is configured for
RTA and RTB. Following the preceding sections, you already know that RIPv2 supports
the authentication of update packets to improve security. The authentication modes and
authentication keys must be the same on the two routers. If the authentication modes or
authentication keys on two routers are different, the routers cannot exchange routing
information and they ignore the update packets.
After the authentication mode and key are configured correctly, the routers can learn
routing information of each other from the update packets.
Fault description: The metric of the route exceeds the hop count limit in RIP, so the router
cannot accept the route.
Analysis: RIP limits the hop count to 15. If the hop count in a network exceeds 15, RIP is
not applicable to this network.
Additional metric is the increment (hop count) added to the original metric. The rip metricin
command is used to set the increment added to the received route when the route is added
to the routing table. The metric of this route is also changed in the routing table. The rip
metricout command is used to set the increment added to a route to be advertised. But the
metric of this route is not changed in the local routing table. For example, after you
configure rip metricout 16 on RTB, the hop count of the route to 172.16.3.0 is 16 when the
route is received by RTA. RTA does not add the route to the routing table.
Command lines used here are based on VRP3.4.
Viewing the routing tables of RTA and RTB, you can find that the route to 172.16.2.0 is
added to the routing table of RTB, while route to 172.16.3.0 is not added to the routing table
of RTA.
Change the additional metric to 15, and RTA will add the route to 172.16.3.0 to its routing
table. Command lines used here are based on VRP3.4.
Fault description: The subnets are discontiguous, and thus the routing information cannot be
added to the routing table.
Analysis: Network 162.16.0.0 is divided into subnets. RTA and RTB use
RIPv1, which is a classful routing protocol. Therefore, the routers send
update packets with the Class B network address 162.16.0.0 rather than the accurate
subnet addresses 162.16.2.0 and 162.16.3.0.
When RTA receives the update packet for the route to 162.16.0.0, RTA does not add the
route to the routing table, because it has a directly connected network
segment 162.16.2.0.
To solve this problem, you can enable RIPv2 on the routers, because RIPv2 is a classless
routing protocol. Use the undo summary command to enable the CIDR function.
After you modify the configuration, RTB advertises the route with the accurate address
162.16.3.0, and RTA adds the route to the routing table.
Open Shortest Path First (OSPF) is an IGP protocol based on the link state algorithm.
OSPF was developed by the Internet Engineering Task Force (IETF). OSPF has three
versions. OSPFv1 is defined in RFC 1131. This version was in the experimental stage and
has never been released for public use. OSPFv2 is used for IPv4 and was initially defined in
RFC 1247. RFC2328 is the latest standard document for OSPFv2. OSPFv3 is used for IPv6.
Unless otherwise specified, OSPF refers to OSPFv2 in this course.
OSPF is borne by the IP protocol and uses IP protocol number 89. An OSPF packet
consists of the header and the packet body. The format of the OSPF packet is described in
the HCDP course and is not explained here.
OSPF is an open standard routing protocol and is extensively used by various network carriers.
OSPF can be applied to both the enterprise network and the carrier-class IP network. This slide
lists the differences between OSPF, RIPv2 and RIPv1.
Compared with RIP, OSPF is a more advanced interior gateway protocol. OSPF
and RIP are totally different, although they are both IGPs. OSPF is based on the
link state algorithm, while RIP is based on the distance-vector algorithm. As
described in the course of the RIP protocol, distance-vector protocols select
routes based on the hop count and do not consider network resources such as
the link bandwidth. Under this condition, a path with high bandwidth may not be
selected.
OSPF selects routes according to the link state. OSPF enables fast convergence
of routes and does not limit the hop count. OSPF routers advertise the link
information, instead of periodically sending route update packets. Therefore,
OSPF is more applicable to large-scale networks. (A link can be regarded as an
interface on a router. The link state is the description of the interface and the
relation between the local router and the adjacent router.) The calculation
process of OSPF will be described later.
Unlike early routing protocols that use the distance-vector algorithm, OSPF uses the link
state algorithm. The following describes the route calculation process of the link state
algorithm.
An OSPF router floods the link state advertisement (LSA) to notify other routers of the
status of the local link, for example, available interface, reachable neighbor, and the
information about the adjacent network segment. Flooding is a process of sending and
synchronizing the link state between routers.
Each router generates a link state database (LSDB) according to the LSAs advertised by
other routers and its local LSAs. The LSDB describes the detailed network topology in the
routing area. In the same area, all routers have the same LSDB.
Based on the LSDB, each router calculates a shortest path tree with the SPF algorithm. The
local router is the root of the tree, and other nodes in the network are leaves. The shortest
path tree calculated through the SPF algorithm does not have route loops.
The shortest path tree of each router provides the routing table listing the routes to other
nodes in the network. Thus, each OSPF router knows the routes to other routers.
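The SPF calculation is Dijkstra's shortest-path algorithm. A minimal sketch, with the LSDB modeled as a simple cost map (the topology and costs are illustrative):

```python
import heapq

def spf(lsdb, root):
    """Dijkstra's shortest-path-first over an LSDB modeled as
    {node: {neighbor: link_cost}}. Returns {node: cost_from_root}."""
    dist = {root: 0}
    heap = [(0, root)]        # priority queue of (cost, node)
    visited = set()
    while heap:
        cost, node = heapq.heappop(heap)
        if node in visited:
            continue
        visited.add(node)
        for neighbor, link_cost in lsdb.get(node, {}).items():
            new_cost = cost + link_cost
            if new_cost < dist.get(neighbor, float("inf")):
                dist[neighbor] = new_cost
                heapq.heappush(heap, (new_cost, neighbor))
    return dist

# Every router in the area holds the same LSDB, but each roots the
# shortest path tree at itself:
lsdb = {"A": {"B": 1, "C": 5}, "B": {"A": 1, "C": 2}, "C": {"A": 5, "B": 2}}
print(spf(lsdb, "A"))   # A reaches C via B at cost 3, not directly at cost 5
```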
OSPF supports five types of packets, which contain the same OSPF header. An OSPF
router uses the following packets to discover neighbors and maintain the neighbor relation,
synchronize the LSDB, and exchange routing information:
Hello packet: It is a common packet used to discover neighbors and maintain the neighbor
relation. The Hello packet is also used to elect the designated router (DR) and backup
designated router (BDR) in the broadcast network and NBMA network.
DD packet: Routers use DD packets to describe their LSDBs when they synchronize the
LSDBs. A DD packet consists of an LSA and an LSA header. The header uniquely identifies
an LSA. The LSA header makes a small part of the packet, and thus the traffic of protocol
packets transmitted between routers can be reduced. The peer router checks whether an
LSA already exists according to the LSA header.
LSR packet: After two routers exchange the DD packets, each router knows the LSAs that
exist in the LSDB of the peer but do not exist in the local LSDB. Then the router sends an
LSR packet to request for these LSAs. The LSR packet contains the summary of the
required LSAs.
LSU packet: This packet is used to send the required LSAs to the peer router. An LSU
packet contains the combination of multiple LSAs.
LSAck packet: This packet is used to acknowledge the received LSU packet.
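All five packet types share the 24-byte OSPF common header (version, type, packet length, router ID, area ID, checksum, and authentication fields). A sketch of that layout in Python's struct module (checksum and authentication left zero; helper names are our own):

```python
import struct

# Type values of the five OSPF packets
OSPF_TYPES = {1: "Hello", 2: "DD", 3: "LSR", 4: "LSU", 5: "LSAck"}

def ospf_header(ptype, router_id, area_id, length=24):
    """Build an OSPFv2 common header: Version (1), Type (1),
    Packet length (2), Router ID (4), Area ID (4), Checksum (2),
    AuType (2), Authentication (8) = 24 bytes."""
    return struct.pack("!BBH4s4sHH8s", 2, ptype, length,
                       router_id, area_id, 0, 0, b"\x00" * 8)

# A Hello header for router ID 10.0.0.1 in Area 0:
hdr = ospf_header(1, bytes([10, 0, 0, 1]), bytes(4))
print(len(hdr), OSPF_TYPES[hdr[1]])
```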
To exchange the link status and routing information, two OSPF routers need to establish the
neighbor relation.
Neighbor
After an OSPF router is started, it sends Hello packets through the OSPF interface to
discover neighbors. The OSPF router that receives the Hello packet checks the parameters
in the Hello packet. If the parameters are consistent on the two routers, the two routers
establish the neighbor relation.
Adjacency
Not all neighboring routers can establish the adjacency. Adjacency establishment depends
on the network type. The real adjacency is established only if the routers exchange DD
packets successfully and can exchange LSAs. To send LSAs, a router must discover the
neighbor and establish the adjacency with the neighbor.
In this example, RTA is connected to three routers through Ethernet. RTA has three
neighbors, but you cannot say that RTA establishes three adjacencies.
Not all neighbors can establish the adjacency to exchange link status and routing
information. Adjacency establishment depends on the network type, namely, the layer-2 link
type of the OSPF network.
Based on the link-layer protocol, OSPF networks are classified into the point-to-point
network, broadcast network, NBMA network, and point-to-multipoint network. A point-to-point
network is a network that directly connects two routers.
Link-layer protocols for the point-to-point network are PPP, LAPB, and HDLC. In a
point-to-point network, neighboring routers can establish the adjacency directly.
Broadcast network: If the link-layer protocol is Ethernet or FDDI, the network is considered
a broadcast network by default. The network shown on the right of the figure is a broadcast
network. Ethernet is a common link-layer protocol for a broadcast network. In the broadcast
network, NBMA network, and point-to-multipoint network, routers establish adjacencies
selectively.
A non-broadcast network can connect more than two routers, but it does not support
broadcast.
In non-broadcast networks, OSPF has two operation modes: non-broadcast multi-access (NBMA) and point-to-multipoint.
NBMA
In an NBMA network, routers must establish a full connection. An ATM network adopting full
connection is an NBMA network.
In an NBMA network, OSPF simulates the operations on broadcast networks; however,
the neighbors of each router must be configured manually.
Common link layer protocols for the NBMA network are Frame Relay and ATM.
Point-to-multipoint
A network that cannot establish the full connection needs to adopt the point-to-multipoint
mode. A Frame Relay network is such a network. In this mode, the entire non-broadcast
network is regarded as a group of point-to-point networks. A router discovers its neighbors
by using a lower layer protocol, for example, inverse ARP. The point-to-multipoint network
type is not a default type in OSPF.
In broadcast and NBMA networks, if any two routers need to establish the adjacency, route
convergence is very slow. Use of the designated router (DR) and backup designated router
(BDR) solves this problem.
A broadcast network or NBMA network containing at least two routers has one DR and a
BDR.
Functions of the DR and BDR:
The DR and BDR reduce adjacencies, thus reduce exchanges of link state information and
routing information. Use of the DR and BDR reduces bandwidth consumption and lowers the
burden of routers. A router that is neither the DR nor the BDR is called a DRother. A DRother
establishes the adjacencies and exchanges the link state information and routing information
only with the DR and BDR. This mode greatly reduces adjacencies and raises route
convergence speed in large-scale broadcast and NBMA networks.
In the figure, RTA has three neighbors, but it establishes adjacencies only with the DR and
BDR. RTA does not establish the adjacency with the other router and does not exchange
routing information with this router. To sum up, establishment of the adjacency depends on
the network type. In a point-to-point network, two routers can establish the adjacency
directly. A point-to-multipoint network can be regarded as a group of point-to-point
networks; an adjacency is established between each pair of directly connected routers. In a
broadcast or NBMA network, a DR and a BDR are elected. DRothers establish adjacencies
only with the DR and the BDR.
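The saving can be quantified: a full mesh of n routers would need n(n-1)/2 adjacencies, while with a DR and BDR each of the n-2 DRothers adjoins only the DR and the BDR, plus the DR-BDR adjacency itself:

```python
def full_mesh_adjacencies(n):
    """Adjacencies if every pair of routers on the segment peered."""
    return n * (n - 1) // 2

def dr_bdr_adjacencies(n):
    """With a DR and BDR (n >= 2): each of the n-2 DRothers adjoins
    only the DR and the BDR, plus the DR-BDR adjacency itself."""
    return 2 * (n - 2) + 1

for n in (4, 10):
    print(n, full_mesh_adjacencies(n), dr_bdr_adjacencies(n))
# 10 routers: 45 full-mesh adjacencies versus 17 with a DR and BDR
```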
Autonomous system
An autonomous system is a combination of routers that use the same routing policy and are
managed by the same technical management organization. In the course of OSPF, an
autonomous system refers to a group of routers that exchange routing information by using
the same routing protocol. In this course, autonomous system is referred to as AS for short.
As an IGP protocol based on the link state algorithm, OSPF takes effect only within the AS.
Area
An area is a combination of routers and the networks connected to these routers. As shown
in the figure, three routers and the networks connected to them form an area. Single
area means that all routers running OSPF belong to the same area. OSPF requires that all
routers in the same area have the same LSDB.
Router ID:
To run OSPF, a router must have a router ID. The LSDB records the topology of the
network, including routers in the network. Each router must have a unique identifier to
identify itself in the LSDB. A router ID is a 32-bit integer that uniquely identifies a router
in an AS. Each OSPF router has a router ID. The router ID uses the format of an IP address.
The IP address of a loopback interface of the router is recommended as the router ID.
In this example, the topology for OSPF single area configuration is as follows:
RTA and RTB are located in the network. Each router uses the IP address of Loopback0 as
the router ID. RTA and RTB belong to Area 0. Here, configuration of interfaces and IP
addresses is not mentioned. For the configuration, refer to related basic courses.
The procedure for basic OSPF configuration is as follows:
Run the router id router-id command to specify the router ID. If the router ID is not specified,
OSPF uses the largest loopback IP address as the router ID. If no loopback interface is
configured, the largest IP address of physical interfaces is used as the router ID.
Run the ospf [ process-id] command to enable OSPF. OSPF supports multiple processes. If
the process ID is not specified, process 1 is used by default. Run the area area-id command
to enter the area view.
Run the network ip-address wildcard command to specify the network segment included in
the area. When specifying the network, use the wildcard mask of the network segment.
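Put together, the steps above give a minimal single-area configuration for RTA. This is a sketch: the loopback router ID 1.1.1.1 and the network segment 10.1.1.0/24 are assumptions for illustration, not taken from the figure.

```
[RTA] router id 1.1.1.1
[RTA] ospf 1
[RTA-ospf-1] area 0
[RTA-ospf-1-area-0.0.0.0] network 10.1.1.0 0.0.0.255
```

The wildcard mask 0.0.0.255 enables OSPF on all interfaces whose addresses fall within 10.1.1.0/24 and advertises that segment in Area 0.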
After the configuration, you can use related commands to check the configuration. For
example, you can use the display ospf routing command to display information about the
OSPF routing table.
[RTA] display ospf routing
In this example, this command displays the OSPF routing table of RTA. The display shows
that the OSPF routing table of RTA contains three route entries and they are all in Area 0.
Each route entry shows network segment, next hop, router that advertises this route, and the
area the route belongs to. From the display information, you can see that the OSPF
configuration is correct and RTA and RTB can exchange routing information.
As the network size increases, so does the number of devices in the same converged
domain, and in turn the size of their routing tables. If all routers in a large-scale network run OSPF
in a single area, an increasing amount of storage is occupied, because the LSDB grows
very large as routers are added to the network. A huge LSDB makes the SPF
calculation very complicated and burdens the CPU. As the network grows, the
probability of topology changes also increases and the network flaps frequently. Under
such conditions, a large number of OSPF packets are transmitted in the network, which lowers
bandwidth utilization. To make it worse, each time the network topology changes, all routers in the
network must recalculate their routes.
To avoid this problem, OSPF divides an AS into areas. Areas logically classify routers into different
groups, and each area is identified by an area ID. An area is a combination of network
segments, and OSPF allows any set of network segments to form an area.
Dividing an AS into areas reduces the LSDB size and reduces network traffic.
The detailed topology within an area is not sent to other areas. Areas exchange only abstract
routing information, not link state information, and maintain different LSDBs. Each router
maintains an independent LSDB for each area connected to it. Since link state
information is not advertised to other areas, the LSDBs are much smaller.
Area 0 is the backbone area that advertises the inter-area routing information (not detailed link
state information) summarized by edge routers to non-backbone areas. To avoid routing loops,
non-backbone areas cannot advertise inter-area routing information to one another. Each edge
router, therefore, must have at least one interface in Area 0. That is, all non-backbone areas
must be connected to the backbone area.
As shown in the figure, the area is divided into Area 0, Area 1, and Area 2. OSPF requires
non-backbone areas to be directly connected to the backbone network, and so Area 1 and
Area 2 are connected to Area 0. On RTA, Area 1 must be configured. On RTB, Area 0 and
Area 1 must be configured. On RTC, Area 0 and Area 2 must be configured. On RTD, Area 2
must be configured.
RTA and RTB exchange LSAs to generate LSDBs for Area 1, so the Area 1 LSDBs of RTA and
RTB are the same. Since RTB also belongs to Area 0, RTB maintains another LSDB for Area 0.
This LSDB is the same as the Area 0 LSDB on RTC. Similarly, RTC and RTD maintain the same
LSDB for Area 2.
The configuration is similar to that of a single area, and the commands are omitted
here. When configuring multiple areas, network segments must be specified for each area
separately. For example, if network segment 2.2.2.2 is specified in Area 1, this network
segment cannot also be specified in Area 0. Note that a network segment cannot belong to
multiple areas.
The configuration of RTA is similar to that of RTD; please take note of the configuration of RTD
later.
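As a sketch, the configuration of RTB, which sits in both Area 0 and Area 1, might look as follows. The router ID 2.2.2.2 and the Area 0 segment 10.1.2.0/24 come from the neighbor display in this example; the Area 1 segment 10.1.1.0/24 is an assumption for illustration.

```
[RTB] router id 2.2.2.2
[RTB] ospf 1
[RTB-ospf-1] area 1
[RTB-ospf-1-area-0.0.0.1] network 10.1.1.0 0.0.0.255
[RTB-ospf-1-area-0.0.0.1] quit
[RTB-ospf-1] area 0
[RTB-ospf-1-area-0.0.0.0] network 10.1.2.0 0.0.0.255
```

Because RTB has interfaces in both Area 0 and Area 1, it acts as the border router connecting Area 1 to the backbone.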
This page shows the configuration of RTC. Two areas are configured on RTC:
Area 0 and Area 2. Their network segments are specified separately.
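A sketch of that configuration is shown below. The router ID 3.3.3.3 and the Area 0 segment 10.1.2.0/24 come from the neighbor display in this example; the Area 2 segment 10.1.3.0/24 is an assumption for illustration.

```
[RTC] router id 3.3.3.3
[RTC] ospf 1
[RTC-ospf-1] area 0
[RTC-ospf-1-area-0.0.0.0] network 10.1.2.0 0.0.0.255
[RTC-ospf-1-area-0.0.0.0] quit
[RTC-ospf-1] area 2
[RTC-ospf-1-area-0.0.0.2] network 10.1.3.0 0.0.0.255
```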
Only one area (Area 2) is configured on RTD, and so network segments are
specified only for Area 2.
Using the display ospf routing command, you can verify the configuration. You can also use
the following command to view information about neighbors of an OSPF router.
display ospf peer
In the output information:
Area indicates the area a neighbor belongs to.
Interface indicates the interface connected to this neighbor.
Router Id indicates the router ID of the neighbor.
Address indicates the address of the neighboring interface.
RTB has two neighbors: RTA in Area 1 and RTC in Area 0.
First line of the output information: OSPF Process 1 with Router ID 2.2.2.2 indicates that the
router ID of RTB is 2.2.2.2.
The following lines:
Area 0.0.0.0 interface 10.1.2.1(Ethernet0/1)'s neighbors
Router ID: 3.3.3.3 Address: 10.1.2.2
These lines indicate that the neighbor belongs to backbone area Area 0; the IP address of the
interface connected to the neighbor is 10.1.2.1; the router ID of the neighbor is 3.3.3.3; the IP
address of the neighboring interface is 10.1.2.2.
Information about the neighbor in Area 1 is similar to the above information.
Besides the OSPF routing table and OSPF neighbor information, you can view the global
routing table. In this example, you can use the display ip routing-table command to view
the global routing table. The output information shows that five route entries are learned
through OSPF.
IEEE issued the standard for Fast Ethernet, namely, the 802.3u standard. It was followed by
the Gigabit Ethernet standards: IEEE 802.3z in 1998 and IEEE 802.3ab in 1999.
In the early days, Ethernet was a shared network medium. It commonly ran over the following
transmission media:
10Base5: thick coaxial cable, commonly known as thicknet. The 5 refers to a maximum
transmission distance of 500 meters.
10Base2: thin coaxial cable, commonly known as thinnet. The 2 refers to a maximum
transmission distance of close to 200 meters; the actual limit is 185 meters.
In early shared Ethernet, stations were attached to the coaxial cable with a device called
a pigtail, which was inserted by cutting a small hole in the cable. Extreme care had
to be taken when inserting a pigtail into the coaxial cable due to the potential for the central
core to short out on contact with the metallic shield, which could cause the failure of an
entire segment.
At the end of the 1980s, unshielded twisted pair (UTP) came into being and was soon
widely used. UTP is cheap and easy to manufacture, and with UTP data can be sent and
received over different wire pairs, which makes full duplex easy to implement.
Twisted pair cable comes in two types: shielded twisted pair (STP) and unshielded twisted
pair (UTP). STP is very effective at protecting cables from external electromagnetic
interference. Twisted-pair cables are categorized by their transmission performance, and
they come in the following types:
Category-3 twisted-pair cable: defined by ANSI and EIA/TIA-568. Its
transmission frequency is 16 MHz, and it is mainly used for transmitting voice or data
at rates of up to 10 Mbps. It is often used for 10BASE-T networks.
Category-4 twisted-pair cable: mainly used for transmitting voice or data
at a typical rate of 16 Mbps. It is commonly used in token ring LANs and
10BASE-T/100BASE-T networks.
Category-5 twisted-pair cable: mainly used for transmitting voice or data at rates of up to
100 Mbps. It is often used for 100BASE-T and 10BASE-T networks. It is one of the most widely
used Ethernet cables, but has generally been superseded by an enhanced version
known as Cat5e. The Cat5e standards are much more stringent and support transmission
over all four wire pairs (rather than the two pairs used by Fast Ethernet over Cat5),
allowing Cat5e to support Gigabit Ethernet.
Ethernet interfaces on networking devices come in two types: Medium Dependent
Interface (MDI) and Medium Dependent Interface Crossover (MDI-X). Ethernet interfaces
of routers and of Network Interface Cards (NICs) are usually MDI; the interfaces of
hubs are usually MDI-X.
Twisted-pair cables can be divided into straight-through and crossover types. Straight-through
cables are used for connecting an MDI device to an MDI-X device; crossover cables are mainly
used for connecting MDI to MDI or MDI-X to MDI-X devices. Note that the
pair sequence in a crossover cable results in a crossover at each end of the cable between
pins 1 & 3 and 2 & 6.
Usually 10 Mbit/s Ethernet is only located at the access layer of the network. The
new generation multimedia products, video and database products may easily
chew up the bandwidth of 10Mbit/s Ethernet.
Besides coaxial cable and twisted pair cable, the IEEE 802.3 cabling options also include
fiber: 10BASE-F. 10BASE-F was used in the early days of Ethernet, and its
transmission distance can reach 2 km.
The standard (10 Mbps) Ethernet transmission rate is too low to meet the demands of
today's networks. To meet these higher demands, IEEE issued the IEEE 802.3u standard
for Fast Ethernet, supporting data transmission rates of 100 Mbps.
Full-duplex Fast Ethernet is capable of sending and receiving data at 100 Mbps
simultaneously. Sending and receiving are independent due to the use of separate
wire pairs for transmitted and received data, which avoids collisions and interference and
improves network efficiency.
The standards body EIA/TIA stands for Electronic Industries Alliance/Telecommunications
Industry Association.
Gigabit Ethernet is an extension of the Ethernet defined by IEEE 802.3, achieving
transmission speeds of 1 Gbps.
Two standards have been defined for Gigabit Ethernet: IEEE 802.3z (for fiber and
short-haul copper) and IEEE 802.3ab (for twisted pair).
10G Ethernet is the cutting-edge technology in the Ethernet world. Its transmission speed is
10 times that of Gigabit Ethernet and its working area is much wider. 10G Ethernet can be
applied not only to traditional LANs, but also to WANs and MANs, which were once out of
reach of Ethernet due to its limited capabilities. 10G Ethernet also integrates seamlessly
with DWDM, which stretches Ethernet to a global geographical scope without being limited by
distance.
Two organizations, IEEE and the 10 Gigabit Ethernet Alliance (10GEA), played an important role
in the standardization of 10G Ethernet. IEEE is in charge of setting standards for 10G
Ethernet and issued IEEE 802.3ae in June 2002. IEEE 802.3ae specifies the
standard of 10G Ethernet over fiber, a standard not well suited to enterprise LANs
that commonly transmit data over copper cabling. To meet the requirements of 10G
Ethernet over copper cables, IEEE issued the 802.3ak standard in March 2004 and later
the IEEE 802.3an standard for 10G Ethernet over twisted-pair cabling.
The standard for 10G Ethernet over fiber is IEEE 802.3ae, which consists of 10GBASE-X,
10GBASE-R and 10GBASE-W.
10GBASE-X uses a tightly packed package involving a rather simple WDM device: four
receivers and four lasers that work at wavelengths of about 1300 nm, spaced roughly
25 nm apart. Each sender and receiver pair works at a signalling speed of 3.125 Gbps with a
data rate of 2.5 Gbps.
10GBASE-R is a serial interface based on a 64B/66B coding scheme instead of the
8B/10B scheme used by Gigabit Ethernet. Its data rate is 10.000 Gbps, which leads to a
clock rate of about 10.3 Gbps.
10GBASE-W refers to the WAN interface, which is compatible with SONET OC-192. The
clock rate and data rate of 10GBASE-W are 9.953 Gbps and 9.585 Gbps respectively.
IEEE 802.3ak is the standard for 10G Ethernet over copper cables: 10GBASE-CX4 allows
10G Ethernet to run over twinaxial copper lines up to a distance of 15 meters.
Carrier Sense Multiple Access with Collision Detection (CSMA/CD) is a set of rules
determining how network devices respond when two devices attempt to use a data
channel simultaneously. The basic operation of CSMA/CD is as follows:
(1) If the transmission medium is not occupied, a station may transmit;
otherwise it moves on to the next step.
(2) The station waits until the data channel is no longer occupied and then begins
to send data.
(3) If the station detects a collision (observed as a voltage level roughly twice the
normal level), it stops transmitting the frame and transmits a jam signal to let all
participating stations know about the collision.
(4) After a random time interval, the station that collided attempts to transmit again,
returning to step 1, and the process repeats.
Limited by the algorithm of CSMA/CD, the length of a frame sent over Ethernet
using 10M half duplex should be at least 64 bytes.
Two network devices appeared during the period when the Ethernet developed
from a shared to a switched network, one is the Hub, the other is the Repeater.
When the network is extended, signals degrade as they travel long distances,
which may often lead to corrupted data. The repeater is an electronic device
that helps to recover or amplify signals. The hub and repeater both work at the
physical layer.
The Ethernet switch operates at the data-link layer and has two basic functions:
Learning MAC addresses
Switching or filtering data
In the figure above, DMAC indicates the MAC address of the destination and
SMAC the MAC address of the source. The meaning of the Length/Type field
varies with its value. When its value is 1536 (0x0600) or greater, the field is a
type field; when the value is less than or equal to 1500, the field is a frame
length field. The DATA/PAD field carries the payload, padded if necessary so that
the frame length is at least 64 bytes. FCS refers to the checksum characters added
to a frame for error detection.
When the Length/Type field is a type field, the MAC sub-layer can submit the
frame to an upper-layer protocol immediately without going through the LLC
sub-layer. This is the Ethernet_II structure, which is very popular and used by
most protocols. In this structure, the data-link layer involves only the MAC
sub-layer and does not implement the LLC sub-layer. When the value of the
Length/Type field is less than or equal to 1500, it indicates the Ethernet_SNAP
structure, which was defined by the 802.3 committee but is not widely used.
In an 802.3 frame, there is a three-byte 802.2 LLC and a five-byte 802.2 SNAP
header. The values of Destination Service Access Point (DSAP) and Source
Service Access Point (SSAP) are both set to 0xAA. The Ctrl field is set to 3 and
the 3-byte org code field that comes after it is set to 0. The following TYPE field
functions the same as that of the Ethernet_II frame.
The switch receives a data frame from the local segment via one of its port
interfaces.
The switch builds its MAC address table by learning the source MAC of the
frames and maintains its MAC address table with the aging mechanism. The
switch looks for the destination MAC in its MAC address table and if the
destination MAC is in the table, then the switch sends the frame to the
corresponding port (the source port is not included); if the switch cannot find the
destination MAC in its table, then it sends the frame to all the ports except the
source port.
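On VRP switches, the learned entries can be inspected with the display mac-address command. The output below is illustrative only; the exact column layout varies by product and software version.

```
<Switch> display mac-address
MAC Address      VLAN  Port         Type     Aging
00e0-fc12-3456   1     Ethernet0/1  dynamic  300
```

A dynamic entry like this one was learned from a frame's source MAC and is removed when its aging timer expires, which keeps the table current as hosts move between ports.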
L2 switches help to avoid collisions in a shared Ethernet but broadcast flooding is still
widespread. How can this problem be resolved?
L3 switches take the physical form of a switch. Compared with routers, L3 switches provide
all the functions of L2 switches, including MAC-address-based frame forwarding, STP and
VLAN. However, L3 switches also have layer-3 functions that L2 switches lack, which
enables them to provide layer-3 internetworking between VLANs.
Most low-end and mid-range L3 switches perform L3 forwarding through exact
matching, which means searching a cache directly by the destination IP address of the
packet. Traditional routers, in contrast, use longest-prefix matching: they search the
routing table for the destination IP address and forward data along the longest
matching entry. Different manufacturers use different forwarding approaches.
Exact matching is better suited to a network whose routes are stable and whose
topology does not change often.
High-end L3 switches are often deployed in complex networks, where an exact-match
cache would have a poor hit rate. Most high-end switches therefore use hardware to
implement longest-prefix matching, which can be as efficient as exact matching. So for
high-end switches, exact matching is not a necessary choice.
Finally, L3 switches have evolved from L2 switches and are generally designed for LANs.
L3 switches therefore do not support many interface types beyond those relevant to
VLANs, such as Ethernet interfaces and ATM VLAN virtual interfaces, which avoids the
problems that multi-type interfaces have caused for routers. Since every interface of an
L3 switch is an Ethernet interface, collisions are avoided and the need for segmentation
is reduced. For uplink efficiency, however, many L3 switches are equipped with
high-speed POS interfaces.
IP Network Rules:
1. Communication within the same segment
When a host communicates with a destination host, it uses its own IP address and
subnet mask to judge whether the destination is in the same segment. If they are in
the same segment, the host obtains the MAC address of the destination through ARP
and fills that MAC address in the frame header.
2. Communication across segments
If the host finds that the destination is not in the same segment as itself, it obtains
the MAC address of the gateway instead of the MAC address of the destination and
fills the gateway's MAC address in the frame header.
Layer-3 switches decide whether to perform layer-2 or layer-3 forwarding according to
the above rules. The layer-3 switch performs layer-3 forwarding if the destination MAC
address is the MAC address of one of its VLAN interfaces; otherwise the switch performs
layer-2 forwarding within the VLAN.
When both negotiating parties support more than one operational mode, there must
be a precedence order to decide the final operational mode. The table above lists the
precedence of operational modes, from high to low, as defined by IEEE 802.3. The basic
principle is that a 100 Mbps mode has higher precedence than a 10 Mbps mode, and full
duplex is better than half duplex. 100BASE-T4 is listed before 100BASE-TX because
100BASE-T4 supports more cable types. Ethernet over fiber does not support
auto-negotiation; you need to configure the operational mode of the two link parties
manually, including the rate, duplex mode and flow control. If the two parties are
configured differently, they cannot communicate with each other.
Note: 100BASE-T4 can run over Category-3, Category-4 and Category-5 UTP and uses all
four pairs. 100BASE-TX can only run over Category-5 UTP or STP and uses two of the
four pairs.
Configuration Example
To set the duplex mode to full duplex:
[Quidway-Ethernet0/1] duplex full
Restore the duplex mode to its default value:
[Quidway-Ethernet0/1] undo duplex
You can configure the speed of an Ethernet port with the following commands. If the port
speed is set to be decided by auto-negotiation, the two parties negotiate the port speed
together. You can also configure the port speed manually by running the speed command.
By default, the port speed is in the auto state (decided by auto-negotiation).
Configuration Example
Set the port speed of Ethernet to 100Mbps:
[Quidway-Ethernet0/1] speed 100
Restore the port speed of Ethernet to its default value:
[Quidway-Ethernet0/1] undo speed
Network congestion occurs when data is transmitted between two ports with different
speeds (for example, when a 100 Mbps port sends data to a 10 Mbps port), or when a link
or node carries so much data that its quality of service deteriorates. Typical effects
include queuing delay, packet loss and retransmissions, which waste network
resources dramatically. In real networks, especially LANs, serious congestion is
relatively rare, so not all switches implement flow control functions. High-capability
switches should support backpressure in half-duplex mode and the flow control
defined by IEEE 802.3x in full-duplex mode.
In a full-duplex environment, the link between the server and the switch is a collision-free
channel, so the backpressure technique cannot be applied to it. The server would continue
to send packets to the switch until the switch's frame buffer overflows. To solve this
problem, IEEE defined a full-duplex flow control standard, namely, IEEE 802.3x.
IEEE 802.3x defines the format of a 64-byte MAC control frame named PAUSE. When
congestion occurs at a port, the switch sends a PAUSE frame to the source to tell
it to stop sending for a while.
Configuration Example
Enable flow control on an Ethernet port:
[Quidway-Ethernet0/1] flow-control
Disable flow control on an Ethernet port:
[Quidway-Ethernet0/1] undo flow-control
Note: By default, flow control on an Ethernet port is disabled.
The parameters of the two peers of aggregation ports must be the same. Parameters here
include physical parameters and logical parameters.
Physical parameters include:
Number of the aggregation ports
Speed of the aggregation ports
Duplex mode of the aggregation ports
Logical parameters include:
Spanning Tree Protocol (STP)
Quality of Service (QoS)
VLAN
Port
STP configuration includes:
enabling/disabling the STP function on the port, port link type (point-to-point or
non-point-to-point), STP priority, path cost, rate limit for sending packets, loop protection,
root protection and edge port.
QoS configuration includes: flow speed control, preference mark, the default preference
level of 802.1p, bandwidth guarantee, congestion prevention, flow redirection and flow
statistics.
VLAN configuration includes: VLANs that are allowed to pass the port and default VLAN ID.
Port configuration includes port link types such as Trunk, Hybrid and Access.
Configuration Procedure:
1 Configure the IP address of the VLAN interface
Create the layer-3 addresses 10.1.1.1/30 and 10.1.1.2/30 on VLAN-interface 1 of SW1 and
SW2 respectively.
2 Configure attributes of the aggregated ports
Before configuring port aggregation, make sure that all the aggregated ports of
Sw1 and Sw2 work in full-duplex mode at the same fixed speed, rather than in
auto-negotiation mode.
3 Configure port aggregation
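On older Quidway switches, the aggregation itself is typically created with a single command of roughly the following form. This is a sketch only; the exact syntax varies by product and software version, so check the product manual. The keyword both corresponds to the Mode field shown in the verification output below.

```
[Sw1]link-aggregation ethernet 0/1 to ethernet 0/2 both
```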
Result Testing:
<Sw1>display link-aggregation
Master port: Ethernet0/1
Other sub-ports:
Ethernet0/2
Mode: both
The configuration commands may be different for some switches, please refer to product
operation manuals for relevant information.
Port mirroring is applied to traffic observation and fault location: it makes a copy of service
data and sends it to a monitoring device for analysis. Port mirroring has two types:
port-based mirroring and flow-based mirroring.
Port-based mirroring makes a full copy of the data on the mirrored port to the mirroring port
so that its traffic can be observed and faults located. Ethernet switches support many-to-one
mapping, which means traffic from multiple ports can be mirrored to a single monitor port.
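As a sketch, on some older Quidway switches port-based mirroring is configured by designating a monitor port and then the port(s) to be mirrored. The command names and port numbers here are illustrative assumptions; they vary across products and software versions, so consult the product manual.

```
[Switch]monitor-port ethernet 0/8
[Switch]mirroring-port ethernet 0/1 both
```

Here the keyword both would mirror traffic in both the inbound and outbound directions of Ethernet0/1 to the monitor port Ethernet0/8.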
Flow-based mirroring is applied only to flows that meet certain defined classifications,
such as the same destination address or the same port number. The classifications can be
set as required.
The traditional Ethernet switch adopts a source-address learning mode when it
forwards data. It automatically learns the MAC address of the host connected to
each port to form the forwarding table, and then forwards Ethernet frames
according to that table. The whole forwarding process is completed automatically,
all ports can communicate with each other, and maintenance personnel cannot
control forwarding between any two ports. For example, they cannot restrict
host B from reaching host A. This kind of network has the following
disadvantages:
Poor network security. All ports can communicate with each other, which
increases the possibility that users will attack the network.
Low network efficiency. Users may receive many unnecessary frames, e.g.
unnecessary broadcast packets, which wastes bandwidth and host CPU
resources.
Poor service expansibility. The network cannot implement differentiated
services; for example, it cannot forward an Ethernet frame used for network
management with higher priority.
VLAN technology divides users into multiple logical networks (groups). Communication is
allowed within a group but prohibited between groups: layer-2 unicast, multicast and
broadcast packets can only be forwarded within a group. It is easy to add and delete group
members using VLAN technology.
VLAN technology provides a management method to control intercommunication
among terminals regardless of their physical location in the LAN. In the figure above, PCs in
group 1 and group 2 cannot communicate with each other.
In order to control forwarding, the switch adds a VLAN tag to an Ethernet frame
before forwarding it, then uses this tag to manage the frame, which may involve
discarding the frame, forwarding the frame, or adding and removing tags.
Before forwarding the frame, the switch checks the VLAN tag of the packet
and decides whether the tag is allowed to be forwarded from the port. In the figure
above, the switch adds tag 5 to all the frames sent from A, looks up the
layer-2 forwarding table and, according to the destination MAC address, forwards
them to the port connected to B. However, this port is configured to allow only
VLAN 1 to pass, so the frames sent by A are discarded.
A switch supporting VLANs therefore forwards Ethernet frames not only
according to the destination MAC address but also according to the VLAN configuration
of the port, so as to implement layer-2 forwarding control.
A 4-byte VLAN tag is inserted directly into the Ethernet frame header. IEEE 802.1Q
describes VLAN tagging.
TPID (Tag Protocol Identifier): 2 bytes, fixed value 0x8100, a new type defined by
IEEE; it indicates that this is a frame with an 802.1Q tag.
TCI (Tag Control Information): 2 bytes.
Priority: 3 bits, defines the priority of an Ethernet frame. It has 8 priority levels, 0 to 7,
used to provide differentiated forwarding service.
CFI (Canonical Format Indicator): 1 bit. Used to indicate the bit order of address
information in token ring or source-route FDDI media access, namely, whether the
low-order bit is transmitted before the high-order bit.
VLAN Identifier (VLAN ID): 12 bits, from 0 to 4095. Combined with the VLAN configuration
of the port, it controls the forwarding of an Ethernet frame.
An Ethernet frame has two formats: a frame without a tag is called an untagged frame; a
frame with a tag is called a tagged frame.
This course discusses only the VLAN ID field of the VLAN tag.
All Ethernet frames exist inside the switch in the form of tagged frames. Certain
ports may receive untagged frames from peer devices, but within the local switch
every frame must carry a tag. If a received frame is tagged, it is forwarded as-is;
if it is untagged, a tag is added to it. The device can assign the VLAN ID in the
following ways:
Port based: the network manager configures a PVID for every port of a switch,
known as the port VLAN ID or port default VLAN. If an untagged frame is
received, the VLAN ID used is the PVID.
MAC based: the network manager configures a mapping from each MAC
address to a VLAN ID; if an untagged frame is received, the VLAN ID is
added according to the mapping table.
Protocol based: the network manager configures a mapping between
the protocol field of the Ethernet frame and a VLAN ID; if an untagged frame is
received, the VLAN ID is added according to the mapping table.
Subnet based: the VLAN ID is added according to the IP address information in
the packet.
Policy based: provides strict control capability, based on combinations such as MAC
address and IP address, MAC address only, or IP address and port. If the VLAN is
implemented this way, it can forbid users from changing their MAC address or IP address.
If the device supports multiple methods at the same time, the general priority order
from high to low is: policy based > MAC based > subnet based > protocol
based > port based. Presently, port-based VLAN assignment is the most common
method.
The tag in Ethernet frames, combined with the VLAN configuration of the port,
controls packet forwarding. When an Ethernet frame is received on port A, the
switch looks up the destination MAC to find the outgoing port, say port B. After
the introduction of VLAN tagging, two key points decide whether the frame should
be forwarded from port B:
Whether the VLAN ID in the frame has been created on the switch. There are two
methods to create VLANs: manual configuration, or automatic creation using
GVRP (GARP VLAN Registration Protocol).
Whether the destination port allows frames of that VLAN to pass. VLAN permit
lists determine whether frames may pass through a port; they can be created by
the administrator or automatically by GVRP.
In the forwarding process, there are two types of tag operation:
Add tag: for an untagged frame, the PVID is added; this is done when the
frame is received from the peer device.
Remove tag: the VLAN tagging information is deleted from the frame, which is
then sent to the peer device as an untagged frame. In normal cases the switch
does not change the VLAN ID in a tagged frame, although some devices supporting
special services may provide a function for changing the VLAN ID.
After introducing VLAN functionality, switch ports may be one of three types: Access port,
Trunk port and Hybrid port.
An access port is used to connect host and has features as follows:
Only permit allowed VLAN IDs to pass through the port, or the VLAN ID is the same with
PVID of the port.
If the frame received from peer device is untagged, the switch will add a PVID to the
frame automatically.
The frame sent by an access port is always an untagged frame.
On many types of switch the default port type is access and the default PVID is 1.
VLAN 1 is created by the system and cannot be deleted.
Page 439
After the VLANs have been created, the following commands can be used to add
ports as access ports, which determines the PVID of each access port:
[Switch]vlan 3
[Switch-vlan3]port ethernet 0/1
[Switch]vlan 5
[Switch-vlan5]port ethernet 0/2
The port mode should be specified as either access or trunk when making any change to
the PVID.
Page 440
Trunk port: used to connect switches and transmit tagged frames between
switches. It can be set to permit multiple VLAN IDs, including VLAN IDs that
differ from its own PVID.
A trunk port will send tagged frames to other devices, using the following rule
base:
If the VLAN ID of the tagged frame does not exist in the VLAN permitted list, it
will be discarded;
If it does exist and the VLAN ID of the tagged frame is the same as the PVID,
the frame will be forwarded after the tag is removed. Each port has exactly one
PVID, and this is the only case in which a frame is sent untagged by the trunk
port.
If the VLAN ID of the tagged frame is different from PVID, the frame will be
forwarded to the peer device without modification.
VLAN forwarding generally inspects the tag of the VLAN frame and compares it
against the port's VLAN permit list to look for a match. A VLAN registered
through GVRP, however, must also be registered on the port; otherwise its VLAN
ID will not exist in the VLAN permit list, and the corresponding VLAN frames
cannot be forwarded from the port.
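The trunk egress rules above can be condensed into a small illustrative function (the function name and return conventions are assumptions made for this sketch, not part of any switch implementation):

```python
# Sketch of the trunk-port egress decision for a tagged frame.
def trunk_egress(frame_vlan, permit_list, pvid):
    """Decide what happens to a tagged frame leaving a trunk port.

    Returns "drop" if the VLAN is not in the permit list,
    None if the tag is removed (frame VLAN equals the PVID),
    or the unchanged VLAN ID otherwise.
    """
    if frame_vlan not in permit_list:
        return "drop"        # not in the permitted VLAN list: discarded
    if frame_vlan == pvid:
        return None          # tag removed: sent untagged
    return frame_vlan        # forwarded with the tag unchanged

print(trunk_egress(7, permit_list={3, 5}, pvid=3))  # drop
print(trunk_egress(3, permit_list={3, 5}, pvid=3))  # None
print(trunk_egress(5, permit_list={3, 5}, pvid=3))  # 5
```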
Page 441
As shown in the figure above, the following commands can be used to configure the trunk
port attribute:
//create VLAN
[Switch]vlan 3
//configure port type
[Switch-Ethernet0/3]port link-type trunk
//configure PVID of the trunk port
[Switch-Ethernet0/3]port trunk pvid vlan 3
//configure permitted VLANs of the trunk port
[Switch-Ethernet0/3]port trunk allow-pass vlan 5
Page 442
An access port sends packets to other devices only as untagged frames, while a
trunk port sends a frame untagged only when the frame's VLAN ID matches the
port's PVID; in all other cases it sends frames tagged. Hybrid ports can be used
to control the VLAN tagging process more flexibly. For example, even if a device
connected to the switch cannot support VLANs, hybrid ports can still be used to
isolate the devices.
Hybrid ports can flexibly control the VLAN tag. In this example, if the VLAN ID
of a frame is 3, it is forwarded as a trunk port would forward it; if it is 4,
tag 4 is removed and the frame is then forwarded.
Page 443
If a Hybrid port is only configured to allow untagged VLAN forwarding, the port will take
on the same role as an access port.
If a port is configured to support only tagged VLANs, it will have the same function as a
trunk port.
If a switch port is configured with untagged VLAN support, for example untagged
VLAN2 on Ethernet0/1, it is capable of communicating with other hybrid ports
that support the same untagged VLANs, as opposed to ports such as 0/3, which
only supports VLAN3. The configuration above thus shows how isolation can be
implemented between port 0/1 and port 0/3 while still allowing both to
communicate with the host connected to port 0/24.
Page 444
To allow PC1 and PC2 to communicate with each other, it is necessary not only to
configure access and trunk ports on SWA and SWB, but also to create VLAN 2 on
SWB and allow VLAN 2 to traverse both of SWB's ports. SWB connects no users; it
is a transit switch. In large-scale networks there may be many such transit
switches, whose configuration and management are difficult. The administrator
only cares about controlling user intercommunication: for example, after a new
user joins the network, the administrator should only need to configure the
access port connecting that user and assign the port to a certain VLAN. If the
transit switches could automatically implement intercommunication among logical
group members, network maintenance costs would be reduced. GVRP implements this
function. After GVRP is enabled on all switches, the VLAN configuration of the
edge switches is propagated through the whole network by GVRP, and the VLANs are
configured automatically on each port.
Page 445
The command gvrp is used to enable GVRP on a switch; the command undo gvrp
disables it. GVRP is disabled by default. Used in the system view, gvrp enables
or disables GVRP globally for all ports, whereas used in the interface view it
enables or disables GVRP on that particular port, as shown in the example.
Note:
Before enabling GVRP on a port, GVRP must first be enabled in the system view.
If GVRP is disabled at the system level, GVRP is also disabled on all ports, and
the user cannot change the per-port GVRP status.
GVRP can only be enabled and disabled on trunk ports. After GVRP is enabled on a
trunk port, the switch does not allow the port to be changed to any other port
type.
Page 446
Page 447
Page 449
Page 450
Page 451
Page 452
VLANs create and isolate layer-2 broadcast domains, thereby isolating the
traffic of different VLANs. As a result, users associated with different VLANs
are unable to communicate with each other.
Page 453
Traffic cannot directly cross VLAN boundaries, so routing is needed to forward
packets from one VLAN to another. Hosts in different VLANs belong to different
networks. When a default gateway has been configured on a host, any
communication destined for hosts outside its own VLAN is forwarded to the
default gateway, which in turn routes the traffic between VLANs.
Page 454
Page 455
Page 456
The third option for VLAN routing is through the use of a layer-3 switch. A
layer-3 switch effectively integrates the functionality of a layer-2 switch and
a layer-3 router, and therefore combines the advantages of both. Its main
limitation is the cost of such devices due to their extended functionality.
Page 457
Huawei supports layer-3 switching through the means of switch and route processing units, or
SRU, and may support multiple SRU boards for redundancy. All the routable packets are sent
by the forwarding engine to the SRU board for processing. The SRU board also broadcasts and
filters packets and executes routing policies. The SRU will support VLAN switching, default
VLANs as well as other more advanced VLAN technologies including Q-in-Q and dynamic
VLAN allocation based on MAC addressing. The example above reflects how a layer-3 switch
can be used to associate VLAN gateways directly with VLAN interfaces within a single device.
Page 458
Page 459
In this example two VLANs are present, VLAN100 and VLAN200. A host in VLAN100
wishes to forward traffic to a host in VLAN200. Each VLAN is a separate
broadcast domain and therefore a different network. Each host has been assigned
a host address in the network of the VLAN it belongs to, along with the gateway
address for that network. Forwarding the traffic requires VLAN trunking, to
carry multiple VLANs over a single physical link, and sub-interface
configuration on the layer-3 router. How is this achieved?
Page 460
//create VLAN100
[SWA]vlan 100
//configure ethernet 0/1 belonging to VLAN100
[SWA-vlan100]port ethernet 0/1
//create VLAN200
[SWA]vlan 200
//configure ethernet 0/2 belonging to VLAN200
[SWA-vlan200]port ethernet 0/2
//enter into interface view
[SWA]interface ethernet 0/24
//configure port type as Trunk
[SWA-Ethernet0/24]port link-type trunk
//permit all VLAN to pass
[SWA-Ethernet0/24]port trunk allow-pass vlan all
Page 461
Using the control-vid command, you can specify the mappings between the control VLAN
and the Ethernet sub-interface to differentiate termination sub-interfaces of the same main
interface. Using the undo control-vid command, you can remove the mappings between
the control VLAN and Ethernet sub-interfaces. By default, no mapping between a control
VLAN and an Ethernet sub-interface is specified. The dot1q-termination indicates that the
encapsulation mode of a sub-interface is dot1q. This mode applies to single-tagged packets
(as opposed to dual tagged packets used in Q-in-Q configuration).
Using the arp broadcast enable command, you can enable the ARP broadcast function on
a sub-interface for VLAN tag termination. Using the undo arp broadcast enable command,
you can disable the ARP broadcast function on a sub-interface for VLAN tag termination. By
default, the ARP broadcast function is disabled on sub-interfaces for VLAN tag termination.
Page 462
Connectivity between the hosts of different VLANs can be verified through means
such as the ping command. If host 192.168.10.10 in VLAN100 can ping host
192.168.20.20 in VLAN200, it indicates that the configuration is correct.
Page 463
On the layer-3 switch (SWA), port 1 and port 2 represent a local network that
has been logically segmented through the implementation of VLANs. Hosts reached
via port 1 are assigned to VLAN 100 and hosts via port 2 to VLAN 200. Traffic
between VLAN 100 and VLAN 200 is forwarded through SWA. The example demonstrates
how a single host from each VLAN would be configured to support this forwarding
of traffic.
Page 464
When a layer-3 switch needs to communicate with devices at the network layer, a logical
interface can be created, namely, a VLANIF interface. A VLANIF interface is a network layer
interface and can be configured with an IP address. The layer-3 switch then uses the VLANIF
interface to communicate with devices at the network layer. The IP address that is assigned to
each VLANIF is recognised as the gateway address by the respective VLAN hosts. The
command interface vlanif <vlan-id> specifies the ID of the VLAN that a VLANIF interface
belongs to. The value of the vlan-id is an integer that ranges from 1 to 4094.
Page 465
In the same way that it was possible to verify VLAN routing using a layer-3 router, it is also
possible to verify connectivity between hosts of different VLANs supported by a layer-3 switch.
If host 192.168.10.10 in VLAN100 is able to successfully ping host 192.168.20.20 in VLAN 200,
it indicates that the configuration is correct.
Page 466
Page 467
Page 469
Page 470
Page 471
Page 472
A switch forwards data frames based on the MAC address table. The MAC address table
specifies the mapping between destination MAC addresses and destination ports.
1: Assume that PCA sends a data frame to PCB. The destination MAC address of this data
frame is set to the MAC address of PCB, namely, 00-0D-56-BF-88-20.
When SWA receives this frame, it searches the MAC address table. According to the entries
in the MAC address table, SWA forwards the data frame through port E0/3.
The switch does not modify the data frame before forwarding it. If the switch
receives a broadcast frame, or a frame whose destination MAC address is not in
the MAC address table, it floods the frame to all ports except the one on which
it was received.
2: When SWB searches the MAC address table, it will use the information stored to make
forwarding decisions. In the example, SWB forwards a frame through port E0/6. No
modification is made to the data frame.
3: When PCB receives the frame, it finds that the destination MAC address is its
own MAC address. PCB then processes the data frame and delivers the
de-encapsulated data to the upper layer protocol.
Page 473
If a switch receives a broadcast data frame on a port, the switch forwards the
data frame to all other ports, and it does not modify the data frame before
forwarding it. Therefore, if a loop exists in the network, broadcast frames are
forwarded in the network indefinitely, causing a broadcast storm.
Page 474
A switch forwards data frames based on the MAC address table, but the MAC
address table is empty when the switch is started. Therefore, the switch needs to
learn the MAC address table.
A switch learns the MAC address table based on the mapping between the source
address of the received data frame and the receiving port.
1: Assume that PCA sends a data frame to PCB. The destination address of the
frame is the MAC address of PCB, namely, 00-0D-56-BF-88-20. The source
address is the MAC address of PCA, namely 00-0D-56-BF-88-10.
When SWA receives the data frame, it checks the source address of the frame
and adds the mapping between the source address and the receiving port to the
MAC address table. For later frames destined to PCA, this entry supplies the
destination port.
2: When SWB receives this frame, it also adds the mapping between the source
address and receiving port to the MAC address table as a MAC address entry.
3: When PCB receives the frame, it processes this frame.
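The learning and forwarding process described in these steps can be sketched as a toy model (an illustration only; real switches also age out entries and handle VLANs):

```python
# Illustrative sketch of source-MAC learning and forwarding (not vendor code).
class Switch:
    def __init__(self, ports):
        self.ports = ports      # list of port names
        self.mac_table = {}     # MAC address -> port

    def receive(self, src_mac, dst_mac, in_port):
        # Learn: map the source MAC to the receiving port.
        self.mac_table[src_mac] = in_port
        # Forward: a known unicast goes to one port; an unknown one floods.
        if dst_mac in self.mac_table:
            return [self.mac_table[dst_mac]]
        return [p for p in self.ports if p != in_port]

swa = Switch(["E0/1", "E0/2", "E0/3", "E0/4"])
# Unknown destination: flooded to every port except the receiving one.
print(swa.receive("00-0D-56-BF-88-10", "00-0D-56-BF-88-20", "E0/2"))
# -> ['E0/1', 'E0/3', 'E0/4']
# The reply teaches the switch where the destination is.
swa.receive("00-0D-56-BF-88-20", "00-0D-56-BF-88-10", "E0/3")
print(swa.receive("00-0D-56-BF-88-10", "00-0D-56-BF-88-20", "E0/2"))
# -> ['E0/3']
```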
Page 475
A switch generates a MAC address entry according to the source address and
receiving port of the received data frame.
PCA sends a data frame. Assume that the destination MAC address of the data
frame does not exist in any MAC address table of the switches in the network.
When SWA receives this data frame, it generates a MAC address entry, in which
the MAC address 00-0D-56-BF-88-10 maps port E0/2.
Because the MAC address table of SWA does not contain any entry with this
destination MAC address, SWA forwards the data frame to E0/3 and E0/4.
The MAC address table of SWB also does not contain any entry with this
destination MAC address. So, after SWB receives the data frame on E0/5, it
forwards the frame to SWA through E0/6.
After SWA receives this data frame on E0/4, it deletes the previous entry for
this address and generates a new entry, in which MAC address 00-0D-56-BF-88-10
maps to port E0/4. In this case, the MAC address table is unstable and wrong
entries are generated.
Page 476
Page 477
The main function of STP (Spanning Tree Protocol) is to avoid switching loops where
redundant links are present in the network. As the figure of this slide shows, a ring is
composed of SWA, SWB and SWC, which may cause problems such as broadcast storms.
After the spanning tree protocol is enabled, calculations cause the network to
converge, with the interfaces taking on various operational roles, including the
blocking of one or more ports in order to remove the possibility of any loop
occurring. In this example, it is assumed that port E0/2 of SWB is blocked to
remove the loop.
Page 478
After the port E0/2 of SWB is blocked, there is no loop in the network.
Page 479
Page 480
The basic idea of STP is quite simple: if the network topology can be shaped
into a tree, loops are prevented. Thus STP defines concepts including the Root
Bridge, Root Port, Designated Port and Path Cost. The purpose is to prune
redundant loops by constructing a tree, while implementing link backup and path
optimization at the same time. The algorithm used to construct the tree is the
spanning tree algorithm.
In order to calculate the spanning tree, information and parameters need to be
exchanged between switches. This information is encapsulated in BPDUs (Bridge
Protocol Data Units) and transmitted between switches.
The following tasks are done through the exchange of BPDU between bridges:
1. Select a bridge as the root bridge among all bridges;
2. Calculate the shortest path from the current bridge to the root bridge;
3. For every shared network segment, select the bridge nearest to the root bridge
as the designated bridge, responsible for the data forwarding of this network
segment;
4. For every bridge, select a root port.
5. Select the designated port besides the root port.
Page 481
Page 482
Page 483
STP elects the designated port for each network segment. The designated port
forwards the data transmitted between the root bridge and this network segment.
The switch where the designated port is located is called the designated switch.
When electing the designated port and designated bridge for a network segment,
STP first compares the root path costs of the switches connected to the segment.
If the switches have the same root path cost, STP compares their bridge
identifiers; the port on the switch with the smallest identifier has the highest
priority. If the bridge identifiers are also the same, STP compares the
identifiers of the ports connected to the segment, and the port with the
smallest identifier has the highest priority.
On the root bridge, all ports are the designated ports of the connected network
segments. Therefore, the designated ports of LANA and LANB are both on SWA.
LAND and LANE are both connected to the port of only one switch, and the
connected ports are designated port for LAND and LANE respectively.
LANC is connected to the ports of two switches and the two switches have the
same root path cost. Therefore, the identifiers of the switches are compared.
SWB has a smaller identifier (because its MAC address is smaller), so the
designated port for LANC is on SWB.
The port that is neither the root port nor the designated port is called the alternate
port. The alternate port does not forward data and is in Blocking state.
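The election order described above (root path cost, then bridge identifier, then port identifier, with lower values winning at each step) behaves like an ordered tuple comparison, which this hypothetical sketch illustrates (the candidate values and names are invented for the example):

```python
# Sketch of the comparison order used to elect a designated port on a segment.
# Lower values win at each step: root path cost, then bridge ID, then port ID.
def best_port(candidates):
    """candidates: list of (root_path_cost, bridge_id, port_id, name)."""
    return min(candidates)[3]

# Two switches with equal root path cost on LANC: the smaller
# bridge identifier (here SWB) wins the election.
lan_c = [
    (19, 0x800000001111, 0x8001, "SWB-E0/2"),
    (19, 0x800000002222, 0x8001, "SWC-E0/2"),
]
print(best_port(lan_c))  # SWB-E0/2
```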
Page 484
For STP-enabled ports that work normally at the physical layer and data link
layer, STP defines three roles. The root port and designated ports are in the
Forwarding state; a port that is not enabled is called a Disabled port.
Page 485
After being enabled, a port switches to the Listening state and begins to
calculate the spanning tree. After the calculation, if the port is set as an
alternate port, the port state changes to Blocking. If the port is set as the
root port or a designated port, the port state switches from Listening to
Learning after a period of forward delay, and after another period of forward
delay from Learning to Forwarding, at which point the port can forward data
frames.
Page 486
1:The port is elected as the designated port or root port.
2: The port is elected as the alternate port.
3: The port waits a period of the forward delay. By default, the forward delay is 15 seconds.
When a port is disabled, it switches to the Disabled state. Before switching
from a non-Forwarding state to the Forwarding state, a port needs to wait twice
as long as the forward delay. Thus, the potential risk of a temporary loop is
avoided.
Page 487
Page 488
This figure shows the physical topology. The priority of SWA is 4096, the
priority of SWB is 8192, and the priority of SWC is 32768. Therefore, SWA
becomes the root bridge and SWB becomes the designated switch of LANC.
Page 489
Page 490
In the global information, the root bridge identifier is different from the identifier of this switch.
It indicates that this switch is a non-root switch.
Page 491
Page 492
Page 493
When port roles and states change, temporary loops may be formed. In this
example, SWA is initially the root bridge. Among all switches, only SWD has an
alternate port, E0/2, which is in a non-Forwarding state. Assume that the
priority of SWC is changed so that SWC becomes the new root bridge. In this
case, E0/2 of SWD will become the new root port and switch to the Forwarding
state, E0/1 of SWD will become the new designated port and switch to the
Forwarding state, and E0/2 of SWB should become the new alternate port and
switch to a non-Forwarding state. If E0/2 of SWD switches from a non-Forwarding
state to the Forwarding state before E0/2 of SWB switches from the Forwarding
state to a non-Forwarding state, a temporary loop is formed in the network. To
avoid temporary loops, a port (for example, E0/1 of SWC) must wait long enough
before switching from a non-Forwarding state to the Forwarding state, so that
the ports that need to switch to a non-Forwarding state have enough time to
calculate the spanning tree and do so.
Page 494
In STP, the transition of a port from the Blocking state to the Forwarding
state takes at least twice the Forward Delay, which is not suitable for many
applications. RSTP (Rapid Spanning Tree Protocol) resolves this problem through
the following mechanisms:
1. Two additional port roles are allocated, the Alternate Port as backup for the
root port and the Backup Port as backup for the designated port, to support fast
state transition. When the root port fails, the Alternate Port becomes the new
root port and switches to the Forwarding state without delay; when the
designated port fails, the Backup Port becomes the new designated port and
switches to the Forwarding state without delay.
2. On a point-to-point link connecting only two switch ports, after a handshake
with the downstream bridge, the designated port can change to the Forwarding
state without delay. If the link is shared among more than two bridges, the
downstream bridges do not respond to the handshake request sent from the
upstream designated port, and the port changes to the Forwarding state only
after twice the Forward Delay.
3. A port connected directly to a terminal rather than to another bridge can be
defined as an Edge Port, which enters the Forwarding state without any delay.
It must be configured manually, since the bridge cannot determine by itself
whether the port is directly connected to a terminal.
Page 495
In STP, all VLANs in a LAN generally share the same spanning tree, so load
balancing cannot be implemented between VLANs, and it is possible that packets
of some VLANs cannot be forwarded. As this slide shows, both SWB and SWC connect
users of VLAN10 and VLAN20. The link between SWB and SWA and the link between
SWA and SWC allow VLAN10 and VLAN20 to pass; the other links only allow VLAN10
to pass. If port E0/20 is blocked, the VLAN20 users of SWB can only use the link
between SWB and SWC to communicate with SWC. However, this link only allows
VLAN10 to pass, so communication fails.
Page 496
To solve this problem, MSTP (Multiple Spanning Tree Protocol) was put forward.
MSTP is a newer protocol, defined by IEEE in 802.1Q-2005, which introduces the
concept of instances. Simply speaking, STP/RSTP calculates a single tree, while
MSTP calculates one tree per instance. An instance is a collection of multiple
VLANs sharing a single converged spanning tree. By binding multiple VLANs into a
single instance, communication cost and network resources are saved. In MSTP,
the topology calculation of each instance is independent, so load balancing can
be implemented across instances. In practice, multiple VLANs with the same
topology can be mapped to the same instance.
Page 497
Page 498
Page 500
Page 501
Page 502
In this example:
There is only one router, RTA, in the LAN, used as the gateway by all the PCs,
so no redundancy is provided. Should RTA fail, all PCs in the network will be
unable to reach external networks. In other words, there is a single point of
failure in this kind of network, resulting in a high chance of isolation from
external networks.
Page 503
Page 504
A Virtual Router is identified by a Virtual Router ID together with its
associated Virtual IP Address(es). Multiple Virtual Routers can be configured on
the same interface. The Virtual Router ID (VRID) is the identifier of a Virtual
Router, configurable in the range 1-255 (decimal). The VRID configured on all
VRRP routers of the same virtual group must be the same. A Virtual Router can be
associated with more than one Virtual IP Address, but the Virtual IP Addresses
configured on the VRRP routers of the same Virtual Router must be the same. VRRP
routers with the same VRID but different virtual IP addresses, or with the same
virtual IP addresses but different VRIDs, are regarded as different Virtual
Routers.
Page 505
By default, the ICMP Echo messages that are sent to the Virtual IP address will not be
responded to, even by the Master router. In the Master router, under system view, the
following commands can be used to enable the function by which ICMP Echo messages
sent to the Virtual IP address will be responded to.
vrrp virtual-ip ping enable
undo vrrp virtual-ip ping enable
Page 506
Master: The VRRP router that is assuming the responsibility of forwarding packets sent to
the IP address(es) associated with the virtual router, and answering ARP requests for these
IP addresses.
Backup: The set of VRRP routers available to assume forwarding responsibility for a virtual
router if the current Master fails.
The election of Master is based on the value of Priority. For the same interface, different
Priority values could be assigned to different associated virtual routers.
Page 507
Page 508
In this case:
There is a VRRP router that has the virtual router's IP address(es) as real interface address(es).
Such a router is called the IP Address Owner.
Page 509
No matter what the Config Priority is, the Run Priority of IP address owner is always 255. The
IP address owner is always the Master. Although the configured priority value of RTB is higher
than that of RTA, the RTB is still the Backup, since its Run Priority is lower than that of RTA.
Hence, when it comes to the election of the Master, the contributing factor is the value of Run
Priority instead of Config Priority.
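The distinction between Config Priority and Run Priority can be sketched as follows (an illustrative model only; real VRRP also breaks priority ties by comparing the routers' primary IP addresses):

```python
# Sketch of VRRP master election: the running priority decides the Master,
# and the IP address owner always runs with priority 255.
def run_priority(config_priority, is_ip_owner):
    return 255 if is_ip_owner else config_priority

def elect_master(routers):
    """routers: dict of name -> (config_priority, is_ip_owner)."""
    return max(routers, key=lambda r: run_priority(*routers[r]))

# RTA owns the virtual IP address, so it wins despite a lower
# configured priority than RTB.
routers = {"RTA": (100, True), "RTB": (200, False)}
print(elect_master(routers))  # RTA
```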
Page 510
When the Master stops running VRRP, it will immediately send a VRRP advertisement with
the value of 0 in Priority field. When the Backup receives such an advertisement, it will
change from the Backup to Master state immediately.
Page 511
In this case:
There are two routers in this LAN, RTA and RTB. A single Virtual Router is to be configured,
with VRID 1 and Virtual IP Address 10.1.1.254. The Priority of RTB is to be configured as 200,
and that of RTA as 100, so as to make RTB the Master.
Page 512
Virtual IP address.
By default, if the Priority of the virtual router is not designated, the default value is 100.
Page 513
The VRID and Virtual IP Address should be the same as is configured on RTA.
vrrp vrid virtual-router-ID priority priority-value
undo vrrp vrid virtual-router-ID priority
virtual-router-ID The identifier of Virtual Router, in the range of 1-255.
priority-value The value of Priority, with configured range from 1 to 254.
When configuring the priority, the VRID should be specified. Different virtual routers can be
configured with different priority values.
Page 514
In this case:
There are two routers in the LAN, and two Virtual Routers are to be configured:
one with VRID 1 and Virtual IP Address 10.1.1.100, the other with VRID 2 and
Virtual IP Address 10.1.1.200. The Priority of Virtual Router 1 is configured as
200 on RTA and 100 on RTB, so that RTA is the Master of Virtual Router 1. The
Priority of Virtual Router 2 is configured as 200 on RTB and 100 on RTA, so that
RTB is the Master of Virtual Router 2.
Hence, RTA is the Master of Virtual Router 1 and the Backup of Virtual Router 2; RTB is the
Master of Virtual Router 2 and the Backup of Virtual Router 1. In the LAN, PCs can use
different Virtual IP addresses as the default gateway, so as to implement traffic sharing.
Page 515
Page 516
Page 517
Page 518
The configuration of RTA is the same as the configuration of single Virtual Router.
By default, the Priority is 100.
Page 519
By configuring the Priority as 200, RTB is the Master. Configuring tracking interface
Ethernet 1/0 on RTB. If interface Ethernet 1/0 is down, the Priority is reduced by 150, and
the new Priority is 50. Hence, RTA will be the new Master.
Page 520
These are the VRRP states when the tracked interface is down. On RTB, although
the configured priority is 200, the running priority is reduced to 50. Hence,
RTA will become the Master.
Page 521
Page 522
Page 527
Page 528
Page 529
Page 530
HDLC, drafted by the ISO, is a bit-oriented communication protocol. The basic
unit transmitted by HDLC is the frame. Its most outstanding feature is that the
data need not belong to any specified character set: any bit stream can be
transmitted transparently.
In the 1970s, IBM put forward the bit-oriented synchronous data link control (SDLC). Then,
ANSI and ISO adopted and developed the SDLC, and also put forward their own standards:
Advanced Data Communication Control Procedure (ADCCP) of ANSI and HDLC of ISO.
As a bit-oriented protocol, HDLC has the following features:
1. The protocol is independent of any character set.
2. Packets can be transmitted transparently. The zero-bit insertion method used
for transparent transmission can be implemented in hardware.
3. Full-duplex communication is supported. Data can be transmitted continuously
without waiting, so data transmission on the link is highly efficient.
4. All frames carry a CRC check and are numbered, so no frame is lost or
received twice. Transmission reliability is high.
5. Transmission control is separated from processing, which makes HDLC flexible
and controllable.
All of the protocols in the standard HDLC protocol suite run on the synchronous serial lines.
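The zero-bit insertion mentioned in feature 2 can be illustrated with a small sketch (illustrative only; real implementations perform this in hardware on the serialized bit stream):

```python
# Sketch of HDLC zero-bit insertion: after five consecutive 1s in the payload,
# a 0 is inserted so the data can never mimic the 01111110 flag.
def stuff(bits):
    out, run = [], 0
    for b in bits:
        out.append(b)
        run = run + 1 if b == 1 else 0
        if run == 5:
            out.append(0)   # insert a 0 after five consecutive 1s
            run = 0
    return out

def unstuff(bits):
    out, run, i = [], 0, 0
    while i < len(bits):
        b = bits[i]
        out.append(b)
        run = run + 1 if b == 1 else 0
        if run == 5:
            i += 1          # skip the stuffed 0 that follows five 1s
            run = 0
        i += 1
    return out

data = [0, 1, 1, 1, 1, 1, 1, 0]   # would otherwise look like a flag byte
stuffed = stuff(data)
print(stuffed)                    # [0, 1, 1, 1, 1, 1, 0, 1, 0]
print(unstuff(stuffed) == data)   # True
```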
Page 531
An HDLC frame consists of the flag field (F), the address field (A), the control
field (C), the information field (I), and the frame check sequence field (FCS).
Flag field (F)
The flag field is in the 01111110 format. The two flag fields indicate the start and the end of a
frame. The flag field can also be used as the filling character between frames.
Address field (A)
The address field carries the address information.
Control field (C)
The control field forms the commands and responses used to monitor and control
the link. The primary station (or a combined station) uses the control field to
request the secondary station (or combined station) to perform a specified
operation. The secondary station uses this field to respond to commands and to
report completed operations or changes of status.
Information field (I)
The information field can be any binary bit string. Its length is not fixed: the
upper limit depends on the FCS field and the buffering capacity of the
communicating nodes, with 1000-2000 bytes commonly used, and the lower limit is
0, that is, no information field at all. The supervisory frame, however, cannot
carry an information field.
Frame check sequence field (FCS)
The FCS field contains 16 bits. It is used to verify the entire frame between
the two flag fields.
Page 532
The HDLC frame is classified into the information frame (I frame), the supervisory
frame (S frame), and the unnumbered frame (U frame).
Information frame (I frame)
The I frame transmits the valid information or data.
Supervisory frame (S frame)
The S frame controls errors and traffic. If the first two bits of the control field in a
frame are 10, it is an S frame. The S frame does not contain the information bit. It
contains only 6 bytes, namely, 48 bits.
Unnumbered frame (U frame)
The U frame is used to establish, delete, and control the link.
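The three frame types can be told apart by the low-order bits of the control field: bit 0 is 0 for an I frame, the lowest two bits are 01 for an S frame (the "10" in the text refers to the bit order on the wire), and 11 for a U frame. A minimal illustrative sketch:

```python
def hdlc_frame_type(control: int) -> str:
    """Classify an HDLC frame by the first byte of its control field."""
    if control & 0x01 == 0:      # lowest bit 0 -> information frame
        return "I"
    if control & 0x03 == 0x01:   # lowest two bits 01 -> supervisory frame
        return "S"
    return "U"                   # lowest two bits 11 -> unnumbered frame

print(hdlc_frame_type(0x00), hdlc_frame_type(0x01), hdlc_frame_type(0x03))
```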
The HDLC configuration on the serial link is simple. The user only needs to configure HDLC
in the interface view, and then configure the IP address. The link-protocol hdlc command
configures the link-layer protocol for the encapsulation on the interface to be HDLC.
NOTE: The encapsulation modes on the two interfaces of the communication nodes must
be the same. The default encapsulation protocol on the serial interface of the
VRP based routers is PPP. When the VRP-based routers are interconnected with the
devices of other vendors, make sure that the encapsulation modes are the same.
After configuration is complete, the user can use ping to check whether the configuration is
correct. If the two nodes can send and receive ping packets, the configuration is deemed
successful; otherwise, check whether the configuration on the corresponding interfaces is
accurate.
As is shown in the figure above, RouterA and RouterB are connected through the serial
interface. HDLC runs on the interfaces. Interface S0/0/1 on Router A borrows the IP address
of the local loopback interface. The IP address of the loopback interface adopts the 32-bit
mask. The ip address unnumbered interface LoopBack 0 command configures interface
S0/0/1 to borrow the IP address of interface loopback 0. The ip route-static 10.1.1.0 24
Serial 0/0/1 command configures the static route. The egress of the static route to network
10.1.1.0 is Serial0/0/1. For the configuration of the static route, refer to the routing module.
The display ip interface brief command displays the IP addresses of the interfaces. In this
example, you can see that Serial0/0/1 and Loopback0 use the same IP address. If an
interface that does not borrow its address were configured with a duplicate address, a
message would be displayed to warn of the IP address conflict. In this example, however,
Serial0/0/1 borrows the IP address of Loopback0, so the addresses do not conflict.
We can use ping to test the connectivity between the two routers. If the test succeeds, the
router configuration is correct; otherwise, check whether the configurations on the
corresponding interfaces match.
1. What is HDLC?
High-level Data Link Control, HDLC, is a bit-based link-layer protocol. The protocols of the
HDLC protocol suite run on synchronous serial links.
2. The HDLC frame structure is comprised of which fields?
An HDLC frame consists of the flag field (F), address field (A), control field (C), information
field (I), and frame check sequence field (FCS).
PPP is placed in the data link layer of the TCP/IP stack. It is the most popular point-to-point
link layer protocol. PPP is used to encapsulate and transmit IP packets on serial links, ATM
links, and SDH links.
PPP consists of three components: the data encapsulation method, the Link Control
Protocol (LCP), and the Network Control Protocols (NCPs). The encapsulation method
defines how to encapsulate multi-protocol packets.
To adapt to various link types, PPP defines LCP. LCP can test the link environment (for
example, whether a loop exists) and negotiate link parameters (for example, the maximum
packet length and the type of authentication protocol). Compared with other link layer
protocols, PPP can provide authentication. The two ends of the link negotiate the
authentication protocol to be used and then perform the authentication. The session can be
established only after the authentication succeeds. With this feature, PPP can be used by
ISPs to accept access from dispersed subscribers.
PPP defines a group of NCP protocols. Each NCP matches one network layer protocol and
negotiates parameters such as IP addresses. For example, IPCP negotiates IP control
parameters, and IPXCP negotiates IPX control parameters.
The PPP frame encapsulation method differentiates the packets of the upper layer
protocols. The PPP encapsulation format contains only three fields.
Protocol: This field contains two bytes. It identifies the type of protocol encapsulated in the
PPP frame, for example, IP, LCP, and NCP. The common values are shown in the above
figure.
Information: This field contains the data encapsulated in PPP, for example, LCP data, NCP
data, and network-layer packets. The length of this field is variable.
Padding: This field is used for filling in the information field.
The total length of the Padding and Information fields is the maximum receive unit (MRU) of
PPP. The default value of MRU is 1500 bytes.
If the Information field is shorter than MRU, PPP fills in the Padding field to reach the length of
MRU to make the transmission convenient. But the padding is not mandatory. That is to say,
the Padding field is optional.
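The three-field encapsulation can be sketched as follows. The Protocol values (0x0021 for IP, 0xC021 for LCP, 0x8021 for IPCP) are the well-known assignments; the padding behavior here is an illustrative simplification, not the exact wire procedure.

```python
def ppp_encapsulate(protocol: int, info: bytes, mru: int = 1500,
                    pad: bool = False) -> bytes:
    """Build a PPP frame body: Protocol (2 bytes) + Information (+ optional Padding)."""
    if len(info) > mru:
        raise ValueError("information field exceeds the MRU")
    # Padding is optional; when used, it fills the Information field up to the MRU.
    body = info + (b"\x00" * (mru - len(info)) if pad else b"")
    return protocol.to_bytes(2, "big") + body

PPP_IP, PPP_LCP, PPP_IPCP = 0x0021, 0xC021, 0x8021  # common Protocol values

frame = ppp_encapsulate(PPP_IP, b"ip-packet-bytes")
```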
PPP frames cannot be transmitted directly on the link. Additional encapsulation modes and
control mechanisms must be used depending on the types of the links. The PPP frames
transmitted on the serial link must comply with HDLC.
Flag: indicates the start or the end of the frame. The value is 01111110.
Address: the address field. Its value is all 1s (0xFF). Because PPP is a point-to-point
protocol, it does not need an addressing mechanism; the all-1s address stands for the
receiving end.
Control: the control field. HDLC uses this field to transmit data and control packets in an
ordered way. In PPP, the value of this field is fixed at 0x03, which indicates unnumbered
transmission. This is a simple working mechanism.
The basic configuration of PPP on the serial link is simple. Configure PPP encapsulation in
the interface view, and then configure the IP address. The link-protocol ppp command is
used to configure the link layer protocol of the interface as PPP.
This table lists the four types of LCP packets used to negotiate link-layer parameters.
Configure-Request: the first packet in the link-layer negotiation process, indicating the
beginning of link-layer parameter negotiation between the two ends.
Configure-Ack: sent after receiving a Configure-Request from the peer, if the values of the
negotiated parameters are acceptable, to acknowledge them.
Configure-Nak: sent after receiving a Configure-Request from the peer, if the values of the
negotiated parameters are not acceptable; the reply carries the locally acceptable values.
Configure-Reject: sent after receiving a Configure-Request from the peer, if some of the
negotiated parameters cannot be identified; the reply carries the unidentified parameters.
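The decision logic behind these four packet types can be sketched as a responder function. The parameter names and acceptable ranges below are hypothetical, chosen only to illustrate the Ack/Nak/Reject branching:

```python
def lcp_respond(requested: dict, known: set, acceptable: dict) -> tuple:
    """Decide how to answer a Configure-Request.

    requested:  parameters and values proposed by the peer
    known:      parameter names this end can identify
    acceptable: for each known parameter, a predicate on the value
    """
    unknown = {k: v for k, v in requested.items() if k not in known}
    if unknown:                        # cannot identify -> Configure-Reject
        return ("Configure-Reject", unknown)
    bad = {k: v for k, v in requested.items() if not acceptable[k](v)}
    if bad:                            # identified but unacceptable -> Configure-Nak
        # A real stack replies with the locally acceptable values instead.
        return ("Configure-Nak", bad)
    return ("Configure-Ack", requested)

known = {"mru", "auth"}
ok = {"mru": lambda v: 576 <= v <= 1500, "auth": lambda v: v in ("pap", "chap")}
print(lcp_respond({"mru": 1500, "auth": "chap"}, known, ok)[0])
```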
As is shown in the figure above, RTA and RTB are connected through the serial link and
they run PPP. When the physical layer link is up, RTA and RTB negotiate the link
parameters through LCP. In this example, RTA sends an LCP packet. RTA sends a
Configure-Request packet to RTB. The packet contains the link layer parameters
configured on RTA. After RTB receives the Configure-Request packet, it returns a
Configure-Ack packet to RTA if RTB can identify the parameters in the packet and the
parameter values are acceptable.
If RTA does not receive the Configure-Ack packet, it re-sends the Configure-Request
packet every three seconds. If RTA still does not receive the Configure-Ack packet after it
has sent 10 Configure-Request packets, RTA considers RTB failed and stops sending the
Configure-Request packet.
NOTE: Completion of the above process only indicates that RTB considers the link
parameters on RTA acceptable. RTB still needs to send its own Configure-Request packet
to RTA so that RTA can check whether the parameters on RTB are acceptable.
After RTB receives the Configure-Request packet sent by RTA, RTB checks the parameters
contained in the packet. If RTB can identify the link layer parameters but finds that some or
any of the parameter values cannot be accepted, RTB returns a Configure-Nak packet to RTA.
This Configure-Nak packet contains only the unacceptable parameters. The values (or value
ranges) of these parameters are changed into the values that
can be accepted by RTB.
After receiving the Configure-Nak packet, RTA modifies the parameter values locally
according to the parameter values in the packet, and then re-sends a
Configure-Request packet.
If the values still cannot be accepted after five rounds of negotiation, the parameters are
disabled and excluded from further negotiation.
After RTB receives the Configure-Request packet sent by RTA, RTB checks the parameters
contained in the packet. If RTB cannot identify some or any of the
link layer parameters in the packet, RTB returns a Configure-Reject packet to RTA. The
Configure-Reject packet contains only the unidentified parameters.
After receiving the Configure-Reject packet, RTA re-sends a Configure-Request packet to RTB.
This packet does not contain the unidentified parameters.
On the VRP platform, MRU is represented by MTU configured on the interface. The PPP
authentication protocols widely used are PAP and CHAP (will be
described in the following chapters). The two ends of a PPP link can authenticate each other
using different authentication protocols. The authenticated party, however, must support the
authentication protocol used by the peer and the authentication information such as user
name and password should be configured correctly.
LCP uses a magic number to detect abnormal cases such as loops. A magic number is
generated randomly, and the random mechanism must guarantee that the two ends
generate different magic numbers.
After one end receives the Configure-Request packet, it compares the magic number
contained in the packet with the local magic number. If the two numbers are different, it
indicates that no loop occurs on the link, and the receiver end sends a Configure-Ack packet
(other parameters are also agreed), indicating the magic number is agreed. If the packets
sent later contain the magic numbers, the magic numbers are set to the negotiated one, and
LCP does not generate new magic numbers any more.
If the magic number in the Configure-Request packet is the same as the local magic number,
the receiving end sends a Configure-Nak packet containing a new magic number. Then,
whether or not the received Configure-Nak packet contains the same magic number, LCP
sends a new Configure-Request packet with a new magic number. If a loop exists on the
link, this process repeats continuously; if there is no loop, normal packet interaction is
quickly restored.
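The magic-number comparison can be sketched as follows. This is a toy model of the check only; in real LCP the magic number travels as a Configure-Request option:

```python
import random

def check_magic(local_magic: int, received_magic: int) -> str:
    """Compare the local magic number with the one in a received Configure-Request."""
    if received_magic != local_magic:
        return "no-loop"          # different numbers: the request came from the peer
    return "possible-loop"        # same number: we may be hearing our own request

local = random.getrandbits(32)    # each end generates its number independently
peer = random.getrandbits(32)
print(check_magic(local, peer))
```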
If the authentication fails or the administrator closes the connection manually, LCP stops
the connection.
LCP closes connections by using the Terminate-Request and Terminate-Ack packets. The
Terminate-Request packet is used to ask the peer to stop the connection. When one end
receives a Terminate-Request packet, LCP must return a Terminate-Ack packet to confirm
the closure of the connection.
If the sender does not receive the Terminate-Ack packet, it re-sends the Terminate-Request
packet every three seconds. If the sender still fails to receive the Terminate-Ack packet after
it has sent two request packets, it considers the peer failed and closes the connection.
After establishing a connection, LCP detects the status of the link by using the
Echo-Request and Echo-Reply packets. After receiving an Echo-Request packet, an end
returns an Echo-Reply packet to indicate that the link status is normal. On the VRP
platform, an Echo-Request packet is sent every 10 seconds.
The working process of PAP authentication is simple. After LCP negotiation, the authenticator
requests the peer to use PAP authentication. The peer sends the user name and password in
plain text through the Authenticate-Request packet to the authenticator. In this example, the
user name is huawei and the password is hello.
After receiving the user name and password, the authenticator checks whether the information
is correct in the local database. If the information is correct, it returns an Authenticate-Ack
packet; otherwise, it returns an Authenticate-Nak packet, indicating failure of the
authentication.
The CHAP authentication contains three interaction phases. To match the request packet
and response packet, the packet carries the Identifier field. All the packets in one
authentication process use the same identifier.
After the LCP negotiation, the authenticator sends a Challenge packet to the peer. The
packet contains the Identifier field and a randomly generated Challenge character string.
This Identifier is used by the subsequent packets of the same authentication process.
After the peer receives the Challenge packet, it encrypts the packet. The encryption formula
is MD5{ Identifier + password + Challenge }. The character string consisting of Identifier,
password, and Challenge undergoes the MD5 calculation. Then, a 16-byte digest is
generated. The digest and the CHAP user name configured on the port are encapsulated in
the Response packet and sent back to the authenticator. In this example, after the
encryption, the digest information and user name huawei are sent to the authenticator.
After the authenticator receives the Response packet sent by the peer, it searches the local
database for the password matching the user name. Then, the authenticator computes the
digest using the same encryption calculation as the peer, and compares the result with the
digest encapsulated in the Response packet. If they are the same, the authentication
succeeds; otherwise, the authentication fails.
As shown in the process above, CHAP sends the password in cipher text instead of plain
text, so security is greatly enhanced.
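The digest computation described above maps directly onto a few lines of Python. This sketches the MD5{Identifier + password + Challenge} formula from the CHAP description; the challenge value and the way the lookup is modeled are illustrative.

```python
import hashlib
import os

def chap_digest(identifier: int, password: bytes, challenge: bytes) -> bytes:
    """MD5(Identifier + password + Challenge) -> 16-byte digest."""
    return hashlib.md5(bytes([identifier]) + password + challenge).digest()

identifier = 1
challenge = os.urandom(16)       # random string sent by the authenticator
password = b"hello"

# The peer computes the digest and sends it back with its user name ("huawei").
response = chap_digest(identifier, password, challenge)

# The authenticator looks up the password for "huawei" locally, repeats the
# same calculation, and compares the two digests.
expected = chap_digest(identifier, b"hello", challenge)
print(response == expected, len(response))
```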
PPP defines a group of NCP protocols. Each NCP matches one network layer protocol and
negotiates that protocol's parameters. For example, IPCP is used for negotiating and
controlling IP parameters, and MPLSCP is used for negotiating MPLS parameters. This
course discusses only IPCP.
IPCP uses the same negotiation mechanism and packet types as LCP, and works in the
same way in terms of procedure and packet format, but IPCP is not invoked by LCP.
There are two types of IP address negotiation methods: static configuration and dynamic
configuration.
As it is shown in the figure, the IP addresses on the two ends are 10.1.1.1/30 and
10.1.1.2/30. The two IP addresses are in network segment 10.1.1.0/30.
The negotiation process for the static configuration of IP addresses is as follows:
1. The two ends send the Configure-Request packets, which contain the local IP address.
2. After receiving the Configure-Request packet, each end checks the IP address
contained in the packet. If the IP address is a valid unicast address and is different from
the locally configured one (no conflict), the peer may use this address and the local end
returns a Configure-Ack packet.
As shown in the routing table, the IP address of the peer on the PPP link appears as a
32-bit host address. This is because, by exchanging information through IPCP, the two
ends of the PPP link learn each other's IP address.
As is shown in the figure above, RTA asks the peer to allocate an IP address, and RTB uses
static IP address 10.1.1.2/30. RTB enables the function of allocating an IP address to the
peer, and allocates IP address 10.1.1.1 to RTA.
The process of dynamic IP address negotiation is as follows:
RTA sends a Configure-Request packet to RTB. The packet contains IP address 0.0.0.0,
which indicates a request for the peer to allocate an IP address. After RTB receives the
Configure-Request packet, it considers IP address 0.0.0.0 invalid and returns a
Configure-Nak packet containing IP address 10.1.1.1. After RTA receives the Configure-Nak
packet, it updates the local IP address and re-sends a Configure-Request packet containing
IP address 10.1.1.1. When RTB receives this Configure-Request packet, it considers the IP
address contained in the packet valid and returns a Configure-Ack packet.
At the same time, RTB sends a Configure-Request packet to RTA, which means that RTB
requests to use IP address 10.1.1.2. If RTA considers the IP address valid, it will return a
Configure-Ack packet.
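The dynamic exchange above can be sketched as a simple responder on RTB's side. The addresses match the example; the function name and the simplified validity check are hypothetical.

```python
def ipcp_respond(requested_ip: str, pool_ip: str) -> tuple:
    """RTB's reaction to an IPCP Configure-Request carrying an IP address."""
    if requested_ip == "0.0.0.0":           # peer asks us to allocate an address
        return ("Configure-Nak", pool_ip)   # Nak suggests the allocated address
    return ("Configure-Ack", requested_ip)  # address is valid: acknowledge it

# RTA first requests 0.0.0.0 and is Nak'ed with 10.1.1.1 ...
reply1 = ipcp_respond("0.0.0.0", "10.1.1.1")
# ... then re-sends a Configure-Request with 10.1.1.1 and gets an Ack.
reply2 = ipcp_respond(reply1[1], "10.1.1.1")
print(reply1, reply2)
```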
FR is a fast packet switching technology that transmits and switches data units in a
simplified manner compared with X.25. FR is virtual-circuit based: data is transmitted over
logical links rather than physical links, and multiple logical links can be multiplexed onto
one physical link. Bandwidth can therefore be multiplexed and dynamically allocated, which
facilitates transmission for multiple users at multiple rates, so the network resources are
fully used. As shown in the figure above, the use of virtual circuits allows full utilization of
network resources. Frame Relay features high throughput and low delay, and is applicable
to services with bursty traffic.
FR simplifies the layer-3 functions of X.25; however, it does not support retransmission
when an error occurs.
Frame Relay is found at the second layer of the OSI model. It is a simplified way to transmit
and switch data units at the data link layer. FR realizes the functions of the physical layer
and the link layer. The functions such as traffic control and error checking are realized by the
intelligent terminal. Hence the protocol between nodes is simplified. FR can transmit various
routing protocols. The packets of the routing protocols are encapsulated in the FR data
frame, as shown in the figure above.
The above figure shows the FR network model. The model consists of the DTE and the
FR switching fabric.
The FR switching fabric consists of a group of DCEs. The LANs on the two ends are
interconnected through the FR network. The data of the LANs is transmitted through the
PVCs.
The terms related to the FR network are as follows:
Data Terminal Equipment (DTE): refers to the user-side device.
Data Circuit-terminating Equipment (DCE): refers to the switching equipment on the
network, like FR switch. The DTE and the DCE are directly connected. The DCE is
connected to a port on the switch. Multiple connections are set up between multiple
switches. The links between the DTE are established, as shown in the figure above.
Data Link Connection Identifier (DLCI): identifies the link interface. Every link on the FR
network uses a DLCI. The FR is a connection-oriented technology. Before communication
starts, a link must be established between the devices. The link between the DTE is called
virtual circuit. The virtual circuit of the FR is classified into PVC and SVC. The PVC is
widely used in FR.
A Frame Relay (FR) network provides data communication between user devices (such as
routers and hosts).
According to different functions, FR devices and interfaces can be divided into the following
three types:
The user device is called Data Terminal Equipment (DTE). The interfaces on the DTE are
called DTE interfaces.
The device that provides access for the DTE is called Data Circuit-terminating Equipment
(DCE). The interfaces on DCE devices are called DCE interfaces. The interfaces that
connect the DTE and the DCE are User-to-Network Interfaces (UNIs).
The interfaces between FR switches are Network-to-Network Interfaces (NNIs). In practice,
a DTE interface can be connected only to a DCE interface, and an NNI interface can be
connected only to another NNI interface.
FR is a statistical multiplexing protocol: one physical link can carry multiple virtual links,
each identified by a DLCI. The address field in the FR frame identifies the virtual link that
the frame belongs to.
The DLCI is valid only on the local interface and the directly connected peer interface; it is
not globally significant. That is, in the FR network, the same DLCI value on different
physical interfaces may identify different virtual links. The user interface on an FR network
supports up to 1024 virtual circuits, and the DLCI values available to users range from 16
to 1007. The virtual circuit is connection-oriented, so different local DLCIs are connected to
different peer devices; the local DLCI can be considered the FR address of the peer device.
The FR network is a public facility, often provided by a telecom service provider, although
users can also build an FR network with private switches. In either case, the provider of the
FR network allocates the DLCIs to the PVCs used by the users' routers. Some DLCI
numbers represent special functions; for example, DLCI 0 and DLCI 1023 are reserved for
the LMI protocol.
Address mapping of FR is to associate the protocol address of the peer device with the FR
address (local DLCI) of the peer device so that the upper layer protocol can find the peer
device through the protocol address of the peer device. FR is mainly used to carry the IP
protocol. Before the device sends an IP packet, the DLCI matching the next hop address
must be known. The device finds the DLCI by searching the mapping table. Address
mapping can be configured manually or maintained dynamically by the protocol.
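The address-mapping lookup can be sketched as a per-interface table keyed by next-hop IP address. The addresses and DLCI values below are made up for illustration:

```python
# Per-interface FR address map: next-hop protocol address -> local DLCI.
fr_map = {
    "10.1.1.2": 100,
    "10.1.1.3": 200,
}

def dlci_for_next_hop(next_hop: str) -> int:
    """Before an IP packet is sent, look up the DLCI matching its next hop."""
    try:
        return fr_map[next_hop]
    except KeyError:
        # No mapping: the packet cannot be placed on any virtual circuit.
        raise LookupError(f"no FR address mapping for {next_hop}") from None

print(dlci_for_next_hop("10.1.1.2"))
```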
Local Management Interface (LMI): monitors the status of the PVC. The system supports
three kinds of LMI protocols: ITU-T Q.933 Annex A, ANSI T1.617 Annex D, and the
nonstandard compatible protocol. The nonstandard compatible protocol is used for
interconnecting a device with the devices of other vendors.
LMI works as follows: the DTE sends a Status Enquiry packet at regular intervals to query
the status of the virtual circuits. When the DCE receives the packet, it sends a Status
packet to notify the DTE of the status of all the virtual circuits on the current interface.
The PVC status of the DTE-side devices depends on the DCE-side devices. The PVC status
of the DCE-side devices depends on the network. If two network devices are directly
connected, the PVC status of the DCE-side devices is set by the administrator.
The FR network can connect disparate networks. The network architecture may be
full-meshed, partial-meshed, or star. In terms of cost, the star structure is the best, as it
limits the number of PVCs required. A central node is connected to the distributed nodes by
using multiple PVCs on one interface. This architecture is applicable to a company where
the headquarters needs to be connected to multiple branches. The disadvantage of this
architecture is that the distributed nodes can communicate only through the central node.
In the full-meshed structure, all the nodes are interconnected through PVCs. Any two nodes
can communicate directly without passing through other nodes. The reliability of such an
architecture is high: if one PVC fails, the data can be transmitted through another. The
disadvantage is that a great number of PVCs are required; if one node is added to the
network, many new PVCs need to be added.
In the partial-meshed structure, only some nodes are connected directly. The default FR
network mode is non-broadcast multi-access (NBMA). That is to say, although the nodes in
the FR network can communicate with each other, the FR network does not support
broadcasts. If a node receives routing information, it duplicates the packet and sends a
copy carrying the routing information to each of the other nodes.
Address mapping of FR associates the protocol address of the peering device with the local
DLCI, so that frame relay can identify the PVC that should be used in order to reach a given
destination.
It should be noted that the mapping table is based on a logical interface. The logical
interface has its own mapping table. The key in the mapping table is the relationship
between the peer protocol address and the local DLCI.
The inverse ARP protocol is used for resolving the network address of a peer over a virtual
circuit, with support for both IP and IPX addressing. If the protocol address of the peer is
known, the inverse ARP protocol can locally generate a mapping relationship between the
peer network address and the DLCI (MAP). The address mapping therefore need not be
configured manually.
The process is as follows:
When a new virtual circuit is found (the local interface is already configured with the protocol
address), the inverse ARP protocol sends an Inverse ARP request packet to the peer. The
packet contains the local protocol address. When the peer receives the request, it obtains the
local protocol address, and generates a mapping relationship. At the same time, the inverse
ARP protocol sends a response packet and generates the mapping locally.
It should be noted that:
1) If a static mapping relationship is configured manually, the inverse ARP protocol does
not send the request packet to the peer, regardless of whether the peer address in the
static mapping is correct or not.
2) After receiving an inverse ARP request packet, the receiver does not generate a dynamic
mapping if it discovers that the peer protocol address is already present in the local
mapping table.
3) A multiprotocol host responds only to requests whose protocol address type matches a
protocol address configured locally.
4) A multiprotocol host sends requests for all the protocol addresses on each interface.
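The request/response exchange can be sketched as follows. This is a toy model of the mapping logic only, not the Inverse ARP wire format; the function name and the rule that an already-mapped address suppresses a new dynamic mapping (note 2 above) are the assumptions being illustrated.

```python
def handle_inarp_request(fr_map, dlci, peer_address, local_address):
    """Process an Inverse ARP request received on a virtual circuit.

    fr_map:        this interface's mapping table {peer address -> DLCI}
    dlci:          the virtual circuit the request arrived on
    peer_address:  the protocol address carried in the request
    local_address: our protocol address, to be carried in the reply
    Returns the address for the Inverse ARP reply, or None when the peer
    address is already mapped and no dynamic mapping is added.
    """
    if peer_address in fr_map:       # note 2: address already in the table
        return None
    fr_map[peer_address] = dlci      # learn the peer address -> DLCI mapping
    return local_address             # reply carries our own protocol address

table = {}
reply = handle_inarp_request(table, 100, "10.1.1.2", "10.1.1.1")
print(reply, table)
```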
As is shown in the figure above, Router A is connected to three routers, Router B, Router C,
and Router D, through interface S0. Three DLCIs on S0 are mapped to the three routers,
so route update information received on S0 is not sent out through S0. The distance vector
routing protocol implements split horizon: a router cannot forward route update information
out through the interface on which the information was received. As shown, Router B
advertises routing information to Router A; the split horizon mechanism prevents Router A
from forwarding it to Router C and Router D through interface S0.
There are two ways to resolve this problem. One is to connect multiple neighboring nodes
through multiple physical interfaces. This method requires that the router have multiple
physical interfaces, which increases cost. The other method is to use sub-interfaces: a
single physical interface is configured with multiple logical interfaces, and each
sub-interface has its own network address and operates like an independent physical
interface. It is also possible to disable the split horizon feature, but doing so increases the
possibility of routing loops.
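The split horizon rule itself is a one-line check: a route is never advertised out of the interface it was learned on. A sketch, with interface names matching the example but hypothetical data structures:

```python
def routes_to_advertise(routing_table, out_interface):
    """Apply split horizon: suppress routes learned on the outgoing interface."""
    return [route for route, learned_on in routing_table.items()
            if learned_on != out_interface]

# Router A learned a route on S0, so it will not re-advertise it on S0;
# Router C and Router D (also reached via S0) therefore never hear about it.
table = {"10.2.0.0/16": "S0", "10.3.0.0/16": "E0"}
print(routes_to_advertise(table, "S0"))
```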
The split horizon problem can be solved by configuring sub-interfaces. One physical
interface can support multiple logical sub-interfaces, and each sub-interface can be
connected to a peer router through one or more DLCIs over the FR network.
The logical sub-interfaces are defined on the serial link. After a DLCI is configured on a
sub-interface, the mapping between the destination address and the DLCI should be
generated.
As is shown in the figure above, on physical serial interface S0 of Router A, the DLCI of
S0.1 is mapped to Router B, the DLCI of S0.2 is mapped to Router C, and the DLCI of S0.3
is mapped to Router D.
The sub-interfaces in FR are classified into two types:
Point-to-point sub-interface: connects to a single remote node. Each sub-interface is
configured with one PVC. The peer can be identified without a static address mapping,
because the peer address is determined as soon as the PVC is configured on the
sub-interface.
Point-to-multipoint sub-interface: connects multiple remote nodes. One sub-interface is
configured with multiple PVCs. Each PVC is mapped to the connected remote protocol
address. Thus, the PVC can be connected to the corresponding remote end. The address
mapping must be configured manually or set up through the inverse ARP protocol.
Before creating the FR sub-interface, the user should configure the interface to use FR as the
link-layer protocol. The default sub-interface type is p2mp.
RTA and RTB are connected by a serial link. The IP address planning is as shown in the
above figure. The link-layer protocol is FR. The configuration of FR in this example is
similar to the configuration in the preceding example. The difference is that the mapping
between the interface network address and the FR address is generated by the inverse
ARP protocol.
The fr inarp [ ip [ dlci-number ] ] command enables the dynamic address mapping. In
VRP, the dynamic address mapping is enabled on the FR interface by default. So this step
is optional.
The display fr interface command displays the information about the FR interfaces, the
operation mode of the FR interfaces, and the physical status and protocol status of the FR
interfaces.
The display interface Serial 0 command displays the information about the interface,
including the physical status and protocol status, the IP address, the link-layer
encapsulation mode, and the LMI type.
RTA and RTB are connected by a serial link. The IP address planning is as shown in the
above figure. The link-layer protocol is FR.
The link-protocol fr command sets the link-layer encapsulation of the interface to FR. By
default, the link-layer encapsulation is PPP. When FR is encapsulated, the encapsulation
format is IETF by default.
ietf: indicates the standard IETF encapsulation, which complies with the RFC 1490. It is the
default encapsulation format.
nonstandard: indicates the encapsulation format of the nonstandard compatible protocol.
The fr interface-type command sets the FR interface type.
dte, dce, and nni: indicate the three types of FR interfaces.
In FR, the two parties of the communication are at the user side and the network side
respectively. The user-side party is called DTE. The network-side party is called DCE.
In the FR network, the interfaces between the FR switches are NNI interfaces. The
corresponding interfaces adopt the NNI mode. If the devices are used for FR switching, the
interfaces should work in NNI mode or DCE mode.
The fr dlci command configures the virtual circuit for the FR interface. The ip address
10.1.1.1 30 command configures the IP address for the interface.
The fr map ip command adds a static mapping between the peer protocol address and the
FR address (the local DLCI). ip-address: indicates the IP address of the peer.
ip-mask: indicates the subnet mask. The format of the subnet mask is X.X.X.X. X is an integer
ranging from 0 to 255. dlci-number: indicates the number of the local virtual circuit. The value
ranges from 16 to 1007.
The display fr map-info command displays the mapping between the protocol address and
the FR address.
In this example, the output on RTA shows that the address mapped to DLCI 200 is 10.1.1.2,
the network address of RTB. The local interface S0 on RTA works in DCE mode.
In this example, the router functions as the FR switch. The PVC is configured manually.
The configuration of RTD is similar to that of RTA: configure the data link protocol, the
interface type, and the IP address.
(Figure: PVC switched between Serial0, DLCI 100, and Serial2, DLCI 200; status Active.)
Using the display fr map-info command, you can view the FR address mapping table.
[RTA]dis fr map-info
Map Statistics for interface Serial0 (DTE)
DLCI = 100, IP INARP 10.1.1.2, Serial0
create time = 2007/06/04 17:34:59, status = ACTIVE
encapsulation = ietf, vlink = 20, broadcast
The ACTIVE status verifies that the PVC is operational.
<RTA>ping 10.1.1.2
PING 10.1.1.2: 56 data bytes, press CTRL_C to break
Reply from 10.1.1.2: bytes=56 Sequence=1 ttl=255 time=31 ms
Reply from 10.1.1.2: bytes=56 Sequence=2 ttl=255 time=31 ms
Reply from 10.1.1.2: bytes=56 Sequence=3 ttl=255 time=31 ms
Reply from 10.1.1.2: bytes=56 Sequence=4 ttl=255 time=31 ms
Reply from 10.1.1.2: bytes=56 Sequence=5 ttl=255 time=31 ms
The fr switch command configures a static PVC route (including a backup PVC) for FR switching.
fr switch name [ interface interface-type in-interface-number dlci in-dlci
interface interface-type out-interface-number dlci out-dlci ]
name: specifies the name of the PVC for FR switching. The value is a string of 1 to 31
characters.
interface interface-type in-interface-number dlci in-dlci: specifies the interface type, interface
number, and DLCI value on the ingress of the PVC. The value of in-dlci ranges from 16 to
1007. interface interface-type out-interface-number dlci out-dlci: specifies the interface type,
interface number, and DLCI value on the egress of the active or
backup PVC. The value of out-dlci ranges from 16 to 1007.
It should be noted that:
The interface for FR switching must be set to NNI or DCE mode; otherwise, the FR switching
function cannot take effect. Before or after the static route of the PVC is configured, the user
must run the fr switching command to enable the FR switching route.
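Based on the syntax and constraints above, a hedged sketch of a static FR switching route on the switching router follows (the device name, interface numbers, and DLCI values are illustrative, loosely following the figure labels):

```
[FRSwitch] fr switching                       // enable FR switching globally
[FRSwitch] interface Serial 0
[FRSwitch-Serial0] link-protocol fr
[FRSwitch-Serial0] fr interface-type dce      // switching interfaces must be DCE or NNI
[FRSwitch] interface Serial 2
[FRSwitch-Serial2] link-protocol fr
[FRSwitch-Serial2] fr interface-type dce
[FRSwitch] fr switch pvc1 interface Serial 0 dlci 100 interface Serial 2 dlci 200
```

Frames arriving on Serial 0 with DLCI 100 are then switched out of Serial 2 on DLCI 200, and vice versa.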
(Figure: PVC switched between Serial0, DLCI 100, and Serial2, DLCI 200; status Active.)
Using the display fr pvc-info command, you can view the configuration and statistics of
the FR PVC:
If no parameter is specified, basic FR configuration and statistics of all interfaces are
displayed.
If interface numbers are specified but the DLCI number is not specified, basic FR
configuration and statistics of DLCI of the specified interface are displayed. If both interface
number and DLCI number are specified, basic FR configuration and statistics of specified
DLCI of the specified interface are displayed. The FR QoS configuration and statistics are also
displayed. From the input and output packet statistics, it is easy to verify that the PVC is working.
In a practical sense, a firewall acts as a separator and also an analyzer: it supervises
activity between the internal and external networks and helps ensure that the security of
the internal network is maintained.
The firewall can be in the form of a series of hardware devices or supported software
within a given device.
A firewall can comprise several components, some of which implement functions beyond
basic firewalling.
A firewall is the combination of hardware, software, and control policies, where the control
policy can be divided into two kinds:
1. A strict policy is highly secure but may disrupt many services due to non-reviewed policy
restrictions.
2. A loose policy provides much freedom to users but may leave many security
holes in the network if good policy management has not been applied.
Commonly, firewalls adopt a more secure policy and assess requests for additional
permissions on a case-by-case basis when policy restrictions need to be
relaxed. This can take some effort, as a series of security review
processes is often necessary to ensure that relaxing a restriction
does not expose the internal network to external threats.
With the development of firewall technology, firewall functions have become more and
more diverse. From the technology development aspect, firewalls can be
classified into three kinds: packet filtering, proxy, and state detection. At present, the most
popular type is the state detection (stateful inspection) firewall.
Proxy firewall:
A proxy firewall acts as an intermediary node for service access; to a client node, it
represents the server, and to a server, it represents the client. A proxy firewall provides high
security, but the cost is also high. It is hard to develop a corresponding proxy service for
every application, so a proxy firewall cannot support a large number of services; it can only
provide proxy service for some common applications such as HTTP.
As shown in the figure above, in the security system, firewall is analogous to a door, it can
prevent people from entering, but it can not prevent malicious attacks from people that have
permission to enter the network or are located internally. An access control system can
prevent people with low priority from doing work which exceeds their authority, but it can not
prevent people with high priority from malicious actions. It also cannot prevent people with
low priority from obtaining high priority through illegal behavior. An intrusion detection
system (IDS) is a dedicated device that identifies whether the system is safe according to data
and behavior patterns; it is the second security door following the firewall. There is a classical
comparison: a firewall corresponds to the security guard at a community gate, who checks
everyone passing through the gateway but cannot watch the people inside the community or
those with legitimate identities. An IDS can supervise the inside of the community.
An IDS is analogous to a security camera for the network: it can capture and record all the
data. At the same time, it is an intelligent camera that can analyze and extract doubtful or
abnormal network data, with the intelligence to penetrate disguised data and identify the
actual content. An advanced IDS can automatically fight back, terminate connections, and
close paths to stop illegal behavior.
There are other technologies in a security system besides those mentioned: identity
authentication, ACL packet filtering, restricted system access for special users, protection
of critical servers through system hardening and immunity measures, discovery and
patching of system holes through scanning software, and encryption or VPN technology for
data transmission to guarantee (often end-to-end) security. System operation can be
supervised through a security management center, with operational event logging and
threat detection using alarms and threat response processes.
A firewall strictly manages access from external networks into the internal network; access
from the internal network to the external network is relatively loose in comparison.
A firewall cannot update its software as frequently as antivirus software updates its
signatures, so its defense against newly emerging security threats is sometimes insufficient.
If the deep inspection function is configured, the firewall examines part of the packet
payload, which increases forwarding delay and affects forwarding
performance.
A firewall cannot inspect encrypted packets or other packets transmitted in VPN
tunnels that pass through the firewall.
The Eudemon series firewall is HUAWEI's firewall product line. According to the
system structure, the Eudemon products fall into three series: the E100, the E200, and
the E300/500/1000.
The E100 is the first of the firewall series; it has two fixed 100M Ethernet interfaces and two
expansion slots.
The E200 and E300/500/1000 are the more popular products, and performance increases
with the series number. The E200 has one main control board, with two
Ethernet interfaces on the board that are connected to the system bridge directly.
This allows the two interfaces to have higher performance than the other interfaces. There
are also two expansion slots, numbered 1 and 2 from left to right;
the bottom position is fitted with a blank panel and is reserved for future expansion.
The E300/500/1000 also has one main control board, but the two Ethernet interfaces on the
board have low service performance; they should not be chosen as service interfaces, only
as management interfaces. Behind the main control board there is one NP board, used for
service forwarding. The device provides 4 service slots, numbered 1, 2, 3, and 4 from left to
right. Slot 3 is a low-speed slot and can only support a low-speed service board. Slots 1, 2,
and 4 are high-speed slots used to support high-speed service boards. A high-speed board
can only be installed in a high-speed slot; low-speed boards can be installed in both
low-speed and high-speed slots.
Throughput
Usually, large packets of 1K to 1.5K bytes are used to measure the packet filtering
performance of a firewall. In a real network, most of the traffic consists of packets of around
200 bytes, so it is also necessary to consider small-packet forwarding performance. ACL
rules are normally configured on a firewall, so the performance when a large number of
rules have been defined should be considered as well.
The rate of connections established per second
This indicates the number of complete TCP connections that can be established through the
firewall every second. Firewall connections are dynamic; they are established
according to the state of the communicating peers. Before a session can exchange data
packets, a connection must be established through the firewall. If the connection setup rate
of the firewall is low, the client perceives a large delay in communication. So the larger this
index, the higher the forwarding speed; when the firewall is attacked, the larger the index,
the stronger the defense ability; and the larger the index, the stronger the state backup ability.
Concurrent connection number
This indicates the maximum number of connections that the firewall can support at one time;
one connection is one TCP/IP access.
Note:
As shown in the figure above, some high-capacity service boards cannot guarantee
line-speed forwarding. For example, a 2G service board provides two 1G service interfaces,
but the backplane slot bandwidth is only 1G; if 2G
throughput is needed, it is suggested to choose two 1G boards instead.
Shown in the figure above is the hardware system structure of the Eudemon200. The blue
frame marks the processing board (RPU) of the firewall. On the RPU board there are two
10/100Base-TX interfaces connected to the system bridge directly; they use MAC forwarding
on the main control board, so these two interfaces have higher performance than the other
interfaces. The Eudemon200 adopts a dual PCI bus structure, with a separate bus to each of
the two slots. This structure reduces collisions in packet forwarding and improves the
forwarding performance effectively.
The Eudemon200 processing board mainly performs protocol processing, low-speed packet
forwarding, interface control, fault detection, and so on; it is the core part of the product. The
states of system components such as the fans, power supply, and overall system are shown
by indicator lights on the RPU board of the firewall. The board can also report alarms relating
to the fans, power, and system temperature. The RPU board also supports a hardware reset
button.
Q: How many variations of firewall are there, and what features do they support?
A: Firewalls come in three variations: packet filtering, proxy, and state detection.
A packet filtering firewall uses predefined rules (source/destination IP
address, source/destination TCP/UDP port, and protocol number) to filter packets.
A proxy firewall acts as an intermediary node for service access; to a client node, it
represents the server, and to a server, it represents the client. State detection is
used to inspect application-layer protocol information and supervise the
protocol state of connection-oriented applications. By detecting the state
of TCP/UDP-based connections, the firewall can dynamically determine whether a
packet may pass through the firewall or not.
Q: Which models make up the Eudemon firewall series?
A: The series includes the E100, E200, E300, E500, and E1000.
A zone is an important firewall security concept. Firewalls are generally located at the
boundary of a network, which allows different networks to be represented as different
zones. The firewall adds interfaces into zones and enables security detection
between zones (called a security policy), which can be used to filter the data flowing between
zones. Common methods used for security detection include ACL-based
detection and application state detection.
The Eudemon firewall has four reserved security zones:
Untrust zone: a low-level security zone with security priority 5.
DMZ: a mid-level security zone with security priority 50.
Trust zone: a high-level security zone with security priority 85.
Local zone: the highest-level security zone with security priority 100.
If necessary, users can configure new security zones and define their security priority. With
the exception of the Local zone, a security zone must be associated with firewall interfaces
before use, which is done by adding firewall interfaces into the zone. An interface can be
added into only one zone. The interface can be physical or logical. Adding an interface to a
zone means that the network connected to the interface belongs to that zone; the interface
itself belongs to the Local zone.
The association of security zones and networks should obey the following rules: internal
networks should belong to a zone with a higher priority; external networks should belong to
zones with a lower priority; networks that provide
limited services to external users should belong to the DMZ.
The purpose of defining security priority is to distinguish the direction of data flow amongst
security zones, whether inbound or outbound.
When data flows between security zones, the firewall security detection mechanisms take
effect; in particular, the security policy implemented between zones manages the traffic
flow, for example between the untrusted zone and the trusted zone. Different security
policies can be implemented between different zone pairs, for example a packet filtering
policy or a state filtering policy.
There are two directions of data flow between zones:
Inbound: In which the data flow is transmitted from a zone with a low priority to a zone with a
high priority.
Outbound: In which the data flow is transmitted from a zone with a high priority to a zone with
a low priority.
No two security zones can use the same priority. Interfaces in the same zone forward
packets directly without filtering, so no zone defenses apply within a zone. An interface
cannot forward packets before it is added into a zone.
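The relationship between zone priorities and flow direction can be sketched in a few lines of Python (the zone names and priorities follow the four default zones listed above; the function is illustrative, not firewall code):

```python
# Default Eudemon security zones and their priorities, as listed above.
ZONE_PRIORITY = {"untrust": 5, "dmz": 50, "trust": 85, "local": 100}

def flow_direction(src_zone: str, dst_zone: str) -> str:
    """Classify inter-zone traffic: low-to-high priority is inbound, high-to-low is outbound."""
    src, dst = ZONE_PRIORITY[src_zone], ZONE_PRIORITY[dst_zone]
    if src == dst:
        raise ValueError("two zones cannot share the same priority")
    return "inbound" if src < dst else "outbound"
```

For example, flow_direction("untrust", "trust") yields "inbound", matching the definition of inbound traffic entering a higher-priority zone.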
This example introduces how a security zone is created and how to configure the priority
and apply an interface to the created zone.
[Eudemon] firewall zone name userzone
// creates a security zone named userzone, the system can support up to 16 zones in total,
including the default 4 zones.
[Eudemon-zone-userzone] set priority 60
//configures the priority, with a range from 1 to 100; no two zones may use the same
priority, and the priorities of the 4 default zones cannot be modified.
[Eudemon-zone-userzone] add interface Ethernet 0/0/1
//adds an interface to a zone, one zone can support 1024 interfaces at most.
The [Eudemon] display zone userzone command can be used to display related information
for a given security zone.
This example introduces how to configure security policy between zones. When data flows
between security zones, the security detection mechanism will initialize. Generally, data from
an untrusted zone can not enter a trusted zone, unless permitted explicitly. After applying the
configuration displayed, data from an untrusted zone can forward to a trusted Zone.
[Eudemon]acl 3000
[Eudemon-acl-adv-3000] rule permit ip
//creates an ACL rule that permits any data to pass.
[Eudemon] firewall interzone trust untrust
//enter trust-untrust view.
[Eudemon-interzone-trust-untrust]packet-filter 3000 inbound
//distribute security policy.
inbound Filters the data packet forwarding from a low level zone to a high level zone.
outbound Filters the data packet forwarding from a high level zone to a low level zone.
Enter the firewall interzone command with the two zones involved; the inbound and
outbound relation is determined by their priorities. For example, between a trusted zone and
an untrusted zone, inbound means data received by the trusted zone from the untrusted
zone. When applying ACLs between zones, the Eudemon200, Eudemon300, and
Eudemon500 can apply one ACL rule per direction (inbound or outbound).
Eudemon firewall can work in three modes: route mode, transparent mode and composite
mode.
If the Eudemon firewall connects to the external network at layer 3 (meaning an IP address
has been configured on the external interface), it is regarded that the firewall is operating in
route mode. As shown in the figure above, when the Eudemon firewall is located between an
internal network and an external network, the three interfaces on the firewall that connect to
internal network, external network and the DMZ area should be configured with IP addresses
as part of different network segments. In this topology, the firewall behaves like a router. In
route mode, it can perform ACL packet filtering, ASPF (status-based packet filtering), and
NAT. However, when route mode is used, the network topology must be modified (users on
the internal network must change the gateway of their end systems, routers must change
their route configurations, and so on).
If the Eudemon firewall connects externally at layer 2 (no IP address is configured on the
interface), the firewall is considered to be operating in transparent mode. In transparent
mode, the firewall only needs to be inserted into the network as a bridge; the greatest
advantage is that no existing configuration needs to be modified. The firewall functions like a
switch, and the internal and external networks must remain in the same subnet. At present,
the Eudemon firewall does not support STP, so it should be deployed with care to avoid layer
2 loops in the network. In this mode, the firewall not only forwards packets like a switch but
also analyzes them.
If the Eudemon firewall has both an interface working in route mode (with an IP address)
and an interface working in transparent mode (without an IP address), the firewall is
considered to be working in composite mode. This mode is a mix of transparent mode and
route mode; at present, it is only used in the special case where transparent mode must
provide dual-device hot backup.
An IP address should be configured on the interface where the VRRP (Virtual Router
Redundancy Protocol) function is enabled. The other interfaces do not require IP addresses,
and the internal and external networks must be in the same subnet.
A firewall must provide the ability to control network data flow, so as to guarantee security,
meet QoS requirements, and implement policy. An ACL (Access Control List) is one method
of controlling data flow. An ACL is a series of ordered rules composed of permit
or deny statements. These rules describe data packets through parameters such as the source
address, destination address, port number, and protocol. An ACL can be applied in the
following situations:
1, Packet filtering as part of the network security protection mechanism. Packet filtering is
used between two networks with different priorities to control the data flow of a network
(inbound and outbound). When a firewall forwards a data packet, it first examines the packet
header (i.e., source/destination address, source/destination port, and upper-layer
protocol), and then compares it with the configured rules. According to the result of the comparison,
it can determine whether to forward the packet or to discard the packet. To implement packet
filtering, a series of filtering rules are needed. It is possible to adopt an ACL to define filtering
rule, and then apply the ACL to filter between the firewall zones, so as to implement packet
filtering.
2, NAT (Network Address Translation) is the process of translating the IP address in a
data packet header into another IP address. Its main purpose is to let an
internal network (using private IP addressing) forward traffic to the external network
(using public IP addressing). In the actual application, it is often desired that some internal hosts
(supporting private IP addresses) can access the external network or Internet, while other
internal hosts can not. It is implemented through association of the ACL and NAT address
pool, meaning only data packets that satisfy the ACL rule can translate addresses, so as to
control the range of address translation.
An ACL can also be applied to other scenarios involving IPSec, QoS and routing policies.
One ACL can be composed of multiple ACL rules, each including the keyword permit or
deny.
Use the command acl [ number ] acl-number [ vpn-instance vpn-instance-name ] in
the system view to create an ACL.
[ number ] acl-number defines one ACL. For a basic ACL, the range is 2000 to 2999;
for an advanced ACL, the range is 3000 to 3999; for a firewall ACL, the range is 5000
to 5499.
vpn-instance specifies the VPN instance to which the ACL rule applies.
After entering the basic ACL view, the command rule [ rule-id ] { permit | deny }
[ source { source-address source-wildcard | any } ] [ time-range time-name ] can be
used to create a basic ACL rule:
rule-id is the number of each ACL rule; it is an optional parameter. When defining
an ACL rule, if the specified number already exists, the newly defined rule
overwrites the old one; if it does not exist, a new rule is created. If no rule number is
specified when an ACL rule is defined, the system automatically assigns a
number to the rule.
permit and deny specify the action applied when a match occurs. permit performs
NAT or security policy detection on the data packet and allows it accordingly.
deny is the opposite: it does not perform the corresponding detection on a matched
packet, which is discarded.
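The source-wildcard match used by a basic ACL rule can be sketched as bit arithmetic (a wildcard bit of 1 means "ignore this bit"; the helper below is illustrative, not VRP code):

```python
import ipaddress

def wildcard_match(addr: str, rule_addr: str, wildcard: str) -> bool:
    """A packet address matches when every bit not ignored by the wildcard agrees."""
    a = int(ipaddress.IPv4Address(addr))
    r = int(ipaddress.IPv4Address(rule_addr))
    w = int(ipaddress.IPv4Address(wildcard))
    # Clear the ignored (wildcard=1) bits on both sides, then compare.
    return (a & ~w & 0xFFFFFFFF) == (r & ~w & 0xFFFFFFFF)
```

For example, a rule with source 129.38.1.0 and wildcard 0.0.0.255 matches any host in the 129.38.1.0 subnet used in the example that follows.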
The following introduces an example of ACL application and configuration on the
firewall.
A certain company accesses the Internet through Ethernet 1/0/0 of the Eudemon firewall, the
interface belongs to Untrust Zone. The firewall connects to the internal network through
Ethernet 0/0/0; this interface belongs to the Trust Zone. The company provides WWW, FTP,
and Telnet services to the outside; its subnet is 129.38.1.0. The internal FTP server address is
129.38.1.1, the internal Telnet server address is 129.38.1.2, and the internal WWW server
address is 129.38.1.3. Through configuring the firewall, it is hoped the following requirements
will be implemented:
In the external network, only the special user 202.39.2.3 can access internal servers.
In the internal network, the three servers and special user 129.38.1.4 can access the external
server.
Internet address allocation rules reserve the following three network address ranges as
private address ranges:
10.0.0.0 - 10.255.255.255
172.16.0.0 -172.31.255.255
192.168.0.0-192.168.255.255
These three network address ranges are not allocated on the Internet, but they can be used
inside an enterprise (LAN). An enterprise chooses a proper network address range
according to the foreseeable number of hosts required, and different enterprises can use
the same internal network addressing. If a company uses addresses outside these ranges
for its internal network, routing confusion may result. So when
constructing an internal LAN, it is recommended that one of the address ranges
above be used for internal network addressing.
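The three reserved ranges can be checked with the Python standard library, a quick way to validate an internal addressing plan:

```python
import ipaddress

# The three private ranges reserved for internal use, as listed above.
PRIVATE_NETS = [
    ipaddress.ip_network("10.0.0.0/8"),        # 10.0.0.0 - 10.255.255.255
    ipaddress.ip_network("172.16.0.0/12"),     # 172.16.0.0 - 172.31.255.255
    ipaddress.ip_network("192.168.0.0/16"),    # 192.168.0.0 - 192.168.255.255
]

def is_private(addr: str) -> bool:
    """Return True when the address falls inside one of the reserved private ranges."""
    return any(ipaddress.ip_address(addr) in net for net in PRIVATE_NETS)
```

For example, is_private("172.31.0.1") is True, while is_private("172.32.0.1") is False because the second range stops at 172.31.255.255.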
Public addresses are legal IP addresses obtained from an Internet address allocation
organization; for most users this means obtaining public addresses from an ISP as part of
a typical subscription package.
As mentioned before, when private IP address users wish to access public address domains
such as the Internet, the private addresses must be translated into public addresses through
NAT.
When a connection is established from the Trust Zone to the Untrust Zone or DMZ on the
Eudemon firewall, the firewall detects whether the corresponding data needs NAT
translation. If so, translation is completed at the egress of the IP forwarding interface, where
the source address of the packet (a private address) is translated to a public address. At the
ingress of the IP layer, the destination address of the reply packet (a public address) is
translated back to the private address.
As shown in the figure above, the Eudemon firewall is located at a private/public network
boundary. When an internal PC A (192.168.1.3) sends data packet1 to external server B
(202.120.10.2), the data packet will go through the firewall. The NAT process will check the
content of the packet header, it will find that the packet is destined for an external network,
and translate the private address 192.168.1.3 in the source address field of packet 1 into the
public address 202.169.10.1. The packet is then sent with the translated address to external
server B, and the private-to-public address mapping is recorded in the NAT table. External
server B sends a reply packet (packet 2) to internal PC A (with initial destination address
202.169.10.1); when the packet reaches the firewall, NAT checks the packet and looks up the
record in the NAT table. The destination address is then replaced by the private address
192.168.1.3 of the internal PC. The NAT process described above is transparent to end
systems (for example, PCs A-D and the server). The external server regards the IP address
of the internal PC as 202.169.10.1 and is totally unaware of the address 192.168.1.3. In
this manner, NAT is able to hide the private network of an enterprise.
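The translate-and-record behavior described above can be sketched as a one-to-one NAT table (the addresses follow the example; the data structures are illustrative, not firewall code):

```python
class BasicNat:
    """One-to-one source NAT: translate outbound traffic, reverse-map inbound replies."""
    def __init__(self, public_pool: list[str]):
        self.pool = list(public_pool)
        self.out_map: dict[str, str] = {}   # private -> public
        self.in_map: dict[str, str] = {}    # public -> private

    def outbound(self, private_src: str) -> str:
        """Translate a private source address, recording the mapping in the NAT table."""
        if private_src not in self.out_map:
            public = self.pool.pop(0)        # take the next free public address
            self.out_map[private_src] = public
            self.in_map[public] = private_src
        return self.out_map[private_src]

    def inbound(self, public_dst: str) -> str:
        """Restore the private destination address of a reply packet."""
        return self.in_map[public_dst]

nat = BasicNat(["202.169.10.1"])
```

Here nat.outbound("192.168.1.3") returns 202.169.10.1 (packet 1 leaving the firewall), and nat.inbound("202.169.10.1") returns 192.168.1.3 (packet 2's reply lookup).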
On the Eudemon firewall, there are two modes of address translation: NO-PAT and NAPT.
NO-PAT: Each private address corresponds to one public address; ports do not need to be
translated along with the addresses, so it is straightforward to implement. The disadvantage
is that a one-to-one mapping of private to public addresses does not solve the public
address shortage problem. It does, however, make it simple to map internal devices such as
servers directly, so that external devices can reach them without knowing the internal
addresses and without any means to bypass the firewall.
NAPT: It permits multiple private addresses to map to a single public address. NAPT will map
IP addresses and port numbers. The data packet from different internal addresses can be
mapped to the same external address, but the port number in each case or session will be
different so as to distinguish between the different internal hosts. As shown in the figure above,
when four data packets with internal addresses reach the NAT server, packet 1 and 2 are
shown to be from the same internal address but since the destination is different for the two
packets, there will be a different port number associated with each packet. Packet 3 and 4 are
from different internal addresses but have the same port number. Through NAT translation,
the four packets are translated to the same external address, but each packet has a different
source port number, so the differentiation between the four packets is maintained. When a
reply packet reaches the NAT server, the NAT server identifies it according to the
destination address and port number and forwards it to the right internal host. The Eudemon
adopts this mode by default. The Eudemon firewall series supports overlapping of
outbound-interface IP addresses and address-pool addresses. The Eudemon100/200
supports using the IP address of the outbound interface as the translated source address
(called Easy IP); the Eudemon300/500/1000 does not support this function.
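The (address, port) mapping of NAPT can be sketched similarly: a single public address is shared, and each new internal (address, port) pair gets a fresh public port (the starting port number is an assumption for illustration):

```python
class Napt:
    """Many-to-one NAT: map (private_ip, private_port) -> (public_ip, unique_port)."""
    def __init__(self, public_ip: str, first_port: int = 10001):
        self.public_ip = public_ip
        self.next_port = first_port
        self.fwd: dict[tuple[str, int], tuple[str, int]] = {}
        self.rev: dict[tuple[str, int], tuple[str, int]] = {}

    def outbound(self, src_ip: str, src_port: int) -> tuple[str, int]:
        """Translate an outgoing (ip, port) pair, allocating a fresh public port per pair."""
        key = (src_ip, src_port)
        if key not in self.fwd:
            mapping = (self.public_ip, self.next_port)
            self.next_port += 1
            self.fwd[key] = mapping
            self.rev[mapping] = key
        return self.fwd[key]

    def inbound(self, dst_ip: str, dst_port: int) -> tuple[str, int]:
        """Use the port of the reply packet to find the right internal host."""
        return self.rev[(dst_ip, dst_port)]

napt = Napt("202.169.10.1")
```

Two internal hosts using the same source port 1025 end up sharing 202.169.10.1 but with different translated ports, which is exactly how the reply packets stay distinguishable.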
NAT hides the structure of the internal network and shields internal hosts, while at the same
time it can allow external devices to access selected internal hosts,
for example a WWW server or FTP server. NAT can support internal servers: for example,
address 202.168.0.11 can be used as the external address of the Web server, and
address 202.168.0.12 as the external address of an internally located FTP
server.
NAT provides an internal server function that the external network can access. As shown in
the figure above, when a user on the external network accesses an internal server, NAT
translates the public destination IP address of the packet into the private destination IP
address of the internal server. For the reply packet from the internal server, NAT translates
the source address back to the public address.
NAT and NAPT translate only the addresses in IP packet headers and the port
information in TCP/UDP headers. For some special protocols, like ICMP and FTP, the data
part of a packet may also include an IP address or port information; this content cannot be
translated by NAT, which leads to problems. For example, an FTP server that
uses an internal IP address must send its IP address to the peer when it establishes a
session with an external host. That address information is carried in the data part of the
packet and cannot be translated by NAT, so when the external host receives and uses the
private address, the FTP server appears unreachable. The solution is the ALG (Application
Level Gateway) mechanism in the NAT implementation. An ALG is a
translation proxy for a specific application protocol; it cooperates with NAT, using NAT state
information to change the application data encapsulated in the data part of an IP packet, and
completes the other work necessary to make the application protocol run across address
realms.
The Eudemon firewall provides a complete NAT ALG mechanism: it supports various
special application protocols, requires no modification of the NAT platform, and has good
extensibility.
At present, it has implemented ALG function of application protocol for: DNS, FTP, H.323,
HWCC, ICMP, ILS, MGCP (Media Gateway Control Protocol), MSN , NetBIOS, PPTP , QQ,
RAS and SNP.
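To make the ALG idea concrete, the following Python sketch rewrites the address carried in the data part of an FTP PORT command, the way an ALG would after NAT has chosen a public address and port. All addresses and ports here are invented for illustration; this is a minimal sketch, not the firewall's implementation:

```python
import re

def alg_rewrite_port(payload: str, public_ip: str, public_port: int) -> str:
    """Rewrite the private IP/port carried inside an FTP PORT command.

    FTP encodes the address as six comma-separated numbers: the four
    octets of the IP address, then the port split into high and low bytes.
    """
    def repl(_match):
        octets = public_ip.split(".")
        return "PORT " + ",".join(octets + [str(public_port // 256),
                                            str(public_port % 256)])
    return re.sub(r"PORT \d+,\d+,\d+,\d+,\d+,\d+", repl, payload)

# The private host announced 10.0.0.5:1234; NAT maps it to 202.168.0.11:6001.
rewritten = alg_rewrite_port("PORT 10,0,0,5,4,210", "202.168.0.11", 6001)
print(rewritten)  # PORT 202,168,0,11,23,113
```

Because the address sits in the application payload rather than the IP header, only a protocol-aware proxy like this can fix it; plain NAT never looks past the TCP/UDP header.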
On the Eudemon firewall, NAT combines the NO-PAT mode and NAPT effectively. If the NAPT function is configured, NAT translates private IP addresses to one public IP address during address translation and, when that address's resources are exhausted, chooses another public IP address from the pool to complete the translation. An address pool is the set of public IP addresses used for translation. Users should configure a proper address pool according to the number of legal public IP addresses, the number of hosts on the internal network, and the actual applications.
The Eudemon firewall uses ACLs to limit address translation. Only packets that match the ACL undergo address translation, which effectively controls the scope of address translation and allows only the specified hosts to access the external network.
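An address pool and a matching ACL of this kind can be sketched in VRP-style configuration. The pool range, ACL number, internal subnet, and zone names below are assumptions for illustration, not values from the text:

```
# Public addresses available for translation
nat address-group 1 202.168.0.100 202.168.0.110
# Only hosts in 10.0.0.0/24 may be translated
acl number 2000
 rule 5 permit source 10.0.0.0 0.0.0.255
# Apply outbound NAT between the trust and untrust zones
firewall interzone trust untrust
 nat outbound 2000 address-group 1
```

Packets from hosts outside 10.0.0.0/24 fail the ACL match and are therefore not translated, which is how the ACL restricts which hosts can reach the external network.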
This example introduces how NAT is configured on the Eudemon. As shown in the figure above, the firewall divides the network into the internal Trust zone, the external Untrust zone, and the DMZ. Hosts with private addresses in the Trust zone need to access the external network (Internet). Hosts with public addresses in the Untrust zone need to access the three servers in the DMZ.
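The zone division described above can be sketched in VRP-style configuration; the interface numbers below are assumptions for illustration:

```
firewall zone trust
 add interface GigabitEthernet0/0/1
firewall zone untrust
 add interface GigabitEthernet0/0/2
firewall zone dmz
 add interface GigabitEthernet0/0/3
```

NAT policies are then applied between zone pairs, for example between trust and untrust for outbound Internet access and between untrust and dmz for access to the servers.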
The command display nat all can be used to display NAT information on the firewall. The information includes three parts: address pool, address translation, and internal server mapping information.
Huawei routers have evolved through three generations. The first-generation routers use an integrated single-core design, the second-generation routers use an integrated multi-core design, and the third-generation routers use a distributed multi-core design.
Huawei AR G3 series routers (AR G3 routers for short) support multiple
network access modes, including Ethernet, PON, and 3G.
The AR G3 routers are the next-generation routing and gateway devices
that provide routing, switching, wireless, voice, and security services. The
AR G3 routers include the AR1200, AR2200, and AR3200 series routers.
The AR G3 routers provide the highest port density in the industry and
flexible service interface card (SIC) slots, allowing enterprise customers to
connect to a LAN, WAN, or wireless network. The AR G3 routers provide
the most economical enterprise network solutions.
The AR G3 routers provide flexible slot combinations. Two SIC slots can be
combined into one WSIC slot, two WSIC slots into one XSIC slot, and two
XSIC slots into one EXSIC slot.
With extensible hardware design, the AR G3 routers allow customers to
choose SICs flexibly and to expand networks economically.
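The slot combinations above double in width at each step, so each slot type can be measured in SIC-slot equivalents. The following sketch simply encodes that relationship; the card lists are invented for illustration:

```python
# Width of each slot type in SIC-slot equivalents, following the combination
# rules above: 2 SIC = 1 WSIC, 2 WSIC = 1 XSIC, 2 XSIC = 1 EXSIC.
SIC_EQUIVALENTS = {"SIC": 1, "WSIC": 2, "XSIC": 4, "EXSIC": 8}

def sic_slots_needed(card_types):
    """Total SIC-slot equivalents consumed by a list of cards."""
    return sum(SIC_EQUIVALENTS[t] for t in card_types)

print(sic_slots_needed(["WSIC", "SIC", "SIC"]))  # 4
```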
Depending on telecom carriers' networks, users can access these networks by using CE1/CT1, FE/GE,
ADSL, G.SHDSL, or Synchronization Agent (SA). The AR G3 routers provide dual-uplink to ensure
service reliability. These routers provide the following services for access users:
Provide the security, routing, switching, VPN, and wireless services to ensure secure, fast,
and reliable data packet forwarding.
Provide a variety of value-added services, including DHCP, network address translation
(NAT), domain name system (DNS), and billing services.
Provide security control mechanisms, including controlling access to internal networks and
user rights, to ensure the access security on the enterprise intranet and isolate the
departments of an enterprise.
Provide the attack defense function to protect user traffic against attacks from the external
and internal networks.
Guarantee user-specific QoS and service-specific QoS and flexibly allocate bandwidth for
services as needed.
The headquarters and branches use the AR G3 routers to connect to each other over the Internet. The
enterprise establishes a VPN and uses GRE/IPSec VPN tunnels to secure the data. The employees on
a business trip use IPSec VPN tunnels to communicate with the headquarters.
The AR G3 routers, located between the enterprise intranet and the Internet, ensure information
security on the entire intranet and intranet LANs. Additionally, the AR G3 routers provide network
access control (NAC) to restrict the access permissions of internal users. This ensures that only
authorized users can access the intranet.
An enterprise can build a voice communication system over the IP network, saving fees on internal
communication. Within the voice communication system, an AR G3 router can function as an IP PBX
or SIP access gateway (AG). In the downlink direction, the router connects to POTS users (analog
phones or fax machines) and SIP user equipment (UE) users (IP phones or PC software terminals)
through FXS or Ethernet interfaces. In the uplink direction, the router connects to the PSTN through
E1 or FXS interfaces or to the IP network through Ethernet interfaces.
The AR200 series routers apply to small-scale offices. They integrate switching and routing functions.
These routers provide wireline LAN access and wireless AP access to users. With them, users can
access the Internet through Ethernet, 3G, or PPPoE.
The AR1200 series routers feature powerful routing functions. They provide multiple access modes,
such as wireline LAN and wireless AP. Additionally, these routers provide flexible slots that allow users
to install subcards to extend interfaces and enrich functions.
The AR2200 series routers feature powerful routing functions and multiple access modes. They
support a variety of subcards to apply to different usage scenarios. Their slots can be combined to
achieve a higher port density. Among them, the AR2240 is equipped with two main control boards
and two power supplies for redundancy backup. This redundancy backup design improves the router
usability and reliability.
The AR3200 series routers have a large capacity. They provide many flexible slots that allow users to
install different cards in different usage scenarios. Additionally, their slots can be combined to provide
a higher port density. To improve system reliability, these routers are configured with two main
control boards and two power supplies for redundancy backup.
Huawei has the most extensive enterprise switch family in the industry, ranging from low-end and mid-range to high-end models.
The S1700, S2700, S3700, and S5700 switches are used at the access layer of a campus network. The
S1700 and S2700 provide Layer 2 FE access. The S3700 supports Layer 3 FE access. The S5700 allows
for Layer 3 GE access and has a high port density. Additionally, the S5700 supports cluster
management and features high fault tolerance through the use of stacking technology.
The S5700, S7700, and S9300 are used at the convergence layer of a campus network. These switches
provide powerful switching functions and have a high port density. They also support a variety of
cards to apply to different usage scenarios where varying interfaces are required.
The S5700, S6700, S9300, and S12700 are high-end switches. These switches are used at the core
layer of a campus network. They also apply to the access and core switching layers of a large-scale
data center. With a high port density and a variety of cards, these switches provide various ports to
meet different requirements.
The SX7 series switches are intended for the enterprise market. They provide Layer 2 and Layer 3 access and FE, GE, and 10GE ports. Among these switches, the S7700 core switch uses a distributed architecture and provides up to 12 slots that allow users to install different cards in various usage scenarios.
S2700
S2700-26TP-SI
S2700-26TP-EI
Principle
The S2700/S3700/S5700/S6700 switches integrate an internal HTTP server, so the device can be accessed through a Layer 3 interface of the switch using a variety of web browsers.
If all the member switches meet the stack setup prerequisites, the stack system is automatically
created when these switches are powered on.
The master switch is selected as follows:
The device that starts first becomes the master switch.
If all the devices start at the same time, the one with the highest priority becomes the master switch.
If all the devices have the same priority and start at the same time, the one with the smallest MAC
address becomes the master switch.
The slave switch is selected as follows:
The device that starts first among all the other devices excluding the master switch becomes the
slave switch.
If all the other switches excluding the master switch start at the same time, the master switch
preferentially selects the switch connected to its stack interface 1 as the standby switch.
If all the other switches excluding the master switch start at the same time and no switch is
connected to stack interface 1 on the master switch, the master switch selects the switch
connected to its stack interface 0 as the standby switch.
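The master-election rules above can be sketched in a few lines of Python. Each switch is modeled by its start time, stack priority, and MAC address; all values below are invented for illustration:

```python
def elect_master(switches):
    """Pick the master from (name, start_time, priority, mac) tuples.

    Earlier start wins; then higher priority; then smaller MAC address.
    """
    return min(switches, key=lambda s: (s[1], -s[2], s[3]))[0]

members = [
    ("SwA", 0, 100, "00e0-fc00-0002"),  # started at time 0
    ("SwB", 0, 120, "00e0-fc00-0001"),  # same start time, higher priority
    ("SwC", 5, 200, "00e0-fc00-0000"),  # started later, so it cannot win
]
print(elect_master(members))  # SwB
```

The standby selection would add the tie-breaker on stack interfaces 1 and 0 described above, which depends on physical cabling rather than on switch attributes.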
The S7700 is a next-generation switch from Huawei. It provides large capacity, line-speed forwarding, and high-density ports. The S7700 is an important product for building future MANs.
The S7700 can be used as an aggregation switch or a core switch for enterprise networks, campus
networks, and data centers.
The S7700s are classified into the S7703, S7706, and S7712.
The S7700 is a high-end network product that provides wire-speed FE, GE, and 10GE interfaces. The
S7700 can function as a core switch for enterprise networks, campus networks, and data centers.
The S7700 uses a fully distributed architecture and the latest hardware forwarding engine
technology. The services supported by all the interfaces can be forwarded at wire speed.
These services include IPv4, MPLS, and Layer 2 forwarding services. The S7700 can also use
ACLs to forward packets at wire speed.
The S7700 supports wire-speed forwarding of multicast packets. The hardware implements
2-level multicast replication:
The SFU replicates multicast packets to the LPU.
Then the forwarding engine of the LPU replicates the multicast packets to the interfaces on
the LPU.
The S7700 supports 2 Tbit/s switching capacity and various high-density cards to meet the
requirements for the large capacity and high-density interfaces of core and convergence
layer devices. The S7700 meets increasing bandwidth requirements while minimizing investment.
S7703's switching capacity:
Adopting the full mesh architecture, the S7703 provides 16 Gbit/s bandwidth in each HIG group, that
is, 4 x 5 Gbit/s x 8/10 (8B/10B code). The channel between each slot and the backplane supports
eight HIG groups; therefore, the total bandwidth for each slot is 128 Gbit/s.
There is no switching network unit in the full mesh architecture. The switching capability is 720 Gbit/s,
that is, 120 Gbit/s x 2 x 3 (3 LPUs).
S7706/S7712's switching capacity:
Adopting the switching network architecture, the S7706 or S7712 provides 16 Gbit/s bandwidth in
each HIG group, that is, 4 x 5 Gbit/s x 8/10 (8B/10B code). The channel between each slot and the
backplane supports four HIG groups (an active SRU and a standby SRU); therefore, the total
bandwidth for each slot is 64 Gbit/s. Each 12x10GE LPU slot supports eight HIG groups; therefore, the
total bandwidth is 128 Gbit/s. (Only two 12x10GE LPUs of the S7712 support wire-speed forwarding.)
The maximum switching capability of the S7706 or S7712 is 2048 Gbit/s, that is, 16 Gbit/s x 16 (ports)
x 1 (switching network unit) x 2 (bidirectional) x 4 SRUAs.
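The figures quoted above can be checked with a few lines of arithmetic:

```python
# One HIG group: 4 lanes x 5 Gbit/s, reduced by 8B/10B coding overhead (8/10).
hig_group = 4 * 5 * 8 / 10            # 16 Gbit/s
# S7703 full mesh: 8 HIG groups per slot.
s7703_slot = hig_group * 8            # 128 Gbit/s per slot
# S7703 switching capability: 120 Gbit/s x 2 (bidirectional) x 3 LPUs.
s7703_capacity = 120 * 2 * 3          # 720 Gbit/s
# S7706/S7712: 16 Gbit/s x 16 ports x 1 switching unit x 2 directions x 4 SRUAs.
s770x_capacity = 16 * 16 * 1 * 2 * 4  # 2048 Gbit/s
print(hig_group, s7703_slot, s7703_capacity, s770x_capacity)
```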
The figure on this slide shows a typical enterprise campus network. Within this network, you can
clearly see where Huawei switches, routers, firewalls, servers and other IT products are located.
Actually, Huawei can provide a full range of IT products and the most comprehensive network
solutions in the industry.
This is an introduction to the NE40E product family. All LPUs can be used in the NE40E-X16, X8, or X3. The main difference between LPUs is their forwarding capability.
The NE40E-X adopts the system architecture shown in the figure above. In this architecture, the data plane, the management and control plane, and the monitoring plane are separated. This design helps improve system reliability and facilitates a separate upgrade of each plane.
The SFU on the NE40E-X16 switches data for the entire system at wire speed of 640 Gbit/s
(320 Gbit/s for the upstream traffic and 320 Gbit/s for the downstream traffic). This ensures
a non-blocking switching network.
The NE40E-X16 has four SFUs working in 3+1 load balancing mode. The entire system
provides a switching capacity at wire speed of 2.56 Tbit/s.
The four SFUs load balance services at the same time. When one SFU is faulty or replaced,
the other three SFUs automatically take over its tasks to ensure normal running of services.
The SFU on the NE40E-X8 switches data for the entire system at a wire speed of 480 Gbit/s (240 Gbit/s for the upstream traffic and 240 Gbit/s for the downstream traffic). This ensures a non-blocking switching network.
The NE40E-X8 has three SFUs working in 2+1 load balancing mode. The entire system provides
a switching capacity at wire speed of 1.44 Tbit/s.
The three SFUs load balance services at the same time. When one SFU is faulty or replaced, the
other two SFUs automatically take over its tasks to ensure normal running of services.
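The stated system capacities follow directly from the per-SFU figures:

```python
# NE40E-X16: four SFUs at 640 Gbit/s each, working in 3+1 load-balancing mode.
x16_capacity = 640 * 4  # 2560 Gbit/s = 2.56 Tbit/s
# NE40E-X8: three SFUs at 480 Gbit/s each, working in 2+1 load-balancing mode.
x8_capacity = 480 * 3   # 1440 Gbit/s = 1.44 Tbit/s
print(x16_capacity, x8_capacity)
```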
The control plane of the NE40E is separated from the data plane and the monitoring plane.
The SRU is adopted on the NE40E-X8. The SRU integrates an SFU used for data switching.
The SRU supports the following USB interface attributes:
Supports USB storage devices in FAT32 format, including the common USB memory available in the market.
For security reasons, writing to the USB storage device is not allowed.
Supports automatic upgrade: insert the USB memory and the upgrade proceeds without any manual operation.
The MPU of the NE40E-X3 controls and manages the system and switches data. The MPUs
work in 1+1 backup mode. The MPU consists of the main control unit, switching unit, system
clock unit, synchronous clock unit, and system maintenance unit. The functions of the MPU are
described from the following aspects.
A switching network is a key component of the NE40E and is responsible for switching data
between LPUs.
As shown in the figure, the Packet Forwarding Engine (PFE) adopts a Network Processor (NP) or
an Application Specific Integrated Circuit (ASIC) to implement high-speed packet routing.
External memory types include Static Random Access Memory (SRAM), Dynamic Random
Access Memory (DRAM), and Net Search Engine (NSE). The SRAM stores forwarding entries; the
DRAM stores packets; the NSE performs non-linear searching.
Data forwarding processes can be divided into upstream and downstream processes based on
the direction of the data flow.
Upstream process: The Physical Interface Card (PIC) encapsulates packets to frames and then
sends them to the PFE. On the PFE of the inbound interface, the system decapsulates the
frames and identifies the packet types. It then classifies traffic according to the QoS
configurations on the inbound interface. After traffic classification, the system searches the
Forwarding Information Base (FIB) for the outbound interfaces and next hops of packets to be
forwarded. To forward an IPv4 unicast packet, for instance, the system searches the FIB for the
outbound interface and next hop according to the destination IP address of the packet. Finally,
the system sends the packets containing information about outbound interfaces and next hops
to the traffic management (TM) module.
Downstream process: Based on the packet types identified in the upstream process and on the outbound interface information, packets are encapsulated through the link layer protocol and stored in corresponding queues for transmission. If the outbound interface of an IPv4 packet is an Ethernet interface, the system must obtain the MAC address of the next hop. Outgoing traffic is then classified according to the QoS configurations on the outbound interfaces. Finally, the system encapsulates the packets with new Layer 2 headers on the outbound interfaces and sends them to the PIC.
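The FIB lookup in the upstream process is a longest-prefix match on the destination address. The following minimal Python sketch shows the idea; the prefixes, interface names, and next hops are invented for illustration:

```python
import ipaddress

# Toy FIB: prefix -> (outbound interface, next hop); entries are illustrative.
FIB = {
    "10.0.0.0/8":  ("GE1/0/0", "10.255.0.1"),
    "10.1.0.0/16": ("GE2/0/0", "10.1.255.1"),
    "0.0.0.0/0":   ("GE3/0/0", "192.0.2.1"),   # default route
}

def fib_lookup(dst: str):
    """Return (interface, next hop) for the longest prefix matching dst."""
    dst_ip = ipaddress.ip_address(dst)
    best = None
    for prefix, info in FIB.items():
        net = ipaddress.ip_network(prefix)
        if dst_ip in net and (best is None or net.prefixlen > best[0]):
            best = (net.prefixlen, info)
    return best[1]

print(fib_lookup("10.1.2.3"))  # ('GE2/0/0', '10.1.255.1') -- the /16 wins over the /8
```

On the PFE this lookup is done in hardware (for example with the NSE performing the non-linear search), but the result is the same pair of outbound interface and next hop that the text describes.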
The NE40E supports complete HQoS solutions. Huawei is the only vendor that supports HQoS, DS-TE, and MPLS HQoS together; other vendors support only one or two of these. Thus, Huawei can provide a complete HQoS solution to meet the various scenarios of carrier-class services.
The main scenarios for the NE40E router are campus and IDC interconnection, large branch access, and key WAN nodes.
What is the difference between the control planes of NE40E-X8 and NE40E-X16?
The control plane of the NE40E-X8 is separated from the data plane and the monitoring
plane. The SRU is adopted on the NE40E-X8. The SRU integrates an SFU used for data
switching.
On the NE40E-X16, the control plane is the MPU, which does not integrate an SFU.
What is the difference between the SFUs of NE40E-X8 and NE40E-X16?
The SFU on the NE40E-X8 switches data for the entire system at wire speed of 480
Gbit/s (240 Gbit/s for the upstream traffic and 240 Gbit/s for the downstream traffic).
This ensures a non-blocking switching network. The NE40E-X8 has three SFUs working
in 2+1 load balancing mode. The entire system provides a switching capacity at wire
speed of 1.44 Tbit/s. The three SFUs load balance services at the same time. When one
SFU is faulty or replaced, the other two SFUs automatically take over its tasks to ensure
normal running of services.
The SFU on the NE40E-X16 switches data for the entire system at wire speed of 640
Gbit/s (320 Gbit/s for the upstream traffic and 320 Gbit/s for the downstream traffic).
This ensures a non-blocking switching network. The NE40E-X16 has four SFUs working
in 3+1 load balancing mode. The entire system provides a switching capacity at wire
speed of 2.56 Tbit/s. The four SFUs load balance services at the same time. When one
SFU is faulty or replaced, the other three SFUs automatically take over its tasks to ensure
normal running of services.