
One Net DCN Solution

V100R001C01
Design Guide

Issue 01
Date 2012-05-15

HUAWEI TECHNOLOGIES CO., LTD.


Issue 01 (2012-05-15) Huawei Proprietary and Confidential
Copyright Huawei Technologies Co., Ltd
i

Copyright Huawei Technologies Co., Ltd. 2012. All rights reserved.
No part of this document may be reproduced or transmitted in any form or by any means without prior
written consent of Huawei Technologies Co., Ltd.

Trademarks and Permissions
and other Huawei trademarks are trademarks of Huawei Technologies Co., Ltd.
All other trademarks and trade names mentioned in this document are the property of their respective
holders.

Notice
The purchased products, services and features are stipulated by the contract made between Huawei and
the customer. All or part of the products, services and features described in this document may not be
within the purchase scope or the usage scope. Unless otherwise specified in the contract, all statements,
information, and recommendations in this document are provided "AS IS" without warranties, guarantees or
representations of any kind, either express or implied.
The information in this document is subject to change without notice. Every effort has been made in the
preparation of this document to ensure accuracy of the contents, but all statements, information, and
recommendations in this document do not constitute a warranty of any kind, express or implied.






Huawei Technologies Co., Ltd.
Address: Huawei Industrial Base
Bantian, Longgang
Shenzhen 518129
People's Republic of China
Website: http://www.huawei.com
Email: support@huawei.com



Contents
1 Overview ......................................................................................................................................... 1
1.1 Data Center Solution Overview ....................................................................................................................... 1
1.2 Network Architecture of a Data Center ............................................................................................................ 1
2 Data Center Network Design ..................................................................................................... 3
2.1 Overview .......................................................................................................................................................... 3
2.2 Core Zone Network Design .............................................................................................................................. 3
2.2.1 Layer 2 Design ........................................................................................................................................ 3
2.2.2 Layer 3 Design ........................................................................................................................................ 4
2.3 Logical Design of the Data Center Network .................................................................................................... 5
2.3.1 VLAN Design ......................................................................................................................................... 5
2.3.2 IP Service and Application Design .......................................................................................................... 8
2.3.3 DNS Design .......................................................................................................................................... 11
2.3.4 Route Design ......................................................................................................................................... 15
2.3.5 VPN Design .......................................................................................................................................... 17
2.3.6 Reliability Design ................................................................................................................................. 19
2.3.7 Load Balancing Design ......................................................................................................................... 22
2.3.8 QoS Design ........................................................................................................................................... 24
2.4 Network Design of the Server Zone ............................................................................................................... 25
2.4.1 Deployment Model of the Server Zone ................................................................................................. 25
2.4.2 Typical Server Zone Topology .............................................................................................................. 27
2.4.3 VLAN Design ....................................................................................................................................... 28
2.4.4 Aggregation Layer Design .................................................................................................................... 29
2.4.5 Server Gateway Design ......................................................................................................................... 29
2.4.6 Value-Added Services Design ............................................................................................................... 32
2.4.7 Access Reliability Design ..................................................................................................................... 34
3 Data Center Solution Design .................................................................................................... 39
3.1 Overview ........................................................................................................................................................ 39
3.2 Internet Zone Design ...................................................................................................................................... 39
3.2.2 DDoS Detection and Cleaning Design .................................................................................................. 41
3.2.3 FW Design ............................................................................................................................................ 43
3.2.4 Load Balancing Design ......................................................................................................................... 47
3.2.5 Routing Design ..................................................................................................................................... 50

3.2.6 VPN Access Design .............................................................................................................................. 51
3.3 Partner Access Zone Design ........................................................................................................................... 54
3.3.1 Security Design ..................................................................................................................................... 54
3.3.2 Routing Design ..................................................................................................................................... 56
3.4 Intranet Zone Design ...................................................................................................................................... 57
4 Connection and Disaster Recovery Solution Design ........................................................... 61
4.1 Overview ........................................................................................................................................................ 61
4.2 Network Architecture for Multiple Data Centers ........................................................................................... 61
4.3 Routing Reliability Design ............................................................................................................................. 63
4.3.1 Routing Overview ................................................................................................................................. 63
4.3.2 BGP Routing Design Between Regional Data Centers and Global Data Centers ................................. 63
4.3.3 BGP Routing Design Between Regional Data Centers and a Country/Region Branch ........................ 65
4.4 Service Recovery Design in Multiple Data Centers ....................................................................................... 66
4.5 Layer 3 Communication Design ..................................................................................................................... 68
4.5.1 Overview ............................................................................................................................................... 68
4.5.2 Layer 3 Communication Using MPLS L3VPN .................................................................................... 69
4.5.3 Layer 3 Communication Using SDH .................................................................................................... 70
4.6 Layer 2 Communication Design ..................................................................................................................... 71
4.6.1 Overview ............................................................................................................................................... 71
4.6.2 Layer 2 Communication Using VPLS .................................................................................................. 71
4.6.3 Layer 2 Communication Using SDH .................................................................................................... 73
4.7 Optical Transmission-based Communication Design ..................................................................................... 74
4.7.1 Overview ............................................................................................................................................... 74
4.7.2 Network Design .................................................................................................................................... 74
5 Network Management Design ................................................................................................. 76
5.1 Overview ........................................................................................................................................................ 76
5.1.1 NMS ...................................................................................................................................................... 76
5.1.2 Network Scale ....................................................................................................................................... 76
5.1.3 NMS Design ......................................................................................................................................... 77
5.2 Commonly Used NMS Management Tools .................................................................................................... 77
5.3 eSight System Design..................................................................................................................................... 78
5.3.1 Overview ............................................................................................................................................... 78
5.3.2 Design Roadmap ................................................................................................................................... 94
5.3.3 Design Requirements ............................................................................................................................ 95
5.3.4 Design Elements ................................................................................................................................... 96


1 Overview
1.1 Data Center Solution Overview
Nowadays, the more informatized an enterprise is, the more competitive it is. A data center provides the IT infrastructure for an enterprise's key service systems. Managing the core data of an enterprise, the data center performs access control, security filtering, service application, information computing, and backup operations.
As network and communication technologies develop, data centers have become the core of
enterprise informatization. A well-designed data center will improve efficiency and growth
rate of enterprises.
The components in a data center include: the civil engineering site; the power supply system; the network devices used on the data, computing, and storage networks; the servers (with operating systems and software installed); the storage devices; the security system; and the operation and maintenance system.
Huawei One Net Data Center Network (DCN) Solution provides a general proposal for the
network of the data center to ensure high performance, security, and reliability of the data
center.
1.2 Network Architecture of a Data Center
The network architecture of data centers may differ based on service features, application
architecture, security requirements, and IT infrastructure of enterprises. Figure 1-1 shows a
typical data center network that consists of the intranet zone, extranet zone, management zone,
server zone, and storage zone.

Figure 1-1 Network architecture of a data center
(Figure: the core layer interconnects server zones 1 to N, the storage zone, the development and test zone, the management zone with its O&M servers and terminals, and the DCI zone. An access zone with DMZs connects the data center to the extranet, the intranet, and the Internet.)

Multiple zones exist on the network. You can divide the network into fewer zones based on
the security level of the application cluster. Zones are connected to a high-speed switch with
standard security control measures. Each zone is designed with standard resource pools of the
firewall (FW), load balancer (LB), and servers.
This network architecture has the following advantages:

Servers in a resource pool
Quickly deploys service systems without adding physical devices or links.

Good flexibility
Supports horizontal traffic at each layer on a dual-active network.
Better supports backup among data centers through external leased lines.
Provides services through Internet and leased lines.
Allows service systems to flexibly access different zones.

High scalability
Adds server zones in data centers or among data centers without changing the topology
or security design.

High resource efficiency
Reduces the number of physical zones, enlarges configurable logical zones, and
dynamically allocates servers to application systems.

Security control points and standard rules
Simplifies management and lowers the network complexity and errors.

Standard cabling and layout
Simplifies management and saves energy.


2 Data Center Network Design
2.1 Overview
A data center network is divided into three layers: access layer, aggregation layer, and core
layer. Multiple zones exist on the data center network based on service features and interact
through the core layer. The data center network consists of the server zone, management zone,
storage zone, intranet zone, extranet zone, and Internet zone.
This chapter describes the data center network design, including the design plans for the core
zone, server zone, data center routes, VLANs, IP addresses, security, and QoS.
2.2 Core Zone Network Design
Core zone network design considers the data center services, port density, and scalability.
High-bandwidth and high-performance core switching devices are deployed in the core zone
to exchange data quickly among service modules. The core zone is the hub connecting the
data center to devices of the core layer, Internet zone, wide area network (WAN) zone, and
partner access zone. 10GE interfaces are used in the core zone to meet service performance
requirements.
The core zone connects to the aggregation zone either at Layer 2 over Ethernet or at Layer 3 through routes.
2.2.1 Layer 2 Design
Also called flattened design, the Layer 2 networking design places the data center network in
one switching domain. The network includes the core and aggregation layers.
In this networking, two devices are deployed at the core layer and virtualized into a logical
device using the cluster switching system (CSS) technology. The CSS or iStack technology is
used to virtualize aggregation switches so that the network is loop free. The value-added
service modules such as the FW and LB are deployed at the core layer. Figure 2-1 shows the
networking diagram.

Figure 2-1 Layer 2 design for the core zone network
(Figure: two core switches virtualized with CSS; aggregation switches virtualized with CSS or iStack; links between the layers bundled into link aggregation groups (LAGs).)

Two S9700 switches at the core layer are virtualized into a logical switch using CSS
through stack cables.

If the aggregation layer is deployed in End of Row (EOR) mode, two S9300 or S9700
switches are virtualized into a logical switch using CSS.

Two S5700 switches at the aggregation layer are virtualized into a logical switch using
iStack through stack cables.

Four links between aggregation switches and core switches are bundled into an
Eth-Trunk. The network is changed to a loop-free tree network.
This network has the following advantages:

Lowers the network complexity by simplifying the network topology and increasing
forwarding efficiency.

The data center has only one switching domain, simplifying the network design and
deployment and adapting to different application architecture. Services can be migrated
within the data center.
This network has the following disadvantages:

The data center has only one broadcast domain. A faulty server may cause a
broadcast storm on the data center network.

The service gateway (core switch) of the data center is under heavy ARP processing
load, which may pose reliability risks.

The network scale is limited, supporting only two core switches.
2.2.2 Layer 3 Design
In the Layer 3 networking design, core switches and aggregation switches exchange data
through routes. This networking design applies to large data centers. Most traffic in the core
zone is the access traffic to a data center, while some is the exchange traffic among service
zones inside the data center. The network is easy to expand, has a clear architecture, and
simplifies service deployment, security control, and management.
Figure 2-2 shows the network where aggregation switches are dual-homed to the core
switches and traffic on the aggregation switches is load balanced using Equal-Cost
Multipath Routing (ECMP).

Generally, two core switches can meet reliability and performance requirements in a data
center. To expand the data center scale, add core switches so that four or eight ECMP routes
are available.
Figure 2-2 Layer 3 design for the core layer network
(Figure: core switches in a CSS; aggregation switches virtualized with CSS or iStack and dual-homed to the core over routed links.)

S9700 switches at the core layer are load balanced using routes.
2.3 Logical Design of the Data Center Network
2.3.1 VLAN Design
Overview
Logically a local area network (LAN) is divided into multiple logical subnets and each logical
subnet is a broadcast domain, that is, a virtual local area network (VLAN). Devices on a LAN
are allocated into network segments in logical instead of physical mode and each network
segment is a VLAN. In this way, broadcast domains are isolated on a LAN.
Devices that need to communicate are assigned to the same VLAN, and those that do not are
assigned to different VLANs. Broadcast domains are isolated to reduce broadcast storms and
improve information security. With the VLAN technology, a network failure can be restricted
within a local area, protecting the overall network. This keeps the network robust.
Design Roadmap

Single-layer or double-layer VLAN
VLAN is used to isolate services and users. Layer 2 network covers the core layer,
aggregation layer, and access layer of a data center bearer network. The core and
aggregation layer can be integrated into one layer. The core layer is configured on the
Layer 2 network as a server gateway. It exchanges routing information with the access
routers by using IP or MPLS technology. Services provided by the access servers in the
data center are allocated to a single-layer VLAN on the Layer 2 network.

Allocation of VLANs in other areas
Server areas must be allocated to VLANs based on service or server types. In addition,
external access layers are allocated to VLANs. The external access layer of the data
center can be connected to internal networks, customers, the Internet, the disaster
recovery data center, or the management and maintenance area. On Layer 3
interconnection links, Huawei recommends bundling multiple physical ports into one
logical port, assigning a VLAN tag to it, and forwarding IP packets over it.

Constraints
A maximum of 4094 single-layer VLANs are available. Allocate VLANs based on services
and areas, and reserve certain VLANs for future expansion.
Design Principles

Distinguish service VLANs, management VLANs, and interconnection VLANs.

Allocate VLANs based on service areas.

In a service area, allocate VLANs based on service types, such as web, application, and
DB services.

Allocate VLANs consecutively to ensure the proper use of VLAN resources.

Reserve certain VLANs for future expansion.
Design Elements
Allocate network segments for VLANs based on the following areas:

Core area: VLANs 100-199.

Server area: VLANs 200-999. VLANs 1000-1999 are reserved.

Access network: VLANs 2000-2999.

Management network: VLANs 3000-3999.
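The allocation above can be expressed as a lookup table and checked programmatically. The following Python sketch is illustrative only: the range boundaries come from the text, while the `VLAN_PLAN` and `zone_of` names are our own.

```python
# VLAN ranges from this guide; zone and helper names are illustrative.
VLAN_PLAN = {
    "core": range(100, 200),               # core area: VLANs 100-199
    "server": range(200, 1000),            # server area: VLANs 200-999
    "server_reserved": range(1000, 2000),  # reserved for expansion
    "access": range(2000, 3000),           # access network: VLANs 2000-2999
    "management": range(3000, 4000),       # management network: VLANs 3000-3999
}

def zone_of(vlan_id: int) -> str:
    """Return the zone that a VLAN ID belongs to, or 'unallocated'."""
    if not 1 <= vlan_id <= 4094:           # 802.1Q valid VLAN ID range
        raise ValueError("VLAN ID must be 1-4094")
    for zone, ids in VLAN_PLAN.items():
        if vlan_id in ids:
            return zone
    return "unallocated"

print(zone_of(150))   # core
print(zone_of(2500))  # access
```

A planner can use such a table to verify that a newly assigned VLAN ID falls into the intended area before configuring it.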
Figure 2-3 shows the allocation schematic drawing.

Figure 2-3 VLAN planning
(Figure: core network VLANs 100-199 connect the service areas: production service area 200-399, office service area 400-599, other service areas 600-799, and DMZ service area 800-999, plus the storage area. Management VLANs 3000-3999 span all areas. External access VLANs: enterprise access 2000-2199, associated enterprise access 2200-2299, Internet user access 2300-2399, and disaster recovery center access 2400-2999. External connections reach the internal network of the enterprise and its branches over a private network, partners, external users over the Internet, and the remote disaster recovery center over the disaster recovery network.)

Figure 2-4 shows the architecture of the access-layer VLAN in the service area of the data
center.

Figure 2-4 Architecture of the access-layer VLAN in the service area of the data center


Table 2-1 describes the functions of nodes in the VLAN architecture.
Table 2-1 Functions of nodes in the VLAN architecture

Node: Access switch
VLAN configuration: Multiple VLANs are configured. Server-facing downlink ports are added to the VLANs in access mode; uplink ports are added to the VLANs in trunk mode.
Description: VLANs are allocated based on services and areas. Servers are connected to different VLANs to implement Layer 2 broadcast isolation.

Node: Core/aggregation switch
VLAN configuration: A VLAN is configured. Server traffic is forwarded to the FWs or LBs based on the Layer 2 VLAN.
Description: Gateways are configured on the LBs or FWs to implement server load balancing.

2.3.2 IP Service and Application Design
Overview
In most cases, an IP address (network ID+host ID) is used to specify a network device, such
as an interface. Device IP addresses underpin network interconnection and applications;
IP address planning is the basis of implementing network functions.

Figure 2-5 IP address types

Class A: 0     | network (7 bits)  | host (24 bits)
Class B: 10    | network (14 bits) | host (16 bits)
Class C: 110   | network (21 bits) | host (8 bits)
Class D: 1110  | multicast address
Class E: 11110 | reserved
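The leading-bit rules in Figure 2-5 can be expressed as a short classifier. The Python sketch below is illustrative: `address_class` is our own helper, not a standard API, though the `ipaddress` module it uses is part of the standard library.

```python
import ipaddress

def address_class(addr: str) -> str:
    """Classify an IPv4 address by its leading bits, per Figure 2-5."""
    first_octet = int(ipaddress.IPv4Address(addr)) >> 24
    if first_octet < 0b10000000:   # 0xxxxxxx -> class A
        return "A"
    if first_octet < 0b11000000:   # 10xxxxxx -> class B
        return "B"
    if first_octet < 0b11100000:   # 110xxxxx -> class C
        return "C"
    if first_octet < 0b11110000:   # 1110xxxx -> class D
        return "D (multicast)"
    return "E (reserved)"          # 11110xxx

print(address_class("10.0.0.1"))     # A
print(address_class("192.168.1.1"))  # C
```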


Design Roadmap
When planning IP addresses, aggregate network segment routes on a core switch or router.
This reduces the number of advertised routes and the maintenance cost. For example, to
allocate a class C IP address 192.168.1.0/24 to the data center,
you can use the variable length subnet mask (VLSM) to allocate the address into four network
segments: 192.168.1.0/26, 192.168.1.64/26, 192.168.1.128/26, and 192.168.1.192/26.
Allocate the four network segments to the service area and each network segment contains up
to 62 host IP addresses. Routes on the four network segments can be aggregated on the core
switch, and only one network segment route with a class C IP address is announced.
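This split can be reproduced with Python's standard `ipaddress` module. The sketch below uses the guide's own 192.168.1.0/24 example and is only an illustration of the VLSM arithmetic:

```python
import ipaddress

# Split the class C network into four /26 segments (VLSM)...
net = ipaddress.ip_network("192.168.1.0/24")
subnets = list(net.subnets(new_prefix=26))
for s in subnets:
    # Each /26 has 64 addresses; 62 are usable host addresses.
    print(s, "usable hosts:", s.num_addresses - 2)

# ...then confirm the four segments aggregate back into the single
# advertised /24 route announced at the core.
aggregate = list(ipaddress.collapse_addresses(subnets))
print("advertised aggregate:", aggregate[0])  # 192.168.1.0/24
```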
During IP address assignment, consecutive IP addresses are allocated to each service area and
certain IP addresses are reserved for future network expansion. In a service area, consecutive
IP addresses are allocated to the servers providing the same services and functions.
An enterprise plans the IP addresses of the internal networks in the data center. When the
servers that provide services for external users use private IP addresses, a network address
translation (NAT) device must be used on the egress router to translate private IP addresses to
public ones. If address resources are adequate, public network IP addresses can be directly
allocated to the servers.
Design Principles

Uniqueness
An IP address can be used only by one host on an IP network even when the
Multiprotocol Label Switching/virtual private network (MPLS/VPN) technology
supports address overlap.

Continuity
Consecutive IP addresses facilitate route aggregation on a hierarchical network. This reduces
entries in a routing table and improves routing algorithm efficiency.

Scalability
Reserve certain addresses at each hierarchy level so that aggregated address blocks
remain contiguous during network expansion.

Meaningfulness
Ensure that an IP address has a meaning, that is, an IP address enables you to determine
the device that it belongs to.
Design Elements

IP address mode

Currently, the Internet mostly uses the IPv4 protocol; however, public IPv4
addresses are in short supply and will gradually be replaced by IPv6 addresses. Therefore,
compatibility with the IPv6 protocol must be considered during IP address planning
to prepare for the transition and upgrade to IPv6.

Two types of IPv4 addresses for services
Private IPv4 address
Huawei recommends that you adopt a class B private IP address defined in IETF
RFC1918 and use L3VPN/VLAN to isolate services using private IP addresses.
Theoretically, each type of service can exclusively use a private address space;
however, the address segments of services may be re-allocated to ensure efficient
management.
Private IP addresses can be allocated to the services that do not require external
resources.
If a large number of IP addresses are required, private IP addresses must be allocated
to the services which need to access public resources. In this case, a NAT device is
used to translate the private IP addresses to public ones.
Public IPv4 address
Because public IP addresses from service providers (SPs) are scarce, restrict the
services that use them. Do not assign public IP addresses to services that consume
many address resources.

Service IP addresses
Figure 2-6 describes the specifications for IP address design.
Figure 2-6 Specifications for IP address design
(Figure: a 32-bit IPv4 address divided into the five identifier fields defined below.)


The following defines and allocates server IP addresses:
Identifier 1: 8 bits, indicates a private IP address, such as 10.x.x.x/8.
Identifier 2: 8 bits, indicates a grade 1 institute. Each grade 1 institute can apply for
multiple class B IP address segments.
Identifier 3: 4 bits, indicates a grade 2 institute. The active data center adopts 0000 to
0111 and the standby data center adopts 1000 to 1111.
Identifier 4: 4 bits, indicates a service area in the data center, such as the production
service area and the office area.
Identifier 5: 8 bits, indicates a host or a server address.

IP addresses are planned to meet customers' requirements. The server quantity and service types are
customized as required. For example, the IP address of the production service is 10.100.0.0/17.
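The five identifiers can be viewed as bit fields packed into a 32-bit address. In the Python sketch below the field widths (8/8/4/4/8 bits) follow the text, while the function name and the sample values are illustrative:

```python
import ipaddress

def server_address(grade1: int, grade2: int, area: int, host: int) -> str:
    """Pack identifiers 2-5 behind the fixed 10/8 prefix (identifier 1)."""
    if not (0 <= grade1 < 256 and 0 <= grade2 < 16
            and 0 <= area < 16 and 0 <= host < 256):
        raise ValueError("identifier out of range")
    # 8 bits grade-1 institute, 4 bits grade-2 institute,
    # 4 bits service area, 8 bits host.
    value = (10 << 24) | (grade1 << 16) | (grade2 << 12) | (area << 8) | host
    return str(ipaddress.IPv4Address(value))

# Grade 1 institute 100, active data center (grade 2 = 0), area 1, host 10:
print(server_address(grade1=100, grade2=0, area=1, host=10))  # 10.100.1.10
```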

Device management IP address
Assign one or more class C IP addresses for network management.

To facilitate management, assign consecutive IP addresses for L3 device management
and consecutive IP addresses for L2 device management.

Device interconnection IP address
An interconnection IP address is in the x.x.x.x/30 format. Assign the smaller IP
address to the device in the higher hierarchy. For devices in the same hierarchy,
assign the smaller IP address to the device with the smaller loopback address.
For example, the address of the port connecting to the downlink is even, such as
10.1.1.2/30, and the address of the port connecting to the uplink is odd, such as 10.1.1.1/30.
Reserve adequate space for future upgrades and expansion.
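The /30 convention can be illustrated with the standard `ipaddress` module, using the 10.1.1.0/30 example from the text (the odd/even assignment follows the guide; the variable names are ours):

```python
import ipaddress

# A /30 link has exactly two usable host addresses: one odd, one even.
link = ipaddress.ip_network("10.1.1.0/30")
uplink_port, downlink_port = link.hosts()

print("uplink port (odd):   ", uplink_port)    # 10.1.1.1
print("downlink port (even):", downlink_port)  # 10.1.1.2
```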

IP address of the server gateway
19 specifies the IP address of a gateway or a gateway VRRP.
10254 specifies a service IP address, such as a server IP address.
2.3.3 DNS Design
Overview
In addition to IP addresses, TCP/IP provides a special naming mechanism for hosts in
character string mode, that is, domain name system (DNS). As a hierarchical naming method,
the DNS specifies a name for a network device and sets a domain name resolution server on
the network to create the mapping between a domain name and an IP address. This enables
users to use easy-to-remember and meaningful domain names instead of complex IP
addresses.
Considerations
A user accesses a server using a domain name instead of the IP address because a domain
name is easily understood and remembered, for example, the Web address of Baidu is
www.baidu.com. Deploy servers in the DMZ of the data center to provide FTP and web
services. The DNS holds the mapping between the domain names and the corresponding IP
addresses and decouples the application systems from the servers.
The DNS is important for the data center. If the DNS fails, users in the data center cannot
access the application servers, severely affecting enterprise production. Therefore,
multiple DNS servers must be deployed in the data center. The following lists the roles of
these DNS servers:

Master server
As the management server in the data center, it adds, deletes, and changes domain name records. The changes are synchronized to the slave servers. Generally, deploy only one master server.

Slave servers
These servers obtain domain name records from the master server and provide DNS services as a cluster. Hardware-based LBs provide the server cluster function. You can deploy two slave servers.

Cache servers
These servers cache the results of the DNS requests from internal users to speed up
subsequent access. They are deployed on the slave servers.

Design Principles

Easy-to-remember
The DNS holds the mapping between the domain names and the corresponding IP
addresses. The domain names must be simple and easy-to-remember. The domain names
must be closely related to services provided by the servers.

Hierarchical
The domain names must be hierarchically designed based on physical locations or
logical service areas.

Intelligent
The DNS can intelligently distinguish carriers from broadband users, and map the
domain names to the IP addresses of the carriers to speed up user access.
Design Elements

NAT mapping of the domain names on the FWs
Implement NAT mapping for the Internet domain names: map the virtual address of the slave servers to a public IP address and set that public address as the access address for external Internet users.
An internal user sends requests to the DNS, which may distribute them to the slave DNS1 and DNS2 servers. If the slave DNS1 server fails, all requests are allocated to the slave DNS2 server. If all slave DNS servers fail, the master DNS server handles the requests.
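The failover order described above can be sketched as follows. This is an illustrative Python sketch only; the server names and zone data are hypothetical:

```python
def resolve(name, servers, failed):
    """Walk the DNS servers in preference order (slaves first, master
    last) and return the first reachable server's answer.
    `servers` maps a server name to its zone data; `failed` is the set
    of currently unreachable servers."""
    for server in ["slave-dns1", "slave-dns2", "master-dns"]:
        if server in failed:
            continue
        answer = servers[server].get(name)
        if answer:
            return server, answer
    return None, None

zone = {"www.example.com": "10.0.3.10"}
servers = {"slave-dns1": zone, "slave-dns2": zone, "master-dns": zone}
print(resolve("www.example.com", servers, failed={"slave-dns1"}))
# ('slave-dns2', '10.0.3.10')
```

In the real deployment the slave servers sit behind one LB virtual IP address, so this preference walk is performed by the LB's health checks rather than by the client.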
Figure 2-7 NAT mapping of the domain names on the FWs

Table 2-2 describes the suggestions for network deployment.
Table 2-2 Network deployment 1
DNS server type:
Deploy master and slave DNS servers. The deployment details are as follows:
- Deploy a cluster of slave DNS servers to provide services for external Internet users. Cache servers can be deployed on these servers to speed up subsequent access.
- Deploy a master DNS server in the DMZ and back it up. The standby DNS servers that provide services for internal users can be deployed in non-DMZ areas.
- Assign a private IP address to the slave DNS servers. The hardware-based LBs present this address as a virtual IP address.

Client configuration for an internal user:
Configure an active DNS server and standby DNS servers. They map to the DNS servers in the data center as follows:
- The active DNS server corresponds to the master DNS server.
- The standby DNS servers correspond to the slave DNS servers.

Internal DNS service:
The server uses a private IP address and provides DNS services for internal users. Deploy FW protection and filtering, instead of an IPS policy, for internal users accessing the DNS servers.

External DNS service:
Implement NAT forwarding on the FWs to translate the private IP addresses into public ones and provide DNS query services for external users. When Internet users access the DNS servers, the traffic passes through the intrusion prevention system (IPS) and the FWs in order. The IPS implements anti-attack protection and traffic cleaning, and the FWs implement protection and filtering.


Deploy the intelligent DNS on the LBs to provide services for Internet users.
Internet users initiate DNS requests to the DNS server of a carrier, which forwards them to the intelligent DNS for resolution. The blue line in Figure 2-8 indicates this procedure.
The intelligent DNS distinguishes users and resolves domain names to the corresponding IP addresses. If the user is from China Netcom, the DNS policy resolution returns the Netcom IP address for the domain name; if the user is from China Telecom, it returns the Telecom IP address.
The intelligent DNS also detects the quality of the links on the carrier side. When the links of one carrier are interrupted, the DNS returns the IP address of another carrier to ensure service continuity.
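The carrier-aware resolution logic can be sketched as follows. This is illustrative Python only; the carrier prefixes and answer addresses are hypothetical, and a real intelligent DNS matches against much larger prefix lists:

```python
import ipaddress

# Hypothetical carrier source prefixes and per-carrier answers for one name.
CARRIER_PREFIXES = {
    "netcom":  ipaddress.ip_network("1.2.0.0/16"),
    "telecom": ipaddress.ip_network("5.6.0.0/16"),
}
ANSWERS = {"netcom": "1.2.3.10", "telecom": "5.6.7.10"}

def intelligent_resolve(client_ip, link_up):
    """Return the answer on the client's own carrier when that carrier's
    link is up; otherwise fall back to any carrier whose link is up."""
    client = ipaddress.ip_address(client_ip)
    carrier = next((c for c, net in CARRIER_PREFIXES.items() if client in net), None)
    if carrier and link_up.get(carrier):
        return ANSWERS[carrier]
    for c, up in link_up.items():      # link failover to another carrier
        if up:
            return ANSWERS[c]
    return None

print(intelligent_resolve("1.2.9.9", {"netcom": True, "telecom": True}))   # 1.2.3.10
print(intelligent_resolve("1.2.9.9", {"netcom": False, "telecom": True}))  # 5.6.7.10
```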

Figure 2-8 Deploying the intelligent DNS to provide services for Internet users
Table 2-3 describes the suggestions for network deployment.
Table 2-3 Network deployment 2
DNS server type:
Deploy master and slave DNS servers for internal users. Deploy the intelligent DNS on the link load balancers (LLBs) for Internet users.

Client configuration for an internal user:
Configure an active DNS server and standby DNS servers. They map to the DNS servers in the data center as follows:
- The active DNS server corresponds to the master DNS server.
- The standby DNS servers correspond to the slave DNS servers.

Internal DNS service:
The server uses an internal IP address and directly provides DNS services for internal users. Deploy FW protection and filtering, instead of an IPS policy, for internal users accessing the DNS servers.

External DNS service:
Deploy the intelligent DNS on the LBs. The DNS provides public IP addresses without NAT forwarding. The DNS services for Internet users are independent of those for internal users.


2.3.4 Route Design
Overview
A router forwards data packets along the optimal path and provides two main functions:
- Routing: finds and selects the optimal path.
- Forwarding: forwards data packets to the destination address along the selected path.
There are two types of routes: static routes and dynamic routes.
- Dynamic routes include Border Gateway Protocol (BGP) routes and Interior Gateway Protocol (IGP) routes, such as Routing Information Protocol (RIP), Intermediate System to Intermediate System (IS-IS), and Open Shortest Path First (OSPF) routes. Dynamic routing protocols perform route learning, selection, and maintenance by themselves. If the network topology changes after a dynamic route is configured, the protocol learns the change and modifies the routing table accordingly.
- Static routes accurately control the paths along which data packets are forwarded; however, they cannot adapt to network changes because they are configured statically.
IGP is the base of an IP network. IGP carries the interconnection segment routes and loopback routes between routers. The IGP design affects critical characteristics such as the network traffic model, convergence performance, reliability, and security.
OSPF Design

IGP selection
IS-IS and OSPF are the most mature and most widely used IGPs. Although there are differences between them, the two protocols differ little in functions and performance.
In addition to technical considerations, IGP selection involves competitive marketing strategies; for example, Cisco prefers to deploy IS-IS to shield competitors. Select one of the two protocols based on the following principles:
- IS-IS: used on IP bearer networks and for public routes. Most large national NSP bearer networks adopt IS-IS.
- OSPF: used on MANs and for private routes. Adopt the OSPF dynamic routing protocol in the data center to ensure network stability and rapid route convergence and to facilitate future management and maintenance.

Design principles
If the network is not divided into multiple areas, deploy all routers in Area 0.
If the network is divided into multiple areas, consider the number of routes in each area.
Generally, the metric values at the two ends of a link must be consistent.

Area design
Adopt the OSPF dynamic routing protocol in the data center. Because the data center has few internal routes, OSPF can be configured between the core/aggregation devices and the egress routers. In addition, OSPF is configured between the campus core switches and the egress routers, and between the core/aggregation devices in the intra-city disaster recovery data center. All these devices are allocated to backbone Area 0. Allocate one or more network segment addresses to each service area at the access layer.
One Net DCN Solution
Design Guide 2 Data Center Network Design

Issue 01 (2012-05-15) Huawei Proprietary and Confidential
Copyright Huawei Technologies Co., Ltd
16

OSPF advertises the routes on the core/aggregation devices to ensure that routes are reachable across the network.
Figure 2-9 Route plan in a data center

Cost design
You can flexibly design the cost based on the following factors:
- Link distance
- Link bandwidth
Cost design determines the network traffic direction (except where TE is deployed), so focus on customer requirements for traffic direction. Before designing costs, learn the network traffic directions in different end-to-end scenarios.
OSPF provides two methods to set the link cost value:
- Set cost values for interfaces in the interface view.
- Calculate cost values from the bandwidth in the system view.
The two methods use different commands. If both are configured, the cost value set in the interface view takes precedence.
Reserve small cost values for large bandwidth so that multiple 100GE links interconnecting devices can be bundled in the future.
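The bandwidth-based method computes cost as the reference bandwidth divided by the link bandwidth, floored at 1. The sketch below illustrates this with an assumed 100 Gbit/s reference bandwidth; the actual reference value is a per-network design choice:

```python
def ospf_cost(link_bandwidth_mbit, reference_mbit=100000):
    """Bandwidth-based OSPF cost: reference bandwidth divided by link
    bandwidth, with a minimum cost of 1. A 100 Gbit/s reference
    (100000 Mbit/s, an assumed value) keeps GE, 10GE, and future 100GE
    bundles at distinct costs."""
    return max(1, reference_mbit // link_bandwidth_mbit)

for bw in (1000, 10000, 100000):   # GE, 10GE, 100GE in Mbit/s
    print(bw, ospf_cost(bw))       # costs 100, 10, 1
```

With the default 100 Mbit/s reference used by many implementations, GE and 10GE links would both collapse to cost 1, which is why the reference must be raised on high-speed networks.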

Reliability design
OSPF rapid convergence

OSPF rapid convergence includes the bidirectional forwarding detection (BFD) for OSPF feature, which enables rapid detection of link faults.
Table 2-4 describes the parameters and values for this feature:
Table 2-4 Parameters and their values
Timer Parameter                                          Reference Value
Hello interval                                           10 ms
Dead interval                                            40 ms
spf-schedule-interval {intelligent-timer max-interval
  start-interval hold-interval}                          5000 50 50
lsa-originate-interval {intelligent-timer max-interval
  start-interval hold-interval}                          5000 0 20
LSA arrival interval                                     15 ms
Flooding-control                                         30 ms

Security design
OSPF supports interface authentication and area authentication.
Interface authentication authenticates and encrypts the Hello packets sent and received on an interface. Area authentication authenticates and encrypts the Update packets within an area.
Both authentication modes support the simple password and MD5 authentication methods. MD5 authentication provides higher security.
2.3.5 VPN Design
Overview
Many production and office servers are deployed in a data center. In most cases, the servers are deployed on the campus network at the headquarters of an enterprise. A campus network is the office network of an enterprise; because regional enterprise branches are excluded from it, a campus network can be considered a private network. The servers in a data center are classified by information security level, and enterprise users are classified by responsibility. Virtual private networks (VPNs) can be created to plan the access relationships between the servers and the users.
In Figure 2-10, the servers are classified into class I, class II, and class III. The users are
classified into class A and class B. The users and servers are allocated into different VPNs and
routes are deployed between VPNs.

Figure 2-10 Route-based isolation between the servers in the data center


VPN deployment
The core/aggregation devices in the data center connect to the core devices on the campus network. In most cases, MPLS VPNs are deployed on the campus core devices. A multi-VPN-instance CE (MCE) is deployed at the core/aggregation layer to implement service isolation. For details, see Figure 2-11.

Figure 2-11 VPN deployment


2.3.6 Reliability Design
Overview
End-to-end reliability design covers device nodes, network topologies, and service systems. Currently, the reliability of nodes and network topologies is a smaller concern than that of service systems, so applying service system reliability technologies in specific scenarios is the focus of end-to-end reliability design.
The following lists the principles for reliability design:
- Network layering: divide the network into three layers: core, aggregation, and access. On a small network, merge the core and aggregation layers as required.
- Backup: back up physical resources, such as links, devices, paths, and planes.
- Failover: adopt the proper reliability technology to ensure failover and failback with minimum packet loss when a fault occurs.
- End-to-end protection: deploy reliability technologies in an end-to-end, hierarchical mode to prevent faults.

Device Reliability
To increase reliability, the following components are deployed redundantly:
- Control boards
- Switching network boards
- Line cards
- Service cards
- Fan modules
- Power supply modules
All Huawei products meet the preceding reliability requirements. For details about reliability features, see the manuals of the related products.
As an increasing number of users and devices access the data center, a single switch can no longer meet the growing network reliability requirements. Huawei provides the following solutions to address this issue:
- The Huawei S9300 core/aggregation switch supports the cluster switch system (CSS) function, which connects two switches through private stack cables so that they appear as one logical switch.
- The Huawei S5700 and S6700 access switches support the stack function: a maximum of nine devices can be connected together and appear as one logical switch that forwards packets. Switch stacking ensures that a large amount of data is forwarded with high network reliability.
The CSS feature brings the following benefits:

Helps customers maximize return on investments during network expansion.

Virtualizes two physical devices into a logical one during network expansion,
simplifying the device configuration and management.

Improves system reliability with device redundancy and backup.
Figure 2-12 shows the reliability design for devices.
Figure 2-12 Reliability design for devices


Table 2-5 describes the non-blocking stack counters of access switches.
Table 2-5 Stack counters of access switches
Item                                S5700 Series             S6700 Series
Supported topologies                Link and loop            Link and loop
Stack switching capacity            48 Gbit/s                80 Gbit/s
Maximum number of stacked devices   9                        9
Stack interface                     Private stack interface  Configured common interfaces
Stack cable                         57 private cables        Optical fiber

Network Reliability
In addition to device reliability, redundant network topologies are required for upper-layer reliability. Select network topologies based on the required reliability and available resources to meet customer requirements. Figure 2-13 shows a network topology.
Figure 2-13 Network topology



Active and standby network interface cards (NICs) are deployed on the servers, and dual uplinks are deployed for each server to ensure high availability.
NIC teaming bonds two NICs into one virtual NIC. The two NICs use the same IP address and MAC address and work in load balancing mode: they forward traffic at the same time, doubling the data bandwidth.


Link aggregation (LAG) is adopted between the core/aggregation device (as the server gateway) and the access devices to protect the links. Link aggregation provides the following advantages:
- Increased link bandwidth
- Link load balancing
- Improved link reliability through backup among member links
An IGP is configured between the core/aggregation devices in the data center and the core devices and egress routers on the campus network. BFD for IGP or IP FRR provides reliability protection, and LAG protects the links.
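Per-flow load balancing across LAG member links is typically hash-based, so each flow stays on one member link and packets arrive in order. The sketch below is illustrative; the member names and the hash choice are assumptions, and real switches hash in hardware over configurable header fields:

```python
import hashlib

MEMBERS = ["port1", "port2"]   # hypothetical aggregated member links

def lag_member(src_ip, dst_ip):
    """Pick a member link from a hash of the flow's addresses: one flow
    always maps to one link (preserving packet order), while different
    flows spread across the bundle. If a member fails, its flows rehash
    onto the remaining members."""
    key = f"{src_ip}-{dst_ip}".encode()
    idx = hashlib.md5(key).digest()[0] % len(MEMBERS)
    return MEMBERS[idx]

chosen = lag_member("10.30.0.11", "10.0.2.5")
assert chosen == lag_member("10.30.0.11", "10.0.2.5")   # flow affinity
```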
Service Reliability
Service reliability covers service software reliability, server cluster reliability, and the reliability of service switchover through DNS resolution of a domain name to a different IP address.
- Service software reliability: prevents a software operation fault from causing a task failure or worse.
- Server cluster reliability: multiple servers are clustered and appear as one server that provides specific services. The cluster can use multiple computers operating in parallel for higher computing speed, and can also use multiple computers as backups for one another. When a device in the cluster fails, the cluster continues to operate and provide services. Clustering reduces service interruption caused by single points of failure and ensures high availability of cluster resources.
- DNS switchover reliability: adopted when all server clusters fail; the DNS translates the domain name to a different IP address. For details, see section 2.3.3 "DNS Design".
2.3.7 Load Balancing Design
Overview
Generally, an LB is called an L4 or L7 switch.
- An L4 switch analyzes the IP layer and the Transmission Control Protocol/User Datagram Protocol (TCP/UDP) layer to balance traffic at Layer 4.
- An L7 switch supports L4 load balancing and also analyzes application layer information, such as HTTP URLs and cookies.
Figure 2-14 shows the LB function.

Figure 2-14 LB function
Design Principles
The following lists the principles for LB design:
- Server LBs resolve server performance bottlenecks, so they must support high throughput.
- LBs must support multiple load balancing algorithms, including polling (round robin) and IP-based and content-based hashing, to distribute load rationally.
- LBs must support health checks and be easy to expand to ensure system redundancy.
- LBs forward the traffic of the load-balanced server clusters, so they must be deployed at the core/aggregation layer to avoid becoming a traffic bottleneck.
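The polling and IP-based hash algorithms mentioned above can be sketched as follows. This is illustrative Python only; the server pool addresses are hypothetical, and real LBs implement these algorithms in hardware:

```python
import hashlib
from itertools import cycle

SERVERS = ["172.16.1.11", "172.16.1.12", "172.16.1.13"]  # hypothetical pool

# Polling (round robin): each new connection goes to the next server.
_rr = cycle(SERVERS)
def round_robin():
    return next(_rr)

# IP-based hash: the same client always lands on the same server,
# which keeps per-session state on one machine.
def ip_hash(client_ip):
    digest = hashlib.md5(client_ip.encode()).digest()
    return SERVERS[digest[0] % len(SERVERS)]

print([round_robin() for _ in range(4)])
# ['172.16.1.11', '172.16.1.12', '172.16.1.13', '172.16.1.11']
assert ip_hash("192.0.2.7") == ip_hash("192.0.2.7")  # deterministic
```

Content-based hashing works the same way but keys the hash on application data such as the HTTP URL instead of the client address.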
LB Deployment Modes

Symmetry mode
The LB translates the destination address of the traffic from the client to the server and translates the source address of the traffic from the server to the client. Figure 2-15 shows the deployment.
Figure 2-15 LB deployment in symmetry mode





Advantage: The LB can be deployed at the aggregation layer or the core layer. The LB controls both incoming and outgoing traffic and implements control policies in real time.
Disadvantage: Carrying both incoming and outgoing traffic requires high LB performance, and the LB may become a bottleneck.

Asymmetry mode
The LB changes the destination MAC address of the traffic from the client to the server into the server's MAC address, instead of translating the IP address. The traffic from the server to the client does not pass through the LB. The VIP address must be configured on a loopback interface of the server.
Figure 2-16 shows the LB deployment in asymmetry mode.
Figure 2-16 LB deployment in asymmetry mode


Advantage: The traffic from the server to the client does not pass through the LB. Compared with the symmetry mode, this mode requires lower LB performance and is suitable for high-throughput video distribution services.
Disadvantage: The LB must be deployed at the core switch layer, and the LB cannot count or charge traffic because the traffic from the server to the client does not pass through it.
2.3.8 QoS Design
Overview
A data center bears data services with different QoS requirements. The IP network must distinguish the data packets, mark (color) them, and provide congestion management, congestion avoidance, traffic policing, and traffic shaping. Using these methods, the network devices can provide specific services for each customer.
QoS covers three service models: best-effort forwarding, integrated services (IntServ), and differentiated services (DiffServ). Huawei uses the DiffServ model for the data center.
The data center network carries multiple types of services, including high-priority services and mid- and low-priority services. Congestion or delay may occur on certain nodes due to

the bandwidth restriction. To ensure that high-priority services are handled first when the network is congested or delayed, a bandwidth and priority handling strategy must be planned for each service type.
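Strict-priority queue scheduling is one common way to give high-priority services prioritized handling under congestion. The sketch below is an illustrative Python model, not a description of a specific Huawei scheduler:

```python
from collections import deque

# Hypothetical queues keyed by priority; 0 is the highest priority
# (e.g. delay-sensitive traffic), 2 the lowest (e.g. bulk data).
queues = {0: deque(), 1: deque(), 2: deque()}

def enqueue(priority, packet):
    queues[priority].append(packet)

def dequeue():
    """Strict-priority scheduling: always serve the highest non-empty
    queue, so high-priority packets are forwarded first whenever the
    port is congested."""
    for prio in sorted(queues):
        if queues[prio]:
            return queues[prio].popleft()
    return None

enqueue(2, "bulk"); enqueue(0, "voice"); enqueue(1, "web")
print([dequeue() for _ in range(3)])   # ['voice', 'web', 'bulk']
```

In practice strict priority is usually combined with weighted scheduling for the lower queues so that low-priority traffic is not starved.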
Service QoS
In most cases, a data center can absorb burst traffic, and QoS is not required.
In collaborative computing scenarios, such as search engines, oil exploration, and meteorological computing, computing tasks are handled collaboratively by multiple servers. Multiple servers may send computing results to the same server at the same time, and the burst traffic may cause packet loss on a port due to congestion.
The blue servers in Figure 2-17 send responses to the yellow servers. Congestion occurs at the marked points; if the forwarding queues on the network nodes are full, packets are lost.
The problem can be solved by using large-buffer line cards on the EOR switch and core switch. Large-buffer line cards cache burst data to prevent packet loss.
Currently, large-buffer line cards include a 48-port GE optical interface card (G48SBC) and a 48-port GE electrical interface card (G48TBC). Each line card has 1.2 GB of memory and is deployed facing the downstream access switches.
Figure 2-17 Data burst


2.4 Network Design of the Server Zone
2.4.1 Deployment Model of the Server Zone
A multi-layer service model is constructed from HTTP applications and is composed of the web, application, and database layers. The web, application, and database services run on different servers, improving service resilience and security: attackers cannot access the application servers or database servers even if they intrude into the web servers. The web and application services can be deployed on one server, whereas the database service is deployed on a dedicated server.
Each layer supports physical and logical networking modes.

Physical networking: Each layer has independent aggregation switches, access switches,
and FWs.

Figure 2-18 Physical networking in the server zone



Logical networking: The web, application, and database services are isolated by VLANs. Communication among layers must pass through the FW, and communication between the web and application layers must go through the LB.

Figure 2-19 Logical networking in the server zone


Physical networking is highly secure and places low performance requirements on switches. Logical networking simplifies network deployment and supports high scalability.
- The physical networking mode applies to scenarios that require high security and strictly hierarchical services and that involve simple communication among services at different layers.
- The logical networking mode applies to scenarios in which communication among layers is complex and the service traffic is heavy.
Enterprise services are becoming more and more complex, complicating communication among services at different layers. Physical networking is then hard to deploy, whereas logical networking offers high scalability, high resource efficiency, and flexible service access. Therefore, the logical networking mode is used in the server zone of the data center. The following uses logical networking as an example to describe the network design of the server zone.
2.4.2 Typical Server Zone Topology
Service layers on the logical network are isolated in the server zone. In most cases, the star
topology is used for the single-layer or dual-layer logical network, as shown in Figure 2-20.

Figure 2-20 Topology in the server zone


Aggregation switches are dual-homed to two core switches, connecting the server zone to the
core zone. If the number of servers is small, the servers can be directly connected to the
aggregation layer.
FWs and LBs are deployed in the server zone. LBs can be deployed at the access layer and
aggregation layer.

If services are complex, LBs are deployed at the access layer and are tightly coupled
with services.

If services are load balanced using network protocols, LBs are deployed at the
aggregation layer to simplify service expansion.

FWs are deployed in in-line or bypass mode at the aggregation layer based on the requirements on security and performance.
2.4.3 VLAN Design
The server zone is divided into multiple VLANs for isolation based on services. For example,
web services include web servers, application servers, and database servers. They are assigned
to different VLANs. For details about design rules, see section 2.3.1 "VLAN Design".

2.4.4 Aggregation Layer Design
The aggregation layer aggregates the uplinks of the access switches and exchanges data between the service servers in the data center server zone and the communication and aggregation zones outside the data center.
Switches at the aggregation layer must provide 10GE and GE ports. Value-added service
modules such as FWs and LBs need to be deployed at the aggregation layer.
The aggregation layer is the boundary between Layer 2 and Layer 3. Layer 3 protocols are
used between the aggregation layer and core layer, and Layer 2 protocols are used between
the aggregation layer and access switches. To prevent Layer 2 loops, aggregation switches are
virtualized into a logical device using the CSS technology. The aggregation layer requires
devices with high reliability. It is recommended that you use S9700 or S9300 switches.
2.4.5 Server Gateway Design
The boundary between routing and switching devices can be set at the aggregation layer or
access layer.

The boundary is set at the aggregation layer of large data centers whose Layer 2
networks are large and services are flexibly deployed.

The boundary is set at the access layer when Layer 3 routing is used between the access
layer and aggregation layer, preventing Layer 2 loops. However, the broadcast domain is
smaller.
FWs and LBs need to be deployed at the routing-switching boundary in most server zones. The gateway location varies with the service: if servers require load balancing, such as web servers, the gateway is deployed on the LB; if servers do not require load balancing, the gateway is deployed on the FW; if servers do not require security either, the gateway is deployed on the switch.
The following describes the gateway configuration procedures when the FW and LB are
connected to the switch in bypass mode and when the FW and LB are deployed
symmetrically.
Server Gateways on the FWs
Figure 2-21 Server gateways on the FWs



Deployment

The aggregation switch and FW-1 advertise routes to each other through OSPF. Configure VRRP between FW-1 and FW-2 to provide high availability (HA) and load balancing between them, and set the gateway IP address to the VIP 10.30.0.254.
Figure 2-21 shows the connection relationship between FW-1 and the aggregation switch.
FW-2 configurations are similar to FW-1 configurations.

Flow path
Client-server (C-S)
1. Packets are transmitted from the aggregation switch to the core/aggregation switch
through VLAN 120.
2. The core/aggregation switch forwards packets to FW-1 through VLAN 110 based on the
destination IP address (10.30.0.11).
3. FW-1 queries the route and forwards packets to the servers in VLAN 300.
----End
Server-client (S-C)
1. A server transmits packets back to the FW-1.
2. FW-1 queries the route and forwards packets at Layer 3 to the core/aggregation switch
through VLAN 110.
3. The core/aggregation switch queries the route and forwards packets through VLAN 120.
----End
Server-server (S-S) on the same network segment
1. The access switch forwards packets directly at Layer 2.
----End
S-S crossing network segments
1. Packets are transmitted from the access switch and core/aggregation switch to FW-1
through VLAN 300.
2. FW-1 filters packets and forwards them at Layer 3 to the server through VLAN 400.
----End

Server Gateways on the LBs
Figure 2-22 Server gateways on the LBs



Deployment
Configure OSPF so that the aggregation switch and FW-1 advertise routes to each other.
Configure VRRP between FW-1 and FW-2, and configure a default route on LB-1 whose next
hop points to the FW VIP 10.50.0.254. LB-1 uses the private network 172.16.0.0/16 for the
servers and the server VIP address 10.50.0.11 for external users and devices. Configure VRRP
between LB-1 and LB-2 and set the server gateway IP address to the VIP 172.16.1.254.
Figure 2-22 shows the connection relationship between FW-1, LB-1 and the aggregation
switch. FW-2 configurations are similar to FW-1 configurations, and LB-2 configurations are
similar to LB-1 configurations.

Flow path
C-S
1. Data flows are transmitted from the access switch to the aggregation switch through
VLAN 120.
2. The aggregation switch forwards data flows to FW-1 through VLAN 110 based on the
destination IP address (10.50.0.11).
3. FW-1 queries routes and forwards data flows to LB-1 through VLAN 100.
4. LB-1 terminates packets as an agent, selects a server based on the load balancing
algorithm, and changes the source IP address to 172.16.1.254 and the destination IP
address to the server address 172.16.1.11.
----End
S-C
1. A server transmits packets back to the LB-1 (gateway).
2. LB-1 terminates packets as an agent. After changing the source IP address to 10.50.0.11
and the destination IP address to the client IP address, LB-1 forwards the packets to
FW-1 through VLAN 100.
3. FW-1 queries the route and forwards packets at Layer 3 to the core/aggregation switch
through VLAN 110.
4. The core/aggregation switch queries the route and forwards packets through VLAN 120.
----End
S-S on the same network segment
1. The access switch forwards packets directly at Layer 2.
----End
S-S crossing network segments
1. Packets are transmitted from the access switch and core/aggregation switch to LB-1
through VLAN 300.
2. LB-1 terminates packets as an agent. After replacing the source and destination IP
addresses of the packets, LB-1 forwards the packets to FW-1.
3. If FW-1 finds that the route to the destination server is directly connected, FW-1
forwards the packets to the destination server.
4. If FW-1 finds that the next hop of the route points to LB-1, FW-1 forwards packets to LB-1.
LB-1 terminates packets as an agent. After replacing the source and destination IP
addresses of the packets, LB-1 forwards the packets to the destination servers.
----End
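The two translation directions above can be condensed into a short model (an illustrative sketch only: the addresses follow Figure 2-22, and round-robin stands in for whatever scheduling algorithm the LB actually runs):

```python
# Minimal model of a symmetrical (NAT-mode) load balancer: the LB
# terminates client connections as an agent and rewrites both the
# source and destination addresses in each direction.

class FullProxyLB:
    def __init__(self, vip, snat_ip, servers):
        self.vip = vip          # server VIP seen by external users (10.50.0.11)
        self.snat_ip = snat_ip  # LB-side source address (172.16.1.254)
        self.servers = servers  # real servers on the private 172.16.0.0/16 network
        self._next = 0

    def client_to_server(self, client_ip, dst_ip):
        """C-S direction: pick a real server and rewrite src/dst."""
        if dst_ip != self.vip:
            raise ValueError("packet is not addressed to the service VIP")
        server = self.servers[self._next % len(self.servers)]  # round robin
        self._next += 1
        return (self.snat_ip, server)

    def server_to_client(self, client_ip):
        """S-C direction: restore the VIP as source, the client as destination."""
        return (self.vip, client_ip)

lb = FullProxyLB("10.50.0.11", "172.16.1.254", ["172.16.1.11", "172.16.1.12"])
```

Because both directions are rewritten, external clients never see the 172.16.0.0/16 addresses, which is why the reverse path must also traverse the LB.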
2.4.6 Value-Added Services Design
FW Deployment
FWs are deployed at the aggregation layer in the server zone. The following is required for
FWs:

Reliability: FWs support data backup in a two-node cluster.

High performance: FWs are chassis-based devices, preventing service bottlenecks.

High scalability: Boards can be inserted into FWs to implement service expansion.

Virtualization: This feature allows FWs to isolate services.
FWs can be deployed in in-line or bypass mode.

In in-line mode, upstream and downstream traffic passes through the FW, ensuring
service security. However, the FW may become the service bottleneck.

In bypass mode, services can be flexibly deployed. Based on the service security level,
you can determine whether ingress traffic, egress traffic, or no traffic passes through the
FW.
In most server zones, FWs are deployed in bypass mode. The following describes detailed
design for FW deployment:
Each FW is directly connected to the aggregation switch to ensure service reliability. Two
interfaces on the FW connect to the switch. If the traffic is not heavy, one interface can be
used for transmitting upstream and downstream traffic that is isolated by VLANs. When
traffic becomes heavy, connect more interfaces to the switch.
As shown in Figure 2-23, FWs run in Layer 3 mode. Two FWs support data backup in a
two-node cluster and form a VRRP group. The switch imports traffic in either of the following
modes:

Configure upstream and downstream gateways on the FW. The configuration for this
mode is simple. Traffic is imported based on the service or subnet mode. In this mode,
the control granularity is coarse.

Use policy-based routing on the aggregation switch. The configuration for this mode is
complex but flexible, and requires high performance switches. Different policies can be
adopted based on the flow mode.
Figure 2-23 FW deployment


LB Deployment
LBs can be deployed in symmetrical (NAT) or asymmetrical (triangle) mode. In symmetrical
mode, traffic is easy to manage and monitor, and devices are easy to deploy. This mode does
not support data collection or accounting and has no special requirements for the network.
Therefore, you are advised to use symmetrical mode except for services requiring heavy
traffic, such as video.
When deployed in the server zone, LBs in a two-node cluster form a VRRP group. Web
servers and application servers require the load balancing function. Deploy the service
gateway on the LB. Set the next hop of the LB to the FW IP address. Deploy the service
gateway of database servers on the FW.
NetStream Deployment
Aggregation switches support the NetStream function. Traffic is collected and analyzed to
obtain detailed information about network and application resource usage, so that resources
can be efficiently planned and allocated.
The S9300 and S9700 switches support NetStream in distributed and integrated modes. The
distributed NetStream function does not require additional hardware but has a low sampling
ratio. The integrated NetStream function has a high sampling ratio but requires LPU slots for
NetStream boards.
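The flow accounting idea behind NetStream can be sketched in a few lines (a simplified model: packets are sampled 1-in-N and aggregated into flow records keyed by the 5-tuple; real NetStream additionally timestamps, ages, and exports flows):

```python
# Sketch of NetStream-style flow accounting with 1-in-N sampling.
from collections import defaultdict

def netstream_sample(packets, sampling_ratio):
    """Aggregate sampled packets into flow records keyed by the 5-tuple."""
    flows = defaultdict(lambda: {"packets": 0, "bytes": 0})
    for i, pkt in enumerate(packets):
        if i % sampling_ratio != 0:  # sample 1 packet in every sampling_ratio
            continue
        key = (pkt["src"], pkt["dst"], pkt["sport"], pkt["dport"], pkt["proto"])
        flows[key]["packets"] += 1
        flows[key]["bytes"] += pkt["len"]
    return dict(flows)
```

The sampling_ratio parameter trades accounting accuracy against processing load, which is the distinction the text draws between the distributed and integrated modes.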

When NetStream is deployed on downstream interfaces of aggregation switches, the
function analyzes traffic distribution among servers and services in the server zone. It is
recommended that you configure distributed NetStream on downstream interfaces. This
function analyzes downstream traffic, simplifying service planning for servers and
locating faults on the network and servers.

It is recommended that you deploy an integrated NetStream board on the upstream interface
of the aggregation switch. The function analyzes vertical traffic in the server zone, ensuring
traffic collection performance. The traffic analysis result shows user behavior and service
load. You can plan and adjust services based on the analysis result to prevent security risks.
2.4.7 Access Reliability Design
Access switches in the server zone run in Layer 2 mode.
Connections between access switches and aggregation switches support multiple redundancy
modes, such as the U-shaped networking, square networking, triangle networking, and CSS
and iStack networking.

U-shaped networking
Figure 2-24 Server zone access to the aggregation layer (U-shaped networking)


As shown in Figure 2-24, two access switches and two aggregation switches form a U-shaped
network, and a Layer 3 link is configured between two aggregation switches. This is a
loop-free network that does not require the configuration of Spanning Tree Protocol (STP) or
Multiple Spanning Tree Protocol (MSTP). Two aggregation switches form a VRRP group.
Different VRRP groups are assigned different VLANs. VLANs have different active nodes to
implement load balancing. A switch supports a maximum of 255 VRRP groups, so it can be
assigned 255 VLANs. BFD for VRRP can be used for services requiring high reliability to
speed up convergence.
In this networking, VRRP heartbeat packets pass through access switches, and access
switches in different VRRP groups (in different racks) must be assigned different IP network
segments.
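The per-VLAN active-node arrangement described above can be sketched as follows (a minimal model; the switch names AGG-1/AGG-2 and the priority values 120/100 are illustrative assumptions, not values from this guide):

```python
# Sketch of per-VLAN VRRP load balancing: the router with the highest
# priority in a group becomes the master (active gateway) for that
# VLAN, so alternating priorities across VLANs splits the load
# between the two aggregation switches.

def vrrp_master(priorities):
    """VRRP election: highest priority wins."""
    return max(priorities, key=priorities.get)

def plan_vlan_masters(vlans):
    """Alternate priorities so each switch is master for half the VLANs."""
    plan = {}
    for i, vlan in enumerate(vlans):
        if i % 2 == 0:
            prios = {"AGG-1": 120, "AGG-2": 100}
        else:
            prios = {"AGG-1": 100, "AGG-2": 120}
        plan[vlan] = vrrp_master(prios)
    return plan
```

With four VLANs, AGG-1 is the active gateway for the first and third and AGG-2 for the second and fourth, so both switches carry traffic in normal operation.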

Square networking
Figure 2-25 Server zone access to the aggregation layer (square networking)


As shown in Figure 2-25, two access switches are connected to aggregation switches, and two
aggregation switches are connected through a Layer 2 trunk. The link between aggregation
switches is used for transmitting VRRP heartbeat packets and forwarding traffic. Configure
STP or MSTP on aggregation switches and access switches to prevent Layer 2 loops. In this
networking, traffic is transmitted among switches, and a link fault may prevent some servers
from accessing the network; therefore, the trunk must be configured to ensure network
stability and reliability. In addition, configure the two aggregation switches to form a VRRP
group, implementing redundancy backup at the next hop.
This networking plan requires STP and has a long fault recovery time, so it applies to
scenarios that tolerate long convergence times.

Triangle networking
Figure 2-26 Server zone access to the aggregation layer (triangle networking)
(The figure shows the two aggregation switches in active and standby VRRP roles connected
by a VRRP heartbeat link, SmartLink on the dual-homed access switches, and the traffic route
upon a fault.)


As shown in Figure 2-26, each access switch is dual homed to aggregation switches, and the
two aggregation switches are connected through a Layer 2 trunk. Configure STP or SmartLink
on switches to prevent triangle loops. It is recommended that you configure SmartLink that
requires shorter convergence time. The two aggregation switches form a VRRP group, and
switches in different VRRP groups (in different racks) can use the same IP network segment.
This networking plan is widely used because of its high reliability and link efficiency.
However, each access switch has more than one uplink, which requires high port density on
aggregation switches and complicates cabling.

CSS and iStack
The preceding three networking plans require technologies such as xSTP and SmartLink to
prevent Layer 2 loops. These technologies, however, have low link efficiency and are
complex to deploy. The CSS and iStack technologies were developed to simplify
configuration and management, and most data centers have adopted plans based on them.
CSS networking plans are available both with and without iStack deployed at the access
layer. Two aggregation switches form a CSS. The double uplinks of an access switch are
configured as an Eth-Trunk that connects to the aggregation switch cluster. The L2 and L3
hash algorithms implement link load balancing, improving link efficiency.
The networking plan with iStack deployed at the access layer is more flexible. Two or four
uplinks can be configured for access layer switches. The service capacity can be dynamically
configured, and switches are easy to manage with one management unit for multiple access
switches. The plan is not applicable to networks with heavy horizontal traffic among access
switches because the stacked link capacity is insufficient.
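The hash-based link selection on an Eth-Trunk can be sketched as follows (illustrative only: CRC32 over the L3/L4 fields stands in for the vendor-specific hardware hash, and the member names are made up):

```python
# Sketch of per-flow Eth-Trunk load balancing: hashing header fields
# modulo the member count keeps each flow on one link while spreading
# different flows across all members.
import zlib

def ethtrunk_member(src_ip, dst_ip, sport, dport, members):
    key = f"{src_ip}|{dst_ip}|{sport}|{dport}".encode()
    return members[zlib.crc32(key) % len(members)]
```

Because the hash is deterministic, packets of one flow never reorder across member links; load balancing emerges only across many distinct flows.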
Figure 2-27 Server zone access to the aggregation layer (aggregation layer cluster)


Figure 2-28 Server zone access to the aggregation layer (aggregation layer cluster and access layer
iStack)


Select a proper networking plan based on the service type, maintenance, and investments. The
cluster and iStack networking plans are mostly used now.
3 Data Center Solution Design
3.1 Overview
The access zone functions as a bridge for a data center to provide external services. Security,
connectivity, and reliability are key factors for designing the access zone.
The access zone network has the egress router layer and security protection layer. Based on
the access type and service type, the access zone is divided into multiple connected areas,
such as Internet zone, partner access zone, disaster backup access zone, and intranet zone. The
security protection layer is optional for different access areas. For example, the disaster
backup access zone does not require the security protection layer. The high-level and
medium-level security protection layers are required for the Internet zone and partner access
zone respectively.
An actual data center may have only one or two access zones. Use a proper plan and
technologies according to the site requirements.
3.2 Internet Zone Design
As shown in Figure 3-1, the egress routers, LBs, FWs, intrusion detection system (IDS)
device, distributed denial of service (DDoS) device, and Secure Sockets Layer VPN (SSL
VPN) gateway are deployed in this zone.

When two ISP egresses are leased, deploy LLBs to respond to requests from different
carriers. If there is only one ISP egress, you do not need to deploy LLBs.

The DDoS device can be deployed in bypass or in-line mode. The bypass mode allows
you to deploy fewer devices and helps eliminate a single point of failure (SPOF) and
improve service reliability.

FW-1 two-node cluster is deployed in in-line mode. The cluster has powerful attack
defense capabilities, and it functions as the first protection layer. FW-1 can function as
the IPSec VPN gateway for terminating tunnel packets.

The IDS device is deployed inside the firewall for detecting intrusions and preventing
incorrect alarm reporting.

The SSL VPN device is used to terminate tunnel packets.

FW-2 two-node cluster is deployed in in-line mode. The cluster has powerful attack
defense capabilities, and it functions as the second protection layer. The cluster improves
security by isolating the DMZ and core zone.
Figure 3-1 Topology in the Internet zone
(The figure shows the ISP1 and ISP2 egresses into a CSS of egress routers, the bypassed
DDoS device receiving mirrored or split traffic, the FW1 and FW2 two-node clusters with
FW heartbeat traffic, the IDS device, the LBs, the SSL VPN gateway, the DMZ, and the core
zone.)


3.2.2 DDoS Detection and Cleaning Design
Figure 3-2 Procedure for traffic detection and cleaning
(The figure shows the detection device receiving mirrored or split traffic from an optical
splitter off Router1 on the Internet side; the cleaning device and Router1 interconnected over
GE1/0/1 7.7.1.2/24-GE2/0/1 7.7.1.1/24 and GE1/0/2 7.7.2.2/24-GE2/0/2 7.7.2.1/24; the
management center exchanging logs, captured traffic, and management traffic; the protected
target 2.2.2.0/24 behind Router2; and the traffic paths before and after cleaning.)


A bypassed DDoS device can be connected in port mirroring mode or through optical splitters.
Use an optical splitter or the port mirroring technology to import traffic to the DDoS detection
center where Huawei E8000E-X or E1000E-X is deployed. The DDoS detection center
reports attacks to the management center. The management center then enables the
protection function and uses BGP routes to import traffic to the cleaning device. The cleaned
traffic is forwarded to the router using policy routes, MPLS VPN, or Layer 2 transparent
transmission.

An optical splitter copies traffic without affecting forwarding. Optical splitters are
mostly used on carrier networks. The advantage is that no interface is required on
switches, whereas the disadvantage is that the optical splitter deployment cost is high.

The port mirroring technology copies inbound, outbound, and bidirectional traffic on an
interface. When the interface transmission rate reaches the threshold, mirrored traffic
may be lost. The advantage is that this technology saves costs by using interfaces on
common switches, whereas the disadvantage is that you must configure port mirroring in
the command line interface (CLI) on the router.
Detected DDoS attacks are cleaned in the following process:
Assume that the IP address of the protected target is 2.2.2.0/24. When abnormal traffic to
2.2.2.2/32 is detected, the traffic is automatically diverted to the cleaning device.
In the management center, configure the protected target 2.2.2.0/24 and set the import mode
to automatic. On the cleaning device, configure the next hop address 7.7.2.2 for generating
dynamic routes. This address is the IP address of interface GE1/0/2 of the router that is
directly connected to the cleaning device.
When detecting abnormal traffic to 2.2.2.2/32, the management center delivers the import task
to the cleaning device. The cleaning device generates a static route to 2.2.2.2/32 with the next
hop 7.7.2.2 and delivers the route to the forwarding information base (FIB) table. If cleaned
traffic matches the FIB entry, the traffic is forwarded to GE1/0/2 on Router1.


To use MPLS or GRE reinjection, run the firewall ddos bgp-next-hop fib-filter command to
prevent generated static routes from being delivered to the FIB table to ensure MPLS or GRE
forwarding.
Configure BGP community attributes and advertise the dynamically generated routes. The
statically generated routes are then imported into BGP, which advertises them to Router1.
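The diversion step can be condensed into a small sketch (a minimal model using the addresses from Figure 3-2; the dictionary stands in for the cleaning device's routing table, and the BGP advertisement itself is outside the sketch):

```python
# Sketch of DDoS traffic diversion: when an attacked /32 inside the
# protected prefix is reported, install a host route whose next hop is
# the reinjection interface; this is the route later advertised to
# Router1 through BGP.
import ipaddress

def divert(protected_prefix, attacked_host, reinject_next_hop, rib):
    if ipaddress.ip_address(attacked_host) not in ipaddress.ip_network(protected_prefix):
        raise ValueError("host is outside the protected prefix")
    rib[f"{attacked_host}/32"] = reinject_next_hop
    return rib

rib = divert("2.2.2.0/24", "2.2.2.2", "7.7.2.2", {})
```

The /32 route is more specific than the normal route to 2.2.2.0/24, which is exactly why only the attacked host's traffic is pulled through the cleaning device.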
Multiple reinjection modes are available, whose configurations are simple. The reinjection
modes include the static route, policy route, GRE, MPLS VPN, and Layer 2 reinjection modes.
Other reinjection modes are also available such as MPLS Label Distribution Protocol (MPLS
LDP) reinjection mode, which is not described in this document.

As the simplest reinjection mode, static route reinjection applies to scenarios where only
one reinjection link exists. The management center delivers the configuration, and a static
route is automatically generated.

Policy route reinjection is a commonly used mode and applies to scenarios where multiple
reinjection interfaces exist. A customized policy route is used on each reinjection interface.
Policy route reinjection is recommended because of its simple configuration. However,
when the network topology changes, you must change the policy route configuration. If the
changes are large and the IP addresses of the protected targets are sparse, a large number of
policy routes must be configured, which requires manual maintenance and affects the
performance of the cleaning device. In that scenario, you are advised to use the MPLS
reinjection mode instead.

The GRE reinjection mode allows the cleaned traffic to be transmitted to the reinjection
router through a GRE tunnel configured on the cleaning device.
Reinjection applies only to the unidirectional traffic that is cleaned; GRE reinjection cannot
be used on TCP traffic. If BGP importing is used, GRE reinjection avoids import routes and
directly delivers reinjection traffic to downstream routers that cannot learn import routes,
preventing routing loops. The GRE reinjection mode requires only the GRE function and
basic route forwarding, and applies to scenarios where few reinjection routers are deployed.
If GRE tunnels must be established between the cleaning device and many reinjection
routers, manual configuration becomes complex; in that case, you are advised to use the
dynamic route reinjection mode.

If BGP importing is used, MPLS VPN reinjection avoids import routes and directly
delivers reinjection traffic to downstream routers that cannot learn import routes. This
prevents routing loops. The MPLS VPN reinjection mode can be deployed flexibly and
has high scalability. This mode, however, requires that the router supports MPLS
functions.

Layer 2 reinjection is used when only Layer 2 forwarding devices exist between the core
switching device and protected target.
3.2.3 FW Design
In most cases, network attackers attempt to intrude or destroy servers (hosts), steal sensitive
data on servers, occupy network bandwidth, interrupt services provided by servers, or even
crash network devices. In the case of a network attack, the network service is degraded or
even unavailable. Network attacks are classified into traffic attacks, scanning and snooping
attacks, malformed packet attacks, and special packet attacks.

Traffic attack
Attackers send a large amount of unnecessary data to the server, occupying most server
resources and network bandwidth. The excessive load prevents the server from providing
services.
Traffic attacks flood target hosts with numerous TCP, UDP, and ICMP packets. Some
attackers use bogus source addresses to attack target hosts without being detected.
Configure defense against SYN flood attacks to limit the rate of SYN packets based on the
interface, IP address, or security zone. Configure defense against DNS attacks to check the
validity of DNS packets on the interface and implement reverse source authentication. Also
configure defense against UDP flood, ICMP flood, TCP flood, and other protocol attacks.
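The SYN rate limiting mentioned above can be sketched as a token bucket (an illustrative model; the rate and burst values are arbitrary, and time is passed in explicitly so the behavior is deterministic):

```python
# Sketch of per-interface SYN flood mitigation: a token bucket admits
# SYN packets up to a configured rate and drops the excess.

class SynRateLimiter:
    def __init__(self, rate_pps, burst):
        self.rate = rate_pps   # sustained SYN packets per second
        self.burst = burst     # maximum burst size
        self.tokens = float(burst)
        self.last = 0.0

    def allow(self, now):
        # Refill tokens for the elapsed time, capped at the burst size.
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True   # SYN admitted
        return False      # SYN dropped
```

The same bucket structure can be keyed per interface, per IP address, or per security zone, matching the three scopes the text lists.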

Scanning and snooping attack
Scanning and snooping attacks are based on IP address scanning or port scanning. In IP
address scanning, attackers send IP (TCP/UDP/ICMP) packets to random destination
addresses to find potential target hosts and networks. In port scanning, attackers scan TCP
and UDP ports on the attacked server to identify the operating system and the monitored
services. By scanning and snooping, an attacker learns the service types and security
vulnerabilities of the system and prepares for further intrusion.

Malformed packet attack
In a malformed packet attack, attackers send malformed IP packets to the system. The
system may break down when processing these packets, affecting its proper running.
Malformed packet attacks include Ping of Death, Teardrop, IP address spoofing, Land,
Smurf, Fraggle, WinNuke, and large ICMP attacks.

Special packet attack
An attacker sends valid packets to a network to detect the network topology in preparation
for attacks. To prevent special packet attacks, configure defense against oversized ICMP
packets (longer than 5000 bytes), ICMP unreachable packets, ICMP redirection packets,
Tracert attacks, packets with the record route option, and packets with the timestamp option.
Huawei Eudemon FWs use the multi-layer filtering technology to prevent various attacks,
distinguish attack traffic from normal traffic, and take measures to protect internal networks.
The FW provides the following functions:

Access control list (ACL). An ACL is composed of a list of rules including permit and
deny. It classifies packets by matching packet information such as the source IP address,
destination IP address, source port number, destination port number, and upper-layer
protocol. The ACL includes the basic ACL and advanced ACL.
Basic ACL
The basic ACL ranges from 2000 to 2999 and matches packets based on the source IP
address.
Advanced ACL
The advanced ACL ranges from 3000 to 3999 and matches packets based on the
source IP address, destination IP address, source port number, destination port
number, and upper-layer protocol.
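The first-match semantics of ACL evaluation can be sketched as follows (a simplified model: real ACL rules match address wildcards and port ranges, whereas this sketch uses exact-match fields; the rules and addresses are made up):

```python
# Sketch of first-match ACL evaluation: rules are checked in order and
# the first matching rule decides permit or deny; unmatched packets
# fall through to the default action.

def acl_lookup(rules, packet, default="deny"):
    for action, match in rules:
        if all(packet.get(k) == v for k, v in match.items()):
            return action
    return default

rules = [
    ("deny",   {"src": "10.1.1.1"}),                               # basic-ACL style: source only
    ("permit", {"src": "10.1.1.2", "dport": 80, "proto": "tcp"}),  # advanced-ACL style: 5-tuple fields
]
```

The first rule mimics a basic ACL (2000-2999, source address only); the second mimics an advanced ACL (3000-3999, full 5-tuple).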

Packet filtering
Packet filtering controls incoming and outgoing traffic between networks at different
security levels.
The FW obtains the packet header information and matches packets based on the source
IP address, destination IP address, source port number, destination port number, and
upper-layer protocol number. According to the matching result, the FW forwards or
discards the packets.
The FW supports interzone packet filtering and default packet filtering.

Time range
In practice, users may want certain ACL rules to be valid only in a certain period. This
indicates that the ACL rules are used to filter packets based on the time range.

Application specific packet filter (ASPF)
The stateful FW supports multi-channel protocols such as FTP. The stateful FW needs to
detect control channel information and dynamically create a ServerMap entry based on
the packet payload.
The ASPF function meets the preceding requirements by detecting packet information at
the network layer and application layer.

Long connection
On actual networks, sessions between some special services cannot be aged out. The
long connection function ensures proper running of these services.

Network address translation (NAT)
NAT refers to the process of translating the IP address in an IP header into another IP
address. NAT helps users with private network IP addresses to access public networks.

Virtual FW
A large number of servers are deployed in the Internet zone, but they are located on
different VPNs. To address this problem, deploy virtual FWs.
Each virtual FW integrates a VPN instance, a security instance, and a configuration
instance. These instances provide users with the proprietary route forwarding service,
security service, and configuration management service.
A VPN instance provides isolated VPN routes for the users under each virtual FW.
These VPN routes are used to forward the packets received by each virtual FW.
A security instance provides isolated security services for the users under each virtual
FW. The security instance contains private interfaces, security zones, interzones, ACL
rules, and NAT rules. In addition, it provides the security services such as blacklist,
packet filtering, attack defense, ASPF, and NAT for the users under virtual FWs.
A configuration instance provides isolated configuration management services for the
users under each virtual FW. The configuration instances allow the users to log in to
their own virtual FW to manage and maintain VPN routes and security instances.
The virtual FW supports the Layer 2 transparent mode and the Layer 3 mode.
Figure 3-3 Networking diagram of virtual FWs in Layer 3 mode


As shown in Figure 3-3, the FW provides the virtual FW service, vfw1 is used by web
service A, and vfw2 is used by application service B. A and B belong to different VPNs.
The configuration roadmap is as follows:
Configure VPN instances on the FW, create and configure ACL rules, and configure
packet filtering.
Figure 3-4 Networking diagram of virtual FWs in Layer 2 transparent mode


In the preceding networking diagram, FW interfaces work in Layer 2 mode, GE1/0/1 and
GE1/0/2 belong to VLAN 10, and GE1/0/3 and GE1/0/4 belong to VLAN 20.
The configuration roadmap is as follows:
Configure VPN instances on the FW, bind VLANs to the VPN instances, configure
working modes of interfaces, and add the interfaces to the VLANs. Create and configure
ACL rules, and configure packet filtering.

Deep packet inspection (DPI)
The DPI technology inspects the content of data at the application layer. It matches parsed
data against the DPI signature file and identifies the application type of traffic at the IP and
UDP/TCP layers. Based on the application type, DPI then allows the transmission, blocks it,
or limits its rate.
The DPI signature library identifies known protocols, which is what makes DPI effective.
The library is a set of signature rules for network packets such as P2P, VoIP, and video
packets. Security service networks periodically update the DPI signature library to cover
new hotspot protocols, ensuring that devices have the latest signature rules. The DPI
signature rules are classified into:
General rule: the top-level category, including P2P, VoIP, IM, Streaming, Stock, Game,
Mail, and PeerCasting.
Application rule: the second-level category; for example, the BT application belongs to
the P2P category.
Protocol rule: the smallest category, contained in the application categories. Protocols
vary with updates of the DPI signature file.
Control network protocols that consume heavy bandwidth to reduce their impact on other
services. For example, detect P2P and video protocols and limit their traffic rate to
500 kbit/s so that P2P applications and online video occupy only limited bandwidth. The
DPI technology applies to this kind of scenario.
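The classify-then-limit behavior can be sketched as follows (illustrative only: the keyword "signatures" are stand-ins for real protocol decoders, and the 500 kbit/s limit follows the example in the text):

```python
# Sketch of DPI-based traffic control: classify a packet by signature,
# then cap bandwidth-heavy categories at a configured rate.

SIGNATURES = {"bittorrent": "P2P", "rtsp": "Streaming"}   # toy signature library
LIMITS_KBPS = {"P2P": 500, "Streaming": 500}              # per-category rate caps

def classify(payload):
    for sig, category in SIGNATURES.items():
        if sig in payload:
            return category
    return "other"

def police(payload, requested_kbps):
    """Return the rate actually granted to this traffic."""
    limit = LIMITS_KBPS.get(classify(payload))
    return min(requested_kbps, limit) if limit is not None else requested_kbps
```

Unclassified traffic passes at its requested rate, which mirrors the goal of limiting only the protocols that greatly affect bandwidth.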

Intrusion prevention system (IPS)
With emerging attack methods, tools, and technologies, traditional detection technologies
used on the FW network layer cannot meet the current security requirements. To address
this problem, the IPS technology is developed.
Compared to the traditional FW and IDS device, IPS has the following advantages:
The traditional FW cannot prevent or defend against attacks at the application layer,
while the IPS device can defend against these attacks, for example, buffer overflow
attacks, Trojan horses, backdoor attacks, and worms.
The IDS device only detects attacks and reports alarms, while the IPS device not only
detects intrusions but also prevents intrusions, protecting the information system in
real time.
The IPS process is as follows:
Reassemble application data
Reassemble fragmented IP packets and TCP flows, ensuring application data
continuity and preventing attacks that attempt to avoid IPS detection.
Identify protocols
The IPS technology identifies multiple protocols commonly used at the application
layer and detects the data, improving effectiveness of detecting attacks.
Match signatures
Match packet content with signatures, and only packets matching signatures are
processed further.
After IPS detection is complete, the FW processes packets matching IPS features
according to the configured policy and response mode.
The IPS working modes include the protection mode and alarm mode. The response
blocking configured in the IPS policy takes effect only in protection mode. In alarm
mode, only alarms are generated.
3.2.4 Load Balancing Design
Load balancing in this zone refers to link load balancing (LLB).
LLB is classified into inbound LLB and outbound LLB.

Inbound LLB uses the Smart DNS technology.

Outbound LLB uses the Smart NAT technology.
Inbound LLB
Figure 3-5 Mechanism for inbound LLB access
(The figure shows the LLB behind the ISP1 and ISP2 links, the UTM and DMZ, the virtual
service addresses VSIP1 and VSIP2, and the Smart DNS mappings abc.com <-> VSIP1 and
abc.com <-> VSIP2; the numbered steps 1-6 correspond to the LLB access process described
below.)


The following steps describe the LLB access process.
1. The PC client sends a request for visiting www.abc.com. (The PC queries the local DNS
server.)
2. The local DNS server sends a request to the LLB for the parsed IP address of
www.abc.com. (The recursive query shows that the LLB is authoritative for DNS
resolution.)
3. The LLB provides DNS services and returns the parsed address to the local DNS server
based on the load balancing policy. The policies include selecting the idlest server, the
server that returns ICMP packets fastest, and the proximity rule.
4. The local DNS server returns www.abc.com -> VSIP2 to the PC.
5. The PC client accesses ISP1 and the servers through VSIP2.
6. The servers return response messages. The LLB selects an interface based on the
recorded ISP route selection to ensure that data is returned through the same ISP.

The Smart DNS function binds public IP addresses from multiple ISPs and
resolves domain names for Internet users. Smart DNS resolution, combined
with the load balancing policy, allows the LLB to dynamically select an
optimal link over which internal resources are provided to external users.
Traffic is dynamically distributed across multiple links to balance the
load. The LLB monitors each link: if an ISP link is faulty, the LLB stops
delivering resolved IP addresses on that ISP to users, ensuring
uninterrupted 24/7 services.
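The inbound selection logic can be sketched as follows. This is a minimal illustrative model: the link records, health flags, load figures, and the least-loaded policy are assumptions for the example, not actual LLB internals.

```python
def smart_dns_resolve(domain, links):
    """Return the VSIP of the best healthy ISP link for the domain.

    links: list of dicts with 'vsip', 'healthy', and 'load' (0.0-1.0).
    Faulty links are skipped, so their addresses are never handed out.
    """
    healthy = [link for link in links if link["healthy"]]
    if not healthy:
        raise RuntimeError("no healthy ISP link for " + domain)
    # Load-balancing policy (one of several): pick the least-loaded link.
    return min(healthy, key=lambda link: link["load"])["vsip"]

links = [
    {"vsip": "VSIP1", "healthy": True, "load": 0.7},   # e.g. on ISP1
    {"vsip": "VSIP2", "healthy": True, "load": 0.3},   # e.g. on ISP2
]
print(smart_dns_resolve("www.abc.com", links))  # VSIP2: less-loaded link
```

If a monitored link fails, its VSIP simply drops out of the healthy set, which models how the LLB stops handing out addresses on a faulty ISP.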
Outbound LLB
Figure 3-6 Mechanism for outbound LLB access

The following describes the outbound LLB access process.
1. Packets from servers in the DMZ pass through the LLB.
2. Based on the preset load balancing algorithm, the LLB selects Router2 as
the egress gateway and translates private IP addresses into public IP
addresses of ISP2. The LLB can maintain multiple default gateway IP
addresses in a default gateway pool.
3. The Internet server receives and processes the request packets and then
returns response packets to the LLB. Because the source IP addresses of
the outbound traffic were translated into public IP addresses of the ISP,
the returned traffic follows the original path.
4. Upon receiving the response packets, the LLB translates the public IP
addresses back into private IP addresses and forwards the packets to
servers in the DMZ.

The LLB provides reliable wide area network (WAN) connections and dynamic
load balancing by detecting link availability and packet loss. In addition,
the LLB can detect the fastest path based on the static IP address segment,
response time, and path quality, and then steer user traffic onto this
path, ensuring high-quality connections. The Smart NAT technology
automatically translates IP addresses to the source addresses of the
corresponding ISP according to the outbound path, ensuring path
consistency.
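The outbound behavior can be sketched as a toy model: the gateway is chosen by a load-balancing policy, and the source address is translated to a public address of the same ISP so return traffic keeps the same path. The gateway names, address pools, and data structures are assumptions for the example.

```python
# Hypothetical per-ISP public address pools; a real LLB holds address
# pools per egress gateway.
ISP_POOLS = {
    "Router1": "200.1.1.10",   # public address on ISP1 (example)
    "Router2": "200.2.2.10",   # public address on ISP2 (example)
}

def smart_nat(packet, gateway_loads, session_table):
    """Choose the least-loaded egress gateway and translate the source
    address to that ISP's pool, recording the session so the response
    can be translated back and follow the same path."""
    gw = min(gateway_loads, key=gateway_loads.get)
    public_src = ISP_POOLS[gw]
    session_table[(public_src, packet["dst"])] = packet["src"]
    return {"src": public_src, "dst": packet["dst"], "via": gw}
```

The session table is what guarantees path consistency: the response arrives at the ISP-specific public address and is mapped back to the original private source.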
3.2.5 Routing Design
As shown in Figure 3-7, the egress routers function as ASBRs, and External
Border Gateway Protocol (EBGP) neighbor relationships are established
between their outbound interfaces and the ISPs. OSPF is configured on the
inbound interfaces of the egress routers, and the egress routers advertise
default routes. Internal Border Gateway Protocol (IBGP) is configured
between the two egress routers. If the ISP link of one egress router is
faulty, the other egress router quickly synchronizes routing information,
implementing fast traffic switchover.
Figure 3-7 Routing design in the Internet zone


3.2.6 VPN Access Design
VPN applications in the Internet zone provide IP Security VPN (IPSec VPN) access, Secure
Sockets Layer VPN (SSL VPN) access, and Layer 2 Tunneling Protocol VPN (L2TP VPN)
access.
IPSec VPN Access
The IPSec protocol family is a series of protocols defined by the Internet
Engineering Task Force (IETF). This protocol family provides high-quality,
interoperable, cryptography-based security for IP packets. The two parties
in communication can encrypt data and authenticate the data source at the
IP layer to ensure the confidentiality and integrity of the data and
prevent replay attacks on the network.
IPSec implements these functions using two security protocols: Authentication Header (AH)
protocol and Encapsulating Security Payload (ESP). Internet Key Exchange (IKE) provides
automatic key negotiation, security association (SA) establishment, and SA maintenance
functions to simplify IPSec use and management.
Figure 3-8 Networking diagram for configuring IKE negotiation


As shown in Figure 3-8, an IPSec tunnel is established between RouterA and RouterB. This
IPSec tunnel protects data flows transmitted between the subnet of PC A (10.1.1.x) and subnet
of PC B (10.1.2.x). The IPSec tunnel uses the ESP protocol, DES encryption algorithm, and
SHA-1 authentication algorithm. The configuration roadmap for IKE
negotiation is as follows:
1. Configure IP addresses for the interfaces on RouterA and RouterB.
2. Configure IKE proposals, and specify the local IDs and IKE peers.
3. Configure ACLs to define the data flows to be protected.
4. Configure static routes, IPSec proposals, and security policies toward
the peers.
5. Apply the ACLs and IPSec policies to the interfaces.
SSL VPN Access
As Internet technologies develop, people can access an enterprise's
internal resources whether they are at home, at work, or on the move.
Enterprise employees, customers, and partners want to access enterprise
intranets anywhere and anytime. However, this also exposes intranets to
unauthorized users and insecure access hosts, threatening intranet
security. The SSL VPN technology meets the access requirements of
enterprise employees, customers, and partners.
SSL VPN protects enterprise intranets against attacks and prevents data
theft. SSL VPN is a secure remote access VPN technology. Based on the
Hypertext Transfer Protocol Secure (HTTPS) protocol, SSL VPN uses the data
encryption, user identity authentication, and message integrity check
mechanisms of the SSL protocol to help ensure that remote access to
enterprise intranets is safe and secure.
SSL VPN is a remote access technology. As shown in Figure 3-9, SSL VPN is applicable to
the following scenarios:

Dynamic remote access: Users can use any terminal to access an enterprise's intranet
through the Internet anytime and anywhere.

Differentiated user access privileges: The SSL VPN gateway assigns different access
privileges to employees, partners, and other users on the Internet. Each user can only
access authorized resources.

Various operating environments for remote terminals: Remote terminals can have
different operating systems installed and use different application programs to access the
enterprise's internal resources.
Figure 3-9 SSL VPN gateway network


An enterprise's intranet connects to the Internet through a router or an
SSL VPN device. The router functions as the SSL VPN gateway, through which
enterprise employees, VIP customers, and partners can access the
enterprise's internal resources.
The networking requirements are as follows:

Enterprise employees are allowed to access the internal web server and
mail server, share the desktop of the internal host (10.138.10.21), and
ping hosts on the network segment 10.138.10.64 to 10.138.10.95.

VIP customers are allowed to access the internal mail server and use Telnet to access the
internal application server.

Partners are allowed to access the internal web server.

To meet access requirements of different types of users, the network administrator can
perform corresponding configurations on the router or SSL VPN device.
Create virtual gateways for enterprise employees, VIP customers, and partners, and configure
resources on the virtual gateways.
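The per-role authorization above can be modeled as a simple mapping from virtual gateways to the resources they publish. The dictionary layout and resource names below are illustrative assumptions derived from the listed requirements, not an actual SSL VPN configuration format.

```python
# Each virtual gateway exposes only the resources its user group may use.
VIRTUAL_GATEWAYS = {
    "employees": {"web", "mail", "desktop-share", "ping-10.138.10.64-95"},
    "vip":       {"mail", "telnet-app"},
    "partners":  {"web"},
}

def may_access(role, resource):
    """Return True if the role's virtual gateway publishes the resource."""
    return resource in VIRTUAL_GATEWAYS.get(role, set())
```

A user can reach only what their own virtual gateway lists, which is the "each user can only access authorized resources" requirement in miniature.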
L2TP VPN
L2TP is a standard defined by the IETF that integrates the advantages of
the Layer 2 Forwarding (L2F) protocol and the Point-to-Point Tunneling
Protocol (PPTP). L2TP messages (classified into control messages and data
messages) are exchanged to maintain L2TP links and transmit Point-to-Point
Protocol (PPP) data. L2TP messages are transmitted over UDP port 1701
based on TCP/IP. PPP defines an encapsulation technology and is a data
link layer protocol used to transmit network layer datagrams over
point-to-point links. PPP runs between users and the network access server
(NAS), where the Layer 2 link endpoint and the PPP session endpoint reside
on the same device. L2TP supports transmission of PPP link layer datagrams
over tunnels, so the Layer 2 link endpoint and the PPP session endpoint
can reside on different devices. Information is exchanged using the packet
switching technology, extending the PPP model.
The following connection modes are supported:

NAS-Initiated
A remote user initiates a connection, and the remote system dials in to the L2TP access
concentrator (LAC) over the PSTN/ISDN. The LAC then sends a request over the
Internet to the L2TP network server (LNS) for setting up a tunnel. The LNS allocates an
IP address to the remote user. The authentication and accounting on the remote user can
be performed on the LAC agent or LNS.

Client-Initiated
A LAC client (local user supporting L2TP) initiates a connection. The LAC client can
directly send a request to the LNS for setting up a tunnel without the LAC. The LNS
allocates an IP address to the LAC client.

LAC-Auto-Initiated
In most cases, an L2TP client initiates a PPP connection to the LAC. If
the LAC also functions as the client, the connection between the user and
the LAC can be of types other than PPP. The LAC can forward IP packets
from the user to the LNS. To use the LAC as a client, create a virtual PPP
user on the LAC and a corresponding virtual PPP server. The virtual user
initiates PPP negotiation with the virtual PPP server, which then sets up
an L2TP tunnel to extend the negotiation to the LNS.
Figure 3-10 Networking diagram for configuring LAC-Auto-Initiated VPN


The configuration roadmap is as follows:
1. Enable L2TP and create a virtual PPP user on the LAC. The PPP user sends a request to
the server headquarters through an L2TP tunnel. After authentication is successful, the
headquarters allocates an internal IP address to the PPP user.
2. Configure a route whose destination address segment is the headquarters
and whose outbound interface is the virtual PPP user interface. Enable
LAC-Auto-Initiated.
3. Configure the address pool on the LNS to allocate addresses to users.
3.3 Partner Access Zone Design
3.3.1 Security Design
The partner access zone provides access for members who have close
relationships with enterprise services. The partner access zone has the
same security level as the DMZ and cannot be directly connected to the
intranet of the data center. As shown in Figure 3-11, a typical partner
access zone is composed of egress routers, FW1, the front service zone,
and FW2. The IDS and SSL VPN devices are optional.
Partners are connected to the partner access zone through egress routers.
Some data centers support partner access through the Internet. In this
case, configure VPN access (IPSec or SSL VPN) on the FWs, or use dedicated
VPN devices such as the Huawei-Symantec SVN3000 and SVN5000.
Figure 3-11 Networking diagram of a partner access zone

The networking requirement is that partners can access the front service zone of the partner
access zone. On FW1, grant partners access only to IP subnets of the front service zone. NAT
is configured on FW1 to hide the internal network topology. Defense against viruses and
worms must be enabled.
Servers in the front service zone terminate user data, and access to the
internal network is initiated from the front service zone. On FW2, permit
traffic initiated from the front service zone to the internal network.
You are advised to deploy the FWs in in-line mode and to deploy both FW1
and FW2 as two-node clusters in hot backup mode. For details about FW
deployment, see section 3.2.3 "FW Design".
3.3.2 Routing Design
In the partner access zone, determine IP address allocation mode based on the relationship
with the partner.

The enterprise dominates the relationship with Partner 1 (for example, a manufacturer).
The egress router assigns an IP address to Partner 1. Partner 1 connects to the enterprise
through static routes.

Partner 2 (for example, a bank) dominates the relationship with the enterprise. The egress
router assigns an IP address to the enterprise. Partner 2 connects to the enterprise through
static routes.
Figure 3-12 Routing design for the partner access zone

3.4 Intranet Zone Design
The intranet zone provides access for internal users in an enterprise.
When the data center is deployed outside the enterprise campus, internal
users access the data center through a WAN. When the data center is
deployed inside the enterprise campus, internal users access the data
center through the campus LAN.
Internal users located in campuses other than the data center campus can
access the data center in either of the following ways:

Connect to the campus core network through a WAN and then connect to the
data center through the campus LAN. This mode is the same as the scenario
in which the data center is deployed inside the enterprise campus.

Connect directly to the data center.
When the data center is deployed inside the enterprise campus, the core
switch in the data center is connected to the core switch on the campus
network, with FWs deployed in between. When the data center is deployed
outside the enterprise campus, two egress routers connect to the WAN and
then to the core switch in the data center through FWs. As shown in Figure
3-13, if a data center requires both LAN and WAN access, combine the FW1
and FW2 clusters based on the number of access users and FW capacity.
Figure 3-13 Networking diagram of the intranet zone


Configure OSPF for the core switch on the campus network and the router. An OSPF
backbone in the data center is therefore composed of these two devices and the core switch.
The intranet is a relatively secure zone that faces fewer risks. The main
risks include unauthorized service access, attacks on the core switch,
viruses, and worms. Grant different access privileges to employees to
restrict their access to different zones. The FWs need to support the
Unified Threat Management (UTM) function. Specify access ranges for
internal users by configuring ACLs; for example, allow internal users to
access only the office server. Deploy FW two-node clusters in hot backup
mode.
Branches can be connected to the data center through the WAN using
multiple methods, such as dedicated lines, VPNs, and satellite links.
These methods have long delays, and WAN resources are precious. With
limited bandwidth, deploy QoS to guarantee service quality. Deploy
application acceleration systems in both the data center and the branches
to improve link efficiency and decrease response delay.
Deploy a high-performance accelerator in the data center and
low-performance accelerators in the branches in symmetrical mode.
On the WAN, accelerators can be deployed in bypass or in-line mode. The
bypass mode is commonly used. In this mode, you can select specific
applications to accelerate, easing pressure on the device.
Accelerators are connected to egress routers in bypass mode, as shown in Figure 3-14.
Figure 3-14 Networking diagram for a branch access to the data center


4 Connection and Disaster Recovery
Solution Design
4.1 Overview
More and more enterprise data centers are being deployed to ensure
information security and support service expansion. The following
describes the routing, Layer 3 service connection, and Layer 2 service
expansion for multiple data centers.
4.2 Network Architecture for Multiple Data Centers
A data center processes core services and stores large amounts of service
data. Services of a data center are required in multiple regions, and
enterprise campuses require 24/7 data center services. As a result, single
data centers have evolved into multiple data centers across multiple
regions. The deployment modes of data centers include:

Two data centers in the same city
A backup data center is established within 80 km of the active data
center in the same city. The backup data center replicates service data
from the active data center in real time using dedicated lines or
transmission devices. In addition to backup, some services can be migrated
from the active data center to the backup data center. The active and
backup data centers are in dual-active state.

Three data centers in two regions
To guard against natural disasters such as earthquakes, it is recommended
that the enterprise establish a remote disaster backup center in another
city more than 400 km away from the active and backup centers. The remote
disaster backup center regularly synchronizes data from the production
center and the local disaster backup center. If a disaster occurs, the
remote disaster backup center can recover services using the backup data,
ensuring data integrity. This mode is the most commonly used deployment of
multiple data centers.
Figure 4-1 shows the networking diagram of multiple data centers.
Figure 4-1 Networking diagram of three data centers in two regions


Three data centers in two regions + multiple data centers at different levels
With the development of enterprise services, the architecture of three
data centers in two regions can no longer meet the requirements for
service development. The architecture of multiple data centers at
different levels has emerged to replace the original network architecture.
If a regional data center is established in a region, enterprises in that
region access it first. In this manner, the load on the global data
centers is relieved, WAN bandwidth is saved, and the response time of
regional services is shortened. In addition, if a fault occurs in one
region, services in other regions are not affected.
Figure 4-2 shows the networking diagram of three data centers in two
regions plus multiple data centers at different levels.
Figure 4-2 Networking diagram of three data centers in two regions+multiple data centers with
different levels

4.3 Routing Reliability Design
4.3.1 Routing Overview
Interior Gateway Protocol (IGP) routing protocols are configured for the intranet zone of the
data center, for example, OSPF and IS-IS. To maintain and manage the network conveniently,
it is recommended that you use OSPF to ensure network stability and fast convergence. BGP
has powerful routing control and policy functions and applies to large connected networks.
BGP is recommended for transmitting routes among multiple data centers.
4.3.2 BGP Routing Design Between Regional Data Centers and
Global Data Centers
When regional data centers are connected to the global data center, each data center functions
as an Autonomous System (AS). ASs advertise their routes using EBGP. The AS-Path and
MED routing attributes are used to control and select routes, enhancing link reliability.
The global data center is connected to the global disaster recovery backup center using two
independent links: production service link and disaster backup link. The two links are isolated
to guarantee bandwidth. The regional active data center is connected to the regional disaster
backup center using two independent links: production service link (connecting to the active
data center) and disaster backup link (connecting to the disaster backup center).
Figure 4-3 BGP AS-Path route selection


EBGP prefers the route with the shortest AS-Path. As shown in Figure 4-3,
AS 3 receives route 10.1/16 from AS 1, AS 2, and AS 4. The AS-Paths of the
received routes are 1; 2 1; 4 1; and 4 2 1.

The route advertised from AS 1 (active path) has the shortest AS-Path. Therefore, this
route has the highest priority and is selected.

The route with AS-Path 4 2 1 (standby path 3) has the longest AS-Path.
Therefore, this route has the lowest priority.

The routes with AS-Paths 2 1 (standby path 2) and 4 1 (standby path 1)
have the same AS-Path length, so the BGP MED attribute is needed to
distinguish their priorities. As shown in Figure 4-4, the MED value of
route 10.1/16 advertised from AS 4 is 100, smaller than that of the route
advertised from AS 2. Therefore, standby path 1 has a higher priority
than standby path 2.
Figure 4-4 BGP MED route selection


BGP has powerful routing control and selection capabilities. By controlling the BGP AS-Path
and MED attributes, users can effectively solve the route selection and link reliability
problems in multiple data centers.
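The comparison above can be sketched as a tiny selection function. This models only the two selection steps discussed (AS-Path length, then MED), not the full BGP decision process, and the route names are illustrative:

```python
def best_route(candidates):
    """Pick the preferred route: shortest AS-Path first, then lowest MED.

    candidates: (name, as_path, med) tuples. Only the two attributes
    described above are compared; real BGP applies more tie-breakers.
    """
    return min(candidates, key=lambda route: (len(route[1]), route[2]))

routes = [
    ("active",   [1],       0),    # AS-Path 1
    ("standby1", [4, 1],  100),    # AS-Path 4 1, MED 100
    ("standby2", [2, 1],  200),    # AS-Path 2 1, MED 200
    ("standby3", [4, 2, 1], 0),    # AS-Path 4 2 1
]
```

With all four routes present the active path wins on AS-Path length; if it is withdrawn, standby path 1 beats standby path 2 on the lower MED, mirroring the Figure 4-4 example.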
4.3.3 BGP Routing Design Between Regional Data Centers and a
Country/Region Branch
A country/region branch is connected to the regional active and standby
data centers using active and standby links from different carriers.
Figure 4-5 shows the network between a country/region branch and the
regional data centers.
Figure 4-5 Country/region branch's connection to regional data centers

The active link of the country/region branch is connected to the regional active data center,
and the backup link to the regional backup data center. The regional active data center,
regional backup data center, and country/region branch are defined as different ASs by EBGP.

Active path. Generally, the country/region branch is directly connected to the regional
active data center using the active access link.

Standby path 1. If the active access link is faulty, the country/region branch is connected
to the regional active data center through the regional backup data center using the
standby access link.

Standby path 2. If the regional active data center is faulty, the traffic is switched to the
standby path 2 on the application layer using DNS mechanism.
4.4 Service Recovery Design in Multiple Data Centers
Enterprise users and branches are connected to the active data center. If a fault occurs in the
data center, services of enterprise users and branches are switched to the backup data center.
The switchover can be implemented in multiple modes such as manual switchover, DNS
switchover, HTTP redirection switchover, and route health injection (RHI) switchover.
The manual switchover is used for cold backup in a data center. The active
data center in dual-active data centers can use DNS or HTTP redirection
switchover to implement load balancing and proximity-based selection. The
backup data center in dual-active data centers can use RHI switchover.
Manual Switchover
This section assumes that the IP subnet of the active data center is
A.B.0.0, through which enterprise users and branches connect to the active
data center. The disaster recovery data center is configured with the same
IP subnet A.B.0.0, which remains inactive during normal operation. If a
disaster occurs in the active data center, manually activate the IP subnet
in the disaster recovery data center so that enterprise users and branches
can connect to the disaster recovery data center directly.
DNS Switchover
To implement DNS switchover, configure an intelligent DNS device, also
known as a Global Server Load Balancer (GSLB), in each data center. Based
on real-time synchronization, automatic switchover and active/active load
balancing can be implemented for services. As shown in Figure 4-6,
intelligent DNS servers (global LBs) monitor the status of the web servers
and local LBs, and provide DNS resolution results based on that status.
If a web server fails in the active data center, the local LB switches services on this web
server to the other web server in the data center. If the whole active data center fails, the
global LB switches services in the data center to the disaster recovery data center.
Figure 4-6 Automatic switchover and active/active load balancing implemented based on the
active/backup intelligent DNS/GSLB

The DNS service has a great impact on services in the data center, so
disaster recovery for DNS servers must be taken into consideration. In
multiple data centers, it is recommended that you deploy the slave DNS
server in the active data center and the master DNS server in the backup
data center. This guarantees proper operation of DNS services even when
the whole active data center fails.
HTTP Redirection Switchover
HTTP redirection switchover requires GSLB devices and is implemented in a
similar way to DNS switchover. Upon receiving a request from a user, a
GSLB device selects an optimal IP address and redirects the request to
that address using application layer protocols. This mode has lower
performance and supports only protocols with redirection capability, such
as HTTP and the Multimedia Messaging Service (MMS).
The HTTP redirection switchover process is as follows:
1. A user sends an HTTP/RTSP request message.
2. The message reaches the active data center.
3. When detecting a fault in the active data center, the GSLB device sends the IP address of
the selected disaster recovery data center to the user through an HTTP/RTSP 302
redirection message.
4. The user request is automatically redirected to the IP address of the selected disaster
recovery data center.
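The steps above can be sketched as follows. The IP addresses (taken from documentation address ranges) and the response structure are illustrative assumptions:

```python
def gslb_response(active_up,
                  active_ip="198.51.100.10",   # example active DC address
                  dr_ip="203.0.113.10"):       # example DR DC address
    """Serve from the active DC while it is healthy; otherwise answer
    with an HTTP 302 pointing the client at the disaster recovery DC."""
    if active_up:
        return {"status": 200, "served_by": active_ip}
    return {"status": 302, "location": "http://" + dr_ip + "/"}
```

The client's HTTP library follows the 302 automatically, so from the user's point of view the request is transparently redirected to the selected disaster recovery data center.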
RHI Switchover
LBs in the active and backup data centers detect the status of the
background servers in their data centers. If the servers in a data center
are running properly, the LB in that data center advertises a host route
to the network. The host route advertised by the active data center has a
low path cost, while the host route advertised by the backup data center
has a high path cost. When both data centers are working properly, a user
request can reach two host routes with different path costs, and the route
with the lower path cost is used, connecting the user to the active data
center. If a disaster occurs in the active data center, only the route
with the high path cost remains, and the user connects to the backup data
center through this route.
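The RHI behavior above can be modeled in a few lines. The cost values, health flags, and data structures are illustrative assumptions:

```python
def advertised_routes(dcs):
    """Host routes currently in the network: an LB injects its route
    only while its servers pass health checks (route health injection)."""
    return {name: dc["cost"] for name, dc in dcs.items() if dc["healthy"]}

def select_dc(dcs):
    """Clients follow the reachable host route with the lowest path cost."""
    routes = advertised_routes(dcs)
    return min(routes, key=routes.get)
```

While both routes are advertised, the low-cost route wins; when the active data center's LB withdraws its route, only the backup's high-cost route remains and traffic switches over without any client-side change.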
4.5 Layer 3 Communication Design
4.5.1 Overview
Communication among data centers is implemented using VPNs or Synchronous
Digital Hierarchy (SDH) dedicated lines. A VPN transmits encapsulated or
encrypted private data over a public network, establishing an enterprise
intranet by leveraging public network resources.
MPLS L3VPN can be configured to allow Layer 3 communication. The egress
router in a data center functions as a customer edge (CE) to connect to
the provider edge (PE) on the ISP network. VPN routes are exchanged
between CEs and PEs, and the PEs advertise the VPN routes across the
public network so that they are automatically transmitted to the other
PEs and CEs. A PE matches packets against its VPN routing and forwarding
(VRF) table, encapsulates matched packets with two layers of labels, and
sends them across the public network; the peer PE removes the labels and
forwards the packets to the CE in the peer data center, where they are
forwarded at Layer 3. In this way, Layer 3 communication is implemented
among multiple data centers.
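The two-label encapsulation described above can be sketched as a toy model: the inner label identifies the VPN/VRF at the egress PE, and the outer label is the public-network tunnel label. The label values and the VRF lookup table are illustrative assumptions, not a real forwarding plane:

```python
def pe_encapsulate(ip_packet, vpn_label, tunnel_label):
    """Ingress PE: push the VPN label, then the tunnel label outermost."""
    return {"labels": [tunnel_label, vpn_label], "payload": ip_packet}

def pe_decapsulate(mpls_packet, vrf_by_label):
    """Egress PE: strip the tunnel label, then use the inner VPN label
    to select the VRF that forwards the inner IP packet toward the CE."""
    _tunnel_label, vpn_label = mpls_packet["labels"]
    return vrf_by_label[vpn_label], mpls_packet["payload"]
```

The inner label is what keeps overlapping customer traffic separated on the shared backbone: each VPN's routes live in their own VRF, selected only after the tunnel label is removed.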
4.5.2 Layer 3 Communication Using MPLS L3VPN
Figure 4-7 Layer 3 communication using MPLS L3VPN

As shown in Figure 4-7, each data center is a site belonging to one VPN.
IP addresses within a VPN cannot overlap. Egress routers function as CEs
to connect to PEs. To implement load balancing and link backup, configure
two CEs, each connecting to a PE through its own link. If the data center
is large, add CEs and access links.
A VRF table is created on each PE. Configure the route distinguisher (RD)
and route target (RT) for the VRF instances bound to CE-facing interfaces
to control how routes are exchanged. Routes are exchanged between CEs and
PEs using EBGP, OSPF, RIP, or static routes.

Run EBGP between a CE and a PE.
As shown in Figure 4-8, if the CE and PE are located in different ASs, run EBGP between the
CE and PE.
Figure 4-8 EBGP running between a CE and a PE



Run OSPF or RIP between a CE and a PE.
When OSPF runs between a CE and a PE, the PE redistributes the OSPF routes into BGP, carrying
the OSPF link-state information in BGP extended community attributes so that link-state
advertisements (LSAs) can be reconstructed and exchanged with the OSPF process at the peer
VPN site.
Figure 4-9 OSPF or RIP running between a CE and a PE


4.5.3 Layer 3 Communication Using SDH
Egress routers in the data centers function as PEs and connect to the carrier SDH network
through POS interfaces configured on the routers. To transmit storage, Office Automation (OA),
and production services among data centers, isolate the services using VPNs, ensuring service
security.
Figure 4-10 Layer 3 communication using SDH


The configuration roadmap is as follows:
1. Run OSPF to advertise routes between PE1 and PE2.
2. Configure basic MPLS functions and LDP for PE1 and PE2.
3. Configure VRF instances for PE1 and PE2, and bind interfaces to the VRF instances.
4. Set up Multiprotocol IBGP (MP-IBGP) peer relationships between PE1 and PE2.
5. Configure a reachable route between PE1 and CE1 and between PE2 and CE2.
4.6 Layer 2 Communication Design
4.6.1 Overview
Multiple data centers can communicate at Layer 2 using VPN, MPLS, or SDH technology.
The traditional MPLS L2VPN mode, Virtual Leased Line (VLL), provides a point-to-point
L2VPN service that directly connects two points across the public network, but it does not
support multipoint switching on the service provider network. Virtual Private LAN Service
(VPLS) was developed from MPLS L2VPN to provide multipoint-to-multipoint VPN services and
is a complete solution for Layer 2 communication among data centers.
4.6.2 Layer 2 Communication Using VPLS
As shown in Figure 4-11, each data center is a site. Egress routers function as CEs to connect
to PEs. PEs are fully meshed through Pseudo-Wires (PWs) and forward packets according to
split horizon, which prevents loops.
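Split horizon can be illustrated with a small Python model (port names such as ac1 and pw-to-pe2 are made up for the example):

```python
# Minimal model of VPLS flooding with split horizon: a frame that
# arrives on a pseudo-wire (PW) is flooded to local attachment circuits
# only, never to other PWs, which prevents loops in the full mesh of PEs.
def flood_targets(in_port, ports):
    """Return the ports an unknown-unicast frame is flooded to."""
    out = []
    for p in ports:
        if p == in_port:
            continue                        # never send back where it came from
        if in_port.startswith("pw") and p.startswith("pw"):
            continue                        # split horizon: no PW-to-PW forwarding
        out.append(p)
    return out

ports = ["ac1", "pw-to-pe2", "pw-to-pe3"]
assert flood_targets("pw-to-pe2", ports) == ["ac1"]
assert flood_targets("ac1", ports) == ["pw-to-pe2", "pw-to-pe3"]
```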
Figure 4-11 Layer 2 communication using VPLS (egress CEs in DC1, DC2, and DC3, each with a
CSS core and IP/FC SAN, connect over the MPLS DCN to PEs that are fully meshed with PWs; each
PE maintains a VSI table mapping VSI and MAC address to an outgoing port or PW, for example
VPN1/MAC A to Port1 in VLAN 10 and VPN1/MAC B to PW1)
Multiple servers are deployed in the data center and are assigned multiple VLANs based on
the service type.

Ethernet mode
If the frame header of a packet sent from the CE to the PE contains a VLAN tag, that tag is
treated as the inner tag. Enable QinQ on the access interfaces of the PE so that the PE adds
an outer provider tag (P-tag) to the packet and then attaches a PW label and a tunnel label
before sending it. The peer PE looks up its virtual switch instance (VSI) upon receiving the
packet and forwards the packet to the CE. The CE forwards the packet to the specified server
based on the inner VLAN tag and destination MAC address. In this way, Layer 2 communication
is implemented among data centers.
VLAN mode
The carrier assigns a VLAN. Packets sent from the CE to the PE must carry the tag of this
VLAN; otherwise, communication fails. This mode affects the VLAN planning of the data center
and requires that the VLANs of different CEs connected to the same PE do not overlap.
If QinQ is not enabled on the access interfaces and the PW works in raw mode, the PE may
remove the user VLAN tag when processing VPLS packets.
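The QinQ tag handling in Ethernet mode can be sketched as follows (tags are plain integers, and the VLAN IDs are arbitrary example values):

```python
# Sketch of QinQ handling: the PE pushes an outer provider tag (P-tag)
# above the customer VLAN tag, and the peer PE pops it before the frame
# reaches the CE. A frame's tag stack is modeled as a list of integers.
def pe_ingress(frame_tags, p_tag):
    """QinQ-enabled access interface: push the provider tag on top."""
    return [p_tag] + frame_tags             # outer tag first

def pe_egress(frame_tags):
    """Peer PE: pop the outer P-tag, keeping the inner customer tag."""
    return frame_tags[1:]

customer_frame = [10]                       # inner VLAN tag 10 from the CE
on_wire = pe_ingress(customer_frame, p_tag=200)
assert on_wire == [200, 10]
assert pe_egress(on_wire) == [10]
```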
To implement load balancing and link backup, dual-home the CEs to the PEs, as shown in
Figure 4-12.
The configuration roadmap is as follows:
1. Create an Eth-Trunk in static LACP mode between CE1 and PE1 and between CE1 and PE2, and
add member interfaces to the Eth-Trunks.
2. Create an E-Trunk on PE1 and PE2. Set the LACP priority, system ID, E-Trunk priority,
source IP address, and peer IP address.
3. Add the Eth-Trunks to the E-Trunk.
Figure 4-12 VPLS networking (CEs dual-homed to PEs)


If a data center is large and multiple CEs need to connect to the PE, lease multiple AC links
from the carrier. Enable Hierarchical VPLS (H-VPLS) and add a user-end provider edge (UPE)
that aggregates all the CEs. Connect the UPE to the network provider edge (NPE) in LSP
(label switched path) or QinQ mode.
Figure 4-13 Layer 2 communication using H-VPLS


If only one link exists between the UPE and the NPE, a link failure disconnects all VPNs
attached to the aggregation device. Therefore, link redundancy is required in both LSP and
QinQ modes. A device uses only the master link to access the VPLS network. If a master link
failure is detected, the VPLS system switches traffic to the backup link to ensure
uninterrupted VPN services.
LSP mode: Set up an LDP session between the UPE and the NPE, and determine whether the
master PW has failed based on the LDP session status.
QinQ mode: Run STP between the multi-tenant unit (MTU) switch and the connected PEs,
ensuring that the remaining link is used if the other fails.
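The master/backup switchover can be modeled minimally (an illustrative sketch, not the actual VPLS state machine):

```python
# Minimal model of the master/backup access links between a UPE and an
# NPE: traffic stays on the master link until a failure is detected
# (for example, the LDP session goes down), then moves to the backup.
class AccessLinks:
    def __init__(self):
        self.up = {"master": True, "backup": True}

    def active(self):
        """Return the link currently carrying VPLS traffic."""
        for name in ("master", "backup"):
            if self.up[name]:
                return name
        raise RuntimeError("VPLS access completely down")

    def fail(self, name):
        self.up[name] = False

links = AccessLinks()
assert links.active() == "master"
links.fail("master")                 # LDP session to the NPE goes down
assert links.active() == "backup"
```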
4.6.3 Layer 2 Communication Using SDH
Egress routers in the data centers function as PEs and connect to the carrier SDH network
through POS interfaces configured on the routers.
Figure 4-14 Layer 2 communication using SDH


The configuration process is as follows:
1. Run IS-IS to advertise routes between PE1 and PE2.
2. Configure basic MPLS functions and LDP for PE1 and PE2.
3. Enable MPLS L2VPN on PE1 and PE2.
4. Configure VSIs for PE1 and PE2 in LDP mode.
5. Bind interfaces on PE1 and PE2 to the VSIs.
4.7 Optical Transmission-based Communication Design
4.7.1 Overview
Data centers in the same city are close to each other, so optical fibers are easy to obtain.
Bare-fiber connections and a self-built transmission network provide high-bandwidth access
for various services. Service data can be synchronized among the data centers, and key
service data is backed up.
4.7.2 Network Design
As shown in Figure 4-15, four optical transport network (OTN) devices deployed in two data
centers form a 1+1 backup data center group, and different services can be multiplexed onto
different wavelengths.
Figure 4-15 Optical fiber connection between data centers in the same city (the core-area
routers and OTN devices of the two data centers are linked by optical wavelength division
transmission; each site hosts web, database, and application servers behind a CSS)

The fiber channel (FC) interface on an OTN device connects to the FC switch in a data center
to replicate storage data in real time. A storage area network (SAN) requires separate
network planes, so two OTN devices need to be deployed in each data center. If an enterprise
does not have high reliability requirements, one OTN device can carry both SAN planes on
different wavelengths.
The data centers back up both FC and IP data to support service data backup and
communication among other services. To process IP services, deploy routers between the OTN
devices and the core switches in the data centers; POS interfaces on the routers connect to
the OTN devices. Configure EBGP on the routers to exchange routes between the data centers.
Various services are transmitted among the data centers, for example, production, OA, and
backup services. Data backup includes backup of files, IP SAN data, and database servers. To
process these services, connect multiple interfaces on the routers to the OTN devices.
Configure MPLS L3VPN to isolate the services, and configure H-QoS on the routers to ensure
service quality.
If communication at Layer 2 is required among data centers in the same city, configure MPLS
and VPLS for the routers.
5 Network Management Design
5.1 Overview
5.1.1 NMS
As enterprise services grow, an increasing number of network resources are deployed in the
data center and the network scale keeps expanding, which requires efficient end-to-end
network management. An NMS manages both individual network elements (NEs) and the overall
network, including network resources, topology, faults, configurations, performance, and
reports. An NMS manages the network efficiently and reduces the workload of network
maintenance personnel.
5.1.2 Network Scale
When evaluating the network scale, consider the following concepts:
Equivalent NE
NEs support different functions, features, and cross-connect capacities, carry different
numbers of boards, ports, and channels, and therefore occupy different amounts of NMS
resources. The maximum number of NEs that can be managed depends on the NE types. The
equivalent NE is a unified unit of measure: NE types and port counts are converted into
equivalent NEs based on the system resources they occupy.
Equivalent coefficient
Equivalent coefficient = Resources occupied by a physical NE or port/System resources
occupied by one equivalent NE
Network scale
Use the NE equivalent coefficients to calculate the number of equivalent NEs of each type,
and from that the network scale. Reserve capacity for future expansion; if there is no
expansion plan, reserve capacity at a ratio of 1:0.6 (current:reserved). The network scale
is calculated as follows: Planned network scale = Current network scale + Reserved network
scale.
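The scale calculation can be illustrated numerically (the equivalent coefficients below are made-up example values, not published figures; obtain the real coefficients from the NE documentation):

```python
# Network scale estimation from equivalent NEs. The coefficients below
# are illustrative only; use the vendor-published values in practice.
COEFFICIENTS = {"core-switch": 5.0, "access-switch": 1.0, "router": 3.0}

def equivalent_nes(inventory):
    """inventory maps NE type -> number of physical NEs of that type."""
    return sum(COEFFICIENTS[t] * n for t, n in inventory.items())

current = equivalent_nes({"core-switch": 4, "access-switch": 200, "router": 10})
reserved = current * 0.6        # 1:0.6 reservation when no expansion plan exists
planned = current + reserved    # Planned = Current + Reserved
assert (current, planned) == (250.0, 400.0)   # a small network (< 2000)
```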
The following lists the mapping between network scale types and the number of equivalent NEs:
Small network: fewer than 2000 equivalent NEs
Medium network: 2000 to 6000 equivalent NEs

Large network: 6000 to 15000 equivalent NEs
Very large network: 15000 to 20000 equivalent NEs
5.1.3 NMS Design
The NMS uses a widely adopted architecture in which servers communicate with NEs in either
in-band or out-of-band networking mode, so a server fault does not affect the networking of
the managed devices or the services they provide. The two networking modes are as follows:
In-band networking
In-band networking means that the NMS transmits management information over the service
channels of the managed devices.
Advantage: flexible, simple, and cost-efficient networking.
Disadvantage: When the network fails, the information channels between the NMS and the
managed network are interrupted, so the NMS cannot be used to maintain the network.
Out-of-band networking
Out-of-band networking means that the NMS transmits management information over channels
provided by devices other than the managed devices. Generally, the management interfaces on
the control boards of the managed devices are used as the access interfaces.
Advantage: The NMS reaches the managed devices through other devices rather than directly
over their service channels. Compared with in-band networking, out-of-band networking
provides more reliable management channels: when a managed device fails, the NMS can obtain
the device information in a timely fashion and monitor the device in real time.
Disadvantage: The networking is costly, because a separate network of non-managed devices
must be built to provide maintenance channels that are independent of the service channels.
Huawei provides the eSight network management system, which suits the network scale and
application features of enterprise data centers.
5.2 Commonly Used NMS Management Tools

SecureCRT
SecureCRT is a terminal emulation program that supports the SSH1, SSH2, Telnet, and rlogin
protocols and connects to remote systems running the Windows, UNIX, or VMS operating system.
SecureCRT transfers files over encrypted channels using the built-in VCP command-line
program. Its SSH1 and SSH2 support includes password and RSA authentication as well as DES,
3DES, and RC4 encryption.

MG-SOFT MIB Browser
MIB Browser uses standard Simple Network Management Protocol version 1 (SNMPv1),
Community-based SNMP version 2 (SNMPv2c), and SNMPv3 protocols to:
Monitor and manage SNMP devices
Perform operations such as SNMP Get, SNMP GetNext, SNMP GetBulk, and SNMP
Set
Capture SNMP trap and notification packets sent by SNMP devices

WhatsUp Gold
WhatsUp Gold is a powerful automatic discovery and alarm system. It can search for devices
on a specified network segment and discover the services running on them. WhatsUp Gold can
automatically discover the interfaces on a router or switch running SNMP and create a
network topology view.
5.3 eSight System Design
5.3.1 Overview
eSight, the network management system for a data center, uses the browser/server (B/S)
architecture, so the system is updated and maintained simply by updating the software on the
server. This reduces the cost and workload of system maintenance and upgrade. The B/S
architecture has the following advantages:
The architecture is distributed, so NMS administrators can query and view information
anytime, anywhere.
Services can be easily extended: server functions are added by adding web pages.
Maintenance is easy: modifying web pages updates the information for all NMS administrators.
eSight supports the following functions:
Security management
Log management
NE access
Topology management
Alarm management
Performance management
Report management
Configuration file management
Hierarchical NMS management
These functions meet the requirements for enterprise-class data center services.
Security Management
eSight ensures the NMS security by managing users, roles, rights, and operation sets. eSight
controls user rights based on the role model, and specifies types of devices to manage and
NMS operations for users.

Allows NMS administrators to specify the type of devices to manage.

Allows NMS administrators to specify NMS operation sets.

Allows NMS administrators to specify the time and IP address segment for the NMS
server access.

Allows NMS administrators to set the security policy for setting the account and
password to improve account security.
User management
eSight defines user roles, sets the access control policy, and specifies user permission for
managing certain types of devices, performing certain operations, and logging in to eSight
based on the user role and access control policy. The following user attributes can be set:

User name

Password

Usage status

User role

Login period

IP address segment

User description
NOTE

By default, eSight provides the admin user, which is assigned the super administrator role.
Role management
eSight manages user roles based on the user role model including the role name, managed
devices, operations, and description. eSight allows NMS administrators to configure the type
of managed devices and configure operation permission so that NMS administrators can
control devices and perform certain operations after being granted specific role permission.

Permission for managing devices: Allows NMS administrators to manage certain subnets
and devices and import the self-defined device sets in the management domain.

Operation permission: Specifies NMS operations for users.
NOTE

By default, the super administrator, security administrator, operator, and monitor roles exist in eSight.
Access control management
eSight specifies user login period and the IP address segment.

Login period: Specifies the dates, time ranges, duration, and days of the week during which users can log in.

Access control: Configures the IP address and IP address segment of a device accessing
eSight.
Security policies
eSight provides configurable security policies, such as the account and password policies.
Security policies take effect on all user accounts in eSight.

Account security policies
Minimum length of a user account.
Policy for locking the user account when a user fails to log in to eSight using the
account.
Policy for disabling user accounts that are out of use for a specified period.

Password security policies
Minimum length of a user password.
Password policy.
Maximum times that the same character is used in a password.
Restrictions on special characters in a password.
Minimum duration for changing the password.
Policy for changing the password that is about to expire.
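A password policy checker of this kind might look as follows (the thresholds and rules are illustrative defaults, not eSight's actual values):

```python
# Sketch of an account/password policy checker like the one described
# above. The thresholds are illustrative defaults, not eSight's values.
import re

def check_password(pw, min_len=8, max_repeat=3):
    """Return a list of policy violations (empty means the password passes)."""
    errors = []
    if len(pw) < min_len:
        errors.append("shorter than the minimum length")
    for ch in set(pw):
        if pw.count(ch) > max_repeat:
            errors.append("character %r used more than %d times" % (ch, max_repeat))
    if not re.search(r"[^A-Za-z0-9]", pw):
        errors.append("no special character")
    return errors

assert check_password("Str0ng!pass") == []
assert "no special character" in check_password("abcdefgh1")
assert "shorter than the minimum length" in check_password("a!b")
```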

Security monitoring
eSight can monitor the current user, obtain user information such as the user name, IP
address, login time, and user role, and force the user to go offline.

Self-defined management domain
eSight defines devices of the same type in a management domain and associates the
management domain with the user role.
Log Management
Logs include operation logs, security logs, and system logs, and record important operations
of different security levels such as warning, minor, and major. NMS administrators can view
and search for logs in the log list, and view details of system logs.

Operation logs
Records NMS operations that NMS administrators initiate.

Security logs
Records information about system security.

System logs
Records key information about operations and tasks that the NMS triggers.
NE Management
eSight allows NMS administrators to manage NEs on the NMS, create subnets, and add NEs
to subnets based on the NE location.

Add NEs to the NMS.
NMS administrators can add NEs to the NMS in the following ways:
Manually add NEs to the NMS
NMS administrators can enter the IP address of an NE, set SNMP parameters, add the
NEs to be managed by NMS, and create subnets.
Automatically discover and add NEs to the NMS
NMS administrators can search for NEs in batches based on the network segment,
and discovered NEs are automatically added to the NMS.
Import NEs in batches
NMS administrators can import IP addresses and SNMP parameters of NEs in files to
the NMS in batches.

Delete NEs
NMS administrators can delete devices that are out of use from the NMS.

Manage subnets
NMS administrators can create subnets, add NEs to subnets based on the NE location,
and delete subnets.

Change subnets of NEs
NMS administrators can change subnets to which NEs belong based on the NE location.

Manage a single NE
View NE information
NE overview, such as basic information, key performance indicators (KPIs), top N alarms,
and traffic on NE interfaces.
NE panel that displays the NE information on the GUI.
Alarm list that displays the current alarms generated on the NE.
Performance status that indicates the KPI of the current NE.
Configure the NE
Manage the NE on the built-in web management page.
Configure NE services using intelligent configuration tools.
Enable and disable NE interfaces displayed in the interface list, and enable and
disable the alarm masking function of interfaces.
Query the IP address of the NE in the IP address list.
Query and back up the configuration file of the NE.
Modify protocol-related parameters
Modify Telnet parameters of the NE.
Modify SNMP parameters of the NE.
Configure information on the NE management home page that displays:
Basic information such as the NE name, type, SYSOID, and version. NMS
administrators can perform the ping, trace, and Telnet operations on the NE.
KPI diagram of the NE.
Top N alarms that the NE generates. These alarms are sorted by severity and then by
time in descending order.
Traffic on NE interfaces.

Manage NE adaptive packages
eSight provides NMS administrators with web services that are independent of the
network application platform so that NMS administrators can load, install, upload, and
upgrade NE adaptive packages.
Topology Management
The topology view displays managed NEs and the NE connection status. NMS administrators
can view the operating status of the entire network in the topology view.
Physical and IP topology views are provided.
The following terms are used in topology management:
NE: NEs are the devices to be managed and the core components in topology management. In the
topology view, different icons indicate different NE types.
Subnet: A network can be divided into several subnets based on certain rules, such as the NE
location or NE type.
Link: Links are physical or logical connections among communication devices.


Topology view
Figure 5-1 shows a typical topology view of NE hierarchical architecture, NE
connections, and NE running status on the entire network.
Figure 5-1 Topology view



Display the topology view
In the preceding figure, the topology tree on the left pane displays the hierarchical
architecture, and the topology view on the right pane shows the NE deployment.
The topology view supports eagle eye and full screen display modes.
The topology view displays alarm status of NEs and links, and corresponding tips.

Perform operations in the topology view
Zoom in or zoom out the topology view.
Export images from the topology view, print images, and set background images.
Move NEs in the topology view and save the settings.
Use the shortcut function to perform other operations.

Display the alarm severity
The color of an NE in the topology view reflects the alarm severity. NMS administrators
can view alarms and alarm severities of all NEs on the entire network in order to handle
critical alarms immediately.

Access the NE management page
NMS administrators can use the shortcut menu of NEs displayed in the topology view to
access the NE management page.

IP topology
NMS administrators can access the IP topology page to view routers, Layer 2 devices,
and links among them in the IP topology menu.
Alarm Management
eSight monitors network status in real time and allows the network administrator to view
real-time alarms, report alarms, configure alarm masking rules and alarm tone, and receive
alarms remotely. The network administrator can handle alarms in time to ensure the proper
network running.
To manage alarms more effectively, the network administrator can configure the alarm tone,
and rules for remote alarm notification and alarm masking.
eSight allows the network administrator to perform the following operations:

View alarm records
NMS administrators can view different types of alarm records on multiple pages.

Query the number of alarms on the alarm board
NMS administrators can query the number of alarms on the alarm board.

View the current alarm
eSight monitors alarms based on preset conditions and allows NMS administrators to
view current alarms and query alarms based on the search criteria. Figure 5-2 shows the
page for viewing the current alarm.

View historical alarms
NMS administrators can view inactive alarms that have been archived in the historical
alarm list.

View events
NMS administrators can view reported events.

View masked alarms
NMS administrators can view suppressed alarms.
Figure 5-2 Viewing the current alarm


Alarm reporting

Lock the alarm page
After NMS administrators lock the current alarm page, new alarms are not displayed on
the page.

Distinguish alarms
NMS administrators can distinguish processed alarms from alarms to be processed based
on acknowledged alarms.

Clear alarms
NMS administrators can manually clear alarms and move cleared alarms to the alarm
library.

Export alarms
NMS administrators can export certain alarms or all alarms to an Excel file.

Locate alarms
NMS administrators can locate alarms on NEs or panels.
Alarm configuration

Configure alarm masking rules
NMS administrators can configure alarm masking rules based on the date, time period,
alarm source, and alarm type to mask certain alarms so that these alarms are not
displayed on the alarm page.

Configure the alarm tone
NMS administrators can configure the alarm tone and alarm frequency at four levels.
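An alarm masking rule of this form can be modeled as follows (the rule fields are illustrative, not eSight's actual schema):

```python
# Minimal model of alarm masking rules that match on alarm source,
# alarm type, and a daily time period (fields are illustrative only).
from datetime import time

def masked(alarm, rules):
    """Return True if any rule masks the alarm."""
    for rule in rules:
        if (alarm["source"] in rule["sources"]
                and alarm["type"] in rule["types"]
                and rule["start"] <= alarm["time"] <= rule["end"]):
            return True
    return False

rules = [{"sources": {"switch-1"}, "types": {"link-down"},
          "start": time(22, 0), "end": time(23, 59)}]   # maintenance window
in_window = {"source": "switch-1", "type": "link-down", "time": time(22, 30)}
off_window = {"source": "switch-1", "type": "link-down", "time": time(9, 0)}
assert masked(in_window, rules) is True
assert masked(off_window, rules) is False
```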
Remote alarm notification
The remote alarm notification function allows eSight to notify maintenance personnel of
alarms by sending emails or SMS messages.

Send emails
Configure the alarm server to forward emails of alarm notifications to maintenance
personnel.

Customize the alarm notification template
NMS administrators can customize the alarm notification template before sending them
in emails.

Manage user groups remotely
Configure user groups for sending alarm notifications, and configure email addresses of
users in the user groups.

Configure the remote notification rule
Based on the alarm severity and alarm name, NMS administrators can configure the
alarm notification rule involving the name of an alarm notification, whether to enable or
disable the notification after an alarm is cleared, and NMS administrator groups.
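Email-based notification of this kind can be sketched with the Python standard library (the SMTP server, sender, and recipient addresses are placeholders; this is not eSight's implementation):

```python
# Hedged sketch of email-based remote alarm notification. The server,
# sender, and recipient addresses below are placeholder values.
import smtplib
from email.message import EmailMessage

def build_alarm_mail(alarm_name, severity, recipients, sender="nms@example.com"):
    """Fill in an email message describing the alarm."""
    msg = EmailMessage()
    msg["Subject"] = "[%s] %s" % (severity.upper(), alarm_name)
    msg["From"] = sender
    msg["To"] = ", ".join(recipients)
    msg.set_content("Alarm %s (severity: %s) was raised." % (alarm_name, severity))
    return msg

def send_alarm_mail(msg, server="mail.example.com"):
    """Hand the message to an SMTP relay (add starttls()/login() as required)."""
    with smtplib.SMTP(server) as smtp:
        smtp.send_message(msg)

mail = build_alarm_mail("NE unreachable", "critical", ["noc@example.com"])
assert mail["Subject"] == "[CRITICAL] NE unreachable"
```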
Performance Management
eSight monitors KPIs of a network in real time and provides statistics on the collected
performance data. NMS administrators can control the network performance on the GUI.

Configure the performance monitoring function
Figure 5-3 GUI for monitoring network performance


Figure 5-3 shows the GUI for monitoring network performance, which allows NMS
administrators to:
Query KPIs of monitored NEs.
Monitor KPI status when adding, deleting, enabling, or disabling NEs in batches.
Modify the period and threshold for collecting KPIs and determine whether to enable
template settings.
Display the KPI collection status of monitored NEs.
Create a view of monitored NEs conveniently.

Configure the template for monitoring network performance
NMS administrators can configure the template for monitoring network performance and use the
default KPI settings of monitored NEs. eSight collects KPIs automatically when NEs are
created and allows NMS administrators to:
Determine whether to monitor the KPIs defined in the template by default.
Configure the period for collecting KPIs.
Configure the default threshold for collecting KPIs.
eSight automatically collects the following KPIs by default:
CPU usage
Memory usage
Traffic on network interfaces
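Threshold-based KPI monitoring can be illustrated as follows (the threshold and sample values are arbitrary):

```python
# Sketch of threshold-based KPI monitoring: samples collected in each
# period are compared against a configured threshold (values arbitrary).
def over_threshold(samples, threshold):
    """Return (period index, value) pairs that exceed the threshold."""
    return [(i, v) for i, v in enumerate(samples) if v > threshold]

cpu_usage = [35, 42, 91, 40, 88]                 # CPU usage (%) per period
assert over_threshold(cpu_usage, 80) == [(2, 91), (4, 88)]
```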

View KPIs in the performance monitoring diagram

NMS administrators can obtain KPIs and view them on the GUI. NMS administrators
can query KPIs in a specified time period and predict the change of network performance
based on these KPIs. NMS administrators can perform the following operations:
Query historical KPIs
View the curve chart of historical KPIs
Import queried KPIs to a .csv file
Physical Resource Management
eSight allows enterprise NMS administrators to configure NEs and to query NEs, frames,
boards, subcards, and ports.

NE
Allows NMS administrators to query and export information about NEs.
Allows NMS administrators to configure SNMP and Telnet parameters in batches and
synchronize NE data in batches.
Allows NMS administrators to modify the NE description and maintenance fields,
and query NE entity data.
Figure 5-4 shows the page for managing NEs.
Figure 5-4 NE Resource



Frame
Allows NMS administrators to query and export frame information, and modify frame
descriptions.

Board
Allows NMS administrators to query and export board information, and modify board
descriptions.

Subcard
Allows NMS administrators to query and export subcard information, and modify
subcard descriptions.

Port
Allows NMS administrators to query and export port information, and modify port
descriptions.
Report Management
eSight generates periodical and instant reports in PDF, Excel, and Word formats when tasks
are executed. eSight provides various report templates for common network operation and
maintenance, and provides report designing tools for the NMS administrator to customize
report templates.

Report management
Figure 5-5 shows the report management page. NMS administrators can create and
manage reports on this page.
Periodical reports or instant reports are generated when tasks are executed. NMS
administrators can configure email information in the task so that eSight can send email
reports to others.
Instant reports
NMS administrators can manually generate instant reports that display real-time
statistics, view generated reports immediately, and export them in a specific format.
Periodical reports
eSight periodically collects statistics, generates reports, and saves them. NMS
administrators can view and manage all reports generated in a specific period, and
delete, export, or send email reports in batches.
Figure 5-5 Report management



Pre-defined reports
eSight supports the following pre-defined reports:
Device type report
Alarm severity report
Link connection/disconnection report
NE connection/disconnection report
Port status report
NE basic performance report
Interface traffic report

Customized report
eSight provides powerful report designing tools on the GUI, which allow NMS
administrators to customize reports.
NMS administrators can collect statistics flexibly using various charts, such as pie charts,
column charts, line charts, area charts, Gantt charts, and candlestick charts. NMS
administrators can modify an existing report or create a new report based on the template,
and upload the report to eSight.
Customized Management
eSight allows enterprise NMS administrators to customize the following information and
manage NEs as required:

Basic information of the manufacturer
NMS administrators can customize the basic information of the manufacturer, and add,
delete, and modify the information.
Manufacturer name
Manufacturer description (optional)
Service phone number of the manufacturer (optional)
Maintenance personnel of the manufacturer (optional)
Customization type: Determines whether the manufacturer information is defined by
NMS development engineers or customized by customers

NE type
If the NE type is not defined before an NE is loaded to the NMS, the NE type is displayed
as unknown, although NMS administrators can still view the NE information. After the NE
type is customized, NMS administrators can view the NE type and monitor NE alarms and
performance in the NMS.
NE OID: Defines the NE type
NE category: Classifies NEs into categories, such as the switch, router, server, printer,
and security device
Web NMS link: Allows NMS administrators to connect an NE to the web NMS using
the NMS access interface
NE icon: Identifies the NE type
Customization type: Determines whether the NE type is defined by NMS
development engineers or customized by customers

Alarm parameters
NMS administrators can customize alarm parameters based on SNMP v1 or SNMP
v2c/v3, and add, delete, and change alarm parameters. If an NE alarm is not defined in
eSight, NMS administrators must customize the alarm so that the NMS alarm module
can process it.
If NMS administrators delete customized NE alarm parameters, alarm history remains in
the NMS while the NMS does not process this type of alarm.
NMS administrators can change alarm parameters such as the alarm severity, event type,
alarm cause, handling suggestion, detailed information, and alarm location.
Manufacturer name: Distinguishes alarm parameters generated on the NEs that are
provided by different manufacturers
Alarm name
Alarm severity: critical, major, minor, or warning
Notification type: alarm, clear alarm, and event
Alarm type: communication alarm, NE alarm, error alarm, service quality alarm,
environment alarm, integrity alarm, operation alarm, physical resource alarm,
security alarm, and time domain alarm
SNMP v1 or SNMP v2c/v3 alarms
Generic, specific, and enterprise IDs: key parameters for NMS administrators to
locate an SNMP v1 alarm
Alarm OID: trap OID of the alarm package, which is used to locate SNMP v2c/v3
alarms
Alarm causes
Suggestion for handling alarms
Detailed alarm information
Parameters for locating alarms
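The SNMP v1 parameters above (generic, specific, and enterprise IDs) form the key used to look up a customized alarm definition. The sketch below illustrates that lookup in Python; the OIDs, alarm names, and table layout are hypothetical examples, not eSight's actual data model.

```python
# Illustrative sketch: matching an incoming SNMP v1 trap against
# customized alarm parameters. All OIDs and alarm names are hypothetical.

# Customized alarm table keyed by (enterprise OID, generic ID, specific ID).
ALARM_DEFINITIONS = {
    ("1.3.6.1.4.1.2011", 6, 1001): {
        "name": "linkFlap",
        "severity": "major",
        "notification_type": "alarm",
    },
    ("1.3.6.1.4.1.2011", 6, 1002): {
        "name": "linkFlapCleared",
        "severity": "major",
        "notification_type": "clear alarm",
    },
}

def classify_trap(enterprise, generic, specific):
    """Return the customized alarm definition for an SNMP v1 trap, or
    None if the alarm has not been customized (the NMS alarm module
    would then be unable to process it)."""
    return ALARM_DEFINITIONS.get((enterprise, generic, specific))
```

In this model, deleting an entry from the table corresponds to deleting customized alarm parameters: existing alarm history is untouched, but new traps of that type are no longer processed.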

KPIs
NMS administrators can customize, add, delete, and modify a KPI, and customize
monitoring instances of the KPI in the performance management module. The
performance management module collects data of the customized KPI based on the KPI
collection period after the customization.
KPI name
KPI group: KPIs are classified into groups, such as the customized NE KPI group,
customized frame KPI group, customized board KPI group, and customized port
KPI group
Type of the NE for which KPIs are customized
KPI unit
KPI equation based on the monitored MIB object and MIB-related calculations
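A KPI equation typically combines raw MIB counters into a meaningful metric. As an illustration only (not an eSight formula), the sketch below derives interface utilization from two `ifInOctets` samples, allowing for one 32-bit counter wraparound:

```python
def interface_utilization(octets_prev, octets_now, interval_s, speed_bps,
                          counter_bits=32):
    """Percentage utilization of an interface computed from two
    ifInOctets samples taken interval_s seconds apart. Handles a single
    wraparound of a 32-bit MIB-II counter."""
    delta = octets_now - octets_prev
    if delta < 0:                      # counter wrapped between samples
        delta += 2 ** counter_bits
    bits = delta * 8                   # octets -> bits
    return 100.0 * bits / (interval_s * speed_bps)

# 1,250,000 octets in 10 s on a 100 Mbit/s link -> 1% utilization
util = interface_utilization(0, 1_250_000, 10, 100_000_000)
```

The collection period of the customized KPI determines `interval_s`; the performance management module would evaluate such an equation once per period.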

Configuration file
NMS administrators can customize commands for managing configuration files,
including commands for backing up and restoring configuration files, and for restarting
NEs. After customizing configuration files, NMS administrators can customize the
backup task for current NEs in the configuration file management module so that the
NMS can manage and back up NE configuration files.
Type of the NE for which the configuration file is customized.
Command for backing up configuration files
Command for restoring configuration files
Command for restarting an NE
Figure 5-6 shows the page for customizing configuration files.
Figure 5-6 Configuration file customization



NE panel
By default, the NMS displays the panel for a customized NE type. NMS administrators
can customize the NE panel using a panel image that shows the panel frame, board,
subnet, and ports. After customization, NMS administrators can view the panel image of
an NE in the NMS.
Configuration File Management
NMS administrators can manage NE configurations, export, back up, and restore NE
configuration files, and manage configuration files based on a baseline version. If a fault
occurs on the network, NMS administrators can compare the current NE configuration file
with the baseline configuration file to locate the fault.
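The comparison against the baseline can be pictured as a standard unified diff. This is a minimal Python sketch using the standard library's `difflib`; the configuration lines are invented for illustration:

```python
import difflib

def config_diff(baseline, current):
    """Unified diff between a baseline and the current NE configuration,
    as a list of lines (a sketch of comparing against the baseline version)."""
    return list(difflib.unified_diff(
        baseline.splitlines(), current.splitlines(),
        fromfile="baseline", tofile="current", lineterm=""))

baseline = "sysname CORE-SW1\nvlan batch 10 20\nstp enable\n"
current = "sysname CORE-SW1\nvlan batch 10 20 30\nstp enable\n"

# Keep only added/removed configuration lines, dropping the diff headers.
changes = [l for l in config_diff(baseline, current)
           if l.startswith(("+", "-")) and not l.startswith(("+++", "---"))]
```

Here `changes` isolates exactly the configuration lines that drifted from the baseline, which is the information an administrator needs to locate the fault.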

NE configuration management
Configuration file backup
eSight can back up configuration files immediately, at a scheduled time (daily, weekly,
or monthly), or automatically before NE configurations are modified. Figure 5-7
shows the NE configuration page.
Figure 5-7 NE configuration


Configuration file
NMS administrators can download a configuration file to a local host, restore the
operation configurations in the file, roll back the file to the baseline version, and
query the NE operation configuration.
If the configuration file has been backed up to a local host, NMS administrators can
view the file, compare it with the configuration file of another version, or delete the
file.

System parameter management
FTP service parameter
NMS administrators can set parameters of the FTP service that the current NMS server
uses, including the user name, password, and root directory. NMS administrators can
view the running status of the FTP server directly in the NMS.
Backup parameter
NMS administrators can set the maximum number of configuration files that an NE
can save on the NMS server, and apply this value to all NEs that the NMS server
manages.
NMS administrators can configure whether eSight supports automatic backup of
configuration files on an NE before its configuration is changed.
Intelligent Configuration
eSight allows NMS administrators to use templates and data plans to intelligently configure
services and deliver configurations to Huawei NEs. Templates are used to deliver the same
service configurations to multiple NEs at one time, while data plans are used to deliver similar
service configurations to NEs.

Delivering service configurations to NEs using a template
NMS administrators can use a preset template or customize a configuration template to
deliver service configurations to NEs, and can run commands to verify the delivery
results. Figure 5-8 shows a configuration template.
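The template idea can be sketched with Python's standard `string.Template`: one template plus one data-plan row per NE yields similar per-NE configurations. The placeholders and commands below are hypothetical examples, not eSight's template syntax:

```python
from string import Template

# Hypothetical configuration template; $placeholders are per-NE values.
TEMPLATE = Template(
    "sysname $sysname\n"
    "vlan batch $vlan\n"
    "interface Vlanif$vlan\n"
    " ip address $ip 255.255.255.0\n"
)

# Data plan: the same service configured with per-NE parameters.
data_plan = [
    {"sysname": "ACC-SW1", "vlan": "10", "ip": "10.1.10.1"},
    {"sysname": "ACC-SW2", "vlan": "20", "ip": "10.1.20.1"},
]

# Render one configuration per NE, keyed by system name.
configs = {ne["sysname"]: TEMPLATE.substitute(ne) for ne in data_plan}
```

Delivering the identical template with identical values to many NEs corresponds to template-based delivery; varying the data-plan rows corresponds to data-plan-based delivery.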
Figure 5-8 Configuration template



Delivering service configurations to NEs using a data plan
NMS administrators can set parameters in a data plan, import data plan parameters to
eSight, and deliver service configurations to Huawei NEs in a wizard.
eSight Portal
eSight portal displays important monitoring information on the GUI, and allows NMS
administrators to customize displayed information and display format.
Figure 5-9 shows the eSight portal.
Figure 5-9 eSight portal


The eSight portal displays NE statistics and other important monitoring information.

NE statistics
NE category statistics
NE manufacturer statistics
NE TOPN alarm statistics

Important monitoring information
TOP 10 NEs that generate alarms
TOP 10 NEs by CPU usage
TOP 10 NEs by memory usage
Current alarm
Information about lower-layer NMSs
Subnet list
Data Storage and Backup
eSight supports web services independent of the network application platform so that NMS
administrators can back up and restore the database.

Database backup
NMS administrators can back up database objects and data in files and save these files
on the server on the page shown in Figure 5-10.
Data backup and restoration do not affect eSight running.
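eSight backs up MySQL, SQL Server, or Oracle databases; as a self-contained illustration of the same principle (a backup that runs without interrupting the source database), the sketch below uses SQLite's online backup API from the Python standard library:

```python
import sqlite3

# Illustration only: not eSight's database, but the stdlib sqlite3 backup
# API demonstrates an online backup - the source stays usable throughout.
src = sqlite3.connect(":memory:")
src.execute("CREATE TABLE alarms (id INTEGER, severity TEXT)")
src.execute("INSERT INTO alarms VALUES (1, 'major')")
src.commit()

dst = sqlite3.connect(":memory:")   # backup target (a file in practice)
src.backup(dst)                     # copy the database while src is live

rows = dst.execute("SELECT severity FROM alarms").fetchall()
```

The backup connection (`dst`) now holds a complete, independently restorable copy, analogous to the backup files eSight saves on the server.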
Figure 5-10 Database backup and recovery



Backup file management
NMS administrators can manage backup files in the following ways:
Enter the criteria to search for backup files.
Query the backup time, description, backup status, restoration time, and restoration
status.
Restore the database using backup files.
Delete backup files.

Database restoration
NMS administrators can restore the database on the backup file management page after
eSight stops running.
5.3.2 Design Roadmap
eSight supports in-band and out-of-band networking modes.
eSight supports the single-server mode and the hierarchical deployment mode. Generally,
enterprise data centers are not managed based on areas or hierarchies. Therefore, Huawei
recommends the single-server mode. Multiple browsers can access eSight at the same time.
eSight uses load balancers (LBs) to handle requests from multiple NMS administrators and
allocates the requests to different web servers. Service components of eSight are deployed
on the same server.
Figure 5-11 Working mode of the eSight


eSight has a scalable, multi-module architecture. It can manage individual data networks as
well as the entire network. In addition, eSight can manage Huawei devices and the
non-Huawei devices commonly used in the industry.
eSight manages devices in the following fields:
Switches: S9700, S9300E, S9300, S7700, S2700, S3700, S5700, S6300, and S6700
NE series routers:
8090 series: NE40E, NE40E-4, NE40E-X3, and NE80E
8011 series: NE40 and NE80
8070 series: NE20E-8 and NE20-2/4/8
AR series routers: AR1220, AR1220W, AR1240, AR1240W, AR2220, AR2240, and AR3260
Wireless devices: Fit AP series and WS6600 series
Security devices:
Eudemon security devices: Eudemon8000E, Eudemon1000E, E200E-B/C/F, E100E, and E200S
SRG security devices: SRG2200
SVN security devices: SVN3000
Third-party devices: printers, servers, and pre-integrated devices provided by third parties
such as H3C, Cisco, and ZTE

eSight can be integrated with the OSS. eSight adopts SNMP to report network alarms. This
enables eSight to interconnect with the OSS alarm system.
Figure 5-12 Networking used when eSight is integrated with the OSS


The following lists the advantages of integrating the eSight with the OSS:

Improved network management capability

Separation of NE management and network management

Ensured enterprise O&M mechanisms
5.3.3 Design Requirements
eSight is based on SNMP and the Fault, Configuration, Accounting, Performance, and
Security (FCAPS) management model. The design requirements are as follows:

Use standard SNMP, Telnet, and SSH.

Display the network topology and configuration items, IP resources, and IT resources on
a unified GUI.

Support modules in the Service-Oriented Architecture (SOA) for flexible module
installation and expansion.

Compatible with mainstream third-party devices.

Provide open and flexible interfaces of various granularities to integrate with third-party
systems.

Provide powerful management capability.
When eSight manages different types of NEs, the following factors need to be considered:

The number of fiber cables used and services deployed varies with the NE type. In
addition, NEs of different types occupy different amounts of database space.

The basic, standard, and professional editions of eSight have different management
capabilities.

The management capability is affected by hardware platforms where eSight is deployed.
5.3.4 Design Elements
Table 5-1 lists the network management solutions provided by the three editions of eSight
based on network scales and functions.
Table 5-1 eSight editions and functions
Basic edition: Manages the following functions for a single user:
Topology
NEs
Links
Physical resources
Electronic labels
Alarms
Performance
Configuration files
Logs
Standard edition: Manages the following functions for multiple users:
Services provided by the basic edition
Self-defined NEs
Reports
Intelligent configuration tools
WLAN
IPSec VPN
MPLS VPN
SLA
IP network topology
SNMP and Extensible Markup Language (XML) northbound interfaces (NBIs)
Security management
Database backup tools
Fault collection tools
Professional edition: Supports the services of the standard edition and can serve as an
upper-layer NMS

Requirements for the eSight Server
Table 5-2 describes the software and hardware requirements of the basic, standard, and
professional editions of eSight.
Table 5-2 Requirements for the eSight server

Basic edition (60 managed nodes):
Server configuration: one dual-core CPU, 2.0 GHz or higher; 2 GB memory; 20 GB disk space
Recommended server: PC with an Intel Core i3-2100 CPU or higher, 2 GB memory, and a
320 GB disk or higher
Operating system and database: Windows 7 (32-bit) with MySQL 5.5 (built-in)

Standard edition (a PC server is recommended):
0-200 managed nodes: one dual-core CPU, 2 GHz or higher; 4 GB memory; 40 GB disk
space. Recommended server: IBM X3650M3 with one quad-core Xeon E5506 2.13 GHz or
higher, 4 GB (1 x 4 GB) memory, and 2 x 300 GB disks
200-500 managed nodes: two dual-core CPUs, 2 GHz or higher; 4 GB memory; 60 GB disk
space
500-2000 managed nodes: two quad-core CPUs, 2 GHz or higher; 8 GB memory; 120 GB
disk space. Recommended server: IBM X3650M3 with two quad-core Xeon E5506 2.13 GHz
or higher, 8 GB (2 x 4 GB) memory, and 3 x 300 GB disks
2000-5000 managed nodes: two quad-core CPUs, 2 GHz or higher; 16 GB memory; 250 GB
disk space. Recommended server: IBM X3650M3 with two quad-core Xeon E5620 2.4 GHz
or higher, 16 GB (2 x 8 GB) memory, and 3 x 300 GB disks
Operating system and database: Windows Server 2003 Standard Edition 64-bit with MySQL
5.5, or Windows Server 2003 Standard Edition 64-bit with Microsoft SQL Server 2008 R2

Professional edition (a PC server is recommended):
0-200, 200-500, 500-2000, and 2000-5000 managed nodes: same server configurations,
recommended servers, operating systems, and databases as the standard edition
5000-20000 managed nodes: four quad-core CPUs, 2 GHz or higher; 32 GB memory; 320 GB
disk space. Recommended server: IBM X3850X5 with four 8-core Xeon E7-4820 2.0 GHz or
higher, 32 GB (8 x 4 GB) memory, and 8 x 300 GB disks. Operating system and database:
SUSE Linux 11 SP1 64-bit with Oracle 11g Standard Edition 11.1.0.6.0

The requirements for the browser version and memory on the PC are as follows:

Browser version: Internet Explorer 8 or Firefox 3.6

Recommended resolution: 1024 x 768

Memory: 1 GB or higher
IP Address Planning for the NMS Servers
IP address planning must comply with the following principles:

The IP address must be unique in the network.

The servers can communicate with the managed devices.

The servers can communicate with the clients.

Only one IP address can be assigned to a network port.

The IP address of a hardware control device, such as a disk array controller, must be
planned based on the hardware on the NMS servers.

The IP address of the NMS must be planned based on the NMS deployment solutions.
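Some of these principles can be checked mechanically. The sketch below uses Python's `ipaddress` module to verify address uniqueness and, purely as an example policy (not a rule stated above), that server addresses do not fall inside the managed-device subnet:

```python
import ipaddress

def validate_plan(server_ips, device_subnet):
    """Minimal sketch of checking an NMS server address plan: each
    address must be valid and unique in the network; as an illustrative
    extra policy, server addresses must not collide with the
    managed-device subnet. Returns a list of problem descriptions
    (empty if the plan passes)."""
    problems = []
    seen = set()
    net = ipaddress.ip_network(device_subnet)
    for ip in server_ips:
        addr = ipaddress.ip_address(ip)   # raises ValueError if invalid
        if addr in seen:
            problems.append("duplicate address: " + ip)
        seen.add(addr)
        if addr in net:
            problems.append(ip + " collides with device subnet " + device_subnet)
    return problems
```

Reachability between servers, clients, and devices still has to be verified against the routing design; a static check like this only catches addressing errors.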
Port Planning for the NMS Servers
FWs filter traffic based on the IP address and the port number of TCP/UDP.
The system separates data packets based on the port number of TCP/UDP and sends the data
packets to the appropriate application programs.
The port number range (0-65535) of TCP/UDP is divided into the following three segments:

0-1023: identify standard services, such as FTP, Telnet, and Trivial File Transfer
Protocol (TFTP).

1024-49151: allocated by the Internet Assigned Numbers Authority (IANA) to registered
applications.

49152-65535: dynamically allocated to applications as private port numbers.
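The three segments can be expressed as a small classifier; this is a generic illustration of the IANA ranges, not eSight code:

```python
def port_segment(port):
    """Classify a TCP/UDP port number into the three segments
    described above."""
    if not 0 <= port <= 65535:
        raise ValueError("port number out of range")
    if port <= 1023:
        return "well-known"   # standard services (FTP, Telnet, TFTP, ...)
    if port <= 49151:
        return "registered"   # allocated by IANA to registered applications
    return "dynamic"          # private/ephemeral port numbers
```

For example, the eSight destination ports in Table 5-3 span all three segments: 23 (Telnet) is well-known, 1400 and 5432 are registered, and the random source ports are typically dynamic.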
When FWs are deployed between the eSight server and NEs, clients, or the OSS, ports need to
be deployed to facilitate the connection between the eSight server and these devices.
Table 5-3 Ports to be enabled (source device: eSight server; destination device: SNMP
device; source port: a random port)

UDP, destination port 161: The eSight server sends SNMP commands to port 161 on the
device.
TCP, destination ports 22 and 23: The eSight server sends Telnet requests to port 23 and
SSH requests to port 22 on the device.
TCP, destination port 1400: Port that NMS administrators use to listen to the NMS or
command line.
UDP, destination port 69: Port that NMS administrators use to back up and deploy NE
configurations in TFTP mode.
UDP, destination port 1500: Port used by the NMS to automatically detect NEs and monitor
the detection process.
TCP, destination port 5432: Port used for sending the SSL command line.
For details about other ports to be enabled, see the eSight product documentation.
eSight is designed based on the browser/server (B/S) architecture. When selecting an edition
and server, comprehensively consider how eSight will fit the existing network scale.
Technical Specifications
Table 5-4 shows the technical specifications of eSight.
Table 5-4 Technical specifications (values given as Basic Edition / Standard Edition /
Professional Edition)

Management capability
Number of managed NEs: 100 / 5000 / 20000
Number of clients: 10 / 100 / 200

Occupied resources
Memory usage: 512 MB / 1 GB / 2 GB
CPU usage: N/A / more than 30% occupancy within 15 minutes / more than 50% occupancy
within 15 minutes

Storage capacity
Current alarm capacity: 20000 / 20000 / 20000
Historical alarm capacity: N/A / 1.5 million / 15 million
Log data capacity: 1 million / 1 million / 1 million
Performance data capacity: N/A / N/A / 60 million

Handling capability
Alarm response duration: no more than 30s in all editions
Rate for collecting performance data: N/A / 30000 performance data records within 15
minutes / 30000 performance data records within 15 minutes
GUI operation response time: 3s in all editions

Status update duration
Device status: N/A / no more than 30s / no more than 60s
Link status: N/A / no more than 5s / no more than 5s


Equation for calculating the storage space (in GB) that performance data, alarm records, and
log records occupy:
Storage space = (Number of performance data items + Number of alarm records + Number of
log records) x 0.5/(1024 x 1024)
Calculation rule: a performance record, alarm record, or log record takes about 0.5 KB of
database tablespace.
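A worked example of the equation, assuming the professional edition runs at its full storage capacities (60 million performance items, 15 million historical alarms, and 1 million log records from Table 5-4):

```python
def storage_gb(perf_items, alarm_records, log_records):
    """Storage space in GB per the equation above: each performance,
    alarm, or log record takes about 0.5 KB of database tablespace."""
    return (perf_items + alarm_records + log_records) * 0.5 / (1024 * 1024)

# Professional edition at full capacity: 76 million records in total,
# about 38,000,000 KB, i.e. roughly 36.2 GB of tablespace.
space = storage_gb(60_000_000, 15_000_000, 1_000_000)
```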