Figure 6. Hardware numbering scheme - correlation of different schemes

Hardware components used in the Flex21 Chassis


An iron rack is filled with up to 3 cPCI shelves. Each chassis contains:

- 1 x BPA with 2 fabric switch positions, 19 equipment positions and the related RTM PBA positions
- 19 PSU and 2 merged PSU/FlexManager positions
- 3 x Fan Units with 2 fans each
- 2 x Power Input Units
- 1 x Dust Filter
- 1 x LED panel


Figure 7. Alcatel 5020 MGC Chassis View

Backplane / Midplane
The standard Flex21 midplane provides PICMG 2.16 support for all slots and H.110 support for slots 3 through 19 (slots 2 and 20 allow the use of Ethernet switches instead). PICMG 2.16 midplane support includes dual Ethernet to every slot at up to Gigabit speeds (the related jumpers on the midplane have to be equipped to enable all slots for 2.16 Ethernet communication). H.110 midplane support allows for transport of TDM data between slots for payload applications. The resulting slot roles are summarised in the sketch below.
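A small illustrative sketch of the slot roles (Python; the placement of the fabric switches in slots 1 and 21 is taken from the plane description later in this document, the rest restates the midplane rules above):

    # Illustrative capability map of the Flex21 midplane (21 slots assumed:
    # fabric switches in slots 1 and 21, node slots in between).
    def slot_capabilities(slot):
        caps = {"picmg_2_16_ethernet": True}       # dual Ethernet to every slot
        if slot in (1, 21):
            caps["role"] = "fabric switch"
        elif slot in (2, 20):
            caps["role"] = "node (Ethernet switch allowed instead of H.110)"
        else:                                      # slots 3..19
            caps["role"] = "node"
            caps["h110_tdm"] = True                # TDM transport for payload
        return caps

    midplane = {slot: slot_capabilities(slot) for slot in range(1, 22)}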

Power entry modules (FlexAlarm Modules)


The Flex21 platform provides dual power input connections for each of the redundant 48VDC power feeds. This ensures that current constraints on a single power connection, typically 20 Amps, do not limit the power capabilities of the platform.

Each of the redundant FlexAlarm power input trays supports both connections for one of these redundant power feeds, allowing either of the power input trays to support 48VDC distribution to the entire chassis. This configuration allows either power input tray to be replaced without affecting power to the platform.

Power-supplies
3U FlexPower modules enable independently powered node slots in a flexible chassis. Located above each node slot, these hot-swappable modules also incorporate serial and IPMI controllers to enable unified chassis and system management capabilities.

Processor: Cygnal 8051 microprocessor running at 20MHz
Memory: 16kByte FLASH / 1.2kByte RAM

Flex managers
3U FlexManager modules provide redundant, modular, unified chassis and system level management capabilities with open, easily extensible interfaces for integrating with any application. Located above each fabric slot, these hot-swappable modules also provide power to each Ethernet fabric switch.

Processor: PXA250 (ARM) running at 800MHz
Memory: 64Mbyte RAM
Operating System: LINUX

cTCA-CEs - (single pentium CPCA) - General Description (CPCA)
The CEs used in the new cTCA chassis are called either ITCEs (which does not define a CE-type but is a general term to distinguish these CEs from the ones running the call engine legacy OSN), call engine CEs, or they are of type "server". The board used for all these CEs consists of a PCB assembly with a CPU core, DRAM, non-volatile memory, and a series of user interfaces. The ITCE baseboard performs as a non-system slot controller and complies with the Basic Hot Swap specifications as defined in the cPCI Hot Swap Specification, PICMG 2.1 R1.0, August 3, 1998.


The PBA used has the following qualities:

- Board name: Cheetha-Cr single-slot 6U board
- Pentium M processor, 1.6GHz, providing a 400MHz front side bus
- One PMC expansion site is provided on board. Each PMC has front-panel as well as rear I/O access. Up to now this PMC site is used for SLN7S only, but it may be used in the future to support e.g. additional I/O requirements of an application.
- 1Mbyte of level-2 cache
- Separate 32kByte level-1 caches for instruction and data
- 1Gbyte DRAM on-board memory providing ECC Error Detection and Correction (EDAC) protection

The following interfaces are provided:

- 32-bit 33 MHz PCI interface on backplane
- Two 10/100/1000Base-T Ethernet links to backplane (PICMG 2.16)
- Two 10/100Base-Tx Ethernet ports via backplane on RTM (if equipped)
- Four serial ports, two on the front panel and two via backplane on RTM (if equipped)
- Three USB ports, one on the front panel and two via backplane on RTM (if equipped)
- Two IPMB interfaces on backplane (PICMG 2.9)
- The BIOS supports PXE (Preboot eXecution Environment) for network booting

Dual-Processor Board (CPCB)
There is a need to further improve the footprint of the Alcatel 5020 MGC. This is achieved by leveraging Flex21's high cooling capability of 75W/slot to introduce a dual-processor board into the system. The following assumptions will be fulfilled when introducing the dual-processor board:


- The dual-processor board houses (more or less) twice the components of the Cheetha-Cr blade. It is implemented with two independent (except for the power supply) Pentium M processors, each having 1GByte RAM (default).

- Mixed equipment practice in one chassis is possible, i.e. some single-Pentium boards and some dual-Pentium boards may be equipped in the same chassis (the slot configuration will be fixed offline; no online changes will be allowed/possible).

- No domain-mix on one physical board is allowed (the domains are: call engine, ITCE, server and hardware-management). Both CPUs on the same physical board host either call engine CEs or ITCEs or EAxS servers. PLDA must not have another call engine CE on the 2nd CPU due to power restrictions (two disks on the RTM have to be provisioned as well). OAM may have another ITCE on the 2nd CPU. IPACC may have another ITCE on the 2nd CPU if the bandwidth requirements for external connectivity do not conflict.

- The redundancy scheme of R2.0 as defined in this document remains untouched, i.e. it is independent of the board type. The A/B redundancy block as already defined remains valid. The same board type (single- or dual-processor) must be equipped on redundant slot pairs. Care has to be taken w.r.t. load-sharing CE groups: to avoid degradation of the reliability figures, two members of the same load-sharing group should not be mapped onto the same physical board.

- RTM usage is possible with the dual-processor board for CPU1 only; CPU2 can never have an RTM.

- It is assumed that it will not be possible to equip a PMC on the dual-processor board (also no PIM usage); hence we cannot use this board for the SLN7S implementation - in this case it will be implemented using single-processor boards. If for any reason PMC usage is possible with the dual-processor board, we will have one site that is connected to CPU1 with 64-bit 66MHz PCI. A PIM on the corresponding RTM may be used in this case.

- Serial interface: serial routing for the dual-processor board will be done in such a way that two serial ports per CPU (COM1/COM2) are supported/provided on the front panel. Two serial ports (COM3 and COM4) will be provided via RTM for both CPUs, switched by a serial port selector that gets its selection signals via the IPMI controller. Via the IPMI controller of CPU#1 one can switch COM3 and COM4 either to CPU#1 or CPU#2, as modelled in the sketch below.
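A minimal behavioural model of this serial routing (Python; the class and method names are illustrative assumptions, not the real IPMI command set):

    # Behavioural sketch of the COM3/COM4 routing on a dual-processor board:
    # the port selector is controlled via the IPMI controller of CPU#1.
    class SerialPortSelector:
        def __init__(self):
            self.selected_cpu = 1                  # assumed default: CPU#1

        def ipmi_select(self, cpu):
            # selection signal issued through the IPMI controller of CPU#1
            assert cpu in (1, 2)
            self.selected_cpu = cpu

        def route(self, port):
            assert port in ("COM3", "COM4")        # RTM-provided ports
            return "%s -> CPU#%d (via RTM)" % (port, self.selected_cpu)

    sel = SerialPortSelector()
    sel.ipmi_select(2)
    print(sel.route("COM3"))                       # COM3 -> CPU#2 (via RTM)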

- Ethernet access: both CPUs have access to both backplane Ethernet connections (PICMG 2.16). There is an additional switch on the dual-processor board to connect both CPUs to the backplane. Planes A and B will be kept separate by using VLANs, which will be pre-configured on the board.

- IPMI access will be possible for both CPU cores via two dedicated IPMI controllers. Thereby both CPUs can be addressed and triggered independently. Both controllers can be addressed using different IPMI addresses.

- Addressing and geographical location identification aspects: the addressing scheme has been enhanced and now covers up to two bits to identify the CPU-Id (one bit is used in R2.0; usage of a second bit is possible, but this reduces the number of slot bits [4 bits remaining => max. 16 slots] and may be used later for aTCA systems) and up to four bits to identify the VM-Id (2 bits are available, enabling up to 4 VMs; if more bits are used for the VM-Id this reduces the number of rack bits [max. 4 VMs & max. 8 racks / max. 8 VMs & max. 4 racks / max. 16 VMs & max. 2 racks]). For CPU identification the address bit W0 (W1) has been used. To avoid addresses that are not valid in a native call engine environment we use a specific algorithm for PCE-ID generation (see figure MAC Address composition for call engine-Domain / ITCE-Domain (except PLDA)). The two processors on one blade will now be identified by address tandems. An example illustrates this situation: each dual-processor board will represent two NAs which have a fixed correlation by the W0 bit, e.g. 000F/000E, 0101/0103, 4337/4336 are all address tandems; the tandem rule is sketched below. For more details see the info and figures in chapter Computing MAC-Addresses.
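The tandem relation can be illustrated with a short sketch (Python; only the bit-toggle rule and the example pairs are taken from the text - note that the pair 0101/0103 differs in the second bit, consistent with the optional use of W1):

    # Address tandems: the two CPUs of one dual-processor board are
    # identified by NAs that differ only in the CPU-Id bit (W0, optionally W1).
    def tandem(na, cpu_bit=0):
        """Return the partner NA by toggling the CPU-Id bit."""
        return na ^ (1 << cpu_bit)

    assert tandem(0x000F) == 0x000E                # example pair from the text
    assert tandem(0x4337) == 0x4336                # example pair from the text
    assert tandem(0x0101, cpu_bit=1) == 0x0103     # pair differing in W1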

- ESWT topology aspects: the topology remains unchanged. This means that ELM will NOT see the new on-board switch on the dual-processor board. The number of ETSLs per fabric switch has to be adapted (38 if we consider dual-processor boards only, i.e. 19 node slots x 2 CPUs; 152 if we consider up to 4 VMs per CPU, i.e. 38 x 4).

- CHACO adaptations: there will be some reglib enhancements, provided by CCPU, that reflect the concept of dual-processor boards per slot. We will have to handle these reglib enhancements.

- System administration: UNIX maintenance GUIs, e.g. the EQUIP-GUI, have to be enhanced to cover dual-processor boards, i.e. two CEs of possibly different type (but the same domain) have to be equipped per slot.


- Reset Control: the dual-processor board enables a reset of either of the CPUs via different reset inputs: IPMI command, front panel reset, external cPCI reset.

The following figures illustrate the dual-processor solution as described above:


Figure 8. Principles of dual-pentium board implementation (processor / ethernet / IPMI)

Figure 9. Principles of dual-pentium board implementation (serial interfaces)


Figure 10. Principles of dual-pentium board implementation (RTM / PMC / PIM usage)

Figure 11. Principles of dual-pentium board implementation (addressing)

ITCE-types
The following sections describe the hardware implementation of all ITCE-types that are implemented on cTCA hardware, covering as well the potential usage of PMCs, RTMs and peripherals.

Resource Manager (RM-CE)
The RM-CE configuration will be as follows:

- cTCA single- or dual-processor board as described in chapters cTCA-CEs - (single pentium CPCA) - General Description (CPCA) and Dual-Processor Board (CPCB)
- No PMC
- No RTM
- No associated peripherals


Connection Control Manager (COCO)
The COCO configuration will be as follows:

- cTCA single- or dual-processor board as described in chapters cTCA-CEs - (single pentium CPCA) - General Description (CPCA) and Dual-Processor Board (CPCB)
- No PMC
- No RTM
- No associated peripherals


SIP Control (SIPCE)
The SIPCE configuration will be as follows:

- cTCA single- or dual-processor board as described in chapters cTCA-CEs - (single pentium CPCA) - General Description (CPCA) and Dual-Processor Board (CPCB)
- No PMC
- No RTM
- No associated peripherals


SIP IAD Registry (SIPRG)
The SIPRG configuration will be as follows:

- cTCA single- or dual-processor board as described in chapters cTCA-CEs - (single pentium CPCA) - General Description (CPCA) and Dual-Processor Board (CPCB)
- No PMC
- No RTM
- No associated peripherals


SIGTRAN Control (STRAN)
The STRAN configuration will be as follows:

- cTCA single- or dual-processor board as described in chapters cTCA-CEs - (single pentium CPCA) - General Description (CPCA) and Dual-Processor Board (CPCB)
- No PMC
- No RTM
- No associated peripherals


Internet Protocol Signalling Access (IPACC)
The IPACC configuration will be as follows:

- cTCA single- or dual-processor board as described in chapters cTCA-CEs - (single pentium CPCA) - General Description (CPCA) and Dual-Processor Board (CPCB); the other CPU must not have an RTM in case of a dual-processor board
- No PMC
- RTM: PM 1116 SFF transition board / no disks - for serial and Ethernet ITF access
- No associated peripherals
OAM Agent (OAM)
The OAM configuration will be as follows:

- cTCA single- or dual-processor board as described in chapters cTCA-CEs - (single pentium CPCA) - General Description (CPCA) and Dual-Processor Board (CPCB); the other CPU must not have an RTM in case of a dual-processor board
- No PMC
- RTM: yes - 2-slot "PM-1116 SFF transition board 1x73GB disk" - for 2 serial, 1 USB and 2 Gbit Ethernet ITF access, 1 SCSI disk, 2 SCSI controllers, 1 external SCSI ITF
- One 2.5" SCSI hard disk on RTM


Signalling Link N7 Control (SLN7S)
The SLN7S configuration will be as follows:

- cTCA single- or dual-processor board as described in chapters cTCA-CEs - (single pentium CPCA) - General Description (CPCA) and Dual-Processor Board (CPCB); the other CPU must not have an RTM in case of a dual-processor board
- PMC for processing the layers MTP1 and MTP2: ADAX "HDC II"
- RTM: yes - 1-slot "SLN7S Feature RTM" - for 2 serial, 2 USB and 2 100Mbit Ethernet ITF access, 2 external SCSI ITFs, plus HDCII PIM from ADAX (corresponding to the HDCII PMC)
- No associated peripherals

Server-types


The following sections describe the hardware implementation of all server-types that are implemented on cTCA hardware, covering as well the potential usage of PMCs, RTMs and peripherals.

Extended Assistant Billing System (EABS)
The EABS configuration will be as follows:

- cTCA single- or dual-processor board as described in chapters cTCA-CEs - (single pentium CPCA) - General Description (CPCA) and Dual-Processor Board (CPCB); the other CPU must not have an RTM in case of a dual-processor board
- No PMC
- RTM: yes - 2-slot "PM-1116 SFF transition board 1x73GB disk" - for 2 serial, 1 USB and 2 Gbit Ethernet ITF access, 1 SCSI disk, 2 SCSI controllers, 1 external SCSI ITF
- One 2.5" SCSI hard disk on RTM


Extended Assistant Routing System (EARS)
The EARS configuration will be as follows:

- cTCA single- or dual-processor board as described in chapters cTCA-CEs - (single pentium CPCA) - General Description (CPCA) and Dual-Processor Board (CPCB)
- No PMC
- No RTM
- No associated peripherals


Extended Assistant User Profile System (EAUS)
The EAUS configuration will be as follows:

- cTCA single- or dual-processor board as described in chapters cTCA-CEs - (single pentium CPCA) - General Description (CPCA) and Dual-Processor Board (CPCB)
- No PMC
- No RTM
- No associated peripherals

call engine CE-types


The following sections describe the hardware implementation of all call engine CE-types that are implemented on cTCA hardware, covering as well the potential usage of PMCs, RTMs and peripherals.

Peripheral and Load (PLCE or PLDA)
The PLDA configuration will be as follows:

- cTCA single-processor board (due to the power consumption of the 2 disks on the RTM no dual-processor board is allowed) as described in chapter cTCA-CEs - (single pentium CPCA) - General Description (CPCA)
- No PMC
- RTM: yes - 3-slot "PM-1116 SFF transition board 2x36GB" - for 2 serial, 1 USB and 2 Gbit Ethernet ITF access, 2 SCSI disks, 2 SCSI controllers, 1 external SCSI ITF
- Two 2.5" SCSI hard disks on RTM


Trunk Gateway Control Element (TGWCS)
The TGWCE configuration will be as follows:


- cTCA single- or dual-processor boards as described in chapters cTCA-CEs - (single pentium CPCA) - General Description (CPCA) and Dual-Processor Board (CPCB)
- No PMC
- No RTM
- No associated peripherals


Access Gateway Control Element (AGWCS)
The AGWCE configuration will be as follows:

- cTCA single- or dual-processor boards as described in chapters cTCA-CEs - (single pentium CPCA) - General Description (CPCA) and Dual-Processor Board (CPCB)
- No PMC
- No RTM
- No associated peripherals


Trunk Resource Allocator Control Element (TRAMC)
The TRAMC configuration will be as follows:

- cTCA single- or dual-processor boards as described in chapters cTCA-CEs - (single pentium CPCA) - General Description (CPCA) and Dual-Processor Board (CPCB)
- No PMC
- No RTM
- No associated peripherals


Traffic Destination Code Control Element (TDC)
The TDC configuration will be as follows:

- cTCA single- or dual-processor boards as described in chapters cTCA-CEs - (single pentium CPCA) - General Description (CPCA) and Dual-Processor Board (CPCB)


- No PMC
- No RTM
- No associated peripherals


RCDS Control Element (RCDS)
The RCDS configuration will be as follows:

- cTCA single- or dual-processor boards as described in chapters cTCA-CEs - (single pentium CPCA) - General Description (CPCA) and Dual-Processor Board (CPCB)
- No PMC
- No RTM
- No associated peripherals


Traffic Management Control Element (INTM)
The INTMCE configuration will be as follows:

- cTCA single- or dual-processor boards as described in chapters cTCA-CEs - (single pentium CPCA) - General Description (CPCA) and Dual-Processor Board (CPCB)
- No PMC
- No RTM
- No associated peripherals

cTCA-processors LED-usage


call engine-CEs
There are 3 user-LEDs available on each processor board, named USR1, USR2 and USR3. Two of them are port-driven (USR2 and USR3); the third one (USR1) can be driven via IPMI commands. For the call engine domain, LEDs USR3 and USR2 will be used as known from the legacy environment. They will be directly driven using port commands. They represent the upper two LEDs known from legacy boards (mapping: legacy LED1/upper LED -> USR3 LED on cTCA; legacy LED2/middle LED -> USR2 LED on cTCA). The USR1 LED on the cTCA board will currently not be used; driving it via the IPMI controller will currently not be implemented. This will result in some LED indications that are no longer unique in some specific loading or error cases. For normal operation we will still be able to see a call engine-CE in idle for ACT and STBY processors. This will be mapped as shown in the figure below:

Figure 12. LED usage for call engine-CEs on cTCA-blades compared to legacy-blades

For call engine-CEs which are running Jaluna's OSware with several call engine instances on top, the LED function will be used differently:


The LEDs USR3 and USR2 will be used to indicate the number of actually running call engine VMs via their blinking frequency, as shown in the figure below:

Figure 13. LED usage for call engine CEs using Jaluna's OSware
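The LED conventions described in this section can be condensed into a small sketch (Python; the legacy-to-cTCA mapping is taken from the text, while the concrete blinking frequencies are only given in Figure 13 and are therefore represented symbolically here):

    # LED usage for call engine CEs on cTCA blades.
    LEGACY_TO_CTCA = {
        "legacy LED1 (upper)":  "USR3",            # directly port-driven
        "legacy LED2 (middle)": "USR2",            # directly port-driven
        # USR1 (IPMI-driven) is currently not used
    }

    def vm_count_indication(running_vms):
        # On blades running Jaluna's OSware, USR3/USR2 blink at a frequency
        # that encodes the number of running call engine VMs (Figure 13).
        return "USR3/USR2 blink pattern for %d running VM(s)" % running_vms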

Filler Panels
3U Filler Panels (FP3U)
In order to maintain EMC and airflow, all unused power supply slots must have faceplates. The filler panel is used when there is no PSU card in that slot.

6U Air-blocker Filler Panels (FP6UR / FP6U)
In order to maintain EMC, all slots must have a front panel installed. The blank front panel is used when the cPCI slot card function has not been defined and the slot is unused. In order to maintain EMC and airflow, all unused front and RTM slots must have faceplates. The filler panel is used when there is no front card resp. RTM associated with a slot.


Transition Modules
Transition cards (XB) are located on the rear panel; some of them provide access to serial ports and dry contacts. There are several types of RTMs used for the Alcatel 5020 MGC, as described below.

Fabric switch RTM "SwitchUp 6M+2G XB" (SFRTM)
This RTM provides rear access to the three 100Mbit/s spare ports of the fabric switches via RJ45 connectors. In addition it provides 2 electrical Gbit up-link connections.

Backbone switch RTM "SwitchUp 24G XB" (S24RTM)
This RTM provides rear access to the 24 electrical Gbit ports of the backbone node switches via 24 RJ45 connectors at a 2-slot face-plate (this means 2 slots at the rear are occupied). This RTM is needed if the "SwitchUp 24G" backbone switch is used. The RTM requires the space of 2 RTM slots (slots n, n+1, where n is the related processor node-slot).

OAMCE/EABS RTM "PM 1116 SFF transition board 1x73GB" (SDRTM)
This RTM provides rear access to an additional 2 x 10/100/1000 Ethernet ports (which are not connected to the internal network but used for external connections to the OAMCE/EABS), 1 USB and 2 serial ITFs. In addition it provides a dual SCSI controller; one of them (controller0) is needed for onboard HD access and the second one (controller1) has a connector for external SCSI device connection. In addition this RTM provides one onboard 2.5" SCSI disk (73Gbyte), which is connected to SCSI controller0. The SCSI-ID of the disk is fixed to 0. The RTM requires the space of 2 RTM slots (slots n, n+1, where n is the related processor node-slot).

ITCE RTM "PM 1116 SFF transition board / no disks" (OGRTM)
This RTM provides rear access to an additional 2 x 10/100/1000 Ethernet ports (which are not connected to the internal network but used to connect via the external control network to the gateways) for the related ITCE (IPACC), 1 USB and 2 serial ITFs. In addition it provides a dual SCSI controller; one of them (controller0) could be used for onboard HD access (HD not equipped) and the second one (controller1) has a connector for external SCSI device connection.

PLDA RTM "PM 1116 SFF transition board 2x36GB" (DDRTM)
This RTM provides rear access to an additional 2 x 10/100 Ethernet ports (which are not connected to the internal network but used for external connections to the OAMCE/EABS), 1 USB and 2 serial ITFs. In addition it provides a dual SCSI controller; one of them (controller0) is needed for onboard HD access and the second one (controller1) has a connector for external device connection. In addition this RTM provides two onboard 2.5" SCSI disks (36Gbyte each), which are both connected to SCSI controller0. The SCSI-IDs of the disks are fixed to 0 and 1. The RTM requires the space of 3 RTM slots (slots n, n+1, n+2, where n is the related processor node-slot).

SLN7S RTM "SLN7S Feature RTM" (OFRTM)
This RTM provides rear access to an additional 2 x 10/100 Ethernet ports (not used), 2 USB and 2 serial ITFs, and in addition it provides a dual SCSI controller (not used). It provides connectivity for one PIM, which is used for the "ADAX nnn PIM" to provide connectivity for 4 E1-links.

Fan-Module (FANB)
Cooling is accomplished using three redundant fan tray modules that provide high-velocity vertical airflow across all slots. Each hot-swappable fan tray module combines two high-performance 5" fans with integrated air intake and performance monitoring. Air dispersal allows cooling to continue to be provided to boards above a failed fan.

Air-filter (AIFA)
A pressurized, NEBS-compliant, replaceable filter is provided below the node cards.


LED Panel (LEDP)
A LED panel with two rows of LEDs is mounted above the fan modules. The top-row LEDs are indicators for failures in the corresponding slots above the LEDs. The bottom-row LEDs are bicolour; the colour choices are red or green, which can be chosen by the user via software.

Serial interface access
Each FlexAlarm module provides rear Ethernet, serial, and alarm connectivity for both FlexManager cards. If connected to the FlexManager, we can get a serial-over-IPMI connection from the active FlexManager to any FlexPower which is equipped. From there we connect physically to the serial interface console. The serial interface console offers the option to connect the serial interface to each slot via RTM (mandatory!).


Via that chain (as described above) we can connect via serial interface from the FlexManager to any processor, provided the related slot has an RTM equipped.

Figure 14. Overview on serial interface connectivity in Flex21 as used for Alcatel 5020 MGC


To avoid unreachability of a chassis in case of an Ethernet switch outage, we have in addition introduced an inter-chassis cabling per rack, which is shown in the figure below:

Figure 15. Overview on serial inter-chassis connectivity in first rack for Alcatel 5020 MGC (other racks similar)

PMC Modules
The single-P cTCA-CEs support one PMC slot with rear I/O access. Application specific hardware may reside on the PMC slots to provide the cTCA-CE with the capability to perform predefined operations.

Currently there is only one PMC used for the Alcatel 5020 MGC.

N7 termination PMC "ADAX HDC II SS7 PMC" (N7PMC)
This PMC provides E1 framers and handling of layers 1 and 2 (MTP1 / MTP2) of an N7-link termination via E1-link into the MGC (N7 termination of 64kbit/s and/or 2Mbit/s is possible, but currently only the 64kbit/s option is used). This means MTP1 and MTP2 are covered by firmware there.

PIM Modules
The "IPACC Feature RTM" from CCPU that is currently in use supports one PIM slot with rear connectivity. Currently there is only one PIM used for the Alcatel 5020 MGC.

N7 termination PIM "ADAX HDCII SS7 PIM" (N7PIM)
This PIM provides connectivity for 4 E1-links from the rear via RJ45 connectors (120 ohm). The E1-links are connected through to the PMC that is equipped on the front board (75 ohm connections can be provided using BNC connectors via a balun adapter).

Topology of internal Packet-Network


Connectivity of the CEs in Alcatel 5020 MGC
The figure "System Architecture Overview for Alcatel MGC 5020" shows that there is only one transmission network available for CE interconnections. Processing nodes of all domains are connected to the internal packet network that is set up by a group of Ethernet switches using Ethernet link-layer switching technology. The purpose of this packet network is to switch all inter- and intra-domain control and signalling traffic between the CEs that are connected to that network. All processing nodes (using the same cTCA hardware) are connected to the internal packet network only via redundant connections using two Ethernet controllers. The Ethernet switches are connected to the CEs in a star configuration so that each CE may communicate with any other CE in the network. In general an ESWT offers transmission capacity to a CE that will not be degraded by any other CE's communication needs.

Plane- and Stage-Configurations


In the Alcatel 5020 MGC we use a two-plane topology, which means that each CE that participates in communication via the packet network will be connected via two ports to two independent networks, which are not interconnected. The two networks are called PlaneA and PlaneB.

In addition, for the Alcatel 5020 MGC we use a "staged" network topology that is extendable for growing exchanges. The 1st stage is formed by a group of fabric switches equipped in the Flex21 chassis, to which all processing nodes of all domains are connected. These 1st stage switches are inter-connected via their Gbit up-links in a way that avoids any redundant paths. This is necessary because we have to avoid configurations that have loops in the topology (to avoid broadcast storms and packet forwarding in the loop). Based on the results of studies done for signalling server and call server, we will not use the spanning tree protocol (IEEE 802.1d) for this purpose but use appropriate network topologies instead.

Note: the resilient link concept (which was used for signalling server and call server configurations) is not used anymore either, because this was a proprietary 3Com technology, which we are not using in our Alcatel 5020 MGC. This is reflected in the display modules, which will represent the actual topologies.

The decisions mentioned above and the fact that we have only 2 uplink ports will limit the one-stage topology to a maximum of three 1st stage ESWTs (if we want to avoid potential bottlenecks between the 1st stage switches in case of a daisy-chain concept), which corresponds exactly to an Alcatel 5020 MGC configuration using three Flex21 chassis (one rack).

Note: the interconnection of the three switches of a one-stage configuration is not symmetric for PlaneA and PlaneB, to ensure full redundancy even in case of a complete chassis outage; see the interconnections in the figure Physical View of packet network topology in One Rack Configuration.

In case of larger Alcatel 5020 MGC configurations a 2nd stage is introduced, which is implemented by a 6U Gbit-switch-pair located in Flex21 chassis 4 and 5 (the upper two chassis in the second iron rack). In this case all 1st stage switches will be connected to this Gbit switch-pair via their Gbit uplinks.


There will be no direct links between any of the 1st stage switches, to avoid redundant paths (see above).

Note: each Flex21 chassis requires one pair of Ethernet switches, called fabric switches (100Mb switches are used, but Gbit switches are possible as well). Gigabit ports are used to interconnect the Ethernet switches for exchanging inter-Ethernet-switch traffic.

Note: PlaneA is always composed of the fabric switches in slot 1. If there is a 2nd stage Gbit switch in addition, then the one in chassis 4 (slot 2) is assigned to PlaneA.

Note: PlaneB is always composed of the fabric switches in slot 21. If there is a 2nd stage Gbit switch in addition, then the one in chassis 5 (slot 2) is assigned to PlaneB.

The staging rule is sketched below.
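A compact restatement of the rule (Python; a minimal sketch that only encodes the stated limit of three 1st-stage switches per plane without a 2nd stage):

    # Decide whether a 2nd-stage Gbit backbone switch pair is needed.
    def needs_second_stage(num_chassis):
        # one fabric switch per plane and chassis => one 1st-stage switch per
        # plane and chassis; more than three cannot be interconnected
        # loop-free with only two Gbit uplinks each
        return num_chassis > 3

    assert not needs_second_stage(3)               # one-rack configuration
    assert needs_second_stage(4)                   # larger configurations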


In the following figures we find the logical and physical topology views of a One-Rack and a Two-Rack configuration of the Alcatel 5020 MGC. Note that the potential usage of dual-processor boards and call engine VMs is already considered there. For more details see chapters 2.2.6.5 and 2.2.6.6:


Figure 16. Logical View of packet network topology in a One-Rack-configuration

Figure 17. Logical View of packet network topology in a fully equipped Two Rack Configuration


Figure 18. Physical View of packet network topology in One Rack Configuration

Figure 19. Physical View of packet network topology in Two Rack Configuration


Fabric Switches
According to PICMG 2.16 each chassis contains two fabric switches that provide - via the backplane - two Ethernet connections to each node slot. Each fabric switch belongs to one of the two planes in our MGC topology view. As we have 19 node slots plus two FlexManagers (requiring 21 ports in total) but the fabric switch is implemented as a 24-port switch, there are three spare ports that are externally available via a dedicated RTM. These ports can be used to connect additional equipment. Currently only one of these connections is needed: the connection to the local operator position.

There are always exactly 2 fabric switches per chassis, which means that we will have 6 of them in a One-Rack configuration with three chassis, 12 of them in a fully equipped Two-Rack configuration with six chassis, etc. - but note that we currently support no configuration larger than 24 chassis in the Alcatel 5020 MGC.

In the Alcatel 5020 MGC the fabric switches will provide 10/100Mbit/s at the 2.16 ports in the backplane - but we also have the option to replace them by Gbit switches in the future (which are the same ones as described below for backbone switch usage - but this requires an integration cycle before being introduced). The backplane is Gbit-ready, but currently there is no need to use this option.

The fabric switches (switchUp 24+2R layer2) that are used for the Alcatel 5020 MGC provide:

- Full PICMG 2.16 compliance
- Layer-2 switching at wire-speed
- Manageable via serial or Ethernet
- SNMP agent supporting MIBs according to IETF RFCs
- 24 Fast Ethernet 10/100 Mbit/s ports
- Three of those 10/100 ports accessible via RJ45 connectors at the rear (via RTM)
- Two copper 10/100/1000 up-link ports accessible via RJ45 connectors at the rear (via RTM)


Backbone Node Switches


As soon as we have to use more than 3 chassis for a configuration, we have to connect at least four 1st stage switches per plane. Our solution for this case is the introduction of a 2nd stage using a pair of Gbit backbone switches. This switch-pair is likewise implemented as a 6U switch blade that will be plugged into node slots and is therefore called a node-switch. Because we use this switch-pair to form a 2nd stage "backbone" of our internal packet network, we also call this specific node-switch a backbone-switch. For all types of configurations we have to use a 24-port Gbit switch (even if a smaller switch would be sufficient, because the 8-port variant is now end-of-life). The backbone-switch position is fixed to slot 2.

- In the Flex21 chassis the 8-port Gbit switch can be plugged into any node-slot position.
- For the 24-port Gbit switch only slot 2 is possible. The J4 connector that provides rear I/O access enables rear connection of all 24 Gbit links for slots 2 and 20 only, because for these slots the backplane is not connected to the H.110 bus. In case any other node slot is used (which is possible as well), only ports 1 to 19 would be available due to the lack of J4/P4 pass-through.
- The 24-port Gbit switch needs a 2-slot wide RTM (to provide all 24 port connectors at the rear), which requires that slot 3 must not contain any RTM (in case slot 20 were used for the 24-port Gbit switch we could not allow an RTM on slot 21, which is impossible).
- Currently no 1-slot RTM with an optimized number of connectors is available, except the one used for Gbit fabric switches, which provides only 4 rear link connectors.

The backbone node switches (switchUp 8G/24G) that are used for the Alcatel 5020 MGC provide:

- Layer-2 switching at wire-speed
- Layer-3 switching
- Manageable via IPMI


- Manageable via serial or Telnet (CLI)
- SNMP agent supporting MIBs according to IETF RFCs
- 24 Gbit Ethernet ports
- All of these 8 / 24 ports accessible via RJ45 connectors at the rear (via RTM), plus two RJ45 connectors for two Fast Ethernet maintenance ports

On-board Switches
As described in the chapter Dual-Processor Board (CPCB), there is one 8-port on-board switch on each dual-processor board. This switch will not be seen / considered in the network topology. It is not considered an independent item that could be maintained or replaced independently, but is in fixed association with the related board itself. This on-board switch will be configured autonomously at power-on in such a way that 2 VLANs are set up, each of them representing a network plane (planeA / planeB).
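A sketch of the resulting port partitioning (Python; the port names on the 8-port on-board switch are illustrative assumptions):

    # The on-board switch is split into two pre-configured VLANs at power-on,
    # one per network plane; frames never cross from one plane to the other.
    VLAN_PLANE_A = {"uplink_backplane_A", "cpu1_eth0", "cpu2_eth0"}
    VLAN_PLANE_B = {"uplink_backplane_B", "cpu1_eth1", "cpu2_eth1"}

    def same_plane(port_a, port_b):
        """Frames are only switched between ports of the same VLAN/plane."""
        return ({port_a, port_b} <= VLAN_PLANE_A
                or {port_a, port_b} <= VLAN_PLANE_B)

    assert same_plane("cpu1_eth0", "cpu2_eth0")        # both on plane A
    assert not same_plane("cpu1_eth0", "cpu2_eth1")    # planes stay isolated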

Virtual Ethernet Switches in JALUNAs call engine-virtualisation


There will be a Virtualisation of several call engine-CEs on one processor hardware (which might be on a single-P or dual-P blade). The Virtualisation approach of Jaluna includes the provisioning of a virtual ethernet switch via software on that processor instance. This switch will not be seen / considered in the network topology. It is not considered as an independent item that could be maintained or replaced independently but is in fixed association with the related Jaluna software. This virtual switch will be used to interconnect the softswitch ethernet planes and the virtualised ethernet planes on each blade. It connects logically two different ethernet network segments. The networking stack of the primary LINUX will bridge the ethernet traffic between the two subnets to make it appear as a single ethernet network. This gives the view of having a virtual ethernet switch on each processor. Note: the virtual ethernet MAC addresses used internally for the different VMs are visible outside on the real ethernet planes.


This virtual switch will dynamically learn MAC addresses (real and virtual ones) and provide traffic accounting. In addition, features could be enabled by configuration (e.g. aging, spanning tree - currently not used in the Alcatel 5020 MGC). For details of the Ethernet addressing scheme see chapter General Addressing/Communication and Location Identification Issues.

Note that we can define the following command line parameters in this approach:

- buseth-mac=(<net#>,<addr>[/br][,<rxsize>]): with this option we can define the virtual MAC address to be used by a specific OS instance when connecting to the virtual switch.
- buseth-net=(<net#>,<mask>): with this option we can use an optimisation by specifying a network mask that is used to speed up internal transmission (a check is possible whether a MAC address is part of the given virtual network).

This virtual switch will be configured autonomously at the start of the Jaluna software package, using parameters that are passed to the primary LINUX as command line parameters.
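A purely illustrative example of such a command line fragment (the network number, MAC address and mask values are hypothetical; only the parameter syntax is taken from the text above):

    buseth-mac=(0,02:00:00:00:10:01) buseth-net=(0,FF:FF:FF:FF:00:00)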


The figures below will visualize all the concepts described above:

Figure 20. Switching topology on single-P and dual-P PBA without Virtualisation


Figure 21. Switching topology on single-P PBA with Virtualisation

Figure 22. Switching topology on dual-P PBA with Virtualisation


ELM's Virtual Switch Concept


A totally different approach is needed to make life easier for ELM: the Ethernet Link Maintenance is able to provide a "virtual switch concept", which means that ELM will see one physical switch as two independent virtual switches in its data (generated offline for ELM). This means that:

- ELM always sees a clear switch stage structure
- ELM will see only the logical view represented in data
- One physical switch may be represented in ELM-data as 2 independent switches, as follows:

ELM sees two switches in stage 2, where each of them has max. 12 links to 1st stage switches (this is due to the fact that we can provide only 16 box-IDs for the SBL-NA of the first stage switches; see figure below). Both virtual switches are implemented by the same physical backbone switch (the link in between is likewise only a virtual one). This applies to large configurations, i.e. to 2nd stage Gbit switches when the 24-port version is used (see chapter Ethernet Link Maintenance (ELM)). Maintenance will map both virtual switches to the same physical switch (RIT), which means that the maintenance view fits the physical reality and reports to the operator will refer to physical RITs only. The resulting split is sketched below.


The following figure shows the virtual switch concept and explains the SBL network addressing impact:

Figure 23. Virtual Switch Concept solving SBL-network-address range-issue


Ethernet Link Maintenance (ELM)


Instead of using available protocols for dynamically managing and monitoring the internal network (e.g. SNMP), we use a mode of operation that configures the switches once at system set-up time (for the 3rd party switches via the TELNET interface) and avoids any OAM intervention afterwards. For monitoring the Ethernet links and switching elements, the ELM subsystem was introduced. This subsystem checks for broken links in the network by starting communication checks from outside. ELM is implemented in the call engine domain and the ITCE domain (including diskless servers).

ELM functionality: any CE of the related domains which is connected to the internal packet network can be used to trigger a "ping" to any other CE in the network. But the box-to-box check, which supervises the reachability between the 1st stage switches, can be triggered from call engine-OSN CEs only.

Note: therefore it is required that we have at least one call engine-CE per CCPU chassis (better 2 or more, to overcome failure cases) to enable the full functionality of ELM.

For this purpose ELM needs network configuration data that describe the topology of the internal network. From that information ELM concludes which target CEs will be selected for communication checks and which faults must be reported in case of a failure. The configuration data will also enable error correlation to detect total failures of switches.

With the introduction of the virtualisation concept for call engine-CEs, the ELM selection of target CEs for connectivity checks now considers the correlation of CEs to hardware blades. A CE which is located on the same hardware blade will not be selected as destination for any communication checks; this selection rule is sketched below.

ELM data must always be able to reflect the network topologies, and therefore it is likely that ELM (configuration data and reporting facilities) will be impacted whenever a change or extension of the possible network topologies occurs.
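The target-selection rule can be sketched as follows (Python; the CE record layout is an illustrative assumption):

    # ELM target selection: a CE on the same hardware blade as the
    # triggering CE is never chosen as destination of a connectivity check.
    def select_targets(all_ces, source):
        return [ce for ce in all_ces
                if ce["name"] != source["name"]
                and ce["blade"] != source["blade"]]

    ces = [{"name": "CE1", "blade": "slot3"},
           {"name": "CE2", "blade": "slot3"},      # VM / 2nd CPU on same blade
           {"name": "CE3", "blade": "slot4"}]
    assert select_targets(ces, ces[0]) == [{"name": "CE3", "blade": "slot4"}]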


Connectivity Aspects
To avoid accidentally ending up with a simplex configuration, we have to strictly configure either auto/auto or fixed-duplex/fixed-duplex for the communication links. In the Alcatel 5020 MGC all controllers of the processing boards and all ports of the Ethernet switches will be configured to "auto-negotiation" (IEEE 802 standard). In the Alcatel 5020 MGC all controllers of boards loaded with Jaluna's VM software will be configured to "promiscuous mode".

Switch Port Assignment


All cTCA processors are automatically equipped at the fabric switch ports (PICMG 2.16) according to their slot position. The spare ports at the fabric switches are currently not used, with the following exception: the local operator position will be connected to PlaneB of the 1st chassis and PlaneA of the 2nd chassis. For more details see chapter OA&M (OAM-concept).

EMC - System requirements


The complete call engine MGC system complies with:

- for Emission: class A
- for Immunity: level 2 OTC

Note: OTC = equipment installed other than in telecommunication centres

Safety
The Flex21 chassis complies with IEC 60950 and:

- for Europe: EN 60950
- for USA: UL 60950


Environmental
ETSI
The system meets ETSI environmental criteria according to ETS 300 019-1 for:

- Class 1.2 equipment for storage
- Class 2.3 equipment for transportation
- Class 3.2 equipment for operation

NEBS
The system meets NEBS Level 3 environmental criteria according to Bellcore SR-3580, GR-63-CORE and GR-1089-CORE.


Platform Architecture
Software-Platform of ITCE-Domain
The SW platform of our ITCEs consists of the following parts:

- a commercial POSIX-compliant operating system
- proprietary middleware called "server platform"
- some 3rd party SW products
The following figure gives an overview on these building blocks and related components:

Figure 1. ITCE Software-Platform Overview



POSIX compliant operating systems


The operating systems for our non-legacy CEs are:

- LINUX MVL CGE 3.0 (kernel 2.4.x) for all diskless ITCEs
- SOLARIS x86 Release 7 (INTEL platform edition) for the OAM-node

Migration of OAMCE from SOLARIS7 to MVL CGE 3.1


For the OAMCE we now use MVL CGE 3.1 instead of SOLARIS7. The major reasons for this change were:

- end of life for SOLARIS7
- a unified UNIX OS in the MGC -> reduced maintenance effort and licence cost
- an open OS for easier introduction of new features
Consequences of the OS migration are:

- remove the COM SERVER to avoid high porting effort (replaced by other solutions)
- migrate 3rd party SW to the latest LINUX version (where available)
- replace some 3rd party packages by freeware due to:
  - unavailability of a LINUX version
  - cost reduction
  - alignment with the ALCATEL corporate recommendation

3rd party SW Products


The following 3rd party products from the NAOS list are considered to be platform related:

- Montavista LINUX CGE 3.1 - all ITCEs
- Oracle Enterprise Edition 10g - OAM
- Oracle JDBC drivers (freeware) R 9.2.0.3 - OAM
- Apache WebServer (freeware) R 1.3.31 - OAM
- Prismtech TAO (freeware) R 1.4.1.0 - OAM
- ILOG DB Link R 5.0 - OAM
- ISC DHCP Server (freeware) - OAM
- Berkley TFTP Server (freeware) - OAM
- snmp in the CGE3.1 distribution - OAM
- SUN Java Runtime Environment R 1.4.2 - OAM

The middle-ware layer called "SERVER PLATFORM"


The Server Platform is an operating-system-independent package of software subsystems which provides platform services for end-user applications. The services provided by the Server Platform apply to applications that may range from administration and maintenance functions up to near-real-time call processing applications. The Server Platform was intended to consist of six different software subsystems:

- Core Platform Support components (SpCor)
- Common Application components (SpApp)
- Hardware Configuration and Management components (Sphardwarem)
- Human Machine Interface components (SpHmi)
- System Management components (SpSys)
- Database Management components (SpDbm)
The Core Platform Support subsystem was intended to provide a set of sub-functions like (note that not all functions may be implemented at the current state):

- Node Management
- Application Management
- Server Forwarding Agent
- Communications
- Thread Management
- Timer Management
- Event Management
- Status Return Interface
- POSIX Wrappers
- Memory Management
- Initialization Management
The Common Application subsystem was intended to provide a set of sub-functions like (note that not all functions may be implemented at the current state):

- Trace Management
- Alarm Management
- Report Management
- Report Analyzer
- Command Logging
- History Log Management
- Common Services
- NE Administration
The Hardware Configuration and Management subsystem was intended to provide a set of sub-functions like (note that not all functions may be implemented at the current state):

- Configuration Management
- Performance Monitoring
- Fault Management
- Fault Isolation
- Disk Management
The Human Machine Interface subsystem was intended to provide a set of sub-functions like (note that not all functions may be implemented at the current state):

- Web Based Enterprise Management Framework
- Launcher
- Physical Equipment View / Logical Equipment View
- Security Management


The System Management subsystem was intended to provide a set of sub-functions like (note that not all functions may be implemented at the current state):

- managing all of the node protection groups that make up the system
- co-ordination of redundancy switchover for subordinate nodes, based on information received from node management in each of the protection groups

Generally, when system management is available on a system, node management of the node protection groups will be configured to "No Auto Failover" so that all protection switching is co-ordinated at the system level. All system and node maintenance commands interface with system management, which then directs the behaviour of the node management software on the appropriate processors. System management provides certain safety checks, such as preventing accidental removal of the last in-service processor in a protection group.

The Database Management subsystem can be described as follows: there are 2 database systems used, RDBMS and RTDB.

RDBMS (Relational DataBase Management System)

- This is the main database engine and resides in the OAM manager
- It stores data tables on disk and may have tables in memory (applications on the OAM manager use only the DBMS)
- It updates the RTDB with copies of those tables that are deemed real-time tables, which are the tables:
  - on which applications in ITCEs require any access
  - on which any application requires fast access

RTDB (Real Time DataBase), consisting of RTDB Client and RTDB Server

- The RTDB Server is the part of the RTDB residing in the OAM manager; it:
  - keeps track of real-time tables
  - manages the distribution of real-time tables to ITCEs
  - maintains a registration list (what clients are interested in what [ranges of which] tables); ITCE applications register (setup during application init)
- The RTDB Client is the part of the RTDB residing in the ITCE processors; it:
  - interacts with the RTDB server
  - receives all real-time tables for which ITCE applications require access

RDBMS is implemented by ORACLE 10g. The two OAM processor blades are operated in active/standby mode. Oracle multi-master replication is configured on both OAM processors to reach the required high availability (5 NINES). In case of an active/standby failover, client applications will re-connect to the Oracle database on the new active processor blade (no transparent failover required). Access to the Oracle database is only performed via dedicated Alcatel Softswitch applications:

- via graphical user interface (GUI) applications for data provisioning and retrieval
- via the real-time database for data distribution to the diskless call control processor blades

The RTDB is implemented as a proprietary solution called iDM; its registration and distribution behaviour is sketched below.
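A minimal sketch of that behaviour (Python; the class and method names are illustrative assumptions, not the iDM API):

    # RTDB server: keeps a registration list of which ITCE client is
    # interested in which real-time tables, and distributes updates.
    class RtdbServer:
        def __init__(self):
            self.registrations = {}                # table -> set of clients

        def register(self, client, table):
            # called during ITCE application init
            self.registrations.setdefault(table, set()).add(client)

        def distribute(self, table, rows):
            # push a real-time table update to all interested ITCE clients
            for client in self.registrations.get(table, ()):
                client.receive(table, rows)

    class RtdbClient:
        def __init__(self, name):
            self.name, self.tables = name, {}

        def receive(self, table, rows):
            self.tables[table] = rows              # local real-time copy

    server, itce = RtdbServer(), RtdbClient("SIPCE-1")
    server.register(itce, "route_table")
    server.distribute("route_table", [("prefix", "destination")])
    assert itce.tables["route_table"] == [("prefix", "destination")]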


Some iDM (ALCATEL proprietary RTDB) qualities are presented in the table below:


The following figure will give a basic overview on the usage of databases on the ITCE-side:

Figure 2. ITCE Databases

General Addressing/Communication and Location Identification Issues


The following levels of connectivity need to be distinguished:

1. Physical Transmission Connectivity (connecting e.g. Ethernet controllers (NICs) to each other via ESWT)
2. Transport Protocol Connectivity (each CE can be (physically) addressed and is able to talk the same transport protocol in order to exchange data packets)
3. Addressing Capability (addressing (logically) a dedicated CE / a distinct application to receive a message)


Physical Transmission Connectivity


The physical connectivity gains access to a transmission medium (ESWT). The MAC layer is provided by the Ethernet controllers. For security reasons each interface is composed of two ports. All cTCA PBAs have 2 Ethernet interfaces connected to the Ethernet switches, composing a 2-plane switching topology (in case of dual-P boards we have 2x2 Ethernet interfaces on one blade).

Load / Traffic Distribution across Ports
There are several methods for how the interfaces/ports are used (this is dependent on the project):

- ITCEs running the LINUX OS make use of one Ethernet interface only, until it fails. The second ITF is put into service when the other link fails or the destination cannot be reached via that port and its Ethernet switch plane.
- The middleware that is used for communication between ITCEs and call engine-CEs also always uses only one dedicated port to address an ITCE and swaps only in case of a failure.
- call engine CEs running call engine-OSN make use of both Ethernet ports (toggling) to communicate amongst each other (internally in the call engine domain).
- cPCI blades with the virtualisation approach for call engine CEs behave as in the call engine environment: both interfaces are toggled for each communication.

For ITCE communication between two parties (CEs) always one ESWT plane is used, in order to guarantee the sequence of packets, because only the IP protocol is used. IP does not support an acknowledge mechanism, flow control, re-sequencing or retransmission etc. like TCP does. If one port/link fails, communication to all destinations shall still be possible via the remaining Ethernet port.

Ethernet MAC Address
For the two ESWT planes, which are not interconnected, the same MAC address could in principle be used at both interfaces (isolated subnets). But as a rule of thumb, different addresses are defined according to DSN principles. The addresses used differ by H'40. All MAC address values must be unique within a shared medium, e.g. an ESWT plane. The Link Maintenance software uses the PCE and (PCE+H'40) as reference numbers for administrative reasons, as sketched below.


Basically a distinction shall be made between the physical board reference number (PCE-ID) of a PBA and an access port number (NA or MAC) used to connect the board to a transmission medium (Alcatel 5020 MGC: ESWT only). In the past legacy call engine environment the PCE-ID was equivalent to the NA, but it will not be anymore. The NA was a port number of our DSN switch. The boards are no longer connected to the DSN and may have several ports. The MAC address composed for the Alcatel 5020 MGC is enlarged by a CPU identification field in order to support processor blades with several CPUs mounted. There is also a field introduced for the virtualisation approach, because each individual VM (= call engine CE) will have its own PCE-ID equal to a MAC address. Because the PCE-ID cannot cover all information fields in its value range anymore, some of the bit fields are used alternatively.

Ethernet transmission flow
All Ethernet communication between two distinct CEs (exception: call engine message communication; see chapter Ethernet transmission flow) shall be handled via the same ESWT interface (port) in order to guarantee a contiguous message/packet flow in sequence. TCP/IP in principle supports re-sequencing, but this would cost an additional delay, which shall be avoided in telecommunication systems. In case of a failure the traffic is switched to the other Ethernet interface and therefore the other ESWT plane is used. All communication relationships are distributed randomly over the two switch planes. In the Jaluna virtualisation approach each call engine VM makes use, according to call engine principles, of both Ethernet links alternately. The virtual Ethernet switch implemented on the processor blade behaves like the two-layered approach with no interconnections supported. For the CPCB blade (dual-processor blade) the physical Ethernet switch onboard is operated in VLAN mode, which allows keeping the two planes isolated from each other for any communication flow.

Resilient Link
The chosen Ethernet switch topology guarantees deterministic data flow routes with no racing conditions. The switching matrix follows a staged approach and looks similar to the well-proven DSN topology. Therefore no resilient links exist which could be used as alternative routes for data packet flow. Fault
