1 x BPA with 2 fabric switch positions, 19 equipment positions, and the related RTM PBA positions
19 PSU and 2 merged PSU/FlexManager positions
3 x Fan Units with 2 fans each
2 x Power Input Units
1 x Dust Filter
1 x LED panel
Backplane / Midplane
The standard Flex21 midplane provides PICMG 2.16 support for all slots and H.110 support for slots 3 through 19 (slots 2 and 20 allow the use of Ethernet switches instead). PICMG 2.16 midplane support includes dual Ethernet to every slot at up to Gigabit speeds (the related jumpers on the midplane must be equipped to enable all slots for 2.16 Ethernet communication). H.110 midplane support allows the transport of TDM data between slots for payload applications.
Each of the redundant FlexAlarm power input trays supports both connections for one of these redundant power feeds, allowing either of the power input trays to support 48VDC distribution to the entire chassis. This configuration allows either power-input tray to be replaced without affecting power to the platform.
Power-supplies
3U FlexPower modules enable independently powered node slots in a flexible chassis. Located above each node slot, these hot-swappable modules also incorporate serial and IPMI controllers to enable unified chassis and system management capabilities.
Processor: Cygnal 8051 microprocessor running at 20MHz
Memory: 16kByte FLASH / 1.2kByte RAM
FlexManagers
3U FlexManager modules provide redundant, modular, unified chassis and system level management capabilities with open, easily extensible interfaces for integrating with any application. Located above each fabric slot, these hot-swappable modules also provide power to each Ethernet fabric switch.
Processor: PXA250 (ARM) running at 800MHz
Memory: 64Mbyte RAM
Operating System: LINUX
cTCA-CEs (single Pentium CPCA) - General Description (CPCA)
The CEs used in the new cTCA chassis are called either ITCEs (which does not define a CE type but is a general term to distinguish these CEs from the ones running the call engine legacy OSN), call engine CEs, or they are of type "server". The board used for all of these CEs consists of a PCB assembly with a CPU core, DRAM, non-volatile memory, and a series of user interfaces. The ITCE baseboard acts as a non-system-slot controller and complies with the Basic Hot Swap specifications as defined in the cPCI Hot Swap Specification, PICMG 2.1 R1.0, August 3, 1998.
Board name: Cheetha-Cr single-slot 6U board
Pentium M processor, 1.6GHz, providing a 400MHz front side bus
One PMC expansion site is provided on board. Each PMC has front-panel as well as rear I/O access. Up to now this PMC site is used for SLN7S only, but it may be used in the future to support e.g. additional I/O requirements of an application.
1Mbyte of level 2 cache
Separate 32kByte level 1 caches for instruction and data
1Gbyte DRAM on-board memory providing ECC Error Detection and Correction (EDAC) protection
The following interfaces are provided:
32-bit 33 MHz PCI interface on backplane
Two 10/100/1000Base-T Ethernet links to backplane (PICMG 2.16)
Two 10/100Base-Tx Ethernet ports via backplane on RTM (if equipped)
Four serial ports, two on the front panel and two via backplane on RTM (if equipped)
Three USB ports, one on the front panel and two via backplane on RTM (if equipped)
Two IPMB interfaces on backplane (PICMG 2.9)
The BIOS supports PXE (Preboot eXecution Environment) for network booting
Dual-Processor Board (CPCB)
There is a need to further reduce the footprint of the Alcatel 5020 MGC. This will be done by leveraging Flex21's high cooling capability of 75W/slot to introduce a dual-processor board into the system. The following assumptions hold when introducing the dual-processor board:
The dual-processor board houses (more or less) twice the components of the Cheetha-Cr blade. It is implemented with two independent (except for the power supply) Pentium M processors, each having 1GByte RAM (default).
It is assumed that it will not be possible to equip a PMC on the dual-processor board (also no PIM usage), so we cannot use this board for the SLN7S implementation; in this case it will be implemented using single-processor boards. If for any reason PMC usage is possible with the dual-processor board, we will have one site connected to CPU1 via 64-bit 66MHz PCI. A PIM on the corresponding RTM may be used in this case.
Serial interface: serial routing for the dual-processor board will be done in a way that two serial ports per CPU (COM1/COM2) are provided on the front panel. Two serial ports (COM3 and COM4) will be provided via RTM for both CPUs, switched by a serial port selector that gets its selection signals via the IPMI controller. Via the IPMI controller of CPU#1 one can switch COM3 and COM4 either to CPU#1 or to CPU#2.
Ethernet: an on-board switch is used on the dual-processor board to connect both CPUs to the backplane. Plane A and B will be kept separate by using VLANs, which will be pre-configured on the board.
ESWT topology aspects: the topology remains unchanged. This means that
ELM will NOT see the new on-board switch on the dual-processor board. The number of ETSLs per fabric switch has to be adapted (38 if we consider dual-processor boards only; 152 if we consider up to 4 VMs per CPU)
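The quoted ETSL counts follow directly from the slot and CPU counts; a quick arithmetic check (a sketch, assuming 19 node slots per chassis, 2 CPUs per dual-processor board, and up to 4 VMs per CPU):

```python
# Arithmetic behind the ETSL counts per fabric switch quoted above.
NODE_SLOTS = 19          # node slots served by one fabric switch
CPUS_PER_BOARD = 2       # dual-processor board
VMS_PER_CPU = 4          # up to 4 call engine VMs per CPU

etsl_dual = NODE_SLOTS * CPUS_PER_BOARD   # dual-processor boards only
etsl_vms = etsl_dual * VMS_PER_CPU        # with up to 4 VMs per CPU

assert etsl_dual == 38
assert etsl_vms == 152
```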
Reset Control: the dual-processor board enables resetting either of the CPUs via different reset inputs: IPMI command, front panel reset, external cPCI reset. The following figures illustrate the dual-processor solution as described above:
Figure 10. Principles of dual-pentium board implementation (RTM / PMC / PIM usage)
ITCE-types
The following sections describe the hardware implementation of all ITCE types that are implemented with cTCA hardware, covering also the potential usage of PMCs, RTMs and peripherals.
Resource Manager (RM-CE)
The RMCE configuration will be as follows:
No PMC
RTM: PM 1116 SFF transition board / no disks - for serial and Eth ITF access
No associated peripherals
OAM Agent (OAM)
The OAM configuration will be as follows:
No PMC
RTM: yes - 2 slot "PM-1116 SFF transition board 1x73GB disk" - for 2 serial, 1 USB and 2 Gbit Eth ITF access, 1 SCSI disk, 2 SCSI Ctrl., 1 external SCSI ITF
PMC for processing the layers MTP1 and MTP2: ADAX "HDC II"
RTM: yes - 1 slot "SLN7S Feature RTM" - for 2 serial, 2 USB and 2 100Mbit Eth ITF access, 2 external SCSI ITFs plus HDCII PIM from ADAX (corresponding to the HDCII PMC)
No PMC
RTM: yes - 2 slot "PM-1116 SFF transition board 1x73GB disk" - for 2 serial, 1 USB and 2 Gbit Eth ITF access, 1 SCSI disk, 2 SCSI Ctrl., 1 external SCSI ITF
No PMC
Alcatel 5020 MGC
No PMC
RTM: yes - 3 slot "PM-1116 SFF transition board 2x36GB" - for 2 serial, 1 USB and 2 Gbit Eth ITF access, 2 SCSI disks, 2 SCSI Ctrl., 1 external SCSI ITF
cTCA single- or dual-processor boards as described in chapters cTCA-CEs (single Pentium CPCA) - General Description (CPCA) and Dual-Processor Board (CPCB)
(legacy LED1/upper LED -> USR3 LED on cTCA; legacy LED2/middle LED -> USR2 LED on cTCA). The USR1 LED on the cTCA board will currently not be used; driving it via the IPMI controller will currently not be implemented. This will result in some LED indications that are no longer unique in some specific loading or error cases. For normal operation we will still be able to see a call engine CE in idle for ACT and STBY processors. This will be mapped as shown in the figure below:
Figure 12. LED usage for call engine CEs on cTCA blades compared to legacy blades
For call engine CEs which are running Jaluna's OSware with several call engine instances on top, the LED function will be used differently:
The LEDs USR3 and USR2 will be used to indicate the number of actually running call engine VMs via their blinking frequency, as shown in the figure below:
Figure 13. LED usage for call engine CEs using Jaluna's OSware
Filler Panels
3U Filler Panels (FP3U)
In order to maintain EMC and airflow, all unused power supply slots must have faceplates. The filler panel is used when there is no PSU card in that slot.
6U Air-blocker Filler Panels (FP6UR / FP6U)
In order to maintain EMC, all slots must have a front panel installed. The blank front panel is used when the cPCI slot card function has not been defined and the slot is unused. In order to maintain EMC and airflow, all unused front and RTM slots must have faceplates. The filler panel is used when there is no front card or RTM associated with a slot.
Transition Modules
Transition cards (XB) are located at the rear of the chassis, and some of them provide access to serial ports and dry contacts. There are several types of RTMs used for the Alcatel 5020 MGC, as described below.
Fabric switch RTM "SwitchUp 6M+2G XB" (SFRTM)
This RTM provides rear access to the three 100Mbit/s spare ports of the fabric switches via RJ45 connectors. In addition it provides 2 electrical Gbit up-link connections.
Backbone switch RTM "SwitchUp 24G XB" (S24RTM)
This RTM provides rear access to the 24 electrical Gbit ports of the backbone node switches via 24 RJ45 connectors at a 2-slot face-plate (this means 2 slots at the rear are occupied). This RTM is needed if the "switchUp 24G" backbone switch is used. The RTM requires the space of 2 RTM slots (slots n, n+1, where n is the related processor node-slot).
OAMCE/EABS RTM "PM 1116 SFF transition board 1x73GB" (SDRTM)
This RTM provides rear access to an additional 2 x 10/100/1000 Ethernet ports (which are not connected to the internal network but used for external connections to the OAMCE/EABS), 1xUSB and 2xserial ITFs. In addition it provides a dual SCSI controller; one of them (controller0) is needed for onboard HD access and the second one (controller1) has a connector for external SCSI device connection. In addition this RTM provides one onboard 2.5" SCSI disk (73Gbyte), which is connected to SCSI controller0. The SCSI-ID of the disk is fixed to 0. The RTM requires the space of 2 RTM slots (slots n, n+1, where n is the related processor node-slot).
ITCE RTM "PM 1116 SFF transition board / no disks" (OGRTM)
This RTM provides rear access to an additional 2 x 10/100/1000 Ethernet ports (which are not connected to the internal network but used to connect via the external control network to the gateways) for the related ITCE (IPACC), 1xUSB and 2xserial ITFs. In addition it provides a dual SCSI controller; one of them (controller0) could be used for onboard HD access (HD not equipped) and the second one (controller1) has a connector for external SCSI device connection.
PLDA RTM "PM 1116 SFF transition board 2x36GB" (DDRTM)
This RTM provides rear access to an additional 2 x 10/100 Ethernet ports (which are not connected to the internal network but used for external connections to the OAMCE/EABS), 1xUSB and 2xserial ITFs. In addition it provides a dual SCSI controller; one of them (controller0) is needed for onboard HD access and the second one (controller1) has a connector for external device connection. In addition this RTM provides two onboard 2.5" SCSI disks (36Gbyte each), which are both connected to SCSI controller0. The SCSI-IDs of the disks are fixed to 0 and 1. The RTM requires the space of 3 RTM slots (slots n, n+1, n+2, where n is the related processor node-slot).
SLN7S RTM "SLN7S Feature RTM" (OFRTM)
This RTM provides rear access to an additional 2 x 10/100 Ethernet ports (not used), 2xUSB and 2xserial ITFs, and in addition it provides a dual SCSI controller (not used). It provides connectivity for one PIM, which is used for the "ADAX nnn PIM" to provide connectivity for 4 E1-links.
Fan-Module (FANB)
Cooling is accomplished using three redundant fan tray modules that provide high-velocity vertical airflow across all slots. Each hot-swappable fan tray module combines two high-performance 5" fans with integrated air intake and performance monitoring. Air dispersal allows cooling to continue to be provided to boards above a failed fan.
Air-filter (AIFA)
A pressurized, NEBS compliant, replaceable filter is provided below the node cards.
LED Panel (LEDP)
An LED Panel with two rows of LEDs is mounted above the fan modules. The top row LEDs are indicators for failures in the corresponding slots above the LEDs. The bottom row LEDs are bicolour; the colour - red or green - can be chosen by the user via software.
Serial interface access
Each FlexAlarm module provides rear Ethernet, serial, and alarm connectivity for both FlexManager cards. If connected to the FlexManager we can get a serial-over-IPMI connection from the active FlexManager to any equipped FlexPower. From there we connect physically to the serial interface console. The serial interface console offers the option to connect the serial interface to each slot via RTM (mandatory!).
Via that chain (as described above) we can connect via the serial interface from the FlexManager to any processor, provided the related slot has an RTM equipped.
Figure 14. Overview on serial interface connectivity in Flex21 as used for Alcatel 5020 MGC
To avoid unreachability of a chassis in case of an Ethernet switch outage, we additionally introduced inter-chassis cabling per rack, as shown in the figure below:
Figure 15. Overview on serial inter-chassis connectivity in first rack for Alcatel 5020 MGC (other racks similar)
PMC Modules
The single-P cTCA-CEs support one PMC slot with rear I/O access. Application specific hardware may reside on the PMC slots to provide the cTCA-CE with the capability to perform predefined operations.
Currently there is only one PMC used for the Alcatel 5020 MGC.
N7 termination PMC "ADAX HDC II SS7 PMC" (N7PMC)
This PMC provides E1 framers and handling of layers 1 and 2 (MTP1 / MTP2) of an N7-link termination via E1 link into the MGC (N7 termination at 64kbit/s and/or 2Mbit/s is possible, but currently only the 64kbit/s option is used). This means MTP1 and MTP2 are covered by firmware there.
PIM Modules
The "IPACC Feature RTM" from CCPU that is currently in use supports one PIM slot with rear connectivity. Currently there is only one PIM used for the Alcatel 5020 MGC.
N7 termination PIM "ADAX HDCII SS7 PIM" (N7PIM)
This PIM provides connectivity for 4 E1-links from the rear via RJ45 connectors (120 ohm). The E1-links are connected through to the PMC that is equipped on the front board (75 ohm connections can be provided using BNC connectors via a balun adapter).
There will be no direct links between any of the 1st-stage switches, to avoid redundant paths (see above).
Note: Each Flex21 chassis requires one pair of Ethernet switches, called fabric switches (100Mbit switches are used, but Gbit switches are possible as well). Gigabit ports are used to interconnect the Ethernet switches for exchanging inter-Ethernet-switch traffic.
Note: PlaneA is always composed of the fabric switches in slot 1 of each chassis. If there is an additional 2nd-stage Gbit switch, then the one in chassis 4 (slot 2) is assigned to plane A.
Note: PlaneB is always composed of the fabric switches in slot 21 of each chassis. If there is an additional 2nd-stage Gbit switch, then the one in chassis 5 (slot 2) is assigned to plane B.
In the following figures we find the logical and physical topology views of a One-Rack and a Two-Rack configuration of the Alcatel 5020 MGC.
Note that potential usage of dual-processor boards and call engine VMs is already considered there. For more details see chapters 2.2.6.5 and 2.2.6.6.
Figure 17. Logical View of packet network topology in a fully equipped Two Rack Configuration
Figure 18. Physical View of packet network topology in One Rack Configuration
Figure 19. Physical View of packet network topology in Two Rack Configuration
Fabric Switches
According to PICMG 2.16, each chassis contains two fabric switches that provide - via the backplane - two Ethernet connections to each node slot. Each fabric switch belongs to one of the two planes in our MGC topology view.
As we have 19 node slots plus two FlexManagers (requiring 21 ports in total) but the fabric switch is implemented as a 24-port switch, there are three spare ports that are externally available via a dedicated RTM. These ports can be used to connect additional equipment. Currently only one of these connections is needed: the connection to the local operator position.
There are always exactly 2 fabric switches per chassis, which means that we will have 6 of them in a One-Rack configuration with three chassis-pairs, 12 of them in a fully equipped Two-Rack configuration with six chassis-pairs, etc. Note that we currently support no configuration larger than 24 chassis in the Alcatel 5020 MGC.
In the Alcatel 5020 MGC the fabric switches provide 10/100Mbit/s at the 2.16 ports in the backplane, but we also have the option to replace them with Gbit switches in the future (the same ones as described below for backbone switch usage - but this requires an integration cycle before being introduced). The backplane is Gbit-ready, but currently there is no need to use this option.
The fabric switches (switchUp 24+2R layer2) that are used for the Alcatel 5020 MGC provide:
Full PICMG 2.16 compliance
Layer 2 switching at wire-speed
Manageable via serial or Ethernet
SNMP agent supporting MIBs according to IETF RFCs
24 Fast Ethernet 10/100 Mbit/s ports
Three of those 10/100 ports are accessible via RJ45 connectors at the rear (via RTM)
In the Flex21 chassis the 8 port Gbit-switch can be plugged into any node-slot
position.
For the 24-port Gbit-switch only slot 2 is possible. The J4 connector that provides rear I/O access enables rear connection of all 24 Gbit links for slots 2 and 20 only, because for these slots the backplane is not connected to the H.110 bus. If any other node slot were used (which is possible as well), only ports 1 to 19 would be available due to the lack of J4/P4 pass-through.
The 24-port Gbit-switch needs a 2-slot-wide RTM (to provide all 24 port connectors at the rear), which requires that slot 3 must not contain any RTM (if slot 20 were used for the 24-port Gbit-switch, we would have to disallow an RTM in slot 21, which is impossible).
Currently no 1-slot RTM with an optimized number of connectors is available, except the one used for Gbit fabric switches, which provides only 4 rear link connectors.
The backbone node switches (switchUp 8G/24G) that are used for the Alcatel 5020 MGC provide:
Manageable via serial or Telnet (CLI)
SNMP agent supporting MIBs according to IETF RFCs
8 / 24 Gbit Ethernet ports
All of these 8 / 24 ports accessible via RJ45 connectors at the rear (via RTM), plus two RJ45 connectors for two Fast Ethernet maintenance ports
On-board Switches
As described in chapter Dual-Processor Board (CPCB), there is one 8-port on-board switch on each dual-processor board. This switch will not be seen / considered in the network topology. It is not considered an independent item that could be maintained or replaced independently, but is in fixed association with the related board itself. This on-board switch will be configured autonomously at power-on in a way that 2 VLANs are set up, each of them representing a network plane (planeA / planeB).
This virtual switch will learn MAC addresses (real and virtual ones) dynamically and provide traffic accounting. In addition, features could be enabled by configuration (e.g. aging, spanning tree - currently not used in the Alcatel 5020 MGC). For details of the Ethernet addressing scheme see chapter General Addressing/Communication and Location Identification Issues.
This virtual switch will be configured autonomously at start of the Jaluna software package using parameters that are passed to the primary LINUX as command line parameters. Note that we can define the following command line parameters in this approach:
buseth-mac=(<net#>,<addr>[/br][,<rxsize>]) - with that option we can define the virtual MAC address to be used by a specific OS instance when connecting to the virtual switch.
buseth-net=(<net#>,<mask>) - with that option we can use an optimisation by specifying a network mask that is used to speed up internal transmission (it can be checked whether a MAC address is part of a given virtual network).
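The combination of per-plane VLANs and dynamic MAC learning can be sketched as follows. This is a hedged illustration (class and names invented, not the actual switch firmware): each VLAN keeps its own MAC table, so planeA and planeB traffic never mix.

```python
# Hypothetical model of a per-VLAN MAC learning table, as on the
# dual-processor board's on-board switch where VLANs separate the planes.
class VlanSwitch:
    def __init__(self, vlans=("planeA", "planeB")):
        # one independent MAC table per VLAN keeps the planes isolated
        self.tables = {v: {} for v in vlans}

    def learn(self, vlan: str, mac: str, port: int):
        """Record on which port a (real or virtual) MAC was last seen."""
        self.tables[vlan][mac] = port

    def lookup(self, vlan: str, mac: str):
        """Return the learned port, or None -> flood within that VLAN only."""
        return self.tables[vlan].get(mac)

sw = VlanSwitch()
sw.learn("planeA", "02:00:00:00:00:12", port=3)
assert sw.lookup("planeA", "02:00:00:00:00:12") == 3
assert sw.lookup("planeB", "02:00:00:00:00:12") is None  # planes isolated
```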
The figures below will visualize all the concepts described above:
Figure 20. Switching topology on single-P and dual-P PBA without Virtualisation
ELM sees two switches in stage 2, each of which has max. 12 links to 1st-stage switches (this is due to the fact that we can provide only 16 box-IDs for the SBL-NA of the first-stage switches; see figure below). Both virtual switches are implemented by the same physical backbone switch (the link in between is, of course, also only a virtual one). This applies to 2nd-stage Gbit switches in large configurations when the 24-port version is used (see chapter Ethernet Link Maintenance (ELM)). Maintenance will map both virtual switches to the same physical switch (RIT), which means that the maintenance view matches the physical reality and reports to the operator will refer to physical RITs only.
The following figure shows the virtual switch concept and explains the SBL network addressing impact:
Connectivity Aspects
To avoid ending up in a simplex configuration by accident, we strictly have to configure either auto/auto or fixed-duplex/fixed-duplex for the communication links (if one side is fixed to full duplex while the other auto-negotiates, the auto-negotiating side falls back to half duplex, causing a duplex mismatch). In the Alcatel 5020 MGC all controllers of the processing boards and all ports of the Ethernet switches will be configured to "auto-negotiation" (IEEE 802 standard). In the Alcatel 5020 MGC all controllers of boards loaded with Jaluna's VM software will be configured to "promiscuous mode".
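The rule above can be illustrated with a minimal model of duplex negotiation (a sketch of standard auto-negotiation fallback behaviour, not product code): mixed auto/fixed settings silently produce a mismatch, while auto/auto and fixed/fixed stay consistent.

```python
# Minimal model of why auto/fixed duplex settings must not be mixed.
def negotiated_duplex(port_a: str, port_b: str) -> tuple:
    """Each port is 'auto', 'full' or 'half'. Returns the resulting pair."""
    if port_a == "auto" and port_b == "auto":
        return ("full", "full")          # both sides negotiate full duplex
    if port_a == "auto":                 # peer is fixed: the auto side sees
        return ("half", port_b)          # no negotiation, falls back to half
    if port_b == "auto":
        return (port_a, "half")
    return (port_a, port_b)              # both fixed: whatever was configured

assert negotiated_duplex("auto", "auto") == ("full", "full")   # safe
assert negotiated_duplex("full", "full") == ("full", "full")   # safe
assert negotiated_duplex("full", "auto") == ("full", "half")   # mismatch!
```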
Safety
The Flex21 chassis complies with:
IEC 60950
EN 60950
UL 60950
Environmental
ETSI
The system meets ETSI environmental criteria according to ETS 300 019-1 for:
Class 1.2 equipment for storage
Class 2.3 equipment for transportation
Class 3.2 equipment for operation
NEBS
The system meets NEBS Level 3 environmental criteria according to Bellcore SR-3580, GR-63-CORE and GR-1089-CORE.
Platform Architecture
Software-Platform of ITCE-Domain
The SW platform of our ITCEs consists of the following parts:
Commercial POSIX compliant operating system
Proprietary middleware called "server platform"
Some 3rd party SW products
The following figure gives an overview on these building blocks and related components:
Reasons for the OS migration are:
end of life for SOLARIS7
unified UNIX OS in MGC -> reduced maintenance effort and licence cost
open OS for easier introduction of new features
Consequences of OS migration are:
migrate 3rd party SW to the latest LINUX version (where available)
replace some 3rd party packages by freeware due to:
unavailability of a LINUX version
cost reduction
alignment with ALCATEL corporate recommendation
WebServer (freeware)
TAO (freeware)
DB Link
DHCP Server (freeware)
TFTP Server (freeware)
distribution
Core Platform Support components (SpCor)
Common Application components (SpApp)
Hardware Configuration and Management components (SpHwm)
Human Machine Interface components (SpHmi)
System Management components (SpSys)
Database Management components (SpDbm)
The Core Platform Support subsystem was intended to provide a set of sub-functions like the following (note that not all functions may be implemented in the current state):
Node Management
Application Management
Server Forwarding Agent
Communications
Thread Management
Timer Management
Event Management
Status Return Interface
POSIX Wrappers
Memory Management
Initialization Management
The Common Application subsystem was intended to provide a set of sub-functions like the following (note that not all functions may be implemented in the current state):
Trace Management
Alarm Management
Report Management
Report Analyzer
Command Logging
History Log Management
Common Services
NE Administration
The Hardware Configuration and Management subsystem was intended to provide a set of sub-functions like the following (note that not all functions may be implemented in the current state):
Configuration Management
Performance Monitoring
Fault Management
Fault Isolation
Disk Management
The Human Machine Interface subsystem was intended to provide a set of sub-functions like the following (note that not all functions may be implemented in the current state):
The System Management subsystem provides:
managing all of the node protection groups that make up the system
co-ordination of redundancy switchover for subordinate nodes based on information received from node management in each of the protection groups
Generally, when system management is available on a system, node management of the node protection groups will be configured to "No Auto Failover" so that all protection switching is co-ordinated at the system level. All system and node maintenance commands interface with system management, which then directs the behaviour of the node management software on the appropriate processors. System management provides certain safety checks, such as preventing the accidental removal of the last in-service processor in a protection group.
The Database Management subsystem can be described as follows. There are 2 database systems used: RDBMS and RTDB.
RTDB (Real Time DataBase), consisting of RTDB Client and RTDB Server
RTDB Server is the part of the RTDB residing in the OAM manager; it:
keeps track of real time tables
manages distribution of real time tables to ITCEs
maintains the registration list (which clients are interested in which [ranges of which] tables)
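The registration list kept by the RTDB server can be pictured with a small data structure. This is a hedged illustration only (names and layout invented, not the iDM implementation): it records which ITCE clients are interested in which (ranges of which) real time tables.

```python
# Hypothetical sketch of the RTDB server's registration list.
from collections import defaultdict

class RegistrationList:
    def __init__(self):
        # table name -> list of (client, range) registrations
        self.regs = defaultdict(list)

    def register(self, client: str, table: str, rng=None):
        """rng=None means the client wants the whole table."""
        self.regs[table].append((client, rng))

    def clients_for(self, table: str):
        """Clients to which an update of `table` must be distributed."""
        return [client for client, _ in self.regs[table]]

rl = RegistrationList()
rl.register("itce-7", "routing", rng=(0, 999))   # a range of the table
rl.register("itce-9", "routing")                 # the whole table
assert rl.clients_for("routing") == ["itce-7", "itce-9"]
```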
ITCE applications register (setup during application init)
RTDB Client is the part of the RTDB residing in the ITCE processors; it:
interacts with the RTDB server
receives all real time tables for which ITCE applications require access
RDBMS is implemented by ORACLE 10. Both OAM processor blades are operated in active/standby mode. Oracle multi-master replication is configured on both OAM processors to reach the required high availability (5 NINES). In case of an active/standby failover, client applications will re-connect to the Oracle database on the new active processor blade (no transparent failover required). Access to the Oracle database is only performed via dedicated Alcatel Softswitch applications:
Via graphical user interface (GUI) applications for data provisioning and retrieval
Via the real time database for data distribution to the diskless call control processor blades
RTDB is implemented as a proprietary solution called iDM.
Some iDM (ALCATEL proprietary RTDB) qualities are presented in the table below:
The following figure will give a basic overview on the usage of databases on the ITCE-side:
ITCEs running the LINUX OS make use of one Ethernet interface only - until it fails. The second interface is put into service when the first link fails or the destination cannot be reached via that port and its Ethernet switch plane.
Middleware that is used for communication between ITCEs and call engine CEs also always uses only one dedicated port to address an ITCE, and swaps only in case of a failure.
call engine CEs running call engine-OSN make use of both Ethernet ports (toggling) to communicate amongst each other (internal to the call engine domain).
cPCI blades with the Virtualisation approach for call engine CEs behave like in the call engine environment: both interfaces are toggled for each communication.
For ITCE communication between two parties (CEs), always one ESWT plane is used in order to guarantee the sequence of packets, because only the IP protocol is used. IP does not support an acknowledge mechanism, flow control, re-sequencing or retransmission etc. like TCP does. If one port/link fails, communication to all destinations shall still be possible via the remaining Ethernet port.
Ethernet MAC Address
For the two ESWT planes, which have no interconnection, the same MAC address could in principle be used at both interfaces (isolated subnets). But as a rule of thumb, different addresses are defined according to DSN principles: the addresses used differ by H'40. All MAC address values must be unique within a shared medium, e.g. an ESWT plane. The Link Maintenance software uses PCE and (PCE+H'40) as reference numbers for administrative reasons.
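The H'40 offset between the two per-plane reference numbers can be shown in two lines. This is a sketch of the offset rule only (the full DSN/MAC bit layout is not reproduced here, and the example PCE value is invented):

```python
# Sketch of the per-plane reference numbers: PCE and PCE + H'40 (0x40).
def plane_refs(pce_id: int) -> tuple:
    """Return the (planeA, planeB) reference numbers for a board's PCE-ID."""
    return (pce_id, pce_id + 0x40)

a, b = plane_refs(0x12)                  # 0x12 is an arbitrary example PCE
assert (a, b) == (0x12, 0x52)            # planeB reference is offset by H'40
```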
Basically a distinction shall be made between the physical board reference number (PCE-ID) of a PBA and an access port number (NA or MAC) used to connect the board to a transmission medium (Alcatel 5020 MGC: ESWT only). In the legacy call engine environment PCE-ID was equivalent to NA, but it will not be anymore: NA was a port number of our DSN switch, and the boards are no longer connected to the DSN and may have several ports. The MAC address composed for the Alcatel 5020 MGC is enlarged by a CPU identification field in order to support processor blades with several CPUs mounted. There is also a field introduced for the Virtualisation approach, because each individual VM (= call engine CE) will have its own PCE-ID equal to a MAC address. Because the PCE-ID cannot cover all information fields in its range anymore, some of the bit fields are used alternatively.
Ethernet transmission flow
All Ethernet communication between two distinct CEs (exception: call engine message communication; see chapter Ethernet transmission flow) shall be handled via the same ESWT interface (port) in order to guarantee contiguous message/packet flow in sequence. TCP/IP in principle supports re-sequencing, but this would cost an additional delay, which shall be avoided in telecommunication systems. In case of a failure the traffic is switched to the other Ethernet interface and therefore the other ESWT plane is used. All communication relationships are distributed randomly over the two switch planes. In the Jaluna Virtualisation approach each call engine VM makes use, according to call engine principles, of both Ethernet links alternately. The virtual Ethernet switch implemented on the processor blade behaves like the two-layered approach with no interconnections supported. For the CPCB blade (dual-processor blade) the physical Ethernet switch on board is operated in VLAN mode, which also allows keeping the two planes isolated from each other for any communication flow.
Resilient Link
The chosen Ethernet switch topology guarantees deterministic data flow routes with no racing conditions. The switching matrix follows a staged approach and looks similar to the well-proven DSN topology. Therefore no resilient links exist which could be used as alternative routes for data packet flow.
Fault