
Here is Your Customized Document

Your Configuration is:


Action to Perform - Learn about storage system
Information Type - Hardware and operational overview
Storage-System Model - CX4-120

Reporting Problems
To send comments or report errors regarding this document, please email:
UserCustomizedDocs@emc.com. For issues not related to this document, contact
your service provider.
Refer to Document ID: 1433746

Content Creation Date 2010/9/27


CX4-120 Storage Systems Hardware and Operational Overview

This document describes the hardware, powerup and powerdown sequences, and status indicators for the CX4-120 storage systems with UltraFlex technology.
Major topics are:

- Storage-system major components
- Storage processor enclosure (SPE)
- Disk-array enclosures (DAEs)
- Standby power supplies (SPSs)
- Powerup and powerdown sequence
- Status lights (LEDs) and indicators

Storage-system major components


The storage system consists of:

- A storage processor enclosure (SPE)
- A standby power supply (SPS) and an optional second SPS
- One Fibre Channel disk-array enclosure (DAE) with a minimum of five disk drives
- Optional DAEs

A DAE is sometimes referred to as a DAE3P.

The high-availability features for the storage system include:

- Redundant storage processors (SPs) configured with UltraFlex I/O modules
- Standby power supply (SPS) and optional second standby power supply
- Redundant power supply/cooling modules (referred to as power/cooling modules)

The SPE is a highly available storage enclosure with redundant power and cooling. It is 2U high (a U is a NEMA unit; each unit is 1.75 inches) and includes two storage processors (SPs) and the power/cooling modules.
Each storage processor (SP) uses UltraFlex I/O modules to facilitate:

- 4 Gb/s and/or 8 Gb/s Fibre Channel connectivity, and 1 Gb/s and/or 10 Gb/s Ethernet connectivity, through its front-end ports to Windows, VMware, and UNIX hosts
- 10 Gb/s Ethernet Fibre Channel over Ethernet (FCoE) connectivity through its front-end ports to Windows, VMware, and Linux hosts. The FCoE I/O modules require FLARE 04.30.000.5.5xx or later on the storage system.
- 4 Gb/s Fibre Channel connectivity through its back-end ports to the storage system's disk-array enclosures (DAEs)

The SP senses the speed of the incoming host I/O and sets the speed of its front-end ports to the lowest speed it senses. The speed of the DAEs determines the speed of the back-end ports through which they are connected to the SPs.
Table 1 gives the number of Fibre Channel, FCoE, and iSCSI front-end ports and Fibre Channel back-end ports supported for each SP. The storage system cannot have the maximum number of Fibre Channel, FCoE, and iSCSI front-end ports listed in Table 1 at the same time. The actual number of Fibre Channel, FCoE, and iSCSI front-end ports for an SP is determined by the number and type of UltraFlex I/O modules in the storage system. For more information, refer to the UltraFlex I/O modules section later in this document.
Table 1    Front-end and back-end ports per SP (CX4-120)

    Fibre Channel front-end I/O ports:    2 or 6
    FCoE front-end I/O ports:             1 or 2
    iSCSI front-end I/O ports:            2 or 4
    Fibre Channel back-end disk ports:    1

The storage system requires at least five disks and works in conjunction
with one or more disk-array enclosures (DAEs) to provide terabytes of
highly available disk storage. A DAE is a disk enclosure with slots
for up to 15 Fibre Channel or SATA disks. The disks within the DAE
are connected through a 4 Gb/s point-to-point Fibre Channel fabric.
Each DAE connects to the SPE or another DAE with simple FC-AL
serial cabling.
The CX4-120 storage system supports a total of eight DAEs for a total of
120 disks on its single back-end bus. You can place the disk enclosures
in the same cabinet as the SPE, or in one or more separate cabinets.
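The back-end arithmetic above can be restated in a short illustrative sketch (Python; the constant names are ours, not part of any EMC tool): with 15 disk slots per DAE and the documented limit of 8 DAEs on the CX4-120's single back-end bus, the maximum configuration works out to 120 disks.

    # Illustrative restatement of the stated limits; not an EMC utility.
    SLOTS_PER_DAE = 15        # each DAE holds up to 15 disks
    MAX_DAES_PER_BUS = 8      # documented limit for the CX4-120
    BACK_END_BUSES = 1        # the CX4-120 has a single back-end bus

    max_disks = BACK_END_BUSES * MAX_DAES_PER_BUS * SLOTS_PER_DAE
    print(max_disks)          # 120, matching the stated maximum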


Storage processor enclosure (SPE)


The SPE components include:

- A sheet-metal enclosure with a midplane and front bezel
- Two storage processors (SP A and SP B), each consisting of one CPU module and an I/O carrier with slots for I/O modules
- Four power supply/system cooling modules (referred to as power/cooling modules): two associated with SP A and two associated with SP B
- Two management modules: one associated with SP A and one associated with SP B. Each module has SPS, management, and service connectors.

Figure 1 and Figure 2 show the SPE components. If the enclosure provides slots for two identical components, the component in slot A is called component-name A. The second component is called component-name B. For increased clarity, the following figures depict the SPE outside of the rack cabinet. Your SPE may arrive installed in a rackmount cabinet.

Figure 1    SPE components, front with bezel removed (callouts: power/cooling modules A0-A1 and B0-B1, CPU modules A and B)

Figure 2    SPE components, back (callouts: SP A, SP B, management modules A and B)

Midplane
The midplane distributes power and signals to all the enclosure
components. The CPU modules, I/O modules, and power/cooling
modules plug directly into midplane connectors.

Front bezel
The front bezel has a key lock and two latch release buttons. Pressing
the latch release buttons releases the bezel from the enclosure.

Storage processors (SPs)


The SP is the SPE's intelligent component and acts as the control center. Each SP includes:

- One CPU module with:
  - One dual-core processor
  - 3 GB of DDR-II DIMM (double data rate, dual in-line memory module) memory
- An I/O module enclosure with five UltraFlex I/O module slots, of which three are usable
- One management module with:
  - One GbE Ethernet LAN port for management and backup (RJ45 connector)
  - One GbE Ethernet LAN port for peer service (RJ45 connector)
  - One serial port for connection to a standby power supply (SPS) (micro DB9 connector)
  - One serial port for RS-232 connection to a service console (micro DB9 connector)

UltraFlex I/O modules


Table 2 lists the number of I/O modules the storage system supports
and the slots the I/O modules can occupy. More slots are available
for optional I/O modules than the maximum number of optional I/O
modules supported because some slots are occupied by required I/O
modules. With the exception of slots A0 and B0, the slots occupied by
the required I/O modules can vary between configurations. Figure
3 shows the I/O module slot locations and the I/O modules for the
standard minimum configuration with 1 GbE iSCSI modules. The 1
GbE iSCSI modules shown in this example could be 10 GbE iSCSI or
FCoE I/O modules.
Table 2    Number of supported I/O modules per SP (CX4-120)

    All I/O modules:       3 per SP, in slots A0-A2 (SP A) and B0-B2 (SP B)
    Optional I/O modules:  2 per SP, in slots A1-A2 (SP A) and B1-B2 (SP B)

Figure 3    I/O module slot locations, slots A0-A4 and B0-B4 (1 GbE iSCSI and FC I/O modules for a standard minimum configuration shown)

The following types of modules are available:

- 4 or 8 Gb Fibre Channel (FC) modules with either:
  - 2 back-end (BE) ports for disk bus connections and 1 front-end (FE) port for server I/O connections (connection to a switch or server HBA), or
  - 4 front-end (FE) ports for server I/O connections (connection to a switch or server HBA).
  The 8 Gb FC module requires FLARE 04.28.000.5.7xx or later.
- 10 Gb Ethernet (10 GbE) FCoE module with 2 FCoE front-end (FE) ports for server I/O connections (connection to an FCoE switch and from the switch to the server CNA). The 10 GbE FCoE module requires FLARE 04.30.000.5.5xx or later.
- 1 Gb Ethernet (1 GbE) or 10 Gb Ethernet (10 GbE) iSCSI module with 2 iSCSI front-end (FE) ports for network server iSCSI I/O connections (connection to a network switch, router, server NIC, or iSCSI HBA). The 10 GbE iSCSI module requires FLARE 04.29 or later.


Table 3 lists the I/O modules available for the storage system and the
number of each module that is standard and/or optional.
Table 3    I/O modules per SP (number of modules per SP)

    4 or 8 Gb FC module with 1 BE port (0) and 2 FE ports (2, 3), port 1 not used:
        Standard: 1    Optional: 0
    4 or 8 Gb FC module with 4 FE ports (0, 1, 2, 3):
        Standard: 0    Optional: 1
    10 GbE FCoE module with 2 FE ports (0, 1):
        Standard: 1 or 0 (see note 1)    Optional: 1 (see note 2)
    1 GbE or 10 GbE iSCSI module with 2 FE ports (0, 1):
        Standard: 1 or 0 (see note 1)    Optional: 1 (see note 2)

    Note 1: The standard system has either 1 FCoE module or 1 iSCSI module per SP, but not both types.
    Note 2: The maximum number of 10 GbE FCoE modules or 10 GbE iSCSI I/O modules per SP is 1.

IMPORTANT
Always install I/O modules in pairs: one module in SP A and one module in SP B. Both SPs must have the same type of I/O modules in the same slots. Slots A0 and B0 always contain a Fibre Channel I/O module with one back-end port and two front-end ports. The other available slots can contain any type of I/O module that is supported for the storage system.
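A minimal sketch of the pairing rule in the note above, assuming a simple slot-number-to-module-type mapping (the module type strings are illustrative, not EMC part identifiers): SP A and SP B must carry the same module type in the same slot, and slot 0 on each SP must hold the FC module with one back-end port and two front-end ports.

    # Hypothetical configuration check; module type strings are illustrative.
    REQUIRED_SLOT0 = "fc_1be_2fe"   # FC module with 1 BE port and 2 FE ports

    def pairing_errors(sp_a_slots, sp_b_slots):
        """sp_x_slots: dict of slot number -> module type string (or None if empty)."""
        errors = []
        if sp_a_slots.get(0) != REQUIRED_SLOT0 or sp_b_slots.get(0) != REQUIRED_SLOT0:
            errors.append("slot 0 on each SP must hold the FC module with 1 BE / 2 FE ports")
        for slot in sorted(set(sp_a_slots) | set(sp_b_slots)):
            if sp_a_slots.get(slot) != sp_b_slots.get(slot):
                errors.append(f"slot {slot}: SP A and SP B modules do not match")
        return errors

    sp_a = {0: "fc_1be_2fe", 1: "iscsi_2fe"}
    sp_b = {0: "fc_1be_2fe", 1: None}
    print(pairing_errors(sp_a, sp_b))   # flags the unmatched slot 1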

The actual number of each type of optional Fibre Channel, FCoE,


and iSCSI I/O modules supported for a specific storage-system
configuration is limited by the available slots and the maximum
number of Fibre Channel, FCoE, and iSCSI front-end ports supported
for the storage system. Table 4 lists the maximum number of Fibre
Channel, FCoE, and iSCSI FE ports per SP for the storage system.


Table 4    Maximum number of front-end (FE) ports per SP (CX4-120)

    Maximum Fibre Channel FE ports per SP:       6
    Maximum FCoE FE ports per SP:                2
    Maximum iSCSI FE ports per SP (see note):    4

    Note: The maximum number of 10 GbE iSCSI ports per SP is 2.
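The maximums in Table 4 lend themselves to a quick configuration check. The sketch below (Python, illustrative only; the module names and per-module port counts are assumptions based on the module descriptions earlier in this document) totals the front-end ports a proposed per-SP module list would provide and flags anything over the limits.

    # Hypothetical sanity check against the Table 4 per-SP maximums.
    FE_PORTS = {"fc_1be_2fe": 2, "fc_4fe": 4, "fcoe_2fe": 2, "iscsi_2fe": 2}
    FAMILY   = {"fc_1be_2fe": "fc", "fc_4fe": "fc", "fcoe_2fe": "fcoe", "iscsi_2fe": "iscsi"}
    MAX_FE   = {"fc": 6, "fcoe": 2, "iscsi": 4}

    def fe_port_violations(modules):
        """Return rule violations for one SP's list of I/O module type strings."""
        totals = {"fc": 0, "fcoe": 0, "iscsi": 0}
        for module in modules:
            totals[FAMILY[module]] += FE_PORTS[module]
        return [f"{family}: {count} FE ports exceeds the maximum of {MAX_FE[family]}"
                for family, count in totals.items() if count > MAX_FE[family]]

    # Required FC module in slot 0 plus an optional 4-port FC module: within limits.
    print(fe_port_violations(["fc_1be_2fe", "fc_4fe"]))   # []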

Back-end (BE) port connectivity


Each FC back-end port has a connector for a copper SFP-HSSDC2 (small form factor pluggable to high speed serial data connector) cable. Back-end connectivity cannot exceed 4 Gb/s regardless of the I/O module's speed. Table 5 lists the FC modules that support the back-end bus.
Table 5    FC I/O module ports supporting the back-end bus

    Storage system and FC modules            Back-end bus (module port)
    CX4-120, FC module in slots A0 and B0    Bus 0 (port 0)

Fibre Channel (FC) front-end connectivity


Each 4 Gb or 8 Gb FC front-end port has an SFP shielded Fibre Channel connector for an optical cable. The FC front-end ports on a 4 Gb FC module support 1, 2, or 4 Gb/s connectivity, and the FC front-end ports on an 8 Gb FC module support 2, 4, or 8 Gb/s connectivity. You cannot use the FC front-end ports on an 8 Gb FC module in a 1 Gb/s Fibre Channel environment. You can use the FC front-end ports on a 4 Gb FC module in an 8 Gb/s Fibre Channel environment if the FC switch or HBA ports to which the module's FE ports connect auto-adjust their speed to 4 Gb/s.
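The speed rules above amount to finding a common rate between the module and the attached switch or HBA port. A small illustrative sketch (Python; not an EMC interface):

    # Speeds supported by each FC front-end module type, per the text above.
    MODULE_SPEEDS = {"4Gb FC": {1, 2, 4}, "8Gb FC": {2, 4, 8}}

    def negotiated_speed(module, remote_port_speeds):
        """Highest Gb/s rate both ends support, or None if no common rate exists."""
        common = MODULE_SPEEDS[module] & set(remote_port_speeds)
        return max(common) if common else None

    print(negotiated_speed("8Gb FC", {1}))      # None: 8 Gb module cannot join a 1 Gb/s environment
    print(negotiated_speed("4Gb FC", {4, 8}))   # 4: a 4 Gb module in an 8 Gb/s fabric runs at 4 Gb/s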
FCoE front-end connectivity
Each FCoE front-end port on a 10 GbE FCoE module runs at a fixed
10 Gb/s speed, and must be cabled to an FCoE switch. Versions
that support fiber-optic cabling include SFP shielded connectors for
optical Ethernet cable. Supported active twinaxial cables include SFP
connectors at either end; the ports in FCoE modules intended for active
twinaxial cabling do not include SFPs.

iSCSI front-end connectivity


Each iSCSI front-end port on a 1 GbE iSCSI module has a 1000Base-T copper connector for a copper Ethernet cable, and can auto-adjust the front-end port speed to 10 Mb/s, 100 Mb/s, or 1 Gb/s. Each iSCSI front-end port on a 10 GbE iSCSI module has an SFP shielded connector for an optical Ethernet cable, and runs at a fixed 10 Gb/s speed. You can connect 10 GbE iSCSI modules to supported switches with active twinaxial cable after removing the optical SFP connectors. Because the 1 GbE and the 10 GbE Ethernet iSCSI connection topologies are not interoperable, the 1 GbE and the 10 GbE iSCSI modules cannot operate on the same physical network.

Power/cooling modules
Each of the four power/cooling modules integrates one independent
power supply and one blower into a single module. The power
supply in each module is an auto-ranging, power-factor-corrected,
multi-output, offline converter.
The four power/cooling modules (A0, A1, B0, and B1) are located
above the CPUs and are accessible from the front of the unit. A0 and
A1 share load currents and provide power and cooling for SP A, and B0
and B1 share load currents and provide power and cooling for SP B. A0
and B0 share a line cord, and A1 and B1 share a line cord.
An SP or power/cooling module with power-related faults does
not adversely affect the operation of any other component. If one
power/cooling module fails, the others take over.

SPE field-replaceable units (FRUs)


The following are field-replaceable units (FRUs) that you can replace while the system is powered up:

- Power/cooling modules
- Management modules
- SFP modules, which plug into the 4 Gb and 8 Gb Fibre Channel front-end port connectors in the Fibre Channel I/O modules

Contact your service provider to replace a failed CPU board, CPU memory module, or I/O module.


Disk-array enclosures (DAEs)


DAE UltraPoint (sometimes called "point-to-point") disk-array
enclosures are highly available, high-performance, high-capacity
storage-system components that use a Fibre Channel Arbitrated Loop
(FC-AL) as the interconnect interface. A disk enclosure connects to
another DAE or an SPE and is managed by storage-system software
in RAID (redundant array of independent disks) configurations.
The enclosure is only 3U (5.25 inches) high, but can include 15 hard
disk drive/carrier modules. Its modular, scalable design allows for
additional disk storage as your needs increase.
A DAE includes either high-performance Fibre Channel disk modules or economical SATA (Serial Advanced Technology Attachment, SATA II) disk modules. CX4-120 systems also support solid state disk (SSD) Fibre Channel modules, also known as enterprise flash drive (EFD) Fibre Channel modules. You cannot mix SATA and Fibre Channel components within a DAE, but you can integrate and connect FC and SATA enclosures within a storage system. The enclosure operates at either a 2 or 4 Gb/s bus speed (2 Gb/s components, including disks, cannot operate on a 4 Gb/s bus).
Simple serial cabling provides easy scalability. You can interconnect disk enclosures to form a large disk storage system; the number and size of buses depend on the capabilities of your storage processor. You can place the disk enclosures in the same cabinet, or in one or more separate cabinets. High-availability features are standard in the DAE.
The DAE includes the following components:

- A sheet-metal enclosure with a midplane and front bezel
- Two FC-AL link control cards (LCCs) to manage disk modules
- As many as 15 disk modules
- Two power supply/system cooling modules (referred to as power/cooling modules)

Any unoccupied disk module slot has a filler module to maintain air flow.

The power supply and system cooling components of the power/cooling modules function independently of each other, but the assemblies are packaged together into a single field-replaceable unit (FRU).

The LCCs, disk modules, power supply/system cooling modules, and filler modules are field-replaceable units (FRUs), which can be added or replaced without hardware tools while the storage system is powered up.

Figure 4 shows the disk enclosure components. Where the enclosure provides slots for two identical components, the components are called component-name A or component-name B, as shown in the illustrations. For increased clarity, the following figures depict the DAE outside of the rack or cabinet. Your DAEs may arrive installed in a rackmount cabinet along with the SPE.

Power/cooling module B

Link control card B

Power LED
(green or blue)

Fault LED
(amber)

!
!

PRI

EXP

PRI

EXP

#
PRI

EXP

PRI

EXP

B
#

Power/cooling module A
Figure 4

Link control card A

Disk activity
LED (green)

Fault LED
(amber)

EMC3437

DAE outside the cabinet front and rear views

As shown in Figure 5, an enclosure address (EA) indicator is located on each LCC. (The EA is sometimes referred to as an enclosure ID.) Each link control card (LCC) includes a bus (loop) identification indicator. The storage processor initializes the bus ID when the operating system is loaded.


Figure 5    Disk enclosure bus (loop) and address indicators (callouts: bus ID and enclosure address displays and the EA selection button; press the EA selection button to change the EA)

The enclosure address is set at installation. Disk module IDs are numbered left to right (looking at the front of the unit) and are contiguous throughout a storage system: enclosure 0 contains modules 0-14; enclosure 1 contains modules 15-29; enclosure 2 includes modules 30-44; and so on.
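The contiguous numbering follows a simple formula, shown here as an illustrative sketch (Python): each enclosure contributes 15 module IDs, so the global ID is the enclosure address times 15 plus the slot position.

    def disk_module_id(enclosure_address, slot):
        """Global disk module ID from the enclosure address (EA) and slot 0-14."""
        return enclosure_address * 15 + slot

    print(disk_module_id(0, 0))     # 0  (first module in enclosure 0)
    print(disk_module_id(1, 0))     # 15 (first module in enclosure 1)
    print(disk_module_id(2, 14))    # 44 (last module in enclosure 2)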

Midplane
A midplane between the disk modules and the LCC and power/cooling modules distributes power and signals to all components in the enclosure. The LCCs, power/cooling modules, and disk drives (the enclosure's field-replaceable units, or FRUs) plug directly into the midplane.

Front bezel
The front bezel has a locking latch and an electromagnetic interference
(EMI) shield. You must remove the bezel to remove and install drive
modules. EMI compliance requires a properly installed front bezel.

Link control cards (LCCs)


An LCC supports and controls one Fibre Channel bus and monitors
the DAE.


Figure 6    LCC connectors and status LEDs (callouts: primary link active LED, expansion link active LED, fault LED (amber), power LED (green), PRI and EXP connectors)

A blue link active LED indicates a DAE enclosure operating at 4 Gb/s. The link
active LED(s) is green in a DAE operating at 2 Gb/s.

The LCCs in a DAE connect to other Fibre Channel devices (processor enclosures, other DAEs) with twin-axial copper cables. The cables connect LCCs in a storage system together in a daisy-chain (loop) topology.

Internally, each DAE LCC uses FC-AL protocols to emulate a loop; it connects to the drives in its enclosure in a point-to-point fashion through a switch. The LCC independently receives and electrically terminates incoming FC-AL signals. For traffic from the system's storage processors, the LCC switch passes the input signal from the primary port (PRI) to the drive being accessed; the switch then forwards the drive's output signal to the expansion port (EXP), where cables connect it to the next DAE in the loop. (If the target drive is not in the LCC's enclosure, the switch passes the input signal directly to the EXP port.) At the unconnected expansion port (EXP) of the last LCC, the output signal (from the storage processor) is looped back to the input signal source (to the storage processor). For traffic directed to the system's storage processors, the switch passes input signals from the expansion port directly to the output signal destination of the primary port.
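The switching behavior described above can be summarized in a toy model (Python, purely illustrative; LCC firmware is not exposed as an API): traffic arriving on the primary port is switched to the target drive if that drive lives in the LCC's enclosure, and otherwise is passed straight through toward the next DAE on the expansion port.

    def route_from_pri(local_disk_ids, target_disk_id):
        """Model of how an LCC handles storage-processor traffic entering on PRI."""
        if target_disk_id in local_disk_ids:
            return f"PRI -> switch -> drive {target_disk_id} -> EXP"
        return "PRI -> switch -> EXP (target drive is in a later enclosure)"

    enclosure_1 = range(15, 30)                  # disk module IDs 15-29 live in enclosure 1
    print(route_from_pri(enclosure_1, 17))       # handled locally, then forwarded on EXP
    print(route_from_pri(enclosure_1, 42))       # passed straight through to EXP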
Each LCC independently monitors the environmental status of the entire enclosure, using a microcomputer-controlled FRU (field-replaceable unit) monitor program. The monitor communicates status to the storage processor, which polls disk enclosure status. LCC firmware also controls the LCC port-bypass circuits and the disk-module status LEDs.

LCCs do not communicate with or control each other.

Captive screws on the LCC lock it into place to ensure proper connection to the midplane. You can add or replace an LCC while the disk enclosure is powered up.

Disk modules
Each disk module consists of one disk drive in a carrier. You can
visually distinguish between module types by their different latch
and handle mechanisms and by type, capacity, and speed labels on
each module. An enclosure can include Fibre Channel or SATA disk
modules, but not both types. You can add or remove a disk module
while the DAE is powered up, but you should exercise special care
when removing modules while they are in use. Drive modules are
extremely sensitive electronic components.
Disk drives
The DAE supports Fibre Channel and SATA disks. The Fibre Channel (FC) disks, including enterprise flash (SSD) versions, conform to FC-AL specifications and 4 Gb/s Fibre Channel interface standards, and support dual-port FC-AL interconnects through the two LCCs. SATA disks conform to Serial ATA II Electrical Specification 1.0 and include dual-port SATA interconnects; a paddle card on each drive converts the assembly to Fibre Channel operation. The disk module slots in the enclosure accommodate 2.54 cm (1 in) by 8.89 cm (3.5 in) disk drives.
The disks currently available for the storage system and the usable capacities for disks are listed in the EMC CX4 Series Storage Systems Disk and FLARE OE Matrix (P/N 300-007-437) on the EMC Powerlink website. The vault disks must all have the same capacity and the same speed. The 1 TB, 5.4K rpm SATA disks are available only in a DAE that is fully populated with these disks. Do not intermix 1 TB, 5.4K rpm SATA disks with 1 TB, 7.2K rpm SATA disks in the same DAE, and do not replace a 1 TB, 5.4K rpm SATA disk with a 1 TB, 7.2K rpm SATA disk, or vice versa.


The 1 TB SATA disks operate on a 4 Gb/s back-end bus like the 4 Gb FC disks,
but have a 3 Gb/s bandwidth. Since they have a Fibre Channel interface to the
back-end loop, these disks are sometimes referred to as Fibre Channel disks.

Disk power savings


Some disks support power savings, which lets you assign power saving
settings to these disks in a storage system running FLARE version
04.29.000.5.xxx or later, so that these disks transition to the low power
state after being idle for at least 30 minutes. For the currently available
disks that support power savings, refer to the EMC CX4 Series Storage
Systems Disk and FLARE OE Matrix (P/N 300-007-437) on the EMC
Powerlink website.
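Stated as a rule, a power-saving-capable disk enters the low power state only after at least 30 idle minutes. A minimal illustrative sketch (Python; FLARE applies this logic internally, the function below only restates the documented rule):

    IDLE_THRESHOLD_MINUTES = 30

    def should_enter_low_power(supports_power_savings, idle_minutes):
        """True if the documented idle rule would allow the low power transition."""
        return supports_power_savings and idle_minutes >= IDLE_THRESHOLD_MINUTES

    print(should_enter_low_power(True, 45))    # True
    print(should_enter_low_power(True, 10))    # False: not idle long enough
    print(should_enter_low_power(False, 90))   # False: disk does not support power savings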
Drive carrier
The disk drive carriers are metal and plastic assemblies that provide
smooth, reliable contact with the enclosure slot guides and midplane
connectors. Each carrier has a handle with a latch and spring clips.
The latch holds the disk module in place to ensure proper connection
with the midplane. Disk drive activity/fault LEDs are integrated into
the carrier.

Power/cooling modules
The power/cooling modules are located above and below the LCCs.
The units integrate independent power supply and dual-blower
cooling assemblies into a single module.
Each power supply is an auto-ranging, power-factor-corrected,
multi-output, offline converter with its own line cord. Each supply
supports a fully configured DAE and shares load currents with the
other supply. The drives and LCCs have individual soft-start switches
that protect the disk drives and LCCs if they are installed while the
disk enclosure is powered up. A FRU (disk, LCC, or power/cooling
module) with power-related faults does not adversely affect the
operation of any other FRU.
The enclosure cooling system includes two dual-blower modules.
If one blower fails, the others will speed up to compensate. If two
blowers in a system (both in one power/cooling module, or one in each
module) fail, the DAE goes offline within two minutes.


Standby power supplies (SPSs)


A 1U, 1200-watt DC SPS provides backup power for storage processor A and LCC A on the first (enclosure 0, bus 0) DAE adjacent to it. An optional second SPS provides the same service for SP B and LCC B. The SPSs allow write caching, which prevents data loss during a power failure, to continue. Each SPS rear panel has one AC inlet power connector with a power switch, AC outlets for the SPE and the first DAE (EA 0, bus 0), and one phone-jack type connector for connection to an SP. Figure 7 shows the SPS connectors. A service provider can replace an SPS while the storage system is powered up.

Figure 7    1200 W SPS connectors (callouts: SPE and SP interface connectors, AC power connector, power switch, and the active (green), on battery (amber), fault (amber), and replace battery (amber) LEDs)


Powerup and powerdown sequence


The SPE and DAE do not have power switches.

Powering up the storage system


1. Verify the following:
   - Master switch/circuit breakers for each cabinet/rack power strip are off.
   - The power cord for SP A is plugged into the SPS and the power cord retention bails are in place.
   - The power cord for SP B is plugged into the nearest power distribution unit on a different circuit feed from the SPS and the power cord retention bails are in place. (In systems with two SPSs, plug SP B into its corresponding SPS.)
   - The serial connection between management module A and the SPS is in place. (In systems with two SPSs, each management module has a serial connection to its corresponding SPS.)
   - The power cord for LCC A on the first DAE (EA 0, bus 0; often called the DAE-OS) is plugged into the SPS and the power cord retention bails are in place.
   - The power cord for LCC B is plugged into the nearest power distribution unit on a different circuit feed than the SPS. (In systems with two SPSs, each LCC plugs into its corresponding SPS.)
   - The power cords for the SPSs and any other DAEs are plugged into the cabinet's power strips.
   - The power switches on the SPSs are in the on position.
   - Any other devices in the cabinet are correctly installed and ready for powerup.
2. Turn on the master switch/circuit breakers for each cabinet/rack power strip.
   In the 40U-C cabinet, master switches are on the power distribution panels (PDPs), as shown in Figure 8.


Figure 8    PDP master switches and power sources in the 40U-C cabinet with two PDPs used (two SPSs shown; callouts: PDP master switches, power sources A and B, SPS switches, the SPE with management modules A and B and I/O slots A0-A4 and B0-B4, and the DAE-OS enclosure)

The storage system can take 8 to 10 minutes to complete a typical powerup. Amber warning LEDs flash during the power on self-test (POST) and then go off. The front fault LED and the SPS recharge LEDs commonly stay on for several minutes while the SPSs are charging.


The powerup is complete when the CPU power light on each SP is steady green.

The CPU status lights are visible on the SPE when the front bezel is removed (Figure 9).

Figure 9    Location of the CPU status lights (SP A and SP B)

If amber LEDs on the front or back of the storage system remain on for
more than 10 minutes, make sure the storage system is correctly cabled,
and then refer to the troubleshooting flowcharts for your storage
system on the CLARiiON Tools page on the EMC Powerlink website
(http://Powerlink.EMC.com). If you cannot determine any reasons for
the fault, contact your authorized service provider.

Powering down the storage system


1. Stop all I/O activity to the SPE. If the server connected to the SPE is running the AIX, HP-UX, Linux, or Solaris operating system, back up critical data and then unmount the file systems.
   Stopping I/O allows the SP to destage cache data, and may take some time. The length of time depends on criteria such as the size of the cache, the amount of data in the cache, the type of data in the cache, and the target location on the disks, but it is typically less than one minute. We recommend that you wait five minutes before proceeding.
2. After five minutes, use the power switch on each SPS to turn off
power. Storage processors and DAE LCCs connected to the SPS
power down within two minutes.

CAUTION
Never unplug the power supplies to shut down an SPE. Bypassing the SPS in that manner prevents the storage system from saving write cache data to the vault drives, and results in data loss. You will lose access to data, and the storage processor log displays an error message similar to the following:

Enclosure 0 Disk 5 0x90a (Can't Assign - Cache Dirty) 0 0xafb40 0x14362c

Contact your service provider if this situation occurs.


3. For a system with a single SPS, wait two minutes and then unplug
the power cables for SP B on the SPE and LCC B on DAE 0, bus 0.
This turns off power to the SPE and the first DAE (EA 0, bus 0). You do
not need to turn off power to the other connected DAEs.


Status lights (LEDs) and indicators


Status lights made up of light emitting diodes (LEDs) on the SPE, its FRUs, the SPSs, and the DAEs and their FRUs indicate the components' current status.

Storage processor enclosure (SPE) LEDs


This section describes status LEDs visible from the front and the rear
of the SPE.
SPE front status LEDs
Figure 10 and Figure 11 show the location of the SPE status LEDs that
are visible from the front of the enclosure. Table 6 and Table 7 describe
these LEDs.

Figure 10    SPE front status LEDs (bezel in place)

Figure 11    SPE front status LEDs (bezel removed)

Table 6    Meaning of the SPE front status LEDs (bezel in place)

    Power LED:
        Off:           SPE is powered down.
        Solid blue:    SPE is powered up.
    System fault LED:
        Off:           SPE is operating normally.
        Solid amber:   A fault condition exists in the SPE. If the fault is not obvious from another fault LED on the front, look at the rear of the enclosure.

Table 7    Meaning of the SPE front status LEDs (bezel removed)

    Power/cooling module status LED (1 per module):
        Off:            Power/cooling module is not powered up.
        Solid green:    Module is powered up and operating normally.
        Amber:          Module is faulted.
    CPU power LED (1 per CPU):
        Off:            CPU is not powered up.
        Solid green:    CPU is powered up and operating normally.
    CPU fault LED (1 per CPU):
        Blinking amber: Running powerup tests.
        Solid amber:    CPU is faulted.
        Blinking blue:  OS is loaded.
        Solid blue:     CPU is degraded.
    Unsafe to remove LED (1 per CPU):
        Solid white:    DO NOT REMOVE the module while this light is on.
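For scripted health checks, the LED meanings in Tables 6 and 7 can be kept in a small lookup. The sketch below (Python, a hypothetical helper; only a few entries are shown, taken from the tables above) maps an observed LED and state to its documented meaning.

    SPE_FRONT_LEDS = {
        ("power", "off"): "SPE is powered down.",
        ("power", "solid blue"): "SPE is powered up.",
        ("system fault", "solid amber"): "A fault condition exists in the SPE.",
        ("cpu power", "solid green"): "CPU is powered up and operating normally.",
        ("cpu fault", "blinking amber"): "Running powerup tests.",
        ("unsafe to remove", "solid white"): "Do not remove the module while this light is on.",
    }

    def describe(led, state):
        return SPE_FRONT_LEDS.get((led.lower(), state.lower()),
                                  "State not listed here; see Tables 6 and 7.")

    print(describe("CPU fault", "Blinking amber"))   # Running powerup tests.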

SPE rear status LEDs


Table 8 describes the status LEDs that are visible from the rear of the
SPE.
Table 8    Meaning of the SPE rear status LEDs

    Management module status LED (see note 1; 1 per module):
        Solid green:   Power is being supplied to the module.
        Off:           Power is not being supplied to the module.
        Amber:         Module is faulted.
    I/O module status LED (see note 1; 1 per module):
        Solid green:   Power is being supplied to the module.
        Off:           Power is not being supplied to the module.
        Amber:         Module is faulted.
    BE port link LED (see note 2; 1 per Fibre Channel back-end port):
        Off:           No link because of one of the following conditions: the cable is disconnected, the cable is faulted, or it is not a supported type.
        Solid green:   1 Gb/s or 2 Gb/s link speed.
        Solid blue:    4 Gb/s link speed.
        Blinking green then blue:  Cable fault.
    FE port link LED (see note 2; 1 per Fibre Channel front-end port):
        Off:           No link because of one of the following conditions: the host is down, the cable is disconnected, an SFP is not in the port slot, the SFP is faulted, or it is not a supported type.
        Solid green:   1 Gb/s or 2 Gb/s link speed.
        Solid blue:    4 Gb/s link speed.
        Blinking green then blue:  SFP or cable fault.

    Note 1: LED is on the module handle.
    Note 2: LED is next to the port connector.

DAE status LEDs

This section describes the following status LEDs and indicators:

- Front DAE and disk module status LEDs
- Enclosure address and bus ID indicators
- LCC and power/cooling module status LEDs

Front DAE and disk module status LEDs

Figure 12 shows the location of the DAE and disk module status LEDs that are visible from the front of the enclosure. Table 9 describes these LEDs.


Figure 12    Front DAE and disk module status LEDs, bezel removed (callouts: disk activity LED (green), disk fault LED (amber), DAE power LED (green or blue), DAE fault LED (amber))

Table 9    Meaning of the front DAE and disk module status LEDs

    DAE power LED:
        Off:           DAE is not powered up.
        Solid green:   DAE is powered up and the back-end bus is running at 2 Gb/s.
        Solid blue:    DAE is powered up and the back-end bus is running at 4 Gb/s.
    DAE fault LED:
        Solid amber:   On when any fault condition exists; if the fault is not obvious from a disk module LED, look at the back of the enclosure.
    Disk activity LED (1 per disk module):
        Off:           Slot is empty or contains a filler module, or the disk is powered down by command, for example, as the result of a temperature fault.
        Solid green:   Drive has power but is not handling any I/O activity (the ready state).
        Blinking green, mostly on:           Drive is spinning and handling I/O activity.
        Blinking green at a constant rate:   Drive is spinning up or spinning down normally.
        Blinking green, mostly off:          Drive is powered up but not spinning; this is a normal part of the spin-up sequence, occurring during the spin-up delay of a slot.
    Disk fault LED (1 per disk module):
        Solid amber:   On when the disk module is faulty, or as an indication to remove the drive.

Enclosure address and bus ID indicators


Figure 13 shows the location of the enclosure address and bus ID
indicators that are visible from the rear of the enclosure. In this example,
the DAE is enclosure 2 on bus (loop) 1; note that the indicators for LCC
A and LCC B always match. Table 10 describes these indicators.


Figure 13    Location of enclosure address and bus ID indicators (callouts: bus ID and enclosure address displays and EA selection buttons on LCC A and LCC B)

Table 10    Meaning of enclosure address and bus ID indicators

    Enclosure address indicator (green):
        Displayed number indicates the enclosure address.
    Bus ID indicator (blue):
        Displayed number indicates the bus ID. A blinking bus ID indicates invalid cabling: LCC A and LCC B are not connected to the same bus, or the maximum number of DAEs allowed on the bus is exceeded.
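The two cabling faults that produce a blinking bus ID can be expressed as a short illustrative check (Python; the enclosure data structure is an assumption for illustration, not an EMC interface): every DAE's two LCCs must report the same bus, and no bus may exceed its DAE limit (8 on the CX4-120's single back-end bus).

    MAX_DAES_PER_BUS = 8    # CX4-120 limit stated earlier in this document

    def cabling_errors(daes):
        """daes: list of (enclosure_address, bus_id_on_lcc_a, bus_id_on_lcc_b)."""
        errors, per_bus = [], {}
        for ea, bus_a, bus_b in daes:
            if bus_a != bus_b:
                errors.append(f"EA {ea}: LCC A and LCC B are not connected to the same bus")
            per_bus[bus_a] = per_bus.get(bus_a, 0) + 1
        errors += [f"bus {bus}: {count} DAEs exceeds the limit of {MAX_DAES_PER_BUS}"
                   for bus, count in per_bus.items() if count > MAX_DAES_PER_BUS]
        return errors

    print(cabling_errors([(0, 0, 0), (1, 0, 1)]))   # flags the mismatched enclosure 1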

DAE power/cooling module status LEDs


Figure 14 shows the location of the status LEDs for the power
supply/system cooling modules (referred to as power/cooling modules).
Table 11 describes these LEDs.


Figure 14    DAE power/cooling module status LEDs (callouts on each module: power LED (green), power fault LED (amber), blower fault LED (amber))

Table 11    Meaning of DAE power/cooling module status LEDs

    Power supply active LED (1 per supply, green):
        On when the power supply is operating.
    Power supply fault LED (1 per supply, amber; see note):
        On when the power supply is faulty or is not receiving AC line voltage. Flashing when either a multiple blower or ambient over-temperature condition has shut off power to the system.
    Blower fault LED (1 per cooling module, amber; see note):
        On when a single blower in the power supply is faulty.

    Note: The DAE continues running with a single power supply and three of its four blowers. Removing a power/cooling module constitutes a multiple blower fault condition, and will power down the enclosure unless you replace a blower within two minutes.

DAE LCC status LEDs


Figure 15 shows the location of the status LEDs for a link control card
(LCC). Table 12 describes these LEDs.


Figure 15    DAE LCC status LEDs (callouts: primary link active LED (2 Gb/s green, 4 Gb/s blue), expansion link active LED (2 Gb/s green, 4 Gb/s blue), fault LED (amber), power LED (green), PRI and EXP connectors)

Table 12    Meaning of DAE LCC status LEDs

    LCC power LED (1 per LCC, green):
        On when the LCC is powered up.
    LCC fault LED (1 per LCC, amber):
        On when either the LCC or a Fibre Channel connection is faulty. Also on during the power on self test (POST).
    Primary link active LED (1 per LCC):
        Green:   On when a 2 Gb/s primary connection is active.
        Blue:    On when a 4 Gb/s primary connection is active.
    Expansion link active LED (1 per LCC):
        Green:   On when a 2 Gb/s expansion connection is active.
        Blue:    On when a 4 Gb/s expansion connection is active.

SPS status LEDs


Figure 16 shows the location of the SPS status LEDs that are visible from the rear. Table 13 describes these LEDs.


Figure 16    1200 W SPS status LEDs (callouts: active LED (green), on battery LED (amber), fault LED (amber), replace battery LED (amber))

Table 13    Meaning of 1200 W SPS status LEDs

    Active LED (1 per SPS, green):
        When this LED is steady, the SPS is ready and operating normally. When this LED flashes, the batteries are being recharged. In either case, the output from the SPS is supplied by AC line input.
    On battery LED (1 per SPS, amber):
        The AC line power is no longer available and the SPS is supplying output power from its battery. When battery power comes on, and no other online SPS is connected to the SPE, the file server writes all cached data to disk, and the event log records the event. Also on briefly during the battery test.
    Replace battery LED (1 per SPS, amber):
        The SPS battery is not fully charged and may not be able to serve its cache flushing function. With the battery in this state, and no other online SPS connected to the SPE, the storage system disables write caching, writing any modified pages to disk first. Replace the SPS as soon as possible.
    Fault LED (1 per SPS, amber):
        The SPS has an internal fault. The SPS may still be able to run online, but write caching cannot occur. Replace the SPS as soon as possible.


Copyright 2008-2010 EMC Corporation. All Rights Reserved.


EMC believes the information in this publication is accurate as of its publication date. The
information is subject to change without notice.
THE INFORMATION IN THIS PUBLICATION IS PROVIDED "AS IS." EMC CORPORATION
MAKES NO REPRESENTATIONS OR WARRANTIES OF ANY KIND WITH RESPECT TO
THE INFORMATION IN THIS PUBLICATION, AND SPECIFICALLY DISCLAIMS IMPLIED
WARRANTIES OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE.
Use, copying, and distribution of any EMC software described in this publication requires an
applicable software license.
For the most up-to-date regulatory document for your product line, go to the Technical
Documentation and Advisories section on EMC Powerlink.
For the most up-to-date listing of EMC product names, see EMC Corporation Trademarks on
EMC.com.
All other trademarks mentioned herein are the property of their respective owners.
