discontinued
OVERVIEW . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
DETAILED VIEWS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
Chassis Front View . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .4
Chassis Rear View . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .5
BASE SERVER STANDARD CAPABILITIES and FEATURES . . . . . . . . . . . . . . . . . 6
CONFIGURING the SERVER . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
STEP 1 VERIFY SERVER SKU . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
STEP 2 SELECT CPU(s) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
STEP 3 SELECT MEMORY . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
STEP 4 SELECT RAID CONFIGURATION . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
STEP 5 SELECT DRIVES . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
STEP 6 SELECT PCIe OPTION CARD(s) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
STEP 7 ORDER OPTIONAL NETWORK CARD ACCESSORIES . . . . . . . . . . . . . . . . . . . . . . . . 26
STEP 8 SELECT AC POWER CORD(s) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
STEP 9 ORDER POWER SUPPLY . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
STEP 10 ORDER OPTIONAL REVERSIBLE CABLE MANAGEMENT ARM . . . . . . . . . . . . . . . . . 34
STEP 11 ORDER A TRUSTED PLATFORM MODULE . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
STEP 12 ORDER OPTIONAL USB 2.0 DRIVE . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
STEP 13 SELECT OPERATING SYSTEM AND VALUE-ADDED SOFTWARE . . . . . . . . . . . . . . . . 37
STEP 14 SELECT OPERATING SYSTEM MEDIA KIT . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40
STEP 15 SELECT SERVICE and SUPPORT LEVEL . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
OPTIONAL STEP - ORDER RACK(s) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
OPTIONAL STEP - ORDER PDU . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
SUPPLEMENTAL MATERIAL . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48
CHASSIS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48
CPUs and DIMMs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49
Physical Layout . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49
Memory Population Rules . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51
Recommended Memory Population . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52
Additional DIMM Populations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53
Low-Voltage DIMM Considerations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 54
RAID Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55
RAID Option ROM (OPROM) Settings . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 56
RACKS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57
PDUs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 59
Motherboard USB Port . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 60
LED Indicators . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 61
Front Panel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 61
Rear Panel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63
TECHNICAL SPECIFICATIONS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65
Dimensions and Weight . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65
Power Specifications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65
Environmental Specifications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 67
Compliance Requirements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 68
OVERVIEW
The Cisco® UCS C22 M3 small form factor (SFF) Rack Server combines economy with a
density-optimized feature set for entry-level enterprise and SMB workloads such as scale-out
computing, virtualization, IT and web infrastructure, VPN servers, file/print servers, and appliances.
Building on the success of the Cisco UCS C-Series Servers, the Cisco UCS C22 M3 server and the Cisco 1225
VIC further extend the capabilities of the Cisco UCS portfolio in a 1RU form factor with the addition of the
Intel® Xeon® E5-2400 v2 and E5-2400 series processor family CPUs, which deliver an excellent combination of
performance and efficiency gains. In addition, the Cisco UCS C22 M3 server offers 12 DIMM slots, up to eight
disk drives, two PCIe slots, and two 1 Gigabit Ethernet LAN-on-motherboard (LOM) ports, providing strong
price/performance in a compact 1RU form factor.
Front View
Rear View
DETAILED VIEWS
Chassis Front View
Figure 2 shows the Cisco UCS C22 M3 High-Density SFF Rack Server.
(Front and rear chassis view callouts omitted; the front view shows drive bays HDD04-HDD08, and the rear view shows PCIe slots 1 and 2 and PSU 1.)

Chassis Rear View

Legend (excerpt):
2 — Slot 2: low-profile PCIe slot on riser 2 (PCIe Gen 3 slot, x8, half-height, half-length)
7 — VGA video port (DB-15 connector)
Capability/Feature Description
CPU One or two Intel® Xeon E5-2400 v2 or E5-2400 series processor family CPUs
Expansion slots Two riser cards with one PCIe card slot in each riser
■ Riser 1 (controlled by CPU 1)
• Slot 1: PCIe Gen 3 slot, x16 extended connector (Cisco CNIC),
half-length, full-height, with NCSI1 and Cisco CNIC2 support. The
Cisco 1225 virtual interface card requires an NCSI slot.
■ Riser 2 (controlled by CPU 2)
• Slot 2: PCIe Gen 3 slot, x8 connector, half-length, half-height, no
NCSI support
Internal storage devices ■ Up to eight 2.5-inch SAS+SATA hot-swappable hard disk drives (HDDs) or
solid state drives (SSDs)
■ One connector on the motherboard that can accommodate a USB 2.0
drive. You can order an 8 or 16 GB drive for this connector and use it as
a hypervisor or license dongle. The 8 or 16 GB USB drive comes blank.
Front Panel ■ A front panel controller provides status indications and control buttons
Power subsystem One 450 W power supply or one 650 W power supply, depending on server
configuration and loading. Prior to making a power supply selection, the UCS
Power Calculator should be run to determine the right size power supply for
your server configuration. A fully configured C22 M3 server may require the
650 W power supply. The UCS Power Calculator can be found at the following
link:
http://ucspowercalc.cisco.com
Fans Chassis:
■ Five fans for front-to-rear cooling. N + 1 redundant.
Power supply:
■ The power supply is equipped with a fan.
Notes . . .
1. NCSI = network communications services interface protocol. The slot stays active when the server is in standby
power state.
2. CNIC = Cisco network interface card.
UCSC-C22-M3S UCS C22 M3 SFF with rail kit, without PSU, CPU, memory, HDD, SSD, or PCIe
cards
NOTE: Use the steps on the following pages to configure the server with
the components that you want to include.
Select CPUs
Product ID (PID) | Intel Number | Clock Freq (GHz) | Power (W) | Cache Size (MB) | Cores | QPI (GT/s) | Highest DDR3 DIMM Clock Support (MHz)1
Notes . . .
1. If higher or lower speed DIMMs are selected than what is shown in the table for a given CPU, the DIMMs will be
clocked at the lowest common denominator of CPU clock and DIMM clock.
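The down-clocking rule in the note above can be sketched in one line. This is our own illustration of the min() relationship stated in the note, not a table of any particular CPU/DIMM pairing:

```python
# DIMMs clock at the lowest common denominator of the CPU's supported
# DIMM clock and the DIMM's rated clock (per the note above).
def effective_dimm_clock(cpu_max_dimm_mhz: int, dimm_rated_mhz: int) -> int:
    """Return the clock (MHz) the installed DIMMs will actually run at."""
    return min(cpu_max_dimm_mhz, dimm_rated_mhz)

# A CPU limited to 1333 MHz clocks 1600 MHz DIMMs down to 1333 MHz.
print(effective_dimm_clock(1333, 1600))  # -> 1333
```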
Approved Configurations
■ Select two identical CPUs from any one of the rows of Table 3 on page 11.
Caveats
■ DIMMs
— Clock speed: 1600 or 1333 MHz
— Ranks per DIMM: 1, 2, or 4
— Operational voltage: 1.35 V or 1.5 V
— Registered ECC DDR3 DIMMS (RDIMMS), unregistered DIMMs (UDIMMs), or
load-reduced DIMMs (LRDIMMs)
■ Memory is organized with three memory channels per CPU, with up to two DIMMs per
channel, as shown in Figure 4.
Select DIMMs
Select the memory configuration. The available memory DIMMs are listed in Table 4.
Product ID (PID) | PID Description | Voltage | Ranks/DIMM
DIMM Options
Approved Configurations

(1) 1-CPU configurations:

■ Select from 1 to 6 DIMMs. Refer to Memory Population Rules, page 52, for more detailed
information.
■ For memory mirroring, select either 2 or 4 DIMMs; the DIMMs will be placed by the factory.
■ Select the memory mirroring option (N01-MMIRROR) as shown in Table 4 on page 14.

(2) 2-CPU configurations:

■ Select from 1 to 6 DIMMs per CPU (2 to 12 DIMMs total). Refer to Memory Population Rules,
page 52, for more detailed information.
■ For memory mirroring, select 2 or 4 DIMMs per CPU (4 or 8 DIMMs total). The DIMMs will be
placed by the factory as shown in the following table:

DIMMs per CPU | CPU 1 Placement | CPU 2 Placement
2 | 1 DIMM in channel B, 1 DIMM in channel C | 1 DIMM in channel E, 1 DIMM in channel F
4 | 2 DIMMs in channel B, 2 DIMMs in channel C | 2 DIMMs in channel E, 2 DIMMs in channel F

■ Select the memory mirroring option (N01-MMIRROR) as shown in Table 4 on page 14.
NOTE: System performance is optimized when the DIMM type and quantity are equal
for both CPUs.
Caveats
NOTE: 32 GB LRDIMMs run at a maximum speed of 1333 MHz for 1 DPC and 2 DPC
even though their specified maximum speed is 1600 MHz.
NOTE: In this server, 4 GB UDIMMs run slower than their specified maximum speed of
1600 MHz. In addition, the following rules apply to UDIMMs:
■ 1 DPC: 1333 MHz is supported at 1.35 V and 1.5 V
■ 2 DPC: 1066 MHz is supported at 1.35 V and 1.5 V
For more information regarding memory, see CPUs and DIMMs, page 50.
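The LRDIMM and UDIMM speed caps in the notes above can be collected into a small lookup. The function below is our own illustration (the DIMM-type labels are hypothetical), and it covers only the cases the notes mention:

```python
# Speed caps for this server, per the LRDIMM and UDIMM notes above.
def dimm_speed_mhz(dimm_type: str, dpc: int) -> int:
    """Actual DIMM speed by type and DIMMs-per-channel (DPC)."""
    if dimm_type == "32GB_LRDIMM":
        return 1333                      # capped at both 1 and 2 DPC
    if dimm_type == "4GB_UDIMM":
        return {1: 1333, 2: 1066}[dpc]   # below the rated 1600 MHz
    raise ValueError("no rule given for %s" % dimm_type)

print(dimm_speed_mhz("4GB_UDIMM", 2))  # -> 1066
```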
NOTE: If you do not select a PCIe RAID controller or one of the embedded RAID
upgrade options, you will have an embedded SATA-only RAID controller that supports
up to four SATA-only drives (RAID 0, 1, 10).
NOTE: The number of RAID groups (virtual drives) supported per controller is as
follows (note however, that this server can have a maximum of only eight internal
physical drives):
■ LSI MegaRAID SAS 9270CV-8i RAID controller card = 64
■ LSI MegaRAID 9240-8i RAID controller card = 16
■ LSI MegaRAID 9220-8i RAID controller card = 16
NOTE: The RAID configuration options listed in Table 6 on page 18 are not available
if you choose embedded RAID (in this case, you must configure RAID yourself using
separate software).
Cisco can provide factory-configured RAID systems depending on the RAID controller chosen and
the number of drives ordered. Factory-configured RAID options are listed with each RAID card
description.
RAID levels 50 and 60 are supported, although they are not available as factory-configured RAID
options.
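The drive and virtual-drive limits above lend themselves to a quick sanity check on a proposed order. The sketch below encodes only the numbers stated in this section; it is not a Cisco ordering tool:

```python
MAX_INTERNAL_DRIVES = 8  # server-wide limit on internal physical drives

MAX_VIRTUAL_DRIVES = {   # RAID groups (virtual drives) per controller
    "LSI MegaRAID SAS 9270CV-8i": 64,
    "LSI MegaRAID 9240-8i": 16,
    "LSI MegaRAID 9220-8i": 16,
}

def check_raid_order(controller: str, physical: int, virtual: int) -> list:
    """Return a list of violated limits for a proposed order (empty = OK)."""
    problems = []
    if physical > MAX_INTERNAL_DRIVES:
        problems.append("more than %d internal drives" % MAX_INTERNAL_DRIVES)
    if virtual > MAX_VIRTUAL_DRIVES[controller]:
        problems.append("more than %d virtual drives"
                        % MAX_VIRTUAL_DRIVES[controller])
    return problems

print(check_raid_order("LSI MegaRAID 9240-8i", 8, 16))  # -> []
```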
Select one RAID controller (either embedded RAID in Table 5 or hardware RAID using a PCIe RAID
controller and an appropriate RAID configuration option listed in Table 6 on page 18).
Embedded RAID
UCSC-RAID-ROM5 Onboard RAID, consisting of a plug-in storage controller unit (SCU), supporting
up to 8 internal SAS+SATA drives. SAS and SATA drives can be mixed. This option
supports RAID 0, 1, and 10, and operates at 3 Gb/s. Operating systems
supported are Windows and Linux only (no VMware support).
UCSC-RAID-ROM55 Onboard RAID, consisting of two plug-in modules (an SCU plus a software key
module) supporting up to 8 internal SAS+SATA drives. SAS and SATA drives can be
mixed. This option supports RAID 0, 1, 5, and 10 and operates at 3 Gb/s.
Operating systems supported are Windows and Linux only (no VMware support).
NOTE: VMware ESX/ESXi and other virtualized environments are not supported
for use with the embedded MegaRAID controller. Hypervisors such as Hyper-V, Xen, and
KVM are also not supported for use with the embedded MegaRAID controller.
NOTE: Although RAID levels 50 and 60 are not orderable from the factory, they are
supported for selected controllers as shown in Table 6.
Approved Configurations
Select an embedded RAID option from Table 5 on page 18 or one PCIe RAID controller from
Table 6 on page 18. You may also select an appropriate optional RAID configuration listed in
Table 6 on page 18.
NOTE: In 1-CPU configurations, PCIe slot 1 is the default slot supported for an
internal drive RAID controller. PCIe slot 2 is not supported for 1-CPU systems.
NOTE: If you do not select a PCIe RAID controller or one of the embedded RAID
upgrade options from Table 5 on page 18, you will have an embedded SATA-only
RAID controller that supports up to four SATA-only drives (RAID 0, 1, 10) and slot 1
can be used for a PCIe RAID controller (see Table 8 on page 24).
Select an embedded RAID option from Table 5 on page 18, or one PCIe RAID controller from
Table 6 on page 18. You may also select an appropriate optional RAID configuration listed in
Table 6 on page 18.
NOTE: In 2-CPU configurations, PCIe slot 2 is the default slot supported for an
internal drive RAID controller. PCIe slot 1 can be used for a PCIe expansion card
(including the VIC card). You can change the default card slot for a RAID controller to
slot 1 by going into the BIOS and reconfiguring the option ROM (OPROM) settings.
Keep in mind, however, that if you use slot 1 for a RAID controller, you will not be
able to install a VIC card, which can be installed only in slot 1.
NOTE: If you do not select a PCIe RAID controller, or one of the embedded RAID
upgrade options from Table 5 on page 18, you will have an embedded SATA-only
RAID controller that supports up to four SATA-only drives (RAID 0, 1, 10) and slots 1
and 2 can be used for PCIe cards (see Table 8 on page 24).
Caveats
■ In 1-CPU configurations, there is only one full-height PCIe slot available (located on riser 1).
Slot 2 is not supported in 1-CPU systems.
— If you choose embedded RAID, you can select one optional PCIe card from Table 8
on page 24 to be installed in slot 1.
— If you choose a PCIe RAID controller from Table 6 on page 18, it will be installed in
slot 1 by default and you will not be able to select any optional PCIe card from
Table 8 on page 24.
— Note that only a single Cisco 1225 Virtual Interface Card (VIC) card is supported and
it must be installed in the full-length, full-height PCIe slot (slot 1) on riser 1, which
is the only slot that supports NCSI. So take this into account when populating RAID
controller cards. You can have either a PCIe RAID controller or a VIC card in slot 1.
■ In 2-CPU configurations, there are two PCIe slots available (one full-height slot, slot 1, and
one half-height slot, slot 2).
— If you choose embedded RAID, you can select up to two optional PCIe cards from
Table 8 on page 24 to be installed in slots 1 and 2.
— If you choose a PCIe RAID controller from Table 6 on page 18, it will be installed in
slot 2 by default and you will be able to select one additional optional PCIe card
from Table 8 on page 24 to be installed in slot 1.
■ The optional PCIe RAID controllers are all half-height. If you choose one of these cards in a
2-CPU configuration, only the full-height PCIe card slot will be available for an additional
optional PCIe card.
■ Note that only a single Cisco UCS 1225 Virtual Interface Card (VIC) card is supported and it
must be installed in the full-length, full-height PCIe slot (slot 1) on riser 1, which is the only
slot that supports NCSI. So take this into account when populating RAID controller cards.
■ You can choose only one type of RAID controller, either embedded RAID or a PCIe RAID
controller. If you choose embedded RAID, slot 1 in a 1-CPU system and slots 1 and 2 in a
2-CPU system are available for adding optional PCIe cards.
■ For PCIe RAID controllers, you can choose an optional RAID configuration (RAID 0, 1, 5, 6, or
10), which is preconfigured at the factory. The RAID level you choose must be an available
RAID choice for the controller selected. RAID levels 50 and 60 are supported, depending on
the RAID controller selected, although they are not available as configuration options.
NOTE: For more important information regarding RAID support, see RAID Summary,
page 56 and RAID Option ROM (OPROM) Settings, page 57.
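The 1-CPU and 2-CPU slot rules in the notes above can be summarized in a few lines of Python. This is our own encoding of the default behavior (before any OPROM changes), not a configuration utility:

```python
def available_option_slots(cpus: int, pcie_raid: bool) -> list:
    """PCIe slots left for optional cards, given the CPU count and whether
    a PCIe RAID controller is ordered (default OPROM slot assignments)."""
    slots = [1] if cpus == 1 else [1, 2]      # slot 2 requires CPU 2
    if pcie_raid:
        slots.remove(1 if cpus == 1 else 2)   # default RAID controller slot
    return slots

# 2-CPU system with a PCIe RAID controller: slot 1 remains free for a
# VIC or other optional card (slot 1 is the only NCSI-capable slot).
print(available_option_slots(2, pcie_raid=True))  # -> [1]
```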
Select Drives
Product ID (PID) | PID Description | Drive Type | Capacity
HDDs
SSDs
UCS-SD480G0KS2-EV 480 GB 2.5 inch Enterprise Value 6 G SATA SSD SATA 480 GB
UCS-SD240G0KS2-EV 240 GB 2.5 inch Enterprise Value 6 G SATA SSD SATA 240 GB
UCS-SD120G0KS2-EV 120 GB 2.5 inch Enterprise Value 6 G SATA SSD SATA 120 GB
NOTE: The number of RAID groups (virtual drives) supported per controller is as
follows:
■ LSI MegaRAID SAS 9270CV-8i RAID controller card = 64
■ LSI MegaRAID 9240-8i RAID controller card = 16
■ LSI MegaRAID 9220-8i RAID controller card = 16
Approved Configurations
■ Select up to eight SAS/SATA drives (note that none of the RAID configuration options listed in
Table 6 on page 18 can be chosen for embedded RAID). You can mix SAS and SATA drives.
(2) LSI MegaRAID 9270CV-8i (with data cache and supercap), LSI MegaRAID 9240-8i, and LSI
MegaRAID 9220-8i
■ Select up to eight SAS/SATA drives listed in Table 7 on page 22. You can mix SAS and SATA
drives.
■ If you did not select embedded RAID or a PCIe RAID controller, you can only select up to four
SATA drives, which will be controlled by the embedded SATA-only RAID controller.
Caveats
Product ID (PID) | PID Description | Card Height
Converged Network Adapters (CNAs)
UCSC-PCIE-CSC-02 Cisco VIC 1225 Dual Port 10Gb SFP+ CNA Half
UCSC-PCIE-C10T-02 Cisco VIC 1225T Dual Port 10GBaseT CNA Half
UCSC-PCIE-C40Q-02 Cisco VIC 1285 Dual Port 40Gb QSFP CNA Half
UCSC-PCIE-ESFP Emulex OCe11102-FX dual-port 10 GbE FCoE CNA Half
UCSC-PCIE-Q8362 Qlogic QLE8362 dual-port 10 GbE FCoE CNA Half
Network Interface Cards (NICs)
N2XX-ABPCI01-M3 Broadcom 5709 Dual-Port Ethernet PCIe Adapter w/TOE for M3 Servers Half
N2XX-ABPCI03-M3 Broadcom 5709 Quad Port 10/100/1Gb NIC w/TOE iSCSI for M3 Servers Half
UCSC-PCIE-BTG Broadcom 57712 Dual Port 10GbE 10G-BaseT w/TOE Half
N2XX-AIPCI01 Intel 10GbE 2-Port Niantec Controller with Copper SFP+ Cable Half
UCSC-PCIE-IRJ45 Intel Quad Port GbE controller (this is a quad-port i350 controller) Half
UCSC-PCIE-ITG Intel X540 Dual Port 10GBase-T Adapter Half
UCSC-PCIE-B3SFP BROADCOM 57810 10Gb AFEX SFP+ Half
Host Bus Adapters (HBAs)
N2XX-AEPCI03 Emulex LPe 11002, 4Gb Fibre Channel PCIe Dual Channel HBA Half
N2XX-AEPCI05 Emulex LPe 12002, 8Gb dual port Fibre Channel HBA Half
N2XX-AQPCI03 Qlogic QLE2462, 4Gb dual port Fibre Channel HBA Half
N2XX-AQPCI05 Qlogic QLE2562, 8Gb dual port Fibre Channel HBA Half
UCSC-PCIE-Q2672 Qlogic QLE2672-CSC, 16Gb Fibre Channel HBA with SR Optics Half
UCSC-PCIE-E16002 Emulex LPe16002-M6, 16Gb Fibre Channel HBA with SR Optics Half
Approved Configurations
■ If you did not choose a PCIe RAID controller (for example, you selected embedded RAID (LSI
MegaSR) or no RAID controller at all), you can select up to one PCIe option card listed in
Table 8 on page 24 for 1-CPU configurations or up to two cards for 2-CPU configurations.
■ If you selected a PCIe RAID controller, you cannot select any additional PCIe cards for 1-CPU
configurations; however, you can select one additional optional PCIe card for 2-CPU
configurations.
NOTE: If a PCIe RAID controller is installed, any optional PCIe card you select will be
installed in the full-height slot. RAID controllers are always installed in the
half-height slot, unless you change the OPROM setting to enable the full-height slot
(slot 1) for RAID.
Caveats
■ Only a single Cisco UCS 1225 Virtual Interface Card (VIC) card is supported and it must be
installed in the full-height slot of riser 1, which is the only slot that supports NCSI.
■ All PCIe cards will fit in either riser 1 or riser 2, except the VIC card, which must be installed
in the full-height slot of riser 1. The Cisco 1225 VIC card must be plugged into slot 1, which
supports NCSI.
■ Additional considerations for the Cisco 1225 VIC card:
— Supports 10G SFP+ optical and copper twinax connections
— The server supports installation of only one Cisco VIC 1225 card and it is supported
only in PCIe slot 1. Slot 1 can operate while the server is in standby power mode.
— Requires that the server has CIMC firmware version 1.4(6) or later installed. There is
a heartbeat LED on the top of the card that indicates when firmware is active.
— To use this card for UCS integration (Cisco UCS Manager mode) with Cisco UCS
Manager 2.1(0) or later, the minimum card-firmware and uboot image level is
2.1(0.306).
■ To help ensure that your operating system is compatible with the card you have selected,
check the Hardware Compatibility List at this URL:
http://www.cisco.com/en/US/products/ps10477/prod_technical_reference_list.html
Table 9 lists the copper twinax cables available for the PCIe cards. You can choose cable lengths
of 1, 3, 5, 7, or 10 meters. The two longer cables (7 and 10 meters) are active, which means
that they contain active components within the SFP+ housing to improve signal quality.
Approved Configurations
(1) Choose Up to Two Twinax Cables for Each Network Card Ordered
■ You may choose one or two twinax cables for each compatible PCIe network card ordered.
The cables can be different lengths; however, you would normally order two cables of equal
lengths to connect to the primary and redundant network switching equipment.
Approved Configurations
(1) Choose Up to Two SFP+ Modules for Each Network Card Ordered
■ You may choose one or two SFP+ optical modules for each compatible PCIe network
card ordered. You would normally order two modules for connecting to the primary and
redundant network switching equipment. With the SFP+ optical modules, you can use
widely available, common fiber optic cables.
See Figure 5 on page 30 for typical SFP+ and twinax connections to the network cards.
Caveats
Check the table on the following page for compatibility between the PCIe network cards and
SFPs or twinax cables.
NOTE: The table shows all PCIe network cards for all C-series servers. Not all of the
cards shown in the table are supported in this server. The intent of the table is to
show compatibility between cards and twinax cables or SFPs.
The available AC power cords are listed below (connector and plug drawings omitted):

CAB-AC-L620-C13 — AC power cord, NEMA L6-20 plug to IEC60320/C13 connector, 2 m (6.5 ft)
CAB-C13-C14-AC — Power jumper cord, IEC60320/C14 to IEC60320/C13, 3.0 m
CAB-9K10A-AU — Power cord, 250 VAC 10 A, 3112 plug, Australia (cordset rating 10 A, 250 V/500 V max; length 2500 mm)
CAB-250V-10A-ID — Power cord, SFS, 250 V, 10 A, India (connector EL 701)
CAB-9K10A-SW — Power cord, 250 VAC 10 A, MP232 plug, Switzerland (cordset rating 10 A, 250 V; length 8 ft 2 in / 2.5 m; plug MP232-R; connector IEC 60320 C15)
CAB-9K10A-UK — Power cord, 250 VAC 10 A, BS1363 plug (13 A fuse), UK (cordset rating 10 A, 250 V/500 V max; length 2500 mm; plug EL 210, BS 1363A, with 13 A fuse; connector EL 701C, EN 60320/C15)
CAB-250V-10A-BR — Power cord, 250 V, 10 A, Brazil (length 2,133.6 ± 25 mm)
Prior to making a power supply selection, the UCS Power Calculator should be run to determine
the right size power supply for your server configuration. A fully configured C22 M3 server may
require the 650 W power supply. The UCS Power Calculator can be found at the following link:
http://ucspowercalc.cisco.com
NOTE: If you order and install a cable management arm, you will need the longest
power cable available.
For more information about the cable management arm, see the Cisco UCS C22 M3 Installation
and Service Guide at this URL:
http://www.cisco.com/en/US/docs/unified_computing/ucs/c/hw/C22/install/C22.pdf
NOTE: The module used in this server conforms to TPM v1.2/1.3, as defined by the
Trusted Computing Group (TCG).
The 8 or 16 GB drive (if ordered) is blank and comes installed on the motherboard. It can be
used for a hypervisor or a license dongle.
See Figure 6 on page 49 for the location of the internal USB connector.
Table 18 OS Media
If you have noncritical implementations and choose to have no service contract, the following
coverage is supplied:
For support of the entire Unified Computing System, Cisco offers the Cisco SMARTnet for UCS
Service. This service provides expert software and hardware support to help sustain
performance and high availability of the unified computing environment. Access to Cisco
Technical Assistance Center (TAC) is provided around the clock, from anywhere in the world.
For UCS blade servers, there is Smart Call Home, which provides proactive, embedded
diagnostics and real-time alerts. For systems that include Unified Computing System Manager,
the support service includes downloads of UCSM upgrades. The Cisco SMARTnet for UCS Service
includes flexible hardware replacement options, including replacement in as little as two hours.
There is also access to Cisco's extensive online technical resources to help maintain optimal
efficiency and uptime of the unified computing environment. You can choose a desired service
listed in Table 19.
For faster parts replacement than is provided with the standard Cisco Unified Computing System
warranty, Cisco offers the Cisco SMARTnet for UCS Hardware Only Service. You can choose from
two levels of advanced onsite parts replacement coverage in as little as four hours. SMARTnet
for UCS Hardware Only Service provides remote access any time to Cisco support professionals
who can determine if a return materials authorization (RMA) is required. You can choose a
service listed in Table 20.
Product ID (PID) | Service Level GSP | On Site? | Description
Cisco Partner Support Service (PSS) is a Cisco Collaborative Services service offering that is
designed for partners to deliver their own branded support and managed services to enterprise
customers. Cisco PSS provides partners with access to Cisco's support infrastructure and assets
to help them:
■ Expand their service portfolios to support the most complex network environments
■ Lower delivery costs
■ Deliver services that increase customer loyalty
Partner Unified Computing Support Options enable eligible Cisco partners to develop and
consistently deliver high-value technical support that capitalizes on Cisco intellectual assets.
This helps partners to realize higher margins and expand their practice.
PSS is available to all Cisco PSS partners but requires additional specializations and
program requirements to be met. For additional information, see the following URL:
www.cisco.com/go/partnerucssupport
Partner Support Service for UCS provides hardware and software support, including triage
support for third party software, backed by Cisco technical resources and level three support.
Product ID (PID) | Service Level GSP | On Site? | Description
CON-PSJ1-C22M3S PSJ1 No UCS SUPP PSS 8X5XNBD UCS C22 M3 Server - SFF
CON-PSJ2-C22M3S PSJ2 No UCS SUPP PSS 8X5X4 UCS C22 M3 Server - SFF
CON-PSJ3-C22M3S PSJ3 No UCS SUPP PSS 24X7X4 UCS C22 M3 Server - SFF
CON-PSJ4-C22M3S PSJ4 No UCS SUPP PSS 24X7X2 UCS C22 M3 Server - SFF
Partner Support Service for UCS Hardware Only provides customers with replacement parts in as
little as two hours. See Table 22.
Product ID (PID) | Service Level GSP | On Site? | Description
Combined Services makes it easier to purchase and manage required services under one
contract. SMARTnet services for UCS help increase the availability of your vital data center
infrastructure and realize the most value from your unified computing investment. The more
benefits you realize from the Cisco Unified Computing System (Cisco UCS), the more important
the technology becomes to your business. These services allow you to:
Product ID (PID) | Service Level GSP | On Site? | Description
CON-NCF2-C22M3S NCF2 No CMB SPT SVC 24X7X2 UCS C22 M3 Server - SFF
CON-NCF2P-C22M3S NCF2P Yes CMB SPT SVC 24X7X2OS UCS C22 M3 Server - SFF
CON-NCF4P-C22M3S NCF4P Yes CMB SPT SVC 24X7X4OS UCS C22 M3 Server - SFF
CON-NCF4S-C22M3S NCF4S Yes CMB SPT SVC 8X5X4OS UCS C22 M3 Server - SFF
CON-NCFCS-C22M3S NCFCS Yes CMB SPT SVC 8X5XNBDOS UCS C22 M3 Server - SFF
CON-NCFE-C22M3S NCFE No CMB SPT SVC 8X5X4 UCS C22 M3 Server - SFF
CON-NCFP-C22M3S NCFP No CMB SPT SVC 24X7X4 UCS C22 M3 Server - SFF
CON-NCFT-C22M3S NCFT No CMB SPT SVC 8X5XNBD UCS C22 M3 Server - SFF
With the Cisco Unified Computing Drive Retention (UCDR) Service, you can obtain a new disk
drive in exchange for a faulty drive without returning the faulty drive. In exchange for a Cisco
replacement drive, you provide a signed Certificate of Destruction (CoD) confirming that the
drive has been removed from the system listed, is no longer in service, and has been destroyed.
Sophisticated data recovery techniques have made classified, proprietary, and confidential
information vulnerable, even on malfunctioning disk drives. The UCDR service enables you to
retain your drives and ensures that the sensitive data on those drives is not compromised, which
reduces the risk of any potential liabilities. This service also enables you to comply with
regulatory, local, and federal requirements.
If your company has a need to control confidential, classified, sensitive, or proprietary data, you
might want to consider one of the Drive Retention Services listed in Table 24.
NOTE: Cisco does not offer a certified drive destruction service as part of this
service.
Service Description | Service Program Name | Service Level GSP | Service Level | Product ID (PID)
For more service and support information, see the following URL:
http://www.cisco.com/en/US/services/ps2961/ps10312/Unified_Computing_Services_Overview.pdf
For a complete listing of available services for Cisco Unified Computing System, see this URL:
http://www.cisco.com/en/US/products/ps10312/serv_group_home.html
Notes . . .
1. Use these same base PIDs to order spare racks (available only as next-day replacements).
For more information about the R42610 rack, see RACKS, page 58.
For more information about the PDU, see PDUs, page 60.
SUPPLEMENTAL MATERIAL
CHASSIS
An internal view of the C22 M3 chassis with the top cover removed is shown in Figure 6.
(Figure 6 callout numbers omitted; labeled components include SYS FAN1-SYS FAN3, CPU 1, PCIe 1, and PSU 1.)

Legend (excerpt):
8 — Internal USB 2.0 port (on motherboard, under PCIe riser 1)
16 — Mounting point for SuperCap power module (RAID backup unit)
Physical Layout
Each CPU has three DIMM channels: CPU 1 has channels A, B, and C, and CPU 2 has channels D, E, and F.
Each DIMM channel has two banks: Bank 1 and Bank 2. The blue-colored DIMM slots belong to Bank 1 and
the black-colored slots belong to Bank 2.
As an example, DIMM slots A1, B1, and C1 belong to Bank 1, while A2, B2, and C2 belong to Bank 2.
Figure 7 shows how banks and channels are physically laid out on the motherboard. The DIMM slots on the
top (channels A, B, and C) are associated with CPU 1, while the DIMM slots on the bottom (channels D, E,
and F) are associated with CPU 2. Bank 1 slots (blue) are populated before Bank 2 slots (black).
(Figure 7: DIMM slots A1, A2, B1, B2, C1, and C2 belong to CPU 1; slots D1, D2, E1, E2, F1, and F2 belong to CPU 2.)
■ Each channel has two DIMM slots (for example, channel A = slots A1 and A2).
— Each channel can operate with one or two DIMMs installed.
— If a channel has only one DIMM, populate slot 1 first (the blue slot).
■ When both CPUs are installed, populate the DIMM slots of each CPU identically.
— Fill blue slots in the channels first: A1, D1, B1, E1, C1, F1
— Fill black slots in the channels second: A2, D2, B2, E2, C2, F2
■ Any DIMM installed in a DIMM socket for which the CPU is absent is not recognized.
■ For optimum performance, populate at least one DIMM per memory channel per CPU.
■ When populating DIMM slots, multiples of 6 DIMMs give the best performance, since there are 3 memory channels per CPU socket and two CPU sockets to populate.
■ At 1 DPC and 2 DPC, 1600-MHz DIMMs run at 1600 MHz (provided the CPU supports 1600-MHz DIMMs), and 1333-MHz DIMMs run at 1333 MHz.
■ When mixing DIMMs, follow these rules:
— When mixing DIMMs of different densities (sizes), populate DIMMs with the highest
density first. For example, if you have to mix 16 GB with 8 GB DIMMs, populate 16
GB DIMMs in blue slots (or bank 1) and then 8 GB in black slots (or bank 2).
— When mixing DIMMs with different ranks, populate DIMMs with the higher rank first.
For example, populate 16 GB 2R or 8 GB 2R before 4 GB 1R DIMMs.
— DIMMs of different sizes (4 GB, 8 GB, 16 GB) can be mixed within a channel, but the total memory size of each channel must be the same. For example, to reach 72 GB (6 x 8 GB + 6 x 4 GB) in a server, populate the blue slots with 8 GB DIMMs and the black slots with 4 GB DIMMs, so that each of the 3 channels on both CPUs totals 12 GB.
— When mixing DIMMs, all the DIMMs in the server must be the same speed.
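The population and mixing rules above are mechanical enough to check automatically. The following is a minimal, hypothetical Python sketch, not a Cisco tool; the function name and data layout are assumptions made for illustration. It flags violations of the same-speed, bank-order, and equal-per-channel-capacity rules:

```python
# Hypothetical helper illustrating the DIMM population rules in this section.
# Not a Cisco utility; names and structure are assumptions for illustration.

def check_population(channels):
    """channels: dict like {"A": [(16, 1600), (8, 1600)], ...}
    Each list holds (size_gb, speed_mhz) tuples for Bank 1, then Bank 2."""
    problems = []

    # Rule: all DIMMs in the server must run at the same speed.
    speeds = {speed for dimms in channels.values() for _, speed in dimms}
    if len(speeds) > 1:
        problems.append(f"mixed speeds {sorted(speeds)}: all DIMMs must match")

    # Rule: when mixing densities, the higher-density DIMM goes in the
    # blue (Bank 1) slot, which is populated before the black (Bank 2) slot.
    for name, dimms in channels.items():
        if len(dimms) == 2 and dimms[0][0] < dimms[1][0]:
            problems.append(f"channel {name}: larger DIMM belongs in Bank 1")

    # Rule: total capacity should be the same in every populated channel.
    totals = {sum(size for size, _ in dimms)
              for dimms in channels.values() if dimms}
    if len(totals) > 1:
        problems.append(f"unequal per-channel totals {sorted(totals)}")

    return problems

# The 72 GB example: 8 GB DIMMs in blue slots, 4 GB in black, all 1600 MHz.
layout = {ch: [(8, 1600), (4, 1600)] for ch in "ABCDEF"}
print(check_population(layout))  # no rule violations -> []
```

A layout that mixes speeds or puts the smaller DIMM in the blue slot would return one message per violated rule.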
NOTE: This table lists only some recommended and suggested configurations. There
are numerous other possible configurations supported by Cisco. Cisco supports all
mixing and population configurations of the Cisco DIMMs as long as the mixing does
not violate the few fundamental rules noted in this document.
Total Memory Size | CPU 1 Bank 1, blue slots (A1, B1, C1) | CPU 1 Bank 2, black slots (A2, B2, C2) | CPU 2 Bank 1, blue slots (D1, E1, F1) | CPU 2 Bank 2, black slots (D2, E2, F2) | Total DIMMs in System
48 GB(1) | 3 x 4 GB | 3 x 4 GB | 3 x 4 GB | 3 x 4 GB | 12
48 GB(1) | 3 x 8 GB | — | 3 x 8 GB | — | 6
72 GB | 3 x 8 GB | 3 x 4 GB | 3 x 8 GB | 3 x 4 GB | 12
96 GB(2) | 3 x 8 GB | 3 x 8 GB | 3 x 8 GB | 3 x 8 GB | 12
96 GB(2) | 3 x 16 GB | — | 3 x 16 GB | — | 6
120 GB | 3 x 16 GB | 3 x 4 GB | 3 x 16 GB | 3 x 4 GB | 12
144 GB | 3 x 16 GB | 3 x 8 GB | 3 x 16 GB | 3 x 8 GB | 12
192 GB | 3 x 16 GB | 3 x 16 GB | 3 x 16 GB | 3 x 16 GB | 12
384 GB | 3 x 32 GB | 3 x 32 GB | 3 x 32 GB | 3 x 32 GB | 12
Notes . . .
1. For 48 GB, there are two possible sweet-spot configurations:
6 x 8 GB occupies 1 DPC, leaving room to increase the total memory in the system (beyond 48 GB) by going to 2 DPC while still maintaining memory speed at 1600 MHz.
If there is no plan to increase the total memory in the system (beyond 48 GB), then 12 x 4 GB is currently more cost-effective.
2. For 96 GB, there are two possible optimal configurations:
6 x 16 GB occupies 1 DPC, leaving room to increase the total memory in the system (beyond 96 GB) by going to 2 DPC while maintaining the maximum memory speed at 1600 MHz.
12 x 8 GB gives the peak performance for this configuration and is recommended if there is no plan to increase the total memory in the future.
CPU 1 DIMMs | # CPU 1 DIMMs | CPU 1 Capacity | CPU 2 DIMMs | # CPU 2 DIMMs | CPU 2 Capacity | Total DIMMs for Both CPUs | Total Capacity for 2 CPUs
1 x 4 GB | 1 | 4 GB | 1 x 4 GB | 1 | 4 GB | 2 | 8 GB
2 x 4 GB | 2 | 8 GB | 2 x 4 GB | 2 | 8 GB | 4 | 16 GB
1 x 8 GB | 1 | 8 GB | 1 x 8 GB | 1 | 8 GB | 2 | 16 GB
3 x 4 GB | 3 | 12 GB | 3 x 4 GB | 3 | 12 GB | 6 | 24 GB
4 x 4 GB | 4 | 16 GB | 4 x 4 GB | 4 | 16 GB | 8 | 32 GB
2 x 8 GB | 2 | 16 GB | 2 x 8 GB | 2 | 16 GB | 4 | 32 GB
1 x 16 GB | 1 | 16 GB | 1 x 16 GB | 1 | 16 GB | 2 | 32 GB
5 x 4 GB | 5 | 20 GB | 5 x 4 GB | 5 | 20 GB | 10 | 40 GB
6 x 4 GB | 6 | 24 GB | 6 x 4 GB | 6 | 24 GB | 12 | 48 GB
3 x 8 GB | 3 | 24 GB | 3 x 8 GB | 3 | 24 GB | 6 | 48 GB
4 x 8 GB | 4 | 32 GB | 4 x 8 GB | 4 | 32 GB | 8 | 64 GB
2 x 16 GB | 2 | 32 GB | 2 x 16 GB | 2 | 32 GB | 4 | 64 GB
3 x 8 GB + 3 x 4 GB(1) | 6 | 36 GB | 3 x 8 GB + 3 x 4 GB(2) | 6 | 36 GB | 12 | 72 GB
5 x 8 GB | 5 | 40 GB | 5 x 8 GB | 5 | 40 GB | 10 | 80 GB
6 x 8 GB | 6 | 48 GB | 6 x 8 GB | 6 | 48 GB | 12 | 96 GB
3 x 16 GB | 3 | 48 GB | 3 x 16 GB | 3 | 48 GB | 6 | 96 GB
3 x 16 GB + 3 x 4 GB(3) | 6 | 60 GB | 3 x 16 GB + 3 x 4 GB(4) | 6 | 60 GB | 12 | 120 GB
4 x 16 GB | 4 | 64 GB | 4 x 16 GB | 4 | 64 GB | 8 | 128 GB
3 x 8 GB + 3 x 16 GB(5) | 6 | 72 GB | 3 x 8 GB + 3 x 16 GB(6) | 6 | 72 GB | 12 | 144 GB
5 x 16 GB | 5 | 80 GB | 5 x 16 GB | 5 | 80 GB | 10 | 160 GB
6 x 16 GB | 6 | 96 GB | 6 x 16 GB | 6 | 96 GB | 12 | 192 GB
6 x 32 GB | 6 | 192 GB | 6 x 32 GB | 6 | 192 GB | 12 | 384 GB
Notes . . .
1. UCS-MR-1X082RY-A + UCS-MR-1X041RY-A
2. UCS-MR-1X082RY-A + UCS-MR-1X041RY-A
3. UCS-MR-1X162RY-A + UCS-MR-1X041RY-A
4. UCS-MR-1X162RY-A + UCS-MR-1X041RY-A
5. UCS-MR-1X082RY-A + UCS-MR-1X162RY-A
6. UCS-MR-1X082RY-A + UCS-MR-1X162RY-A
■ Low-voltage DIMMs within the server must have the identical manufacturer, type, speed,
and size.
■ Low-voltage DIMMs and standard-voltage DIMMs can be mixed in the same server. Note that
this causes the system BIOS to default to standard-voltage operation (Performance Mode).
That is, the server cannot operate in Power Saving Mode unless all DIMMs are low-voltage
DIMMs.
■ CPUs that have a maximum memory frequency less than 1333 MHz support low-voltage
DIMMs operating in Power Saving Mode only, and do not support Performance Mode.
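The low-voltage rules above reduce to a simple decision. The sketch below is an illustration only; the mode names come from the text, but the function, its inputs, and its default are assumptions, not Cisco firmware behavior:

```python
# Illustrative sketch of the BIOS voltage-mode rules described above.
# The function and its signature are hypothetical, for illustration only.

def select_memory_mode(dimm_voltages, cpu_max_freq_mhz):
    """dimm_voltages: list of "low" or "standard", one entry per DIMM."""
    all_low_voltage = all(v == "low" for v in dimm_voltages)

    # Mixing LV and standard DIMMs forces standard-voltage operation,
    # so the system cannot run in Power Saving Mode.
    if not all_low_voltage:
        return "Performance Mode"

    # CPUs with a maximum memory frequency below 1333 MHz support LV
    # DIMMs in Power Saving Mode only.
    if cpu_max_freq_mhz < 1333:
        return "Power Saving Mode"

    # All-LV DIMMs with a capable CPU may use either mode; this sketch
    # assumes the power-saving choice (the actual BIOS default may differ).
    return "Power Saving Mode"

print(select_memory_mode(["low", "standard"], 1600))  # Performance Mode
```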
RAID Summary
■ The ROM5 and ROM55 embedded RAID upgrade options each support up to 8 drives.
■ The LSI MegaRAID SAS 9270CV-8i, 9240-8i, and 9220-8i RAID controller cards each support up to 8 drives.
NOTE: If you do not select a PCIe RAID controller or one of the embedded RAID
upgrade options (ROM5 or ROM55), you will have an embedded SATA-only RAID
controller that supports up to four SATA-only drives (RAID 0, 1, 10).
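The drive-count limits above can be summarized as data. This sketch is illustrative only; the dictionary layout and function are assumptions for this document, not a Cisco API:

```python
# Summary of the RAID controller options described above, expressed as data.
# The structure is an illustration, not a Cisco tool or API.

RAID_OPTIONS = {
    "ROM5": {"max_drives": 8, "kind": "embedded RAID upgrade"},
    "ROM55": {"max_drives": 8, "kind": "embedded RAID upgrade"},
    "LSI MegaRAID SAS 9270CV-8i": {"max_drives": 8, "kind": "PCIe card"},
    "LSI MegaRAID SAS 9240-8i": {"max_drives": 8, "kind": "PCIe card"},
    "LSI MegaRAID SAS 9220-8i": {"max_drives": 8, "kind": "PCIe card"},
    # With no upgrade or card selected, the default embedded controller
    # supports up to four SATA-only drives (RAID 0, 1, 10).
    None: {"max_drives": 4, "kind": "embedded SATA-only (RAID 0, 1, 10)"},
}

def max_drives(option=None):
    """Return the drive limit for a selected RAID option (None = default)."""
    return RAID_OPTIONS[option]["max_drives"]

print(max_drives())          # 4 (default embedded SATA-only controller)
print(max_drives("ROM55"))   # 8
```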
# CPUs | Embedded RAID | PCIe RAID Controller | SATA-only RAID | Drives Supported | PCIe Slot 1(2) | PCIe Slot 2(2)
1 | Not allowed | Installed in C22 M3 SFF slot 1 (default)(3) | Not allowed | 8 internal | O | U
1 | Enabled | Not allowed | Not allowed | 8 internal | A | U
2 | Not allowed | Installed in C22 M3 SFF slot 2 (default) | Not allowed | 8 internal | A | O
2 | Enabled | Not allowed | Not allowed | 8 internal | A | A
Notes . . .
1. If you want to boot from a device other than the 9270CV-8i, 9240-8i, or 9220-8i MegaRAID PCIe cards, you can
leave the cards installed. Just disable the OPROM for their slots, and the system will boot even with the cards
installed.
2. A = available slot, O = occupied slot, U = unsupported slot (slot 2 is not supported in 1-CPU systems)
3. The OPROM is enabled for the default PCIe RAID controller slots. If you want to enable a different slot, you
must go into the BIOS and enable the OPROM for the desired slot and disable the OPROM for the default PCIe
slot (see Figure 8 on page 57).
RACKS
The Cisco R42610 rack (see Figure 9) is certified for Cisco UCS installation at customer sites and is compatible with hardware designed for EIA-standard 19-inch racks. Rack specifications are listed in Table 30.
NOTE: The AC input connector is an IEC 320 C-14 15 A/250 VAC power inlet.
[Figure 9: front views of the rack with the door closed, the door open, and the door removed.]
PDUs
Cisco RP Series Power Distribution Units (PDUs) offer power distribution with branch circuit protection.
Cisco RP Series PDU models distribute power to up to 24 outlets. The architecture organizes power
distribution, simplifies cable management, and enables you to move, add, and change rack equipment
without an electrician.
With a Cisco RP Series PDU in the rack, you can replace up to two dozen input power cords with just one.
The fixed input cord connects to the power source from overhead or under-floor distribution. Your IT
equipment is then powered by PDU outlets in the rack using short, easy-to-manage power cords.
The C-Series servers accept the zero-rack-unit (0RU) PDU (see Figure 10).
Cisco RP Series PDU models provide two 20-ampere (A) circuit breakers for groups of receptacles. The
effects of a tripped circuit are limited to a receptacle group. Simply press a button to reset that circuit.
LED Indicators
The server contains LED indicators on the front and rear panels. Their functions are described in the
following sections.
Front Panel
Figure 12 shows the front panel LEDs.
LED | State | Meaning
Fan status | Blinking (amber) | Critical fault state: two or more fans have a fault.
Power supply status | Blinking (amber) | One or more power supplies are in a critical fault state.
Hard drive activity | Off | There is no hard drive in the hard drive sled (no access, no fault).
Rear Panel
Figure 13 shows the rear panel LEDs.
LED | State | Meaning
Power supply status | Blinking (amber) | An event warning threshold has been reached, but the power supply continues to operate.
TECHNICAL SPECIFICATIONS
Dimensions and Weight
Parameter | Value
Weight, maximum (8 HDDs, 2 CPUs, 12 DIMMs, 1 power supply) | 27.78 lbs (12.6 kg)
Weight, bare (0 HDDs, 0 CPUs, 0 DIMMs, no power supply) | 23.48 lbs (10.65 kg)
Power Specifications
The general power specifications for the C22 M3 SFF server 450 W power supply are listed in Table 34.
Description Specification
The general power specifications for the C22 M3 SFF server 650 W power supply are listed in Table 35.
Description Specification
For configuration-specific power specifications, use the Cisco UCS Power Calculator at this URL:
http://ucspowercalc.cisco.com
Environmental Specifications
The environmental specifications for the C22 M3 server are listed in Table 36.
Parameter Minimum
Compliance Requirements
The regulatory compliance requirements for C-Series servers are listed in Table 37.
Parameter Description