28th March 2012

UCS, Fiber Channel and SAN Connectivity, Pt 1

Well, the several days' wait for the next blog entry turned into several months. Such is the way it is with project schedules. This blog entry will include an introduction to SAN storage supporting a VM-centric data center. The following blog will concentrate on Cisco UCS-specific SAN FC support features.
This blog's entries will be broken down into a series as follows:
Introduction to SAN Storage Architecture
SAN (FC) as supported by Cisco UCS
Special Backup Cases
Troubleshooting UCS Fiber Channel will be covered in the next blog entry.
Introduction to SAN Storage Architecture
Choosing between a NAS (file-based SMB/Samba/CIFS) system and raw device (SAN) storage is an immediate decision. For a data center hosting multi-tenant virtual machines, raw device storage is a necessity in order to support the volume and speeds necessary for timely backups.
Fiber Channel: A SAN Protocol
Well, FC is a lower-level SAN protocol suite, with a little help from the higher-layer SCSI protocol carried over it. There are features of FC that make it very applicable to the needs of a high-density VM environment with regular device backups:
Lossless transmission, by way of buffer credits
Device access partitioning efficiency (see the zoning/VSAN sketch after this list):
a. Fabric-based zoning: limits host access to particular devices
b. VSANs: allow one FC switch to be carved into multiple virtual fabrics and prevent FC switch port wastage
Backup speed of raw devices exceeds NAS backups of the files associated with an entire device
Multipath (load balancing) to storage devices is supported, possibly requiring 3rd-party software
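
As a concrete illustration of items (a) and (b), the following is a minimal MDS-style sketch that creates a VSAN and a single-initiator zone. The VSAN number, zone names and WWPNs are invented placeholders, not values from any real fabric:

    ! place the fabric's switch ports into a dedicated VSAN
    vsan database
      vsan 100 name FABRIC-A
      vsan 100 interface fc1/1
    ! zone one host HBA to one array target port (single-initiator zoning)
    zone name ESX01_HBA0_ARRAY_SPA vsan 100
      member pwwn 20:00:00:25:b5:0a:00:01
      member pwwn 50:06:01:60:3b:a0:11:22
    zoneset name FABRIC-A-ZS vsan 100
      member ESX01_HBA0_ARRAY_SPA
    zoneset activate name FABRIC-A-ZS vsan 100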

SAN Hardware Architecture


There's plenty of documentation on SAN components and architecture, and this blog series won't be redundant in that regard. This series concentrates on the Cisco UCS implementation that integrates FC, vHBAs, FCoE, network adapters and MDS in support of SAN FC, NPV and NPIV.
The NPV/NPIV Fiber Channel operation of the UCS system in End Host mode bears similarities to how UCS supports Ethernet VLAN connectivity in End Host mode. Remember that a UCS Fabric Interconnect in End Host Mode behaves like a host with many ports: there is no MAC learning on the uplink-facing side, and the FI does MAC learning on the host-facing side only. In a similar fashion, the FI in End Host mode supports FC NPV, appearing as a device with many HBAs connecting to a few uplinks, and participating in no Domain ID, zoning or other FC switch functions. The NPV FI is an (inexact) analogy to a host with many HBAs.
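
A quick way to see this NPV behavior is to drop from the UCS Manager CLI into the NX-OS shell of a Fabric Interconnect. The commands below are a sketch based on the standard NX-OS NPV feature rather than a capture from a live system:

    UCS-A# connect nxos a
    UCS-A(nxos)# show npv status          ! confirms the FI is acting as an NPV edge device
    UCS-A(nxos)# show npv flogi-table     ! lists the server vHBA logins proxied out the FC uplinks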

The FI NPV operation allows the connected HBAs to log in to an FC switch upstream of the FI. The FI, as an NPV device, proxies the logins for all the connected HBAs out of its uplink ports to an FC NPIV switch. A notable difference between the way an End Host Mode FI operates in Ethernet versus Fiber Channel is that local intra-VLAN switching is supported for local hosts, while local switching is not supported for Fiber Channel.
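
On the upstream switch, the visible result of this proxying is several port WWNs logged in on the single F-Port that faces the FI uplink. The output below is an illustrative mock-up (VSAN, FCIDs and WWNs are invented), not a capture from a real MDS:

    MDS-A# show flogi database
    INTERFACE  VSAN  FCID      PORT NAME                NODE NAME
    fc1/1      100   0x640000  20:41:00:0d:ec:aa:bb:c0  20:64:00:0d:ec:aa:bb:c1   <- FI uplink port itself
    fc1/1      100   0x640001  20:00:00:25:b5:0a:00:01  20:00:00:25:b5:00:00:01   <- server vHBA, via NPIV
    fc1/1      100   0x640002  20:00:00:25:b5:0a:00:02  20:00:00:25:b5:00:00:02   <- server vHBA, via NPIV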
Basic Implementation Caveats
Expense: SAN Fiber Channel (FC) is currently an extensively adopted standard, but it requires an upfront cost:
a. A SAN requires a fabric, including switch gear
b. FC-connected hosts require Host Bus Adapters (HBAs) for FC connectivity
Backups:
a. Applications such as the Cisco CUCM apps are highly integrated with their Linux OS. This has implications, involving exceptions with respect to how backups can be configured and executed in a SAN environment. This will be explained more in the Backup Architecture section.
b. SAN doesn't support file-based backups. This issue has things in common with item (a) above. Again, more on this in the Backup Architecture section.

SAN (FC) as supported by Cisco UCS


The typical and most flexible implementation of Cisco UCS is End Host Mode, where UCS appears as a collection of hosts attached to the upstream switch. The other mode, UCS Switching Mode, will not be included in this discussion of UCS support of SANs. Therefore, all of the following references are to UCS in End Host Mode.
Implementation:
Network SAN basics:
NPort Virtualization (NPV): supported on the Cisco UCS Fabric Interconnect. NPV support allows the UCS to proxy multiple FLOGIs on behalf of the attached servers' vHBAs. This is very synergistic with the general End Host Mode (non-switch) operation, in that an NPV device does not behave as a SAN switch but as a login proxy.
NPort ID Virtualization (NPIV): the means for a fabric switch to associate multiple logins with a single N-Port, such as the uplink from a fabric interconnect running NPV. The upstream FC switch (a Cisco MDS switch, for example) is responsible for accepting the proxied logins from the UCS Fabric Interconnects.
The fabric interconnect NPV and MDS NPIV operation are integral to supporting the volume of virtual hosts that would be guests on UCS blade ESXi hosts.
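
On the MDS side, the switch-level prerequisite for accepting those proxied logins is simply that NPIV be enabled; the interface facing the FI uplink is an ordinary F-Port. A minimal sketch follows (the interface numbering and description are arbitrary):

    MDS-A# configure terminal
    MDS-A(config)# feature npiv                       ! allow multiple logins per F-Port
    MDS-A(config)# interface fc1/1
    MDS-A(config-if)# switchport description Uplink_from_UCS_FI-A
    MDS-A(config-if)# switchport mode F
    MDS-A(config-if)# no shutdown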

Card level SAN basics:


Each network adapter, for example the Palo UCS M81KR, supports a programmable quantity of HBAs. Up to 128 vNICs or vHBAs may be configured and assigned to VMs as necessary. The HBAs assigned to a server connect to the FI as (typical) N-Ports, but the FI, operating as an NPV device, presents an F-proxy port back to the server's HBA, with the relevant F-Port existing upstream on the MDS switch. In this fashion, the operation of the VM's HBA(s), the FI and the MDS is very much like the standard operation of a physical server's FC attachment to an NPIV switch via an NPV device.
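
For reference, creating a pair of vHBAs (one per fabric) in a service profile looks roughly like the following from the UCS Manager CLI. Treat this as an approximation: the service profile and vHBA names are placeholders, and the exact syntax should be verified against the UCS Manager CLI configuration guide for your release:

    UCS-A# scope org /
    UCS-A /org # scope service-profile ESX-Blade-1
    UCS-A /org/service-profile # create vhba vHBA-A fabric a
    UCS-A /org/service-profile/vhba # exit
    UCS-A /org/service-profile # create vhba vHBA-B fabric b
    UCS-A /org/service-profile/vhba # exit
    UCS-A /org/service-profile # commit-buffer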
UCS SAN Connectivity Summary
Refer to the diagram.

[Diagram: http://2.bp.blogspot.com/-r2rQSMVHC1s/T3MJW_ZxgKI/AAAAAAAAACg/I5cy3pv1gE0/s1600/storage_v2.jpg]

As was illustrated in my earlier blog, automatic pinning will pin the servers mounted on specific blades to specific fabric uplinks to an FI, unless manual pinning has been employed to point a server at a particular fabric uplink. Generally, with 2 IOMs, server traffic may be directed up either of 2 fabric uplink paths, with one path going to each FI. In this example, the HBA uplinks have purposely been split so that path redundancy exists for the pair of HBAs on each server.

The server's dual HBAs are each statically pinned so that each has a unique path via a separate fabric. The pinning is accomplished through the UCS Manager CLI. The IOMs each support a full quad uplink. Multiple paths from each server are supported all the way through to the data store. However, the way that each data store's SCSI ID is seen at each HBA may convince the OS that each HBA is connected to a different data store, even though they are connected to the same one. The answer to this is the vendor's multipathing client software for the real or virtual hosts. In the case of EMC, the PowerPath client software will enable multi-HBA-equipped hosts to utilize multiple paths to a single data store and allow the illustrated model to provide its benefits.
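
Once the paths are in place they can be sanity-checked from the hypervisor. The commands below are a sketch assuming an ESXi 5.x host and, where PowerPath/VE is installed, the EMC powermt utility; the availability and output of these commands depend on the versions in use:

    # native ESXi view: one device, multiple runtime paths (the vmhba adapters are the vHBAs)
    ~ # esxcli storage nmp device list
    ~ # esxcli storage core path list

    # with EMC PowerPath/VE installed, the same LUN shows all of its paths under one pseudo device
    ~ # powermt display dev=all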

Special Backup Cases


Many SAN backup solutions require some level of client software to support the interoperation of FC, the underlying server technology (virtual machines) and the applications running on the virtual machines. Let's take a look at a particular instance:
Integration of UCS, VMware, Cisco CUCM Applications and Client Backups

Update (6/28/2012): Practically Speaking


If the ESXi image is to be stored on a local disk, there doesn't seem to be much reason to configure more than two
vHBAs per UCS blade server. Bear in mind that the VMs are ignorant that HBAs or Fiber Channel are involved at all;
the VMs believe that they are employing SAS storage. The virtual environment adds the FC header and a WWNN to the
storage packets, and the CNA adds one WWPN per vHBA (one per fabric). In this instance, two vHBAs would provide
all the redundancy that is needed for Fiber Channel access via either fabric.
Boot From SAN
However, in the instance of boot from SAN, we have to take in additional operational considerations.
This topic gets pretty good treatment from: http://virtualeverything.wordpress.com/2011/05/31/simplifying-sanmanagement-for-vmware-boot-from-san-utilizing-cisco-ucs-and-palo.
In a nutshell, two vHBAs per adapter would be members of a storage group that includes all of the LUNs in the cluster. This would support VMotion and DRS. The same would apply to all blade adapters in the cluster.
Best practice (or a requirement) for VMware boot from SAN is that each blade should have its own LUN containing the boot image. This would involve two additional vHBAs, which would be exclusive members of a storage group that includes the blade's boot LUN as its only LUN member. Therefore, taking into consideration the normal DRS/HA operations plus boot from SAN, a total of four vHBAs could be effectively employed, as sketched below.
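
Putting the two cases together, one way to picture the per-blade vHBA layout is the sketch below; the storage group names are invented for illustration:

    vHBA0 (fabric A) --+
                       +--> storage group "ESX-Cluster-Shared": all shared cluster VMFS LUNs (VMotion/DRS/HA)
    vHBA1 (fabric B) --+

    vHBA2 (fabric A) --+
                       +--> storage group "ESX-Blade-1-Boot": this blade's boot LUN only
    vHBA3 (fabric B) --+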
Posted 28th March 2012 by mj1pate