Abstract
This implementation guide provides information for establishing communications between an HP 3PAR Storage System and a
Solaris 8, 9, 10, or 11 host running on the SPARC, x64, and x86 platforms. General information is also provided on the basic
steps required to allocate storage on the HP 3PAR Storage System that can then be accessed by the Solaris host.
The information contained herein is subject to change without notice. The only warranties for HP products and services are set forth in the express
warranty statements accompanying such products and services. Nothing herein should be construed as constituting an additional warranty. HP shall
not be liable for technical or editorial errors or omissions contained herein.
Acknowledgments
Java and Oracle are registered trademarks of Oracle and/or its affiliates.
Verifying the VxDMP ASL Installation...............................................................................37
Using Sun StorageTek Traffic Manager Multipathing...............................................................37
Edits to the /kernel/drv/scsi_vhci.conf file for SSTM Multipathing........................................38
Additional edit to the /kernel/drv/scsi_vhci.conf file for Solaris 8/9....................................38
Persistent Target Binding Considerations....................................................................................39
Persistent Target Binding for Emulex lpfc Drivers.....................................................................40
Persistent Target Binding for QLogic qla Drivers......................................................................40
Persistent Target Binding for Solaris qlc and emlxs Drivers.......................................................41
Persistent Target Binding for JNI Tachyon Drivers....................................................................41
Persistent Target Binding for JNI Emerald Drivers....................................................................42
System Settings for Minimizing I/O Stall Times on VLUN Paths......................................................42
5 Configuring the Host for an iSCSI Connection..............................................44
Solaris Host Server Requirements..............................................................................................44
Setting Up the Ethernet Switch..................................................................................................45
Configuring the Solaris 11 Host Ports.........................................................................................45
Configuring the Solaris 10 Host Ports.........................................................................................46
Setting Up the iSCSI Initiator for Target Discovery.......................................................................47
Using the Static Device Discovery Method.............................................................................48
Using the SendTargets Discovery Method..............................................................................48
Using the iSNS Discovery Method........................................................................................49
Initiating and Verifying Target Discovery...............................................................................49
Setting Up Multipathing Using Sun StorEdge Traffic Manager.......................................................52
6 Allocating Storage for Access by the Solaris Host.........................................53
Creating Storage on the HP 3PAR Storage System.......................................................................53
Creating Virtual Volumes for InForm OS 2.3.x or 3.1.x............................................................53
Creating Virtual Volumes for InForm OS 2.2.3 and Earlier.......................................................54
Exporting LUNs to a Host with a Fibre Channel Connection..........................................................54
Creating a VLUN for Export................................................................................................54
VLUN Exportation Limits Based on Host HBA Drivers...............................................................55
Exporting LUNs to a Solaris Host with an iSCSI Connection..........................................................56
Discovering LUNs on Fibre Channel Connections........................................................................57
Discovering LUNs for QLogic qla and Emulex lpfc Drivers........................................................58
Discovering LUNs for Solaris qlc and emlxs Drivers.................................................................58
Discovering LUNs for the JNI Tachyon Driver..........................................................................59
Discovering LUNs for the JNI Emerald Driver..........................................................................59
Discovering LUNs for Sun StorEdge Traffic Manager...............................................................60
Discovering LUNs for Veritas Volume Manager’s DMP (VxDMP)...............................................61
Discovering LUNs on iSCSI Connections....................................................................................61
Removing Volumes for Fibre Channel Connections......................................................................63
Removing Volumes for iSCSI Connections...................................................................................63
7 Configuring the Host for an FCoE Connection..............................................65
Solaris Host Server Requirements..............................................................................................65
Configuring the FCoE Switch and FC Switch...............................................................................65
Configuring the Solaris Host Ports.............................................................................................65
8 Using the SunCluster Cluster Server.............................................................67
9 Using the Veritas Cluster Server..................................................................68
10 Booting from the HP 3PAR Storage System.................................................69
Preparing a Bootable Solaris Image for Fibre Channel.................................................................69
Dump and Restore Method..................................................................................................69
Net Install Method.............................................................................................................69
Installing the Solaris OS Image onto a VLUN..............................................................................69
Configuring Additional Paths and Sun I/O Multipathing..............................................................71
Configuration for Multiple Path Booting.....................................................................................73
Additional Devices on the Booting Paths....................................................................................74
SAN Boot Example.................................................................................................................74
A Configuration Examples............................................................................76
Example of Discovering a VLUN Using qlc/emlx Drivers with SSTM...............................................76
Example of Discovering a VLUN Using an Emulex Driver and VxVM..............................................76
Example of Discovering a VLUN Using a QLogic Driver with VxVM...............................................77
Example of UFS/ZFS File System Creation..................................................................................77
Examples of Growing a Volume................................................................................................78
Growing an SSTM Volume..................................................................................................78
Growing a VxVM Volume...................................................................................................80
VxDMP Command Examples....................................................................................................82
Displaying I/O Statistics for Paths........................................................................................82
Managing Enclosures.........................................................................................................82
Changing Policies..............................................................................................................83
Accessing VxDMP Path Information......................................................................................83
Listing Controllers..........................................................................................................83
Displaying Paths............................................................................................................83
B Patch/Package Information........................................................................85
Minimum Patch Requirements for Solaris Versions........................................................................85
Patch Listings for Each SAN Version Bundle................................................................................87
HBA Driver/DMP Combinations...............................................................................................89
Minimum Requirements for a Valid QLogic qlc + VxDMP Stack................................................89
Minimum Requirements for a Valid Emulex emlxs + VxDMP Stack.............................................90
Default MU level Leadville Driver Table.................................................................................90
C FCoE-to-FC Connectivity............................................................................92
1 Introduction
This implementation guide provides information for establishing communications between an
HP 3PAR Storage System and a Solaris 8, 9, 10, or 11 host running on the SPARC, x64, and x86
platforms. General information is also provided on the basic steps required to allocate storage on
the HP 3PAR Storage System that can then be accessed by the Solaris host.
The information contained in this implementation guide is the outcome of careful testing of the
HP 3PAR Storage System with as many representative hardware and software configurations as
possible.
Required
For predictable performance and results with your HP 3PAR Storage System, the information in
this guide must be used in concert with the documentation set provided by HP for the HP 3PAR
Storage System and the documentation provided by the vendor for their respective products.
Required
All installation steps should be performed in the order described in this implementation guide.
Supported Configurations
The following types of host connections are supported between the HP 3PAR Storage System and
hosts running a Solaris OS:
• Fibre Channel
• iSCSI
• Fibre Channel over Ethernet (FCoE) (host-side only)
For information about supported hardware and software platforms, see the HP Single Point of
Connectivity Knowledge (SPOCK) website:
http://www.hp.com/storage/spock
Audience
This implementation guide is intended for system and storage administrators who monitor and
direct system configurations and resource allocation for the HP 3PAR Storage System.
The tasks described in this guide assume that the administrator is familiar with Sun Solaris and the
InForm OS.
Although this guide attempts to provide the basic information that is required to establish
communications between the HP 3PAR Storage System and the Sun Solaris host, and to allocate
the required storage for a given configuration, the appropriate HP documentation must be consulted
in conjunction with the Solaris host and host bus adapter (HBA) vendor documentation for specific
details and procedures.
NOTE: This implementation guide is not intended to reproduce any third-party product
documentation. For details about devices such as host servers, HBAs, fabric switches, and
non-HP 3PAR software management tools, consult the appropriate third-party documentation.
Related Documentation
The following documents also provide information related to the HP 3PAR Storage System and the
InForm OS:
• For InForm OS CLI commands and their usage, see the HP 3PAR InForm OS Command Line Interface Reference.
• For HP 3PAR Storage System hardware specifications, installation considerations, power requirements, networking options, and cabling, see:
  • HP 3PAR E-Class/F-Class Storage System Physical Planning Manual
  • HP 3PAR S-Class/T-Class Storage System Physical Planning Manual
  • HP P10000 3PAR Storage System Physical Planning Manual
• For identifying storage server components and detailed alert information, see the HP 3PAR InForm OS 3.1.1 Messages and Operator's Guide.
• For using HP 3PAR Remote Copy, see the HP 3PAR Remote Copy 3.1.1 Software User's Guide.
• For updating the InForm OS, see the HP 3PAR InForm Operating System Upgrade Pre-Planning Guide.
Typographical Conventions
This guide uses the following typographical conventions:
ABCDabcd Used for dialog elements such as titles, button labels, and other screen elements. Example: When prompted, click Finish to complete the installation.
ABCDabcd Used for paths, filenames, and screen output. Example: Open the file \os\windows\setup.exe
[ABCDabcd] Used for options in user input. Example: Modify the content string by adding the -P[x] option after -jar inform.jar:
# .\java -jar inform.jar -P[x]
Advisories
To avoid injury to people or damage to data and equipment, be sure to observe the cautions and
warnings in this guide. Always be careful when handling any electrical equipment.
NOTE: Notes are reminders, tips, or suggestions that supplement the procedures included in this
guide.
Required
Requirements signify procedures that must be followed as directed in order to achieve a functional
and supported implementation based on testing at HP.
WARNING! Warnings alert you to actions that can cause injury to people or irreversible damage
to data or the operating system.
CAUTION: Cautions alert you to actions that can cause damage to equipment, software, or data.
2 Configuring the HP 3PAR Storage System for Fibre
Channel
This chapter explains how to establish a Fibre Channel connection between the HP 3PAR Storage
System and a Solaris host, and covers InForm OS 3.1.x, 2.3.x, and 2.2.x. For information
on setting up the physical connection for a particular HP 3PAR Storage System, see the appropriate
HP installation manual.
Required
If you are setting up a fabric along with your installation of the HP 3PAR Storage System, see
“Setting Up and Zoning the Fabric” (page 14) before configuring or connecting your HP 3PAR
Storage System.
Required
The following setup must be completed before connecting the HP 3PAR Storage System port to a
device.
NOTE: While the server is running, HP 3PAR Storage System ports that leave the fabric (for
example, due to an unplugged cable) and return are tracked by their World Wide Name (WWN).
The WWN of each port is unique and constant, which ensures correct tracking of a port and its
LUNs by the host HBA driver.
If a fabric zoning relationship exists such that a host HBA port has access to multiple targets (for
example, multiple ports on the HP 3PAR Storage System), the driver will assign target IDs (cxtxdx)
to each discovered target in the order that they are discovered. The target ID for a given target
can change in this case as targets leave the fabric and return or when the host is rebooted while
some targets are not present.
# showport -par
N:S:P Connmode ConnType CfgRate MaxRate Class2 UniqNodeWwn VCN IntCoal
0:0:1 disk loop auto 2Gbps disabled disabled disabled enabled
0:0:2 disk loop auto 2Gbps disabled disabled disabled enabled
0:0:3 disk loop auto 2Gbps disabled disabled disabled enabled
0:0:4 disk loop auto 2Gbps disabled disabled disabled enabled
0:4:1 host point auto 4Gbps disabled disabled disabled enabled
0:4:2 host point auto 4Gbps disabled disabled disabled enabled
0:5:1 host point auto 2Gbps disabled disabled disabled enabled
0:5:2 host loop auto 2Gbps disabled disabled disabled enabled
0:5:3 host point auto 2Gbps disabled disabled disabled enabled
0:5:4 host loop auto 2Gbps disabled disabled disabled enabled
1:0:1 disk loop auto 2Gbps disabled disabled disabled enabled
1:0:2 disk loop auto 2Gbps disabled disabled disabled enabled
1:0:3 disk loop auto 2Gbps disabled disabled disabled enabled
1:0:4 disk loop auto 2Gbps disabled disabled disabled enabled
1:2:1 host point auto 2Gbps disabled disabled disabled enabled
1:2:2 host loop auto 2Gbps disabled disabled disabled enabled
1:4:1 host point auto 2Gbps disabled disabled disabled enabled
1:4:2 host point auto 2Gbps disabled disabled disabled enabled
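When working with many ports, the showport -par listing can be filtered with standard text tools. The following sketch lists only host-facing ports from output saved to a file; the column layout is assumed from the sample listing above, and /tmp/showport_par.txt is a hypothetical capture file:

```shell
# Save a subset of the `showport -par` listing above to a file for filtering.
cat > /tmp/showport_par.txt <<'EOF'
0:0:1 disk loop auto 2Gbps disabled disabled disabled enabled
0:4:1 host point auto 4Gbps disabled disabled disabled enabled
0:5:2 host loop auto 2Gbps disabled disabled disabled enabled
EOF
# Print only host-facing ports (Connmode column == "host"):
# the N:S:P identifier, the connection type, and the maximum rate.
awk '$2 == "host" {print $1, $3, $5}' /tmp/showport_par.txt
```

This prints one line per host port, which is convenient for comparing connection types and rates across nodes.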
2. If the port has not been configured, take the port offline before configuring it for connection
to a host server. To take the port offline, issue the InForm OS CLI command controlport
offline <node:slot:port>.
3. To configure the port to the host server, issue controlport config host -ct point
<node:slot:port>, where -ct point specifies a fabric (point-to-point) connection type.
For example:
# showhost
Id Name Persona -WWN/iSCSI_Name- Port
6 solarishost Generic 1122334455667788 ---
1122334455667799 ---
Solaris 8 1 1
Solaris 9 1 1
Solaris 10 1 1
Solaris 11 2 1
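Combining the steps above, one fabric-connected host port can be configured with a short command sequence. This is a hedged sketch: 0:5:1 is an assumed example port, and the final controlport rst, used here to bring the port back online after configuration, is an assumption not shown in the steps above.

```
# Take the unconfigured port offline before changing it (assumed example port 0:5:1).
controlport offline 0:5:1
# Configure the port as a host port with a fabric (point-to-point) connection type.
controlport config host -ct point 0:5:1
# Reset the port to bring it back online with the new configuration (assumed step).
controlport rst 0:5:1
```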
NOTE: Host persona 6 is automatically assigned following a rolling upgrade from InForm
OS 2.2.x.
If appropriate, you can change host persona 6 after an upgrade to the appropriate value as
shown in Table 1 (page 11).
Host personas 1 and 2 enable two functional features:
• Host Explorer, which requires the SESLun element of host persona 1
• UARepLun, which notifies the host of newly exported VLUNs and triggers a LUN discovery
request on the host, making the VLUN automatically visible in the Solaris format utility.
Host persona 2 also enables Report Target Port Groups (RTPG).
CAUTION: If /usr/local is a symbolic link when Host Explorer is installed, the link will be
removed and replaced by a directory, which may affect some applications. To prevent this, reply
No when asked, during installation, Do you want to install these conflicting
files?. Host Explorer will then install normally.
NOTE: See the HP 3PAR Inform OS Command Line Interface Reference or the InForm Management
Console Help for complete details on using the controlport, createhost, and showhost
commands.
These documents are available on the HP BSC website:
http://www.hp.com/go/3par/
Required
The following setup must be completed before connecting the HP 3PAR Storage System port to a
device.
Verify port personality 4, connection type loop, using the InForm OS CLI showport -par
command.
# showport -par
Verify port personality 7, connection type point, using the InForm OS CLI showport -par
command.
# showport -par
N:S:P ConnType CfgRate MaxRate Class2 VCN -----------Persona------------ IntCoal
0:5:1 point auto 4Gbps disable enabled (7) g_ven, g_hba, g_os, 0, FA enabled
Verify port personality 1, connection type loop, using the InForm OS CLI showport -par
command.
# showport -par
N:S:P ConnType CfgRate MaxRate Class2 VCN -----------Persona------------ IntCoal
1:4:2 loop auto 2Gbps disable enabled (1) g_ven, g_hba, g_os, 0, DC enabled
Verify port personality 7, connection type point, using the InForm OS CLI showport -par
command.
# showport -par
N:S:P ConnType CfgRate MaxRate Class2 VCN -----------Persona------------ IntCoal
0:5:1 point auto 4Gbps disable enabled (7) g_ven, g_hba, g_os, 0, FA enabled
Verify port personality 1, connection type loop, using the InForm OS CLI showport -par
command.
# showport -par
N:S:P ConnType CfgRate MaxRate Class2 VCN -----------Persona------------ IntCoal
1:4:2 loop auto 2Gbps disable enabled (1) g_ven, g_hba, g_os, 0, DC enabled
Verify port personality 9, connection type point, using the InForm OS CLI showport -par
command.
# showport -par
N:S:P ConnType CfgRate MaxRate Class2 VCN -----------Persona------------ IntCoal
0:5:1 point auto 4Gbps disable enabled (9) g_ven,g_hba, g_os, 0, FA enabled
Verify port personality 3, connection type loop, using the InForm OS CLI showport -par
command.
# showport -par
N:S:P ConnType CfgRate MaxRate Class2 VCN ----------Persona------------ IntCoal
1:4:2 loop auto 2Gbps disable disabled (3) jni, g_hba, g_os, 0, DC enabled
Verify port personality 7, connection type point, using the InForm OS CLI showport -par
command.
# showport -par
N:S:P ConnType CfgRate MaxRate Class2 VCN ----------Persona----------- IntCoal
0:5:1 point auto 4Gbps disable disabled *(7) g_ven,g_hba, g_os, 0, FA enabled
Verify port personality 3, connection type loop, using the InForm OS CLI showport -par
command.
# showport -par
N:S:P ConnType CfgRate MaxRate Class2 VCN ----------Persona------------ IntCoal
1:4:2 loop auto 2Gbps disable disabled (3) jni, g_hba, g_os, 0, DC enabled
Verify port personality 7, connection type point, using the InForm OS CLI showport -par
command.
# showport -par
N:S:P ConnType CfgRate MaxRate Class2 VCN -----------Persona------------ IntCoal
0:4:1 point auto 4Gbps disable enabled (7) g_ven, g_hba, g_os, 0, FA enabled
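The host definition itself is created with the InForm OS CLI createhost command. The following is a hedged sketch using the example WWNs from the showhost output that follows; the -persona 1 value is an assumed example, so choose the persona appropriate for your Solaris version:

```
# Create a host definition with two HBA WWNs (example values;
# -persona 1 is an assumption, not a universal recommendation).
createhost -persona 1 sqa-solaris 1122334455667788 1122334455667799
```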
2. To verify that the host has been created, issue the showhost command.
# showhost
Id Name -WWN/iSCSI_Name- Port
0 sqa-solaris 1122334455667788 ---
1122334455667799 ---
Required
Employ fabric zoning, using the methods provided by the switch vendor, to create relationships
between host server HBA ports and storage server ports before connecting the host server HBA
ports or HP 3PAR Storage System ports to the fabric(s).
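On a Brocade fabric, for example, a one-initiator zone might be created as follows. This is a sketch only: the zone name, WWNs, and configuration name are hypothetical, and other switch vendors provide equivalent zoning commands.

```
# Create a zone containing one host HBA WWN and one array port WWN (hypothetical values).
zonecreate "solhost1_3par", "10:00:00:00:c9:aa:bb:cc; 20:41:00:02:ac:00:00:3e"
# Create a fabric configuration containing the zone, save it, and activate it.
cfgcreate "san_cfg", "solhost1_3par"
cfgsave
cfgenable "san_cfg"
```

Consult the switch vendor documentation for the exact zoning procedure on your fabric.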
Fibre Channel switch vendors support the zoning of the fabric end-devices in different zoning
configurations. There are advantages and disadvantages with each zoning configuration. Choose
a zoning configuration based on your needs.
The HP 3PAR arrays support the following zoning configurations:
• One initiator to one target per zone
• One initiator to multiple targets per zone (zoning by HBA). This zoning configuration is
recommended for the HP 3PAR Storage System. Zoning by HBA is required for coexistence
with other HP Storage arrays.
NOTE: The storage targets in the zone can be from the same HP 3PAR Storage System,
multiple HP 3PAR Storage Systems, or a mixture of HP 3PAR and other HP storage systems.
For more information about using one initiator to multiple targets per zone, see “Zoning by HBA”
in the “Best Practices” chapter of the HP SAN Design Reference Guide. This document is available
on the HP BSC website:
http://www.hp.com/go/3par/
If you use an unsupported zoning configuration and an issue occurs, HP may require that you
implement one of the supported zoning configurations as part of the troubleshooting or corrective
action.
After configuring zoning and connecting each host server HBA port and HP 3PAR Storage System
port to the fabric(s), verify the switch and zone configurations using the InForm OS CLI showhost
command, to ensure that each initiator is zoned with the correct target(s).
HP 3PAR Coexistence
The HP 3PAR Storage System array can coexist with other HP array families.
For supported HP array combinations and rules, see the HP SAN Design Reference Guide, available
on the HP BSC website:
http://www.hp.com/go/3par/
brocade2_1:admin> portcfgshow
Ports 0 1 2 3 4 5 6 7
-----------------+--+--+--+--+----+--+--+--
Speed AN AN AN AN AN AN AN AN
Trunk Port ON ON ON ON ON ON ON ON
The following fill-word modes are supported on a Brocade 8 Gb/s switch running FOS firmware
6.3.1a and later:
admin>portcfgfillword
Usage: portCfgFillWord PortNumber Mode [Passive]
Mode: 0/-idle-idle - IDLE in Link Init, IDLE as fill word (default)
1/-arbff-arbff - ARBFF in Link Init, ARBFF as fill word
2/-idle-arbff - IDLE in Link Init, ARBFF as fill word (SW)
3/-aa-then-ia - If ARBFF/ARBFF failed, then do IDLE/ARBFF
HP recommends setting the fill word to mode 3 (aa-then-ia) using the portcfgfillword
command. If the fill word is not set correctly, the er_bad_os (invalid ordered set) counters
shown by the portstatsshow command will increase on connections to 8 Gb/s HBA ports,
which require the ARBFF-ARBFF fill word. Mode 3 also works correctly for lower-speed HBAs,
such as 4 Gb/s and 2 Gb/s HBAs. For more information, see the Fabric OS Command Reference
Manual supporting FOS 6.3.1a and the FOS release notes.
In addition, some HP switches, such as the HP SN8000B 8-slot SAN backbone director switch,
the HP SN8000B 4-slot SAN director switch, the HP SN6000B 16 Gb FC switch, or the HP
SN3000B 16 Gb FC switch automatically select the proper fill-word mode 3 as the default
setting.
• McDATA switch or director ports should be in their default modes as type GX-Port with a
speed setting of Negotiate.
• Cisco switch ports that connect to HP 3PAR Storage System ports or host HBA ports should
be set to AdminMode = FX and AdminSpeed = auto, with the port speed set to auto-negotiate.
NOTE: When host server ports can access multiple targets on fabric zones, the target number
assigned by the host driver for each discovered target can change when the host server is
rebooted while some targets are not present in the zone. This situation may change the device
node access point for devices during a host server reboot. This issue can occur with any
fabric-connected storage and is not specific to the HP 3PAR Storage System.
# showport -iscsi
N:S:P State IPAddr Netmask Gateway TPGT MTU Rate DHCP iSNS_Prim iSNS_Sec iSNS_Port
0:3:1 loss_sync 0.0.0.0 0.0.0.0 0.0.0.0 131 1500 n/a 0 0.0.0.0 0.0.0.0 3205
0:3:2 loss_sync 0.0.0.0 0.0.0.0 0.0.0.0 132 1500 n/a 0 0.0.0.0 0.0.0.0 3205
1:3:1 loss_sync 0.0.0.0 0.0.0.0 0.0.0.0 131 1500 n/a 0 0.0.0.0 0.0.0.0 3205
1:3:2 loss_sync 0.0.0.0 0.0.0.0 0.0.0.0 132 1500 n/a 0 0.0.0.0 0.0.0.0 3205
# showport
N:S:P Mode State ----Node_WWN---- -Port_WWN/HW_Addr- Type Protocol
0:3:1 suspended config_wait - - cna -
0:3:2 suspended config_wait - - cna -
# showport -i
N:S:P Brand Model Rev Firmware Serial HWType
0:3:1 QLOGIC QLE8242 58 0.0.0.0 PCGLT0ARC1K3SK CNA
0:3:2 QLOGIC QLE8242 58 0.0.0.0 PCGLT0ARC1K3SK CNA
Each HP 3PAR Storage System iSCSI target port that will be connected to an iSCSI Initiator must
be set up appropriately for your configuration as described in the following steps.
1. Set up the IP and netmask address on the iSCSI target port using the InForm OS CLI
controliscsiport command. Here is an example:
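A hedged sketch of step 1 follows, using addresses that match the showport -iscsi output in step 2; the controliscsiport addr argument order shown here is an assumption and should be confirmed against the CLI reference:

```
# Assign an IP address and netmask to each iSCSI target port
# (example addresses; -f suppresses the confirmation prompt).
controliscsiport addr 10.1.0.110 255.0.0.0 -f 0:3:1
controliscsiport addr 11.1.0.110 255.0.0.0 -f 1:3:1
```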
2. To verify the iSCSI target port configuration, issue the InForm OS CLI showport -iscsi
command.
# showport -iscsi
N:S:P State IPAddr Netmask Gateway TPGT MTU Rate DHCP iSNS_Prim iSNS_Sec iSNS_Port
0:3:1 ready 10.1.0.110 255.0.0.0 0.0.0.0 31 1500 1Gbps 0 0.0.0.0 0.0.0.0 3205
0:3:2 loss_sync 0.0.0.0 0.0.0.0 0.0.0.0 32 1500 n/a 0 0.0.0.0 0.0.0.0 3205
1:3:1 ready 11.1.0.110 255.0.0.0 0.0.0.0 131 1500 1Gbps 0 0.0.0.0 0.0.0.0 3205
1:3:2 loss_sync 0.0.0.0 0.0.0.0 0.0.0.0 132 1500 n/a 0 0.0.0.0 0.0.0.0 3205
NOTE: Make sure the IP switch ports where the HP 3PAR Storage System iSCSI target ports
and the host iSCSI Initiators are connected can communicate with each other. If the host
is already connected to the IP fabric or switch and its Ethernet interface has been configured,
you can use the ping command on the Solaris host for this purpose.
NOTE: The Solaris OS does not provide its own iSNS server, so a Windows server with the
iSNS feature installed must be used to provide the iSNS server functions.
4. Each HP 3PAR Storage System iSCSI port has a unique name, port location, and serial number
as part of its iqn iSCSI name. Use the InForm OS CLI showport command with the
-iscsiname parameter to get the iSCSI name.
# showport -iscsiname
N:S:P IPAddr ---------------iSCSI_Name----------------
0:3:1 10.1.0.110 iqn.2000-05.com.3pardata:20310002ac00003e
0:3:2 0.0.0.0 iqn.2000-05.com.3pardata:20320002ac00003e
1:3:1 11.1.0.110 iqn.2000-05.com.3pardata:21310002ac00003e
1:3:2 0.0.0.0 iqn.2000-05.com.3pardata:21320002ac00003e
5. Use the ping command on the Solaris host to verify that the HP 3PAR Storage System target
responds, and use the route get <IP> command to check that the configured network
interface is used for the destination route.
Example: After configuring the host and HP 3PAR Storage System ports, 11.1.0.110 is the
HP 3PAR Storage System target IP address, 11.1.0.40 is the host IP address, and the host
uses the ce2 network interface to route traffic to the destination.
# ping 11.1.0.110
11.1.0.110 is alive
# route get 11.1.0.110
route to: 11.1.0.110
destination: 11.0.0.0
mask: 255.0.0.0
interface: ce2
flags: <UP,DONE>
As an alternative, you can use controliscsiport to ping the host from the HP 3PAR
Storage System ports.
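For example, the host can be pinged from an array iSCSI port as sketched below; 11.1.0.40 is the host address from the example above, and the ping subcommand syntax is an assumption to confirm against the CLI reference:

```
# Ping the host IP address from array iSCSI port 1:3:1 (assumed syntax and example address).
controliscsiport ping 11.1.0.40 1:3:1
```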
For information on setting up target discovery on the Solaris host, see “Setting Up the iSCSI
Initiator for Target Discovery” (page 47).
The following steps show how to create the host definition for an iSCSI connection.
1. You can verify that the iSCSI Initiator is connected to the iSCSI target port by using the InForm
OS CLI showhost command.
# showhost
Id Name Persona ---------------WWN/iSCSI_Name--------------- Port
-- Generic iqn.1986-03.com.sun:01:ba7a38f0ffff.4b798940 0:3:1
iqn.1986-03.com.sun:01:ba7a38f0ffff.4b798940 1:3:1
2. Create an iSCSI host definition entry by issuing the InForm OS CLI createhost -iscsi
<hostname> <host iSCSI name> command.
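For example, using the initiator iSCSI name shown in step 1 (the hostname solaris-host-01 matches the showhost output later in this section):

```
# Create an iSCSI host definition for the connected initiator.
createhost -iscsi solaris-host-01 iqn.1986-03.com.sun:01:ba7a38f0ffff.4b798940
```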
# showport -iscsi
N:S:P State IPAddr Netmask Gateway TPGT MTU Rate DHCP
iSNS_Prim iSNS_Sec iSNS_Port
0:3:1 ready 10.100.0.101 255.0.0.0 0.0.0.0 31 1500 1Gbps 0
0.0.0.0 0.0.0.0 3205
1:3:1 ready 10.101.0.201 255.0.0.0 0.0.0.0 131 1500 1Gbps 0
0.0.0.0 0.0.0.0 3205
CAUTION: If /usr/local is a symbolic link when Host Explorer is installed, the link will
be removed and replaced by a directory, which may affect some applications. To prevent
this, reply No when asked, during installation, Do you want to install these
conflicting files?. Host Explorer will then install normally.
NOTE: HP recommends host persona 2 for Solaris 11 and host persona 1 for Solaris 8, 9,
and 10 (all supported MU levels). Host persona 1 is required for Solaris 10 to enable Host
Explorer functionality. However, host persona 6 is automatically assigned following a rolling
upgrade from InForm OS 2.2.x. If appropriate, you can change host persona 6 after an upgrade
to host persona 2 for Solaris 11 or host persona 1 for Solaris 10. Host persona 1 enables
Host Explorer, which requires the SESLun element of host persona 1. Newly exported VLUNs
can be seen in the Solaris format utility by issuing devfsadm -i iscsi. To register the
data VLUN 254 in Solaris format, a host reboot is required.
NOTE: You must configure the HP 3PAR Storage System iSCSI target port(s) and establish
an iSCSI Initiator connection/session with the iSCSI target port from the host to be able to
create a host definition entry. For details, see “Configuring the Host for an iSCSI Connection”
(page 44).
# showhost
Id Name Persona ---------------WWN/iSCSI_Name--------------- Port
1 solaris-host-01 Generic iqn.1986-03.com.sun:01:ba7a38f0ffff.4b798940 0:3:1
iqn.1986-03.com.sun:01:ba7a38f0ffff.4b798940 1:3:1
# showhost -d
Id Name Persona ---------------WWN/iSCSI_Name--------------- Port
IP_addr
1 solaris-host-01 Generic iqn.1986-03.com.sun:01:ba7a38f0ffff.4b798940 0:3:1
10.1.0.40
1 solaris-host-01 Generic iqn.1986-03.com.sun:01:ba7a38f0ffff.4b798940 1:3:1
11.1.0.40
# showiscsisession
N:S:P --IPAddr--- TPGT TSIH Conns -----------------iSCSI_Name-----------------
-------StartTime-------
0:3:1 10.105.3.10 31 11351 1 iqn.1986-03.com.sun:01:ba7a38f0ffff.4b798940
2010-02-25 07:47:38 PST
1:3:1 10.105.4.10 131 11351 1 iqn.1986-03.com.sun:01:ba7a38f0ffff.4b798940
2010-02-25 07:47:37 PST
# showhost
Id Name -----------WWN/iSCSI_Name------------ Port-
iqn.1986-03.com.sun:01:0003bac3b2e1.45219d0d 1:3:1
iqn.1986-03.com.sun:01:0003bac3b2e1.45219d0d 0:3:1
# showhost
Id Name -----------WWN/iSCSI_Name------------ Port
1 solarisiscsi iqn.1986-03.com.sun:01:0003bac3b2e1.45219d0d 0:3:1
               iqn.1986-03.com.sun:01:0003bac3b2e1.45219d0d 1:3:1
# showhost -d
Id Name -----------WWN/iSCSI_Name------------ Port IP_addr
1 solarisiscsi iqn.1986-03.com.sun:01:0003bac3b2e1.45219d0d 0:3:1 10.1.0.40
2 solarisiscsi iqn.1986-03.com.sun:01:0003bac3b2e1.45219d0d 1:3:1 11.1.0.40
# showiscsisession
N:S:P --IPAddr-- TPGT TSIH Conns ---iSCSI_Name-- ------StartTime-------
0:3:1 10.1.0.40 31 24435 1 iqn.1986-3.com.sun:01:0003bac3b2e1.45219d0d Fri Dec
08 11:57:50 PST 2006
1:3:1 11.1.0.40 131 17955 1 iqn.1986-3.com.sun:01:0003bac3b2e1.45219d0d Fri Dec
08 12:06:58 PST 2006
# showhost
Id Name -----------WWN/iSCSI_Name------------ Port
solarisiscsi iqn.1986-03.com.sun:01:0003bac3b2e1.45219d0d 0:3:1
iqn.1986-03.com.sun:01:0003bac3b2e1.45219d0d 1:3:1
# showhost -chap
Id Name -Initiator_CHAP_Name- -Target_CHAP_Name-
1 solarisiscsi solarisiscsi -
Enable CHAP as the authentication method after the secret key is set.
NOTE: In the example above, the default target CHAP Name is the target port iSCSI name
(iqn.2000-05.com.3pardata:21310002ac00003e) and host CHAP Name is the initiator
port iSCSI name (iqn.1986-03.com.sun:01:0003bac3b2e1.45219d0d).
g. Invoke devfsadm to discover the devices after the host is verified by the target.
# devfsadm -i iscsi
Target: iqn.2000-05.com.3pardata:20320002ac0000af
Alias: -
NOTE: The target CHAP name is set by default to the HP 3PAR Storage System name. Use
the InForm OS CLI showsys command to determine the HP 3PAR Storage System name.
5. Enter the target CHAP secret key target_secret0 for each connected target.
7. Set the CHAP name for the HP 3PAR Storage System for the iSCSI targets. (Use the InForm
OS CLI showsys command to determine the HP 3PAR Storage System name.)
9. Remove and create a new iSCSI session and invoke devfsadm -i iscsi to discover the
targets and all the LUNs.
2. On the host, disable and remove the target CHAP authentication on each target.
NOTE: For Solaris 10, a Sun MPXIO patch that contains MPXIO fixes applicable to SCSI-3
reservations is required if Sun Cluster is to be configured. For SPARC-based servers, use patch
127127-11, and for x86-based servers use patch 127128-11. For availability of later versions,
check the following website:
http://support.oracle.com/CSP/ui/flash.html
Solaris 8/9
Install the appropriate Sun SAN software package for Solaris 8 or 9 hosts available on the following
website:
http://www.oracle.com/us/products/servers-storage/storage/
storage-networking/index.htm
Consult the Solaris OS minimum patch listings in Chapter 6 (page 53).
NOTE: The SAN package may have an updated release of the emlxs/qlc drivers (also known
as the Leadville drivers).
For JNI HBAs, install the JNIfcaPCI (FCI-1063) or JNIfcaw (FC64-1063) driver package for the
Solaris OS. The driver install package files fca-pci.pkg and fcaw.pkg contain the JNIfcaPCI,
JNIfcaw and JNIsnia drivers.
Direct Connect
Configured by editing /kernel/drv/lpfc.conf and then running the update_drv utility. On
versions of Solaris earlier than Solaris 9, you must manually reboot the host server to update
the host with the modified driver configuration settings.
Fabric Connect
Configured by editing /kernel/drv/lpfc.conf and then running the update_drv utility. On
versions of Solaris earlier than Solaris 9, you must manually reboot the Solaris host to update
it with the modified driver configuration settings. The sd.conf file is read by the sd driver at boot
time, so supporting entries for new LUNs must be in place before the server is rebooted.
Add entries to the /kernel/drv/sd.conf file between the boundary comments generated by
the Emulex driver package during installation.
A line is required for each LUN number (a pre-6.20 driver requirement). For fabric configurations,
entries must be made for all target LUNs that will be exported from the HP 3PAR Storage System
to the Solaris host. These entries can be restricted to the Emulex lpfc driver only, so a useful strategy
is to add entries for all possible LUNs (0 to 255) on target 0. Testing at HP did not reveal any
noticeable increase in server boot time due to the probing of non-existent LUNs.
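As a convenience sketch (not from the original guide; the parent="lpfc" property is an assumption that should be checked against the boundary comments the Emulex package writes into sd.conf), the 256 entries can be generated with a short shell loop and then pasted between those boundary comments:

```shell
# Generate sd.conf entries for every possible LUN (0-255) on target 0.
# The parent="lpfc" restriction limits the probing to the Emulex lpfc driver.
lun=0
while [ "$lun" -le 255 ]; do
    printf 'name="sd" parent="lpfc" target=0 lun=%d;\n' "$lun"
    lun=$((lun + 1))
done > /tmp/sd_conf_entries

# Review the generated lines before pasting them into /kernel/drv/sd.conf.
head -2 /tmp/sd_conf_entries
wc -l < /tmp/sd_conf_entries
```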
WARNING! Installation of version 6.21g of the lpfc driver for Solaris may differ significantly
from previous releases. Follow the driver instructions precisely for the initial installation.
Failure to follow the proper installation steps could render your system inoperable.
#
# Determine how long the driver will wait [0 - 255] to begin linkdown
# processing when the hba link has become inaccessible. Linkdown processing
# includes failing back commands that have been waiting for the link to
# come back up. Units are in seconds. linkdown-tmo works in conjunction
# with nodev-tmo. I/O will fail when either of the two expires.
linkdown-tmo=1; default is linkdown-tmo=30
WARNING! Any changes to the driver configuration file must be tested before going into a
production environment.
NOTE: The currently supported QLogic driver versions, as listed in the current HP 3PAR OS InForm
Configuration Matrix, do not require target and LUN entries in the /kernel/drv/sd.conf file.
# Amount of time to wait for loop to come up after it has gone down
# before reporting I/O errors.
# Range: 0 - 240 seconds
hba0-link-down-timeout=1; default is hba0-link-down-timeout=60; DO NOT LOWER below 30 for solaris 9
WARNING! Any changes to the driver configuration file must be tested before going into a
production environment.
WARNING! DO NOT LOWER the qla2300.conf variable hba0-link-down-timeout
below 30 seconds for Solaris 9 hosts.
NOTE: 4-Gb/s Sun StorageTek SG-xxxxxxx-QF4 and QLogic QLA24xx HBAs will be limited to
256 LUNs per target unless patch 119130 or 119131 is at revision -21 or higher.
Direct Connect
Configured by editing the /kernel/drv/fca-pci.conf or /kernel/drv/fcaw.conf files:
Fabric Connect
Configured by editing the /kernel/drv/fca-pci.conf or /kernel/drv/fcaw.conf files:
The JNIsnia package is included with the driver installation but is optional and is not required to
access the HP 3PAR Storage System from the Solaris host. The driver packages and driver installation
instructions are available at: http://www.amcc.com
The fca-pci.conf and fcaw.conf files will be installed in the /kernel/drv directory when
the driver package is installed.
In both direct connect and fabric configurations, (where each host HBA port logically connects to
only one HP 3PAR Storage System port), each initiator (host server HBA port) can only discover
one target (HP 3PAR Storage System port).
For these configurations, persistent target binding in the HBA driver, although possible, is not
required, since there will be only one target found by each host HBA driver instance. The
binding parameters in the /kernel/drv/fca-pci.conf or /kernel/drv/fcaw.conf files
can therefore be left at their default settings.
Default fca-pci.conf settings:
def_hba_binding = "fca-pci*";
def_wwpn_binding = "$xxxxxxxxxxxxxxxx";
def_wwnn_binding = "$xxxxxxxxxxxxxxxx";
def_port_binding = "xxxxxx";
Default fcaw.conf settings:
def_hba_binding = "fcaw*";
def_wwpn_binding = "$xxxxxxxxxxxxxxxx";
def_wwnn_binding = "$xxxxxxxxxxxxxxxx";
def_port_binding = "xxxxxx";
If changes in the mapping of a device to its device node (/dev/rdsk/cxtxdx) cannot be tolerated
for your configuration, you can assign and lock target IDs based on the HP 3PAR Storage System
port's World Wide Port Name by adding specific target binding statements in the
/kernel/drv/fca-pci.conf or /kernel/drv/fcaw.conf file. Refer to the fca-pci or fcaw driver
documentation and the /opt/JNIfcaPCI/technotes or /opt/JNIfcaw/technotes files
for more information about mapping discovered targets to specific target IDs on the host.
The Solaris sd SCSI driver will only probe for targets and LUNs that are configured in the
/kernel/drv/sd.conf file. For fabric configurations, entries must exist for all target LUNs that
are exported from the HP 3PAR Storage System to the Solaris host. The sd.conf file is read by
the sd driver at boot time, so supporting entries for new LUNs must be in place before the server
is rebooted. These entries can be restricted to the JNI fca-pci or fcaw driver only; thus, a useful
strategy is to add entries for all possible LUNs (0 to 255) on target 0.
For instance, add the following entries to the sd.conf file:
JNI fcaw driver:
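The entries themselves are missing from this copy of the guide; as a hypothetical illustration (assuming the parent property "fcaw" matches the installed JNI driver), they take this form:

```
name="sd" parent="fcaw" target=0 lun=0;
name="sd" parent="fcaw" target=0 lun=1;
name="sd" parent="fcaw" target=0 lun=2;
# ...and so on, through...
name="sd" parent="fcaw" target=0 lun=255;
```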
Testing at HP did not reveal any noticeable increase in server boot time due to the probing of
non-existent LUNs.
For some installations, you may want to place specific entries for the actual LUN numbers exported
from the HP 3PAR Storage System ports in the sd.conf file. However, this approach requires
additional entries and a reboot of the Solaris host when new VLUNs are later exported with new
LUN numbers.
fca_nport = 1;
public_loop = 0;
def_wwpn_binding = "$xxxxxxxxxxxxxxxx";
def_wwnn_binding = "$xxxxxxxxxxxxxxxx";
def_port_binding = "xxxxxx";
Direct Connect
Configured by editing the /kernel/drv/jnic146x.conf file:
FcLoopEnabled = 1;
FcFabricEnabled = 0;
automap = 2;
Fabric Connect
Configured by editing the /kernel/drv/jnic146x.conf file:
FcLoopEnabled = 0;
FcFabricEnabled = 1;
automap = 1;
Install the JNI driver package version 5.3.1.3. The driver install package file JNIC146x.pkg
contains the JNIC146x and JNIsnia packages. The JNIsnia package is optional and is not
required to access the HP 3PAR Storage System from the Solaris host. The driver packages and
driver installation instructions are available at http://www.amcc.com. The jnic146x.conf file
is installed in the /kernel/drv directory when the driver package is installed.
Direct Connect
Unload and reload the jnic146x driver so that the edits to /kernel/drv/jnic146x.conf take
effect.
# /opt/JNIC146x/jnic146x_unload
# /opt/JNIC146x/jnic146x_load
Verify that each JNI HBA is loaded with FCode firmware version 3.9.1. There will be messages for
each HBA port in the /var/adm/messages file.
NOTE: If the HBAs are not using FCode firmware version 3.9.1 or later, upgrade the FCode
firmware. FCode firmware and installation instructions are available as install packages (specific
to each HBA model) from:
http://www.amcc.com/
JNIC146x driver versions 5.3 and greater do not require LUN and target entries in
the /kernel/drv/sd.conf file.
The optional EZFibre GUI utility is available at http://www.amcc.com/. This utility gives a view
of each JNI HBA port in the server and the targets and LUNs each has acquired. This utility can
also be used to statically target bind discovered targets and LUNs if that is a requirement of your
specific configuration.
Perform a reconfigure reboot of the host server (reboot -- -r) or create the file /reconfigure
so that the next server boot will be a reconfiguration boot.
# touch /reconfigure
Direct Connect
Relevant messages are recorded in the /var/adm/messages file for each port that has an
associated driver and can be useful for verification and troubleshooting.
NOTE: The Solaris-supplied emlxs driver may bind to the Emulex HBA ports and prevent the
Emulex lpfc driver from attaching to the HBA ports. Emulex provides an emlxdrv utility as part of
the "FCA Utilities" available for download from www.emulex.com. You can use the emlxdrv utility
to adjust the driver bindings between the Emulex lpfc driver and the Sun emlxs driver on a
per-HBA basis. You may need to use this utility if the lpfc driver does not bind to the Emulex-based
HBAs upon reconfigure reboot. Solaris 8 requires that the emlxdrv package be removed before
installing the lpfc driver.
NOTE: Refer to “Allocating Storage for Access by the Solaris Host” (page 53) for supported
driver/DMP combinations.
To enable the Veritas DMP driver to manage multipathed server volumes, install the Array Support
Library (ASL) for HP 3PAR Storage Systems (VRTS3par package) on the Solaris host. This ASL is
installed automatically with the installation of 5.0MP3 and above. For older versions of VxDMP,
the ASL will need to be installed separately.
• Install the VRTS3par package from the VRTS3par_SunOS_50 distribution package for
Veritas Volume Manager versions 5.0 and 5.0MP1.
• Install the VRTS3par package from the VRTS3par_v1.2_SunOS_40 distribution package
for Veritas Volume Manager versions 4.0 and 4.1.
These VRTS3par packages are available from http://support.veritas.com/.
NOTE: Some distributions of the Veritas software include a VRTS3par package that is copied
to the host server as the Veritas software is installed. This package is likely to be an older VRTS3par
package (version 1.0 or 1.1), which should not be used. Instead, install the current VRTS3par
package from the Veritas support site.
If not set, I/O will eventually fail back to the recovered paths. The default value for the enclosure
is "fixed retry=5".
To return the setting to default:
# /opt/VRTS/bin/vxddladm listversion
WARNING! Failure to claim the HP 3PAR Storage System as an HP 3PAR array will affect the
way devices are discovered by the multipathing layer.
WARNING! The minimum supported software installation version for VxDMP_5.0MP3 is
VxDMP_5.0MP3_RP1_HF3 with vxdmpadm settune dmp_fast_recovery=off. This tunable
can be left at default values with later versions VxDMP_5.0MP3_RP2_HF1 and
VxDMP_5.0MP3_RP3.
CAUTION: You may need to reboot the host if you wish to reuse VLUN numbers with the following
VxDMP versions: VxDMP_5.0MP3_RP3 or VxDMP_5.1. Veritas has enhanced data-protection
code that may be triggered if a VLUN number is reused, reporting "Data Corruption Protection
Activated".
Solaris 11
Edit the /kernel/drv/fp.conf file by removing the hash from the line so that it reads as follows:
mpxio-disable="no";
Solaris 10 and 11
Additionally, to enable the SSTM for all HBAs on Solaris 10 and 11 systems, issue the stmsboot
-e command to enable multipathing (stmsboot -d will disable multipathing).
CAUTION: When running Solaris 10 MU7, enabling SSTM on a fresh install using stmsboot
-e can corrupt the fp.conf configuration. To avoid this, issue stmsboot -d -D fp to disable
the fp mpxio. You should then be able to run stmsboot -e successfully without loss of the fp
HBA. For more information on this workaround, consult the following website:
http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6811044
device-type-scsi-options-list =
"3PARdataVV", "symmetric-option",
"3PARdataSES", "symmetric-option";
symmetric-option = 0x1000000;
InForm OS 2.2.x
device-type-scsi-options-list =
"3PARdataVV", "symmetric-option";
symmetric-option = 0x1000000;
While the HP 3PAR Storage System is running, departing and returning HP 3PAR Storage System
ports (e.g., un-plugged cable) are tracked by their World Wide Port Name (WWPN). The WWPN
If a fabric zoning relationship exists such that a host HBA port has access to multiple targets (for
example, multiple ports on the HP 3PAR Storage System) the driver will assign target IDs (cxtxdx)
to each discovered target in the order that they are discovered. In this case, the target ID for a
given target can change as targets leave the fabric and return or when the host is rebooted while
some targets are not present. If changes in the mapping of a device to its device node (/dev/
rdsk/cxtxdx) cannot be tolerated for your configuration, you can assign and lock the target IDs
based on the HP 3PAR Storage System port's World Wide Port Name by adding specific target
binding statements in the /kernel/drv/qla2300.conf file. These statements associate a
specified target ID assignment to a specified WWPN for a given instance of the qla driver (a host
HBA port).
For example, to bind HP 3PAR Storage System WWPN 20310002ac000040 to target ID 6 for
qla2300 instance "0", you would add the following statement to /kernel/drv/qla2300.conf:
hba0-SCSI-target-id-6-fibre-channel-port-name="20310002ac000040";
With this binding statement active, a target with a WWPN of 20310002ac000040 that is
discovered on the host HBA port for driver instance 0 will always receive a target ID assignment
of 6, thus yielding a device node like the one shown in the following example.
/dev/rdsk/c4t6d20s2
The current HBA driver instance matching to discovered target WWPN associations (for connected
devices) can be obtained from entries in the /var/adm/messages file generated from the last
server boot.
New or edited binding statement entries can be made active without rebooting the Solaris host by
issuing the following command:
# /opt/QLogic_Corporation/drvutil/qla2300/qlreconfig -d qla2300
hba0-persistent-binding-configuration=1;
CAUTION: This procedure should not be attempted while I/O is running through the qla driver
instances as it will briefly interrupt that I/O and may also change a discovered device's device
nodes if there have been changes made to the persistent binding statements.
While running with Persistent binding only enabled, only persistently bound targets and
their LUNs will be reported to the operating system.
If the persistent binding option is disabled in /kernel/drv/qla2300.conf, changes to persistent
target binding will only take effect during the next host server reboot.
hba0-persistent-binding-configuration=0;
While running with the persistent binding option disabled, both persistently bound targets and
their LUNs and non-bound targets and their LUNs are reported to the operating system.
For information about mapping discovered targets to specific target IDs on the host, consult the
/opt/QLogic_Corporation/drvutil/qla2300/readme.txt file that is loaded with the
qla driver.
For more information on setting the persistent target binding capabilities of the QLogic HBA qla
driver, consult the QLogic documentation that is available on the following website:
http://www.qlogic.com/
def_hba_binding = "jnic146x*";
def_wwnn_binding = "$xxxxxxxxxxxxxxxx";
def_wwpn_binding = "$xxxxxxxxxxxxxxxx";
def_port_binding = "xxxxxx";
fcp_offline_delay=10;
Setting this value outside the range of 10 to 60 seconds will log a warning message to the /var/
adm/messages file.
Also edit the fp.conf file with the following fp_offline_ticker setting to change the timer
to 50:
fp_offline_ticker=50;
WARNING! Tuning these parameters may have adverse effects on the system. If you are optimizing
your storage configuration for stability, HP recommends staying with the default values for these
tunables. Any changes to these tunables are made at your own risk and could have unexpected
consequences (e.g., fatal I/O errors when attempting to perform online firmware upgrades of
attached devices, or during ISL or other SAN reconfigurations). Changes could also affect system
performance due to excessive path-failover events in the presence of minor intermittent faults.
You may need to test any changes in your standard configuration and environment with specific
tests, and determine the best tradeoff between quicker failover and resilience to transient failures.
Refer to http://www.sun.com/bigadmin/features/hub_articles/tuning_sfs.pdf
for the implications of changes to your host server.
CAUTION: It is not presently possible on Solaris to lower I/O stalls on iSCSI attached array paths
due to a Solaris related bug (Bug ID: 6497777). Until a fix is available in Solaris 10 update 9,
the connection timeout is fixed at 180 seconds and cannot be modified.
# more /etc/release
Solaris 10 5/08 s10s_u5wos_10 SPARC
Copyright 2008 Sun Microsystems, Inc. All Rights Reserved.
Use is subject to license terms.
Assembled 24 March 2008
# showrev -p | grep 119090
Patch: 119090-25 Obsoletes: 121980-03, 123500-02 Requires: 118833-36 Incompatibles:
Packages:
SUNWiscsir, SUNWiscsiu
# pkginfo -l SUNWiscsir
PKGINST: SUNWiscsir
NAME: Sun iSCSI Device Driver (root)
CATEGORY: system
ARCH: sparc
VERSION: 11.10.0,REV=2005.01.04.14.31
BASEDIR: /
VENDOR: Sun Microsystems, Inc.
DESC: Sun iSCSI Device Driver
PSTAMP: bogglidite20061023141016
INSTDATE: Jul 03 2009 06:03
HOTLINE: Please contact your local service provider
STATUS: completely installed
FILES: 19 installed pathnames
14 shared pathnames
13 directories
2 executables
1266 blocks used (approx)
# pkginfo -l SUNWiscsiu
PKGINST: SUNWiscsiu
NAME: Sun iSCSI Management Utilities (usr)
CATEGORY: system
ARCH: sparc
VERSION: 11.10.0,REV=2005.01.04.14.31
BASEDIR: /
VENDOR: Sun Microsystems, Inc.
DESC: Sun iSCSI Management Utilities
PSTAMP: bogglidite20071207145617
INSTDATE: Jul 03 2009 06:04
HOTLINE: Please contact your local service provider
STATUS: completely installed
FILES: 15 installed pathnames
5 shared pathnames
NOTE: Ethernet switch VLAN and routing setup and configuration are beyond the scope of
this document. Consult your switch manufacturer's documentation for instructions on how to
set up VLANs and routing.
2. Create the two interfaces required for iSCSI on the host. In the following example, the oce2
and oce3 interfaces are used.
3. Check that the iSCSI interfaces are created and configured correctly. For example:
bash-3.00# ifconfig -a
lo0: flags=2001000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8232 index 1
inet 127.0.0.1 netmask ff000000
4. Add the IP addresses and a symbolic name for the iSCSI interfaces to the hosts file. For
example:
::1 localhost
127.0.0.1 localhost loghost
10.112.2.174 sunx4250-01
10.100.11.3 net2
10.100.12.3 net3
5. Identify the IP address and netmask for both iSCSI host server interfaces in the netmasks file.
For example:
# cat /etc/netmasks
# The netmasks file associates Internet Protocol (IP) address
# masks with IP network numbers.
#
#   network-number  netmask
#
# The term network-number refers to a number obtained from the Internet Network
# Information Center.
#
# Both the network-number and the netmasks are specified in
# "decimal dot" notation, e.g.:
#
10.112.0.0 255.255.192.0
10.100.11.0 255.255.255.0
10.100.12.0 255.255.255.0
bash-3.00# ifconfig bge1 plumb && ifconfig bge1 10.105.1.10 netmask 255.255.255.0 up
bash-3.00# ifconfig bge2 plumb && ifconfig bge2 10.105.2.10 netmask 255.255.255.0 up
# ifconfig -a
lo0: flags=2001000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8232 index 1
3. Add the IP addresses and a symbolic name for the iSCSI interfaces to the hosts file.
::1 localhost
127.0.0.1 localhost
192.168.10.206 sqa-sunv245
10.105.1.10 bge1
10.105.2.10 bge2
4. Create the following files for both iSCSI interfaces on the host.
5. Identify the IP address and netmask for both iSCSI host server interfaces in the netmasks file.
CAUTION: Configuring both static and dynamic device discovery for a given target is not
recommended since it can cause problems communicating with the iSCSI target device.
# ping 11.1.0.110
2. Define the static target address. Use showport -iscsiname to get the HP 3PAR Storage
System target iSCSI name.
# ping 11.1.0.110
CAUTION: The iSNS discovery method is not currently supported for Solaris 10 or Solaris 11
with an HP 3PAR Storage System running InForm OS 3.1.x.
1. Verify that an iSNS server IP address has been configured on the target port using the InForm
OS CLI controliscsiport command.
2. Verify that the target is pingable.
# ping 11.1.0.110
# devfsadm -i iscsi
Target: iqn.2000-05.com.3pardata:21310002ac00003e
Alias: -
TPGT: 131
ISID: 4000002a0000
Connections: 1
3. The Solaris iSCSI initiator sets the Max Receive Data Segment Length target parameter
to 8192 bytes; this variable determines the amount of data the HP 3PAR Storage
System can receive from or send to the Solaris host in a single iSCSI PDU. For better I/O
throughput and the ability to handle large I/O blocks, change this parameter to 65536
bytes. The parameter must be changed on each individual target port.
Example:
a. List the default target settings used by the iSCSI Initiator.
c. Change the value from 8192 to 65536 for all target ports.
Target: iqn.2000-05.com.3pardata:21310002ac00003e
---
Max Receive Data Segment Length: 8192/65536
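A sketch of the change in step c (the target-param syntax is assumed from the Solaris iscsiadm(1M) utility; verify on your release, and repeat for each target port; the target name is the one shown above):

```shell
# Raise the maximum receive data segment length from 8192 to 65536 bytes.
iscsiadm modify target-param -p maxrecvdataseglen=65536 \
  iqn.2000-05.com.3pardata:21310002ac00003e

# Confirm the configured value.
iscsiadm list target-param -v iqn.2000-05.com.3pardata:21310002ac00003e
```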
4. Issue the iscsiadm list target -v command to list all the negotiated login parameters:
5. (Optional) You can enable CRC32 verification on the datadigest (SCSI data) and headerdigest
(SCSI packet header) of an iSCSI PDU in addition to the default TCP/IP checksum. However,
enabling this verification will cause a small degradation in the I/O throughput.
The following example modifies the datadigest and headerdigest for the initiator:
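Such a modification might look like the following (syntax assumed from the Solaris iscsiadm(1M) utility; verify on your release):

```shell
# Enable CRC32 verification on the iSCSI PDU header and data digests.
iscsiadm modify initiator-node --headerdigest CRC32
iscsiadm modify initiator-node --datadigest CRC32

# To revert to the defaults (no digest verification):
iscsiadm modify initiator-node --headerdigest none
iscsiadm modify initiator-node --datadigest none
```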
device-type-scsi-options-list =
"3PARdataVV", "symmetric-option",
"3PARdataSES", "symmetric-option";
symmetric-option = 0x1000000;
2. For Solaris 10 and 11, make sure that multipathing is enabled in the iSCSI configuration file
/kernel/drv/iscsi.conf; it is enabled by default and should match the following
example:
# reboot -- -r
Or
ok> boot -r
WARNING! If you are using Sun Multipath I/O (Sun StorEdge Traffic Manager), HP advises
that you not reuse a LUN number to export a different HP 3PAR Storage System volume, as
Solaris format output preserves the disk serial number of the first device ever seen on that LUN
number since the last reboot. Any I/O performed on the older disk serial number causes the
I/O to be driven to the new volume and can cause user configuration and data integrity issues.
This is a general Solaris issue with Sun multipath I/O and is not specific to the HP 3PAR Storage
System target.
NOTE: For InForm OS 2.3.x, Solaris 10 and 11 support the largest virtual volume that can be
created, 16 TB. For InForm OS 2.2.x, Solaris 9 and 10 support the largest virtual volume that
can be created, 2 TB; for Solaris 8, the largest virtual volume that can be created is 1 TB. In any
case, Veritas may not support the maximum possible virtual volume size. Consult Veritas support
at the following website:
at the following website:
http://www.veritas.com/support
Here is an example:
NOTE: The commands and options available for creating a virtual volume may vary for earlier
versions of the InForm OS.
Here is an example:
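The example itself did not survive in this copy; a hypothetical InForm OS CLI invocation (the CPG name cpg0, volume name testvv, and size are illustrative, and the options vary by release) might be:

```shell
# Create a 10-GB virtual volume named testvv from CPG cpg0.
createvv cpg0 testvv 10g

# Verify the new volume.
showvv testvv
```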
Consult the InForm Management Console Help and the HP 3PAR Inform OS Command Line Interface
Reference for complete details on creating volumes for InForm OS 2.2.3 and earlier.
Here is an example:
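The example itself is not reproduced here; a hypothetical export (volume and host names are illustrative; see the CLI reference for the options on your InForm OS release) might be:

```shell
# Export virtual volume testvv to host solaris-host-01 as LUN 1.
createvlun testvv 1 solaris-host-01

# Verify the export.
showvlun -t
```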
Consult the InForm Management Console Help and the HP 3PAR Inform OS Command Line Interface
Reference for complete details on exporting volumes and available options for the InForm OS
version that is being used on the HP 3PAR Storage System.
These documents are available on the HP BSC website:
http://www.hp.com/go/3par/
NOTE: The commands and options available for creating a virtual volume may vary for earlier
versions of the InForm OS.
NOTE: HP 3PAR Storage System VLUNs with LUN numbers other than 0 will be discovered even
when there is no VLUN exported as LUN 0. Without a LUN 0, error messages for LUN 0 may
appear in /var/adm/messages as the host server probes for LUN 0. HP recommends exporting
a real LUN 0 to avoid these errors.
The total number of SCSI devices that Solaris SPARC and x64 servers can reliably discover varies
between operating system versions, architectures, and server configurations. It is possible to export
more VLUNs from the InForm OS (one InForm OS VLUN = one SCSI device on the host) than the server can
reliably manage. Contact Oracle for the maximum device capability of your installation. HP tested
up to 256 VVs, each exported as four VLUNs, resulting in the discovery of 1024 SCSI devices by
Solaris, without any issues being noted.
NOTE: All I/O to an HP 3PAR Storage System port should be stopped before running any InForm
OS CLI controlport commands on that port. The InForm OS CLI controlport command
executes a reset on the storage server port while it runs and causes the port to log out of and back
onto the fabric. This event will be seen on the host as a "transient device missing" event for each
HP 3PAR Storage System LUN that has been exported on that port. In addition, if any of the
exported volumes are critical to the host server OS (e.g., the host server is booted from that volume),
the host server should be shut down before issuing the InForm OS CLI controlport command.
• Even though the HP 3PAR Storage System supports exporting VLUNs with LUN numbers
in the range from 0 to 16383, only VLUN creation with LUN numbers in the range from 0
to 255 is supported.
• This configuration does support sparse LUNs (the skipping of LUN numbers).
• LUNs may be exported in non-ascending order (e.g. 5, 7, 3, 200).
• Only 256 VLUNs can be exported on each interface. If you export more than 256 VLUNs,
VLUNs with LUNs above 255 will not appear on the Solaris host.
• If you are using Sun Multipath I/O (Sun StorEdge Traffic Manager) you should avoid reusing
a LUN number to export a different HP 3PAR Storage System volume as the Solaris format
output preserves the disk serial number of the first device ever seen on that LUN number since
the last reboot.
CAUTION: If any I/O is performed on the old disk serial number, the I/O will be driven to the
new volume and can cause user configuration and data integrity issues. This is a general Solaris
issue with SUN multipath I/O and is not specific to the HP 3PAR Storage System target.
The following is an example:
# showvv -d
Id Name Rd Mstr Prnt Roch Rwch PPrnt PBlkRemain -----VV_WWN--- -----CreationTime----
10 demo.50 RW 1/2/3 --- --- --- --- - 50002AC01188003E Fri Aug 18 10:22:57 PDT 2006
20 checkvol RW 1/2/3 --- --- --- --- - 50002AC011A8003E Fri Aug 18 10:22:57 PDT 2006
# showvlun -t
Lun VVname Host ------------Host_WWN/iSCSI_Name------------- Port Type
50 demo.50 solarisiscsi ---------------- --- host
# format
AVAILABLE DISK SELECTIONS:
10 c5t50002AC01188003Ed0 <3PARdata-VV-0000 cyl 213 alt 2 hd 8 sec 304>
/scsi_vhci/ssd@g50002ac01188003e
On removing demo.50 volume and exporting checkvol at the same LUN number 50, the host
shows the new volume with the serial number of the earlier volume, demo.50
(50002AC01188003E) and not the new volume serial number (50002AC011A8003E).
# showvv -d
# showvlun -t
Lun VVname Host ------------Host_WWN/iSCSI_Name------------- Port Type
50 checkvol solarisiscsi ------------ --- host
# format
AVAILABLE DISK SELECTIONS:
10 c5t50002AC01188003Ed0 <3PARdata-VV-0000 cyl 213 alt 2 hd 8 sec 304>
/scsi_vhci/ssd@g50002ac01188003e    <-- incorrect device serial number displayed
CAUTION: Issue devfsadm -C to clear any dangling /dev links and reboot the host to correct
the device serial number or to reuse the LUN number.
• Solaris 10 or 11 can support the largest VV that can be created on an HP 3PAR Storage
System at 16 terabytes. VVs of 1 terabyte and larger are only supported using the Sun EFI
disk label and appear in the output of the Sun format command without cylinder/head
geometry.
• All I/O to an HP 3PAR Storage System port should be halted before running the InForm OS
CLI controlport command on that port. The InForm OS CLI controlport command
executes a reset on the storage server port while it runs. The reset is done on a per-card basis,
so any port reset (0:3:1) will cause reset on the partner port (0:3:2) and causes the port to
log out and back to a ready state. This event will be seen on the host as a transient device
missing event for each HP 3PAR Storage System LUN that has been exported on that port. In
addition, if any of the exported volumes are critical to the host server OS (e.g., the host server
is booted from that volume), the host server should be shut down before issuing the InForm
OS CLI controlport command.
# devfsadm -i sd
Before they can be used, newly-discovered VLUNs need to be labeled using the Solaris format
or format -e command.
# cfgadm -c configure c8
# cfgadm -c configure c9
# cfgadm -c configure c10
# cfgadm -c configure c11
NOTE: The cfgadm command can also be run on a per target basis:
# cfgadm -c configure c8::22010002ac000040
Once configured, the HP 3PAR Storage System VLUNs show up in the output from the Solaris
format command as devices and are thus available for use on the host.
The device node designation consists of three components:
• c -- represents the host HBA port
• t -- represents the target's WWPN
• d -- represents the LUN number
Therefore /dev/rdsk/c8t22010002AC000040d2s2 is a device node for VVC in the following
example that is exported from port 2:0:1 of HP 3PAR Storage System, serial number 0x0040 to
host port c8, as LUN 2.
# format
Searching for disks...done
NOTE: The HP 3PAR Storage System targets appear with their World Wide Port Names
associated with the C number of the host HBA port they are logically connected to. The host server
port WWNs will now appear on HP 3PAR Storage System in the output of the showhost command.
NOTE: The configuration will fail for visible targets that do not present any LUNs. At least one
VLUN must be exported from each HP 3PAR Storage System port before its associated host port
is configured. Running cfgadm with the configure option on an HP 3PAR Storage System port
that has no LUNs exported does not harm the system and will display the following error:
# cfgadm -c configure c9
cfgadm: Library error: failed to create device node: 23320002ac000040: Invalid argument
# devfsadm -i sd
NOTE: If VCN on LUN export is not disabled on each HP 3PAR Storage System port that connects
to a host server port, newly exported HP 3PAR Storage System LUNs will result in target offline
and online messages being generated on the host server console and in the /var/adm/messages
file. HP 3PAR recommends disabling VCN on LUN export (as indicated in the HP 3PAR Storage
System Setup section of this document) to prevent these messages and the possible disruption of
I/O to already exported and registered LUNs.
Newly discovered VLUNs need to be labeled using the Solaris format command before they can
be used.
# /opt/JNIC146x/jnic146x_update_drv -ra
NOTE: The -a option scans all instances of the JNIC146x driver (all host HBA ports). The command
can be limited to specific instances with other options.
Newly discovered VLUNs need to be labeled using the Solaris format command before they can
be used.
# format
Searching for disks...done
AVAILABLE DISK SELECTIONS:
2. c14t50002AC000010038d0 <3PARdata-50002ac000010038-0000 cyl 43113 alt 2 hd 8
sec 304> /scsi_vhci/ssd@g50002ac000010038
Additional options can be used with the cfgadm command to display more information about the
HP 3PAR Storage System devices. For instance, issuing cfgadm with the -al option shows
configuration information for each device (or LUN):
Issuing cfgadm with the -alv option shows configuration information and the physical device
path for each device (or LUN):
NOTE: If Sun StorEdge Traffic Manager is enabled, the device nodes for the HP 3PAR Storage
System devices contain a "t" component that matches the HP 3PAR Storage System virtual
volume WWN (as generated by the InForm OS CLI showvv -d command).
To comply with the SCSI specification, the HP 3PAR Storage System port responds to a SCSI
REPORT LUNS command with one LUN (LUN 0) when there is no real VV exported as LUN 0
and no other VVs exported on any other LUN. A partial indication of LUN 0 will appear in the
output of cfgadm for an HP 3PAR Storage System port that has no VVs exported from it. A real
VV exported as LUN 0 can be distinguished from a non-real LUN 0 as follows:
HP 3PAR Storage System port 0:0:1 has a real VV exported as LUN 0. HP 3PAR Storage System
port 1:0:1 has no VVs exported, which is indicated by an "unavailable" type and an "unusable"
condition. In fabric mode, new VLUNs that are exported while the host is running will not be
visible until cfgadm -c configure is run on the associated host ports.
NOTE: When HP 3PAR Storage System VVs are exported on multiple paths to the Solaris host
(and Sun StorEdge Traffic Manager is in use for multipath failover and load balancing), each path
(cx) should be configured individually. The cfgadm command will accept multiple "cx" entries in
one invocation, but doing so may cause I/O errors to previously existing LUNs under I/O load. For
a configuration where the HP 3PAR Storage System connects at c4 and c5 on the host, and a new
VV has been exported on those paths, the following commands should be run serially:
# cfgadm -c configure c4
# cfgadm -c configure c5
NOTE: If Sun StorEdge Traffic Manager is enabled for multipathing and a device (HP 3PAR
Storage System VV) is only exported on one path, I/O to that device will be interrupted with an
error if cfgadm -c configure is run on its associated host port. This error will not occur if Sun
StorEdge Traffic Manager is disabled. This situation can be avoided by always preventing multiple
paths to a VV when Sun StorEdge Traffic Manager is enabled. Alternatively, the I/O can be halted
before cfgadm -c configure is run.
Newly discovered VLUNs need to be labeled using the Solaris format command before they can
be used. If the Solaris host is rebooted while the HP 3PAR Storage System is powered off or
disconnected, all device nodes for the host’s VLUNs will be removed. If the host is subsequently
brought up, the device nodes will not be restored (VLUNs will not appear in the output from the
format command) until the cfgadm -c configure command is run for each host port. This
phenomenon would occur for any fabric attached storage device. To reestablish the connection
to the HP 3PAR Storage System devices, perform the following steps once the host has booted:
1. Run cfgadm -al on the Solaris host.
This allows the HP 3PAR Storage System to see the host HBA ports (showhost) and export
the VLUNs.
2. Configure all host HBA ports as follows:
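The configure commands for this step are not shown above; based on the cfgadm usage earlier in this section, the step presumably repeats cfgadm -c configure for each host HBA port (the c8 through c11 controller numbers below are illustrative and will differ per host):

```
# cfgadm -c configure c8
# cfgadm -c configure c9
# cfgadm -c configure c10
# cfgadm -c configure c11
```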
# vxdctl enable
After issuing this command, the volume can be admitted to and used by Veritas Volume Manager.
The iscsiadm command can be used to remove and modify targets and their parameters, as in
the following examples:
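The examples themselves are elided above; the following are illustrative Solaris iscsiadm invocations (the target IQN and discovery address are assumptions, though iqn.2000-05.com.3pardata is the standard HP 3PAR IQN prefix):

```
# iscsiadm remove discovery-address 10.105.1.100:3260
# iscsiadm remove static-config iqn.2000-05.com.3pardata:21210002ac000040,10.105.1.100:3260
# iscsiadm modify target-param -p maxrecvdataseglen=65536 iqn.2000-05.com.3pardata:21210002ac000040
```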
bash-3.00# format
Searching for disks...done
AVAILABLE DISK SELECTIONS:
CAUTION: Notice how the removed VLUN is still referenced in format on the host. This listing
is not consistent across the x86/SPARC platforms or the MU levels.
# devfsadm -i iscsi
Here is an example:
bash-3.00# format
Searching for disks...done
AVAILABLE DISK SELECTIONS:
iSCSI does not support removal of the last available path to the device if any iSCSI LUNs are
in use (such as in a mounted file system or where associated I/O is being performed) and
generates a “logical unit in use” error.
Example: There are two paths to the device having a mounted file system.
CAUTION: A reboot -r should be performed on the host to properly clean the system
after a VLUN has been removed.
NOTE: The FCoE switch must be able to convert FCoE traffic to FC and also be able to trunk this
traffic to the fabric that the HP 3PAR Storage System target ports are connected to. FCoE switch
VLANs and routing setup and configuration are beyond the scope of this document. Consult your
switch manufacturer's documentation for instructions on how to set up VLANs and routing.
bash-3.00# ifconfig qlge0 plumb && ifconfig qlge0 10.105.1.10 netmask 255.255.255.0
up
bash-3.00# ifconfig qlge1 plumb && ifconfig qlge1 10.105.2.10 netmask 255.255.255.0
up
2. Check that the FCoE NICs are created and configured correctly.
bash-3.00# ifconfig -a
lo0: flags=2001000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8232 index
1
inet 127.0.0.1 netmask ff000000
bge0: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 2
inet 192.168.10.205 netmask ffffff00 broadcast 192.168.10.255
ether 0:14:4f:b0:53:4c
qlge0: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 3
inet 10.105.1.10 netmask ffffff00 broadcast 10.105.1.255
3. Add the IP addresses and a symbolic name for the FCoE NICs to the hosts file.
::1 localhost
127.0.0.1 localhost
192.168.10.206 sqa-sunv245
10.105.1.10 qlge0
10.105.2.10 qlge1
4. Create the following files for both FCoE NICs on the host.
• /etc/hostname.qlge0
qlge0
• /etc/hostname.qlge1
qlge1
5. Add the IP address and netmask for both FCoE host server NICs in the /etc/netmasks file.
# cat /etc/netmasks
#
# The netmasks file associates Internet Protocol (IP) address
# masks with IP network numbers.
#
# network-number netmask
#
# The term network-number refers to a number obtained from the Internet Network
# Information Center.
#
# Both the network-number and the netmasks are specified in
# "decimal dot" notation, e.g:
#
# 128.32.0.0 255.255.255.0
#
10.105.1.0 255.255.255.0
10.105.2.0 255.255.255.0
9 Using the Veritas Cluster Server
There are no specific settings required on the HP 3PAR array to work with Veritas Cluster server.
For further information, refer to the Veritas documentation, which can be found at:
http://seer.entsupport.symantec.com/docs/307506.htm
NOTE: For a Solaris 8 and 9 install image, the required Sun StorEdge SAN software must
also be added to the install server boot image.
6. For a SPARC host server, use the OpenBoot ok prompt to boot the host from the network or
CD:
For an x86 host server, use the BIOS network boot option (i.e., the F12 key) to boot the host
from the network or CD.
The host server should boot from the install server or CD and enter the Solaris interactive
installation program. Enter appropriate responses for your installation until you reach the
'Select Disks' menu. If multiple paths are used, the LUN will be listed as more than one device.
The LUN may show as zero size, or you may receive the following warning:
No disks found.
> Check to make sure disks are cabled and
powered up.
The LUN needs to be labeled. Exit the installation process to a shell prompt.
NOTE: The “No disks found” message appears if the HP 3PAR Storage System volume is
the only disk attached to the host or if there are multiple disks attached to the host but none
are labeled. If there are labeled disks that will not be used to install Solaris, a list of disks will
be presented, but the unlabeled HP 3PAR Storage System VLUN will not be selectable as an
install target. In this case, exit and proceed to the next step.
7. On the host server, issue the format command to label the HP 3PAR Storage System VLUN.
# format
Searching for disks...WARNING:
/pci@8,700000/pci@1/SUNW,qlc@5/fp@0,0/ssd@w20520002ac000040,a
(ssd0):
corrupt label - wrong magic number
done
c3t20520002AC000040d10: configured with capacity of 20.00GB
AVAILABLE DISK SELECTIONS:
0. c3t20520002AC000040d10 <3PARdata-VV-0000 cyl 17244 alt 2 hd 8 sec 304>
/pci@8,700000/pci@1/SUNW,qlc@5/fp@0,0/ssd@w20520002ac000040,a
Specify disk (enter its number): 0
selecting c3t20520002AC000040d10
[disk formatted]
NOTE: If multiple paths to the LUNs have been used, the LUN appears as multiple instances
in the install program.
NOTE: Continue the Solaris installation with appropriate responses, including selecting the
HP 3PAR Storage System LUN as an install target. A LUN will appear as multiple instances if
multiple paths have been used.
Select one instance for the Solaris OS installation. Configure the disk layout and confirm the
system warning of “CHANGING DEFAULT BOOT DEVICE” should it appear.
When the installation program completes, the server may not boot from the boot VLUN. If it
does not, check the SPARC PROM or the x86 BIOS settings for the HBA paths.
If the HP 3PAR Storage System virtual volume was exported to the host definition, it will now
be exported on both paths to the host server:
# showvlun -a
Lun VVname Host ----Host_WWN---- Port Type
10 san-boot solaris-server 210000E08B049BA2 0:5:2 host
10 san-boot solaris-server 210100E08B275AB5 1:5:2 host
4. Verify that two representations of the boot volume now appear in the Solaris format command:
# devfsadm
# format
Searching for disks...done
AVAILABLE DISK SELECTIONS:
0. c3t20520002AC000040d10 <3PARdata-VV-0000 cyl 17244 alt 2 hd 8 sec 304>
/pci@8,700000/pci@1/SUNW,qlc@5/fp@0,0/ssd@w20520002ac000040,a
1. c5t21520002AC000040d10 <3PARdata-VV-0000 cyl 17244 alt 2 hd 8 sec 304>
/pci@8,700000/SUNW,qlc@3,1/fp@0,0/ssd@w21520002ac000040,a
Specify disk (enter its number):
mpxio-disable="no"; # for Solaris 8 & 9
device-type-scsi-options-list =
"3PARdataVV", "symmetric-option",
"3PARdataSES", "symmetric-option";
symmetric-option = 0x1000000;
6. Use the Solaris stmsboot command to enable multipathing for the boot device. The host
server will be rebooted when stmsboot –e is run.
# stmsboot -e
7. The stmsboot command makes edits to the /etc/dumpadm.conf and /etc/vfstab files
needed to boot successfully using the new Sun I/O Multipathing single device node for the
multipathed boot device. The new single device node incorporates the HP 3PAR Storage
System VLUN WWN:
Solaris host:
# format
Searching for disks...done
AVAILABLE DISK SELECTIONS:
0. c7t50002AC000300040d0 <3PARdata-VV-0000 cyl 17244 alt 2 hd 8 sec 304>
/scsi_vhci/ssd@g50002ac000300040
Specify disk (enter its number):
# showvv -d sunboot
Id Name Rd Mstr Prnt Roch Rwch PPrnt PBlkRemain -----VV_WWN----- --------CreationTime--------
48 san-boot RW 0/1/3 --- --- --- --- - 50002AC000300040 Mon Mar 14 17:40:32 PST 2005
#
8. For SPARC, the Solaris install process enters a value for “boot-device” in OpenBoot NVRAM
that represents the hardware path for the first path.
# eeprom
.
.
boot-device=/pci@8,700000/pci@1/SUNW,qlc@5/fp@0,0/disk@w20520002ac000040,a:a
.
.
.
.
.
State ONLINE
Controller /devices/pci@8,700000/SUNW,qlc@3,1/fp@0,0
Device Address 21520002ac000040,a
Host controller port WWN 210100e08b275ab5
Class primary
State ONLINE
.
9. For SPARC, create aliases for the alternative hardware paths to the boot-disk. The host server
console must be taken down to the OpenBoot ok prompt:
# init 0
#
INIT: New run level: 0
The system is coming down. Please wait.
System services are now being stopped.
Print services stopped.
May 23 16:51:46 sunb1k-01 syslogd: going down on signal 15
The system is down.
syncing file systems... done
Program terminated
{1} ok
SPARC
Set both paths as aliases in the PROM and set the boot-device parameter to both these aliases.
For example:
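The example itself is elided above; a sketch using the two hardware paths shown earlier in this procedure (the alias names are arbitrary assumptions):

```
{1} ok nvalias 3pardisk0 /pci@8,700000/pci@1/SUNW,qlc@5/fp@0,0/disk@w20520002ac000040,a:a
{1} ok nvalias 3pardisk1 /pci@8,700000/SUNW,qlc@3,1/fp@0,0/disk@w21520002ac000040,a:a
{1} ok setenv boot-device 3pardisk0 3pardisk1
```

With auto-boot? set to true, OpenBoot attempts each boot-device alias in order.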
With these settings and the host server set to auto-boot on power up, the server should boot from
the second path automatically in the event of a failure on the first path.
NOTE: The host server in use should be updated to the newest version of OpenBoot available
from Sun and tested for booting under failed path scenarios.
CAUTION: Always refer to the driver notes on the effect of issuing a RescanLUNs on the driver
and already discovered VLUNs.
76 Configuration Examples
# ./hbacmd RescanLuns 10:00:00:00:C9:7B:C5:D6 21:53:00:02:AC:00:00:AF
HBACMD_RescanLuns: Success
# /opt/QLogic_Corporation/drvutil/qla2300/qlreconfig -d qla2300
CAUTION: Always refer to the driver notes on the effect of issuing qlreconfig on the driver
and already discovered VLUNs.
Remove the VLUN from the host (e.g., using removevlun), then issue the format command on
the host.
You will see the same list as before, but the removed LUNs are noted as offline.
To correct this listing in format, run the following command:
After this, format shows everything back as expected, with only local disks listed.
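The command is not shown above; based on the CAUTION earlier in this document about clearing dangling /dev links, it is presumably:

```
# devfsadm -C
```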
CAUTION: Always refer to the driver notes on the effect of issuing rescan on the driver and
already discovered VLUNs.
NOTE: If a new list of LUNs is exported to the host, only the LUNs that were discovered on
the first run are seen. All others not already read by qlreconfig on the first run are not listed in
format. This is because the /dev/dsk and /dev/rdsk links are not removed.
By default, VxVM saves a backup of all disk groups to /etc/vx/cbr/bk. This directory can fill
up quickly and consume disk space. The directories inside /etc/vx/cbr/bk can be removed.
For example, to create a file system on the disk slice c0t3d0s4, you would use the following
command:
# newfs -v /dev/rdsk/c0t3d0s4
The -v option prints the actions in verbose mode. The newfs command calls the mkfs command
to create a file system. You can invoke the mkfs command directly by specifying a -F option
followed by the type of file system. For example:
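The mkfs example is elided above; a sketch for a UFS file system (the slice matches the newfs example earlier, but the size operand, in sectors, is an assumption for illustration):

```
# mkfs -F ufs /dev/rdsk/c0t3d0s4 41943040
```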
2. Scan the device tree on the host (other commands are required for different HBA drivers):
3. Run the format command on the host, set the LUN type and label it:
# format
format> type
Label the LUN:
format> label
# newfs /dev/rdsk/c2t50002AC000010032d0s2
# mkdir /mnt/test
# mount /dev/dsk/c2t50002AC000010032d0s2 /mnt/test
# umount /mnt/test
# format
format> type
format> label
NOTE: For Solaris x86, Auto configure under the type option in format does not
resize the LUN. Resizing can be achieved by selecting ‘other’ under the ‘type’ option and
manually entering the new LUN parameters, such as number of cylinders, heads, sectors, etc.
# df -k /mnt/test
Summary:
• Create and export the initial LUN
• Scan the device tree on the host
• Run 'format' to configure the LUN (set type and label)
• Create and mount the file system on the host
• Grow the LUN on the HP 3PAR Storage System
• Rescan the device tree on the host
• Unmount the file system on the host
• Run 'format' to reconfigure the LUN (set type and label)
• Mount and grow the file system
VxVM vxdisk ERROR V-5-1-8643 Device Disk_10: resize failed: Cannot remove last disk
in disk group
WARNING! Always refer to the Veritas release notes before attempting to grow a volume.
Host - Sol 10 MU9
Stack - SSTM with emlxs
File System - VxFS
1. Create two LUNs (minimum) on the HP 3PAR Storage System and export to the host:
2. Scan the device tree on the host (other commands are required for different HBA drivers):
# vxdisk list
# vxdg init <disk_group> <vx_diskname1>=<device1>
# vxdg -g <disk_group> adddisk <vx_diskname2>=<device2>
('vxdiskadm' can also be used.)
If you cannot initialize the LUNs, check that the paths are enabled:
# vxdisk path
5. Rescan the device tree on the host as shown above. Additionally, resize the logical VxVM
object to match the larger LUN size:
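The resize command is not shown above; with VxVM, a sketch of this step might be (the disk group and disk names are placeholders):

```
# vxdisk -g <disk_group> resize <vx_diskname1>
```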
6. On the host, check that the additional space is available in the disk group, and grow the volume:
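The commands for this step are elided; a sketch using standard VxVM utilities (the names and growth size are placeholders):

```
# vxassist -g <disk_group> maxgrow <volume>
# vxresize -g <disk_group> <volume> +10g
```

vxresize grows both the volume and a mounted VxFS file system in one step.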
# df -k /mnt/test
CAUTION: Commands may vary with each version of Veritas Storage Foundation. Always refer
to the version release notes.
Below are examples of some common commands:
Enable VxVM and discover new disks:
# vxdctl enable
Display disks:
# vxdisk list
# vxdg list
Managing Enclosures
Display attributes of all enclosures:
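The command for this is elided above; the output fragment that follows matches the form of a vxdmpadm attribute query, for example (the enclosure name is taken from the output below):

```
# vxdmpadm listenclosure all
# vxdmpadm getattr enclosure 3PARDATA0 iopolicy
```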
============================================
3PARDATA0 MinimumQ MinimumQ
Setting I/O Policies and Path Attributes
Changing Policies
To change the I/O policy for balancing the I/O load across multiple paths to a disk array or
enclosure:
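The command is elided above; with VxDMP this is done with vxdmpadm setattr, for example (the enclosure name and chosen policy are illustrative):

```
# vxdmpadm setattr enclosure 3PARDATA0 iopolicy=round-robin
```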
Listing Controllers
To list the controllers on a host server, use the vxdmpadm(1m) utility with the listctlr option:
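The listctlr example is elided above; a typical invocation:

```
# vxdmpadm listctlr all
```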
The vxdmpadm(1m) utility also has a getctlr option to display the physical device path
associated with a controller:
# vxdmpadm getctlr c2
LNAME PNAME
===============
c2 /pci@80,2000/lpfc@1
Displaying Paths
To list the paths on a host server, use the vxdmpadm(1m) utility with the getsubpaths option:
To display paths connected to a LUN, use the vxdmpadm(1m) utility with the getsubpaths
option:
To display DMP Nodes, use the vxdmpadm(1m) utility with the getdmpnode option:
Here is an example:
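The examples for getsubpaths and getdmpnode are elided above; illustrative invocations of each option (the controller, device, and enclosure names are assumptions based on examples elsewhere in this document):

```
# vxdmpadm getsubpaths ctlr=c2
# vxdmpadm getsubpaths dmpnodename=c2t50002AC000010032d0s2
# vxdmpadm getdmpnode enclosure=3PARDATA0
```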
B Patch/Package Information
This appendix provides minimum patch requirements for various versions of Solaris and other
associated drivers.
118822-20 118844-19
119374-01 119131-09
119130-04 119375-05
120222-01 120223-01
118822-25 118844-26
119130-04 119131-09
118833-17 119375-13
120222-05 120223-05
118833-17 118855-14
119130-04 119131-09
120222-09 120223-09
118833-33 118855-33
119130-04 119131-09
120222-13 120223-13
127127-11 127128-11
120222-31 (-29 has an issue) 120223-31 (-29 has an issue)
118833-36 118855-36
119130-33 also 125166-07 (qlc) 119131-33 also 125165-07 (qlc)
127127-11 127128-11
139608-02 (emlxs) 139609-02 (emlxs)
118833-36 118855-36
139606-01 (qlc) 139607-01 (qlc)
127127-11 127128-11
141876-05 (emlxs) 141877-05 (emlxs)
118833-36 118855-36
142084-02 (qlc) 142085-02 (qlc)
127127-11 127128-11
118833-36 118855-36
144188-02 (emlxs) 144189-02 (emlxs)
145098-01 (emlxs) 145097-01 (emlxs)
145096-01 (emlxs) 145099-01 (emlxs)
120224-08 (emlxs) 120225-08 (emlxs)
119130-33 (qlc) 119131-33 (qlc)
143957-03 (qlc) 143958-03 (qlc)
144486-03 (qlc) 144487-03 (qlc)
119088-11 (qlc) 119089-11 (qlc)
127127-11 127128-11
118833-36 118855-36
144188-02 (emlxs) 144189-02 (emlxs)
145953-06 (emlxs) 145954-06 (emlxs)
146586-03 (oce) 145097-03 (oce)
120224-08 (emlxs) 120225-08 (emlxs)
119130-33 (qlc) 119131-33 (qlc)
146489-05 (qlc) 146490-05 (qlc)
145648-03 (qlge) 145649-03 (qlge)
119088-11 (qlc) 119089-11 (qlc)
For the Emulex OCe10102 CNA card, the following minimum patch revisions are required (MU9):
For the Qlogic QLE8142 CNA card, the following minimum patch revisions are required (MU9):
86 Patch/Package Information
Table 3 Solaris 9 Minimum Patch Requirements
Patch Comment
118558-06
113277-01
108974-02
Table 9 Sun Solaris 8 OS Patches for 4.4.13 (continued)
Patch ID Feature Addressed
qla/lpfc + VxVM
JNI + VxVM
qlc/emlxs + SSTM
qlc/emlxs + SSTM + SC
NOTE: SAN packages are installed on all combinations but they are only enabled for SSTM
combinations.
SPARC Platform
• Solaris 10 QLC driver patch 143957-03 or later
• Solaris 9 SAN 4.4.x: QLC driver patch 113042-19 or later (SAN 4.4.14)
• Veritas VxVM 4.1MP2_RP3 patch 124358-05 or later (for Solaris 8, 9 and 10)
• Veritas VxVM 5.0MP1_RP4 124361-05 or later (for Solaris 8, 9, and 10)
SPARC Platform
• 120223-27 was the minimum; now 120223-31 emlxs on Sol10
• 119914-13 emlxs on Sol9 (SAN 4.4.14)
• 119913-13 emlxs on Sol8 (SAN 4.4.13)
x86 Platform
• 120223-27 was the minimum; now 144189-02 emlxs on Sol10
Table 12 Leadville Driver Version and Package (continued)
Solaris OS Version Leadville Driver Released MU Driver Level
(SunSolve patch)
92 FCoE-to-FC Connectivity