
How to Virtualise an EMC DMX-4 Storage Array behind HP P9500 or HDS VSP Storage Arrays


This article covers the configuration requirements and steps involved in virtualising a DMX storage array behind an HP P9500. As you may be aware, the HP P9500 is a rebadged HDS VSP storage array, so the steps are identical whether you are using an HDS VSP or an HP P9500. This procedure assumes that you are a storage administrator/designer who has worked with both EMC and HP/HDS storage arrays and is familiar with the terminology.
1. Create a Cache Logical Partition (CLPR) of 4 GB within the HP P9500/VSP.
The CLPR license is bundled with the base license; you can create up to 32 CLPRs without purchasing any additional license. It is recommended to use a dedicated CLPR for every external array that will be virtualised behind the P9500/VSP.
2. Identify the CHA ports on the P9500 that will be used as external ports. You will need a minimum of two front-end ports, one from each cluster. In this example, I use 5E (fabric A) and 6E (fabric B).
3. You may connect the P9500 ports directly to the DMX FA ports or via fabric switches. This example assumes that a fabric switch is being used. Set the external port properties as below.
Port Mode : External
Speed : 8 Gbps (assuming your fabric switch supports this; if not, set it to match the switch port speed)
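If you prefer the CCI (raidcom) command line over Remote Web Console, the port attribute and speed can be set along the lines of the sketch below. The instance number -I1 is a placeholder, and the attribute value ELUN and speed option should be verified against your CCI version.
# raidcom modify port -port CL5-E -port_attribute ELUN -I1
# raidcom modify port -port CL5-E -port_speed 8 -I1
# raidcom modify port -port CL6-E -port_attribute ELUN -I1
# raidcom modify port -port CL6-E -port_speed 8 -I1
# raidcom get port -I1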
4. Ensure that the P9500/VSP external ports are connected to the same fabrics where the EMC FAs are connected. In this example I am using FA ports 7A0 and 7A1 connected to fabric A, and 10A0 and 10A1 connected to fabric B.
5. Zone the P9500/VSP external ports to the EMC FA ports identified. The zones will look like this:

Fabric A
P9500_5E to DMX_7A0
P9500_5E to DMX_7A1

Fabric B
P9500_6E to DMX_10A0
P9500_6E to DMX_10A1
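On a Brocade fabric, for example, the fabric A zoning could be created roughly as follows. The alias names, WWPN placeholders and the config name FABRIC_A_CFG are illustrative; substitute your real WWPNs and existing zone configuration, and repeat the equivalent on fabric B for 6E, 10A0 and 10A1.
# alicreate "P9500_5E", "50:06:0e:80:xx:xx:xx:xx"
# alicreate "DMX_7A0", "50:00:09:7x:xx:xx:xx:xx"
# alicreate "DMX_7A1", "50:00:09:7x:xx:xx:xx:xx"
# zonecreate "Z_P9500_5E_DMX_7A0", "P9500_5E; DMX_7A0"
# zonecreate "Z_P9500_5E_DMX_7A1", "P9500_5E; DMX_7A1"
# cfgadd "FABRIC_A_CFG", "Z_P9500_5E_DMX_7A0; Z_P9500_5E_DMX_7A1"
# cfgsave
# cfgenable "FABRIC_A_CFG"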
6. Register the WWPNs of the P9500/VSP ports 5E and 6E on the corresponding DMX FA ports. Set the following port attributes for the P9500 WWPNs on the DMX:

Parameter    Setting
SC3 flag     Enable
SPC2 flag    Disable
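With Solutions Enabler, these per-initiator flags can be set with symmask. The sketch below is only illustrative: the Symmetrix ID 1234 is a placeholder, <P9500_5E_WWPN> must be replaced with the real WWPN, and the exact hba_flags syntax should be checked against your Solutions Enabler release. Repeat for FA 7A1, 10A0 and 10A1 and for the 6E WWPN.
# symmask -sid 1234 -dir 7a -p 0 -wwn <P9500_5E_WWPN> set hba_flags on SC3 -enable
# symmask -sid 1234 -dir 7a -p 0 -wwn <P9500_5E_WWPN> set hba_flags on SPC2 -disable
# symmaskdb -sid 1234 list database -dir 7a -p 0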

7. Create a 10 GB (or any size) test LUN on the DMX-4.
8. Assign the test LUN to the P9500 WWPNs on FA ports 7A0, 7A1, 10A0 and 10A1.
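On the DMX this masking is typically done with symmask; a rough sketch, assuming Symmetrix ID 1234 and a test device 0ABC (both placeholders, and the device must already be mapped to those FA ports):
# symmask -sid 1234 -wwn <P9500_5E_WWPN> -dir 7a -p 0 add devs 0ABC
# symmask -sid 1234 -wwn <P9500_5E_WWPN> -dir 7a -p 1 add devs 0ABC
# symmask -sid 1234 -wwn <P9500_6E_WWPN> -dir 10a -p 0 add devs 0ABC
# symmask -sid 1234 -wwn <P9500_6E_WWPN> -dir 10a -p 1 add devs 0ABC
# symmask -sid 1234 refresh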
9. On the P9500, in Remote Web Console, select Add External Volumes from the General Tasks menu to display the Add External Volumes window.
10. To create a new path group and add external volumes, select By New
External Path Group and click Create External Path Group, and then create a path
group.
11. To add external volumes to an existing external path group, select By Existing External Path Group, and select the desired path group from the Available External Path Groups list. Click Next.
12. Select external volumes from the Discovered External Volumes list, enter the external volume group number and its sequential number in the Initial Parity Group ID box, and then click Add. Select whether to create LDEVs in the external volume in Allow Simultaneous Creation of LDEVs. If you want to take over the data of the external volume when you create LDEVs, select Yes in Use External Storage System Configuration and enter an LDEV name in the LDEV Name box. If you do not want to take over the data, select No in Use External Storage System Configuration and enter an LDEV name in the LDEV Name box. You can also set the attributes of the external volume by clicking Options.
Select the following options:
CLPR : 1
Path mode :
Emulation : OPEN-V
Cache mode :
Inflow control :
Select the CU where the external LUN will be placed and the LDEV number.
Click Finish to display the Confirm window.
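The external paths and LUNs can also be checked from the CCI command line before committing the GUI changes; a sketch, assuming instance -I1 (verify the exact options in the CCI reference for your microcode level):
# raidcom discover external_storage -port CL5-E -I1
# raidcom discover lun -port CL5-E -external_wwn <DMX_7A0_WWPN> -I1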
13. Proceed with LUN mapping to the front-end CHA ports as required.

========================================

Two-node cluster storage migration from EMC to Hitachi


1) This procedure assumes that the data copying is done in the background from the SAN side.
Prework
1. Sun Explorer output
2. EMC Grab output
3. EMC database files
[/etc/powermt.custom ; /etc/emcp_devicesDB.dat ;
/etc/emcp_devicesDB.idx ; kernel/drv/emcp.conf]
4. Output of inq -no_dots
5. Output of cldev show -v
6. cldev list -v | grep voor1 (scdidadm -l)
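A simple way to capture the prework outputs on each node is sketched below. The /var/tmp/premig directory is just an example, and the explorer and emcgrab paths vary by installation.
# mkdir -p /var/tmp/premig
# /opt/SUNWexplo/bin/explorer
# inq -no_dots > /var/tmp/premig/inq.out
# powermt display dev=all > /var/tmp/premig/powermt.out
# cldevice show -v > /var/tmp/premig/cldevice_show.out
# cldevice list -v > /var/tmp/premig/cldevice_list.out
# scdidadm -L > /var/tmp/premig/scdidadm.out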
Actual migration
1) Shut down the application in production.
2) Take all the resource groups offline.
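For example (the resource group name app-rg is a placeholder):
# clrg offline app-rg
# clrg status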

3) Add the quorum server as a quorum device.
Follow http://docs.oracle.com/cd/E19787-01/820-7358/6nikf2f1r/index.html
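Following that document, adding a quorum-server quorum device looks roughly like this (the device name qs1, host 192.168.1.10 and port 9000 are placeholders for your quorum server details):
# clquorum add -t quorum_server -p qshost=192.168.1.10,port=9000 qs1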
4) Check the quorum configuration. On the cluster nodes run:
# clq status
# clq list -v
5) Remove the EMC quorum disk from the cluster once you confirm that the quorum server quorum device has been added.
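For example, if the EMC quorum disk is DID device d5 (a placeholder):
# clquorum remove d5
# clquorum status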
6) Check the quorum server configuration:
# clq status
# clq list -v
This will show the minimum quorum votes needed and those available.
7) Reboot one node to check that the new quorum device is working fine.
8) Disable EMC PowerPath to enable native multipathing on both servers.
# svcadm disable /system/emcpower/powerstartup
# svcadm disable /system/emcpower/powershift
9) Configure native multipathing (MPxIO) on one server and reboot it.
10) Repeat step 9 on node 2 if the step 9 check is successful.
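On Solaris, enabling native multipathing (MPxIO) for steps 9 and 10 is typically done with stmsboot, which updates the multipath configuration and requires a reboot; a sketch:
# stmsboot -e
# stmsboot -L   (after the reboot, to verify the device name mappings)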
11) At this stage all the devices are offline, the cluster is running on the quorum server, native multipathing is enabled, and EMC PowerPath is disabled. The SAN team can now remove the EMC LUNs and assign the HDS LUNs.
12) The SAN team will un-present the EMC LUNs.
The commands below will refresh the DID list and remove the disks from OS control:
# cldevice refresh
# devfsadm -Cv
13) The SAN team will present only the quorum disk via Hitachi.
After the SAN team presents the LUN, we have to scan the disks:
# cfgadm -c configure c2 c3
# devfsadm -Cv
# cldevice populate
At this point we should see the quorum disk from the Hitachi LUNs, and a global device ID will be generated.
14) Now we need to add the new Hitachi disk as a quorum device. Identify its DID with:
# cldev list   (or scdidadm -L)
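For example, if the new Hitachi quorum disk shows up as DID device d20 (a placeholder):
# clquorum add d20
# clquorum status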


15) We need to remove the quorum server device now.
( http://docs.oracle.com/cd/E19787-01/819-5360/gbdua/index.html )
# clsetup
[Select Quorum > Remove a quorum device]
[Answer the questions when prompted; remove mint04]
[Quit the clsetup Quorum Menu and Main Menu]
[Verify that the quorum device is removed:]
# clquorum list -v
Next, clean up the quorum server configuration on the quorum server host:
# clquorumserver clear -c clustername -I clusterID quorumserver [-y]
# clqs show
16) The SAN team presents the remaining LUNs.
After the SAN team presents the LUNs, we have to scan the disks:
# cfgadm -c configure c2 c3
# devfsadm -Cv
# cldevice populate
At this point we should see the other Hitachi LUNs, and global device IDs will be generated.
17) Now bring the cluster resource groups online one by one, in the required order:
# clrg online <res group>
If you have to copy the data from the OS side instead, follow the same procedure to add the Hitachi disks, including the quorum server step, and then:
Mirror the disks from EMC to Hitachi.
Remove the EMC disks.
Boot from the Hitachi disk.
Repopulate the devices.
Configure quorum on a Hitachi disk and then remove the quorum server.
Make sure you configure the quorum disk only after removing the EMC disks and repopulating the devices.
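If the disks are under Solaris Volume Manager, the mirror-and-swap would look roughly like the sketch below. The metadevice names d10/d11/d12 and the Hitachi device path are purely illustrative, and VxVM or ZFS have their own equivalents.
# metainit d12 1 1 c3tXXXXXXXXXXXXXXXXd0s0   (create a submirror on the Hitachi LUN)
# metattach d10 d12   (attach it to the existing mirror and let it sync)
# metastat d10   (wait until the resync is complete)
# metadetach d10 d11   (detach the EMC submirror)
# metaclear d11   (remove the EMC submirror metadevice)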
I have successfully migrated a two-node cluster from EMC to Hitachi using the above procedure.
========================================
