
1. Configure the node that is designated as Node A.

In a single node configuration, this is the only node. In a cluster configuration, this is the lower node in the rack. The wiring and labeling performed by the installation personnel have been predicated on this. FSI mode is not clustered.
2. Configure the backend storage from Node A. (Node B is powered off for
this procedure.)
3. Build the repository.
4. Configure Node B. This step clusters Node B with Node A.
5. Set the time and timeserver.
6. Perform validation, if in a cluster.
7. Build the Library, Storage Server (STS), or File System.
1.2. Checkpoints
Be sure that you are working on the right machine by checking the system serial number with the command /usr/sbin/dmidecode -t 1. Start the installation with the node that is physically installed as the primary node (lower node).
[root@DYB-E4ZG2VTLN1-61 log]# dmidecode -t1
# dmidecode 2.11
SMBIOS 2.7 present.

Handle 0x0054, DMI type 1, 27 bytes
System Information
  Manufacturer: IBM
  Product Name: System x3850 X5 -[7143PEA]-
  Version: 06
  Serial Number: KQ2R7AD
  UUID: DC9E3336-BEF6-34D4-8A2D-DE5F3D17BC11
  Wake-up Type: Power Switch
  SKU Number: Not Specified
  Family: System X
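If only the serial number is needed, the same information can be pulled with dmidecode's string option (shown here as a convenience, not part of the official procedure):

[root@DYB-E4ZG2VTLN1-61 log]# dmidecode -s system-serial-number
KQ2R7AD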

Check LUN assignment:


[root@DYB-E4ZG2VTLN1-61 log]# cat /proc/scsi/scsi
Attached devices:
Host: scsi0 Channel: 02 Id: 00 Lun: 00
  Vendor: IBM      Model: ServeRAID M5015  Rev: 2.13
  Type:   Direct-Access                    ANSI SCSI revision: 05
Host: scsi3 Channel: 00 Id: 00 Lun: 00
  Vendor: IBM      Model: 2145             Rev: 0000
  Type:   Direct-Access                    ANSI SCSI revision: 06
Host: scsi3 Channel: 00 Id: 00 Lun: 01
  Vendor: IBM      Model: 2145             Rev: 0000
  Type:   Direct-Access                    ANSI SCSI revision: 06
Host: scsi3 Channel: 00 Id: 00 Lun: 02
  Vendor: IBM      Model: 2145             Rev: 0000
  Type:   Direct-Access                    ANSI SCSI revision: 06
Host: scsi3 Channel: 00 Id: 00 Lun: 03
  Vendor: IBM      Model: 2145             Rev: 0000
  Type:   Direct-Access                    ANSI SCSI revision: 06
Host: scsi3 Channel: 00 Id: 00 Lun: 04
  Vendor: IBM      Model: 2145             Rev: 0000
  Type:   Direct-Access                    ANSI SCSI revision: 06
Host: scsi3 Channel: 00 Id: 00 Lun: 05
  Vendor: IBM      Model: 2145             Rev: 0000
  Type:   Direct-Access                    ANSI SCSI revision: 06
Host: scsi3 Channel: 00 Id: 00 Lun: 06
  Vendor: IBM      Model: 2145             Rev: 0000
  Type:   Direct-Access                    ANSI SCSI revision: 06
Host: scsi3 Channel: 00 Id: 00 Lun: 07
  Vendor: IBM      Model: 2145             Rev: 0000
  Type:   Direct-Access                    ANSI SCSI revision: 06
Host: scsi3 Channel: 00 Id: 00 Lun: 08
  Vendor: IBM      Model: 2145             Rev: 0000
....
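As a quick sanity check, the backend LUN entries can be counted from the same file (a hedged convenience; the expected count is the number of mapped volumes multiplied by the number of paths per host, which is installation-specific):

grep -c "Model: 2145" /proc/scsi/scsi    # number of backend (2145) LUN entries seen by the SCSI layer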

Check that the devices are properly managed by the Linux multipath driver (multipathd):
[root@DYB-E4ZG2VTLN1-61 ~]# multipath -ll
mpath2 (36005076802860cd64000000000000003) dm-6 IBM,2145 [size=1.0T][features=1 queue_if_no_path][hwhandler=0][rw]
\_ round-robin 0 [prio=200][active]
 \_ 3:0:1:2 sdag 66:0    [active][ready]
 \_ 4:0:1:2 sdec 128:64  [active][ready]
 \_ 5:0:1:2 sdhy 134:128 [active][ready]
 \_ 6:0:1:2 sdlu 68:448  [active][ready]
\_ round-robin 0 [prio=40][enabled]
 \_ 4:0:0:2 sdcz 70:112  [active][ready]
 \_ 3:0:0:2 sdd  8:48    [active][ready]
 \_ 5:0:0:2 sdgv 132:176 [active][ready]
 \_ 6:0:0:2 sdkr 66:496  [active][ready]
mpath38 (36005076802818bf8d800000000000009) dm-38 IBM,2145 [size=5.7T][features=1 queue_if_no_path][hwhandler=0][rw]
\_ round-robin 0 [prio=200][active]
 \_ 3:0:2:9 sdbq 68:64   [active][ready]
 \_ 4:0:2:9 sdfm 130:128 [active][ready]
 \_ 5:0:2:9 sdji 8:448   [active][ready]
 \_ 6:0:2:9 sdne 71:256  [active][ready]
\_ round-robin 0 [prio=40][enabled]
 \_ 3:0:3:9 sdcl 69:144  [active][ready]
 \_ 4:0:3:9 sdgh 131:208 [active][ready]
 \_ 5:0:3:9 sdkd 66:272  [active][ready]
 \_ 6:0:3:9 sdnz 128:336 [active][ready]
mpath23 (36005076802860cd64000000000000019) dm-27 IBM,2145 [size=5.7T][features=1 queue_if_no_path][hwhandler=0][rw]
\_ round-robin 0 [prio=200][active]
 \_ 4:0:0:23 sddu 71:192  [active][ready]
 \_ 5:0:0:23 sdhq 134:0   [active][ready]
 \_ 6:0:0:23 sdlm 68:320  [active][ready]
 \_ 3:0:0:23 sdy  65:128  [active][ready]
\_ round-robin 0 [prio=40][enabled]
 \_ 3:0:1:23 sdbb 67:80   [active][ready]
 \_ 4:0:1:23 sdex 129:144 [active][ready]
 \_ 5:0:1:23 sdit 135:208 [active][ready]
 \_ 6:0:1:23 sdmp 70:272  [active][ready]
mpath40 (36005076802818bf8d80000000000000b) dm-40 IBM,2145 [size=5.7T][features=1 queue_if_no_path][hwhandler=0][rw]
\_ round-robin 0 [prio=200][active]
 \_ 3:0:2:11 sdbs 68:96   [active][ready]
 \_ 4:0:2:11 sdfo 130:160 [active][ready]
 \_ 5:0:2:11 sdjk 8:480   [active][ready]
 \_ 6:0:2:11 sdng 71:288  [active][ready]
\_ round-robin 0 [prio=40][enabled]
 \_ 3:0:3:11 sdcn 69:176  [active][ready]
 \_ 4:0:3:11 sdgj 131:240 [active][ready]
 \_ 5:0:3:11 sdkf 66:304  [active][ready]
 \_ 6:0:3:11 sdob 128:368 [active][ready]
...
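A quick way to confirm that every mpath device has the expected number of paths is to count devices and ready paths (a hedged convenience; eight paths per device matches the output above, but the expected count is installation-specific):

multipath -ll | grep -c "^mpath"                # number of multipath devices
multipath -ll | grep -c "\[active\]\[ready\]"   # total number of ready paths across all devices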

1.3. TS7650G Node1 Install


Attention: Before you begin ProtecTIER software configuration, confirm that the attached disk storage has been properly configured for use with the TS7650G. Failure to do so could result in the Red Hat Linux operating system having to be reinstalled on one or more of the TS7650G servers.


Prerequisites:
TCP ports 6520, 6530, 6540, 6550, 3501, and 3503 are open in the customer's firewall. Each ProtecTIER server being used for replication must allow TCP access through these ports (see the quick check after this list).
If you are in a dual-node configuration, the second node MUST be powered off.
You have acquired, or know where to locate, the server information about the customer's LAN and replication network.
Note: The above information can be found on the completed IP Address Worksheet, located in the IBM System Storage TS7600 with ProtecTIER Introduction and Planning Guide for the TS7650G (3958 DD4).
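A quick way to verify the replication ports from a node is shown below (a hedged sketch; peer-pt-server is a placeholder for the remote ProtecTIER replication server):

# placeholder host name; replace peer-pt-server with the remote replication server
for port in 6520 6530 6540 6550 3501 3503; do
    nc -z -w 3 peer-pt-server $port && echo "port $port open" || echo "port $port closed/blocked"
done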
Once you have validated these points, proceed with the installation and configuration of the TS7650G.
Connect to the system as user ptconfig (password ptconfig); the installer menu appears automatically. If it does not, issue the menu command.
Choose option 1) ProtecTIER Configuration
Then choose option 1) Configure ProtecTIER node
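After the node configuration wizard completes, the network settings entered can be sanity-checked from the shell (a hedged sketch; the gateway value is a placeholder and must be replaced with the customer's gateway from the IP Address Worksheet):

ip addr show                     # confirm the customer LAN IP addresses and masks were applied
ip route show | grep default     # confirm the default gateway
ping -c 3 <customer-gateway-ip>  # placeholder: replace with the gateway from the worksheet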

2. Backend Zoning information


The zoning between the ProtecTIER nodes and V7000 follows the SAN zoning
guidelines.

3. File system layout creation

Now we will create the file systems, using the menu tool.
Log on to the primary node (node1) as ptadmin; the menu tool starts automatically. If it does not, issue the menu command.
Select the following options:
| 1) ProtecTIER Configuration (...)
| 7) File Systems Management (...)
| 1) Configure file systems on all available devices

This option creates the file system layout on all available mpath devices defined on this node. Once the layout is done, use the fifth option in the file system menu, as shown below, to display the file system layout:

Once this task is done, proceed with the repository creation.
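As an optional cross-check from the shell, standard LVM commands list the physical volumes, volume groups, and logical volumes created by the layout step (a hedged convenience; the exact VG/LV names are chosen by the ProtecTIER tooling and are not documented here):

pvs      # physical volumes, one per mpath partition used by the layout
vgs      # volume groups created for the repository file systems
lvs      # logical volumes backing the file systems
df -h    # mounted file systems and their sizes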


4. Repository Creation

4.1. Planning

We performed our repository size plan using the PT Planner Tool (https://w3connections.ibm.com/communities/service/html/communityview?communityUuid=c37424c5-7cf6-449f-8a36-b418c85c466f#fullpageWidgetId%3DWbb131d2c8fb04d4694b62717bc55af9e%26file%3Df575b89f-3aad4e1d-85aa-ddce89c6e0a9).
Below is the estimated ProtecTIER performance with the hardware purchased.
The performance assumption is a deduplication ratio of 8:1 and an overall throughput of 1100 MB/s, running on the available hardware (24x300GB-15K + 105x1.2TB-NLSAS).
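As a rough back-of-the-envelope illustration of what the deduplication ratio means for capacity (a hedged sketch only; the usable repository capacity after RAID and spares is not stated in this document, so the 100 TB figure below is a placeholder): nominal capacity = usable user-data capacity x factoring ratio.

# placeholder arithmetic only: 100 TB usable user data x 8 (dedup ratio 8:1) = 800 TB nominal
echo "100 * 8" | bc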

4.2. Creation using the PT Manager GUI


Here is the procedure to create and format the repository using the GUI. Start the wizard using Menu > Repository > Create repository.

By default, the wizard uses the smallest file systems detected in the layout and defines them as members of the metadata (MD) part of the repository; the rest of the file systems are used for the user data (UD) part of the repository.
If for some reason specific file system sizes would prevent this from being the correct organization, see chapter 4.4, Manual distribution of the file systems to create the repository.

Click Finish to proceed with the repository creation.

In our configuration, the repository creation operation took 12 hours. This type of operation runs in the background and prevents any other operation through the PT Manager GUI for the involved PT cluster.
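While the padding/formatting runs, the I/O it drives can also be observed from the node itself (a hedged convenience, assuming the sysstat package is installed; the storage-side statistics shown in the next sections are a separate view):

iostat -xm 60        # extended per-device statistics in MB, refreshed every 60 seconds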

4.3. Repository padding performance information

In this chapter we give some statistics of storage utilization while formatting the
repository.
4.3.1. V7000 UD + MD
Activity when the padding operation started: showing MB/s.

Showing I/O.

Activity when the padding operation is in progress (after one hour): showing IOPS.

4.3.2. V7000 for UD only

Activity when the padding operation started: showing MB/s.

Showing I/O.

Activity when the padding operation is in progress (after one hour): showing IOPS.
4.4. Manual distribution of the file systems to create the repository
Usually the PT Manager GUI assigns the correct file systems to the correct repository part (UD or MD), based on the size of the file systems/LUNs.
If the size of the LUNs prevents this from working as expected, you have to determine the correct layout manually. Follow these steps.
Identify the serial numbers of the RAID 10 LUNs using the multipath -ll command on the ProtecTIER node. In our case, the LUNs that are RAID 10 formatted have 042 in their serial number:

[root@MOPB4VTLN142 sbin]# multipath -ll | grep mpath | grep "042.."
mpath2 (36005076306ffc06c0000000000004202) dm-42 IBM,2107900
mpath1 (36005076306ffc06c0000000000004201) dm-40 IBM,2107900
mpath0 (36005076306ffc06c0000000000004200) dm-37 IBM,2107900
Then, find the link between the PV (mpath) and the LV (displayed in the PT GUI). Command to use: pvdisplay /dev/mapper/mpath2p1

[root@MOPB4VTLN142 sbin]# multipath -ll | grep mpath | grep "042.." | awk '{ print "pvdisplay /dev/mapper/"$1"p1 | grep -E \"PV Name|VG Name\" "}'
pvdisplay /dev/mapper/mpath2p1 | grep -E "PV Name|VG Name"
pvdisplay /dev/mapper/mpath1p1 | grep -E "PV Name|VG Name"
pvdisplay /dev/mapper/mpath0p1 | grep -E "PV Name|VG Name"
[root@MOPB4VTLN142 sbin]# pvdisplay /dev/mapper/mpath2p1 | grep -E "PV Name|VG Name"
  PV Name               /dev/mapper/mpath2p1
  VG Name               vg0
[root@MOPB4VTLN142 sbin]# pvdisplay /dev/mapper/mpath1p1 | grep -E "PV Name|VG Name"
  PV Name               /dev/mapper/mpath1p1
Then, use the PT Manager GUI repository creation wizard to distribute the correct LUNs to the correct part of the repository: RAID 10 LUNs as MD members, the others as UD members.
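To avoid running pvdisplay by hand for each device, the two steps above can be combined in a small loop (a hedged helper sketch; the "042.." serial pattern is specific to this installation and must be adapted):

for m in $(multipath -ll | grep "^mpath" | grep "042.." | awk '{print $1}'); do
    vg=$(pvdisplay /dev/mapper/${m}p1 2>/dev/null | awk '/VG Name/ {print $3}')
    echo "$m -> VG: ${vg:-none}"
done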

5. Clustered configuration
If you have a clustered environment, you have to configure the second node, have it join the cluster, and finally perform the fence tests.

5.1. Join Node 2 to the cluster


/!\ Note that NODE 1 might be fenced at NODE 2 startup.
Connect to NODE 1 and stop the PT services, or let this be done via the menu tool from NODE 2 while configuring NODE 2.
Once the services have been stopped, continue on NODE 2.
Connect to NODE 2 with the ptconfig user and start the menu.
Enter your parameters (IP, mask, gateway, name) when prompted.

You can skip the fence test at installation and do it later using the menu tool.
Answer Q at the fence test question to validate the cluster configuration.
Below is a figure of what occurs when selecting the fence test during the second node configuration (Node 1 is fenced).

To ensure the fence mechanism works well, the fence test has to be executed from each node of the cluster. This can be done at any time using the menu tool (Option 12).
/!\ Caution: note the following when you are doing reboot/fence tests. Avoid performing the two fence tests within one hour, to avoid this error:
Tue May 15 23:08:41 CEST 2012 cmgnt 14103 (14103) 5020: The Cluster
Manager detected that there have been more than 2 unscheduled restarts in the last 60
minutes. It has stopped the process of bringing up the services to prevent endless
reboot cycles.
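If this error is reported, the recent restart history can be reviewed before retrying (a hedged convenience, using the standard wtmp accounting):

last reboot | head -5     # most recent reboots recorded on this node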

5.2. Validate the cluster configuration


This chapter shows how to validate the cluster configuration: simply use the menu tool, ProtecTIER Configuration, then the Validate configuration option.

A fence test notification appears in the notification center within the PT Manager GUI. The software alert button turns red and blinks. The following message is reported:
