  Vendor: IBM      Model: 2145             Rev: 0000
  Type:   Direct-Access                    ANSI SCSI revision: 06
Host: scsi3 Channel: 00 Id: 00 Lun: 05
  Vendor: IBM      Model: 2145             Rev: 0000
  Type:   Direct-Access                    ANSI SCSI revision: 06
Host: scsi3 Channel: 00 Id: 00 Lun: 06
  Vendor: IBM      Model: 2145             Rev: 0000
  Type:   Direct-Access                    ANSI SCSI revision: 06
Host: scsi3 Channel: 00 Id: 00 Lun: 07
  Vendor: IBM      Model: 2145             Rev: 0000
  Type:   Direct-Access                    ANSI SCSI revision: 06
Host: scsi3 Channel: 00 Id: 00 Lun: 08
  Vendor: IBM      Model: 2145             Rev: 0000
....
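A listing like the one above can be summarized with a quick shell pipeline. The here-doc below is a trimmed, illustrative sample standing in for the real /proc/scsi/scsi contents; on a live node you would read that file directly.

```shell
# Count the IBM 2145 LUNs the kernel reports. The here-doc is a sample
# of /proc/scsi/scsi output; replace it with the real file on a node.
count=$(grep -c 'Model: 2145' <<'EOF'
Host: scsi3 Channel: 00 Id: 00 Lun: 05
  Vendor: IBM      Model: 2145             Rev: 0000
  Type:   Direct-Access                    ANSI SCSI revision: 06
Host: scsi3 Channel: 00 Id: 00 Lun: 06
  Vendor: IBM      Model: 2145             Rev: 0000
  Type:   Direct-Access                    ANSI SCSI revision: 06
EOF
)
echo "2145 LUNs seen: $count"
```

This makes it easy to confirm that the number of LUNs the kernel sees matches the number of volumes mapped from the storage subsystem.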
Check that the devices are correctly managed by the Linux multipathd driver:
[root@DYB-E4ZG2VTLN1-61 ~]# multipath -ll
mpath2 (36005076802860cd64000000000000003) dm-6 IBM,2145
[size=1.0T][features=1 queue_if_no_path][hwhandler=0][rw]
\_ round-robin 0 [prio=200][active]
 \_ 3:0:1:2 sdag 66:0    [active][ready]
 \_ 4:0:1:2 sdec 128:64  [active][ready]
 \_ 5:0:1:2 sdhy 134:128 [active][ready]
 \_ 6:0:1:2 sdlu 68:448  [active][ready]
\_ round-robin 0 [prio=40][enabled]
 \_ 4:0:0:2 sdcz 70:112  [active][ready]
 \_ 3:0:0:2 sdd  8:48    [active][ready]
 \_ 5:0:0:2 sdgv 132:176 [active][ready]
 \_ 6:0:0:2 sdkr 66:496  [active][ready]
mpath38 (36005076802818bf8d800000000000009) dm-38 IBM,2145
[size=5.7T][features=1 queue_if_no_path][hwhandler=0][rw]
\_ round-robin 0 [prio=200][active]
 \_ 3:0:2:9 sdbq 68:64   [active][ready]
 \_ 4:0:2:9 sdfm 130:128 [active][ready]
 \_ 5:0:2:9 sdji 8:448   [active][ready]
 \_ 6:0:2:9 sdne 71:256  [active][ready]
\_ round-robin 0 [prio=40][enabled]
 \_ 3:0:3:9 sdcl 69:144  [active][ready]
 \_ 4:0:3:9 sdgh 131:208 [active][ready]
 \_ 5:0:3:9 sdkd 66:272  [active][ready]
 \_ 6:0:3:9 sdnz 128:336 [active][ready]
mpath23 (36005076802860cd64000000000000019) dm-27 IBM,2145
[size=5.7T][features=1 queue_if_no_path][hwhandler=0][rw]
\_ round-robin 0 [prio=200][active]
 \_ 4:0:0:23 sddu 71:192  [active][ready]
 \_ 5:0:0:23 sdhq 134:0   [active][ready]
 \_ 6:0:0:23 sdlm 68:320  [active][ready]
 \_ 3:0:0:23 sdy  65:128  [active][ready]
\_ round-robin 0 [prio=40][enabled]
 \_ 3:0:1:23 sdbb 67:80   [active][ready]
 \_ 4:0:1:23 sdex 129:144 [active][ready]
 \_ 5:0:1:23 sdit 135:208 [active][ready]
 \_ 6:0:1:23 sdmp 70:272  [active][ready]
mpath40 (36005076802818bf8d80000000000000b) dm-40 IBM,2145
[size=5.7T][features=1 queue_if_no_path][hwhandler=0][rw]
\_ round-robin 0 [prio=200][active]
 \_ 3:0:2:11 sdbs 68:96   [active][ready]
 \_ 4:0:2:11 sdfo 130:160 [active][ready]
 \_ 5:0:2:11 sdjk 8:480   [active][ready]
 \_ 6:0:2:11 sdng 71:288  [active][ready]
\_ round-robin 0 [prio=40][enabled]
 \_ 3:0:3:11 sdcn 69:176  [active][ready]
 \_ 4:0:3:11 sdgj 131:240 [active][ready]
 \_ 5:0:3:11 sdkf 66:304  [active][ready]
 \_ 6:0:3:11 sdob 128:368 [active][ready]
...
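Each volume above shows two path groups of four paths each (eight paths in total). A quick way to verify that no device has lost paths is to count the path lines per mpath device. The here-doc below is an abbreviated, illustrative sample of `multipath -ll` output; on a node you would pipe the command's output into the same awk script.

```shell
# Count paths per multipath device. A line containing an H:C:T:L tuple
# followed by an sd device name is a path line; the sample here-doc
# stands in for live 'multipath -ll' output (only two paths shown).
summary=$(awk '
  /^mpath/ { dev = $1 }                          # remember current device
  /[0-9]+:[0-9]+:[0-9]+:[0-9]+ sd/ { paths[dev]++ }  # count its path lines
  END { for (d in paths) print d, paths[d] }
' <<'EOF'
mpath2 (36005076802860cd64000000000000003) dm-6 IBM,2145
\_ round-robin 0 [prio=200][active]
 \_ 3:0:1:2 sdag 66:0 [active][ready]
 \_ 4:0:1:2 sdec 128:64 [active][ready]
EOF
)
echo "$summary"
```

On a healthy node in this configuration, every device should report 8.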
3.
Now we create the file systems using the menu tool.
Log on to the primary node (node1) as ptadmin; the menu tool starts automatically. If it
does not, issue the menu command.
Select the following options:
| 1) ProtecTIER Configuration (...)
| 7) File Systems Management (...)
| 1) Configure file systems on all available devices
This option creates the file system layout on all available mpath devices defined on this node.
Once the layout is done, use the fifth option in the file system menu, as shown below, to
display the file system layout:
We planned our repository size using the PT Planner Tool (https://w3connections.ibm.com/communities/service/html/communityview?communityUuid=c37424c5-7cf6-449f-8a36-b418c85c466f#fullpageWidgetId%3DWbb131d2c8fb04d4694b62717bc55af9e%26file%3Df575b89f-3aad4e1d-85aa-ddce89c6e0a9).
Below is the estimated ProtecTIER performance with the hardware purchased.
Assuming a deduplication ratio of 8:1, the overall throughput will be 1100 MB/s,
running on the available hardware (24x 300 GB 15K + 105x 1.2 TB NL-SAS).
By default, the wizard takes the smallest file systems detected in the layout and
defines them as members of the metadata (MD) part of the repository; the remaining
file systems are used for the user data (UD) part of the repository.
If for some reason your file system sizes would prevent this from being the correct
organization, see 7.4, "Manual distribution of the file systems to create the
repository".
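The default assignment can be pictured as a simple sort by size: the smallest file systems become metadata candidates and the rest carry user data. The file system names and sizes below are made up purely for illustration; the wizard's actual selection logic lives inside the PT Manager.

```shell
# Illustrative sketch only: hypothetical file systems with sizes in GB.
# Sort by size, tag the smallest as MD and the rest as UD, mimicking the
# wizard's default "smallest file systems become metadata" behavior.
plan=$(printf '%s\n' \
  'fs_meta 1024' \
  'fs_user1 5700' \
  'fs_user2 5700' |
  sort -k2 -n |
  awk 'NR == 1 { print $1, "MD"; next } { print $1, "UD" }')
echo "$plan"
```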
In our configuration, the repository creation operation took 12 hours. This type of
operation runs in the background and prevents any other operation through the PT Manager
GUI on the involved PT cluster.
In this chapter we give some statistics of storage utilization while the repository is
being formatted.
4.3 V7000 UD + MD
(Figures: activity when the padding operation started, showing MB/s and I/O; activity
while the padding operation is in progress, after one hour, showing IOPS and I/O)
7.4 Manual distribution of the file systems to create the repository
Usually the PT GUI Manager assigns the correct file systems to the correct repository
part (UD or MD), based on the size of the file systems/LUNs.
If the size of the LUNs prevents this from working as expected, you have to determine
the correct layout manually. Follow these steps.
Check the serial numbers of the RAID 10 LUNs using the multipath -ll command on the
ProtecTIER node. In our case, the LUNs that are formatted as RAID 10 have 042 in
their serial number:
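Filtering the RAID 10 LUNs then reduces to a grep on the serial (WWID) shown in parentheses. The here-doc below is an illustrative sample: mpath50 and its WWID containing 042 are made-up values; on the node you would pipe the output of `multipath -ll` itself.

```shell
# Keep only the devices whose WWID contains 042 (our RAID 10 marker).
# The sample lines stand in for real 'multipath -ll' headers; mpath50
# and its serial are hypothetical.
raid10=$(grep '042' <<'EOF'
mpath2 (36005076802860cd64000000000000003) dm-6 IBM,2145
mpath50 (36005076802860cd64042000000000017) dm-50 IBM,2145
EOF
)
echo "$raid10"
```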
5. Clustered configuration
If you have a clustered environment, you have to configure the second node, ask it to
join the cluster, and finally perform the fence tests.
You can skip the fence test at installation and run it later using the menu tool.
Answer Q at the fence test question to validate the cluster configuration.
The figure below shows what occurs when the fence test is selected during the second
node configuration (node 1 is fenced).
To ensure the fence mechanism works well, the fence test has to be executed from each
node of the cluster. This can be done at any time using the menu tool (option 12).
/!\ Caution: note the following when doing reboot/fence tests. Avoid performing the
two fence tests within one hour of each other, to avoid this error:
Tue May 15 23:08:41 CEST 2012 cmgnt 14103 (14103) 5020: The Cluster
Manager detected that there have been more than 2 unscheduled restarts in the last 60
minutes. It has stopped the process of bringing up the services to prevent endless
reboot cycles.
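When in doubt, you can check whether the cluster manager has already hit this restart limit by searching its log for event code 5020. The log excerpt in the here-doc is illustrative (the 5001 line is a made-up filler entry), and the real log location depends on the installation, so treat this as a sketch rather than a documented procedure.

```shell
# Count restart-limit (code 5020) events in a cluster manager log
# excerpt. The here-doc is sample data; on a node you would grep the
# actual cluster manager log file instead.
hits=$(grep -c '5020:' <<'EOF'
Tue May 15 23:08:41 CEST 2012 cmgnt 14103 (14103) 5020: The Cluster Manager detected that there have been more than 2 unscheduled restarts in the last 60 minutes.
Tue May 15 22:55:02 CEST 2012 cmgnt 14103 (14103) 5001: sample filler entry
EOF
)
echo "restart-limit events: $hits"
```

If the count is nonzero, wait out the hour before retrying the fence test.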
A fence test notification appears in the notification center of the PT Manager
GUI. The software alert button turns red and blinks. The following message is reported: