
ProtecTier 3.1.8.0 Install/Setup and Use with Tivoli Storage Manager 6.3

How to install a ProtecTier TS7650G


+ TSM related Setup

Software release 3.1.8.0


[PT_TS7650G_V3.1.8.0-full.x86_64]

Julien Sauvanet - SME IBM SO Delivery French IMT -


TABLE OF CONTENTS
1. Context ............................................................ 1
2. ProtecTier at a glance ............................................. 1
2.1. TSSC / PT Manager ................................................ 3
3. ProtecTier TS7650G configuration ................................... 3
3.1. Checkpoints ...................................................... 3
3.2. TS7650G Node1 Install ............................................ 4
4. File system layout creation ........................................ 9
5. Repository Creation ................................................ 12
5.1. Planning ......................................................... 12
5.2. Creation ......................................................... 14
6. RAS Configuration .................................................. 21
7. Clustered configuration ............................................ 24
7.1. Join the Node 2 within the cluster ............................... 24
8. Host Attachment .................................................... 31
8.1. Configure the port attributes .................................... 31
8.2. Front End Adapter Zoning ......................................... 32
8.3. Zoning information output ........................................ 33
8.4. Host Initiator Management ........................................ 34
9. Enable the LUN Masking Group (LMG) ................................. 37
10. Setup a Virtual Library (VLib) .................................... 39
10.1. Virtual Library at a glance ..................................... 39
10.2. Naming convention used for DYB Infra ............................ 39
10.3. Maximums ........................................................ 39
10.4. Create a new library ............................................ 40
11. Configure the Lun Masking Group ................................... 48
12. TSM ............................................................... 52
12.1. TSM Attachment description ...................................... 52
12.2. Library definition .............................................. 53
12.3. Label the virtual tapes ......................................... 55
12.4. Define the device class ......................................... 56
13. PT User Management ................................................ 57
14. PT Notification configuration ..................................... 57
14.1. Mail notifications .............................................. 57
15. Report and diagnostics ............................................ 59
15.1. Generate a report ............................................... 59
15.2. PT configuration files backup (out of the box) .................. 61
16. Best practices for TSM ............................................ 62
17. Tips .............................................................. 64
17.1. Community & Wiki ................................................ 64
17.2. Cluster System startup procedure information .................... 64
17.3. Display the serial Id of a machine .............................. 65
17.4. Log ............................................................. 65
17.5. Reboot .......................................................... 65
17.6. How to clean an old repository configuration [to be done only per IBM Support request] ... 65
17.7. Display WWN of PT cluster nodes (on linux) ...................... 66
17.8. Use the PT Manager GUI to plan your repository .................. 66
17.9. Information about Time setting .................................. 68
17.10. WWN list of multipath devices .................................. 68

ProtecTier 3.1.8.0 Install/Setup and Use with Tivoli Storage Manager 6.3

1. Context
This installation is part of a shared storage infrastructure build.
This documentation is intended to help anyone who wants to configure a TS7650G VTL working with a Tivoli Storage Manager (TSM) 6.3 server.

2. ProtecTier at a glance
The IBM System Storage TS7650G ProtecTIER Deduplication Gateway is
designed to meet the disk-based data protection needs of the enterprise data center while enabling significant
infrastructure cost reductions. The solution offers industry-leading inline deduplication performance and
scalability.
The TS7650G Model DD4 ProtecTIER Deduplication Gateway provides:
- High-speed backups: up to 2000 MB/s or more (7.2 TB/hr) of sustained inline deduplication backup performance
- Even faster restores: up to 2800 MB/s or more (10 TB/hr) of sustained recovery performance


Here is the general implementation within a rack:


2.1. TSSC / PT Manager


To use the TSSC
http://service@TSSCIP:7080/TSSC (id: Service pwd: ibm2serv)

Default usernames and passwords:

Permission Level    Default Username    Default Password
Administrator       ptadmin             ptadmin
Operator            ptoper              ptoper
Monitor             ptuser              ptuser
Grid Manager        gmadmin             gmadmin
Root                root                admin

Administrator can create new user accounts and grant authority.

3. ProtecTier TS7650G configuration


The following example uses a two-node cluster configuration. If you have only one node, ignore all the tasks related to node 2.

3.1. Checkpoints
Be sure that you are working on the right machine by checking the system serial number:
/usr/sbin/dmidecode -t 1

Check the LUN assignment:
cat /proc/scsi/scsi

Check that the devices are correctly managed by the Linux multipathd driver:
multipath -ll
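As a quick sanity check, the number of mpath devices should match the number of LUNs assigned to the node. A minimal sketch (not part of the official procedure), using sample `multipath -ll` output taken from this install; on a live node, replace the here-document with the real command:

```shell
# Count the multipath devices reported by `multipath -ll`.
# The sample lines below are captured output from this install (IBM 2107900 LUNs).
mp_out=$(cat <<'EOF'
mpath2 (36005076306ffc06c0000000000004202) dm-42 IBM,2107900
mpath1 (36005076306ffc06c0000000000004201) dm-40 IBM,2107900
mpath0 (36005076306ffc06c0000000000004200) dm-37 IBM,2107900
EOF
)
mp_count=$(printf '%s\n' "$mp_out" | awk '/^mpath/ { n++ } END { print n }')
echo "$mp_count multipath devices detected"
```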

3.2. TS7650G Node1 Install



Attention: Before you begin ProtecTIER software configuration, confirm that the attached disk storage has
been properly configured for use with the TS7650G. Failure to do so could result in the Red Hat Linux
operating system having to be reinstalled on one or more of the TS7650G servers.
Prerequisites:
TCP ports 6520, 6530, 6540, 6550, 3501, and 3503 are open in the customer's firewall. Each ProtecTIER
server being used for replication must allow TCP access through these ports.

If you are in a clustered gateway configuration, the second node must be powered off.


You have acquired, or know where to locate, the server information about the customer's LAN and replication
network.
Note: The above information can be found on the completed IP Address Worksheet, located in the IBM
System Storage TS7600 with ProtecTIER Introduction and Planning Guide for the TS7650G (3958 DD4).
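The firewall prerequisite above can be pre-checked from any host that reaches the ProtecTIER servers. A sketch, where `REPLICATION_PEER` is a placeholder hostname and the use of `nc -z` is an assumption (any TCP probe tool works); here we only build the list of probe commands:

```shell
# Build one connectivity-check command per required replication TCP port.
REPLICATION_PEER=pt-peer.example.com   # hypothetical peer hostname
port_checks=""
for p in 6520 6530 6540 6550 3501 3503; do
  port_checks="${port_checks}nc -z -w 3 ${REPLICATION_PEER} ${p}
"
done
printf '%s' "$port_checks"
```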
Once you have validated those points, proceed with the installation/configuration of the TS7650G.
Connect to the system as user ptconfig (password ptconfig); the installer menu appears automatically. If it does not, issue the command: menu

Choose option 1) ProtecTier Configuration


Then choose again option 1) Configure ProtecTier node


The installer proceeds with some verifications (this can take up to 5 minutes).

Enter Yes.


The second question is asked only if the VTFD service is up. If so, click Yes.

These steps can take up to 30 minutes as well. (The elapsed time of the "Checking repository" step depends on your repository size; it can take a while.)


Check both time values and submit them by entering Yes.


Note: the default gateway connectivity check failed in our case because the firewall installed on it rejects the ICMP protocol.
Once the PT node has been configured, create the file system layout, which is used later for the repository creation.

4. File system layout creation


When the installation completes, change to the /opt/dtc/app/sbin directory:
cd /opt/dtc/app/sbin
Check whether devices or file systems already exist:
./fsCreate -r   Displays all repository GFS file systems
./fsCreate -u   Displays unused devices
./fsCreate -g   Displays all non-repository GFS file systems
Create the file systems on the server:
./fsCreate -n
A message displays stating that any existing data on the back-end storage will be removed.
At the prompt, type "data loss" and press Enter.


The fsCreate tool creates the logical volumes and file systems on all accessible LUNs, updates the /etc/fstab file, and mounts all file systems.

The command displays all the disks without any other messages.

Finally, create the file systems. The fsCreate tool creates one FS/LV per VG per mpath device (an mpath is a path to a LUN).
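Since fsCreate updates /etc/fstab, one way to verify the result is to list the GFS entries it added. A sketch: the two sample entries (device paths, mount points, options) below are hypothetical; on a real ProtecTier node, point `fstab_file` at /etc/fstab instead.

```shell
# List the GFS mount points recorded in an fstab file.
fstab_file=$(mktemp)
cat > "$fstab_file" <<'EOF'
/dev/vg0/lv_vg0  /mnt/fs_vg0  gfs  defaults,noatime  0 0
/dev/vg6/lv_vg6  /mnt/fs_vg6  gfs  defaults,noatime  0 0
EOF
gfs_mounts=$(awk '$3 == "gfs" { print $2 }' "$fstab_file")
gfs_count=$(printf '%s\n' "$gfs_mounts" | wc -l)
echo "$gfs_count GFS file systems in fstab"
rm -f "$fstab_file"
```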


This can run for a while, depending on the number of mpath devices you have.
Now we will create the repository using the PT Manager GUI.


5. Repository Creation
5.1. Planning
We performed our repository size plan using the PT Planner Tool. (https://w3connections.ibm.com/communities/service/html/communityview?communityUuid=c37424c5-7cf6-449f-8a36b418c85c466f#fullpageWidgetId%3DWbb131d2c8fb0_4d46_94b6_2717bc55af9e%26file%3Df575b89f-3aad4e1d-85aa-ddce89c6e0a9)
Based on our performance assumptions (dedup ratio 8:1, throughput 1920 MB/s) and the available hardware (7 TB of RAID10 disk capacity), we obtained the following results as output.

We will start with a repository of 270 TB of user data, using a metadata index of 7.6 TB, spread over 2 + 1 file systems.
According to the PT Planner tool, the maximum repository size we could reach (keeping the same ratio/performance), should the need arise, is 313 TB.
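For reference, the relation between those planner figures can be restated as simple arithmetic (a sketch; the metadata percentage is derived from this planner output, not a fixed rule):

```shell
# Restate the planner output: metadata overhead and growth headroom.
meta_pct=$(awk 'BEGIN { printf "%.1f", 100 * 7.6 / 270 }')  # 7.6 TB metadata index vs 270 TB user data
headroom=$(( 313 - 270 ))                                   # TB left before the 313 TB ceiling
echo "metadata overhead: ${meta_pct}% / growth headroom: ${headroom} TB"
```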


5.2. Creation

NAME: MOPB4VTL42


Default balance between the Meta Data and User Data parts of the repository, as displayed by the GUI


Usually the PT Manager GUI assigns the detected RAID10 LUNs to the metadata group (left pane above).
In our case the RAID10 LUNs have specific serial numbers, identifiable by a serial ending in 43xx or 42xx.
Match the serials of the RAID10 LUNs using the multipath -ll command:
[root@MOPB4VTLN142 sbin]# multipath -ll | grep mpath | grep "042.."
mpath2 (36005076306ffc06c0000000000004202) dm-42 IBM,2107900
mpath1 (36005076306ffc06c0000000000004201) dm-40 IBM,2107900
mpath0 (36005076306ffc06c0000000000004200) dm-37 IBM,2107900

Correlation between PV and VG (as displayed in the PT GUI)

Command to use: pvdisplay /dev/mapper/mpath2p1
[root@MOPB4VTLN142 sbin]# multipath -ll | grep mpath | grep "042.." | awk '{ print "pvdisplay /dev/mapper/"$1"p1 | grep -E \"PV Name|VG Name\" "}'
pvdisplay /dev/mapper/mpath2p1 | grep -E "PV Name|VG Name"
pvdisplay /dev/mapper/mpath1p1 | grep -E "PV Name|VG Name"
pvdisplay /dev/mapper/mpath0p1 | grep -E "PV Name|VG Name"


[root@MOPB4VTLN142 sbin]# pvdisplay /dev/mapper/mpath2p1 | grep -E "PV Name|VG Name"
  PV Name    /dev/mapper/mpath2p1
  VG Name    vg0
[root@MOPB4VTLN142 sbin]# pvdisplay /dev/mapper/mpath1p1 | grep -E "PV Name|VG Name"
  PV Name    /dev/mapper/mpath1p1
  VG Name    vg6
[root@MOPB4VTLN142 sbin]# pvdisplay /dev/mapper/mpath0p1 | grep -E "PV Name|VG Name"
  PV Name    /dev/mapper/mpath0p1
  VG Name    vg12

Compare with the information displayed in the GUI (figure below):


Finalize the configuration

Repository created


6. RAS Configuration
Once the repository has been created, we can proceed with the RAS configuration, still working on NODE 1.
Be aware that the RAS configuration is an offline procedure.

If the setup has already been done, you just have to check the values; if not, enter the right values.


The configuration is done. You can access the TSSC to see that your hosts are now attached.


7. Clustered configuration
If you have a clustered environment, you have to configure the second node, ask it to join the cluster, and finally perform the fence tests.

7.1. Join the Node 2 within the cluster


/!\ On NODE 1: STOP GFS & VTFD
Connect to NODE 1


Once the services have been stopped, continue on NODE 2.

Connect to NODE 2 with the ptconfig user and start the menu.


Enter your parameters (IP, netmask, gateway, name) when prompted.

DO NOT PERFORM THE FENCE TEST at installation time.

Answer Q at the fence test question to validate the cluster configuration.


You can run the fence test later using the menu.


When the fence test on node 2 is completed, follow the same procedure from node 1; this is a mandatory step if you plan to use the PT Replication feature.
During the fence test, you can have a look at the PT Manager GUI: bottom-right corner, Software Alert button.


/!\ Caution, please note the following when you are doing reboot/fence tests:
Avoid performing the two fence tests within one hour, otherwise you get this error:
Tue May 15 23:08:41 CEST 2012 cmgnt 14103 (14103) 5020: The Cluster Manager detected that there have been more than 2 unscheduled restarts in the last 60 minutes. It has stopped the process of bringing up the services to prevent endless reboot cycles.


Validate that the cluster configuration is reflected in the PT Manager GUI:

The 2 nodes are defined within the same System


Cluster display in the PT GUI, in normal mode (2 nodes active)


8. Host Attachment

8.1. Configure the port attributes


Select each node in the cluster and click on the Node => Port Attributes Menu.
Change the port settings according to your infrastructure and apply.


In our case, we force the speed to 8 Gbit and the P2P topology.

Once the ports are set and connected, collect all the HBA information to build the zoning.

8.2. Front End Adapter Zoning


Important notice about the front-end VTL adapter zoning:
- A ProtecTIER server front-end port shares zones with backup server FC ports only.
- Each zone contains only one ProtecTIER front-end port.
Now you are able to zone free TSM FC adapters to the VTL front-end FC adapters.
Best practice says not to mix device types on the same HBA; it means we will find the free HBA ports on the system and use them for the VTL front-end zoning.
Execute this command on your AIX TSM server to determine whether each adapter has child devices:
i=0; while [[ $i -lt 36 ]]; do id=$(lsdev -l fcs${i} | awk '{ print $3 }'); echo fcs${i}; lsdev | grep $id | grep -v fcs | awk '{ print "==> " $1" "$3 }'; i=$(( $i + 1 )); done


Collect the WWN of each desired HBA and pass it to your zoning team.
At this point, you have to request the zoning between the VTL SAN host adapters and your TSM server SAN adapters.
You will have as many zones as you have VTL host adapters. In our case: dual-node DD4 cluster => 8 front-end ports => 8 zones (assuming that you have 8 FC ports on your backup server as well).

8.3. Zoning information output


You will have to work closely with the SAN team to build a correct zoning, according to your LUN masking strategy.
We advise producing output such as the following to keep your zoning and LUN masking consistent.
Format: TSM Server Hostname / Fabric / Building / WWN --> VTL Node port / Building / Port / WWN
Example below:
DYB_e4zg2tsmsb-41 / 48 / G2 / c0:50:76:04:ea:38:01:c2 --> DYB_E4ZG2VTLN1-41_host_s1p1 / G2 / 8 / 10:00:00:00:c9:e8:78:c0
DYB_e4zg2tsmsb-41 / 48 / G2 / c0:50:76:04:ea:38:01:c4 --> DYB_E4ZG2VTLN1-41_host_s2p1 / G2 / 24 / 10:00:00:00:c9:e8:79:ae
DYB_e4zg2tsmsb-41 / 49 / G2 / c0:50:76:04:ea:38:01:c6 --> DYB_E4ZG2VTLN1-41_host_s1p2 / G2 / 8 / 10:00:00:00:c9:e8:78:c1
DYB_e4zg2tsmsb-41 / 49 / G2 / c0:50:76:04:ea:38:01:b8 --> DYB_E4ZG2VTLN1-41_host_s2p2 / G2 / 24 / 10:00:00:00:c9:e8:79:af
DYB_e4zg2tsmsb-41 / 48 / G2 / c0:50:76:04:ea:38:01:ba --> DYB_E4ZG2VTLN2-41_host_s1p1 / G2 / 40 / 10:00:00:00:c9:e8:7f:aa
DYB_e4zg2tsmsb-41 / 48 / G2 / c0:50:76:04:ea:38:01:bc --> DYB_E4ZG2VTLN2-41_host_s2p1 / G2 / 56 / 10:00:00:00:c9:e8:7f:a4
DYB_e4zg2tsmsb-41 / 49 / G2 / c0:50:76:04:ea:38:01:be --> DYB_E4ZG2VTLN2-41_host_s1p2 / G2 / 40 / 10:00:00:00:c9:e8:7f:ab
DYB_e4zg2tsmsb-41 / 49 / G2 / c0:50:76:04:ea:38:01:c0 --> DYB_E4ZG2VTLN2-41_host_s2p2 / G2 / 56 / 10:00:00:00:c9:e8:7f:a5

DYB_mopb4tsmlc-42 / 48 / B4 / c0:50:76:04:ea:34:01:a0 --> DYB_MOPB4VTLN1-42_host_s1p1 / B4 / 8 / 10:00:00:00:c9:e8:7e:98
DYB_mopb4tsmlc-42 / 48 / B4 / c0:50:76:04:ea:34:01:a2 --> DYB_MOPB4VTLN1-42_host_s2p1 / B4 / 24 / 10:00:00:00:c9:e8:7f:98
DYB_mopb4tsmlc-42 / 49 / B4 / c0:50:76:04:ea:34:01:a4 --> DYB_MOPB4VTLN1-42_host_s1p2 / B4 / 8 / 10:00:00:00:c9:e8:7e:99
DYB_mopb4tsmlc-42 / 49 / B4 / c0:50:76:04:ea:34:01:a6 --> DYB_MOPB4VTLN1-42_host_s2p2 / B4 / 24 / 10:00:00:00:c9:e8:7f:99
DYB_mopb4tsmlc-42 / 48 / B4 / c0:50:76:04:ea:34:01:a8 --> DYB_MOPB4VTLN2-42_host_s1p1 / B4 / 40 / 10:00:00:00:c9:e8:78:b2
DYB_mopb4tsmlc-42 / 48 / B4 / c0:50:76:04:ea:34:01:aa --> DYB_MOPB4VTLN2-42_host_s2p1 / B4 / 56 / 10:00:00:00:c9:e8:78:fc
DYB_mopb4tsmlc-42 / 49 / B4 / c0:50:76:04:ea:34:01:ac --> DYB_MOPB4VTLN2-42_host_s1p2 / B4 / 40 / 10:00:00:00:c9:e8:78:b3
DYB_mopb4tsmlc-42 / 49 / B4 / c0:50:76:04:ea:34:01:ae --> DYB_MOPB4VTLN2-42_host_s2p2 / B4 / 56 / 10:00:00:00:c9:e8:78:fd

8.4. Host Initiator Management


Once the zoning has been done, configure here the connection between VTL host adapter and TSM server
adapters.


Here you can see all the VTL host initiators (VTL cluster node front-end initiators = 4 x 4) and the 8 HBAs zoned for the TSM server (4 HBAs per fabric for each TSM server).

It is strongly recommended to set an alias on each WWN; it will be much easier to work with afterwards.
Right-click, then the Modify button in the Host Initiator popup, and enter the alias name.


We include the node name, slot information, and fabric information (NodeName_SlotsInfo_FabricInfo).

The view is much friendlier with all the aliases defined:


9. Enable the LUN Masking Group (LMG)

Warning: when you enable LUN masking, all devices will be masked if anything is already defined on the VTL.


Click Yes

Click OK.


10. Setup a Virtual Library (VLib)

10.1. Virtual Library at a glance

10.2. Naming convention used for DYB Infra


Here is a suggestion for the naming convention:

Object           Convention                   Examples
Library          <LOCATION><HW><VTLID>-4<N>   MOPB4TS3500-41, E4ZG2VTL42-01, MOPB4VTL41-03
Virtual Volume   <BUILDING><LC_ID><NNN>L3     B41000L3, G2R000L3
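The virtual volume convention above can be generated mechanically. A sketch: `building=B4` and `lc_id=2` reproduce the B42nnnL3 barcodes used later in this document.

```shell
# Generate virtual volume barcodes following <BUILDING><LC_ID><NNN>L3.
building=B4
lc_id=2
vols=""
i=0
while [ $i -lt 3 ]; do
  vols="${vols}$(printf '%s%s%03dL3' "$building" "$lc_id" "$i") "
  i=$(( i + 1 ))
done
echo "$vols"
```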

10.3. Maximums
Here are the maximums supported by the implemented TS7650G version:


10.4. Create a new library


Using the PT Manager GUI:


We will set up each VTL with a total of 48 virtual drives spread among the 2 host initiators of each cluster node, but activate only 32 per virtual library toward the backup systems (using the LUN masking group; see details hereafter). This setup allows adding new virtual drives to a backup system without any downtime; indeed, a downtime of the whole ProtecTier system is necessary to define new virtual drives in a virtual library.
[At this point, you must already think about your LUN Masking Group if you plan to use it. Indeed, you will spread your devices over the LMG members only.]


Control PATH: assign 1 control path per VTL cluster node.

Virtual DRIVES: use the "equally divide" function to spread the drives among the host initiators.


Select the same WWN for the initiators as specified in the LMG.

VTape sizing calculation:

Keep in mind the assumption of 1 virtual library defined per TSM library client (LC).
Repository size: 275456 GB
VTape size (recommended): 100 GB
MaxVLib: 16
MaxLibrary_Client (i.e. max virtual libraries): 4
=> Max VTape = 275456 / 100 = 2754 VTapes
=> 2754 / MaxVLib = 172 VTapes/VLib
=> 2754 / MaxLC = 688 VTapes/VLib
=> Average value for the VTape definition: 430 VTapes/VLib
This will allow us to manage up to 6 VLibs (i.e. LCs) or to increase each VLib by 200 VTapes if we keep 4 VLibs.
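The sizing above can be reproduced as shell integer arithmetic (integer division matches the rounded figures in the text; the values are the ones from this repository):

```shell
# VTape sizing calculation for this repository.
repo_gb=275456; vtape_gb=100; max_vlib=16; max_lc=4
max_vtape=$(( repo_gb / vtape_gb ))        # 2754 virtual tapes in total
per_vlib=$(( max_vtape / max_vlib ))       # 172 per VLib at the 16-VLib maximum
per_lc=$(( max_vtape / max_lc ))           # 688 per VLib with 4 library clients
avg=$(( (per_vlib + per_lc) / 2 ))         # 430, the value retained per VLib
echo "max=$max_vtape per_vlib=$per_vlib per_lc=$per_lc avg=$avg"
```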


Finalize the configuration and proceed with the VLib creation.


After less than 30 minutes, the VLib is created.

11. Configure the Lun Masking Group


Because the use of a LUN masking group is recommended when more than two hosts use the VTL, we have enabled it. This is useful to load-balance the VTL performance over the front-end ports.
LUN masking is also recommended to establish two or more front-end paths to a backup server for redundancy.

LUN Masking groups define the connection between the host initiators and the VT library devices (robots or
tape drives). Devices which belong to a certain group can be seen by the host initiators of the group. A device
can belong to multiple LUN Masking groups, but a host initiator can belong to only one LUN Masking group.
LUN Masking is used to monitor device visibility by allowing specific devices (such as tape drives or robots),
to be seen only by a select group of host initiators.
This feature allows users to assign specific drives to a specific host running backup application modules. It
enables multiple initiators to share the same target FC port without having conflicts on the devices being
emulated.
The LUN Masking setup can be monitored and modified at all times during system operation. LUN Masking in
ProtecTIER influences the visibility of the devices by the hosts systems. Keep in mind that every modification
in the LUN Masking in ProtecTIER may affect the host configuration and may require re-scanning by the
hosts.
By default, LUN masking is disabled. Without LUN masking, each backup host is either limited to one front-end port or over-exposed to all other hosts' virtual devices. When LUN masking is enabled, no LUNs are assigned to any host: the user must create LUN groups and associate them with backup host(s).
- When defining backup host aliases, use a practical naming scheme as opposed to plain WWNs (example: hostname-FE0).
- With more than two backup hosts, it is recommended to use LUN masking to load-balance VTL performance across multiple front-end ports.
- LUN masking is recommended to establish two or more front-end paths to a backup server for redundancy.
The LUN masking has been done according to a defined strategy, illustrated in chapter 11.1.
Use the graphical menu to do so:
Use the graphical menu to do so:


The goal here is to select the right host initiators to have redundancy over the two fabrics, and performance by using at least 2 VTL host initiators.
WARNING: you must get from the zoning team the information of which host adapter is zoned with which backup server FC port. This gives you the WWNs you have to associate within an LMG.


PS: the last backup adapter is not displayed in this panel because of the scroll bar.
The result is as follows:

Then assign the resource to be accessed by this Lun Masking Group:


In our configuration, one virtual library (and all its virtual drives) is assigned to one LUN masking group.
On your backup server, use the /tsm/bin/chtape_VTL.ksh script to rename the devices.

12. TSM
12.1. TSM Attachment description
The purpose here is to obtain performance and availability together.
To do so, we have chosen the following attachment (coming from the Best Practices guide).


We added the dual-fabric idea.

It means that each TSM server will be connected using at least 2 ports (one per fabric).
Each port will be connected to a cluster node, which gives access to half of the virtual drives defined in the virtual library.
So the orange path uses Fabric One and the red path uses Fabric Two.

12.2. Library definition


tsm: MOPB4TSMLC-42>define libr MOPB4VTL42-02 libtype=VTL shared=yes resetdrive=yes autolabel=no relabelscratch=yes serial=autod
ANR8400I Library MOPB4VTL42-02 defined.

tsm: MOPB4TSMLC-42>define path MOPB4TSMLC-42 MOPB4VTL42-02 srct=server destt=libr device=/dev/DYB4_Vsmc0
ANR1720I A path from MOPB4TSMLC-42 to MOPB4VTL42-02 has been defined.

Check the TSM accessibility to the virtual library:

[Warning: show commands are not supported by IBM; use them at your own risk.]


define drive MOPB4VTL42-02 DRIVE000 serial=autod onl=yes


define drive MOPB4VTL42-02 DRIVE001 serial=autod onl=yes


define drive MOPB4VTL42-02 DRIVE002 serial=autod onl=yes


define drive MOPB4VTL42-02 DRIVE003 serial=autod onl=yes
define drive MOPB4VTL42-02 DRIVE004 serial=autod onl=yes
define drive MOPB4VTL42-02 DRIVE005 serial=autod onl=yes
define drive MOPB4VTL42-02 DRIVE006 serial=autod onl=yes
define drive MOPB4VTL42-02 DRIVE007 serial=autod onl=yes
define drive MOPB4VTL42-02 DRIVE008 serial=autod onl=yes
define drive MOPB4VTL42-02 DRIVE009 serial=autod onl=yes
define drive MOPB4VTL42-02 DRIVE010 serial=autod onl=yes
define drive MOPB4VTL42-02 DRIVE011 serial=autod onl=yes
define drive MOPB4VTL42-02 DRIVE012 serial=autod onl=yes
define drive MOPB4VTL42-02 DRIVE013 serial=autod onl=yes
define drive MOPB4VTL42-02 DRIVE014 serial=autod onl=yes
define drive MOPB4VTL42-02 DRIVE015 serial=autod onl=yes
define drive MOPB4VTL42-02 DRIVE016 serial=autod onl=yes
define drive MOPB4VTL42-02 DRIVE017 serial=autod onl=yes
define drive MOPB4VTL42-02 DRIVE018 serial=autod onl=yes
define drive MOPB4VTL42-02 DRIVE019 serial=autod onl=yes
define drive MOPB4VTL42-02 DRIVE020 serial=autod onl=yes
define drive MOPB4VTL42-02 DRIVE021 serial=autod onl=yes
define drive MOPB4VTL42-02 DRIVE022 serial=autod onl=yes
define drive MOPB4VTL42-02 DRIVE023 serial=autod onl=yes
define drive MOPB4VTL42-02 DRIVE024 serial=autod onl=yes
define drive MOPB4VTL42-02 DRIVE025 serial=autod onl=yes
define drive MOPB4VTL42-02 DRIVE026 serial=autod onl=yes
define drive MOPB4VTL42-02 DRIVE027 serial=autod onl=yes
define drive MOPB4VTL42-02 DRIVE028 serial=autod onl=yes
define drive MOPB4VTL42-02 DRIVE029 serial=autod onl=yes
define drive MOPB4VTL42-02 DRIVE030 serial=autod onl=yes
define drive MOPB4VTL42-02 DRIVE031 serial=autod onl=yes
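Rather than typing the 32 statements above, they can be generated into a macro file and run later with dsmadmc (a sketch; the macro file name is arbitrary):

```shell
# Generate the 32 `define drive` statements into a macro file.
vlib=MOPB4VTL42-02
macro_file=$(mktemp)
i=0
while [ $i -lt 32 ]; do
  printf 'define drive %s DRIVE%03d serial=autod onl=yes\n' "$vlib" "$i" >> "$macro_file"
  i=$(( i + 1 ))
done
drive_count=$(wc -l < "$macro_file")
echo "$drive_count define statements written"
rm -f "$macro_file"
```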

To define the paths, use the chtape_VTL.ksh script; it will also generate the path information.
Example: the lstapexx file is the output of the script.

12.3. Label the virtual tapes:


tsm: MOPB4TSMLC-42>label libvol MOPB4VTL41-01 search=yes labelsource=barcode overwrite=yes
checkin=scratch

tsm: MOPB4TSMLC-42>q pr
679 LABEL LIBVOLUME ANR8805I Labeling volumes in library MOPB4VTL42-02; 2 volume(s) labeled.

tsm: MOPB4TSMLC-42>q libv
MOPB4VTL42-02  B42000L3  Scratch  1,026  LTO
MOPB4VTL42-02  B42001L3  Scratch  1,027  LTO
MOPB4VTL42-02  B42002L3  Scratch  1,028  LTO
MOPB4VTL42-02  B42003L3  Scratch  1,029  LTO
MOPB4VTL42-02  B42004L3  Scratch  1,030  LTO
MOPB4VTL42-02  B42005L3  Scratch  1,031  LTO
MOPB4VTL42-02  B42006L3  Scratch  1,032  LTO
MOPB4VTL42-02  B42007L3  Scratch  1,033  LTO
MOPB4VTL42-02  B42008L3  Scratch  1,034  LTO
MOPB4VTL42-02  B42009L3  Scratch  1,035  LTO
MOPB4VTL42-02  B42010L3  Scratch  1,036  LTO

12.4. Define the device class


Specify MOUNTRETention=0; there is no need to keep a virtual drive mounted, since mounting a virtual tape is not time consuming.
Set the TSM device class to represent an Ultrium LTO3 tape without compression, using the parameter FORMAT=ULTRIUM3.
Configure the estimated capacity in the TSM device class to match the virtual tape size defined in the VTL, using the ESTCAPacity parameter.
tsm: MOPB4TSMLC-42>define devclass B4DCVTL2 libr=MOPB4VTL42-02 devtyp=LTO format=drive
mountret=0 mountwait=60
ANR2203I Device class B4DCVTL2 defined.


13. PT User Management

Click on System > User Management.

We suggest creating another account with Admin permission, just for safety.

14. PT Notification configuration


14.1. Mail notifications
Perform the mail configuration using the menu, from one of the cluster nodes.
Follow this procedure:
Menu > 4) Problem Alerting (...) > 5) Enable/Disable Notification by email >


Then test the function by sending a test mail, using the menu:

Menu > 4) Problem Alerting (...) > 6) Test Email Notification


15. Report and diagnostics

15.1. Generate a report


To generate a report, use the menu tool.


The file is stored under the /pt_work folder.

The file is named based on the information you have provided.
/!\ : Because the FULL report stores the whole /pt_work content, do not forget to remove previous or unused
report files before generating a new one.

15.2. PT configuration files backup (out of the box)


This section describes how to save/dump your PT Cluster configuration out of the box. This is not an IBM
recommendation, and you will have no support on the following topic.
Note that this procedure uses ssh key exchange; we assume this is allowed by your policies.
We will exploit the configuration retained in the report files (generated using the Menu > 6) > 6) Full).
Set up the ssh public key to allow automated copy of the files off the box.
/!\ : Do not generate a new rsa key! One already exists and is used for internal cluster
purposes.
- Copy the public key to your destination box:


Copy the content of file /root/.ssh/id_rsa.pub to your destination box, append it as a line into the
USER_HOME_DIR/.ssh/authorized_keys file
- Test the connection (without password) using the command
PT_shell> ssh USER@targetbox ..

Exit and go back to your VTL.


If the connection works, schedule a copy of your configuration files to the target box.

You can schedule this with the crontab; the entry below, for instance, runs every day at noon.
[root@mopb4vtln141 pt_work]# crontab -l
00 12 * * * scp /pt_work/ProtecTier_*_full_*_Report.tar.gz ibm02woy@10.6.10.102:/tsm/drm/bkp_TS7650G/
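
If you prefer a weekly copy instead (for example every Sunday at noon), a crontab entry along these lines
could be used, with the same source path and destination as the example above (the weekday field is the
fifth one; 0 means Sunday):

```
00 12 * * 0 scp /pt_work/ProtecTier_*_full_*_Report.tar.gz ibm02woy@10.6.10.102:/tsm/drm/bkp_TS7650G/
```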

16. Best practices for TSM


The following IBM Tivoli Storage Manager server and client options should be checked and, if necessary, changed to
enable optimum performance of the ProtecTIER Software:


Use 256K I/O for the virtual tape drives; this provides the best factoring ratio.

Client compression should be disabled.

Ensure server option MOVEBATCHSIZE is set at 1000 (the default value).

Ensure server option MOVESIZETHRESHOLD is set at 2048 (the default value).


When using Windows based TSM servers, the Tivoli tape and library device drivers for Windows must be used.
Native Windows drivers for the emulated P3000 and DLT7000 drives will not function.
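
The server and client options above can be sketched as option-file entries (a sketch only, assuming the
standard dsmserv.opt and dsm.sys locations; the values shown are the defaults named in this section):

```
* dsmserv.opt (server) -- defaults recommended above
MOVEBATCHSIZE   1000
MOVESIZETHRESH  2048

* dsm.sys (client stanza) -- disable client-side compression
COMPRESSION     NO
```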

Given that ProtecTIER acts as a virtual tape library as well as a data deduplication device, the advantages
associated with disk backup over tape backup apply here too. The following points should also be considered
when using ProtecTIER with IBM Tivoli Storage Manager:
ITSM disk pools: For some large environments with several IBM Tivoli Storage Manager servers in place,
you do not need to assign dedicated ITSM disk storage pool(s) to each server. With ProtecTIER, you can
either share a virtual library or you can create virtual libraries for every server.

LAN-free backups are easier: As ProtecTIER is a virtual tape library, it has the major advantage of
presenting greatly increased tape resources to the backup server. This then positions you to be able to
perform LAN-free backups to ProtecTIER without much regard to the limitations normally applied to these
backups, such as tape drive availability. If you have many LAN-free clients already, then it is possible your
LAN-free backup windows were dictated not entirely by business needs but also by hardware availability.
With ProtecTIER and its maximum of 256 virtual tape drives per ProtecTIER node, you can almost
completely eliminate any hardware restrictions you may have faced previously, and schedule your backups
as and when they are required by your business needs.

Data streams: You may be able to reduce your current backup window by taking full advantage of
ProtecTIER's throughput performance capabilities. If tape drive availability has been a limiting factor on
concurrent backup operations on your ITSM server, you can define a greater number of virtual drives and
reschedule backups to run at the same time to maximize the number of parallel tape operations possible on
ProtecTIER servers.
Note: If you choose to implement this strategy, you may need to increase the value of the MAXSESSIONS option on
your ITSM server.

Reclamation: You should continue to reclaim virtual storage pools that are resident on ProtecTIER. The
thresholds for reclamation may need some adjustment for a period, until the system reaches steady state
(see "Steady state" on page 29 for an explanation of this term). When this point is reached, the fluctuating
size of the virtual cartridges should stabilize and you can decide what the fixed reclaim limit
ought to be.

Number of cartridges: This is a decision with several factors to be considered:


In ProtecTIER, the capacity of your repository is spread across all your defined virtual cartridges. If you
define only a small number of virtual cartridges in ProtecTIER Manager, you may end up with cartridges
that each hold a large amount of nominal data. While this may reduce
cartridge complexity, it could also affect restore operations, in that a cartridge required for a restore may be
in use by a backup or housekeeping task. Preemption can resolve this issue, but it may instead be better to
define extra cartridges so that your data is spread over more cartridges and drives, to make the best use of
your virtual tape environment.
Reuse delay period for storage pool cartridges: When deciding how many virtual cartridges to define,
remember to consider the current storage pool REUSEDELAY value. This is usually equal to the number of
days your ITSM database backups are retained before expiring them. The same delay
period should apply to the storage pools that store data on ProtecTIER virtual cartridges, and you may
need to increase the number defined to ensure that you always have scratch cartridges available for
backup.
Collocation: When using a virtual library, you should consider implementing collocation for your primary
storage pools. If you begin a restore while another task (for example, a backup or cartridge reclamation) is
using the virtual cartridge, you may not be able to access the data on it immediately. Using collocation will
mean all your data is contained on the same set of virtual cartridges. Because you do not have any of the
restrictions of physical cartridges normally associated with this feature (such as media and slot
consumption), you can enable the option quite safely.
Consider these points when determining how many virtual cartridges are to be created. Remember that
you can always create additional virtual cartridges at any time.

Physical tape: Depending on your data protection requirements, it may still be necessary to copy the deduplicated data to physical tape. This can be achieved by using standard ITSM copy storage pools that
have device classes directing data to physical libraries and drives.

17. Tips

17.1. Community & Wiki


https://w3-connections.ibm.com/communities/service/html/communityview?communityUuid=c37424c5-7cf6449f-8a36-b418c85c466f
https://w3-connections.ibm.com/wikis/home?lang=en_US#/wiki/W84843342fbbb_4d9f_a4dc_1e8f183c2318

17.2. Cluster System startup procedure information


When you start the VTFD service, the ProtecTIER system tries to start CMGNT, which is the ProtecTIER
cluster management module (besides the RHEL cluster service CMAN). Both cluster services are used to
protect the ProtecTIER cluster.
The order of services that need to start before the VTL is up and running is as follows. First, CMAN comes
up, checking its configuration file /etc/cluster/cluster.conf, which holds all information about the cluster
devices together with information on the WTI fencing device and on how to fence other devices (via the
RHEL fenced daemon). Please verify that this file exists in your environment.
Once CMAN is up, CLVMD and afterwards GFS are started. Then VTFD is started, which includes the
CMGNT service.
CMGNT has many different jobs, e.g. maintaining the heartbeat between the two cluster nodes
(timeout == fencing!), so check the internal cluster connectivity and cabling. One job in particular which
could be the problem in your setup is the disk monitoring task of CMGNT. This is a critical task checking
the ProtecTIER management LUN (remember the 1GB LUN cut out of one of the MetaData RAID
arrays per repository?). If the responsible file cluster_db.xml is missing during the vtfd initialization, the
node will be fenced. So please also check for this file via "find / -name cluster_db.xml".
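
The startup order described above can be written down as a small sketch (hedged: the init script names
cman, clvmd, gfs and vtfd are assumed; on a node you would check each one with `service <name> status`):

```shell
# Startup order of the cluster services, first to last.
# On a ProtecTIER node each could be checked with: service $svc status
for svc in cman clvmd gfs vtfd; do
  echo "start $svc"
done
```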


In addition, please verify your cabling; if you mixed up the cables, e.g. for the WTI switch, this could also
end up in a fencing condition.
And do not forget that if the VTFD service is up and running but the GFS mount points are not available, the
system will be fenced, so also check your backend disk connectivity.

17.3. Display the serial Id of a machine


Use the command /usr/sbin/dmidecode -t 1

17.4. Log
At any time you can check the /var/log/messages log to see what's going on on the system.

17.5. Reboot
CAUTION:
If for any reason you have to reboot a node, use the command shutdown -r now, and not the reboot command!

17.6. How to clean an old repository configuration [to be done only per IBM
Support request]
If, for any reason, you made an installation on a disk backend that is not fresh, you might have the following
message when creating the file system layout using the fsCreate tool:


If so, please contact your local support team; they might ask you to look at the CD/DVD set which
came with the machines for the "mfg_cleanup" script. It is the LUN scratch tool, which works without
recreating the arrays on the disk subsystems.

17.7. Display WWN of PT cluster nodes (on linux):


[root@MOPB4VTLN1_42]# cat /sys/class/fc_host/*/node_name
0x20000000c9e87f99
0x20000000c9e87e98
0x20000000c9e87e99
0x20000024ff3438ac
0x20000024ff3438ad
0x20000024ff3438cc
0x20000024ff3438cd
0x20000000c9e87f98
Note that all *c9* WWNs here belong to the host-facing network (involved in SAN zoning to the backup
servers), while all *24ff* WWNs are used to access the backend disks.
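
As a sketch (this helper is not part of ProtecTIER), the WWNs printed by
"cat /sys/class/fc_host/*/node_name" can be classified by the vendor OUI embedded in the name, matching
the note above: 0000c9 names are front-end (host-facing) ports, 0024ff names are back-end (disk-facing)
ports.

```shell
# Classify an FC node WWN by the vendor OUI it contains.
classify_wwn() {
  case "$1" in
    *0000c9*) echo "$1 front-end" ;;  # host-facing port
    *0024ff*) echo "$1 back-end" ;;   # backend-disk port
    *)        echo "$1 unknown" ;;
  esac
}
classify_wwn 0x20000000c9e87f99   # front-end
classify_wwn 0x20000024ff3438ac   # back-end
```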

17.8. Use the PT Manager GUI to plan your repository


You can use the PT Manager GUI to evaluate the relevance of your sizing or of your in-place configuration.
Click on Repository > Plan repository Creation


17.9. Information about Time setting


First, this is a DISRUPTIVE operation; the setup brings the cluster and all PT/VT services offline.
When setting the time on a PT cluster, you can get this message:
Jul 3 04:44:32 E4ZG2VTLN141 ntpd[7012]: time correction of 22539 seconds exceeds sanity limit (1000); set
clock manually to the correct UTC
Mind the time lag before setting your time; a large offset can prevent the CMAN service from restarting
after the time reconfiguration, or cause the time configuration not to be applied.
If so, you will have to specify both the time and the NTP servers:
Menu > 4) > option 1) + option 3), then option c) to commit changes.

Entries in /var/log/messages when your time has been updated:


Jul 3 06:58:01 E4ZG2VTLN142 dhclient: bound to 169.254.95.120 -- renewal in 253 seconds.
Jul 3 12:58:06 E4ZG2VTLN142 xinetd[6988]: START: time-stream pid=0 from=10.0.0.51
Jul 3 12:58:14 E4ZG2VTLN142 ntpdate[3752]: adjust time server 129.39.141.178 offset 75.291195 sec
Jul 3 12:58:14 E4ZG2VTLN142 ntpd[3763]: ntpd 4.2.2p1@1.1570-o Thu Nov 26 11:34:34 UTC 2009 (1)
Jul 3 12:58:14 E4ZG2VTLN142 ntpd[3764]: precision = 1.000 usec

17.10. WWN list of multipath devices:

[root@MOPB4VTLN141 multipath]# pwd
/var/lib/multipath

[root@MOPB4VTLN141 multipath]# cat bindings


# Multipath bindings, Version : 1.0
# NOTE: this file is automatically maintained by the multipath program.
# You should not need to edit this file in normal circumstances.
#
# Format:
# alias wwid
#
mpath0 36005076307ffc0b30000000000000112
mpath1 36005076307ffc0b30000000000000113
mpath2 36005076307ffc0b30000000000000114
mpath3 36005076307ffc0b30000000000000115
mpath4 36005076307ffc0b30000000000000100
mpath5 36005076307ffc0b30000000000000101
mpath6 36005076307ffc0b30000000000000102
mpath7 36005076307ffc0b30000000000000103
mpath8 36005076307ffc0b30000000000000104
mpath9 36005076307ffc0b30000000000000116
mpath10 36005076307ffc0b30000000000000105
mpath11 36005076307ffc0b30000000000000106
mpath12 36005076307ffc0b30000000000000107
mpath13 36005076307ffc0b30000000000000108
mpath14 36005076307ffc0b30000000000000109
mpath15 36005076307ffc0b30000000000000110
mpath16 36005076307ffc0b30000000000000117
mpath17 36005076307ffc0b30000000000000111
mpath18 36005076307ffc0b30000000000000118
mpath19 36005076307ffc0b30000000000000119
mpath20 36005076307ffc0b30000000000000120
mpath21 36005076307ffc0b30000000000000121
mpath22 36005076307ffc0b30000000000000122
mpath23 36005076307ffc0b30000000000000123
mpath24 36005076307ffc0b30000000000000124
mpath25 36005076307ffc0b30000000000000125
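
A quick sketch for resolving a multipath alias to its WWID from such a bindings file (sample lines from the
listing above are inlined here; on a real node you would read /var/lib/multipath/bindings directly, e.g.
`awk -v a=mpath4 '$1 == a {print $2}' /var/lib/multipath/bindings`):

```shell
# Look up the WWID bound to a given multipath alias.
bindings='# alias wwid
mpath0 36005076307ffc0b30000000000000112
mpath4 36005076307ffc0b30000000000000100'
printf '%s\n' "$bindings" | awk -v a=mpath4 '$1 == a {print $2}'
# prints 36005076307ffc0b30000000000000100
```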
