IBM Systems Enterprise Architecture – GPFS Solution
This document comes out of one of the business partner skill-enhancement programs in Korea, the Business Partner Residency. Seventeen companies joined the residency this time, and first I want to thank everyone who took part. The documentation was written by the partner engineers; I only translated it from Korean into English. Before the residency started, many teams helped with the preparation: the education department, the technical sales managers, and the system administration team. It was not easy to prepare the demo systems for this program, but the support team set all of them up, including a System p6 server, a storage box, and a BladeCenter system. I can assure you that every attendee gained a great deal of configuration experience and technical knowledge. This is a very helpful program for our business partners.
The Business Partner Residency Program is one of the education programs run in Korea. The venue is usually set up outside Seoul, in places such as YangPyung or ChungPyung, and the participants stay at a resort for five days. Instruction begins with the chosen topic; after the base education, the teams start testing on the systems and writing up their results. It is an irregular education program, because the topic is chosen each year through team discussion and changing requirements from the business partners, and one of the rules is that each topic is run only once.
The objective of this residency, and this year's topic, is advanced GPFS solution design across platforms. Recently, customers rarely ask for a single-platform GPFS configuration; they want mixed configurations that combine Linux, pLinux, AIX, and Windows. The cross-mount function mounts a remote cluster's file system for collaboration. The participants wanted to know which limitations must be considered when configuring a mixed cluster, and how to configure the storage box for optimal performance.
Index
1. Preparing Hardware
2. Installing Red Hat Enterprise Linux Server v5.4 x64
3. Installing Windows 2008 R2 x64 Enterprise
4. Preparing the VIOS Client
5. Configuring the NIM Server
6. Making the VIO Client Logical Volume
7. Installing AIX on a Partition
8. Configuring the Storage System (DS3400)
9. Configuring the Storage System and Initializing Volumes for Each OS (DS4300)
10. SAN Switch Configuration Guide
11. Pre-Installation for GPFS: SSH Keygen
12. AIX and Linux GPFS Server Installation
13. Making the Cluster and Configuring the GPFS Solution
14. pLinux GPFS Client Installation
15. Windows 2008 SP2 GPFS Client Installation
16. Rolling Upgrade from v3.2 to v3.3
17. Add / Remove NSD: GPFS Maintenance
18. Cross-Cluster GPFS Mount
19. Failure Groups and GPFS Replication
20. End of This BP Residency
1. Preparing Hardware
This is the node configuration assigned to each residency team; every team used the same hardware configuration.

Before configuring and installing the OS, you must check the list below. Every item on it matters for configuring GPFS, because it gives you better stability and higher performance. I recommend using the latest versions of the system firmware and drivers.
You must include the development packages needed to build the GPFS portability layer on at least the first node. Usually, the first step of the installation is to build the GPFS portability layer on the first installed system; after that you can run "make rpm", which packages the portability layer as an RPM.
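As a sketch, the portability-layer build on GPFS 3.2/3.3 usually looks like the following, run on the first node after the kernel headers and compiler toolchain are installed; verify the exact steps against /usr/lpp/mmfs/src/README for your level:

```shell
# Build the GPFS open-source portability layer (GPFS 3.2/3.3-era steps).
cd /usr/lpp/mmfs/src
export SHARKCLONEROOT=/usr/lpp/mmfs/src   # build root used by the makefiles
make Autoconfig                           # detect the kernel version and write the config
make World                                # compile mmfs26, mmfslinux, tracedev
make InstallImages                        # install the kernel modules
make rpm                                  # optionally package the result as an RPM
```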
Start Installation
Complete Installation
At that time, the Team 2 and Team 3 members were trying to install GPFS v3.3 on a Windows 2008 R2 system. They could not configure it on that operating system, and finally reinstalled Windows 2008 SP2 on the machine. Refer to the GPFS v3.3 documentation: the current version of GPFS v3.3 supports Windows 2008 SP2 only. There are many differences between the 2008 and 2008 R2 cores; the Windows 2008 R2 core is based on Windows 7.

According to the worldwide GPFS development team, GPFS v3.4 will support Windows Server 2008 R2, with an announcement planned for 2H 2011, and that version will also support Windows-based GPFS server systems. The current version (v3.3) supports the GPFS client side only; in other words, you must configure a mixed cluster of Linux and Windows.
Usually, before installing AIX on the system, you must configure the partition on the p570.
To make a virtual SCSI adapter, open the drop-down menu and choose Action, then Create SCSI Adapter. Using the default SCSI ID is fine. The important thing here is assigning the adapter to the correct virtual system or partition, and then choosing the adapter ID for the target partition. You can create a vscsi device on both the server and the client partition, and then decide the mapping IDs.
The virtual SCSI adapters configured previously are assigned on both the server and the client.
After the logical partition is built, set up the NIM server and client.

Connect to the NIM server and edit /etc/hosts. This IP and host name will be used as the VIO client-side information.
Go back to the NIM main menu and choose Perform NIM Software Installation and Maintenance Tasks.
A system image has already been prepared with a mksysb backup, so the client target system will be installed from the mksysb image.
The ACCEPT new license agreements field must be set to yes, and the Initiate reboot and installation now field must be set to no. If that field is set to yes, the client is rebooted and the installation starts immediately.
Before starting the installation, you need to configure a logical volume and assign it as the target volume for the OS installation. When connecting to the VIO server, the recommendation is not to use the root account but the padmin account. Its rights are limited, so to use commands such as oem_setup_env or license -accept you must change your authority; after that there is no limit on using admin commands.

Add a logical volume for the VIO client: run smitty lv and choose Add a Logical Volume.
Choose the volume group to add the LV to; here, choose rootvg.
You can check the assignment status of the logical volume with "lsvg -l rootvg".
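After the LV exists, it still has to be exported to the client through the virtual SCSI adapter. A minimal sketch from the padmin shell; the LV, vhost, and device names here are examples, not necessarily the ones used in this lab:

```shell
# On the VIO Server as padmin: map the new logical volume to the
# client partition's virtual SCSI adapter.
lsdev -virtual                                  # list the vhost adapters
mkvdev -vdev client1_lv -vadapter vhost0 -dev client1_vtd
lsmap -vadapter vhost0                          # verify the mapping
```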
Choose the IP range.
Choose BOOTP.
This menu sets up the IP address on the NIC adapter, which is used to load the mksysb image from the NIM server.
Choose 1.
Choose 1.
Do not change any other options; copy from the image as it is.
The first step is to download the latest version of Storage Manager and install it.
Volume Mapping
Almost all storage systems follow a similar procedure for attaching the servers and the storage:
1. Hardware configuration and complete cabling
2. SAN switch configuration, such as the domain ID and certain timeout values
3. Volume configuration as recommended for the GPFS file system
4. Host type and host group configuration
5. Volume mapping
6. HBA driver update and installation on each server system
7. Checking the volumes on each system
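For step 7, a quick way to verify the mapped LUNs; the commands vary by OS, and these are typical for the AIX and RHEL 5 systems used here:

```shell
# AIX: rescan for new devices and list physical volumes;
# new LUNs appear as hdiskN.
cfgmgr
lspv
# RHEL 5: list the SCSI devices seen by the kernel.
cat /proc/scsi/scsi
```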
Check the WWN on the Linux server by installing the QLogic HBA CLI command.
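If the QLogic CLI is not installed yet, the WWPNs can usually be read straight from sysfs on RHEL 5; the host numbers vary per system:

```shell
# Each fc_host entry corresponds to one HBA port; port_name is the WWPN.
cat /sys/class/fc_host/host*/port_name
```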
This is the key point of configuring a multi-SAN-switch infrastructure. The recommended SAN switch configuration uses the same vendor and the same Fabric OS: choose one vendor, such as Brocade. If you want to attach heterogeneous SAN switches, you must refer to the interoperability guide. Even with switches from the same vendor, you must check the domain IDs and timeout values.
Check list
1. SAN switch domain ID
2. SAN switch timeout values:
A. R_A_TOV = 10 seconds (the setting is 10000)
B. E_D_TOV = 2 seconds (the setting is 2000)
3. ISL license for the SAN switch
If both SAN switches have the same domain ID, disable the external SAN switch first. Then apply the ISL license on the switch and change the domain ID, and after that re-enable the external SAN switch.
When configuring zoning, delete all of the zone configuration, then configure the ISL on the IBM BladeCenter SAN switch module. Starting from the factory default settings makes this easy.
Connect to 192.168.70.129 via HTTP. Be careful: before the ISL configuration, remove the external cable. If you do not remove it, the domain IDs will conflict on the two SAN switches.
Set to Disable.
Configure Zone.
Run the following on all of the GPFS server and client nodes.
t1:/#>ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (//.ssh/id_rsa):
Created directory '//.ssh'.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in //.ssh/id_rsa.
Your public key has been saved in //.ssh/id_rsa.pub.
The key fingerprint is:
bd:40:09:86:b7:89:a9:ae:40:a7:ed:51:3d:ae:18:7c root@T1
t1:/#>cd /.ssh
t1:/.ssh#>cp -rp id_rsa.pub authorized_keys
t1:/.ssh#>ls
authorized_keys id_rsa id_rsa.pub
t1:/.ssh#>cat authorized_keys
ssh-rsa
AAAAB3NzaC1yc2EAAAABIwAAAQEA5nZUpuqDXCgQ5OEp1GzD5PTH0qjZufrLbUWPPMsfYVPBJ
sRxAyTQIDluaYQXVz+pCer4p87/HZNenqI9kgf9tJHC9RPhPLZxjyUauVgADvCmkzHm1TbKltwwnjaw
hZ1Oj8gY2FEhZPhSf7YEp5ysrNLQvR12li8VosDSSRuqNp3nBS5G5PYmMB0h0OGO48ZxB3Gf6R3
QUZqaoX4SZl9SinG8lF5sze9x8t/l0GKBQ3RtcHBjx7iHdSrOaETEaFhco/1QLcjBPtSKK7jT4FDi7dD0X
EHN4k0B5IdJYtx2Nl6Y6g1a5SpnTTm5n0QKe2buznMgD0TmML1PaaXnNDIUbw== root@t1
t2:/#>ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (//.ssh/id_rsa):
Created directory '//.ssh'.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in //.ssh/id_rsa.
Your public key has been saved in //.ssh/id_rsa.pub.
The key fingerprint is:
19:33:82:5c:15:e5:60:fb:f2:8b:ce:50:5c:2d:03:6d root@T2
t2:/#>ls
id_rsa id_rsa.pub
t2:/.ssh#>cp -rp id_rsa.pub authorized_keys
t2:/.ssh#>cat authorized_keys
ssh-rsa
AAAAB3NzaC1yc2EAAAABIwAAAIEAusPjMndj2JRzHaseb7/9/d8AdOsvtDBr8pZIQ/Aac48F/2iepmuo
gJjdxohbCYSSRjfTz35No+hNuLpYZpgvS/2+uco9dXnHZv7HJV+4rdwTREqJplLKZvPMrBNEkKLkHiP
1NJ3hq5bHeMEDyCKt/LYGcwl/VN3+nGXcJ2b5lsE= root@T1
t2:/.ssh#>
t1:/.ssh#>scp id_rsa.pub t2_gpfs:/home
The authenticity of host 't2_gpfs (10.10.10.2)' can't be established.
RSA key fingerprint is 0b:01:ad:da:58:5d:eb:40:71:f9:40:c3:d1:a0:8e:14.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 't2_gpfs,10.10.10.2' (RSA) to the list of known hosts.
root@t2_gpfs's password:
id_rsa.pub 100% 389 0.4KB/s 00:00
t2:/.ssh#>scp id_rsa.pub t1_gpfs:/home
The authenticity of host 't1_gpfs (10.10.10.1)' can't be established.
RSA key fingerprint is 40:ff:29:0b:fb:b6:68:79:ee:5c:63:b5:ab:b9:f7:f2.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 't1_gpfs,10.10.10.1' (RSA) to the list of known hosts.
root@t1_gpfs's password:
id_rsa.pub 100% 389 0.4KB/s 00:00
Finally, collect the id_rsa.pub files from every node and copy them into authorized_keys; this file contains the RSA public keys of all nodes. The Windows GPFS client side will need the same operation.
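The whole transcript above reduces to one rule: concatenate every node's id_rsa.pub into a single authorized_keys file and place the same file on every node. An illustration with placeholder keys and a scratch directory; the real files live in /.ssh on each node:

```shell
# Simulate two nodes' public keys collected into one directory,
# then merge them into the shared authorized_keys file.
mkdir -p /tmp/gpfs_keys
echo "ssh-rsa AAAA...t1key root@t1" > /tmp/gpfs_keys/t1_id_rsa.pub
echo "ssh-rsa AAAA...t2key root@t2" > /tmp/gpfs_keys/t2_id_rsa.pub
cat /tmp/gpfs_keys/*_id_rsa.pub > /tmp/gpfs_keys/authorized_keys
wc -l < /tmp/gpfs_keys/authorized_keys    # one line per node
```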
Press “F4”
You must update to the latest GPFS level; if the update is not complete, the GPFS daemon will not start.
The update follows the same procedure.
Make and install the portability layer on the Linux system. This step applies to Linux only: it builds the GPFS module layer for the Linux kernel.
gpfs_node1:quorum-manager
gpfs_node2:quorum-manager
gpfs_node3:
Make a Cluster
team4_1:/tmp/gpfs#>mmcrcluster -n /tmp/gpfs/gpfs.allnodes -p gpfs_node1 -s gpfs_node2 -C AIX_gpfs -r /usr/bin/ssh -R /usr/bin/scp
Wed Oct 28 20:56:28 KORST 2009: 6027-1664 mmcrcluster: Processing node gpfs_node1
Wed Oct 28 20:56:29 KORST 2009: 6027-1664 mmcrcluster: Processing node gpfs_node2
Wed Oct 28 20:56:30 KORST 2009: 6027-1664 mmcrcluster: Processing node gpfs_node3
------------- -------------------------------------------------------
---------------------------------------------------
clusterName AIX_gpfs.gpfs_node1
clusterId 13979456008081650028
clusterType lc
autoload no
minReleaseLevel 3.2.1.5
dmapiFileHandleSize 32
--------------------------------------------
(none)
hdisk2:gpfs_node1:gpfs_node2:dataAndMetadata:1:TB1
hdisk3:gpfs_node1:gpfs_node2:dataAndMetadata:1:TB2
hdisk4:gpfs_node1:gpfs_node2:dataAndMetadata:1:TB3
team4_1:/tmp/gpfs#>more disk.desc
hdisk2:gpfs_node1:gpfs_node2:dataAndMetadata:1:TB1
hdisk3:gpfs_node2:gpfs_node1:dataAndMetadata:1:TB2
hdisk4:gpfs_node1:gpfs_node2:dataAndMetadata:1:TB3
# hdisk2:gpfs_node1:gpfs_node2:dataAndMetadata:1:TB1
TB1:::dataAndMetadata:1::
# hdisk3:gpfs_node1:gpfs_node2:dataAndMetadata:1:TB2
TB2:::dataAndMetadata:1::
# hdisk4:gpfs_node1:gpfs_node2:dataAndMetadata:1:TB3
TB3:::dataAndMetadata:1::
team4_1:/#>mmlsconfig
-----------------------------------------------------
clusterName AIX_gpfs.gpfs_node1
clusterId 13979456008081616877
clusterType lc
autoload no
minReleaseLevel 3.2.1.5
dmapiFileHandleSize 32
tiebreakerDisks TB1;TB2;TB3
Filesystem NSD
team4_1:/tmp/gpfs#>more disk2.desc
hdisk5:gpfs_node1:gpfs_node2:dataAndMetadata:1:nsd_01
hdisk6:gpfs_node1:gpfs_node2:dataAndMetadata:1:nsd_02
team4_1:/tmp/gpfs#>mmcrnsd -F /tmp/gpfs/disk2.desc
team4_1:/tmp/gpfs#>more disk2.desc
# hdisk5:gpfs_node1:gpfs_node2:dataAndMetadata:1:nsd_01
nsd_01:::dataAndMetadata:1::
# hdisk6:gpfs_node1:gpfs_node2:dataAndMetadata:1:nsd_02
nsd_02:::dataAndMetadata:1::
team4_1:/tmp/gpfs#>more /tmp/gpfs/disk3.desc
hdisk7:gpfs_node1:gpfs_node2:dataAndMetadata:1:nsd_03
hdisk8:gpfs_node1:gpfs_node2:dataAndMetadata:1:nsd_04
team4_1:/tmp/gpfs#>mmcrnsd -F /tmp/gpfs/disk3.desc
team4_1:/tmp/gpfs#>more /tmp/gpfs/disk3.desc
# hdisk7:gpfs_node1:gpfs_node2:dataAndMetadata:1:nsd_03
nsd_03:::dataAndMetadata:1::
# hdisk8:gpfs_node1:gpfs_node2:dataAndMetadata:1:nsd_04
nsd_04:::dataAndMetadata:1::
team4_1:/tmp/gpfs#>mmlsnsd
---------------------------------------------------------------------------
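The descriptor files above follow the GPFS 3.2/3.3 format DiskName:PrimaryServer:BackupServer:DiskUsage:FailureGroup:DesiredName. When there are many disks, generating the file is less error-prone than typing it; a small sketch that reproduces disk2.desc:

```shell
# Generate an mmcrnsd descriptor file for hdisk5 and hdisk6.
mkdir -p /tmp/gpfs
n=1
for d in hdisk5 hdisk6; do
  printf '%s:gpfs_node1:gpfs_node2:dataAndMetadata:1:nsd_%02d\n' "$d" "$n"
  n=$((n+1))
done > /tmp/gpfs/disk2.desc
cat /tmp/gpfs/disk2.desc
```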
Wed Oct 28 21:10:54 KORST 2009: 6027-1642 mmstartup: Starting GPFS ...
team4_1:/tmp/gpfs#>mmgetstate -a
------------------------------------------
1 gpfs_node1 active
2 gpfs_node2 active
3 gpfs_node3 arbitrating
GPFS: 6027-531 The following disks of gpfs01 will be formatted on node team4_1:
GPFS: 6027-535 Disks up to size 535 GB can be added to storage pool 'system'.
GPFS: 6027-531 The following disks of gpfs02 will be formatted on node team4_2:
GPFS: 6027-535 Disks up to size 710 GB can be added to storage pool 'system'.
Wed Oct 28 21:33:01 KORST 2009: 6027-1623 mmmount: Mounting file systems ...
team4_1:/tmp/gpfs#>mmmount /gpfs02
Wed Oct 28 21:33:06 KORST 2009: 6027-1623 mmmount: Mounting file systems ...
team4_1:/gpfs02#>df -gt
...
team4_2:/#>df -gt
...
team4_3:/#>df -gt
...
First, run ssh-keygen and synchronize the key file across all GPFS nodes, and disable the SELinux and iptables services. Afterwards the system needs one reboot.
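On RHEL 5 the two changes can be made as follows; run as root on each pLinux node, and note that one reboot is needed for the SELinux change to take effect:

```shell
# Stop the firewall from starting at boot.
chkconfig iptables off
# Disable SELinux permanently (takes effect after the next reboot).
sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config
```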
185.100.100.147 plinux
## GPFS Network ##
194.1.1.44 gpfs_node1
194.1.1.45 gpfs_node2
194.1.1.46 gpfs_node3
194.1.1.47 gpfs_node4
194.1.1.48 gpfs_node5
PATH=$PATH:$HOME/bin:/usr/lpp/mmfs/bin
MANPATH=$MANPATH:/usr/lpp/mmfs/messages
Installation Packages
[root@plinux gpfs]# ls
gpfs.msg.en_US-3.3.0-1
gpfs.gpl-3.3.0-1
gpfs.docs-3.3.0-1
gpfs.base-3.3.0-1
cpp present
gcc present
g++ present
ld present
rm -f //usr/lpp/mmfs/src/gpl-linux/gpl_kernel.tmp.ver
done
cleaning (/usr/lpp/mmfs/src/ibm-kxi)
rm -f ibm_kxi.trclst
rm -f install.he; \
for i in cxiTypes.h cxiSystem.h cxi2gpfs.h cxiVFSStats.h cxiCred.h cxiIOBuffer.h cxiSharedSeg.h cxiMode.h Trace.h
+ rm -f -r /usr/lpp/mmfs/src/include/cxi/cxiTypes.h
+ rm -f -r /usr/lpp/mmfs/src/include/cxi/cxiSystem.h
+ rm -f -r /usr/lpp/mmfs/src/include/cxi/cxi2gpfs.h
+ rm -f -r /usr/lpp/mmfs/src/include/cxi/cxiVFSStats.h
+ rm -f -r /usr/lpp/mmfs/src/include/cxi/cxiCred.h
+ rm -f -r /usr/lpp/mmfs/src/include/cxi/cxiIOBuffer.h
+ rm -f -r /usr/lpp/mmfs/src/include/cxi/cxiSharedSeg.h
+ rm -f -r /usr/lpp/mmfs/src/include/cxi/cxiMode.h
+ rm -f -r /usr/lpp/mmfs/src/include/cxi/Trace.h
+ rm -f -r /usr/lpp/mmfs/src/include/cxi/cxiMmap.h
+ rm -f -r /usr/lpp/mmfs/src/include/cxi/cxiAtomic.h
+ rm -f -r /usr/lpp/mmfs/src/include/cxi/cxiTSFattr.h
+ rm -f -r /usr/lpp/mmfs/src/include/cxi/cxiAclUser.h
+ rm -f -r /usr/lpp/mmfs/src/include/cxi/cxiLinkList.h
+ rm -f -r /usr/lpp/mmfs/src/include/cxi/cxiDmapi.h
+ rm -f -r /usr/lpp/mmfs/src/include/cxi/LockNames.h
+ rm -f -r /usr/lpp/mmfs/src/include/cxi/lxtrace.h
+ rm -f -r /usr/lpp/mmfs/src/include/cxi/DirIds.h
cleaning (/usr/lpp/mmfs/src/ibm-linux)
rm -f install.he; \
+ rm -rf /usr/lpp/mmfs/src/include/cxi/cxiTypes-plat.h
+ rm -rf /usr/lpp/mmfs/src/include/cxi/cxiSystem-plat.h
+ rm -rf /usr/lpp/mmfs/src/include/cxi/cxiIOBuffer-plat.h
+ rm -rf /usr/lpp/mmfs/src/include/cxi/cxiSharedSeg-plat.h
+ rm -rf /usr/lpp/mmfs/src/include/cxi/cxiMode-plat.h
+ rm -rf /usr/lpp/mmfs/src/include/cxi/Trace-plat.h
+ rm -rf /usr/lpp/mmfs/src/include/cxi/cxiAtomic-plat.h
+ rm -rf /usr/lpp/mmfs/src/include/cxi/cxiMmap-plat.h
+ rm -rf /usr/lpp/mmfs/src/include/cxi/cxiVFSStats-plat.h
+ rm -rf /usr/lpp/mmfs/src/include/cxi/cxiCred-plat.h
+ rm -rf /usr/lpp/mmfs/src/include/cxi/cxiDmapi-plat.h
cleaning (/usr/lpp/mmfs/src/gpl-linux)
rm -f -f /lib/modules/`cat //usr/lpp/mmfs/src/gpl-linux/gpl_kernel.tmp.ver`/extra/tracedev.ko
rm -f -f /lib/modules/`cat //usr/lpp/mmfs/src/gpl-linux/gpl_kernel.tmp.ver`/extra/mmfslinux.ko
rm -f -f /lib/modules/`cat //usr/lpp/mmfs/src/gpl-linux/gpl_kernel.tmp.ver`/extra/mmfs26.ko
rm -f -f /usr/lpp/mmfs/src/../bin/lxtrace-`cat //usr/lpp/mmfs/src/gpl-linux/gpl_kernel.tmp.ver`
rm -f -f /usr/lpp/mmfs/src/../bin/kdump-`cat //usr/lpp/mmfs/src/gpl-linux/gpl_kernel.tmp.ver`
rm -f -f *.o *~ .depends .*.cmd *.ko *.a *.mod.c core *_shipped *map *mod.c.saved *.symvers *.ko.ver ./*.ver
install.he
rm -f -rf usr
done
touch install.he
touch install.he
touch install.he
touch install.he
Invoking Kbuild...
LD /usr/lpp/mmfs/src/gpl-linux/built-in.o
CC [M] /usr/lpp/mmfs/src/gpl-linux/kdump-kern.o
CC [M] /usr/lpp/mmfs/src/gpl-linux/kdump-stub.o
CC [M] /usr/lpp/mmfs/src/gpl-linux/mmwrap.o
CC [M] /usr/lpp/mmfs/src/gpl-linux/mmfsmod.o
CC [M] /usr/lpp/mmfs/src/gpl-linux/cfiles_cust.o
CC [M] /usr/lpp/mmfs/src/gpl-linux/ppc64/ss_ppc64.o
CC [M] /usr/lpp/mmfs/src/gpl-linux/tracelin.o
CC [M] /usr/lpp/mmfs/src/gpl-linux/tracedev-ksyms.o
CC [M] /usr/lpp/mmfs/src/gpl-linux/ktrccalls.o
CC [M] /usr/lpp/mmfs/src/gpl-linux/relaytrc.o
LD [M] /usr/lpp/mmfs/src/gpl-linux/tracedev.o
LD [M] /usr/lpp/mmfs/src/gpl-linux/mmfs26.o
LD [M] /usr/lpp/mmfs/src/gpl-linux/mmfslinux.o
LD [M] /usr/lpp/mmfs/src/gpl-linux/kdump-kern-dummy.o
CC [M] /usr/lpp/mmfs/src/gpl-linux/kdump-kern-dwarfs.o
HOSTCC /usr/lpp/mmfs/src/gpl-linux/lxtrace.o
HOSTCC /usr/lpp/mmfs/src/gpl-linux/lxtrace_rl.o
HOSTLD /usr/lpp/mmfs/src/gpl-linux/lxtrace
MODPOST
linux/mmfs.o_shipped
linux/libgcc.a_shipped
CC /usr/lpp/mmfs/src/gpl-linux/kdump-kern-dummy.mod.o
LD [M] /usr/lpp/mmfs/src/gpl-linux/kdump-kern-dummy.ko
CC /usr/lpp/mmfs/src/gpl-linux/kdump-kern-dwarfs.mod.o
LD [M] /usr/lpp/mmfs/src/gpl-linux/kdump-kern-dwarfs.ko
CC /usr/lpp/mmfs/src/gpl-linux/mmfs26.mod.o
LD [M] /usr/lpp/mmfs/src/gpl-linux/mmfs26.ko
CC /usr/lpp/mmfs/src/gpl-linux/mmfslinux.mod.o
LD [M] /usr/lpp/mmfs/src/gpl-linux/mmfslinux.ko
CC /usr/lpp/mmfs/src/gpl-linux/tracedev.mod.o
LD [M] /usr/lpp/mmfs/src/gpl-linux/tracedev.ko
done
installing (/usr/lpp/mmfs/src/ibm-kxi)
touch install.he
installing (/usr/lpp/mmfs/src/ibm-linux)
touch install.he
installing (/usr/lpp/mmfs/src/gpl-linux)
[root@plinux src]#
cpp present
gcc present
g++ present
ld present
rm -f //usr/lpp/mmfs/src/gpl-linux/gpl_kernel.tmp.ver
done
cleaning (/usr/lpp/mmfs/src/ibm-kxi)
rm -f ibm_kxi.trclst
rm -f install.he; \
for i in cxiTypes.h cxiSystem.h cxi2gpfs.h cxiVFSStats.h cxiCred.h cxiIOBuffer.h cxiSharedSeg.h cxiMode.h Trace.h
+ rm -f -r /usr/lpp/mmfs/src/include/cxi/cxiTypes.h
+ rm -f -r /usr/lpp/mmfs/src/include/cxi/cxiSystem.h
+ rm -f -r /usr/lpp/mmfs/src/include/cxi/cxi2gpfs.h
+ rm -f -r /usr/lpp/mmfs/src/include/cxi/cxiVFSStats.h
+ rm -f -r /usr/lpp/mmfs/src/include/cxi/cxiCred.h
+ rm -f -r /usr/lpp/mmfs/src/include/cxi/cxiIOBuffer.h
+ rm -f -r /usr/lpp/mmfs/src/include/cxi/cxiSharedSeg.h
+ rm -f -r /usr/lpp/mmfs/src/include/cxi/cxiMode.h
+ rm -f -r /usr/lpp/mmfs/src/include/cxi/Trace.h
+ rm -f -r /usr/lpp/mmfs/src/include/cxi/cxiMmap.h
+ rm -f -r /usr/lpp/mmfs/src/include/cxi/cxiAtomic.h
+ rm -f -r /usr/lpp/mmfs/src/include/cxi/cxiTSFattr.h
+ rm -f -r /usr/lpp/mmfs/src/include/cxi/cxiAclUser.h
+ rm -f -r /usr/lpp/mmfs/src/include/cxi/cxiLinkList.h
+ rm -f -r /usr/lpp/mmfs/src/include/cxi/cxiDmapi.h
+ rm -f -r /usr/lpp/mmfs/src/include/cxi/LockNames.h
+ rm -f -r /usr/lpp/mmfs/src/include/cxi/lxtrace.h
+ rm -f -r /usr/lpp/mmfs/src/include/cxi/DirIds.h
cleaning (/usr/lpp/mmfs/src/ibm-linux)
rm -f install.he; \
+ rm -rf /usr/lpp/mmfs/src/include/cxi/cxiTypes-plat.h
+ rm -rf /usr/lpp/mmfs/src/include/cxi/cxiSystem-plat.h
+ rm -rf /usr/lpp/mmfs/src/include/cxi/cxiIOBuffer-plat.h
+ rm -rf /usr/lpp/mmfs/src/include/cxi/cxiSharedSeg-plat.h
+ rm -rf /usr/lpp/mmfs/src/include/cxi/cxiMode-plat.h
+ rm -rf /usr/lpp/mmfs/src/include/cxi/Trace-plat.h
+ rm -rf /usr/lpp/mmfs/src/include/cxi/cxiAtomic-plat.h
+ rm -rf /usr/lpp/mmfs/src/include/cxi/cxiMmap-plat.h
+ rm -rf /usr/lpp/mmfs/src/include/cxi/cxiVFSStats-plat.h
+ rm -rf /usr/lpp/mmfs/src/include/cxi/cxiCred-plat.h
+ rm -rf /usr/lpp/mmfs/src/include/cxi/cxiDmapi-plat.h
cleaning (/usr/lpp/mmfs/src/gpl-linux)
CLEAN /usr/lpp/mmfs/src/gpl-linux
CLEAN /usr/lpp/mmfs/src/gpl-linux/.tmp_versions
rm -f -f /lib/modules/`cat //usr/lpp/mmfs/src/gpl-linux/gpl_kernel.tmp.ver`/extra/tracedev.ko
rm -f -f /lib/modules/`cat //usr/lpp/mmfs/src/gpl-linux/gpl_kernel.tmp.ver`/extra/mmfslinux.ko
rm -f -f /lib/modules/`cat //usr/lpp/mmfs/src/gpl-linux/gpl_kernel.tmp.ver`/extra/mmfs26.ko
rm -f -f /usr/lpp/mmfs/src/../bin/lxtrace-`cat //usr/lpp/mmfs/src/gpl-linux/gpl_kernel.tmp.ver`
rm -f -f /usr/lpp/mmfs/src/../bin/kdump-`cat //usr/lpp/mmfs/src/gpl-linux/gpl_kernel.tmp.ver`
rm -f -f *.o *~ .depends .*.cmd *.ko *.a *.mod.c core *_shipped *map *mod.c.saved *.symvers *.ko.ver ./*.ver
install.he
rm -f -rf usr
done
touch install.he
touch install.he
touch install.he
touch install.he
Invoking Kbuild...
LD /usr/lpp/mmfs/src/gpl-linux/built-in.o
CC [M] /usr/lpp/mmfs/src/gpl-linux/kdump-kern.o
CC [M] /usr/lpp/mmfs/src/gpl-linux/kdump-stub.o
CC [M] /usr/lpp/mmfs/src/gpl-linux/mmwrap.o
CC [M] /usr/lpp/mmfs/src/gpl-linux/mmfsmod.o
CC [M] /usr/lpp/mmfs/src/gpl-linux/cfiles_cust.o
CC [M] /usr/lpp/mmfs/src/gpl-linux/ppc64/ss_ppc64.o
CC [M] /usr/lpp/mmfs/src/gpl-linux/tracelin.o
CC [M] /usr/lpp/mmfs/src/gpl-linux/tracedev-ksyms.o
CC [M] /usr/lpp/mmfs/src/gpl-linux/ktrccalls.o
CC [M] /usr/lpp/mmfs/src/gpl-linux/relaytrc.o
LD [M] /usr/lpp/mmfs/src/gpl-linux/tracedev.o
LD [M] /usr/lpp/mmfs/src/gpl-linux/mmfs26.o
LD [M] /usr/lpp/mmfs/src/gpl-linux/mmfslinux.o
LD [M] /usr/lpp/mmfs/src/gpl-linux/kdump-kern-dummy.o
CC [M] /usr/lpp/mmfs/src/gpl-linux/kdump-kern-dwarfs.o
HOSTCC /usr/lpp/mmfs/src/gpl-linux/lxtrace.o
HOSTCC /usr/lpp/mmfs/src/gpl-linux/lxtrace_rl.o
HOSTLD /usr/lpp/mmfs/src/gpl-linux/lxtrace
MODPOST
linux/mmfs.o_shipped
linux/libgcc.a_shipped
CC /usr/lpp/mmfs/src/gpl-linux/kdump-kern-dummy.mod.o
LD [M] /usr/lpp/mmfs/src/gpl-linux/kdump-kern-dummy.ko
CC /usr/lpp/mmfs/src/gpl-linux/kdump-kern-dwarfs.mod.o
LD [M] /usr/lpp/mmfs/src/gpl-linux/kdump-kern-dwarfs.ko
CC /usr/lpp/mmfs/src/gpl-linux/mmfs26.mod.o
LD [M] /usr/lpp/mmfs/src/gpl-linux/mmfs26.ko
CC /usr/lpp/mmfs/src/gpl-linux/mmfslinux.mod.o
LD [M] /usr/lpp/mmfs/src/gpl-linux/mmfslinux.ko
CC /usr/lpp/mmfs/src/gpl-linux/tracedev.mod.o
LD [M] /usr/lpp/mmfs/src/gpl-linux/tracedev.ko
done
installing (/usr/lpp/mmfs/src/ibm-kxi)
touch install.he
installing (/usr/lpp/mmfs/src/ibm-linux)
touch install.he
installing (/usr/lpp/mmfs/src/gpl-linux)
INSTALL /usr/lpp/mmfs/src/gpl-linux/kdump-kern-dummy.ko
INSTALL /usr/lpp/mmfs/src/gpl-linux/kdump-kern-dwarfs.ko
INSTALL /usr/lpp/mmfs/src/gpl-linux/mmfs26.ko
INSTALL /usr/lpp/mmfs/src/gpl-linux/mmfslinux.ko
INSTALL /usr/lpp/mmfs/src/gpl-linux/tracedev.ko
DEPMOD 2.6.18-128.el5
[root@plinux src]#
Add Nodes
team4_1:/tmp/gpfs#>mmaddnode -N gpfs_node5
Thu Oct 29 16:11:27 KORST 2009: 6027-1664 mmaddnode: Processing node gpfs_node5
mmaddnode: 6027-1254 Warning: Not all nodes have proper GPFS license designations.
team4_1:/tmp/gpfs#>mmlsnode -a
===============================================================================
| Warning: |
| This cluster contains nodes that do not have a proper GPFS license |
| Use the mmchlicense command and assign the appropriate GPFS licenses |
| to each of the nodes in the cluster. For more information about GPFS |
===============================================================================
------------- -------------------------------------------------------
Accept License
team4_1:/tmp/gpfs#>mmchlicense client -N gpfs_node5
gpfs_node5
Please confirm that you accept the terms of the GPFS client Licensing Agreement.
Thu Oct 29 15:13:26 CST 2009: mmmount: Mounting file systems ...
[root@plinux .ssh]# df
/dev/mapper/VolGroup00-LogVol00
---------------------------------------------------------------------------
Volume Status
UAC Disable
System Reboot
SUA Installation
Setup
Open the Korn shell in the Subsystem for UNIX-based Applications, and log in as the root user with the Windows administrator password (su -).
Control Panel
Generate the SSH key and share it from the Windows Korn shell. After making the id_rsa.pub file, you must update authorized_keys on the AIX server; this file must be synchronized across all of the GPFS cluster nodes.
Mounted Volume
GPFS v3.3 multicluster configurations that include Windows clients should not upgrade Windows machines to 3.3.0-3 or -4. You must install 3.3.0-5 when upgrading beyond 3.3.0-2, due to an issue with OpenSSL introduced in 3.3.0-3. Download the update package from:
http://www14.software.ibm.com/webapp/set2/sas/f/gpfs/home.html
Windows nodes do not support directly accessing disks or operating as an NSD server. This function is covered in the GPFS documentation for planning purposes only; the FAQ will be updated with the tested disk device support information when it is generally available.
Support for Windows Server 2008 R2 is not yet available. The FAQ will be updated when that support arrives (planned for GPFS v3.4).
There is no migration path from Windows Server 2003 R2 (GPFS V3.2.1-5 or later) to Windows Server 2008 SP2 (GPFS V3.3).
User exits defined by the mmaddcallback command and the three specialized user exits provided
by GPFS are not currently supported on Windows nodes.
The Tivoli® Storage Manager (TSM) Backup Archive client for Windows does not
support unique features of GPFS file systems. TSM backup and archiving
operations are supported on AIX and Linux nodes in a cluster that contains
Windows. For information on TSM backup archive client support for GPFS, see:
The GPFS Application Programming Interfaces (APIs) are not supported on Windows.
The native Windows backup utility is not supported.
Symbolic links that are created on UNIX-based nodes are specially handled by GPFS
Windows nodes; they appear as regular files with a size of 0 and their contents cannot be
accessed or modified.
GPFS on Windows nodes attempts to preserve data integrity between memory-mapped I/O
and other forms of I/O on the same computation node. However, if the same file is memory
mapped on more than one Windows node, data coherency is not guaranteed between the
memory-mapped sections on these multiple nodes. In other words, GPFS on Windows does
not provide distributed shared memory semantics. Therefore, applications that require data
coherency between memory-mapped files on more than one node might not function as
expected.
These steps are the same on each node; you do not need to shut down the entire GPFS cluster file service while upgrading the GPFS daemon. Rolling upgrades support operating with mixed GPFS versions from v3.x onward, which is very useful for keeping customer services live. Upgrade the GPFS daemon on each node separately, then change the file system version to the latest level.
This warning means the license acceptance information needs to be updated; run the commands below.
# mmchlicense client --accept -N w1
# mmchlicense server --accept -N l1,l2
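The per-node rolling upgrade described above can be sketched as follows (the node name l1 and file system TEAM02_AIX are taken from this example; adapt them to your cluster):

# mmshutdown -N l1
(install the GPFS update packages on l1)
# mmstartup -N l1

Repeat this sequence for each node in turn. Once every node runs the new level, migrate the file system format to enable the new features:

# mmchfs TEAM02_AIX -V full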
Add NSD
p1:/#>lspv
hdisk0 00ca904f908ab237 rootvg active
hdisk1 none gpfs1nsd
hdisk2 none gpfs2nsd
hdisk3 none gpfs3nsd
hdisk4 none gpfs4nsd
hdisk5 none gpfs5nsd
hdisk6 none None
hdisk7 none gpfs6nsd
hdisk8 none gpfs7nsd
hdisk9 none gpfs8nsd
p1:/#>
Check Disks Before Creating the NSD
p1:/#>mmlsnsd
p2:/#>mmlsdisk TEAM02_AIX -m
p1:/TEAM02_AIX#>mmdf TEAM02_AIX
============= ==================== ===================
(total) 262144000 244347904 ( 93%) 16496 ( 0%)
Inode Information
-----------------
Number of used inodes: 4069
Number of free inodes: 254491
Number of allocated inodes: 258560
Maximum number of inodes: 258560
p1:/TEAM02_AIX#>
Check File System Usage
p1:/tmp/gpfs#>cat disk.desc
hdisk6:p1,p2::dataAndMetadata:1:
p1:/tmp/gpfs#>mmcrnsd -F /tmp/gpfs/disk.desc
mmcrnsd: Processing disk hdisk6
mmcrnsd: 6027-1371 Propagating the cluster configuration data to all
affected nodes. This is an asynchronous process.
Make NSD
p1:/tmp/gpfs#>mmlsnsd
GPFS: 6027-531 The following disks of TEAM02_AIX will be formatted on node p2:
gpfs9nsd: size 52428800 KB
Extending Allocation Map
Checking Allocation Map for storage pool 'system'
20 % complete on Wed Oct 28 10:51:42 2009
39 % complete on Wed Oct 28 10:51:47 2009
59 % complete on Wed Oct 28 10:51:52 2009
78 % complete on Wed Oct 28 10:51:58 2009
98 % complete on Wed Oct 28 10:52:03 2009
100 % complete on Wed Oct 28 10:52:03 2009
GPFS: 6027-1503 Completed adding disks to file system TEAM02_AIX.
mmadddisk: 6027-1371 Propagating the cluster configuration data to all
affected nodes. This is an asynchronous process.
p1:/tmp/gpfs#>
Add New NSD to TEAM02_AIX Volume
p1:/tmp/gpfs#>mmlsdisk TEAM02_AIX -M
p1:/tmp/gpfs#>mmdf TEAM02_AIX
disk disk size failure holds holds free KB free KB
name in KB group metadata data in full blocks in fragments
--------------- ------------- -------- -------- ----- -------------------- -------------------
Disks in storage pool: system (Maximum disk size allowed is 281 GB)
gpfs1nsd 52428800 1 yes yes 44486912 ( 85%) 6680 ( 0%)
gpfs2nsd 52428800 1 yes yes 44487936 ( 85%) 6368 ( 0%)
gpfs3nsd 52428800 1 yes yes 44487680 ( 85%) 10216 ( 0%)
gpfs4nsd 52428800 1 yes yes 44487936 ( 85%) 11304 ( 0%)
gpfs5nsd 52428800 1 yes yes 44488960 ( 85%) 7688 ( 0%)
gpfs9nsd 52428800 1 yes yes 51213056 ( 98%) 376 ( 0%)
------------- -------------------- -------------------
(pool total) 314572800 273652480 ( 87%) 42632 ( 0%)
============= ==================== ===================
(total) 314572800 273652480 ( 87%) 42632 ( 0%)
Inode Information
-----------------
Number of used inodes: 4082
Number of free inodes: 254478
Number of allocated inodes: 258560
Maximum number of inodes: 258560
p1:/tmp/gpfs#>
Check NSD Status
p1:/tmp/gpfs#>mmrestripefs TEAM02_AIX -b
GPFS: 6027-589 Scanning file system metadata, phase 1 ...
3 % complete on Wed Oct 28 10:54:55 2009
7 % complete on Wed Oct 28 10:54:58 2009
9 % complete on Wed Oct 28 10:55:01 2009
13 % complete on Wed Oct 28 10:55:05 2009
16 % complete on Wed Oct 28 10:55:09 2009
20 % complete on Wed Oct 28 10:55:13 2009
78 % complete on Wed Oct 28 10:56:04 2009
82 % complete on Wed Oct 28 10:56:08 2009
86 % complete on Wed Oct 28 10:56:11 2009
90 % complete on Wed Oct 28 10:56:14 2009
93 % complete on Wed Oct 28 10:56:18 2009
97 % complete on Wed Oct 28 10:56:21 2009
100 % complete on Wed Oct 28 10:56:23 2009
GPFS: 6027-552 Scan completed successfully.
GPFS: 6027-589 Scanning file system metadata, phase 2 ...
1 % complete on Wed Oct 28 10:56:31 2009
34 % complete on Wed Oct 28 10:56:34 2009
59 % complete on Wed Oct 28 10:56:37 2009
95 % complete on Wed Oct 28 10:56:41 2009
100 % complete on Wed Oct 28 10:56:41 2009
GPFS: 6027-552 Scan completed successfully.
GPFS: 6027-589 Scanning file system metadata, phase 3 ...
31 % complete on Wed Oct 28 10:56:46 2009
59 % complete on Wed Oct 28 10:56:50 2009
87 % complete on Wed Oct 28 10:56:54 2009
100 % complete on Wed Oct 28 10:56:55 2009
GPFS: 6027-552 Scan completed successfully.
GPFS: 6027-589 Scanning file system metadata, phase 4 ...
GPFS: 6027-552 Scan completed successfully.
GPFS: 6027-565 Scanning user file metadata ...
GPFS: 6027-565 Scanning user file metadata ...
99 % complete on Tue Oct 27 21:25:37 2009
100 % complete on Tue Oct 27 21:42:01 2009
GPFS: 6027-552 Scan completed successfully.
This command restripes the volume to rebalance data across the new NSD.
p1:/tmp/gpfs#>mmdf TEAM02_AIX
disk disk size failure holds holds free KB free KB
name in KB group metadata data in full blocks in fragments
--------------- ------------- -------- -------- ----- -------------------- -------------------
Disks in storage pool: system (Maximum disk size allowed is 281 GB)
gpfs1nsd 52428800 1 yes yes 29172480 ( 56%) 10984 ( 0%)
gpfs2nsd 52428800 1 yes yes 29171200 ( 56%) 12704 ( 0%)
gpfs3nsd 52428800 1 yes yes 29164544 ( 56%) 15296 ( 0%)
gpfs4nsd 52428800 1 yes yes 29163008 ( 56%) 15352 ( 0%)
gpfs5nsd 52428800 1 yes yes 29160960 ( 56%) 10720 ( 0%)
gpfs9nsd 52428800 1 yes yes 29792256 ( 57%) 6592 ( 0%)
------------- -------------------- -------------------
(pool total) 314572800 175624448 ( 56%) 71648 ( 0%)
Inode Information
-----------------
Number of used inodes: 4096
Number of free inodes: 254464
Number of allocated inodes: 258560
Maximum number of inodes: 258560
p1:/tmp/gpfs#>
Remove NSD
To remove gpfs1nsd, you must first suspend it to block new disk I/O. This is done with the mmchdisk command.
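The full removal sequence can be sketched as follows, using the file system TEAM02_AIX from this example:

# mmchdisk TEAM02_AIX suspend -d gpfs1nsd
# mmrestripefs TEAM02_AIX -r
# mmdeldisk TEAM02_AIX gpfs1nsd

Suspending stops new block allocation on the disk, the restripe migrates existing data off it, and only then is the NSD deleted from the file system.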
p1:/tmp/gpfs#>mmlsdisk TEAM02_AIX
disk driver sector failure holds holds storage
name type size group metadata data status availability pool
------------ -------- ------ ------- -------- ----- ------------- ------------ ------------
gpfs1nsd nsd 512 1 yes yes suspended up system
gpfs2nsd nsd 512 1 yes yes ready up system
gpfs3nsd nsd 512 1 yes yes ready up system
gpfs4nsd nsd 512 1 yes yes ready up system
gpfs5nsd nsd 512 1 yes yes ready up system
gpfs9nsd nsd 512 1 yes yes ready up system
GPFS: 6027-741 Attention: Due to an earlier configuration change the file system
may contain data that is at risk of being lost.
p1:/tmp/gpfs#>
Check the Applied Option
p1:/tmp/gpfs#>mmrestripefs TEAM02_AIX -r
GPFS: 6027-589 Scanning file system metadata, phase 1 ...
GPFS: 6027-552 Scan completed successfully.
GPFS: 6027-589 Scanning file system metadata, phase 2 ...
GPFS: 6027-552 Scan completed successfully.
GPFS: 6027-589 Scanning file system metadata, phase 3 ...
GPFS: 6027-552 Scan completed successfully.
GPFS: 6027-589 Scanning file system metadata, phase 4 ...
GPFS: 6027-552 Scan completed successfully.
p1:/tmp/gpfs#>mmlsdisk TEAM02_AIX
disk driver sector failure holds holds storage
name type size group metadata data status availability pool
------------ -------- ------ ------- -------- ----- ------------- ------------ ------------
gpfs1nsd nsd 512 1 yes yes suspended down system
gpfs2nsd nsd 512 1 yes yes ready up system
gpfs3nsd nsd 512 1 yes yes ready up system
gpfs4nsd nsd 512 1 yes yes ready up system
gpfs5nsd nsd 512 1 yes yes ready up system
gpfs9nsd nsd 512 1 yes yes ready up system
GPFS: 6027-739 Attention: Due to an earlier configuration change the file system
is no longer properly balanced.
Check That gpfs1nsd Is Down
p1:/tmp/gpfs#>mmdf TEAM02_AIX
disk disk size failure holds holds free KB free KB
name in KB group metadata data in full blocks in fragments
--------------- ------------- -------- -------- ----- -------------------- -------------------
Disks in storage pool: system (Maximum disk size allowed is 281 GB)
gpfs1nsd 52428800 1 yes yes 52359936 (100%) 248 ( 0%)
gpfs2nsd 52428800 1 yes yes 24581376 ( 47%) 15920 ( 0%)
gpfs3nsd 52428800 1 yes yes 24487168 ( 47%) 16696 ( 0%)
gpfs4nsd 52428800 1 yes yes 24495104 ( 47%) 17984 ( 0%)
gpfs5nsd 52428800 1 yes yes 24557824 ( 47%) 14816 ( 0%)
gpfs9nsd 52428800 1 yes yes 25102592 ( 48%) 11360 ( 0%)
------------- -------------------- -------------------
(pool total) 262144000 123224064 ( 47%) 76776 ( 0%)
============= ==================== ===================
(total) 262144000 123224064 ( 47%) 76776 ( 0%)
Inode Information
-----------------
Number of used inodes: 4096
Number of free inodes: 254464
Number of allocated inodes: 258560
Maximum number of inodes: 258560
Check File System status
p1:/tmp/gpfs#>mmlsdisk TEAM02_AIX
disk driver sector failure holds holds storage
name type size group metadata data status availability pool
------------ -------- ------ ------- -------- ----- ------------- ------------ ------------
gpfs2nsd nsd 512 1 yes yes ready up system
gpfs3nsd nsd 512 1 yes yes ready up system
gpfs4nsd nsd 512 1 yes yes ready up system
gpfs5nsd nsd 512 1 yes yes ready up system
gpfs9nsd nsd 512 1 yes yes ready up system
GPFS: 6027-739 Attention: Due to an earlier configuration change the file system
is no longer properly balanced.
Check NSD Configuration
In this scenario, two GPFS clusters cross-mount each other's volumes.
- AIX 2 Node Cluster Nodes c1, c2
- Linux 2 Node Cluster Nodes c3, c4
- Windows 1 Node Client Node c5
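The cross-mount setup uses the GPFS multicluster commands. A sketch of the sequence, assuming the AIX cluster owns a file system named team_fs and the Linux cluster mounts it remotely (the cluster, file system, and key file names here are illustrative):

On both clusters, generate and enable authentication keys:
# mmauth genkey new
# mmchconfig cipherList=AUTHONLY
On the owning (AIX) cluster, authorize the remote cluster and grant access:
# mmauth add linux_cluster -k /tmp/linux_id_rsa.pub
# mmauth grant linux_cluster -f team_fs
On the accessing (Linux) cluster, register the remote cluster and file system, then mount:
# mmremotecluster add aix_cluster -n c1,c2 -k /tmp/aix_id_rsa.pub
# mmremotefs add remote_fs -f team_fs -C aix_cluster -T /remote_fs
# mmmount remote_fs -a

The public key files must be exchanged between the clusters out of band before the mmauth and mmremotecluster commands are run.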
C1:/ mmshutdown -a
AIX GPFS Cluster Shutdown
[root@c3 ] mmshutdown -a
Linux GPFS Cluster Shutdown
To use replication in GPFS, prepare the NSD configuration for replication before creating the file system. The replication algorithm writes each block's copies to NSDs in different failure groups.
# hdisk1:c1:c2:dataAndMetadata:1:team3_aix_nsd1
team3_aix_nsd1:::dataAndMetadata:1::
# hdisk2:c1:c2:dataAndMetadata:1:team3_aix_nsd2
team3_aix_nsd2:::dataAndMetadata:1::
# hdisk3:c1:c2:dataAndMetadata:1:team3_aix_nsd3
team3_aix_nsd3:::dataAndMetadata:1::
# hdisk4:c1:c2:dataAndMetadata:2:team3_aix_nsd4
team3_aix_nsd4:::dataAndMetadata:2::
# hdisk5:c1:c2:dataAndMetadata:2:team3_aix_nsd5
team3_aix_nsd5:::dataAndMetadata:2::
# hdisk6:c1:c2:dataAndMetadata:2:team3_aix_nsd6
team3_aix_nsd6:::dataAndMetadata:2::
# hdisk7:c1:c2:dataAndMetadata:3:team3_aix_nsd7
team3_aix_nsd7:::dataAndMetadata:3::
# hdisk8:c1:c2:dataAndMetadata:3:team3_aix_nsd8
team3_aix_nsd8:::dataAndMetadata:3::
# hdisk9:c1:c2:dataAndMetadata:3:team3_aix_nsd9
team3_aix_nsd9:::dataAndMetadata:3::
This is the NSD configuration for replication. Create the file system, then copy a single 7 GB file into it. The file system consumes 14 GB of space (450 - 14 = 436) because every block is written twice.
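Replication is enabled when the file system is created, not by the NSD descriptors alone. A sketch of the create command, assuming the descriptor file above is saved as /tmp/gpfs/disk.desc and using team3fs and /gpfs/team3 as illustrative device and mount point names:

# mmcrfs /gpfs/team3 team3fs -F /tmp/gpfs/disk.desc -m 2 -r 2 -M 2 -R 2

The -m and -r flags set the default number of metadata and data replicas to 2, and -M and -R set the maximums; with two data replicas, a 7 GB file consumes about 14 GB, as observed above.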
Do not use the rebalance command here; simply adding the new NSD changes the failure group information of the file system. To configure replication, the minimum configuration is three storage boxes, or two storage boxes plus a descriptor-only disk, which is similar to a tiebreaker option.
Education System
The next GPFS residency program topic may be the GPFS/ILM solution with IBM Tivoli products. This diagram shows an integrated GPFS/TSM architecture.
Trademarks
IBM, the IBM Logo, BladeCenter, DS4000, eServer, and System x are trademarks of International Business
Machines Corporation in the United States, other countries, or both.
For a complete list of IBM Trademarks, see http://www.ibm.com/legal/copytrade.shtml.
Microsoft and Windows are trademarks of Microsoft Corporation in the United States, other countries, or
both. Other company, product, or service names may be trademarks or service marks of others.