
ODM GET COMMANDS

I have MPIO on a VIO server. Using lspath to display one of the SAN disks gives:
# lspath -l hdisk12 -H -F "name parent connection path_id"
name    parent connection                         path_id
hdisk12 fscsi0 500507630700067a,4060400500000000  0
hdisk12 fscsi0 50050763070b067a,4060400500000000  1
hdisk12 fscsi1 500507630710067a,4060400500000000  2
hdisk12 fscsi1 50050763071b067a,4060400500000000  3
How should the connection and path_id fields be interpreted? The connection field identifies the target end of the path: the WWPN of the storage port, followed by the LUN ID on that port (e.g. 500507630700067a,4060400500000000). The path_id is a unique number AIX assigns to each discovered path; here hdisk12 has four paths, two through each FC adapter (fscsi0 and fscsi1).


AIX: odmget examples
odmget -q "name=hdisk0" CuDv                     # get the ODM object in CuDv that describes the hdisk0 disk device
odmget -q "name=hdisk0 and attribute=pvid" CuAt  # get the ODM object that stores the PVID
odmget -q "value3=hdisk0" CuDvDr                 # get the major/minor numbers of the special file for hdisk0
odmget -q "name=rootvg" CuDep                    # query the ODM class to identify all LVs that belong to rootvg
odmget -q "parent=rootvg" CuDv                   # same as the previous command
These can be useful when troubleshooting LVM-related problems.
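For reference, odmget returns each matching object as a stanza. A typical result for the PVID query looks roughly like this (field values are illustrative, not from a real system):

# odmget -q "name=hdisk0 and attribute=pvid" CuAt
CuAt:
        name = "hdisk0"
        attribute = "pvid"
        value = "00c123456789abcd0000000000000000"
        type = "R"
        generic = "D"
        rep = "s"
        nls_index = 2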
HOWTO: Setup Dual VIOS VSCSI and MPIO
This article is based on a cheat sheet I use when setting up virtual SCSI client partitions. Just enough detail is provided to get the job done. The testing I leave up to you!
The basis of setting up dual VIO servers will always be:

* Setup First Virtual I/O Server
* Test Virtual I/O setup
* Setup Second Virtual I/O Server
* Test Virtual I/O setup

Once the Virtual I/O Servers are prepared, focus moves to the client partitions:

* Setup client partition(s)
* Test client stability
In the following, no hints are provided on testing. Thus there are not four parts, but two: setup of the VIOS and setup of the client. The setup is based on a DS4300 SAN storage server; your attribute and interface names may differ.
Setting up Multiple Virtual I/O Servers (VIOS)
The key issue is that the LUNs exported from the SAN environment must always be available to any VIOS that is going to provide them via the Virtual SCSI interface to Virtual SCSI client partitions (client LPARs). The hdisk attribute that must be set on each VIOS for each LUN used in an MPIO configuration is reserve_policy. When there are problems, the first thing I check is whether the disk attribute reserve_policy has been set to no_reserve.
But it is not just this one setting. For failover we also need to set up the adapters.
Note: to change any of these attributes, the adapters and disks need to be in a Defined state - OR - you make the changes in the ODM and reboot. The second, i.e. reboot, option is required when you cannot get the disks and/or adapters offline (varyoffvg followed by several rmdev -l commands).
To make an ODM-only change, just add the parameter -P to the chdev -l commands below. And be sure not to use rmdev -d -l <device>, as this removes the device definition from the ODM entirely and you have to start over.
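For example, a deferred ODM-only change on a disk you cannot take offline might look like this (hdisk4 standing in for one of your LUNs):

# chdev -l hdisk4 -a reserve_policy=no_reserve -P    # written to the ODM only
# shutdown -Fr                                       # the change becomes active after the reboot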
Setup adapters
This example is using an IBM DS4000 storage system. Your adapter names may
be different.
First, make sure the disks are offline. If no clients are using the disks this is easy; otherwise you must get the client partition to varyoffvg the volume groups, and use rmdev -l on each disk/LUN. Once the disks are not being used, the following commands bring the disks and adapters offline (into a Defined state). The command cfgmgr will bring them back online with the new settings active. Do not run cfgmgr until AFTER the disk attributes have been set.
Note: I have used the command oem_setup_env to switch from user padmin to root. Below I will switch back to padmin (note the $ and # prompts for padmin and root commands, respectively).
# rmdev -l dar0 -R
# rmdev -l fscsi0 -R
Update the adapter settings:
# chdev -l fscsi0 -a fc_err_recov=fast_fail
# chdev -l fscsi0 -a dyntrk=yes
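You can verify the new adapter attributes with lsattr (output abridged; the description text may vary by AIX level):

# lsattr -El fscsi0 -a fc_err_recov -a dyntrk
fc_err_recov fast_fail FC Fabric Event Error RECOVERY Policy True
dyntrk       yes       Dynamic Tracking of FC Devices        True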
Setup Disks
On my system the first four disks are local, and I have 8 LUNs that I want to make MPIO-ready for the clients. So, with both VIO servers having the disks offline, or only one VIO server active, it is simple to set the disks up for no_reserve. I also want the VIO server to be aware of the PVID the clients put on, or modify on, these disks. Finally, I make sure the client sees the disk as "clean".
# for i in 4 5 6 7 8 9 10 11
> do
> chdev -l hdisk$i -a pv=yes
> chdev -l hdisk$i -a reserve_policy=no_reserve
> chpv -C hdisk$i    ## this "erases" the disk, so be careful!
> done

Now I can activate the disks again using:
# cfgmgr
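It is worth confirming the disk attributes stuck, e.g. for hdisk4:

# lsattr -El hdisk4 -a reserve_policy
reserve_policy no_reserve Reserve Policy True
# lspv | grep hdisk4    # the PVID should now be visible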
Setup Clients
I assume you already know how to configure VSCSI clients (you have already configured at least two VIO Servers!!) so I'll skip over the commands on the HMC for configuring the client partition.
Once the client LPAR has been configured - but before it is activated - you need to map the LUNs available on the VIOS to the client. The command to do this is mkvdev.
Assuming that the VIOS are setup to have identical disk numbering any
command you use on the first VIOS can be repeated on the second VIOS.
$ mkvdev -vdev hdisk4 -vadapter vhost0
vtscsi0 Available
You can verify the virtual target (vt) mapping using the command:
$ lsmap -vadapter vhost0
Repeat these commands on the other VIOS(s) and make sure they are all connected to the same VSCSI client (VX-TY). The VX part is the most important: it signifies that it is the same client partition (X == client partition number). The TY is the Virtual I/O bus slot number, and this will be different for each mapping; the TY part signifies the multiple paths. A sample of the lsmap output follows below.
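For orientation, the lsmap output has roughly this shape (location codes, partition ID and LUN values here are illustrative, not from the system above):

$ lsmap -vadapter vhost0
SVSA            Physloc                                      Client Partition ID
--------------- -------------------------------------------- -------------------
vhost0          U9117.570.10ABCDE-V1-C10                     0x00000002

VTD                   vtscsi0
Status                Available
LUN                   0x8100000000000000
Backing device        hdisk4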
The easiest way to set up the client drivers is to just install AIX. The AIX installation sees the MPIO capability, installs the drivers and creates the paths. After AIX reboots there are a few changes we want to make so that AIX recovers automatically in case one of the VIO servers goes offline, or loses its connection to the SAN.
On the client
For each LUN coming from the SAN do:
# chdev -l hdiskX -a hcheck_mode=nonactive -a hcheck_interval=20 \
-a algorithm=failover -P
As this probably includes your rootvg disks, a reboot is required.
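A loop sketch for applying this to every disk at once (this assumes all hdisks on the client are VSCSI LUNs; adjust the list if some are not):

# for d in $(lsdev -Cc disk -F name)
> do
> chdev -l $d -a hcheck_mode=nonactive -a hcheck_interval=20 \
>   -a algorithm=failover -P
> done
# shutdown -Fr    # reboot to activate the ODM-only (-P) changes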

MPIO:
Included as part of AIX at no charge. Packaged as a kernel extension and a multipathing module (PCM, Path Control Module). Each storage device requires a PCM. The PCM is storage-vendor-supplied code that gets control from the device driver to handle path management.
With AIX and multipathing (on IBM storage) we have the following options:
- classic SDD (ODM definitions: ibm2105.rte, SDD driver: devices.sdd.53.rte)
- default PCM (MPIO): comes with AIX (it is activated only if there are no SDD ODM definitions)
- SDDPCM: an SDD version which uses MPIO and has the same commands as SDD (ODM definitions: devices.fcp.disk.ibm2105.mpio.rte, SDDPCM driver: devices.sddpcm.53.rte)
MPIO is installed as part of the base OS. Paths are discovered during system boot
(cfgmgr) and disks are created from paths at the same time. No further
configuration is required.
By default the algorithm is set to fail_over. To change it to round_robin, first change the reservation policy and then the algorithm:
chdev -l hdiskX -a reserve_policy=no_reserve
chdev -l hdiskX -a algorithm=round_robin
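Afterwards the new values can be checked (output abridged):

# lsattr -El hdiskX -a reserve_policy -a algorithm
reserve_policy no_reserve  Reserve Policy True
algorithm      round_robin Algorithm      True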
smitty mpio
mkpath -l hdiskX -p fscsiY               adds extra paths that are attached to an adapter
lspath                                   lists paths (lspath -l hdisk46)
lspath -l hdisk44 -H -F "name path_id parent connection status"    shows more info
chpath                                   changes path state (enabled, disabled)
chpath -s enabled -l hdiskX -p vscsi0    sets the path to enabled status
rmpath -dl hdiskX -p fscsiY              dynamically removes all paths under a parent adapter
                                         (-d: deletes; without it, paths are put into Defined state)
                                         (the last path cannot be removed; the command will fail if you try to remove it)
rmpath -dl hdisk3 -p fscsi0 -w 50060e8000c3baf4,2000000000000    deletes a single path
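To find the connection value needed for -w, the same lspath format string shown at the top of this page can be reused:

# lspath -l hdisk3 -p fscsi0 -F "connection"
50060e8000c3baf4,2000000000000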
-------------------------------------------------
Failed path handling:
(there were Hitachi disks in Offline (E) state, but they were not unconfigured earlier)
- lspath | grep -v Enab       <-- list the paths that are not Enabled
- rmpath -p fscsiX -d
- cfgmgr -l fcsX
- lspath | grep -v Enab
- dlnkmgr view -lu -item
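A small sketch that sweeps every non-Enabled path into Defined state in one go (lspath prints status, disk and parent in that order):

# lspath | grep -v Enabled | while read status disk parent
> do
> rmpath -l $disk -p $parent
> done
# cfgmgr    # rediscover the paths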

-------------------------------------------------
Change adapter setting online:

rmpath -d -p vscsi0                             <-- removes all paths from the adapter
                                                    (rmpath -dl hdisk0 -p vscsi0 removes only the specified path)
rmdev -l vscsi0                                 <-- puts the adapter into Defined state
chdev -l vscsi0 -a vscsi_err_recov=fast_fail    <-- changes the adapter setting (if -P is used it will be activated after reboot)
cfgmgr -l vscsi0                                <-- configures the adapter back

-------------------------------------------------------------------------------------------------------------------------------------------------

SDDPCM:
SDDPCM is a loadable path control module designed to support the multipath
configuration environment in the IBM TotalStorage Enterprise Storage Server, the
IBM System Storage SAN Volume Controller, and the IBM TotalStorage DS family.
When the supported devices are configured as MPIO-capable devices, SDDPCM is
loaded and becomes part of the AIX MPIO FCP (Fibre Channel Protocol) device
driver. The AIX MPIO device driver with the SDDPCM module enhances the data
availability and I/O load balancing.
You cannot install SDD and SDDPCM together on a server. When
supported storage devices are configured as non-MPIO capable devices (that is,
multiple logical device instances are created for a physical LUN), you should
install SDD to get multipath support. Where only one logical device instance is
created for a physical LUN you must install SDDPCM.
SDDPCM server daemon:
SDDPCM has a server daemon running in the background (lssrc/stopsrc/startsrc -s pcmsrv). pcmsrv provides the path-recovery function for SDDPCM devices.
These modules are added to the kernel: sddpcmke, sdduserke.
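Checking the daemon looks like this (PID illustrative):

# lssrc -s pcmsrv
Subsystem         Group            PID          Status
 pcmsrv                            254012       active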

pcmpath
pcmpath query adapter                         shows adapter configuration
pcmpath query version                         shows the version of SDDPCM
pcmpath query device                          shows the SDDPCM devices (pcmpath query device 44 <-- shows only this device)
pcmpath query essmap                          good overview
pcmpath set device algorithm                  dynamically changes the path selection algorithm
pcmpath set device hc_mode                    dynamically changes the path health check mode
pcmpath set device hc_interval                dynamically changes the path health check time interval
pcmpath set device M path N online/offline    dynamically enables (online) or disables (offline) a path
pcmpath set adapter N online/offline          dynamically enables (online) or disables (offline) an adapter
(SDDPCM reserves the last path of a device; the command will fail if the device is using the last path)
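As a worked example, taking a single path offline and back online might look like this (device and path numbers are illustrative):

# pcmpath set device 44 path 1 offline    # disable path 1 of device 44
# pcmpath query device 44                 # path 1 should now be reported offline
# pcmpath set device 44 path 1 online     # re-enable the path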
pcmquerypr                     reads and clears persistent reserve and registration keys
pcmquerypr -vh /dev/hdisk30    queries and displays the persistent reservation (-V: verbose mode, more details)
pcmquerypr -rh /dev/hdisk30    releases the persistent reservation if the device is reserved by the current host
pcmquerypr -ch /dev/hdisk30    removes the persistent reservation and clears all reservation key registrations
pcmquerypr -ph /dev/hdisk30    removes the persistent reservation if the device is reserved by another host
pcmgenprkey                    sets or clears the PR_key_value ODM attribute for all SDDPCM MPIO devices
-------------------------------------------------
Change adapter settings (reconfigure paths):

1. If possible, put the required adapter offline with the subsystem driver (pcmpath, dlnkmgr):
   (this will put it into Disabled state)
   - pcmpath set adapter 3 offline
   - dlnkmgr offline -hba 08.07
2. Put all the Disabled paths into Defined state:
   - for i in `lspath | grep Dis | grep fscsiX | awk '{print $2}'`; do rmpath -l $i -p fscsiX; done
3. If there are other paths still in Enabled or Failed state, put them into Defined state as well:
   - rmpath -l hdisk9 -p fscsi1
4. Remove all devices under the mentioned adapter from the ODM:
   - rmdev -Rl fcs2
   - lsdev -p fscsi2    <-- should not show the disks
5. Change the adapter settings:
   - chdev -l fscsi1 -a dyntrk=yes -a fc_err_recov=fast_fail
   - chdev -l fcs1 -a init_link=pt2pt
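Finally, the adapter and its paths have to be configured back and verified; a sketch following the naming of step 5:

6. Configure the adapter back and verify the paths:
   - cfgmgr -l fcs1
   - lspath | grep fscsi1    <-- all paths should show Enabled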
