August 2011
Technical white paper
Contents
Introduction
Overview of the DRD method
    Assumptions for the DRD method
    Related Information
    Summary of Steps for the DRD method
Use the DRD clone to migrate to new hardware
    Step 1: Create a DRD clone of the source system
    Step 2: Modify file system sizes on the clone if needed
    Step 3: Identify and install additional target software on the clone
    Step 4: Determine additional kernel content that is needed on the target
    Step 5: Build a kernel on the clone suitable for the target
    Step 6 (Optional): Adjust target kernel tunables if needed
    Step 7: Set the system identity on the clone for boot on the target
    Step 8: Mark the clone LUN for identification from the target EFI
    Step 9: Move scenario: Disable the source system
    Step 10: Move storage
    Step 11: Boot the clone on the target system
    Step 12: Test the target system
    Step 13: If not successful, revise and repeat
Overview of the Ignite-UX Recovery Method
    Assumptions for the Ignite-UX Recovery Method
    Related Information
Detailed Migration Steps for the Ignite-UX Recovery Method
    Step 1: Install the latest IGNITE bundle on the Ignite-UX server
    Step 2: Make a recovery archive of the source system
    Step 3: Copy source client config to the target config
    Step 4: Clone scenario: Give the target read access to source recovery archive
    Step 5: Remove or modify the network information for the target
    Step 6: Change a recovery-related variable in control_cfg
    Step 7: Create depots with additional software for the target
    Step 8: Ensure iux_postload scripts from the recovery archive run correctly
    Step 9: Create config files for the new depots
    Step 10: Add the new config files to the target CINDEX
    Step 11: Modify file system sizes if needed
    Step 12: Check configuration syntax
    Step 13: Move scenario: Disable the source system
    Step 14: Deploy the configuration to the target system
    Step 15: Boot and test the target system
    Step 16: Return to the source, if the target is not as desired
Call to action
Introduction
Customers often find that they want to move an existing instance of an HP-UX installation to new hardware. The change may be to a new computer model or to a system of the same model with additional I/O or networking capacity. The move may provide greater compute capacity, more room for future expansion, lower power and cooling costs, and better use of data center real estate. In such cases, the prospect of setting the new system up from scratch can be daunting, since it may involve identifying and restoring all customizations made to the system. It is often preferable to move the previous system boot disk, either literally, or through a mechanism such as moving a DRD clone of the system, or deploying an Ignite-UX recovery image of the system, to the new hardware.

This paper provides complete descriptions of two techniques that can be used to migrate preexisting HP-UX 11i v3 systems to new computers, which raises the question of which technique is preferable given your circumstances. The following list pairs some common migration situations with a recommendation for the method to use:

If you are moving the entire system data as well as boot disks:
    Move the boot disk or DRD clone, along with all data. Device files are preserved and very little additional setup is needed.

If you do not have an Ignite-UX server set up already:
    Move the boot disk or a DRD clone.

If you have many boot disks on a given system or hard partition, making it challenging to identify a particular boot disk:
    Deploy an Ignite-UX recovery image.

If your root disk is managed by VxVM:
    Deploy an Ignite-UX recovery image. DRD rehosting does not support VxVM roots.

If you need to clone or move an instance of HP-UX 11i v1:
    Consult Successful System Cloning using Ignite-UX, located at http://www.hp.com/go/ignite-ux-docs, to see if this is possible. DRD is not available on HP-UX 11i v1.

If you only use internal boot disks that are not hot-pluggable:
    Deploy an Ignite-UX recovery image.

If you are moving a physical system to a vPar:
    If the physical source system does not support vPars, deploy an Ignite-UX recovery image plus an additional depot of vPars software. If the physical system already has vPars installed, move the boot disk or DRD clone, or deploy an Ignite-UX recovery image.

If you are moving a physical system to a VM guest:
    Create the guest with hpvmcreate. Prepare the DRD clone, or the Ignite-UX recovery image, being sure to add the module hpvmdynmem and install the additional VM guest depot from the VM host. See also Using Ignite-UX with Integrity Virtual Machines, located at http://www.hp.com/go/hpux-hpvmdocs.

If you want to move an entire set of vPars to a next-generation server:
    Set up the vPars on the new hardware from the iLO, then either move DRD clones or deploy Ignite-UX recovery images.

If you are moving a VM guest from one VM host to a new VM host:
    Use online or offline VM migration.

In any of these situations, the alternative remains setting the new system up from scratch, that is, installing the new system from depots.
Related Information
Additional information regarding Dynamic Root Disk can be obtained from the DRD documentation web site located at http://www.hp.com/go/drd-docs. The documents located on this site include the following:

    Dynamic Root Disk Administrator's Guide
    Dynamic Root Disk Quick Start & Best Practices
    Exploring DRD Rehosting in HP-UX 11i v2 and 11i v3
1. Create a DRD clone of the source system on storage that can be moved to the target.
2. Modify the file system sizes on the clone, if needed.
3. Identify and install additional target software on the clone.
4. Determine additional kernel content that is needed on the target.
5. Build a kernel on the clone suitable for the target.
6. Optional: Adjust target kernel tunables, if needed.
7. Set the system identity on the clone for boot on the target.
8. Mark the clone LUN for identification from the target EFI.
9. Move scenario: Disable the source system.
10. Move storage.
11. Boot the clone on the target system.
12. Test the target system.
13. If the target does not satisfy expectations, revise and repeat the process.
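The core DRD commands behind these steps can be sketched as a short command sequence. This is a sketch only: the LUN path, depot location, software selection, and sysinfo file below are hypothetical examples, and DRD_RUN defaults to echo so the commands are printed rather than executed; clear DRD_RUN to run them on a real source system.

```shell
# Dry-run sketch of the key DRD migration commands (steps 1, 3, and 7).
DRD_RUN="${DRD_RUN:-echo}"               # "echo" = dry run; set DRD_RUN="" to execute
TARGET_LUN="/dev/disk/disk22"            # hypothetical movable LUN for the clone
SW_DEPOT="/var/depots/target_sw"         # hypothetical depot of target software
SYSINFO_FILE="/var/tmp/target_sysinfo"   # hypothetical sysinfo file for the target

# Step 1: create the clone on the movable LUN.
$DRD_RUN drd clone -x overwrite=true -t "$TARGET_LUN"

# Step 3: install additional target software on the clone
# ("TargetDrivers" is a hypothetical software selection).
$DRD_RUN drd runcmd swinstall -s "$SW_DEPOT" TargetDrivers

# Step 7: set the target identity on the clone.
$DRD_RUN drd rehost -f "$SYSINFO_FILE"
```

Steps 9 through 13 are operational (disabling the source, moving storage, booting and testing the target) and are covered in the corresponding sections below.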
swcopy command from the system containing the serial depot. See swcopy(1M) for more information. Once the target software has been included in accessible depots, install it to the clone. If one of the depots contains an entire new Operating Environment, the first installation should be run using drd runcmd update-ux with the Operating Environment depot as a source:

drd runcmd update-ux -s <OE_depot_location>

Other depots can be installed after the update using drd runcmd swinstall, as follows:

drd runcmd swinstall -s <directory_depot_location> <software_selection>

(Note that the command drd runcmd does not support serial depots.)
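Because drd runcmd cannot read serial depots, a serial depot must first be copied into a directory depot with swcopy. A sketch follows; the two paths are hypothetical examples, and SWCOPY defaults to echo so the command is printed rather than executed.

```shell
SWCOPY="${SWCOPY:-echo swcopy}"          # dry run; set SWCOPY=/usr/sbin/swcopy on a real system
SERIAL_DEPOT="/var/tmp/target_sw.depot"  # hypothetical serial (tape-format) depot
DIR_DEPOT="/var/depots/target_sw"        # hypothetical directory depot

# Copy everything (\*) from the serial depot into the directory depot,
# which drd runcmd swinstall can then use as a source.
$SWCOPY -s "$SERIAL_DEPOT" \* @ "$DIR_DEPOT"
```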
#!/usr/bin/sh
#
# merge_system_files - Merges system file from new hardware
#                      into system file on source clone
# $1 - system file from new hardware to be merged into
#      system file on DRD clone.
#
typeset -i module_found
system_new_hw=$1
system_merged=/var/opt/drd/mnts/sysimage_001/stand/system

cp -p ${system_merged} ${system_merged}.save

cat ${system_new_hw} | while read module_keyword module_name module_state
do
    module_found=0
    case $module_name in
        # ignore obsolete drivers: skip this line and go on to the next
        "fcd_fcp" | "fcd_vbus" | "usb_ms_scsi" | "sasd_vbus" )
            continue
            ;;
        *)
            if [[ ${module_keyword} = "module" ]]
            then
                # Look for this module in the clone's system file.
                grep ${module_name} ${system_merged} | \
                while read mod_keyword mod_name rest
                do
                    if [[ ${mod_keyword} = "module" ]]
                    then
                        if [[ ${module_name} = ${mod_name} ]]
                        then
                            module_found=1
                        fi
                    fi
                done
                if [[ ${module_found} -eq 0 ]]
                then
                    echo "Adding module ${module_name} ..."
                    echo "${module_keyword} ${module_name} ${module_state}" >> \
                        ${system_merged}
                fi
            fi
            ;;
    esac
done

Figure 1 /usr/local/bin/merge_system_files
Step 7: Set the system identity on the clone for boot on the target
This step is ordinarily needed for both the move and clone scenarios, since the MAC address(es) on the target will differ from those on the source. (The exception is the case where your MAC addresses have been virtualized through Virtual Connect and you are moving the VC profile to the new system.) To set the identity of the target (hostname, mapping of IP addresses to network interfaces, language, time zone, and so on), perform the following while logged onto the source system as root:

1. Create a sysinfo file modeled on the template supplied in /etc/opt/drd/default_sysinfo_file. This file contains the hostname, IP addresses or DHCP information, and other customizing information. If you prefer to wait until the target system boots to supply this information, leave the parameter SYSINFO_INTERACTIVE set to ALWAYS. Otherwise, comment out this variable and set the values of the other variables in the sysinfo file. Additional information regarding the content and syntax of the sysinfo file is available in the sysinfo(4) manpage, packaged in PHCO_39064 or any superseding patch. A sample sysinfo file, including the required parameter SYSINFO_HOSTNAME, appears below:

SYSINFO_HOSTNAME=myhost
SYSINFO_DHCP_ENABLE[0]=0
SYSINFO_MAC_ADDRESS[0]=0x0017A451E718
SYSINFO_IP_ADDRESS[0]=192.2.3.4
SYSINFO_SUBNET_MASK[0]=255.255.255.0
SYSINFO_ROUTE_GATEWAY=192.2.3.75
SYSINFO_ROUTE_DESTINATION[0]=default
SYSINFO_ROUTE_COUNT[0]=1
SYSINFO_DNS_DOMAIN=ours
SYSINFO_DNS_SERVER=192.2.3.50

2. Issue the command:

drd rehost -f <sysinfo_file_location>

For additional information about the drd rehost command, see the chapter "Rehosting and unrehosting systems" in the Dynamic Root Disk Administrator's Guide, available at http://www.hp.com/go/drd-docs, and the drd-rehost(1M) manpage.
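Before running drd rehost, a quick sanity check can confirm that the required SYSINFO_HOSTNAME parameter is present and not commented out. The sketch below runs against a scratch copy rather than a real sysinfo file; the file path and contents are hypothetical examples.

```shell
# Build a scratch sysinfo file resembling the sample, then verify that
# the required SYSINFO_HOSTNAME parameter is present and uncommented.
SYSINFO=/tmp/sysinfo.demo
cat > "$SYSINFO" <<'EOF'
SYSINFO_HOSTNAME=myhost
SYSINFO_DHCP_ENABLE[0]=0
SYSINFO_IP_ADDRESS[0]=192.2.3.4
EOF

if grep -q '^SYSINFO_HOSTNAME=' "$SYSINFO"
then
    echo "sysinfo check passed"
else
    echo "sysinfo check failed: SYSINFO_HOSTNAME is required" >&2
fi
```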
Step 8: Mark the clone LUN for identification from the target EFI
It can be challenging to identify the clone LUN after it is moved to the target system. Since the LUN is partitioned, it is displayed with fs entries. However, if multiple partitioned disks are visible from the EFI menus of the target, an extra marker can help to identify the LUN. To create the marker, issue the following on the source system:

# touch /tmp/move_to_new_hw
# efi_mkdir -d <dsf of EFI partition of clone> EFI/HPUX/DRD
# efi_cp -d <dsf of EFI partition of clone> /tmp/move_to_new_hw \
    EFI/HPUX/DRD/move_to_new_hw
You may also want to run /usr/sbin/setboot to set the primary boot path to the current boot disk, as determined by vgdisplay. After the next reboot, you can speed up subsequent boots by using the steps above to reset EFIFCScanLevel back to 0.
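The setboot call can be scripted once the boot disk is known. In this sketch, the disk path is a hypothetical example standing in for the physical volume reported by vgdisplay -v vg00, and SETBOOT defaults to echo so the boot path is not actually changed.

```shell
SETBOOT="${SETBOOT:-echo setboot}"  # dry run; set SETBOOT=/usr/sbin/setboot to apply
BOOT_DISK="/dev/disk/disk22"        # hypothetical: the PV shown by vgdisplay -v vg00

# Set the primary boot path to the current boot disk.
$SETBOOT -p "$BOOT_DISK"
```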
For example, if the source is running 11i v3, the target must support 11i v3 as well. The target may need a newer revision of 11i v3, such as the March 2010 release, but not a new major release of HP-UX.
Related Information
Additional information regarding Ignite-UX can be obtained from the Ignite-UX documentation web site, http://www.hp.com/go/ignite-ux-docs. This includes the following:

    Ignite-UX Administration Guide (March 2010, B3921-90006)
    Successful System Cloning using Ignite-UX
    Successful System Recovery using Ignite-UX
    Installing and Updating Ignite-UX
The white paper Using Ignite-UX with Integrity Virtual Machines provides additional information regarding the use of Ignite-UX to set up Integrity Virtual Machines. It is available at http://www.hp.com/go/hpux-hpvm-docs.
For example, if the Ignite-UX server hostname is ignsvr, the command would be:

/opt/ignite/bin/make_net_recovery -s ignsvr -A

For more information about creating recovery archives, see the Recovery chapter of the Ignite-UX Administration Guide for HP-UX 11i, available at http://www.hp.com/go/ignite-ux-docs.
For the example, check whether the archive shown by

$ ll /var/opt/ignite/clients/tgtsys/recovery/latest

matches the cfg clause set to TRUE in /var/opt/ignite/clients/tgtsys/CINDEX.
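The cfg clause marked TRUE can be picked out mechanically. The sketch below runs against a scratch CINDEX whose layout is a simplified assumption, not copied from a real client; the cfg names are hypothetical.

```shell
# Build a simplified scratch CINDEX and report which cfg clause is marked TRUE.
CINDEX=/tmp/CINDEX.demo
cat > "$CINDEX" <<'EOF'
cfg "HP-UX B.11.31 Default" {
    description "Default install"
}
cfg "2011-08,recovery" {
    description "Recovery archive"
}=TRUE
EOF

# Remember the most recent cfg line; print it when the closing brace
# carries =TRUE.
awk '/^cfg /{name=$0} /}=TRUE/{print name}' "$CINDEX"
```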
Step 4: Clone scenario: Give the target read access to the source recovery archive
In the move scenario, the source and target systems have the same hostname, so the target system already has network access to the recovery archive. If the Ignite-UX server is running 11i v3 or later, edit the /etc/dfs/dfstab file to allow access to both the source and target clients as follows:

1. Open the dfstab file:

# vi /etc/dfs/dfstab

2. Append the following to the -o option list of the line for the source system:

,ro=<target hostname>

For example, if the source hostname is srcsys and the target hostname is tgtsys, change the line

share -F nfs -o anon=2,rw=srcsys \
    /var/opt/ignite/recovery/archives/srcsys

to

share -F nfs -o anon=2,rw=srcsys,ro=tgtsys \
    /var/opt/ignite/recovery/archives/srcsys

3. Re-share the exported file systems:

# shareall -F nfs

See the manpages dfstab(4) and share_nfs(1M) for more information.

If the Ignite-UX server is running a release prior to 11i v3, edit the /etc/exports file to allow access to both the source and target clients:

1. Open the exports file:

# vi /etc/exports

2. Append the following to the access argument of the source client's line:

:<target hostname>
For example, if the source hostname is srcsys and the target hostname is tgtsys, change the line

/var/opt/ignite/recovery/archives/srcsys -anon=2,access=srcsys

to

/var/opt/ignite/recovery/archives/srcsys -anon=2,access=srcsys:tgtsys

3. Re-export the file systems:

# exportfs -av
See exports(4) for more information
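The dfstab edit in this step can also be made non-interactively. Here is a sketch using sed on a scratch copy, with the example hostnames srcsys and tgtsys from the text; the real edit would target /etc/dfs/dfstab.

```shell
# Demonstrate the dfstab edit on a scratch copy rather than /etc/dfs/dfstab.
DFSTAB=/tmp/dfstab.demo
cat > "$DFSTAB" <<'EOF'
share -F nfs -o anon=2,rw=srcsys /var/opt/ignite/recovery/archives/srcsys
EOF

# Append ",ro=tgtsys" to the -o option list of the srcsys share line.
sed 's|rw=srcsys|rw=srcsys,ro=tgtsys|' "$DFSTAB" > "$DFSTAB.new"
cat "$DFSTAB.new"
```

The same pattern applies to the pre-11i v3 /etc/exports edit, substituting access=srcsys:tgtsys for the access list.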
The network information for the target can instead be supplied when the target system is deployed, either by specifying it directly on Ignite-UX menus, or by choosing on the menus to supply the information when the system is first booted.

To remove the pre-existing network information, edit /var/opt/ignite/clients/<target hostname>/recovery/latest/system_cfg to remove the _hp_custom_sys stanza. Alternatively, the stanza may be commented out by inserting # in column 1 of the lines it contains. Here is a sample of the _hp_custom_sys stanza that should be commented out:

#
# System/Networking Parameters
#
#_hp_custom_sys+={"HP-UX save_config custom sys"}
#init _hp_custom_sys="HP-UX save_config custom sys"
#_hp_custom_sys visible_if false
#(_hp_custom_sys=="HP-UX save_config custom sys") {
#    final system_name="<source hostname>"
#    final ip_addr["<source NIC hw path>"]="<source address>"
#    final netmask["<source NIC hw path>"]="<source mask in hex>"
#    final broadcast_addr["<source NIC hw path>"]="<broadcast>"
#    init _hp_default_final_lan_dev="<source NIC hw path>"
#    final route_destination[0]="default"
#    final route_gateway[0]="<source gateway>"
#    final route_count[0]=1
#    final nis_domain="udl"
#    final wait_for_nis_server=TRUE
#    final dns_domain="<DNS domain>"
#    final dns_nameserver[0]="<IP address of DNS server>"
#    is_net_info_temporary=FALSE
#} # end "HP-UX save_config custom sys"

Prior to deploying the target system, determine the network configuration information needed for it. This is the same information that is needed to cold install the target system from a depot, including whether DHCP is used to manage the interfaces, IP addresses (if DHCP is not used), subnet masks, gateways, and optional NIS and DNS servers. If you prefer to modify the information in the system_cfg file itself, and have multiple network interfaces on the target system, you may need to identify the hardware path for each NIC prior to editing system_cfg.
See instl_adm(4) for further information about the syntax of the networking parameters in the system_cfg file.
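Commenting out the stanza can be scripted as well. The sketch below works on a scratch file; the range pattern assumes the stanza ends with the end "HP-UX save_config custom sys" comment, as in the sample above, and the stanza contents are abbreviated hypothetical examples.

```shell
# Comment out the _hp_custom_sys stanza in a scratch copy of system_cfg.
CFG=/tmp/system_cfg.demo
cat > "$CFG" <<'EOF'
init _hp_custom_sys="HP-UX save_config custom sys"
(_hp_custom_sys=="HP-UX save_config custom sys") {
    final system_name="srcsys"
} # end "HP-UX save_config custom sys"
other_setting=1
EOF

# Insert "#" in column 1 from the first _hp_custom_sys line through the
# comment that closes the stanza; later lines are left untouched.
sed '/_hp_custom_sys/,/end "HP-UX save_config custom sys"/ s/^/#/' \
    "$CFG" > "$CFG.new"
cat "$CFG.new"
```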
Step 8: Ensure iux_postload scripts from the recovery archive run correctly
Currently, Ignite-UX determines the list of all iux_postload scripts after the recovery archive is installed. However, the iux_postload scripts are not actually run until after the depots are also loaded. This is not the correct processing for migration to new hardware. In this case, the scripts need to run before additional kernel software is installed, which may replace products with newer revisions, thus changing or removing iux_postload scripts. To ensure that the iux_postload scripts are run at the right time, and that the script Ignite-UX executes later, at the wrong time, is harmless, create the following two scripts, with owner bin:bin and permission 755: /var/opt/ignite/scripts/run_iux_postloads, as shown in Figure 2, and /var/opt/ignite/scripts/restore_iux_postloads, as shown in Figure 3.
#!/bin/sh
IPD_DIR=/var/adm/sw/products
IUX_SCRIPT_NAME=iux_postload

/usr/bin/find ${IPD_DIR} -name ${IUX_SCRIPT_NAME} | while read script_path
do
    echo " Running ${script_path} .... "
    ${script_path}
    # Need to leave a harmless script named iux_postload
    # for IUX to run later.
    /usr/bin/mv ${script_path} ${script_path}.save
    echo "/sbin/true" > ${script_path}
    echo "# To be removed after migration" >> ${script_path}
    /usr/bin/chmod 744 ${script_path}
done
exit 0

Figure 2 /var/opt/ignite/scripts/run_iux_postloads
#!/bin/sh
IPD_DIR=/var/adm/sw/products
IUX_SCRIPT_NAME=iux_postload

find ${IPD_DIR} -name ${IUX_SCRIPT_NAME}.save | while read saved_path
do
    script_path=`echo ${saved_path} | \
        sed -e 's/iux_postload.save/iux_postload/'`
    if [[ -e ${script_path} ]]
    then
        # The iux_postload exists. It may be the one we created,
        # or it may have been delivered by a new revision of the product.
        # Only in the first case should we restore it to the version
        # we saved, so look for the identifying comment.
        grep -q "To be removed after migration" ${script_path}
        if [[ $? -eq 0 ]]
        then
            echo " Restoring ${script_path} .... "
            /usr/bin/mv ${saved_path} ${script_path}
        #else Didn't find identifying string.
        #     Subsequent release must have delivered new iux_postload.
        #     Don't touch.
        fi
    #else Didn't find the script at all.
    #     Subsequent release must have removed it. Don't do anything.
    fi
    # Remove saved_path, which may or may not exist.
    /usr/bin/rm -f ${saved_path}
done
exit 0

Figure 3 /var/opt/ignite/scripts/restore_iux_postloads
Create a config file, /var/opt/ignite/clients/<target hostname>/run_iux_postloads_cfg, as shown in Figure 4, that runs the scripts shown in Figures 2 and 3.

sw_source "KernelFixup" {
    source_format = CMD
    load_order = 1
}
init sw_sel "Run KernelFixup" {
    description = "Run iux_postloads from archive"
    sw_source = "KernelFixup"
    sw_category = "KernelFixupCategory"
    post_load_script = "/var/opt/ignite/scripts/run_iux_postloads"
    post_config_script = "/var/opt/ignite/scripts/restore_iux_postloads"
} = TRUE

Figure 4 /var/opt/ignite/clients/<target hostname>/run_iux_postloads_cfg
The specification of %match installs software and patches in the OE that match software that was included in the recovery archive. The specification of load_order=2 ensures that the depot is processed after the recovery archive and the execution of the iux_postload scripts. You can use any value for category_tag other than HPUXEnvironments, which has special meaning and is already used for the recovery archive.

For each additional depot, create /var/opt/ignite/clients/<target hostname>/errata_cfg<n>, as shown in Figure 6, where you have filled in the hostname of the SD depot server and the location of the depot with the Errata contents:

sw_source "Errata" {
    description = "HP-UX Errata Software"
    source_format = SD
    sd_server = <IP address of depot server>
    sd_depot_dir = <absolute directory path of SD depot dir>
    source_type = "NET"
    load_order = 3
}
init sw_sel "Errata_selection" {
    description = "Additional software for model xxxx"
    sw_source = "Errata"
    sw_category = "Additional"
    sd_software_list = "additional_sw1 additional_sw2"
} = TRUE
Figure 6 /var/opt/ignite/clients/<target hostname>/errata_cfg<n>

For the sd_software_list, list the actual bundles, products, or patches that have been included in the errata documentation. If you want to install all the software in the depot, you can specify a selection of *. The specification load_order=3 ensures that the depot is processed after the OE depot.
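To find the bundle names to place in sd_software_list, the depot contents can be listed with swlist. A sketch follows; the server name and depot path are hypothetical examples, and SWLIST defaults to echo so the command is printed rather than executed.

```shell
SWLIST="${SWLIST:-echo swlist}"   # dry run; set SWLIST=/usr/sbin/swlist on a real system
DEPOT_SERVER="depotsvr"           # hypothetical SD depot server
DEPOT_DIR="/var/depots/errata"    # hypothetical depot directory

# List the bundles available in the remote depot.
$SWLIST -l bundle -s "$DEPOT_SERVER:$DEPOT_DIR"
```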
Step 10: Add the new config files to the target CINDEX
1. If multiple cfg clauses appear in /var/opt/ignite/clients/<target hostname>/CINDEX, choose the one set equal to TRUE to be <cfg name> in the commands below.

2. Use manage_index to add the new config files to that cfg clause:

# /opt/ignite/bin/manage_index -a \
    -f /var/opt/ignite/clients/<target hostname>/run_iux_postloads_cfg \
    -c "<cfg name>" -v -i /var/opt/ignite/clients/<target hostname>/CINDEX

# /opt/ignite/bin/manage_index -a \
    -f /var/opt/ignite/clients/<target hostname>/updateOE_cfg \
    -c "<cfg name>" -v -i /var/opt/ignite/clients/<target hostname>/CINDEX

# /opt/ignite/bin/manage_index -a \
    -f /var/opt/ignite/clients/<target hostname>/errata_cfg<n> \
    -c "<cfg name>" -v -i /var/opt/ignite/clients/<target hostname>/CINDEX

If additional depots are used, use manage_index to add the config file corresponding to each depot.
A list of LAN devices is displayed. Choose the device that has network connectivity to the Ignite-UX server. Since a Direct Boot Profile is being used, the Ignite-UX server does not need to be on the same subnet as the target.
From the Ignite-UX installation screens, choose the configuration that was created in the previous steps. If the system is being cloned, specify the correct configuration (hostname, IP address, and so on) to be used for the target system.
Call to action
HP welcomes your input. Please give us comments about this white paper, or suggestions for related documentation, through our technical documentation feedback website: http://www.hp.com/bizsupport/feedback/ww/webfeedback.html
© 2011 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice. The only warranties for HP products and services are set forth in the express warranty statements accompanying such products and services. Nothing herein should be construed as constituting an additional warranty. HP shall not be liable for technical or editorial errors or omissions contained herein. 5900-1078, August 2011