
http://blogs.corenetworks.es/2009/08/how-to-create-a-zone-cluster-in-sun-cluster-3-2u2/
https://blogs.oracle.com/js/entry/configuration_steps_to_create_a
http://system-log.tyr.org.uk/2008/06/27/building-a-solaris-cluster-express-cluster-in-a-virtualbox-on-opensolaris/
cat /.profile
PATH=$PATH:/usr/cluster/bin:
export PATH
MANPATH=/usr/share/man:/usr/cluster/man:
export MANPATH

dladm show-dev

ifconfig qfe0 plumb up
ifconfig qfe0 10.0.0.33 netmask 255.0.0.0
psrinfo
prtconf | grep Memory
bash-3.2# clzonecluster configure zonecluster1
zonecluster1: No such zone cluster configured
Use 'create' to begin configuring a new zone cluster.
clzc:zonecluster1> create
clzc:zonecluster1> set zonepath=/zones/zc1
clzc:zonecluster1> add node
clzc:zonecluster1:node> set physical-host=Node1
clzc:zonecluster1:node> set hostname=node1zone1
clzc:zonecluster1:node> add net
clzc:zonecluster1:node:net> set address=169.144.76.96
clzc:zonecluster1:node:net> set physical=e1000g3
clzc:zonecluster1:node:net> end
clzc:zonecluster1:node> end
clzc:zonecluster1> add sysid
clzc:zonecluster1:sysid> set root_password=xOcjF.i8oifeU
clzc:zonecluster1:sysid> end
clzc:zonecluster1> commit
clzc:zonecluster1> exit

bash-3.2# clzonecluster configure zonecluster1
clzc:zonecluster1> add node
clzc:zonecluster1:node> set physical-host=Node2
clzc:zonecluster1:node> set hostname=node2zone1
clzc:zonecluster1:node> add net
clzc:zonecluster1:node:net> set address=169.144.76.99
clzc:zonecluster1:node:net> set physical=e1000g3
clzc:zonecluster1:node:net> end
clzc:zonecluster1:node> end
clzc:zonecluster1> commit
clzc:zonecluster1> exit

bash-3.2# clzonecluster verify zonecluster1
Waiting for zone verify commands to complete on all the nodes of the zone cluster "zonecluster1"...
bash-3.2# clzonecluster status zonecluster1

=== Zone Clusters ===

--- Zone Cluster Status ---

Name           Node Name   Zone Host Name   Status    Zone Status
----           ---------   --------------   ------    -----------
zonecluster1   Node1       node1zone1       Offline   Configured
               Node2       node2zone1       Offline   Configured

bash-3.2# clzonecluster install zonecluster1

bash-3.2# clzonecluster boot zonecluster1
bash-3.2# clzonecluster status zonecluster1
bash-3.2# zoneadm list -iv
bash-3.2# zlogin zonecluster1

bash-3.2# ifconfig -a

/usr/cluster/bin/cluster status
export PATH=$PATH:/usr/cluster/bin

bash-3.2# clrg create -n node1zone1,node2zone1 app-rg
bash-3.2# clrt register SUNW.HAStoragePlus

---------

Master zone
bash-3.2# zpool create zdata c2t1d0
bash-3.2# clzc configure zonecluster1
clzc:zonecluster1> add dataset
clzc:zonecluster1:dataset> set name=zdata
clzc:zonecluster1:dataset> end
clzc:zonecluster1> exit
bash-3.2# clzc show -v zonecluster1
bash-3.2# clrs create -g app-rg -t SUNW.HAStoragePlus -p zpools=zdata app-hasp-rs
bash-3.2# clrg online -eM app-rg
bash-3.2# clrg switch -n node2zone1 app-rg
bash-3.2# clrg offline app-rg
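After the switch above (and before taking the group offline), a quick way to confirm that the zdata pool actually followed the resource group — a minimal sketch, assuming you are logged in to the zone-cluster node node2zone1 and that the dataset keeps its default mount point:

bash-3.2# clrs status app-hasp-rs
bash-3.2# zfs list
bash-3.2# df -h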

MasterZone
----------------
bash-3.2# clzc configure zonecluster1
clzc:zonecluster1> add net
clzc:zonecluster1:net> set address=169.144.76.103
clzc:zonecluster1:net> end
clzc:zonecluster1> commit
clzc:zonecluster1> exit
bash-3.2# clzonecluster reboot zonecluster1

ZoneCluster
--------------
Add 169.144.76.103 to /etc/hosts in the zone-cluster nodes.
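With the .103 address now authorized for the zone cluster, it can be placed under cluster control as a logical hostname. A minimal sketch, run inside a zone-cluster node, using a hypothetical hostname zc1-lh that maps to 169.144.76.103 in /etc/hosts:

bash-3.2# clrslh create -g app-rg -h zc1-lh app-lh-rs
bash-3.2# clrs status app-lh-rs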


bash-3.2# clzonecluster verify zonecluster1
Waiting for zone verify commands to complete on all the nodes of the zone cluster "zonecluster1"...
bash-3.2# clzonecluster status zonecluster1

=== Zone Clusters ===

--- Zone Cluster Status ---

Name           Node Name   Zone Host Name   Status    Zone Status
----           ---------   --------------   ------    -----------
zonecluster1   Node1       node1zone1       Offline   Configured
               Node2       node2zone1       Offline   Configured

bash-3.2# clzonecluster install zonecluster1
Waiting for zone install commands to complete on all the nodes of the zone cluster "zonecluster1"...

bash-3.2# clzonecluster boot zonecluster1
Waiting for zone boot commands to complete on all the nodes of the zone cluster "zonecluster1"...
bash-3.2# clzonecluster status zonecluster1

=== Zone Clusters ===

--- Zone Cluster Status ---

Name           Node Name   Zone Host Name   Status    Zone Status
----           ---------   --------------   ------    -----------
zonecluster1   Node1       node1zone1       Offline   Running
               Node2       node2zone1       Offline   Running

bash-3.2# zoneadm list
global
zonecluster1
bash-3.2# zoneadm list -iv
  ID NAME           STATUS    PATH          BRAND     IP
   0 global         running   /             native    shared
   1 zonecluster1   running   /zones/zc1    cluster   shared

1. Add a resource to resource-group and set a dependency for the resource on hasp-resource. If you have more than one resource to add to the resource group, use a separate command for each resource.

phys-schost# clresource create -g resource-group -t resource-type \
-p Network_resources_used=hasp-resource resource

-t resource-type
    Specifies the resource type that you create the resource for.

-p Network_resources_used=hasp-resource
    Specifies that the resource has a dependency on the HAStoragePlus resource, hasp-resource.

resource
    The name of the resource that you create.
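One way to confirm the dependency after the resource exists — a minimal sketch using hypothetical names (app-rg for the group, app-rs for the new resource):

phys-schost# clresource show -p Network_resources_used app-rs
phys-schost# clresource status -g app-rg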

http://docs.oracle.com/cd/E18728_01/html/821-2845/gbvoc.html
http://blogs.corenetworks.es/2009/08/how-to-create-a-zone-cluster-in-sun-cluster-32u2/
https://blogs.oracle.com/js/entry/configuration_steps_to_create_a
http://docs.oracle.com/cd/E19787-01/820-7358/cacjggea/index.html
http://docs.oracle.com/cd/E37745_01/html/E37724/ghfwv.html
http://www.scribd.com/doc/77444243/Oracle-Solaris-Cluster-Essentials-Oracle-SolarisSystem-Administration-Series-9780132486224-53276
http://wenku.baidu.com/view/68f9c610a2161479171128cc.html
https://blogs.oracle.com/orasysat/entry/zones_clusters_clustering_zones_zoneclusters

1. (Optional) Enable remote login by the LDAP server to the global-cluster node.
   a. In the /etc/default/login file, comment out the CONSOLE entry.
   b. Enable remote login.

phys-schost# svcadm enable rlogin


c. Modify the /etc/pam.conf file. Modify the account management entries by appending a Tab and typing allow_remote or allow_unlabeled respectively, as shown below.

other   account requisite   pam_roles.so.1          Tab   allow_remote
other   account required    pam_unix_account.so.1   Tab   allow_unlabeled

2. Modify the admin_low template.
   a. Assign the admin_low template to each IP address that does not belong to a Trusted Extensions machine that is used by the global zone.

   b. # tncfg -t admin_low
   c. tncfg:admin_low> add host=ip-address1
   d. tncfg:admin_low> add host=ip-address2
   e. tncfg:admin_low> exit

svcs multi-user-server node

clquorum list -v
clquorum show

clquorum add -i clconfigfile device-name

Add Quorum Device


cluster status -t node
clquorum list
phys-schost# clquorum remove device-name
phys-schost# clquorum status
phys-schost# cldevice populate
phys-schost# ps -ef | grep scgdevs
phys-schost# cldevice list -v
phys-schost# clquorum add -t type device-name

-t type
    Specifies the type of quorum device. If this option is not specified, the default type shared_disk is used.

phys-schost# clquorum list
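A minimal sketch of adding a shared-disk quorum device, assuming a hypothetical DID device d4 that both nodes can see (check with cldevice list -v first); -t shared_disk is the default, so it can be omitted:

phys-schost# clquorum add d4
phys-schost# clquorum status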

Register the resource types as needed
# clresourcetype register -N SUNW.oracle_server
# clresourcetype register -N SUNW.oracle_listener
# clresourcetype register -N SUNW.HAStoragePlus

Create the resource group
# clresourcegroup create -n nodea:zonex,nodeb:zoney oracle-rg

Add the logical host name resource to the Oracle resource group
# clreslogicalhostname create -g oracle-rg oracle-lh

Add a resource of type SUNW.HAStoragePlus to the resource group
# clresource create -g oracle-rg -t SUNW.HAStoragePlus \
-p FileSystemMountPoints=/oracle:/failover/oracle \
-p AffinityOn=TRUE oracle-data-store-rs

Add the Oracle server resource to the resource group
# ORACLE_HOME=/oracle/oracle/product/10.2.0/db_1
# export ORACLE_HOME
# clresource create -g oracle-rg -t SUNW.oracle_server \
-p ORACLE_HOME=${ORACLE_HOME} \
-p Alert_log_file=${ORACLE_HOME}/admin/acme/bdump/alert_acme.log \
-p ORACLE_SID=acme -p Connect_string=hamon/H@M0nPwd \
-p Resource_dependencies=oracle-data-store-rs oracle-svr-rs
# clresource create -g oracle-rg \
-t SUNW.oracle_listener -p ORACLE_HOME=${ORACLE_HOME} \
-p LISTENER_NAME=LISTENER_acme oracle-lsnr-rs
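The example stops at resource creation; bringing the group online and checking the result would look like this (a sketch using the same names):

# clresourcegroup online -eM oracle-rg
# clresource status -g oracle-rg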

Create a resource group that can hold services in nodea zone zonex and nodeb zone zoney
nodea# clresourcegroup create -n nodea:zonex,nodeb:zoney test-rg

Make sure the HAStoragePlus resource type is registered
nodea# clresourcetype register SUNW.HAStoragePlus

Now add a UFS [or VxFS] failover file system: mount /bigspace1 to /fail-over/export/install in the NGZ
nodea# clresource create -t SUNW.HAStoragePlus -g test-rg \
-p FilesystemMountPoints=/fail-over/export/install:/bigspace1 \
ufs-hasp-rs

Now add a QFS failover file system: mount /bigspace2 to /fail-over/export/share in the NGZ
nodea# clresource create -t SUNW.HAStoragePlus -g test-rg \
-p FilesystemCheckCommand=/bin/true \
-p FilesystemMountPoints=/fail-over/export/install:/bigspace2 \
qfs-hasp-rs

Now add a zpool (not a file system); the mount point is governed by the zfs file system property
nodea# zfs set mountpoint=/extraspace myzpool/extraspace
nodea# clresource create -t SUNW.HAStoragePlus -g test-rg \
-p Zpools=myzpool zfs-hasp-rs
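A sketch of exercising the group once the resources exist (the switch target follows the node:zone form used above; names are the same as in the example):

nodea# clresourcegroup online -eM test-rg
nodea# clresourcegroup status test-rg
nodea# clresourcegroup switch -n nodeb:zoney test-rg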

http://docs.oracle.com/cd/E29086_01/html/E29085/fquzq.html#scrolltoc
http://www.googlux.com/suncluster33u1.html
http://docs.oracle.com/cd/E29086_01/html/E29085/ggzen.html
http://docs.oracle.com/cd/E19680-01/html/821-1255/fquzq.html

zone
phys-grass1# zonecfg -z lzgrass1a
lzgrass1a: No such zone configured
Use 'create' to begin configuring a new zone.
zonecfg:lzgrass1a> create
zonecfg:lzgrass1a> set zonepath=/zones/lzgrass1a
zonecfg:lzgrass1a> add net
zonecfg:lzgrass1a:net> set address=lzgrass1a/24
zonecfg:lzgrass1a:net> set physical=bge0
zonecfg:lzgrass1a:net> set defrouter=10.11.112.1
zonecfg:lzgrass1a:net> end
zonecfg:lzgrass1a> set autoboot=true
zonecfg:lzgrass1a> commit
zonecfg:lzgrass1a> exit
phys-grass1# zoneadm -z lzgrass1a install
A ZFS file system has been created for this zone.
Preparing to install zone <lzgrass1a>.
Creating list of files to copy from the global zone.
Copying <10196> files to the zone.
Initializing zone product registry.
Determining zone package initialization order.
Preparing to initialize <958> packages on the zone.
Initialized <958> packages on zone.
Zone <lzgrass1a> is initialized.
Installation of <1> packages was skipped.
The file </zones/lzgrass1a/root/var/sadm/system/logs/install_log> contains a log of the zone installation.

phys-grass2# zonecfg -z lzgrass2a
lzgrass2a: No such zone configured
Use 'create' to begin configuring a new zone.
zonecfg:lzgrass2a> create
zonecfg:lzgrass2a> set zonepath=/zones/lzgrass2a
zonecfg:lzgrass2a> add net
zonecfg:lzgrass2a:net> set address=lzgrass2a/24
zonecfg:lzgrass2a:net> set physical=bge0
zonecfg:lzgrass2a:net> set defrouter=10.11.112.1
zonecfg:lzgrass2a:net> end
zonecfg:lzgrass2a> set autoboot=true
zonecfg:lzgrass2a> commit
zonecfg:lzgrass2a> exit
phys-grass2# zoneadm -z lzgrass2a install
A ZFS file system has been created for this zone.
Preparing to install zone <lzgrass2a>.
Creating list of files to copy from the global zone.
Copying <10196> files to the zone.
Initializing zone product registry.
Determining zone package initialization order.
Preparing to initialize <958> packages on the zone.
Initialized <958> packages on zone.
Zone <lzgrass2a> is initialized.
Installation of <1> packages was skipped.
The file </zones/lzgrass2a/root/var/sadm/system/logs/install_log> contains a log of the zone installation.

Boot both zones.
phys-grass1# zoneadm -z lzgrass1a boot
phys-grass2# zoneadm -z lzgrass2a boot

Use zlogin to log in to and configure the zones.
phys-grass1# zlogin -C lzgrass1a
. . .
rebooting system due to change(s) in /etc/default/init
[NOTICE: Zone rebooting]
phys-grass2# zlogin -C lzgrass2a
. . .
rebooting system due to change(s) in /etc/default/init
[NOTICE: Zone rebooting]

Create the resource group to hold the shared address.
phys-grass1# clrg create -n phys-grass1:lzgrass1a,phys-grass2:lzgrass2a web-svr-sa-rg

Create the shared address resource using the hostname tgrass1-0a.

phys-grass1# clrssa create -g web-svr-sa-rg -h tgrass1-0a web-svr-sa-rs

Bring the shared address resource group into the online and managed state.
phys-grass1# clrg online -eM web-svr-sa-rg

Create a file system that will be mounted globally to hold the web server binaries and data.
phys-grass1# newfs /dev/global/rdsk/d4s2
newfs: construct a new file system /dev/global/rdsk/d4s2: (y/n)? y
/dev/global/rdsk/d4s2: 41938944 sectors in 6826 cylinders of 48 tracks, 128 sectors
        20478.0MB in 427 cyl groups (16 c/g, 48.00MB/g, 5824 i/g)
super-block backups (for fsck -F ufs -o b=#) at:
 32, 98464, 196896, 295328, 393760, 492192, 590624, 689056, 787488, 885920,
Initializing cylinder groups: .......
super-block backups for last 10 cylinder groups at:
 40997024, 41095456, 41193888, 41292320, 41390752, 41489184, 41587616, 41686048, 41784480, 41882912

Add entries to the global zone /etc/vfstab files on both nodes.
phys-grass1# grep /global/websvr /etc/vfstab
/dev/global/dsk/d4s2 /dev/global/rdsk/d4s2 /global/websvr ufs - no global,logging

Create the mount points for /global/websvr in the global zone on both nodes.
phys-grass1# mkdir -p /global/websvr

Create the mount points for /global/websvr in the non-global zone on both nodes.
lzgrass1a# mkdir -p /global/websvr

Create a scalable resource group to hold the file system and web server resources.
phys-grass1# clrg create -n phys-grass1:lzgrass1a,phys-grass2:lzgrass2a \
> -p maximum_primaries=2 -p desired_primaries=2 web-svr-rg

Create the SUNW.HAStoragePlus resource that performs the loopback mount of the global file system in the non-global zones. In this case, the loopback mount mounts the file system at the same location in the zone.
phys-grass1# clrs create -g web-svr-rg -t SUNW.HAStoragePlus \
> -p FileSystemMountPoints=/global/websvr:/global/websvr web-svr-hasp-rs

Bring the resource group into the online and managed state.
phys-grass1# clrg online -eM web-svr-rg

Check the status of the web-svr-sa-rg and web-svr-rg resource groups.
phys-grass1# clrg status web-svr-sa-rg web-svr-rg

=== Cluster Resource Groups ===

Group Name      Node Name               Suspended   Status
----------      ---------               ---------   ------
web-svr-sa-rg   phys-grass1:lzgrass1a   No          Online

                phys-grass2:lzgrass2a   No          Offline

web-svr-rg      phys-grass1:lzgrass1a   No          Online
                phys-grass2:lzgrass2a   No          Online

Check that the file system is mounted in each zone.
lzgrass1a# df -h /global/websvr
Filesystem         size   used  avail capacity  Mounted on
/global/websvr      20G    20M    19G     1%    /global/websvr

Create a document root (docroot) for the web server.
lzgrass1a# mkdir -p /global/websvr/docroot

Create a cgi-bin directory.
lzgrass1a# mkdir -p /global/websvr/cgi-bin

Create a shell script called host.sh in the /global/websvr/cgi-bin directory, and then make the script executable.
lzgrass1a# cat /global/websvr/cgi-bin/host.sh
#!/bin/ksh
/bin/cat << EOF
Content-type: text/html
<html>
<body>The server host was $( /bin/uname -n ).</body>
</html>
EOF
lzgrass1a# chmod +x /global/websvr/cgi-bin/host.sh

In both zones, create a directory to store the web server logs and error files, and then change the owner and group of the directory to user nobody.
lzgrass1a# mkdir -p /var/sjsws/tgrass1-0a/logs
lzgrass1a# chown nobody:nobody /var/sjsws/tgrass1-0a/logs

In both zones, create the access log file and set the permissions on the file.
lzgrass1a# touch /var/sjsws/tgrass1-0a/logs/access
lzgrass1a# chown webservd:webservd /var/sjsws/tgrass1-0a/logs/access

Example 8.11 Completing the Web Server Configuration

Register the SUNW.iws resource type.
phys-grass1# clrt register SUNW.iws

Create the Sun Java System Web Server resource.
phys-grass1# clrs create -g web-svr-rg -t SUNW.iws \
> -p confdir_list=/global/websvr/webserver7/https-tgrass1-0a.example.com \
> -p scalable=true \
> -p resource_dependencies=web-svr-hasp-rs,web-svr-sa-rs \
> -p port_list=80/tcp \
> web-svr-rs

Because the web-svr-rg resource group is already online, the web servers are started immediately.

Obtain the status for the resources in both the web-svr-sa-rg and web-svr-rg resource groups.
phys-grass1# clrs status -g web-svr-sa-rg,web-svr-rg

=== Cluster Resources ===

Resource Name     Node Name               State     Status Message
-------------     ---------               -----     --------------
web-svr-sa-rs     phys-grass1:lzgrass1a   Online    Online - SharedAddress online.
                  phys-grass2:lzgrass2a   Offline   Offline

web-svr-hasp-rs   phys-grass1:lzgrass1a   Online    Online
                  phys-grass2:lzgrass2a   Online    Online

web-svr-rs        phys-grass1:lzgrass1a   Online    Online - Service is online.
                  phys-grass2:lzgrass2a   Online    Online - Service is online.

From a non-cluster node, check your web server load balancing using the wget command.
client-ws# /usr/sfw/bin/wget -q -O - \
> http://tgrass1-0a.example.com/cgi-bin/host.sh
<html>
<body>The server host was lzgrass1a.</body>
</html>
client-ws# /usr/sfw/bin/wget -q -O - \
> http://tgrass1-0a.example.com/cgi-bin/host.sh
<html>
<body>The server host was lzgrass2a.</body>
</html>

Disable the web server on one node, and try the command again.
phys-grass1# clrs disable -n phys-grass1:lzgrass1a web-svr-rs

This results in the same response each time.
client-ws# /usr/sfw/bin/wget -q -O - \
> http://tgrass1-0a.example.com/cgi-bin/host.sh
<html>
<body>The server host was lzgrass2a.</body>
</html>
client-ws# /usr/sfw/bin/wget -q -O - \
> http://tgrass1-0a.example.com/cgi-bin/host.sh
<html>
<body>The server host was lzgrass2a.</body>
</html>
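To restore load balancing after the test, the disabled instance would be re-enabled — a minimal sketch using the same names:

phys-grass1# clrs enable -n phys-grass1:lzgrass1a web-svr-rs
phys-grass1# clrs status web-svr-rs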

Example 8.12 Creating a Zone Cluster (Page 388)


From the global zone of either cluster node, create the zone cluster using the clzc command. When supplying the root_password property, you must provide a pre-encrypted entry. For example, you could take this entry from a user in an existing /etc/shadow file.

phys-grass1# clzc configure oracle-zc
oracle-zc: No such zone cluster configured
Use 'create' to begin configuring a new zone cluster.
clzc:oracle-zc> create

clzc:oracle-zc> set zonepath=/zones/oracle-zc
clzc:oracle-zc> set autoboot=true
clzc:oracle-zc> add node
clzc:oracle-zc:node> set physical-host=phys-grass1
clzc:oracle-zc:node> set hostname=lzgrass1b
clzc:oracle-zc:node> add net
clzc:oracle-zc:node:net> set address=lzgrass1b/24
clzc:oracle-zc:node:net> set physical=bge0
clzc:oracle-zc:node:net> end
clzc:oracle-zc:node> end
clzc:oracle-zc> add node
clzc:oracle-zc:node> set physical-host=phys-grass2
clzc:oracle-zc:node> set hostname=lzgrass2b
clzc:oracle-zc:node> add net
clzc:oracle-zc:node:net> set address=lzgrass2b/24
clzc:oracle-zc:node:net> set physical=bge0
clzc:oracle-zc:node:net> end
clzc:oracle-zc:node> end
clzc:oracle-zc> add sysid
clzc:oracle-zc:sysid> set system_locale=C
clzc:oracle-zc:sysid> set terminal=dtterm
clzc:oracle-zc:sysid> set security_policy=NONE
clzc:oracle-zc:sysid> set name_service="NIS{domain_name=dev.example.com name_server=nissvr(10.11.112.4)}"
clzc:oracle-zc:sysid> set root_password=*********
clzc:oracle-zc:sysid> set timezone=US/Pacific
clzc:oracle-zc:sysid> set nfs4_domain=dynamic
clzc:oracle-zc:sysid> end
clzc:oracle-zc> set limitpriv=default,proc_priocntl
clzc:oracle-zc> set max-shm-memory=4294967296
clzc:oracle-zc> add capped-cpu
clzc:oracle-zc:capped-cpu> set ncpus=1
clzc:oracle-zc:capped-cpu> end
clzc:oracle-zc> commit
clzc:oracle-zc> exit

phys-grass1# clzc install oracle-zc
Waiting for zone install commands to complete on all the nodes of the zone cluster "oracle-zc"...
A ZFS file system has been created for this zone.
Preparing to install zone <oracle-zc>.
Creating list of files to copy from the global zone.
Copying <10196> files to the zone.
Initializing zone product registry.
Determining zone package initialization order.
Preparing to initialize <958> packages on the zone.
Initialized <958> packages on zone.
Zone <oracle-zc> is initialized.
Installation of <1> packages was skipped.
The file </zones/oracle-zc/root/var/sadm/system/logs/install_log> contains a log of the zone installation.

From the global zone of one cluster node, create a zpool to hold the Oracle binaries.
phys-grass1# zpool create oracle-zc_zpool \

> mirror /dev/did/dsk/d5s2 /dev/did/dsk/d6s2 \
> mirror /dev/did/dsk/d7s2 /dev/did/dsk/d9s2

Configure the zone cluster to enable use of the logical host and zpool that will be used by the HA-Oracle database.
phys-grass1# clzc configure oracle-zc
clzc:oracle-zc> add net
clzc:oracle-zc:net> set address=tgrass1-0b
clzc:oracle-zc:net> end
clzc:oracle-zc> add dataset
clzc:oracle-zc:dataset> set name=oracle-zc_zpool
clzc:oracle-zc:dataset> end
clzc:oracle-zc> commit
clzc:oracle-zc> exit

From one cluster node, boot the oracle-zc zone cluster.


phys-grass1# clzc boot oracle-zc
Waiting for zone boot commands to complete on all the nodes of the zone cluster "oracle-zc"...
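A quick check (sketch) that both zone-cluster nodes came up after the boot:

phys-grass1# clzc status oracle-zc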

On all cluster nodes, ensure that tgrass1-0b is in the /etc/inet/hosts file.


phys-grass1# grep tgrass1-0b /etc/inet/hosts
10.11.112.30   tgrass1-0b

On one cluster node, log in to the zone-cluster node and create the Oracle resource group.
phys-grass1# zlogin -C oracle-zc
[Connected to zone 'oracle-zc' console]

lzgrass1b console login: root
Password:
Feb  9 06:56:38 lzgrass1b login: ROOT LOGIN /dev/console
Last login: Tue Feb  9 05:15:14 on console
Sun Microsystems Inc.   SunOS 5.10      Generic January 2005
# exec bash
lzgrass1b# PATH=$PATH:/usr/cluster/bin
lzgrass1b# export PATH

Create a resource group called oracle-zc-rg.


lzgrass1b# clrg create oracle-zc-rg

Determine if the SUNW.HAStoragePlus resource type is registered.


lzgrass1b# clrt list SUNW.LogicalHostname:3 SUNW.SharedAddress:2

If the SUNW.HAStoragePlus resource type is not registered, register it.


lzgrass1b# clrt register SUNW.HAStoragePlus

Create a resource of type SUNW.HAStoragePlus to mount the zpool for the Oracle binaries.
lzgrass1b# clrs create -g oracle-zc-rg -t SUNW.HAStoragePlus \
> -p zpools=oracle-zc_zpool oracle-zc-hasp-rs

Create a logical hostname resource that will be used by the Oracle listener.
lzgrass1b# clrslh create -g oracle-zc-rg \
> -h tgrass1-0b oracle-zc-lh-rs

Bring the resource group online.


lzgrass1b# clrg online -eM oracle-zc-rg

On all zone-cluster nodes, create identical oracle user and group IDs following the steps in Example 8.6. Change the ownership of the Oracle zpool that will be used for the Oracle software.
lzgrass1b# chown oracle:oinstall /oracle-zc_zpool

Log in as the oracle user on the zone-cluster node with the /oracle-zc_zpool mounted, and install the Oracle software following Example 8.6. This time, set ORACLE_BASE to /oracle-zc_zpool/app/oracle. If you installed sparse-root zones, you must choose an alternative to the /usr/local/bin directory. This is because the /usr/local/bin directory will be inherited read-only from the global zone. After the Oracle software installation process is complete, you must switch over the oracle-zc-rg resource group and run the root.sh script on the alternate zone-cluster node.
lzgrass1b# clrg switch -n lzgrass2b oracle-zc-rg

As the oracle user on the zone-cluster node that has the /oracle-zc_zpool file system mounted, create the listener and the database. You must have set up your Oracle environment variables and PATH correctly.
lzgrass2b$ echo $ORACLE_HOME
/oracle-zc_zpool/app/oracle/product/11.1.0/db_1
lzgrass2b$ echo $PATH
/usr/bin::/usr/sbin:/oracle-zc_zpool/app/oracle/product/11.1.0/db_1/bin
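One way to make these settings persistent is the oracle user's profile; a minimal sketch, assuming the profile location and the zcdemo SID used later in this example:

# ~oracle/.profile (sketch)
ORACLE_BASE=/oracle-zc_zpool/app/oracle
ORACLE_HOME=$ORACLE_BASE/product/11.1.0/db_1
ORACLE_SID=zcdemo
PATH=$PATH:$ORACLE_HOME/bin
export ORACLE_BASE ORACLE_HOME ORACLE_SID PATH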

As the oracle user, run netca.

lzgrass2b$ netca

The default choices produce the following listener.ora file.

lzgrass2b$ cat $ORACLE_HOME/network/admin/listener.ora
# listener.ora Network Configuration File: /oracle-zc_zpool/app/oracle/product/11.1.0/db_1/network/admin/listener.ora
# Generated by Oracle configuration tools.

LISTENER =
  (DESCRIPTION_LIST =
    (DESCRIPTION =
      (ADDRESS = (PROTOCOL = TCP)(HOST = lzgrass2b)(PORT = 1521))
      (ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC1521))
    )
  )

lzgrass2b$ dbca

Choose the Oracle Managed Files option and install the database data files in the ${ORACLE_BASE}/oradata directory and the flash recovery files in ${ORACLE_BASE}/flash_recovery_area. In this example, the database is created with the ORACLE_SID of zcdemo. After dbca has created the database, stop the listener, modify both the listener.ora and tnsnames.ora files, and then restart the listener.
lzgrass2b$ lsnrctl stop
lzgrass2b$ cat listener.ora
# listener.ora Network Configuration File: /oracle-zc_zpool/app/oracle/product/11.1.0/db_1/network/admin/listener.ora
# Generated by Oracle configuration tools.

LISTENER =
  (DESCRIPTION_LIST =
    (DESCRIPTION =
      (ADDRESS = (PROTOCOL = TCP)(HOST = tgrass1-0b)(PORT = 1521))
      (ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC1521))
    )
  )

lzgrass2b$ cat tnsnames.ora
# tnsnames.ora Network Configuration File: /oracle-zc_zpool/app/oracle/product/11.1.0/db_1/network/admin/tnsnames.ora
# Generated by Oracle configuration tools.

ZCDEMO =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = tgrass1-0b)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = zcdemo)
    )
  )

LISTENER =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = tgrass1-0b)(PORT = 1521))
  )

lzgrass2b$ lsnrctl start
. . .
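A quick sanity check that the listener now answers on the logical hostname and that the ZCDEMO alias resolves — a sketch, assuming the standard lsnrctl and tnsping utilities from the Oracle installation:

lzgrass2b$ lsnrctl status LISTENER
lzgrass2b$ tnsping zcdemo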

As the oracle user on the first cluster node, create the database-monitoring user, hamon, and set the local_listener property to LISTENER.
lzgrass2b$ sqlplus '/ as sysdba'

SQL*Plus: Release 11.1.0.6.0 - Production on Wed Jan 13 06:05:03 2010
Copyright (c) 1982, 2007, Oracle. All rights reserved.

Connected to:
Oracle Database 11g Enterprise Edition Release 11.1.0.6.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

SQL> create user hamon identified by HaMon2010;
User created.
SQL> alter user hamon default tablespace system quota 1m on system;
User altered.
SQL> grant select on v_$sysstat to hamon;
Grant succeeded.
SQL> grant select on v_$archive_dest to hamon;
Grant succeeded.
SQL> grant select on v_$database to hamon;
Grant succeeded.
SQL> grant create session to hamon;
Grant succeeded.
SQL> grant create table to hamon;
Grant succeeded.
SQL> alter system set local_listener=LISTENER;
System altered.

SQL> quit
Disconnected from Oracle Database 11g Enterprise Edition Release 11.1.0.6.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

On one non-global zone, as the root user, register the SUNW.oracle_listener and SUNW.oracle_server resource types, and then create the Solaris Cluster resource for the listener and the database.
lzgrass2b# clrt register SUNW.oracle_listener
lzgrass2b# clrt register SUNW.oracle_server
lzgrass2b# clresource create -t SUNW.oracle_listener -g oracle-zc-rg \
> -p ORACLE_HOME=/oracle-zc_zpool/app/oracle/product/11.1.0/db_1 \
> -p LISTENER_NAME=LISTENER \
> -p resource_dependencies=oracle-zc-hasp-rs oracle-zc-lsnr-rs
lzgrass2b# clresource create -t SUNW.oracle_server -g oracle-zc-rg \
> -p ORACLE_HOME=/oracle-zc_zpool/app/oracle/product/11.1.0/db_1 \
> -p ORACLE_SID=zcdemo \
> -p Alert_log_file=/oracle-zc_zpool/app/oracle/diag/rdbms/zcdemo/zcdemo/trace/alert_zcdemo.log \
> -p resource_dependencies=oracle-zc-hasp-rs \
> -p connect_string=hamon/HaMon2010 oracle-zc-svr-rs

Check the status of the resource group, and then switch it back to the other global-cluster non-voting node.
lzgrass2b# clrg status

=== Cluster Resource Groups ===

Group Name     Node Name   Suspended   Status
----------     ---------   ---------   ------
oracle-zc-rg   lzgrass1b   No          Offline
               lzgrass2b   No          Online

lzgrass2b# clrg switch -n lzgrass1b oracle-zc-rg
lzgrass2b# clrg status

=== Cluster Resource Groups ===

Group Name     Node Name   Suspended   Status
----------     ---------   ---------   ------
oracle-zc-rg   lzgrass1b   No          Online
               lzgrass2b   No          Offline

oracle-solaris-cluster-essentials.pdf

FAILOVER ZONE
http://prefetch.net/blog/index.php/2009/04/10/deploying-highlyavailable-zones-with-solaris-cluster-32/
http://www.oracle.com/technetwork/articles/servers-storage-admin/o11148-cluster-failover-zone-1395591.html

http://docs.oracle.com/cd/E19787-01/819-3063/gdpkaxy/index.html
https://blogs.oracle.com/orasysat/entry/zones_clusters_clustering_zones_zoneclusters

failover zones, zone cluster, or zone on a cluster with


Clustering_Solaris_10_Zones_with_RSF_1.pdf

failover resources

http://eazy.amigager.de/~milano/blog/?cat=11

General data service (GDS-Script)


http://www.informit.com/articles/article.aspx?p=1635203&seqNum=10
https://blogs.oracle.com/SC/entry/creating_sun_cluster_agents_for

# clrs create -g www-rg -t SUNW.gds \
-p Start_command="/etc/init.d/dsm.scheduler.cluster.sh /zones/webdata/tsm/dsm.opt start" \
-p Probe_command="/etc/init.d/dsm.scheduler.cluster.sh webdata probe" \
-p Stop_command="/etc/init.d/dsm.scheduler.cluster.sh webdata stop" \
-p Network_aware=false webdata-backup-rs

/etc/inet/hosts
# getent hosts 172.16.241.54
172.16.241.54   commslhname   commslhname.nowhere.nothing.invalid
# clreslogicalhostname create -g comms-rg commslhname

http://www.sunmanagers.org/pipermail/summaries/2008-September/008620.html
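After the GDS resource is created, the group still has to be brought online; a minimal sketch using the same names as the example above:

# clrg online -eM www-rg
# clrs status webdata-backup-rs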

How to add a custom data service to Sun Cluster


1. Create the file patrolcluster.rtr containing:
   RESOURCE_TYPE = PatrolCluster;
   START=/opt/patrol/suncluster/patrol_startup.sh;
   STOP=/opt/patrol/suncluster/patrol_shutdown.sh;
   and create the scripts:
   /opt/patrol/suncluster/patrol_startup.sh
   /opt/patrol/suncluster/patrol_shutdown.sh

2. Register patrolcluster.rtr:
   scrgadm -at PatrolCluster -f /opt/patrol/suncluster/patrolcluster.rtr

3. Create the resource:
   scrgadm -aj patrol-res -g sgwprod-rg -t PatrolCluster

4. Enable the resource:
   scswitch -ej patrol-res

5. The start script you implemented will now be called every time the resource group starts.

Sometimes it is desirable to have the start script of myscagent called *just before* another existing resource on the cluster. For example, to make sure your start script is always called BEFORE a resource named resource2 is started:
   scrgadm -c -j resource2 -y Resource_dependencies=myresource1
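The .rtr file above only points at START and STOP programs; what those scripts do is up to you. A minimal hypothetical skeleton for patrol_startup.sh (the agent path is an assumption, not taken from the source):

#!/bin/ksh
# Hypothetical start method: launch the monitored agent in the background
# and return 0 so the RGM treats the START method as successful.
/opt/patrol/bin/start_agent >/dev/null 2>&1 &
exit 0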

http://ahmadt.wordpress.com/2009/06/08/how-to-add-custom-data-service-to-suncluster/
https://alessiodini.wordpress.com/category/sun-cluster/page/2/
http://realsysadmin.com/www/category/sun-cluster/
http://www.schneideradvconcepts.com/services/document7.html

Sun-Cluster-3-2-Cheat-Sheet.pdf

Sun Cluster Agent Builder GUI


http://docs.oracle.com/cd/E19787-01/819-2972/agent_builder-26/index.html
http://docs.oracle.com/cd/E19787-01/819-2972/agent_builder-27/index.html
http://docs.oracle.com/cd/E18728_01/html/821-2848/agent_builder-26.html

Register the resource type
# scrgadm -a -t SUNW.gds

-a   Adds the specified resource type.
-t   Specifies the type of resource we are using; in this case, the generic data service.

Creating a Failover Resource Group
# scrgadm -a -g Metadata-harg -h es220a es220b

-h   Specifies the primary and standby physical hostnames that make up the cluster.

Adding a Logical Hostname Resource to a Resource Group
# scrgadm -a -L -g Metadata-harg -l sas-ms

-l   The logical hostname that the SAS Management Console profile definition will use.

Add a Failover Application Resource to a Resource Group

# export SASMAIN=/global/sas/d1/CFG/Lev1/SASMain/
# export PROBE_HOME=$SASMAIN/MetadataServer/MetadataExamples/
# export META_HOME=$SASMAIN/MetadataServer
# scrgadm -a -g Metadata-harg -t SUNW.gds -j sas-rs \
-x Start_command="$META_HOME/MetadataServer.sh start" \
-x Stop_command="$META_HOME/MetadataServer.sh stop" \
-x Probe_command=$PROBE_HOME/pingMetadataServer.sh \
-y Port_list=8561/tcp -y Network_resources_used=sas-ms
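The Probe_command only has to exit 0 while the service is healthy and non-zero otherwise; the pingMetadataServer.sh example above follows that contract. A minimal hypothetical probe, assuming the metadata server listens on TCP port 8561 on all interfaces:

#!/bin/ksh
# Return 0 if something is listening on TCP port 8561, non-zero otherwise.
/usr/bin/netstat -an | /usr/bin/grep '\*\.8561' | /usr/bin/grep LISTEN >/dev/null 2>&1
exit $?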

Bringing a Resource Group online
# scswitch -Z -g Metadata-harg

Display the status of the cluster; es220a is primary and online.

root@es220a # scstat -g
Group Name             Node Name   State
----------             ---------   -----
Group: Metadata-harg   es220a      Online
Group: Metadata-harg   es220b      Offline

Verify that the SAS Metadata Server is running.
root@es220a # ps -ef | grep Meta
root 18402 ... /global/sas/d1/sas/sasexe/sas -log /global/sas/d1/CFG/Lev1/SASMain/MetadataServer ...
root 18387 ... gds_svc_start -R sas-rs -G Metadata-harg
root 18392 ... /bin/sh ./MetadataServer/MetadataServer.sh start2

Kill the SAS Metadata Server, simulating a hardware or software failure.
root@es220a # kill 18402
root@es220a # ps -ef | grep Meta
root 18468  8722  0 10:22:53 pts/2  0:00 grep Meta

Verify that the SAS Metadata Server is running on es220b.
root@es220a # scstat -g
Group Name             Node Name   State
----------             ---------   -----
Group: Metadata-harg   es220a      Offline
Group: Metadata-harg   es220b      Online
