http://www.akadia.com/services/ora_rac.html
Content
Overview
Architecture
Enterprise Linux Installation and Setup
Create Accounts
NFS Configuration
Enabling SSH User Equivalency
Install Oracle Clusterware
Install Oracle Database Software
Create Listener Configuration
Create the Cluster Database
Transparent Application Failover (TAF)
Facts Sheet
RAC Troubles during the Installation
Overview
In the past it was not easy to become familiar with Oracle Real Application Clusters (RAC), because the price of the hardware required for a typical production RAC configuration put it out of reach for most people. Shared storage file systems, or even cluster file systems (e.g. OCFS2), are primarily used in a storage area network where all nodes directly access the storage on the shared file system. This makes it possible for nodes to fail without affecting access to the file system from the other nodes. Shared disk file systems are normally used in high-availability clusters. At the heart of Oracle RAC is a shared disk subsystem: all nodes in the cluster must be able to access all of the data files, redo log files, control files and parameter files for all nodes in the cluster. The data disks must be globally available to allow all nodes to access the database. Each node has its own redo log and control files, but the other nodes must be able to access them in order to recover that node in the event of a system failure.
Architecture
The following RAC architecture should only be used for test environments. For our RAC test environment we use a normal Linux server acting as a shared storage server over NFS. NFS (Network File System) is a platform-independent technology created by Sun Microsystems that allows shared access to files stored on remote computers through an interface called the Virtual File System (VFS), running on top of TCP/IP. We can use NFS to provide the shared storage for a RAC installation.
Network Configuration
Each node must have one static IP address for the public network and one static IP address for the private cluster interconnect. The private interconnect should only be used by Oracle. Note that the /etc/hosts settings are the same on both nodes Gentic and Cellar.

Host Gentic

Device   IP Address        Subnet           Purpose
eth0     192.168.138.35    255.255.255.0    Public network
eth1     192.168.137.35    255.255.255.0    Connects Gentic to Cellar (private interconnect)

/etc/hosts

127.0.0.1         localhost.localdomain   localhost
#
# Public Network - (eth0)
192.168.138.35    gentic
192.168.138.36    cellar
#
# Private Interconnect - (eth1)
192.168.137.35    gentic-priv
192.168.137.36    cellar-priv
#
# Public Virtual IP (VIP) addresses - (eth0)
192.168.138.130   gentic-vip
192.168.138.131   cellar-vip

Host Cellar

Device   IP Address        Subnet           Purpose
eth0     192.168.138.36    255.255.255.0    Public network
eth1     192.168.137.36    255.255.255.0    Connects Cellar to Gentic (private interconnect)

/etc/hosts

127.0.0.1         localhost.localdomain   localhost
#
# Public Network - (eth0)
192.168.138.35    gentic
192.168.138.36    cellar
#
# Private Interconnect - (eth1)
192.168.137.35    gentic-priv
192.168.137.36    cellar-priv
#
# Public Virtual IP (VIP) addresses - (eth0)
192.168.138.130   gentic-vip
192.168.138.131   cellar-vip
Note that the virtual IP addresses only need to be defined in the /etc/hosts file (or your DNS) on both nodes. The public virtual IP addresses are configured automatically by Oracle when you run the Oracle Universal Installer, which starts Oracle's Virtual Internet Protocol Configuration Assistant (VIPCA). All virtual IP addresses are activated when the srvctl start nodeapps -n <node_name> command is run. This is the host name / IP address that is configured in the clients' tnsnames.ora file.

About IP Addresses

Virtual IP address
A public IP address for each node, used as the virtual IP address (VIP) for client connections. If a node fails, Oracle Clusterware fails over the VIP address to an available node. This address should be in the /etc/hosts file on every node. The VIP must not be in use at the time of the installation, because it is an IP address that Oracle Clusterware manages. When automatic failover occurs, two things happen:
1. The new node re-ARPs to the world, announcing a new MAC address for the VIP. Directly connected clients usually see errors on their connections to the old address.
2. Subsequent packets sent to the VIP go to the new node, which sends RST packets back to the clients, so the clients get errors immediately.
This means that when a client issues SQL to the node that is now down, or traverses the address list while connecting, it receives a TCP reset instead of waiting on a very long TCP/IP time-out (~10 minutes). In the case of SQL this is ORA-3113; in the case of a connect, the next address in tnsnames.ora is used. Going one step further, you can make use of Transparent Application Failover (TAF); with TAF successfully configured it is possible to avoid ORA-3113 errors altogether.

Public IP address
The public IP address must be resolvable to the hostname. You can register both the public IP and the VIP address with the DNS. If you do not have a DNS, you must make sure that both public IP addresses are in the /etc/hosts file of all cluster nodes.

Private IP address
A private IP address for each node serves as the private interconnect address for internode cluster communication only. The following must be true for each private IP address:
- It must be separate from the public network
- It must be accessible on the same network interface on each node
- It must be connected to a network switch between the nodes for the private network; crossover-cable interconnects are not supported
The private interconnect is used for internode communication by both Oracle Clusterware and Oracle RAC. The private IP address must be available in the /etc/hosts file of each node.
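Once Oracle Clusterware is installed and the node applications have been started, the VIP configuration can be checked from the command line. The following is a sketch of such a check (the node name gentic and the interface eth0 are taken from the tables above):

oracle> srvctl status nodeapps -n gentic    # reports the state of VIP, GSD, ONS and Listener on the node
root>   /sbin/ifconfig                      # the active VIP typically shows up as an alias (e.g. eth0:1) on the public interface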
root> service --status-all | grep "is running"
crond (pid 2458) is running...
gpm (pid 2442) is running...
rpc.mountd (pid 2391) is running...
nfsd (pid 2383 2382 2381 2380 2379 2378 2377 2376) is running...
rpc.rquotad (pid 2339) is running...
rpc.statd (pid 2098) is running...
ntpd (pid 2284) is running...
portmap (pid 2072) is running...
rpc.idmapd (pid 2426) is running...
sshd (pid 4191 4189 2257) is running...
syslogd (pid 2046) is running...
klogd (pid 2049) is running...
xfs (pid 2496) is running...
Create Accounts
Create the following groups and the user oracle on all three hosts.

root> groupadd -g 500 oinstall
root> groupadd -g 400 dba
root> useradd -u 400 -g 500 -G dba -c "Oracle Owner" -d /home/oracle -s /bin/bash oracle
root> passwd oracle

$HOME/.bash_profile

#!/bin/bash
# Akadia AG, Fichtenweg 10, CH-3672 Oberdiessbach
# --------------------------------------------------------------------------
# File:     .bash_profile
# Author:   Martin Zahn, Akadia AG, 20.09.2007
# Purpose:  Configuration file for BASH shell
# Location: $HOME
TZ=MET; export TZ
PATH=${PATH}:$HOME/bin
ENV=$HOME/.bashrc
BASH_ENV=$HOME/.bashrc
USERNAME=`whoami`
POSTFIX=/usr/local/postfix
# LANG=en_US.UTF-8
LANG=en_US
COLUMNS=130
LINES=45
DISPLAY=192.168.138.11:0.0
export USERNAME ENV COLUMNS LINES TERM PS1 PS2 PATH POSTFIX BASH_ENV LANG DISPLAY

# Setup the correct Terminal-Type
if [ `tty` != "/dev/tty1" ]
then
  # TERM=linux
  TERM=vt100
else
  # TERM=linux
  TERM=vt100
fi

# Setup Terminal (test on [ -t 0 ] is used to avoid problems with Oracle Installer)
# -t fd  True if file descriptor fd is open and refers to a terminal.
if [ -t 0 ]
then
  stty erase "^H" kill "^U" intr "^C" eof "^D"
  stty cs8 -parenb -istrip hupcl ixon ixoff tabs
fi

# Set up shell environment
# set -u                                # error if undefined variable.
trap "echo -e 'logout $LOGNAME'" 0      # what to do on exit.

# Setup ORACLE 11 environment
if [ `uname -n` = "gentic" ]
then
  ORACLE_SID=AKA1; export ORACLE_SID
fi
if [ `uname -n` = "cellar" ]
then
  ORACLE_SID=AKA2; export ORACLE_SID
fi
ORACLE_HOSTNAME=`uname -n`; export ORACLE_HOSTNAME
ORACLE_BASE=/u01/app/oracle; export ORACLE_BASE
ORACLE_HOME=${ORACLE_BASE}/product/11.1.0; export ORACLE_HOME
ORA_CRS_HOME=${ORACLE_BASE}/crs; export ORA_CRS_HOME
TNS_ADMIN=${ORACLE_HOME}/network/admin; export TNS_ADMIN
ORA_NLS11=${ORACLE_HOME}/nls/data; export ORA_NLS11
CLASSPATH=${ORACLE_HOME}/JRE:${ORACLE_HOME}/jlib:${ORACLE_HOME}/rdbms/jlib
export CLASSPATH
ORACLE_TERM=xterm; export ORACLE_TERM
ORACLE_OWNER=oracle; export ORACLE_OWNER
NLS_LANG=AMERICAN_AMERICA.WE8ISO8859P1; export NLS_LANG
LD_LIBRARY_PATH=${ORACLE_HOME}/lib:/lib:/usr/lib; export LD_LIBRARY_PATH

# Set up the search paths:
PATH=${POSTFIX}/bin:${POSTFIX}/sbin:${POSTFIX}/sendmail:${ORACLE_HOME}/bin
PATH=${PATH}:${ORA_CRS_HOME}/bin:/usr/local/bin:/bin:/sbin:/usr/bin:/usr/sbin
PATH=${PATH}:/usr/local/sbin:/usr/bin/X11:/usr/X11R6/bin
PATH=${PATH}:.
export PATH

# Set date in European form
echo -e " "
date '+Date: %d.%m.%y  Time: %H:%M:%S'
echo -e " "
uname -a

# Clean shell-history file
: > $HOME/.bash_history
# Show last login
cat .lastlogin
term=`tty`
echo -e "Last login at `date '+%H:%M, %h %d'` on $term" >.lastlogin
echo -e " "
if [ $LOGNAME = "root" ]
then
  echo -e "WARNING: YOU ARE SUPERUSER !!!"
  echo -e " "
fi

# Get the aliases and functions
if [ -f ~/.bashrc ]; then
  . ~/.bashrc
fi

# Set Shell Limits for user oracle
if [ $USER = "oracle" ]
then
  ulimit -u 16384 -n 65536
fi

# Umask new files to rw-r--r--
umask 022

$HOME/.bashrc

alias more=less
alias up='cd ..'
alias kk='ls -la | less'
alias ll='ls -la'
alias ls='ls -F'
alias ps='ps -ef'
alias home='cd $HOME'
alias which='type -path'
alias h='history'

#
# Do not produce core dumps
#
# ulimit -c 0

PS1="`whoami`@\h:\w> "
export PS1
PS2="> "
export PS2
NFS Configuration
The Oracle Clusterware shared files are the Oracle Cluster Registry (OCR) and the CRS voting disk. They will be installed by the Oracle Installer on the shared disk on the NFS server. Besides these two shared files, all Oracle datafiles will also be created on the shared disk.
The shared directories are exported with the following options:

sync
    Reply to requests only after the changes have been committed to stable storage.

no_wdelay
    Do not delay committing write requests to disc; this option has an effect only when sync is also used.

no_root_squash
    Do not map requests from uid/gid 0 (root) to the anonymous uid/gid.

insecure
    Allow requests originating from ports above 1023.

no_subtree_check
    This option disables subtree checking. Subtree checking adds another level of security, but it can be unreliable in some circumstances. If a subdirectory of a filesystem is exported, but the whole filesystem is not, then whenever an NFS request arrives the server must check not only that the accessed file is in the appropriate filesystem (which is easy) but also that it is in the exported tree (which is harder). This check is called the subtree check. In order to perform this check, the server must include some information about the location of the file in the "filehandle" that is given to the client. This can cause problems with accessing files that are renamed while a client has them open (though in many simple cases it will still work). Subtree checking is also used to make sure that files inside directories to which only root has access can only be accessed if the filesystem is exported with no_root_squash (see above), even if the file itself allows more general access.
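The complete /etc/exports file of the NFS server is not reproduced here. Based on the options described above and the exportfs -v output below, a sketch of the export file on the NFS server could look like this (the paths are from this setup, the wildcard client specification is an assumption; adjust it to your environment):

# /etc/exports on the NFS server (opal)
/u01/crscfg   *(rw,sync,no_wdelay,insecure,no_root_squash,no_subtree_check)
/u01/votdsk   *(rw,sync,no_wdelay,insecure,no_root_squash,no_subtree_check)
/u01/oradat   *(rw,sync,no_wdelay,insecure,no_root_squash,no_subtree_check)

root@opal> exportfs -ra    # re-export all directories after editing /etc/exports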
For more information see: man exports

root@opal> service nfs restart
Shutting down NFS mountd:    [ OK ]
Shutting down NFS daemon:    [ OK ]
Shutting down NFS quotas:    [ OK ]
Shutting down NFS services:  [ OK ]
Starting NFS services:       [ OK ]
Starting NFS quotas:         [ OK ]
Starting NFS daemon:         [ OK ]
Starting NFS mountd:         [ OK ]

root@opal> exportfs -v
/u01/crscfg  <world>(rw,no_root_squash,no_subtree_check,insecure_locks,anonuid=65534,anongid=65534)
/u01/votdsk  <world>(rw,no_root_squash,no_subtree_check,insecure_locks,anonuid=65534,anongid=65534)
/u01/oradat  <world>(rw,wdelay,insecure,root_squash,no_subtree_check,anonuid=65534,anongid=65534)
timeo
    After the first timeout, the timeout is doubled after each successive timeout until a maximum timeout of 60 seconds is reached or enough retransmissions have occurred to cause a major timeout. Then, if the filesystem is hard mounted, each new timeout cascade restarts at twice the initial value of the previous cascade, again doubling at each retransmission. The maximum timeout is always 60 seconds.

rsize
    The number of bytes NFS uses when reading files from an NFS server. The rsize is negotiated between the server and client to determine the largest block size that both can support. The value specified by this option is the maximum size that could be used; however, the actual size used may be smaller. Note: setting this size to a value less than the largest supported block size will adversely affect performance.

wsize
    The number of bytes NFS uses when writing files to an NFS server. The wsize is negotiated between the server and client to determine the largest block size that both can support. The value specified by this option is the maximum size that could be used; however, the actual size used may be smaller. Note: setting this size to a value less than the largest supported block size will adversely affect performance.

actimeo
    Using actimeo sets all of acregmin, acregmax, acdirmin, and acdirmax to the same value. There is no default value.

nfsvers
    Use an alternate RPC version number to contact the NFS daemon on the remote host. This option is useful for hosts that can run multiple NFS servers. The default value depends on which kernel you are using.

noac
    Disable all forms of attribute caching entirely. This extracts a significant performance penalty, but it allows two different NFS clients to get reasonable results when both clients are actively writing to a common export on the server.
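The client-side mount entries for the RAC nodes are not shown in this extract. As a sketch, using the mount options discussed above together with the commonly recommended settings for Oracle files on NFS, the /etc/fstab entries on Gentic and Cellar could look like this (the server name opal and the exact option values are assumptions; verify them against the Oracle documentation for your platform):

# /etc/fstab on gentic and cellar
opal:/u01/crscfg   /u01/crscfg   nfs   rw,bg,hard,nointr,tcp,nfsvers=3,rsize=32768,wsize=32768,timeo=600,actimeo=0   0 0
opal:/u01/votdsk   /u01/votdsk   nfs   rw,bg,hard,nointr,tcp,nfsvers=3,rsize=32768,wsize=32768,timeo=600,actimeo=0   0 0
opal:/u01/oradat   /u01/oradat   nfs   rw,bg,hard,nointr,tcp,nfsvers=3,rsize=32768,wsize=32768,timeo=600,actimeo=0   0 0

Instead of actimeo=0, the noac option described above can be used to disable attribute caching completely.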
root> service nfs restart
Shutting down NFS mountd:    [ OK ]
Shutting down NFS daemon:    [ OK ]
Shutting down NFS quotas:    [ OK ]
Shutting down NFS services:  [ OK ]
Starting NFS services:       [ OK ]
Starting NFS quotas:         [ OK ]
Starting NFS daemon:         [ OK ]
Starting NFS mountd:         [ OK ]
root> service netfs restart
Unmounting NFS filesystems:  [ OK ]
Mounting NFS filesystems:    [ OK ]
Mounting other filesystems:  [ OK ]
Enabling SSH User Equivalency

Generate an RSA key pair as user oracle on both nodes (output shown for Gentic):

Generating public/private rsa key pair.
Enter file in which to save the key (/home/oracle/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/oracle/.ssh/id_rsa.
Your public key has been saved in /home/oracle/.ssh/id_rsa.pub.
The key fingerprint is:
90:4c:82:48:f1:f1:08:56:dc:e9:c8:98:ca:94:0c:31 oracle@gentic

Host Cellar
oracle@cellar> cd ~/.ssh
oracle@cellar> scp id_rsa.pub gentic:/home/oracle/.ssh/authorized_keys

Host Gentic
oracle@gentic> cd ~/.ssh
oracle@gentic> scp id_rsa.pub cellar:/home/oracle/.ssh/authorized_keys

Host Cellar
oracle@cellar> cat id_rsa.pub >> authorized_keys
oracle@cellar> ssh cellar date
Tue Sep 18 15:15:26 CEST 2007
oracle@cellar> ssh gentic date
Tue Sep 18 15:15:32 CEST 2007

Host Gentic
oracle@gentic> cat id_rsa.pub >> authorized_keys
oracle@gentic> ssh cellar date
Tue Sep 18 15:15:26 CEST 2007
oracle@gentic> ssh gentic date
Tue Sep 18 15:15:32 CEST 2007
ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEAv2TjN0KTuvqxr3XBHG2JFecCqZ0aPqGO/8cqBtdg X9qQuLIP5zGpKGrDcRVULvLncGSifVbDvV89LGFnXiv0FZ+8PHD1snGX5M4YyUMcv362wAaW3g2k Gp1ky0jQias5CZKtC42f94qt6rU1gm4E6Xh7U2QsLkEC0gPiYlGR2Zey4X01Eb18kM55eeGSFjoo v58T99MjdHFmxEWWvckhwudYZ4sFYbGxqJgywKtSNT0WI9HAGL3LNLBBjmLbbAnxrI1iDqTGMQIq zTf+p/E+2K/LrG9oUrN3qdT0EGciD0lcxO6Ke7O/npnCscRoUKPlIChsIN4ruJxikurOMzb37Q== oracle@gentic oracle@gentic:~/.ssh> cat known_hosts cellar,192.168.138.36 ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEAuUXTa+JUl7Z 8ovFDczcN8+sAxzrAgfpyjgyDPfdJwWnM70uft8SaOUyf+iWq7kmBi4kBPm3xfYzw2qNT Nukw6pSAHkxJuKznU9lYPKtyNWW0+ftXtgiqwEob2yFoagMOCUwRQlIEgl3UFWu6Kb2Tn Di7O08FIXsNgKNe575PH1L6V0lcHoS7KgQt8bev6YqqdjVL25Nvk1TLhEH2toQfkLXL3w InZEnPolGT8uc+MtUEJ+YkKPpMvh++Hd5BNUeY1AwVIt5RC7usJ70hS4W/sTCn77qz0yC KGxgWO2POyfB2B5xOy0UYjEbRcLoIq1YOtu1jc208UmJEa/Kj7dQnnw== gentic,192.168.138.35 ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEArQ7O8QZDY84 O6NA2FDUlHQiqc1v+wVpweJAL27kHU8rnqQirExCeVEuySgIqvkNcUf5JlpKold8T0ctB lHsByaeQKYIhnM+roTay5x+2sNQvLKXsiNcGKu0FdGQPXv5lykO4eXNXl1aFx7FVCHHTS GUQppAkBmpi1jUOFwU2mFWyI9e2j6V7aeXvmnb6pnmtjxkHqBaGfBA6YDvanxxJOn0967 1CNzgT6fVk+3UBH+8uhMs9dXqnrBKUNz9Ts2+uUfPAP+K1uR2nrG2O+D1UwguFYEm/JH4 XHQYgpihvncEt/EDDmhcTodzWfZP6Rn+iWfWkj9hbC8f7khfNRwRiNQ==
root> mkdir -p /u01/app/oracle/crs
root> chown -R oracle:oinstall /u01/app
root> chmod -R 775 /u01/app

Host Gentic

root> mkdir -p /u01/app/oracle/crs
root> chown -R oracle:oinstall /u01/app
root> chmod -R 775 /u01/app

Create Cluster Registry and Voting Disk

oracle> touch /u01/crscfg/crs_registry
oracle> touch /u01/votdsk/voting_disk
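The interface and system requirement output that follows comes from the Cluster Verification Utility shipped with the clusterware installation media; the exact command line is not shown in this extract. A sketch of a typical pre-installation invocation, run as the oracle user from the unpacked clusterware staging directory:

oracle> ./runcluvfy.sh stage -pre crsinst -n gentic,cellar -verbose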
Interface information for node "cellar" Interface Name IP Address Subnet Subnet Gateway Default Gateway Hardware Address ---------------- ------------ ------------ ------------ ------------ -----------eth0 192.168.138.36 192.168.138.0 0.0.0.0 192.168.138.1 00:30:48:28:E7:36 eth1 192.168.137.36 192.168.137.0 0.0.0.0 192.168.138.1 00:30:48:28:E7:37
Interface information for node "gentic" Interface Name IP Address Subnet Subnet Gateway Default Gateway Hardware Address ---------------- ------------ ------------ ------------ ------------ -----------eth0 192.168.138.35 192.168.138.0 0.0.0.0 192.168.138.1 00:30:48:29:BD:E8 eth1 192.168.137.35 192.168.137.0 0.0.0.0 192.168.138.1 00:30:48:29:BD:E9
Check: Node connectivity of subnet "192.168.138.0"
Source          Destination     Connected?
cellar:eth0     gentic:eth0     yes
Result: Node connectivity check passed for subnet "192.168.138.0" with node(s) cellar,gentic.

Check: Node connectivity of subnet "192.168.137.0"
Source          Destination     Connected?
cellar:eth1     gentic:eth1     yes
Result: Node connectivity check passed for subnet "192.168.137.0" with node(s) cellar,gentic.
Interfaces found on subnet "192.168.138.0" that are likely candidates for a private interconnect:
cellar eth0:192.168.138.36
gentic eth0:192.168.138.35

Interfaces found on subnet "192.168.137.0" that are likely candidates for a private interconnect:
cellar eth1:192.168.137.36
gentic eth1:192.168.137.35

WARNING: Could not find a suitable set of interfaces for VIPs.
Result: Node connectivity check passed.

Checking system requirements for 'crs'...

Check: Total memory
Node Name     Available
------------  ------------------------
gentic        1.98GB (2075156KB)
cellar        1010.61MB (1034860KB)
Result: Total memory check failed.

Check: Free disk space in "/tmp" dir
Node Name     Available
------------  ------------------------
gentic        20.47GB (21460692KB)
cellar        19.1GB (20027204KB)
Result: Free disk space check passed.

Check: Swap space
Node Name     Available
------------  ------------------------
gentic        2.44GB (2555896KB)
cellar        2.44GB (2562356KB)
Result: Swap space check passed.

Check: System architecture
Node Name     Available
------------  ------------------------
gentic        i686
cellar        i686
Result: System architecture check passed.

Check: Kernel version
Node Name     Available
------------  ------------------------
gentic        2.6.18-8.el5PAE
cellar        2.6.18-8.el5PAE
Result: Kernel version check passed.
Check: Package existence for "make-3.81" Node Name Status ------------------------------ -----------------------------gentic make-3.81-1.1 cellar make-3.81-1.1 Result: Package existence check passed for "make-3.81".
Check: Package existence for "binutils-2.17.50.0.6" Node Name Status Comment ------------------------------ ------------------------------ ---------------gentic binutils-2.17.50.0.6-2.el5 passed cellar binutils-2.17.50.0.6-2.el5 passed Result: Package existence check passed for "binutils-2.17.50.0.6".
Check: Package existence for "gcc-4.1.1" Node Name Status ------------------------------ -----------------------------gentic gcc-4.1.1-52.el5 cellar gcc-4.1.1-52.el5 Result: Package existence check passed for "gcc-4.1.1". Check: Package existence for "libaio-0.3.106" Node Name Status ------------------------------ -----------------------------gentic libaio-0.3.106-3.2 cellar libaio-0.3.106-3.2 Result: Package existence check passed for "libaio-0.3.106".
Check: Package existence for "libaio-devel-0.3.106" Node Name Status Comment ------------------------------ ------------------------------ ---------------gentic libaio-devel-0.3.106-3.2 passed cellar libaio-devel-0.3.106-3.2 passed Result: Package existence check passed for "libaio-devel-0.3.106". Check: Package existence for "libstdc++-4.1.1" Node Name Status ------------------------------ -----------------------------gentic libstdc++-4.1.1-52.el5 cellar libstdc++-4.1.1-52.el5 Result: Package existence check passed for "libstdc++-4.1.1".
Check: Package existence for "elfutils-libelf-devel-0.125" Node Name Status Comment ------------------------------ ------------------------------ ---------------gentic elfutils-libelf-devel-0.125-3.el5 passed cellar elfutils-libelf-devel-0.125-3.el5 passed Result: Package existence check passed for "elfutils-libelf-devel-0.125". Check: Package existence for "sysstat-7.0.0" Node Name Status ------------------------------ -----------------------------gentic sysstat-7.0.0-3.el5 cellar sysstat-7.0.0-3.el5 Result: Package existence check passed for "sysstat-7.0.0".
Check: Package existence for "compat-libstdc++-33-3.2.3" Node Name Status Comment ------------------------------ ------------------------------ ---------------gentic compat-libstdc++-33-3.2.3-61 passed cellar compat-libstdc++-33-3.2.3-61 passed Result: Package existence check passed for "compat-libstdc++-33-3.2.3". Check: Package existence for "libgcc-4.1.1" Node Name Status ------------------------------ -----------------------------gentic libgcc-4.1.1-52.el5 cellar libgcc-4.1.1-52.el5 Result: Package existence check passed for "libgcc-4.1.1".
Check: Package existence for "libstdc++-devel-4.1.1" Node Name Status Comment ------------------------------ ------------------------------ ---------------gentic libstdc++-devel-4.1.1-52.el5 passed cellar libstdc++-devel-4.1.1-52.el5 passed Result: Package existence check passed for "libstdc++-devel-4.1.1". Check: Package existence for "unixODBC-2.2.11" Node Name Status ------------------------------ -----------------------------gentic unixODBC-2.2.11-7.1 cellar unixODBC-2.2.11-7.1 Result: Package existence check passed for "unixODBC-2.2.11".
Check: Package existence for "unixODBC-devel-2.2.11" Node Name Status Comment ------------------------------ ------------------------------ ---------------gentic unixODBC-devel-2.2.11-7.1 passed cellar unixODBC-devel-2.2.11-7.1 passed Result: Package existence check passed for "unixODBC-devel-2.2.11". Check: Package existence for "glibc-2.5-12" Node Name Status ------------------------------ -----------------------------gentic glibc-2.5-12
Comment ---------------passed
15 of 33
31.08.2013 2:41 PM
http://www.akadia.com/services/ora_rac.html
cellar glibc-2.5-12 Result: Package existence check passed for "glibc-2.5-12". Check: Group existence for "dba" Node Name Status Comment ------------ ------------------------ -----------------------gentic exists passed cellar exists passed Result: Group existence check passed for "dba". Check: Group existence for "oinstall" Node Name Status Comment ------------ ------------------------ -----------------------gentic exists passed cellar exists passed Result: Group existence check passed for "oinstall". Check: User existence for "nobody" Node Name Status Comment ------------ ------------------------ -----------------------gentic exists passed cellar exists passed Result: User existence check passed for "nobody". System requirement failed for 'crs' Pre-check for cluster services setup was unsuccessful. Checks did not pass for the following node(s): cellar The failed memory check on Cellar can be ignored.
passed
Install Clusterware
Make sure that the X11 server is started and reachable:

oracle> echo $DISPLAY
192.168.138.11:0.0

Load the SSH keys into memory:

oracle> exec /usr/bin/ssh-agent $SHELL
oracle> /usr/bin/ssh-add
Identity added: /home/oracle/.ssh/id_rsa (/home/oracle/.ssh/id_rsa)

Start the Installer and make sure that no errors are shown in the installer window:

oracle> ./runInstaller
Starting Oracle Universal Installer...
Click [Next]
Enter /u01/app/oracle/crs
Everything should be OK
Now start the scripts as root as shown in the window, one by one as follows. Host Gentic oracle> cd /u01/app/oraInventory oracle> su root> ./orainstRoot.sh Changing permissions of /u01/app/oraInventory to 770. Changing groupname of /u01/app/oraInventory to oinstall. The execution of the script is complete Host Cellar oracle> cd /u01/app/oraInventory oracle> su root> ./orainstRoot.sh Changing permissions of /u01/app/oraInventory to 770. Changing groupname of /u01/app/oraInventory to oinstall. The execution of the script is complete Host Gentic root> cd /u01/app/oracle/crs root> ./root.sh WARNING: directory '/u01/app/oracle' is not owned by root WARNING: directory '/u01/app' is not owned by root Checking to see if Oracle CRS stack is already configured /etc/oracle does not exist. Creating it now. Setting the permissions on OCR backup directory Setting up Network socket directories Oracle Cluster Registry configuration upgraded successfully The directory '/u01/app/oracle' is not owned by root. Changing owner to root The directory '/u01/app' is not owned by root. Changing owner to root Successfully accumulated necessary OCR keys. Using ports: CSS=49895 CRS=49896 EVMC=49898 and EVMR=49897. node <nodenumber>: <nodename> <private interconnect name> <hostname> node 1: gentic gentic-priv gentic node 2: cellar cellar-priv cellar Creating OCR keys for user 'root', privgrp 'root'.. Operation successful. Now formatting voting device: /u01/votdsk/voting_disk Format of 1 voting devices complete. Date: 19.09.07 Time: 10:03:48 Linux gentic 2.6.18-8.el5PAE #1 SMP Tue Jun 5 23:39:57 EDT 2007 i686 i686 i386 GNU/Linux Last login at 10:03, Sep 19 on /dev/pts/1 Startup will be queued to init within 30 seconds. Adding daemons to inittab Expecting the CRS daemons to be up within 600 seconds. Cluster Synchronization Services is active on these nodes. gentic Cluster Synchronization Services is inactive on these nodes. cellar Local node checking complete. Run root.sh on remaining nodes to start CRS daemons.
Host Cellar

root> cd /u01/app/oracle/crs
root> ./root.sh
WARNING: directory '/u01/app/oracle' is not owned by root
WARNING: directory '/u01/app' is not owned by root
Checking to see if Oracle CRS stack is already configured
/etc/oracle does not exist. Creating it now.
Setting the permissions on OCR backup directory
Setting up Network socket directories
Oracle Cluster Registry configuration upgraded successfully
The directory '/u01/app/oracle' is not owned by root. Changing owner to root
The directory '/u01/app' is not owned by root. Changing owner to root
clscfg: EXISTING configuration version 4 detected.
clscfg: version 4 is 11 Release 1.
Successfully accumulated necessary OCR keys.
Using ports: CSS=49895 CRS=49896 EVMC=49898 and EVMR=49897.
node <nodenumber>: <nodename> <private interconnect name> <hostname>
node 1: gentic gentic-priv gentic
node 2: cellar cellar-priv cellar
clscfg: Arguments check out successfully.
NO KEYS WERE WRITTEN. Supply -force parameter to override.
-force is destructive and will destroy any previous cluster configuration.
Oracle Cluster Registry for cluster has already been initialized
Date: 19.09.07  Time: 10:10:11
Linux cellar 2.6.18-8.el5PAE #1 SMP Tue Jun 5 23:39:57 EDT 2007 i686 i686 i386 GNU/Linux
Last login at 10:10, Sep 19 on /dev/pts/0
Startup will be queued to init within 30 seconds.
Adding daemons to inittab
Expecting the CRS daemons to be up within 600 seconds.
Cluster Synchronization Services is active on these nodes.
        gentic
        cellar
Cluster Synchronization Services is active on all the nodes.
Waiting for the Oracle CRSD and EVMD to start
Oracle CRS stack installed and running under init(1M)
Running vipca(silent) for configuring nodeapps
Creating VIP application resource on (2) nodes...
Creating GSD application resource on (2) nodes...
Creating ONS application resource on (2) nodes...
Starting VIP application resource on (2) nodes...
Starting GSD application resource on (2) nodes...
Starting ONS application resource on (2) nodes...
Done.

Go back to the Installer.
The clusterware installation is now complete. If you reboot the nodes, you will see that the clusterware is now started automatically by the script /etc/init.d/S96init.crs, and many processes are up and running.
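After a reboot you can also verify the stack by hand; the following commands are a sketch of the usual checks with the 11g R1 clusterware tools (ORA_CRS_HOME is set in the .bash_profile shown earlier):

oracle> $ORA_CRS_HOME/bin/crsctl check crs    # reports the health of CSS, CRS and EVM
oracle> $ORA_CRS_HOME/bin/crs_stat -t         # tabular status of all registered cluster resources (VIP, GSD, ONS, ...)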
Install Oracle Database Software

Start the Installer and make sure that no errors are shown in the installer window:

oracle> ./runInstaller
Click [Next]
Specify /u01/app/oracle/product/11.1.0
Everything should be OK
Click [Install]
Now start the scripts as root as shown in the window, one by one as follows.
Host Cellar
oracle> cd /u01/app/oracle/product/11.1.0
oracle> su
root> ./root.sh

Host Gentic
oracle> cd /u01/app/oracle/product/11.1.0
oracle> su
root> ./root.sh
Create Listener Configuration

LISTENER.ORA on Host Gentic

# Akadia AG, Fichtenweg 10, CH-3672 Oberdiessbach
# --------------------------------------------------------------------------
# File:     listener.ora
# Author:   Martin Zahn, Akadia AG, 30.09.2007
# Purpose:  Configuration file for Net Listener (RAC Configuration)
# Location: $TNS_ADMIN
LISTENER_Gentic =
  (DESCRIPTION_LIST =
    (DESCRIPTION =
      (ADDRESS = (PROTOCOL = TCP)(HOST = gentic-vip)(PORT = 1521)(IP = FIRST))
      (ADDRESS = (PROTOCOL = TCP)(HOST = 192.168.138.35)(PORT = 1521)(IP = FIRST))
    )
  )

SID_LIST_LISTENER_Gentic =
  (SID_LIST =
    (SID_DESC =
      (SID_NAME = PLSExtProc)
      (ORACLE_HOME = /u01/app/oracle/product/11.1.0)
      (PROGRAM = extproc)
    )
  )

LISTENER.ORA on Host Cellar

# Akadia AG, Fichtenweg 10, CH-3672 Oberdiessbach
# --------------------------------------------------------------------------
# File:     listener.ora
# Author:   Martin Zahn, Akadia AG, 30.09.2007
# Purpose:  Configuration file for Net Listener (RAC Configuration)
# Location: $TNS_ADMIN
LISTENER_CELLAR =
  (DESCRIPTION_LIST =
    (DESCRIPTION =
      (ADDRESS = (PROTOCOL = TCP)(HOST = cellar-vip)(PORT = 1521)(IP = FIRST))
      (ADDRESS = (PROTOCOL = TCP)(HOST = 192.168.138.36)(PORT = 1521)(IP = FIRST))
    )
  )

SID_LIST_LISTENER_CELLAR =
  (SID_LIST =
    (SID_DESC =
      (SID_NAME = PLSExtProc)
      (ORACLE_HOME = /u01/app/oracle/product/11.1.0)
      (PROGRAM = extproc)
    )
  )
TNSNAMES.ORA on both Nodes

# Akadia AG, Fichtenweg 10, CH-3672 Oberdiessbach
# --------------------------------------------------------------------------
# File:     tnsnames.ora
# Author:   Martin Zahn, Akadia AG, 30.09.2007
# Purpose:  Configuration file for all Net Clients (RAC Configuration)
# Location: $TNS_ADMIN
AKA =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = gentic-vip)(PORT = 1521))
    (ADDRESS = (PROTOCOL = TCP)(HOST = cellar-vip)(PORT = 1521))
    (LOAD_BALANCE = yes)
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = AKA.WORLD)
      (FAILOVER_MODE =
        (TYPE = SELECT)
        (METHOD = BASIC)
        (RETRIES = 180)
        (DELAY = 5)
      )
    )
  )

AKA2 =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = cellar-vip)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = AKA.WORLD)
      (INSTANCE_NAME = AKA2)
    )
  )

AKA1 =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = gentic-vip)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = AKA.WORLD)
      (INSTANCE_NAME = AKA1)
    )
  )

LISTENERS_AKA =
  (ADDRESS_LIST =
    (ADDRESS = (PROTOCOL = TCP)(HOST = gentic-vip)(PORT = 1521))
    (ADDRESS = (PROTOCOL = TCP)(HOST = cellar-vip)(PORT = 1521))
  )

EXTPROC_CONNECTION_DATA =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC0))
    )
    (CONNECT_DATA =
      (SID = PLSExtProc)
      (PRESENTATION = RO)
    )
  )

SQLNET.ORA on both Nodes

# Akadia AG, Fichtenweg 10, CH-3672 Oberdiessbach
# --------------------------------------------------------------------------
# File:     sqlnet.ora
# Author:   Martin Zahn, Akadia AG, 30.09.2007
# Purpose:  Configuration file for all Net Clients (RAC Configuration)
# Location: $TNS_ADMIN
NAMES.DIRECTORY_PATH = (TNSNAMES)

Start / Stop Listener

oracle@gentic> srvctl stop listener -n gentic -l LISTENER_GENTIC
oracle@gentic> srvctl start listener -n gentic -l LISTENER_GENTIC

oracle@cellar> srvctl stop listener -n cellar -l LISTENER_CELLAR
oracle@cellar> srvctl start listener -n cellar -l LISTENER_CELLAR
Create the Cluster Database

Use DBCA to create the cluster database. The scripts that can be generated by DBCA are for reference purposes only; they can be found in /u01/app/oracle/admin/AKA/scripts. We tried to create the cluster database using the scripts, but without success.
The following table shows the paths to the corresponding files:

File                          Location                                               Shared
audit_file_dest               /u01/app/oracle/admin/AKA/adump                        No
background_dump_dest          /u01/app/oracle/diag/rdbms/aka/AKA1/trace              No
                              /u01/app/oracle/diag/rdbms/aka/AKA2/trace
core_dump_dest                /u01/app/oracle/diag/rdbms/aka/AKA1/cdump              No
                              /u01/app/oracle/diag/rdbms/aka/AKA2/cdump
user_dump_dest                /u01/app/oracle/diag/rdbms/aka/AKA1/trace              No
                              /u01/app/oracle/diag/rdbms/aka/AKA2/trace
dg_broker_config_file1        /u01/app/oracle/product/11.1.0/dbs/dr1AKA.dat          No
dg_broker_config_file2        /u01/app/oracle/product/11.1.0/dbs/dr2AKA.dat          No
diagnostic_dest               /u01/app/oracle                                        No
Alert.log (log.xml)           /u01/app/oracle/diag/rdbms/aka/AKA1/alert              No
LISTENER.ORA                  /u01/app/oracle/product/11.1.0/network/admin           No
TNSNAMES.ORA                  /u01/app/oracle/product/11.1.0/network/admin           No
SQLNET.ORA                    /u01/app/oracle/product/11.1.0/network/admin           No
SPFILE                        /u01/oradat/AKA/spfileAKA.ora                          Yes
PFILE (init files)            /u01/app/oracle/product/11.1.0/dbs/initAKA1.ora        Yes
                              /u01/app/oracle/product/11.1.0/dbs/initAKA2.ora
                              (linked to spfile /u01/oradat/AKA/spfileAKA.ora)
control_files                 /u01/oradat/AKA/control01.ctl                          Yes
                              /u01/oradat/AKA/control02.ctl
                              /u01/oradat/AKA/control03.ctl
SYSTEM Tablespace             /u01/oradat/AKA/system01.dbf                           Yes
SYSAUX Tablespace             /u01/oradat/AKA/sysaux01.dbf                           Yes
TEMP Tablespace               /u01/oradat/AKA/temp01.dbf                             Yes
UNDO Tablespaces              /u01/oradat/AKA/undotbs01.dbf                          Yes
                              /u01/oradat/AKA/undotbs02.dbf
Redolog Files                 /u01/oradat/AKA/redo01.log                             Yes
                              /u01/oradat/AKA/redo02.log
                              /u01/oradat/AKA/redo03.log
                              /u01/oradat/AKA/redo04.log
USERS Tablespace              /u01/oradat/AKA/users01.dbf                            Yes
Cluster Registry File         /u01/crscfg/crs_registry                               Yes
Voting Disk                   /u01/votdsk/voting_disk                                Yes
Enterprise Manager Console    https://gentic:1158/em                                 Yes
Check the logfiles in: /u01/app/oracle/cfgtoollogs/dbca/AKA

 261644  Sep 29 13:42  apex.log
  10513  Sep 29 12:31  context.log
      0  Sep 29 13:42  CreateClustDBViews.log
 534782  Sep 29 12:22  CreateDBCatalog.log
    442  Sep 29 11:46  CreateDB.log
  12971  Sep 29 12:55  cwmlite.log
 170921  Sep 29 13:50  emConfig.log
 222385  Sep 29 13:20  emRepository.log
   5496  Sep 29 12:50  interMedia.log
   3858  Sep 29 12:29  JServer.log
    267  Sep 29 13:43  lockAccount.log
    562  Sep 29 12:38  ordinst.log
    357  Sep 29 13:42  owb.log
     65  Sep 29 18:04  postDBCreation.log
  27283  Sep 29 13:05  spatial.log
    174  Sep 29 12:22  sqlPlusHelp.log
1439857  Sep 29 18:05  trace.log
   1518  Sep 29 13:43  ultraSearchCfg.log
   7518  Sep 29 13:06  ultraSearch.log
  30140  Sep 29 12:38  xdb_protocol.log
oracle@gentic> srvctl config database -d AKA
gentic AKA1 /u01/app/oracle/product/11.1.0
cellar AKA2 /u01/app/oracle/product/11.1.0

Status of all instances and services

oracle@gentic> srvctl status database -d AKA
Instance AKA1 is running on node gentic
Instance AKA2 is running on node cellar

Status of node applications on a particular node

oracle@gentic> srvctl status nodeapps -n gentic
VIP is running on node: gentic
GSD is running on node: gentic
Listener is running on node: gentic
ONS daemon is running on node: gentic

Display the configuration for node applications - (VIP, GSD, ONS, Listener)

oracle@gentic> srvctl config nodeapps -n gentic -a -g -s -l
VIP exists.: /gentic-vip/192.168.138.130/255.255.255.0/eth0
GSD exists.
ONS daemon exists.
Listener exists.

All running instances in the cluster

sqlplus system/manager@AKA1

SELECT inst_id, instance_number, instance_name,
       parallel, status, database_status,
       active_state, host_name host
  FROM gv$instance
 ORDER BY inst_id;

INST_ID  INSTANCE_NUMBER  INSTANCE_NAME  PAR  STATUS  DATABASE_STATUS  ACTIVE_ST  HOST
-------  ---------------  -------------  ---  ------  ---------------  ---------  -------
      1                1  AKA1           YES  OPEN    ACTIVE           NORMAL     gentic
      2                2  AKA2           YES  OPEN    ACTIVE           NORMAL     cellar

SELECT name FROM v$datafile
UNION
SELECT member FROM v$logfile
UNION
SELECT name FROM v$controlfile
UNION
SELECT name FROM v$tempfile;
NAME
--------------------------------------
/u01/oradat/AKA/control01.ctl
/u01/oradat/AKA/control02.ctl
/u01/oradat/AKA/control03.ctl
/u01/oradat/AKA/redo01.log
/u01/oradat/AKA/redo02.log
/u01/oradat/AKA/redo03.log
/u01/oradat/AKA/redo04.log
/u01/oradat/AKA/sysaux01.dbf
/u01/oradat/AKA/system01.dbf
/u01/oradat/AKA/temp01.dbf
/u01/oradat/AKA/undotbs01.dbf
/u01/oradat/AKA/undotbs02.dbf
/u01/oradat/AKA/users01.dbf

The V$ACTIVE_INSTANCES view can also display the current status of the instances.

SELECT * FROM v$active_instances;

INST_NUMBER  INST_NAME
-----------  ----------------------------------------------
          1  gentic:AKA1
          2  cellar:AKA2
Finally, the GV$ views allow you to display global information for the whole RAC.
SELECT inst_id, username, sid, serial# FROM gv$session WHERE username IS NOT NULL; INST_ID ---------1 1 1 1 1 1 1 1 1 1 1 1 1 2 2 2 2 2 2 2 2 2 2 2 USERNAME SID SERIAL# ------------------------------ ---------- ---------SYSTEM 113 137 DBSNMP 114 264 DBSNMP 116 27 SYSMAN 118 4 SYSMAN 121 11 SYSMAN 124 25 SYS 125 18 SYSMAN 126 14 SYS 127 7 DBSNMP 128 370 SYS 130 52 SYS 144 9 SYSTEM 170 608 DBSNMP 117 393 SYSTEM 119 1997 SYSMAN 123 53 DBSNMP 124 52 SYS 127 115 SYS 128 126 SYSMAN 129 771 SYSMAN 134 18 DBSNMP 135 5 SYSMAN 146 42 SYSMAN 170 49
Start Enterprise Manager Console

The Enterprise Manager Console is shared for the whole cluster; in our example it listens on https://gentic:1158/em.
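Whether the console is up can be checked with emctl on the node that hosts Database Control (Gentic in this setup); a short sketch:

oracle@gentic> emctl status dbconsole    # shows whether the dbconsole behind https://gentic:1158/em is running
oracle@gentic> emctl start dbconsole     # start it if necessary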
oracle@cellar> emctl stop dbconsole oracle@cellar> srvctl stop instance -d AKA -i AKA2 oracle@cellar> srvctl stop nodeapps -n cellar
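To bring the node back into service, the same steps can be run in reverse order; a sketch:

oracle@cellar> srvctl start nodeapps -n cellar
oracle@cellar> srvctl start instance -d AKA -i AKA2
oracle@cellar> emctl start dbconsole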
Transparent Application Failover (TAF)

A major component of Oracle RAC 11g that is responsible for failover processing is the Transparent Application Failover (TAF) option. All database connections (and processes) that lose their connection are reconnected to another node within the cluster, and the failover is completely transparent to the user. An important point is that TAF happens automatically within the OCI libraries, so your application (client) code does not need to change in order to take advantage of TAF. Certain configuration steps, however, are needed in the Oracle TNS file tnsnames.ora.
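The client-side part of this configuration is the FAILOVER_MODE clause inside the connect descriptor, as already shown in the TNSNAMES.ORA above; reduced to the essentials it looks like this:

# TYPE=SELECT also resumes open SELECT statements after the failover;
# METHOD=BASIC connects to the surviving instance only when the failover happens.
(FAILOVER_MODE =
  (TYPE = SELECT)
  (METHOD = BASIC)
  (RETRIES = 180)
  (DELAY = 5)
)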
TAF Test
Our Test is initiated from the Non-RAC Client VIPER where Oracle 10.2.0.3 is installed oracle@viper> ping gentic-vip PING gentic-vip (192.168.138.130) 56(84) bytes of data. 64 bytes from gentic-vip (192.168.138.130): icmp_seq=1 ttl=64 time=2.17 ms 64 bytes from gentic-vip (192.168.138.130): icmp_seq=2 ttl=64 time=1.20 ms oracle@viper> ping cellar-vip PING cellar-vip (192.168.138.131) 56(84) bytes of data. 64 bytes from cellar-vip (192.168.138.131): icmp_seq=1 ttl=64 time=2.41 ms 64 bytes from cellar-vip (192.168.138.131): icmp_seq=2 ttl=64 time=1.30 ms oracle@viper> tnsping AKA TNS Ping Utility for Linux: Version 10.2.0.3.0 - Production on 30-SEP-2007 11:35:46 Used parameter files: /home/oracle/config/10.2.0/sqlnet.ora Used TNSNAMES adapter to resolve the alias Attempting to contact (DESCRIPTION = (ADDRESS = (PROTOCOL = TCP)(HOST = gentic-vip) (PORT = 1521)) (ADDRESS = (PROTOCOL = TCP)(HOST = cellar-vip)(PORT = 1521)) (LOAD_BALANCE = yes) (CONNECT_DATA = (SERVER = DEDICATED) (SERVICE_NAME = AKA.WORLD) (FAILOVER_MODE = (TYPE = SELECT) (METHOD = BASIC) (RETRIES = 180) (DELAY = 5))))OK (0 msec) SQL Query to Check the Session's Failover Information The following SQL query can be used to check a session's failover type, failover method, and if a failover has occurred. We will be using this query throughout this example. oracle@viper> sqlplus system/manager@AKA SQL*Plus: Release 10.2.0.3.0 - Production on Sun Sep 30 11:38:45 2007 Copyright (c) 1982, 2006, Oracle. All Rights Reserved. Connected to: Oracle Database 11g Enterprise Edition Release 11.1.0.6.0 - Production With the Partitioning, Real Application Clusters, OLAP, Data Mining and Real Application Testing options SQL> COLUMN instance_name FORMAT a13 COLUMN host_name FORMAT a9 COLUMN failover_method FORMAT a15
COLUMN failed_over FORMAT a11 SELECT DISTINCT v.instance_name AS instance_name, v.host_name AS host_name, s.failover_type AS failover_type, s.failover_method AS failover_method, s.failed_over AS failed_over FROM v$instance v, v$session s WHERE s.username = 'SYSTEM'; INSTANCE_NAME HOST_NAME FAILOVER_TYPE FAILOVER_METHOD FAILED_OVER ------------- --------- ------------- --------------- ----------AKA2 cellar SELECT BASIC NO We can see, that we are connected to the Instance AKA2 which is running on Cellar. Now we stop this Instance without disconnecting from VIPER. oracle@cellar> srvctl status database -d AKA Instance AKA1 is running on node gentic Instance AKA2 is running on node cellar oracle@cellar> srvctl stop instance -d AKA -i AKA2 -o abort oracle@cellar> srvctl status database -d AKA Instance AKA1 is running on node gentic Instance AKA2 is not running on node cellar Now let's go back to our SQL session on VIPER and rerun the SQL statement: SQL> COLUMN instance_name FORMAT a13 COLUMN host_name FORMAT a9 COLUMN failover_method FORMAT a15 COLUMN failed_over FORMAT a11 SELECT DISTINCT v.instance_name AS instance_name, v.host_name AS host_name, s.failover_type AS failover_type, s.failover_method AS failover_method, s.failed_over AS failed_over FROM v$instance v, v$session s WHERE s.username = 'SYSTEM'; INSTANCE_NAME HOST_NAME FAILOVER_TYPE FAILOVER_METHOD FAILED_OVER ------------- --------- ------------- --------------- ----------AKA1 gentic SELECT BASIC YES We can see that the above session has now been failed over to instance AKA1 on Gentic.
All data files, control files, SPFILEs, and redo log files in Oracle RAC environments must reside on cluster-aware shared disks so that all of the cluster database instances can access these storage components.
Oracle recommends that you use one shared server parameter file (SPFILE) with instance-specific entries. Alternatively, you can use a local file system to store instance-specific parameter files (PFILEs).
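With the shared SPFILE approach used here, the instance-specific init<SID>.ora in $ORACLE_HOME/dbs of each node contains nothing but a pointer to the shared file, or is simply linked to it as in the file location table above. A sketch of the pointer variant:

# /u01/app/oracle/product/11.1.0/dbs/initAKA1.ora on gentic
# (initAKA2.ora on cellar looks the same)
SPFILE='/u01/oradat/AKA/spfileAKA.ora'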
We noticed that it is extremely hard to set up a clustered database with the scripts generated by DBCA. The database can be created, but it is difficult to register it with the cluster software using SRVCTL.
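If the database has been created with scripts instead of DBCA, it can be registered with the clusterware manually using srvctl; the following is a sketch of the usual commands with the names from this setup:

oracle> srvctl add database -d AKA -o /u01/app/oracle/product/11.1.0
oracle> srvctl add instance -d AKA -i AKA1 -n gentic
oracle> srvctl add instance -d AKA -i AKA2 -n cellar
oracle> srvctl start database -d AKA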