DBA Tips Archive for Oracle

Building an Inexpensive Oracle RAC 11g R2 on Linux (RHEL 5)
by Jeff Hunter, Sr. Database Administrator

Contents
Introduction
Oracle RAC 11g Overview
Shared Storage Overview
iSCSI Technology
Hardware and Costs
Install the Linux Operating System
Install Required Linux Packages for Oracle RAC
Install Openfiler
Network Configuration
Cluster Time Synchronization Service
Configure iSCSI Volumes using Openfiler
Configure iSCSI Volumes on Oracle RAC Nodes
Create Job Role Separation Operating System Privileges Groups, Users, and Directories
Logging Into a Remote System Using X Terminal
Configure the Linux Servers for Oracle
Configure RAC Nodes for Remote Access using SSH (Optional)
Install and Configure ASMLib 2.0
Download Oracle RAC 11g Release 2 Software
Preinstallation Tasks for Oracle Grid Infrastructure for a Cluster
Install Oracle Grid Infrastructure for a Cluster
Postinstallation Tasks for Oracle Grid Infrastructure for a Cluster
Create ASM Disk Groups for Data and Fast Recovery Area
Install Oracle Database 11g with Oracle Real Application Clusters
Install Oracle Database 11g Examples (formerly Companion)
Create the Oracle Cluster Database
Post Database Creation Tasks (Optional)
Create / Alter Tablespaces
Verify Oracle Grid Infrastructure and Database Configuration
Starting / Stopping the Cluster
Troubleshooting
Conclusion
Acknowledgements
About the Author

Introduction
Oracle RAC 11g Release 2 allows DBAs to configure a cluster database solution with superior fault tolerance, load balancing, and scalability. However, DBAs who want to become more familiar with the features and benefits of database clustering will find that the cost of configuring even a small RAC cluster ranges from US$10,000 to US$20,000. That cost does not even include the heart of a production RAC configuration, the shared storage. In most cases, this would be a Storage Area Network (SAN), which generally starts at US$10,000.

Unfortunately, for many shops, the price of the hardware required for a typical RAC configuration exceeds most training budgets. For those who want to become familiar with Oracle RAC 11g without a major cash outlay, this guide provides a low-cost alternative to configuring an Oracle RAC 11g Release 2 system using commercial off-the-shelf components and downloadable software at an estimated cost of US$2,800.

The system will consist of a two node cluster, both running Linux (CentOS 5.5 for x86_64), Oracle RAC 11g Release 2 for Linux x86_64, and ASMLib 2.0. All shared disk storage for Oracle RAC will be based on iSCSI using Openfiler release 2.3 x86_64 running on a third node (known in this article as the Network Storage Server).

This guide is provided for educational purposes only, so the setup is kept simple to demonstrate ideas and concepts. For example, the shared Oracle Clusterware files (OCR and voting files) and all physical database files in this article will be set up on only one physical disk, while in practice they should be stored on multiple physical drives configured for increased performance and redundancy (i.e. RAID). In addition, each Linux node will only be configured with two network interfaces: one for the public network (eth0) and one that will be used for both the Oracle RAC private interconnect and the network storage server for shared iSCSI access (eth1). For a production RAC implementation, the private interconnect should be at least Gigabit (or more) with redundant paths and only be used by Oracle to transfer Cluster Manager and Cache Fusion related data. A third dedicated network interface (eth2, for example) should be configured on another redundant Gigabit network for access to the network storage server (Openfiler).

In addition to this guide, please see the following extensions to this article that describe how to add and remove nodes from the Oracle RAC.

Add a Node to an Existing Oracle RAC 11g R2 Cluster on Linux (RHEL 5)
Remove a Node from an Existing Oracle RAC 11g R2 Cluster on Linux (RHEL 5)

Oracle Documentation

While this guide provides detailed instructions for successfully installing a complete Oracle RAC 11g system, it is by no means a substitute for the official Oracle documentation (see list below). In addition to this guide, users should also consult the following Oracle documents to gain a full understanding of alternative configuration options, installation, and administration with Oracle RAC 11g. Oracle's official documentation site is docs.oracle.com.

Grid Infrastructure Installation Guide 11g Release 2 (11.2) for Linux
Clusterware Administration and Deployment Guide 11g Release 2 (11.2)
Oracle Real Application Clusters Installation Guide 11g Release 2 (11.2) for Linux and UNIX
Real Application Clusters Administration and Deployment Guide 11g Release 2 (11.2)
Oracle Database 2 Day + Real Application Clusters Guide 11g Release 2 (11.2)
Oracle Database Storage Administrator's Guide 11g Release 2 (11.2)

Network Storage Server

Powered by rPath Linux, Openfiler is a free browser-based network storage management utility that delivers file-based Network Attached Storage (NAS) and block-based Storage Area Networking (SAN) in a single framework. The entire software stack interfaces with open source applications such as Apache, Samba, LVM2, ext3, Linux NFS and iSCSI Enterprise Target. Openfiler combines these ubiquitous technologies into a small, easy to manage solution fronted by a powerful web-based management interface.

Openfiler supports CIFS, NFS, HTTP/DAV, and FTP; however, we will only be making use of its iSCSI capabilities to implement an inexpensive SAN for the shared storage components required by Oracle RAC 11g. The operating system (rPath Linux) and the Openfiler application will be installed on one internal SATA disk. A second internal 73GB 15K SCSI hard disk will be configured as a single volume group that will be used for all shared disk storage requirements. The Openfiler server will be configured to use this volume group for iSCSI based storage and will be used in our Oracle RAC 11g configuration to store the shared files required by Oracle Grid Infrastructure and the Oracle RAC database.

Oracle Grid Infrastructure 11g Release 2 (11.2)

With Oracle Grid Infrastructure 11g Release 2 (11.2), the Automatic Storage Management (ASM) and Oracle Clusterware software is packaged together in a single binary distribution and installed into a single home directory, which is referred to as the Grid Infrastructure home. You must install the Grid Infrastructure in order to use Oracle RAC 11g Release 2. Configuration assistants that are responsible for configuring ASM and Oracle Clusterware start after the installer interview process. While the installation of the combined products is called Oracle Grid Infrastructure, Oracle Clusterware and Automatic Storage Management remain separate products.

After Oracle Grid Infrastructure is installed and configured on both nodes in the cluster, the next step will be to install the Oracle Real Application Clusters (Oracle RAC) software on both Oracle RAC nodes.

In this article, the Oracle Grid Infrastructure and Oracle RAC software will be installed on both nodes using the optional Job Role Separation configuration. One OS user will be created to own each Oracle software product: "grid" for the Oracle Grid Infrastructure owner and "oracle" for the Oracle RAC software. Throughout this article, the user created to own the Oracle Grid Infrastructure binaries is called the grid user. This user will own both the Oracle Clusterware and Oracle Automatic Storage Management binaries. The user created to own the Oracle database binaries (Oracle RAC) will be called the oracle user. Both Oracle software owners must have the Oracle Inventory group (oinstall) as their primary group, so that each Oracle software installation owner can write to the central inventory (oraInventory), and so that OCR and Oracle Clusterware resource permissions are set correctly. The Oracle RAC software owner must also have the OSDBA group and the optional OSOPER group as secondary groups.
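For reference, a minimal sketch of how the job role separation groups and users described above could be created on each RAC node is shown here (run as root). The group and user ID values are examples only; the exact names and IDs used by this guide are created in a later section.

# Example only -- actual UIDs/GIDs are defined later in this guide
groupadd -g 1000 oinstall       # Oracle Inventory group
groupadd -g 1200 asmadmin       # OSASM group
groupadd -g 1201 asmdba         # OSDBA for ASM group
groupadd -g 1202 asmoper        # OSOPER for ASM group
groupadd -g 1300 dba            # OSDBA group
groupadd -g 1301 oper           # OSOPER group
useradd -u 1100 -g oinstall -G asmadmin,asmdba,asmoper grid
useradd -u 1101 -g oinstall -G dba,oper,asmdba oracle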

Assigning IP Addresses

Prior to Oracle Clusterware 11g Release 2, the only method available for assigning IP addresses to each of the Oracle RAC nodes was to have the network administrator manually assign static IP addresses in DNS (never DHCP). This would include the public IP address for the node, the RAC interconnect, the virtual IP address (VIP), and new to 11g Release 2, the Single Client Access Name (SCAN) virtual IP address(es).

Oracle Clusterware 11g Release 2 now provides two methods for assigning IP addresses to all Oracle RAC nodes:

1. Assigning IP addresses dynamically using Grid Naming Service (GNS), which makes use of DHCP

2. The traditional method of manually assigning static IP addresses in Domain Name Service (DNS)

Assigning IP Addresses Dynamically using Grid Naming Service (GNS)

A new method for assigning IP addresses named Grid Naming Service (GNS) was introduced in Oracle Clusterware 11g Release 2, which allows all private interconnect addresses, as well as most of the VIP addresses, to be dynamically assigned using DHCP. GNS and DHCP are key elements of Oracle's new Grid Plug and Play (GPnP) feature that, as Oracle states, eliminates per-node configuration data and the need for explicit add and delete node steps. GNS enables a dynamic Grid Infrastructure through the self-management of the network requirements for the cluster.

All name resolution requests for the cluster within a subdomain delegated by the DNS are handed off to GNS using multicast Domain Name Service (mDNS) included within Oracle Clusterware. Using GNS eliminates the need for managing IP addresses and name resolution and is especially advantageous in a dynamic cluster environment where nodes are often added or removed.

While assigning IP addresses using GNS certainly has its benefits and offers more flexibility over manually defining static IP addresses, it does come at the cost of complexity and requires components not defined in this guide. For example, activating GNS in a cluster requires a DHCP server on the public network, which falls outside the scope of building an inexpensive Oracle RAC.

The example Oracle RAC configuration described in this guide will use the traditional method of manually assigning static IP addresses in DNS.

To learn more about the benefits and how to configure GNS, please see Oracle Grid Infrastructure Installation Guide 11g Release 2 (11.2) for Linux.

Assigning IP Addresses Manually using Static IP Address (The DNS Method)

If you choose not to use GNS, manually defining static IP addresses is still available with Oracle Clusterware 11g Release 2 and will be the method used in this article to assign all required Oracle Clusterware networking components (public IP address for the node, RAC interconnect, virtual IP address, and SCAN virtual IP).

It should be pointed out that prior to Oracle 11g Release 2, DNS was not a strict requirement for successfully configuring Oracle RAC. It was technically possible (although not recommended for a production system) to define all IP addresses only in the hosts file on all nodes in the cluster (i.e. /etc/hosts). This actually worked to my advantage in my previous articles on building an inexpensive RAC because it was one less component to document and configure.

So, why is the use of DNS now a requirement when manually assigning static IP addresses? The answer is SCAN. Oracle Clusterware 11g Release 2 requires the use of DNS in order to store the SCAN virtual IP address(es). In addition to the requirement of configuring the SCAN virtual IP address in DNS, we will also configure the public and virtual IP address for all Oracle RAC nodes in DNS for name resolution. If you do not have access to a DNS, instructions will be included later in this guide on how to install a minimal DNS server on the Openfiler network storage server.

When using the DNS method for assigning IP addresses, Oracle recommends that all
static IP addresses be manually configured in DNS before starting the Oracle Grid
Infrastructure installation.
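For reference, the forward zone for this configuration would contain entries similar to the following sketch (BIND-style syntax, domain and TTLs omitted). This is for illustration only; the IP addresses match the network configuration used later in this guide, and the exact zone file layout depends on your DNS server.

racnode1               IN  A  192.168.1.151
racnode2               IN  A  192.168.1.152
racnode1-vip           IN  A  192.168.1.251
racnode2-vip           IN  A  192.168.1.252
racnode-cluster-scan   IN  A  192.168.1.187
racnode-cluster-scan   IN  A  192.168.1.188
racnode-cluster-scan   IN  A  192.168.1.189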
Single Client Access Name (SCAN) for the Cluster

If you have ever been tasked with extending an Oracle RAC cluster by adding a new node (or shrinking a RAC cluster by removing a node), then you know the pain of going through a list of all clients and updating their SQL*Net or JDBC configuration to reflect the new or deleted node. To address this problem, Oracle 11g Release 2 introduced a new feature known as Single Client Access Name, or SCAN for short. SCAN is a new feature that provides a single host name for clients to access an Oracle Database running in a cluster. Clients using SCAN do not need to change their TNS configuration if you add or remove nodes in the cluster. The SCAN resource and its associated IP address(es) provide a stable name for clients to use for connections, independent of the nodes that make up the cluster. You will be asked to provide the host name (also called the SCAN name in this document) and up to three IP addresses to be used for the SCAN resource during the interview phase of the Oracle Grid Infrastructure installation. For high availability and scalability, Oracle recommends that you configure the SCAN name for round-robin resolution to three IP addresses. At a minimum, the SCAN must resolve to at least one address.

The SCAN virtual IP name is similar to the names used for a node's virtual IP address, such as racnode1-vip. However, unlike a virtual IP, the SCAN is associated with the entire cluster, rather than an individual node, and can be associated with multiple IP addresses, not just one address.

During installation of the Oracle Grid Infrastructure, a listener is created for each of the SCAN addresses. Clients that access the Oracle RAC database should use the SCAN name or SCAN address, not the VIP name or address. If an application uses a SCAN to connect to the cluster database, the network configuration files on the client computer do not need to be modified when nodes are added to or removed from the cluster. Note that SCAN addresses, virtual IP addresses, and public IP addresses must all be on the same subnet.

The SCAN should be configured so that it is resolvable either by using Grid Naming Service (GNS) within the cluster or by using the traditional method of assigning static IP addresses using Domain Name Service (DNS) resolution.

In this article, I will configure SCAN for round-robin resolution to three manually configured static IP addresses using the DNS method.

racnode-cluster-scan IN A 192.168.1.187
racnode-cluster-scan IN A 192.168.1.188
racnode-cluster-scan IN A 192.168.1.189

Further details regarding the configuration of SCAN will be provided in the section "Verify SCAN Configuration" during the network configuration phase of this guide.
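Once the SCAN entries exist in DNS, round-robin resolution can be sanity checked from either RAC node with a simple lookup; repeated lookups should rotate through the three addresses. (The full verification procedure appears in the "Verify SCAN Configuration" section.)

[root@racnode1 ~]# nslookup racnode-cluster-scan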

Automatic Storage Management and Oracle Clusterware Files

Automatic Storage Management (ASM) is now fully integrated with Oracle Clusterware in the Oracle Grid Infrastructure. Oracle ASM and Oracle Database 11g Release 2 provide a more enhanced storage solution than previous releases. Part of this solution is the ability to store the Oracle Clusterware files, namely the Oracle Cluster Registry (OCR) and the Voting Files (VF, also known as the Voting Disks), on ASM. This feature enables ASM to provide a unified storage solution, storing all the data for the clusterware and the database without the need for third-party volume managers or cluster file systems.

Just like database files, Oracle Clusterware files are stored in an ASM disk group and therefore utilize the ASM disk group configuration with respect to redundancy. For example, a Normal Redundancy ASM disk group will hold a two-way mirrored OCR. A failure of one disk in the disk group will not prevent access to the OCR. With a High Redundancy ASM disk group (three-way mirrored), two independent disks can fail without impacting access to the OCR. With External Redundancy, no protection is provided by Oracle.

Oracle only allows one OCR per disk group in order to protect against physical disk failures. When configuring Oracle Clusterware files on a production system, Oracle recommends using either normal or high redundancy ASM disk groups. If disk mirroring is already occurring at either the OS or hardware level, you can use external redundancy.

The Voting Files are managed in a similar way to the OCR. They follow the ASM disk group configuration with respect to redundancy, but are not managed as normal ASM files in the disk group. Instead, each voting disk is placed on a specific disk in the disk group. The disk and the location of the Voting Files on the disks are stored internally within Oracle Clusterware.

The following example describes how the Oracle Clusterware files are stored in ASM after installing Oracle Grid Infrastructure using this guide. To view the OCR, use ASMCMD.

[grid@racnode1 ~]$ asmcmd


ASMCMD> ls -l +CRS/racnode-cluster/OCRFILE
Type Redund Striped Time Sys Name
OCRFILE UNPROT COARSE NOV 22 12:00:00 Y REGISTRY.255.703024853

From the example above, you can see that after listing all of the ASM files in the +CRS/racnode-cluster/OCRFILE directory, it only shows the OCR (REGISTRY.255.703024853). The listing does not show the Voting File(s) because they are not managed as normal ASM files. To find the location of all Voting Files within Oracle Clusterware, use the crsctl query css votedisk command as follows.

[grid@racnode1 ~]$ crsctl query css votedisk


## STATE File Universal Id File Name Disk group
-- ----- ----------------- --------- ---------
1. ONLINE 4cbbd0de4c694f50bfd3857ebd8ad8c4 (ORCL:CRSVOL1) [CRS]
Located 1 voting disk(s).

If you decide against using ASM for the OCR and voting disk files, Oracle Clusterware still allows these files to be stored on a cluster file system like Oracle Cluster File System Release 2 (OCFS2) or on an NFS system. Please note that installing Oracle Clusterware files on raw or block devices is no longer supported, unless an existing system is being upgraded.

Previous versions of this guide used OCFS2 for storing the OCR and voting disk files. This guide will store the OCR and voting disk files on ASM in an ASM disk group named +CRS using external redundancy, which is one OCR location and one voting disk location. The ASM disk group should be created on shared storage and be at least 2GB in size.

The Oracle physical database files (data, online redo logs, control files, archived redo logs) will be installed on ASM in an ASM disk group named +RACDB_DATA while the Fast Recovery Area will be created in a separate ASM disk group named +FRA.
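Once the three disk groups have been created later in this guide, they can be listed as the grid user with ASMCMD; expect to see +CRS, +RACDB_DATA, and +FRA (the exact size and usage figures will vary).

[grid@racnode1 ~]$ asmcmd lsdg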

The two Oracle RAC nodes and the network storage server will be configured as follows.

Oracle RAC / Openfiler Nodes

Node Name    Instance Name  Database Name            Processor                          RAM   Operating System
racnode1     racdb1         racdb.idevelopment.info  1 x Dual Core Intel Xeon, 3.00GHz  4GB   CentOS 5.5 (x86_64)
racnode2     racdb2         racdb.idevelopment.info  1 x Dual Core Intel Xeon, 3.00GHz  4GB   CentOS 5.5 (x86_64)
openfiler1                                           2 x Intel Xeon, 3.00GHz            6GB   Openfiler 2.3 (x86_64)

Network Configuration

Node Name    Public IP      Private IP     Virtual IP
racnode1     192.168.1.151  192.168.2.151  192.168.1.251
racnode2     192.168.1.152  192.168.2.152  192.168.1.252
openfiler1   192.168.1.195  192.168.2.195

SCAN Name: racnode-cluster-scan    SCAN IPs: 192.168.1.187, 192.168.1.188, 192.168.1.189

Oracle Software Components

Software Component   OS User  Primary Group  Supplementary Groups       Home Directory  Oracle Base / Oracle Home
Grid Infrastructure  grid     oinstall       asmadmin, asmdba, asmoper  /home/grid      /u01/app/grid
                                                                                        /u01/app/11.2.0/grid
Oracle RAC           oracle   oinstall       dba, oper, asmdba          /home/oracle    /u01/app/oracle
                                                                                        /u01/app/oracle/product/11.2.0/dbhome_1

Storage Components

Storage Component   File System  Volume Size  ASM Volume Group Name  ASM Redundancy  Openfiler Volume Name
OCR/Voting Disk     ASM          2GB          +CRS                   External        racdb-crs1
Database Files      ASM          32GB         +RACDB_DATA            External        racdb-data1
Fast Recovery Area  ASM          32GB         +FRA                   External        racdb-fra1

This article is only designed to work as documented with absolutely no substitutions. The only exception here is the choice of vendor hardware (i.e. machines, networking equipment, and internal / external hard drives). Ensure that the hardware you purchase from the vendor is supported on Red Hat Enterprise Linux 5 and Openfiler 2.3 (Final Release).

If you are looking for an example that takes advantage of Oracle RAC 10g Release 2 with RHEL 5.3 using iSCSI, click here.

If you are looking for an example that takes advantage of Oracle RAC 11g Release 1 with RHEL 5.1 using iSCSI, click here.

Oracle RAC 11g Overview
Before introducing the details for building a RAC cluster, it might be helpful to first clarify what a cluster is. A cluster is a group of two or more interconnected computers or servers that appear as if they are one server to end users and applications and generally share the same set of physical disks. The key benefit of clustering is to provide a highly available framework where the failure of one node (for example a database server running an instance of Oracle) does not bring down an entire application. In the case of failure with one of the servers, the other surviving server (or servers) can take over the workload from the failed server and the application continues to function normally as if nothing has happened.

The concept of clustering computers actually started several decades ago. The first successful cluster product was developed by DataPoint in 1977 named ARCnet. The ARCnet product enjoyed much success in academic research labs, but didn't really take off in the commercial market. It wasn't until the 1980s that Digital Equipment Corporation (DEC) released its VAX cluster product for the VAX/VMS operating system.

With the release of Oracle 6 for the Digital VAX cluster product, Oracle was the first commercial database to support clustering at the database level. It wasn't long, however, before Oracle realized the need for a more efficient and scalable distributed lock manager (DLM), as the one included with the VAX/VMS cluster product was not well suited for database applications. Oracle decided to design and write their own DLM for the VAX/VMS cluster product which provided the fine-grain block level locking required by the database. Oracle's own DLM was included in Oracle 6.2, which gave birth to Oracle Parallel Server (OPS), the first database to run the parallel server.

By Oracle 7, OPS was extended to include support for not only the VAX/VMS cluster product but also most flavors of UNIX. This framework required vendor-supplied clusterware which worked well, but made for a complex environment to set up and manage given the multiple layers involved. By Oracle 8, Oracle introduced a generic lock manager that was integrated into the Oracle kernel. In later releases of Oracle, this became known as the Integrated Distributed Lock Manager (IDLM) and relied on an additional layer known as the Operating System Dependent (OSD) layer. This new model paved the way for Oracle to not only have their own DLM, but to also create their own clusterware product in future releases.

Oracle Real Application Clusters (RAC), introduced with Oracle9i, is the successor to Oracle Parallel Server. Using the same IDLM, Oracle9i could still rely on external clusterware but was the first release to include their own clusterware product named Cluster Ready Services (CRS). With Oracle9i, CRS was only available for Windows and Linux. By Oracle10g Release 1, Oracle's clusterware product was available for all operating systems and was the required cluster technology for Oracle RAC. With the release of Oracle Database 10g Release 2 (10.2), Cluster Ready Services was renamed to Oracle Clusterware. When using Oracle 10g or higher, Oracle Clusterware is the only clusterware that you need for most platforms on which Oracle RAC operates (except for TruCluster, in which case you need vendor clusterware). You can still use clusterware from other vendors if the clusterware is certified, but keep in mind that Oracle RAC still requires Oracle Clusterware as it is fully integrated with the database software. This guide uses Oracle Clusterware which, as of 11g Release 2 (11.2), is now a component of Oracle Grid Infrastructure.

Like OPS, Oracle RAC allows multiple instances to access the same database (storage) simultaneously. RAC provides fault tolerance, load balancing, and performance benefits by allowing the system to scale out, and at the same time, since all instances access the same database, the failure of one node will not cause the loss of access to the database.

At the heart of Oracle RAC is a shared disk subsystem. Each instance in the cluster must be able to access all of the data, redo log files, control files and parameter file for all other instances in the cluster. The data disks must be globally available in order to allow all instances to access the database. Each instance has its own redo log files and UNDO tablespace that are locally read/writable. The other instances in the cluster must be able to access them (read-only) in order to recover that instance in the event of a system failure. The redo log files for an instance are only writable by that instance and will only be read from another instance during system failure. The UNDO, on the other hand, is read all the time during normal database operation (e.g. for CR fabrication).

A big difference between Oracle RAC and OPS is the addition of Cache Fusion. With OPS a request for data from one instance to another required the data to be written to disk first, then the requesting instance can read that data (after acquiring the required locks). This process was called disk pinging. With Cache Fusion, data is passed along a high-speed interconnect using a sophisticated locking algorithm.

Not all database clustering solutions use shared storage. Some vendors use an approach known as a Federated Cluster, in which data is spread across several machines rather than shared by all. With Oracle RAC, however, multiple instances use the same set of disks for storing data. Oracle's approach to clustering leverages the collective processing power of all the nodes in the cluster and at the same time provides failover security.

Preconfigured Oracle RAC solutions are available from vendors such as Dell, IBM and HP for production environments. This article, however, focuses on putting together your own Oracle RAC 11g environment for development and testing by using Linux servers and a low-cost shared disk solution: iSCSI.

For more background about Oracle RAC, visit the Oracle RAC Product Center on OTN.

Shared Storage Overview
Today, fibre channel is one of the most popular solutions for shared storage. As mentioned earlier, fibre channel is a high-speed serial transfer interface that is used to connect systems and storage devices in either point-to-point (FC-P2P), arbitrated loop (FC-AL), or switched topologies (FC-SW). Protocols supported by Fibre Channel include SCSI and IP. Fibre channel configurations can support as many as 127 nodes and have a throughput of up to 2.12 Gigabits per second in each direction, and 4.25 Gbps is expected.

Fibre channel, however, is very expensive. Just the fibre channel switch alone can start at around US$1,000. This does not even include the fibre channel storage array and high-end drives, which can reach prices of about US$300 for a single 36GB drive. A typical fibre channel setup which includes fibre channel cards for the servers is roughly US$10,000, which does not include the cost of the servers that make up the Oracle database cluster.

A less expensive alternative to fibre channel is SCSI. SCSI technology provides acceptable performance for shared storage, but for administrators and developers who are used to GPL-based Linux prices, even SCSI can come in over budget, at around US$2,000 to US$5,000 for a two-node cluster.

Another popular solution is the Sun NFS (Network File System) found on a NAS. It can be used for shared storage but only if you are using a network appliance or something similar. Specifically, you need servers that guarantee direct I/O over NFS, TCP as the transport protocol, and read/write block sizes of 32K. See the Certify page on Oracle Metalink for supported Network Attached Storage (NAS) devices that can be used with Oracle RAC. One of the key drawbacks that has limited the benefits of using NFS and NAS for database storage has been performance degradation and complex configuration requirements. Standard NFS client software (client systems that use the operating system provided NFS driver) is not optimized for Oracle database file I/O access patterns. With the introduction of Oracle 11g, a new feature known as Direct NFS Client integrates the NFS client functionality directly in the Oracle software. Through this integration, Oracle is able to optimize the I/O path between the Oracle software and the NFS server resulting in significant performance gains. Direct NFS Client can simplify, and in many cases automate, the performance optimization of the NFS client configuration for database workloads. To learn more about Direct NFS Client, see the Oracle White Paper entitled "Oracle Database 11g Direct NFS Client".

The shared storage that will be used for this article is based on iSCSI technology using a network storage server installed with Openfiler. This solution offers a low-cost alternative to fibre channel for testing and educational purposes, but given the low-end hardware being used, it is not often used in a production environment.

iSCSI Technology
For many years, the only technology that existed for building a network-based storage solution was a Fibre Channel Storage Area Network (FC SAN). Based on an earlier set of ANSI protocols called Fiber Distributed Data Interface (FDDI), Fibre Channel was developed to move SCSI commands over a storage network.

Several of the advantages to FC SAN include greater performance, increased disk utilization, improved availability, better scalability, and most important to us, support for server clustering! Still today, however, FC SANs suffer from three major disadvantages. The first is price. While the costs involved in building a FC SAN have come down in recent years, the cost of entry still remains prohibitive for small companies with limited IT budgets. The second is incompatible hardware components. Since its adoption, many product manufacturers have interpreted the Fibre Channel specifications differently from each other which has resulted in scores of interconnect problems. When purchasing Fibre Channel components from a common manufacturer, this is usually not a problem. The third disadvantage is the fact that a Fibre Channel network is not Ethernet! It requires a separate network technology along with a second set of skill sets that need to exist with the data center staff.

With the popularity of Gigabit Ethernet and the demand for lower cost, Fibre Channel has recently been given a run for its money by iSCSI-based storage systems. Today, iSCSI SANs remain the leading competitor to FC SANs.

Ratified on February 11, 2003 by the Internet Engineering Task Force (IETF), the Internet Small Computer System Interface, better known as iSCSI, is an Internet Protocol (IP)-based storage networking standard for establishing and managing connections between IP-based storage devices, hosts, and clients. iSCSI is a data transport protocol defined in the SCSI-3 specifications framework and is similar to Fibre Channel in that it is responsible for carrying block-level data over a storage network. Block-level communication means that data is transferred between the host and the client in chunks called blocks. Database servers depend on this type of communication (as opposed to the file level communication used by most NAS systems) in order to work properly. Like a FC SAN, an iSCSI SAN should be a separate physical network devoted entirely to storage; however, its components can be much the same as in a typical IP network (LAN).

While iSCSI has a promising future, many of its early critics were quick to point out some of its inherent shortcomings with regards to performance. The beauty of iSCSI is its ability to utilize an already familiar IP network as its transport mechanism. The TCP/IP protocol, however, is very complex and CPU intensive. With iSCSI, most of the processing of the data (both TCP and iSCSI) is handled in software and is much slower than Fibre Channel which is handled completely in hardware. The overhead incurred in mapping every SCSI command onto an equivalent iSCSI transaction is excessive. For many, the solution is to do away with iSCSI software initiators and invest in specialized cards that can offload TCP/IP and iSCSI processing from a server's CPU. These specialized cards are sometimes referred to as an iSCSI Host Bus Adaptor (HBA) or a TCP Offload Engine (TOE) card. Also consider that 10-Gigabit Ethernet is a reality today!

So with all of this talk about iSCSI, does this mean the death of Fibre Channel anytime soon? Probably not. Fibre Channel has clearly demonstrated its capabilities over the years with its capacity for extremely high speeds, flexibility, and robust reliability. Customers who have strict requirements for high performance storage, large complex connectivity, and mission critical reliability will undoubtedly continue to choose Fibre Channel.

As with any new technology, iSCSI comes with its own set of acronyms and terminology. For the purpose of this article, it is only important to understand the difference between an iSCSI initiator and an iSCSI target.

iSCSI Initiator

Basically, an iSCSI initiator is a client device that connects and initiates requests to some service offered by a server (in this case an iSCSI target). The iSCSI initiator software will need to exist on each of the Oracle RAC nodes (racnode1 and racnode2).

An iSCSI initiator can be implemented using either software or hardware. Software iSCSI initiators are available for most major operating system platforms. For this article, we will be using the free Linux Open-iSCSI software driver found in the iscsi-initiator-utils RPM. The iSCSI software initiator is generally used with a standard network interface card (NIC), a Gigabit Ethernet card in most cases. A hardware initiator is an iSCSI HBA (or a TCP Offload Engine (TOE) card), which is basically just a specialized Ethernet card with a SCSI ASIC on board to offload all the work (TCP and SCSI commands) from the system CPU. iSCSI HBAs are available from a number of vendors, including Adaptec, Alacritech, Intel, and QLogic.

iSCSI Target

An iSCSI target is the "server" component of an iSCSI network. This is typically the storage device that contains the information you want and answers requests from the initiator(s). For the purpose of this article, the node openfiler1 will be the iSCSI target.
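To make the initiator/target relationship concrete, the sketch below shows how one of the RAC nodes would discover the iSCSI targets offered by the Openfiler server. This is for illustration only; the full, step-by-step configuration appears later in this guide, and "openfiler1-priv" is assumed here to resolve to the storage server's private interface (192.168.2.195).

# On racnode1 / racnode2, as root
rpm -q iscsi-initiator-utils                              # confirm the Open-iSCSI initiator is installed
service iscsid start                                      # start the iSCSI initiator daemon
iscsiadm -m discovery -t sendtargets -p openfiler1-priv   # list targets exported by openfiler1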

Hardware and Costs
The hardware used to build our example Oracle RAC 11g environment consists of three Linux servers (two Oracle RAC nodes and one Network Storage Server) and components that can be purchased at many local computer stores or over the Internet.

Oracle RAC Node 1 (racnode1)

Dell PowerEdge T100

Dual Core Intel(R) Xeon(R) E3110, 3.0GHz, 6MB Cache, 1333MHz
4GB, DDR2, 800MHz
160GB 7.2K RPM SATA 3Gbps Hard Drive
Integrated Graphics (ATI ES1000)
Integrated Gigabit Ethernet (Broadcom(R) NetXtreme II(TM) 5722)
16x DVD Drive
No Keyboard, Monitor, or Mouse (Connected to KVM Switch)
US$500

1 x Ethernet LAN Card

Used for RAC interconnect to racnode2 and Openfiler networked storage.

Each Linux server for Oracle RAC should contain at least two NIC adapters. The Dell PowerEdge T100 includes an embedded Broadcom(R) NetXtreme II(TM) 5722 Gigabit Ethernet NIC that will be used to connect to the public network. A second NIC adapter will be used for the private network (RAC interconnect and Openfiler networked storage). Select the appropriate NIC adapter that is compatible with the maximum data transmission speed of the network switch to be used for the private network. For the purpose of this article, I used a Gigabit Ethernet switch (and a 1Gb Ethernet card) for the private network.

Intel(R) PRO/1000 PT Server Adapter (EXPI9400PT)
US$90

Oracle RAC Node 2 (racnode2)

Dell PowerEdge T100

Dual Core Intel(R) Xeon(R) E3110, 3.0GHz, 6MB Cache, 1333MHz
4GB, DDR2, 800MHz
160GB 7.2K RPM SATA 3Gbps Hard Drive
Integrated Graphics (ATI ES1000)
Integrated Gigabit Ethernet (Broadcom(R) NetXtreme II(TM) 5722)
16x DVD Drive
No Keyboard, Monitor, or Mouse (Connected to KVM Switch)
US$500

1 x Ethernet LAN Card

Used for RAC interconnect to racnode1 and Openfiler networked storage.

Each Linux server for Oracle RAC should contain at least two NIC adapters. The Dell PowerEdge T100 includes an embedded Broadcom(R) NetXtreme II(TM) 5722 Gigabit Ethernet NIC that will be used to connect to the public network. A second NIC adapter will be used for the private network (RAC interconnect and Openfiler networked storage). Select the appropriate NIC adapter that is compatible with the maximum data transmission speed of the network switch to be used for the private network. For the purpose of this article, I used a Gigabit Ethernet switch (and a 1Gb Ethernet card) for the private network.

Intel(R) PRO/1000 PT Server Adapter (EXPI9400PT)
US$90

Network Storage Server (openfiler1)

Dell PowerEdge 1800

Dual 3.0GHz Xeon / 1MB Cache / 800FSB (SL7PE)
6GB of ECC Memory
500GB SATA Internal Hard Disk
73GB 15K SCSI Internal Hard Disk
Integrated Graphics
Single embedded Intel 10/100/1000 Gigabit NIC
16x DVD Drive
No Keyboard, Monitor, or Mouse (Connected to KVM Switch)

Note: The rPath Linux operating system and Openfiler application will be installed on the 500GB internal SATA disk. A second internal 73GB 15K SCSI hard disk will be configured for the shared database storage. The Openfiler server will be configured to use this second hard disk for iSCSI based storage and will be used in our Oracle RAC 11g configuration to store the shared files required by Oracle Clusterware as well as the cluster database files.

Please be aware that any type of hard disk (internal or external) should work for the shared disk storage as long as it can be recognized by the network storage server (Openfiler) and has adequate space. For example, I could have made an extra partition on the 500GB internal SATA disk for the iSCSI target, but decided to make use of the faster SCSI disk for this example.

Finally, although the Openfiler server used in this example configuration contains 6GB of memory, this is by no means a requirement. The Openfiler server could be configured with as little as 2GB for a small test / evaluation network storage server.
US$800

1 x Ethernet LAN Card

Used for networked storage on the private network.

The Network Storage Server (Openfiler server) should contain two NIC adapters. The Dell PowerEdge 1800 machine included an integrated 10/100/1000 Ethernet adapter that will be used to connect to the public network. The second NIC adapter will be used for the private network (Openfiler networked storage). Select the appropriate NIC adapter that is compatible with the maximum data transmission speed of the network switch to be used for the private network. For the purpose of this article, I used a Gigabit Ethernet switch (and a 1Gb Ethernet card) for the private network.

Intel(R) PRO/1000 MT Server Adapter (PWLA8490MT)
US$125

Miscellaneous Components

1 x Ethernet Switch

Used for the interconnect between racnode1-priv and racnode2-priv which will be on the 192.168.2.0 network. This switch will also be used for network storage traffic for Openfiler. For the purpose of this article, I used a Gigabit Ethernet switch (and 1Gb Ethernet cards) for the private network.

Note: This article assumes you already have a switch or VLAN in place that will be used for the public network.

D-Link 8-port 10/100/1000 Desktop Switch (DGS-2208)
US$50

6 x Network Cables

Category 6 patch cable (Connect racnode1 to public network)                    US$10
Category 6 patch cable (Connect racnode2 to public network)                    US$10
Category 6 patch cable (Connect openfiler1 to public network)                  US$10
Category 6 patch cable (Connect racnode1 to interconnect Ethernet switch)      US$10
Category 6 patch cable (Connect racnode2 to interconnect Ethernet switch)      US$10
Category 6 patch cable (Connect openfiler1 to interconnect Ethernet switch)    US$10

Optional Components

KVM Switch

This guide requires access to the console of all machines in order to install the operating system and perform several of the configuration tasks. When managing a very small number of servers, it might make sense to connect each server with its own monitor, keyboard, and mouse in order to access its console. However, as the number of servers to manage increases, this solution becomes unfeasible. A more practical solution would be to configure a dedicated device which would include a single monitor, keyboard, and mouse that would have direct access to the console of each server. This solution is made possible using a Keyboard, Video, Mouse switch, better known as a KVM Switch. A KVM switch is a hardware device that allows a user to control multiple computers from a single keyboard, video monitor and mouse. Avocent provides a high quality and economical 4-port switch which includes four 6' cables.

AutoView(R) Analog KVM Switch

For a detailed explanation and guide on the use of KVM switches, please see the article "KVM Switches For the Home and the Enterprise".
US$350

Total US$2,565

We are about to start the installation process. Now that we have talked about the hardware that will be used in this example, let's take a conceptual look at what the environment would look like after connecting all of the hardware components.

Figure 1: Oracle RAC 11g Release 2 Test Configuration

As we start to go into the details of the installation, note that most of the tasks within this document will need to be performed on both Oracle RAC nodes (racnode1 and racnode2). I will indicate at the beginning of each section whether the task(s) should be performed on both Oracle RAC nodes or on the network storage server (openfiler1).

Install the Linux Operating System
Perform the following installation on both Oracle RAC nodes in the cluster.
This section provides a summary of the screens used to install the Linux operating system. This guide is designed to work with CentOS release 5.5 for x86_64 or Red Hat Enterprise Linux 5.5 for x86_64 and follows Oracle's suggestion of performing a "default RPMs" installation type to ensure all expected Linux O/S packages are present for a successful Oracle RDBMS installation.

Although I have used Red Hat Fedora in the past, I wanted to switch to a Linux environment that would guarantee all of the functionality contained with Oracle. This is where CentOS comes in. The CentOS project takes the Red Hat Enterprise Linux 5 source RPMs and compiles them into a free clone of the Red Hat Enterprise Server 5 product. This provides a free and stable version of the Red Hat Enterprise Linux 5 (AS/ES) operating environment that I can use for Oracle testing and development. I have moved away from Fedora as I need a stable environment that is not only free, but as close to the actual Oracle supported operating system as possible. While CentOS is not the only project performing the same functionality, I tend to stick with it as it is stable and reacts fast with regards to updates by Red Hat.

Download CentOS

Use the links below to download CentOS 5.5 for either x86 or x86_64 depending on your hardware architecture.

32-bit (x86) Installations

CentOS-5.5-i386-bin-1of7.iso (623 MB)
CentOS-5.5-i386-bin-2of7.iso (621 MB)
CentOS-5.5-i386-bin-3of7.iso (630 MB)
CentOS-5.5-i386-bin-4of7.iso (619 MB)
CentOS-5.5-i386-bin-5of7.iso (629 MB)
CentOS-5.5-i386-bin-6of7.iso (637 MB)
CentOS-5.5-i386-bin-7of7.iso (231 MB)

Note: If the Linux RAC nodes have a DVD installed, you may find it more convenient to make use of the single DVD image (requires BitTorrent).

CentOS-5.5-i386-bin-DVD.torrent (3.9 GB)

64-bit (x86_64) Installations

CentOS-5.5-x86_64-bin-1of8.iso (623 MB)
CentOS-5.5-x86_64-bin-2of8.iso (587 MB)
CentOS-5.5-x86_64-bin-3of8.iso (634 MB)
CentOS-5.5-x86_64-bin-4of8.iso (633 MB)
CentOS-5.5-x86_64-bin-5of8.iso (634 MB)
CentOS-5.5-x86_64-bin-6of8.iso (627 MB)
CentOS-5.5-x86_64-bin-7of8.iso (624 MB)
CentOS-5.5-x86_64-bin-8of8.iso (242 MB)

Note: If the Linux RAC nodes have a DVD installed, you may find it more convenient to make use of the two DVD images (requires BitTorrent).

CentOS-5.5-x86_64-bin-DVD.torrent (4.7 GB)
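Before burning the images, it is worth verifying the downloads. CentOS publishes md5sum/sha1sum files alongside the ISOs; the filename below is an example only and should match whichever image you downloaded.

sha1sum CentOS-5.5-x86_64-bin-1of8.iso    # compare against the published checksum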

Burn Binary Image to CD/DVD

If you are downloading the above ISO files to a MS Windows machine, there are many options for burning these images (ISO files) to a CD. You may already be familiar with and have the proper software to burn images to CD. If you are not familiar with this process and do not have the required software to burn images to CD, here are just three of the many software packages that can be used.

InfraRecorder
UltraISO
Magic ISO Maker

Install CentOS

After downloading and burning the CentOS images (ISO files) to CD/DVD, insert CentOS Disk #1 into the first server (racnode1 in this example), power it on, and answer the installation screen prompts as noted below. After completing the Linux installation on the first node, perform the same Linux installation on the second node while substituting the node name racnode1 for racnode2 and the different IP addresses where appropriate.

Before installing the Linux operating system on both nodes, you should have the two NIC interfaces (cards) installed.

Boot Screen

The first screen is the CentOS boot screen. At the boot: prompt, hit [Enter] to start the installation process.

Media Test

When asked to test the CD media, tab over to [Skip] and hit [Enter]. If there were any errors, the media burning software would have warned us. After several seconds, the installer should then detect the video card, monitor, and mouse. The installer then goes into GUI mode.

Welcome to CentOS

At the welcome screen, click [Next] to continue.

Language / Keyboard Selection

The next two screens prompt you for the Language and Keyboard settings. Make the appropriate selection for your configuration and click [Next] to continue.

Detect Previous Installation

If the installer detects a previous version of RHEL / CentOS, it will ask if you would like to "Install CentOS" or "Upgrade an existing Installation". Always select to Install CentOS.

Disk Partitioning Setup

Select "Remove all partitions on selected drives and create default layout" and check the option to "Review and modify partitioning layout". Click [Next] to continue.

You will then be prompted with a dialog window asking if you really want to remove all Linux partitions. Click [Yes] to acknowledge this warning.

Partitioning

The installer will then allow you to view (and modify if needed) the disk partitions it automatically selected. For most automatic layouts, the installer will choose 100MB for /boot, double the amount of RAM (systems with <= 2,048MB RAM) or an amount equal to RAM (systems with > 2,048MB RAM) for swap, and the rest going to the root (/) partition. Starting with RHEL 4, the installer will create the same disk configuration as just noted but will create them using the Logical Volume Manager (LVM). For example, it will partition the first hard drive (/dev/sda for my configuration) into two partitions: one for the /boot partition (/dev/sda1) and the remainder of the disk dedicated to a LVM named VolGroup00 (/dev/sda2). The LVM Volume Group (VolGroup00) is then partitioned into two LVM partitions, one for the root file system (/) and another for swap.

The main concern during the partitioning phase is to ensure enough swap space is allocated as required by Oracle (which is a multiple of the available RAM). The following is Oracle's minimum requirement for swap space.

Available RAM                   Swap Space Required
Between 1,024MB and 2,048MB     1.5 times the size of RAM
Between 2,049MB and 8,192MB     Equal to the size of RAM
More than 8,192MB               0.75 times the size of RAM

For the purpose of this install, I will accept all automatically preferred sizes. (Including 5,952MB for swap since I have 4GB of RAM installed.)

If for any reason the automatic layout does not configure an adequate amount of swap space, you can easily change that from this screen. To increase the size of the swap partition, [Edit] the volume group VolGroup00. This will bring up the "Edit LVM Volume Group: VolGroup00" dialog. First, [Edit] and decrease the size of the root file system (/) by the amount you want to add to the swap partition. For example, to add another 512MB to swap, you would decrease the size of the root file system by 512MB (i.e. 36,032MB - 512MB = 35,520MB). Now add the space you decreased from the root file system (512MB) to the swap partition. When completed, click [OK] on the "Edit LVM Volume Group: VolGroup00" dialog.

Once you are satisfied with the disk layout, click [Next] to continue.
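After the install completes and the system boots, the resulting layout and swap allocation can be double-checked with standard tools, for example:

df -h         # file systems, including the LVM root volume
swapon -s     # active swap devices and their sizes
free -m       # total RAM and swap in megabytes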

Boot Loader Configuration

The installer will use the GRUB boot loader by default. To use the GRUB boot loader, accept all default values and click [Next] to continue.

Network Configuration

I made sure to install both NIC interfaces (cards) in each of the Linux machines before starting the operating system installation. The installer should have successfully detected each of the network devices. Since this guide will use the traditional method of assigning static IP addresses for each of the Oracle RAC nodes, there will be several changes that need to be made to the network configuration. The settings you make here will, of course, depend on your network configuration. The most important modification that will be required for this guide is to not configure the Oracle RAC nodes with DHCP since we will be assigning static IP addresses. Additionally, you will need to configure the server with a real hostname.

First, make sure that each of the network devices are checked to "Active on boot". The installer may choose to not activate eth1 by default.

Second, [Edit] both eth0 and eth1 as follows. You may choose to use different IP addresses for eth0 and eth1 than the ones I have documented in this guide, and that is OK. Make certain to put eth1 (the interconnect) on a different subnet than eth0 (the public network).

Oracle RAC Node Network Configuration

(racnode1)

eth0
  Enable IPv4 support                                             ON
  Dynamic IP configuration (DHCP)  (select Manual configuration)  OFF
  IPv4 Address                                                    192.168.1.151
  Prefix (Netmask)                                                255.255.255.0
  Enable IPv6 support                                             OFF

eth1
  Enable IPv4 support                                             ON
  Dynamic IP configuration (DHCP)  (select Manual configuration)  OFF
  IPv4 Address                                                    192.168.2.151
  Prefix (Netmask)                                                255.255.255.0
  Enable IPv6 support                                             OFF

Continue by manually setting your hostname. I used racnode1 for the first node and racnode2 for the second. Finish this dialog off by supplying your gateway and DNS servers.

Additional DNS configuration information for both of the Oracle RAC nodes will be discussed later in this guide.
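For reference, these choices end up in the standard Red Hat network scripts. On racnode1 they should look roughly like the following sketch (HWADDR lines omitted; the gateway address shown is an example and yours will differ).

# /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE=eth0
BOOTPROTO=static
IPADDR=192.168.1.151
NETMASK=255.255.255.0
ONBOOT=yes

# /etc/sysconfig/network-scripts/ifcfg-eth1
DEVICE=eth1
BOOTPROTO=static
IPADDR=192.168.2.151
NETMASK=255.255.255.0
ONBOOT=yes

# /etc/sysconfig/network (hostname and default gateway; gateway is an example)
NETWORKING=yes
HOSTNAME=racnode1
GATEWAY=192.168.1.1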
Time Zone Selection

Select the appropriate time zone for your environment and click [Next] to continue.

Set Root Password

Select a root password and click [Next] to continue.

Package Installation Defaults

By default, CentOS installs most of the software required for a typical server. There are several other packages (RPMs), however, that are required to successfully install the Oracle software. The installer includes a "Customize software" selection that allows the addition of RPM groupings such as "Development Libraries" or "Legacy Library Support". The addition of such RPM groupings is not an issue. De-selecting any "default RPM" groupings or individual RPMs, however, can result in failed Oracle Grid Infrastructure and Oracle RAC installation attempts.

For the purpose of this article, select the radio button "Customize now" and click [Next] to continue.

This is where you pick the packages to install. Most of the packages required for the Oracle software are grouped into "Package Groups" (i.e. Application > Editors). Since these nodes will be hosting the Oracle Grid Infrastructure and Oracle RAC software, verify that at least the following package groups are selected for install. For many of the Linux package groups, not all of the packages associated with that group get selected for installation. (Note the "Optional packages" button after selecting a package group.) So although the package group gets selected for install, some of the packages required by Oracle do not get installed. In fact, there are some packages that are required by Oracle that do not belong to any of the available package groups (i.e. libaio-devel). Not to worry. A complete list of required packages for Oracle Grid Infrastructure 11g Release 2 and Oracle RAC 11g Release 2 for Linux will be provided in the next section. These packages will need to be manually installed from the CentOS CDs after the operating system install. For now, install the following package groups.

Desktop Environments
  GNOME Desktop Environment

Applications
  Editors
  Graphical Internet
  Text-based Internet

Development
  Development Libraries
  Development Tools
  Legacy Software Development

Servers
  Server Configuration Tools

Base System
  Administration Tools
  Base
  Java
  Legacy Software Support
  System Tools
  X Window System

In addition to the above packages, select any additional packages you wish to install for this node, keeping in mind to NOT de-select any of the "default" RPM packages. After selecting the packages to install, click [Next] to continue.

About to Install

This screen is basically a confirmation screen. Click [Next] to start the installation. If you are installing CentOS using CDs, you will be asked to switch CDs during the installation process depending on which packages you selected.

Congratulations

And that's it. You have successfully installed Linux on the first node (racnode1). The installer will eject the CD/DVD from the CD-ROM drive. Take out the CD/DVD and click [Reboot] to reboot the system.

Post Installation Wizard Welcome Screen

When the system boots into CentOS Linux for the first time, it will prompt you with another welcome screen for the "Post Installation Wizard". The post installation wizard allows you to make final O/S configuration settings. On the "Welcome screen", click [Forward] to continue.

Firewall

On this screen, make sure to select the "Disabled" option and click [Forward] to continue.

You will be prompted with a warning dialog about not setting the firewall. When this occurs, click [Yes] to continue.

SELinux

On the SELinux screen, choose the "Disabled" option and click [Forward] to continue.

You will be prompted with a warning dialog stating that changing the SELinux setting will require rebooting the system so the entire file system can be relabeled. When this occurs, click [Yes] to acknowledge that a reboot of the system will occur after first boot (Post Installation Wizard) is completed.
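Once the node is back up, the firewall and SELinux choices made above can be confirmed from the command line, for example:

service iptables status       # should report that the firewall is not running
chkconfig --list iptables     # all runlevels should show "off"
getenforce                    # should return "Disabled"
grep ^SELINUX= /etc/selinux/config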

Kdump

Accept the default setting on the Kdump screen (disabled) and click [Forward] to continue.

Date and Time Settings

Adjust the date and time settings if necessary and click [Forward] to continue.

Create User

Create any additional (non-oracle) operating system user accounts if desired and click [Forward] to continue. For the purpose of this article, I will not be creating any additional operating system accounts. I will be creating the "grid" and "oracle" user accounts later in this guide.

If you chose not to define any additional operating system user accounts, click [Continue] to acknowledge the warning dialog.

Sound Card

This screen will only appear if the wizard detects a sound card. On the sound card screen click [Forward] to continue.

Additional CDs

On the "Additional CDs" screen click [Finish] to continue.

Reboot System

Given we changed the SELinux option to "Disabled", we are prompted to reboot the system. Click [OK] to reboot the system for normal use.

Login Screen

After rebooting the machine, you are presented with the login screen. Log in using the "root" user account and the password you provided during the installation.

Perform the same installation on the second node

After completing the Linux installation on the first node, repeat the above steps for the second node (racnode2). When configuring the machine name and networking, ensure to configure the proper values. For my installation, this is what I configured for racnode2.

First, make sure that each of the network devices are checked to "Active on boot". The installer may choose to not activate eth1 by default.

Second, [Edit] both eth0 and eth1 as follows. You may choose to use different IP addresses for eth0 and eth1 than the ones I have documented in this guide, and that is OK. Make certain to put eth1 (the interconnect) on a different subnet than eth0 (the public network).

Oracle RAC Node Network Configuration

(racnode2)

eth0
  Enable IPv4 support                                             ON
  Dynamic IP configuration (DHCP)  (select Manual configuration)  OFF
  IPv4 Address                                                    192.168.1.152
  Prefix (Netmask)                                                255.255.255.0
  Enable IPv6 support                                             OFF

eth1
  Enable IPv4 support                                             ON
  Dynamic IP configuration (DHCP)  (select Manual configuration)  OFF
  IPv4 Address                                                    192.168.2.152
  Prefix (Netmask)                                                255.255.255.0
  Enable IPv6 support                                             OFF

Continue by manually setting your hostname. I used racnode2 for the second node. Finish this dialog off by supplying your gateway and DNS servers.

Perform the same Linux installation on racnode2.

Install Required Linux Packages for Oracle RAC
Install the following required Linux packages on both Oracle RAC nodes in the cluster.

After installing the Linux O/S, the next step is to verify and install all packages (RPMs) required by both Oracle Clusterware and Oracle RAC. The Oracle Universal Installer (OUI) performs checks on your machine during installation to verify that it meets the appropriate operating system package requirements. To ensure that these checks complete successfully, verify the software requirements documented in this section before starting the Oracle installs.

Although many of the required packages for Oracle were installed during the Linux installation, several will be missing either because they were considered optional within the package group or simply didn't exist in any package group.

The packages listed in this section (or later versions) are required for Oracle Grid Infrastructure 11g Release 2 and Oracle RAC 11g Release 2 running on the Red Hat Enterprise Linux 5 or CentOS 5 platform.

32-bit (x86) Installations

binutils-2.17.50.0.6
compat-libstdc++-33-3.2.3
elfutils-libelf-0.125
elfutils-libelf-devel-0.125
elfutils-libelf-devel-static-0.125
gcc-4.1.2
gcc-c++-4.1.2
glibc-2.5-24
glibc-common-2.5
glibc-devel-2.5
glibc-headers-2.5
kernel-headers-2.6.18
ksh-20060214
libaio-0.3.106
libaio-devel-0.3.106
libgcc-4.1.2
libgomp-4.1.2
libstdc++-4.1.2
libstdc++-devel-4.1.2
make-3.81
pdksh-5.2.14
sysstat-7.0.2
unixODBC-2.2.11
unixODBC-devel-2.2.11

Each of the packages listed above can be found on CD #1, CD #2, CD #3, and CD #4 of the CentOS 5.5 for x86 CDs. While it is possible to query each individual package to determine which ones are missing and need to be installed, an easier method is to run the rpm -Uvh PackageName command from the four CDs as follows. For packages that already exist and are up to date, the RPM command will simply ignore the install and print a warning message to the console that the package is already installed.

# From CentOS 5.5 (x86)- [CD #1]


mkdir -p /media/cdrom
mount -r /dev/cdrom /media/cdrom
cd /media/cdrom/CentOS
rpm -Uvh binutils-2.*
rpm -Uvh elfutils-libelf-0.*
rpm -Uvh glibc-2.*
rpm -Uvh glibc-common-2.*
rpm -Uvh kernel-headers-2.*
rpm -Uvh ksh-2*
rpm -Uvh libaio-0.*
rpm -Uvh libgcc-4.*
rpm -Uvh libstdc++-4.*
rpm -Uvh make-3.*
cd /
eject

# From CentOS 5.5 (x86) - [CD #2]


mount -r /dev/cdrom /media/cdrom
cd /media/cdrom/CentOS
rpm -Uvh libgomp-4.*
rpm -Uvh unixODBC-2.*
cd /
eject

# From CentOS 5.5 (x86) - [CD #3]


mount -r /dev/cdrom /media/cdrom
cd /media/cdrom/CentOS
rpm -Uvh compat-libstdc++-33*
rpm -Uvh elfutils-libelf-devel-*
rpm -Uvh gcc-4.*
rpm -Uvh gcc-c++-4.*
rpm -Uvh glibc-devel-2.*
rpm -Uvh glibc-headers-2.*
rpm -Uvh libaio-devel-0.*
rpm -Uvh libstdc++-devel-4.*
rpm -Uvh pdksh-5.*
rpm -Uvh unixODBC-devel-2.*
cd /
eject

# From CentOS 5.5 (x86) - [CD #4]


mount -r /dev/cdrom /media/cdrom
cd /media/cdrom/CentOS
rpm -Uvh sysstat-7.*
cd /
eject

--------------------------------------------------------------------------------------

# From CentOS 5.5 (x86)- [DVD #1]


mkdir -p /media/cdrom
mount -r /dev/cdrom /media/cdrom
cd /media/cdrom/CentOS
rpm -Uvh binutils-2.*
rpm -Uvh elfutils-libelf-0.*
rpm -Uvh glibc-2.*
rpm -Uvh glibc-common-2.*
rpm -Uvh kernel-headers-2.*
rpm -Uvh ksh-2*
rpm -Uvh libaio-0.*
rpm -Uvh libgcc-4.*
rpm -Uvh libstdc++-4.*
rpm -Uvh make-3.*
rpm -Uvh libgomp-4.*
rpm -Uvh unixODBC-2.*
rpm -Uvh compat-libstdc++-33*
rpm -Uvh elfutils-libelf-devel-*
rpm -Uvh gcc-4.*
rpm -Uvh gcc-c++-4.*
rpm -Uvh glibc-devel-2.*
rpm -Uvh glibc-headers-2.*
rpm -Uvh libaio-devel-0.*
rpm -Uvh libstdc++-devel-4.*
rpm -Uvh pdksh-5.*
rpm -Uvh unixODBC-devel-2.*
rpm -Uvh sysstat-7.*
cd /
eject

64-bit (x86_64) Installations

binutils-2.17.50.0.6
compat-libstdc++-33-3.2.3
compat-libstdc++-33-3.2.3 (32 bit)
elfutils-libelf-0.125
elfutils-libelf-devel-0.125
elfutils-libelf-devel-static-0.125
gcc-4.1.2
gcc-c++-4.1.2
glibc-2.5-24
glibc-2.5-24 (32 bit)
glibc-common-2.5
glibc-devel-2.5
glibc-devel-2.5 (32 bit)
glibc-headers-2.5
ksh-20060214
libaio-0.3.106
libaio-0.3.106 (32 bit)
libaio-devel-0.3.106
libaio-devel-0.3.106 (32 bit)
libgcc-4.1.2
libgcc-4.1.2 (32 bit)
libstdc++-4.1.2
libstdc++-4.1.2 (32 bit)
libstdc++-devel-4.1.2
make-3.81
pdksh-5.2.14
sysstat-7.0.2
unixODBC-2.2.11
unixODBC-2.2.11 (32 bit)
unixODBC-devel-2.2.11
unixODBC-devel-2.2.11 (32 bit)

Each of the packages listed above can be found on CD #1, CD #3, CD #4, and CD #5 of the CentOS 5.5 for x86_64 CDs. While it is possible to query each individual package to determine which ones are missing and need to be installed, an easier method is to run the rpm -Uvh PackageName command from the four CDs as follows. For packages that already exist and are up to date, the RPM command will simply ignore the install and print a warning message to the console that the package is already installed.

# From CentOS 5.5 (x86_64)- [CD #1]


mkdir -p /media/cdrom
mount -r /dev/cdrom /media/cdrom
cd /media/cdrom/CentOS
rpm -Uvh binutils-2.*
rpm -Uvh elfutils-libelf-0.*
rpm -Uvh glibc-2.*
rpm -Uvh glibc-common-2.*
rpm -Uvh ksh-2*
rpm -Uvh libaio-0.*
rpm -Uvh libgcc-4.*
rpm -Uvh libstdc++-4.*
rpm -Uvh make-3.*
cd /
eject

# From CentOS 5.5 (x86_64) - [CD #3]


mount -r /dev/cdrom /media/cdrom
cd /media/cdrom/CentOS
rpm -Uvh elfutils-libelf-devel-*
rpm -Uvh gcc-4.*
rpm -Uvh gcc-c++-4.*
rpm -Uvh glibc-devel-2.*
rpm -Uvh glibc-headers-2.*
rpm -Uvh libstdc++-devel-4.*
rpm -Uvh unixODBC-2.*
cd /
eject

# From CentOS 5.5 (x86_64) - [CD #4]


mount -r /dev/cdrom /media/cdrom
cd /media/cdrom/CentOS
rpm -Uvh compat-libstdc++-33*
rpm -Uvh libaio-devel-0.*
rpm -Uvh pdksh-5.*
rpm -Uvh unixODBC-devel-2.*
cd /
eject

# From CentOS 5.5 (x86_64) - [CD #5]


mount -r /dev/cdrom /media/cdrom
cd /media/cdrom/CentOS
rpm -Uvh sysstat-7.*
cd /
eject

--------------------------------------------------------------------------------------

# From CentOS 5.5 (x86_64)- [DVD #1]


mkdir -p /media/cdrom
mount -r /dev/cdrom /media/cdrom
cd /media/cdrom/CentOS
rpm -Uvh binutils-2.*
rpm -Uvh elfutils-libelf-0.*
rpm -Uvh glibc-2.*
rpm -Uvh glibc-common-2.*
rpm -Uvh ksh-2*
rpm -Uvh libaio-0.*
rpm -Uvh libgcc-4.*
rpm -Uvh libstdc++-4.*
rpm -Uvh make-3.*
rpm -Uvh elfutils-libelf-devel-*
rpm -Uvh gcc-4.*
rpm -Uvh gcc-c++-4.*
rpm -Uvh glibc-devel-2.*
rpm -Uvh glibc-headers-2.*
rpm -Uvh libstdc++-devel-4.*
rpm -Uvh unixODBC-2.*
rpm -Uvh compat-libstdc++-33*
rpm -Uvh libaio-devel-0.*
rpm -Uvh pdksh-5.*
rpm -Uvh unixODBC-devel-2.*
rpm -Uvh sysstat-7.*
cd /
eject
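A quick way to confirm on each RAC node that everything is now in place is to query the full package list in one pass; any package reported as "not installed" should be added before continuing (64-bit example shown).

rpm -q binutils compat-libstdc++-33 elfutils-libelf elfutils-libelf-devel \
      gcc gcc-c++ glibc glibc-common glibc-devel glibc-headers ksh libaio \
      libaio-devel libgcc libstdc++ libstdc++-devel make pdksh sysstat \
      unixODBC unixODBC-devel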

Install Openfiler
Perform the following installation on the network storage server (openfiler1).

With Linux installed on both Oracle RAC nodes, the next step is to install the Openfiler software to the network storage server (openfiler1). Later in this guide, the network storage server will be configured as an iSCSI storage device for all Oracle Clusterware and Oracle RAC shared storage requirements.

Powered by rPath Linux, Openfiler is a free browser-based network storage management utility that delivers file-based Network Attached Storage (NAS) and block-based Storage Area Networking (SAN) in a single framework. The entire software stack interfaces with open source applications such as Apache, Samba, LVM2, ext3, Linux NFS and iSCSI Enterprise Target. Openfiler combines these ubiquitous technologies into a small, easy to manage solution fronted by a powerful web-based management interface.

Openfiler supports CIFS, NFS, HTTP/DAV, and FTP; however, we will only be making use of its iSCSI capabilities to implement an inexpensive SAN for the shared storage components required by Oracle RAC 11g. The rPath Linux operating system and Openfiler application will be installed on one internal SATA disk. A second internal 73GB 15K SCSI hard disk will be configured as a single volume group that will be used for all shared disk storage requirements. The Openfiler server will be configured to use this volume group for iSCSI based storage and will be used in our Oracle RAC 11g configuration to store the shared files required by Oracle Clusterware and the Oracle RAC database.

Please be aware that any type of hard disk (internal or external) should work for the shared database storage as long as it can be recognized by the network storage server (Openfiler) and has adequate space. For example, I could have made an extra partition on the 500GB internal SATA disk for the iSCSI target, but decided to make use of the faster SCSI disk for this example.

TolearnmoreaboutOpenfiler,pleasevisittheirwebsiteathttp://www.openfiler.com/.

Download Openfiler

Use the links below to download Openfiler NAS/SAN Appliance, version 2.3 (Final Release) for either x86 or x86_64 depending on your hardware architecture. This
guide uses x86_64. After downloading Openfiler, you will then need to burn the ISO image to CD.

32-bit (x86) Installations

openfiler-2.3-x86-disc1.iso (322 MB)

64-bit (x86_64) Installations

openfiler-2.3-x86_64-disc1.iso (336 MB)

If you are downloading the above ISO file to a MS Windows machine, there are many options for burning these images (ISO files) to a CD. You may already be
familiar with and have the proper software to burn images to CD. If you are not familiar with this process and do not have the required software to burn images to
CD, here are just three of the many software packages that can be used.

InfraRecorder
UltraISO
MagicISOMaker
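Whichever burning software you choose, it is worth verifying the integrity of the downloaded ISO before burning it. The following is a minimal sketch run from a Linux machine; compare the output against the checksum published on the Openfiler download page.

# Verify the downloaded ISO against its published checksum before burning
$ md5sum openfiler-2.3-x86_64-disc1.iso
$ sha1sum openfiler-2.3-x86_64-disc1.iso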

Install Openfiler

This section provides a summary of the screens used to install the Openfiler software. For the purpose of this article, I opted to install Openfiler with all default
options. The only manual change required was for configuring the local network settings.

Once the install has completed, the server will reboot to make sure all required components, services and drivers are started and recognized. After the reboot, any
external hard drives (if connected) will be discovered by the Openfiler server.

For more detailed installation instructions, please visit http://www.openfiler.com/learn/. I would suggest, however, that the instructions I have provided below be
used for this Oracle RAC 11g configuration.

Before installing the Openfiler software to the network storage server, you should have both NIC interfaces (cards) installed and any external hard drives connected
and turned on (if you will be using external hard drives).

After downloading and burning the Openfiler ISO image file to CD, insert the CD into the network storage server (openfiler1 in this example), power it on, and
answer the installation screen prompts as noted below.

Boot Screen

The first screen is the Openfiler boot screen. At the boot: prompt, hit [Enter] to start the installation process.

Media Test

When asked to test the CD media, tab over to [Skip] and hit [Enter]. If there were any errors, the media burning software would have warned us. After several
seconds, the installer should then detect the video card, monitor, and mouse. The installer then goes into GUI mode.

Welcome to Openfiler NSA

At the welcome screen, click [Next] to continue.

Keyboard Configuration

The next screen prompts you for the keyboard settings. Make the appropriate selection for your configuration.

Disk Partitioning Setup

The next screen asks whether to perform disk partitioning using "Automatic Partitioning" or "Manual Partitioning with Disk Druid". Although the official Openfiler
documentation suggests using Manual Partitioning, I opted to use "Automatic Partitioning" given the simplicity of my example configuration.

Select [Automatically partition] and click [Next] to continue.

Automatic Partitioning

If there were a previous installation of Linux on this machine, the next screen will ask if you want to "remove" or "keep" old partitions. Select the option to [Remove
all partitions on this system]. For my example configuration, I selected ONLY the 500GB SATA internal hard drive [sda] for the operating system and Openfiler
application installation. I de-selected the 73GB SCSI internal hard drive since this disk will be used exclusively later in this guide to create a single "Volume Group"
(racdbvg) that will be used for all iSCSI based shared disk storage requirements for Oracle Clusterware and Oracle RAC.

I also keep the checkbox [Review (and modify if needed) the partitions created] selected. Click [Next] to continue.

You will then be prompted with a dialog window asking if you really want to remove all partitions. Click [Yes] to acknowledge this warning.

Partitioning

The installer will then allow you to view (and modify if needed) the disk partitions it automatically chose for the hard disks selected in the previous screen. In almost all
cases, the installer will choose 100MB for /boot, an adequate amount of swap, and the rest going to the root (/) partition for that disk (or disks). In this example, I
am satisfied with the installer's recommended partitioning for /dev/sda.

The installer will also show any other internal hard disks it discovered. For my example configuration, the installer found the 73GB SCSI internal hard drive as
/dev/sdb. For now, I will "Delete" any and all partitions on this drive (there was only one, /dev/sdb1). Later in this guide, I will create the required partition for this
particular hard disk.

Network Configuration

I made sure to install both NIC interfaces (cards) in the network storage server before starting the Openfiler installation. The installer should have successfully
detected each of the network devices.

First, make sure that each of the network devices are checked to [Active on boot]. The installer may choose to not activate eth1 by default.

Second, [Edit] both eth0 and eth1 as follows. You may choose to use different IP addresses for both eth0 and eth1 and that is OK. You must, however, configure
eth1 (the storage network) to be on the same subnet you configured for eth1 on racnode1 and racnode2.

eth0

Configure using DHCP    OFF
Activate on boot        ON
IP Address              192.168.1.195
Netmask                 255.255.255.0

eth1

Configure using DHCP    OFF
Activate on boot        ON
IP Address              192.168.2.195
Netmask                 255.255.255.0

Continue by setting your hostname manually. I used a hostname of "openfiler1". Finish this dialog off by supplying your gateway and DNS servers.

Time Zone Selection

The next screen allows you to configure your time zone information. Make the appropriate selection for your location.

Set Root Password

Select a root password and click [Next] to continue.

About to Install

This screen is basically a confirmation screen. Click [Next] to start the installation.

Congratulations

And that's it. You have successfully installed Openfiler on the network storage server. The installer will eject the CD from the CD-ROM drive. Take out the CD and
click [Reboot] to reboot the system.

If everything was successful after the reboot, you should now be presented with a text login screen and the URL to use for administering the Openfiler server.

After installing Openfiler, verify you can log in to the machine using the root user account
and the password you supplied during installation. Do not attempt to log in to the console or
SSH using the built-in openfiler user account. Attempting to do so will result in the
following error message.

openfiler1 login: openfiler


Password: password
This interface has not been implemented yet.

Only attempt to log in to the console or SSH using the root user account.

Network Configuration
Perform the following network configuration tasks on both Oracle RAC nodes in the cluster.

Although we configured several of the network settings during the Linux installation, it is important to not skip this section as it contains critical steps which include
configuring DNS and verifying you have the networking hardware and Internet Protocol (IP) addresses required for an Oracle Grid Infrastructure for a cluster
installation.

Network Hardware Requirements

The following is a list of hardware requirements for network configuration.

Each Oracle RAC node must have at least two network adapters or network interface cards (NICs): one for the public network interface and one for the
private network interface (the interconnect). To use multiple NICs for the public network or for the private network, Oracle recommends that you use NIC
bonding. Use separate bonding for the public and private networks (i.e. bond0 for the public network and bond1 for the private network), because during
installation each interface is defined as a public or private interface. NIC bonding is not covered in this article.

The public interface names associated with the network adapters for each network must be the same on all nodes, and the private interface names
associated with the network adapters should be the same on all nodes.

For example, with our two-node cluster, you cannot configure network adapters on racnode1 with eth0 as the public interface, but on racnode2 have eth1 as
the public interface. Public interface names must be the same, so you must configure eth0 as public on both nodes. You should configure the private
interfaces on the same network adapters as well. If eth1 is the private interface for racnode1, then eth1 must be the private interface for racnode2.

For the public network, each network adapter must support TCP/IP.

For the private network, the interconnect must support the user datagram protocol (UDP) using high-speed network adapters and switches that support
TCP/IP (minimum requirement 1 Gigabit Ethernet).

UDP is the default interconnect protocol for Oracle RAC, and TCP is the interconnect protocol for Oracle Clusterware. You must use a switch for the
interconnect. Oracle recommends that you use a dedicated switch.

Oracle does not support token-rings or crossover cables for the interconnect.

For the private network, the endpoints of all designated interconnect interfaces must be completely reachable on the network. There should be no node that is
not connected to every private network interface. You can test if an interconnect interface is reachable using ping.

During installation of Oracle Grid Infrastructure, you are asked to identify the planned use for each network interface that OUI detects on your cluster node.
You must identify each interface as a public interface, a private interface, or not used, and you must use the same private interfaces for both Oracle
Clusterware and Oracle RAC.

You can bond separate interfaces to a common interface to provide redundancy, in case of a NIC failure, but Oracle recommends that you do not create
separate interfaces for Oracle Clusterware and Oracle RAC. If you use more than one NIC for the private interconnect, then Oracle recommends that you use
NIC bonding. Note that multiple private interfaces provide load balancing but not failover, unless bonded.

Starting with Oracle Clusterware 11g Release 2, you no longer need to provide a private name or IP address for the interconnect. IP addresses on the subnet
you identify as private are assigned as private IP addresses for cluster member nodes. You do not need to configure these addresses manually in a hosts
directory. If you want name resolution for the interconnect, then you can configure private IP names in the hosts file or the DNS. However, Oracle
Clusterware assigns interconnect addresses on the interface defined during installation as the private interface (eth1, for example), and to the subnet used for
the private subnet.

In practice, and for the purpose of this guide, I will continue to include a private name and IP address on each node for the RAC interconnect. It provides self-
documentation and a set of end points on the private network I can use for troubleshooting purposes.

192.168.2.151 racnode1-priv
192.168.2.152 racnode2-priv

In a production environment that uses iSCSI for network storage, it is highly recommended to configure a redundant third network interface (eth2, for
example) for that storage traffic using a TCP/IP Offload Engine (TOE) card. For the sake of brevity, this article will configure the iSCSI network storage traffic
on the same network as the RAC private interconnect (eth1). Combining the iSCSI storage traffic and cache fusion traffic for Oracle RAC on the same
network interface works great for an inexpensive test system (like the one described in this article) but should never be considered for production.

The basic idea of a TOE is to offload the processing of TCP/IP protocols from the host processor to the hardware on the adapter or in the system. A TOE is
often embedded in a network interface card (NIC) or a host bus adapter (HBA) and used to reduce the amount of TCP/IP processing handled by the CPU and
server I/O subsystem and improve overall performance.

Oracle RAC Network Configuration

For this guide, I opted not to use Grid Naming Service (GNS) for assigning IP addresses to each Oracle RAC node but instead will manually assign them in DNS
and hosts files. I often refer to this traditional method of manually assigning IP addresses as the "DNS method" given the fact that all IP addresses should be
resolved using DNS.

When using the DNS method for assigning IP addresses, Oracle recommends that all static IP addresses be manually configured in DNS before starting the Oracle
Grid Infrastructure installation. This would include the public IP address for the node, the RAC interconnect, virtual IP address (VIP), and new to 11g Release 2, the
Single Client Access Name (SCAN) virtual IP.

Note that Oracle requires you to define the SCAN domain address (racnode-cluster-scan
in this example) to resolve on your DNS to one of three possible IP addresses in order to
successfully install Oracle Grid Infrastructure! Defining the SCAN domain address only in
the hosts files for each Oracle RAC node, and not in DNS, will cause the "Oracle Cluster
Verification Utility" to fail with an [INS-20802] error during the Oracle Grid Infrastructure
install.

The following table displays the network configuration that will be used to build the example two-node Oracle RAC described in this guide. Note that every IP
address will be registered in DNS and the hosts file for each Oracle RAC node with the exception of the SCAN virtual IP. The SCAN virtual IP will only be
registered in DNS.

Example Two-Node Oracle RAC Network Configuration

Identity         Name                   Type      IP Address       Resolved By

Node 1 Public    racnode1               Public    192.168.1.151    DNS and hosts file
Node 1 Private   racnode1-priv          Private   192.168.2.151    DNS and hosts file
Node 1 VIP       racnode1-vip           Virtual   192.168.1.251    DNS and hosts file
Node 2 Public    racnode2               Public    192.168.1.152    DNS and hosts file
Node 2 Private   racnode2-priv          Private   192.168.2.152    DNS and hosts file
Node 2 VIP       racnode2-vip           Virtual   192.168.1.252    DNS and hosts file
SCAN VIP 1       racnode-cluster-scan   Virtual   192.168.1.187    DNS
SCAN VIP 2       racnode-cluster-scan   Virtual   192.168.1.188    DNS
SCAN VIP 3       racnode-cluster-scan   Virtual   192.168.1.189    DNS

DNS Configuration

The example Oracle RAC configuration described in this guide will use the traditional method of manually assigning static IP addresses and therefore requires a
DNS server. If you do not have access to a DNS server, this section includes detailed instructions for installing a minimal DNS server on the Openfiler network
storage server.

Use an Existing DNS

If you already have access to a DNS server, simply add the appropriate A and PTR records for Oracle RAC to your DNS and skip ahead to the next section
"Update /etc/resolv.conf File". Note that in the example below, I am using the domain name idevelopment.info. Please feel free to substitute your own domain
name if needed.

; Forward Lookup Zone


racnode1 IN A 192.168.1.151
racnode2 IN A 192.168.1.152
racnode1-priv IN A 192.168.2.151
racnode2-priv IN A 192.168.2.152
racnode1-vip IN A 192.168.1.251
racnode2-vip IN A 192.168.1.252
openfiler1 IN A 192.168.1.195
openfiler1-priv IN A 192.168.2.195
racnode-cluster-scan IN A 192.168.1.187
racnode-cluster-scan IN A 192.168.1.188
racnode-cluster-scan IN A 192.168.1.189

; Reverse Lookup Zone


151 IN PTR racnode1.idevelopment.info.
152 IN PTR racnode2.idevelopment.info.
251 IN PTR racnode1-vip.idevelopment.info.
252 IN PTR racnode2-vip.idevelopment.info.
187 IN PTR racnode-cluster-scan.idevelopment.info.
188 IN PTR racnode-cluster-scan.idevelopment.info.
189 IN PTR racnode-cluster-scan.idevelopment.info.
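After adding the records, a quick sanity check from any client that can reach your DNS server is worthwhile. The sketch below assumes the dig utility (part of bind-utils) is installed; the order of the three SCAN addresses returned may vary.

$ dig +short racnode1.idevelopment.info
192.168.1.151

$ dig +short racnode-cluster-scan.idevelopment.info
192.168.1.187
192.168.1.188
192.168.1.189

$ dig +short -x 192.168.1.151
racnode1.idevelopment.info.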

Install DNS on Openfiler

Installing DNS on the Openfiler network storage server is a trivial task. To install or update packages on Openfiler, use the command-line tool conary, developed by
rPath.

To learn more about the different options and parameters that can be used with the conary utility, review the Conary QuickReference guide.

To install packages on Openfiler you need access to the Internet!

To install DNS on the Openfiler server, run the following command as the root user account.

[root@openfiler1 ~]# conary update bind:runtime


Including extra troves to resolve dependencies:
bind:lib=9.4.3_P5-1.1-1 info-named:user=1-1-0.1
Applying update job 1 of 2:
Install info-named(:user)=1-1-0.1
Applying update job 2 of 2:
Update bind(:lib) (9.3.4_P1-0.5-1[ipv6,~!pie,ssl] -> 9.4.3_P5-1.1-1)
Update bind-utils(:doc :runtime) (9.3.4_P1-0.5-1[ipv6,~!pie,ssl] -> 9.4.3_P5-1.1-1)
Install bind:runtime=9.4.3_P5-1.1-1

Verify the files installed by the DNS bind package.

[root@openfiler1 ~]# conary q bind --lsl


lrwxrwxrwx 1 root root 16 2009-07-29 17:03:02 UTC /usr/lib/libbind.so.4 -> libbind.so.4.1.2
-rwxr-xr-x 1 root root 294260 2010-03-11 00:48:52 UTC /usr/lib/libbind.so.4.1.2
lrwxrwxrwx 1 root root 18 2009-07-29 17:03:00 UTC /usr/lib/libbind9.so.30 -> libbind9.so.30.1.1
-rwxr-xr-x 1 root root 37404 2010-03-11 00:48:52 UTC /usr/lib/libbind9.so.30.1.1
lrwxrwxrwx 1 root root 16 2010-03-11 00:14:00 UTC /usr/lib/libdns.so.38 -> libdns.so.38.0.0
-rwxr-xr-x 1 root root 1421820 2010-03-11 00:48:52 UTC /usr/lib/libdns.so.38.0.0
lrwxrwxrwx 1 root root 16 2009-07-29 17:02:58 UTC /usr/lib/libisc.so.36 -> libisc.so.36.0.2
-rwxr-xr-x 1 root root 308260 2010-03-11 00:48:52 UTC /usr/lib/libisc.so.36.0.2
lrwxrwxrwx 1 root root 18 2007-03-09 17:26:37 UTC /usr/lib/libisccc.so.30 -> libisccc.so.30.0.1
-rwxr-xr-x 1 root root 28112 2010-03-11 00:48:51 UTC /usr/lib/libisccc.so.30.0.1
lrwxrwxrwx 1 root root 19 2009-07-29 17:03:00 UTC /usr/lib/libisccfg.so.30 -> libisccfg.so.30.0.5
-rwxr-xr-x 1 root root 71428 2010-03-11 00:48:52 UTC /usr/lib/libisccfg.so.30.0.5
lrwxrwxrwx 1 root root 18 2009-07-29 17:03:01 UTC /usr/lib/liblwres.so.30 -> liblwres.so.30.0.6
-rwxr-xr-x 1 root root 64360 2010-03-11 00:48:51 UTC /usr/lib/liblwres.so.30.0.6
-rwxr-xr-x 1 root root 2643 2008-02-22 21:44:05 UTC /etc/init.d/named
-rw-r--r-- 1 root root 163 2004-07-07 19:20:10 UTC /etc/logrotate.d/named
-rw-r----- 1 root root 1435 2004-06-18 04:39:39 UTC /etc/rndc.conf
-rw-r----- 1 root named 65 2005-09-24 20:40:23 UTC /etc/rndc.key
-rw-r--r-- 1 root root 1561 2006-07-20 18:40:14 UTC /etc/sysconfig/named
drwxr-xr-x 1 root named 0 2007-12-16 01:01:35 UTC /srv/named
drwxr-xr-x 1 named named 0 2007-12-16 01:01:35 UTC /srv/named/data
drwxr-xr-x 1 named named 0 2007-12-16 01:01:35 UTC /srv/named/slaves
-rwxr-xr-x 1 root root 2927 2010-03-11 00:14:02 UTC /usr/bin/isc-config.sh
-rwxr-xr-x 1 root root 3168 2010-03-11 00:48:51 UTC /usr/sbin/dns-keygen
-rwxr-xr-x 1 root root 21416 2010-03-11 00:48:51 UTC /usr/sbin/dnssec-keygen
-rwxr-xr-x 1 root root 53412 2010-03-11 00:48:51 UTC /usr/sbin/dnssec-signzone
-rwxr-xr-x 1 root root 379912 2010-03-12 14:07:50 UTC /usr/sbin/lwresd
-rwxr-xr-x 1 root root 379912 2010-03-12 14:07:50 UTC /usr/sbin/named
-rwxr-xr-x 1 root root 7378 2006-10-11 02:33:29 UTC /usr/sbin/named-bootconf
-rwxr-xr-x 1 root root 20496 2010-03-11 00:48:51 UTC /usr/sbin/named-checkconf
-rwxr-xr-x 1 root root 19088 2010-03-11 00:48:51 UTC /usr/sbin/named-checkzone
lrwxrwxrwx 1 root root 15 2007-03-09 17:26:40 UTC /usr/sbin/named-compilezone -> named-checkzone
-rwxr-xr-x 1 root root 24032 2010-03-11 00:48:51 UTC /usr/sbin/rndc
-rwxr-xr-x 1 root root 11708 2010-03-11 00:48:51 UTC /usr/sbin/rndc-confgen
drwxr-xr-x 1 named named 0 2007-12-16 01:01:35 UTC /var/run/named

Configure DNS

Configuration of the DNS server involves creating and modifying the following files.

/etc/named.conf (DNS configuration file)
/srv/named/data/idevelopment.info.zone (Forward zone definition file)
/srv/named/data/1.168.192.in-addr.arpa.zone (Reverse zone definition file)

/etc/named.conf

The first step will be to create the DNS configuration file "/etc/named.conf". The /etc/named.conf configuration file used in this example will be kept fairly simple
and only contain the necessary customizations required to run a minimal DNS.

For the purpose of this guide, I will be using the domain name idevelopment.info and the IP range "192.168.1.*" for the public network. Please feel free to
substitute your own domain name if so desired. If you do decide to use a different domain name, make certain to modify it in all of the files that are part of the
network configuration described in this section.

The DNS configuration file described below is configured to resolve the names of the servers described in this guide. This includes the two Oracle RAC nodes, the
Openfiler network storage server (which is now also a DNS server!), and several other miscellaneous nodes. In order to make sure that servers on external
networks, like those on the Internet, are resolved properly, I needed to add DNS Forwarding by defining the forwarders directive. This directive tells the DNS that
anything it can't resolve should be passed to the DNS(s) listed. For the purpose of this example, I am using my D-Link router which is configured as my gateway to
the Internet. I could just as well have used the DNS entries provided by my ISP.

The next directive defined in the options section is directory. This directive specifies where named will look for zone definition files. For example, if you skip forward
in the DNS configuration file to the "idevelopment.info" forward lookup zone, you will notice its zone definition file is "idevelopment.info.zone". The fully qualified
name for this file is derived by concatenating the directory directive and the "file" specified for that zone. For example, the fully qualified name for the forward
lookup zone definition file described below is "/srv/named/data/idevelopment.info.zone". The same rules apply for the reverse lookup zone, which in this example
would be "/srv/named/data/1.168.192.in-addr.arpa.zone".

Create the file /etc/named.conf with at least the following content.

# +-------------------------------------------------------------------+
# | /etc/named.conf |
# | |
# | DNS configuration file for Oracle RAC 11g Release 2 example |
# +-------------------------------------------------------------------+

options {

// FORWARDERS: Forward any name this DNS can't resolve to my router.


forwarders { 192.168.1.1; };

// DIRECTORY: Directory where named will look for zone files.


directory "/srv/named/data";

};

# ----------------------------------
# Forward Zone
# ----------------------------------

zone "idevelopment.info" IN {
type master;
file "idevelopment.info.zone";
allow-update { none; };
};

# ----------------------------------
# Reverse Zone
# ----------------------------------

zone "1.168.192.in-addr.arpa" IN {
type master;
file "1.168.192.in-addr.arpa.zone";
allow-update { none; };
};

/srv/named/data/idevelopment.info.zone

In the DNS configuration file above, we defined the forward and reverse zone definition files. These files will be located in the "/srv/named/data" directory.

Create and edit the file associated with your forward lookup zone (which in my case is "/srv/named/data/idevelopment.info.zone") to look like the one described
below. Take note of the three entries used to configure the SCAN name for round-robin resolution to three IP addresses.

; +-------------------------------------------------------------------+
; | /srv/named/data/idevelopment.info.zone |
; | |
; | Forward zone definition file for idevelopment.info |
; +-------------------------------------------------------------------+

$ORIGIN idevelopment.info.

$TTL 86400 ; time-to-live - (1 day)

@ IN SOA openfiler1.idevelopment.info. jhunter.idevelopment.info. (


201011021 ; serial number - (yyyymmdd+s)
7200 ; refresh - (2 hours)
300 ; retry - (5 minutes)
604800 ; expire - (1 week)
60 ; minimum - (1 minute)
)
IN NS openfiler1.idevelopment.info.
localhost IN A 127.0.0.1

; Oracle RAC Nodes


racnode1 IN A 192.168.1.151
racnode2 IN A 192.168.1.152
racnode1-priv IN A 192.168.2.151
racnode2-priv IN A 192.168.2.152
racnode1-vip IN A 192.168.1.251
racnode2-vip IN A 192.168.1.252

; Network Storage Server


openfiler1 IN A 192.168.1.195
openfiler1-priv IN A 192.168.2.195

; Single Client Access Name (SCAN) virtual IP


racnode-cluster-scan IN A 192.168.1.187
racnode-cluster-scan IN A 192.168.1.188
racnode-cluster-scan IN A 192.168.1.189

; Miscellaneous Nodes
router IN A 192.168.1.1
packmule IN A 192.168.1.105
domo IN A 192.168.1.121
switch1 IN A 192.168.1.122
oemprod IN A 192.168.1.125
accesspoint IN A 192.168.1.245

/srv/named/data/1.168.192.in-addr.arpa.zone

Next, we need to create the "/srv/named/data/1.168.192.in-addr.arpa.zone" zone definition file for public network reverse lookups.

; +-------------------------------------------------------------------+
; | /srv/named/data/1.168.192.in-addr.arpa.zone |
; | |
; | Reverse zone definition file for idevelopment.info |
; +-------------------------------------------------------------------+

$ORIGIN 1.168.192.in-addr.arpa.

$TTL 86400 ; time-to-live - (1 day)

@ IN SOA openfiler1.idevelopment.info. jhunter.idevelopment.info. (


201011021 ; serial number - (yyyymmdd+s)
7200 ; refresh - (2 hours)
300 ; retry - (5 minutes)
604800 ; expire - (1 week)
60 ; minimum - (1 minute)
)
IN NS openfiler1.idevelopment.info.

; Oracle RAC Nodes


151 IN PTR racnode1.idevelopment.info.
152 IN PTR racnode2.idevelopment.info.
251 IN PTR racnode1-vip.idevelopment.info.
252 IN PTR racnode2-vip.idevelopment.info.

; Network Storage Server


195 IN PTR openfiler1.idevelopment.info.

; Single Client Access Name (SCAN) virtual IP


187 IN PTR racnode-cluster-scan.idevelopment.info.
188 IN PTR racnode-cluster-scan.idevelopment.info.
189 IN PTR racnode-cluster-scan.idevelopment.info.

; Miscellaneous Nodes
1 IN PTR router.idevelopment.info.
105 IN PTR packmule.idevelopment.info.
121 IN PTR domo.idevelopment.info.
122 IN PTR switch1.idevelopment.info.
125 IN PTR oemprod.idevelopment.info.
245 IN PTR accesspoint.idevelopment.info.

Start the DNS Service

When the DNS configuration file and zone definition files are in place, start the DNS server by starting the "named" service.

[root@openfiler1 ~]# service named start


Starting named: [ OK ]

If named finds any problems with the DNS configuration file or zone definition files, the service will fail to start and errors will be displayed on the screen. To
troubleshoot problems with starting the named service, check the /var/log/messages file.
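The configuration and zone files can also be validated directly with the named-checkconf and named-checkzone utilities included in the bind package (both appear in the conary file listing shown earlier). Output similar to the following indicates the files are syntactically correct.

[root@openfiler1 ~]# named-checkconf /etc/named.conf

[root@openfiler1 ~]# named-checkzone idevelopment.info /srv/named/data/idevelopment.info.zone
zone idevelopment.info/IN: loaded serial 201011021
OK

[root@openfiler1 ~]# named-checkzone 1.168.192.in-addr.arpa /srv/named/data/1.168.192.in-addr.arpa.zone
zone 1.168.192.in-addr.arpa/IN: loaded serial 201011021
OK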

If named starts successfully, the entries in the /var/log/messages file should resemble the following.

...
Nov 2 21:35:49 openfiler1 named[7995]: starting BIND 9.4.3-P5 -u named
Nov 2 21:35:49 openfiler1 named[7995]: adjusted limit on open files from 1024 to 1048576
Nov 2 21:35:49 openfiler1 named[7995]: found 1 CPU, using 1 worker thread
Nov 2 21:35:49 openfiler1 named[7995]: using up to 4096 sockets
Nov 2 21:35:49 openfiler1 named[7995]: loading configuration from '/etc/named.conf'
Nov 2 21:35:49 openfiler1 named[7995]: using default UDP/IPv4 port range: [1024, 65535]
Nov 2 21:35:49 openfiler1 named[7995]: using default UDP/IPv6 port range: [1024, 65535]
Nov 2 21:35:49 openfiler1 named[7995]: listening on IPv4 interface lo, 127.0.0.1#53
Nov 2 21:35:49 openfiler1 named[7995]: listening on IPv4 interface eth0, 192.168.1.195#53
Nov 2 21:35:49 openfiler1 named[7995]: listening on IPv4 interface eth1, 192.168.2.195#53
Nov 2 21:35:49 openfiler1 named[7995]: automatic empty zone: 0.IN-ADDR.ARPA
Nov 2 21:35:49 openfiler1 named[7995]: automatic empty zone: 127.IN-ADDR.ARPA
Nov 2 21:35:49 openfiler1 named[7995]: automatic empty zone: 254.169.IN-ADDR.ARPA
Nov 2 21:35:49 openfiler1 named[7995]: automatic empty zone: 2.0.192.IN-ADDR.ARPA
Nov 2 21:35:49 openfiler1 named[7995]: automatic empty zone: 255.255.255.255.IN-ADDR.ARPA
Nov 2 21:35:49 openfiler1 named[7995]: automatic empty zone: 0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.IP6.ARPA
Nov 2 21:35:49 openfiler1 named[7995]: automatic empty zone: 1.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.IP6.ARPA
Nov 2 21:35:49 openfiler1 named[7995]: automatic empty zone: D.F.IP6.ARPA
Nov 2 21:35:49 openfiler1 named[7995]: automatic empty zone: 8.E.F.IP6.ARPA
Nov 2 21:35:49 openfiler1 named[7995]: automatic empty zone: 9.E.F.IP6.ARPA
Nov 2 21:35:49 openfiler1 named[7995]: automatic empty zone: A.E.F.IP6.ARPA
Nov 2 21:35:49 openfiler1 named[7995]: automatic empty zone: B.E.F.IP6.ARPA
Nov 2 21:35:49 openfiler1 named[7995]: command channel listening on 127.0.0.1#953
Nov 2 21:35:49 openfiler1 named[7995]: command channel listening on ::1#953
Nov 2 21:35:49 openfiler1 named[7995]: no source of entropy found
Nov 2 21:35:49 openfiler1 named[7995]: zone 1.168.192.in-addr.arpa/IN: loaded serial 201011021
Nov 2 21:35:49 openfiler1 named[7995]: zone idevelopment.info/IN: loaded serial 201011021
Nov 2 21:35:49 openfiler1 named: named startup succeeded
Nov 2 21:35:49 openfiler1 named[7995]: running
...

Configure DNS to Start Automatically

Now that the named service is running, issue the following commands to make sure this service starts automatically at boot time.

[root@openfiler1 ~]# chkconfig named on


[root@openfiler1 ~]# chkconfig named --list
named 0:off 1:off 2:on 3:on 4:on 5:on 6:off

Update "/etc/resolv.conf" File

With DNS now set up and running, the next step is to configure each server to use it for name resolution. This is accomplished by editing the "/etc/resolv.conf"
file on each server, including the two Oracle RAC nodes and the Openfiler network storage server.

Make certain the /etc/resolv.conf file contains the following entries where the IP address of the name server and domain match those of your DNS server and the
domain you have configured.

nameserver 192.168.1.195
search idevelopment.info

The second line allows you to resolve a name on this network without having to specify the fully qualified hostname.

Verify that the /etc/resolv.conf file was successfully updated on all servers in our mini-network.

[root@openfiler1 ~]# cat /etc/resolv.conf


nameserver 192.168.1.195
search idevelopment.info

[root@racnode1 ~]# cat /etc/resolv.conf


nameserver 192.168.1.195
search idevelopment.info

[root@racnode2 ~]# cat /etc/resolv.conf


nameserver 192.168.1.195
search idevelopment.info

After modifying the /etc/resolv.conf file on every server in the cluster, verify that DNS is functioning correctly by testing forward and reverse lookups using the
nslookup command-line utility. Perform tests similar to the following from each node to all other nodes in your cluster.

[root@racnode1 ~]# nslookup racnode2.idevelopment.info


Server: 192.168.1.195
Address: 192.168.1.195#53

Name: racnode2.idevelopment.info
Address: 192.168.1.152

[root@racnode1 ~]# nslookup racnode2


Server: 192.168.1.195
Address: 192.168.1.195#53

Name: racnode2.idevelopment.info
Address: 192.168.1.152

[root@racnode1 ~]# nslookup 192.168.1.152


Server: 192.168.1.195
Address: 192.168.1.195#53

152.1.168.192.in-addr.arpa name = racnode2.idevelopment.info.

[root@racnode1 ~]# nslookup racnode-cluster-scan


Server: 192.168.1.195
Address: 192.168.1.195#53

Name: racnode-cluster-scan.idevelopment.info
Address: 192.168.1.187
Name: racnode-cluster-scan.idevelopment.info
Address: 192.168.1.188
Name: racnode-cluster-scan.idevelopment.info
Address: 192.168.1.189

[root@racnode1 ~]# nslookup 192.168.1.187


Server: 192.168.1.195
Address: 192.168.1.195#53

187.1.168.192.in-addr.arpa name = racnode-cluster-scan.idevelopment.info.

Configuring Public and Private Network

In our two-node example, we need to configure the network on both Oracle RAC nodes for access to the public network as well as their private interconnect.

The easiest way to configure network settings in RHEL/CentOS is with the program "Network Configuration". Network Configuration is a GUI application that can
be started from the command line as the root user account as follows.

[root@racnode1 ~]# /usr/bin/system-config-network &


Using the Network Configuration application, you need to configure both NIC devices as well as the /etc/hosts file and verify the DNS configuration. All of
these tasks can be completed using the Network Configuration GUI.

It should be noted that the /etc/hosts entries are the same for both Oracle RAC nodes and that I removed any entry that has to do with IPv6. For example:

# ::1 localhost6.localdomain6 localhost6

Our example Oracle RAC configuration will use the following network settings.

Oracle RAC Node 1 (racnode1)

Device   IP Address      Subnet          Gateway        Purpose

eth0     192.168.1.151   255.255.255.0   192.168.1.1    Connects racnode1 to the public network
eth1     192.168.2.151   255.255.255.0                  Connects racnode1 (interconnect) to racnode2 (racnode2-priv)

/etc/resolv.conf

nameserver 192.168.1.195
search idevelopment.info

/etc/hosts

# Do not remove the following line, or various programs


# that require network functionality will fail.
127.0.0.1 localhost.localdomain localhost

# Public Network - (eth0)


192.168.1.151 racnode1.idevelopment.info racnode1
192.168.1.152 racnode2.idevelopment.info racnode2

# Private Interconnect - (eth1)


192.168.2.151 racnode1-priv.idevelopment.info racnode1-priv
192.168.2.152 racnode2-priv.idevelopment.info racnode2-priv

# Public Virtual IP (VIP) addresses - (eth0:1)


192.168.1.251 racnode1-vip.idevelopment.info racnode1-vip
192.168.1.252 racnode2-vip.idevelopment.info racnode2-vip

# Private Storage Network for Openfiler - (eth1)


192.168.1.195 openfiler1.idevelopment.info openfiler1
192.168.2.195 openfiler1-priv.idevelopment.info openfiler1-priv

Oracle RAC Node 2 (racnode2)

Device   IP Address      Subnet          Gateway        Purpose

eth0     192.168.1.152   255.255.255.0   192.168.1.1    Connects racnode2 to the public network
eth1     192.168.2.152   255.255.255.0                  Connects racnode2 (interconnect) to racnode1 (racnode1-priv)

/etc/resolv.conf

nameserver 192.168.1.195
search idevelopment.info

/etc/hosts

# Do not remove the following line, or various programs


# that require network functionality will fail.
127.0.0.1 localhost.localdomain localhost

# Public Network - (eth0)


192.168.1.151 racnode1.idevelopment.info racnode1
192.168.1.152 racnode2.idevelopment.info racnode2

# Private Interconnect - (eth1)


192.168.2.151 racnode1-priv.idevelopment.info racnode1-priv
192.168.2.152 racnode2-priv.idevelopment.info racnode2-priv

# Public Virtual IP (VIP) addresses - (eth0:1)


192.168.1.251 racnode1-vip.idevelopment.info racnode1-vip
192.168.1.252 racnode2-vip.idevelopment.info racnode2-vip

# Private Storage Network for Openfiler - (eth1)


192.168.1.195 openfiler1.idevelopment.info openfiler1
192.168.2.195 openfiler1-priv.idevelopment.info openfiler1-priv

Openfiler Network Storage Server (openfiler1)

Device   IP Address      Subnet          Gateway        Purpose

eth0     192.168.1.195   255.255.255.0   192.168.1.1    Connects openfiler1 to the public network
eth1     192.168.2.195   255.255.255.0                  Connects openfiler1 to the private network

/etc/resolv.conf

nameserver 192.168.1.195
search idevelopment.info

/etc/hosts

# Do not remove the following line, or various programs


# that require network functionality will fail.
127.0.0.1 localhost.localdomain localhost
192.168.1.195 openfiler1.idevelopment.info openfiler1

In the screenshots below, only Oracle RAC Node 1 (racnode1) is shown. Be sure to make all the proper network settings to both Oracle RAC nodes.

Figure 2: Network Configuration Screen, Node 1 (racnode1)
Figure 3: Ethernet Device Screen, eth0 (racnode1)

Figure 4: Ethernet Device Screen, eth1 (racnode1)
Figure 5: Network Configuration Screen, DNS (racnode1)

Figure 6: Network Configuration Screen, /etc/hosts (racnode1)

Once the network is configured, you can use the ifconfig command to verify everything is working. The following example is from racnode1.

[root@racnode1 ~]# /sbin/ifconfig -a


eth0 Link encap:Ethernet HWaddr 00:26:9E:02:D3:AC
inet addr:192.168.1.151 Bcast:192.168.1.255 Mask:255.255.255.0
inet6 addr: fe80::226:9eff:fe02:d3ac/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:236549 errors:0 dropped:0 overruns:0 frame:0
TX packets:264953 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:28686645 (27.3 MiB) TX bytes:159319080 (151.9 MiB)
Interrupt:177 Memory:dfef0000-dff00000

eth1 Link encap:Ethernet HWaddr 00:0E:0C:64:D1:E5


inet addr:192.168.2.151 Bcast:192.168.2.255 Mask:255.255.255.0
inet6 addr: fe80::20e:cff:fe64:d1e5/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:120 errors:0 dropped:0 overruns:0 frame:0
TX packets:48 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:24544 (23.9 KiB) TX bytes:8634 (8.4 KiB)
Base address:0xddc0 Memory:fe9c0000-fe9e0000
lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
inet6 addr: ::1/128 Scope:Host
UP LOOPBACK RUNNING MTU:16436 Metric:1
RX packets:3191 errors:0 dropped:0 overruns:0 frame:0
TX packets:3191 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:4296868 (4.0 MiB) TX bytes:4296868 (4.0 MiB)

sit0 Link encap:IPv6-in-IPv4


NOARP MTU:1480 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:0 (0.0 b) TX bytes:0 (0.0 b)

Verify Network Configuration

As the root user account, verify the network configuration by using the ping command to test the connection from each node in the cluster to all the other nodes.
For example, as the root user account, run the following commands on each node.

# ping -c 3 racnode1.idevelopment.info
# ping -c 3 racnode2.idevelopment.info
# ping -c 3 racnode1-priv.idevelopment.info
# ping -c 3 racnode2-priv.idevelopment.info
# ping -c 3 openfiler1.idevelopment.info
# ping -c 3 openfiler1-priv.idevelopment.info
# ping -c 3 racnode1
# ping -c 3 racnode2
# ping -c 3 racnode1-priv
# ping -c 3 racnode2-priv
# ping -c 3 openfiler1
# ping -c 3 openfiler1-priv

You should not get a response from the nodes using the ping command for the virtual IPs (racnode1-vip, racnode2-vip) or the SCAN IP addresses (racnode-
cluster-scan) until after Oracle Clusterware is installed and running. If the ping commands for the public addresses fail, resolve the issue before you proceed.
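To save some typing, the same checks can be wrapped in a small loop. This is a convenience sketch only; adjust the host list to match your environment.

# Run as root on each node; every public and private name should respond
for host in racnode1 racnode2 racnode1-priv racnode2-priv openfiler1 openfiler1-priv
do
    ping -c 3 $host > /dev/null 2>&1 && echo "$host: OK" || echo "$host: FAILED"
done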

Verify SCAN Configuration

In this article, I will configure SCAN for round-robin resolution to three, manually configured static IP addresses in DNS.

racnode-cluster-scan IN A 192.168.1.187
racnode-cluster-scan IN A 192.168.1.188
racnode-cluster-scan IN A 192.168.1.189

Oracle Corporation strongly recommends configuring three IP addresses considering load balancing and high availability requirements, regardless of the number of
servers in the cluster. These virtual IP addresses must all be on the same subnet as the public network in the cluster. The SCAN name must be 15 characters or
less in length, not including the domain, and must be resolvable without the domain suffix. For example, "racnode-cluster-scan" must be resolvable as opposed to
only "racnode-cluster-scan.idevelopment.info". The virtual IP addresses for SCAN (and the virtual IP address for the node) should not be manually assigned to
a network interface on the cluster since Oracle Clusterware is responsible for enabling them after the Oracle Grid Infrastructure installation. In other words, the
SCAN addresses and virtual IP addresses (VIPs) should not respond to ping commands before installation.

Verify the SCAN configuration in DNS using the nslookup command-line utility. Since our DNS is set up to provide round-robin access to the IP addresses resolved
by the SCAN entry, run the nslookup command several times to make certain that the round-robin algorithm is functioning properly. The result should be that each
time the nslookup is run, it will return the set of three IP addresses in a different order. For example:

[root@racnode1 ~]# nslookup racnode-cluster-scan


Server: 192.168.1.195
Address: 192.168.1.195#53

Name: racnode-cluster-scan.idevelopment.info
Address: 192.168.1.187
Name: racnode-cluster-scan.idevelopment.info
Address: 192.168.1.188
Name: racnode-cluster-scan.idevelopment.info
Address: 192.168.1.189

[root@racnode1 ~]# nslookup racnode-cluster-scan


Server: 192.168.1.195
Address: 192.168.1.195#53

Name: racnode-cluster-scan.idevelopment.info
Address: 192.168.1.189
Name: racnode-cluster-scan.idevelopment.info
Address: 192.168.1.187
Name: racnode-cluster-scan.idevelopment.info
Address: 192.168.1.188

[root@racnode1 ~]# nslookup racnode-cluster-scan


Server: 192.168.1.195
Address: 192.168.1.195#53

Name: racnode-cluster-scan.idevelopment.info
Address: 192.168.1.188
Name: racnode-cluster-scan.idevelopment.info
Address: 192.168.1.189
Name: racnode-cluster-scan.idevelopment.info
Address: 192.168.1.187
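Rather than running nslookup by hand several times, a short loop (sketch only) makes it easy to watch the rotation; the first address returned should cycle through 192.168.1.187, 192.168.1.188 and 192.168.1.189.

[root@racnode1 ~]# for i in 1 2 3 4 5 6; do nslookup racnode-cluster-scan | grep "192.168.1.18" | head -1; done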

Confirm the RAC Node Name is Not Listed in Loopback Address

Ensure that the node name (racnode1 or racnode2) is not included for the loopback address in the /etc/hosts file. If the machine name is listed in the loopback
address entry:

127.0.0.1 racnode1 localhost.localdomain localhost


it will need to be removed as shown below:

127.0.0.1 localhost.localdomain localhost

If the RAC node name is listed for the loopback address, you will receive the following error during the RAC installation.

ORA-00603: ORACLE server session terminated by fatal error

or

ORA-29702: error occurred in Cluster Group Service operation
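A quick way to confirm this on both nodes is to inspect the loopback entry directly; the node name should not appear in the output.

[root@racnode1 ~]# grep "^127.0.0.1" /etc/hosts
127.0.0.1        localhost.localdomain localhost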

Check and turn off UDP ICMP rejections

During the Linux installation process, I indicated to not configure the firewall option. By default the option to configure a firewall is selected by the installer. This has
burned me several times so I like to do a double-check that the firewall option is not configured and to ensure UDP ICMP filtering is turned off.

If UDP ICMP is blocked or rejected by the firewall, the Oracle Clusterware software will crash after several minutes of running. When the Oracle Clusterware
process fails, you will have something similar to the following in the <machine_name>_evmocr.log file.

08/29/2005 22:17:19
oac_init:2: Could not connect to server, clsc retcode = 9
08/29/2005 22:17:19
a_init:12!: Client init unsuccessful : [32]
ibctx:1:ERROR: INVALID FORMAT
proprinit:problem reading the bootblock or superbloc 22

When experiencing this type of error, the solution is to remove the UDP ICMP (iptables) rejection rule or to simply have the firewall option turned off. The Oracle
Clusterware software will then start to operate normally and not crash. The following commands should be executed as the root user account on both Oracle RAC
nodes.

1. Check to ensure that the firewall option is turned off. If the firewall option is stopped (like it is in my example below) you do not have to proceed with the
following steps.

[root@racnode1 ~]# /etc/rc.d/init.d/iptables status


Firewall is stopped.

[root@racnode2 ~]# /etc/rc.d/init.d/iptables status


Firewall is stopped.

2. If the firewall option is operating, you will need to first manually disable UDP ICMP rejections.

[root@racnode1 ~]# /etc/rc.d/init.d/iptables stop


Flushing firewall rules: [ OK ]
Setting chains to policy ACCEPT: filter [ OK ]
Unloading iptables modules: [ OK ]

3. Then, turn UDP ICMP rejections off for all subsequent server reboots (which should always be turned off).

[root@racnode1 ~]# chkconfig iptables off


Cluster Time Synchronization Service
Perform the following Cluster Time Synchronization Service configuration on both Oracle RAC nodes in the cluster.

Oracle Clusterware 11g Release 2 and later requires time synchronization across all nodes within a cluster where Oracle RAC is deployed. Oracle provides two
options for time synchronization: an operating system configured network time protocol (NTP) or the new Oracle Cluster Time Synchronization Service (CTSS).
Oracle Cluster Time Synchronization Service (ctssd) is designed for organizations whose Oracle RAC databases are unable to access NTP services.

Configuring NTP is outside the scope of this article; this guide will therefore rely on the Oracle Cluster Time Synchronization Service as the network time protocol.

Configure Cluster Time Synchronization Service (CTSS)

If you want to use Cluster Time Synchronization Service to provide synchronization service in the cluster, then de-configure and de-install the Network Time
Protocol (NTP) service.

To deactivate the NTP service, you must stop the existing ntpd service, disable it from the initialization sequences, and remove the ntp.conf file. To complete these
steps on Red Hat Enterprise Linux or CentOS, run the following commands as the root user account on both Oracle RAC nodes.

[root@racnode1 ~]# /sbin/service ntpd stop


[root@racnode1 ~]# chkconfig ntpd off
[root@racnode1 ~]# mv /etc/ntp.conf /etc/ntp.conf.original
Also remove the following file:

[root@racnode1 ~]# rm /var/run/ntpd.pid


This file maintains the pid for the NTP daemon.
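To double-check that NTP is fully deconfigured on both nodes before starting the Grid Infrastructure install, output similar to the following should be seen.

[root@racnode1 ~]# /sbin/service ntpd status
ntpd is stopped

[root@racnode1 ~]# chkconfig --list ntpd
ntpd    0:off   1:off   2:off   3:off   4:off   5:off   6:off

[root@racnode1 ~]# ls /etc/ntp.conf
ls: /etc/ntp.conf: No such file or directory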

When the installer finds that the NTP protocol is not active, the Cluster Time Synchronization Service is automatically installed in active mode and synchronizes the
time across the nodes. If NTP is found configured, then the Cluster Time Synchronization Service is started in observer mode, and no active time synchronization is
performed by Oracle Clusterware within the cluster.

To confirm that ctssd is active after installation, enter the following command as the Grid installation owner (grid).

[grid@racnode1 ~]$ crsctl check ctss


CRS-4701: The Cluster Time Synchronization Service is in Active mode.
CRS-4702: Offset (in msec): 0

Configure Network Time Protocol (only if not using CTSS as documented above)

Please note that this guide will use Cluster Time Synchronization Service for time
synchronization (described above) across both Oracle RAC nodes in the cluster. This
section is provided for documentation purposes only and can be used by organizations
already set up to use NTP within their domain.

If you are using NTP and you prefer to continue using it instead of Cluster Time Synchronization Service, then you need to modify the NTP initialization file to set
the -x flag, which prevents time from being adjusted backward. Restart the network time protocol daemon after you complete this task.

To do this on Oracle Linux, Red Hat Linux, and Asianux systems, edit the /etc/sysconfig/ntpd file to add the -x flag, as in the following example.

# Drop root to id 'ntp:ntp' by default.


OPTIONS="-x -u ntp:ntp -p /var/run/ntpd.pid"
# Set to 'yes' to sync hw clock after successful ntpdate
SYNC_HWCLOCK=no
# Additional options for ntpdate
NTPDATE_OPTIONS=""

Then, restart the NTP service.

# /sbin/service ntpd restart


On SUSE systems, modify the configuration file /etc/sysconfig/ntp with the following settings.

NTPD_OPTIONS="-x -u ntp"

Restart the daemon using the following command.

# service ntp restart


Configure iSCSI Volumes using Openfiler
Perform the following configuration tasks on the network storage server (openfiler1).

Openfiler administration is performed using the Openfiler Storage Control Center, a browser-based tool accessed over an https connection on port 446. For example:

https://openfiler1.idevelopment.info:446/

From the Openfiler Storage Control Center home page, log in as an administrator. The default administration login credentials for Openfiler are:

Username: openfiler
Password: password

The first page the administrator sees is the [Status] / [System Overview] screen.

To use Openfiler as an iSCSI storage server, we have to perform six major tasks: set up iSCSI services, configure network access, identify and partition the
physical storage, create a new volume group, create all logical volumes, and finally, create new iSCSI targets for each of the logical volumes.

Services

To control services, we use the Openfiler Storage Control Center and navigate to [Services] / [Manage Services].
Figure 7: Enable iSCSI Openfiler Service

To enable the iSCSI service, click on the 'Enable' link under the 'iSCSI target server' service name. After that, the 'iSCSI target server' status should change to
'Enabled'.

The ietd program implements the user-level part of iSCSI Enterprise Target software for building an iSCSI storage system on Linux. With the iSCSI target enabled,
we should be able to SSH into the Openfiler server and see the iscsi-target service running.

[root@openfiler1 ~]# service iscsi-target status


ietd (pid 14243) is running...

Network Access Configuration

The next step is to configure network access in Openfiler to identify both Oracle RAC nodes (racnode1 and racnode2) that will need to access the iSCSI volumes
through the storage (private) network. Note that iSCSI logical volumes will be created later on in this section. Also note that this step does not actually grant the
appropriate permissions to the iSCSI volumes required by both Oracle RAC nodes. That will be accomplished later in this section by updating the ACL for each new
logical volume.

As in the previous section, configuring network access is accomplished using the Openfiler Storage Control Center by navigating to [System] / [Network Setup].
The "Network Access Configuration" section (at the bottom of the page) allows an administrator to set up networks and/or hosts that will be allowed to access
resources exported by the Openfiler appliance. For the purpose of this article, we will want to add both Oracle RAC nodes individually rather than allowing the entire
192.168.2.0 network to have access to Openfiler resources.

When entering each of the Oracle RAC nodes, note that the 'Name' field is just a logical name used for reference only. As a convention when entering nodes, I
simply use the node name defined for that IP address. Next, when entering the actual node in the 'Network/Host' field, always use its IP address even though its
hostname may already be defined in your /etc/hosts file or DNS. Lastly, when entering actual hosts in our Class C network, use a subnet mask of
255.255.255.255.

It is important to remember that you will be entering the IP address of the private network (eth1) for each of the RAC nodes in the cluster.

The following image shows the results of adding both Oracle RAC nodes.
Figure 8: Configure Openfiler Network Access for Oracle RAC Nodes

Physical Storage

In this section, we will be creating the three iSCSI volumes to be used as shared storage by both of the Oracle RAC nodes in the cluster. This involves multiple
steps that will be performed on the internal 73GB 15K SCSI hard disk connected to the Openfiler server.

Storage devices like internal IDE/SATA/SCSI/SAS disks, storage arrays, external USB drives, external FireWire drives, or ANY other storage can be connected to
the Openfiler server and served to the clients. Once these devices are discovered at the OS level, Openfiler Storage Control Center can be used to set up and
manage all of that storage.

In our case, we have a 73GB internal SCSI hard drive for our shared storage needs. On the Openfiler server this drive is seen as /dev/sdb (MAXTOR
ATLAS15K2_73SCA). To see this and to start the process of creating our iSCSI volumes, navigate to [Volumes] / [Block Devices] from the Openfiler Storage
Control Center.

Figure 9: Openfiler Physical Storage - Block Device Management

Partitioning the Physical Disk

The first step we will perform is to create a single primary partition on the /dev/sdb internal hard disk. By clicking on the /dev/sdb link, we are presented with the
options to 'Edit' or 'Create' a partition. Since we will be creating a single primary partition that spans the entire disk, most of the options can be left to their default
setting where the only modification would be to change the 'Partition Type' from 'Extended partition' to 'Physical volume'. Here are the values I specified to create
the primary partition on /dev/sdb.

Physical Disk Primary Partition

Mode                Primary
Partition Type      Physical volume
Starting Cylinder   1
Ending Cylinder     8924

The size now shows 68.36 GB. To accept that we click on the [Create] button. This results in a new partition (/dev/sdb1) on our internal hard disk.
Figure 10: Partition the Physical Volume

Volume Group Management

The next step is to create a Volume Group. We will be creating a single volume group named racdbvg that contains the newly created primary partition.

From the Openfiler Storage Control Center, navigate to [Volumes] / [Volume Groups]. There we would see any existing volume groups, or none as in our case.
Using the Volume Group Management screen, enter the name of the new volume group (racdbvg), click on the checkbox in front of /dev/sdb1 to select that
partition, and finally click on the [Add volume group] button. After that we are presented with the list that now shows our newly created volume group named
"racdbvg".

Figure 11: New Volume Group Created

Logical Volumes

We can now create the three logical volumes in the newly created volume group (racdbvg).

From the Openfiler Storage Control Center, navigate to [Volumes] / [Add Volume]. There we will see the newly created volume group (racdbvg) along with its block
storage statistics. Also available at the bottom of this screen is the option to create a new volume in the selected volume group (Create a volume in "racdbvg").
Use this screen to create the following three iSCSI logical volumes. After creating each logical volume, the application will point you to the "Manage Volumes"
screen. You will then need to click back to the "Add Volume" tab to create the next logical volume until all three iSCSI volumes are created.

iSCSI / Logical Volumes

Volume Name    Volume Description           Required Space (MB)    Filesystem Type

racdb-crs1     racdb - ASM CRS Volume 1      2,208                 iSCSI

racdb-data1    racdb - ASM Data Volume 1    33,888                 iSCSI

racdb-fra1     racdb - ASM FRA Volume 1     33,888                 iSCSI

In effect we have created three iSCSI disks that can now be presented to iSCSI clients (racnode1 and racnode2) on the network. The "Manage Volumes" screen
should look as follows:

Figure 12: New Logical (iSCSI) Volumes

iSCSI Targets

At this point, we have three iSCSI logical volumes. Before an iSCSI client can have access to them, however, an iSCSI target will need to be created for each of
these three volumes. Each iSCSI logical volume will be mapped to a specific iSCSI target and the appropriate network access permissions to that target will be
granted to both Oracle RAC nodes. For the purpose of this article, there will be a one-to-one mapping between an iSCSI logical volume and an iSCSI target.

There are three steps involved in creating and configuring an iSCSI target: create a unique Target IQN (basically, the universal name for the new iSCSI target),
map one of the iSCSI logical volumes created in the previous section to the newly created iSCSI target, and finally, grant both of the Oracle RAC nodes access to
the new iSCSI target. Please note that this process will need to be performed for each of the three iSCSI logical volumes created in the previous section.

For the purpose of this article, the following table lists the new iSCSI target names (the Target IQN) and which iSCSI logical volume it will be mapped to.

iSCSI Target / Logical Volume Mappings

Target IQN                               iSCSI Volume Name    Volume Description

iqn.2006-01.com.openfiler:racdb.crs1     racdb-crs1           racdb - ASM CRS Volume 1

iqn.2006-01.com.openfiler:racdb.data1    racdb-data1          racdb - ASM Data Volume 1

iqn.2006-01.com.openfiler:racdb.fra1     racdb-fra1           racdb - ASM FRA Volume 1

We are now ready to create the three new iSCSI targets, one for each of the iSCSI logical volumes. The example below illustrates the three steps required to
create a new iSCSI target by creating the Oracle Clusterware / racdb-crs1 target (iqn.2006-01.com.openfiler:racdb.crs1). This three-step process will need to be
repeated for each of the three new iSCSI targets listed in the table above.

Create New Target IQN
From the Openfiler Storage Control Center, navigate to [Volumes] / [iSCSI Targets]. Verify the grey sub-tab "Target Configuration" is selected. This page allows
you to create a new iSCSI target. A default value is automatically generated for the name of the new iSCSI target (better known as the "Target IQN"). An example
Target IQN is "iqn.2006-01.com.openfiler:tsn.ae4683b67fd3":

Figure 13: Create New iSCSI Target: Default Target IQN

I prefer to replace the last segment of the default Target IQN with something more meaningful. For the first iSCSI target (racdb-crs1), I will modify the default Target
IQN by replacing the string "tsn.ae4683b67fd3" with "racdb.crs1" as shown in Figure 14 below.

Figure 14: Create New iSCSI Target: Replace Default Target IQN

Once you are satisfied with the new Target IQN, click the [Add] button. This will create a new iSCSI target and then bring up a page that allows you to modify a
number of settings for the new iSCSI target. For the purpose of this article, none of the settings for the new iSCSI target need to be changed.

LUN Mapping

After creating the new iSCSI target, the next step is to map the appropriate iSCSI logical volume to it. Under the "Target Configuration" sub-tab, verify the correct
iSCSI target is selected in the section "Select iSCSI Target". If not, use the pull-down menu to select the correct iSCSI target and click the [Change] button.

Next, click on the grey sub-tab named "LUN Mapping" (next to the "Target Configuration" sub-tab). Locate the appropriate iSCSI logical volume (/dev/racdbvg/racdb-
crs1 in this first example) and click the [Map] button. You do not need to change any settings on this page. Your screen should look similar to Figure 15 after
clicking the "Map" button for volume /dev/racdbvg/racdb-crs1.
Figure 15: Create New iSCSI Target: Map LUN

Network ACL

Before an iSCSI client can have access to the newly created iSCSI target, it needs to be granted the appropriate permissions. A while back, we configured network
access in Openfiler for two hosts (the Oracle RAC nodes). These are the two nodes that will need to access the new iSCSI targets through the storage (private)
network. We now need to grant both of the Oracle RAC nodes access to the new iSCSI target.

Click on the grey sub-tab named "Network ACL" (next to the "LUN Mapping" sub-tab). For the current iSCSI target, change the "Access" for both hosts from 'Deny' to
'Allow' and click the [Update] button.

Figure 16: Create New iSCSI Target: Update Network ACL

Go back to the Create New Target IQN section and perform these same three tasks for the remaining two iSCSI logical volumes while substituting the values
found in the "iSCSI Target / Logical Volume Mappings" table (namely, the value in the 'Target IQN' column).

Configure iSCSI Volumes on Oracle RAC Nodes
Configure the iSCSI initiator on both Oracle RAC nodes in the cluster. Creating partitions, however, should only be executed on one of the nodes in the RAC cluster.

An iSCSI client can be any system (Linux, Unix, MS Windows, Apple Mac, etc.) for which iSCSI support (a driver) is available. In our case, the clients are two
Linux servers, racnode1 and racnode2, running Red Hat Enterprise Linux 5.5 or CentOS 5.5.

In this section we will be configuring the iSCSI software initiator on both of the Oracle RAC nodes. RHEL/CentOS 5.5 includes the Open-iSCSI iSCSI software
initiator which can be found in the iscsi-initiator-utils RPM. This is a change from previous versions of RHEL/CentOS (4.x) which included the Linux iscsi-
sfnet software driver developed as part of the Linux-iSCSI Project. All iSCSI management tasks like discovery and logins will use the command-line interface
iscsiadm which is included with Open-iSCSI.

The iSCSI software initiator will be configured to automatically log in to the network storage server (openfiler1) and discover the iSCSI volumes created in the
previous section. We will then go through the steps of creating persistent local SCSI device names (i.e. /dev/iscsi/crs1) for each of the iSCSI target names
discovered using udev. Having a consistent local SCSI device name and knowing which iSCSI target it maps to helps to differentiate between the three volumes when
configuring ASM. Before we can do any of this, however, we must first install the iSCSI initiator software.

This guide makes use of ASMLib 2.0 which is a support library for the Automatic Storage
Management (ASM) feature of the Oracle Database. ASMLib will be used to label all iSCSI
volumes used in this guide. By default, ASMLib already provides persistent paths and
permissions for storage devices used with ASM. This feature eliminates the need for
updating udev or devlabel files with storage device paths and permissions. For the purpose
of this article and in practice, I still opt to create persistent local SCSI device names for
each of the iSCSI target names discovered using udev. This provides a means of self-
documentation which helps to quickly identify the name and location of each volume.

Installing the iSCSI (initiator) service

With Red Hat Enterprise Linux 5.5 or CentOS 5.5, the Open-iSCSI iSCSI software initiator does not get installed by default. The software is included in the iscsi-
initiator-utils package which can be found on CD/DVD #1. To determine if this package is installed (which in most cases, it will not be), perform the following
on both Oracle RAC nodes.

[root@racnode1 ~]# rpm -qa --queryformat "%{NAME}-%{VERSION}-%{RELEASE} (%{ARCH})\n" | grep iscsi-initiator-utils


If the iscsi-initiator-utils package is not installed, load CD/DVD #1 into each of the Oracle RAC nodes and perform the following.

[root@racnode1 ~]# mount -r /dev/cdrom /media/cdrom


[root@racnode1 ~]# cd /media/cdrom/CentOS
[root@racnode1 ~]# rpm -Uvh iscsi-initiator-utils-*
[root@racnode1 ~]# cd /
[root@racnode1 ~]# eject
Verify the iscsi-initiator-utils package is now installed on both Oracle RAC nodes.

[root@racnode1 ~]# rpm -qa --queryformat "%{NAME}-%{VERSION}-%{RELEASE} (%{ARCH})\n" | grep iscsi-initiator-utils


iscsi-initiator-utils-6.2.0.871-0.16.el5 (x86_64)

[root@racnode2 ~]# rpm -qa --queryformat "%{NAME}-%{VERSION}-%{RELEASE} (%{ARCH})\n" | grep iscsi-initiator-utils


iscsi-initiator-utils-6.2.0.871-0.16.el5 (x86_64)

Configure the iSCSI (initiator) service

After verifying that the iscsi-initiator-utils package is installed, start the iscsid service on both Oracle RAC nodes and enable it to automatically start when
the system boots. We will also configure the iscsi service to automatically start, which logs in to the iSCSI targets needed at system startup.

[root@racnode1 ~]# service iscsid start


Turning off network shutdown. Starting iSCSI daemon: [ OK ]
[ OK ]

[root@racnode1 ~]# chkconfig iscsid on


[root@racnode1 ~]# chkconfig iscsi on
Now that the iSCSI service is started, use the iscsiadm command-line interface to discover all available targets on the network storage server. This should be
performed on both Oracle RAC nodes to verify the configuration is functioning properly.

[root@racnode1 ~]# iscsiadm -m discovery -t sendtargets -p openfiler1-priv


192.168.2.195:3260,1 iqn.2006-01.com.openfiler:racdb.crs1
192.168.2.195:3260,1 iqn.2006-01.com.openfiler:racdb.fra1
192.168.2.195:3260,1 iqn.2006-01.com.openfiler:racdb.data1

ManuallyLogIntoiSCSITargets

At this point, the iSCSI initiator service has been started and each of the Oracle RAC nodes were able to discover the available targets from the Openfiler network storage server. The next step is to manually log in to each of the available iSCSI targets, which can be done using the iscsiadm command-line interface. This needs to be run on both Oracle RAC nodes. Note that I had to specify the IP address and not the host name of the network storage server (openfiler1-priv). I believe this is required because the discovery (shown above) lists the targets using the IP address.

[root@racnode1 ~]# iscsiadm -m node -T iqn.2006-01.com.openfiler:racdb.crs1 -p 192.168.2.195 -l


[root@racnode1 ~]# iscsiadm -m node -T iqn.2006-01.com.openfiler:racdb.data1 -p 192.168.2.195 -l
[root@racnode1 ~]# iscsiadm -m node -T iqn.2006-01.com.openfiler:racdb.fra1 -p 192.168.2.195 -l
ConfigureAutomaticLogIn

Thenextstepistoensuretheclientwillautomaticallylogintoeachofthetargetslistedabovewhenthemachineisbooted(ortheiSCSIinitiatorserviceis
started/restarted).Aswiththemanualloginprocessdescribedabove,performthefollowingonbothOracleRACnodes.

[root@racnode1 ~]# iscsiadm -m node -T iqn.2006-01.com.openfiler:racdb.crs1 -p 192.168.2.195 --op update -n node.startup -v automatic
[root@racnode1 ~]# iscsiadm -m node -T iqn.2006-01.com.openfiler:racdb.data1 -p 192.168.2.195 --op update -n node.startup -v automatic
[root@racnode1 ~]# iscsiadm -m node -T iqn.2006-01.com.openfiler:racdb.fra1 -p 192.168.2.195 --op update -n node.startup -v automatic
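If you would like to confirm the change, iscsiadm can also display the current node record for a target. This optional check is not part of the original procedure, but on both Oracle RAC nodes the output should now report node.startup = automatic for each of the three targets.

[root@racnode1 ~]# iscsiadm -m node -T iqn.2006-01.com.openfiler:racdb.crs1 -p 192.168.2.195 | grep node.startup
node.startup = automatic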
CreatePersistentLocalSCSIDeviceNames

In this section, we will go through the steps to create persistent local SCSI device names for each of the iSCSI target names using udev. Having a consistent local SCSI device name, and knowing which iSCSI target it maps to, helps to differentiate between the three volumes when configuring ASM. Although this is not a strict requirement since we will be using ASMLib 2.0 for all volumes, it provides a means of self-documentation to quickly identify the name and location of each iSCSI volume.

Bydefault,wheneitheroftheOracleRACnodesbootandtheiSCSIinitiatorserviceisstarted,itwillautomaticallylogintoeachoftheiSCSItargetsconfiguredin
arandomfashionandmapthemtothenextavailablelocalSCSIdevicename.Forexample,thetargetiqn.2006-01.com.openfiler:racdb.crs1maygetmappedto
/dev/sdb.Icanactuallydeterminethecurrentmappingsforalltargetsbylookingatthe/dev/disk/by-pathdirectory.

[root@racnode1 ~]# (cd /dev/disk/by-path; ls -l *openfiler* | awk '{FS=" "; print $9 " " $10 " " $11}')
ip-192.168.2.195:3260-iscsi-iqn.2006-01.com.openfiler:racdb.crs1-lun-0 -> ../../sdb
ip-192.168.2.195:3260-iscsi-iqn.2006-01.com.openfiler:racdb.data1-lun-0 -> ../../sdd
ip-192.168.2.195:3260-iscsi-iqn.2006-01.com.openfiler:racdb.fra1-lun-0 -> ../../sdc

Usingtheoutputfromtheabovelisting,wecanestablishthefollowingcurrentmappings.

CurrentiSCSITargetNametoLocalSCSIDeviceNameMappings

iSCSITargetName LocalSCSIDeviceName

iqn.200601.com.openfiler:racdb.crs1 /dev/sdb

iqn.200601.com.openfiler:racdb.data1 /dev/sdd

iqn.200601.com.openfiler:racdb.fra1 /dev/sdc

Thismapping,however,maychangeeverytimetheOracleRACnodeisrebooted.Forexample,afterarebootitmaybedeterminedthattheiSCSItargetiqn.2006-
01.com.openfiler:racdb.crs1getsmappedtothelocalSCSIdevice/dev/sdc.ItisthereforeimpracticaltorelyonusingthelocalSCSIdevicenamegiventhere
isnowaytopredicttheiSCSItargetmappingsafterareboot.

Whatweneedisaconsistentdevicenamewecanreference(i.e./dev/iscsi/crs1)thatwillalwayspointtotheappropriateiSCSItargetthroughreboots.Thisis
wheretheDynamicDeviceManagementtoolnamedudevcomesin.udevprovidesadynamicdevicedirectoryusingsymboliclinksthatpointtotheactualdevice
usingaconfigurablesetofrules.Whenudevreceivesadeviceevent(forexample,theclientloggingintoaniSCSItarget),itmatchesitsconfiguredrulesagainst
theavailabledeviceattributesprovidedinsysfstoidentifythedevice.Rulesthatmatchmayprovideadditionaldeviceinformationorspecifyadevicenodename
andmultiplesymlinknamesandinstructudevtorunadditionalprograms(aSHELLscriptforexample)aspartofthedeviceeventhandlingprocess.

Thefirststepistocreateanewrulesfile.Thefilewillbenamed/etc/udev/rules.d/55-openiscsi.rulesandcontainonlyasinglelineofname=valuepairsused
toreceiveeventsweareinterestedin.ItwillalsodefineacalloutSHELLscript(/etc/udev/scripts/iscsidev.sh)tohandletheevent.

Createthefollowingrulesfile/etc/udev/rules.d/55-openiscsi.rulesonbothOracleRACnodes.

# /etc/udev/rules.d/55-openiscsi.rules
KERNEL=="sd*", BUS=="scsi", PROGRAM="/etc/udev/scripts/iscsidev.sh %b",SYMLINK+="iscsi/%c/part%n"

WenowneedtocreatetheUNIXSHELLscriptthatwillbecalledwhenthiseventisreceived.Let'sfirstcreateaseparatedirectoryonbothOracleRACnodes
whereudevscriptscanbestored.

[root@racnode1 ~]# mkdir -p /etc/udev/scripts


[root@racnode2 ~]# mkdir -p /etc/udev/scripts

Next,createtheUNIXshellscript/etc/udev/scripts/iscsidev.shonbothOracleRACnodes.

#!/bin/sh

# FILE: /etc/udev/scripts/iscsidev.sh

BUS=${1}
HOST=${BUS%%:*}

[ -e /sys/class/iscsi_host ] || exit 1

file="/sys/class/iscsi_host/host${HOST}/device/session*/iscsi_session*/targetname"

target_name=$(cat ${file})

# This is not an open-scsi drive


if [ -z "${target_name}" ]; then
exit 1
fi

# Check if QNAP drive


check_qnap_target_name=${target_name%%:*}
if [ $check_qnap_target_name = "iqn.2004-04.com.qnap" ]; then
target_name=`echo "${target_name%.*}"`
fi

echo "${target_name##*.}"

AftercreatingtheUNIXSHELLscript,changeittoexecutable.

[root@racnode1 ~]# chmod 755 /etc/udev/scripts/iscsidev.sh


[root@racnode2 ~]# chmod 755 /etc/udev/scripts/iscsidev.sh
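Although not required, the callout script can be tested by hand before restarting the iSCSI service. udev passes the SCSI bus ID (%b) to the script, so feeding it a similar value should echo the short target name used to build the symlink. The host numbers and the resulting name below are only examples and will almost certainly differ on your system.

[root@racnode1 ~]# ls /sys/class/iscsi_host
host1  host2  host3

[root@racnode1 ~]# /etc/udev/scripts/iscsidev.sh 3:0:0:0
crs1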
Nowthatudevisconfigured,restarttheiSCSIserviceonbothOracleRACnodes.

[root@racnode1 ~]# service iscsi stop


Logging out of session [sid: 1, target: iqn.2006-01.com.openfiler:racdb.crs1, portal: 192.168.2.195,3260]
Logging out of session [sid: 2, target: iqn.2006-01.com.openfiler:racdb.data1, portal: 192.168.2.195,3260]
Logging out of session [sid: 3, target: iqn.2006-01.com.openfiler:racdb.fra1, portal: 192.168.2.195,3260]
Logout of [sid: 1, target: iqn.2006-01.com.openfiler:racdb.crs1, portal: 192.168.2.195,3260]: successful
Logout of [sid: 2, target: iqn.2006-01.com.openfiler:racdb.data1, portal: 192.168.2.195,3260]: successful
Logout of [sid: 3, target: iqn.2006-01.com.openfiler:racdb.fra1, portal: 192.168.2.195,3260]: successful
Stopping iSCSI daemon: [ OK ]

[root@racnode1 ~]# service iscsi start


iscsid dead but pid file exists
Turning off network shutdown. Starting iSCSI daemon: [ OK ]
[ OK ]
Setting up iSCSI targets: Logging in to [iface: default, target: iqn.2006-01.com.openfiler:racdb.crs1, portal: 192.168.2.195,3260]
Logging in to [iface: default, target: iqn.2006-01.com.openfiler:racdb.fra1, portal: 192.168.2.195,3260]
Logging in to [iface: default, target: iqn.2006-01.com.openfiler:racdb.data1, portal: 192.168.2.195,3260]
Login to [iface: default, target: iqn.2006-01.com.openfiler:racdb.crs1, portal: 192.168.2.195,3260]: successful
Login to [iface: default, target: iqn.2006-01.com.openfiler:racdb.fra1, portal: 192.168.2.195,3260]: successful
Login to [iface: default, target: iqn.2006-01.com.openfiler:racdb.data1, portal: 192.168.2.195,3260]: successful
[ OK ]

[root@racnode2 ~]# service iscsi stop


Logging out of session [sid: 1, target: iqn.2006-01.com.openfiler:racdb.crs1, portal: 192.168.2.195,3260]
Logging out of session [sid: 2, target: iqn.2006-01.com.openfiler:racdb.data1, portal: 192.168.2.195,3260]
Logging out of session [sid: 3, target: iqn.2006-01.com.openfiler:racdb.fra1, portal: 192.168.2.195,3260]
Logout of [sid: 1, target: iqn.2006-01.com.openfiler:racdb.crs1, portal: 192.168.2.195,3260]: successful
Logout of [sid: 2, target: iqn.2006-01.com.openfiler:racdb.data1, portal: 192.168.2.195,3260]: successful
Logout of [sid: 3, target: iqn.2006-01.com.openfiler:racdb.fra1, portal: 192.168.2.195,3260]: successful
Stopping iSCSI daemon: [ OK ]

[root@racnode2 ~]# service iscsi start


iscsid dead but pid file exists
Starting iSCSI daemon: [ OK ]
[ OK ]
Setting up iSCSI targets: Logging in to [iface: default, target: iqn.2006-01.com.openfiler:racdb.data1, portal: 192.168.2.195,3260]
Logging in to [iface: default, target: iqn.2006-01.com.openfiler:racdb.crs1, portal: 192.168.2.195,3260]
Logging in to [iface: default, target: iqn.2006-01.com.openfiler:racdb.fra1, portal: 192.168.2.195,3260]
Login to [iface: default, target: iqn.2006-01.com.openfiler:racdb.data1, portal: 192.168.2.195,3260]: successful
Login to [iface: default, target: iqn.2006-01.com.openfiler:racdb.crs1, portal: 192.168.2.195,3260]: successful
Login to [iface: default, target: iqn.2006-01.com.openfiler:racdb.fra1, portal: 192.168.2.195,3260]: successful
[ OK ]

Let'sseeifourhardworkpaidoff.

[root@racnode1 ~]# ls -l /dev/iscsi/*


/dev/iscsi/crs1:
total 0
lrwxrwxrwx 1 root root 9 Nov 6 17:32 part -> ../../sdc

/dev/iscsi/data1:
total 0
lrwxrwxrwx 1 root root 9 Nov 6 17:32 part -> ../../sdd

/dev/iscsi/fra1:
total 0
lrwxrwxrwx 1 root root 9 Nov 6 17:32 part -> ../../sde

[root@racnode2 ~]# ls -l /dev/iscsi/*


/dev/iscsi/crs1:
total 0
lrwxrwxrwx 1 root root 9 Nov 6 17:36 part -> ../../sdd

/dev/iscsi/data1:
total 0
lrwxrwxrwx 1 root root 9 Nov 6 17:36 part -> ../../sdc

/dev/iscsi/fra1:
total 0
lrwxrwxrwx 1 root root 9 Nov 6 17:36 part -> ../../sde

The listing above shows that udev did the job it was supposed to do! We now have a consistent set of local device names that can be used to reference the iSCSI targets. For example, we can safely assume that the device name /dev/iscsi/crs1/part will always reference the iSCSI target iqn.2006-01.com.openfiler:racdb.crs1. We now have a consistent iSCSI target name to local device name mapping, which is described in the following table.

iSCSITargetNametoLocalDeviceNameMappings

iSCSITargetName LocalDeviceName

iqn.200601.com.openfiler:racdb.crs1 /dev/iscsi/crs1/part

iqn.200601.com.openfiler:racdb.data1 /dev/iscsi/data1/part

iqn.200601.com.openfiler:racdb.fra1 /dev/iscsi/fra1/part
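As one final (optional) sanity check, the iscsiadm session mode can be used on both Oracle RAC nodes to confirm that all three iSCSI sessions are still logged in after the restart. The session IDs and ordering shown below will vary.

[root@racnode1 ~]# iscsiadm -m session
tcp: [1] 192.168.2.195:3260,1 iqn.2006-01.com.openfiler:racdb.crs1
tcp: [2] 192.168.2.195:3260,1 iqn.2006-01.com.openfiler:racdb.fra1
tcp: [3] 192.168.2.195:3260,1 iqn.2006-01.com.openfiler:racdb.data1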
CreatePartitionsoniSCSIVolumes

WenowneedtocreateasingleprimarypartitiononeachoftheiSCSIvolumesthatspanstheentiresizeofthevolume.Asmentionedearlierinthisarticle,Iwillbe
usingAutomaticStorageManagement(ASM)tostorethesharedfilesrequiredforOracleClusterware,thephysicaldatabasefiles(data/indexfiles,onlineredolog
files,andcontrolfiles),andtheFastRecoveryArea(FRA)fortheclusterdatabase.

TheOracleClusterwaresharedfiles(OCRandvotingdisk)willbestoredinanASMdiskgroupnamed+CRSwhichwillbeconfiguredforexternalredundancy.The
physicaldatabasefilesfortheclusterdatabasewillbestoredinanASMdiskgroupnamed+RACDB_DATAwhichwillalsobeconfiguredforexternalredundancy.
Finally,theFastRecoveryArea(RMANbackupsandarchivedredologfiles)willbestoredinathirdASMdiskgroupnamed+FRAwhichtoowillbeconfiguredfor
externalredundancy.

ThefollowingtableliststhethreeASMdiskgroupsthatwillbecreatedandwhichiSCSIvolumetheywillcontain.

OracleSharedDriveConfiguration

FileTypes ASMDiskgroupName iSCSITarget(short)Name ASMRedundancy Size ASMLibVolumeName

OCRandVotingDisk +CRS crs1 External 2GB ORCL:CRSVOL1

OracleDatabaseFiles +RACDB_DATA data1 External 32GB ORCL:DATAVOL1

OracleFastRecoveryArea +FRA fra1 External 32GB ORCL:FRAVOL1

Asshowninthetableabove,wewillneedtocreateasingleLinuxprimarypartitiononeachofthethreeiSCSIvolumes.ThefdiskcommandisusedinLinuxfor
creating(andremoving)partitions.ForeachofthethreeiSCSIvolumes,youcanusethedefaultvalueswhencreatingtheprimarypartitionasthedefaultactionisto
usetheentiredisk.YoucansafelyignoreanywarningsthatmayindicatethedevicedoesnotcontainavalidDOSpartition(orSun,SGIorOSFdisklabel).

Inthisexample,Iwillberunningthefdiskcommandfromracnode1tocreateasingleprimarypartitiononeachiSCSItargetusingthelocaldevicenamescreated
byudevintheprevioussection.

/dev/iscsi/crs1/part
/dev/iscsi/data1/part
/dev/iscsi/fra1/part

CreatingthesinglepartitiononeachoftheiSCSIvolumesmustonlyberunfromoneofthe
nodesintheOracleRACcluster!(i.e.racnode1)

# ---------------------------------------

[root@racnode1 ~]# fdisk /dev/iscsi/crs1/part


Command (m for help): n
Command action
e extended
p primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-1012, default 1): 1
Last cylinder or +size or +sizeM or +sizeK (1-1012, default 1012): 1012
Command (m for help): p
Disk /dev/iscsi/crs1/part: 2315 MB, 2315255808 bytes
72 heads, 62 sectors/track, 1012 cylinders
Units = cylinders of 4464 * 512 = 2285568 bytes

Device Boot Start End Blocks Id System


/dev/iscsi/crs1/part1 1 1012 2258753 83 Linux

Command (m for help): w


The partition table has been altered!

Calling ioctl() to re-read partition table.


Syncing disks.

# ---------------------------------------

[root@racnode1 ~]# fdisk /dev/iscsi/data1/part


Command (m for help): n
Command action
e extended
p primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-33888, default 1): 1
Last cylinder or +size or +sizeM or +sizeK (1-33888, default 33888): 33888
Command (m for help): p
Disk /dev/iscsi/data1/part: 35.5 GB, 35534143488 bytes
64 heads, 32 sectors/track, 33888 cylinders
Units = cylinders of 2048 * 512 = 1048576 bytes

Device Boot Start End Blocks Id System


/dev/iscsi/data1/part1 1 33888 34701296 83 Linux

Command (m for help): w


The partition table has been altered!
Calling ioctl() to re-read partition table.
Syncing disks.

# ---------------------------------------

[root@racnode1 ~]# fdisk /dev/iscsi/fra1/part


Command (m for help): n
Command action
e extended
p primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-33888, default 1): 1
Last cylinder or +size or +sizeM or +sizeK (1-33888, default 33888): 33888
Command (m for help): p
Disk /dev/iscsi/fra1/part: 35.5 GB, 35534143488 bytes
64 heads, 32 sectors/track, 33888 cylinders
Units = cylinders of 2048 * 512 = 1048576 bytes

Device Boot Start End Blocks Id System


/dev/iscsi/fra1/part1 1 33888 34701296 83 Linux

Command (m for help): w


The partition table has been altered!

Calling ioctl() to re-read partition table.


Syncing disks.
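For the record, the three interactive fdisk sessions above can also be scripted by feeding the answers to fdisk on standard input. The sketch below is simply an optional shortcut (not part of the original procedure) that accepts the default first and last cylinders to create one primary partition spanning each volume. As with the interactive method, it must only be run from one of the nodes in the Oracle RAC cluster (racnode1).

#!/bin/bash
# Sketch: create a single primary partition on each iSCSI volume.
# The two blank lines in the here-document accept the default
# first and last cylinders (i.e. use the entire disk).
for disk in /dev/iscsi/crs1/part /dev/iscsi/data1/part /dev/iscsi/fra1/part
do
    fdisk $disk <<EOF
n
p
1


w
EOF
done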

VerifyNewPartitions

Aftercreatingallrequiredpartitionsfromracnode1,youshouldnowinformthekernelofthepartitionchangesusingthefollowingcommandastherootuseraccount
fromallremainingnodesintheOracleRACcluster(racnode2).NotethatthemappingofiSCSItargetnamesdiscoveredfromOpenfilerandthelocalSCSIdevice
namewillbedifferentonbothOracleRACnodes.ThisisnotaconcernandwillnotcauseanyproblemssincewewillnotbeusingthelocalSCSIdevicenamesbut
ratherthelocaldevicenamescreatedbyudevintheprevioussection.

Fromracnode2,runthefollowingcommands:

[root@racnode2 ~]# partprobe


[root@racnode2 ~]# fdisk -l

Disk /dev/sda: 160.0 GB, 160000000000 bytes


255 heads, 63 sectors/track, 19452 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Device Boot Start End Blocks Id System


/dev/sda1 * 1 13 104391 83 Linux
/dev/sda2 14 19452 156143767+ 8e Linux LVM

Disk /dev/sdb: 35.5 GB, 35534143488 bytes


64 heads, 32 sectors/track, 33888 cylinders
Units = cylinders of 2048 * 512 = 1048576 bytes

Device Boot Start End Blocks Id System


/dev/sdb1 1 33888 34701296 83 Linux

Disk /dev/sdc: 35.5 GB, 35534143488 bytes


64 heads, 32 sectors/track, 33888 cylinders
Units = cylinders of 2048 * 512 = 1048576 bytes

Device Boot Start End Blocks Id System


/dev/sdc1 1 33888 34701296 83 Linux

Disk /dev/sdd: 2315 MB, 2315255808 bytes


72 heads, 62 sectors/track, 1012 cylinders
Units = cylinders of 4464 * 512 = 2285568 bytes

Device Boot Start End Blocks Id System


/dev/sdd1 1 1012 2258753 83 Linux

AsafinalstepyoushouldrunthefollowingcommandonbothOracleRACnodestoverifythatudevcreatedthenewsymboliclinksforeachnewpartition.

[root@racnode1 ~]# (cd /dev/disk/by-path; ls -l *openfiler* | awk '{FS=" "; print $9 " " $10 " " $11}')
ip-192.168.2.195:3260-iscsi-iqn.2006-01.com.openfiler:racdb.crs1-lun-0 -> ../../sdc
ip-192.168.2.195:3260-iscsi-iqn.2006-01.com.openfiler:racdb.crs1-lun-0-part1 -> ../../sdc1
ip-192.168.2.195:3260-iscsi-iqn.2006-01.com.openfiler:racdb.data1-lun-0 -> ../../sdd
ip-192.168.2.195:3260-iscsi-iqn.2006-01.com.openfiler:racdb.data1-lun-0-part1 -> ../../sdd1
ip-192.168.2.195:3260-iscsi-iqn.2006-01.com.openfiler:racdb.fra1-lun-0 -> ../../sde
ip-192.168.2.195:3260-iscsi-iqn.2006-01.com.openfiler:racdb.fra1-lun-0-part1 -> ../../sde1

[root@racnode2 ~]# (cd /dev/disk/by-path; ls -l *openfiler* | awk '{FS=" "; print $9 " " $10 " " $11}')
ip-192.168.2.195:3260-iscsi-iqn.2006-01.com.openfiler:racdb.crs1-lun-0 -> ../../sdd
ip-192.168.2.195:3260-iscsi-iqn.2006-01.com.openfiler:racdb.crs1-lun-0-part1 -> ../../sdd1
ip-192.168.2.195:3260-iscsi-iqn.2006-01.com.openfiler:racdb.data1-lun-0 -> ../../sdc
ip-192.168.2.195:3260-iscsi-iqn.2006-01.com.openfiler:racdb.data1-lun-0-part1 -> ../../sdc1
ip-192.168.2.195:3260-iscsi-iqn.2006-01.com.openfiler:racdb.fra1-lun-0 -> ../../sde
ip-192.168.2.195:3260-iscsi-iqn.2006-01.com.openfiler:racdb.fra1-lun-0-part1 -> ../../sde1
The listing above shows that udev did indeed create new device names for each of the new partitions. We will be using these new device names when configuring the volumes for ASMLib later in this guide.

/dev/iscsi/crs1/part1
/dev/iscsi/data1/part1
/dev/iscsi/fra1/part1
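If you prefer to double-check the udev symlinks themselves (rather than the /dev/disk/by-path entries), a listing similar to the following can be run on both Oracle RAC nodes. Keep in mind that the local SCSI device names on the right (sdc1, sdd1, sde1 in this example) will likely differ between nodes and across reboots; only the /dev/iscsi names are meant to stay consistent.

[root@racnode1 ~]# ls -l /dev/iscsi/*/part1
lrwxrwxrwx 1 root root 10 Nov 6 17:52 /dev/iscsi/crs1/part1 -> ../../sdc1
lrwxrwxrwx 1 root root 10 Nov 6 17:52 /dev/iscsi/data1/part1 -> ../../sdd1
lrwxrwxrwx 1 root root 10 Nov 6 17:52 /dev/iscsi/fra1/part1 -> ../../sde1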
CreateJobRoleSeparationOperatingSystemPrivilegesGroups,Users,andDirectories
Performthefollowinguser,group,directoryconfiguration,andsettingshelllimittasksforthegridandoracleusersonbothOracleRACnodesinthecluster.

ThissectionprovidestheinstructionsonhowtocreatetheoperatingsystemusersandgroupstoinstallallOraclesoftwareusingaJobRoleSeparation
configuration.ThecommandsinthissectionshouldbeperformedonbothOracleRACnodesasroottocreatethesegroups,users,anddirectories.Notethatthe
groupanduserIDsmustbeidenticalonbothOracleRACnodesinthecluster.ChecktomakesurethatthegroupanduserIDsyouwanttouseareavailableon
eachclustermembernode,andconfirmthattheprimarygroupforeachGridInfrastructureforaClusterinstallationownerhasthesamenameandgroupIDwhich
forthepurposeofthisguideisoinstall(GID1000).
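Before creating anything, it is worth verifying that none of the group or user IDs used in this guide are already taken on either node. The quick (and admittedly rough) check below assumes the GIDs (1000, 1200, 1201, 1202, 1300, 1301) and UIDs (1100, 1101) used throughout this guide; if the IDs are free, both commands should return no output on both racnode1 and racnode2.

[root@racnode1 ~]# egrep ':(1000|1200|1201|1202|1300|1301):' /etc/group
[root@racnode1 ~]# egrep ':(110[01]):' /etc/passwd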

AJobRoleSeparationprivilegesconfigurationofOracleisaconfigurationwithoperatingsystemgroupsandusersthatdivideadministrativeaccessprivilegestothe
OracleGridInfrastructureinstallationfromotheradministrativeprivilegesusersandgroupsassociatedwithotherOracleinstallations(e.g.theOracledatabase
software).Administrativeprivilegesaccessisgrantedbymembershipinseparateoperatingsystemgroups,andinstallationprivilegesaregrantedbyusingdifferent
installationownersforeachOracleinstallation.

OneOSuserwillbecreatedtoowneachOraclesoftwareproduct"grid"fortheOracleGridInfrastructureownerand"oracle"fortheOracleRACsoftware.
Throughoutthisarticle,ausercreatedtoowntheOracleGridInfrastructurebinariesiscalledthegriduser.ThisuserwillownboththeOracleClusterwareand
OracleAutomaticStorageManagementbinaries.TheusercreatedtoowntheOracledatabasebinaries(OracleRAC)willbecalledtheoracleuser.BothOracle
softwareownersmusthavetheOracleInventorygroup(oinstall)astheirprimarygroup,sothateachOraclesoftwareinstallationownercanwritetothecentral
inventory(oraInventory),andsothatOCRandOracleClusterwareresourcepermissionsaresetcorrectly.TheOracleRACsoftwareownermustalsohavethe
OSDBAgroupandtheoptionalOSOPERgroupassecondarygroups.

ThistypeofconfigurationisoptionalbuthighlyrecommendedbyOraclefororganizationsthatneedtorestrictuseraccesstoOraclesoftwarebyresponsibilityareas
fordifferentadministratorusers.Forexample,asmallorganizationcouldsimplyallocateoperatingsystemuserprivilegessothatyoucanuseoneadministrative
userandonegroupforoperatingsystemauthenticationforallsystemprivilegesonthestorageanddatabasetiers.Withthistypeofconfiguration,youcandesignate
theoracleusertobethesoleinstallationownerforallOraclesoftware(GridinfrastructureandtheOracledatabasesoftware),anddesignateoinstalltobethe
singlegroupwhosemembersaregrantedallsystemprivilegesforOracleClusterware,AutomaticStorageManagement,andallOracleDatabasesontheservers,
andallprivilegesasinstallationowners.Otherorganizations,however,havespecializedsystemroleswhowillberesponsibleforinstallingtheOraclesoftwaresuch
assystemadministrators,networkadministrators,orstorageadministrators.ThesedifferentadministrativeuserscanconfigureasysteminpreparationforanOracle
GridInfrastructureforaclusterinstallation,andcompleteallconfigurationtasksthatrequireoperatingsystemrootprivileges.WhenGridInfrastructureinstallation
andconfigurationiscompletedsuccessfully,asystemadministratorshouldonlyneedtoprovideconfigurationinformationandtograntaccesstothedatabase
administratortorunscriptsasrootduringanOracleRACinstallation.

ThefollowingO/Sgroupswillbecreatedtosupportjobroleseparation.

Description OSGroupName OSUsersAssignedtothisGroup OraclePrivilege OracleGroupName

OracleInventoryandSoftwareOwner oinstall grid, oracle

OracleAutomaticStorageManagementGroup asmadmin grid SYSASM OSASM

ASMDatabaseAdministratorGroup asmdba grid, oracle SYSDBAforASM OSDBAforASM

ASMOperatorGroup asmoper grid SYSOPERforASM OSOPERforASM

DatabaseAdministrator dba oracle SYSDBA OSDBA

DatabaseOperator oper oracle SYSOPER OSOPER

O/SGroupDescriptions

OracleInventoryGroup(typicallyoinstall)

MembersoftheOINSTALLgroupareconsideredthe"owners"oftheOraclesoftwareandaregrantedprivilegestowritetotheOraclecentralinventory
(oraInventory).WhenyouinstallOraclesoftwareonaLinuxsystemforthefirsttime,OUIcreatesthe/etc/oraInst.locfile.Thisfileidentifiesthenameof
theOracleInventorygroup(bydefault,oinstall),andthepathoftheOracleCentralInventorydirectory.

Bydefault,ifanoraInventorygroupdoesnotexist,thentheinstallerliststheprimarygroupoftheinstallationownerfortheGridInfrastructureforaClusteras
theoraInventorygroup.EnsurethatthisgroupisavailableasaprimarygroupforallplannedOraclesoftwareinstallationowners.Forthepurposeofthis
guide,thegridandoracleinstallationownersmustbeconfiguredwithoinstallastheirprimarygroup.

TheOracleAutomaticStorageManagementGroup(typicallyasmadmin)

Thisisarequiredgroup.CreatethisgroupasaseparategroupifyouwanttohaveseparateadministrationprivilegegroupsforOracleASMandOracle
Databaseadministrators.InOracledocumentation,theoperatingsystemgroupwhosemembersaregrantedprivilegesiscalledtheOSASMgroup,andincode
examples,wherethereisagroupspecificallycreatedtograntthisprivilege,itisreferredtoasasmadmin.

MembersoftheOSASMgroupcanuseSQLtoconnecttoanOracleASMinstanceasSYSASMusingoperatingsystemauthentication.TheSYSASMprivilegethat
wasintroducedinOracleASM11grelease1(11.1)isnowfullyseparatedfromtheSYSDBAprivilegeinOracleASM11gRelease2(11.2).SYSASMprivilegesno
longerprovideaccessprivilegesonanRDBMSinstance.ProvidingsystemprivilegesforthestoragetierusingtheSYSASMprivilegeinsteadoftheSYSDBA
privilegeprovidesaclearerdivisionofresponsibilitybetweenASMadministrationanddatabaseadministration,andhelpstopreventdifferentdatabasesusing
thesamestoragefromaccidentallyoverwritingeachother'sfiles.TheSYSASMprivilegespermitmountinganddismountingdiskgroups,andotherstorage
administrationtasks.

TheASMDatabaseAdministratorgroup(OSDBAforASM,typicallyasmdba)

Members of the ASM Database Administrator group (OSDBA for ASM) are granted read and write access to files managed by Oracle ASM; this privilege is a subset of the SYSASM privileges. The Grid Infrastructure installation owner (grid) and all Oracle Database software owners (oracle) must be a member of this group, and all users with OSDBA membership on databases that have access to the files managed by Oracle ASM must be members of the OSDBA group for ASM.

MembersoftheASMOperatorGroup(OSOPERforASM,typicallyasmoper)

Thisisanoptionalgroup.CreatethisgroupifyouwantaseparategroupofoperatingsystemuserstohavealimitedsetofOracleASMinstance
administrativeprivileges(theSYSOPERforASMprivilege),includingstartingupandstoppingtheOracleASMinstance.Bydefault,membersoftheOSASM
groupalsohaveallprivilegesgrantedbytheSYSOPERforASMprivilege.
TousetheASMOperatorgrouptocreateanASMadministratorgroupwithfewerprivilegesthanthedefaultasmadmingroup,youmustchoosethe
AdvancedinstallationtypetoinstalltheGridinfrastructuresoftware.Inthiscase,OUIpromptsyoutospecifythenameofthisgroup.Inthisguide,thisgroup
isasmoper.

IfyouwanttohaveanOSOPERforASMgroup,thenthegridinfrastructureforaclustersoftwareowner(grid)mustbeamemberofthisgroup.

DatabaseAdministrator(OSDBA,typicallydba)

MembersoftheOSDBAgroupcanuseSQLtoconnecttoanOracleinstanceasSYSDBAusingoperatingsystemauthentication.Membersofthisgroupcan
performcriticaldatabaseadministrationtasks,suchascreatingthedatabaseandinstancestartupandshutdown.Thedefaultnameforthisgroupisdba.The
SYSDBAsystemprivilegeallowsaccesstoadatabaseinstanceevenwhenthedatabaseisnotopen.Controlofthisprivilegeistotallyoutsideofthedatabase
itself.

TheSYSDBAsystemprivilegeshouldnotbeconfusedwiththedatabaseroleDBA.TheDBAroledoesnotincludetheSYSDBAorSYSOPERsystemprivileges.

DatabaseOperator(OSOPER,typicallyoper)

MembersoftheOSOPERgroupcanuseSQLtoconnecttoanOracleinstanceasSYSOPERusingoperatingsystemauthentication.Membersofthisoptional
grouphavealimitedsetofdatabaseadministrativeprivilegessuchasmanagingandrunningbackups.Thedefaultnameforthisgroupisoper.TheSYSOPER
systemprivilegeallowsaccesstoadatabaseinstanceevenwhenthedatabaseisnotopen.Controlofthisprivilegeistotallyoutsideofthedatabaseitself.
Tousethisgroup,choosetheAdvancedinstallationtypetoinstalltheOracledatabasesoftware.

CreateGroupsandUserforGridInfrastructure

Let's start this section by creating the recommended OS groups and user for Grid Infrastructure on both Oracle RAC nodes.

[root@racnode1 ~]# groupadd -g 1000 oinstall
[root@racnode1 ~]# groupadd -g 1200 asmadmin
[root@racnode1 ~]# groupadd -g 1201 asmdba
[root@racnode1 ~]# groupadd -g 1202 asmoper
[root@racnode1 ~]# useradd -m -u 1100 -g oinstall -G asmadmin,asmdba,asmoper -d /home/grid -s /bin/bash -c "Grid Infrastructure Owner" grid

[root@racnode1 ~]# id grid


uid=1100(grid) gid=1000(oinstall) groups=1000(oinstall),1200(asmadmin),1201(asmdba),1202(asmoper)

-------------------------------------------------

[root@racnode2 ~]# groupadd -g 1000 oinstall
[root@racnode2 ~]# groupadd -g 1200 asmadmin
[root@racnode2 ~]# groupadd -g 1201 asmdba
[root@racnode2 ~]# groupadd -g 1202 asmoper
[root@racnode2 ~]# useradd -m -u 1100 -g oinstall -G asmadmin,asmdba,asmoper -d /home/grid -s /bin/bash -c "Grid Infrastructure Owner" grid

[root@racnode2 ~]# id grid


uid=1100(grid) gid=1000(oinstall) groups=1000(oinstall),1200(asmadmin),1201(asmdba),1202(asmoper)

SetthepasswordforthegridaccountonbothOracleRACnodes.

[root@racnode1 ~]# passwd grid


Changing password for user grid.
New UNIX password: xxxxxxxxxxx
Retype new UNIX password: xxxxxxxxxxx
passwd: all authentication tokens updated successfully.

[root@racnode2 ~]# passwd grid


Changing password for user grid.
New UNIX password: xxxxxxxxxxx
Retype new UNIX password: xxxxxxxxxxx
passwd: all authentication tokens updated successfully.

CreateLoginScriptforthegridUserAccount

LogintobothOracleRACnodesasthegriduseraccountandcreatethefollowingloginscript(.bash_profile).

WhensettingtheOracleenvironmentvariablesintheloginscriptforeachOracleRAC
node,makecertaintoassigneachRACnodewithauniqueOracleSIDforASM.
racnode1:ORACLE_SID=+ASM1
racnode2:ORACLE_SID=+ASM2

[root@racnode1 ~]# su - grid

# ---------------------------------------------------
# .bash_profile
# ---------------------------------------------------
# OS User: grid
# Application: Oracle Grid Infrastructure
# Version: Oracle 11g Release 2
# ---------------------------------------------------

# Get the aliases and functions


if [ -f ~/.bashrc ]; then
. ~/.bashrc
fi

alias ls="ls -FA"

# ---------------------------------------------------
# ORACLE_SID
# ---------------------------------------------------
# Specifies the Oracle system identifier (SID)
# for the Automatic Storage Management (ASM)instance
# running on this node.
# Each RAC node must have a unique ORACLE_SID.
# (i.e. +ASM1, +ASM2,...)
# ---------------------------------------------------
ORACLE_SID=+ASM1; export ORACLE_SID

# ---------------------------------------------------
# JAVA_HOME
# ---------------------------------------------------
# Specifies the directory of the Java SDK and Runtime
# Environment.
# ---------------------------------------------------
JAVA_HOME=/usr/local/java; export JAVA_HOME

# ---------------------------------------------------
# GRID_BASE
# ---------------------------------------------------
# Specifies the base of the Oracle directory structure
# for Optimal Flexible Architecture (OFA) compliant
# installations. The Oracle base directory for the
# grid installation owner is the location where
# diagnostic and administrative logs, and other logs
# associated with Oracle ASM and Oracle Clusterware
# are stored.
# ---------------------------------------------------
GRID_BASE=/u01/app/grid; export GRID_BASE

ORACLE_BASE=$GRID_BASE; export ORACLE_BASE

# ---------------------------------------------------
# GRID_HOME
# ---------------------------------------------------
# Specifies the directory containing the Oracle
# Grid Infrastructure software. For grid
# infrastructure for a cluster installations, the Grid
# home must not be placed under one of the Oracle base
# directories, or under Oracle home directories of
# Oracle Database installation owners, or in the home
# directory of an installation owner. During
# installation, ownership of the path to the Grid
# home is changed to root. This change causes
# permission errors for other installations.
# ---------------------------------------------------
GRID_HOME=/u01/app/11.2.0/grid; export GRID_HOME

ORACLE_HOME=$GRID_HOME; export ORACLE_HOME

# ---------------------------------------------------
# ORACLE_PATH
# ---------------------------------------------------
# Specifies the search path for files used by Oracle
# applications such as SQL*Plus. If the full path to
# the file is not specified, or if the file is not
# in the current directory, the Oracle application
# uses ORACLE_PATH to locate the file.
# This variable is used by SQL*Plus, Forms and Menu.
# ---------------------------------------------------
ORACLE_PATH=/u01/app/oracle/dba_scripts/sql; export ORACLE_PATH

# ---------------------------------------------------
# SQLPATH
# ---------------------------------------------------
# Specifies the directory or list of directories that
# SQL*Plus searches for a login.sql file.
# ---------------------------------------------------
# SQLPATH=/u01/app/oracle/dba_scripts/sql; export SQLPATH

# ---------------------------------------------------
# ORACLE_TERM
# ---------------------------------------------------
# Defines a terminal definition. If not set, it
# defaults to the value of your TERM environment
# variable. Used by all character mode products.
# ---------------------------------------------------
ORACLE_TERM=xterm; export ORACLE_TERM

# ---------------------------------------------------
# NLS_DATE_FORMAT
# ---------------------------------------------------
# Specifies the default date format to use with the
# TO_CHAR and TO_DATE functions. The default value of
# this parameter is determined by NLS_TERRITORY. The
# value of this parameter can be any valid date
# format mask, and the value must be surrounded by
# double quotation marks. For example:
#
# NLS_DATE_FORMAT = "MM/DD/YYYY"
#
# ---------------------------------------------------
NLS_DATE_FORMAT="DD-MON-YYYY HH24:MI:SS"; export NLS_DATE_FORMAT

# ---------------------------------------------------
# TNS_ADMIN
# ---------------------------------------------------
# Specifies the directory containing the Oracle Net
# Services configuration files like listener.ora,
# tnsnames.ora, and sqlnet.ora.
# ---------------------------------------------------
TNS_ADMIN=$GRID_HOME/network/admin; export TNS_ADMIN

# ---------------------------------------------------
# ORA_NLS11
# ---------------------------------------------------
# Specifies the directory where the language,
# territory, character set, and linguistic definition
# files are stored.
# ---------------------------------------------------
ORA_NLS11=$GRID_HOME/nls/data; export ORA_NLS11

# ---------------------------------------------------
# PATH
# ---------------------------------------------------
# Used by the shell to locate executable programs;
# must include the $GRID_HOME/bin directory.
# ---------------------------------------------------
PATH=.:${JAVA_HOME}/bin:$JAVA_HOME/db/bin:${PATH}:$HOME/bin:$ORACLE_HOME/bin:$ORACLE_HOME/OPatch
PATH=${PATH}:/usr/bin:/bin:/usr/bin/X11:/usr/local/bin
PATH=${PATH}:/u01/app/oracle/dba_scripts/bin
export PATH

# ---------------------------------------------------
# LD_LIBRARY_PATH
# ---------------------------------------------------
# Specifies the list of directories that the shared
# library loader searches to locate shared object
# libraries at runtime.
# ---------------------------------------------------
LD_LIBRARY_PATH=$GRID_HOME/lib
LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:$GRID_HOME/oracm/lib
LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:/lib:/usr/lib:/usr/local/lib
export LD_LIBRARY_PATH

# ---------------------------------------------------
# CLASSPATH
# ---------------------------------------------------
# Specifies the directory or list of directories that
# contain compiled Java classes.
# ---------------------------------------------------
CLASSPATH=$GRID_HOME/JRE
CLASSPATH=${CLASSPATH}:$GRID_HOME/jdbc/lib/ojdbc6.jar
CLASSPATH=${CLASSPATH}:$GRID_HOME/jlib
CLASSPATH=${CLASSPATH}:$GRID_HOME/rdbms/jlib
CLASSPATH=${CLASSPATH}:$GRID_HOME/network/jlib
export CLASSPATH

# ---------------------------------------------------
# THREADS_FLAG
# ---------------------------------------------------
# All the tools in the JDK use green threads as a
# default. To specify that native threads should be
# used, set the THREADS_FLAG environment variable to
# "native". You can revert to the use of green
# threads by setting THREADS_FLAG to the value
# "green".
# ---------------------------------------------------
THREADS_FLAG=native; export THREADS_FLAG

# ---------------------------------------------------
# TEMP, TMP, and TMPDIR
# ---------------------------------------------------
# Specify the default directories for temporary
# files; if set, tools that create temporary files
# create them in one of these directories.
# ---------------------------------------------------
export TEMP=/tmp
export TMP=/tmp
export TMPDIR=/tmp

# ---------------------------------------------------
# UMASK
# ---------------------------------------------------
# Set the default file mode creation mask
# (umask) to 022 to ensure that the user performing
# the Oracle software installation creates files
# with 644 permissions.
# ---------------------------------------------------
umask 022
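After saving the login script, it is a good idea to log out and back in as the grid user (or source the file) and spot-check a few of the key environment variables. The output below is what would be expected on racnode1 given the script above; on racnode2, ORACLE_SID should report +ASM2.

[grid@racnode1 ~]$ . ~/.bash_profile
[grid@racnode1 ~]$ echo $ORACLE_SID $ORACLE_BASE $ORACLE_HOME
+ASM1 /u01/app/grid /u01/app/11.2.0/grid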
CreateGroupsandUserforOracleDatabaseSoftware

Next,createtherecommendedOSgroupsanduserfortheOracledatabasesoftwareonbothOracleRACnodes.

[root@racnode1 ~]# groupadd -g 1300 dba


[root@racnode1 ~]# groupadd -g 1301 oper
[root@racnode1 ~]# useradd -m -u 1101 -g oinstall -G dba,oper,asmdba -d /home/oracle -s /bin/bash -c "Oracle Software Owner" oracle
[root@racnode1 ~]# id oracle
uid=1101(oracle) gid=1000(oinstall) groups=1000(oinstall),1201(asmdba),1300(dba),1301(oper)

-------------------------------------------------

[root@racnode2 ~]# groupadd -g 1300 dba


[root@racnode2 ~]# groupadd -g 1301 oper
[root@racnode2 ~]# useradd -m -u 1101 -g oinstall -G dba,oper,asmdba -d /home/oracle -s /bin/bash -c "Oracle Software Owner" oracle
[root@racnode2 ~]# id oracle
uid=1101(oracle) gid=1000(oinstall) groups=1000(oinstall),1201(asmdba),1300(dba),1301(oper)

Setthepasswordfortheoracleaccount.

[root@racnode1 ~]# passwd oracle


Changing password for user oracle.
New UNIX password: xxxxxxxxxxx
Retype new UNIX password: xxxxxxxxxxx
passwd: all authentication tokens updated successfully.

[root@racnode2 ~]# passwd oracle


Changing password for user oracle.
New UNIX password: xxxxxxxxxxx
Retype new UNIX password: xxxxxxxxxxx
passwd: all authentication tokens updated successfully.

CreateLoginScriptfortheoracleUserAccount

LogintobothOracleRACnodesastheoracleuseraccountandcreatethefollowingloginscript(.bash_profile).

WhensettingtheOracleenvironmentvariablesintheloginscriptforeachOracleRAC
node,makecertaintoassigneachRACnodewithauniqueOracleSID.
racnode1:ORACLE_SID=racdb1
racnode2:ORACLE_SID=racdb2

[root@racnode1 ~]# su - oracle

# ---------------------------------------------------
# .bash_profile
# ---------------------------------------------------
# OS User: oracle
# Application: Oracle Database Software Owner
# Version: Oracle 11g Release 2
# ---------------------------------------------------

# Get the aliases and functions


if [ -f ~/.bashrc ]; then
. ~/.bashrc
fi

alias ls="ls -FA"

# ---------------------------------------------------
# ORACLE_SID
# ---------------------------------------------------
# Specifies the Oracle system identifier (SID) for
# the Oracle instance running on this node.
# Each RAC node must have a unique ORACLE_SID.
# (i.e. racdb1, racdb2,...)
# ---------------------------------------------------
ORACLE_SID=racdb1; export ORACLE_SID

# ---------------------------------------------------
# ORACLE_UNQNAME
# ---------------------------------------------------
# In previous releases of Oracle Database, you were
# required to set environment variables for
# ORACLE_HOME and ORACLE_SID to start, stop, and
# check the status of Enterprise Manager. With
# Oracle Database 11g Release 2 (11.2) and later, you
# need to set the environment variables ORACLE_HOME
# and ORACLE_UNQNAME to use Enterprise Manager.
# Set ORACLE_UNQNAME equal to the database unique
# name.
# ---------------------------------------------------
ORACLE_UNQNAME=racdb; export ORACLE_UNQNAME

# ---------------------------------------------------
# JAVA_HOME
# ---------------------------------------------------
# Specifies the directory of the Java SDK and Runtime
# Environment.
# ---------------------------------------------------
JAVA_HOME=/usr/local/java; export JAVA_HOME

# ---------------------------------------------------
# ORACLE_BASE
# ---------------------------------------------------
# Specifies the base of the Oracle directory structure
# for Optimal Flexible Architecture (OFA) compliant
# database software installations.
# ---------------------------------------------------
ORACLE_BASE=/u01/app/oracle; export ORACLE_BASE

# ---------------------------------------------------
# ORACLE_HOME
# ---------------------------------------------------
# Specifies the directory containing the Oracle
# Database software.
# ---------------------------------------------------
ORACLE_HOME=$ORACLE_BASE/product/11.2.0/dbhome_1; export ORACLE_HOME

# ---------------------------------------------------
# ORACLE_PATH
# ---------------------------------------------------
# Specifies the search path for files used by Oracle
# applications such as SQL*Plus. If the full path to
# the file is not specified, or if the file is not
# in the current directory, the Oracle application
# uses ORACLE_PATH to locate the file.
# This variable is used by SQL*Plus, Forms and Menu.
# ---------------------------------------------------
ORACLE_PATH=/u01/app/oracle/dba_scripts/sql:$ORACLE_HOME/rdbms/admin; export ORACLE_PATH

# ---------------------------------------------------
# SQLPATH
# ---------------------------------------------------
# Specifies the directory or list of directories that
# SQL*Plus searches for a login.sql file.
# ---------------------------------------------------
# SQLPATH=/u01/app/oracle/dba_scripts/sql; export SQLPATH

# ---------------------------------------------------
# ORACLE_TERM
# ---------------------------------------------------
# Defines a terminal definition. If not set, it
# defaults to the value of your TERM environment
# variable. Used by all character mode products.
# ---------------------------------------------------
ORACLE_TERM=xterm; export ORACLE_TERM

# ---------------------------------------------------
# NLS_DATE_FORMAT
# ---------------------------------------------------
# Specifies the default date format to use with the
# TO_CHAR and TO_DATE functions. The default value of
# this parameter is determined by NLS_TERRITORY. The
# value of this parameter can be any valid date
# format mask, and the value must be surrounded by
# double quotation marks. For example:
#
# NLS_DATE_FORMAT = "MM/DD/YYYY"
#
# ---------------------------------------------------
NLS_DATE_FORMAT="DD-MON-YYYY HH24:MI:SS"; export NLS_DATE_FORMAT

# ---------------------------------------------------
# TNS_ADMIN
# ---------------------------------------------------
# Specifies the directory containing the Oracle Net
# Services configuration files like listener.ora,
# tnsnames.ora, and sqlnet.ora.
# ---------------------------------------------------
TNS_ADMIN=$ORACLE_HOME/network/admin; export TNS_ADMIN

# ---------------------------------------------------
# ORA_NLS11
# ---------------------------------------------------
# Specifies the directory where the language,
# territory, character set, and linguistic definition
# files are stored.
# ---------------------------------------------------
ORA_NLS11=$ORACLE_HOME/nls/data; export ORA_NLS11

# ---------------------------------------------------
# PATH
# ---------------------------------------------------
# Used by the shell to locate executable programs;
# must include the $ORACLE_HOME/bin directory.
# ---------------------------------------------------
PATH=.:${JAVA_HOME}/bin:$JAVA_HOME/db/bin:${PATH}:$HOME/bin:$ORACLE_HOME/bin:$ORACLE_HOME/OPatch
PATH=${PATH}:/usr/bin:/bin:/usr/bin/X11:/usr/local/bin
PATH=${PATH}:/u01/app/oracle/dba_scripts/bin
export PATH

# ---------------------------------------------------
# LD_LIBRARY_PATH
# ---------------------------------------------------
# Specifies the list of directories that the shared
# library loader searches to locate shared object
# libraries at runtime.
# ---------------------------------------------------
LD_LIBRARY_PATH=$ORACLE_HOME/lib
LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:$ORACLE_HOME/oracm/lib
LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:/lib:/usr/lib:/usr/local/lib
export LD_LIBRARY_PATH

# ---------------------------------------------------
# CLASSPATH
# ---------------------------------------------------
# Specifies the directory or list of directories that
# contain compiled Java classes.
# ---------------------------------------------------
CLASSPATH=$ORACLE_HOME/jdbc/lib/ojdbc6.jar
CLASSPATH=${CLASSPATH}:$ORACLE_HOME/jlib
CLASSPATH=${CLASSPATH}:$ORACLE_HOME/rdbms/jlib
CLASSPATH=${CLASSPATH}:$ORACLE_HOME/network/jlib
export CLASSPATH

# ---------------------------------------------------
# THREADS_FLAG
# ---------------------------------------------------
# All the tools in the JDK use green threads as a
# default. To specify that native threads should be
# used, set the THREADS_FLAG environment variable to
# "native". You can revert to the use of green
# threads by setting THREADS_FLAG to the value
# "green".
# ---------------------------------------------------
THREADS_FLAG=native; export THREADS_FLAG

# ---------------------------------------------------
# TEMP, TMP, and TMPDIR
# ---------------------------------------------------
# Specify the default directories for temporary
# files; if set, tools that create temporary files
# create them in one of these directories.
# ---------------------------------------------------
export TEMP=/tmp
export TMP=/tmp
export TMPDIR=/tmp

# ---------------------------------------------------
# UMASK
# ---------------------------------------------------
# Set the default file mode creation mask
# (umask) to 022 to ensure that the user performing
# the Oracle software installation creates files
# with 644 permissions.
# ---------------------------------------------------
umask 022

VerifyThattheUsernobodyExists

Beforeinstallingthesoftware,completethefollowingproceduretoverifythattheusernobodyexistsonbothOracleRACnodes.

1.Todetermineiftheuserexists,enterthefollowingcommand:

[root@racnode1 ~]# id nobody


uid=99(nobody) gid=99(nobody) groups=99(nobody)

[root@racnode2 ~]# id nobody


uid=99(nobody) gid=99(nobody) groups=99(nobody)

Ifthiscommanddisplaysinformationaboutthenobodyuser,thenyoudonothavetocreatethatuser.

2.Iftheusernobodydoesnotexist,thenenterthefollowingcommandtocreateit:

[root@racnode1 ~]# /usr/sbin/useradd nobody


[root@racnode2 ~]# /usr/sbin/useradd nobody

CreatetheOracleBaseDirectoryPath

ThefinalstepistoconfigureanOraclebasepathcompliantwithanOptimalFlexibleArchitecture(OFA)structureandcorrectpermissions.Thiswillneedtobe
performedonbothOracleRACnodesintheclusterasroot.
Thisguideassumesthatthe/u01directoryisbeingcreatedintherootfilesystem.Pleasenotethatthisisbeingdoneforthesakeofbrevityandisnot
recommendedasageneralpractice.Normally,the/u01directorywouldbeprovisionedasaseparatefilesystemwitheitherhardwareorsoftwaremirroring
configured.

[root@racnode1 ~]# mkdir -p /u01/app/grid


[root@racnode1 ~]# mkdir -p /u01/app/11.2.0/grid
[root@racnode1 ~]# chown -R grid:oinstall /u01
[root@racnode1 ~]# mkdir -p /u01/app/oracle
[root@racnode1 ~]# chown oracle:oinstall /u01/app/oracle
[root@racnode1 ~]# chmod -R 775 /u01
-------------------------------------------------------------

[root@racnode2 ~]# mkdir -p /u01/app/grid


[root@racnode2 ~]# mkdir -p /u01/app/11.2.0/grid
[root@racnode2 ~]# chown -R grid:oinstall /u01
[root@racnode2 ~]# mkdir -p /u01/app/oracle
[root@racnode2 ~]# chown oracle:oinstall /u01/app/oracle
[root@racnode2 ~]# chmod -R 775 /u01
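A quick listing can then be used on both Oracle RAC nodes to confirm the ownership and permissions created by the commands above. The timestamps below are examples only; the ownership and 775 (drwxrwxr-x) permissions are what matter.

[root@racnode1 ~]# ls -ld /u01 /u01/app /u01/app/grid /u01/app/11.2.0/grid /u01/app/oracle
drwxrwxr-x 3 grid   oinstall 4096 Nov  6 17:58 /u01
drwxrwxr-x 5 grid   oinstall 4096 Nov  6 17:58 /u01/app
drwxrwxr-x 2 grid   oinstall 4096 Nov  6 17:58 /u01/app/grid
drwxrwxr-x 2 grid   oinstall 4096 Nov  6 17:58 /u01/app/11.2.0/grid
drwxrwxr-x 2 oracle oinstall 4096 Nov  6 17:58 /u01/app/oracle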
Attheendofthissection,youshouldhavethefollowingonbothOracleRACnodes:

AnOraclecentralinventorygroup,ororaInventorygroup(oinstall),whosemembersthathavethecentralinventorygroupastheirprimarygrouparegranted
permissionstowritetotheoraInventorydirectory.

AseparateOSASMgroup(asmadmin),whosemembersaregrantedtheSYSASMprivilegetoadministerOracleClusterwareandOracleASM.

AseparateOSDBAforASMgroup(asmdba),whosemembersincludegridandoracle,andwhoaregrantedaccesstoOracleASM.

AseparateOSOPERforASMgroup(asmoper),whosemembersincludegrid,andwhoaregrantedlimitedOracleASMadministratorprivileges,includingthe
permissionstostartandstoptheOracleASMinstance.

AnOraclegridinstallationforaclusterowner(grid),withtheoraInventorygroupasitsprimarygroup,andwiththeOSASM(asmadmin),OSDBAforASM
(asmdba)andOSOPERforASM(asmoper)groupsassecondarygroups.

AseparateOSDBAgroup(dba),whosemembersaregrantedtheSYSDBAprivilegetoadministertheOracleDatabase.

AseparateOSOPERgroup(oper),whosemembersincludeoracle,andwhoaregrantedlimitedOracledatabaseadministratorprivileges.

AnOracleDatabasesoftwareowner(oracle),withtheoraInventorygroupasitsprimarygroup,andwiththeOSDBA(dba),OSOPER(oper),andtheOSDBA
forASMgroup(asmdba)astheirsecondarygroups.

AnOFAcompliantmountpoint/u01ownedbygrid:oinstallbeforeinstallation.

AnOraclebaseforthegrid/u01/app/gridownedbygrid:oinstallwith775permissions,andchangedduringtheinstallationprocessto755permissions.
ThegridinstallationownerOraclebasedirectoryisthelocationwhereOracleASMdiagnosticandadministrativelogfilesareplaced.

AGridhome/u01/app/11.2.0/gridownedbygrid:oinstallwith775(drwxrwxr-x)permissions.Thesepermissionsarerequiredforinstallation,andare
changedduringtheinstallationprocesstoroot:oinstallwith755permissions(drwxr-xr-x).

Duringinstallation,OUIcreatestheOracleInventorydirectoryinthepath/u01/app/oraInventory.Thispathremainsownedbygrid:oinstall,toenable
otherOraclesoftwareownerstowritetothecentralinventory.

AnOraclebase/u01/app/oracleownedbyoracle:oinstallwith775permissions.

SetResourceLimitsfortheOracleSoftwareInstallationUsers

ToimprovetheperformanceofthesoftwareonLinuxsystems,youmustincreasethefollowingresourcelimitsfortheOraclesoftwareownerusers(grid,oracle).

ShellLimit Iteminlimits.conf HardLimit

Maximumnumberofopenfiledescriptors nofile 65536

Maximumnumberofprocessesavailabletoasingleuser nproc 16384

Maximumsizeofthestacksegmentoftheprocess stack 10240

Tomakethesechanges,runthefollowingasroot:

1.OneachOracleRACnode,addthefollowinglinestothe/etc/security/limits.conffile(thefollowingexampleshowsthesoftwareaccountownersoracle
andgrid).

[root@racnode1 ~]# cat >> /etc/security/limits.conf <<EOF


grid soft nproc 2047
grid hard nproc 16384
grid soft nofile 1024
grid hard nofile 65536
oracle soft nproc 2047
oracle hard nproc 16384
oracle soft nofile 1024
oracle hard nofile 65536
EOF
[root@racnode2 ~]# cat >> /etc/security/limits.conf <<EOF
grid soft nproc 2047
grid hard nproc 16384
grid soft nofile 1024
grid hard nofile 65536
oracle soft nproc 2047
oracle hard nproc 16384
oracle soft nofile 1024
oracle hard nofile 65536
EOF
2.OneachOracleRACnode,addoreditthefollowinglineinthe/etc/pam.d/loginfile,ifitdoesnotalreadyexist.

[root@racnode1 ~]# cat >> /etc/pam.d/login <<EOF


session required pam_limits.so
EOF
[root@racnode2 ~]# cat >> /etc/pam.d/login <<EOF
session required pam_limits.so
EOF
3.Dependingonyourshellenvironment,makethefollowingchangestothedefaultshellstartupfileinordertochangeulimitsettingsforallOracleinstallation
owners(notethattheseexamplesshowtheusersoracleandgrid).

For the Bourne, Bash, or Korn shell, add the following lines to the /etc/profile file by running the following (the here-document delimiter is quoted so that $USER and $SHELL are written literally rather than being expanded by the root shell):

[root@racnode1 ~]# cat >> /etc/profile <<'EOF'


if [ $USER = "oracle" ] || [ $USER = "grid" ]; then
if [ $SHELL = "/bin/ksh" ]; then
ulimit -p 16384
ulimit -n 65536
else
ulimit -u 16384 -n 65536
fi
umask 022
fi
EOF
[root@racnode2 ~]# cat >> /etc/profile <<'EOF'
if [ $USER = "oracle" ] || [ $USER = "grid" ]; then
if [ $SHELL = "/bin/ksh" ]; then
ulimit -p 16384
ulimit -n 65536
else
ulimit -u 16384 -n 65536
fi
umask 022
fi
EOF
For the C shell (csh or tcsh), add the following lines to the /etc/csh.login file by running the following (again, the here-document delimiter is quoted so that $USER is written literally):

[root@racnode1 ~]# cat >> /etc/csh.login <<'EOF'


if ( $USER == "oracle" || $USER == "grid" ) then
limit maxproc 16384
limit descriptors 65536
endif
EOF
[root@racnode2 ~]# cat >> /etc/csh.login <<'EOF'
if ( $USER == "oracle" || $USER == "grid" ) then
limit maxproc 16384
limit descriptors 65536
endif
EOF
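Once all three files have been updated on both Oracle RAC nodes, the new limits can be verified by logging in as one of the software owners (a fresh login is required so that the PAM and profile changes take effect) and checking the ulimit values. The sketch below shows the values that should be reported for the grid user given the settings above.

[grid@racnode1 ~]$ ulimit -Sn
65536
[grid@racnode1 ~]$ ulimit -Hn
65536
[grid@racnode1 ~]$ ulimit -Su
16384
[grid@racnode1 ~]$ ulimit -Hu
16384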
LoggingIntoaRemoteSystemUsingXTerminal
Thisguiderequiresaccesstotheconsoleofallmachines(OracleRACnodesandOpenfiler)inordertoinstalltheoperatingsystemandperformseveralofthe
configurationtasks.Whenmanagingaverysmallnumberofservers,itmightmakesensetoconnecteachserverwithitsownmonitor,keyboard,andmousein
ordertoaccessitsconsole.However,asthenumberofserverstomanageincreases,thissolutionbecomesunfeasible.Amorepracticalsolutionwouldbeto
configureadedicateddevicewhichwouldincludeasinglemonitor,keyboard,andmousethatwouldhavedirectaccesstotheconsoleofeachmachine.This
solutionismadepossibleusingaKeyboard,Video,MouseSwitchbetterknownasaKVMSwitch.

AfterinstallingtheLinuxoperatingsystem,thereareseveralapplicationswhichareneededtoinstallandconfigureOracleRACthatuseaGraphicalUserInterface
(GUI)andrequiretheuseofanX11displayserver.ThemostnotableoftheseGUIapplications(orbetterknownasanXapplication)istheOracleUniversalInstaller
(OUI)althoughothersliketheVirtualIPConfigurationAssistant(VIPCA)alsorequiretheuseofanX11displayserver.

GiventhefactthatIcreatedthisarticleonasystemthatmakesuseofaKVMSwitch,IamabletotoggletoeachnodeandrelyonthenativeX11displayserverfor
LinuxinordertodisplayXapplications.

IfyouarenotloggeddirectlyontothegraphicalconsoleofanodebutratheryouareusingaremoteclientlikeSSH,PuTTY,orTelnettoconnecttothenode,anyX
applicationwillrequireanX11displayserverinstalledontheclient.Forexample,ifyouaremakingaterminalremoteconnectiontoracnode1fromaWindows
workstation,youwouldneedtoinstallanX11displayserveronthatWindowsclient(Xmingforexample).IfyouintendtoinstalltheOracleGridInfrastructureand
OracleRACsoftwarefromaWindowsworkstationorothersystemwithanX11displayserverinstalled,thenperformthefollowingactions.

1.StarttheX11displayserversoftwareontheclientworkstation.

2.ConfigurethesecuritysettingsoftheXserversoftwaretopermitremotehoststodisplayXapplicationsonthelocalsystem.

3.Fromtheclientworkstation,SSHorTelnettotheserverwhereyouwanttoinstallthesoftwareastheOracleGridInfrastructureforaclustersoftwareowner
(grid)ortheOracleRACsoftwareowner(oracle).
4.Asthesoftwareowner(grid,oracle),settheDISPLAYenvironmentvariable.

[root@racnode1 ~]# su - grid


[grid@racnode1 ~]$ DISPLAY=<your local workstation>:0.0
[grid@racnode1 ~]$ export DISPLAY

[grid@racnode1 ~]$ # TEST X CONFIGURATION BY RUNNING xterm


[grid@racnode1 ~]$ xterm &

Figure17:TestX11DisplayServeronWindowsRunxtermfromNode1(racnode1)

ConfiguretheLinuxServersforOracle
PerformthefollowingconfigurationproceduresonbothOracleRACnodesinthecluster.

ThissectionprovidesinformationaboutsettingallOSkernelparametersrequiredforOracle.Thekernelparametersdiscussedinthissectionwillneedtobeseton
bothOracleRACnodesintheclustereverytimethemachineisbooted.InstructionsforsettingallOSkernelparametersrequiredbyOracleinastartupscript
(/etc/sysctl.conf)willbediscussedlaterinthissection.

Overview

ThissectionfocusesonconfiguringbothOracleRACLinuxserversgettingeachonepreparedfortheOracleGridInfrastructure11gRelease2andOracleRAC
11gRelease2installationsontheRedHatEnterpriseLinux5orCentOS5platform.Thisincludesverifyingenoughmemoryandswapspace,settingshared
memoryandsemaphores,settingthemaximumnumberoffilehandles,settingtheIPlocalportrange,andfinally,howtoactivateallkernelparametersforthe
system.

Thereareseveraldifferentwaystosettheseparameters.Forthepurposeofthisarticle,Iwillbemakingallchangespermanentthroughrebootsbyplacingall
valuesinthe/etc/sysctl.conffile.

MemoryandSwapSpaceConsiderations

TheminimumrequiredRAMonRHEL/CentOSis1.5GBforGridInfrastructureforaCluster,or2.5GBforGridInfrastructureforaClusterandOracleRAC.Inthis
guide,eachOracleRACnodewillbehostingOracleGridInfrastructureandOracleRACandwillthereforerequireatleast2.5GBineachserver.EachoftheOracle
RACnodesusedinthisexampleareequippedwith4GBofphysicalRAM.

The minimum required swap space is 1.5 GB. Oracle recommends that you set swap space to 1.5 times the amount of RAM for systems with 2 GB of RAM or less. For systems with 2 GB to 16 GB RAM, use swap space equal to RAM. For systems with more than 16 GB RAM, use 16 GB of swap space.

Tochecktheamountofmemoryyouhave,type:

[root@racnode1 ~]# cat /proc/meminfo | grep MemTotal


MemTotal: 4038512 kB

[root@racnode2 ~]# cat /proc/meminfo | grep MemTotal


MemTotal: 4038512 kB

Tochecktheamountofswapyouhaveallocated,type:

[root@racnode1 ~]# cat /proc/meminfo | grep SwapTotal


SwapTotal: 6094840 kB

[root@racnode2 ~]# cat /proc/meminfo | grep SwapTotal


SwapTotal: 6094840 kB

Ifyouhavelessthan4GBofmemory(betweenyourRAMandSWAP),youcanaddtemporaryswapspacebycreatingatemporaryswapfile.Thiswayyou
donothavetousearawdeviceorevenmoredrastic,rebuildyoursystem.

Asroot,makeafilethatwillactasadditionalswapspace,let'ssayabout500MB.
[root@racnode1 ~]# dd if=/dev/zero of=tempswap bs=1k count=500000
Next,changethefilepermissions.

[root@racnode1 ~]# chmod 600 tempswap


Finally, format the file as swap and add it to the swap space.

[root@racnode1 ~]# mkswap tempswap
[root@racnode1 ~]# swapon tempswap
ConfigureKernelParameters

ThekernelparameterspresentedinthissectionarerecommendedvaluesonlyasdocumentedbyOracle.Forproductiondatabasesystems,Oraclerecommends
thatyoutunethesevaluestooptimizetheperformanceofthesystem.

OnbothOracleRACnodes,verifythatthekernelparametersdescribedinthissectionaresettovaluesgreaterthanorequaltotherecommendedvalues.Alsonote
thatwhensettingthefoursemaphorevaluesthatallfourvaluesneedtobeenteredononeline.

OracleDatabase11gRelease2onRHEL/CentOS5requiresthekernelparametersettingsshownbelow.Thevaluesgivenareminimums,soifyoursystemusesa
largervalue,donotchangeit.

kernel.shmmax = 4294967295
kernel.shmall = 2097152
kernel.shmmni = 4096
kernel.sem = 250 32000 100 128
fs.file-max = 6815744
net.ipv4.ip_local_port_range = 9000 65500
net.core.rmem_default=262144
net.core.rmem_max=4194304
net.core.wmem_default=262144
net.core.wmem_max=1048576
fs.aio-max-nr=1048576

RHEL/CentOS 5 already comes configured with default values defined for the following kernel parameters. The default values for these two kernel parameters are adequate for Oracle Database 11g Release 2 and therefore do not need to be modified.

kernel.shmall
kernel.shmmax

Usethedefaultvaluesiftheyarethesameorlargerthantherequiredvalues.
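To see what the running kernel is currently using for these two parameters, they can be queried directly with sysctl. The values shown below are the defaults observed on the RHEL/CentOS 5 (x86_64) nodes used in this guide, which are already well above the required minimums.

[root@racnode1 ~]# sysctl kernel.shmmax kernel.shmall
kernel.shmmax = 68719476736
kernel.shmall = 4294967296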

ThisarticleassumesafreshnewinstallofRHEL/CentOS5andassuch,manyoftherequiredkernelparametersarealreadyset(seeabove).Thisbeingthecase,
youcansimplycopy/pastethefollowingtobothOracleRACnodeswhileloggedinasroot.

[root@racnode1 ~]# cat >> /etc/sysctl.conf <<EOF


# Controls the maximum number of shared memory segments system wide
kernel.shmmni = 4096
# Sets the following semaphore values:
# SEMMSL_value SEMMNS_value SEMOPM_value SEMMNI_value
kernel.sem = 250 32000 100 128
# Sets the maximum number of file-handles that the Linux kernel will allocate
fs.file-max = 6815744
# Defines the local port range that is used by TCP and UDP
# traffic to choose the local port
net.ipv4.ip_local_port_range = 9000 65500
# Default setting in bytes of the socket "receive" buffer which
# may be set by using the SO_RCVBUF socket option
net.core.rmem_default=262144
# Maximum setting in bytes of the socket "receive" buffer which
# may be set by using the SO_RCVBUF socket option
net.core.rmem_max=4194304
# Default setting in bytes of the socket "send" buffer which
# may be set by using the SO_SNDBUF socket option
net.core.wmem_default=262144
# Maximum setting in bytes of the socket "send" buffer which
# may be set by using the SO_SNDBUF socket option
net.core.wmem_max=1048576
# Maximum number of allowable concurrent asynchronous I/O requests
fs.aio-max-nr=1048576
EOF

[root@racnode2 ~]# cat >> /etc/sysctl.conf <<EOF


# Controls the maximum number of shared memory segments system wide
kernel.shmmni = 4096
# Sets the following semaphore values:
# SEMMSL_value SEMMNS_value SEMOPM_value SEMMNI_value
kernel.sem = 250 32000 100 128
# Sets the maximum number of file-handles that the Linux kernel will allocate
fs.file-max = 6815744
# Defines the local port range that is used by TCP and UDP
# traffic to choose the local port
net.ipv4.ip_local_port_range = 9000 65500
# Default setting in bytes of the socket "receive" buffer which
# may be set by using the SO_RCVBUF socket option
net.core.rmem_default=262144
# Maximum setting in bytes of the socket "receive" buffer which
# may be set by using the SO_RCVBUF socket option
net.core.rmem_max=4194304
# Default setting in bytes of the socket "send" buffer which
# may be set by using the SO_SNDBUF socket option
net.core.wmem_default=262144
# Maximum setting in bytes of the socket "send" buffer which
# may be set by using the SO_SNDBUF socket option
net.core.wmem_max=1048576
# Maximum number of allowable concurrent asynchronous I/O requests
fs.aio-max-nr=1048576
EOF
Activate All Kernel Parameters for the System

The above command persisted the required kernel parameters through reboots by inserting them in the /etc/sysctl.conf startup file. Linux allows modification of these kernel parameters on the current system while it is up and running, so there is no need to reboot the system after making kernel parameter changes. To activate the new kernel parameter values for the currently running system, run the following as root on both Oracle RAC nodes in the cluster.

[root@racnode1 ~]# sysctl -p


net.ipv4.ip_forward = 0
net.ipv4.conf.default.rp_filter = 1
net.ipv4.conf.default.accept_source_route = 0
kernel.sysrq = 0
kernel.core_uses_pid = 1
net.ipv4.tcp_syncookies = 1
kernel.msgmnb = 65536
kernel.msgmax = 65536
kernel.shmmax = 68719476736
kernel.shmall = 4294967296
kernel.shmmni = 4096
kernel.sem = 250 32000 100 128
fs.file-max = 6815744
net.ipv4.ip_local_port_range = 9000 65500
net.core.rmem_default = 262144
net.core.rmem_max = 4194304
net.core.wmem_default = 262144
net.core.wmem_max = 1048576
fs.aio-max-nr = 1048576

[root@racnode2 ~]# sysctl -p


net.ipv4.ip_forward = 0
net.ipv4.conf.default.rp_filter = 1
net.ipv4.conf.default.accept_source_route = 0
kernel.sysrq = 0
kernel.core_uses_pid = 1
net.ipv4.tcp_syncookies = 1
kernel.msgmnb = 65536
kernel.msgmax = 65536
kernel.shmmax = 68719476736
kernel.shmall = 4294967296
kernel.shmmni = 4096
kernel.sem = 250 32000 100 128
fs.file-max = 6815744
net.ipv4.ip_local_port_range = 9000 65500
net.core.rmem_default = 262144
net.core.rmem_max = 4194304
net.core.wmem_default = 262144
net.core.wmem_max = 1048576
fs.aio-max-nr = 1048576

Verify the new kernel parameter values by running the following on both Oracle RAC nodes in the cluster.

[root@racnode1 ~]# /sbin/sysctl -a | grep shm


vm.hugetlb_shm_group = 0
kernel.shmmni = 4096
kernel.shmall = 4294967296
kernel.shmmax = 68719476736

[root@racnode1 ~]# /sbin/sysctl -a | grep sem


kernel.sem = 250 32000 100 128

[root@racnode1 ~]# /sbin/sysctl -a | grep file-max


fs.file-max = 6815744

[root@racnode1 ~]# /sbin/sysctl -a | grep ip_local_port_range


net.ipv4.ip_local_port_range = 9000 65500

[root@racnode1 ~]# /sbin/sysctl -a | grep 'core.[rw]mem'


net.core.rmem_default = 262144
net.core.wmem_default = 262144
net.core.rmem_max = 4194304
net.core.wmem_max = 1048576
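As an optional convenience, the following sketch loops over the parameters required by Oracle and prints the running value of each, making it easy to eyeball both nodes against the minimums listed above:

[root@racnode1 ~]# for p in kernel.shmmni kernel.sem fs.file-max net.ipv4.ip_local_port_range \
      net.core.rmem_default net.core.rmem_max net.core.wmem_default \
      net.core.wmem_max fs.aio-max-nr; do
          printf "%-32s %s\n" "$p" "$(/sbin/sysctl -n $p)"
      done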

ConfigureRACNodesforRemoteAccessusingSSH(Optional)
PerformthefollowingoptionalproceduresonbothOracleRACnodestomanuallyconfigurepasswordlessSSHconnectivitybetweenthetwoclustermembernodes
asthe"grid"and"oracle"user.

Oneofthebestpartsaboutthissectionofthedocumentisthatitiscompletelyoptional.That'snottosayconfiguringSecureShell(SSH)connectivitybetweenthe
OracleRACnodesisnotnecessary.Tothecontrary,theOracleUniversalInstaller(OUI)usesthesecureshelltoolssshandscpcommandsduringinstallationto
runremotecommandsonandcopyfilestotheotherclusternodes.DuringtheOraclesoftwareinstallations,SSHmustbeconfiguredsothatthesecommandsdo
notpromptforapassword.TheabilitytorunSSHcommandswithoutbeingpromptedforapasswordissometimesreferredtoasuserequivalence.

ThereasonthissectionofthedocumentisoptionalisthattheOUIinterfacein11gRelease2includesanewfeaturethatcanautomaticallyconfigureSSHduring
theinstallphaseoftheOraclesoftwarefortheuseraccountrunningtheinstallation.TheautomaticconfigurationperformedbyOUIcreatespasswordlessSSH
connectivitybetweenallclustermembernodes.OraclerecommendsthatyouusetheautomaticprocedureprovidedbytheOUIwheneverpossible.

InadditiontoinstallingtheOraclesoftware,SSHisusedafterinstallationbyconfigurationassistants,OracleEnterpriseManager,OPatch,andotherfeaturesthat
performconfigurationoperationsfromlocaltoremotenodes.

ConfiguringSSHwithapassphraseisnolongersupportedforOracleClusterware11g
Release2andlaterreleases.PasswordlessSSHisrequiredforOracle11gRelease2and
higher.

SincethisguideusesgridastheOracleGridInfrastructuresoftwareownerandoracleastheowneroftheOracleRACsoftware,passwordlessSSHmustbe
configuredforbothuseraccounts.

WhenSSHisnotavailable,theinstallerattemptstousethershandrcpcommands
insteadofsshandscp.Theseservices,however,aredisabledbydefaultonmostLinux
systems.TheuseofRSHwillnotbediscussedinthisguide.

VerifySSHSoftwareisInstalled

ThesupportedversionofSSHforLinuxdistributionsisOpenSSH.OpenSSHshouldbeincludedintheLinuxdistributionminimalinstallation.ToconfirmthatSSH
packagesareinstalled,runthefollowingcommandonbothOracleRACnodes.

[root@racnode1 ~]# rpm -qa --queryformat "%{NAME}-%{VERSION}-%{RELEASE} (%{ARCH})\n" | grep ssh


openssh-askpass-4.3p2-41.el5 (x86_64)
openssh-clients-4.3p2-41.el5 (x86_64)
openssh-server-4.3p2-41.el5 (x86_64)
openssh-4.3p2-41.el5 (x86_64)

IfyoudonotseealistofSSHpackages,theninstallthosepackagesforyourLinuxdistribution.Forexample,loadCD#1intoeachoftheOracleRACnodesand
performthefollowingtoinstalltheOpenSSHpackages.

[root@racnode1 ~]# mount -r /dev/cdrom /media/cdrom


[root@racnode1 ~]# cd /media/cdrom/Server
[root@racnode1 ~]# rpm -Uvh openssh-*
[root@racnode1 ~]# cd /
[root@racnode1 ~]# eject
WhyConfigureSSHUserEquivalenceUsingtheManualMethodOption?

So,iftheOUIalreadyincludesafeaturethatautomatestheSSHconfigurationbetweentheOracleRACnodes,thenwhyprovideasectiononhowtomanually
configurepasswordlessSSHconnectivity?Infact,forthepurposeofthisarticle,IdecidedtoforgomanuallyconfiguringSSHconnectivityinfavorofOracle's
automaticmethodsincludedintheinstaller.

OnereasontoincludethissectiononmanuallyconfiguringSSHistomakementionofthefactthatyoumustremovesttycommandsfromtheprofilesofany
Oraclesoftwareinstallationowners,andremoveothersecuritymeasuresthataretriggeredduringaloginandthatgeneratemessagestotheterminal.These
messages,mailchecks,andotherdisplayspreventOraclesoftwareinstallationownersfromusingtheSSHconfigurationscriptthatisbuiltintotheOracleUniversal
Installer.Iftheyarenotdisabled,thenSSHmustbeconfiguredmanuallybeforeaninstallationcanberun.Furtherdocumentationonpreventinginstallationerrors
causedbysttycommandscanbefoundlaterinthissection.

AnotherreasonyoumaydecidetomanuallyconfigureSSHforuserequivalenceistohavetheabilitytoruntheClusterVerificationUtility(CVU)priortoinstalling
theOraclesoftware.TheCVU(runcluvfy.sh)isavaluabletoollocatedintheOracleClusterwarerootdirectorythatnotonlyverifiesallprerequisiteshavebeenmet
beforesoftwareinstallation,italsohastheabilitytogenerateshellscriptprograms,calledfixupscripts,toresolvemanyincompletesystemconfiguration
requirements.TheCVUdoes,however,haveaprerequisiteofitsownandthatisthatSSHuserequivalencyisconfiguredcorrectlyfortheuseraccountrunningthe
installation.IfyouintendtoconfigureSSHconnectivityusingtheOUI,knowthattheCVUutilitywillfailbeforehavingtheopportunitytoperformanyofitscritical
checks.

[grid@racnode1 ~]$ /media/cdrom/grid/runcluvfy.sh stage -pre crsinst -fixup -n racnode1,racnode2 -verbose


Performing pre-checks for cluster services setup

Checking node reachability...

Check: Node reachability from node "racnode1"


Destination Node Reachable?
------------------------------------ ------------------------
racnode1 yes
racnode2 yes
Result: Node reachability check passed from node "racnode1"

Checking user equivalence...

Check: User equivalence for user "grid"


Node Name Comment
------------------------------------ ------------------------
racnode2 failed
racnode1 failed
Result: PRVF-4007 : User equivalence check failed for user "grid"

ERROR:
User equivalence unavailable on all the specified nodes
Verification cannot proceed

Pre-check for cluster services setup was unsuccessful on all the nodes.

PleasenotethatitisnotrequiredtoruntheCVUutilitybeforeinstallingtheOraclesoftware.StartingwithOracle11gRelease2,theinstallerdetectswhenminimum
requirementsforinstallationarenotcompletedandperformsthesametasksdonebytheCVUtogeneratefixupscriptstoresolveincompletesystemconfiguration
requirements.

ConfigureSSHConnectivityManuallyonAllClusterNodes

To reiterate, it is not required to manually configure SSH connectivity before running the OUI. The OUI in 11g Release 2 provides an interface during the install for the user account running the installation to automatically create passwordless SSH connectivity between all cluster member nodes. This is the approach recommended by Oracle and the method used in this article. The tasks below to manually configure SSH connectivity between all cluster member nodes are included for documentation purposes only. Keep in mind that this guide uses grid as the Oracle Grid Infrastructure software owner and oracle as the owner of the Oracle RAC software. If you decide to manually configure SSH connectivity, it should be performed for both user accounts.

ThegoalinthissectionistosetupuserequivalenceforthegridandoracleOSuseraccounts.Userequivalenceenablesthegridandoracleuseraccountsto
accessallothernodesinthecluster(runningcommandsandcopyingfiles)withouttheneedforapassword.Oracleaddedsupportin10grelease1forusingthe
SSHtoolsuiteforsettingupuserequivalence.BeforeOracleDatabase10g,userequivalencehadtobeconfiguredusingremoteshell(RSH).

Intheexamplethatfollows,theOraclesoftwareownergridwillbeconfiguredforpasswordlessSSH.

CheckingExistingSSHConfigurationontheSystem

TodetermineifSSHisinstalledandrunning,enterthefollowingcommand.

[grid@racnode1 ~]$ pgrep sshd


2535
19852

IfSSHisrunning,thentheresponsetothiscommandisalistofprocessIDnumber(s).RunthischeckonbothOracleRACnodesintheclustertoverifytheSSH
daemonsareinstalledandrunning.

You need either an RSA or a DSA key for the SSH protocol. RSA was used with the legacy SSH 1.5 protocol, while DSA is the default for the SSH 2.0 protocol. With OpenSSH, you can use either RSA or DSA. The examples that follow use the SSH 2.0 protocol with DSA keys; if your SSH installation cannot use SSH 2.0 with DSA, refer to your SSH distribution documentation for the appropriate configuration.

AutomaticpasswordlessSSHconfigurationusingtheOUIcreatesRSAencryptionkeyson
allnodesofthecluster.

ConfiguringPasswordlessSSHonClusterNodes

ToconfigurepasswordlessSSH,youmustfirstcreateRSAorDSAkeysoneachclusternode,andthencopyallthekeysgeneratedonallclusternodemembers
intoanauthorizedkeysfilethatisidenticaloneachnode.NotethattheSSHfilesmustbereadableonlybyrootandbythesoftwareinstallationuser(grid,
oracle),asSSHignoresaprivatekeyfileifitisaccessiblebyothers.Intheexamplesthatfollow,theDSAkeyisused.

YoumustconfigurepasswordlessSSHseparatelyforeachOraclesoftwareinstallationownerthatyouintendtouseforinstallation(grid,oracle).

ToconfigurepasswordlessSSH,completethefollowingonbothOracleRACnodes.

CreateSSHDirectoryandSSHKeys

CompletethefollowingstepsoneachOracleRACnode.

1.LogintobothOracleRACnodesasthesoftwareowner(inthisexample,thegriduser).
[root@racnode1 ~]# su - grid
2.ToensurethatyouareloggedinasgridandtoverifythattheuserIDmatchestheexpecteduserIDyouhaveassignedtothegriduser,enterthe
commandsidandid grid.VerifythattheOracleusergroupanduserandtheuserterminalwindowprocessyouareusinghavegroupanduserIDsthatare
identical.Forexample:

[grid@racnode1 ~]$ id
uid=1100(grid) gid=1000(oinstall) groups=1000(oinstall),1200(asmadmin),1201(asmdba),1202(asmoper)

[grid@racnode1 ~]$ id grid


uid=1100(grid) gid=1000(oinstall) groups=1000(oinstall),1200(asmadmin),1201(asmdba),1202(asmoper)

3.Ifnecessary,createthe.sshdirectoryinthegriduser'shomedirectoryandsetpermissionsonittoensurethatonlythegriduserhasreadandwrite
permissions.

[grid@racnode1 ~]$ mkdir ~/.ssh


[grid@racnode1 ~]$ chmod 700 ~/.ssh

SSHconfigurationwillfailifthepermissionsarenotsetto700.

4.EnterthefollowingcommandtogenerateaDSAkeypair(publicandprivatekey)fortheSSHprotocol.Attheprompts,acceptthedefaultkeyfilelocation
andnopassphrase(simplypress[Enter]threetimes).

[grid@racnode1 ~]$ /usr/bin/ssh-keygen -t dsa


Generating public/private dsa key pair.
Enter file in which to save the key (/home/grid/.ssh/id_dsa): [Enter]
Enter passphrase (empty for no passphrase): [Enter]
Enter same passphrase again: [Enter]
Your identification has been saved in /home/grid/.ssh/id_dsa.
Your public key has been saved in /home/grid/.ssh/id_dsa.pub.
The key fingerprint is:
57:21:d7:d5:54:29:4c:12:40:23:36:e9:6e:2f:e6:40 grid@racnode1

SSHwithpassphraseisnotsupportedforOracleClusterware11gRelease2andlater
releases.PasswordlessSSHisrequiredforOracle11gRelease2andhigher.

ThiscommandwritestheDSApublickeytothe~/.ssh/id_dsa.pubfileandtheprivatekeytothe~/.ssh/id_dsafile.

NeverdistributetheprivatekeytoanyonenotauthorizedtoperformOraclesoftwareinstallations.

5.Repeatsteps1through4forallremainingnodesthatyouintendtomakeamemberoftheclusterusingtheDSAkey(racnode2).

AddAllKeystoaCommonauthorized_keysFile

NowthatbothOracleRACnodescontainapublicandprivatekeyforDSA,youwillneedtocreateanauthorizedkeyfile(authorized_keys)ononeofthenodes.
Anauthorizedkeyfileisnothingmorethanasinglefilethatcontainsacopyofeveryone's(everynode's)DSApublickey.Oncetheauthorizedkeyfilecontainsall
ofthepublickeysforeachnode,itisthendistributedtoallofthenodesinthecluster.

Thegriduser's~/.ssh/authorized_keysfileoneverynodemustcontainthecontents
fromallofthe~/.ssh/id_dsa.pubfilesthatyougeneratedonallclusternodes.

Completethefollowingstepsononeofthenodesintheclustertocreateandthendistributetheauthorizedkeyfile.Forthepurposeofthisexample,Iamusingthe
primarynodeinthecluster,racnode1.

1.Fromracnode1,determineiftheauthorizedkeyfile~/.ssh/authorized_keysalreadyexistsinthe.sshdirectoryoftheowner'shomedirectory.Inmost
casesthiswillnotexistsincethisarticleassumesyouareworkingwithanewinstall.Ifthefiledoesn'texist,createitnow.

[grid@racnode1 ~]$ touch ~/.ssh/authorized_keys


[grid@racnode1 ~]$ ls -l ~/.ssh
total 8
-rw-r--r-- 1 grid oinstall 0 Nov 7 17:25 authorized_keys
-rw------- 1 grid oinstall 672 Nov 7 16:56 id_dsa
-rw-r--r-- 1 grid oinstall 603 Nov 7 16:56 id_dsa.pub

Inthe.sshdirectory,youshouldseetheid_dsa.pubpublickeythatwascreatedandtheblankfileauthorized_keys.

2.Fromracnode1,useSCP(SecureCopy)orSFTP(SecureFTP)tocopythepublickey(~/.ssh/id_dsa.pub)frombothOracleRACnodesintheclusterto
theauthorizedkeyfilejustcreated(~/.ssh/authorized_keys).Again,thiswillbedonefromracnode1.YouwillbepromptedforthegridOSuseraccount
passwordforbothOracleRACnodesaccessed.

Thefollowingexampleisbeingrunfromracnode1andassumesatwonodecluster,withnodesracnode1andracnode2.

[grid@racnode1 ~]$ ssh racnode1 cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys


The authenticity of host 'racnode1 (192.168.1.151)' can't be established.
RSA key fingerprint is 66:65:a6:99:5f:cb:6e:60:6a:06:18:b7:fc:c2:cc:3e.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'racnode1,192.168.1.151' (RSA) to the list of known hosts.
grid@racnode1's password: xxxxx
[grid@racnode1 ~]$ ssh racnode2 cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
The authenticity of host 'racnode2 (192.168.1.152)' can't be established.
RSA key fingerprint is 30:cd:90:ad:18:00:24:c5:42:49:21:b0:1d:59:2d:7b.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'racnode2,192.168.1.152' (RSA) to the list of known hosts.
grid@racnode2's password: xxxxx
ThefirsttimeyouuseSSHtoconnecttoanodefromaparticularsystem,youwillseeamessagesimilartothefollowing.

The authenticity of host 'racnode1 (192.168.1.151)' can't be established.


RSA key fingerprint is 66:65:a6:99:5f:cb:6e:60:6a:06:18:b7:fc:c2:cc:3e.
Are you sure you want to continue connecting (yes/no)? yes
Enteryesattheprompttocontinue.Thepublichostnamewillthenbeaddedtotheknown_hostsfileinthe~/.sshdirectoryandyouwillnotseethismessage
againwhenyouconnectfromthissystemtothesamenode.

3.Atthispoint,wehavetheDSApublickeyfromeverynodeintheclustercontainedintheauthorizedkeyfile(~/.ssh/authorized_keys)onracnode1.

[grid@racnode1 ~]$ ls -l ~/.ssh


total 16
-rw-r--r-- 1 grid oinstall 1206 Nov 7 17:31 authorized_keys
-rw------- 1 grid oinstall 672 Nov 7 16:56 id_dsa
-rw-r--r-- 1 grid oinstall 603 Nov 7 16:56 id_dsa.pub
-rw-r--r-- 1 grid oinstall 808 Nov 7 17:31 known_hosts

Wenowneedtocopytheauthorizedkeyfiletotheremainingnodesinthecluster.Inourtwonodeclusterexample,theonlyremainingnodeisracnode2.
Usethescpcommandtocopytheauthorizedkeyfiletoallremainingnodesinthecluster.

[grid@racnode1 ~]$ scp ~/.ssh/authorized_keys racnode2:.ssh/authorized_keys


grid@racnode2's password: xxxxx
authorized_keys 100% 1206 1.2KB/s 00:00

4.ChangethepermissionoftheauthorizedkeyfileforbothOracleRACnodesintheclusterbyloggingintothenodeandrunningthefollowing:

[grid@racnode1 ~]$ chmod 600 ~/.ssh/authorized_keys


[grid@racnode2 ~]$ chmod 600 ~/.ssh/authorized_keys

EnableSSHUserEquivalencyonClusterNodes

Afteryouhavecopiedtheauthorized_keysfilethatcontainsallpublickeystoeachnodeinthecluster,completethestepsinthissectiontoensurepasswordless
SSHconnectivitybetweenallclustermembernodesisconfiguredcorrectly.Inthisexample,theOracleGridInfrastructuresoftwareownerwillbeusedwhichis
namedgrid.

When running the test SSH commands in this section, if you see any other messages or text apart from the date and hostname, then the Oracle installation will fail. If any of the nodes prompt for a password or passphrase, verify that the ~/.ssh/authorized_keys file on that node contains the correct public keys and that you have created an Oracle software owner with identical group membership and IDs. Make any changes required to ensure that only the date and hostname are displayed when you enter these commands. You should ensure that any part of a login script that generates any output, or asks any questions, is modified so it acts only when the shell is an interactive shell.
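For example, a minimal guard in a Bash login script (the banner command below is just a placeholder) keeps any terminal output out of non-interactive SSH sessions:

# ~/.bashrc (sketch): only produce output when attached to a terminal
if [ -t 0 ]; then
    echo "Welcome to $(hostname)"    # banners, mail checks, etc. go here
fi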

1.OnthesystemwhereyouwanttorunOUIfrom(racnode1),loginasthegriduser.

[root@racnode1 ~]# su - grid


2.IfSSHisconfiguredcorrectly,youwillbeabletousethesshandscpcommandswithoutbeingpromptedforapasswordorpassphrasefromtheterminal
session.

[grid@racnode1 ~]$ ssh racnode1 "date;hostname"


Sun Nov 7 18:06:17 EST 2010
racnode1

[grid@racnode1 ~]$ ssh racnode2 "date;hostname"


Sun Nov 7 18:07:55 EST 2010
racnode2

3.PerformthesameactionsabovefromtheremainingnodesintheOracleRACcluster(racnode2)toensuretheytoocanaccessallothernodeswithoutbeing
promptedforapasswordorpassphraseandgetaddedtotheknown_hostsfile.

[root@racnode2 ~]# su - grid


[grid@racnode2 ~]$ ssh racnode1 "date;hostname"
The authenticity of host 'racnode1 (192.168.1.151)' can't be established.
RSA key fingerprint is 66:65:a6:99:5f:cb:6e:60:6a:06:18:b7:fc:c2:cc:3e.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'racnode1,192.168.1.151' (RSA) to the list of known hosts.
Sun Nov 7 18:08:46 EST 2010
racnode1

[grid@racnode2 ~]$ ssh racnode1 "date;hostname"


Sun Nov 7 18:08:53 EST 2010
racnode1

--------------------------------------------------------------------------

[grid@racnode2 ~]$ ssh racnode2 "date;hostname"


The authenticity of host 'racnode2 (192.168.1.152)' can't be established.
RSA key fingerprint is 30:cd:90:ad:18:00:24:c5:42:49:21:b0:1d:59:2d:7b.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'racnode2,192.168.1.152' (RSA) to the list of known hosts.
Sun Nov 7 18:11:51 EST 2010
racnode2

[grid@racnode2 ~]$ ssh racnode2 "date;hostname"


Sun Nov 7 18:11:54 EST 2010
racnode2

4.TheOracleUniversalInstallerisaGUIinterfaceandrequirestheuseofanXServer.Fromtheterminalsessionenabledforuserequivalence(thenodeyou
willbeperformingtheOracleinstallationsfrom),settheenvironmentvariableDISPLAYtoavalidXWindowsdisplay.

Bourne,Korn,andBashshells:

[grid@racnode1 ~]$ DISPLAY=<Any X-Windows Host>:0


[grid@racnode1 ~]$ export DISPLAY
Cshell:

[grid@racnode1 ~]$ setenv DISPLAY <Any X-Windows Host>:0


AftersettingtheDISPLAYvariabletoavalidXWindowsdisplay,youshouldperformanothertestofthecurrentterminalsessiontoensurethatX11forwarding
isnotenabled.

[grid@racnode1 ~]$ ssh racnode1 hostname


racnode1

[grid@racnode1 ~]$ ssh racnode2 hostname


racnode2

Ifyouareusingaremoteclienttoconnecttothenodeperformingtheinstallation,andyou
seeamessagesimilarto:"Warning: No xauth data; using fake authentication data
for X11 forwarding."thenthismeansthatyourauthorizedkeysfileisconfiguredcorrectly,
however,yourSSHconfigurationhasX11forwardingenabled.Forexample:

[grid@racnode1 ~]$ export DISPLAY=melody:0


[grid@racnode1 ~]$ ssh racnode2 hostname
Warning: No xauth data; using fake authentication data for X11 forwarding.
racnode2

NotethathavingX11ForwardingenabledwillcausetheOracleinstallationtofail.Tocorrectthisproblem,createauserlevelSSHclientconfigurationfilefor
thegridandoracleOSuseraccountthatdisablesX11Forwarding.

1.Usingatexteditor,editorcreatethefile~/.ssh/config

2.MakesurethattheForwardX11attributeissettono.Forexample,insertthefollowingintothe~/.ssh/config file:

Host *
ForwardX11 no
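One additional point worth noting: OpenSSH will refuse to use a client configuration file that is writable by other users, so it is a good idea to restrict the permissions on the file after creating it for both the grid and oracle accounts. For example:

[grid@racnode1 ~]$ chmod 600 ~/.ssh/config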

PreventingInstallationErrorsCausedby sttyCommands
DuringanOracleGridInfrastructureorOracleRACsoftwareinstallation,OUIusesSSHtoruncommandsandcopyfilestotheothernodes.Duringtheinstallation,
hiddenfilesonthesystem(forexample,.bashrcor.cshrc)willcausemakefileandotherinstallationerrorsiftheycontainsttycommands.

Toavoidthisproblem,youmustmodifythesefilesineachOracleinstallationowneruserhomedirectorytosuppressalloutputonSTDERR,asinthefollowing
examples:

Bourne,Bash,orKornshell:

if [ -t 0 ]; then
stty intr ^C
fi

Cshell:

test -t 0
if ($status == 0) then
stty intr ^C
endif
Iftherearehiddenfilesthatcontainsttycommandsthatareloadedbytheremoteshell,
thenOUIindicatesanerrorandstopstheinstallation.

InstallandConfigureASMLib2.0
TheinstallationandconfigurationproceduresinthissectionshouldbeperformedonbothoftheOracleRACnodesinthecluster.CreatingtheASMdisks,however,
willonlyneedtobeperformedonasinglenodewithinthecluster(racnode1).

Inthissection,wewillinstallandconfigureASMLib2.0whichisanoptionalsupportlibraryfortheOracleAutomaticStorageManagement(ASM)featureofthe
OracleDatabase.Inthisguide,OracleASMwillbeusedasthesharedfilesystemandvolumemanagerforOracleClusterwarefiles(OCRandvotingdisk),Oracle
Databasefiles(data,onlineredologs,controlfiles,archivedredologs),andtheFastRecoveryArea.

Oracle Automatic Storage Management (Oracle ASM) simplifies database administration by eliminating the need for the DBA to directly manage potentially thousands of Oracle database files, requiring only the management of groups of disks allocated to the Oracle Database. ASM is built into the Oracle kernel and can be used for both single-instance and clustered instances of Oracle. All of the files and directories to be used for Oracle will be contained in a disk group (or, for the purpose of this article, three disk groups). ASM automatically performs load balancing in parallel across all available disk drives to prevent hot spots and maximize performance, even with rapidly changing data usage patterns. ASMLib is a Linux-specific support library that gives an Oracle Database using ASM more efficient and capable access to the disk groups it is using.

KeepinmindthatASMLibisonlyasupportlibraryfortheOracleASMsoftware.TheOracleASMsoftwarewillbeinstalledaspartofOracleGridInfrastructurelater
inthisguide.

StartingwithOracleGridInfrastructure11gRelease2(11.2),theAutomaticStorageManagementandOracleClusterwaresoftwareispackagedtogetherinasingle
binarydistributionandinstalledintoasinglehomedirectory,whichisreferredtoastheGridInfrastructurehome.TheOracleGridInfrastructuresoftwarewillbe
ownedbytheusergrid.

So,isASMLibrequiredforASM?Notatall.Infact,therearetwodifferentmethodstoconfigureASMonLinux.

ASMwithASMLibI/O

ThismethodcreatesallOracledatabasefilesonrawblockdevicesmanagedbyASMusingASMLibcalls.RAWcharacterdevicesarenotrequiredwiththis
methodasASMLibworkswithblockdevices.

ASMwithStandardLinuxI/O

ThismethoddoesnotmakeuseofASMLib.OracledatabasefilesarecreatedonrawcharacterdevicesmanagedbyASMusingstandardLinuxI/Osystem
calls.YouwillberequiredtocreateRAWdevicesforalldiskpartitionsusedbyASM.

Inthisarticle,Iwillbeusingthe"ASMwithASMLibI/O"method.OraclestatesinMetalinkNote275315.1that"ASMLibwasprovidedtoenableASMI/OtoLinux
diskswithoutthelimitationsofthestandardUNIXI/OAPI".IplanonperformingseveraltestsinthefuturetoidentifytheperformancegainsinusingASMLib.Those
performancemetricsandtestingdetailsareoutofscopeofthisarticleandthereforewillnotbediscussed.

If you would like to learn more about Oracle ASMLib 2.0, visit http://www.oracle.com/technetwork/topics/linux/asmlib/index-101839.html.

DownloadASMLib2.0Packages

WestartthissectionbydownloadingthelatestASMLib2.0librariesandthekerneldriverfromOTN.

OracleASMLibDownloadsforRedHatEnterpriseLinuxServer5

At the time of this writing, the latest release of the ASMLib kernel driver is 2.0.5-1. We need to download the appropriate version of the ASMLib driver for the Linux kernel, which in my case is kernel 2.6.18-194.el5 running on the x86_64 architecture.

[root@racnode1 ~]# uname -a


Linux racnode1 2.6.18-194.el5 #1 SMP Fri Apr 2 14:58:14 EDT 2010 x86_64 x86_64 x86_64 GNU/Linux
32-bit (x86) Installations

oracleasm-2.6.18-194.el5-2.0.5-1.el5.i686.rpm

Next, download the ASMLib tools.

oracleasm-support-2.1.7-1.el5.i386.rpm
oracleasmlib-2.0.4-1.el5.i386.rpm

64-bit (x86_64) Installations

oracleasm-2.6.18-194.el5-2.0.5-1.el5.x86_64.rpm

Next, download the ASMLib tools.

oracleasm-support-2.1.7-1.el5.x86_64.rpm
oracleasmlib-2.0.4-1.el5.x86_64.rpm

InstallASMLib2.0Packages

TheinstallationofASMLib2.0needstobeperformedonbothnodesintheOracleRACclusterastherootuseraccount.

[root@racnode1 ~]# rpm -Uvh oracleasm-2.6.18-194.el5-2.0.5-1.el5.x86_64.rpm \
> oracleasmlib-2.0.4-1.el5.x86_64.rpm \
> oracleasm-support-2.1.7-1.el5.x86_64.rpm
warning: oracleasm-2.6.18-194.el5-2.0.5-1.el5.x86_64.rpm: Header V3 DSA signature: NOKEY, key ID 1e5e0159
Preparing... ########################################### [100%]
1:oracleasm-support ########################################### [ 33%]
2:oracleasm-2.6.18-194.el########################################### [ 67%]
3:oracleasmlib ########################################### [100%]
[root@racnode2 ~]# rpm -Uvh oracleasm-2.6.18-194.el5-2.0.5-1.el5.x86_64.rpm \
> oracleasmlib-2.0.4-1.el5.x86_64.rpm \
> oracleasm-support-2.1.7-1.el5.x86_64.rpm
warning: oracleasm-2.6.18-194.el5-2.0.5-1.el5.x86_64.rpm: Header V3 DSA signature: NOKEY, key ID 1e5e0159
Preparing... ########################################### [100%]
1:oracleasm-support ########################################### [ 33%]
2:oracleasm-2.6.18-194.el########################################### [ 67%]
3:oracleasmlib ########################################### [100%]

After installing the ASMLib packages, verify from both Oracle RAC nodes that the software is installed.

[root@racnode1 ~]# rpm -qa --queryformat "%{NAME}-%{VERSION}-%{RELEASE} (%{ARCH})\n" | grep oracleasm | sort
oracleasm-2.6.18-194.el5-2.0.5-1.el5 (x86_64)
oracleasmlib-2.0.4-1.el5 (x86_64)
oracleasm-support-2.1.7-1.el5 (x86_64)

[root@racnode2 ~]# rpm -qa --queryformat "%{NAME}-%{VERSION}-%{RELEASE} (%{ARCH})\n" | grep oracleasm | sort
oracleasm-2.6.18-194.el5-2.0.5-1.el5 (x86_64)
oracleasmlib-2.0.4-1.el5 (x86_64)
oracleasm-support-2.1.7-1.el5 (x86_64)

ConfigureASMLib

NowthatyouhaveinstalledtheASMLibpackagesforLinux,youneedtoconfigureandloadtheASMkernelmodule.ThistaskneedstoberunonbothOracleRAC
nodesastherootuseraccount.

Theoracleasmcommandbydefaultisinthepath/usr/sbin.The/etc/init.dpath,whichwasusedinpreviousreleases,isnotdeprecatedbuttheoracleasm
binaryinthatpathisnowusedtypicallyforinternalcommands.Ifyouenterthecommandoracleasm configurewithoutthe-iflag,thenyouareshownthecurrent
configuration.Forexample:

[root@racnode1 ~]# /usr/sbin/oracleasm configure


ORACLEASM_ENABLED=false
ORACLEASM_UID=
ORACLEASM_GID=
ORACLEASM_SCANBOOT=true
ORACLEASM_SCANORDER=""
ORACLEASM_SCANEXCLUDE=""

1.Enterthefollowingcommandtoruntheoracleasminitializationscriptwiththeconfigureoption.

[root@racnode1 ~]# /usr/sbin/oracleasm configure -i


Configuring the Oracle ASM library driver.

This will configure the on-boot properties of the Oracle ASM library
driver. The following questions will determine whether the driver is
loaded on boot and what permissions it will have. The current values
will be shown in brackets ('[]'). Hitting <ENTER> without typing an
answer will keep that current value. Ctrl-C will abort.

Default user to own the driver interface []: grid


Default group to own the driver interface []: asmadmin
Start Oracle ASM library driver on boot (y/n) [n]: y
Scan for Oracle ASM disks on boot (y/n) [y]: y
Writing Oracle ASM library driver configuration: done

Thescriptcompletesthefollowingtasks:

Createsthe/etc/sysconfig/oracleasmconfigurationfile
Createsthe/dev/oracleasmmountpoint
MountstheASMLibdriverfilesystem

TheASMLibdriverfilesystemisnotaregularfilesystem.ItisusedonlybytheAutomatic
StorageManagementlibrarytocommunicatewiththeAutomaticStorageManagement
driver.

2.Enterthefollowingcommandtoloadtheoracleasmkernelmodule.

[root@racnode1 ~]# /usr/sbin/oracleasm init


Creating /dev/oracleasm mount point: /dev/oracleasm
Loading module "oracleasm": oracleasm
Mounting ASMlib driver filesystem: /dev/oracleasm

3.Repeatthisprocedureonallnodesinthecluster(racnode2)whereyouwanttoinstallOracleRAC.
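Although not strictly required, a quick way to confirm on each node that the ASMLib kernel module loaded and its special filesystem is mounted is, for example:

[root@racnode1 ~]# lsmod | grep oracleasm
[root@racnode1 ~]# mount | grep oracleasm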

CreateASMDisksforOracle

CreatingtheASMdisksonlyneedstobeperformedfromonenodeintheRACclusterastherootuseraccount.Iwillberunningthesecommandsonracnode1.On
theotherOracleRACnode(s),youwillneedtoperformascandisktorecognizethenewvolumes.Whenthatiscomplete,youshouldthenruntheoracleasm
listdiskscommandonbothOracleRACnodestoverifythatallASMdiskswerecreatedandavailable.
Inthesection"CreatePartitionsoniSCSIVolumes",weconfigured(partitioned)threeiSCSIvolumestobeusedbyASM.ASMwillbeusedforstoringOracle
Clusterwarefiles,Oracledatabasefileslikeonlineredologs,databasefiles,controlfiles,archivedredologfiles,andtheFastRecoveryArea.Usethelocaldevice
namesthatwerecreatedbyudevwhenconfiguringthethreeASMvolumes.

TocreatetheASMdisksusingtheiSCSItargetnamestolocaldevicenamemappings,typethefollowing:

[root@racnode1 ~]# /usr/sbin/oracleasm createdisk CRSVOL1 /dev/iscsi/crs1/part1


Writing disk header: done
Instantiating disk: done

[root@racnode1 ~]# /usr/sbin/oracleasm createdisk DATAVOL1 /dev/iscsi/data1/part1


Writing disk header: done
Instantiating disk: done

[root@racnode1 ~]# /usr/sbin/oracleasm createdisk FRAVOL1 /dev/iscsi/fra1/part1


Writing disk header: done
Instantiating disk: done

Tomakethevolumesavailableontheothernodesinthecluster(racnode2),enterthefollowingcommandasrootoneachnode.

[root@racnode2 ~]# /usr/sbin/oracleasm scandisks


Reloading disk partitions: done
Cleaning any stale ASM disks...
Scanning system for ASM disks...
Instantiating disk "DATAVOL1"
Instantiating disk "CRSVOL1"
Instantiating disk "FRAVOL1"

WecannowtestthattheASMdisksweresuccessfullycreatedbyusingthefollowingcommandonbothnodesintheRACclusterastherootuseraccount.This
commandidentifiesshareddisksattachedtothenodethataremarkedasAutomaticStorageManagementdisks.

[root@racnode1 ~]# /usr/sbin/oracleasm listdisks


CRSVOL1
DATAVOL1
FRAVOL1

[root@racnode2 ~]# /usr/sbin/oracleasm listdisks


CRSVOL1
DATAVOL1
FRAVOL1
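If you ever need to confirm which local block device a given ASM disk label was stamped on, the oracleasm utility can query a device directly. For example (the device path below follows the udev mappings created earlier in this guide), the command reports the ASM disk label found on the device, if any:

[root@racnode1 ~]# /usr/sbin/oracleasm querydisk /dev/iscsi/crs1/part1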

DownloadOracleRAC11gRelease2Software
Thefollowingdownloadproceduresonlyneedtobeperformedononenodeinthecluster(racnode1).

ThenextstepistodownloadandextracttherequiredOraclesoftwarepackagesfromtheOracleTechnologyNetwork(OTN).

IfyoudonotcurrentlyhaveanaccountwithOracleOTN,youwillneedtocreateone.This
isaFREEaccount!
Oracleoffersadevelopmentandtestinglicensefreeofcharge.Nosupport,however,is
providedandthelicensedoesnotpermitproductionuse.Afulldescriptionofthelicense
agreementisavailableonOTN.

32-bit (x86) Installations

http://www.oracle.com/technetwork/database/enterprise-edition/downloads/112010-linuxsoft-085393.html

64-bit (x86_64) Installations

http://www.oracle.com/technetwork/database/enterprise-edition/downloads/112010-linx8664soft-100572.html

YouwillbedownloadingandextractingtherequiredsoftwarefromOracletoonlyoneoftheLinuxnodesintheclusternamely,racnode1.Youwillperformall
Oraclesoftwareinstallsfromthismachine.TheOracleinstallerwillcopytherequiredsoftwarepackagestoallothernodesintheRACconfigurationusingremote
access(scp).

LogintothenodethatyouwillbeperformingalloftheOracleinstallationsfrom(racnode1)astheappropriatesoftwareowner.Forexample,loginanddownloadthe
OracleGridInfrastructuresoftwaretothedirectory/home/grid/software/oracleasthegriduser.Next,loginanddownloadtheOracleDatabaseandOracle
Examples(optional)softwaretothe/home/oracle/software/oracledirectoryastheoracleuser.

DownloadandExtracttheOracleSoftware

Downloadthefollowingsoftwarepackages:

OracleDatabase11gRelease2GridInfrastructure(11.2.0.1.0)forLinux
OracleDatabase11gRelease2(11.2.0.1.0)forLinux
OracleDatabase11gRelease2Examples(optional)

Alldownloadsareavailablefromthesamepage.

ExtracttheOracleGridInfrastructuresoftwareasthegriduser:

[grid@racnode1 ~]$ mkdir -p /home/grid/software/oracle


[grid@racnode1 ~]$ mv linux.x64_11gR2_grid.zip /home/grid/software/oracle
[grid@racnode1 ~]$ cd /home/grid/software/oracle
[grid@racnode1 oracle]$ unzip linux.x64_11gR2_grid.zip
ExtracttheOracleDatabaseandOracleExamplessoftwareastheoracleuser:

[oracle@racnode1 ~]$ mkdir -p /home/oracle/software/oracle


[oracle@racnode1 ~]$ mv linux.x64_11gR2_database_1of2.zip /home/oracle/software/oracle
[oracle@racnode1 ~]$ mv linux.x64_11gR2_database_2of2.zip /home/oracle/software/oracle
[oracle@racnode1 ~]$ mv linux.x64_11gR2_examples.zip /home/oracle/software/oracle
[oracle@racnode1 ~]$ cd /home/oracle/software/oracle
[oracle@racnode1 oracle]$ unzip linux.x64_11gR2_database_1of2.zip
[oracle@racnode1 oracle]$ unzip linux.x64_11gR2_database_2of2.zip
[oracle@racnode1 oracle]$ unzip linux.x64_11gR2_examples.zip
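As a quick sanity check before moving on (the directory names below assume the standard layout of the 11.2.0.1 media, which unzips into grid, database, and examples directories), verify that the staging areas were extracted:

[grid@racnode1 oracle]$ ls -ld /home/grid/software/oracle/grid
[oracle@racnode1 oracle]$ ls -ld /home/oracle/software/oracle/database /home/oracle/software/oracle/examples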
PreinstallationTasksforOracleGridInfrastructureforaCluster
PerformthefollowingchecksonbothOracleRACnodesinthecluster.

ThissectioncontainsanyremainingpreinstallationtasksforOracleGridInfrastructurethathasnotalreadybeendiscussed.Pleasenotethatmanuallyrunningthe
ClusterVerificationUtility(CVU)beforerunningtheOracleinstallerisnotrequired.TheCVUisrunautomaticallyattheendoftheOracleGridInfrastructure
installationaspartoftheConfigurationAssistantsprocess.

InstallthecvuqdiskPackageforLinux

InstalltheoperatingsystempackagecvuqdisktobothOracleRACnodes.Withoutcvuqdisk,ClusterVerificationUtilitycannotdiscovershareddisksandyouwill
receivetheerrormessage"Packagecvuqdisknotinstalled"whentheClusterVerificationUtilityisrun(eithermanuallyorattheendoftheOracleGridInfrastructure
installation).UsethecvuqdiskRPMforyourhardwarearchitecture(forexample,x86_64ori386).

ThecvuqdiskRPMcanbefoundontheOracleGridInfrastructureinstallationmediaintherpmdirectory.Forthepurposeofthisarticle,theOracleGrid
Infrastructuremediawasextractedtothe/home/grid/software/oracle/griddirectoryonracnode1asthegriduser.

ToinstallthecvuqdiskRPM,completethefollowingprocedures:

1.LocatethecvuqdiskRPMpackage,whichisinthedirectoryrpmontheinstallationmediafromracnode1.

[racnode1]: /home/grid/software/oracle/grid/rpm/cvuqdisk-1.0.7-1.rpm

2.Copythecvuqdiskpackagefromracnode1toracnode2asthegriduseraccount.

[racnode2]: /home/grid/software/oracle/grid/rpm/cvuqdisk-1.0.7-1.rpm
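For example, one simple way to stage the package on racnode2 as the grid user is to create the matching directory on the second node and copy the RPM over with scp (you will be prompted for the grid password if passwordless SSH has not been configured yet):

[grid@racnode1 ~]$ ssh racnode2 "mkdir -p /home/grid/software/oracle/grid/rpm"
[grid@racnode1 ~]$ scp /home/grid/software/oracle/grid/rpm/cvuqdisk-1.0.7-1.rpm racnode2:/home/grid/software/oracle/grid/rpm/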

3.LoginasrootonbothOracleRACnodes.

[grid@racnode1 rpm]$ su
[grid@racnode2 rpm]$ su

4.SettheenvironmentvariableCVUQDISK_GRPtopointtothegroupthatwillowncvuqdisk,whichforthisarticleisoinstall.

[root@racnode1 rpm]# CVUQDISK_GRP=oinstall; export CVUQDISK_GRP


[root@racnode2 rpm]# CVUQDISK_GRP=oinstall; export CVUQDISK_GRP

5.InthedirectorywhereyouhavesavedthecvuqdiskRPM,usethefollowingcommandtoinstallthecvuqdiskpackageonbothOracleRACnodes.

[root@racnode1 rpm]# rpm -iv cvuqdisk-1.0.7-1.rpm


Preparing packages for installation...
cvuqdisk-1.0.7-1

[root@racnode2 rpm]# rpm -iv cvuqdisk-1.0.7-1.rpm


Preparing packages for installation...
cvuqdisk-1.0.7-1

6.Verifythecvuqdiskutilitywassuccessfullyinstalled.

[root@racnode1 rpm]# ls -l /usr/sbin/cvuqdisk


-rwsr-xr-x 1 root oinstall 9832 May 28 2009 /usr/sbin/cvuqdisk

[root@racnode2 rpm]# ls -l /usr/sbin/cvuqdisk


-rwsr-xr-x 1 root oinstall 9832 May 28 2009 /usr/sbin/cvuqdisk

VerifyOracleClusterwareRequirementswithCVU(optional)

Asstatedearlierinthissection,runningtheClusterVerificationUtilitybeforerunningtheOracleinstallerisnotrequired.StartingwithOracleClusterware11g
Release2,OracleUniversalInstaller(OUI)detectswhentheminimumrequirementsforaninstallationarenotmetandcreatesshellscriptscalledfixupscriptsto
finishincompletesystemconfigurationsteps.IfOUIdetectsanincompletetask,itthengeneratesfixupscripts(runfixup.sh).Youcanrunthefixupscriptafteryou
clickthe[FixandCheckAgainButton]duringtheOracleGridInfrastructureinstallation.

YoualsocanhaveCVUgeneratefixupscriptsbeforeinstallation.
IfyoudecidethatyouwanttoruntheCVU,pleasekeepinmindthatitshouldberunasthegriduserfromthenodeyouwillbeperformingtheOracleinstallation
from(racnode1).Inaddition,SSHconnectivitywithuserequivalencemustbeconfiguredforthegriduser.IfyouintendtoconfigureSSHconnectivityusingthe
OUI,theCVUutilitywillfailbeforehavingtheopportunitytoperformanyofitscriticalchecksandgeneratethefixupscripts:

Checking user equivalence...

Check: User equivalence for user "grid"


Node Name Comment
------------------------------------ ------------------------
racnode2 failed
racnode1 failed
Result: PRVF-4007 : User equivalence check failed for user "grid"

ERROR:
User equivalence unavailable on all the specified nodes
Verification cannot proceed

Pre-check for cluster services setup was unsuccessful on all the nodes.

OnceallprerequisitesforrunningtheCVUutilityhavebeenmet,youcannowmanuallycheckyourclusterconfigurationbeforeinstallationandgenerateafixup
scripttomakeoperatingsystemchangesbeforestartingtheinstallation.

[grid@racnode1 ~]$ cd /home/grid/software/oracle/grid


[grid@racnode1 grid]$ ./runcluvfy.sh stage -pre crsinst -n racnode1,racnode2 -fixup -verbose
ReviewtheCVUreport.

Theonlyfailurethatshouldbefoundgiventheconfigurationdescribedinthisguideis:

Check: Membership of user "grid" in group "dba"


Node Name User Exists Group Exists User in Group Comment
---------------- ------------ ------------ ------------ ----------------
racnode2 yes yes no failed
racnode1 yes yes no failed
Result: Membership check for user "grid" in group "dba" failed

ThecheckfailsbecausethisguidecreatesroleallocatedgroupsandusersbyusingaJobRoleSeparationconfigurationwhichisnotaccuratelyrecognizedbythe
CVU.CreatingaJobRoleSeparationconfigurationwasdescribedinthesectionCreateJobRoleSeparationOperatingSystemPrivilegesGroups,Users,and
Directories.TheCVUfailstorecognizethistypeofconfigurationandassumesthegridusershouldalwaysbepartofthedbagroup.Thisfailedcheckcanbe
safelyignored.AllotherchecksperformedbyCVUshouldbereportedas"passed"beforecontinuingwiththeOracleGridInfrastructureinstallation.

VerifyHardwareandOperatingSystemSetupwithCVU

ThenextCVUchecktorunwillverifythehardwareandoperatingsystemsetup.Again,runthefollowingasthegriduseraccountfromracnode1withuser
equivalenceconfigured:

[grid@racnode1 ~]$ cd /home/grid/software/oracle/grid


[grid@racnode1 grid]$ ./runcluvfy.sh stage -post hwos -n racnode1,racnode2 -verbose
ReviewtheCVUreport.AllchecksperformedbyCVUshouldbereportedas"passed"beforecontinuingwiththeOracleGridInfrastructureinstallation.

InstallOracleGridInfrastructureforaCluster
PerformthefollowinginstallationproceduresfromonlyoneoftheOracleRACnodesinthecluster(racnode1).TheOracleGridInfrastructuresoftware(Oracle
ClusterwareandAutomaticStorageManagement)willbeinstalledtobothoftheOracleRACnodesintheclusterbytheOracleUniversalInstaller.

Youarenowreadytoinstallthe"grid"partoftheenvironmentOracleClusterwareandAutomaticStorageManagement.Completethefollowingstepstoinstall
OracleGridInfrastructureonyourcluster.

Atanytimeduringinstallation,ifyouhaveaquestionaboutwhatyouarebeingaskedtodo,clicktheHelpbuttonontheOUIpage.

TypicalandAdvancedInstallation

Startingwith11gRelease2,OraclenowprovidestwooptionsforinstallingtheOracleGridInfrastructuresoftware:

TypicalInstallation

Thetypicalinstallationoptionisasimplifiedinstallationwithaminimalnumberofmanualconfigurationchoices.Thisnewoptionprovidesstreamlinedcluster
installations,especiallyforthosecustomerswhoarenewtoclustering.Typicalinstallationdefaultsasmanyoptionsaspossibletothoserecommendedas
bestpractices.

AdvancedInstallation

Theadvancedinstallationoptionisanadvancedprocedurethatrequiresahigherdegreeofsystemknowledge.Itenablesyoutoselectparticularconfiguration
choicesincludingadditionalstorageandnetworkchoices,useofoperatingsystemgroupauthenticationforrolebasedadministrativeprivileges,integration
withIPMI,andmoregranularityinspecifyingAutomaticStorageManagementroles.

GiventhefactthatthisguidemakesuseofrolebasedadministrativeprivilegesandhighgranularityinspecifyingAutomaticStorageManagementroles,wewillbe
usingthe"AdvancedInstallation"option.

VerifyTerminalShellEnvironment

BeforestartingtheOracleUniversalInstaller,logintoracnode1astheowneroftheOracleGridInfrastructuresoftwarewhichforthisarticleisgrid.Next,ifyouare
usingaremoteclienttoconnecttotheOracleRACnodeperformingtheinstallation(SSHorTelnettoracnode1fromaworkstationconfiguredwithanXServer),
verifyyourX11displayserversettingswhichweredescribedinthesectionLoggingIntoaRemoteSystemUsingXTerminal.
InstallOracleGridInfrastructure

PerformthefollowingtasksasthegridusertoinstallOracleGridInfrastructure:

[grid@racnode1 ~]$ id
uid=1100(grid) gid=1000(oinstall) groups=1000(oinstall),1200(asmadmin),1201(asmdba),1202(asmoper)

[grid@racnode1 ~]$ DISPLAY=<your local workstation>:0.0


[grid@racnode1 ~]$ export DISPLAY
[grid@racnode1 ~]$ cd /home/grid/software/oracle/grid
[grid@racnode1 grid]$ ./runInstaller

Screen Name: Select Installation Option
Response: Select "Install and Configure Grid Infrastructure for a Cluster".

Screen Name: Select Installation Type
Response: Select "Advanced Installation".

Screen Name: Select Product Languages
Response: Make the appropriate selection(s) for your environment.

Screen Name: Grid Plug and Play Information
Response: Instructions on how to configure Grid Naming Service (GNS) are beyond the scope of this article. Uncheck the option to "Configure GNS".

Cluster Name        SCAN Name               SCAN Port
racnode-cluster     racnode-cluster-scan    1521

After clicking [Next], the OUI will attempt to validate the SCAN information.

Screen Name: Cluster Node Information
Response: Use this screen to add the node racnode2 to the cluster and to configure SSH connectivity.

Click the [Add] button to add "racnode2.idevelopment.info" and its virtual IP address "racnode2-vip.idevelopment.info" according to the table below:

Public Node Name                Virtual Host Name
racnode1.idevelopment.info      racnode1-vip.idevelopment.info
racnode2.idevelopment.info      racnode2-vip.idevelopment.info

Next, click the [SSH Connectivity] button. Enter the "OS Password" for the grid user and click the [Setup] button. This will start the "SSH Connectivity" configuration process. After the SSH configuration process successfully completes, acknowledge the dialog box. Finish off this screen by clicking the [Test] button to verify passwordless SSH connectivity.

Screen Name: Specify Network Interface Usage
Response: Identify the network interface to be used for the "Public" and "Private" network. Make any changes necessary to match the values in the table below:

Interface Name    Subnet         Interface Type
eth0              192.168.1.0    Public
eth1              192.168.2.0    Private

Screen Name: Storage Option Information
Response: Select "Automatic Storage Management (ASM)".

Screen Name: Create ASM Disk Group
Response: Create an ASM disk group that will be used to store the Oracle Clusterware files according to the values in the table below:

Disk Group Name    Redundancy    Disk Path
CRS                External      ORCL:CRSVOL1

Screen Name: Specify ASM Password
Response: For the purpose of this article, I choose to "Use same passwords for these accounts".

Screen Name: Failure Isolation Support
Response: Configuring Intelligent Platform Management Interface (IPMI) is beyond the scope of this article. Select "Do not use Intelligent Platform Management Interface (IPMI)".

Screen Name: Privileged Operating System Groups
Response: This article makes use of role-based administrative privileges and high granularity in specifying Automatic Storage Management roles using a Job Role Separation configuration. Make any changes necessary to match the values in the table below:

OSDBA for ASM    OSOPER for ASM    OSASM
asmdba           asmoper           asmadmin

Screen Name: Specify Installation Location
Response: Set the "Oracle Base" ($GRID_BASE) and "Software Location" ($GRID_HOME) for the Oracle Grid Infrastructure installation:

Oracle Base: /u01/app/grid
Software Location: /u01/app/11.2.0/grid

Screen Name: Create Inventory
Response: Since this is the first install on the host, you will need to create the Oracle Inventory. Use the default values provided by the OUI:

Inventory Directory: /u01/app/oraInventory
oraInventory Group Name: oinstall

Screen Name: Prerequisite Checks
Response: The installer will run through a series of checks to determine if both Oracle RAC nodes meet the minimum requirements for installing and configuring the Oracle Clusterware and Automatic Storage Management software.

Starting with Oracle Clusterware 11g Release 2 (11.2), if any check fails, the installer (OUI) will create shell script programs called fixup scripts to resolve many incomplete system configuration requirements. If OUI detects an incomplete task that is marked "fixable", then you can easily fix the issue by generating the fixup script by clicking the [Fix & Check Again] button.

The fixup script is generated during installation. You will be prompted to run the script as root in a separate terminal session. When you run the script, it raises kernel values to required minimums, if necessary, and completes other operating system configuration tasks.

If all prerequisite checks pass (as was the case for my install), the OUI continues to the Summary screen.

Screen Name: Summary
Response: Click [Finish] to start the installation.

Screen Name: Setup
Response: The installer performs the Oracle Grid Infrastructure setup process on both Oracle RAC nodes.

Screen Name: Execute Configuration scripts
Response: After the installation completes, you will be prompted to run the /u01/app/oraInventory/orainstRoot.sh and /u01/app/11.2.0/grid/root.sh scripts. Open a new console window on both Oracle RAC nodes in the cluster (starting with the node you are performing the install from) as the root user account.

Run the orainstRoot.sh script on both nodes in the RAC cluster:

[root@racnode1 ~]# /u01/app/oraInventory/orainstRoot.sh

[root@racnode2 ~]# /u01/app/oraInventory/orainstRoot.sh

Within the same new console window on both Oracle RAC nodes in the cluster (starting with the node you are performing the install from), stay logged in as the root user account. Run the root.sh script on both nodes in the RAC cluster one at a time, starting with the node you are performing the install from:

[root@racnode1 ~]# /u01/app/11.2.0/grid/root.sh
[root@racnode2 ~]# /u01/app/11.2.0/grid/root.sh

The root.sh script can take several minutes to run. When running root.sh on the last node, you will receive output similar to the following which signifies a successful install:

...
The inventory pointer is located at /etc/oraInst.loc
The inventory is located at /u01/app/oraInventory
'UpdateNodeList' was successful.

Go back to OUI and acknowledge the "Execute Configuration scripts" dialog window.

Screen Name: Configure Oracle Grid Infrastructure for a Cluster
Response: The installer will run configuration assistants for Oracle Net Services (NETCA), Automatic Storage Management (ASMCA), and the Virtual IP Configuration Assistant (VIPCA). The final step performed by OUI is to run the Cluster Verification Utility (CVU).

Screen Name: Finish
Response: At the end of the installation, click the [Close] button to exit the OUI.

After installation is complete, do not manually remove or run cron jobs that remove
/tmp/.oracle or /var/tmp/.oracle or its files while Oracle Clusterware is up. If you
remove these files, then Oracle Clusterware could encounter intermittent hangs and you will
encounter the error:

CRS-0184: Cannot communicate with the CRS daemon

PostinstallationTasksforOracleGridInfrastructureforaCluster
PerformthefollowingpostinstallationproceduresonbothOracleRACnodesinthecluster.

VerifyOracleClusterwareInstallation
AftertheinstallationofOracleGridInfrastructure,youshouldrunthroughseveralteststoverifytheinstallwassuccessful.Runthefollowingcommandsonboth
nodesintheRACclusterasthegriduser.

CheckCRSStatus

[grid@racnode1 ~]$ crsctl check crs


CRS-4638: Oracle High Availability Services is online
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online

CheckClusterwareResources

[grid@racnode1 ~]$ crs_stat -t -v


Name Type R/RA F/FT Target State Host
----------------------------------------------------------------------
ora.CRS.dg ora....up.type 0/5 0/ ONLINE ONLINE racnode1
ora....ER.lsnr ora....er.type 0/5 0/ ONLINE ONLINE racnode1
ora....N1.lsnr ora....er.type 0/5 0/0 ONLINE ONLINE racnode2
ora....N2.lsnr ora....er.type 0/5 0/0 ONLINE ONLINE racnode1
ora....N3.lsnr ora....er.type 0/5 0/0 ONLINE ONLINE racnode1
ora.asm ora.asm.type 0/5 0/ ONLINE ONLINE racnode1
ora.eons ora.eons.type 0/3 0/ ONLINE ONLINE racnode1
ora.gsd ora.gsd.type 0/5 0/ OFFLINE OFFLINE
ora....network ora....rk.type 0/5 0/ ONLINE ONLINE racnode1
ora.oc4j ora.oc4j.type 0/5 0/0 OFFLINE OFFLINE
ora.ons ora.ons.type 0/3 0/ ONLINE ONLINE racnode1
ora....SM1.asm application 0/5 0/0 ONLINE ONLINE racnode1
ora....E1.lsnr application 0/5 0/0 ONLINE ONLINE racnode1
ora....de1.gsd application 0/5 0/0 OFFLINE OFFLINE
ora....de1.ons application 0/3 0/0 ONLINE ONLINE racnode1
ora....de1.vip ora....t1.type 0/0 0/0 ONLINE ONLINE racnode1
ora....SM2.asm application 0/5 0/0 ONLINE ONLINE racnode2
ora....E2.lsnr application 0/5 0/0 ONLINE ONLINE racnode2
ora....de2.gsd application 0/5 0/0 OFFLINE OFFLINE
ora....de2.ons application 0/3 0/0 ONLINE ONLINE racnode2
ora....de2.vip ora....t1.type 0/0 0/0 ONLINE ONLINE racnode2
ora.scan1.vip ora....ip.type 0/0 0/0 ONLINE ONLINE racnode2
ora.scan2.vip ora....ip.type 0/0 0/0 ONLINE ONLINE racnode1
ora.scan3.vip ora....ip.type 0/0 0/0 ONLINE ONLINE racnode1

Thecrs_statcommandisdeprecatedinOracleClusterware11gRelease2(11.2).

CheckClusterNodes

[grid@racnode1 ~]$ olsnodes -n


racnode1 1
racnode2 2

CheckOracleTNSListenerProcessonBothNodes

[grid@racnode1 ~]$ ps -ef | grep lsnr | grep -v 'grep' | grep -v 'ocfs' | awk '{print $9}'
LISTENER_SCAN2
LISTENER_SCAN3
LISTENER

[grid@racnode2 ~]$ ps -ef | grep lsnr | grep -v 'grep' | grep -v 'ocfs' | awk '{print $9}'
LISTENER_SCAN1
LISTENER

ConfirmingOracleASMFunctionforOracleClusterwareFiles

IfyouinstalledtheOCRandvotingdiskfilesonOracleASM,thenusethefollowingcommandsyntaxastheGridInfrastructureinstallationownertoconfirmthat
yourOracleASMinstallationisrunning.

[grid@racnode1 ~]$ srvctl status asm -a


ASM is running on racnode1,racnode2
ASM is enabled.

CheckOracleClusterRegistry(OCR)

[grid@racnode1 ~]$ ocrcheck


Status of Oracle Cluster Registry is as follows :
Version : 3
Total space (kbytes) : 262120
Used space (kbytes) : 2332
Available space (kbytes) : 259788
ID : 1559468462
Device/File Name : +CRS
Device/File integrity check succeeded

Device/File not configured

Device/File not configured

Device/File not configured

Device/File not configured

Cluster registry integrity check succeeded

Logical corruption check bypassed due to non-privileged user

CheckVotingDisk

[grid@racnode1 ~]$ crsctl query css votedisk


## STATE File Universal Id File Name Disk group
-- ----- ----------------- --------- ---------
1. ONLINE 05592be032644f19bf2b50a929efe843 (ORCL:CRSVOL1) [CRS]
Located 1 voting disk(s).

To manage Oracle ASM or Oracle Net 11g Release 2 (11.2) or later installations, use the
srvctl binary in the Oracle Grid Infrastructure home for a cluster (Grid home). Once we
install Oracle Real Application Clusters (the Oracle Database software), you cannot use the
srvctl binary in the database home to manage Oracle ASM or Oracle Net, which reside in
the Oracle Grid Infrastructure home.

CheckSCANResolution

AfterinstallingOracleGridInfrastructure,verifytheSCANvirtualIP.Asshownintheoutputbelow,thescanaddressisresolvedto3differentIPaddresses:

[grid@racnode1 ~]$ dig racnode-cluster-scan.idevelopment.info


; <<>> DiG 9.3.6-P1-RedHat-9.3.6-4.P1.el5_4.2 <<>> racnode-cluster-scan.idevelopment.info
;; global options: printcmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 37366
;; flags: qr aa rd ra; QUERY: 1, ANSWER: 3, AUTHORITY: 1, ADDITIONAL: 1

;; QUESTION SECTION:
;racnode-cluster-scan.idevelopment.info. IN A

;; ANSWER SECTION:
racnode-cluster-scan.idevelopment.info. 86400 IN A 192.168.1.187
racnode-cluster-scan.idevelopment.info. 86400 IN A 192.168.1.188
racnode-cluster-scan.idevelopment.info. 86400 IN A 192.168.1.189

;; AUTHORITY SECTION:
idevelopment.info. 86400 IN NS openfiler1.idevelopment.info.

;; ADDITIONAL SECTION:
openfiler1.idevelopment.info. 86400 IN A 192.168.1.195

;; Query time: 0 msec


;; SERVER: 192.168.1.195#53(192.168.1.195)
;; WHEN: Mon Nov 8 16:54:02 2010
;; MSG SIZE rcvd: 145

VotingDiskManagement

In prior releases, it was highly recommended to back up the voting disk using the dd command after installing the Oracle Clusterware software. With Oracle Clusterware release 11.2 and later, backing up and restoring a voting disk using dd is not supported and may result in the loss of the voting disk.

Backing up the voting disks in Oracle Clusterware 11g Release 2 is no longer required. The voting disk data is automatically backed up in OCR as part of any configuration change and is automatically restored to any voting disk added.
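For example, you can list the automatically generated OCR backups (which now protect the voting disk data as well) at any time with ocrconfig from the Grid Infrastructure home:

[root@racnode1 ~]# /u01/app/11.2.0/grid/bin/ocrconfig -showbackup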

Tolearnmoreaboutmanagingthevotingdisks,OracleClusterRegistry(OCR),andOracleLocalRegistry(OLR),pleaserefertotheOracleClusterware
AdministrationandDeploymentGuide11gRelease2(11.2).

BackUptheroot.shScript

Oraclerecommendsthatyoubackuptheroot.shscriptafteryoucompleteaninstallation.IfyouinstallotherproductsinthesameOraclehomedirectory,thenthe
installerupdatesthecontentsoftheexistingroot.shscriptduringtheinstallation.Ifyourequireinformationcontainedintheoriginalroot.shscript,thenyoucan
recoveritfromtheroot.shfilecopy.

Backuptheroot.shfileonbothOracleRACnodesasroot:

[root@racnode1 ~]# cd /u01/app/11.2.0/grid


[root@racnode1 grid]# cp root.sh ~/root.sh.racnode1.AFTER_INSTALL_NOV-08-2010
[root@racnode2 ~]# cd /u01/app/11.2.0/grid
[root@racnode2 grid]# cp root.sh ~/root.sh.racnode2.AFTER_INSTALL_NOV-08-2010
InstallClusterHealthManagementSoftware(Optional)

Toaddresstroubleshootingissues,OraclerecommendsthatyouinstallInstantaneousProblemDetectionOSTool(IPD/OS)ifyouareusingLinuxkernel2.6.9or
higher.ThisarticlewaswrittenusingRHEL/CentOS5.5whichusesthe2.6.18kernel:

[root@racnode1 ~]# uname -a


Linux racnode1 2.6.18-194.el5 #1 SMP Fri Apr 2 14:58:14 EDT 2010 x86_64 x86_64 x86_64 GNU/Linux

IfyouareusingaLinuxkernelearlierthan2.6.9,thenyouwoulduseOSWatcherandRACDDTwhichisavailablethroughtheMyOracleSupportwebsite(formerly
Metalink).

TheIPD/OStoolisdesignedtodetectandanalyzeoperatingsystemandclusterresourcerelateddegradationandfailures.Thetoolcanprovidebetterexplanations
formanyissuesthatoccurinclusterswhereOracleClusterware,OracleASMandOracleRACarerunning,suchasnodeevictions.Ittrackstheoperatingsystem
resourceconsumptionateachnode,process,anddevicelevelcontinuously.Itcollectsandanalyzesclusterwidedata.Inrealtimemode,whenthresholdsare
reached,analertisshowntotheoperator.Forrootcauseanalysis,historicaldatacanbereplayedtounderstandwhatwashappeningatthetimeoffailure.

InstructionsforinstallingandconfiguringtheIPD/OStoolisbeyondthescopeofthisarticleandwillnotbediscussed.YoucandownloadtheIPD/OStoolalongwith
adetailedinstallationandconfigurationguideatthefollowingURL:

http://www.oracle.com/technology/products/database/clustering/ipd_download_homepage.html

CreateASMDiskGroupsforDataandFastRecoveryArea
RuntheASMConfigurationAssistant(asmca)asthegriduserfromonlyonenodeinthecluster(racnode1)tocreatetheadditionalASMdiskgroupswhichwillbe
usedtocreatetheclusterdatabase.

DuringtheinstallationofOracleGridInfrastructure,weconfiguredoneASMdiskgroupnamed+CRSwhichwasusedtostoretheOracleclusterwarefiles(OCRand
votingdisk).

Inthissection,wewillcreatetwoadditionalASMdiskgroupsusingtheASMConfigurationAssistant(asmca).ThesenewASMdiskgroupswillbeusedlaterinthis
guidewhencreatingtheclusterdatabase.

ThefirstASMdiskgroupwillbenamed+RACDB_DATAandwillbeusedtostoreallOraclephysicaldatabasefiles(data,onlineredologs,controlfiles,archivedredo
logs).AsecondASMdiskgroupwillbecreatedfortheFastRecoveryAreanamed+FRA.

Verify Terminal Shell Environment

Before starting the ASM Configuration Assistant, log in to racnode1 as the owner of the Oracle Grid Infrastructure software which for this article is grid. Next, if you
are using a remote client to connect to the Oracle RAC node performing the installation (SSH or Telnet to racnode1 from a workstation configured with an X Server),
verify your X11 display server settings which were described in the section Logging In to a Remote System Using X Terminal.

Create Additional ASM Disk Groups using ASMCA

Perform the following tasks as the grid user to create two additional ASM disk groups:

[grid@racnode1 ~]$ asmca &


Screen Name          Response

Disk Groups          From the "Disk Groups" tab, click the [Create] button.

Create Disk Group    The "Create Disk Group" dialog should show two of the ASMLib volumes we created earlier in this guide.

                     If the ASMLib volumes we created earlier in this article do not show up in the "Select Member Disks" window as eligible
                     (ORCL:DATAVOL1 and ORCL:FRAVOL1), then click the [Change Disk Discovery Path] button and input "ORCL:*".

                     When creating the "Data" ASM disk group, use "RACDB_DATA" for the "Disk Group Name". In the "Redundancy" section,
                     choose "External (None)". Finally, check the ASMLib volume "ORCL:DATAVOL1" in the "Select Member Disks" section.

                     After verifying all values in this dialog are correct, click the [OK] button.

Disk Groups          After creating the first ASM disk group, you will be returned to the initial dialog. Click the [Create] button again to create
                     the second ASM disk group.

Create Disk Group    The "Create Disk Group" dialog should now show the final remaining ASMLib volume.

                     When creating the "Fast Recovery Area" disk group, use "FRA" for the "Disk Group Name". In the "Redundancy" section,
                     choose "External (None)". Finally, check the ASMLib volume "ORCL:FRAVOL1" in the "Select Member Disks" section.

                     After verifying all values in this dialog are correct, click the [OK] button.

Disk Groups          Exit the ASM Configuration Assistant by clicking the [Exit] button.
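
If you prefer the command line to the ASMCA GUI, the same two disk groups could also be created from SQL*Plus while connected to the local ASM instance as SYSASM. The statements below are only a sketch of what ASMCA does for the configuration used in this guide (external redundancy on the ASMLib volumes ORCL:DATAVOL1 and ORCL:FRAVOL1); it assumes the grid user's environment points at the local +ASM1 instance. Note that disk groups created this way are mounted only on the local ASM instance, so you would still have to mount them manually on racnode2, which is one reason this guide uses ASMCA instead.

[grid@racnode1 ~]$ sqlplus / as sysasm

SQL> -- data disk group on the ASMLib volume ORCL:DATAVOL1
SQL> CREATE DISKGROUP RACDB_DATA EXTERNAL REDUNDANCY DISK 'ORCL:DATAVOL1';

SQL> -- Fast Recovery Area disk group on the ASMLib volume ORCL:FRAVOL1
SQL> CREATE DISKGROUP FRA EXTERNAL REDUNDANCY DISK 'ORCL:FRAVOL1';

SQL> -- on racnode2, the new disk groups would then need to be mounted by hand:
SQL> -- ALTER DISKGROUP RACDB_DATA MOUNT;
SQL> -- ALTER DISKGROUP FRA MOUNT;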

Install Oracle Database 11g with Oracle Real Application Clusters
Perform the Oracle Database software installation from only one of the Oracle RAC nodes in the cluster (racnode1). The Oracle Database software will be installed
to both of the Oracle RAC nodes in the cluster by the Oracle Universal Installer using SSH.

Now that the Grid Infrastructure software is functional, you can install the Oracle Database software on the one node in your cluster (racnode1) as the oracle user.
OUI copies the binary files from this node to all of the other nodes in the cluster during the installation process.

For the purpose of this guide, we will forgo the "Create Database" option when installing the Oracle Database software. The cluster database will be created later in
this guide using the Database Configuration Assistant (DBCA) after all installs have been completed.
Verify Terminal Shell Environment

Before starting the Oracle Universal Installer (OUI), log in to racnode1 as the owner of the Oracle Database software which for this article is oracle. Next, if you are
using a remote client to connect to the Oracle RAC node performing the installation (SSH or Telnet to racnode1 from a workstation configured with an X Server),
verify your X11 display server settings which were described in the section Logging In to a Remote System Using X Terminal.

Install Oracle Database 11g Release 2 Software

Perform the following tasks as the oracle user to install the Oracle Database software:

[oracle@racnode1 ~]$ id
uid=1101(oracle) gid=1000(oinstall) groups=1000(oinstall),1201(asmdba),1300(dba),1301(oper)

[oracle@racnode1 ~]$ DISPLAY=<your local workstation>:0.0


[oracle@racnode1 ~]$ export DISPLAY
[oracle@racnode1 ~]$ cd /home/oracle/software/oracle/database
[oracle@racnode1 database]$ ./runInstaller

Screen Name                     Response

Configure Security Updates      For the purpose of this example, uncheck the security updates checkbox and click the [Next] button to continue.
                                Acknowledge the warning dialog indicating you have not provided an email address by clicking the [Yes] button.

Installation Option             Select "Install database software only".

Grid Options                    Select the "Real Application Clusters database installation" radio button (default) and verify that both Oracle RAC
                                nodes are checked in the "Node Name" window.

                                Next, click the [SSH Connectivity] button. Enter the "OS Password" for the oracle user and click the [Setup] button.
                                This will start the "SSH Connectivity" configuration process.

                                After the SSH configuration process successfully completes, acknowledge the dialog box.

                                Finish off this screen by clicking the [Test] button to verify passwordless SSH connectivity.

Product Languages               Make the appropriate selection(s) for your environment.

Database Edition                Select "Enterprise Edition".

Installation Location           Specify the Oracle base and Software location (Oracle home) as follows:

                                Oracle Base: /u01/app/oracle
                                Software Location: /u01/app/oracle/product/11.2.0/dbhome_1

Operating System Groups         Select the OS groups to be used for the SYSDBA and SYSOPER privileges:

                                Database Administrator (OSDBA) Group: dba
                                Database Operator (OSOPER) Group: oper

Prerequisite Checks             The installer will run through a series of checks to determine if both Oracle RAC nodes meet the minimum requirements
                                for installing and configuring the Oracle Database software.

                                Starting with 11g Release 2 (11.2), if any checks fail, the installer (OUI) will create shell script programs called fixup
                                scripts to resolve many incomplete system configuration requirements. If OUI detects an incomplete task that is marked
                                "fixable", then you can easily fix the issue by generating the fixup script by clicking the [Fix & Check Again] button.

                                The fixup script is generated during installation. You will be prompted to run the script as root in a separate terminal
                                session. When you run the script, it raises kernel values to required minimums, if necessary, and completes other
                                operating system configuration tasks.

                                If all prerequisite checks pass (as was the case for my install), the OUI continues to the Summary screen.

Summary                         Click [Finish] to start the installation.

Install Product                 The installer performs the Oracle Database software installation process on both Oracle RAC nodes.

Execute Configuration scripts   After the installation completes, you will be prompted to run the /u01/app/oracle/product/11.2.0/dbhome_1/root.sh script on
                                both Oracle RAC nodes. Open a new console window on both Oracle RAC nodes in the cluster (starting with the node you
                                are performing the install from) as the root user account.

                                Run the root.sh script on all nodes in the RAC cluster:

                                [root@racnode1 ~]# /u01/app/oracle/product/11.2.0/dbhome_1/root.sh
                                [root@racnode2 ~]# /u01/app/oracle/product/11.2.0/dbhome_1/root.sh

                                Go back to OUI and acknowledge the "Execute Configuration scripts" dialog window.

Finish                          At the end of the installation, click the [Close] button to exit the OUI.
Install Oracle Database 11g Examples (formerly Companion)
Perform the Oracle Database 11g Examples software installation from only one of the Oracle RAC nodes in the cluster (racnode1). The Oracle Database Examples
software will be installed to both of the Oracle RAC nodes in the cluster by the Oracle Universal Installer using SSH.

Now that the Oracle Database 11g software is installed, you have the option to install the Oracle Database 11g Examples. Like the Oracle Database software
install, the Examples software is only installed from one node in your cluster (racnode1) as the oracle user. OUI copies the binary files from this node to all of the
other nodes in the cluster during the installation process.

Verify Terminal Shell Environment

Before starting the Oracle Universal Installer (OUI), log in to racnode1 as the owner of the Oracle Database software which for this article is oracle. Next, if you are
using a remote client to connect to the Oracle RAC node performing the installation (SSH or Telnet to racnode1 from a workstation configured with an X Server),
verify your X11 display server settings which were described in the section Logging In to a Remote System Using X Terminal.

Install Oracle Database 11g Release 2 Examples

Perform the following tasks as the oracle user to install the Oracle Database Examples:

[oracle@racnode1 ~]$ cd /home/oracle/software/oracle/examples


[oracle@racnode1 examples]$ ./runInstaller
Screen Name              Response

Installation Location    Specify the Oracle base and Software location (Oracle home) as follows:

                         Oracle Base: /u01/app/oracle
                         Software Location: /u01/app/oracle/product/11.2.0/dbhome_1

Prerequisite Checks      The installer will run through a series of checks to determine if both Oracle RAC nodes meet the minimum requirements
                         for installing and configuring the Oracle Database Examples software.

                         Starting with 11g Release 2 (11.2), if any checks fail, the installer (OUI) will create shell script programs called fixup
                         scripts to resolve many incomplete system configuration requirements. If OUI detects an incomplete task that is marked
                         "fixable", then you can easily fix the issue by generating the fixup script by clicking the [Fix & Check Again] button.

                         The fixup script is generated during installation. You will be prompted to run the script as root in a separate terminal
                         session. When you run the script, it raises kernel values to required minimums, if necessary, and completes other
                         operating system configuration tasks.

                         If all prerequisite checks pass (as was the case for my install), the OUI continues to the Summary screen.

Summary                  Click [Finish] to start the installation.

Install Product          The installer performs the Oracle Database Examples software installation process on both Oracle RAC nodes.

Finish                   At the end of the installation, click the [Close] button to exit the OUI.

Create the Oracle Cluster Database
The database creation process should only be performed from one of the Oracle RAC nodes in the cluster (racnode1).

Use the Oracle Database Configuration Assistant (DBCA) to create the cluster database.

Before executing the DBCA, make certain that $ORACLE_HOME and $PATH are set appropriately for the $ORACLE_BASE/product/11.2.0/dbhome_1 environment.
Setting environment variables in the login script for the oracle user account was covered in the section "Create Login Script for the oracle User Account".
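
A quick sanity check of the oracle user's environment might look like the following (a minimal illustration; the output simply reflects the paths used throughout this guide):

[oracle@racnode1 ~]$ echo $ORACLE_HOME
/u01/app/oracle/product/11.2.0/dbhome_1

[oracle@racnode1 ~]$ which dbca
/u01/app/oracle/product/11.2.0/dbhome_1/bin/dbca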

You should also verify that all services we have installed up to this point (Oracle TNS listener, Oracle Clusterware processes, etc.) are running on both Oracle RAC
nodes before attempting to start the cluster database creation process:

[oracle@racnode1 ~]$ su - grid -c "crs_stat -t -v"


Password: *********
Name Type R/RA F/FT Target State Host
----------------------------------------------------------------------
ora.CRS.dg ora....up.type 0/5 0/ ONLINE ONLINE racnode1
ora.FRA.dg ora....up.type 0/5 0/ ONLINE ONLINE racnode1
ora....ER.lsnr ora....er.type 0/5 0/ ONLINE ONLINE racnode1
ora....N1.lsnr ora....er.type 0/5 0/0 ONLINE ONLINE racnode2
ora....N2.lsnr ora....er.type 0/5 0/0 ONLINE ONLINE racnode1
ora....N3.lsnr ora....er.type 0/5 0/0 ONLINE ONLINE racnode1
ora....DATA.dg ora....up.type 0/5 0/ ONLINE ONLINE racnode1
ora.asm ora.asm.type 0/5 0/ ONLINE ONLINE racnode1
ora.eons ora.eons.type 0/3 0/ ONLINE ONLINE racnode1
ora.gsd ora.gsd.type 0/5 0/ OFFLINE OFFLINE
ora....network ora....rk.type 0/5 0/ ONLINE ONLINE racnode1
ora.oc4j ora.oc4j.type 0/5 0/0 OFFLINE OFFLINE
ora.ons ora.ons.type 0/3 0/ ONLINE ONLINE racnode1
ora....SM1.asm application 0/5 0/0 ONLINE ONLINE racnode1
ora....E1.lsnr application 0/5 0/0 ONLINE ONLINE racnode1
ora....de1.gsd application 0/5 0/0 OFFLINE OFFLINE
ora....de1.ons application 0/3 0/0 ONLINE ONLINE racnode1
ora....de1.vip ora....t1.type 0/0 0/0 ONLINE ONLINE racnode1
ora....SM2.asm application 0/5 0/0 ONLINE ONLINE racnode2
ora....E2.lsnr application 0/5 0/0 ONLINE ONLINE racnode2
ora....de2.gsd application 0/5 0/0 OFFLINE OFFLINE
ora....de2.ons application 0/3 0/0 ONLINE ONLINE racnode2
ora....de2.vip ora....t1.type 0/0 0/0 ONLINE ONLINE racnode2
ora.scan1.vip ora....ip.type 0/0 0/0 ONLINE ONLINE racnode2
ora.scan2.vip ora....ip.type 0/0 0/0 ONLINE ONLINE racnode1
ora.scan3.vip ora....ip.type 0/0 0/0 ONLINE ONLINE racnode1

[oracle@racnode2 ~]$ su - grid -c "crs_stat -t -v"


Password: *********
Name Type R/RA F/FT Target State Host
----------------------------------------------------------------------
ora.CRS.dg ora....up.type 0/5 0/ ONLINE ONLINE racnode1
ora.FRA.dg ora....up.type 0/5 0/ ONLINE ONLINE racnode1
ora....ER.lsnr ora....er.type 0/5 0/ ONLINE ONLINE racnode1
ora....N1.lsnr ora....er.type 0/5 0/0 ONLINE ONLINE racnode2
ora....N2.lsnr ora....er.type 0/5 0/0 ONLINE ONLINE racnode1
ora....N3.lsnr ora....er.type 0/5 0/0 ONLINE ONLINE racnode1
ora....DATA.dg ora....up.type 0/5 0/ ONLINE ONLINE racnode1
ora.asm ora.asm.type 0/5 0/ ONLINE ONLINE racnode1
ora.eons ora.eons.type 0/3 0/ ONLINE ONLINE racnode1
ora.gsd ora.gsd.type 0/5 0/ OFFLINE OFFLINE
ora....network ora....rk.type 0/5 0/ ONLINE ONLINE racnode1
ora.oc4j ora.oc4j.type 0/5 0/0 OFFLINE OFFLINE
ora.ons ora.ons.type 0/3 0/ ONLINE ONLINE racnode1
ora....SM1.asm application 0/5 0/0 ONLINE ONLINE racnode1
ora....E1.lsnr application 0/5 0/0 ONLINE ONLINE racnode1
ora....de1.gsd application 0/5 0/0 OFFLINE OFFLINE
ora....de1.ons application 0/3 0/0 ONLINE ONLINE racnode1
ora....de1.vip ora....t1.type 0/0 0/0 ONLINE ONLINE racnode1
ora....SM2.asm application 0/5 0/0 ONLINE ONLINE racnode2
ora....E2.lsnr application 0/5 0/0 ONLINE ONLINE racnode2
ora....de2.gsd application 0/5 0/0 OFFLINE OFFLINE
ora....de2.ons application 0/3 0/0 ONLINE ONLINE racnode2
ora....de2.vip ora....t1.type 0/0 0/0 ONLINE ONLINE racnode2
ora.scan1.vip ora....ip.type 0/0 0/0 ONLINE ONLINE racnode2
ora.scan2.vip ora....ip.type 0/0 0/0 ONLINE ONLINE racnode1
ora.scan3.vip ora....ip.type 0/0 0/0 ONLINE ONLINE racnode1

Verify Terminal Shell Environment

Before starting the Database Configuration Assistant (DBCA), log in to racnode1 as the owner of the Oracle Database software which for this article is oracle.
Next, if you are using a remote client to connect to the Oracle RAC node performing the installation (SSH or Telnet to racnode1 from a workstation configured with
an X Server), verify your X11 display server settings which were described in the section Logging In to a Remote System Using X Terminal.

Create the cluster database

To start the database creation process, run the following as the oracle user:

[oracle@racnode1 ~]$ dbca &


Screen Name                Response

Welcome Screen             Select Oracle Real Application Clusters database.

Operations                 Select Create a Database.

Database Templates         Select Custom Database.

Database Identification    Cluster database configuration.
                           Configuration Type: Admin-Managed

                           Database naming.
                           Global Database Name: racdb.idevelopment.info
                           SID Prefix: racdb

                           Note: I used idevelopment.info for the database domain. You may use any database domain. Keep in mind that this
                           domain does not have to be a valid DNS domain.

                           Node Selection.
                           Click the [Select All] button to select all servers: racnode1 and racnode2.

Management Options         Leave the default options here, which is to Configure Enterprise Manager / Configure Database Control for local
                           management.

Database Credentials       I selected to Use the Same Administrative Password for All Accounts. Enter the password (twice) and make sure
                           the password does not start with a digit.

Database File Locations    Specify storage type and locations for database files.

                           Storage Type: Automatic Storage Management (ASM)
                           Storage Locations: Use Oracle-Managed Files
                           Database Area: +RACDB_DATA

Specify ASMSNMP Password   Specify the ASMSNMP password for the ASM instance.

Recovery Configuration     Check the option for Specify Fast Recovery Area.

                           For the Fast Recovery Area, click the [Browse] button and select the disk group name +FRA.

                           My disk group has a size of about 33GB. When defining the Fast Recovery Area size, use the entire volume minus 10%
                           for overhead (33GB - 10% = 30GB). I used a Fast Recovery Area Size of 30GB (30413 MB).

Database Content           I left all of the Database Components (and destination tablespaces) set to their default value although it is perfectly OK to
                           select the Sample Schemas. This option is available since we installed the Oracle Database 11g Examples.

Initialization Parameters  Change any parameters for your environment. I left them all at their default settings.

Database Storage           Change any parameters for your environment. I left them all at their default settings.

Creation Options           Keep the default option Create Database selected. I also always select to Generate Database Creation Scripts. Click
                           Finish to start the database creation process. After acknowledging the database creation report and script generation
                           dialog, the database creation will start.

                           Click OK on the "Summary" screen.

End of Database Creation   At the end of the database creation, exit from the DBCA.

When the DBCA has completed, you will have a fully functional Oracle RAC 11g Release 2 cluster running!

Create New Services (Optional)

Optionally, add any services to the new cluster database and assign them to instance(s).

[oracle@racnode1 ~]$ srvctl add service -d racdb -s racdbsvc.idevelopment.info -r racdb1,racdb2


[oracle@racnode1 ~]$ srvctl start service -d racdb

[oracle@racnode1 ~]$ srvctl status database -d racdb -v


Instance racdb1 is running on node racnode1 with online services racdbsvc.idevelopment.info. Instance status: Open.
Instance racdb2 is running on node racnode2 with online services racdbsvc.idevelopment.info. Instance status: Open.
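
To confirm how the new service was registered (its preferred instances, TAF policy, and so on), you can also query the service configuration with srvctl. This is only an optional verification step; the output is not shown here since the fields printed vary slightly across 11.2 patch levels:

[oracle@racnode1 ~]$ srvctl config service -d racdb -s racdbsvc.idevelopment.info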

Verify cluster database is Open

[oracle@racnode1 ~]$ su - grid -c "crsctl status resource -w \"TYPE co 'ora'\" -t"


Password: *********
--------------------------------------------------------------------------------
NAME TARGET STATE SERVER STATE_DETAILS
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.CRS.dg
ONLINE ONLINE racnode1
ONLINE ONLINE racnode2
ora.DOCS.dg
ONLINE ONLINE racnode1
ONLINE ONLINE racnode2
ora.FRA.dg
ONLINE ONLINE racnode1
ONLINE ONLINE racnode2
ora.LISTENER.lsnr
ONLINE ONLINE racnode1
ONLINE ONLINE racnode2
ora.RACDB_DATA.dg
ONLINE ONLINE racnode1
ONLINE ONLINE racnode2
ora.asm
ONLINE ONLINE racnode1 Started
ONLINE ONLINE racnode2 Started
ora.gsd
OFFLINE OFFLINE racnode1
OFFLINE OFFLINE racnode2
ora.net1.network
ONLINE ONLINE racnode1
ONLINE ONLINE racnode2
ora.ons
ONLINE ONLINE racnode1
ONLINE ONLINE racnode2
ora.registry.acfs
ONLINE ONLINE racnode1
ONLINE ONLINE racnode2
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
1 ONLINE ONLINE racnode2
ora.LISTENER_SCAN2.lsnr
1 ONLINE ONLINE racnode1
ora.LISTENER_SCAN3.lsnr
1 ONLINE ONLINE racnode1
ora.cvu
1 ONLINE ONLINE racnode1
ora.oc4j
1 ONLINE ONLINE racnode1
ora.racdb.db
1 ONLINE ONLINE racnode1 Open
2 ONLINE ONLINE racnode2 Open
ora.racdb.racdbsvc.idevelopment.info.svc
1 ONLINE ONLINE racnode1
2 ONLINE ONLINE racnode2
ora.racnode1.vip
1 ONLINE ONLINE racnode1
ora.racnode2.vip
1 ONLINE ONLINE racnode2
ora.scan1.vip
1 ONLINE ONLINE racnode2
ora.scan2.vip
1 ONLINE ONLINE racnode1
ora.scan3.vip
1 ONLINE ONLINE racnode1

Oracle Enterprise Manager

If you configured Oracle Enterprise Manager (Database Control), it can be used to view the database configuration and current status of the database.

The URL for this example is: https://racnode1.idevelopment.info:1158/em

[oracle@racnode1 ~]$ emctl status dbconsole


Oracle Enterprise Manager 11g Database Control Release 11.2.0.1.0
Copyright (c) 1996, 2009 Oracle Corporation. All rights reserved.
https://racnode1.idevelopment.info:1158/em/console/aboutApplication
Oracle Enterprise Manager 11g is running.
------------------------------------------------------------------
Logs are generated in directory /u01/app/oracle/product/11.2.0/dbhome_1/racnode1_racdb/sysman/log

Figure 18: Oracle Enterprise Manager (Database Console)

Post Database Creation Tasks (Optional)
This section offers several optional tasks that can be performed on your new Oracle 11g environment in order to enhance availability as well as database
management.

Recompile Invalid Objects

Run the utlrp.sql script to recompile all invalid PL/SQL packages now instead of when the packages are accessed for the first time. This step is optional but
recommended.

[oracle@racnode1 ~]$ sqlplus / as sysdba


SQL> @?/rdbms/admin/utlrp.sql
Enabling Archive Logs in a RAC Environment

Whether a single instance or cluster database, Oracle tracks and logs all changes to database blocks in online redo log files. In an Oracle RAC environment, each
instance will have its own set of online redo log files known as a thread. Each Oracle instance will use its group of online redo logs in a circular manner. Once an
online redo log fills, Oracle moves to the next one. If the database is in "Archive Log Mode", Oracle will make a copy of the online redo log before it gets reused. A
thread must contain at least two online redo logs (or online redo log groups). The same holds true for a single instance configuration. The single instance must contain
at least two online redo logs (or online redo log groups).

The size of an online redo log file is completely independent of another instance's redo log size. Although in most configurations the size is the same, it may be
different depending on the workload and backup/recovery considerations for each node. It is also worth mentioning that each instance has exclusive write access
to its own online redo log files. In a correctly configured RAC environment, however, each instance can read another instance's current online redo log file to perform
instance recovery if that instance was terminated abnormally. It is therefore a requirement that online redo logs be located on a shared storage device (just like the
database files).

As already mentioned, Oracle writes to its online redo log files in a circular manner. When the current online redo log fills, Oracle will switch to the next one. To
facilitate media recovery, Oracle allows the DBA to put the database into "Archive Log Mode", which makes a copy of the online redo log after it fills (and before it
gets reused). This is a process known as archiving.
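
The per-instance redo threads are easy to see from the data dictionary. The following query is just an illustrative example (group numbers, sizes, and statuses will differ in your database); it lists each online redo log group along with the thread (that is, the instance) it belongs to:

SQL> SELECT thread#, group#, bytes/1024/1024 AS size_mb, status
  2  FROM v$log
  3  ORDER BY thread#, group#;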

The Database Configuration Assistant (DBCA) allows users to configure a new database to be in archive log mode within the Recovery Configuration section;
however, most DBAs opt to bypass this option during initial database creation. In cases like this where the database is in no archive log mode, it is a simple task to
put the database into archive log mode. Note however that this will require a short database outage. From one of the nodes in the Oracle RAC configuration, use the
following tasks to put a RAC enabled database into archive log mode. For the purpose of this article, I will use the node racnode1 which runs the racdb1 instance:

1. Log in to one of the nodes (i.e. racnode1) as oracle and disable the cluster instance parameter by setting cluster_database to FALSE from the current
instance:

[oracle@racnode1 ~]$ sqlplus / as sysdba


SQL> alter system set cluster_database=false scope=spfile sid='racdb1';

System altered.

2. Shut down all instances accessing the cluster database as the oracle user:

[oracle@racnode1 ~]$ srvctl stop database -d racdb


3. Using the local instance, mount the database:

[oracle@racnode1 ~]$ sqlplus / as sysdba


SQL*Plus: Release 11.2.0.1.0 Production on Sat Nov 21 19:26:47 2009

Copyright (c) 1982, 2009, Oracle. All rights reserved.

Connected to an idle instance.

SQL> startup mount


ORACLE instance started.

Total System Global Area 1653518336 bytes


Fixed Size 2213896 bytes
Variable Size 1073743864 bytes
Database Buffers 570425344 bytes
Redo Buffers 7135232 bytes

4. Enable archiving:

SQL> alter database archivelog;


Database altered.

5. Re-enable support for clustering by modifying the instance parameter cluster_database to TRUE from the current instance:

SQL> alter system set cluster_database=true scope=spfile sid='racdb1';


System altered.

6. Shut down the local instance:

SQL> shutdown immediate


ORA-01109: database not open

Database dismounted.
ORACLE instance shut down.

7. Bring all instances back up as the oracle account using srvctl:

[oracle@racnode1 ~]$ srvctl start database -d racdb


8. Log in to the local instance and verify Archive Log Mode is enabled:
[oracle@racnode1 ~]$ sqlplus / as sysdba
SQL*Plus: Release 11.2.0.1.0 Production on Mon Nov 8 20:07:48 2010

Copyright (c) 1982, 2009, Oracle. All rights reserved.

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Data Mining and Real Application Testing options

SQL> archive log list


Database log mode Archive Mode
Automatic archival Enabled
Archive destination USE_DB_RECOVERY_FILE_DEST
Oldest online log sequence 68
Next log sequence to archive 69
Current log sequence 69

After enabling Archive Log Mode, each instance in the RAC configuration can automatically archive redo logs!

Download and Install Custom Oracle Database Scripts

DBAs rely on Oracle's data dictionary views and dynamic performance views in order to support and better manage their databases. Although these views provide a
simple and easy mechanism to query critical information regarding the database, it helps to have a collection of accurate and readily available SQL scripts to query
these views.

In this section you will download and install a collection of Oracle DBA scripts that can be used to manage many aspects of your database including space
management, performance, backups, security, and session management. The DBA Scripts Archive for Oracle can be downloaded using the following link:
http://www.idevelopment.info/data/Oracle/DBA_scripts/dba_scripts_archive_Oracle.zip. As the oracle user account, download the
dba_scripts_archive_Oracle.zip archive to the $ORACLE_BASE directory of each node in the cluster. For the purpose of this example, the
dba_scripts_archive_Oracle.zip archive will be copied to /u01/app/oracle. Next, unzip the archive file to the $ORACLE_BASE directory.

For example, perform the following on both nodes in the Oracle RAC cluster as the oracle user account:

[oracle@racnode1 ~]$ mv dba_scripts_archive_Oracle.zip /u01/app/oracle


[oracle@racnode1 ~]$ cd /u01/app/oracle
[oracle@racnode1 oracle]$ unzip dba_scripts_archive_Oracle.zip
The final step is to verify (or set) the appropriate environment variable for the current UNIX shell to ensure the Oracle SQL scripts can be run from within SQL*Plus
while in any directory. For UNIX, verify the following environment variable is set and included in your login shell script:

ORACLE_PATH=$ORACLE_BASE/dba_scripts/sql:.:$ORACLE_HOME/rdbms/admin
export ORACLE_PATH

The ORACLE_PATH environment variable should already be set in the .bash_profile login
script that was created in the section Create Login Script for the oracle User Account.

Now that the DBA Scripts Archive for Oracle has been unzipped and the UNIX environment variable ($ORACLE_PATH) has been set to the appropriate directory, you
should now be able to run any of the SQL scripts in $ORACLE_BASE/dba_scripts/sql while logged in to SQL*Plus from any directory. For example, to query
tablespace information while logged into the Oracle database as a DBA user:

SQL> @dba_tablespaces
Status Tablespace Name TS Type Ext. Mgt. Seg. Mgt. Tablespace Size Used (in bytes) Pct. Used
------- ----------------- ------------ ---------- --------- ---------------- ---------------- ---------
ONLINE SYSAUX PERMANENT LOCAL AUTO 629,145,600 511,967,232 81
ONLINE UNDOTBS1 UNDO LOCAL MANUAL 1,059,061,760 948,043,776 90
ONLINE USERS PERMANENT LOCAL AUTO 5,242,880 1,048,576 20
ONLINE SYSTEM PERMANENT LOCAL MANUAL 734,003,200 703,135,744 96
ONLINE EXAMPLE PERMANENT LOCAL AUTO 157,286,400 85,131,264 54
ONLINE UNDOTBS2 UNDO LOCAL MANUAL 209,715,200 20,840,448 10
ONLINE TEMP TEMPORARY LOCAL MANUAL 75,497,472 66,060,288 88
---------------- ---------------- ---------
avg 63
sum 2,869,952,512 2,336,227,328

7 rows selected.

To obtain a list of all available Oracle DBA scripts while logged in to SQL*Plus, run the help.sql script.

SQL> @help.sql
========================================
Automatic Shared Memory Management
========================================
asmm_components.sql

========================================
Automatic Storage Management
========================================
asm_alias.sql
asm_clients.sql
asm_diskgroups.sql
asm_disks.sql
asm_disks_perf.sql
asm_drop_files.sql
asm_files.sql
asm_files2.sql
asm_templates.sql

< --- SNIP --- >

perf_top_sql_by_buffer_gets.sql
perf_top_sql_by_disk_reads.sql

========================================
Workspace Manager
========================================
wm_create_workspace.sql
wm_disable_versioning.sql
wm_enable_versioning.sql
wm_freeze_workspace.sql
wm_get_workspace.sql
wm_goto_workspace.sql
wm_merge_workspace.sql
wm_refresh_workspace.sql
wm_remove_workspace.sql
wm_unfreeze_workspace.sql
wm_workspaces.sql

Create/Alter Tablespaces
When creating the cluster database, we left all tablespaces set to their default size. If you are using a large drive for the shared storage, you may want to make a
sizable testing database.

Below are several optional SQL commands for modifying and creating all tablespaces for the test database. Please keep in mind that the database file names (OMF
files) used in this example may differ from what the Oracle Database Configuration Assistant (DBCA) creates for your environment. When working through this
section, substitute the data file names that were created in your environment where appropriate. The following query can be used to determine the file names for
your environment:

SQL> select tablespace_name, file_name


2 from dba_data_files
3 union
4 select tablespace_name, file_name
5 from dba_temp_files;
TABLESPACE_NAME FILE_NAME
--------------- --------------------------------------------------
EXAMPLE +RACDB_DATA/racdb/datafile/example.263.703530435
SYSAUX +RACDB_DATA/racdb/datafile/sysaux.260.703530411
SYSTEM +RACDB_DATA/racdb/datafile/system.259.703530397
TEMP +RACDB_DATA/racdb/tempfile/temp.262.703530429
UNDOTBS1 +RACDB_DATA/racdb/datafile/undotbs1.261.703530423
UNDOTBS2 +RACDB_DATA/racdb/datafile/undotbs2.264.703530441
USERS +RACDB_DATA/racdb/datafile/users.265.703530447

7 rows selected.

[oracle@racnode1 ~]$ sqlplus "/ as sysdba"


SQL> create user scott identified by tiger default tablespace users;

User created.

SQL> grant dba, resource, connect to scott;


Grant succeeded.

SQL> alter database datafile '+RACDB_DATA/racdb/datafile/users.265.703530447' resize 1024m;


Database altered.

SQL> alter tablespace users add datafile '+RACDB_DATA' size 1024m autoextend off;
Tablespace altered.

SQL> create tablespace indx datafile '+RACDB_DATA' size 1024m


2 autoextend on next 100m maxsize unlimited
3 extent management local autoallocate
4 segment space management auto;
Tablespace created.

SQL> alter database datafile '+RACDB_DATA/racdb/datafile/system.259.703530397' resize 1024m;


Database altered.

SQL> alter database datafile '+RACDB_DATA/racdb/datafile/sysaux.260.703530411' resize 1024m;


Database altered.
SQL> alter database datafile '+RACDB_DATA/racdb/datafile/undotbs1.261.703530423' resize 1024m;
Database altered.

SQL> alter database datafile '+RACDB_DATA/racdb/datafile/undotbs2.264.703530441' resize 1024m;


Database altered.

SQL> alter database tempfile '+RACDB_DATA/racdb/tempfile/temp.262.703530429' resize 1024m;


Database altered.

Here is a snapshot of the tablespaces I have defined for my test database environment:

Status Tablespace Name TS Type Ext. Mgt. Seg. Mgt. Tablespace Size Used (in bytes) Pct. Used
------- ----------------- ------------ ---------- --------- ---------------- ---------------- ---------
ONLINE SYSAUX PERMANENT LOCAL AUTO 1,073,741,824 512,098,304 48
ONLINE UNDOTBS1 UNDO LOCAL MANUAL 1,073,741,824 948,043,776 88
ONLINE USERS PERMANENT LOCAL AUTO 2,147,483,648 2,097,152 0
ONLINE SYSTEM PERMANENT LOCAL MANUAL 1,073,741,824 703,201,280 65
ONLINE EXAMPLE PERMANENT LOCAL AUTO 157,286,400 85,131,264 54
ONLINE INDX PERMANENT LOCAL AUTO 1,073,741,824 1,048,576 0
ONLINE UNDOTBS2 UNDO LOCAL MANUAL 1,073,741,824 20,840,448 2
ONLINE TEMP TEMPORARY LOCAL MANUAL 1,073,741,824 66,060,288 6
---------------- ---------------- ---------
avg 33
sum 8,747,220,992 2,338,521,088

8 rows selected.

Verify Oracle Grid Infrastructure and Database Configuration
The following Oracle Clusterware and Oracle RAC verification checks can be performed on any of the Oracle RAC nodes in the cluster. For the purpose of this
article, I will only be performing checks from racnode1 as the oracle OS user.

Most of the checks described in this section use the Server Control Utility (SRVCTL) and can be run as either the oracle or grid OS user. There are five node-level
tasks defined for SRVCTL:

Adding and deleting node-level applications
Setting and unsetting the environment for node-level applications
Administering node applications
Administering ASM instances
Starting and stopping a group of programs that includes virtual IP addresses, listeners, Oracle Notification Services, and Oracle Enterprise Manager agents
(for maintenance purposes)

Oracle also provides the Oracle Clusterware Control (CRSCTL) utility. CRSCTL is an interface between you and Oracle Clusterware, parsing and calling Oracle
Clusterware APIs for Oracle Clusterware objects.

Oracle Clusterware 11g Release 2 (11.2) introduces cluster-aware commands with which you can perform check, start, and stop operations on the cluster. You can
run these commands from any node in the cluster on another node in the cluster, or on all nodes in the cluster, depending on the operation.

You can use CRSCTL commands to perform several operations on Oracle Clusterware, such as:

Starting and stopping Oracle Clusterware resources
Enabling and disabling Oracle Clusterware daemons
Checking the health of the cluster
Managing resources that represent third-party applications
Integrating Intelligent Platform Management Interface (IPMI) with Oracle Clusterware to provide failure isolation support and to ensure cluster integrity
Debugging Oracle Clusterware components

For the purpose of this article (and this section), we will only make use of the "Checking the health of the cluster" operation which uses the Clusterized (Cluster
Aware) Command:

crsctl check cluster

Many subprograms and commands were deprecated in Oracle Clusterware 11g Release 2 (11.2):

crs_stat
crs_register
crs_unregister
crs_start
crs_stop
crs_getperm
crs_profile
crs_relocate
crs_setperm
crsctl check crsd
crsctl check cssd
crsctl check evmd
crsctl debug log
crsctl set css votedisk
crsctl start resources
crsctl stop resources

Check the Health of the Cluster (Clusterized Command)

Run as the grid user.
[grid@racnode1 ~]$ crsctl check cluster
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online

All Oracle Instances (Database Status)

[oracle@racnode1 ~]$ srvctl status database -d racdb


Instance racdb1 is running on node racnode1
Instance racdb2 is running on node racnode2

Single Oracle Instance (Status of Specific Instance)

[oracle@racnode1 ~]$ srvctl status instance -d racdb -i racdb1


Instance racdb1 is running on node racnode1

Node Applications (Status)

[oracle@racnode1 ~]$ srvctl status nodeapps


VIP racnode1-vip is enabled
VIP racnode1-vip is running on node: racnode1
VIP racnode2-vip is enabled
VIP racnode2-vip is running on node: racnode2
Network is enabled
Network is running on node: racnode1
Network is running on node: racnode2
GSD is disabled
GSD is not running on node: racnode1
GSD is not running on node: racnode2
ONS is enabled
ONS daemon is running on node: racnode1
ONS daemon is running on node: racnode2
eONS is enabled
eONS daemon is running on node: racnode1
eONS daemon is running on node: racnode2

Node Applications (Configuration)

[oracle@racnode1 ~]$ srvctl config nodeapps


VIP exists.:racnode1
VIP exists.: /racnode1-vip/192.168.1.251/255.255.255.0/eth0
VIP exists.:racnode2
VIP exists.: /racnode2-vip/192.168.1.252/255.255.255.0/eth0
GSD exists.
ONS daemon exists. Local port 6100, remote port 6200
eONS daemon exists. Multicast port 24057, multicast IP address 234.194.43.168, listening port 2016

List all Configured Databases

[oracle@racnode1 ~]$ srvctl config database


racdb

Database (Configuration)

[oracle@racnode1 ~]$ srvctl config database -d racdb -a


Database unique name: racdb
Database name: racdb
Oracle home: /u01/app/oracle/product/11.2.0/dbhome_1
Oracle user: oracle
Spfile: +RACDB_DATA/racdb/spfileracdb.ora
Domain: idevelopment.info
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Server pools: racdb
Database instances: racdb1,racdb2
Disk Groups: RACDB_DATA,FRA
Mount point paths:
Services: racdbsvc.idevelopment.info
Type: RAC
Database is enabled
Database is administrator managed

ASM (Status)
[oracle@racnode1 ~]$ srvctl status asm
ASM is running on racnode1,racnode2

ASM (Configuration)

$ srvctl config asm -a


ASM home: /u01/app/11.2.0/grid
ASM listener: LISTENER
ASM is enabled.

TNS listener (Status)

[oracle@racnode1 ~]$ srvctl status listener


Listener LISTENER is enabled
Listener LISTENER is running on node(s): racnode1,racnode2

TNS listener (Configuration)

[oracle@racnode1 ~]$ srvctl config listener -a


Name: LISTENER
Network: 1, Owner: grid
Home: <CRS home>
/u01/app/11.2.0/grid on node(s) racnode2,racnode1
End points: TCP:1521

SCAN (Status)

[oracle@racnode1 ~]$ srvctl status scan


SCAN VIP scan1 is enabled
SCAN VIP scan1 is running on node racnode2
SCAN VIP scan2 is enabled
SCAN VIP scan2 is running on node racnode1
SCAN VIP scan3 is enabled
SCAN VIP scan3 is running on node racnode1

SCAN (Configuration)

[oracle@racnode1 ~]$ srvctl config scan


SCAN name: racnode-cluster-scan, Network: 1/192.168.1.0/255.255.255.0/eth0
SCAN VIP name: scan1, IP: /racnode-cluster-scan.idevelopment.info/192.168.1.188
SCAN VIP name: scan2, IP: /racnode-cluster-scan.idevelopment.info/192.168.1.189
SCAN VIP name: scan3, IP: /racnode-cluster-scan.idevelopment.info/192.168.1.187

VIP (Status of Specific Node)

[oracle@racnode1 ~]$ srvctl status vip -n racnode1


VIP racnode1-vip is enabled
VIP racnode1-vip is running on node: racnode1

[oracle@racnode1 ~]$ srvctl status vip -n racnode2


VIP racnode2-vip is enabled
VIP racnode2-vip is running on node: racnode2

VIP (Configuration of Specific Node)

[oracle@racnode1 ~]$ srvctl config vip -n racnode1


VIP exists.:racnode1
VIP exists.: /racnode1-vip/192.168.1.251/255.255.255.0/eth0

[oracle@racnode1 ~]$ srvctl config vip -n racnode2


VIP exists.:racnode2
VIP exists.: /racnode2-vip/192.168.1.252/255.255.255.0/eth0

Configuration for Node Applications (VIP, GSD, ONS, Listener)

[oracle@racnode1 ~]$ srvctl config nodeapps -a -g -s -l


-l option has been deprecated and will be ignored.
VIP exists.:racnode1
VIP exists.: /racnode1-vip/192.168.1.251/255.255.255.0/eth0
VIP exists.:racnode2
VIP exists.: /racnode2-vip/192.168.1.252/255.255.255.0/eth0
GSD exists.
ONS daemon exists. Local port 6100, remote port 6200
Name: LISTENER
Network: 1, Owner: grid
Home: <CRS home>
/u01/app/11.2.0/grid on node(s) racnode2,racnode1
End points: TCP:1521

Verifying Clock Synchronization across the Cluster Nodes

[oracle@racnode1 ~]$ cluvfy comp clocksync -verbose


Verifying Clock Synchronization across the cluster nodes

Checking if Clusterware is installed on all nodes...


Check of Clusterware install passed

Checking if CTSS Resource is running on all nodes...


Check: CTSS Resource running on all nodes
Node Name Status
------------------------------------ ------------------------
racnode1 passed
Result: CTSS resource check passed

Querying CTSS for time offset on all nodes...


Result: Query of CTSS for time offset passed

Check CTSS state started...


Check: CTSS state
Node Name State
------------------------------------ ------------------------
racnode1 Active
CTSS is in Active state. Proceeding with check of clock time offsets on all nodes...
Reference Time Offset Limit: 1000.0 msecs
Check: Reference Time Offset
Node Name Time Offset Status
------------ ------------------------ ------------------------
racnode1 0.0 passed

Time offset is within the specified limits on the following set of nodes:
"[racnode1]"
Result: Check of clock time offsets passed

Oracle Cluster Time Synchronization Services check passed

Verification of Clock Synchronization across the cluster nodes was successful.

All running instances in the cluster (SQL)

SELECT
inst_id
, instance_number inst_no
, instance_name inst_name
, parallel
, status
, database_status db_status
, active_state state
, host_name host
FROM gv$instance
ORDER BY inst_id;
INST_ID INST_NO INST_NAME PAR STATUS DB_STATUS STATE HOST
-------- -------- ---------- --- ------- ------------ --------- -------
1 1 racdb1 YES OPEN ACTIVE NORMAL racnode1
2 2 racdb2 YES OPEN ACTIVE NORMAL racnode2

All database files and the ASM disk group they reside in (SQL)

select name from v$datafile


union
select member from v$logfile
union
select name from v$controlfile
union
select name from v$tempfile;
NAME
-------------------------------------------
+FRA/racdb/controlfile/current.256.703530389
+FRA/racdb/onlinelog/group_1.257.703530391
+FRA/racdb/onlinelog/group_2.258.703530393
+FRA/racdb/onlinelog/group_3.259.703533497
+FRA/racdb/onlinelog/group_4.260.703533499
+RACDB_DATA/racdb/controlfile/current.256.703530389
+RACDB_DATA/racdb/datafile/example.263.703530435
+RACDB_DATA/racdb/datafile/indx.270.703542993
+RACDB_DATA/racdb/datafile/sysaux.260.703530411
+RACDB_DATA/racdb/datafile/system.259.703530397
+RACDB_DATA/racdb/datafile/undotbs1.261.703530423
+RACDB_DATA/racdb/datafile/undotbs2.264.703530441
+RACDB_DATA/racdb/datafile/users.265.703530447
+RACDB_DATA/racdb/datafile/users.269.703542943
+RACDB_DATA/racdb/onlinelog/group_1.257.703530391
+RACDB_DATA/racdb/onlinelog/group_2.258.703530393
+RACDB_DATA/racdb/onlinelog/group_3.266.703533497
+RACDB_DATA/racdb/onlinelog/group_4.267.703533499
+RACDB_DATA/racdb/tempfile/temp.262.703530429

19 rows selected.

ASM Disk Volumes (SQL)

SELECT path
FROM v$asm_disk;
PATH
----------------------------------
ORCL:CRSVOL1
ORCL:DATAVOL1
ORCL:FRAVOL1

Starting/Stopping the Cluster
At this point, everything has been installed and configured for Oracle RAC 11g Release 2. Oracle Grid Infrastructure was installed by the grid user while the Oracle
RAC software was installed by oracle. We also have a fully functional cluster database running named racdb.

After all of that hard work, you may ask, "OK, so how do I start and stop services?". If you have followed the instructions in this guide, all services, including Oracle
Clusterware, ASM, network, SCAN, VIP, the Oracle Database, and so on, should start automatically on each reboot of the Linux nodes.
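
If you want to confirm that the Oracle High Availability Services stack is in fact configured to start automatically on boot (or re-enable it if it has been disabled), the crsctl utility can report and change the autostart setting. Run the following as root; the CRS-numbered confirmation messages are not reproduced here since they vary slightly by patch level:

[root@racnode1 ~]# /u01/app/11.2.0/grid/bin/crsctl config crs
[root@racnode1 ~]# /u01/app/11.2.0/grid/bin/crsctl enable crs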

There are times, however, when you might want to take down the Oracle services on a node for maintenance purposes and restart the Oracle Clusterware stack at
a later time. Or you may find that Enterprise Manager is not running and need to start it. This section provides the commands necessary to stop and start the Oracle
Clusterware stack on a local server (racnode1).

The following stop/start actions need to be performed as root.

Stopping the Oracle Clusterware Stack on the Local Server

Use the "crsctl stop cluster" command on racnode1 to stop the Oracle Clusterware stack:

[root@racnode1 ~]# /u01/app/11.2.0/grid/bin/crsctl stop cluster


CRS-2673: Attempting to stop 'ora.crsd' on 'racnode1'
CRS-2790: Starting shutdown of Cluster Ready Services-managed resources on 'racnode1'
CRS-2673: Attempting to stop 'ora.CRS.dg' on 'racnode1'
CRS-2673: Attempting to stop 'ora.racdb.db' on 'racnode1'
CRS-2673: Attempting to stop 'ora.LISTENER.lsnr' on 'racnode1'
CRS-2673: Attempting to stop 'ora.LISTENER_SCAN3.lsnr' on 'racnode1'
CRS-2673: Attempting to stop 'ora.LISTENER_SCAN2.lsnr' on 'racnode1'
CRS-2677: Stop of 'ora.LISTENER.lsnr' on 'racnode1' succeeded
CRS-2673: Attempting to stop 'ora.racnode1.vip' on 'racnode1'
CRS-2677: Stop of 'ora.racnode1.vip' on 'racnode1' succeeded
CRS-2672: Attempting to start 'ora.racnode1.vip' on 'racnode2'
CRS-2677: Stop of 'ora.LISTENER_SCAN3.lsnr' on 'racnode1' succeeded
CRS-2673: Attempting to stop 'ora.scan3.vip' on 'racnode1'
CRS-2677: Stop of 'ora.LISTENER_SCAN2.lsnr' on 'racnode1' succeeded
CRS-2673: Attempting to stop 'ora.scan2.vip' on 'racnode1'
CRS-2677: Stop of 'ora.scan3.vip' on 'racnode1' succeeded
CRS-2672: Attempting to start 'ora.scan3.vip' on 'racnode2'
CRS-2677: Stop of 'ora.scan2.vip' on 'racnode1' succeeded
CRS-2672: Attempting to start 'ora.scan2.vip' on 'racnode2'
CRS-2676: Start of 'ora.racnode1.vip' on 'racnode2' succeeded <-- Notice racnode1 VIP moved to racnode2
CRS-2676: Start of 'ora.scan3.vip' on 'racnode2' succeeded <-- Notice SCAN3 VIP moved to racnode2
CRS-2676: Start of 'ora.scan2.vip' on 'racnode2' succeeded <-- Notice SCAN2 VIP moved to racnode2
CRS-2672: Attempting to start 'ora.LISTENER_SCAN3.lsnr' on 'racnode2'
CRS-2672: Attempting to start 'ora.LISTENER_SCAN2.lsnr' on 'racnode2'
CRS-2676: Start of 'ora.LISTENER_SCAN3.lsnr' on 'racnode2' succeeded <-- Notice LISTENER_SCAN3 moved to racnode2
CRS-2676: Start of 'ora.LISTENER_SCAN2.lsnr' on 'racnode2' succeeded <-- Notice LISTENER_SCAN2 moved to racnode2
CRS-2677: Stop of 'ora.CRS.dg' on 'racnode1' succeeded
CRS-2677: Stop of 'ora.racdb.db' on 'racnode1' succeeded
CRS-2673: Attempting to stop 'ora.FRA.dg' on 'racnode1'
CRS-2673: Attempting to stop 'ora.RACDB_DATA.dg' on 'racnode1'
CRS-2677: Stop of 'ora.FRA.dg' on 'racnode1' succeeded
CRS-2677: Stop of 'ora.RACDB_DATA.dg' on 'racnode1' succeeded
CRS-2673: Attempting to stop 'ora.asm' on 'racnode1'
CRS-2677: Stop of 'ora.asm' on 'racnode1' succeeded
CRS-2673: Attempting to stop 'ora.eons' on 'racnode1'
CRS-2673: Attempting to stop 'ora.ons' on 'racnode1'
CRS-2677: Stop of 'ora.ons' on 'racnode1' succeeded
CRS-2673: Attempting to stop 'ora.net1.network' on 'racnode1'
CRS-2677: Stop of 'ora.net1.network' on 'racnode1' succeeded
CRS-2677: Stop of 'ora.eons' on 'racnode1' succeeded
CRS-2792: Shutdown of Cluster Ready Services-managed resources on 'racnode1' has completed
CRS-2677: Stop of 'ora.crsd' on 'racnode1' succeeded
CRS-2673: Attempting to stop 'ora.cssdmonitor' on 'racnode1'
CRS-2673: Attempting to stop 'ora.ctssd' on 'racnode1'
CRS-2673: Attempting to stop 'ora.evmd' on 'racnode1'
CRS-2673: Attempting to stop 'ora.asm' on 'racnode1'
CRS-2677: Stop of 'ora.cssdmonitor' on 'racnode1' succeeded
CRS-2677: Stop of 'ora.evmd' on 'racnode1' succeeded
CRS-2677: Stop of 'ora.ctssd' on 'racnode1' succeeded
CRS-2677: Stop of 'ora.asm' on 'racnode1' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on 'racnode1'
CRS-2677: Stop of 'ora.cssd' on 'racnode1' succeeded
CRS-2673: Attempting to stop 'ora.diskmon' on 'racnode1'
CRS-2677: Stop of 'ora.diskmon' on 'racnode1' succeeded

If any resources that Oracle Clusterware manages are still running after you run the "crsctl stop cluster" command, then the entire command fails. Use the -f
option to unconditionally stop all resources and stop the Oracle Clusterware stack.

Also note that you can stop the Oracle Clusterware stack on all servers in the cluster by specifying -all. The following will bring down the Oracle Clusterware stack
on both racnode1 and racnode2:

[root@racnode1 ~]# /u01/app/11.2.0/grid/bin/crsctl stop cluster -all


Starting the Oracle Clusterware Stack on the Local Server

Use the "crsctl start cluster" command on racnode1 to start the Oracle Clusterware stack:

[root@racnode1 ~]# /u01/app/11.2.0/grid/bin/crsctl start cluster


CRS-2672: Attempting to start 'ora.cssdmonitor' on 'racnode1'
CRS-2676: Start of 'ora.cssdmonitor' on 'racnode1' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'racnode1'
CRS-2672: Attempting to start 'ora.diskmon' on 'racnode1'
CRS-2676: Start of 'ora.diskmon' on 'racnode1' succeeded
CRS-2676: Start of 'ora.cssd' on 'racnode1' succeeded
CRS-2672: Attempting to start 'ora.ctssd' on 'racnode1'
CRS-2676: Start of 'ora.ctssd' on 'racnode1' succeeded
CRS-2672: Attempting to start 'ora.asm' on 'racnode1'
CRS-2672: Attempting to start 'ora.evmd' on 'racnode1'
CRS-2676: Start of 'ora.evmd' on 'racnode1' succeeded
CRS-2676: Start of 'ora.asm' on 'racnode1' succeeded
CRS-2672: Attempting to start 'ora.crsd' on 'racnode1'
CRS-2676: Start of 'ora.crsd' on 'racnode1' succeeded

You can choose to start the Oracle Clusterware stack on all servers in the cluster by specifying -all:

[root@racnode1 ~]# /u01/app/11.2.0/grid/bin/crsctl start cluster -all


You can also start the Oracle Clusterware stack on one or more named servers in the cluster by listing the servers separated by a space:

[root@racnode1 ~]# /u01/app/11.2.0/grid/bin/crsctl start cluster -n racnode1 racnode2


Start/Stop All Instances with SRVCTL

Finally, you can start/stop all instances and associated services using the following:

[oracle@racnode1 ~]$ srvctl stop database -d racdb


[oracle@racnode1 ~]$ srvctl start database -d racdb

Troubleshooting
This section contains a short list of common errors (and solutions) that can be encountered during the Oracle RAC installation described in this article.

Configuring SCAN without DNS

Defining the SCAN in only the hosts file (/etc/hosts) and not in either Grid Naming Service (GNS) or DNS is an invalid configuration and will cause the Cluster
Verification Utility to fail during the Oracle Grid Infrastructure installation:

Figure 19: Oracle Grid Infrastructure / CVU Error (Configuring SCAN without DNS)

INFO: Checking Single Client Access Name (SCAN)...


INFO: Checking name resolution setup for "racnode-cluster-scan"...
INFO: ERROR:
INFO: PRVF-4657 : Name resolution setup check for "racnode-cluster-scan" (IP address: 216.24.138.153) failed
INFO: ERROR:
INFO: PRVF-4657 : Name resolution setup check for "racnode-cluster-scan" (IP address: 192.168.1.187) failed
INFO: ERROR:
INFO: PRVF-4664 : Found inconsistent name resolution entries for SCAN name "racnode-cluster-scan"
INFO: Verification of SCAN VIP and Listener setup failed

Provided this is the only error reported by the CVU, it is OK to ignore this check and continue by clicking the [Next] button in OUI and move forward with the Oracle
Grid Infrastructure installation. This is documented in Doc ID 887471.1 on the My Oracle Support website.

If, on the other hand, you want the CVU to complete successfully while still only defining the SCAN in the hosts file, simply modify the nslookup utility as root on
both Oracle RAC nodes as follows.

Although Oracle strongly discourages this practice and highly recommends the use of GNS
or DNS resolution, some readers may not have access to a DNS. The instructions below
include a workaround (OK, a total hack) to the nslookup binary that allows the Cluster
Verification Utility to finish successfully during the Oracle Grid Infrastructure install. Please
note that the workaround documented in this section is only for the sake of brevity and
should not be considered for a production implementation.

First, rename the original nslookup binary to nslookup.original on both Oracle RAC nodes:

[root@racnode1 ~]# mv /usr/bin/nslookup /usr/bin/nslookup.original


[root@racnode2 ~]# mv /usr/bin/nslookup /usr/bin/nslookup.original

Next, create a new shell script on both Oracle RAC nodes named /usr/bin/nslookup as shown below, replacing 24.154.1.34 with your primary DNS,
racnode-cluster-scan with your SCAN host name, and 192.168.1.187 with your SCAN IP address:

#!/bin/bash

HOSTNAME=${1}

if [[ $HOSTNAME = "racnode-cluster-scan" ]]; then


echo "Server: 24.154.1.34"
echo "Address: 24.154.1.34#53"
echo "Non-authoritative answer:"
echo "Name: racnode-cluster-scan"
echo "Address: 192.168.1.187"
else
/usr/bin/nslookup.original $HOSTNAME
fi

Finally, make the new nslookup shell script executable:

[root@racnode1 ~]# chmod 755 /usr/bin/nslookup


[root@racnode2 ~]# chmod 755 /usr/bin/nslookup
Remember to perform these actions on both Oracle RAC nodes.

The new nslookup shell script simply echoes back your SCAN IP address whenever the CVU calls nslookup with your SCAN host name; otherwise, it calls the
original nslookup binary.

The CVU will now pass during the Oracle Grid Infrastructure installation when it attempts to verify your SCAN:

[grid@racnode1 ~]$ cluvfy comp scan -verbose


Verifying scan

Checking Single Client Access Name (SCAN)...


SCAN VIP name Node Running? ListenerName Port Running?
---------------- ------------ ------------ ------------ ------------ ------------
racnode-cluster-scan racnode1 true LISTENER 1521 true

Checking name resolution setup for "racnode-cluster-scan"...


SCAN Name IP Address Status Comment
------------ ------------------------ ------------------------ ----------
racnode-cluster-scan 192.168.1.187 passed
Verification of SCAN VIP and Listener setup passed

Verification of scan was successful.

===============================================================================

[grid@racnode2 ~]$ cluvfy comp scan -verbose


Verifying scan

Checking Single Client Access Name (SCAN)...


SCAN VIP name Node Running? ListenerName Port Running?
---------------- ------------ ------------ ------------ ------------ ------------
racnode-cluster-scan racnode1 true LISTENER 1521 true

Checking name resolution setup for "racnode-cluster-scan"...


SCAN Name IP Address Status Comment
------------ ------------------------ ------------------------ ----------
racnode-cluster-scan 192.168.1.187 passed
Verification of SCAN VIP and Listener setup passed

Verification of scan was successful.

Confirm the RAC Node Name is Not Listed in the Loopback Address

Ensure that the node name (racnode1 or racnode2) is not included for the loopback address in the /etc/hosts file. If the machine name is listed in the
loopback address entry as below:

127.0.0.1 racnode1 localhost.localdomain localhost


it will need to be removed as shown below:

127.0.0.1 localhost.localdomain localhost

If the RAC node name is listed for the loopback address, you will receive the following error during the RAC installation:

ORA-00603: ORACLE server session terminated by fatal error

or

ORA-29702: error occurred in Cluster Group Service operation

Openfiler Logical Volumes Not Active on Boot

One issue that I have run into several times occurs when using a USB drive connected to the Openfiler server. When the Openfiler server is rebooted, the system is
able to recognize the USB drive; however, it is not able to load the logical volumes and writes the following message to /var/log/messages (also available through
dmesg):

iSCSI Enterprise Target Software - version 0.4.14


iotype_init(91) register fileio
iotype_init(91) register blockio
iotype_init(91) register nullio
open_path(120) Can't open /dev/rac1/crs -2
fileio_attach(268) -2
open_path(120) Can't open /dev/rac1/asm1 -2
fileio_attach(268) -2
open_path(120) Can't open /dev/rac1/asm2 -2
fileio_attach(268) -2
open_path(120) Can't open /dev/rac1/asm3 -2
fileio_attach(268) -2
open_path(120) Can't open /dev/rac1/asm4 -2
fileio_attach(268) -2
Please note that I am not suggesting that this only occurs with USB drives connected to the Openfiler server. It may occur with other types of drives; however, I
have only seen it with USB drives!

If you do receive this error, you should first check the status of all logical volumes using the lvscan command from the Openfiler server:

# lvscan
inactive '/dev/rac1/crs' [2.00 GB] inherit
inactive '/dev/rac1/asm1' [115.94 GB] inherit
inactive '/dev/rac1/asm2' [115.94 GB] inherit
inactive '/dev/rac1/asm3' [115.94 GB] inherit
inactive '/dev/rac1/asm4' [115.94 GB] inherit

Notice that the status for each of the logical volumes is set to inactive (the status for each logical volume on a working system would be set to ACTIVE).

I currently know of two methods to get Openfiler to automatically load the logical volumes on reboot, both of which are described below.

Method 1

One of the first steps is to shut down both of the Oracle RAC nodes in the cluster (racnode1 and racnode2). Then, from the Openfiler server, manually set each of
the logical volumes to ACTIVE for each consecutive reboot:

# lvchange -a y /dev/rac1/crs
# lvchange -a y /dev/rac1/asm1
# lvchange -a y /dev/rac1/asm2
# lvchange -a y /dev/rac1/asm3
# lvchange -a y /dev/rac1/asm4
Another method to set the status to active for all logical volumes is to use the Volume Group change command as follows:

# vgscan
Reading all physical volumes. This may take a while...
Found volume group "rac1" using metadata type lvm2

# vgchange -ay
5 logical volume(s) in volume group "rac1" now active

After setting each of the logical volumes to active, use the lvscan command again to verify the status:

# lvscan
ACTIVE '/dev/rac1/crs' [2.00 GB] inherit
ACTIVE '/dev/rac1/asm1' [115.94 GB] inherit
ACTIVE '/dev/rac1/asm2' [115.94 GB] inherit
ACTIVE '/dev/rac1/asm3' [115.94 GB] inherit
ACTIVE '/dev/rac1/asm4' [115.94 GB] inherit

As a final test, reboot the Openfiler server to ensure each of the logical volumes will be set to ACTIVE after the boot process. After you have verified that each of
the logical volumes will be active on boot, check that the iSCSI target service is running:

# service iscsi-target status


ietd (pid 2668) is running...

Finally, restart each of the Oracle RAC nodes in the cluster (racnode1 and racnode2).

Method 2

This method was kindly provided by Martin Jones. His workaround includes amending the /etc/rc.sysinit script to basically wait for the USB disk (/dev/sda in
my example) to be detected. After making the changes to the /etc/rc.sysinit script (described below), verify the external drives are powered on and then reboot
the Openfiler server.

The following is a small portion of the /etc/rc.sysinit script on the Openfiler server with the changes proposed by Martin (marked by the MJONES comments):

..............................................................
# LVM2 initialization, take 2
if [ -c /dev/mapper/control ]; then
if [ -x /sbin/multipath.static ]; then
modprobe dm-multipath >/dev/null 2>&1
/sbin/multipath.static -v 0
if [ -x /sbin/kpartx ]; then
/sbin/dmsetup ls --target multipath --exec "/sbin/kpartx -a"
fi
fi

if [ -x /sbin/dmraid ]; then
modprobe dm-mirror > /dev/null 2>&1
/sbin/dmraid -i -a y
fi

#-----
#----- MJONES - Customisation Start
#-----

# Check if /dev/sda is ready


while [ ! -e /dev/sda ]
do
echo "Device /dev/sda for first USB Drive is not yet ready."
echo "Waiting..."
sleep 5
done
echo "INFO - Device /dev/sda for first USB Drive is ready."

#-----
#----- MJONES - Customisation END
#-----
if [ -x /sbin/lvm.static ]; then
if /sbin/lvm.static vgscan > /dev/null 2>&1 ; then
action $"Setting up Logical Volume
Management:" /sbin/lvm.static vgscan --mknodes --ignorelockingfailure &&
/sbin/lvm.static vgchange -a y --ignorelockingfailure
fi
fi
fi

# Clean up SELinux labels


if [ -n "$SELINUX" ]; then
for file in /etc/mtab /etc/ld.so.cache ; do
[ -r $file ] && restorecon $file >/dev/null 2>&1
done
fi
..............................................................

Finally, restart each of the Oracle RAC nodes in the cluster (racnode1 and racnode2).

Conclusion
Oracle RAC 11g Release 2 allows the DBA to configure a cluster database solution with superior fault tolerance and load balancing. However, DBAs who want to
become more familiar with the features and benefits of database clustering will find that the cost of configuring even a small RAC cluster typically runs in the range of
US$15,000 to US$20,000.

This article has hopefully given you an economical solution to setting up and configuring an inexpensive Oracle 11g Release 2 RAC cluster using Red Hat
Enterprise Linux (or CentOS) and iSCSI technology. The RAC solution presented in this article can be put together for around US$2,700 and will provide the DBA
with a fully functional Oracle 11g Release 2 RAC cluster. While the hardware used for this guide is stable enough for educational purposes, it should never be
considered for a production environment.

Acknowledgements
An article of this magnitude and complexity is generally not the work of one person alone. Although I was able to author and successfully demonstrate the validity of
the components that make up this configuration, there are several other individuals that deserve credit in making this article a success.

First, I would like to thank Bane Radulovic from the Server BDE Team at Oracle. Bane not only introduced me to Openfiler, but shared with me his experience and
knowledge of the product and how to best utilize it for Oracle RAC. His research and hard work made the task of configuring Openfiler seamless. Bane was also
involved with hardware recommendations and testing.

A special thanks to K Gopalakrishnan for his assistance in delivering the Oracle RAC 11g Overview section of this article. In this section, much of the content
regarding the history of Oracle RAC can be found in his very popular book Oracle Database 10g Real Application Clusters Handbook. This book comes highly
recommended for both DBAs and Developers wanting to successfully implement Oracle RAC and fully understand how many of the advanced services like Cache
Fusion and Global Resource Directory operate.

Lastly, I would like to express my appreciation to the following vendors for generously supplying the hardware for this article: Seagate, Avocent Corporation, and
Intel.

About the Author
Jeffrey Hunter is an Oracle Certified Professional, Java Development Certified Professional, Author, and an Oracle ACE. Jeff currently works as a Senior Database
Administrator for The DBA Zone, Inc. located in Pittsburgh, Pennsylvania. His work includes advanced performance tuning, Java and PL/SQL programming,
developing high availability solutions, capacity planning, database security, and physical/logical database design in a UNIX/Linux server environment. Jeff's other
interests include mathematical encryption theory, tutoring advanced mathematics, programming language processors (compilers and interpreters) in Java and C,
LDAP, writing web-based database administration tools, and of course Linux. He has been a Sr. Database Administrator and Software Engineer for over 20 years
and maintains his own web site at: http://www.iDevelopment.info. Jeff graduated from Stanislaus State University in Turlock, California, with a Bachelor's
degree in Computer Science and Mathematics.

Copyright (c) 1998-2017 Jeffrey M. Hunter. All rights reserved.

All articles, scripts and material located at the Internet address of http://www.idevelopment.info are the copyright of Jeffrey M. Hunter and are protected under copyright laws of the United States. This
document may not be hosted on any other site without my express, prior, written permission. Application to host any of the material elsewhere can be made by contacting me at
jhunter@idevelopment.info.

I have made every effort and taken great care in making sure that the material included on my web site is technically accurate, but I disclaim any and all responsibility for any loss, damage or
destruction of data or any other property which may arise from relying on it. I will in no case be liable for any monetary damages arising from such loss, damage or destruction.

Last modified on Monday, 14 Jul 2014 18:17:39 EDT
