
WIPRO INFOTECH ENTERPRISE SERVICES

- Syed Rahman

After this training you should be able to

 Define a cluster
 Understand technical concepts in a cluster
 State the requirements for a cluster configuration
 Install, configure and administer a cluster
 Troubleshoot based on learning

What is a cluster?

A cluster is a general term describing a group of independent servers which function as a harmonious unit. A cluster is characterized by the following:
1. Autonomous server nodes, each running their own OS.
2. Dedicated hardware interconnects.
3. Multi-ported storage acting as shared storage.
4. A cluster software framework.
5. An overall target of providing HIGH AVAILABILITY.
6. Support for a variety of cluster-aware and cluster-unaware software, such as Oracle and OPS/RAC.

Sun Cluster Hardware and Software Environment

 Supports two to eight servers. Nodes can be added, removed, or replaced without interruption of data service.
 Global device implementation: provides a unique and uniform device namespace for all data storage. Shared data storage must be physically attached to the nodes in the cluster.
 Kernel integration: from Solaris 8 onwards, the Sun Cluster software is tightly coupled with the OS.
 Node monitoring and transport.
 Support for a variety of off-the-shelf components.
 Data service agents for cluster-unaware applications.

Sun Cluster Hardware Environment

 Cluster nodes running Solaris 8 OS or Solaris 9 OS
 Separate boot disks
 One or more public network interfaces per system, preferably two (for PNM or IPMP)
 A redundant cluster transport interface
 Dual-hosted disk storage
 Terminal concentrator (required only for console-level access)
 Administrative console

Sun Cluster Hardware Environment

Cluster setup with two nodes and a Multipack

Sun Cluster Software Environment

 Solaris OS versions
   Solaris 8 OS (02/02) (Update 7) or later
   Solaris 9 OS (all updates)
 Veritas Volume Manager versions
   Veritas Volume Manager 3.2
   Veritas Volume Manager 3.5
 Solaris Volume Manager versions
   SDS 4.2.1 with patch 108693-06 or higher
   SVM (part of the base OS in the Solaris 9 OS)

Sun Cluster Application Types

Failover Applications

The cluster provides automatic restart of the application on the same or a different node of the cluster. Failover services are usually paired with a logical IP address (also known as the application IP address), which shifts to the other node along with the application. Multiple failover applications in the same resource group can share the same IP address, provided they all fail over to the same node together. Examples: Oracle Server, SAP, etc.

Sun Cluster Application Types

Scalable Applications

These applications run multiple instances across the cluster while presenting the appearance of a single application. Applications that write data without any type of locking cannot work as scalable applications.

Sun Cluster Major Data Service supports

 HA for Oracle
 HA for Sun ONE
 HA for Apache
 HA for NFS
 HA for DNS
 HA for SAP
 HA for SAMBA
 HA for Veritas NetBackup
 HA for BEA WebLogic
 HA for WebSphere MQ

Understanding the cluster HA framework

 Implemented by a series of daemons and kernel modules
 Node fault monitoring and cluster membership
 Network fault monitoring: Public Network Management (PNM)/IPMP
 Cluster transport monitoring: dual transport paths are necessary to sustain interface faults
 Cluster Configuration Repository (CCR): contains the cluster name and nodes, transport configuration, Veritas disk groups or SDS disksets, master node list, timeouts, and current cluster status. Stored as ASCII files under /etc/cluster/ccr.
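For example, the cluster name recorded in the CCR can be inspected (read-only; the CCR should never be edited by hand). The file and key names below reflect a typical SC 3.x layout and are shown as an illustrative sketch:

# grep cluster.name /etc/cluster/ccr/infrastructure
cluster.name    planets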

Global naming, devices and file systems

Global naming refers to the DID (Disk ID) devices. Multi-ported disks that might have different logical access names on different nodes are assigned unique DID names (d#) which are uniform across all nodes in the cluster. Illustration: if a disk is shared and accessed from one node as c2t18d0 and from another node as c5t18d0, the cluster assigns a unique name (let's say d2), and the device is then uniformly accessed as d2 by all nodes of the cluster.
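The DID mappings can be listed with the scdidadm utility. The output below is a hypothetical sketch matching the illustration above (instance numbers and local device paths will vary):

# scdidadm -L
1    venus:/dev/rdsk/c0t0d0     /dev/did/rdsk/d1
2    venus:/dev/rdsk/c2t18d0    /dev/did/rdsk/d2
2    mars:/dev/rdsk/c5t18d0     /dev/did/rdsk/d2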

Global naming, devices and file systems

Device entries are created under the directories /dev/did/rdsk and /dev/did/dsk. The global file system feature makes file systems simultaneously available on all nodes, regardless of their physical location. Sun Cluster makes a file system global with the global mount option. The global file system is also known as the Cluster File System or the Proxy File System.
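A cluster file system is typically mounted through /etc/vfstab on every node. The entry below is a sketch; the metadevice and mount point names are illustrative, not prescribed:

#device to mount        device to fsck            mount point  FS   pass  boot  options
/dev/md/webds/dsk/d10   /dev/md/webds/rdsk/d10    /global/web  ufs  2     yes   global,logging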

Overview of Sun Cluster Topologies

Topologies describe the typical ways in which cluster nodes can be connected to data storage devices. Typical topologies include:
 Clustered pairs
 Pair + N topology
 N + 1 topology
 Multi-ported (N*N) topology
In clusters with more than two nodes, it is not required to have shared storage at all; such clusters can be used for compute-based applications without any data storage. Two-node clusters require a shared storage device because they need a quorum vote.

Partitioned cluster and Failure Fencing

 Caused by a break in the interconnect between one or more nodes
 Each node assumes the other is still functional
 Results in split brain
 Two clusters cannot be allowed to run, to avoid data corruption
 Each node tries to master the quorum vote; the node which fails to acquire it aborts the Sun Cluster software

Cluster Amnesia

One or more nodes form the cluster with a stale copy of the cluster configuration. Illustration: assume a two-node cluster. Node 2 crashes, after which configuration changes are made on Node 1. Later Node 1 is shut down, and Node 2 is booted to form a new cluster with its stale configuration. This can be avoided by the use of SCSI PGR (Persistent Group Reservations), wherein a node leaves a PGR key on the quorum device. Because of this, Node 2 is not able to use the quorum as a vote and waits.

Cluster Installation Objectives

 Prepare the cluster nodes for Sun Cluster 3.x installation
 Learn the steps required to manually install the Sun Cluster 3.x software
 Learn the steps required to use the SunPlex Manager to install the Sun Cluster 3.x software

Installation Major Tasks

1. Verify the cluster's hardware installation
2. Install and configure the administrative console
3. Install and configure the Solaris operating environment
4. Set up the proper environment
5. Install Sun Cluster 3.x
6. Complete the cluster initialization

Installation Major Tasks

7. Install the volume management software
8. Install Sun Cluster 3.x data service (agent) software
9. Configure the volume manager
10. Create and mount global file systems
11. Configure PNM/IPMP
12. Update ntp.conf files
13. Verify the installation

Verify the cluster's hardware

 Check that the storage arrays are cabled properly and conform to one of the supported topologies
 Verify the cabling of the cluster interconnects
 Storage array, cluster interconnect and redundant public network connections should be spread among different I/O boards
 Power to cluster components should be distributed across separate power sources

Install and configure the administrative console

 Install Solaris 8, End User System Support or above
 Install the SUNWccon package from the Sun Cluster 3.x CD
# cd <SC 3.x CD path>/Packages
# pkgadd -d . SUNWccon

Processing package instance <SUNWccon> from <...>
Sun Cluster console (sparc) 3.0,REV=...
Copyright 1999 Sun Microsystems, Inc. All rights reserved.
## Verifying disk space requirements.
## Checking for conflicts with packages already installed.

Installation of the Admin workstation


Installing Sun Cluster console software as <SUNWccon>
/opt/SUNWcluster/bin/...
...
Installation of <SUNWccon> was successful.

Setting up the configuration on the admin console

1. Add an entry for each of the cluster nodes and the terminal concentrator into /etc/hosts and/or the appropriate name service (for example, NIS or DNS)
2. Configure the /etc/clusters and /etc/serialports files
3. Add /opt/SUNWcluster/bin to your PATH
4. Add /opt/SUNWcluster/man to your MANPATH

Format of the /etc/clusters file:

ClusterName NameofNode1 NameofNode2 ... NameofNode8

Setting up the configuration on the admin console

Configure the /etc/serialports file. An example configuration:

# cat /etc/serialports
venus  tc-planets  5002
mars   tc-planets  5003
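The PATH and MANPATH additions from steps 3 and 4 can be placed in root's .profile on the admin console; a minimal sketch in Bourne shell syntax:

# Append to /.profile on the administrative console
PATH=$PATH:/opt/SUNWcluster/bin
MANPATH=$MANPATH:/opt/SUNWcluster/man
export PATH MANPATH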

Using the Cluster Console Panel (CCP)

# ccp <clustername>     (for example: planets)
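The CCP is a launcher for the individual console tools from the SUNWccon package, which can also be started directly, for example:

# cconsole planets &    (console access via the terminal concentrator)
# ctelnet planets &     (telnet window to each node)
# crlogin planets &     (rlogin window to each node)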


Solaris Installation considerations

When installing Solaris 8, make sure to set your file system allocations to support Sun Cluster 3.x:

File System                           Size
root (/)                              Allocate all remaining free space
swap                                  At least double the amount of system RAM
/globaldevices                        100 MB
Volume manager slice (s7 for SDS)     10 MB

Solaris Installation considerations

The /globaldevices file system is simply a placeholder used by Sun Cluster to hold the global device namespace. It must be large enough to hold a copy of both the /dev and /devices directories from the local node; typically 500 MB. The volume manager slice can be used by Solstice DiskSuite to hold local metadevice state database replicas, or left as an unused slice with free cylinders to support Veritas boot disk encapsulation.

Solaris Installation considerations

 Install any required OS patches. All Sun Cluster patches are available under the appropriate directories of the EIS CD.
 Recommended: always use the latest EIS CD and installation checklists, as this helps ensure up-to-date information.

Configurations on Cluster Nodes

Set the PATH variable to include:
 /usr/bin and /usr/sbin: standard paths to Solaris executables
 /usr/cluster/bin: Sun Cluster 3.x command line utilities
 /opt/VRTSvmsa/bin: Veritas Volume Manager command line and GUI utilities (if using Veritas Volume Manager)

Also, set the MANPATH variable to include /usr/cluster/man and /opt/VRTSvxvm/man. Export the PATH and MANPATH variables.

Installing Sun Cluster using scinstall

scinstall can either be run as an interactive application, where it presents you with menus and prompts, or as a command line utility.

1. Change to the SunCluster_3.0/Tools subdirectory of the Sun Cluster 3.0 CD-ROM image:

2. To run scinstall interactively, on the first node of the cluster, invoke scinstall with no arguments.
3. Choose option 1 at the Main Menu.
4. Provide appropriate answers to the installation script's prompts.

Installing Sun Cluster using scinstall

5. Upon completion, scinstall will install the appropriate packages and prepare the node to become a member of the cluster.
6. Repeat steps 1-4 on the remaining nodes of the cluster (using option 2 on the Main Menu).

Note: The first node, or sponsoring node, must be a member of the cluster and MUST be rebooted before other nodes can join the cluster.

Tip: Verify cluster status using the scstat command on node 1.
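For example (an abbreviated sketch; the exact layout varies by release):

# scstat -n

-- Cluster Nodes --
                    Node name      Status
                    ---------      ------
  Cluster node:     venus          Online
  Cluster node:     mars           Online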

Installing Sun Cluster using scinstall

# cd <Location of SC 3.0 CD Image>/SunCluster_3.0/Tools
# ./scinstall


*** Main Menu ***

Please select from one of the following (*) options:

    * 1) Establish a new cluster using this machine as the first node
    * 2) Add this machine as a node in an established cluster
      3) Configure a cluster to be JumpStarted from this install server
      4) Add support for new data services to this cluster node
      5) Print release information for this cluster node

    * ?) Help with menu options
    * e) Exit

    Option:  1

Installing Sun Cluster using scinstall


*** Establishing a New Cluster ***

This option is used to establish a new cluster. The other nodes will be
able to join once the first node is established.

Do you want to continue (yes/no) [yes]?  yes

Installing Sun Cluster using scinstall


>>> Cluster Name <<<

Each cluster has a name assigned to it.

What is the name of the cluster you want to establish?  planets

Installing Sun Cluster using scinstall


>>> Cluster Nodes <<<

This machine (venus) is the first node of the new cluster. List the names
of the other nodes planned for the initial cluster configuration.

    Node name:  mars
    Node name (Ctrl-D to finish):  ^D

This is the complete list of nodes:
    venus
    mars

Is it correct (yes/no) [yes]?  yes

Installing Sun Cluster using scinstall


Select DES authentication option

The DES authentication option determines whether the additional cluster nodes will be required to present proper DES authentication keys when they try to install themselves as part of the cluster as scinstall is run on each of the nodes. Proper keys will need to be set up outside of scinstall if this option is enabled.

Installing Sun Cluster using scinstall


>>> Network Address for the Cluster Transport <<<

The cluster transport uses a default network address of 172.16.0.0 and a
default netmask of 255.255.0.0. If this address is already in use
elsewhere within your enterprise, another address may be specified.

Is it okay to accept the default network address (yes/no) [yes]?  yes

Is it okay to accept the default netmask (yes/no) [yes]?  yes

Installing Sun Cluster using scinstall


>>> Point-to-Point Cables <<<

The two nodes of a two-node cluster may use a directly-connected
interconnect; that is, no cluster transport junctions are configured.

Does this two-node cluster use transport junctions (yes/no) [yes]?  yes

>>> Cluster Transport Junctions <<<

All cluster transport adapters in this cluster must be cabled to a
transport "junction", usually a switch. By default, two junctions,
named switch1 and switch2, are configured.

    What is the name of the first junction in the cluster [switch1]?
    What is the name of the second junction in the cluster [switch2]?

Installing Sun Cluster using scinstall


>>> Global Devices File System <<<

Each node in the cluster must have a local file system mounted on
/global/.devices/node@<nodeID> before it can participate as a cluster
member. However, you may supply the name of an already-mounted file
system or a raw disk partition, and scinstall will set this up for you.
The default is to use /globaldevices.

Is it okay to use this default (yes/no) [yes]?  yes

Installing Sun Cluster using scinstall


>>> Automatic Reboot <<<

Once scinstall has successfully installed and initialized the Sun Cluster
software for this machine, the node must be rebooted.

Do you want scinstall to reboot for you (yes/no) [yes]?  yes

Installing Sun Cluster using scinstall


>>> Confirmation <<<

Your responses indicate the following options to scinstall:

scinstall -i \
     -C planets \
     -N venus \
     -T node=venus,node=mars,authtype=sys \
     -m endpoint=:hme1,endpoint=switch1 \
     -m endpoint=:hme2,endpoint=switch2

Are these the options you want to use (yes/no) [yes]?  yes
Do you want to continue with the install (yes/no) [yes]?  yes

The system needs to be rebooted following package installation.
Installation of Sun Cluster 3.1 also gives an option to install cluster
patches.

Setting up the quorum Configure the Quorum using scsetup. Reset installmode.
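The same can be done non-interactively with scconf; a minimal sketch, assuming the shared disk seen by both nodes as DID device d2 is to become the quorum device:

# scconf -a -q globaldev=d2    (add d2 as a quorum device)
# scconf -c -q reset           (reset installmode)
# scstat -q                    (verify quorum configuration and votes)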

Configuring the data storage

Metadevices

Metadevices are logical devices that are made up of one or more physical disk partitions. After they are created, metadevices are used like disk partitions.

Configuring the data storage

Metadevice Types

Metadevices are the usable constructs in SDS. There are several metadevice types that can be created:
 Simple metadevices: concatenations or stripes created from actual disk partitions (DID or physical disk slices)
 Metamirrors: mirrors of simple metadevices
 RAID 5 metadevices
 Metatrans devices: logged UFS devices, usually made up of metamirrors

Configuring the data storage

Configuring the data storage

Metadevice State Database

 Contains information about the configuration and status of the metadevices
 Copies of the metadevice state database, called replicas, are maintained by SDS
 There should be a minimum of 3 replicas
 SDS can survive the loss of up to half of the total number of replicas
 Disksets maintain their own set of replicas, automatically allocating space for 2 replicas on each disk in the diskset

Dual String Mediators

Dual string mediators are required on DiskSuite-based clusters where disksets are configured across exactly two disk arrays (two "strings" of disks, hence the name dual string mediators) which are shared by two cluster nodes. Mediators act as voting parties to determine the majority and ownership of disksets. Specifically, with two hosts and two arrays (strings) of disks, there is the possibility of an exact half split, so the mediator vote becomes a MUST.

Dual String Mediators

Outline of SDS configuration

1. Install the appropriate packages for Solstice DiskSuite
2. Install the SDS T-patch 108693-02
3. Modify the md.conf file appropriately
4. Reboot all nodes in the cluster
5. Create /.rhosts files or add root to group 14

Outline of SDS configuration

6. Initialize the local state databases
7. Create disksets to house data service data
8. Add drives to each diskset
9. Partition disks in the disksets
10. Create the metadevices for each diskset
11. Configure dual string mediators

Outline of SDS configuration

 Use pkgadd to install the SDS packages from the Solaris 8 CD. The required packages are SUNWmdr and SUNWmdu; if you are booting a 64-bit kernel, SUNWmdx is also required.
 Use patchadd to install any DiskSuite patches recommended by Sun (T-patch 108693-n is required).

Outline of SDS configuration

 The location of the kernel configuration file is /kernel/drv/md.conf

 If your DiskSuite namespace will have metadevice numbers greater than 128, you will need to increase the nmd parameter
 If you are going to create more than 4 disksets in the cluster, you will need to increase the md_nsets parameter
 Keep this file identical on all nodes of the cluster
 Changes are put into effect after a reconfiguration reboot
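The parameters live on a single properties line in /kernel/drv/md.conf; the values below are illustrative (the defaults are nmd=128 and md_nsets=4):

# Excerpt from /kernel/drv/md.conf
name="md" parent="pseudo" nmd=256 md_nsets=8;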

Outline of SDS configuration

1. Ensure that you have a free slice of at least 2MB on a local disk on each node.
2. Create the local metadevice state database replicas using the metadb command:

# metadb -a -f -c 3 cWtXdYsZ

-a      add replicas
-f      force creation of the initial replicas
-c 3    create three copies

Outline of SDS configuration

1. On one of the cluster nodes, use the metaset command to create the disksets.
2. Check the status of the newly created disksets by running the metaset command with no arguments.

# metaset -s <DiskSet Name> -a -h <Host1> <Host2>
# metaset

Outline of SDS configuration

Use the metaset command to add drives to each diskset:

# metaset -s <DiskSet Name> -a /dev/did/rdsk/d<N>

Outline of SDS configuration

Outline of SDS configuration

Create an /etc/lvm/md.tab file to define the metadevices required for each diskset:

# Sample md.tab file
web_data_1/d10 -t web_data_1/d11
web_data_1/d11 -m web_data_1/d12

# metainit web_data_1/d10 # metainit web_data_1/d11

Outline of SDS configuration

You may add new mediators using the metaset command:

# metaset -s <DiskSet Name> -a -m <Mediator Host List>

-s <DiskSet Name>          the diskset to which mediator hosts are added
-a                         indicates that mediator hosts are being added
-m <Mediator Host List>    mediator host name(s), comma-separated

Mediator information can be deleted by using the metaset command:

# metaset -s <DiskSet Name> -d -m <Mediator Host List>

Outline of VxVM configuration

 Describe the features and basic concepts of Veritas Volume Manager (VxVM)
 Describe how to install and configure Veritas Volume Manager for use in Sun Cluster 3.0

Outline of VxVM configuration

Outline of VxVM configuration

VxVM configuration objects

Outline of VxVM configuration

Installing and configuring VxVM in Sun Cluster 3.0 consists of the following steps:
1. Disable dynamic multipathing support (except for EMC PowerPath software)
2. Configure the disks which are going to be used by VxVM
3. Install the appropriate packages for VxVM
4. Check the major number for the vxio device on all nodes

Outline of VxVM configuration 5. License VxVM 6. Initialize the rot group disk odg 7. Create and populate VxVM disk groups to house data service data 8. Create VxVM volumes in each disk group 9. Register the disk group with the cluster framework

Using scvxinstall

scvxinstall performs the following tasks on a node:
1. Verifies that the node on which you are running is booted into the cluster
2. Disables DMP
3. Adds the VRTSvxvm, VRTSvmdev and VRTSvmman packages to the node
4. Changes the vxio entry in the /etc/name_to_major file to 210

Using scvxinstall

 When run in install-and-encapsulate mode, the install-only steps will be executed plus the following steps:
5. License VxVM
6. Prepare the boot drive for encapsulation
7. Modify the /global/.devices/node@X vfstab entry prior to encapsulation to ensure that it gets encapsulated properly
8. Reboot the node (twice) to complete the encapsulation process
9. Reminor the rootdg volumes to ensure uniqueness in the cluster

Using scvxinstall

All disk groups (except rootdg) must be registered with the cluster framework. Any VxVM administrative commands (such as vxassist, vxedit and vxdg) or GUI operations must be performed from the node which currently owns the disk group. Use the scstat -D command to determine ownership of the disk group. Whenever making any configuration changes to a disk group (especially changes which affect the volumes within a disk group), the disk group should be re-registered with the cluster. This can be done with scconf or scsetup.

Registering Diskgroups with Sun Cluster

Use scsetup, or:

# scconf -a -D type=vxvm,name=testdg,nodelist=host1:host2

-a                        indicates the add form of the scconf command
type=vxvm                 specifies a VxVM disk group
name=<NameofDiskGroup>    specifies the VxVM disk group to be registered
nodelist=<NodeList>       specifies a colon-separated list of nodes that can import this disk group (based on the topology of the cluster)

Registering Diskgroups with Sun Cluster

# scsetup


*** Main Menu ***

    Please select from one of the following options:
    ...
      4) Device groups and volumes
    ...
    Option:  4

*** Device Groups Menu ***

    Please select from one of the following options:

      1) Register a VxVM disk group as a device group
    ...
    Option:  1

Registering Disk groups with Sun Cluster


>>> Register a VxVM Disk Group as a Device Group <<<

Is it okay to continue (yes/no) [yes]?  yes

Name of the VxVM disk group you want to register?  cdg1

Do you want to configure a preferred ordering (yes/no) [yes]?  no

Are both nodes attached to all disks in this group (yes/no) [yes]?  yes

scconf -a -D type=vxvm,name=cdg1,nodelist=venus:mars

Command completed successfully.

Hit ENTER to continue:

NOTE : VERIFY MAJOR AND MINOR NUMBERS ON ALL CLUSTER NODES ARE UNIFORM

Configuring Resource Groups

The steps required to configure resource groups are:
1. Install the data service applications
2. Register the appropriate resource types
3. Create and configure the resource groups
4. Enable the resources
5. Bring the resource group online

Configuring Resource Groups

 Location for the application binaries:
   Cluster file system, or
   Local file system (file system local to each node)
 Location for the application data:
   Should always be a global device or cluster file system
 Scalable or failover hostname and address that will be used to host the application:
   Make sure to enter the scalable or failover hostname and address in the appropriate name services and/or hosts files

Configuring Resource Groups

Register the resource types:

# scrgadm -a -t <Resource Type Name>

-a                         indicates the add form of the scrgadm command
-t <Resource Type Name>    specifies the resource type to register

Configuring Resource Groups

The resource types are:

Resource Type Name       Description
SUNW.iws                 iPlanet Web Server
SUNW.oracle_listener     Oracle8 Listener
SUNW.oracle_server       Oracle8 Server
SUNW.apache              Apache Web Server
SUNW.nfs                 NFS Server
SUNW.dns                 DNS Server

Configuring Resource Groups SUNW.nsldap Netscape Directory Server SUNW.LogicalHostname Logical Host address resource type for failover applications (Automatically registered during SC installation) SUNW.SharedAddress Shared Address resource type for scalable applications (Automatically registered during SC installation)

Configuring Data Service Binaries

The location for the application's binaries:
 On a cluster file system:
   The installation procedure only needs to be performed once
   Only a single copy of the application needs to be maintained
 On a local file system:
   Installation must be performed identically on multiple nodes (identical paths, parameters, etc.)

Configuring Resource Groups # c S t m d a s w i r W . g # t N U e l n s i m . r # t N U e l v r s m . # a m d U S c p . W N s e # c S t m d a s f n r W . g # c S t m d a s n r W . g # a m d U S l s n . W N p t t N a S c l a S c s r a N N r a

Configuring Resource Groups

Create the group:

# scrgadm -a -g <Resource Group Name> [-h <Node List>]

-g <Resource Group Name>    specifies the name of the resource group to be created
-h <Node List>              specifies the list of nodes that can master the group (defaults to all nodes)

Configuring Resource Groups

Add logical host addresses and adapters to the group:

# scrgadm -a -L -g <Resource Group Name> -l <Logical Hostname> [-n <NAFO group list>]

-L                          indicates a logical host address resource
-g <Resource Group Name>    specifies the name of the resource group the logical host is added to
-l <Logical Hostname>       hostname of the logical host address
-n <NAFO group list>        specifies the NAFO groups/adapters to use on each node

Configuring Resource Groups

Add the data service resources to the group:

# scrgadm -a -j <Resource Name> -g <Resource Group Name> -t <Resource Type Name>

-j <Resource Name>          name of the resource to add
-g <Resource Group Name>    specifies the resource group to add the resource to
-t <Resource Type Name>     specifies the type of the resource being added

Configuring Resource Groups

# scrgadm -a -g <rg-name> -h venus,mars
# scrgadm -a -L -g <rg-name> -l <logical-hostname>
# scrgadm -a -j <resource-name> -g <rg-name> -t SUNW.<type>

Configuring Resource Groups

Enable the resources:

# scswitch -e -j <Resource Name>

-e                   specifies that the resource be enabled
-j <Resource Name>   specifies the resource

Enable the resource monitors:

# scswitch -e -M -j <Resource Name>

-M                   specifies that the monitor for the resource be enabled

Configuring Resource Groups

Make the resource group managed:

# scswitch -o -g <Resource Group Name>

-o    specifies that the resource group be placed in the managed state

Bring the resource group online on a node:

# scswitch -z -g <Resource Group Name> -h <Node Name>

-z    specifies that a mastery change operation be performed on the resource group

Administrative commands and utilities

The SC 3.0 administrative commands and utilities are:

 scinstall - Installs cluster software and initializes cluster nodes
 scconf - Updates the Sun Cluster software configuration
 scsetup - Interactive Sun Cluster configuration tool
 sccheck - Checks and validates the Sun Cluster configuration
 scstat - Displays the current status of the cluster
 scgdevs - Administers the global device namespace
 scdidadm - Disk ID configuration and administration utility

Administrative commands and utilities

 scrgadm - Manages registration and configuration of resource types, resources and resource groups
 scswitch - Performs ownership or state changes of Sun Cluster resource groups and disk device groups
 pnmset - Sets up and updates the configuration for Public Network Management (PNM)
 pnmstat - Reports status for Network Adapter Failover (NAFO) groups managed by PNM
 pnmptor, pnmrtop - Map a pseudo adapter name to a real adapter name (pnmptor) or a real adapter name to a pseudo adapter name (pnmrtop) in NAFO groups
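A few everyday status checks using these utilities (standard SC 3.x flags):

# scstat         (full cluster status)
# scstat -q      (quorum status)
# scstat -D      (device group status)
# scstat -g      (resource group status)
# pnmstat -l     (status of all NAFO groups on the local node)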
