Acknowledgements
I can no other answer make but thanks, and thanks, to my well-wisher, the evergreen,
admirable Mr. T. Gurubalan, Sun Microsystems Inc., who influenced, crafted,
guided, and groomed me to taste Sun.
Words cannot convey my gratitude; you can have no idea how much it means to me. It's
stunning. Special thanks to my IMS Batch-2, who fueled me to explore greater heights
technically.
Raja, Aravindh, Sathish, Senthil, Hari Krishnan, Murali, Raman, Rakesh, Prabakar,
Md.Mukram, Manikandan, Ibrahim.
Raja, especially, always inspires me to go a little farther and the extra mile in all aspects.
Resources are always precious and hard to come by; additional thanks to Hari Krishnan for his
consistent work in collecting them, with great fuss.
Last but not least, I would like to thank all the persons behind the lights from the bottom of my
heart; but for you all, my heart has no bottom. Thanks! Thanks! Thanks!
Fingered by: Manickam Kamalakkannan
# 103, Housing Unit
Rajagopalapuram
Periyar Nagar
Pudukkottai 62203
Tamil Nadu
Mail: kamalmanickam@yahoo.co.in
kamalmanickam@gamil.com
Mobile: + 91-99946 11237
Topics Include
1. Interface configuration
2. Client-Server model
3. SMC - Solaris Management Console
4. Swap configuration
5. NFS - Network File System
6. AutoFS
7. RAID - SDS/SVM
8. Naming Service
9. NIS - Network Information Service
10. ACL - Access Control List
11. Crash and Core Dumps
12. Jumpstart Installation
13. System Messaging
14. SVM
15. VxVM [Partially]
16. RSC - Remote System Console
17. Sun Cluster 3.1
BASIC COMMANDS
syn: # cd <path>
eg: # cd /tenth/eng_medium/half/a_section
This command moves to the location
/tenth/eng_medium/half/a_section.
Hence, if the command # pwd is executed,
the output will be
/tenth/eng_medium/half/a_section
syn: # ls -lh
provides the sizes of files and directories in human-readable format
syn: # ls -p
provides information about whether an entry is a directory or a file;
a directory is marked with a "/" symbol
syn: # ls -R
displays all the sub-directories and files inside the specified directory
# ls -R | more
displays all the sub-directories and files page by page
eg: # ls -R /tenth | more
will display the sub-directories and files inside the folder
/tenth, one page at a time
syn: # ls -a
will display all the files and directories including the hidden files
and directories.
syn: # ls -i
will display the inode numbers
syn: # ls -t
will display files/directories sorted by time stamp, most recently
modified first
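The options above can be tried anywhere; here is a small portable sketch of # ls -a (the directory and file names are arbitrary):

```shell
cd /tmp && mkdir lsdemo && cd lsdemo
touch visible .hidden
ls        # shows only: visible
ls -a     # also shows .hidden (plus "." and "..")
cd /tmp && rm -rf lsdemo
```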
# date
will display the date and time
# cp new /tenth/eng-medium/old
will copy the file `new` to the specified location.
NOTE: The destination file need not exist beforehand.
head => to view the top `n` number of lines from a file
syn: # head -n <file_name>
where `n` can be any number
eg: # head -5 newfile
will display the first 5 lines from the file `newfile`.
tail => to view the last few number of lines from a file
syn: # tail -n <file_name>
eg: # tail -5 newfile
will display the last 5 lines from the file specified
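A small portable sketch of head and tail together (the file name and contents are arbitrary):

```shell
seq 1 10 > /tmp/newfile    # a 10-line file: 1 through 10
head -3 /tmp/newfile       # prints lines 1, 2, 3
tail -3 /tmp/newfile       # prints lines 8, 9, 10
rm /tmp/newfile
```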
syn: # rm -r <dir_name>
eg: # rm -r /newfolder
will remove the dir named `newfolder` along with its contents
syn: # rm -i <file_name>
eg: # rm -i test
will delete the file interactively: it prompts for confirmation first.
which => provides the information about the location of the command
syn: # which <command>
eg: # which ls
listusers => will display the users existing in the system
syn: # listusers
# listusers -g <group_name>
eg: # listusers -g others
will display the users who belong to the group named others
wall => to broadcast a message to all the users who are currently logged in
syn: # wall
<message>
ctrl+d -> to send the message
eg: # wall
hai! good morning
ctrl+d
write => to send a message to a particular user who is currently logged in
syn: # write <user_login_name>
<message>
ctrl+d
eg: # write shiva
welcome to our org!
ctrl+d
will display the message only to the specified user.
TOD
/etc/motd => message of the day
This file's contents will be broadcast to all users whenever they log in to the system.
/etc/issue => by default this file does not exist. Root is permitted to create a file with this
name and may put anything in it. When the user is prompted for login, the message
typed in the above file [/etc/issue] will be displayed.
The difference between /etc/motd and /etc/issue is that the contents of the /etc/motd file
are displayed after the user has logged in, whereas the contents of the /etc/issue
file are displayed before the user logs in.
# compress => to compress the file
syn: # compress <file_name>
eg: # compress test
will compress the specified file 'test'
Note:
It is not possible to read the contents of the compressed file
using the command "cat"
# gzcat => to view the contents of the compressed file with extension .gz
syn: # gzcat <file_name>
eg: # gzcat test.gz
# uname -X
will display the following information.
eg output is displayed.
System = SunOS
Node = sys1
Release = 5.10
KernelID = Generic_118855-33
Machine = i86pc
BusType = <unknown>
Serial = <unknown>
Users = <unknown>
OEM# = 0
Origin# = 1
NumCPU = 2
# find
this command is used to search for files
# whereis passwd
will display the location of the file/command
# psrinfo
provides information about the status and the number of processors attached to the
system
eg output:
0 on-line since 11/28/2008 08:21:25
1 on-line since 11/28/2008 08:21:35
1. vi
2. pico
3. vim
4. emacs
VI EDITOR COMMAND
# vi <new_file_name>
to create a new file
# vi <file_name>
to open a file
esc:w => to write/save the content to the file and keep the cursor at the
original position
esc:q => to quit the file (fails if there are unsaved changes; use esc:q! to
quit without saving)
esc:wq => to write/save and quit from the file
esc:wq! => to write/save & quit from the file forcefully
# vi -x <file_name>
to assign a password to (encrypt) the file.
DISK ADMINISTRATION
Re-labeling a disk:
NOTE:
1. The # fmthard command cannot write a disk label on an unlabeled disk. Use the
# format utility for this purpose.
2. When the # format utility is used to change the size of disk slices, a temporary
slice is automatically designated that expands & shrinks to accommodate the slice
resizing operations.
FREE HOG:
When using the format utility to change the size of the disk slices, a temporary slice is
automatically designated that expands & shrinks to accommodate the slice resizing
operations. This temporary slice is referred to as the free hog & it represents the unused
disk space on a disk drive.
NOTE:
1. To the operating system, a file system appears as a collection of files & directories used to
store & organise data for access by the system and its users.
2. To the operating system, a file is a collection of control structures and data blocks that
occupy the space defined by a partition, which allow for data storage & management.
-> provides information on whether the driver files are installed.
-> provides information to verify that the device driver is available as a kernel module
NOTE:
1. The ufs file system does not allow fragments of the same file to be stored in two
different data blocks.
2. When the state flag is "clean", "stable", or "logging", file system scans are not run
# newfs -N
# newfs -m 2 /dev/rdsk/c0t3d0s5
will create a file system with a minfree value of 2% (the default is 10%)
1. The first line printed by the newfs command describes the basic disk geometry.
2. The 2nd line describes the 'ufs' file system created in this slice.
3. The 3rd & remaining lines list the beginning sector locations of the backup superblocks.
FSCK:
INODE CONSISTENCY:
Checks
1. for the allocation state of inodes
2. Type
3. Link count
4. Duplicate blocks
5. Bad blocks
6. inode size
7. Block count for each inode
8. Any unreferenced inode with a non-zero link count is linked into the file system's
lost+found directory
Directory Structure
/devices - provides information about the locations to which the hardware devices
are connected.
/etc - holds all the configuration files related to the system and its
services
Inodes:
1. Every file and directory is assigned a unique number by the operating
system.
2. The inode stores the information about the file or directory.
3. It holds information such as the owner permissions of the file, the
group permissions, the other permissions, and when the file was modified.
4. It also has pointers which point to the data.
Link file:
2 types of links
a. hard link
b. soft link or symbolic link
Hard link:
1. When a hard link is created, the link count is increased.
2. The inode number remains the same for the source file and the destination file.
3. If the source file is deleted, the data can still be accessed through the destination file.
4. The source file's permissions are inherited by the destination files.
5. Both the source and destination files show the same size, but the destination files
do NOT occupy additional disk space, since both names refer to the same data blocks.
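Points 1-5 can be verified with a short portable sketch (the file names are arbitrary):

```shell
cd /tmp
echo "some data" > source_file
ln source_file dest_file        # hard link: the link count rises to 2
ls -li source_file dest_file    # both lines show the same inode number
rm source_file
cat dest_file                   # prints "some data": still reachable via the link
rm dest_file
```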
Naming conventions:
3 Naming conventions
1. Logical name
2. Physical name
3. Instance name
Logical Name:
c# t# d# s#
c = specifies the controller
t = specifies the target
d = specifies the disk
s = specifies the slice
Instance Name:
1. Generated by the kernel for each device connected to the system
# prtconf
will display the devices attached to the system and all the possible device
connection points
# cat /etc/path_to_inst
1. not recommended to edit
2. will display information about the physical locations where the devices are connected
and their corresponding instance
names
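For illustration, entries in /etc/path_to_inst follow the pattern "physical device path" instance-number "driver name"; the values below are hypothetical, not taken from any real system:

```
"/pci@0,0/pci-ide@1f,1/ide@0/cmdk@0,0" 0 "cmdk"
"/pci@0,0/pci-ide@1f,1/ide@1/sd@0,0" 0 "sd"
```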
# format
this command displays the hard disks attached to the system along with their
physical and instance names
1. format is a utility
2. It has two tiers
a. format>
b. partition>
3. ctrl+c - to exit from the format utility
4. can be executed only by the root user
format> help
will provide information about the commands that can be used in the format>
tier.
format> fdisk
is used to view and delete the Windows partition information from within the Solaris OS.
format> partition
will move to the next tier, partition>
partition> help
will provide information about the commands that can be used in the
partition> tier
partition> print
will print the disk layout, which provides a good deal of information about the slices
NOTE:
x86 arch - will have 10 slices (0-9)
sparc arch - will have 8 slices (0-7)
Tag name
Is the name given to the slice
Only selected names can be assigned to the slice
Permitted tag names: root, backup, alternates, reserved, usr,
stand, boot, home, swap, var, unassigned
Flag:
States the status of the slices
Cylinders:
Specifies the starting and ending cylinders of a particular slice
partition> label
to label, i.e., to make the OS recognize the changes made to the partition
NOTE:
partition> label
will only make the OS recognize the changes made; the changes are not stored in any file.
format> save
will save the changes made to a file
by default: format.dat
It can also be stored in any location under any file name
partition> name
is used to name the partition table
a maximum of 8 characters is supported
NOTE:
# newfs /dev/rdsk/c0d1s4
to create a file system on the slice c0d1s4
# mkdir /slice4
# mount /dev/dsk/c0d1s4 /slice4
used to mount the slice c0d1s4 under the location /slice4
# cd /slice4
# touch one two three
to access the slice and write some data to it
NOTE:
Mounted slices cannot be deleted.
Only after unmounting can a slice be deleted.
# cd
# umount /dev/dsk/c0d1s4
to unmount the mounted slice
format> verify
to view the partition table and slice information
# newfs
will create a new file system
# newfs /dev/rdsk/c0d1s6
will prompt for confirmation,
then create a ufs file system.
It also displays the locations of the backup superblocks,
which can be used by the # fsck command
# newfs -N /dev/rdsk/c0d1s6
will NOT create the file system, but
will only display the locations of the backup superblocks (only if the particular slice
already has a file system)
# newfs -T /dev/rdsk/c0d1s6
will create a file system that can hold files with sizes in terabytes.
NOTE:
1. When the # newfs command is executed, it reads the entry in the file
/etc/default/fs and the file system is created accordingly;
hence the # newfs command will create only ufs file systems.
# fstyp
will display a good deal of information about the particular slice
Note:
1. The default minfree value for every slice is 1% of its own size.
# tunefs -m 10 /dev/dsk/c0d1s6
this command will increase the minfree value to 10% for slice 6
# tunefs -m 1 /dev/dsk/c0d1s6
will decrease the minfree value to 1% for slice 6
# prtvtoc
will print the volume table of contents
# prtvtoc /dev/dsk/c0d1s0
For a given hard disk drive, the output remains the same regardless of which slice is specified.
for eg:
The above command will display information about the slice layout, where each slice is
mounted, the flag states, and the hard disk drive geometry.
Note:
If we have more than one hard disk drive with the SAME geometry and we require the
same layout on all of them, the VTOC of one disk can be copied to another
using the command # fmthard
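A sketch of that copy (the device names c0t0d0/c0t1d0 are hypothetical; s2 is assumed to be the conventional whole-disk slice):

```shell
# Read the VTOC of the source disk and write it to a target disk
# of identical geometry. Destructive: double-check the target device.
prtvtoc /dev/rdsk/c0t0d0s2 | fmthard -s - /dev/rdsk/c0t1d0s2
```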
# df
# df -h
will provide information about
1. which slices are mounted
2. the total size of each slice
3. how much space is used in each slice
4. how much free space is available in each slice
5. where each slice is mounted
6. what % of space is used in each slice
Note:
Solaris 8 supports
# df -bk
# du
will display how much disk space is used
# du -h
will display how much disk space is used, in human-readable format
NOTE:
# man ls > list
# du -h > disk_use
the above commands will not display any output; instead,
the output is redirected to files named list and disk_use respectively.
Hence we can open the files list and disk_use to see the output of the commands.
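A portable sketch of the same redirection idea (the file name is arbitrary):

```shell
date > /tmp/date_out    # nothing appears on screen; output lands in the file
cat /tmp/date_out       # reading the file shows the command's output
rm /tmp/date_out
```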
VTOC:
Resides in Track0, Sector0
It occupies 512 bytes of space
holds the information about the hard disk layout, geometry
Bootblock:
Resides in the 15 sectors following the VTOC
track 0
sector 1 to sector 15
Will be active on the root hard disk drive
Primary super block:
Will reside next to boot block
track0
Sector 16 to sector 31
Data block:
1. The size of each data block is 8 KB
2. A data block is further divided into 8 fragments
3. The size of each fragment is 1 KB
4. A data block can be used by only a single file;
data blocks cannot be shared between files
5. The data block area is where users (both root and non-root) are given the right to store
their data.
Inode:
Will provide the following information
1. Ownership permissions
2. Group permissions
3. Other permissions
4. When the file/dir was modified
5. Pointers
2 types of pointers
a. Direct pointer
b. Indirect pointer
3 types of indirect pointer
i. single indirect pointer
ii. double indirect pointer
iii. triple indirect pointer
9. If the file size is larger still, the triple indirect pointer comes into play; it can refer to 2048
additional double indirect pointers
2048 * 32 GB = 64 TB
In short
File system
1. Disk Based File system
2. Distributed file system
3. Pseudo file system
These directories will be unmounted automatically when the system goes down.
/system/object
1. uses objfs
2. object file system
3. This file system is used by the kernel to store details relating to the modules
currently loaded by the kernel.
devfs
1. used by /devices directory
2. Used to manage name space of all devices on the system.
ctfs
1. used by /system/contract directory
2. used by SMF to track the processes which compose a service, so that a failure in
a part of a multi-process service can be identified as a failure of that service.
PERFORMING MOUNTS AND UNMOUNTS
# mount =>
1. will display information about the permanently mounted and temporarily
mounted slices and other removable media.
2. can be used only by the root user
/etc/vold.conf => the volume management configuration file, which holds the actions to be
performed.
vold => is the daemon which runs in the background while the
volume management process is started.
/etc/rmmount.conf => is the configuration file for the removable media
# iostat -En
will provide information about the removable media and where it is connected, for eg:
to which controller, target and so on.
# df -h => will show which slices and media are mounted
# umount <mount_point>
eg: # umount /mnt/cdrom
# umount <device>
eg: # umount /dev/dsk/c0t0d0s0
will unmount the device.
# mountall => will mount all the slices which have the option "yes"
for "mount at boot" in the file /etc/vfstab
# umountall => the reverse of mountall
/etc/mnttab:
1. not editable
2. holds the entries of all mounted slices & media
3. referred to by the # mount command
/etc/vfstab
1. Holds the information on permanently mounted slices
2. editable by the root user
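For illustration, a permanent mount for the slice used in the earlier examples could be recorded in /etc/vfstab with a line like the following (the field values are assumptions, not taken from the source):

```
#device to mount  device to fsck    mount point  FS type  fsck pass  mount at boot  mount options
/dev/dsk/c0d1s4   /dev/rdsk/c0d1s4  /slice4      ufs      2          yes            -
```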
NOTE:
fsck pass field values in /etc/vfstab:
0 - UFS file systems are not checked; however, non-UFS file systems are checked
- - Not checked
1 - Checked one at a time, in the order they appear in the /etc/vfstab file
/, /usr, /var - SMF mounts these file systems as specified under the /lib/svc/method
directory, beginning with fs.
1. # mount
2. # cat /etc/vfstab
3. # cat /etc/default/fs
4. # cat /etc/dfs/dfstypes
5. # fstyp /dev/rdsk/c0t0d0s7
or
move to the location of the directory where the packages reside and start installing:
# cd /mnt/cdrom/Solaris_10/Product
# ls
# pkgadd -d . SUNWbash
NOTE:
Information on all installed packages is stored in the file
/var/sadm/install/contents
All the installed packages will be in /var/sadm/pkg
# pkgchk -p /etc/shadow
will report whether the file has been modified since its installation.
pkgtrans => is a command to translate packages into a single file in data stream
format
-s => specifies the source location of the packages [the cdrom is mounted under
/mnt/cdrom and the packages are available accordingly].
test => can be any file name, in any location, to hold the translated packages.
SUNWman and SUNWbash are the packages combined into the single file test.
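The pkgtrans command line itself is not shown above; a hypothetical invocation matching that description (source directory, output file /test, and package names taken from the surrounding text) would be:

```shell
# Translate two packages into a single datastream file /test.
pkgtrans -s /mnt/cdrom/Solaris_10/Product /test SUNWman SUNWbash
```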
# file /test
Shows the format of the file.
Eg:
# pkgadd -d /test
Note:
No . is used to install the translated package.
# pkginfo -p
Displays the partially installed packages
# pkginfo -l SUNWman
Displays detailed (long) information about the specified package
INSTALLATION OF SUN SOLARIS 10 OPERATING SYSTEM SOFTWARE
Ok nvramrc
NVRAMRC contents are displayed
Ok oem-logo?
If true, displays the customized OEM logo specified by oem-logo
Ok boot -a
Ask me. Interactive mode; prompts for the names of the boot files.
[Helpful if you need to boot off an alternate /etc/system file after kernel tunable
modifications.]
Ok boot -r
Reconfigure boot. Boots and searches for all attached devices, then builds device entries for
anything which does not already exist. Useful when new devices are added to the
system.
Ok boot -s
Single user. Boots the system to run level S.
Ok boot -v
Verbose boot. Shows good debugging information.
Ok boot -V
Verbose boot. Shows a little debugging information.
Ok .enet-addr
Displays the Ethernet address
Ok .version
Displays the version and date of the boot PROM
(# prtconf -V in a shell when booted)
Ok .speed
Display processor and bus speeds
Ok .env
On servers, this command is used to obtain status information about the system's power
supplies, fans, and temperature sensors.
3. advise - the environmental monitor will perform routine checks and will only
report failures.
Ok .idprom
Displays ID PROM contents
Ok sync
Calls the operating system to write information to the hard disk drive
Ok firmware-version
Displays the major/minor CPU firmware version
Ok reset
Resets the entire system [similar to performing a power cycle]
Ok reset-all
Resets the entire system [similar to performing a power cycle]
Ok set-defaults
Reset all the PROM settings to the factory settings
Ok eject
Ejects the drive
Ok eject cdrom
Ok test device
Test the specified device
Ok test net
Test the primary network controller
Ok test-all
Test all devices available with the self-test capability
Ok test scsi
Test the primary SCSI controller
Ok watch-net
Monitors network broadcast packets on the default interface
. for a good packet
X for a bad packet
Ok watch-net-all
Monitors network broadcast packets for all the interfaces
Ok obdiag
Invokes an optional interactive menu tool which lists all self-test methods available on a
system and provides commands to run the self tests. (More for servers and very machine
specific. Reference the specific hardware manual for the machine to get additional
information on running obdiag.)
Ok nvedit
Enters the NVRAMRC editor. If data remains in the temporary buffer from a previous
nvedit session, resumes editing those previous contents. If not, reads the contents of
NVRAMRC into the temporary buffer and begins editing it.
Ok show-devs
Display list of installed and probed devices
Ok show-pci-devs
Display all PCI devices
Ok show-disks
Displays a list of known disks in a format suitable for creating device aliases.
Sets the PROM security password to what is specified in the password field. This
password must be between zero and eight characters [any characters after eight are
ignored] and the password takes effect immediately; no reset is required. Once set, if
we enter an incorrect password there is a delay of around 10 seconds before we are
able to try again, and the security-#badlogins counter is incremented. The password is
never shown as we type it, nor by printenv.
OK printenv security-mode
2. command
a. All commands except for boot and go require the password
3. full
a. All commands except for go require the password
Caution:
We must set our security password before setting the security mode. [The
password is blank by default, but if already set by someone, we won't know
what it is and will not be able to disable it.] If we forget the security
password, we may not be able to use our system and must call the vendor
for a replacement PROM.
Ok printenv security-#badlogins
Displays the security-#badlogins counter. This counter keeps track of the number
of failed security password attempts.
Changing the power-on banner:
The banner information seen at power-on can be modified with the
oem-banner and oem-banner? configuration settings. By default the banner
shows information like the processor type, speed, PROM revision, memory,
hostid and the Ethernet address.
Ok banner
Display the power-on banner
Note:
1. The bootblk program is placed on the disk drive by the
installboot command during system installation.
2. Boot program phase:
a. The ufsboot program locates & loads the appropriate
2-part kernel:
b. [i] genunix - the platform-independent generic kernel
file
[ii] unix - the platform-specific kernel file
3. When ufsboot loads these 2 files into memory, they are
combined to form the running kernel.
4. Solaris 10 for SPARC only runs on 64-bit systems
5. The /etc/init file is a symbolic link to /sbin/init
Note:
# bootadm - manage bootability of GRUB-enabled operating system
# bootadm list-menu
The location for the active GRUB menu is: /boot/grub/menu.lst
default 0
timeout 10
0 Solaris 10 11/06 s10x_u3wos_10 X86
1 Solaris failsafe
# bootadm list-archive
# cat /etc/default/init
Status:
Maintenance: The service instance has encountered an error that must be
resolved by the administrator; the service has to be made available manually.
Uninitialized: This state is the initial state for all services before their
configuration has been read.
# svcs -a
the -a option will display all services, including disabled services
# svcs
lists the services that are running, the status of each service, and its FMRI
# svcs -l
the -l option will give detailed information about a service.
Eg: svcs -l network
# svcs -l <FMRI>
lists detailed information about the specified FMRI. The status of the service can
also be viewed.
Eg: # svcs -l telnet
# svcs -d
the -d option lists the services or service instances upon which the given service
instance depends.
Eg: svcs -d milestone/network:default
svcs -d milestone/multi-user
svcs -d network/inetd
# svcs -D
the -D option will display the other services that depend on a given service.
eg: svcs -D milestone/multi-user
# svcs -p
the -p option is to view the processes associated with a service instance.
eg: svcs -p svc:/network/inetd:default
# svcs -x
If a service fails for some reason and cannot be restarted, you can list the
service using the -x option.
NOTE:
milestone/single-user represents run level S of previous versions of Solaris
milestone/multi-user represents run level 2 of previous versions of Solaris
milestone/multi-user-server represents run level 3 of previous versions of Solaris.
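As a sketch (not from the source), a milestone can be reached with svcadm much as a run level was reached with init:

```shell
# Bring the system to the multi-user-server milestone
# (roughly the old run level 3):
svcadm milestone milestone/multi-user-server:default
```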
# inetconv - converts inetd.conf entries into SMF service manifests and imports them into
the SMF repository
# inetadm
Displays the services that are controlled by inetd
# inetadm -l <FMRI>
Displays detailed information about the specified FMRI.
Eg: # inetadm -l telnet
# inetadm -d <FMRI>
To disable the specified service
Eg: # inetadm -d telnet
# inetadm -e <FMRI>
To enable the specified service
Eg: # inetadm -e telnet
# inetadm -p
Displays the global settings
If the current /etc/vfstab file contains NFS mount entries, saves the
/etc/vfstab file to /etc/vfstab.orig.
Removes the default hostname in the /etc/hostname.<interface> files for all interfaces
configured when this command is run. To determine which interfaces are
configured, run the command 'ifconfig -a'. The /etc/hostname.<interface> files
corresponding to all of the interfaces listed in the resulting output, with the
exception of the loopback interface (lo0), will be removed.
Removes the default domainname in /etc/defaultdomain.
Disables the Network Information Service (NIS) and Network Information Service
Plus (NIS+) if either NIS or NIS+ was configured.
Removes the file /etc/defaultrouter. Removes the password set for root in
/etc/shadow.
FILE PERMISSIONS
# chmod => used to change the permissions of directories and files
syn: # chmod <user/group/other> <operator> <read/write/execute>
eg: # chmod u+rwx hai
u => user
g => group
o => other
a => all
r => read
w => write
x => execute
+ => to add
- => to remove
= => to assign
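Both the symbolic and the octal forms can be tried with a quick portable sketch (the file name perm_demo is arbitrary):

```shell
cd /tmp
touch perm_demo
chmod u+rwx,g+rx,o+r perm_demo   # symbolic form
chmod 644 perm_demo              # octal form: 6=rw- 4=r-- 4=r--
ls -l perm_demo                  # mode column reads -rw-r--r--
rm perm_demo
```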
NOTE:
7 = rwx
6 = rw-
5 = r-x
4 = r--
3 = -wx
2 = -w-
1 = --x
0 = ---
syn: # chown <new_owner> <file_name>
eg: # chown hari jai
where
hari => new owner
jai => file name
Note:
Ownership can be changed only by the ROOT user.
1. SETUID - 4
2. SETGID - 2
3. STICKY BIT - 1
When SETUID is assigned to an executable file, any user who executes the file
effectively becomes the owner of the file for the duration of that execution.
To check:
# ls -l
Again login as user -> shiva
$ pwd
$ cd check
$ touch welcome to the world of unix
$ ls -l
STICKY BIT
1. It is useful when the sticky bit permission is applied to a directory.
2. If a directory has the sticky bit set, every user has the right to create a file inside
that dir [provided they have write permission].
3. Only the root user and the owner of a file are permitted to delete that file.
To implement:
# mkdir test
# chmod 1777 test
Now login as a user named che and create a file inside the directory test.
To check:
Login as another user named castro and try to delete the file created by che.
System will not permit to delete the file by the user castro.
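Setting the bit itself can be sketched portably (the directory name is arbitrary; checking the delete behaviour needs the two accounts described above):

```shell
mkdir /tmp/stickydemo
chmod 1777 /tmp/stickydemo
ls -ld /tmp/stickydemo    # the mode ends in "t": drwxrwxrwt
rmdir /tmp/stickydemo
```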
Files involved:
/etc/passwd
/etc/shadow
/etc/skel
/etc/group
In a nutshell:
# useradd -D
Displays the default parameters assigned to the # useradd command
Note:
syn: # id <user_name>
eg: # id castro
will provide the information about the user id and the primary group he belongs to.
syn: # id -a <user_name>
eg: # id -a castro
will provide information about the user id, the primary group, and the secondary groups the
user belongs to
syn: # id
will provide information about the user id and the primary group of the currently logged-in
user
# userdel <login-name>
# userdel shiva
will only delete the user account named shiva
# userdel -r <login-name>
# userdel -r shiva
will delete the user account along with the home dir and the data created by the
specified user.
# passwd -d <login_name>
# passwd -d shiva
will remove the password of the specified user
NOTE:
# logins -p
will provide information about the users who do not have passwords
# passwd -l <login-name>
# passwd -l shiva
will lock the user specified
# passwd -u <login-name>
# passwd -u shiva
will unlock the user account
Step 2.A:
# useradd -m -d /export/home/shiva -g solaris -s /bin/bash shiva
# passwd shiva
the above commands create the user account shiva, belonging to the group solaris, &
assign it a password.
Step 2.B:
# useradd -m -d /export/home/lingesh -s /bin/bash lingesh
# passwd lingesh
these commands create the user account lingesh & assign it a password
Step 3:
As a root user or as any user create a file.
Here lets create a file with the root user account
# mkdir /new
# cd /new
# cat > one
# ls -l
this will display the default permissions and the group the owner (here root) belongs
to.
# chmod 664 one
This command will change the permissions of the file 'one'
# chgrp solaris one
this command will change the group to 'solaris' for the file 'one'
Step 4:
To assign a password to a group
a. Copy the second field (encrypted password) of any user account from the file
/etc/shadow
b. Paste the same into the second field of the file /etc/group
Step 5: To check
a. Login as the user (shiva - who belongs to the solaris group)
and make changes to the file. It'll change.
b. Login as the other user (lingesh - who DOESN'T belong to the solaris group)
and try to make changes to the file.
We'll be prompted with "permission denied"
c. # newgrp solaris
this command will prompt for the password of the group solaris
and allows the user to take on the group's permissions.
NOTE: When the user logs in to the group, a new shell is spawned.
NOTE: DO NOT duplicate the root id for any user; if that happens it leads to a security breach.
MISC
2. # pwck => checks the entries of the file /etc/passwd and displays any
errors found
3. # grpck => checks the entries of the file /etc/group and displays any errors found
4. # echo $? => provides the exit status of the last command executed
if it is 0 -> the command executed successfully
if other than 0 -> an error occurred
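A quick portable sketch of reading the exit status:

```shell
date > /dev/null
echo $?                      # prints 0: the previous command succeeded
if ! ls /no/such/dir 2> /dev/null; then
    echo "non-zero status"   # a failing command yields a non-zero status
fi
```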
Trouble shooting:
At single user mode, whilst the CD-ROM is mounted
# TERM=sun
# export TERM
# vi /etc/shadow
FTP IMPLEMENTATION
# ftpcount
Shows the current number of users in each ftp server class
-v Displays the user counts for ftp server classes defined in virtual hosts [ftpaccess]
-V Displays the program copyright and version information, then terminates
# ftpwho
Shows current process information for each ftp server user
1. It'll display which user is logged in, along with the process id
2. The status of the user will be displayed
3. Will also display the password given by the anonymous user
Note:
The login time allowed via ftp is defined in the file /etc/ftpd/ftpaccess.
Time out in seconds.
# ftpconfig
Sets up anonymous ftp
Note:
1. If the /var/ftp dir doesn't exist, the above command will create and update the dir
for anonymous ftp.
3. This can also be checked by using a GUI web browser to test the anonymous login
using ftp.
# ftpconfig /var/ftp
# cd /var/ftp
# ls -l
ftp://192.168.0.100
3- default classes:
1. Real users:
a. Can login using shell [ssh/telnet]
b. Can browse the entire directory
2. Guest users:
a. Are temporary users
3. Anonymous user:
a. General public for download capability
2. # passwd guests
3. # mkdir /export/home/guests
4. # chown guests /export/home/guests
5. # ftpconfig -d /export/home/guests
6. Update the configuration file /etc/ftpd/ftpaccess.
Edit anywhere in the file; it also has the entry commented out.
# guestuser <guest_user_name>
guestuser guests
Note:
Guest users are similar to real users, except guest users are jailed/chrooted.
PERFORMANCE MONITORING
# who am i
# whoami
# who
# w
# rusers -l <host_name>
Displays who is currently logged in to the specified machine, including remote logins.
# whodo
Displays who is doing what
# groups
Prints the group membership of the current user
# groups <user_name>
Prints the group membership of the specified user
# uptime => provides information about the uptime of the system
# last => provides information about previous logins,
when the system came to the up state, when it was rebooted, and so on.
Protocols include:
TCP, IP, ICMP [which controls ping, echo], IGMP, RAWIP, UDP [DHCP, TFTP]
# netstat usage:
# netstat
TCP: IPv4
Local Address Remote Address Swind Send-Q Rwind Recv-Q State
-------------------- -------------------- ----- ------ ----- ------ -------
accel1.telnet intel.32961 49640 0 49640 0 ESTABLISHED
1 2 3 4
Where
1 => hostname of the sender
2 => port/protocol
3 => hostname of the receiver / remote
4 => port/protocol of remote
Note:
1. # cat /etc/services
Displays the well known port number and their corresponding services
2. Hostnames are displayed by the # netstat command only if the /etc/hosts file has
the entry of the ip-address and corresponding hostname [resolve].
This file will be indirectly checked:
when issuing the # netstat command it reads the file /etc/nsswitch.conf, and
this file redirects it to read the file /etc/hosts [provided the entry is made].
5. Sockets are NOT found for UDP connections since they are connection less.
# netstat -a
a. Shows the state of all packets
b. All routing table entries / all interfaces, both physical & logical
c. Returns ALL protocols for ALL address families [TCP/UDP/UNIX].
UDP: IPv4
Local Address Remote Address State
-------------------- -------------------- -------
*.route Idle
*.sunrpc Idle
*.* Unbound
*.32771 Idle
[Output truncated]
# netstat -n
a. Shows network addresses as numbers. Normally # netstat displays addresses as
symbols.
b. It disables name resolution of hosts and ports and hence displays the ip-address.
TCP: IPv4
Local Address Remote Address Swind Send-Q Rwind Recv-Q State
-------------------- -------------------- ----- ------ ----- ------ -------
192.168.0.100.23 192.168.0.19.32961 49640 0 49640 0 ESTABLISHED
192.168.0.100.32921 192.168.0.5.6000 500576 0 49640 0 ESTABLISHED
127.0.0.1.32923 127.0.0.1.32879 49152 0 49152 0 ESTABLISHED
[Output truncated]
# netstat -i
a. Returns the state of the physical interfaces. Pay attention to
errors/collisions/queue whilst troubleshooting.
b. When combined with the -a option, displays a report on logical interfaces.
Name Mtu Net/Dest Address Ipkts Ierrs Opkts Oerrs Collis Queue
lo0 8232 loopback localhost 131536 0 131536 0 0 0
hme0 1500 accel1 accel1 186731 0 189733 0 0 0
NOTE:
mtu - Maximum Transmission Unit
In general the loopback address mtu will be high.
# netstat -m
a. Shows the STREAMS memory statistics
[how heavily STREAMS resources are being used by TCP traffic on the system]
streams allocation:
cumulative allocation
current maximum total failures
streams 300 336 2463 0
queues 742 756 5539 0
mblk 488 1778 192771 0
dblk 489 2009 1062735 0
linkblk 7 169 8 0
syncq 17 50 77 0
qband 2 127 2 0
# netstat -p
Returns net-to-media information
[MAC/layer-2 information], i.e., the arp table.
# netstat -P <protocol>
Returns active sockets for specified protocol
Note:
1. Protocols should be specified in lowercase letters
2. Only the following protocols are allowed: ip|ipv6|icmp|icmpv6|tcp|udp|
rawip|raw|igmp
TCP: IPv4
Local Address Remote Address Swind Send-Q Rwind Recv-Q State
-------------------- -------------------- ----- ------ ----- ------ -------
accel1.telnet intel.32961 49640 0 49640 0 ESTABLISHED
accel1.32921 192.168.0.5.6000 500576 0 49640 0 ESTABLISHED
localhost.32923 localhost.32879 49152 0 49152 0 ESTABLISHED
[Output truncated]
# netstat -r
a. Returns routing table
b. Normally, only interface, host, network & default routes are displayed
c. Combined with -a option, all routes will be displayed, including cache.
# netstat -D
Returns DHCP configuration [includes releases/renewals etc]
# netstat -an -f [inet|inet6|unix]
-f => allows to specify the family address
UDP: IPv4
Local Address Remote Address State
-------------------- -------------------- -------
*.520 Idle
*.111 Idle
*.* Unbound
*.32771 Idle
*.* Unbound
[Output truncated]
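The State column of # netstat output can be summarized with standard tools; a small runnable sketch using hypothetical saved output (the sample lines below are illustrative, not live data):

```shell
# Save a few sample netstat TCP lines (state is the last field on each line).
cat > /tmp/netstat.sample <<'EOF'
192.168.0.100.23     192.168.0.19.32961   49640 0 49640 0 ESTABLISHED
192.168.0.100.32921  192.168.0.5.6000    500576 0 49640 0 ESTABLISHED
127.0.0.1.32923      127.0.0.1.32879      49152 0 49152 0 TIME_WAIT
EOF
# Count connections per TCP state.
awk '{count[$NF]++} END {for (s in count) print s, count[s]}' /tmp/netstat.sample
```

On a live system the same awk pipeline can be fed directly from `netstat -an`.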
Misc
# atq
will provide the info about the scheduled tasks along with their ids.
# at -l
will provide the info about the job id and the user who scheduled the process
/etc/cron.d/at.deny
this file holds the login names of the users who are denied use of the at command.
/etc/cron.d/at.allow
this file will not be present by default.
this file has to be created manually.
this file holds the login names of the users who have permission to use the
at command.
In general system will check for the /etc/cron.d/at.allow file first and then moves to the
file /etc/cron.d/at.deny
/var/cron/log
this file logs the at command scheduling
/var/spool/cron/atjobs
is a directory which holds the at schedule
NOTE:
0 = sunday
1 = monday
respectively
6. command field => the command that has to be executed
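The fields described in the NOTE above form a crontab entry; a hypothetical example line (the script path is illustrative):

```
# minute  hour  day-of-month  month  day-of-week(0=Sunday)  command
0 2 * * 0 /usr/local/bin/weekly_report.sh    # runs at 02:00 every Sunday
```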
PRINTER CONFIGURATION
# printmgr &
=> This above command opens a menu
=> Printer configuration can be menu driven
NOTE:
1. Before configuring the printer make sure about its compatibility with Sun
Microsystems hardware.
2. Check the make and the type
3. The port to which the printer is connected physically.
# lp <file_name>
eg: # lp check_printer
will print the file named "check_printer" on the default printer
# lpstat -d
displays which printer is activated as the default if we have configured more than one
printer
# lpstat -p
displays status of all the printers that are configured to the system
# lpadmin -d <printer_name>
eg: # lpadmin -d hp
will activate "hp" as the default printer if we had configured more than one printer.
# reject <printer_name>
eg: # reject hp
this command will reject requests to the printer named "hp"
i.e., the hp printer will not accept requests from any user, including root.
Note:
In the above case, printer is physically connected, activated but the request will not be
fulfilled or not accepted.
# accept <printer_name>
eg: # accept hp
this command will make the printer named "hp" accept requests again.
In other words the printer starts printing the desired output.
# disable <printer_name>
eg: # disable hp
this command will disable the printer. In other words printer is not activated.
# enable <printer_name>
eg: # enable hp
will activate/enable the printer specified.
/var/lp/logs/requests -> provides the information on the print logs which includes
1. which user gave the print request
2. date & time of the request
3. size of the file
4. user id, group id
5. file name
6. location of the file
# lpq
provides information about the requests in the print queue.
BACKUP & RESTORE
NOTE:
1. It's recommended to take the backup after unmounting the file system
NOTE:
1. Enter into the system maintenance mode
2. Then check the destination size of the tape/disk
3. Proceed with the backup.
# cd /
# newfs /dev/rdsk/c1d0s0
# mount /dev/dsk/c1d0s0 /a
# cd /a
# ufsrestore rvf /dev/rdsk/c1d0s6
# rm restoresymtable
# cd /usr/platform/`uname -m`/lib/fs/ufs
# installboot bootblk /dev/rdsk/c1t1d0s0 -> SPARC
# installgrub -fm /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c1d0s0
-> X86/X64 arch
where
-f => suppresses interaction when overwriting the master boot record
-m => installs GRUB stage1 on the master boot sector interactively
# cd /
# umount /a
# init 6
SNAPSHOT
NOTE:
Here we consider that
c1d0s5 is mounted under /mnt/slice5 and is holding some data.
In the above case, after the snapshot command is executed, /mnt/slice5 is snapshotted
and the snapshot is stored at
-> /dev/fssnap/0
# fssnap -i
Provides the info about all the snapshots available in the system
# fssnap -d <device>
eg: # fssnap -d /dev/fssnap/0
Deletes the device
NOTE:
The device is in unmounted state by default.
NOTE:
We can also mount the /dev/fssnap device and check whether all the information was
snapshotted properly or not.
The condition is that we are able to mount it only with the `ro` (read only) option.
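The snapshot discussed above is created with the fssnap command; a hedged sketch (the backing-store path and mount point are examples):

```
# fssnap -F ufs -o bs=/var/tmp /mnt/slice5      # creates /dev/fssnap/0
# mkdir -p /check_snap
# mount -F ufs -o ro /dev/fssnap/0 /check_snap  # read-only mount is mandatory
```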
# dladm show-dev
Displays the instance names of the interfaces in the current system.
# ifconfig -a
Displays the interface (instance name), ipaddress, state of the interface card, netmask,
MAC address.
# ifconfig <instance_name_of_the_interface> up
eg: # ifconfig nge1 up
will bring the interface to the up state, ready to communicate.
NOTE:
# grep network /etc/path_to_inst
Will also display the instance name that is associated to the hardware.
Keep in mind that this above command works with SPARC and not with
x86/x64.
# snoop => used to snoop/monitor the network packets transmitted between the
machines.
NOTE:
When a user is trying to login using telnet, the user's login name along with the
password and commands executed by him can be snooped or monitored.
But when the user is trying to login remotely using "rlogin" it CANNOT be snooped.
SWAP CONFIGURATION
To increase the swap/virtual memory.
1. We can add a slice to the swap memory
2. We can add a file to the swap memory
# swap -l
this lists the files and slices that are dedicated or associated to the
swap/virtual memory
# swap -s
this lists the summary of the virtual memory.
# mkfile <size> <location_of_the_file>
eg: # mkfile 200m /empty_swap
this command creates a 200 MB file with the name "empty_swap" under / (root).
NOTE:
The file can be created anywhere in the mounted slices and can be
associated/dedicated to the swap/virtual memory.
# swap -a <swap_file_name_or_disk_slice>
eg: # swap -a /empty_swap
this command will add the file /empty_swap (with size 200 MB) to the virtual memory.
# swap -a /dev/dsk/c1d0s5
will add the slice s5 to the swap/virtual memory.
NOTE:
The slice s5 has to be created beforehand, before adding it to the
swap/virtual memory.
# swap -d <swap_file_slice>
eg: # swap -d /empty_swap
# swap -d /dev/dsk/c1d0s5
will delete/remove the associated/dedicated slice/file from the swap/virtual
memory.
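swap -a additions do not survive a reboot; to make the /empty_swap file permanent, an /etc/vfstab entry along these lines is needed (a sketch matching the example above):

```
#device       device to fsck  mount point  FS type  fsck pass  mount at boot  options
/empty_swap   -               -            swap     -          no             -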
# smc &
# dumpadm
This command will read the info from the configuration file
/etc/dumpadm.conf
NOTE:
Both with # dumpadm and # coreadm
commands it is recommended to use the commands and DO NOT edit the
configuration file.
CRASH DUMP:
OS generates a crash dump by writing some of the contents of the physical memory to a
pre-determined dump device, which must be a local disk slice.
/var/crash/`uname -n`/vmcore.x
where
x = integer identifying the dump
/var/crash/`uname -n`/unix.x
NOTE:
Within the crash dump directory a file named bounds is created. The bounds file holds
a number that is used as a suffix for the next dump to be saved.
# dumpadm
This command reads the file /etc/dumpadm.conf and the output will be displayed
accordingly.
# dumpadm -d /dev/dsk/c0d1s5
Will change the default (/dev/dsk/c0d1s1) dumpdevice to
/dev/dsk/c0d1s5
# dumpadm -n
will disable the save core.
Dump content: kernel pages
Dump device: /dev/dsk/c0d1s5 (dedicated)
Savecore directory: /var/crash/server
Savecore enabled: no
# dumpadm -y
will enable the save core.
Dump content: kernel pages
Dump device: /dev/dsk/c0d1s5 (dedicated)
Savecore directory: /var/crash/server
Savecore enabled: yes
NOTE:
1. save core is by default enabled.
Only if the save core is enabled dumpadm will dump the contents to the device
specified.
2. # dumpadm
command updates the file /etc/dumpadm.conf
and hence the configuration remains permanent.
# dumpadm -s /var/crash/Unix
This command changes the savecore directory.
# dumpadm -c all
This will ask the system to dump all the pages from the physical memory.
The default dump content is kernel pages.
Coreadm:
NOTE:
If the directory defined in the global core file pattern does not exist, it has to be
created manually.
# coreadm
reads the entries of the file /etc/coreadm.conf and the configuration is displayed.
Coreadm Patterns:
%m = machine hardware name
%n = system node name
%p = process-id
%t = decimal value of time
%u = effective user id
%z = name of the zone in which the process executed
%g = effective group id
%f = executable file name
-d = disable
-e = enable
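Combining the patterns above, a typical (hypothetical) global core configuration could look like this; the /var/core directory name is an example:

```
# mkdir -p /var/core                        # directory must exist (see NOTE above)
# coreadm -g /var/core/core.%f.%p -e global
# coreadm                                   # verify the global core file pattern
```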
NFS is used to
1. Share the resources which includes the sharing of the database and the sharing of
hardware resource (especially the Hard disk drive slice).
# share
1. Makes a local directory on an NFS server available for mounting
2. It also displays the contents of the file /etc/dfs/sharetab
3. It also updates the file /etc/dfs/sharetab
WHERE
A = command to mount the shared resource
B = option -F to specify the file system
C = "nfs" specifies the file system
D = specifies the node name or ip address of the nfs server
E = Shared directory at the Server
F = Mount point at the client
NOTE:
1. Before mounting the resource at the mounting point at Client, Make sure that the
mount point exists.
2. If the node name is used instead of the ip-address of the nfs-server to mount the
resource, make sure the client system's /etc/hosts file is updated or resolved with the
nfs-server name and ip.
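Putting the A-F pieces together, a hedged server/client example (host names, addresses and paths are illustrative):

```
On the NFS server:
# share -F nfs -o ro /export/data

On the client (the mount point must already exist):
# mkdir -p /mnt/data
# mount -F nfs 192.168.1.51:/export/data /mnt/data
```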
# cat /etc/dfs/fstypes
will provide the information about the file system types supported by NFS.
nfs, cachefs and autofs will be supported.
nfs => to share the resource
cachefs => for sync: when the file at the client side is updated, the source file at the
server will also get updated. For this updating, cachefs is required.
autofs => for automatic mounting of the resource on demand.
# dfshares
Lists available shared resources from a remote/local NFS server along with the ip-
address or with the node-name
syn: # dfshares
will check for the local system
syn: # dfshares <ip-address_or_node-name>
eg: # dfshares 192.168.1.51
eg: # dfshares Unix
Checks the system Unix/192.168.1.51 for the shared resource.
# shareall
Reads and executes share statements from the file
/etc/dfs/dfstab
# unshare
Makes a previously available directory unavailable for client side mount operations
# unshareall
Makes all previously shared resources unavailable
/etc/dfs/dfstab
has the entries of the permanently shared resources
This file is editable
Lists the local resources to share at boot time.
/etc/dfs/sharetab
is NOT editable.
Explanation:
Here
share -F nfs /empty
Note:
Irrespective of the file or directory permissions of /another.
NOTE:
If the svc:/network/nfs/server
service does not find any 'share' commands in the /etc/dfs/dfstab file, it does not start
the NFS server daemons.
Solaris-9:
/etc/init.d/nfs.server stop
/etc/init.d/nfs.client stop
NOTE:
NFSv4 uses the well-known port number 2049.
/etc/nfs/nfslog.conf
Configuration file. Lists information defining the location of configuration logs used for
NFS server logging.
/etc/default/nfslogd
Lists configuration information describing the behaviour of the nfslogd daemon for
NFSv2 and 3.
/etc/default/nfs
Contains parameter values for NFS protocols and NFS daemons
Autofs
1. It's a client side service to make the shared resource available at the client side on
demand.
2. The autofs file system is initialized by the
/lib/svc/automount script
NOTE:
The automountd daemon is completely independent of the automount command.
Because of this separation, we can add/modify/delete map information without having
to stop and start the automountd daemon process.
Autofs types:
1. Master map
2. Direct map
3. Indirect map
4. Special map
Master map:
1. Lists the other maps used for establishing the autofs file system.
2. The automount command reads this map at boot time.
Direct map:
Lists the mount points as ABSOLUTE PATH names. This map explicitly indicates the
mount point on the client.
/- mount point is a pointer that informs the automount facility that full path names
are defined in the file specified by MAP_NAME (for eg: here its /etc/direct_map).
NOTE:
1. /- is NOT an entry in the default master map file (/etc/auto_master)
2. The automount facility by default automatically searches for all map-related files in
the /etc directory.
Indirect map:
Lists the mount points as relative path names. This map uses a relative path to
establish the mount point on the client.
An indirect map uses a key substitute value to establish the association between a
mount point on the client and a directory on the server. Indirect maps are useful for
accessing specific filesystems, such as home directories, from anywhere in the network.
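Sample entries illustrating the map types described above (server names and paths are examples):

```
/etc/auto_master:
    /-      /etc/direct_map     # direct map: keys are absolute paths
    /home   auto_home           # indirect map: keys are relative to /home
    /net    -hosts              # special map

/etc/direct_map:
    /apps/tools    server1:/export/tools

auto_home:
    lenin          server1:/export/home/lenin
```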
Special map:
Provides access to NFS services by using their host names.
NOTE:
+ symbol at the beginning of the
+auto_master line in the /etc/auto_master file directs the automountd daemon to look
at the NIS, NIS+ or LDAP databases before it reads the rest of the map.
If this line is commented out, only the local files are searched
unless the /etc/nsswitch.conf file specifies that NIS, NIS+ or LDAP should be
searched.
auto_home
This map provides the mechanism to allow users to access their centrally located
$HOME directories
-hosts map
Provides access to all resources shared by NFS servers. The servers' resources are
mounted below the /net/hostname directory, or if only the server's ip-address is known,
below the /net/ipaddress directory. The server does not have to be listed in the hosts
database for this mechanism to work.
here
-v = provides the detailed information about the automounted resources.
SYSTEM MESSAGING
the /etc/syslog.conf
the above file is responsible for sending or redirecting errors to a logfile, the console,
a user or a central log host.
Sources of messages:
1. daemons
2. user processes
3. the kernel
4. logger (this is only a command to generate a message, used to check the
configuration performed in the file /etc/syslog.conf).
Level of errors:
emerg - 0 (Priority)
alert - 1
crit - 2
err - 3
warning - 4
notice - 5
info - 6
debug - 7
none - 8
# tail -f /var/adm/messages
will display all the errors generated by all the users.
Note:
the -f option along with the tail command follows the file and keeps displaying new
contents to the users.
To check:
Let's consider two systems, HOSTA and HOST123.
1. Login as 'root' user using telnet to the system HOST123 from HOSTA.
Try to do some activity which generates some errors. For eg: try to plumb an interface.
Note: keep in mind, do not disturb the interface through which you have connected.
Meanwhile,
open the file using
# tail -f /var/adm/messages
on the system HOSTA.
/etc/syslog.conf
1. this file is editable
To make changes to the file
i.e to re-direct the logs to the central log host
this file has to be edited.
By default the errors will be generated to the file /var/adm/messages
Note:
Before doing any configuration its recommended to have a backup copy of the default
configuration file.
/etc/init.d/syslog start
/etc/init.d/syslog stop
It is good to compile (using m4) and restart the syslog daemon after making changes
to the file /etc/syslog.conf
where
A = *.err
means, all (user process, kernel, daemon, logger) who ever
generating the error message
B = kern.debug
means, only kernel generating debug messages
C = daemon.notice
means, only the daemon generating notice messages
D = mail.crit
means, only mail generating critical messages
E = /var/adm/messages
all above mentioned messages have to be logged to the file
/var/adm/messages
To test:
1. edit the file /etc/syslog.conf
*.notice /var/log/logs-test
Note:
If the same message is generated several times, the repeated message will not be
logged again to the specified file.
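The test above can be driven from the command line with logger; a hedged sketch (note that syslog.conf fields must be separated by tabs, and syslogd does not create a missing log file by itself):

```
# vi /etc/syslog.conf         # add:  *.notice<TAB>/var/log/logs-test
# touch /var/log/logs-test
# /etc/init.d/syslog stop ; /etc/init.d/syslog start
# logger -p user.notice "syslog test message"
# tail /var/log/logs-test
```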
ACL - ACCESS CONTROL LIST
eg: # getfacl one (NOTE: file named "one" is already created and acl has been
implemented to the file)
The following is the output of the above command.
# file: one
# owner: root
# group: root
user::rw-
user:bill:rwx #effective:rw-
group::r-- #effective:r--
mask:rw-
other:---
# ls -l
total 4
-rw-r--r-- 1 root root 0 Nov 27 09:24 four
-rw-r--r-- 1 root root 1389 Nov 27 09:27 hai
-rw-r-----+ 1 root root 0 Nov 27 09:24 one
-rw-r--r-- 1 root root 0 Nov 27 09:24 three
-rw-r--r-- 1 root root 0 Nov 27 09:24 two
NOTE:
Only the file "one" is displayed with "+" -> which shows that acl has been
implemented
where
A -> command to assign the acl
B -> to substitute the acl permissions to the file
C -> user acl permissions
D -> group acl permissions
E -> other acl permissions
F -> ACL mask value
G -> to specify a separate acl permission to a particular user
H -> name of the file
# getfacl -d /acl/four
to get the default acl entries
where
A -> to get the acl entries
B -> source file
C -> to assign the acl entries
D -> source file
E -> the file name to which the acl entry has to be assigned
The acl entries assigned to the file "four" will be assigned to the file "seven"
eg:
#setfacl -s user::rwx,group::rw-,other:r--,mask:rw-,user:karl:rw- test
A B C D E F G H
Here
A = setfacl => command to assign or set the acl permissions
B = -s => option 's' to specify that we are assigning the acl
permissions to a file
Note:
Although a particular user is given the permission to modify the file, if the mask is
assigned as r-- (read only), that particular user is denied modification.
In short, the acl mask permissions play the vital role
# file: test
# owner: lenin
# group: acl
user::rwx
user:karl:rw- #effective:r--
group::rw- #effective:r--
mask:r--
other:r--
Since the acl mask is assigned as r--, user karl cannot modify the file test, even though
he was given the permission.
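The #effective column is simply the bitwise AND of the entry's permission bits and the mask; a runnable sketch using octal digits to stand for the rwx bits:

```shell
# user:karl:rw- is 6 (binary 110); mask:r-- is 4 (binary 100)
perm=6
mask=4
effective=$((perm & mask))
echo "$effective"    # 4 -> r--, matching #effective:r-- above
```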
here
eg:
# ls -l
-rwxrw-r--+ 1 lenin acl 150 May 13 11:58 test
A
here
A = + => indicates that the file 'test' is assigned acl permissions
where
A = to get the acl entries of the file (test)
B = source file name (here : test)
C = to assign the acl entries
D = option -f to assign the acl entries
E = the file name to which the acl entry has to be assigned.
Here the acl permissionship of the file "test" is assigned to the file
"/export/home/lenin/another"
(2) -> after reading the entry of the file, it moves on and reads the file /etc/hosts
1. # cp /etc/nsswitch.nis /etc/nsswitch.conf
2. # domainname aita.com
4. # cd /etc
6. # ypinit -m
7. # /usr/lib/netsvc/yp/ypstart
# /usr/lib/netsvc/yp/ypstop
8. # ypcat hosts
10. # ypwhich
1. # cp /etc/nsswitch.nis /etc/nsswitch.conf
2. # domainname accel.com
4. # ypinit -c
5. # /usr/lib/netsvc/yp/ypstart
6. # ypwhich -m
1. # cd /var/yp
2. # /usr/ccs/bin/make
# ypwhich
will display the name of the NIS Master server
# ypcat hosts
will display the hosts database
# ypinit -m
to initiate the NIS Master server
# ypinit -c
to initiate it as a client; when prompted for the list of servers,
provide the server names.
ACTIONS:
Continue - try the next source
Return - stop looking for an entry
Default Actions:
SUCCESS = return
UNAVAIL = continue
NOTFOUND = continue
TRYAGAIN = continue
Note:
NOTFOUND = return
The next source in the list will only be searched if NIS is down or has been disabled
Normally, a success indicated that the search is over and an unsuccessful result
indicates that the next source should be queried. There are occasions, however, when
you want to stop searching when an unsuccessful search result is returned.
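For example, to stop searching as soon as NIS reports that the entry does not exist (the NOTFOUND=return case described above), an /etc/nsswitch.conf line would look like:

```
hosts:  nis [NOTFOUND=return] files
```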
NOTE:
YP to NIS,
1. NIS was formerly known as SUN Yellow Pages (YP). The functionality of the service
remains the same, only the name has changed.
3. NIS stored information about workstation names and addresses, users, the network
itself, and network services. This collection of network information is referred to as the
NIS NAMESPACE.
4. Any system can be an NIS client, but only systems with disks should be NIS servers,
whether master or slave.
6. The master copies of the maps are located on the NIS master server, in the
directory /var/yp/domain_name
/etc/bootparams:
1. Contains the path names that clients need during startup: root, swap and possibly
others.
/etc/ethers:
1. Contains system names and ethernet addresses. The system name is the key -
ethers.byname
2. Contains system names and ethernet addresses. The ethernet address is the key in
the map - ethers.byaddr
/etc/netgroup:
1. netgroup - contains groupname, username and system name. The groupname is the
key.
2. netgroup.byhost - contains the group name, user name and system name. The
system name is the key.
3. netgroup.byuser - contains the group name, user name and system name. The
username is the key.
/etc/netmasks:
netmasks.byaddr - contains the network masks to be used with YP subnetting. The
address is the key.
/etc/timezone:
timezone.byname - contains the default timezone database. The timezone name is the
key.
/etc/shadow - ageing.byname
/etc/auto_home - auto.home
Automounter file for home directory.
/etc/auto_master - auto.master
Master automounter map
/etc/security/exec_attr
Contains execution profiles, part of RBAC
/etc/hosts
hosts.byaddr
hosts.byname
/etc/group
group.bygid
group.byname
/etc/user_attr
contains the extended user attributes database, part of RBAC
/etc/security/prof_attr
Contains profile descriptions, part of RBAC
/etc/passwd
/etc/shadow
passwd.byname
passwd.byuid
These above were some of the databases and files referred to after activating NIS. Still
some more files and directories are there.
# ypinit -s
Note:
Make sure that the yp services are stopped.
# /usr/lib/netsvc/yp/ypstop
JUMP START
i.e., the server is installed with x86 (Solaris 10) and still we can have the OS image of
SPARC and install it to the client. The same can be done vice versa.
I BOOT PROCESS:
Custom Jumpstart:
1. Requires up-front work
2. The most efficient way to centralize and automate the operating system installation
at large enterprises
3. A way to install groups of similar systems automatically and identically.
Jumpstart:
1. Automatically install the Solaris software on SPARC based system just by inserting
the Solaris CD and powering on the system.
2. For new SPARC systems shipped from Sun Microsystems, this is the default method
of installing the operating system.
Commands:
# ./setup_install_server
Sets up an install server to provide the OS to the client during the jumpstart
installation. This command is also used to setup a boot only server when -b option is
specified.
# ./add_to_install_server
A script that copies additional packages within a product tree on the Solaris 10 software
and Solaris 10 languages CD's to the local disk on an existing install server.
#./add_install_client
A command that adds network installation information about a system to an install or
boot server's
/etc files so that the system can install over the network.
# ./rm_install_client
Removes jumpstart clients that were previously setup for network installation
#./check
Validates the information in the rules file.
3. Configuration services:
These are provided by networked configuration server and provide information that a
jumpstart client uses to partition disks and create file systems, add/remove Solaris
packages and perform other configuration task.
/etc/ethers:
1. When the jumpstart client boots, it has no IP address; so it broadcasts its Ethernet
address to the network using RARP.
2. Boot server receives this request and attempts to match the client's Ethernet address
with an entry in the local /etc/ethers file.
3. If a match is found, the client name is matched to an entry in the /etc/hosts file. In
response to the RARP request from the client, the boot server sends the IP address from
the /etc/hosts
file back to the client. The client continues the boot process using the assigned IP
address.
4. An entry for the jumpstart client must be created by editing the /etc/ethers file or by
using the add_install_client script.
/etc/bootparams:
1. Contains entries that network clients use for booting.
2. Jumpstart clients retrieve the information from this file by issuing requests to a
server running the rpc.bootparamd program.
/tftpboot:
1. When booting over the network, the jumpstart client's boot PROM makes a RARP
request, and when it receives a reply the PROM broadcasts a TFTP request to fetch the
inetboot file from any server that responds & executes it.
II CONFIGURATION:
1. Boot service
2. Installation service
3. Identification service
4. Configuration service
INSTALLATION SERVICE:
1. Create a directory with at least 5 GB of space for holding OS image.
3. # ./setup_install_server /jstart/install
IDENTIFICATION SERVICE
WTD What to do?
1. Create a dir /jstart/config [It can be any directory].
2. Create a dir in the name of jumpstart client under above created directory
/jstart/config/jclient1 [optional]
3. Create a file sysidcfg [File name should be sysidcfg].
4. Share the dir sysidcfg
network_interface=Primary
{
hostname=jclient1
netmask=255.0.0.0
protocol_ipv6=no
default_route=none [//gateway]
}
name_service=none
security_policy=none
system_locale=en_US
timezone=Asia/Calcutta
timeserver=localhost
root_password=<copy_and_paste_from_the_/etc/shadow_file>
CONFIGURATION SERVICE:
How the installation proceeds in Jumpstart clients
Provides information about
a. Installation type
b. System type
c. Disk partitions or file system
d. Cluster selection
e. Software package addition/deletion
2. Create rules file to choose the right profile for the client in the same directory
Note:
In the case of X86 for partitioning
2. # vi /jstart/config/jclient/rules
#hostname <jumpstart_client> <Pre_script> <Profile_name> <Post_script>
# any - - profilename -
hostname jclient1 - prof1 -
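The rules file references a profile named prof1, which the notes do not show; a minimal hypothetical profile (disk device and sizes are examples):

```
# /jstart/config/jclient1/prof1
install_type    initial_install
system_type     standalone
partitioning    explicit
filesys         c0t0d0s0   4096   /
filesys         c0t0d0s1   1024   swap
cluster         SUNWCreq
```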
Optional:
# vi /jstart/config/jclient1/PS1
#!/bin/sh
echo "Disabling auto power shutdown features & nfs4"
touch /a/etc/Shutdown
touch /a/etc/.NFS4inst_state.domain
:wq!
3. # cd /jstart/install/Solaris_10/Misc/jumpstart_sample
4. # cp check /jstart/config/jclient1
5. # cd /jstart/config/jclient1
6. # ./check [It will verify the rules file. If the syntax is correct it creates a rules.ok file]
BOOT SERVER
1. # vi /etc/ethers
# <mac_address> <jumpstart_client_name>
8:0:20:f9:54:50 jclient1
:wq!
2. # vi /etc/inet/hosts
<ip_address_jumpstart_client> <client_hostname>
100.0.0.1 jclient1
:wq!
3. # cd /jstart/install/Solaris_10/Tools
# ./add_install_client -c <js_server_name>:<profile_path> -p <js_server_name>:<sysidcfg_path>
<client_name> <platform_group>
Eg:
# ./add_install_client -c 100.0.0.108:/jstart/config/jclient1 -p
100.0.0.108:/jstart/config/jclient1 jclient1 sun4u
1. # cd /cdrom/cdrom0/Solaris_10/Tools
2.a # ./setup_install_server /shivan
(In the case of DVD)
5. # shareall
CONFIGURATION SERVICES:
NOTE:
Make sure that the harddisk is correctly connected to the specified location in the
client.
# cd /export/install/Solaris_10/Tools
# ./add_install_client -c <js_server_name>:<profile_path> -p <js_server_name>:
<sysidcfg_path> <client_name> <platform_group>
eg:
# ./add_install_client -c aita:/export/config/node1 -p aita:/export/config/node1 node1
sun4u
/etc/security/prof_attr
Defines profiles, lists each profile's assigned authorizations, and identifies the
associated help file.
/etc/security/exec_attr
Defines the privileged operations assigned to a profile.
/etc/security/auth_attr
Defines authorizations and their attributes.
Here sdown is the name of the logical user [name of the role]. Name of the role can be
any name.
Note:
A role can be mapped with multiple activities performed by a single profile.
Create a user named user10 and assign it access to the sdown role.
# useradd -u 4009 -g 10 -m -d /export/home/user10 -s /bin/ksh -R sdown user10
# passwd user10
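The sdown role itself is created with roleadd before users are assigned to it; a hedged end-to-end sketch (the UID, home directory and profile name are examples, not from the notes):

```
# roleadd -u 4000 -m -d /export/home/sdown -s /bin/pfksh -P "Shutdown Mgmt" sdown
# passwd sdown
# useradd -u 4009 -g 10 -m -d /export/home/user10 -s /bin/ksh -R sdown user10
# passwd user10
```

user10 then logs in normally and runs `su sdown` to assume the role before executing the profiled commands.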
COMMANDS:
15. # metahs -> To manage hot spares & hot spare pools
ADVANTAGES OF SVM:
Provides 3 major functionalities:
1. Overcome the disk size limitation by providing for joining of multiple disk slices to
form a bigger volume.
2. Fault tolerance by allowing mirroring of data from one disk to another and keeping
parity information in RAID 5.
CONCATENATION:
2. Serial in nature, i.e., sequential data operations are performed serially on the first
disk, then the second disk and so on.
3. Because it is serial in nature, new slices can be added without having to take a
backup of the entire concatenated volume.
5. No fault tolerance
STRIPING:
1. Spreading of data over multiple disk drives mainly to enhance the performance by
distributing the data.
NOTE:
We can use a concatenated/striped metadevice for any file system with the exception of
/ (root), /usr, /var and /opt or any file system accessed during a Solaris install
MIRRORING:
NOTE:
1. Records file system changes to the log device & then writes them
to the master device.
RAID 5:
2. Data redundancy
4. Data is divided into stripes & the parity is calculated from the data;
then they are stored in such a manner that parity is distributed (rotated)
2. Are temporary fixes, used until failed components are either repaired
or replaced.
HOTSPARE POOL:
SOFT PARTITION:
1. Logical partition
NOTE:
Expanding mounted file system:
1. Can expand a mounted/unmounted UFS file system with the disk suite concatenation
facilities & the `growfs' command.
2. Expansion can be performed without bringing down the system or
performing a backup
3. A mounted/unmounted file system can be expanded up to the new size of the meta
device on which the file system resides.
1. 3 system files
2. a. /etc/lvm/md.tab
b. /etc/lvm/md.cf
c. /etc/lvm/mddb.cf
3. /etc/lvm/md.tab
a. used by the metainit & metadb commands as a workspace file
b. each meta device may have a unique entry
c. used only when creating meta devices, hot spare/database
replicas
d. not automatically updated by disk suite utilities
e. may have little or no correspondence with actual meta devices, hot
spares or replicas
f. The output from this file is similar to that displayed by the
# metastat -p command
g. # metainit -a => command updates this file
4. /etc/lvm/md.cf
a. automatically updated whenever the configuration is changed
b. basically a disaster recovery file and should NEVER BE EDITED
NOTE:
c. the md.cf file DOES NOT get updated when hot sparing occurs
d. should never be used blindly after a disaster. Be sure to
examine the file first.
5. /etc/lvm/mddb.cf
a. created whenever the `metadb` command is run and is used by
`metainit` to find locations of the meta device state data
base
b. NEVER EDIT this file
c. each meta device state database replica has a unique entry in
this file
NOTE:
d. an entry is made in the mddb.cf file that tells the location
of all the state databases
e. identical information is recorded in the `/etc/system` file
NOTE:
/kernel/drv/md.conf
3. if the field is modified, perform reconfiguration boot to build meta devices (ok boot -r)
4. if "nmd" is lowered, any metadevice existing between the old number and the
new number MAY NOT PERSIST
META DEVICE:
4. standard meta device name begins with "d" & is followed by a number
for eg: d10, d100
a. by default 128 unique meta devices in the range between
0 and 127
b. additional meta devices can be added
1. provides the non-volatile storage necessary to keep track of configuration & status
information for all meta devices and meta mirrors
3. when the state database is updated each replica is modified one at a time
4. Read & writes the files from and to the meta device
CREATING A MIRROR:
SUB_MIRROR:
a. Is made of one or more striped or concatenated meta devices
b. Each meta device within a meta mirror is called a SUB MIRROR
4. Any file system, including /, swap and /usr, or any application such
as a database, can use a mirror.
7. When mirroring an existing file system/data, be sure that the existing data is contained
on the submirror initially defined with the meta mirror. When the second sub-mirror is
subsequently attached, data from the initial submirror is copied on to the attached
sub-mirror.
3. Create a mirror metadevice and associate with one meta device (adding first sub-
mirror)
4. Attach another metadevice with mirror meta device (adding second sub-mirror)
5. # metastat | grep %
To check the sync status
6. # newfs /dev/md/rdsk/d30
# mkdir /mirror
# mount /dev/md/dsk/d30 /mirror
# cd /mirror
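Steps 3 and 4 above might look like this in full (slice and sub-mirror names are assumed for illustration, not taken from these notes):

```
# metainit d10 1 1 c0t0d0s5 => first sub-mirror (one-slice concat)
# metainit d20 1 1 c0t1d0s5 => second sub-mirror
# metainit d30 -m d10 => step 3: create mirror d30 with first sub-mirror
# metattach d30 d20 => step 4: attach second sub-mirror; resync starts
```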
NOTE:
3. The slice has to be of the same size & geometry, if not greater than the source size.
WTD:
1. Detach the sub-mirror from the mirror (unmounted)
3. Mount the individual slice, the same data will be available in both physical
components.
HTD:
1. # metadetach <mirror> <sub-mirror>
# metadetach d30 d20
3. # metaclear -r d30 => removes both the mirror d30 & the sub-mirror d10
2. Suppose we remove the disk which contains the 2nd sub-mirror; we can still access
the data.
NOTE:
The size & geometry have to be the same as, or greater than, the
source disk.
NOTE:
1. Don't format or create the file system
SOFT PARTITION:
1. Dividing one logical component into many soft partitions. It can be laid out over a
physical disk/mirror/RAID-5.
# metainit d62 -p d5 1g
# metainit d63 -p d5 1g
1. The hotspare facility included with DiskSuite allows automatic replacement of failed
submirror/RAID-5 components, provided spare components are available & reserved.
3. A hotspare is a component that is running (but not being used) which can be
substituted for a broken component in a sub-mirror of a two or three way metamirror or
RAID device.
NOTE:
4. Failed components in a one-way meta mirror cannot be replaced by a hotspare.
1. 3 states
a. available
b. in-use
c. broken
a. Available:
Available hotspares are running and ready to accept data, but are not currently
being written to or read from.
b. In-use:
In-use hot spares are currently being written to and read from.
c. Broken:
Broken hotspares are out of service.
A hot spare is placed in the broken state when an I/O error occurs
3. Once the hot spare pools are defined & associated with a sub-mirror, the hot spares
are `available` for use. If a component failure occurs, DiskSuite searches through the list
of hot spares in the assigned pool and selects the first "available" component that is
equal or greater in disk capacity.
4. If a hot spare of adequate size is found, the hot spare's state changes to "in-use" and
a resync operation is automatically performed. The resync operation brings the hot
spare into sync with the other sub-mirrors.
5. If a component of adequate size is not found, the sub-mirror that failed is considered
`errored` and that portion of the sub-mirror no longer replicates the data.
1. Associating hot spares of the wrong size with a sub-mirror. This condition occurs
when hot spare pools are defined and associated with a sub-mirror & none of the hot
spares in the hot spare pool is equal to or greater than the smallest component in the
sub-mirror.
2. Having all the hot spares within the hot spare pool in use.
In this case immediate action is required.
a. 2 possible solutions
1. The first is to add additional hot spares
2. The second is to repair some of the components that have been hot-spare replaced
NOTE:
If all the hot spares are in use and a sub-mirror fails due to errors, that portion of the
mirror will no longer be replicated.
1. # metahs
-> adding hot spares to hot spare pools
-> deleting hot spares from the hot spare pool
-> replacing hot spares in hot spare pools
-> enabling hot spare
-> checking the status of the hot spare
(or)
# metahs -a hsp000 c0t1d0s6 c0t2d0s6 c0t4d0s7
-a => to add
-i => to obtain the information
1. Hot spares can be deleted from any or all of the hot spare pools to which they have been
associated.
2. When a hot spare is deleted from a hot spare pool, the positions of the remaining hot
spares change to reflect the new order. For eg, if the second of three hot spares in a
hot spare pool is deleted, the 3rd hot spare moves to the second position.
3. # metahs -d hsp000 c0t11d0s6
-> removing a slice from the hot spare pool
-d -> to delete
# metahs -d <hsp-name>
-> to delete the hot spare pool.
2. The order of hot spares in the hot spare pool is not changed when the replacement
occurs.
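A replacement might be done with `metahs -r` (pool and slice names assumed for illustration):

```
# metahs -r hsp000 c0t1d0s6 c0t3d0s6 => replace c0t1d0s6 with c0t3d0s6 in hsp000
```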
ASSOCIATING THE HOT SPARE POOL WITH SUB-MIRROR / RAID-5 META DEVICE:
NOTE:
Where d101 and d102 submirrors of d103 mirror
where
-h => specifies the hot spare pool to be used by a metadevice
where,
none - specifies that the meta device is disassociated from the hot spare pool associated
to it
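The association itself is done with `metaparam -h` (a sketch; the pool and sub-mirror names below are assumed):

```
# metaparam -h hsp000 d101 => associate pool hsp000 with sub-mirror d101
# metaparam -h hsp000 d102
# metaparam -h none d101 => disassociate the pool again
```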
# metaclear d100
# metaclear d12
# metaclear -r d15
# metahs -i
NOTE:
Suppose the failed disk is going to be replaced to free up a hot spare.
This option is used when a disk drive has had its device ID changed during a firmware
upgrade or due to changing the controller of a storage array.
-v => execution in verbose mode. Has no effect when used with the `-u` option. Verbose is
the default.
# metadevadm -v -u c0t11d0s4
RAID-5:
UFS LOGGING:
1. Recording the changes to the file system to the logging device & then updating the same
to the master device
2. metatrans device = master device + logging device
NOTE:
Size of the logging device should not be more than 64 MB.
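A trans metadevice for d13 might be built like this (master and log slice names assumed):

```
# metainit d13 -t c0t1d0s4 c0t2d0s5 => master c0t1d0s4 + log c0t2d0s5
```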
# newfs /dev/md/rdsk/d13
2. Aborting a `growfs` command may cause a temporary loss of free space. The space
can be recovered using the `fsck` command after the file system is unmounted using
`umount`.
3. 'growfs' command non-destructively expands a file system upto the size of the file
system's physical device or meta device.
4. 'growfs' write-locks the file system when expanding a mounted file system. Access
times are not kept while the file system is write-locked. The 'lockfs' command can be
used to check the file system lock status and unlock the file system in the unlikely event
that 'growfs' aborts without unlocking the file system.
5. We can perform,
a. expanding a non-meta device component
b. expanding a mounted file system
c. expanding a mounted file system to an existing meta mirror
d. expanding an unmounted file system
e. expanding a mounted file system using stripes
f. 'growfs'
1. attach the disk space
2. grow the disk space
# newfs /dev/rdsk/c0t10d0s3
# mkdir /expand
# mount /dev/dsk/c0t10d0s3 /expand
# metainit -f d100 1 1 c0t10d0s3 => don't format
# umount /expand
# mount /dev/md/dsk/d100 /expand
# metattach d100 c0t10d0s6 => new slice 6 is attached to d100
# growfs -M /expand /dev/md/rdsk/d100 => the file system on the raw device is expanded
GROWING A MIRROR:
1. Attach each individual component to each sub-mirror.
2. Grow the mirror
NOTE:
The newly attached slice will hold only data. It won't be used for storing parity
information.
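The two steps above might look like this (names assumed; mirror d30 with sub-mirrors d10/d20, mounted on /mirror):

```
# metattach d10 c0t0d0s6 => 1. attach a slice to the first sub-mirror
# metattach d20 c0t1d0s6 => attach a slice to the second sub-mirror
# growfs -M /mirror /dev/md/rdsk/d30 => 2. grow the file system
```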
OPTIONAL:
The following file can also be edited to create meta state database / replica and meta
devices too.
File:
md.tab
Edit:
:wq!
# metadb -a -f mddb01
In the above eg, 3 state database/replicas are stored on each of the 3 components.
Once the above entry is made to the md.tab file, the metadb command must be run
with both the -a & -f options.
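The md.tab entry being described might look like this (the slice names are assumed for illustration):

```
mddb01 -c 3 c0t1d0s7 c0t2d0s7 c0t3d0s7
```

After saving the file, # metadb -a -f mddb01 activates the replicas.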
RAID-0
Concatenation with striping:
:wq!
# metainit -n d10
NOTE:
# metainit -n -> to verify that the information in the md.tab file is accurate.
The -n option enables us to check our entry
# metainit d10
If the configuration is accurate, run metainit to begin using the striped meta device
/dev/md/dsk/d50 -m /dev/md/dsk/d1
/dev/md/dsk/d1 2 1 c0t0d0s5 1 c0t1d0s6
Or
/dev/md/dsk/d1 2 1 /dev/dsk/c0t0d0s5
1 /dev/dsk/c0t0d0s6
# metainit -a
-a -> to activate
# metattach d50 d2
RAID-5
where
-i -> interlace size, which is optional
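A RAID-5 metadevice command might look like this (slice names and interlace value assumed):

```
# metainit d45 -r c1t1d0s2 c1t2d0s2 c1t3d0s2 -i 32k => RAID-5 over 3 slices, 32k interlace
```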
HOTSPARE:
d10 -m d20
d20 1 1 c1t0d0s3 -h hsp000
d30 1 1 c2t2d0s3 -h hsp001
d40 1 1 c2t5d0s3 -h hsp002
Hard disks are formatted and information is stored using 2 methods:
1. Physical storage layout
2. Logical storage layout
Vxvm uses both the physical objects and virtual objects to handle the storage
management.
PHYSICAL DISK / PHYSICAL OBJECT: are hardware with block and raw OS
device interfaces that are used to store the data.
VIRTUAL OBJECTS:
3. When one or more physical disks are brought under the control of veritas, it
creates virtual objects called VOLUME, on those physical disks.
4. Volumes and their physical components are called virtual objects or vxvm
objects.
NOTE:
Vxvm control is accomplished only if vxvm takes control of the physical disk and the
disk is not under the control of another storage manager such as SVM.
Before the disk can be brought under vxvm control, the disk must be accessible
through the operating system device interface.
Vxvm is layered on top of the OS interface services and is dependent upon how the OS
accesses physical disks.
5. OS disk devices
6. Device handles
9. Made up of space from one or more physical disks on which the data is
physically stored.
11. All users and applications access volumes as contiguous address space using
special device files in a manner similar to accessing a disk partition.
12. Volumes have block & character device nodes in the /dev tree.
For eg: /dev/vx/( r ) dsk/..
Figure1:
When we place a disk under vxvm control, a CDS disk layout is used, which
ensures that the disk is accessible on different platforms, regardless of the platform on
which the disk was initialized.
Figure 2:
2. Therefore, when placing a boot disk under Volume Manager control, we must
use a SLICED DISK layout.
LOGICAL OBJECTS:
14. vmdisk
15. disk group
16. sub-disk
17. plex
18. volume
PHYSICAL OBJECTS:
19. Controllers
20. Disks
VMDISK:
21. When a disk is brought under the control of vxvm, that disk is called a VMDISK.
22. Can bring the disk under vxvm by 2 methods.
a. Initialization:
1. Initialize the disk as a vmdisk
2. The entire data on the disk will be overwritten, ie., the data in
the disk will be destroyed.
b. Encapsulation:
1. When a disk is brought under the control of vxvm with
encapsulation, all the data (partition) in the disk will be
preserved.
DISK GROUP:
1. Is a collection of Volume Manager disks that have been put together into a logical
grouping.
2. Grouping of disks is for management purposes, such as to hold the data for a specific
application or set of applications.
3. Volume Manager objects CANNOT span disk groups. For eg: volumes, SUB-DISKS,
PLEXES and disks must be derived from the same disk group.
Can create additional disk groups as necessary.
4. Disk groups ease the use of devices in a high availability environment, because a disk
group and its components can be moved as a unit from one host machine to another.
SUB-DISKS:
PLEX:
VOLUME:
1. Is a collection of plexes
2. Is a virtual storage device that is used by applications in a manner similar to a
physical disk. Due to its virtual nature, a volume is not restricted by the physical
size constraints that apply to a physical disk.
3. A volume can be as large as the total sum of available unreserved free physical disk
space.
4. Minimum of plex in a volume is 1.
Maximum of plexes in a volume is 32.
5. The size of the volume is the size of the smallest plex.
6. The maximum size of a volume is the size of the disk group.
FIGURE 3:
DAEMONS:
1. vxconfigd - main configuration daemon of vxvm
responsible for maintaining the vmdisk & disk group
information
PACKAGE NAME:
PATH=$PATH:/opt/VRTS/bin:/etc/vx/bin
MANPATH=$MANPATH:/opt/VRTS/man
export PATH MANPATH
:wq!
# . /etc/profile
# echo $PATH
# echo $MANPATH
NOTE:
Most commands are located in
1. /etc/vx/bin
2. /usr/sbin
3. /usr/lib/vxvm/bin
NOTE:
While adding the packages manually please ensure the following:
1. ensure the packages are installed in the correct order
2. always install VRTSvlic first
3. always install the VRTSvxvm package before other vxvm packages.
4. documentation and manual pages are optional
5. After installing the package, using OS specific commands, run vxinstall to configure
vxvm for the first time.
# pkginfo -l VRTSvxvm
vxinstall:
1. is an interactive program that guides through the initial vxvm configuration
2. the main steps in vxinstall process are
a. entering the license key
b. select the naming method
1. enclosure based naming
2. traditional naming
3. if desired, set up a system-wide default disk group
NOTE:
vxdiskadm only provides access to certain disk and disk group management functions.
CLI:
COMMANDS:
1. # vxdiskadm - used to add or initialize one or more disks, encapsulate one or more
disks, remove a disk, remove a disk for replacement, replace a failed or removed disk,
move volumes from a disk, enable access to (import) a disk group, remove access
to (deport) a disk group, enable (online) a disk device, disable (offline) a disk device,
mark a disk as a spare for a disk group, turn off the spare flag on a disk, and list disk
information.
2. # vxassist - Utility used to create volumes, add mirrors & logs to existing volumes,
extend & shrink existing volumes; provides migration of data from a specified set
of disks & facilities for the online backup of existing volumes.
SYNTAX:
OPTIONS:
-g => to specify the disk group
-b -> background option
-d => file containing defaults for vxassist if not specified,
/etc/default/vxassist is used.
KEYWORD:
make, mirror, growto, growby, shrinkby, shrinkto, snapshot, snapstart, snapwait
ATTRIBUTES:
specifies volume layout, disk controllers to include or exclude, etc.
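Some typical vxassist invocations, as a sketch (the disk group and volume names are assumed):

```
# vxassist -g datadg make datavol 1g => create a 1 GB volume
# vxassist -g datadg mirror datavol => add a mirror plex
# vxassist -g datadg growby datavol 500m => grow the volume by 500 MB
# vxassist -g datadg maxsize layout=stripe => largest stripe volume that would fit
```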
10. # vxprint -> displays the detailed information on existing vxvm objects
NOTE:
VEA is automatically installed when we run the vxvm installation scripts. We can also
install VEA by manually adding the packages.
VOLUME PATH:
Block device : /dev/vx/dsk/<dg_name>/<volume_name>
Raw device : /dev/vx/rdsk/<dg_name>/<volume_name>
1. Located in /var/adm/vx/veacmdlog
2. displays a history of tasks performed in the current session and in previous sessions.
3. This file is created after the first execution of a task in VEA.
MANAGING VEA:
1. The VEA server program is in
/opt/VRTSos/bin/vxsvc
DISK GROUP:
2. To initialize:
Syn: # vxdisksetup -i <device_tag> [attribute]
eg: # vxdisksetup -i Disk_1 { enclosure based naming }
# vxdisksetup -i c2t0d0 { traditional based naming }
NOTE:
In a shared access environment, when displaying disks, we should frequently run #
vxdctl enable to rescan for disk changes.
EVACUATING A DISK:
Before removing the disk, we need to evacuate data from the disk to another disk in
the disk group.
1. # vxdiskadm
" Move volumes from a disk "
NOTE: Remove the disk from the disk group & then uninitialize it.
RENAMING A DISK:
Syn: # vxedit -g <disk_group> rename <old_name> <new_name>
eg: # vxedit -g datadg rename datadg01 datadg03
NOTE:
1. New disk name must be unique within the disk group.
2. Renaming a disk does not automatically rename sub-disks on that disk.
4. # vxdiskadm
" Remove access to (deport) a disk group "
NOTE:
We cannot upgrade to a specific version using VEA. We can only upgrade to the current
version. To upgrade to specific version, we have to use
TO CREATE A VXFS:
# newfs /dev/vx/rdsk/datadg/shivavol
to create the file system
# mkfs -F vxfs /dev/vx/rdsk/datadg/shivavol
NOTE:
If we edit the file
/etc/default/fs
edit:
local = ufs // change it to vxfs
then there is no need to specify the file system while creating the file system.
To MOUNT:
# mount -F vxfs /dev/vx/dsk/datadg/shivavol /check
# fstyp /dev/vx/dsk/datadg/shivavol
used to know the file system type
# prtvtoc /dev/vx/dsk/datadg/shivavol
Note:
# prtvtoc -pv | grep bootpath
shows the physical device path of the booted OS.
2. # vxtask list
displays the % progress, task id
1. # vxdisksetup -i c0t1d0
to initialize the physical disk to vmdisk
2. # vxdisk list
to view the vmdisk lists
3. # vxinstall
to install license key
5. # vxdisk list
6. # vxdg -g dg1 adddisk d3=c1t2d0
adding d3 to the existing disk group dg1
Note:
1. As for vxvm commands the default size unit is "s" - representing the sector
2. Add suffix such as
a. k - kilo byte
b. m - mega byte
c. g - giga byte
NOTE:
1. # vxlicrep | more
provides the information about the license
2. # vxdisk list
if invalid - the disk is not under the control of veritas
ONLINE - can recognize the availability of the disk
3. # vxdiskadm
option 21 : to identify the new disk connected to the box
Note: This operation requires the vxvm configuration daemon, 'vxconfigd' to be stopped
and restarted
If we choose enclosure based naming
1. disks are displayed in 3 categories
2. Enclosure:
a. Supported by RAID disk arrays are displayed in
enclosurename_# -> format
b. Disks:
supported JBOD (Just a Bunch Of Disks) disk arrays are displayed
with the prefix
Disk_
c. Others:
disks that do not return a path-independent identifier to vxvm are
displayed in the traditional OS based format
a. bootdg:
1. if the boot disk is brought under vxvm control, volume manager
assigns bootdg as an alias for the name of the disk group that
contains the volume that are used to boot the system.
2. # vxdg bootdg
# vxdg defaultdg
to display what is set as bootdg or defaultdg
STRIPE VOLUME:
1. # vxprint -st
9. # vxprint -vt
MIRRORING:
6. # mkfs /dev/vx/rdsk/dg1/mirvol
7. # mkdir /mnt/veritas_mirror
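A fuller sketch of this mirrored-volume sequence (the size and mount point are assumed):

```
# vxassist -g dg1 make mirvol 100m layout=mirror
# mkfs -F vxfs /dev/vx/rdsk/dg1/mirvol
# mkdir /mnt/veritas_mirror
# mount -F vxfs /dev/vx/dsk/dg1/mirvol /mnt/veritas_mirror
```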
RAID-5:
# cp /etc/system /system.orig
# cp /etc/vfstab /vfstab.orig
# vxdiskadm
Select option : 2 " Add one or more disks for encapsulation "
Select the system disk: c0t0d0
# vxdiskadm
# vxprint
# vxprint
# vxprint
# vxprint
# vxprint
# init 6
OK boot disk1
Now, insert the new disk in the target 0
# vxdiskadm
Select the option:5
(if not detected)
Select the option:2, then again option5
RESIZING:
1. # vxassist
2. # vxremake
Using the 'vxassist' command will increase or decrease the volume size, not the file system size.
# vxdisk list
# vxdisk list
# mkdir /mvol3
# df -h
# df -h
# cd /usr/lib/fs/vxfs
# vxdisk list
# vxdisk list
# vxdisk list
# vxedit -g dg set nohotuse=off d1
to remove the hotspare
SNAPSHOT
3. # mkdir /cvol
8. # mkdir /snapvol
10. # df -h
1. # vxdg deport dg
2. # vxdisk list
4. # vxdg import dg
5. # vxdisk list
CREATING A VOLUME:
Before creating a volume, initialize the disks & assign them to disk groups
2. A mirrored volume requires a minimum of one hard disk for each plex. A mirror cannot
be on the same disk that the other plexes are using.
1. To create a volume:
CONCATENATED VOLUME:
1. To create a concatenated volume
Figure:
STRIPED VOLUME:
Example:
Figure:
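A striped volume might be created like this (the names and attribute values are assumed):

```
# vxassist -g datadg make stripevol 1g layout=stripe ncol=3 stripeunit=64k
```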
RAID-5:
a. default ncol=3
c. a log is created by default. Therefore, we need at least one more disk than the
number of columns.
Example:
Figure:
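A RAID-5 volume with the defaults described above might be created as follows (names assumed; with ncol=3 and the default log plex, at least 4 disks are needed):

```
# vxassist -g datadg make r5vol 2g layout=raid5 ncol=3
```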
MIRRORED VOLUME:
layout=mirror [mirror=number]
Example:
Concatenated mirror
specifying 3 mirrors
b. nlog = n creates nlogs & is used when we want more than one log plex to be
created
Ex:
Ex:
Figure:
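A mirrored volume using the mirror and nlog attributes described above might look like (names assumed):

```
# vxassist -g datadg make mirvol 1g layout=mirror nmirror=3 nlog=2
```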
REMOVING A VOLUME:
1. When a volume is removed, the space used by the volume is freed and can be
used elsewhere
(We should only remove a volume if we are sure that we no longer need its data.)
NOTE:
Edit the file system table file (/etc/vfstab) in order to remove the entry for the file system and avoid errors at boot
time.
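Removal might be done like this (names assumed; unmount and clean up /etc/vfstab first):

```
# umount /data
# vxassist -g datadg remove volume datavol
```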
ADMINISTERING MIRRORS:
2. By default, a mirror is created with the same plex layout as the original volume.
NOTE:
1. Cannot add a mirror to a disk that is already being used by the volume.
Figure:
[layout=layout-type] [disk-name]
REMOVING A MIRROR:
To remove the plex that contains a sub-disk from the disk datadg02.
NOTE: To remove the plex that contains a sub-disk from a specific disk, [!] is used.
We can also use the vxplex and vxedit commands in combination to remove the
mirror.
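As a sketch (the volume, disk, and plex names are assumed):

```
# vxassist -g datadg remove mirror datavol !datadg02 => remove the plex on datadg02
(or)
# vxplex -g datadg dis datavol-02
# vxedit -g datadg -rf rm datavol-02
```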
2. If the system fails, only the changed regions of volume must be recovered
To mount the file system automatically at boot time, edit the OS-specific file system
table file to add an entry for the file system.
5. fsck pass 1
7. Mount options - -
REMOVING A VOLUME:
To resize a volume,
Shrinking a volume enables us to use the space elsewhere. VxVM returns the space to the free
space pool.
RESIZING A VOLUME:
1. Can expand and shrink a mounted
veritas file system, but an unmounted veritas file system cannot be changed.
for ex:
Original volume size : 10 mb
Figure:
For example:
Original volume size: 20mb
example:
Expand the file system/datavol from 512000 sectors to 1024000 sectors
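The same expansion can be done with vxresize, which grows the volume and the vxfs file system together (disk group and volume names assumed):

```
# /etc/vx/bin/vxresize -g datadg datavol 1024000s
```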
1. If we resize a LUN in the hardware, we should resize the vxvm disk corresponding to
that LUN.
2. Disk headers and other vxvm structures are updated to reflect the new size.
3. Intended for devices that are part of an imported disk group
What is encapsulation?
1. A process that converts existing partitions on a specified disk to volumes. If any
partitions contain file systems, /etc/vfstab entries are modified so that the file systems
are mounted on volumes instead.
(or)
NOTE:
1. Encapsulation - preserves the data on the disk
2. Initialization - Destroys the data on the disk
Requirements:
1. One free partition for the private & public regions
2. S2 slice that represents the full disk
3. 2048 sectors free at beginning or end of disk for private region.
Figure:
What is rootability?
1. Rootability is the process of encapsulating the root file system, swap device and other
file systems on the boot disk under vxvm control.
2. Volume manager converts existing partitions of the boot disk into volume manager
volumes.
3. The system can then mount the standard boot disk file system from volumes instead
of disk partitions.
4. When encapsulating the boot disk, the private region can be created from the swap
area, which reduces the swap area by the size of the private region.
The private region is created at the beginning of the swap area, and the swap partition
begins one cylinder from its original location.
5. Requirements are the same as for data disk encapsulation, but the private region
is created from the swap space.
is created at the swap space.
Figure:
Why encapsulate the boot disk?
Should encapsulate the boot disk only if we plan to mirror the boot disk.
For swap,
1. The first swap volume must be contiguous and therefore cannot use striped or layered
layouts.
2. Other swap volumes can be non-contiguous and can use any layout. However, there
is an implied 2 GB limit of usable swap space per device for a 32-bit operating system.
This enables Volume Manager to take advantage of boot disk aliases to identify the
mirror of the boot disk if a replacement is required.
If this variable is assigned as false, we must determine which disks are bootable.
Note: A sliced disk format is required. The boot disk cannot be a CDS disk
3. The root mirror places the private region at the beginning of the disk. The remaining
partitions are placed after the private region.
1. To mirror the root volume only:
# vxrootmir alternate-disk
2. To mirror all other un-mirrored, concatenated volumes on the boot disk to the
alternate disk:
# vxmirror boot-disk alternate-disk
To boot the system using an alternate boot disk after failure of the primary boot disk,
1. Set the eeprom variable
OK setenv use-nvramrc? true
OK reset
This above variable must be set to true to enable the use of alternate boot disks.
Note:
Do not use vxunroot if we are only upgrading vxvm packages, including the VEA
package.
1. Ensure that the boot disk volumes have only one plex each.
# vxprint -hvt rootvol swapvol usr var
2. If boot disk volumes have more than one plex each, remove the unnecessary plexes.
# vxplex -o rm dis <plex-name>
HOT RELOCATION
The system automatically reacts to I/O failures on redundant vxvm objects and restores
redundancy to those objects by relocating the affected sub-disks.
NOTE:
Sub-disks are relocated to disks designated as spare disks or to free space in the disk
group.
Figure:1
Figure:2
1. vxrelocd detects the disk failure
2. root (admin) is notified
3. sub-disks are relocated to a spare
4. volume recovery is attempted.
1. Sun RSC is a server management tool that allows us to monitor & control our
server over modem lines and over a network.
2. Provides remote system administration for geographically distributed or
physically inaccessible systems.
RSC access:
From a workstation running Solaris, Win 9x or Win NT.
NOTE:
The server can boot and operate normally when the RSC software is not enabled, and
Sun console features continue to be available on the standard RS232 ports.
RSC features:
1. Remote system monitoring, error monitoring, including output from power-on
self test and OBP diagnostics.
2. Remote server reboot, power on & power off on demand.
3. Ability to monitor the CPU temperature and fan sensors without being near the
managed server, even when the server is offline.
4. To run diagnostic tests from a remote console.
5. Remote event notification of server problems
6. Detailed log of RSC events
7. Remote console functions on both the serial & Ethernet ports
Monitoring tools:
Complemented by RSC
1. Solstice SYMON
a. Main & popular tools
b. Main tool for observing system operation behavior and performance
while the server operating system is up and running.
2. SUN VTS
3. Kadb kernel debugger
4. OBP
5. Open Boot Diagnostics OBDiag
RSC use:
1. After installing and configuring SUNRSC s/w on the server & client, we can use
OBP commands & set OBP variables that redirect the console output to RSC
2. Part of RSC configuration defines & enables alert mechanisms.
a. Alerts provide remote notification of system problems.
b. It can be sent to pagers, mails, to any clients that are currently
logged into RSC.
3. RSC generate alert messages whenever the following error occurs
a. server system resets
b. server temperature crosses the lower & higher fault
c. server redundant power supply fails
d. power outage occurs at the server site, if an un-interruptible power
supply is in use and it is configured to send an alert to RSC.
e. Server undergoes a h/w watchdog reset.
f. Detects 5 unsuccessful RSC login attempts within 5 min
4. Each alert message includes the server name & other important details
5. RSC controls whether an alert is sent to mail/pager.
6. It always sends alerts to any client currently logged into RSC accounts for that
server.
7. If server is running & if tools available
- SUN VTS
- Solstice SyMON
If not running tools may be available through X windows
8. If server is not running & if no tools
a. RSC feature to delay the server
- Show environmental information
- Put the server in debug mode
- Control server firmware behavior
- Turn server power off & then on if the server is hung
View logs
- Displays detailed log of RSC errors, events & RSC command
history
- Displays & reset server console logs
RSC configuration:
Can control RSC configuration settings for
- Alerts
- Ethernet ports
- Serial ports
- RSC date & time
- RSC password
- RSC user account
1. supports up to 4 password-protected user accounts for each
managed server
2. each with customizable access rights
User-permissions:
-a administration permission, authorized to change the state of RSC
configuration variables
User interface:
- Graphical
Runs using SUN java (RSC- SUNWrscj)
- Command line
Using standard telnet to the RSC Ethernet port
To the RSC serial port using PPP (Point-to-point protocol)
Note:
- RSC always sends alert messages to any users that are logged
into RSC accounts
- rscadm utility / RSC interfaces to configure RSC after
installation
Note:
- RSC java application is installed on a Solaris client machine, it
resides in the directory /opt/rsc by default.
- To run the RSC GUI Java application at the client, it must have the Java
Development Kit for Solaris version 1.1.6
Defining Clustering:
1. Clustering is a general terminology that describes a group of two or more
separate servers.
2. A cluster is a collection of 2 or more systems that work together as a single,
continuously available system to provide applications, system resources and data
to users.
HA High Availability:
1. Clusters are generally marketed as the only way to provide high availability for
the applications that run on them.
2. HA can be defined as the minimization of downtime rather than the complete
elimination of downtime.
HA standards:
Usually phrased with wording such as "Provides 5 nines availability". This means
99.999% uptime for the application, or about 5 min of downtime per year. One clean
server reboot often already exceeds that amount of downtime.
Scalability:
1. Clusters also provide an integrated h/w and s/w environment for scalability.
2. Scalability is defined as the ability to increase application performance by
supporting multiple instances of applications on different nodes in the cluster.
Terminology:
1. Cluster-node:
a. Is a system that runs both Solaris 10 OS s/w and Sun Cluster s/w.
b. Every node in the cluster is aware when another node joins or leaves the
cluster.
2. Cluster interconnect:
a. This connection is established between all cluster nodes and is solely
used by cluster nodes for the private and data service communications.
b. This communication path is also known as private n/w.
c. There are 2 variations of interconnect
i. Point-to-Point
ii. Junction based (In this junction based interconnect, the
junction must be switches and not hubs)
3. CCR Cluster Configuration Repository:
a. It's a private, cluster-wide, distributed database for storing information
about the configuration and state of the cluster.
b. CCR contains the following information:
i. Cluster & node name
ii. Cluster transport configuration
iii. The names of SVM disk set or Vxvm disk group
iv. A list of nodes that can master each disk group or disk set
v. Operations parameter values for data services
vi. Paths to data services call back methods
vii. Current cluster status
c. CCR is accessed when error/recovery situations occur or when there has been
general cluster status changes, such as node leaving or joining the cluster.
4. Local devices:
a. These devices are accessible only on a node that is running the service
and has a physical connection to the cluster. They are not highly
available devices.
5. Global device:
a. These devices are highly available to any node in a cluster. Suppose a
node fails while providing access to a global device; the Sun Cluster s/w
switches over to another path to the device and re-directs the access to
that path.
b. This access is known as global device access.
c. Provides simultaneous access to the raw (character) device associated
with storage devices from nodes, regardless of where the storage is
physically attached.
d. It's important to note that DIDs themselves are just a global naming
scheme and not a global access scheme.
9. Resource:
a. In the context of cluster, the word resource refers to any element above
the layer of the cluster frame work which can be turned on or off and can
be monitored in the cluster.
b. Is an instance of a resource type that is defined cluster-wide.
NOTE:
Data services utilize several types of resources. Application & n/w resources form a
basic unit that is managed by the RGM.
NOTE:
1. There must be a majority (more than 50% of all possible
votes present) to form a cluster.
2. A single quorum device can be automatically configured
by scinstall for a 2-node cluster only.
3. All other quorum devices are manually configured after
the Sun Cluster s/w installation is complete.
4. (n/2)+1 quorum votes are required. (Similar to replicas)
e. Quorum device rules:
1. Must be available to both nodes in 2-node cluster
2. Information is maintained globally in the CCR database
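The majority rule above can be checked with a quick shell calculation (the vote counts below are assumed for illustration):

```shell
# Majority = floor(votes/2) + 1; a cluster forms only with a majority present.
possible_votes=4                        # e.g. 3 node votes + 1 quorum device vote
majority=$(( possible_votes / 2 + 1 ))
echo "$majority"                        # → 3
```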