Advanced Administration
Sun Microsystems, Inc. has intellectual property rights relating to technology embodied in the product that is described in this document. In particular, and without
limitation, these intellectual property rights may include one or more U.S. patents or pending patent applications in the U.S. and in other countries.
U.S. Government Rights – Commercial software. Government users are subject to the Sun Microsystems, Inc. standard license agreement and applicable provisions
of the FAR and its supplements.
This distribution may include materials developed by third parties.
Parts of the product may be derived from Berkeley BSD systems, licensed from the University of California. UNIX is a registered trademark in the U.S. and other
countries, exclusively licensed through X/Open Company, Ltd.
Sun, Sun Microsystems, the Sun logo, the Solaris logo, the Java Coffee Cup logo, docs.sun.com, Java, ZFS, and Solaris are trademarks or registered trademarks of Sun
Microsystems, Inc. or its subsidiaries in the U.S. and other countries. All SPARC trademarks are used under license and are trademarks or registered trademarks of
SPARC International, Inc. in the U.S. and other countries. Products bearing SPARC trademarks are based upon an architecture developed by Sun Microsystems, Inc.
Adobe is a registered trademark of Adobe Systems, Incorporated. PostScript is a trademark or registered trademark of Adobe Systems, Incorporated, which may be
registered in certain jurisdictions.
The OPEN LOOK and Sun™ Graphical User Interface was developed by Sun Microsystems, Inc. for its users and licensees. Sun acknowledges the pioneering efforts
of Xerox in researching and developing the concept of visual or graphical user interfaces for the computer industry. Sun holds a non-exclusive license from Xerox to
the Xerox Graphical User Interface, which license also covers Sun's licensees who implement OPEN LOOK GUIs and otherwise comply with Sun's written license
agreements.
Products covered by and information contained in this publication are controlled by U.S. Export Control laws and may be subject to the export or import laws in
other countries. Nuclear, missile, chemical or biological weapons or nuclear maritime end uses or end users, whether direct or indirect, are strictly prohibited. Export
or reexport to countries subject to U.S. embargo or to entities identified on U.S. export exclusion lists, including, but not limited to, the denied persons and specially
designated nationals lists is strictly prohibited.
DOCUMENTATION IS PROVIDED “AS IS” AND ALL EXPRESS OR IMPLIED CONDITIONS, REPRESENTATIONS AND WARRANTIES, INCLUDING ANY
IMPLIED WARRANTY OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE OR NON-INFRINGEMENT, ARE DISCLAIMED, EXCEPT TO
THE EXTENT THAT SUCH DISCLAIMERS ARE HELD TO BE LEGALLY INVALID.
Contents
Preface ...................................................................................................................................................15
3 Managing Serial Ports With the Service Access Facility (Tasks) ...................................................35
Managing Serial Ports (Task Map) .................................................................................................... 36
Using the Service Access Facility ....................................................................................................... 36
Overall SAF Administration (sacadm) .............................................................................................. 37
Service Access Controller (SAC Program) ................................................................................ 38
SAC Initialization Process ........................................................................................................... 38
Port Monitor Service Administration (pmadm) ................................................................................. 38
ttymon Port Monitor ................................................................................................................... 39
Port Initialization Process ........................................................................................................... 39
Bidirectional Service .................................................................................................................... 40
TTY Monitor and Network Listener Port Monitors ....................................................................... 40
TTY Port Monitor (ttymon) ....................................................................................................... 40
ttymon and the Console Port ...................................................................................................... 40
ttymon-Specific Administrative Command (ttyadm) ............................................................. 41
Network Listener Service (listen) ............................................................................................ 41
Special listen-Specific Administrative Command (nlsadmin) ........................................... 42
Administering ttymon Port Monitors ............................................................................................... 42
▼ How to Set the ttymon Console Terminal Type ....................................................................... 42
▼ How to Set the Baud Rate Speed on the ttymon Console Terminal ....................................... 43
▼ How to Add a ttymon Port Monitor ........................................................................................... 44
▼ How to View ttymon Port Monitor Status ................................................................................ 44
▼ How to Stop a ttymon Port Monitor .......................................................................................... 45
▼ How to Start a ttymon Port Monitor .......................................................................................... 46
▼ How to Disable a ttymon Port Monitor ..................................................................................... 46
▼ How to Enable a ttymon Port Monitor ...................................................................................... 46
▼ How to Remove a ttymon Port Monitor .................................................................................... 47
Administering ttymon services (Task Map) ..................................................................................... 47
Administering ttymon Services ......................................................................................................... 48
▼ How to Add a Service ................................................................................................................... 48
▼ How to View the Status of a TTY Port Service .......................................................................... 49
▼ How to Enable a Port Monitor Service ...................................................................................... 51
▼ How to Disable a Port Monitor Service ..................................................................................... 51
Service Access Facility Administration (Reference) ........................................................................ 51
Files Associated With the SAF .................................................................................................... 52
/etc/saf/_sactab File ............................................................................................................... 52
/etc/saf/pmtag/_pmtab File .................................................... 53
Preface
System Administration Guide: Advanced Administration is part of a set that covers a significant
part of the Solaris™ system administration information. This guide includes information for
both SPARC® and x86 based systems.
This book assumes that you have installed the SunOS™ 5.10 Operating System. It also assumes
that you have set up any networking software that you plan to use.
For the Solaris 10 release, new features that are interesting to system administrators are covered
in sections called What's New in ... ? in the appropriate chapters.
Note – This Solaris release supports systems that use the SPARC and x86 families of processor
architectures: UltraSPARC®, SPARC64, AMD64, Pentium, and Xeon EM64T. The supported
systems appear in the Solaris OS: Hardware Compatibility Lists at
http://www.sun.com/bigadmin/hcl. This document cites any implementation differences
between the platform types.
■ System Administration Guide: Basic Administration – User accounts and groups, server and client support, shutting down and booting a system, managing services, and managing software (packages and patches)
■ System Administration Guide: Advanced Administration – Terminals and modems, system resources (disk quotas, accounting, and crontabs), system processes, and troubleshooting Solaris software problems
■ System Administration Guide: Devices and File Systems – Removable media, disks and devices, file systems, and backing up and restoring data
■ System Administration Guide: IP Services – TCP/IP network administration, IPv4 and IPv6 address administration, DHCP, IPsec, IKE, Solaris IP filter, Mobile IP, IP network multipathing (IPMP), and IPQoS
■ System Administration Guide: Naming and Directory Services (DNS, NIS, and LDAP) – DNS, NIS, and LDAP naming and directory services, including transitioning from NIS to LDAP and transitioning from NIS+ to LDAP
■ System Administration Guide: Naming and Directory Services (NIS+) – NIS+ naming and directory services
■ System Administration Guide: Network Services – Web cache servers, time-related services, network file systems (NFS and Autofs), mail, SLP, and PPP
■ System Administration Guide: Solaris Printing – Solaris printing topics and tasks, using services, tools, protocols, and technologies to set up and administer printing services and printers
■ System Administration Guide: Security Services – Auditing, device management, file security, BART, Kerberos services, PAM, Solaris Cryptographic Framework, privileges, RBAC, SASL, and Solaris Secure Shell
■ System Administration Guide: Solaris Containers–Resource Management and Solaris Zones – Resource management topics, projects and tasks, extended accounting, resource controls, fair share scheduler (FSS), physical memory control using the resource capping daemon (rcapd), and resource pools; virtualization using Solaris Zones software partitioning technology and lx branded zones
■ Solaris ZFS Administration Guide – ZFS storage pool and file system creation and management, snapshots, clones, backups, using access control lists (ACLs) to protect ZFS files, using ZFS on a Solaris system with zones installed, emulated volumes, and troubleshooting and data recovery
■ Solaris Trusted Extensions Administrator's Procedures – System administration that is specific to a Solaris Trusted Extensions system
■ Solaris Trusted Extensions Configuration Guide – Starting with the Solaris 10 5/08 release, describes how to plan for, enable, and initially configure Solaris Trusted Extensions
Typographic Conventions
The following conventions are used in this book.
■ AaBbCc123 – The names of commands, files, and directories, and onscreen computer output. Examples: Edit your .login file. Use ls -a to list all files. machine_name% you have mail.
■ aabbcc123 – Placeholder: replace with a real name or value. Example: The command to remove a file is rm filename.
■ AaBbCc123 – Book titles, new terms, and terms to be emphasized. Examples: Read Chapter 6 in the User's Guide. A cache is a copy that is stored locally. Do not save the file. Note: Some emphasized items appear bold online.
Shell Prompt
■ C shell – machine_name%
General Conventions
Be aware of the following conventions that are used in this book.
■ When following steps or using examples, be sure to type double-quotes ("), left
single-quotes (‘), and right single-quotes (’) exactly as shown.
■ The key referred to as Return is labeled Enter on some keyboards.
■ It is assumed that the root path includes the /sbin, /usr/sbin, /usr/bin, and /etc
directories, so the steps in this book show the commands in these directories without
absolute path names. Steps that use commands in other, less common, directories show the
absolute path in the example.
■ The examples in this book are for a basic SunOS 5.10 software installation without the
Binary Compatibility Package installed and without /usr/ucb in the path.
Caution – If /usr/ucb is included in a search path, it should always be at the end of the search
path. Commands like ps or df are duplicated in /usr/ucb with different formats and
different options from the SunOS 5.10 commands.
C H A P T E R 1
Managing Terminals and Modems (Overview)
This chapter provides overview information for managing terminals and modems.
For step-by-step instructions on how to set up terminals and modems with the Serial Ports tool,
see Chapter 2, “Setting Up Terminals and Modems (Tasks).”
For step-by-step instructions on how to set up terminals and modems with the Service Access
Facility (SAF), see Chapter 3, “Managing Serial Ports With the Service Access Facility (Tasks).”
What's New in Managing Terminals and Modems?
output. The generated console output is more efficient than using OBP rendering. The coherent
console also avoids idling CPUs during the SPARC console output and enhances the user
experience.
This change does not impact how the terminal type is set for the serial port. You can still use the
svccfg command to modify the $TERM value, as shown in the following example:
# svccfg
svc:> select system/console-login
svc:/system/console-login> setprop ttymon/terminal_type = "xterm"
svc:/system/console-login> exit
Note – You can no longer customize the ttymon invocation in the /etc/inittab file.
For step-by-step instructions on how to specify ttymon command arguments with SMF, see
“How to Set the ttymon Console Terminal Type” on page 42.
For a complete overview of SMF, see Chapter 16, “Managing Services (Overview),” in System
Administration Guide: Basic Administration. For information on the step-by-step procedures
that are associated with SMF, see Chapter 17, “Managing Services (Tasks),” in System
Administration Guide: Basic Administration.
Terminal Description
Your system's bitmapped graphics display is not the same as an alphanumeric terminal. An
alphanumeric terminal connects to a serial port and displays only text. You don't have to
perform any special steps to administer the graphics display.
Modem Description
Modems can be set up in three basic configurations:
■ Dial-out
■ Dial-in
■ Bidirectional
A modem connected to your home computer might be set up to provide dial-out service. With
dial-out service, you can access other computers from your own home. However, nobody
outside can gain access to your machine.
Dial-in service is just the opposite. Dial-in service allows people to access a system from remote
sites. However, it does not permit calls to the outside world.
Bidirectional access, as the name implies, provides both dial-in and dial-out capabilities.
Ports Description
A port is a channel through which a device communicates with the operating system. From a
hardware perspective, a port is a “receptacle” into which a terminal or modem cable might be
physically connected.
However, a port is not strictly a physical receptacle, but an entity with hardware (pins and
connectors) and software (a device driver) components. A single physical receptacle often
provides multiple ports, allowing connection of two or more devices.
Common types of ports include serial, parallel, small computer systems interface (SCSI), and
Ethernet.
Most modems, alphanumeric terminals, plotters, and some printers are designed according to
the RS-232-C or RS-423 standards. These devices can be connected interchangeably, using
standard cables, to the serial ports of computers that have been similarly designed.
When many serial port devices must be connected to a single computer, you might need to add
an adapter board to the system. The adapter board, with its driver software, provides additional
serial ports for connecting more devices than could otherwise be accommodated.
Services Description
Modems and terminals gain access to computing resources by using serial port software. Serial
port software must be set up to provide a particular “service” for the device attached to the port.
For example, you can set up a serial port to provide bidirectional service for a modem.
Port Monitors
The main mechanism for gaining access to a service is through a port monitor. A port monitor is
a program that continuously monitors for requests to log in or access printers or files.
When a port monitor detects a request, it sets whatever parameters are required to establish
communication between the operating system and the device requesting service. Then, the port
monitor transfers control to other processes that provide the services needed.
The two types of port monitors included in the Solaris Operating System are the ttymon port
monitor, which monitors serial port lines, and the listen port monitor, the network listener.
You might be familiar with an older port monitor called getty. The new ttymon port monitor is
more powerful. A single ttymon port monitor can replace multiple occurrences of getty.
Otherwise, these two programs serve the same function. For more information, see the
getty(1M) man page.
The most comprehensive way to manage serial ports is to use the Service Access Facility (SAF)
commands. For more information, see “Service Access Facility” on page 25.
The SAF is an open-systems solution that controls access to system and network resources
through tty devices and local-area networks (LANs). The SAF is not a program, but a hierarchy
of background processes and administrative commands.
This chapter provides step-by-step instructions for setting up terminals and modems using
Solaris Management Console's Serial Ports tool.
For overview information about terminals and modems, see Chapter 1, “Managing Terminals
and Modems (Overview).” For overview information about managing system resources, see
Chapter 4, “Managing System Resources (Overview).”
For information about the procedures associated with setting up terminals and modems using
Solaris Management Console's Serial Ports tool, see “Setting Terminals and Modems (Task
Map)” on page 27.
Setting Up Terminals and Modems With Serial Ports Tool (Overview)
■ Initialize a port – To initialize a port, use the Solaris Management Console Serial Ports tool. Choose the appropriate option from the Action menu. See “How to Initialize a Port” on page 32.
Select a serial port from the Serial Ports window and then choose a Configure option from the
Action menu to configure the following:
■ Terminal
■ Modem – Dial–In
■ Modem – Dial–Out
■ Modem – Dial–In/Dial–Out
■ Initialize Only – No Connection
The Configure options provide access to the templates for configuring these services. You can
view two levels of detail for each serial port: Basic and Advanced. You can access the Advanced
level of detail for each serial port after it is configured by selecting the serial port and selecting
the Properties option from the Action menu. After a serial port is configured, you can disable or
enable the port with the SAF commands. For information on using the SAF commands, see
Chapter 3, “Managing Serial Ports With the Service Access Facility (Tasks).”
For information on using the Serial Ports command–line interface, see the smserialport(1M)
man page.
Setting Up Terminals
The following table describes the menu items (and their default values) when you set up a
terminal by using the Serial Ports tool.
Basic detail items and their default values:
■ Port – (no default)
■ Description – Terminal
Setting Up Modems
The following table describes the three modem templates that are available when you set up a
modem using the Serial Ports tool.
Dial-In Only Users can dial in to the modem but cannot dial out.
Dial-Out Only Users can dial out from the modem but cannot dial in.
Dial-In and Out (Bidirectional) Users can either dial in or dial out from the modem.
For each template, the default value of the Description detail item is as follows:
■ Modem - Dial-In Only template – Modem – Dial In Only
■ Modem - Dial-Out Only template – Modem – Dial Out Only
■ Modem - Dial In and Out template – Modem – Dial In and Out
The following table describes the default values for the Initialize Only template.
6 Click OK.
7 To configure the advanced items, select the port configured as a terminal. Then, select
Properties from the Action menu.
For information on starting the Solaris Management Console, see “Starting the Solaris
Management Console” in System Administration Guide: Basic Administration.
5 Choose one of the following Configure options from the Action menu.
6 Click OK.
7 To configure the advanced items, select the port configured as a modem. Then, select Properties
from the Action menu.
6 Click OK.
7 To configure the advanced items, select the port configured as initialize only. Then, select
Properties from the Action menu.
$ pmadm -l -t ttymon
Examine the /etc/ttydefs file and double-check the label definition against the terminal
configuration. Use the sacadm command to check the port monitor's status. Use the pmadm
command to check the service associated with the port that the terminal uses.
■ Check the serial connection.
If the Service Access Controller is starting the TTY port monitor and the following is true:
■ The pmadm command reports that the service for the terminal's port is enabled.
■ The terminal's configuration matches the port monitor's configuration.
Then, continue to search for the problem by checking the serial connection. A serial
connection comprises serial ports, cables, and terminals. Test each of these parts by using
one part with two other parts that are known to be reliable.
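The checks above can be sketched as a short console session. This is an illustrative transcript only: zsmon is a common default ttymon port monitor tag, and contty is one example label; substitute the tag, port, and label from your own configuration.

```shell
# Verify that the SAC is running and list all port monitors and their states.
sacadm -l

# List the services managed by the zsmon port monitor (example tag) and
# confirm that the service for the terminal's port is enabled.
pmadm -l -p zsmon

# Compare the TTY label the service uses against its /etc/ttydefs definition.
grep contty /etc/ttydefs
```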
For more information on ttymon and SMF, see “What's New in Managing Terminals and
Modems?” on page 21.
This chapter describes how to manage serial port services using the Service Access Facility
(SAF).
Also included in this chapter is information on how to perform console administration with the
Service Management Facility (SMF).
Note – The SAF and SMF are two different tools in the Solaris OS. Starting with the Solaris 10
release, ttymon invocations on the system console are now managed by SMF. The SAF tool is
still used to administer terminals, modems, and other network devices.
For information on the step-by-step procedures that are associated with managing serial ports,
see the following:
■ “Managing Serial Ports (Task Map)” on page 36
■ “Administering ttymon services (Task Map)” on page 47
For reference information about the SAF, see “Service Access Facility Administration
(Reference)” on page 51.
Managing Serial Ports (Task Map)
■ Add a ttymon port monitor – Use the sacadm command to add a ttymon port monitor. See “How to Add a ttymon Port Monitor” on page 44.
■ View ttymon port monitor status – Use the sacadm command to view ttymon port monitor status. See “How to View ttymon Port Monitor Status” on page 44.
■ Stop a ttymon port monitor – Use the sacadm command to stop a ttymon port monitor. See “How to Stop a ttymon Port Monitor” on page 45.
■ Start a ttymon port monitor – Use the sacadm command to start a ttymon port monitor. See “How to Start a ttymon Port Monitor” on page 46.
■ Disable a ttymon port monitor – Use the sacadm command to disable a ttymon port monitor. See “How to Disable a ttymon Port Monitor” on page 46.
■ Enable a ttymon port monitor – Use the sacadm command to enable a ttymon port monitor. See “How to Enable a ttymon Port Monitor” on page 46.
■ Remove a ttymon port monitor – Use the sacadm command to remove a ttymon port monitor. See “How to Remove a ttymon Port Monitor” on page 47.
The SAF is a tool that is used to administer terminals, modems, and other network devices. The
top-level SAF program is the Service Access Controller (SAC). The SAC controls port monitors
that you administer through the sacadm command. Each port monitor can manage one or more
ports.
You administer the services associated with ports through the pmadm command. While services
provided through the SAC can differ from network to network, the SAC and its administrative
commands, sacadm and pmadm, are network independent.
The following table describes the SAF control hierarchy. The sacadm command is used to
administer the SAC, which controls the ttymon and listen port monitors.
The services of ttymon and listen are in turn controlled by the pmadm command. One instance
of ttymon can service multiple ports. One instance of listen can provide multiple services on a
network interface.
■ Overall administration – sacadm – Command for adding and removing port monitors.
■ Port monitor service administration – pmadm – Command for controlling port monitor services.
■ Console administration – console login – Console services are managed by the SMF service, svc:/system/console-login:default. This service invokes the ttymon port monitor. Do not use the pmadm or the sacadm command to manage the console. For more information, see “ttymon and the Console Port” on page 40, “How to Set the ttymon Console Terminal Type” on page 42, and “How to Set the Baud Rate Speed on the ttymon Console Terminal” on page 43.
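Because the console is managed by SMF rather than by sacadm or pmadm, its state is inspected with the standard SMF utilities. A minimal sketch:

```shell
# Check the state of the console service instance.
svcs svc:/system/console-login:default

# Display the ttymon property group for the console service.
svcprop -p ttymon svc:/system/console-login
```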
Chapter 3 • Managing Serial Ports With the Service Access Facility (Tasks) 37
Port Monitor Service Administration (pmadm)
When someone attempts to log in by using an alphanumeric terminal or a modem, the serial
port driver passes the activity to the operating system. The ttymon port monitor notes the serial
port activity, and attempts to establish a communications link. The ttymon port monitor
determines which data transfer rate, line discipline, and handshaking protocol are required to
communicate with the device.
After the proper parameters for communication with the modem or terminal are established,
the ttymon port monitor passes these parameters to the login program and transfers control to
it.
The ttymon port monitor then writes the prompt and waits for user input. If the user indicates
that the speed is inappropriate by pressing the Break key, the ttymon port monitor tries the next
speed and writes the prompt again.
If autobaud is enabled for a port, the ttymon port monitor tries to determine the baud rate on
the port automatically. Users must press Return before the ttymon port monitor can recognize
the baud rate and print the prompt.
When valid input is received, the ttymon port monitor does the following tasks:
■ Interprets the per-service configuration file for the port
■ Creates an /etc/utmpx entry, if required
■ Establishes the service environment
■ Invokes the service associated with the port
After the service terminates, the ttymon port monitor cleans up the /etc/utmpx entry, if this
entry exists, and returns the port to its initial state.
TTY Monitor and Network Listener Port Monitors
Bidirectional Service
If a port is configured for bidirectional service, the ttymon port monitor does the following:
■ Allows users to connect to a service
■ Allows the uucico, cu, or ct commands to use the port for dialing out, if the port is free
■ Waits to read a character before printing a prompt
■ Invokes the port's associated service, without sending the prompt message, when a
connection is requested, if the connect-on-carrier flag is set
The ttymon port monitor provides Solaris users the same services that the getty port monitor
did under the SunOS 4.1 software.
The ttymon port monitor runs under the SAC program and is configured with the sacadm
command. Each instance of ttymon can monitor multiple ports. These ports are specified in the
port monitor's administrative file. The administrative file is configured by using the pmadm and
ttyadm commands.
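For example, a new ttymon port monitor can be added and then verified as follows. This is a sketch: mbmon is a hypothetical port monitor tag, and the -y comment text is arbitrary.

```shell
# Add a ttymon port monitor named mbmon (hypothetical tag), using the
# version string reported by ttyadm -V.
sacadm -a -p mbmon -t ttymon -c /usr/lib/saf/ttymon -v `ttyadm -V` \
    -y "TTY ports a and b"

# Confirm that the new port monitor was added and note its state.
sacadm -l -p mbmon
```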
defined for any of the properties, then the value is not used for ttymon. However, if the ttymon
device value is empty, or not set, then /dev/console is used as the default to enable ttymon to
run.
The following properties are available under the SMF service,
svc:/system/console-login:default:
ttymon/nohangup Specifies the nohangup property. If set to true, do not force a line
hang up by setting the line speed to zero before setting the default
or specified speed.
ttymon/prompt Specifies the prompt string for the console port.
ttymon/terminal_type Specifies the default terminal type for the console.
ttymon/device Specifies the console device.
ttymon/label Specifies the TTY label in the /etc/ttydefs line.
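The current values of these properties can be listed with svccfg. A minimal sketch, assuming the default console service instance:

```shell
# List the current ttymon properties of the console service.
svccfg -s console-login listprop ttymon
```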
Thus, the ttyadm command does not administer ttymon directly. The ttyadm command
complements the generic administrative commands, sacadm and pmadm. For more information,
see the ttyadm(1M) man page.
The listen port monitor is configured by using the sacadm command. Each instance of listen
can provide multiple services. These services are specified in the port monitor's administrative
file. This administrative file is configured by using the pmadm and nlsadmin commands.
The network listener process can be used with any connection-oriented transport provider that
conforms to the Transport Layer Interface (TLI) specification. In the Solaris Operating System,
listen port monitors can provide additional network services not provided by the inetd
service.
Administering ttymon Port Monitors
Thus, the nlsadmin command does not administer listen directly. The command
complements the generic administrative commands, sacadm and pmadm.
Each network, configured separately, can have at least one instance of the network listener
process associated with it. The nlsadmin command controls the operational states of listen
port monitors.
The nlsadmin command can establish a listen port monitor for a given network, configure the
specific attributes of that port monitor, and start and kill the monitor. The nlsadmin command
can also report on the listen port monitors on a machine.
2 Run the svccfg command to set the property for the service instance that you want to change.
# svccfg -s console-login setprop ttymon/terminal_type = "xterm"
where xterm is an example of a terminal type that you might want to use.
Caution – If you choose to restart the service instance immediately, you are logged out of the
console. If you do not restart the service instance immediately, the property changes apply at
the next login prompt on the console.
The following are supported console speeds for SPARC based systems:
■ 9600 bps
■ 19200 bps
■ 38400 bps
2 Use the eeprom command to set a baud rate speed that is appropriate for your system type.
# eeprom ttya-mode=baud-rate,8,n,1,-
For example, to change the baud rate on an x86 based system's console to 38400, type:
# eeprom ttya-mode=38400,8,n,1,-
# 9600 :bd:
ttymodes="2502:1805:bd:8a3b:3:1c:7f:15:4:0:0:0:11:13:1a:19:12:f:17:16";
Use the following entry to change the baud rate speed to 19200.
# 19200 :be:
ttymodes="2502:1805:be:8a3b:3:1c:7f:15:4:0:0:0:11:13:1a:19:12:f:17:16";
Use the following entry to change the baud rate speed to 38400.
# 38400 :bf:
ttymodes="2502:1805:bf:8a3b:3:1c:7f:15:4:0:0:0:11:13:1a:19:12:f:17:16";
■ On x86 based systems: Change the console speed if the BIOS serial redirection is enabled.
The method that you use to change the console speed is platform-dependent.
# sacadm -l -p mbmon
PMTAG PMTYPE FLGS RCNT STATUS COMMAND
mbmon ttymon - 0 STARTING /usr/lib/saf/ttymon #TTY Ports a & b
PMTAG Identifies the port monitor name, mbmon.
PMTYPE Identifies the port monitor type, ttymon.
FLGS Indicates whether the following flags are set:
■ d — Do not enable the new port monitor.
■ x — Do not start the new port monitor.
■ dash (-) — No flags are set.
RCNT Indicates the return count value. A return count of 0 indicates that the
port monitor is not to be restarted if it fails.
STATUS Indicates the current status of the port monitor.
COMMAND Identifies the command used to start the port monitor.
#TTY Ports a & b Identifies any comment used to describe the port monitor.
Note – Port monitor configuration files cannot be updated or changed by using the sacadm
command. To reconfigure a port monitor, remove it and then add a new one.
Add a ttymon service. Use the pmadm command to add a service. See "How to Add a Service" on
page 48.
View the status of a TTY port service. Use the pmadm command to view the status of a TTY
port. See "How to View the Status of a TTY Port Service" on page 49.
Enable a port monitor service. Use the pmadm command with the -e option to enable a port
monitor. See "How to Enable a Port Monitor Service" on page 51.
Disable a port monitor service. Use the pmadm command with the -d option to disable a port
monitor. See "How to Disable a Port Monitor Service" on page 51.
Administering ttymon Services
Note – In this example, the input wraps automatically to the next line. Do not use a Return key or
line feed.
# pmadm -l -p mbmon
PMTAG PMTYPE SVCTAG FLAGS ID <PMSPECIFIC>
mbmon ttymon a - root /dev/term/a - - /usr/bin/login - contty
ldterm,ttcompat login: Terminal disabled tvi925 y #
PMTAG Identifies the port monitor name, mbmon, that is set by using
the pmadm -p command.
PMTYPE Identifies the port monitor type, ttymon.
SVCTAG Indicates the service tag value that is set by using the pmadm -s
command.
FLAGS Identifies whether the following flags are set by using the
pmadm -f command.
■ x — Do not enable the service.
■ u — Create a utmpx entry for the service.
■ dash (-) — No flags are set.
ID Indicates the identity assigned to the service when it is started.
This value is set by using the pmadm -i command.
<PMSPECIFIC> Information
/dev/term/a Indicates the TTY port path name that is set by using the
ttyadm -d command.
Service Access Facility Administration (Reference)
/etc/saf/_sactab File
The information in the /etc/saf/_sactab file is as follows:
# VERSION=1
zsmon:ttymon::0:/usr/lib/saf/ttymon
#
# VERSION=1 Indicates the Service Access Facility version number.
zsmon Is the name of the port monitor.
ttymon Is the type of port monitor.
:: Indicates whether the following two flags are set:
■ d — Do not enable the port monitor.
■ x — Do not start the port monitor. No flags are set in this
example.
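The colon-delimited _sactab layout described above can be pulled apart with standard shell
tools. The following is a minimal sketch using the example entry from this section; the
variable names are illustrative, not part of any Solaris interface.

```shell
# _sactab fields, as described above: pmtag:pmtype:flags:rcnt:command
entry='zsmon:ttymon::0:/usr/lib/saf/ttymon'

# Split on colons; the empty third field means no d or x flags are set.
IFS=: read -r pmtag pmtype flags rcnt cmd <<EOF
$entry
EOF

echo "pmtag=$pmtag pmtype=$pmtype flags=${flags:-none} rcnt=$rcnt cmd=$cmd"
```

A restart count (rcnt) of 0, as here, means SAC does not restart the port monitor if it fails.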
/etc/saf/pmtag/_pmtab File
The /etc/saf/pmtag/_pmtab file, such as /etc/saf/zsmon/_pmtab, is similar to the following:
# VERSION=1
ttya:u:root:reserved:reserved:reserved:/dev/term/a:I::/usr/bin/login::9600:
ldterm,ttcompat:ttya login\: ::tvi925:y:#
# VERSION=1 Indicates the Service Access Facility version number.
ttya Indicates the service tag.
x,u Identifies whether the following flags are set:
■ x — Do not enable the service.
■ u — Create a utmpx entry for the service.
root Indicates the identity assigned to the service tag.
reserved This field is reserved for future use.
reserved This field is reserved for future use.
reserved This field is reserved for future use.
/dev/term/a Indicates the TTY port path name.
/usr/bin/login Identifies the full path name of the service to be invoked when a
connection is received.
:c,b,h,I,r: Indicates whether the following flags are set:
Service States
The pmadm command controls the states of services. The following table describes the possible
states of services.
State Description
Enabled Default state – When the port monitor is added, the service operates.
Disabled Default state – When the port monitor is removed, the service stops.
Port Monitor States
The sacadm command controls the states of port monitors. The following table describes the
possible states of port monitors.
State Description
Started Default state – When the port monitor is added, it is automatically started.
Enabled Default state – When the port monitor is added, it is automatically ready to accept
requests for service.
Stopped Default state – When the port monitor is removed, it is automatically stopped.
Disabled Default state – When the port monitor is removed, it automatically continues
existing services and refuses to add new services.
State Description
Stopping Intermediate state – The port monitor has been manually terminated, but it has
not completed its shutdown procedure. The port monitor is on the way to
becoming stopped.
Notrunning Inactive state – The port monitor has been killed. All ports previously monitored
are inaccessible. An external user cannot tell whether a port is disabled or
notrunning.
Failed Inactive state – The port monitor is unable to start and remain running.
To determine the state of any particular port monitor, use the following command:
# sacadm -l -p portmon-name
Port States
Ports can be enabled or disabled depending on the state of the port monitor that controls the
ports.
State Description
Enabled The ttymon port monitor sends a prompt message to the port and provides
login service to it.
Disabled Default state of all ports if ttymon is killed or disabled. If you specify this
state, ttymon sends out the disabled message when it receives a connection
request.
C H A P T E R 4
Managing System Resources (Overview)
This chapter provides a brief description of the system resource management features that are
available in the Solaris Operating System and a road map to help you manage system resources.
Using these features, you can display general system information, monitor disk space, set disk
quotas, and use accounting programs. You can also schedule the cron and at commands to run
routine commands automatically.
This section does not cover information on Solaris resource management that enables you to
allocate, monitor, and control system resources in a flexible way.
For information on the procedures that are associated with managing system resources without
Solaris resource management, see “Managing System Resources (Road Map)” on page 59.
For information on managing system resources with Solaris resource management, see Chapter
1, “Introduction to Solaris 10 Resource Manager,” in System Administration Guide: Solaris
Containers-Resource Management and Solaris Zones.
What's New in Managing System Resources?
The firmware device tree root properties that are displayed by using the -b option to the
prtconf command are as follows:
■ name
■ compatible
■ banner-name
■ model
To display additional platform-specific output that might be available, use the prtconf -vb
command. For more information, see the prtconf(1M) man page and “How to Display a
System's Product Name” on page 67.
For information about the procedures associated with this feature, see “How to Display a
System's Physical Processor Type” on page 68.
For more information in this guide, see Chapter 5, “Displaying and Changing System
Information (Tasks).”
For a complete listing of new Solaris features and a description of Solaris releases, see Solaris 10
What’s New.
Displaying and changing system information. Use various commands to display and change
system information, such as general system information, the language environment, the date and
time, and the system's host name. See Chapter 5, "Displaying and Changing System Information
(Tasks)."
Managing disk use. Identify how disk space is used and take steps to remove old and unused
files. See Chapter 6, "Managing Disk Use (Tasks)."
Managing quotas. Use UFS file system quotas to manage how much disk space is used by users.
See Chapter 7, "Managing Quotas (Tasks)."
Scheduling system events. Use cron and at jobs to help schedule system routines that can
include clean up of old and unused files. See Chapter 8, "Scheduling System Tasks (Tasks)."
Managing system accounting. Use system accounting to identify how users and applications are
using system resources. See Chapter 9, "Managing System Accounting (Tasks)."
Managing system resources with Solaris Resource Management. Use resource manager to control
how applications use available system resources and to track and charge resource usage. See
Chapter 1, "Introduction to Solaris 10 Resource Manager," in System Administration Guide:
Solaris Containers-Resource Management and Solaris Zones.
C H A P T E R 5
Displaying and Changing System Information (Tasks)
This chapter describes the tasks that are required to display and change the most common
system information.
For information about the procedures associated with displaying and changing system
information, see the following:
■ “Displaying System Information (Task Map)” on page 61
■ “Changing System Information (Task Map)” on page 71
For overview information about managing system resources, see Chapter 4, “Managing System
Resources (Overview).”
Determine whether a system has 32-bit or 64-bit capabilities enabled. Use the isainfo command
to determine whether a system has 32-bit or 64-bit capabilities enabled. For x86 based
systems, you can use the isalist command to display this information. See "How to Determine
Whether a System Has 32-bit or 64-Bit Solaris Capabilities Enabled" on page 63.
Display Solaris release information. Display the contents of the /etc/release file to identify
your Solaris release version. See "How to Display Solaris Release Information" on page 66.
Display general system information. Use the showrev command to display general system
information. See "How to Display General System Information" on page 66.
Display a system's host ID number. Use the hostid command to display your system's host ID.
See "How to Display a System's Host ID Number" on page 67.
Display a system's product name. Starting with the Solaris 10 1/06 release, you can use the
prtconf -b command to display the product name of a system. See "How to Display a System's
Product Name" on page 67.
Display a system's installed memory. Use the prtconf command to display information about
your system's installed memory. See "How to Display a System's Installed Memory" on page 68.
Display a system's date and time. Use the date command to display your system's date and
time. See "How to Display the Date and Time" on page 68.
Display a system's physical processor type. Use the psrinfo -p command to list the total
number of physical processors on a system. Use the psrinfo -pv command to list all physical
processors on a system and the virtual processors that are associated with each physical
processor. See "How to Display a System's Physical Processor Type" on page 68.
Display a system's logical processor type. Use the psrinfo -v command to display a system's
logical processor type. See "How to Display a System's Logical Processor Type" on page 69.
Display locales that are installed on a system. Use the localeadm command to display locales
that are installed on your system. See "How to Display Locales Installed on a System" on
page 69.
Determine if a locale is installed on a system. Use the -q option of the localeadm command
and a locale to determine if a locale is installed on your system. See "How to Determine if a
Locale is Installed on a System" on page 70.
The isainfo command, run without specifying any options, displays the name or names of the
native instruction sets for applications supported by the current OS version.
-v Prints detailed information about the other options
-b Prints the number of bits in the address space of the native instruction set.
-n Prints the name of the native instruction set used by portable applications supported by
the current version of the OS.
-k Prints the name of the instruction set or sets that are used by the OS kernel components
such as device drivers and STREAMS modules.
Note – For x86 based systems, the isalist command can also be used to display this
information.
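The isainfo and isalist commands are specific to Solaris. On systems where they are
unavailable, the POSIX getconf utility offers a rough, portable stand-in for isainfo -b; this
is an approximation for illustration, not a documented Solaris interface.

```shell
# LONG_BIT is the width in bits of a native long integer, which tracks
# the native address-space width that `isainfo -b` reports (32 or 64).
bits=$(getconf LONG_BIT)
echo "native address space: ${bits}-bit"
```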
Example 5–1 SPARC: Determining Whether a System Has 32–Bit or 64–Bit Solaris Capabilities
Enabled
The isainfo command output for an UltraSPARC system that is running previous releases of
the Solaris OS using a 32-bit kernel is displayed as follows:
$ isainfo -v
32-bit sparc applications
This output means that this system can support only 32–bit applications.
The current release of the Solaris OS only ships a 64–bit kernel on SPARC based systems. The
isainfo command output for an UltraSPARC system that is running a 64–bit kernel is
displayed as follows:
$ isainfo -v
64-bit sparcv9 applications
32-bit sparc applications
This output means that this system is capable of supporting both 32–bit and 64–bit
applications.
Use the isainfo -b command to display the number of bits supported by native applications
on the running system.
The output from a SPARC based, x86 based, or UltraSPARC system that is running the 32–bit
Solaris Operating System is displayed as follows:
$ isainfo -b
32
The isainfo command output from a 64–bit UltraSPARC system that is running the 64–bit
Solaris Operating System is displayed as follows:
$ isainfo -b
64
The command returns 64 only. Even though a 64–bit UltraSPARC system can run both types of
applications, 64–bit applications are the best kind of applications to run on a 64–bit system.
Example 5–2 x86: Determining Whether a System Has 32–Bit or 64–Bit Solaris Capabilities
Enabled
The isainfo command output for an x86 based system that is running the 64-bit kernel is
displayed as follows:
$ isainfo
amd64 i386
This output means that this system can support 64–bit applications.
Use the isainfo -v command to determine if an x86 based system is capable of running a 32–bit
kernel.
$ isainfo -v
64-bit amd64 applications
fpu tsc cx8 cmov mmx ammx a3dnow a3dnowx fxsr sse sse2
32-bit i386 applications
fpu tsc cx8 cmov mmx ammx a3dnow a3dnowx fxsr sse sse2
This output means that this system can support both 64–bit and 32–bit applications.
Use the isainfo -b command to display the number of bits supported by native applications
on the running system.
The output from an x86 based system that is running the 32–bit Solaris Operating System is
displayed as follows:
$ isainfo -b
32
The isainfo command output from an x86 based system that is running the 64–bit Solaris
Operating System is displayed as follows:
$ isainfo -b
64
You can also use the isalist command to determine whether an x86 based system is running
in 32–bit or 64–bit mode.
$ isalist
amd64 pentium_pro+mmx pentium_pro pentium+mmx pentium i486 i386 i86
In the preceding example, amd64 indicates that the system has 64–bit Solaris capabilities
enabled.
● Display the contents of the /etc/release file to identify your Solaris release version.
$ cat /etc/release
Solaris 10 s10_51 SPARC
Copyright 2004 Sun Microsystems, Inc. All Rights Reserved.
Use is subject to license terms.
Assembled 21 January 2004
$ uname
SunOS
$ uname -a
SunOS starbug 5.10 Generic sun4u sparc SUNW,Ultra-5_10
$
$ showrev -a
Hostname: stonetouch
Hostid: 8099dfb9
Release: 5.10
Kernel architecture: sun4u
OpenWindows version:
Solaris X11 Version 6.6.2 20 October 2003
● To display the host ID number in hexadecimal format, use the hostid command.
$ hostid
80a5d34c
● To display the product name for your system, use the prtconf command with the -b option.
# prtconf -b
name: SUNW,Ultra-5_10
model: SUNW,375-0066
banner-name: Sun Ultra 5/10 UPA/PCI (UltraSPARC-IIi 333MHz)
This example shows sample output from the prtconf -vb command.
# prtconf -vb
name: SUNW,Ultra-5_10
model: SUNW,375-0066
banner-name: Sun Ultra 5/10 UPA/PCI (UltraSPARC-IIi 333MHz)
idprom: 01800800.20a6c363.00000000.a6c363a9.00000000.00000000.405555aa.aa555500
openprom model: SUNW,3.15
● To display the amount of memory that is installed on your system, use the prtconf command.
● To display the current date and time according to your system clock, use the date command.
$ date
Wed Jan 21 17:32:59 MST 2004
$
$ psrinfo -pv
The UltraSPARC-IV physical processor has 2 virtual processors (8, 520)
The UltraSPARC-IV physical processor has 2 virtual processors (9, 521)
The UltraSPARC-IV physical processor has 2 virtual processors (10, 522)
The UltraSPARC-IV physical processor has 2 virtual processors (11, 523)
The UltraSPARC-III+ physical processor has 1 virtual processor (16)
The UltraSPARC-III+ physical processor has 1 virtual processor (17)
$ psrinfo -pv
The i386 physical processor has 2 virtual processors (0, 2)
The i386 physical processor has 2 virtual processors (1, 3)
$ psrinfo -v
Status of virtual processor 0 as of: 04/16/2004 10:32:13
on-line since 03/22/2004 19:18:27.
The sparcv9 processor operates at 650 MHz,
and has a sparcv9 floating point processor.
$ isalist
pentium_pro+mmx pentium_pro pentium+mmx pentium i486 i386 i86
2 Display the locales currently installed on your system using the localeadm command. The -l
option displays the locales that are installed on the system. For example:
# localeadm -l
Checking for installed pkgs. This could take a while.
POSIX (C)
Done.
2 Determine if a locale is installed on your system using the localeadm command. The -q option
and a locale queries the system to see if that locale is installed on the system. To see if the
Central European region (ceu) is installed on your system, for example:
# localeadm -q ceu
locale/region name is ceu
Checking for Central Europe region (ceu)
.
.
.
The Central Europe region (ceu) is installed on this system
Manually set a system's date and time. Manually set your system's date and time by using the
date mmddHHMM[[cc]yy] command-line syntax. See "How to Set a System's Date and Time Manually"
on page 72.
Change a system's host name. Change your system's host name by editing the following files:
■ /etc/nodename
■ /etc/hostname.*host-name
■ /etc/inet/hosts
See "How to Change a System's Host Name" on page 73.
Add a locale to a system. Use the localeadm command to add a locale to your system. See "How
to Add a Locale to a System."
Remove a locale from a system. Use the -r option of the localeadm command and the locale to
remove the locale from your system. See "How to Remove a Locale From a System."
3 Verify that you have reset your system's date correctly by using the date command with no
options.
# date
Wed Mar 3 14:04:19 MST 2004
# date 0121173404
Thu Jan 21 17:34:34 MST 2004
$ cat /etc/motd
Sun Microsystems Inc. SunOS 5.10 Generic May 2004
The following example shows an edited /etc/motd file that provides information about system
availability to each user who logs in.
$ cat /etc/motd
The system will be down from 7:00 a.m to 2:00 p.m. on
Saturday, July 7, for upgrades and maintenance.
Do not try to access the system during those hours.
Thank you.
Remember to update your name service database to reflect the new host name.
You can also use the sys-unconfig command to reconfigure a system, including the host name.
For more information, see the sys-unconfig(1M) man page.
■ /etc/inet/hosts
■ /etc/inet/ipnodes – Applies only to some Solaris releases.
Note – Starting with the Solaris 10 8/07 release, there are no longer two separate hosts files. The
/etc/inet/hosts file is the single hosts file that contains both IPv4 and IPv6 entries. You do
not need to maintain IPv4 entries in two hosts files that always require synchronization. For
backward compatibility, the /etc/inet/ipnodes file is replaced with a symbolic link of the
same name to the /etc/inet/hosts file. For more information, see the hosts(4) man page.
3 (Optional) If you are using a name service, change the system's host name in the hosts file.
2 Add the packages for the locale you want to install on your system using the localeadm
command. The -a option and a locale identifies the locale that you want to add. The -d option
and a device identifies the device containing the locale packages you want to add. To add the
Central European region (ceu) to your system, for example:
# localeadm -a ceu -d /net/install/latest/Solaris/Product
2 Remove the packages for the locale installed on your system using the localeadm command.
The -r option and a locale identifies the locale that you want to remove from the system. To
remove the Central European region (ceu) from your system, for example:
# localeadm -r ceu
locale/region name is ceu
Removing packages for Central Europe (ceu)
.
.
.
One or more locales have been removed.
To update the list of locales available
at the login screen’s "Options->Language" menu,
.
.
.
C H A P T E R 6
Managing Disk Use (Tasks)
This chapter describes how to optimize disk space by locating unused files and large directories.
For information on the procedures associated with managing disk use, see “Managing Disk Use
(Task Map)” on page 77.
Display information about files and disk space. Display information about how disk space is
used by using the df command. See "How to Display Information About Files and Disk Space" on
page 79.
Display the size of files. Display information about the size of files by using the ls command
with the -lh options. See "How to Display the Size of Files" on page 81.
Find large files. The ls -s command allows you to sort files by size, in descending order. See
"How to Find Large Files" on page 82.
Find files that exceed a specified size limit. Locate and display the names of files that
exceed a specified size by using the find command with the -size option and the value of the
specified size limit. See "How to Find Files That Exceed a Specified Size Limit" on page 84.
Display the size of directories, subdirectories, and files. Display the size of one or more
directories, subdirectories, and files by using the du command. See "How to Display the Size
of Directories, Subdirectories, and Files" on page 85.
Display ownership of local UFS file systems. Display ownership of files by using the quot -a
command. See "How to Display the User Ownership of Local UFS File Systems" on page 86.
List the newest files. Display the most recently created or changed files first, by using the
ls -t command. See "How to List the Newest Files" on page 87.
Find and remove old or inactive files. Use the find command with the -atime and -mtime
options to locate files that have not been accessed for a specified number of days. You can
remove these files by using the rm `cat filename` command. See "How to Find and Remove Old or
Inactive Files" on page 88.
Clear out temporary directories. Locate temp directories, then use the rm -r * command to
remove the entire directory. See "How to Clear Out Temporary Directories" on page 89.
Find and delete core files. Find and delete core files by using the find . -name core -exec rm
{} \; command. See "How to Find and Delete core Files" on page 90.
Delete crash dump files. Delete crash dump files that are located in the /var/crash/ directory
by using the rm * command. See "How to Delete Crash Dump Files" on page 91.
Example 6–1 Displaying Information About File Size and Disk Space
In the following example, all the file systems listed are locally mounted except for /usr/dist,
which is mounted remotely from the system venus.
$ df
/ (/dev/dsk/c0t0d0s0 ): 101294 blocks 105480 files
/devices (/devices ): 0 blocks 0 files
/system/contract (ctfs ): 0 blocks 2147483578 files
/proc (proc ): 0 blocks 1871 files
/etc/mnttab (mnttab ): 0 blocks 0 files
/etc/svc/volatile (swap ): 992704 blocks 16964 files
/system/object (objfs ): 0 blocks 2147483530 files
/usr (/dev/dsk/c0t0d0s6 ): 503774 blocks 299189 files
/dev/fd (fd ): 0 blocks 0 files
/var/run (swap ): 992704 blocks 16964 files
/tmp (swap ): 992704 blocks 16964 files
/opt (/dev/dsk/c0t0d0s5 ): 23914 blocks 6947 files
/export/home (/dev/dsk/c0t0d0s7 ): 16810 blocks 7160 files
$ df -h
Filesystem size used avail capacity Mounted on
/dev/dsk/c0t0d0s0 249M 200M 25M 90% /
/devices 0K 0K 0K 0% /devices
ctfs 0K 0K 0K 0% /system/contract
proc 0K 0K 0K 0% /proc
mnttab 0K 0K 0K 0% /etc/mnttab
swap 485M 376K 485M 1% /etc/svc/volatile
objfs 0K 0K 0K 0% /system/object
/dev/dsk/c0t0d0s6 3.2G 2.9G 214M 94% /usr
fd 0K 0K 0K 0% /dev/fd
swap 485M 40K 485M 1% /var/run
swap 485M 40K 485M 1% /tmp
/dev/dsk/c0t0d0s5 13M 1.7M 10M 15% /opt
/dev/dsk/c0t0d0s7 9.2M 1.0M 7.3M 13% /export/home
Although /proc and /tmp are local file systems, they are not UFS file systems. /proc is a
PROCFS file system, /var/run and /tmp are TMPFS file systems, and /etc/mnttab is an
MNTFS file system.
Example 6–3 Displaying Total Number of Blocks and Files Allocated for a File System
The following example shows a list of all mounted file systems, device names, total 512-byte
blocks used, and the number of files. The second line of each two-line entry displays the total
number of blocks and files that are allocated for the file system.
$ df -t
/ (/dev/dsk/c0t0d0s0 ): 101294 blocks 105480 files
total: 509932 blocks 129024 files
/devices (/devices ): 0 blocks 0 files
total: 0 blocks 113 files
/system/contract (ctfs ): 0 blocks 2147483578 files
total: 0 blocks 69 files
/proc (proc ): 0 blocks 1871 files
total: 0 blocks 1916 files
/etc/mnttab (mnttab ): 0 blocks 0 files
total: 0 blocks 1 files
/etc/svc/volatile (swap ): 992608 blocks 16964 files
total: 993360 blocks 17025 files
/system/object (objfs ): 0 blocks 2147483530 files
total: 0 blocks 117 files
/usr (/dev/dsk/c0t0d0s6 ): 503774 blocks 299189 files
total: 6650604 blocks 420480 files
/dev/fd (fd ): 0 blocks 0 files
total: 0 blocks 31 files
/var/run (swap ): 992608 blocks 16964 files
total: 992688 blocks 17025 files
Note – If you run out of space in the /var directory, do not symbolically link the /var directory
to a directory on a file system with more disk space. Doing so, even as a temporary measure,
might cause problems for certain Solaris daemon processes and utilities.
$ cd /var/adm
$ ls -lh
total 148
The following example shows that the lpsched.1 file uses two blocks.
$ cd /var/lp/logs
$ ls -s
total 2 0 lpsched 2 lpsched.1
Note that this command sorts files in a list by the character that is in the fifth field, starting
from the left.
■ If the characters or columns for the files are the same, use the following command to sort a
list of files by block size, from largest to smallest.
Note that this command sorts files in a list, starting with the left most character.
Example 6–5 Finding Large Files (Sorting by the Fifth Field's Character)
$ cd /var/adm
$ ls -l | sort +4rn | more
-r--r--r-- 1 root root 4568368 Oct 17 08:36 lastlog
-rw-r--r-- 1 adm adm 697040 Oct 17 12:30 pacct.9
-rw-r--r-- 1 adm adm 280520 Oct 17 13:05 pacct.2
-rw-r--r-- 1 adm adm 277360 Oct 17 12:55 pacct.4
-rw-r--r-- 1 adm adm 264080 Oct 17 12:45 pacct.6
-rw-r--r-- 1 adm adm 255840 Oct 17 12:40 pacct.7
-rw-r--r-- 1 adm adm 254120 Oct 17 13:10 pacct.1
-rw-r--r-- 1 adm adm 250360 Oct 17 12:25 pacct.10
-rw-r--r-- 1 adm adm 248880 Oct 17 13:00 pacct.3
-rw-r--r-- 1 adm adm 247200 Oct 17 12:35 pacct.8
-rw-r--r-- 1 adm adm 246720 Oct 17 13:15 pacct.0
-rw-r--r-- 1 adm adm 245920 Oct 17 12:50 pacct.5
-rw-r--r-- 1 root root 190229 Oct 5 03:02 messages.1
-rw-r--r-- 1 adm adm 156800 Oct 17 13:17 pacct
-rw-r--r-- 1 adm adm 129084 Oct 17 08:36 wtmpx
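The `sort +4rn` form used above is historical syntax. The following sketch shows the
POSIX-equivalent -k form, run here on made-up ls -l style lines rather than a live /var/adm
directory.

```shell
# -k5,5rn: skip four fields and sort on the fifth (the size column) in
# reverse numeric order -- equivalent to the historical `sort +4rn`.
sorted=$(printf '%s\n' \
  '-rw-r--r-- 1 adm adm 697040 Oct 17 12:30 pacct.9' \
  '-r--r--r-- 1 root root 4568368 Oct 17 08:36 lastlog' \
  '-rw-r--r-- 1 adm adm 129084 Oct 17 08:36 wtmpx' | sort -k5,5rn)
echo "$sorted"    # lastlog, the largest file, sorts first
```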
Example 6–6 Finding Large Files (Sorting by the Left Most Character)
In the following example, the lastlog and messages files are the largest files in the /var/adm
directory.
$ cd /var/adm
$ ls -s | sort -nr | more
48 lastlog
30 messages
24 wtmpx
18 pacct
8 utmpx
2 vold.log
2 sulog
2 sm.bin/
2 sa/
2 passwd/
2 pacct1
2 log/
2 acct/
0 spellhist
0 aculog
total 144
$ du -s /var/adm /var/spool/lp
130 /var/adm
40 /var/spool/lp
The following example shows the sizes of two directories and includes the sizes of all the
subdirectories and files that are contained within each directory. The total number of blocks
that are contained in each directory is also displayed.
$ du /var/adm /var/spool/lp
2 /var/adm/exacct
2 /var/adm/log
2 /var/adm/streams
2 /var/adm/acct/fiscal
2 /var/adm/acct/nite
2 /var/adm/acct/sum
8 /var/adm/acct
2 /var/adm/sa
2 /var/adm/sm.bin
258 /var/adm
4 /var/spool/lp/admins
2 /var/spool/lp/requests/printing.Eng.Sun.COM
4 /var/spool/lp/requests
4 /var/spool/lp/system
2 /var/spool/lp/fifos
24 /var/spool/lp
$ du -h /usr/share/audio
796K /usr/share/audio/samples/au
797K /usr/share/audio/samples
798K /usr/share/audio
2 Display users, directories, or file systems, and the number of 1024-byte blocks used.
# quot [-a] [filesystem ...]
-a Lists all users of each mounted UFS file system and the number of 1024-byte
blocks used.
filesystem Identifies a UFS file system. Users and the number of blocks used are displayed
for that file system.
Note – The quot command works only on local UFS file systems.
Example 6–9 Displaying the User Ownership of Local UFS File Systems
In the following example, users of the root (/) file system are displayed. In the subsequent
example, users of all mounted UFS file systems are displayed.
# quot /
/dev/rdsk/c0t0d0s0:
43340 root
3142 rimmer
47 uucp
35 lp
30 adm
4 bin
4 daemon
# quot -a
/dev/rdsk/c0t0d0s0 (/):
43340 root
3150 rimmer
47 uucp
35 lp
30 adm
4 bin
4 daemon
/dev/rdsk/c0t0d0s6 (/usr):
460651 root
206632 bin
791 uucp
46 lp
4 daemon
1 adm
/dev/rdsk/c0t0d0s7 (/export/home):
9 root
Other ways to conserve disk space include emptying temporary directories such as the
directories located in /var/tmp or /var/spool, and deleting core and crash dump files. For
more information about crash dump files, refer to Chapter 17, “Managing System Crash
Information (Tasks).”
$ ls -tl /var/adm
total 134
-rw------- 1 root root 315 Sep 24 14:00 sulog
-r--r--r-- 1 root other 350700 Sep 22 11:04 lastlog
-rw-r--r-- 1 root bin 4464 Sep 22 11:04 utmpx
-rw-r--r-- 1 adm adm 20088 Sep 22 11:04 wtmpx
-rw-r--r-- 1 root other 0 Sep 19 03:10 messages
-rw-r--r-- 1 root other 0 Sep 12 03:10 messages.0
-rw-r--r-- 1 root root 11510 Sep 10 16:13 messages.1
-rw-r--r-- 1 root root 0 Sep 10 16:12 vold.log
drwxr-xr-x 2 root sys 512 Sep 10 15:33 sm.bin
drwxrwxr-x 5 adm adm 512 Sep 10 15:19 acct
drwxrwxr-x 2 adm sys 512 Sep 10 15:19 sa
-rw------- 1 uucp bin 0 Sep 10 15:17 aculog
-rw-rw-rw- 1 root bin 0 Sep 10 15:17 spellhist
drwxr-xr-x 2 adm adm 512 Sep 10 15:17 log
drwxr-xr-x 2 adm adm 512 Sep 10 15:17 passwd
2 Find files that have not been accessed for a specified number of days and list them in a file.
# find directory -type f [-atime +nnn] [-mtime +nnn] -print > filename &
directory Identifies the directory you want to search. Directories below this directory
are also searched.
-atime +nnn Finds files that have not been accessed within the number of days (nnn) that
you specify.
-mtime +nnn Finds files that have not been modified within the number of days (nnn) that
you specify.
filename Identifies the file that contains the list of inactive files.
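For example, the following sketch lists regular files that have not been accessed in the last 60 days; the directory and output file shown are illustrative defaults, not values from any procedure above:

```shell
# List regular files under $DIR not accessed in the last 60 days,
# saving the list to $OUT. Both default to safe, illustrative values.
DIR=${DIR:-$(mktemp -d)}
OUT=${OUT:-$(mktemp)}
find "$DIR" -type f -atime +60 -print > "$OUT"
wc -l < "$OUT"   # number of inactive files found
```

Running the command in the background with &, as shown in the syntax above, lets a long search over a large directory tree proceed while you continue working.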
Caution – Ensure that you are in the correct directory before completing Step 3. Step 3 deletes all
files in the current directory.
4 Change to other directories that contain unnecessary, temporary, or obsolete subdirectories and
files. Delete these subdirectories and files by repeating Step 3.
# cd mywork
# ls
filea.000
fileb.000
filec.001
# rm -r *
# ls
#
2 Change to the directory where you want to search for core files.
3 Find and remove any core files in this directory and its subdirectories.
# find . -name core -exec rm {} \;
# cd /home/jones
# find . -name core -exec rm {} \;
Caution – Ensure you are in the correct directory before completing Step 3. Step 3 deletes all files
in the current directory.
# cd /var/crash/venus
# rm *
# ls
This chapter describes how to set up and administer quotas for disk space and inodes.
Using Quotas
Once quotas are in place, they can be changed to adjust the amount of disk space or the number
of inodes that users can consume. Additionally, quotas can be added or removed as system
needs change. For instructions on changing quotas or the amount of time that quotas can be
exceeded, disabling individual quotas, or removing quotas from file systems, see “Changing and
Removing Quotas” on page 103.
In addition, quota status can be monitored. Quota commands enable administrators to display
information about quotas on a file system, or search for users who have exceeded their quotas.
For procedures that describe how to use these commands, see “Checking Quotas” on page 101.
Setting Up Quotas
Setting up quotas involves these general steps:
1. Ensuring that quotas are enforced each time the system is rebooted by adding a quota option
to the /etc/vfstab file entries. Also, creating a quotas file in the top-level directory of the
file system.
2. After you create a quota for one user, copying the quota as a prototype to set up other user
quotas.
3. Before you turn quotas on, checking the consistency of the proposed quotas with the current
disk usage to make sure that there are no conflicts.
4. Turning quotas on for one or more file systems.
For specific information about these procedures, see “Setting Up Quotas (Task Map)” on
page 96.
The following table describes the commands that you use to set up disk quotas.
edquota(1M) Sets the hard limits and soft limits on the number of inodes and the amount of
disk space for each user.
1. Configure a file system for quotas. Edit the /etc/vfstab file so that quotas are activated each
time the file system is mounted. Also, create a quotas file. See “How to Configure File
Systems for Quotas” on page 96.
2. Set up quotas for a user. Use the edquota command to create disk quotas and inode quotas
for a single user account. See “How to Set Up Quotas for a User” on page 97.
3. (Optional) Set up quotas for multiple users. Use the edquota command to apply prototype
quotas to other user accounts. See “How to Set Up Quotas for Multiple Users” on page 98.
4. Check for consistency. Use the quotacheck command to compare quotas to current disk
usage for consistency across one or more file systems. See “How to Check Quota
Consistency” on page 98.
2 Edit the /etc/vfstab file and add rq to the mount options field for each UFS file system that
will have quotas.
3 Change directory to the root of the file system that will have quotas.
The following example line from the /etc/vfstab file shows that the local /work directory is
mounted with quotas enabled, signified by the rq entry under the mount options column.
2 Use the quota editor to create a temporary file that contains one line of quota information for
each mounted UFS file system that has a quotas file in the file system's root directory.
# edquota username
where username is the user for whom you want to set up quotas.
3 Change the number of 1-Kbyte disk blocks, both soft and hard, and the number of inodes, both
soft and hard, from the default of 0, to the quotas that you specify for each file system.
The following example shows the same line in the temporary file after quotas have been set up.
fs /files blocks (soft = 50, hard = 60) inodes (soft = 90, hard = 100)
2 Use the quota editor to apply the quotas you already established for a prototype user to the
additional users that you specify.
# edquota -p prototype-user username ...
prototype-user Is the user name of the account for which you have set up quotas.
username ... Specifies one or more user names of additional accounts. More than one user
name is specified by separating each user name with a space.
Also keep in mind that running the quotacheck command on large file systems can be
time-consuming.
Note – To ensure accurate disk data, the file systems being checked should be quiescent when
you run the quotacheck command manually.
# quotacheck -va
*** Checking quotas for /dev/rdsk/c0t0d0s7 (/export/home)
filesystem ... Turns on quotas for one or more file systems that you specify. More than one
file system is specified by separating each file system name with a space.
Check for exceeded quotas. Display the quotas and disk use for individual users on file systems
on which quotas have been activated by using the quota command. See “How to Check for
Exceeded Quotas” on page 101.
Check for quotas on a file system. Display the quotas and disk use for all users on one or more
file systems by using the repquota command. See “How to Check Quotas on a File System” on
page 102.
Change the soft limit default. Change the length of time that users can exceed their disk space
quotas or inode quotas by using the edquota command. See “How to Change the Soft Limit
Default” on page 103.
Change quotas for a user. Use the quota editor, edquota, to change quotas for an individual
user. See “How to Change Quotas for a User” on page 104.
Disable quotas for a user. Use the quota editor, edquota, to disable quotas for an individual
user. See “How to Disable Quotas for a User” on page 105.
Turn off quotas. Turn off quotas by using the quotaoff command. See “How to Turn Off
Quotas” on page 106.
Checking Quotas
After you have set up and turned on disk quotas and inode quotas, you can check for users who
exceed their quotas. In addition, you can check quota information for entire file systems.
The following table describes the commands that you use to check quotas.
Command Task
quota(1M) Displays user quotas and current disk use, and information about
users who are exceeding their quotas
repquota(1M) Displays quotas, files, and the amount of space that is owned for
specified file systems
2 Display user quotas for mounted file systems where quotas are enabled.
# quota [-v] username
-v Displays one or more users' quotas on all mounted file systems that have quotas.
username Is the login name or UID of a user's account.
# quota -v 301
Disk quotas for bob (uid 301):
Filesystem usage quota limit timeleft files quota limit timeleft
/export/home 0 1 2 0 2 3
Filesystem Is the mount point for the file system.
usage Is the current block usage.
2 Display all quotas for one or more file systems, even if there is no usage.
# repquota [-v] [-a] [filesystem ...]
-v Reports on quotas for all users, even those users who do not consume resources.
-a Reports on all file systems.
filesystem Reports on the specified file system.
# repquota -va
/dev/dsk/c0t3d0s7 (/export/home):
Block limits File limits
User used soft hard timeleft used soft hard timeleft
#301 -- 0 1 2.0 days 0 2 3
#341 -- 57 50 60 7.0 days 2 90 100
Block limits Definition
used Is the current block usage.
The following table describes the commands that you use to change quotas or to remove quotas.
edquota(1M) Changes the hard limits and soft limits on the number of inodes or amount of
disk space for each user. Also, changes the soft limit for each file system with a
quota.
You can change the length of time that users can exceed their disk space quotas or inode quotas
by using the edquota command.
2 Use the quota editor to create a temporary file that contains soft time limits.
# edquota -t
where the -t option specifies the editing of the soft time limits for each file system.
3 Change the time limits from 0 (the default) to the time limits that you specify, using numbers
and the keywords month, week, day, hour, min, or sec.
The following example shows the same temporary file after the time limit for exceeding the
blocks quota has been changed to 2 weeks. Also, the time limit for exceeding the number of files
has been changed to 16 days.
2 Use the quota editor to open a temporary file that contains one line for each mounted file
system that has a quotas file in the file system's root directory.
# edquota username
where username specifies the user name whose quota you want to change.
Caution – You can specify multiple users as arguments to the edquota command. However, the
user to whom this information belongs is not displayed. To avoid confusion, specify only one
user name.
3 Specify the number of 1-Kbyte disk blocks, both soft and hard, and the number of inodes, both
soft and hard.
The following output shows the same temporary file after quotas have been changed.
# quota -v smith
Disk quotas for smith (uid 12):
Filesystem usage quota limit timeleft files quota limit timeleft
2 Use the quota editor to create a temporary file containing one line for each mounted file system
that has a quotas file in its top-level directory.
# edquota username
where username specifies the user name whose quota you want to disable.
Caution – You can specify multiple users as arguments to the edquota command. However, the
user to whom this information belongs is not displayed. To avoid confusion, specify only one
user name.
3 Change the number of 1-Kbyte disk blocks, both soft and hard, and the number of inodes, both
soft and hard, to 0.
Note – Ensure that you change the values to zero. Do not delete the line from the text file.
fs /files blocks (soft = 50, hard = 60) inodes (soft = 90, hard = 100)
The following example shows the same temporary file after quotas have been disabled.
filesystem Turns off quotas for one or more file systems that you specify. More than one file
system is specified by separating each file system name with a space.
# quotaoff -v /export/home
/export/home: quotas turned off
This chapter describes how to schedule routine or single (one-time) system tasks by using the
crontab and at commands.
This chapter also explains how to control access to these commands by using the following files:
■ cron.deny
■ cron.allow
■ at.deny
For information on the procedures that are associated with scheduling system tasks, see the
following:
■ “Creating and Editing crontab Files (Task Map)” on page 109
■ “Using the at Command (Task Map)” on page 122
Create or edit a crontab file. Use the crontab -e command to create or edit a crontab file. See
“How to Create or Edit a crontab File” on page 115.
Verify that a crontab file exists. Use the ls -l command to verify the contents of the
/var/spool/cron/crontabs directory. See “How to Verify That a crontab File Exists” on
page 116.
Display a crontab file. Use the crontab -l command to display the crontab file. See “How to
Display a crontab File” on page 117.
Ways to Automatically Execute System Tasks
Remove a crontab file. The crontab file is set up with restrictive permissions. Use the
crontab -r command, rather than the rm command, to remove a crontab file. See “How to
Remove a crontab File” on page 118.
Deny crontab access. To deny users access to crontab commands, add user names to the
/etc/cron.d/cron.deny file by editing this file. See “How to Deny crontab Command Access”
on page 120.
Limit crontab access to specified users. To allow users access to the crontab command, add
user names to the /etc/cron.d/cron.allow file. See “How to Limit crontab Command Access
to Specified Users” on page 120.
This section contains overview information about two commands, crontab and at, which
enable you to schedule routine tasks to execute automatically. The crontab command
schedules repetitive commands. The at command schedules tasks that execute once.
The following table summarizes crontab and at commands, as well as the files that enable you
to control access to these commands.
You can also use the Solaris Management Console's Scheduled Jobs tool to schedule routine
tasks. For information on using and starting the Solaris Management Console, see Chapter 2,
“Working With the Solaris Management Console (Tasks),” in System Administration Guide:
Basic Administration.
Additionally, users can schedule crontab commands to execute other routine system tasks,
such as sending reminders and removing backup files.
For step-by-step instructions on scheduling crontab jobs, see “How to Create or Edit a crontab
File” on page 115.
Similar to crontab, the at command allows you to schedule the automatic execution of routine
tasks. However, unlike crontab files, at files execute their tasks once. Then, they are removed
from their directory. Therefore, the at command is most useful for running simple commands
or scripts that direct output into separate files for later examination.
Submitting an at job involves typing a command and following the at command syntax to
specify options to schedule the time your job will be executed. For more information about
submitting at jobs, see “Description of the at Command” on page 123.
The at command stores the command or script you ran, along with a copy of your current
environment variables, in the /var/spool/cron/atjobs directory. Your at job file name is
given a long number that specifies its location in the at queue, followed by the .a extension,
such as 793962000.a.
The cron daemon checks for at jobs at startup and listens for new jobs that are submitted. After
the cron daemon executes an at job, the at job's file is removed from the atjobs directory. For
more information, see the at(1) man page.
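The long number appears to be the job's scheduled execution time in seconds since the Unix epoch, which is what keeps the queue in chronological order. A quick way to decode such an ID is sketched below; the at_job_time helper is hypothetical, and the snippet assumes GNU date:

```shell
# Decode an at job ID of the form <epoch-seconds>.a into a UTC timestamp.
# at_job_time is a hypothetical helper for illustration; assumes GNU date.
at_job_time() {
    jobid=$1
    date -u -d "@${jobid%.a}" +"%Y-%m-%d %H:%M:%S"
}
at_job_time 793962000.a
```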
For step-by-step instructions on scheduling at jobs, see “How to Create an at Job” on page 124.
For example, a crontab file named root is supplied during SunOS software installation. The
file's contents include these command lines:
10 3 * * * /usr/sbin/logadm (1)
15 3 * * 0 /usr/lib/fs/nfs/nfsfind (2)
1 2 * * * [ -x /usr/sbin/rtc ] && /usr/sbin/rtc -c > /dev/null 2>&1 (3)
30 3 * * * [ -x /usr/lib/gss/gsscred_clean ] && /usr/lib/gss/gsscred_clean (4)
The following describes the output for each of these command lines:
■ The first line runs the logadm command at 3:10 a.m. every day.
■ The second line executes the nfsfind script every Sunday at 3:15 a.m.
■ The third line runs a script that checks for daylight saving time (and makes corrections, if
necessary) at 2:01 a.m. daily.
If there is no RTC time zone, nor an /etc/rtc_config file, this entry does nothing.
x86 only – The /usr/sbin/rtc script can only be run on an x86 based system.
■ The fourth line checks for (and removes) duplicate entries in the Generic Security Service
table, /etc/gss/gsscred_db, at 3:30 a.m. daily.
For more information about the syntax of lines within a crontab file, see “Syntax of crontab
File Entries” on page 114.
The crontab files are stored in the /var/spool/cron/crontabs directory. Several crontab files
besides root are provided during SunOS software installation. See the following table.
adm Accounting
lp Printing
Besides the default crontab files, users can create crontab files to schedule their own system
tasks. Other crontab files are named after the user accounts in which they are created, such as
bob, mary, smith, or jones.
To access crontab files that belong to root or other users, superuser privileges are required.
Procedures explaining how to create, edit, display, and remove crontab files are described in
subsequent sections.
In much the same way, the cron daemon controls the scheduling of at files. These files are
stored in the /var/spool/cron/atjobs directory. The cron daemon also listens for
notifications from the crontab commands regarding submitted at jobs.
Minute 0-59
Hour 0-23
Day of month 1-31
Month 1-12
Day of week 0-6 (0 = Sunday)
Follow these guidelines for using special characters in crontab time fields:
■ Use a space to separate each field.
■ Use a comma to separate multiple values.
■ Use a hyphen to designate a range of values.
■ Use an asterisk as a wildcard to include all possible values.
■ Use a comment mark (#) at the beginning of a line to indicate a comment or a blank line.
For example, the following crontab command entry displays a reminder in the user's console
window at 4 p.m. on the first and fifteenth days of every month.
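An entry matching that schedule might look like the following sketch; the reminder text is illustrative, not part of any supplied crontab file:

```shell
# minute hour day-of-month month day-of-week   command
# Runs at 4:00 p.m. on the 1st and 15th of every month (illustrative message).
0 16 1,15 * * echo "Monthly reports due" > /dev/console
```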
Each command within a crontab file must consist of one line, even if that line is very long. The
crontab file does not recognize extra carriage returns. For more detailed information about
crontab entries and command options, refer to the crontab(1) man page.
The following example shows how to determine if an editor has been defined, and how to set up
vi as the default.
$ which $EDITOR
$
$ EDITOR=vi
$ export EDITOR
where username specifies the name of the user's account for which you want to create or edit a
crontab file. You can create your own crontab file without superuser privileges, but you must
have superuser privileges to create or edit a crontab file for root or another user.
Caution – If you accidentally type the crontab command with no option, press the interrupt
character for your editor. This character allows you to quit without saving changes. If you
instead saved changes and exited the file, the existing crontab file would be overwritten with an
empty file.
# crontab -e jones
The following command entry added to a new crontab file automatically removes any log files
from the user's home directory at 1:00 a.m. every Sunday morning. Because the command entry
does not redirect output, redirect characters are added to the command line after *.log. Doing
so ensures that the command executes properly.
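An entry matching that description might look like this sketch; the home directory path is illustrative:

```shell
# 1:00 a.m. every Sunday: remove log files, discarding output and errors.
0 1 * * 0 rm /home/jones/*.log > /dev/null 2>&1
```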
Verify the contents of the user's crontab file by using the crontab -l command as described in
“How to Display a crontab File” on page 117.
By default, the crontab -l command displays your own crontab file. To display crontab files
that belong to other users, you must be superuser.
You do not need to become superuser or assume an equivalent role to display your own
crontab file.
Caution – If you accidentally type the crontab command with no option, press the interrupt
character for your editor. This character allows you to quit without saving changes. If you
instead saved changes and exited the file, the existing crontab file would be overwritten with an
empty file.
$ crontab -l
13 13 * * * chmod g+w /home1/documents/*.book > /dev/null 2>&1
$ su
Password:
Sun Microsystems Inc. SunOS 5.10 s10_51 May 2004
# crontab -l
#ident "@(#)root 1.19 98/07/06 SMI" /* SVr4.0 1.1.3.1 */
#
# The root crontab should be used to perform accounting data collection.
#
#
10 3 * * * /usr/sbin/logadm
15 3 * * 0 /usr/lib/fs/nfs/nfsfind
30 3 * * * [ -x /usr/lib/gss/gsscred_clean ] && /usr/lib/gss/gsscred_clean
#10 3 * * * /usr/lib/krb5/kprop_script ___slave_kdcs___
$ su
Password:
Sun Microsystems Inc. SunOS 5.10 s10_51 May 2004
# crontab -l jones
13 13 * * * cp /home/jones/work_files /usr/backup/. > /dev/null 2>&1
You do not have to change the directory to /var/spool/cron/crontabs (where crontab files
are located) to use this command.
You do not need to become superuser or assume an equivalent role to remove your own
crontab file.
Caution – If you accidentally type the crontab command with no option, press the interrupt
character for your editor. This character allows you to quit without saving changes. If you
instead saved changes and exited the file, the existing crontab file would be overwritten with an
empty file.
$ ls /var/spool/cron/crontabs
adm jones lp root smith sys uucp
$ crontab -r
$ ls /var/spool/cron/crontabs
adm jones lp root sys uucp
The cron.deny and cron.allow files consist of a list of user names, one user name per line.
Superuser privileges are required to edit or create the cron.deny and cron.allow files.
The cron.deny file, which is created during SunOS software installation, contains the following
user names:
$ cat /etc/cron.d/cron.deny
daemon
bin
smtp
nuucp
listen
nobody
noaccess
None of the user names in the default cron.deny file can access the crontab command. You can
edit this file to add other user names that will be denied access to the crontab command.
No default cron.allow file is supplied. So, after Solaris software installation, all users (except
users who are listed in the default cron.deny file) can access the crontab command. If you
create a cron.allow file, only the users listed in that file can access the crontab command.
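The access rule described above can be sketched as a small shell function. This is assumed logic for illustration only, not cron's actual implementation; the file paths are parameters so copies of the files can be tested:

```shell
# Decide crontab access per the documented rule:
# - if cron.allow exists, only users listed there are allowed;
# - otherwise, users listed in cron.deny are denied;
# - otherwise, access is allowed.
cron_access() {
    user=$1 allow=$2 deny=$3
    if [ -f "$allow" ]; then
        if grep -qx "$user" "$allow"; then echo allowed; else echo denied; fi
    elif [ -f "$deny" ] && grep -qx "$user" "$deny"; then
        echo denied
    else
        echo allowed
    fi
}
cron_access "$(id -un)" /etc/cron.d/cron.allow /etc/cron.d/cron.deny
```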
2 Edit the /etc/cron.d/cron.deny file and add user names, one user per line. Include users who
will be denied access to the crontab commands.
daemon
bin
smtp
nuucp
listen
nobody
noaccess
username1
username2
username3
.
.
.
4 Add the user names, one user name per line. Include users that will be allowed to use the
crontab command.
root
username1
username2
username3
.
.
.
$ cat /etc/cron.d/cron.deny
daemon
bin
smtp
nuucp
listen
nobody
noaccess
jones
temp
visitor
The following example shows a cron.allow file. The users root, jones, lp, and smith are the
only users who can access the crontab command.
$ cat /etc/cron.d/cron.allow
root
jones
lp
smith
$ crontab -l
If the user can access the crontab command, and already has created a crontab file, the file is
displayed. Otherwise, if the user can access the crontab command but no crontab file exists, a
message similar to the following message is displayed:
This user either is listed in the cron.allow file (if the file exists), or the user is not listed in
the cron.deny file.
If the user cannot access the crontab command, the following message is displayed whether or
not a previous crontab file exists:
This message means that either the user is not listed in the cron.allow file (if the file exists), or
the user is listed in the cron.deny file.
Display the at queue. Use the atq command to display the at queue. See “How to Display the
at Queue” on page 125.
Verify an at job. Use the atq command to confirm that at jobs that belong to a specific user
have been submitted to the queue. See “How to Verify an at Job” on page 125.
Deny access to the at command. To deny users access to the at command, edit the
/etc/cron.d/at.deny file. See “How to Deny Access to the at Command” on page 127.
By default, users can create, display, and remove their own at job files. To access at files that
belong to root or other users, you must have superuser privileges.
When you submit an at job, it is assigned a job identification number along with the .a
extension. This designation becomes the job's file name, as well as its queue number.
Note – If output from this command or script is important, be sure to direct the output to a
file for later examination.
For example, the following at job removes core files from the user account smith near
midnight on the last day of July.
$ at 11:45pm July 31
at> rm /home/smith/*core*
at> Press Control-d
commands will be executed using /bin/csh
job 933486300.a at Tue Jul 31 23:45:00 2004
The at.deny file, which is created during SunOS software installation, contains the following
user names:
daemon
bin
smtp
nuucp
listen
nobody
noaccess
With superuser privileges, you can edit the at.deny file to add other user names whose at
command access you want to restrict.
2 At the at prompt, type the commands or scripts that you want to execute, one per line.
You may type more than one command by pressing Return at the end of each line.
$ at -m 1930
at> rm /home/jones/*.backup
at> Press Control-D
job 897355800.a at Thu Jul 12 19:30:00 2004
She received an email message that confirmed the execution of her at job.
The following example shows how jones scheduled a large at job for 4:00 a.m. Saturday
morning. The job output was directed to a file named big.file.
$ at 4 am Saturday
at> sort -r /usr/dict/words > /export/home/jones/big.file
$ at -l
897543900.a Sat Jul 14 23:45:00 2004
897355800.a Thu Jul 12 19:30:00 2004
897732000.a Tue Jul 17 04:00:00 2004
The following example shows the output that is displayed when a single job is specified with the
at -l command.
$ at -l 897732000.a
897732000.a Tue Jul 17 04:00:00 2004
You do not need to become superuser or assume an equivalent role to remove your own at job.
1 Remove the at job from the queue before the job is executed.
$ at -r [job-id]
where the -r job-id option specifies the identification number of the job you want to remove.
2 Verify that the at job is removed by using the at -l (or the atq) command.
The at -l command displays the jobs remaining in the at queue. The job whose identification
number you specified should not appear.
$ at -l [job-id]
$ at -l
897543900.a Sat Jul 14 23:45:00 2003
897355800.a Thu Jul 12 19:30:00 2003
897732000.a Tue Jul 17 04:00:00 2003
$ at -r 897732000.a
$ at -l 897732000.a
at: 897732000.a: No such file or directory
2 Edit the /etc/cron.d/at.deny file and add the names of users, one user name per line, that will
be prevented from using the at commands.
daemon
bin
smtp
nuucp
listen
nobody
noaccess
username1
username2
username3
.
.
.
$ cat at.deny
daemon
bin
smtp
nuucp
listen
nobody
noaccess
jones
smith
$ at 2:30pm
at: you are not authorized to use at. Sorry.
This message confirms that the user is listed in the at.deny file.
If at command access is allowed, then the at -l command returns nothing.
For information on the step-by-step procedures that are associated with system accounting, see
“System Accounting (Task Map)” on page 134.
For reference information about the various system accounting reports, see Chapter 10,
“System Accounting (Reference).”
What is System Accounting?
Switching to microstate accounting provides substantially more accurate data about user
processes and the amount of time they spend in various states. In addition, this information is
used to generate more accurate load averages and statistics from the /proc file system. For more
information, see the proc(4) man page.
Connect Accounting
Connect accounting enables you to determine the following information:
■ The length of time a user was logged in
■ How the tty lines are being used
■ The number of reboots on your system
■ How many times the accounting software was turned off and on
To provide this information on connect sessions, the system stores the following data:
■ Record of time adjustments
■ Boot times
■ Number of times the accounting software was turned off and on
■ Changes in run levels
■ The creation of user processes (login processes and init processes)
■ The terminations of processes
These records are produced from the output of system programs such as date, init, login,
ttymon, and acctwtmp. They are stored in the /var/adm/wtmpx file.
Process Accounting
Process accounting enables you to keep track of the following data about each process that runs
on your system:
■ User IDs and group IDs of users using the process
■ Beginning times and elapsed times of the process
■ CPU time for the process (user time and system time)
■ Amount of memory used by the process
■ Commands run by the process
■ The tty that controls the process
Every time a process terminates, the exit program collects this information and writes it to the
/var/adm/pacct file.
Disk Accounting
Disk accounting enables you to gather and format the following data about the files each user
has on disks:
■ User name and user ID of the user
■ Number of blocks that are used by the user's files
This data is collected by the /usr/lib/acct/dodisk shell script at intervals that are determined
by the entry you add to the /var/spool/cron/crontabs/root file. In turn, the dodisk script
invokes the acctdisk and acctdusg commands. These commands gather disk usage by login
name.
The acctdusg command might overcharge for files that are written randomly, which can create
holes in the files. This problem occurs because the acctdusg command does not read the
indirect blocks of a file when determining the file size. Rather, the acctdusg command
determines the file size by checking the current file size value in the file's inode.
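The effect is easy to demonstrate with a sparse file. This sketch uses generic tools and an illustrative size rather than the accounting commands themselves:

```shell
# Create a 1 MiB file that is mostly a hole: one byte written at the end.
# Its inode size (the value acctdusg reads) is large; the blocks actually
# allocated on disk are few.
f=$(mktemp)
dd if=/dev/zero of="$f" bs=1 count=1 seek=1048575 2>/dev/null
echo "apparent bytes: $(wc -c < "$f")"
echo "allocated KB:   $(du -k "$f" | awk '{print $1}')"
rm -f "$f"
```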
Fee Calculations
The chargefee utility stores charges for special services that are provided to a user in the
/var/adm/fee file. A special service, for example, is file restoration. Each entry in the file
consists of a user login name, user ID, and the fee. This file is checked by the runacct script
every day, and new entries are merged into the accounting records. For instructions on running
the chargefee script to bill users, see “How to Bill Users” on page 138.
3. The turnacct script, invoked with the -on option, begins process accounting. Specifically,
the turnacct script executes the accton program with the /var/adm/pacct argument.
4. The remove shell script “cleans up” the saved pacct and wtmpx files that are left in the sum
directory by the runacct script.
5. The login and init programs record connect sessions by writing records into the
/var/adm/wtmpx file. Date changes (using date with an argument) are also written to the
/var/adm/wtmpx file. Reboots and shutdowns using the acctwtmp command are also
recorded in the /var/adm/wtmpx file.
6. When a process ends, the kernel writes one record per process, using the acct.h format, in
the /var/adm/pacct file.
Every hour, the cron command executes the ckpacct script to check the size of the
/var/adm/pacct file. If the file grows beyond 500 blocks (default), the turnacct switch
command is executed. (The program moves the pacct file to the pacctn file and creates a
new file.) The advantage of having several smaller pacct files becomes apparent when you
try to restart the runacct script if a failure occurs when processing these records.
7. The runacct script is executed by the cron command each night. The runacct script
processes the accounting files to produce command summaries and usage summaries by
user name. These accounting files are processed: /var/adm/pacctn, /var/adm/wtmpx,
/var/adm/fee, and /var/adm/acct/nite/disktacct.
8. The /usr/lib/acct/prdaily script is executed on a daily basis by the runacct script to
write the daily accounting information in the /var/adm/acct/sum/rprtMMDD files.
9. The monacct script should be executed on a monthly basis (or at intervals you determine,
such as at the end of every fiscal period). The monacct script creates a report that is based on
data stored in the sum directory that has been updated daily by the runacct script. After
creating the report, the monacct script “cleans up” the sum directory to prepare the
directory's files for the new runacct data.
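The hourly ckpacct size check described in step 6 above can be sketched as follows. The 500-block default and the pacct file names follow the text; the rotation itself is normally performed by the turnacct switch command, emulated here with a plain mv in a temporary directory.

```shell
# Sketch of the ckpacct size check: rotate the pacct file once it exceeds
# a threshold measured in 512-byte blocks (500 by default, per the text).
dir=$(mktemp -d)
pacct="$dir/pacct"
limit=500

# Create a 600-block (600 * 512 bytes) stand-in accounting file.
dd if=/dev/zero of="$pacct" bs=512 count=600 2>/dev/null

blocks=$(( ($(wc -c < "$pacct") + 511) / 512 ))
if [ "$blocks" -gt "$limit" ]; then
    mv "$pacct" "$dir/pacct1"   # turnacct switch would do this step
    : > "$pacct"                # ...and start a fresh pacct file
fi
```

Because the threshold is checked hourly, no single pacct file grows unbounded, which is what makes runacct restarts cheap.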
Set up system accounting. Set up system accounting by performing the following tasks:
■ Create the /etc/rc0.d/K22acct and /etc/rc2.d/S22acct files.
■ Modify the /var/spool/cron/crontabs/adm and /var/spool/cron/crontabs/root crontab files.
For instructions, see “How to Set Up System Accounting” on page 135.
Bill users. Run the /usr/lib/acct/chargefee username amount command. For instructions, see “How to Bill Users” on page 138.
Fix a corrupted wtmpx file. Convert the wtmpx file from binary to ASCII format. For instructions, see “How to Fix a Corrupted wtmpx File” on page 139.
Fix tacct errors. Run the prtacct script to check the /var/adm/acct/sum/tacctprev file. Then, patch the latest /var/adm/acct/sum/tacct.MMDD file. You will need to re-create the /var/adm/acct/sum/tacct file. For instructions, see “How to Fix tacct Errors” on page 139.
Restart the runacct script. Remove the lastdate file and any lock files. Then, manually restart the runacct script. For instructions, see “How to Restart the runacct Script” on page 140.
Disable system accounting temporarily. Edit the adm crontab file to stop the ckpacct, runacct, and monacct programs from running. For instructions, see “How to Temporarily Stop System Accounting” on page 141.
Disable system accounting permanently. Delete the entries for the ckpacct, runacct, and monacct programs from the adm and root crontab files. For instructions, see “How to Permanently Disable System Accounting” on page 142.
134 System Administration Guide: Advanced Administration • September 2008
Setting Up System Accounting
You can choose which accounting scripts run by default. After these entries have been added to
the crontab files, system accounting should run automatically.
2 If necessary, install the SUNWaccr and SUNWaccu packages on your system by using the pkgadd
command.
5 Add the following lines to the adm crontab file to start the ckpacct, runacct, and monacct
scripts automatically.
# EDITOR=vi; export EDITOR
# crontab -e adm
0 * * * * /usr/lib/acct/ckpacct
30 2 * * * /usr/lib/acct/runacct 2> /var/adm/acct/nite/fd2log
30 7 1 * * /usr/lib/acct/monacct
6 Add the following line to the root crontab file to start the dodisk script automatically.
# crontab -e
30 22 * * 4 /usr/lib/acct/dodisk
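Before installing entries like those above, a quick check that each line carries the five crontab time fields plus a command can catch editing mistakes. This is only a sketch; crontab(1) performs the authoritative validation.

```shell
# Count whitespace-separated fields: a crontab entry needs at least five
# time fields (minute hour day-of-month month day-of-week) plus a command.
check_cron_line() {
    set -f            # disable globbing so "*" fields stay literal
    set -- $1
    n=$#
    set +f
    [ "$n" -ge 6 ]
}

r1=$(check_cron_line '0 * * * * /usr/lib/acct/ckpacct' && echo ok || echo bad)
r2=$(check_cron_line '30 22 * * 4 /usr/lib/acct/dodisk' && echo ok || echo bad)
r3=$(check_cron_line '30 2 * * /usr/lib/acct/runacct' && echo ok || echo bad)  # one field short
```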
Billing Users
If you provide special user services by request, you might want to bill users by running the
chargefee utility. Special services include restoring files or remote printing. The chargefee
utility records charges in the /var/adm/fee file. Each time the runacct utility is executed, new
entries are merged into the total accounting records.
# /usr/lib/acct/chargefee print_customer 10
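The bookkeeping above can be sketched with a simplified record layout. The real /var/adm/fee file holds ASCII tacct records, so the field order here (login name, user ID, fee) is an assumption for illustration only.

```shell
# Hypothetical fee-file entries: login name, user ID, and fee per line,
# as the text describes. Not the real tacct record layout.
fee=$(mktemp)
printf '%s\n' \
    'print_customer 1001 10' \
    'rimmer 101 5' \
    'print_customer 1001 7' > "$fee"

# Total the fees charged against each login name, as a merge step might.
totals=$(awk '{ sum[$1] += $3 } END { for (u in sum) print u, sum[u] }' "$fee" | sort)
echo "$totals"
rm -f "$fee"
```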
The wtmpx files seem to cause the most problems in the daily operation of system accounting.
When the date is changed manually and the system is in multiuser mode, a set of date change
records is written to the /var/adm/wtmpx file. The wtmpfix utility is designed to adjust the time
stamps in the wtmp records when a date change is encountered. However, some combinations of
date changes and reboots slip through the wtmpfix utility and cause the acctcon program to
fail.
4 Edit the latest tacct.MMDD file, removing corrupted records and writing duplicate records to another file.
6 Merge the files tacctprev and tacct.MMDD into the tacct file.
# /usr/lib/acct/acctmerg tacctprev < tacct.MMDD > tacct
If the active.MMDD file exists, check it first for error messages. If the active and lock files
exist, check the fd2log file for any relevant messages.
Run without arguments, the runacct script assumes that this invocation is the first invocation
of the day. The argument MMDD is necessary if the runacct script is being restarted and
specifies the month and day for which the runacct script reruns the accounting. The entry
point for processing is based on the contents of the statefile file. To override the statefile
file, include the desired state on the command line. For a description of the available states, see
the runacct(1M) man page.
Caution – When you run the runacct program manually, be sure to run it as user adm.
2 Edit the adm crontab file to stop the ckpacct, runacct, and monacct programs from running by
commenting out the appropriate lines.
# EDITOR=vi; export EDITOR
# crontab -e adm
#0 * * * * /usr/lib/acct/ckpacct
#30 2 * * * /usr/lib/acct/runacct 2> /var/adm/acct/nite/fd2log
#30 7 1 * * /usr/lib/acct/monacct
3 Edit the root crontab file to stop the dodisk program from running by commenting out the
appropriate line.
# crontab -e
#30 22 * * 4 /usr/lib/acct/dodisk
5 (Optional) Remove the newly added comment symbols from the crontab files.
2 Edit the adm crontab file and delete the entries for the ckpacct, runacct, and monacct
programs.
# EDITOR=vi; export EDITOR
# crontab -e adm
3 Edit the root crontab file and delete the entries for the dodisk program.
# crontab -e
For more information about system accounting tasks, see Chapter 9, “Managing System
Accounting (Tasks).”
runacct Script
The main daily accounting script, runacct, is normally invoked by the cron command outside
of normal business hours. The runacct script processes connect, fee, disk, and process
accounting files. This script also prepares daily and cumulative summary files for use by the
prdaily and monacct scripts for billing purposes.
The runacct script takes care not to damage files if errors occur. A series of
protection mechanisms is used to perform the following tasks:
■ Recognize an error
■ Provide intelligent diagnostics
■ Complete processing in such a way that the runacct script can be restarted with minimal
intervention
This script records its progress by writing descriptive messages to the active file. Files used by
the runacct script are assumed to be in the /var/adm/acct/nite directory, unless otherwise
noted. All diagnostic output during the execution of the runacct script is written to the fd2log
file.
When the runacct script is invoked, it creates the lock and lock1 files. These files are used to
prevent simultaneous execution of the runacct script. The runacct program prints an error
message if these files exist when it is invoked. The lastdate file contains the month and day the
runacct script was last invoked, and is used to prevent more than one execution per day.
For instructions on how to restart the runacct script, see “How to Restart the runacct Script”
on page 140.
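The two guards can be sketched together. The file names (lock, lastdate) and the date +%m%d format follow the text, with a temporary directory standing in for /var/adm/acct/nite.

```shell
# Sketch of runacct's double-run guards: a lock file blocks concurrent
# execution, and lastdate (in date +%m%d form) blocks a second run on the
# same day.
nite=$(mktemp -d)
today=$(date +%m%d)

start_run() {
    if [ -f "$nite/lock" ]; then
        echo locked; return 1
    fi
    if [ -f "$nite/lastdate" ] && [ "$(cat "$nite/lastdate")" = "$today" ]; then
        echo already-ran; return 1
    fi
    : > "$nite/lock"
    echo "$today" > "$nite/lastdate"
    echo started
}

first=$(start_run)     # no guard files yet: the run proceeds
second=$(start_run)    # lock still present: a second invocation is refused
```

Removing the lock and lastdate files by hand, as the restart procedure directs, is what clears these guards.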
To allow the runacct script to be restarted, processing is broken down into separate re-entrant
states. The statefile file is used to track the last state completed. When each state is
completed, the statefile file is updated to reflect the next state. After processing for the state is
complete, the statefile file is read and the next state is processed. When the runacct script
reaches the CLEANUP state, it removes the locks and ends. States are executed as shown in the
following table.
State Description
SETUP The turnacct switch command is executed to create a new pacct file. The
/var/adm/pacctn process accounting files (except for the pacct file) are moved to
the /var/adm/Spacctn.MMDD files. The /var/adm/wtmpx file is moved to the
/var/adm/acct/nite/wtmp.MMDD file (with the current time record added on
the end) and a new /var/adm/wtmpx file is created. The closewtmp and utmp2wtmp
programs add records to the wtmp.MMDD file and the new wtmpx file to account
for users who are currently logged in.
WTMPFIX The wtmpfix program checks the wtmp.MMDD file in the nite directory for
accuracy. Because some date changes cause the acctcon program to fail, the
wtmpfix program attempts to adjust the time stamps in the wtmpx file if a record of
a date change appears. This program also deletes any corrupted entries from the
wtmpx file. The fixed version of the wtmp.MMDD file is written to the tmpwtmp file.
CONNECT The acctcon program is used to record connect accounting records in the file
ctacct.MMDD. These records are in tacct.h format. In addition, the acctcon
program creates the lineuse and reboots files. The reboots file records all the
boot records found in the wtmpx file.
MERGE The acctmerg program merges the process accounting records with the connect
accounting records to form the daytacct file.
FEES The acctmerg program merges ASCII tacct records from the fee file into the
daytacct file.
DISK The dodisk script produces the disktacct file. If the dodisk script has been run,
which produces the disktacct file, the DISK program merges the file into the
daytacct file and moves the disktacct file to the /tmp/disktacct.MMDD file.
MERGETACCT The acctmerg program merges the daytacct file with the sum/tacct file, the
cumulative total accounting file. Each day, the daytacct file is saved in the
sum/tacct.MMDD file so that the sum/tacct file can be re-created if it is
corrupted or lost.
CMS The acctcms program is run several times. This program is first run to generate
the command summary by using the Spacctn files and write the data to the
sum/daycms file. The acctcms program is then run to merge the sum/daycms file
with the sum/cms cumulative command summary file. Finally, the acctcms
program is run to produce nite/daycms and nite/cms, the ASCII command
summary files from the sum/daycms and sum/cms files, respectively. The
lastlogin program is used to create the /var/adm/acct/sum/loginlog log file.
This file reports when each user last logged in. If the runacct script is run after
midnight, the dates showing the time last logged in by some users will be incorrect
by one day.
USEREXIT Any installation-dependent (local) accounting program can be run at this point.
The runacct script expects this program to be named
/usr/lib/acct/runacct.local.
CLEANUP This state cleans up temporary files, runs the prdaily script and saves its output in
the sum/rpt.MMDD file, removes the locks, and then exits.
Caution – When restarting the runacct script in the CLEANUP state, remove the last ptacct file
because this file will not be complete.
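The restart mechanism that the statefile supports can be sketched as a resumable loop. The state names follow the table above; the per-state work is stubbed out.

```shell
# Sketch of runacct's statefile-driven restart: each state records itself
# in statefile before its (stubbed) work runs, so a rerun can resume from
# the last recorded state instead of starting over.
dir=$(mktemp -d)
statefile="$dir/statefile"
states='SETUP WTMPFIX CONNECT MERGE FEES DISK MERGETACCT CMS USEREXIT CLEANUP'

run_from() {
    reached=0
    for s in $states; do
        [ "$s" = "$1" ] && reached=1
        [ "$reached" -eq 1 ] || continue
        echo "$s" > "$statefile"   # record progress for a possible restart
        # ... the real per-state processing would run here ...
    done
}

run_from SETUP                     # a normal full run
full=$(cat "$statefile")           # ends in the CLEANUP state

run_from MERGE                     # a restart resumes mid-sequence
resumed=$(cat "$statefile")
```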
■ “Daily Report” on page 146 – Shows terminal line utilization by tty number.
■ “Daily Usage Report” on page 147 – Indicates usage of system resources by users (listed in order of user ID).
■ “Daily Command Summary” on page 148 – Indicates usage of system resources by commands, listed in descending order of memory use. In other words, the command that used the most memory is listed first. This same information is reported for the month in the monthly command summary.
■ “Monthly Command Summary” on page 150 – A cumulative summary that reflects the data accumulated since the last invocation of the monacct program.
■ “Last Login Report” on page 150 – Shows the last time each user logged in (listed in chronological order).
Daily Report
This report gives information about each terminal line used. The following is a sample Daily
Report.
The from and to lines specify the time period reflected in the report. This time period covers the
time the last Daily Report was generated to the time the current Daily Report was generated.
Then, the report presents a log of system reboots, shutdowns, power failure recoveries, and any
other record written to the /var/adm/wtmpx file by the acctwtmp program. For more
information, see the acct(1M) man page.
The second part of the report is a breakdown of terminal line utilization. The TOTAL DURATION
tells how long the system was in multiuser mode (accessible through the terminal lines). The
following table describes the data provided by the Daily Report.
Column Description
MINUTES The number of minutes that the line was in use during the accounting period.
# SESS The number of times this line or port was accessed for a login session.
# ON Same as SESS. (This column no longer has meaning. Previously, this column listed
the number of times that a line or port was used to log in a user.)
# OFF The number of times a user logs out and any interrupts that occur on that line.
Generally, interrupts occur on a port when ttymon is first invoked after the system
is brought to multiuser mode. If the # OFF exceeds the # SESS by a large factor, the
multiplexer, modem, or cable is probably going bad, or a bad connection exists
somewhere. The most common cause is an unconnected cable dangling from the
multiplexer.
During real time, you should monitor the /var/adm/wtmpx file because it is the file from which
the connect accounting is derived. If the wtmpx file grows rapidly, execute the following
command to see which tty line is the noisiest.
LOGIN CPU (MINS) KCORE- MINS CONNECT (MINS) DISK # OF # OF # DISK FEE
UID NAME PRIME NPRIME PRIME NPRIME PRIME NPRIME BLOCKS PROCS SESS SAMPLES
0 TOTAL 72 148 11006173 51168 26230634 57792 539 330 0 2150 1
0 root 32 76 11006164 33664 26230616 22784 0 0 0 127 0
4 adm 0 0 22 51 0 0 0 420 0 0 0
101 rimmer 39 72 894385 1766020 539 330 0 1603 1 0 0
The following table describes the data provided by the Daily Usage Report.
Column Description
LOGIN NAME Login (or user) name of the user. Identifies a user who has multiple login
names.
CPU (MINS) Amount of time, in minutes, that the user's process used the central processing
unit. Divided into PRIME and NPRIME (nonprime) utilization. The accounting
system's version of this data is located in the /etc/acct/holidays file.
KCORE-MINS A cumulative measure of the amount of memory in Kbyte segments per minute
that a process uses while running. Divided into PRIME and NPRIME utilization.
CONNECT (MINS) Amount of time, in minutes, that a user was logged in to the system, or “real
time.” Divided into PRIME and NPRIME utilization. If these numbers are high
while the # OF PROCS is low, you can conclude that the user logs in first thing in
the morning and hardly touches the terminal the rest of the day.
DISK BLOCKS Output from the acctdusg program, which runs the disk accounting programs
and merges the accounting records (daytacct). For accounting purposes, a
block is 512 bytes.
# OF PROCS Number of processes invoked by the user. If large numbers appear, a user might
have a shell procedure that has run out of control.
# DISK SAMPLES Number of times that disk accounting was run to obtain the average number of
DISK BLOCKS.
FEE Often unused field that represents the total accumulation of units charged
against the user by the chargefee script.
These reports are sorted by TOTAL KCOREMIN, which is an arbitrary gauge but often useful for
calculating drain on a system.
TOTALS 2150 1334999.75 219.59 724258.50 6079.48 0.10 0.00 397338982 419448
The following table describes the data provided by the Daily Command Summary.
Column Description
COMMAND NAME Name of the command. All shell procedures are lumped together under
the name sh because only object modules are reported by the process
accounting system. You should monitor the frequency of programs called
a.out or core, or any other unexpected name. You can use the acctcom
program to determine who executed an oddly named command and if
superuser privileges were used.
TOTAL KCOREMIN Total cumulative measurement of the Kbyte segments of memory used by
a process per minute of run time.
MEAN SIZE-K Mean (average) of the TOTAL KCOREMIN over the number of invocations
reflected by the NUMBER CMDS.
MEAN CPU-MIN Mean (average) derived from the NUMBER CMDS and the TOTAL CPU-MIN.
HOG FACTOR Total CPU time divided by elapsed time. Shows the ratio of system
availability to system utilization, providing a relative measure of total
available CPU time consumed by the process during its execution.
CHARS TRNSFD Total number of characters transferred by the read and write system calls.
Might be negative due to overflow.
BLOCKS READ Total number of the physical block reads and writes that a process
performed.
TOTALS 42718 4398793.50 361.92 956039.00 12154.09 0.01 0.00 16100942848 825171
netscape 789 3110437.25 121.03 79101.12 25699.58 0.15 0.00 3930527232 302486
adeptedi 84 1214419.00 50.20 4174.65 24193.62 0.60 0.01 890216640 107237
acroread 145 165297.78 7.01 18180.74 23566.84 0.05 0.00 1900504064 26053
dtmail 2 64208.90 6.35 20557.14 10112.43 3.17 0.00 250445824 43280
dtaction 800 47602.28 11.26 15.37 4226.93 0.01 0.73 640057536 8095
soffice. 13 35506.79 0.97 9.23 36510.84 0.07 0.11 134754320 5712
dtwm 2 20350.98 3.17 20557.14 6419.87 1.59 0.00 190636032 14049
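As a worked example of the HOG FACTOR definition above, with invented values: a process that consumed 12 CPU seconds over 48 seconds of elapsed time has a hog factor of 0.25.

```shell
# Hog factor = total CPU time / elapsed (real) time.
# The sample values are made up for illustration.
cpu_secs=12
real_secs=48
hog=$(awk -v c="$cpu_secs" -v r="$real_secs" 'BEGIN { printf "%.2f", c / r }')
echo "$hog"     # 0.25
```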
For a description of the data provided by the Monthly Command Summary, see “Daily
Command Summary” on page 148.
The default output of the acctcom command provides the following information:
# acctcom
COMMAND START END REAL CPU MEAN
NAME USER TTYNAME TIME TIME (SECS) (SECS) SIZE(K)
#accton root ? 02:30:01 02:30:01 0.03 0.01 304.00
turnacct adm ? 02:30:01 02:30:01 0.42 0.01 320.00
mv adm ? 02:30:01 02:30:01 0.07 0.01 504.00
utmp_upd adm ? 02:30:01 02:30:01 0.03 0.01 712.00
utmp_upd adm ? 02:30:01 02:30:01 0.01 0.01 824.00
utmp_upd adm ? 02:30:01 02:30:01 0.01 0.01 912.00
utmp_upd adm ? 02:30:01 02:30:01 0.01 0.01 920.00
utmp_upd adm ? 02:30:01 02:30:01 0.01 0.01 1136.00
utmp_upd adm ? 02:30:01 02:30:01 0.01 0.01 576.00
closewtm adm ? 02:30:01 02:30:01 0.10 0.01 664.00
Field Explanation
COMMAND NAME Command name (pound (#) sign if the command was
executed with superuser privileges)
You can obtain the following information by using acctcom command options.
■ State of the fork/exec flag (1 for fork without exec)
■ System exit status
■ Hog factor
■ Total kcore minutes
■ CPU factor
■ Characters transferred
■ Blocks read
Option Description
-a Shows average statistics about the processes selected. The statistics are printed
after the output is recorded.
-b Reads the files backward, showing latest commands first. This option has no effect
if reading standard input.
-f Prints the fork/exec flag and system exit status columns. The output is an octal
number.
-h Instead of mean memory size, shows the hog factor, which is the fraction of total
available CPU time consumed by the process during its execution. Hog factor =
total-CPU-time/elapsed-time.
-C sec Shows only processes with total CPU time (system plus user) that exceeds sec
seconds.
-e time Shows processes existing at or before time, given in the format hr[:min[:sec]].
-E time Shows processes starting at or before time, given in the format hr[:min[:sec]].
Using the same time for both -S and -E shows processes that existed at that time.
-H factor Shows only processes that exceed factor, where factor is the “hog factor” (see the -h
option).
-I chars Shows only processes that transferred more characters than the cutoff number
specified by chars.
-n pattern Shows only commands that match pattern (a regular expression except that “+”
means one or more occurrences).
-o ofile Instead of printing the records, copies them in acct.h format to ofile.
-O sec Shows only processes with CPU system time that exceeds sec seconds.
-s time Shows processes existing at or after time, given in the format hr[:min[:sec]].
-S time Shows processes starting at or after time, given in the format hr[:min[:sec]].
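The hr[:min[:sec]] arguments accepted by the -s, -S, -e, and -E options can be reduced to seconds since midnight for comparison. The following is a parsing sketch only, not acctcom itself.

```shell
# Convert an hr[:min[:sec]] time argument to seconds since midnight.
# Missing minute and second fields default to zero, as the bracketed
# syntax implies.
to_secs() {
    echo "$1" | awk -F: '{ print $1 * 3600 + $2 * 60 + $3 }'
}

morning=$(to_secs 02:30:01)    # 2 h 30 min 1 s
afternoon=$(to_secs 14)        # a bare hour
```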
File Description
fee Output from the chargefee program, in ASCII tacct record format
pacctn Process accounting files that are switched by running the turnacct script
Spacctn.MMDD Process accounting files for MMDD during execution of the runacct script
The /var/adm/acct directory contains the nite, sum, and fiscal directories. These directories
contain the actual data collection files. For example, the nite directory contains files that are
reused daily by the runacct script. A brief summary of the files in the /var/adm/acct/nite
directory follows.
File Description
active Used by the runacct script to record progress and print warning and error
messages
active.MMDD Same as the active file after the runacct script detects an error
ctmp Output of acctcon1 program, which consists of connect session records in ctmp.h
format (acctcon1 and acctcon2 are provided for compatibility purposes)
disktacct Disk accounting records in tacct.h format, created by the dodisk script
lastdate Last day the runacct script executed (in date +%m%d format)
log.MMDD Same as the log file after the runacct script detects an error
reboots Beginning and ending dates from the wtmpx file, and a listing of reboots
statefile Used to record current state during execution of the runacct script
wtmperror.MMDD Same as the wtmperror file after the runacct script detects an error
The sum directory contains the cumulative summary files updated by the runacct script and
used by the monacct script. The following table summarizes the files in the /var/adm/acct/sum
directory.
File Description
cms Total command summary file for current fiscal period in binary format
daycms Command summary file for the day's usage in internal summary format
loginlog Record of last date each user logged in; created by the lastlogin script and used in
the prdaily script
The fiscal directory contains periodic summary files that are created by the monacct script. The
following table summarizes the files in the /var/adm/acct/fiscal directory.
File Description
cmsn Total command summary file for fiscal period n in internal summary format
File Description
nite/daytacct The total accounting file for the day in tacct.h format.
nite/lineuse The runacct script calls the acctcon program to gather data on terminal line
usage from the /var/adm/acct/nite/tmpwtmp file and writes the data to the
/var/adm/acct/nite/lineuse file. The prdaily script uses this data to report
line usage. This report is especially useful for detecting bad lines. If the ratio
of logouts to logins is greater than three to one, the line is very likely
failing.
sum/cms This file is the accumulation of each day's command summaries. The
accumulation restarts when the monacct script is executed. The ASCII version
is the nite/cms file.
sum/daycms The runacct script calls the acctcms program to process the commands used
during the day to create the Daily Command Summary report and stores the
data in the /var/adm/acct/sum/daycms file. The ASCII version is the
/var/adm/acct/nite/daycms file.
sum/loginlog The runacct script calls the lastlogin script to update the last date logged in
for the logins in the /var/adm/acct/sum/loginlog file. The lastlogin
command also removes from this file any logins that are no longer valid.
sum/rprt.MMDD Each execution of the runacct script saves a copy of the daily report that was
printed by the prdaily script.
sum/tacct Contains the accumulation of each day's nite/daytacct data and is used for
billing purposes. The monacct script restarts accumulating this data each
month or fiscal period.
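The nite/lineuse heuristic described in the table above (a logout-to-login ratio above three to one suggests a failing line) can be sketched with invented counts:

```shell
# Flag a terminal line as suspect when logouts exceed logins by more than
# three to one, per the heuristic in the text. The counts are made up.
check_line() {  # $1 = logins, $2 = logouts
    awk -v logins="$1" -v logouts="$2" 'BEGIN {
        r = (logins > 0 && logouts / logins > 3) ? "suspect" : "ok"
        print r
    }'
}

healthy=$(check_line 40 55)    # ratio 1.4
failing=$(check_line 10 42)    # ratio 4.2
```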
Where to Find System Performance Tasks
The CPC commands cpustat and cputrack have enhanced command-line syntax for
specifying CPU information. For example, in previous versions of the Solaris OS, you were
required to specify two counters. The configuration of both commands now allows you to
specify only one counter, as shown in the following example:
# cputrack -c pic0=Cycle_cnt ls -d .
time lwp event pic0 pic1
.
0.034 1 exit 841167
For simple measurements, you can even omit the counter configuration, as shown in the
following example:
# cputrack -c Cycle_cnt ls -d .
time lwp event pic0 pic1
.
0.016 1 exit 850736
For more information on using the cpustat command, see the cpustat(1M) man page. For
more information on using the cputrack command, see the cputrack(1) man page.
■ Manage system performance tasks – Chapter 2, “Projects and Tasks (Overview),” in System Administration Guide: Solaris Containers-Resource Management and Solaris Zones
■ Manage processes with FX and FS schedulers – Chapter 8, “Fair Share Scheduler (Overview),” in System Administration Guide: Solaris Containers-Resource Management and Solaris Zones
System resources that affect performance are described in the following table.
Input/output (I/O) devices – I/O devices transfer information into and out of the computer. Such a device could be a terminal and keyboard, a disk drive, or a printer.
Chapter 13, “Monitoring System Performance (Tasks),” describes the tools that display statistics
about the system's activity and performance.
Term Description
Process Any system activity or job. Each time you boot a system, execute a
command, or start an application, the system activates one or more
processes.
Lightweight process (LWP) A virtual CPU or execution resource. LWPs are scheduled by the kernel to
use available CPU resources based on their scheduling class and priority.
An LWP includes a kernel thread and an LWP structure. The kernel thread
contains information that must remain in memory at all times. The LWP
structure contains information that is swappable.
Application thread A series of instructions with a separate stack that can execute independently
in a user's address space. Application threads can be multiplexed on top of
LWPs.
A process can consist of multiple LWPs and multiple application threads. The kernel schedules
a kernel-thread structure, which is the scheduling entity in the SunOS environment. Various
process structures are described in the following table.
Structure Description
proc Contains information that pertains to the whole process and must be in
main memory all the time
kthread Contains information that pertains to one LWP and must be in main
memory all the time
The following figure illustrates the relationships among these process structures.
[Figure: non-swappable main memory holds the proc and kthread structures; the user (user structure) and LWP (klwp structure) portions are swappable.]
Most process resources are accessible to all the threads in the process. Almost all process virtual
memory is shared. A change in shared data by one thread is available to the other threads in the
process.
Monitoring Tools
The Solaris software provides several tools to help you track how your system is performing.
The following table describes these tools.
■ netstat and nfsstat commands – Display information about network performance. For more information, see the netstat(1M) and nfsstat(1M) man pages.
■ ps and prstat commands – Display information about active processes. See Chapter 12, “Managing System Processes (Tasks).”
■ sar and sadc commands – Collect and report on system activity data. See Chapter 13, “Monitoring System Performance (Tasks).”
■ Sun Enterprise SyMON – Collects system activity data on Sun's enterprise-level systems. See the Sun Enterprise SyMON 2.0.1 Software User's Guide.
■ vmstat and iostat commands – Summarize system activity data, such as virtual memory statistics, disk usage, and CPU activity. See Chapter 13, “Monitoring System Performance (Tasks).”
■ kstat and mpstat commands – Examine the available kernel statistics, or kstats, on the system and report those statistics that match the criteria specified on the command line. The mpstat command reports processor statistics in tabular form. See the kstat(1M) and mpstat(1M) man pages.
For information on the procedures associated with managing system processes, see the
following:
■ “Managing System Processes (Task Map)” on page 163
■ “Managing Process Class Information (Task Map)” on page 174
For overview information about managing system processes, see the following:
■ “Commands for Managing System Processes” on page 164
■ “Managing Process Class Information” on page 175
■ List processes – Use the ps command to list all the processes on a system. See “How to List Processes” on page 167.
■ Display information about processes – Use the pgrep command to obtain the process IDs for processes that you want to display more information about. See “How to Display Information About Processes” on page 169.
Commands for Managing System Processes
ps, pgrep, prstat, pkill – Check the status of active processes on a system, as well as display detailed information about the processes. For more information, see the ps(1), pgrep(1), and prstat(1M) man pages.
The Solaris Management Console's Processes tool enables you to manage processes with a
user-friendly interface. For information on using and starting the Solaris Management Console,
see Chapter 2, “Working With the Solaris Management Console (Tasks),” in System
Administration Guide: Basic Administration.
Depending on which options you use, the ps command reports the following information:
■ Current status of the process
■ Process ID
■ Parent process ID
■ User ID
■ Scheduling class
■ Priority
■ Address of the process
■ Memory used
■ CPU time used
The following table describes some fields that are reported by the ps command. Which fields are
displayed depend on which option you choose. For a description of all available options, see the
ps(1) man page.
Field Description
C The processor utilization for scheduling. This field is not displayed when
the -c option is used.
CLS The scheduling class to which the process belongs, such as real-time, system,
or timesharing. This field is included only with the -c option.
PRI The kernel thread's scheduling priority. Higher numbers indicate a higher
priority.
WCHAN The address of an event or lock for which the process is sleeping.
STIME The starting time of the process in hours, minutes, and seconds.
TTY The terminal from which the process, or its parent, was started. A question
mark indicates that there is no controlling terminal.
TIME The total amount of CPU time used by the process since it began.
pfiles Reports fstat and fcntl information for open files in a process
pflags Prints /proc tracing flags, pending signals and held signals, and
other status information
pldd Lists the dynamic libraries that are linked into a process
pstack Prints a hex+symbolic stack trace for each lwp in each process
The process tools are similar to some options of the ps command, except that the output that is
provided by these commands is more detailed.
If a process becomes trapped in an endless loop, or if the process takes too long to execute, you
might want to stop (kill) the process. For more information about stopping processes using the
kill or the pkill command, see Chapter 12, “Managing System Processes (Tasks).”
The /proc file system is a directory hierarchy that contains additional subdirectories for state
information and control functions.
The /proc file system also provides a watchpoint facility that is used to remap read-and-write
permissions on the individual pages of a process's address space. This facility has no restrictions
and is MT-safe.
Debugging tools have been modified to use /proc's watchpoint facility, which means that the
entire watchpoint process is faster.
The following restrictions have been removed when you set watchpoints by using the dbx
debugging tool:
■ Setting watchpoints on local variables on the stack due to SPARC based register
windows
■ Setting watchpoints on multithreaded processes
For more information, see the proc(4) and mdb(1) man pages.
$ ps
PID TTY TIME COMD
1664 pts/4 0:06 csh
2081 pts/4 0:00 ps
The following example shows output from the ps -ef command. This output shows that the
first process that is executed when the system boots is sched (the swapper) followed by the init
process, pageout, and so on.
$ ps -ef
UID PID PPID C STIME TTY TIME CMD
root 0 0 0 Dec 20 ? 0:17 sched
root 1 0 0 Dec 20 ? 0:00 /etc/init -
root 2 0 0 Dec 20 ? 0:00 pageout
root 3 0 0 Dec 20 ? 4:20 fsflush
root 374 367 0 Dec 20 ? 0:00 /usr/lib/saf/ttymon
root 367 1 0 Dec 20 ? 0:00 /usr/lib/saf/sac -t 300
root 126 1 0 Dec 20 ? 0:00 /usr/sbin/rpcbind
root 54 1 0 Dec 20 ? 0:00 /usr/lib/sysevent/syseventd
root 59 1 0 Dec 20 ? 0:00 /usr/lib/picl/picld
root 178 1 0 Dec 20 ? 0:03 /usr/lib/autofs/automountd
root 129 1 0 Dec 20 ? 0:00 /usr/sbin/keyserv
root 213 1 0 Dec 20 ? 0:00 /usr/lib/lpsched
root 154 1 0 Dec 20 ? 0:00 /usr/sbin/inetd -s
root 139 1 0 Dec 20 ? 0:00 /usr/lib/netsvc/yp/ypbind ...
root 191 1 0 Dec 20 ? 0:00 /usr/sbin/syslogd
root 208 1 0 Dec 20 ? 0:02 /usr/sbin/nscd
root 193 1 0 Dec 20 ? 0:00 /usr/sbin/cron
root 174 1 0 Dec 20 ? 0:00 /usr/lib/nfs/lockd
daemon 175 1 0 Dec 20 ? 0:00 /usr/lib/nfs/statd
root 376 1 0 Dec 20 ? 0:00 /usr/lib/ssh/sshd
root 226 1 0 Dec 20 ? 0:00 /usr/lib/power/powerd
root 315 1 0 Dec 20 ? 0:00 /usr/lib/nfs/mountd
root 237 1 0 Dec 20 ? 0:00 /usr/lib/utmpd
.
.
.
# pgrep cron 1
4780
# pwdx 4780 2
4780: /var/spool/cron/atjobs
# ptree 4780 3
4780 /usr/sbin/cron
# pfiles 4780 4
4780: /usr/sbin/cron
Current rlimit: 256 file descriptors
0: S_IFCHR mode:0666 dev:290,0 ino:6815752 uid:0 gid:3 rdev:13,2
O_RDONLY|O_LARGEFILE
/devices/pseudo/mm@0:null
1: S_IFREG mode:0600 dev:32,128 ino:42054 uid:0 gid:0 size:9771
O_WRONLY|O_APPEND|O_CREAT|O_LARGEFILE
/var/cron/log
2: S_IFREG mode:0600 dev:32,128 ino:42054 uid:0 gid:0 size:9771
O_WRONLY|O_APPEND|O_CREAT|O_LARGEFILE
/var/cron/log
3: S_IFIFO mode:0600 dev:32,128 ino:42049 uid:0 gid:0 size:0
O_RDWR|O_LARGEFILE
/etc/cron.d/FIFO
4: S_IFIFO mode:0000 dev:293,0 ino:4630 uid:0 gid:0 size:0
O_RDWR|O_NONBLOCK
# pgrep dtpad 1
2921
# pstop 2921 2
# prun 2921 3
1. Obtains the process ID for the dtpad process
2. Stops the dtpad process
3. Restarts the dtpad process
For more information, see the pgrep(1), pkill(1), and kill(1) man pages.
2 Obtain the process ID for the process that you want to terminate.
$ pgrep process
where process is the name of the process that you want to terminate.
For example:
$ pgrep netscape
587
566
The process ID is displayed in the output.
Note – To obtain process information on a Sun Ray™, use the following commands:
# ps -fu user
The pkill command with the -9 signal should not be used to kill certain processes, such as a
database process or an LDAP server process, because data might be lost.
process Is the name of the process to stop.
Tip – When using the pkill command to terminate a process, first try using the command by
itself, without including a signal option. Wait a few minutes to see if the process terminates
before using the pkill command with the -9 signal.
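The escalation sequence in the tip can be sketched as follows, using a long-running sleep process as a hypothetical stand-in for the application to be terminated:

```shell
#!/bin/sh
# Start a stand-in long-running process (substitute the real process
# name, for example netscape, for sleep).
sleep 300 &
pid=$!

# First, try pkill without a signal option; the default TERM signal
# gives the process a chance to exit cleanly.
pkill -x sleep

# Reap the process, then escalate to the KILL signal only if it survived.
sleep 1
wait "$pid" 2>/dev/null
kill -0 "$pid" 2>/dev/null && pkill -9 -x sleep
echo "process terminated"
```

The -x option matches the process name exactly; in practice you would substitute the name of the process you are terminating.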
$ ps -fu userabc
userabc 328 323 2 Mar 12 ? 10:18 /usr/openwin/bin/Xsun
:0 -nobanner -auth /var/dt/A:0-WmayOa
userabc 366 349 0 Mar 12 ? 0:00 /usr/openwin/bin/fbconsole
userabc 496 485 0 Mar 12 ? 0:09 /usr/dt/bin/sdtperfmeter
-f -H -t cpu -t disk -s 1 -name fpperfmeter
userabc 349 332 0 Mar 12 ? 0:00 /bin/ksh /usr/dt/bin/Xsession
userabc 440 438 0 Mar 12 pts/3 0:00 -csh -c unsetenv _ PWD;
unsetenv DT; setenv DISPLAY :0;
userabc 372 1 0 Mar 12 ? 0:00 /usr/openwin/bin/speckeysd
userabc 438 349 0 Mar 12 pts/3 0:00 /usr/dt/bin/sdt_shell -c
unset
.
.
.
The process ID is displayed in the first column of the output.
Tip – When using the kill command to stop a process, first try using the command by itself,
without including a signal option. Wait a few minutes to see if the process terminates before
using the kill command with the -9 signal.
For information on using the preap command, see the preap(1) man page. For information on
using the pargs command, see the pargs(1) man page. See also the proc(1) man page.
The pargs command solves a long-standing problem: the ps command cannot display all of the
arguments that are passed to a process. The following example shows how to use the pargs
command in combination with the pgrep command to display the arguments that are passed to
a process.
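A command line of the following form produces such output (a reconstruction; the backquotes pass the ttymon PIDs found by pgrep to pargs):

```
# pargs `pgrep ttymon`
```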
argv[1]: -g
argv[2]: -h
argv[3]: -p
argv[4]: system-name console login:
argv[5]: -T
argv[6]: sun
argv[7]: -d
argv[8]: /dev/console
argv[9]: -l
argv[10]: console
argv[11]: -m
argv[12]: ldterm,ttcompat
548: /usr/lib/saf/ttymon
argv[0]: /usr/lib/saf/ttymon
The following example shows how to use the pargs -e command to display the environment
variables that are associated with a process.
$ pargs -e 6763
6763: tcsh
envp[0]: DISPLAY=:0.0
■ Display basic information about process classes. Use the priocntl -l command to display
process scheduling classes and priority ranges. See “How to Display Basic Information
About Process Classes (priocntl)” on page 176.
■ Display the global priority of a process. Use the ps -ecl command to display the global
priority of a process. See “How to Display the Global Priority of a Process” on page 176.
■ Designate a process priority. Start a process with a designated priority by using the
priocntl -e -c command. See “How to Designate a Process Priority (priocntl)” on
page 177.
■ Change scheduling parameters of a timesharing process. Use the priocntl -s -m
command to change scheduling parameters of a timesharing process. See “How to Change
Scheduling Parameters of a Timesharing Process (priocntl)” on page 178.
■ Change the class of a process. Use the priocntl -s -c command to change the class of a
process. See “How to Change the Class of a Process (priocntl)” on page 178.
■ Change the priority of a process. Use the /usr/bin/nice command with the appropriate
options to lower or raise the priority of a process. See “How to Change the Priority of a
Process (nice)” on page 180.
You can use the priocntl command to assign processes to a priority class and to manage
process priorities. For instructions on using the priocntl command to manage processes, see
“How to Designate a Process Priority (priocntl)” on page 177.
# priocntl -l
CONFIGURED CLASSES
==================
TS (Time Sharing)
Configured TS User Priority Range: -60 through 60
FX (Fixed priority)
Configured FX User Priority Range: 0 through 60
IA (Interactive)
Configured IA User Priority Range: -60 through 60
$ ps -ecl
F S UID PID PPID CLS PRI ADDR SZ WCHAN TTY TIME COMD
19 T 0 0 0 SYS 96 f00d05a8 0 ? 0:03 sched
8 S 0 1 0 TS 50 ff0f4678 185 ff0f4848 ? 36:51 init
19 S 0 2 0 SYS 98 ff0f4018 0 f00c645c ? 0:01 pageout
19 S 0 3 0 SYS 60 ff0f5998 0 f00d0c68 ? 241:01 fsflush
-s Lets you set the upper limit on the user priority range and change the current
priority.
-c class Specifies the class, TS for time-sharing or RT for real-time, to which you
are changing the process.
-i idtype idlist Uses a combination of idtype and idlist to identify the process or
processes. The idtype specifies the type of ID, such as the process ID or user
ID. Use idlist to identify a list of process IDs or user IDs.
Note – You must be superuser or working in a real-time shell to change a process from, or to, a
real-time process. If, as superuser, you change a user process to the real-time class, the user
cannot subsequently change the real-time scheduling parameters by using the priocntl -s
command.
The priority of a process is determined by the policies of its scheduling class and by its nice
number. Each timesharing process has a global priority. The global priority is calculated by
adding the user-supplied priority, which can be influenced by the nice or priocntl commands,
and the system-calculated priority.
The execution priority number of a process is assigned by the operating system. The priority
number is determined by several factors, including the process's scheduling class, how much
CPU time it has used, and in the case of a timesharing process, its nice number.
Each timesharing process starts with a default nice number, which it inherits from its parent
process. The nice number is shown in the NI column of the ps report.
A user can lower the priority of a process by increasing its nice number. However, only
superuser can lower a nice number to increase the priority of a process. This restriction
prevents users from increasing the priorities of their own processes and thereby monopolizing
a greater share of the CPU.
The nice numbers range from 0 to +39, with 0 representing the highest priority. The default
nice value for each timesharing process is 20. Two versions of the command are available: the
standard version, /usr/bin/nice, and the C shell built-in command.
Note – This section describes the syntax of the /usr/bin/nice command and not the C-shell
built-in nice command. For information about the C-shell nice command, see the csh(1) man
page.
1 Determine whether you want to change the priority of a process, either as a user or as superuser.
Then, select one of the following:
% /usr/bin/nice -n 5 command-name
The preceding command increases the nice number of command-name by 5 units, lowering its
priority.
The following nice command lowers the priority of command-name by raising the nice
number by the default increment of 10 units, but not beyond the maximum value of 39.
% /usr/bin/nice command-name
The following command uses the older syntax to raise the nice number by 5 units.
# /usr/bin/nice -5 command-name
See Also For more information, see the nice(1) man page.
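The effect of the increment can be observed directly. The following sketch uses sleep as a stand-in command; the nice value that ps reports is platform-dependent (on a Solaris timesharing process it would be the default of 20 plus the increment):

```shell
#!/bin/sh
# Start a stand-in command with its nice number raised by 10
# (lower priority).
/usr/bin/nice -n 10 sleep 30 &
pid=$!
# The NI column of ps reports the resulting nice value for the process.
ps -o nice= -p "$pid"
kill "$pid"
```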
This chapter describes procedures for monitoring system performance by using the vmstat,
iostat, df, and sar commands.
For information on the procedures that are associated with monitoring system performance,
see the following:
■ “Displaying System Performance Information (Task Map)” on page 183
■ “Monitoring System Activities (Task Map)” on page 191
■ Display virtual memory statistics. Collect virtual memory statistics by using the vmstat
command. See “How to Display Virtual Memory Statistics (vmstat)” on page 185.
■ Display system event information. Display system event information by using the vmstat
command with the -s option. See “How to Display System Event Information (vmstat -s)”
on page 186.
■ Display swapping statistics. Use the vmstat command with the -S option to display
swapping statistics. See “How to Display Swapping Statistics (vmstat -S)” on page 187.
■ Display interrupts per device. Use the vmstat command with the -i option to show the
number of interrupts per device. See “How to Display Interrupts Per Device (vmstat -i)”
on page 187.
■ Display disk utilization. Use the iostat command to report disk input and output
statistics. See “How to Display Disk Utilization Information (iostat)” on page 188.
■ Display extended disk statistics. Use the iostat command with the -xtc option to display
extended disk statistics. See “How to Display Extended Disk Statistics (iostat -xtc)” on
page 189.
■ Display disk space information. Use the df -k command to display disk space information
in Kbytes. See “How to Display Disk Space Information (df -k)” on page 190.
The following table describes the fields in the vmstat command output.
page Reports on page faults and paging activity, in units per second:
re Pages reclaimed
pi Kbytes paged in
fr Kbytes freed
disk Reports the number of disk operations per second, showing data
on up to four disks
us User time
sy System time
id Idle time
For a more detailed description of this command, see the vmstat(1M) man page.
$ vmstat 5
kthr memory page disk faults cpu
r b w swap free re mf pi po fr de sr dd f0 s1 -- in sy cs us sy id
0 0 0 863160 365680 0 3 1 0 0 0 0 0 0 0 0 406 378 209 1 0 99
0 0 0 765640 208568 0 36 0 0 0 0 0 0 0 0 0 479 4445 1378 3 3 94
0 0 0 765640 208568 0 0 0 0 0 0 0 0 0 0 0 423 214 235 0 0 100
0 0 0 765712 208640 0 0 0 0 0 0 0 3 0 0 0 412 158 181 0 0 100
The swapping statistics fields are described in the following list. For a description of the other
fields, see Table 13–1.
si Average number of LWPs that are swapped in per second
so Number of whole processes that are swapped out
Note – The vmstat command truncates the output of si and so fields. Use the sar command to
display a more accurate accounting of swap statistics.
$ vmstat -i
interrupt total rate
--------------------------------
clock 52163269 100
esp0 2600077 4
zsc0 25341 0
zsc1 48917 0
cgsixc0 459 0
lec0 400882 0
fdc0 14 0
bppc0 0 0
audiocs0 0 0
--------------------------------
Total 55238959 105
$ iostat 5
tty sd0 sd6 nfs1 nfs49 cpu
tin tout kps tps serv kps tps serv kps tps serv kps tps serv us sy wt id
0 0 1 0 49 0 0 0 0 0 0 0 0 15 0 0 0 100
0 47 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 100
0 16 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 100
0 16 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 100
0 16 44 6 132 0 0 0 0 0 0 0 0 0 0 0 1 99
0 16 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 100
0 16 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 100
0 16 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 100
0 16 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 100
0 16 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 100
0 16 3 1 23 0 0 0 0 0 0 0 0 0 0 0 1 99
0 16 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 100
0 16 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 100
0 16 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 100
The following table describes the fields in the output of the iostat n command.
us In user mode
sy In system mode
id Idle
The percentage of disk space actually reported by the df command is used space divided by
usable space.
If the file system exceeds 90 percent capacity, you could transfer files to a disk that is not as full
by using the cp command. Alternately, you could transfer files to a tape by using the tar or cpio
commands. Or, you could remove the files.
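The 90 percent check can be scripted (a sketch; it assumes the capacity figure is the fifth column of the df -k output, as in the example that follows):

```shell
#!/bin/sh
# Print each mounted file system whose reported capacity exceeds
# 90 percent. $5 is the capacity column (for example, 94%); adding 0
# strips the trailing % sign for the numeric comparison.
df -k | awk 'NR > 1 && $5 + 0 > 90 { print $6 " is at " $5 }'
```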
For a detailed description of this command, see the df(1M) man page.
$ df -k
Filesystem kbytes used avail capacity Mounted on
/dev/dsk/c0t0d0s0 254966 204319 25151 90% /
/devices 0 0 0 0% /devices
ctfs 0 0 0 0% /system/contract
proc 0 0 0 0% /proc
mnttab 0 0 0 0% /etc/mnttab
swap 496808 376 496432 1% /etc/svc/volatile
objfs 0 0 0 0% /system/object
/dev/dsk/c0t0d0s6 3325302 3073415 218634 94% /usr
fd 0 0 0 0% /dev/fd
swap 496472 40 496432 1% /var/run
swap 496472 40 496432 1% /tmp
/dev/dsk/c0t0d0s5 13702 1745 10587 15% /opt
/dev/dsk/c0t0d0s7 9450 1045 7460 13% /export/home
■ Check file access. Display file access operation status by using the sar command with the
-a option. See “How to Check File Access (sar -a)” on page 193.
■ Check buffer activity. Display buffer activity statistics by using the sar command with the
-b option. See “How to Check Buffer Activity (sar -b)” on page 194.
■ Check system call statistics. Display system call statistics by using the sar command with
the -c option. See “How to Check System Call Statistics (sar -c)” on page 195.
■ Check disk activity. Check disk activity by using the sar command with the -d option. See
“How to Check Disk Activity (sar -d)” on page 197.
■ Check page-out and memory. Use the sar command with the -g option to display
page-out and memory freeing activities. See “How to Check Page-Out and Memory
(sar -g)” on page 198.
■ Check kernel memory allocation. The kernel memory allocation (KMA) allows a kernel
subsystem to allocate and free memory, as needed. Use the sar command with the -k
option to check KMA. See “How to Check Kernel Memory Allocation (sar -k)” on
page 200.
■ Check interprocess communication. Use the sar command with the -m option to report
interprocess communication activities. See “How to Check Interprocess Communication
(sar -m)” on page 202.
■ Check page-in activity. Use the sar command with the -p option to report page-in
activity. See “How to Check Page-In Activity (sar -p)” on page 203.
■ Check queue activity. Use the sar command with the -q option to check the average queue
length while the queue is occupied and the percentage of time that the queue is occupied.
See “How to Check Queue Activity (sar -q)” on page 204.
■ Check unused memory. Use the sar command with the -r option to report the number of
memory pages and swap file disk blocks that are currently unused. See “How to Check
Unused Memory (sar -r)” on page 205.
■ Check CPU utilization. Use the sar command with the -u option to display CPU
utilization statistics. See “How to Check CPU Utilization (sar -u)” on page 206.
■ Check system table status. Use the sar command with the -v option to report status on the
process, inode, file, and shared memory record tables. See “How to Check System Table
Status (sar -v)” on page 208.
■ Check swapping activity. Use the sar command with the -w option to check swapping
activity. See “How to Check Swapping Activity (sar -w)” on page 209.
■ Check terminal activity. Use the sar command with the -y option to monitor terminal
device activity. See “How to Check Terminal Activity (sar -y)” on page 210.
■ Check overall system performance. The sar -A command displays statistics from all
options to provide overall system performance information. See “How to Check Overall
System Performance (sar -A)” on page 211.
■ Set up automatic data collection. To set up your system to collect data automatically and
to run the sar commands, run the svcadm enable system/sar:default command and
edit the /var/spool/cron/crontabs/sys file. See “How to Set Up Automatic Data
Collection” on page 214.
For a detailed description of this command, see the sar(1) man page.
$ sar -a
         iget/s namei/s dirbk/s
Average       0       4       0
The following list describes the field names for the operating system routines that are reported
by the sar -a command.
iget/s The number of requests made for inodes that were not in the directory name
look-up cache (DNLC).
namei/s The number of file system path searches per second. If namei does not find a
directory name in the DNLC, it calls iget to get the inode for either a file or
directory. Hence, most igets are the result of DNLC misses.
dirbk/s The number of directory block reads issued per second.
The larger the reported values for these operating system routines, the more time the kernel is
spending to access user files. The amount of time reflects how heavily programs and
applications are using the file systems. The -a option is helpful for viewing how disk-dependent
an application is.
$ sar -b
         bread/s lread/s %rcache bwrit/s lwrit/s %wcache pread/s pwrit/s
10:20:00       0       0     100       0       0      68       0       0
10:40:00       0       1      98       0       1      70       0       0
11:00:00       0       1     100       0       1      75       0       0
Average        0       1     100       0       1      91       0       0
The following table describes the buffer activities that are displayed by the -b option.
The most important entries are the cache hit ratios %rcache and %wcache. These entries
measure the effectiveness of system buffering. If %rcache falls below 90 percent, or if %wcache
falls below 65 percent, it might be possible to improve performance by increasing the buffer
space.
$ sar -c
The following table describes the system call categories that are reported by the -c option.
Typically, reads and writes account for about half of the total system calls. However, the
percentage varies greatly with the activities that are being performed by the system.
$ sar -d
The following table describes the disk device activities that are reported by the -d option.
Note that queue lengths and wait times are measured when something is in the queue. If %busy
is small, large queues and service times probably represent the periodic efforts by the system to
ensure that altered blocks are promptly written to the disk.
$ sar -g
%ufs_ipf The percentage of ufs inodes taken off the free list by
iget that had reusable pages associated with them.
These pages are flushed and cannot be reclaimed by
processes. Thus, this field represents the percentage of
igets with page flushes. A high value indicates that
the free list of inodes is page-bound, and that the
number of ufs inodes might need to be increased.
Rather than statically allocating the maximum amount of memory it is expected to require
under peak load, the KMA divides requests for memory into three categories:
■ Small (less than 256 bytes)
■ Large (512 bytes to 4 Kbytes)
■ Oversized (greater than 4 Kbytes)
The KMA keeps two pools of memory to satisfy small requests and large requests. The oversized
requests are satisfied by allocating memory from the system page allocator.
If you are checking a system that is being used to write drivers or STREAMS that use KMA
resources, then the sar -k command will likely prove useful. Otherwise, you will probably not
need the information it provides. Any driver or module that uses KMA resources, but does not
specifically return the resources before it exits, can create a memory leak. A memory leak causes
the amount of memory that is allocated by KMA to increase over time. Thus, if the alloc fields
of the sar -k command increase steadily over time, there might be a memory leak. Another
indication of a memory leak is failed requests. If this problem occurs, a memory leak has
probably caused KMA to be unable to reserve and allocate memory.
If it appears that a memory leak has occurred, you should check any drivers or STREAMS that
might have requested memory from KMA and not returned it.
$ sar -k
$ sar -m
$ sar -p
The following table describes the reported statistics from the -p option.
$ sar -q
00:00:00 runq-sz %runocc swpq-sz %swpocc
The following list describes the output from the -q option.
runq-sz The number of kernel threads in memory that are waiting for a CPU to run.
Typically, this value should be less than 2. Consistently higher values mean that
the system might be CPU-bound.
%runocc The percentage of time that the dispatch queues are occupied.
swpq-sz No longer reported by the sar command.
%swpocc No longer reported by the sar command.
$ sar -q
$ sar -r
The sar command without any options is equivalent to the sar -u command. At any given
moment, the processor is either busy or idle. When busy, the processor is in either user mode or
system mode. When idle, the processor is either waiting for I/O completion or “sitting still”
with no work to do.
The following list describes output from the -u option.
%usr Lists the percentage of time that the processor is in user mode
%sys Lists the percentage of time that the processor is in system mode
%wio Lists the percentage of time that the processor is idle and waiting for I/O completion
%idle Lists the percentage of time that the processor is idle and not waiting for I/O
A high %wio value generally means that a disk slowdown has occurred.
$ sar -u
        %usr %sys %wio %idle
Average    2    0    0    98
$ sar -v
file-sz The size of the open system file table. The sz is given
as 0, because space is allocated dynamically for the file
table.
The following list describes target values and observations related to the sar -w command
output.
swpin/s The number of LWP transfers into memory per second.
bswin/s The number of blocks transferred for swap-ins per second.
swpot/s The average number of processes that are swapped out of memory per second. If
the number is greater than 1, you might need to increase memory.
bswot/s The number of blocks that are transferred for swap-outs per second.
pswch/s The number of kernel thread switches, per second.
$ sar -w
The number of modem interrupts per second (mdmin/s) should be close to zero. The receive
and transmit interrupts per second (rcvin/s and xmtin/s) should be less than or equal to the
number of incoming or outgoing characters, respectively. If not, check for bad lines.
$ sar -y
Average 0 0 1 0 0 0
The sadc data collection utility periodically collects data on system activity and saves the data in
a file in binary format, one file for each 24-hour period. You can set up the sadc command to
run periodically (usually once each hour), and whenever the system boots to multiuser mode.
The data files are placed in the /var/adm/sa directory. Each file is named sadd, where dd is the
current date. The format of the command is as follows:
/usr/lib/sa/sadc [t n] [ofile]
The command samples n times, with an interval of t seconds between samples; the interval
should be greater than five seconds. The command then writes to the binary ofile file, or to
standard output.
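For example, the following invocation (a sketch consistent with the synopsis above; the output file name follows the sadd convention) takes 12 samples at 300-second intervals:

```
# /usr/lib/sa/sadc 300 12 /var/adm/sa/sa`date +%d`
```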
# 0 * * * 0-6 /usr/lib/sa/sa1
# 20,40 8-17 * * 1-5 /usr/lib/sa/sa1
# 5 18 * * 1-5 /usr/lib/sa/sa2 -s 8:00 -e 18:01 -i 1200 -A
sar [-aAbcdgkmpqruvwy] [-s time] [-e time] [-i sec] [-f file]
The following sar command samples cumulative activity counters in the operating system
every t seconds, n times. The value of t should be five seconds or greater; otherwise, the
command itself might affect the sample. If you do not specify a time interval in which to take
the samples, the command operates according to the second format and extracts data from a
previously recorded file. The default value of n is 1.
The following example takes two samples separated by 10 seconds. If the -o option were
specified, the samples would also be saved in binary format.
$ sar -u 10 2
Other important information about the sar command includes the following:
■ With no sampling interval or number of samples specified, the sar command extracts data
from a previously recorded file. This file is either the file specified by the -f option or, by
default, the standard daily activity file, /var/adm/sa/sadd, for the most recent day.
■ The -s and -e options define the starting time and the ending time for the report. Starting
and ending times are of the form hh[:mm[:ss]], where hh, mm, and ss represent hours,
minutes, and seconds.
■ The -i option specifies, in seconds, the intervals between record selection. If the -i option is
not included, all intervals that are found in the daily activity file are reported.
The following table lists the sar options and their actions.
Option Actions
-A Reports overall system performance, which is the same as entering all options.
Using no option is equivalent to calling the sar command with the -u option.
Note – Do not edit a crontab file directly. Instead, use the crontab -e command to make
changes to an existing crontab file.
# crontab -e sys
For information on new or changed troubleshooting features in the Solaris 10 release, see the
following:
■ “Dynamic Tracing Facility” on page 218
■ “kmdb Replaces kadb as Standard Solaris Kernel Debugger” on page 218
For a complete listing of new Solaris features and a description of Solaris releases, see Solaris 10
What’s New.
Typically, the container is not visible. However, there are two instances when you might need to
interact with the container daemon:
■ It is possible that another application might attempt to use a network port that is reserved
for the common agent container.
■ In the event that a certificate store is compromised, you might have to regenerate the
common agent container certificate keys.
For information about how to troubleshoot these problems, see “Troubleshooting Common
Agent Container Problems in the Solaris OS” on page 260.
kmdb brings all the power and flexibility of mdb to live kernel debugging. kmdb supports the
following:
■ Debugger commands (dcmds)
■ Debugger modules (dmods)
■ Access to kernel type data
■ Kernel execution control
■ Inspection
■ Modification
For more information, see the kmdb(1) man page. For step-by-step instructions on using kmdb to
troubleshoot a system, see “How to Boot the System With the Kernel Debugger (kmdb)” in
System Administration Guide: Basic Administration and “How to Boot a System With the
Kernel Debugger in the GRUB Boot Environment (kmdb)” in System Administration Guide:
Basic Administration.
■ Manage system crash information: Chapter 17, “Managing System Crash Information
(Tasks)”
■ Troubleshoot software problems such as reboot failures and backup problems: Chapter 18,
“Troubleshooting Miscellaneous Software Problems (Tasks)”
■ Troubleshoot file access problems: Chapter 19, “Troubleshooting File Access Problems
(Tasks)”
■ Resolve UFS file system inconsistencies: Chapter 20, “Resolving UFS File System
Inconsistencies (Tasks)”
ok sync
If the system fails to reboot successfully after a system crash, see Chapter 18,
“Troubleshooting Miscellaneous Software Problems (Tasks).”
Check to see if a system crash dump was generated after the system crash. System crash dumps
are saved by default. For information about crash dumps, see Chapter 17, “Managing System
Crash Information (Tasks).”
■ Can you reproduce the problem? This is important because a reproducible test case is
often essential for debugging really hard problems. By reproducing the problem, the
service provider can build kernels with special instrumentation to trigger, diagnose, and
fix the bug.
■ Are you using any third-party drivers? Drivers run in the same address space as the kernel,
with all the same privileges, so they can cause system crashes if they have bugs.
■ What was the system doing just before it crashed? If the system was doing anything
unusual, like running a new stress test or experiencing a higher-than-usual load, that
might have led to the crash.
■ Were there any unusual console messages right before the crash? Sometimes the system
shows signs of distress before it actually crashes; this information is often useful.
■ Did you add any tuning parameters to the /etc/system file? Sometimes tuning
parameters, such as increasing shared memory segments so that the system tries to
allocate more than it has, can cause the system to crash.
■ Did the problem start recently? If so, did the onset of problems coincide with any changes
to the system, for example, new drivers, new software, a different workload, a CPU
upgrade, or a memory upgrade?
This chapter describes system messaging features in the Solaris Operating System.
If the message originated in the kernel, the kernel module name is displayed. For example:
Oct 1 14:07:24 mars ufs: [ID 845546 kern.notice] alloc: /: file system full
When a system crashes, it might display a message on the system console like this:
Less frequently, this message might be displayed instead of the panic message:
Watchdog reset !
The error logging daemon, syslogd, automatically records various system warnings and errors
in message files. By default, many of these system messages are displayed on the system console
and are stored in the /var/adm directory. You can direct where these messages are stored by
setting up system message logging. For more information, see “Customizing System Message
Logging” on page 226. These messages can alert you to system problems, such as a device that is
about to fail.
Viewing System Messages
The /var/adm directory contains several message files. The most recent messages are in the
/var/adm/messages file (and in messages.*); the oldest messages are in the messages.3 file.
After a period of time (usually every ten days), a new messages file is created. The messages.0
file is renamed messages.1, messages.1 is renamed messages.2, and messages.2 is renamed
messages.3. The current /var/adm/messages.3 file is deleted.
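The rotation scheme can be sketched as a shell function (an illustration of the scheme only, not the actual system utility; the directory is passed as an argument so that the sketch can be tried safely):

```shell
#!/bin/sh
# Sketch of the ten-day rotation described above. Rotates the message
# files in the directory given as $1; the oldest file, messages.3, is
# deleted before the others are shifted.
rotate_messages() {
    dir=$1
    rm -f "$dir/messages.3"
    [ -f "$dir/messages.2" ] && mv "$dir/messages.2" "$dir/messages.3"
    [ -f "$dir/messages.1" ] && mv "$dir/messages.1" "$dir/messages.2"
    [ -f "$dir/messages.0" ] && mv "$dir/messages.0" "$dir/messages.1"
    [ -f "$dir/messages" ]   && mv "$dir/messages"   "$dir/messages.0"
    : > "$dir/messages"     # a new, empty messages file is created
}
```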
Because the /var/adm directory stores large files containing messages, crash dumps, and other
data, this directory can consume lots of disk space. To keep the /var/adm directory from
growing too large, and to ensure that future crash dumps can be saved, you should remove
unneeded files periodically. You can automate this task by using the crontab file. For more
information on automating this task, see “How to Delete Crash Dump Files” on page 91 and
Chapter 8, “Scheduling System Tasks (Tasks).”
$ more /var/adm/messages
$ dmesg
Jan 3 08:44:41 starbug genunix: [ID 540533 kern.notice] SunOS Release 5.10 ...
Jan 3 08:44:41 starbug genunix: [ID 913631 kern.notice] Copyright 1983-2003 ...
Jan 3 08:44:41 starbug genunix: [ID 678236 kern.info] Ethernet address ...
Jan 3 08:44:41 starbug unix: [ID 389951 kern.info] mem = 131072K (0x8000000)
Jan 3 08:44:41 starbug unix: [ID 930857 kern.info] avail mem = 121888768
Jan 3 08:44:41 starbug rootnex: [ID 466748 kern.info] root nexus = Sun Ultra 5/
10 UPA/PCI (UltraSPARC-IIi 333MHz)
Jan 3 08:44:41 starbug rootnex: [ID 349649 kern.info] pcipsy0 at root: UPA 0x1f0x0
Jan 3 08:44:41 starbug genunix: [ID 936769 kern.info] pcipsy0 is /pci@1f,0
Jan 3 08:44:41 starbug pcipsy: [ID 370704 kern.info] PCI-device: pci@1,1, simba0
Jan 3 08:44:41 starbug genunix: [ID 936769 kern.info] simba0 is /pci@1f,0/pci@1,1
Jan 3 08:44:41 starbug pcipsy: [ID 370704 kern.info] PCI-device: pci@1, simba1
Jan 3 08:44:41 starbug genunix: [ID 936769 kern.info] simba1 is /pci@1f,0/pci@1
Jan 3 08:44:57 starbug simba: [ID 370704 kern.info] PCI-device: ide@3, uata0
Jan 3 08:44:57 starbug genunix: [ID 936769 kern.info] uata0 is /pci@1f,0/pci@1,
1/ide@3
Jan 3 08:44:57 starbug uata: [ID 114370 kern.info] dad0 at pci1095,6460
.
.
.
See Also For more information, see the dmesg(1M) man page.
The system log rotation is defined in the /etc/logadm.conf file. This file includes log rotation
entries for processes such as syslogd. For example, one entry in the /etc/logadm.conf file
specifies that the /var/log/syslog file is rotated weekly unless the file is empty. The most
recent syslog file becomes syslog.0, the next most recent becomes syslog.1, and so on. Eight
previous syslog log files are kept.
The /etc/logadm.conf file also contains time stamps of when the last log rotation occurred.
You can use the logadm command to customize system logging and to add additional logging in
the /etc/logadm.conf file as needed.
For example, to rotate the Apache access and error logs, use the following commands:
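The invocations would be similar to the following (a reconstruction consistent with the /etc/logadm.conf entries shown later in this section; logadm -w writes the entry to the configuration file, and ten old copies are kept by default):

```
# logadm -w /var/apache/logs/access_log -s 100m
# logadm -w /var/apache/logs/error_log -s 10m
```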
In this example, the Apache access_log file is rotated when it reaches 100 MB in size, with a .0,
.1, (and so on) suffix, keeping 10 copies of the old access_log file. The error_log is rotated
when it reaches 10 MB in size with the same suffixes and number of copies as the access_log
file.
The /etc/logadm.conf entries for the preceding Apache log rotation examples look similar to
the following:
# cat /etc/logadm.conf
.
.
.
/var/apache/logs/error_log -s 10m
/var/apache/logs/access_log -s 100m
You can use the logadm command as superuser or by assuming an equivalent role (with Log
Management rights). With role-based access control (RBAC), you can grant non-root users the
privilege of maintaining log files by providing access to the logadm command.
For example, add the following entry to the /etc/user_attr file to grant user andy the ability to
use the logadm command:
andy::::profiles=Log Management
Or, you can set up a role for log management by using the Solaris Management Console. For
more information about setting up a role, see “Role-Based Access Control (Overview)” in
System Administration Guide: Security Services.
Do not put two entries for the same facility on the same line if the entries are
for different priorities. Putting a priority in the syslog file indicates that all
messages of that priority or higher are logged, with the last message taking
precedence. For a given facility and level, syslogd matches all messages for that
level and all higher levels.
action The action field indicates where the messages are forwarded.
The following example shows sample lines from a default /etc/syslog.conf file.
user.err        /dev/sysmsg
user.err        /var/adm/messages
user.alert      'root, operator'
user.emerg      *
Note – Placing entries on separate lines might cause messages to be logged out of order if a log
target is specified more than once in the /etc/syslog.conf file. Note that you can specify
multiple selectors in a single line entry, each separated by a semicolon.
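For example, a single entry with multiple semicolon-separated selectors might look like the
following (a hypothetical line, not from the default file):

```shell
user.err;kern.notice        /var/adm/messages
```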
The most common error condition sources are shown in the following table. The most common
priorities are shown in Table 15–2 in order of severity.
Source Description
auth Authentication
lp Spooling system
Note – The number of syslog facilities that can be activated in the /etc/syslog.conf file is
unlimited.
2 Edit the /etc/syslog.conf file, adding or changing message sources, priorities, and message
locations according to the syntax described in syslog.conf(4).
user.emerg      'root, *'
■ The consadm command runs a daemon to monitor auxiliary console devices. Any display
device designated as an auxiliary console that disconnects, hangs up or loses carrier, is
removed from the auxiliary console device list and is no longer active. Enabling one or more
auxiliary consoles does not disable message display on the default console; messages
continue to display on /dev/console.
For more information on enabling an auxiliary console, see the consadm(1M) man page.
3 Verify that the device has been added to the list of persistent auxiliary consoles.
# consadm
b. Disable the auxiliary console and remove it from the list of persistent auxiliary consoles.
# consadm -p -d devicename
This chapter describes how to manage core files with the coreadm command.
For information on the procedures associated with managing core files, see “Managing Core
Files (Task Map)” on page 233.
1. Display the current core dump configuration. – Display the current core dump configuration
by using the coreadm command. – “How to Display the Current Core Dump Configuration” on
page 236
3. Examine a core dump file. – Use the proc tools to view a core dump file. – “Examining Core
Files” on page 238
Managing Core Files Overview
For example, you can use the coreadm command to configure a system so that all process core
files are placed in a single system directory. This means it is easier to track problems by
examining the core files in a specific directory whenever a Solaris process or daemon terminates
abnormally.
When a process terminates abnormally, it produces a core file in the current directory by
default. If the global core file path is enabled, each abnormally terminating process might
produce two files, one in the current working directory, and one in the global core file location.
By default, a setuid process does not produce core files using either the global or per-process
path.
%g Effective group ID
%p Process ID
%u Effective user ID
%% Literal %
For example, if the global core file pattern is set to:
/var/core/core.%f.%p
and a sendmail process with PID 12345 terminates abnormally, it produces the following core
file:
/var/core/core.sendmail.12345
For example, the following coreadm command sets the default per-process core file pattern.
This setting applies to all processes that have not explicitly overridden the default core file
pattern. This setting persists across system reboots.
# coreadm -i /var/core/core.%f.%p
This coreadm command sets the per-process core file name pattern for the current shell and
any processes started from it:
$ coreadm -p /var/core/core.%f.%p $$
The $$ symbol expands to the process ID of the currently running shell. The
per-process core file name pattern is inherited by all child processes.
Once a global or per-process core file name pattern is set, it must be enabled with the coreadm
-e command. See the following procedures for more information.
You can set the core file name pattern for all processes run during a user's login session by
putting the command in a user's $HOME/.profile or .login file.
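A minimal sketch of such an entry in $HOME/.profile (the corefiles directory is an
assumption, echoing the example path used later in this chapter; the directory must exist
before a core file can be written there):

```shell
# In $HOME/.profile: per-process core files go to ~/corefiles,
# named after the failing executable (%f) and its process ID (%p)
coreadm -p $HOME/corefiles/%f.%p $$
```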
By default, both flags are disabled. For security reasons, the global core file path must be a full
pathname, starting with a leading /. If superuser disables per-process core files, individual users
cannot obtain core files.
The setuid core files are owned by superuser with read/write permissions for superuser only.
Regular users cannot access them even if the process that produced the setuid core file was
owned by an ordinary user.
$ coreadm
global core file pattern:
global core file content: default
init core file pattern: core
init core file content: default
global core dumps: disabled
per-process core dumps: enabled
global setid core dumps: disabled
per-process setid core dumps: disabled
global core dump logging: disabled
3 Display the current process core file path to verify the configuration.
$ coreadm $$
1180: /home/kryten/corefiles/%f.%p
3 Display the current process core file path to verify the configuration.
# coreadm
global core file pattern: /var/core/core.%f.%p
global core file content: default
init core file pattern: core
init core file content: default
global core dumps: enabled
per-process core dumps: enabled
global setid core dumps: disabled
per-process setid core dumps: disabled
global core dump logging: disabled
The /usr/proc/bin/pstack, pmap, pldd, pflags, and pcred tools can now be applied to core
files by specifying the name of the core file on the command line, similar to the way you specify
a process ID to these commands.
For more information on using proc tools to examine core files, see proc(1).
$ ./a.out
Segmentation Fault(coredump)
$ /usr/proc/bin/pstack ./core
core './core' of 19305: ./a.out
000108c4 main (1, ffbef5cc, ffbef5d4, 20800, 0, 0) + 1c
This chapter describes how to manage system crash information in the Solaris Operating
System.
For information on the procedures associated with managing system crash information, see
“Managing System Crash Information (Task Map)” on page 241.
1. Display the current crash dump configuration. – Display the current crash dump
configuration by using the dumpadm command. – “How to Display the Current Crash Dump
Configuration” on page 245
2. Modify the crash dump configuration. – Use the dumpadm command to specify the type of
data to dump, whether or not the system will use a dedicated dump device, the directory for
saving crash dump files, and the amount of space that must remain available after crash dump
files are written. – “How to Modify a Crash Dump Configuration” on page 246
3. Examine a crash dump file. – Use the mdb command to view crash dump files. – “How to
Examine a Crash Dump” on page 247
4. (Optional) Recover from a full crash dump directory. – The system crashes, but no room is
available in the savecore directory, and you want to save some critical system crash dump
information. – “How to Recover From a Full Crash Dump Directory (Optional)” on page 248
5. (Optional) Disable or enable the saving of crash dump files. – Use the dumpadm command to
disable or enable the saving of crash dump files. Saving crash dump files is enabled by
default. – “How to Disable or Enable Saving Crash Dumps” on page 248
A ZFS volume is also created for the dump device. The dump device is sized at 1/2 the size of
physical memory, but no more than 2 Gbytes. Currently, the swap area and the dump device
must reside on separate ZFS volumes.
If you need to modify your ZFS swap area or dump area after installation, use the swap or
dumpadm commands, as in previous Solaris releases.
For information about managing dump devices in this document, see “Managing System Crash
Dump Information” on page 245.
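As a sketch, resizing the ZFS dump volume and pointing dumpadm at it might look like the
following (the rpool pool name and the 2-Gbyte size are assumptions):

```shell
# zfs set volsize=2G rpool/dump        # resize the ZFS dump volume
# dumpadm -d /dev/zvol/dsk/rpool/dump  # use it as the dedicated dump device
```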
Crash dump files are sometimes confused with core files, which are images of user applications
that are written when the application terminates abnormally.
System crash information is managed with the dumpadm command. For more information, see
“The dumpadm Command” on page 243.
Additionally, crash dumps saved by savecore can be useful to send to a customer service
representative for analysis of why the system is crashing.
■ System crash dump files, generated by the savecore command, are saved by default.
■ The savecore -L command is a new feature which enables you to get a crash dump of a
live, running Solaris OS. This command is intended for troubleshooting a running
system by taking a snapshot of memory during some bad state, such as a transient
performance problem or service outage. If the system is up and you can still run some
commands, you can execute the savecore -L command to save a snapshot of the system to
the dump device, and then immediately write out the crash dump files to your savecore
directory. Because the system is still running, you can only use the savecore -L command if
you have configured a dedicated dump device.
dump device The device that stores dump data temporarily as the system crashes. When
the dump device is not the swap area, savecore runs in the background,
which speeds up the boot process.
savecore directory The directory that stores system crash dump files.
minimum free space Minimum amount of free space required in the savecore directory after
saving crash dump files. If no minimum free space has been configured, the
default is one Mbyte.
Specifically, dumpadm initializes the dump device and the dump content through the /dev/dump
interface.
After the dump configuration is complete, the savecore script looks for the location of the
crash dump file directory. Then, savecore is invoked to check for crash dumps and check the
content of the minfree file in the crash dump directory.
This output identifies the default dump configuration for a system running the Solaris 10
release.
# dumpadm
Dump content: kernel pages
Dump device: /dev/dsk/c0t3d0s1 (swap)
Savecore directory: /var/crash/pluto
Savecore enabled: yes
# dumpadm -c all -d /dev/dsk/c0t1d0s1 -m 10%
Dump content: all pages
Dump device: /dev/dsk/c0t1d0s1 (dedicated)
Savecore directory: /var/crash/pluto (minfree = 77071KB)
Savecore enabled: yes
.
.
# /usr/bin/mdb -k unix.0
Loading modules: [ unix krtld genunix ip nfs ipc ptm ]
> ::status
debugging crash dump /dev/mem (64-bit) from ozlo
operating system: 5.10 Generic (sun4u)
> ::system
set ufs_ninode=0x9c40 [0t40000]
set ncsize=0x4e20 [0t20000]
set pt_cnt=0x400 [0t1024]
2 Clear out the savecore directory, usually /var/crash/hostname, by removing existing crash
dump files that have already been sent to your service provider. Or, run the savecore command
and specify an alternate directory that has sufficient disk space. See the next step.
3 Manually run the savecore command and if necessary, specify an alternate savecore directory.
# savecore [ directory ]
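For example, assuming a file system with sufficient free space is mounted at /export/crash (a
hypothetical path):

```shell
# savecore /export/crash
```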
# dumpadm -n
Dump content: all pages
Dump device: /dev/dsk/c0t1d0s1 (dedicated)
Savecore directory: /var/crash/pluto (minfree = 77071KB)
Savecore enabled: no
# dumpadm -y
Dump content: all pages
Dump device: /dev/dsk/c0t1d0s1 (dedicated)
Savecore directory: /var/crash/pluto (minfree = 77071KB)
Savecore enabled: yes
This chapter describes miscellaneous software problems that might occur occasionally and are
relatively easy to fix. Troubleshooting miscellaneous software problems includes solving
problems that aren't related to a specific software application or topic, such as unsuccessful
reboots and full file systems. Resolving these problems is described in the following sections.
The system can't find /platform/`uname -m`/kernel/unix. – You may need to change the
boot-device setting in the PROM on a SPARC based system. For information on changing the
default boot device, see “How to Change the Default Boot Device by Using the Boot PROM” in
System Administration Guide: Basic Administration.
Solaris 10: There is no default boot device on an x86 based system. The message displayed is:
Not a UFS filesystem. – Boot the system by using the Configuration Assistant/boot diskette
and select the disk from which to boot.
Solaris 10 1/06: The GRUB boot archive has become corrupted, or the SMF boot archive
service has failed. An error message is displayed if you run the svcs -x command. – Boot the
failsafe archive.
There's an invalid entry in the /etc/passwd file. – For information on recovering from an
invalid passwd file, see Chapter 12, “Booting a Solaris System (Tasks),” in System
Administration Guide: Basic Administration.
There's a hardware problem with a disk or another device. – Check the hardware connections:
■ Make sure the equipment is plugged in.
■ Make sure all the switches are set properly.
■ Look at all the connectors and cables, including the Ethernet cables.
■ If all this fails, turn off the power to the system, wait 10 to 20 seconds, and then turn on the
power again.
If none of the above suggestions solve the problem, contact your local service provider.
Note – GRUB based booting is not available on SPARC based systems in this Solaris release.
The following examples describe how to recover from a forgotten root password on both
SPARC and x86 based systems.
The following example shows how to recover when you forget the root password by booting
from the network. This example assumes that the boot server is already available. Be sure to
apply a new root password after the system has rebooted.
EXAMPLE 18–2 x86: Performing a GRUB Based Boot When You Have Forgotten the Root Password
This example assumes that the boot server is already available. Be sure to apply a new root
password after the system has rebooted.
1 pool10:13292304648356142148 ROOT/be10
2 rpool:14465159259155950256 ROOT/be01
EXAMPLE 18–3 x86: Booting a System When You Have Forgotten the Root Password
Solaris 10: The following example shows how to recover when you forget root's password by
booting from the network. This example assumes that the boot server is already available. Be
sure to apply a new root password after the system has rebooted.
See: http://sun.com/msg/SMF-8000-KS
See: /etc/svc/volatile/system-boot-archive:default.log
Impact: 48 dependent services are not running. (Use -v for list.)
Note that you must become superuser or the equivalent to run this command.
For more information on rebuilding the GRUB boot archive, see “How to Boot the Failsafe
Archive on an x86 Based System by Using GRUB” in System Administration Guide: Basic
Administration and the bootadm(1M) man page.
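After booting the failsafe archive, the boot archive can typically be rebuilt with the bootadm
command; a sketch, assuming the failsafe environment has mounted the root file system at /a:

```shell
# bootadm update-archive -R /a   # rebuild the boot archive on the mounted root
# reboot
```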
■ If possible, log in remotely from another system on the network. Use the pgrep
command to look for the hung process. If it looks like the window system is hung,
identify the process and kill it.
2. Press Control-\ to force a “quit” in the running program and (probably) write out a core file.
3. Press Control-c to interrupt the program that might be running.
4. Log in remotely and attempt to identify and kill the process that is hanging the system.
5. Log in remotely, become superuser or assume an equivalent role and reboot the system.
6. If the system still does not respond, force a crash dump and reboot. For information on
forcing a crash dump and booting, see “Forcing a Crash Dump and Reboot of the System” in
System Administration Guide: Basic Administration.
7. If the system still does not respond, turn the power off, wait a minute or so, then turn the
power back on.
8. If you cannot get the system to respond at all, contact your local service provider for help.
There are several reasons why a file system fills up. The following sections describe several
scenarios for recovering from a full file system. For information on routinely cleaning out old
and unused files to prevent full file systems, see Chapter 6, “Managing Disk Use (Tasks).”
Someone accidentally copied a file or directory to the wrong location. This also happens when
an application crashes and writes a large core file into the file system. – Log in as superuser
or assume an equivalent role and use the ls -tl command in the specific file system to
identify which large file is newly created and remove it. For information on removing core
files, see “How to Find and Delete core Files” on page 90.
This can occur if TMPFS is trying to write more than it is allowed or some current processes
are using a lot of memory. – For information on recovering from tmpfs-related error messages,
see the tmpfs(7FS) man page.
If files or directories with ACLs are copied or restored into the /tmp directory, the ACL
attributes are lost. The /tmp directory is usually mounted as a temporary file system, which
doesn't support UFS file system attributes such as ACLs. – Copy or restore files into the
/var/tmp directory instead.
If you used an invalid destination device name with the -f option, the ufsdump command wrote
to a file in the /dev directory of the root (/) file system, filling it up. For example, if you
typed /dev/rmt/st0 instead of /dev/rmt/0, the backup file /dev/rmt/st0 was created on the
disk rather than being sent to the tape drive. – Use the ls -tl command in the /dev directory
to identify which file is newly created and abnormally large, and remove it.
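Locating and removing such a stray backup file might look like the following, using the st0
name from the example above:

```shell
# ls -tl /dev/rmt          # the newest, abnormally large entry is the stray file
# rm /dev/rmt/st0
```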
Interactive Commands
When you run the ufsrestore command in interactive mode, a ufsrestore> prompt is
displayed, as shown in this example:
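A sketch of such a session, assuming the default tape device /dev/rmt/0 (the ls, add,
extract, and quit subcommands are part of ufsrestore's interactive mode):

```shell
# ufsrestore i /dev/rmt/0
ufsrestore > ls
ufsrestore > add filename
ufsrestore > extract
ufsrestore > quit
```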
At the ufsrestore> prompt, you can use the commands listed in Chapter 27, “UFS Backup and
Restore Commands (Reference),” in System Administration Guide: Devices and File Systems to
find files, create a list of files to be restored, and restore them.
Note – If you are troubleshooting an installation of Sun Cluster, the port assignments are
different.
If your installation already reserves any of these port numbers, change the port numbers that
are occupied by the common agent container, as described in the following procedure.
Note – For the Sun Cluster software, you must propagate this change across all nodes in the
cluster.
This chapter provides information on resolving file access problems such as those related to
incorrect permissions and search paths.
Users frequently experience problems, and call on a system administrator for help, because they
cannot access a program, a file, or a directory that they could previously use.
This chapter briefly describes how to recognize problems in each of these three areas and
suggests possible solutions.
To fix a search path problem, you need to know the pathname of the directory where the
command is stored.
Solving Problems With Search Paths (Command not found)
If the wrong version of the command is found, a directory that has a command of the same
name is in the search path. In this case, the proper directory may be later in the search path or
may not be present at all.
You can display your current search path by using the echo $PATH command. For example:
$ echo $PATH
/home/kryten/bin:/sbin:/usr/sbin:/usr/bin:/usr/dt:/usr/dist/exe
Use the which command to determine whether you are running the wrong version of the
command. For example:
$ which acroread
/usr/doctools/bin/acroread
Note – The which command looks in the .cshrc file for path information. The which command
might give misleading results if you execute it from the Bourne or Korn shell and you have a
.cshrc file that contains aliases for the which command. To ensure accurate results, use the
which command in a C shell, or, in the Korn shell, use the whence command.
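For example, in the Korn shell, assuming the same acroread installation as the which example
above:

```shell
$ whence acroread
/usr/doctools/bin/acroread
```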
Shell – File Where Path Is Located – Use This Command to Activate the Path
Bourne and Korn – $HOME/.profile – $ . ./.profile
C – $HOME/.cshrc or $HOME/.login – hostname% source .cshrc
venus% mytool
mytool: Command not found
venus% which mytool
no mytool in /sbin /usr/sbin /usr/bin /etc /home/ignatz/bin .
venus% echo $PATH
/sbin /usr/sbin /usr/bin /etc /home/ignatz/bin
venus% vi ~/.cshrc
(Add appropriate command directory to the search path)
venus% source .cshrc
venus% mytool
If you cannot find a command, look at the man page for its directory path. For example, if you
cannot find the lpsched command (the lp printer daemon), the lpsched(1M) man page tells
you the path is /usr/lib/lp/lpsched.
Access problems can also arise when the group ownership changes or when a group of which a
user is a member is deleted from the /etc/group database.
For information about how to change the permissions or ownership of a file that you are having
problems accessing, see Chapter 6, “Controlling Access to Files (Tasks),” in System
Administration Guide: Security Services.
See “Strategies for NFS Troubleshooting” in System Administration Guide: Network Services for
information about problems with network access and problems with accessing systems through
AutoFS.
This chapter describes the fsck error messages and the possible responses you can make to
resolve the error messages.
For information about the fsck command and how to use it to check file system integrity, see
Chapter 21, “Checking UFS File System Consistency (Tasks),” in System Administration Guide:
Devices and File Systems.
267
fsck Error Messages
When you run the fsck command interactively, it reports each inconsistency found and fixes
innocuous errors. However, for more serious errors, the command reports the inconsistency
and prompts you to choose a response. When you run the fsck command with the -y or -n
options, your response is predefined as yes or no to the default response suggested by the fsck
command for each error condition.
Some corrective actions result in some loss of data. The amount and severity of data loss
can be determined from the fsck diagnostic output.
The fsck command is a multipass file system check program. Each pass invokes a different
phase of the fsck command with different sets of messages. After initialization, the fsck
command performs successive passes over each file system, checking blocks and sizes, path
names, connectivity, reference counts, and the map of free blocks (possibly rebuilding it). It also
performs some cleanup.
The phases (passes) performed by the UFS version of the fsck command are:
■ Initialization
■ Phase 1 – Check Blocks and Sizes
■ Phase 2a – Check Duplicated Names
■ Phase 2b – Check Pathnames
■ Phase 3 – Check Connectivity
■ Phase 3b – Verify Shadows/ACLs
■ Phase 4 – Check Reference Counts
■ Phase 5 – Check Cylinder Groups
The next sections describe the error conditions that might be detected in each phase, the
messages and prompts that result, and possible responses you can make.
Messages that might appear in more than one phase are described in “General fsck Error
Messages” on page 269. Otherwise, messages are organized alphabetically by the phases in which
they occur.
The following table lists many of the abbreviations included in the fsck error messages.
Abbreviation Meaning
CG Cylinder group
UNREF Unreferenced
Many of the messages also include variable fields, such as inode numbers, which are represented
in this book by an italicized term, such as inode-number. For example, this screen message:
is shown as follows:
Solaris 10:
Action
If the disk is experiencing hardware problems, the problem will persist. Run fsck again to
recheck the file system.
If the recheck fails, contact your local service provider or another qualified person.
Solaris 10:
Solaris 10: A request to read a specified block number, block-number, in the file system
failed. The message indicates a serious problem, probably a hardware failure.
If you want to continue the file system check, fsck will retry the read and display a list of
sector numbers that could not be read. If the block was part of the virtual memory buffer
cache, fsck will terminate with a fatal I/O error message. If fsck tries to write back one of the
blocks on which the read failed, it will display the following message:
Solaris 10:
If you continue the file system check, fsck will retry the write and display a list of sector
numbers that could not be written. If the block was part of the virtual memory buffer cache,
fsck will terminate with a fatal I/O error message.
Solaris 10: A request to write a specified block number, block-number, in the file system
failed.
If you continue the file system check, fsck will retry the write and display a list of sector
numbers that could not be written. If the block was part of the virtual memory buffer cache,
fsck will terminate with a fatal I/O error message.
Action
The disk might be write-protected. Check the write-protect lock on the drive. If the disk has
hardware problems, the problem will persist. Run fsck again to recheck the file system. If the
write-protect is not the problem or the recheck fails, contact your local service provider or
another qualified person.
Answering yes at this point reclaims the blocks that were used for the log. The next time the
file system is mounted with logging enabled, the log will be recreated.
Answering no preserves the log and exits, but the file system isn't mountable.
Try to rerun fsck with an alternative superblock. Specifying block 32 is a good first choice.
You can locate an alternative copy of the superblock by running the newfs -N command on
the slice. Be sure to specify the -N option; otherwise, newfs overwrites the existing file
system.
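A sketch of both steps, assuming a hypothetical slice /dev/rdsk/c0t3d0s7:

```shell
# newfs -N /dev/rdsk/c0t3d0s7        # -N only lists backup superblock locations; no changes made
# fsck -F ufs -o b=32 /dev/rdsk/c0t3d0s7
```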
All errors in this phase except INCORRECT BLOCK COUNT, PARTIALLY TRUNCATED INODE,
PARTIALLY ALLOCATED INODE, and UNKNOWN FILE TYPE terminate fsck when it is preening a
file system.
Cause
An internal error has scrambled the fsck state map so that it shows the impossible value
state-number. fsck exits immediately.
Action
Contact your local service provider or another qualified person.
Solaris 10:
Solaris 10: Inode inode-number contains a block number block-number, which is already
claimed by the same or another inode. This error condition might generate the EXCESSIVE
DUP BLKS error message in phase 1 if inode inode-number has too many block numbers
claimed by the same or another inode. This error condition invokes phase 1B and generates
the BAD/DUP error messages in phases 2 and 4.
Action
N/A
Solaris 10: There is no more room in an internal table in fsck containing duplicate block
numbers. If the -o p option is specified, the program terminates.
Action
To continue the program, type y at the CONTINUE prompt. When this error occurs, a
complete check of the file system is not possible. If another duplicate fragment is found, this
error condition repeats. Increase the amount of virtual memory available (by killing some
processes, increasing swap space) and run fsck again to recheck the file system. To
terminate the program, type n.
Solaris 10: To continue the program, type y at the CONTINUE prompt. When this error
occurs, a complete check of the file system is not possible. If another duplicate block is found,
this error condition repeats. Increase the amount of virtual memory available (by killing
some processes, increasing swap space) and run fsck again to recheck the file system. To
terminate the program, type n.
Solaris 10:
Solaris 10: Too many (usually more than 10) blocks have a number lower than the number
of the first data block in the file system or greater than the number of the last block in the file
system associated with inode inode-number. If the -o p (preen) option is specified, the
program terminates.
Action
To continue the program, type y at the CONTINUE prompt. When this error occurs, a
complete check of the file system is not possible. You should run fsck again to recheck the
file system. To terminate the program, type n.
Solaris 10:
Solaris 10: Too many (usually more than 10) blocks are claimed by the same or another
inode or by a free-list. If the -o p option is specified, the program terminates.
Action
To continue the program, type y at the CONTINUE prompt. When this error occurs, a
complete check of the file system is not possible. You should run fsck again to recheck the
file system. To terminate the program, type n.
Solaris 10:
Cause
The disk block count for inode inode-number is incorrect. When preening, fsck corrects the
count.
Solaris 10: fsck has found inode inode-number whose size is shorter than the number of
blocks allocated to it. This condition occurs only if the system crashes while truncating a file.
When preening the file system, fsck completes the truncation to the specified size.
Action
To complete the truncation to the size specified in the inode, type y at the SALVAGE prompt.
To ignore this error condition, type n.
Cause
The mode word of the inode inode-number shows that the inode is not a pipe, character
device, block device, regular file, symbolic link, FIFO file, or directory inode. If the -o p
option is specified, the inode is cleared.
Solaris 10: The mode word of the inode inode-number shows that the inode is not a pipe,
special character inode, special block inode, regular inode, symbolic link, FIFO file, or
directory inode. If the -o p option is specified, the inode is cleared.
Action
To deallocate the inode inode-number by zeroing its contents, which results in the
UNALLOCATED error condition in phase 2 for each directory entry pointing to this inode, type
y at the CLEAR prompt. To ignore this error condition, type n.
When a duplicate fragment is found in the file system, this message is displayed:
Cause
Inode inode-number contains a fragment number fragment-number that is already claimed
by the same or another inode. This error condition generates the BAD/DUP error message in
phase 2. Inodes that have overlapping fragments might be determined by examining this
error condition and the DUP error condition in phase 1. This is simplified by the duplicate
fragment report produced at the end of the fsck run.
Action
When a duplicate block is found, the file system is rescanned to find the inode that
previously claimed that block.
When a duplicate block is found in the file system, this message is displayed:
When the file system is being preened (-o p option), all errors in this phase terminate fsck,
except those related to directories not being a multiple of the block size, duplicate and bad
blocks, inodes out of range, and extraneous hard links.
Cause
A directory inode-number has been found whose inode number for “.” does not equal
inode-number.
Action
To change the inode number for “.” to be equal to inode-number, type y at the FIX prompt.
To leave the inode numbers for “.” unchanged, type n.
Cause
A directory filename has been found whose size file-size is less than the minimum directory
size. The owner UID, mode file-mode, size file-size, modify time modification-time, and
directory name filename are displayed.
Action
To increase the size of the directory to the minimum directory size, type y at the FIX prompt.
To ignore this directory, type n.
Solaris 10:
Cause
Phase 1 or phase 1B found duplicate fragments or bad fragments associated with directory or
file entry filename, inode inode-number. The owner UID, mode file-mode, size file-size,
modification time modification-time, and directory or file name filename are displayed. If the
-op (preen) option is specified, the duplicate/bad fragments are removed.
Solaris 10:
Phase 1 or phase 1B found duplicate blocks or bad blocks associated with directory or file
entry filename, inode inode-number. The owner UID, mode file-mode, size file-size,
modification time modification-time, and directory or file name filename are displayed. If the
-o p (preen) option is specified, the duplicate/bad blocks are removed.
Action
To remove the directory or file entry filename, type y at the REMOVE prompt. To ignore this
error condition, type n.
Cause
A directory inode-number has been found that has more than one entry for “..” (the parent
directory).
Action
To remove the extra entry for “..” (the parent directory), type y at the FIX prompt. To leave
the directory unchanged, type n.
Cause
A directory inode-number has been found whose first entry is not “.”. fsck cannot resolve
the problem.
Action
If this error message is displayed, contact your local service provider or another qualified
person.
Cause
A directory inode-number has been found whose second entry is unallocated.
Action
To build an entry for “..” with inode number equal to the parent of inode-number, type y at
the FIX prompt. (Note that “..” in the root inode points to itself.) To leave the directory
unchanged, type n.
Cause
A directory inode-number has been found whose second entry is filename. fsck cannot
resolve this problem.
Action
If this error message is displayed, contact your local service provider or another qualified
person.
Cause
A directory inode-number has been found whose second entry is not “..” (the parent
directory). fsck cannot resolve this problem.
Action
If this error message is displayed, contact your local service provider or another qualified
person.
Cause
An excessively long path name has been found, which usually indicates loops in the file
system name space. This error can occur if a privileged user has made circular links to
directories.
Action
Remove the circular links.
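Excessively deep paths are the usual symptom of such a loop. One rough way to locate candidates is to list directories deeper than a chosen threshold; the sketch below uses awk to count path components because older find implementations (including Solaris find) lack a -maxdepth option. The mount point and depth threshold are illustrative choices.

```shell
# List directories under $1 whose path depth exceeds $2, a symptom of a
# loop in the file system name space.
deep_dirs() {
    root=$1 maxdepth=$2
    # Count "/"-separated components of each path; print the deep ones.
    find "$root" -type d 2>/dev/null | awk -F/ -v max="$maxdepth" 'NF > max'
}

deep_dirs /mnt 30
```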
Action
To remove the directory entry filename, type y at the REMOVE prompt. This results in the
BAD/DUP error message in phase 4. To ignore the error condition, type n.
Solaris 10:
To round up the length to the appropriate block size, type y at the ADJUST prompt. When
preening, fsck displays a warning and adjusts the directory. To ignore this error condition,
type n.
Cause
The directory inode inode-number was not connected to a directory entry when the file
system was traversed. The owner UID, mode file-mode, size file-size, and modification time
modification-time of directory inode inode-number are displayed. When preening, fsck
reconnects the non-empty directory inode if the directory size is non-zero. Otherwise, fsck
clears the directory inode.
Action
To reconnect the directory inode inode-number into the lost+found directory, type y at the
RECONNECT prompt. If the directory is successfully reconnected, a CONNECTED message is
displayed. Otherwise, one of the lost+found error messages is displayed. To ignore this
error condition, type n. This error causes the UNREF error condition in phase 4.
Action
To deallocate inode inode-number by zeroing its contents, type y at the CLEAR prompt. To
ignore this error condition, type n.
Cause
The inode mentioned in the UNREF error message immediately preceding cannot be
reconnected. This message does not display if the file system is being preened because lack of
space to reconnect files terminates fsck.
Action
To deallocate the inode by zeroing out its contents, type y at the CLEAR prompt. To ignore the
preceding error condition, type n.
Cause
There is no lost+found directory in the root directory of the file system. When preening,
fsck tries to create a lost+found directory.
Action
To create a lost+found directory in the root of the file system, type y at the CREATE prompt.
If the lost+found directory cannot be created, fsck displays the message: SORRY. CANNOT
CREATE lost+found DIRECTORY and aborts the attempt to link up the lost inode. This error
in turn generates the UNREF error message later in phase 4. To abort the attempt to link up the
lost inode, type n.
Cause
There is no space to add another entry to the lost+found directory in the root directory of
the file system. When preening, fsck expands the lost+found directory.
Action
To expand the lost+found directory to make room for the new entry, type y at the EXPAND
prompt. If the attempted expansion fails, fsck displays the message: SORRY. NO SPACE IN
lost+found DIRECTORY and aborts the request to link a file to the lost+found directory.
This error generates the UNREF error message later in phase 4. Delete any unnecessary entries
in the lost+found directory. This error terminates fsck when preening (-o p option) is in
effect. To abort the attempt to link up the lost inode, type n.
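After the repaired file system is mounted, the entries that fsck reconnected can be reviewed and the unnecessary ones deleted. The sketch below assumes the common convention that reconnected files are named after their inode numbers; the mount point and helper function are placeholders for illustration.

```shell
# Review what fsck reconnected into lost+found so that unneeded entries
# can be removed to free directory slots.
review_lost_found() {
    dir=$1
    [ -d "$dir" ] || { echo "no lost+found at $dir" >&2; return 1; }
    ls -l "$dir"
    # After inspecting, keep what you recognize and remove the rest, e.g.:
    # mv "$dir/1234" /export/home/recovered-file
    # rm "$dir/5678"
}

# The mount point is a placeholder; substitute your own.
review_lost_found "${LOSTFOUND:-/mnt/lost+found}" || true
```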
Cause
File inode inode-number was not connected to a directory entry when the file system was
traversed. The owner UID, mode file-mode, size file-size, and modification time
modification-time of inode inode-number are displayed. When fsck is preening, the file is
cleared if either its size or its link count is zero; otherwise, it is reconnected.
Action
To reconnect inode inode-number to the file system in the lost+found directory, type y. This
error might generate the lost+found error message in phase 4 if there are problems
connecting inode inode-number to the lost+found directory. To ignore this error condition,
type n. This error always invokes the CLEAR error condition in phase 4.
Cause
Inode inode-number (whose type is directory or file) was not connected to a directory entry
when the file system was traversed. The owner UID, mode file-mode, size file-size, and
modification time modification-time of inode inode-number are displayed.
This phase checks the free-fragment and used-inode maps. It reports error conditions resulting
from:
■ Allocated inodes missing from used-inode maps
■ Free fragments missing from free-fragment maps
■ Free inodes in the used-inode maps
■ Incorrect total free-fragment count
■ Incorrect total used inode count
Cause
The magic number of cylinder group cg-number is wrong. This error usually indicates that
the cylinder group maps have been destroyed. When running interactively, the cylinder
group is marked as needing reconstruction. fsck terminates if the file system is being
preened.
Action
If this occurs, contact your local service provider or another qualified person.
Cause
The summary information is incorrect. When preening, fsck recomputes the summary
information.
Action
To reconstruct the summary information, type y at the SALVAGE prompt. To ignore this error
condition, type n.
This phase checks the free-block and used-inode maps. It reports error conditions resulting
from:
■ Allocated inodes missing from used-inode maps
■ Free blocks missing from free-block maps
■ Free inodes in the used-inode maps
■ Incorrect total free-block count
■ Incorrect total used inode count
Cause
A cylinder group block map is missing some free blocks. During preening, fsck reconstructs
the maps.
Action
To reconstruct the free-block map, type y at the SALVAGE prompt. To ignore this error
condition, type n.
Cause
The magic number of cylinder group cg-number is wrong. This error
usually indicates that the cylinder group maps have been destroyed. When running
interactively, the cylinder group is marked as needing reconstruction. fsck terminates if the
file system is being preened.
Action
If this occurs, contact your local service provider or another qualified person.
Once a file system has been checked, a few summary messages are displayed.
This message indicates that the file system checked contains number-of files using number-of
fragment-sized blocks, and that there are number-of fragment-sized blocks free in the file
system. The numbers in parentheses break the free count down into number-of free fragments,
number-of free full-sized blocks, and the percent fragmentation.
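For illustration, the summary line takes a form like the following (the numbers here are invented):

```
1234 files, 56789 used, 101112 free (120 frags, 12624 blocks, 0.1% fragmentation)
```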
This message indicates that the file system was modified by fsck. If you see this message,
there is no need to rerun fsck; the message is informational, noting the corrective actions
that fsck has taken.
Once a file system has been checked, a few cleanup functions are performed. The cleanup phase
displays the following status messages.
This message indicates that the file system checked contains number-of files using number-of
fragment-sized blocks, and that there are number-of fragment-sized blocks free in the file
system. The numbers in parentheses break the free count down into number-of free fragments,
number-of free full-sized blocks, and the percent fragmentation.
This message indicates that the file system was modified by fsck. If this file system is mounted
or is the current root (/) file system, reboot. If the file system is mounted, you might need to
unmount it and run fsck again; otherwise, the work done by fsck might be undone by the
in-core copies of tables.
This message indicates that file system filename was marked as stable. Use the fsck -m
command to determine if the file system needs checking.
This message indicates that file system filename was not marked as stable. Use the fsck -m
command to determine if the file system needs checking.
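The exit status of fsck -m can drive this decision in a script. A sketch, assuming the Solaris convention that exit status 0 means the file system is okay to mount and 32 means it needs checking; the device name and wrapper function are placeholders for illustration.

```shell
# Ask fsck whether the file system needs checking, without repairing it.
DEV=${DEV:-/dev/rdsk/c0t0d0s7}

needs_check() {
    status=0
    fsck -m "$1" >/dev/null 2>&1 || status=$?
    # On Solaris, exit status 32 means the file system needs checking.
    [ "$status" -eq 32 ]
}

if needs_check "$DEV"; then
    echo "$DEV needs checking; run fsck before mounting" >&2
fi
```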
This chapter describes problems you might encounter when installing or removing software
packages. The Specific Software Package Installation Errors section describes package
installation and administration errors you might encounter. The General Software Package
Installation Problems section describes behavioral problems that might not display an error
message.
For information about managing software packages, see Chapter 18, “Managing Software
(Overview),” in System Administration Guide: Basic Administration.
The default behavior is now that if a package needs to change the target of a symbolic link,
the pkgadd command inspects the target of the symbolic link, not its source.
Unfortunately, not all packages conform to this new pkgadd behavior.
Specific Software Package Installation Errors
The PKG_NONABI_SYMLINKS environment variable might help you transition between the old
and new pkgadd symbolic link behaviors. If this environment variable is set to true, pkgadd
follows the source of the symbolic link.
If the administrator sets this variable before adding a package with the pkgadd command, a
non-conforming package reverts to the old behavior.
The new pkgadd symbolic link behavior might cause an existing package to fail when added
with the pkgadd command. In this situation, set the environment variable and try adding
the package again:
# PKG_NONABI_SYMLINKS=true
# export PKG_NONABI_SYMLINKS
# pkgadd pkg-name
This error message indicates that not all of a package's files could be installed. This usually
occurs when you are using pkgadd to install a package on a client. In this case, pkgadd
attempts to install a package on a file system that is mounted from a server, but pkgadd
doesn't have permission to do so. If you see this warning message during a package
installation, you must also install the package on the server. See Chapter 18, “Managing
Software (Overview),” in System Administration Guide: Basic Administration for details.
There is a known problem with adding or removing some packages developed prior to the
Solaris 2.5 release and compatible versions. Sometimes, when adding or removing these
packages, the installation fails during user interaction, or you are prompted for user
interaction and your responses are ignored. To work around this problem, set the following
environment variable and try to add the package again.
NONABI_SCRIPTS=TRUE
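Setting the variable from the shell before retrying looks like this; the package name is a placeholder.

```shell
# Work around pre-Solaris 2.5 packages whose install scripts mishandle
# user interaction: set NONABI_SCRIPTS before retrying the operation.
NONABI_SCRIPTS=TRUE
export NONABI_SCRIPTS

# Then retry the add (or removal); SUNWexample is a placeholder name.
# pkgadd -d /var/spool/pkg SUNWexample
```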
Index
A
accounting, 139, 141, 155
  See also billing users
  connect, 131
    runacct states and, 144
    /var/adm/acct/nite/directory and, 153
    /var/adm/wtmpx, 147
  daily, 132, 155
    See also accounting, reports
    step-by-step summary of, 134
  disabling, 142
  disk, 132, 133
    acctdusg program, 148
    files for, 153, 155
  fixing corrupted files
    tacct file, 139-140
    wtmpx file, 138, 139, 144
  maintaining, 141
  overview, 130
  process, 131, 133, 147, 148
  raw data, 132
  reports, 146
    daily command summary, 148, 155
    daily report (tty line utilization), 146, 147
    daily usage report, 147, 148
    last login report, 150
    overview, 146
    total command summary (monthly), 150, 154, 155
  set up to run automatically (how to), 136
  starting, 136
  stopping, 141
  types of, 137
  user fee calculation, 132
    See also billing users
acct.h format files, 151, 152
acctcms command, 144, 155
acctcom command, 151, 152
acctcon command, 138, 144, 153
acctdusg command, 132, 148, 153
acctprc command, 144
acctwtmp command, 131, 133, 146
active file, 143
active file, 140, 153
active.MMDD file, 140, 153
adapter board (serial port), 24
address space map, 167
alert message priority (for syslogd), 227
alphanumeric terminal, See terminals
application threads, 159, 160
at command, 123, 124, 127
  -l option (list), 126
  -m option (mail), 124, 125
  automatic scheduling of, 113
  controlling access to, 124, 127
    overview, 110
  denying access, 127-128
  error messages, 128
  overview, 110, 111, 123
at.deny file, 124, 127
  description, 110
at job files, 123, 127
  creating, 124, 125