DBAs
Student Manual
Education Services
Course #: L1-251.4
IBM Part #: Z251-1601-00
November 7, 2003
Copyright, Trademarks, Disclaimer of Warranties, and
Limitation of Liability
Copyright IBM Corporation 2003.
IBM Software Group
One Rogers Street
Cambridge, MA 02142
IBM and the IBM logo are registered trademarks of International Business Machines Corporation.
The following are trademarks or registered trademarks of International Business Machines Corporation in the United States,
other countries, or both:
Answers OnLine DPI FFST/2 OnLine Dynamic Server S/390
AIX DRDA Foundation.2000 OS/2 Sequent
APPN Dynamic Scalable Illustra OS/2 WARP SP
AS/400 Architecture Informix OS/390 System View
BookMaster Dynamic Server Informix 4GL OS/400 Tivoli
C-ISAM Dynamic Server.2000 Informix Extended PTX TME
Client SDK Dynamic Server with Parallel Server QBIC UniData
Cloudscape Advanced Decision Informix Internet QMF UniData and Design
Connection Services Support Option Foundation.2000 RAMAC Universal Data
Database Architecture Dynamic Server with Informix Red Brick Red Brick Design Warehouse Blueprint
DataBlade Extended Parallel Option Decision Server Red Brick Data Mine Universal Database
DataJoiner Dynamic Server with J/Foundation Red Brick Decision Components
DataPropagator Universal Data Option MaxConnect Server Universal Web Connect
DB2 Dynamic Server with Web MVS Red Brick Mine Builder UniVerse
DB2 Connect Integration Option MVS/ESA Red Brick Decisionscape Virtual Table Interface
DB2 Extenders Dynamic Server, Workgroup Net.Data Red Brick Ready Visionary
DB2 Universal Database Edition NUMA-Q Red Brick Systems VisualAge
Distributed Database Enterprise Storage Server ON-Bar Relyon Red Brick Web Integration Suite
Distributed Relational WebSphere
Microsoft, Windows, Windows NT, SQL Server, and the Windows logo are trademarks of Microsoft Corporation in the United
States, other countries, or both.
Java, JDBC, and all Java-based trademarks are trademarks or registered trademarks of Sun Microsystems, Inc. in the United
States, other countries, or both.
UNIX is a registered trademark of The Open Group in the United States and other countries.
All other product or brand names may be trademarks of their respective companies.
The information contained in this document has not been submitted to any formal IBM test and is distributed on an "as is" basis
without any warranty, either express or implied. The use of this information or the implementation of any of these techniques is
a customer responsibility and depends on the customer's ability to evaluate and integrate them into the customer's operational
environment. While each item may have been reviewed by IBM for accuracy in a specific situation, there is no guarantee that
the same or similar results will be obtained elsewhere. Customers attempting to adapt these techniques to their own environments do
so at their own risk. The original repository material for this course has been certified as being Year 2000 compliant.
This document may not be reproduced in whole or in part without the prior written permission of IBM.
Note to U.S. Government Users: Documentation related to restricted rights. Use, duplication, or disclosure is subject to
restrictions set forth in the GSA ADP Schedule Contract with IBM Corp.
Course Description
This course provides cross-training for a DB2 UDB Administrator and DBA and is geared toward
students with prior experience as either an Oracle System Administrator or DBA. Students will
use DB2 UDB version 7.2 or 8.1, so they should have equivalent Oracle experience with version
8 or 9i. During the course, students will build and run a DB2 UDB database using data made
available in Oracle unload format.
Objectives
At the end of this course, you will be able to:
Prerequisites
To maximize the benefits of this course, we require that you have met the following prerequisites:
Understanding and use of relational database elements
Understanding and use of SQL statements
Oracle System Administration or DBA knowledge
UNIX/Linux systems knowledge
Microsoft Windows knowledge
Acknowledgments
Course Developer. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .Glen Mules
Contributing Course Developer . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .Bob Bernard
Technical Review Team . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Bob Bernard, Nora Sokolof
Course Production Editor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Susan Dykman
Further Information
To find out more about IBM education solutions and resources, please visit the IBM Education
website at http://www-3.ibm.com/software/info/education.
Additional information about IBM Data Management education and certification can be found at
http://www-3.ibm.com/software/data/education.html.
To obtain further information regarding IBM Informix training, please visit the IBM Informix
Education Services website at http://www-3.ibm.com/software/data/informix/education.
Comments or Suggestions
Thank you for attending this training class. We strive to build the best possible courses, and we
value your feedback. Help us to develop even better material by sending comments, suggestions
and compliments to dmedu@us.ibm.com.
Table of Contents
Module 1 Differences Between Oracle and IBM DB2 Instances
Objectives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-2
DB2 UDB Terminology . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-3
DB2 UDB Terminology (cont.) . . . . . . . . . . . . . . . . . . . . . . . 1-5
Oracle Architecture . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-7
DB2 Architecture . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-9
DB2 Architecture (cont.) . . . . . . . . . . . . . . . . . . . . . . . . . . 1-10
Configuration Files . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-11
Environment and Registry Variables . . . . . . . . . . . . . . . . . 1-12
Package . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-14
DAS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-16
DAS (cont.) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-17
Client Tools . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-18
Security . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-19
Concurrency Control . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-20
DB2 UDB Isolation Levels . . . . . . . . . . . . . . . . . . . . . . . . . 1-22
DB2 UDB and Oracle Terminology Comparison . . . . . . . . 1-25
Additional DB2 UDB Terminology . . . . . . . . . . . . . . . . . . . 1-26
Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-27
Module 3 Creating a DB2 UDB Instance
Objectives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-2
Requirements to Create an Instance . . . . . . . . . . . . . . . . . . 3-3
The SYSADM User and Group . . . . . . . . . . . . . . . . . . . . . . 3-4
Fenced User . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-5
Creating the DAS Instance . . . . . . . . . . . . . . . . . . . . . . . . . 3-6
The db2icrt Command . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-7
The db2icrt Command in Detail . . . . . . . . . . . . . . . . . . . . . .3-8
The Instance Directory Structure . . . . . . . . . . . . . . . . . . . . . 3-9
Initializing the Instance . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-10
User Authentication and Instance Authorities . . . . . . . . . . 3-11
Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-13
Adding and Extending Containers in a DMS Table Space 5-12
Characteristics of SMS and DMS Table Spaces . . . . . . . . 5-14
Table-Space Usage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-15
Containers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-16
Extent Allocation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-18
Creating Table Spaces . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-19
Basic Disk Storage Requirements . . . . . . . . . . . . . . . . . . . 5-21
Basic Disk Storage Requirements . . . . . . . . . . . . . . . . . . . 5-24
Monitoring Disk Storage Usage . . . . . . . . . . . . . . . . . . . . . 5-26
Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-27
Module 8 Data Migration Methods Loading Tables
Objectives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-2
Data Copy Methods . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .8-3
Privileges and Authorities Needed . . . . . . . . . . . . . . . . . . . 8-4
Import Data File Types . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-5
Import Parameters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-6
Using Import . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-7
Import Syntax . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-8
Load Input Data Method . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-9
Load Basics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-10
Load Parameters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-13
Using Load . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-14
Load Syntax . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-17
Checking for Constraints Violations . . . . . . . . . . . . . . . . . . 8-18
LOAD QUERY Command . . . . . . . . . . . . . . . . . . . . . . . . . 8-19
Monitoring Disk Usage After Load . . . . . . . . . . . . . . . . . . . 8-20
Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-21
Module 10 Using Constraints to Manage Business Requirements
Objectives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10-2
Types of Constraints . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10-3
Constraint Terminology . . . . . . . . . . . . . . . . . . . . . . . . . . . 10-6
Referential Constraint Delete Rules . . . . . . . . . . . . . . . . . 10-7
Check Constraints . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10-8
Constraint Syntax Similarities & Differences . . . . . . . . . . . 10-9
Informational Constraints . . . . . . . . . . . . . . . . . . . . . . . . . 10-10
Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10-11
REORG INDEXES/TABLE . . . . . . . . . . . . . . . . . . . . . . . 13-10
REORG Examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13-11
REORGCHK . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13-13
Buffer Pool Parameters . . . . . . . . . . . . . . . . . . . . . . . . . . 13-16
Page Cleaning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13-19
Tuning Buffer Pool Parameters . . . . . . . . . . . . . . . . . . . . 13-21
Disk Allocation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13-22
Page Size Tuning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13-23
Extent Size Tuning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13-25
Prefetch Size Tuning . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13-27
Process Tuning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13-28
Tuning MAXAGENTS and MAXAPPLS . . . . . . . . . . . . . . 13-29
DB2 UDB Self-tuning Capability . . . . . . . . . . . . . . . . . . . 13-30
Log Files . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13-31
The AUTOCONFIGURE Command . . . . . . . . . . . . . . . . 13-32
Monitoring the Server/Database . . . . . . . . . . . . . . . . . . . 13-34
Snapshot Monitoring . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13-36
Snapshot Switch Settings . . . . . . . . . . . . . . . . . . . . . . . . 13-37
Snapshot Example Output . . . . . . . . . . . . . . . . . . . . . . . . 13-39
Event Monitors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13-42
Event Monitor Example . . . . . . . . . . . . . . . . . . . . . . . . . . 13-44
Performance Configuration Wizard . . . . . . . . . . . . . . . . . 13-48
Health Center . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13-50
Memory Visualizer . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13-52
Memory Visualizer Panel . . . . . . . . . . . . . . . . . . . . . . . . . 13-53
Other Data Management Tools . . . . . . . . . . . . . . . . . . . . 13-55
Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13-58
Appendix A Oracle and DB2 UDB Comparisons
Module 1
This course provides an introduction to DB2 UDB administration, both server administration
and DBA functionality, for Oracle administrators. The functionality covered is that of DB2 UDB
in UNIX/Linux/Windows (Intel) environments, and it is aimed at transactional (OLTP) systems
rather than data warehouse or business intelligence (OLAP) systems.
Other courses provide the information specific to DW/BI systems and DB2 UDB Enterprise -
Extended Edition (EEE) servers. Those courses build on the material covered here.
Instance
An instance in DB2 UDB has a similar meaning to an instance in Oracle. Each instance in DB2
UDB refers to one set of processes that link back to the installed binary files in the DB2 UDB
directory. An instance includes:
Memory usage
Processor usage
Disk usage
Oracle restricts the definition of an instance to the processes and memory components. Thus,
with Oracle 9i, an instance consists of a number of background processes:
smon (system monitor)
pmon (process monitor)
dbwr (database writer)
lgwr (log writer)
ckpt (checkpoint)
Table space
A table space is logical space allocated for storing table data and indexes. This logical space is
comprised of one or more physical containers (either files, devices, or directories). You can
create any number of table spaces, but three are created by default when you create the database.
These are SYSCATSPACE, TEMPSPACE1, and USERSPACE1.
You might want to add other table spaces to separate table data from index data or to use as a
temporary table space. You will learn more about table-space usage in a later module.
In DB2 terminology we talk of or write about a "table space" (two words), whereas Oracle
generally uses "tablespace" (one word). In SQL and administrative statements for both servers,
the technical keyword is the one word TABLESPACE (e.g., ALTER TABLESPACE ...).
A Container is:
A physical storage location
Similar to a data file in Oracle
Container
A container is a physical storage location that is similar to a segment of a data file in Oracle. For
DB2 UDB, this location can be a directory, if assigned to an SMS table space, or a file or device,
if assigned to a DMS table space.
An SMS (system managed space) table space is one that is managed by the operating system as
part of its file system manager. A DMS (database managed space) table space is managed by the
database server instance.
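As an illustration, the two kinds of table space can be created as follows. This is a sketch; the table-space names and container paths are invented for this example, and the commands assume an existing database connection:

db2 "CREATE TABLESPACE smsdata MANAGED BY SYSTEM USING ('/db2/smsdata')"
db2 "CREATE TABLESPACE dmsdata MANAGED BY DATABASE USING (FILE '/db2/dmsdata.dat' 10000)"

For the DMS table space, the size (here 10000) is given in pages and is preallocated; the SMS container grows as needed within the file system.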
Extent size
An extent is a unit of space within a container of a table space. The extent size is specified as a
whole number of pages and is defined when a table space is created. In DB2 UDB, the extent
size is configured for each table space.
In Oracle terminology, tablespaces are collections of data files; tables, indexes, and other
database objects are placed within these tablespaces. Storage is allocated as extents consisting of
contiguous collections of Oracle data blocks (equivalent to the DB2 UDB page). Prior to Oracle
9i, the block size (2KB, 4KB, 8KB, and occasionally 16KB or 32KB) was determined by a single
database-wide initialization parameter (DB_BLOCK_SIZE) fixed at database creation.
Buffer pool
Similar to Oracle database buffers, the DB2 UDB database buffer pool is used to cache table and
index pages. In DB2 UDB, a buffer pool is exclusive to one database and is not shared across the
databases supported by the instance. When creating a database, a default buffer pool is also
created for that database. In addition, a database can have multiple buffer pools to take
advantage of the following two features.
First, DB2 UDB table spaces can have different page sizes. Since disk I/O requires that the page
size of the buffer pool match the page size of the table space, a separate buffer pool is required
for each different page size used in any table spaces.
Second, a table space in DB2 UDB can be assigned its own exclusive buffer pool. This
facilitates the caching of data for specific tables.
These DB2 UDB options are discussed in detail in the module on table spaces.
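For example, a buffer pool with an 8 KB page size and a table space that uses it might be created as follows. The names bp8k and ts8k and the container path are invented for this sketch:

db2 "CREATE BUFFERPOOL bp8k SIZE 1000 PAGESIZE 8K"
db2 "CREATE TABLESPACE ts8k PAGESIZE 8K MANAGED BY SYSTEM USING ('/db2/ts8k') BUFFERPOOL bp8k"

Because the page sizes match, pages read from ts8k are cached in bp8k rather than in the default buffer pool.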
[Figure: Oracle architecture, showing a remote user connecting through the listener (listener.ora, tnsnames.ora), user and server processes, the parameter file (init.ora), control files, the library cache, and the database]
Memory Structures
Oracle creates and uses memory structures to complete several jobs. Thus, for example, memory
is used to store program code being executed and data that is shared among users. Two basic
memory structures are associated with the Oracle server: the system global area (which includes
the database and redo log buffers, and the shared pool) and the program global area.
Processes
Processes are jobs or tasks that work in the memory of the computer. A process is a "thread
of control," a mechanism in an operating system that can execute a series of steps; some
operating systems use the terms "job" or "task" to describe this mechanism. The Oracle database
system has two general types of processes: user processes and Oracle processes.
User Processes
A user process is created and maintained to execute the software code of an application program
(such as a PRO*C program) or an Oracle tool (such as SQL*PLUS). The user process also
manages the communication with the server processes. User processes communicate with the
server processes through the program interface.
Oracle Processes
Oracle processes are called by other processes to perform functions on behalf of the invoking
process a server is created process to handle requests from connected user processes. In
addition there are a set of background processes for each instance.
The background processes are:
DBW0 (Database Writer) writes data from database buffer cache to data files.
LGWR (Log Writer) registers changes in redo log buffer to redo log files.
SMON (System Monitor) checks for consistency in the database.
PMON (Process Monitor) cleans up resources when one of the Oracle processes fails.
CKPT (Checkpoint) updates database status information in the control files and data
files whenever changes in the buffer cache are permanently recorded in the database.
[Figure: DB2 architecture, showing local and remote clients connected to server EDUs, including a UDF process]
On the client side, either local or remote applications, or both, are linked with the DB2 UDB
client library. Local clients communicate using shared memory and semaphores; remote clients
use a protocol such as Named Pipes (NPIPE), TCP/IP, NetBIOS, or SNA.
On the server side, activity is controlled by engine dispatchable units (EDUs). In the figure above
and on the next page, EDUs are shown as circles or groups of circles.
Processes
EDUs are implemented as threads within a single process on Windows-based platforms and as
single-threaded processes on UNIX. DB2 agents are the most common type of EDU. These
agents perform most of the SQL processing on behalf of applications. Prefetchers and page
cleaners are other common EDUs.
A set of subagents might be assigned to process the client application requests. Multiple
subagents can be assigned if the machine where the server resides has multiple processors or is
part of a partitioned database.
All agents and subagents are managed using a pooling algorithm that minimizes the creation and
destruction of EDUs.
[Figure: a database with directory, file, and device containers grouped into table spaces, plus secondary log files]
Shared Memory
Buffer pools are areas of database server memory where database pages of user table data, index
data, and catalog data are temporarily moved and can be modified.
The configuration of the buffer pools, as well as prefetcher and page cleaner EDUs, controls
how quickly data can be accessed and how readily available it is to applications.
Prefetchers retrieve data from disk and move it into the buffer pool before applications
need the data. Agents of the application send asynchronous read-ahead requests to a
common prefetch queue. As prefetchers become available, they implement those
requests by using big-block or scatter-read input operations to bring the requested pages
from disk to the buffer pool.
Page cleaners move data from the buffer pool back out to disk. Page cleaners are
background EDUs that are independent of the application agents. They look for pages
from the buffer pool that are no longer needed and write the pages to disk. Page
cleaners ensure that there is room in the buffer pool for the pages being retrieved by the
prefetchers.
Without the independent prefetchers and the page cleaner EDUs, the application agents would
have to do all of the reading and writing of data between the buffer pool and disk storage.
With DB2 UDB, there is one configuration file for the instance:
Contains parameter values for that instance
Oracle uses one parameter file (or init file, i.e., initxxxxx.ora) for
database settings for the whole instance (with approximately 200
possible settings in Oracle 8 and 260 settings in Oracle 9i).
Additional control files are used for networking and connecting:
LISTENER.ora, SQLNET.ora, TNSNAMES.ora, ...
Oracle configuration
Oracle uses one parameter file for the instance, and it tells the instance where to find the
instance control files. In DB2 UDB, there is one configuration file for the instance, but each
database also has its own configuration file.
DB configuration file
Every DB2 UDB database also has a configuration file that contains parameters for just that one
database. The various databases in an instance are configured separately within the bounds of
the instance.
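For example, both configuration files can be displayed and changed from the command line. The database name sample and the parameter values below are illustrative only:

db2 GET DBM CFG
db2 UPDATE DBM CFG USING MAXAGENTS 200
db2 GET DB CFG FOR sample
db2 UPDATE DB CFG FOR sample USING LOGPRIMARY 5

Note that some parameters take effect only after the instance or database is restarted.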
Environment Variables:
Very few (DB2INSTANCE, PATH)
Only take effect after instance is restarted
Set manually
Stored in the db2profile and userprofile files on UNIX
Registry Variables:
More than 100 in number
Take effect immediately
Set with the db2set command
Stored in the profile.env file on UNIX
Environment variables
Unlike Oracle, the operating environment for DB2 UDB does not rely heavily on environment
variables. In fact, there are only two environment variables that are needed to operate a DB2
UDB instance.
DB2INSTANCE operates much the same as the Oracle variable ORACLE_SID; it
stores the name of the current instance.
PATH is a UNIX variable, and it must contain the directory where the DB2 UDB binary
files are installed. With an Oracle server, the Oracle binaries are found under
ORACLE_HOME, and ORACLE_PATH provides a search path for scripts and other files.
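On UNIX, these variables are typically established by sourcing the instance owner's db2profile script. The path below assumes a default instance named db2inst1:

. /home/db2inst1/sqllib/db2profile
echo $DB2INSTANCE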
Registry variables
Registry variables do not have an equivalent feature in Oracle and are unique to DB2 UDB.
Registry variables function like environment variables with the major advantage being that they
take effect immediately and do not require the instance to be restarted. Registry variables are set
using the db2set command and are stored in a file called profile.env in the sqllib directory for
the instance.
db2set -g
DB2_DOCCDPATH=C:\IBM\SQLLIB\
DB2SYSTEM=BOB-LTOP
DB2PATH=C:\IBM\SQLLIB
DB2INSTDEF=DB2
DB2ADMINSERVER=DB2DAS00
db2set -all
[e] DB2PATH=C:\IBM\SQLLIB
[i] DB2ACCOUNTNAME=BOB-LTOP\db2admin
[i] DB2INSTOWNER=BOB-LTOP
[i] DB2PORTRANGE=60000:60003
[i] DB2INSTPROF=C:\IBM\SQLLIB
[i] DB2COMM=TCPIP
[g] DB2_DOCCDPATH=C:\IBM\SQLLIB\
[g] DB2SYSTEM=BOB-LTOP
[g] DB2PATH=C:\IBM\SQLLIB
[g] DB2INSTDEF=DB2
[g] DB2ADMINSERVER=DB2DAS00
Package
Applications that connect to DB2 UDB databases use packages to execute the SQL statements.
One application has one associated package, which contains all of the SQL statements found in
that application, plus the optimized query plan for each SQL statement.
A package is created using either the BIND or PREP command, which examines the application
looking for SQL statements and then creates an optimized query plan for each SQL statement
found. When the application runs and executes an SQL statement, the associated SQL statement
is located in the package and the optimized query plan for that SQL statement is used to access
the data.
The SQL statements that exist in the package are considered STATIC SQL statements, since
their query plans remain the same until the next BIND or PREP command is executed against
the application.
Example of the PREP and BIND stages (file names are placeholders):
db2 PREP <filename>.sqc BINDFILE VERSION V1.1
db2 BIND <filename>.bnd
Packages in Oracle
A package in Oracle is a collection of procedures and functions bundled together.
DAS
The DB2 Administration Server (DAS) is a special instance of DB2 UDB that keeps track of
other instances of DB2 UDB. It is automatically created and configured when DB2 UDB is
initially installed on the host machine and is automatically started whenever the host machine is
booted. The DAS provides the following specific functions:
Enables remote administration and monitoring of DB2 UDB instances
Provides a scheduler that is used to execute user-defined jobs. These jobs may include
operating system commands.
Allows DB2 UDB Discovery to return information to remote clients
Queries the operating system for user and group information
Note: In DB2 UDB version 8, the DAS is not an instance, but a separate process that
manages instances.
In Oracle, each instance is atomic, having nothing to do with any other instance that may be
running on the same hardware. There is no concept of a DAS in Oracle.
DAS
Although the DAS is automatically created at installation time and is automatically started when
the system boots, you can also manually create, start, stop, list, and remove the DAS.
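For example, on the server the DAS can be started and stopped manually with the db2admin command:

db2admin start
db2admin stop

The commands for creating, listing, and dropping the DAS vary by platform and version; consult the Command Reference for your release.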
The DAS is the connection between the GUI tools on the client and those on the server. If the
DAS is not installed or has been stopped, you cannot connect to the database(s) using the GUI
tools. Since there can be only one DAS running on the server, multiple versions of DB2 UDB,
such as V6.1 or V7.2, connect through the same DAS.
Some customers do not want to use the GUI administration tools and their applications connect
through JDBC (GUI uses ODBC), so they do not create the DAS. Most customers want the GUI
interface, however.
Run-Time Client:
Must be installed on every client workstation used to access the
database server
Contains the support necessary to connect to the server using
ODBC, JDBC and the Command Line Processor (CLP)
Supported communication protocols are APPC, IPX/SPX,
named pipes, NetBIOS, and TCP/IP
Administration Client:
Installed on a client workstation and consists of a suite of GUI
tools that provide remote administration of databases and
instances
Database security:
Provided by the GRANT SQL statement
Database Security
The permissions on the database are all granted using the GRANT SQL statement. A user that
has SYSADM authority on the instance grants the initial database administrator (DBADM)
authority on a database. From that point on, the user with DBADM authority on the database can
grant further database object permissions, such as select permission on a table, to other users.
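For example (the user names are invented for this sketch; the first statement requires SYSADM authority, the second DBADM authority or ownership of the table):

db2 "GRANT DBADM ON DATABASE TO USER mary"
db2 "GRANT SELECT ON TABLE employee TO USER joe"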
The Problem
Database and transaction processing differ considerably among the various vendors and
potentially require different approaches by developers. One of the most significant differences users
notice when they port applications from Oracle to DB2 is the difference in concurrency control
between the two databases. Here we address the locking behaviors of each database and
thereby introduce you to how to map application behavior from Oracle to DB2 UDB.
Some Oracle applications, when ported to DB2, appear to behave identically, and the topic of
concurrency can be ignored. However, if your applications involve frequent accesses to the same
tables, you may find that your applications behave differently.
To get the best results, sometimes it is worth redesigning applications to achieve the best
concurrency in DB2 UDB. Understanding concurrency control in DB2 UDB helps you know
how to rework an application.
Isolation Levels
Before looking at how to improve the concurrency of ported applications, it is useful to have a
quick description of the differences between DB2 UDB's and Oracle's implementations of
concurrency control.
Oracle implements an optimistic view of locking. The Oracle assumption is that in most cases,
the data fetched by an application is unlikely to be changed by another application. It is up to the
application to take care of the situation in which the data is modified by another concurrent
application.
For example, when an Oracle application starts an update transaction, the old version of the data
is kept in the rollback segment. When any other application makes a read request for the data, it
gets the version from the rollback segment. Once the update transaction commits, the rollback
segment version is erased and all other applications see the new version of the data. Different
readers of the data may hold a different value for the same row, depending on whether the data is
fetched before or after the update commits. Hence, it is also called the Oracle versioning
technique.
To ensure read consistency in Oracle, the application must issue SELECT FOR UPDATE. In this
case, other updaters (and other SELECT FOR UPDATE readers) of those rows are blocked.
DB2 UDB has a suite of concurrency control schemes to suit the needs of applications. An
application can set the level of isolation to provide the proper level of concurrency. One of these
means can be used:
Here is a brief description of each isolation level. For more details, consult the DB2 UDB
Administration Guide.
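For example, the isolation level for a CLP session can be set before connecting; RR (Repeatable Read) below is just one of the available levels, and the database name sample is an assumption for this sketch:

db2 CHANGE ISOLATION TO RR
db2 CONNECT TO sample
db2 "SELECT * FROM employee"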
These are a few terms that are used freely in the IBM publications, and need some definition.
DASD (direct access storage device) is simply another term for a disk drive device.
SCALAR is a property assigned to a variable. It means that the value of the variable is singular,
as opposed to a range or a set, and it has a relationship to other singular values on a scale or line.
For example, the value 3 would be scalar since it is singular, and it is greater than 2 and less
than 4 on a number line. In like manner, the value 'c' is scalar since it can be compared to
other characters in alphabetical order.
SARGABLE is a contraction of the two words "searchable argument." For example, the name
'Smith' is sargable in the last-name column of the customer table.
PREDICATE is simply a condition. For example, the condition last_name = 'Smith' could be
a predicate in a WHERE clause. All rows returned by the server are predicated on having a last
name of 'Smith'.
FIXPAK is an intermediate revision of DB2 UDB that fixes small bugs or adds minor new
features. These revisions occur between scheduled releases of major versions. In some cases a
FIXPAK modifies the code sufficiently to increment the decimal part of the version number. For
example, there is a fixpak that changes DB2 UDB version 7.1 to version 7.2.
1.2 There can be only one database assigned to one DB2 UDB instance:
A True.
B False.
1.3 In the DB2 UDB architecture, the term table space refers to:
A One table.
B A logical space allocated for storing table data.
C A physical file on disk for storing data.
1.10 The Database Administration Server (DAS) instance allows an administrator to manage
local instances from a remote location.
A True.
B False.
1.11 The DB2 UDB Run-Time Client is installed on the client machine and provides:
A A bundle of IBM supplied applications for common business functions,
such as finance, human relations, and inventory control.
B A bundle of connectivity products such as ODBC, JDBC, and the
Command Line Processor.
C A bundle of administration tools that allows for remote management of
DB2 UDB instances.
1.2 There can be only one database assigned to one DB2 UDB instance:
A True.
B False.
There can be multiple databases assigned to one DB2 UDB instance.
1.3 In the DB2 UDB architecture, the term table space refers to:
A One table.
B A logical space allocated for storing table data.
C A physical file on disk for storing data.
A logical table space is composed of one or more physical containers.
1.10 The Database Administration Server (DAS) instance allows an administrator to manage
local instances from a remote location.
A True.
B False.
1.13 Security for the DB2 UDB instance starts at the instance level, as opposed to security for
the Oracle instance, which starts at the database level.
A True.
B False.
Client Connectivity
2-2
2-3
2-4
The Client Configuration Assistant (CCA) is a tool that contains wizards to help set up client
connections to local or remote DB2 servers. The tool can also be used to configure DB2 Connect
servers.
The Client Configuration Assistant lets you maintain a list of databases to which your
applications can connect, cataloging nodes and databases while shielding you from the inherent
complexities of these tasks.
The Client Configuration Assistant provides the following methods to assist in adding new
database connection entries:
Use a profile. A profile can be exported from a previously configured machine and used
to configure new machines.
Search the network. The CCA can search the network for DB2 systems which have an
administration server running.
Manually configure a connection to a database. All information must be provided, but a
wizard is started to help make the task simpler.
The ability to invoke the Control Center from the Configuration Assistant.
The option to configure both local and remote servers, including DB2 Connect servers.
The ability to create configuration templates without affecting the local configuration.
Import and export capabilities for exchanging configuration templates with other
systems.
Improved response time for discovery requests along with the option to refresh the list
of discovered objects at any time.
The ability to view and update applicable database manager configuration parameters
and DB2 registry variables.
The following Version 8 tools support Version 7 servers (with some restrictions) and Version 8
servers:
Configuration Assistant (This tool has different components, of which only the import/
export configuration file can be used with Version 7 servers; all of the components
work with Version 8)
Data Warehouse Center
Replication Center
Command Center (including the Web-version of this center)
SQL Assist
Development Center
Visual Explain
In general, any Version 8 tool that is only launched from within the navigation tree of the
Control Center, or any details view based on these tools, will not be available or accessible to
Version 7 and earlier servers. You should consider using the Version 7 tools when working with
Version 7 or earlier servers.
You can access the Configuration Assistant by navigating:
Start=> Programs=> IBM DB2=> Set-up Tools=>
Configuration Assistant
2-6
The SQL statement in step 4 fails because there is no column named C2 in table A. Because that
statement was issued with auto-commit ON (the default), its failure rolls back not only the
statement in step 4 but also the one in step 3, whose work was still uncommitted because it was
issued with auto-commit OFF.
The command:
db2 list tables
then returns an empty list.
Tip The various commands and options for the Command Line Processor are explained
in Appendix LE: Lab Exercises Environment of this document.
2-8
Note DB2 Administration Clients are available for the following platforms: AIX, HP-UX,
Linux, the Solaris Operating Environment, and Windows operating systems.
The Administration Client is installed on a client workstation and consists of a suite of GUI
tools that provide remote administration of databases and instances. The administration tools
consist of the following:
Control Center This is the primary screen from which the tools are accessed.
Data Warehouse Center This provides a central control point from which to manage
the extraction and transformation of data for your data warehouse.
V8
DB2 V7 tools Equivalent DB2 V8 tools
Control Center Control Center
Data Warehouse Center Data Warehouse Center
Command Center Command Center
Script Center Task Center
Alert Center Health Center
Journal Journal
License Center License Center
Stored Procedure Builder Development Center
Satellite Administration Center Satellite Administration Center
Information Center Information Catalog Center
Replication Center
2-10
1.2 Once connected to the sample database, query all columns and all rows in the sales table.
1.2 Once connected to the sample database, query all columns and all rows in the sales table.
db2 "SELECT * FROM sales"
41 record(s) selected.
3-2
3-3
3-4
Before a database manager instance can be created, a user must exist to function as the systems
administrator (SYSADM) for the instance. Some thought should be given to the name chosen
for this user, because the name of the database manager instance is the same as the name for this
user. This user also becomes the owner of the instance. When the instance is created, this
user's primary group name is used to set the value of the database manager configuration
parameter SYSADM_GROUP. Any additional users that wish to have SYSADM authority on the
instance must also belong to this group. SYSADM authority has total authority over all
functions for the instance in a similar way that root has total authority on a UNIX system.
3-5
Before a database manager instance can be created, a user must exist that can run any user-
defined functions (UDFs) and stored procedures in fenced mode. This user is necessary because
UDFs can be written in the C programming language, which can use pointers to reference
memory addresses outside their defined memory space. To prevent a poorly written UDF
from corrupting DB2 UDB memory, UDFs are commonly run in a fenced section of
memory, which prohibits references to memory addresses outside the fence.
3-6
If this installation of DB2 UDB is a new installation, then a Database Administration Server
(DAS) is created along with the database manager instance. During the installation process, you
are asked to provide a name for the DAS. The user name that you provide becomes the name of
the DAS and the installing user has SYSADM authority on the DAS. In addition, the registry
variable DB2ADMINSERVER is set to the name of the DAS. If you do not plan on using the GUI
administration tools, the DAS is not needed and can be dropped after the database manager
instance has been created.
Drop the DAS using the dasidrop command (UNIX) or the db2admin drop command (INTEL).
If, at a later date, you decide to use the GUI Administration tools and you need to have a DAS,
you can create one using the dasicrt command (UNIX), or the db2admin create command
(INTEL).
V8 In DB2 UDB version 8, the DAS is not an instance, but a separate process that
manages instances.
Syntax:
db2icrt -u <fenced_user> <sysadm_user>
Example:
db2icrt -u fence101 inst101
3-7
The DB2 command used to create the database manager instance is db2icrt. In the above
example a database manager instance is created with the name inst101, and a fenced user is
created named fence101. The user inst101 is the owner of the instance and is assigned SYSADM
authority over the instance.
In addition, all files associated with the instance, plus any default SMS table spaces, are created
in the $HOME directory for user inst101.
3-8
The db2icrt command installs and configures the database manager instance on the server.
Normally only the user root has authority to run this command, but in our classroom
environment, the student logins have been given authority to run this command.
The environment variable DB2INSTANCE is set to the name of the database manager instance and
PATH is set to include the path to the DB2 UDB binary files. A new directory, sqllib, is created
in the $HOME directory of the user specified as the SYSADM.
If it is a new installation, a DAS is created.
The communications protocols that are supported on the server are examined and entries are
made in the operating system services file to allow communications with the DAS and the
database manager instance.
Finally, the files necessary to set environment variables are created. The first of these two files is
db2profile (or db2bashrc or db2cshrc, depending on your shell), which sets the default
environment variables. This file is often overwritten by new versions of DB2 UDB or by
fixpacks, and you should not make any changes to it. The second file is called userprofile and is
provided for your use to set environment variables unique to your installation. It will not be
overwritten by new versions of DB2 UDB or by fixpacks.
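For instance-specific settings, you can add export statements to userprofile. For example, a line
such as the following could be added (the variable and value shown here are illustrative only;
use whatever settings your installation requires):
Example:
export EDITOR=vi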
| adm
| backup
| cfg
| ctrl
| db2dump
| function
| log
| .netls
| security
| sqldbdir
| tmp
3-9
The db2diag.log file resides in the DIAGPATH directory. In the directory structure shown
above, the default location is /instname/sqllib/db2dump.
The amount of detail captured in the db2diag.log file is controlled by the DIAGLEVEL
configuration parameter. This parameter can be set to 0, 1, 2, 3 or 4.
The DBM configuration file (db2systm) resides in /instname/sqllib, but this file is not human-
readable, so you cannot edit it directly. You must use the db2 update dbm cfg command.
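For example, to capture the maximum level of detail in the db2diag.log file, you could raise
DIAGLEVEL from its default of 3 to 4:
Example:
db2 update dbm cfg using DIAGLEVEL 4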
V8 In v8, the db2diag.log file is split into db2diag.log and <instname>.nfy. The
admin log file (.nfy) is intended for administrators, while the diagnostic log file is
for troubleshooting personnel. Both files' default location is /instname/sqllib/
db2dump. A new DBM parameter, NOTIFYLEVEL, controls the level of
information in the <instname>.nfy file. This parameter can be set to 0, 1, 2, 3 or 4.
Note Appendix D in this document contains examples of the DBM configuration file
parameters.
Example:
db2 attach to inst101
3-10
Now that an instance is installed, it can be started. When the db2start command is executed, the
system reads the value of DB2INSTANCE and starts the specified database manager instance.
The process of starting consists of reading the DBM configuration file and setting up UNIX
processes (called agents) and memory control structures to allow communication with the
instance.
The DB2 attach command assigns agents to an application so that instance-level utilities and
commands, such as CREATE DATABASE, can work. Use the DB2 attach to <instance_name>
command to establish this attachment.
At this point there is still no connection to any of the databases on the instance. The CONNECT
command creates a connection to a database and is discussed in the next module.
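Putting these steps together, a typical sequence for an instance named inst101 (the instance
name is illustrative) might look like this:
Example:
db2start
db2 attach to inst101
db2 detach
db2stop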
3-11
SYSMAINT authority is the third highest level of database manager instance authority. Users who
have SYSMAINT authority have the same restrictions as users with SYSCTRL authority; in
addition, they cannot terminate user sessions, create or drop databases, or create or drop table
spaces. Users who belong to the group named in the SYSMAINT_GROUP DBM configuration
parameter have SYSMAINT authority on the instance.
Unlike an Oracle configuration file, the DBM configuration file used by DB2 UDB cannot be
updated directly, but is updated using the db2 update dbm cfg command.
db2 update dbm cfg using <parameter_name> <value>
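For example, to assign SYSMAINT authority to an operating system group named dbmaint (a
hypothetical group name used here for illustration):
Example:
db2 update dbm cfg using SYSMAINT_GROUP dbmaint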
3-13
1.2 Double-click the telnet icon and select the name of the training server supplied by your
instructor.
Note The db2icrt command can only be run as root, so when you run db2icrt in the IBM
classroom, you will automatically run it with root permissions.
1.5 Now, create the instance. The command to create the instance is:
db2icrt -u fence### inst###
Where fence### is your fenced user, and inst### is your instance administrator. It may
be necessary to type the entire path to the db2icrt command, which would be:
For Linux:
/usr/IBMdb2/V7.1/instance/db2icrt -u fence### inst###
For Solaris:
/opt/IBMdb2/V7.1/instance/db2icrt -u fence### inst###
This could take a few minutes to execute. When it completes, you will get the message:
Program db2icrt completed successfully
2.1 When db2icrt ran, it created a directory called sqllib in the home directory of inst###.
To examine this directory type:
cd /home/inst###/sqllib
ls -l |more
2.2 There are some environment variables that need to be set in order to use DB2 UDB, such
as DB2INSTANCE and PATH. When db2icrt ran, it created a file called db2profile (or
db2bashrc), which is used to set default DB2 UDB environment variables. Examine this
file to determine how these variables are being set:
more db2profile
2.3 The db2profile file may be overwritten with subsequent releases of DB2 UDB or with
fixpacks, so if any other environment variables need to be set, they can be established in
a file called userprofile. Examine userprofile:
more userprofile
2.4 When db2icrt ran it modified your .profile (or .bashrc) file to execute the db2profile
command file. Examine your .profile:
cd
more .profile
or:
more .bashrc
3.2 Find the options available for the db2 get command by typing:
db2 ? get | more
3.4 Obtain the values for the database manager configuration file:
db2 get dbm cfg | more
Tip Consult the Appendix for example DBM and DB configuration file values.
Note If DB2COMM is not set, see the solutions of this exercise for instructions to set it.
5.2 In a DB2 Command Window, change to your student HOME directory and list it.
Change to your instance directory and list it. Finally, change to the db2dump directory,
and list it.
5.3 Use the view command to see the contents of the db2diag.log file. Find the following
information:
Timestamp of the last time the Database manager was started ____________.
PID of the last db2star2 (startdbm) process _________________.
1.2 Double-click the telnet icon and select the name of the training server supplied by your
instructor.
1.4 Before an instance can be created, two users need to exist. One is the instance
administrator and one is the fenced user, whose purpose will be discussed later. Since
you were able to log in as the instance administrator, it exists. However you need to get
the group name, which will be used to define the system administrators for your instance.
To obtain the group name type:
id inst###
group name: ______________________
Write the NAME (not the number) of the GID into the blank.
To verify that the fenced user exists, type:
id fence###
1.5 Now, create the instance. The command to create the instance is:
db2icrt -u fence### inst###
Where fence### is your fenced user, and inst### is your instance administrator. It may
be necessary to type the entire path to the db2icrt command, which would be:
For Linux:
/usr/IBMdb2/V7.1/instance/db2icrt -u fence### inst###
For Solaris:
/opt/IBMdb2/V7.1/instance/db2icrt -u fence### inst###
This could take a few minutes to execute. When it completes, you will get the message:
Program db2icrt completed successfully
Several minutes can go by while the instance is being created. The program should
end by giving you a message that the program completed successfully. If not,
contact your instructor.
2.1 When db2icrt ran, it created a directory called sqllib in the home directory of inst###.
To examine this directory type:
cd /home/inst###/sqllib
ls -l |more
2.2 There are some environment variables that need to be set in order to use DB2 UDB, such
as DB2INSTANCE and PATH. When db2icrt ran, it created a file called db2profile (or
db2bashrc), which is used to set default DB2 UDB environment variables. Examine this
file to determine how these variables are being set:
more db2profile
You should see the shell scripts used to set up the values for DB2INSTANCE and
PATH
2.3 The db2profile file may be overwritten with subsequent releases of DB2 UDB or with
fixpacks, so if any other environment variables need to be set, they can be established in
a file called userprofile. Examine userprofile:
more userprofile
The userprofile file will most likely be blank, since no other environment variables
are being set on the server.
2.4 When db2icrt ran it modified your .profile (or .bashrc) file to execute the db2profile
command file. Examine your .profile:
cd
more .profile
or:
more .bashrc
You should see some lines added that execute the db2profile file.
3.2 Find the options available for the db2 get command by typing:
db2 ? get | more
3.4 Obtain the values for the database manager configuration file:
db2 get dbm cfg | more
The database manager (DBM) configuration parameters should be returned. Note
that SYSADM_GROUP has been set to the group name that you obtained earlier.
Also note that DFTDBPATH has been set to the home directory of the SYSADM
user that created the instance. This is where databases will be created by
default.
Tip Consult the Appendix for example DBM and DB configuration file values.
5.2 In a DB2 Command Window, change to your student HOME directory and list it.
Change to your instance directory and list it. Finally, change to the db2dump directory,
and list it.
cd
ls -l
cd sqllib
ls
cd db2dump
ls -l
-rw-rw-rw- 1 insta8 insta8 361 Jan 22 09:48 db2diag.log
-rw-r----- 1 insta8 insta8 5242044 Jan 22 09:47 db2eventlog.000
-rw-rw-rw- 1 insta8 insta8 363 Jan 22 09:48 insta8.nfy
5.3 Use the view command to see the contents of the db2diag.log file. Find the following
information:
Timestamp of the last time the Database manager was started ____________.
PID of the last db2star2 (startdbm) process _________________.
Creating a Database
4-2
4-3
Buffer pools are allocated at the database level in DB2 UDB, and every database must have at
least one buffer pool. One default buffer pool, IBMDEFAULTBP, is created when the CREATE
DATABASE command is processed. Additional buffer pools can be created using the CREATE
BUFFERPOOL SQL statement. For example, the syntax to create a buffer pool named bp_four_k
with 10,000 pages and a page size of 4096 bytes is shown below:
Syntax:
CREATE BUFFERPOOL bufferpool_name
SIZE number_of_pages
PAGESIZE integer K
Example:
CREATE BUFFERPOOL bp_four_k
SIZE 10000
PAGESIZE 4K
Once a buffer pool has been created, it can be associated with one or more table spaces. Use the
following syntax to create (or alter) a DMS table space to associate it with this buffer pool:
Example:
CREATE TABLESPACE customer_tablespace
MANAGED BY DATABASE
USING (DEVICE '/dev/rdsk/device101' 16000)
EXTENTSIZE 16
PREFETCHSIZE 32
BUFFERPOOL bp_four_k
Example:
ALTER TABLESPACE customer_tablespace
BUFFERPOOL bp_four_k
There are several reasons you might want to have multiple buffer pools. If your database has a
relatively large table that is randomly accessed (in which case data caching would be of limited
value), creating a small buffer pool for this table space would prevent the pages from being
cached in the main buffer pool. If your database has a table space with an 8K, 16K, or 32K page
size, then you need a buffer pool with a page size to match. At least one buffer pool must exist
for each page size used in the database.
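For example, before creating a table space with an 8K page size, you could create a matching
buffer pool (the name and size shown here are illustrative):
Example:
CREATE BUFFERPOOL bp_eight_k
SIZE 5000
PAGESIZE 8K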
4-5
There is only one type of database that is available with DB2 UDB, and it has several of the
attributes of an ANSI-compliant database.
There is only one log buffer per database. It is flushed when a COMMIT or ROLLBACK statement is
executed or it becomes full.
Table names must include the owner (schema) and are considered fully qualified. For example,
the following SELECT statement will not run in DB2 UDB unless the connected user (for
example, martha) owns a table named customer:
SELECT * FROM customer;
ANSI-compliant databases, by definition, cannot have logging turned off. In DB2 UDB,
however, logging can be turned off selectively for specific tables (for example during a load),
and DB2 UDB does not follow the strict ANSI logging standard.
ANSI-compliant databases do not automatically grant permission to the user PUBLIC. However,
in DB2 UDB, the user PUBLIC is granted a generous set of default permissions when a database
object is created.
4-6
One of the attributes of an ANSI database is the fully qualified table name. DB2 UDB
implements this attribute using schemas. For example, suppose that the user bobjones creates a
table called customer. In the database, this table would have the name bobjones.customer.
Next, suppose that the user martha creates a table called customer. Her table would be named
martha.customer in the database. When bobjones accessed the customer table he would access
the bobjones.customer table. If bobjones attempted to access the martha.customer table, he
could receive an error, depending on permissions granted to him on that table.
It is conceivable that one database could contain multiple schemas; however, for simplicity, most
DB2 UDB databases have only one user-defined schema.
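For example, martha could query bobjones's table by qualifying the name, or change her
current schema so that unqualified names resolve to bobjones (assuming she has been granted
the necessary privileges on the table):
Example:
SELECT * FROM bobjones.customer;
SET CURRENT SCHEMA bobjones;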
There are several system-defined schemas created for a database. They are:
NULLID
SYSCAT
SYSFUN
SYSIBM
SYSSTAT
4-7
In DB2 UDB only users in the SYSADM or the SYSCTRL user groups can create databases. When
a user creates a database, they are given database administrator (DBADM) authority on that
database.
However, only users with SYSADM authority on the instance can grant DBADM authority on
databases. Therefore, it is possible for a user with SYSCTRL authority to create a database and be
granted DBADM authority on the database, but not be able to grant DBADM to other users. In
addition, when SYSADM authority is revoked at the instance level, DBADM authority is not
revoked at the database level. Therefore, it is again possible for a user to have DBADM authority
on a database but not be able to grant DBADM authority to other users.
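For example, a user with SYSADM authority, while connected to the database, could grant
DBADM authority to another user (the user name maria is illustrative):
Example:
db2 "GRANT DBADM ON DATABASE TO USER maria"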
Authorities:
Exist at the instance level
Exist for the database level
Privileges:
Exist for actions on database objects
4-8
Once a user is connected to a database, any actions that they can execute are controlled by
privileges granted to them on the database objects. The following is a partial list of privileges
available for database objects. Due to time limitations, it is beyond the scope of this class to
describe all of the privileges available on all of the database objects. However, selected
privileges will be discussed throughout the rest of this course in conjunction with other topics.
An important point here is that privileges in DB2 UDB are more granular and give you finer
control over a database than privileges in an Oracle database.
For a complete discussion of authorities and privileges, you can attend the DB2 Universal
Database Administration Workshop, which is available for the following operating systems:
Linux (CF20)
UNIX (CF21)
Windows NT (CF23)
Solaris (CF27)
There is also a CBT self study course, Fast Path to DB2 UDB for Experienced Relational DBAs
(CT28), which contains a superb explanation of privileges and is available for download free of
charge at:
www.ibm.com/software/data/db2/selfstudy
V8 Two new privileges added in version 8 can be used when creating and executing
external programs containing user-defined functions. They are:
CREATE_EXTERNAL_ROUTINE
EXECUTE (UDF privileges)
4-10
In DB2 UDB the CREATE DATABASE statement is a command line statement and not an SQL
statement. Either of the CREATE DATABASE commands shown above creates a database using the
default settings.
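For example, either of the following commands creates a database named storesdb with the
default settings (db is accepted as an abbreviation for database):
Example:
db2 create database storesdb
db2 create db storesdb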
4-11
When the CREATE DATABASE statement is executed, the database manager (instance) executes
the actions listed above. Each of these actions will be described in the following pages.
4-12
The instance creates a subdirectory to store the database files and table spaces. This subdirectory
is named /NODE0000/SQL00001 for the first database created on the instance. The starting point
for this subdirectory can be an option specified in the CREATE DATABASE statement, or the
default is the value of the DFTDBPATH DBM configuration parameter.
The next few pages show the subdirectories created under the database subdirectory.
4-13
The database configuration file that is created by the database manager is initially populated
with default settings. Once the database has been created, the configuration file can be modified
with the db2 update db cfg command.
Important!
The database configuration file cannot be accessed directly using an editor. The db2
update db cfg using <parameter> <parameter_value> command must be used
instead.
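For example, to change the MAXAPPLS parameter for a database named storesdb (as is done
in the lab exercise for this module):
Example:
db2 update db cfg for storesdb using MAXAPPLS 50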
4-14
When a database is created, three SMS table spaces are created by default in the subdirectory
defined for the database. There are options available in the CREATE DATABASE command to
create the default table spaces as DMS table spaces or to create them in different locations.
The database configuration file, SQLDBCON, resides in the /NODE0000/SQL00001
subdirectory. The default location of the database log files is /NODE0000/SQL00001/
SQLOGDIR for the first database created.
4-15
The system catalog tables contain all the information necessary to define the database and are
created in the SYSCATSPACE table space. The SYSCAT, SYSFUN, and SYSSTAT tables are
actually views into underlying tables, which belong to the SYSIBM schema. The SYSCAT
views contain all of the data necessary to define the database objects. The SYSFUN views
contain all the data for functions, and the SYSSTAT views contain all of the statistical
information used by the optimizer to determine query plans.
The system catalog views and tables cannot be explicitly created or dropped and have select
privilege granted to PUBLIC by default.
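Because SELECT privilege on the catalog is granted to PUBLIC, any connected user can query
it. For example, to list the catalog views themselves:
Example:
db2 "SELECT tabname FROM syscat.tables WHERE tabschema = 'SYSCAT'"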
4-16
DBADM authority is initially granted to the creator of the database. In addition, the CONNECT,
CREATETAB, BINDADD, IMPLICIT_SCHEMA, CREATE_NOT_FENCED, and LOAD privileges are
granted to the database creator.
If the DBADM authority is revoked from the database creator, the other privileges still remain
and must also be explicitly revoked.
PUBLIC is granted:
SELECT privilege on system catalog tables and views
CONNECT, IMPLICIT_SCHEMA, CREATETAB, and BINDADD
BIND and EXECUTE privilege on each utility
USE privilege on USERSPACE1 table space
4-17
When a database is created, the user PUBLIC is granted several privileges by default.
SELECT privilege is granted on all of the system catalog tables.
The CONNECT privilege allows access to the database.
The IMPLICIT_SCHEMA privilege allows a user to implicitly create schemas.
The CREATETAB privilege allows a user to create tables within the database.
The BINDADD privilege allows a user to generate packages.
The BIND privilege is really a rebind privilege, since the package must first be created
using the BINDADD privilege.
The EXECUTE privilege allows a user to execute an existing package.
The USE privilege allows a user to specify (or default to) a table space when creating a
table.
Privileges can be granted to individual users or to groups of users (defined at the operating
system level). This group functionality is roughly equivalent to an Oracle ROLE.
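If these defaults are too generous for your environment, the privileges can be revoked from
PUBLIC explicitly. For example (a sketch only; adjust to your site's security policy):
Example:
db2 "REVOKE CREATETAB ON DATABASE FROM PUBLIC"
db2 "REVOKE BINDADD ON DATABASE FROM PUBLIC"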
4-18
The commands and conditions listed above show the three methods for starting up a database
and the associated command or condition required to shut down the database.
Connect command
If a database is not yet running, the db2 connect command starts the database and then
establishes a connection for the application issuing the command. After the database is running,
additional applications can connect to and disconnect from the database, including the
application that made the initial connection. The database remains running as long as at least
one connection is attached to it. When all applications have disconnected from the database,
the database automatically shuts down.
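A database can also be started and stopped explicitly, independent of connections, using the
activate and deactivate commands (shown here for a database named storesdb):
Example:
db2 activate db storesdb
db2 deactivate db storesdb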
4-20
4-21
1.2 What information can you specify for the create db command?
_______________________________________________________________
_______________________________________________________________
_______________________________________________________________
1.4 What type of table space (SMS or DMS) will they be by default?
_______________________________________________________________
_______________________________________________________________
_______________________________________________________________
1.6 Create your database with a name of storesdb using the default settings. It may take a
few minutes to create the database.
cd $HOME
db2 create db storesdb
1.8 After the database is created you need to connect to it. To find the current connection
state for the database use the DB2 command get connection state:
db2 get connection state
1.10 Connect to the storesdb database using the DB2 connect command:
db2 connect to storesdb
2.2 Some of the configuration parameters can be changed. Update the MAXAPPLS parameter
to allow for a maximum of 50 connections.
db2 update db cfg for storesdb using maxappls 50
2.3 Changes to the configuration file do not take effect until the database is deactivated and
activated again. Deactivate and activate the database by using the following commands:
db2 terminate
db2 force application all
db2 connect to storesdb
2.4 List the names and ID numbers for the three table spaces that were created during
creation of the database:
db2 list tablespaces |more
Table space name ID number
2.5 List the container information for table space ID 0 using the DB2 command list
tablespace containers. What is the physical location of the container?
db2 list tablespace containers for 0
_______________________________________________________________
2.6 To look at the files used to store tables, change to the SQLT0000.0 directory and list it:
cd /home/inst###/inst###/NODE0000/SQL00001/SQLT0000.0
ls -l |more
SYSIBM.SYSTABLES.tableid = ________
Note Type the SELECT as one line. When the ENTER key is pressed, DB2 considers the
command as complete and starts processing it.
2.8 Does the SYSIBM.SYSTABLES table have any index or LOB data?
2.9 What are the default path containers for the TEMPSPACE1 (table space 1) table space
and the USERSPACE1 (table space 2) table space?
db2 list tablespace containers for 1
db2 list tablespace containers for 2
2.10 In a DB2 Command Window, change to your student HOME directory, then change to
your database directory, then change to the NODE0000 directory and list it. Change to
your database top directory SQL00001 (the first one you created) and list it.
1.2 What information can you specify for the create db command?
The name and location of the database, an alias name, the codeset and territory for
storing the data, a collating sequence, and table space information such as extent
size and location can all be specified using the create db command.
1.4 What type of table space (SMS or DMS) will they be by default?
SMS.
1.5 What is the default path on which the database is created?
By default, the database will be created in the path specified in the DFTDBPATH
database manager configuration parameter. This should be the /home/inst###
directory.
1.6 Create your database with a name of storesdb using the default settings. It may take a
few minutes to create the database.
cd $HOME
db2 create db storesdb
This can take several minutes to execute since the disk space needs to be initialized
for all of the system catalog tables, the user space, and the temp space. The
command should return a message saying that the command completed
successfully. If not, contact your instructor.
1.8 After the database is created you need to connect to it. To find the current connection
state for the database use the DB2 command get connection state:
db2 get connection state
1.10 Connect to the storesdb database using the DB2 connect command:
db2 connect to storesdb
2.2 Some of the configuration parameters can be changed. Update the MAXAPPLS parameter
to allow for a maximum of 50 connections.
db2 update db cfg for storesdb using maxappls 50
The command should return a message indicating that it completed successfully.
2.3 Changes to the configuration file do not take effect until the database is deactivated and
activated again. Deactivate and activate the database by using the following commands:
db2 terminate
db2 force application all
db2 connect to storesdb
2.4 List the names and ID numbers for the three table spaces that were created during
creation of the database:
db2 list tablespaces |more
Table space name ID number
SYSCATSPACE 0
TEMPSPACE1 1
USERSPACE1 2
2.5 List the container information for table space ID 0 using the DB2 command list
tablespace containers. What is the physical location of the container?
db2 list tablespace containers for 0
This is an SMS table space so the container is a directory which is /home/inst###/
inst###/NODE0000/SQL00001/SQLT0000.0. The files that make up the tables will
be located in this directory.
2.7 By selecting tableid from the catalog view syscat.tables, we can map tables to files. The
file names are derived from the tableid in the format SQLXXXXX.DAT for data files,
.INX for index files, and .LB for LOB files. Run the following SELECT statement to
find the table ID for SYSIBM.SYSTABLES:
db2 "SELECT SUBSTR(tabname,1,18)
AS table_name, tableid
FROM syscat.tables
WHERE tabschema ='SYSIBM' ORDER BY 2" | more
SYSIBM.SYSTABLES.tableid = 2
Table ID = 2 for SYSIBM.SYSTABLES so the file SQL00002.DAT would contain
the row data. Any index data would be stored in a file named SQL00002.INX, and
any LOB data would be in SQL00002.LB or SQL00002.LBA.
Note Type the SELECT as one line. When the ENTER key is pressed, DB2 considers the
command as complete and starts processing it.
2.8 Does the SYSIBM.SYSTABLES table have any index or LOB data?
Yes. There is an SQL00002.INX file and an SQL00002.LB file.
2.9 What are the default path containers for the TEMPSPACE1 (table space 1) table space
and the USERSPACE1 (table space 2) table space?
db2 list tablespace containers for 1
db2 list tablespace containers for 2
These are both SMS table spaces so the containers will be directories. The container
for TEMPSPACE1 is /home/inst###/inst###/NODE0000/SQL00001/SQLT0001.0,
and the container for USERSPACE1 is /home/inst###/inst###/NODE0000/
SQL00001/SQLT0002.0.
5-2
5-3
An operating system user with sufficient permissions (usually root) is required to create the
storage areas in the form of files, directories, or disk devices.
Operating system authentication is needed to allow a user to work within the DB2 UDB
instance. This user must belong to the DB2 UDB administration group, whatever that group
is named. The authorities that allow a user to allocate containers to the instance are
SYSADM and SYSCTRL.
There are no database privileges needed to create new table spaces or add containers to table
spaces. This activity is controlled through authorities as discussed above.
DB2 instance 1
   Database 1
      table space 0 (SYSCATSPACE): tables
      table space 1 (TEMPSPACE1): tables
      table space 2 (USERSPACE1): tables
   Database 2
      table space 0 (SYSCATSPACE): tables
      table space 1 (TEMPSPACE1): tables
      table space 2 (USERSPACE1): tables
5-4
Table spaces can be of two types: SMS or DMS. An SMS table space is created using directory
containers. The DMS table space is created using either file containers or device containers.
SMS is system managed space (managed by the operating system) and DMS is database
managed space (managed by DB2 UDB).
The selection of which type of table space to use impacts the types of data that can be stored in
each, as well as performance and ease-of-use considerations. These topics will be addressed in a
later module.
5-7
In an SMS (System Managed Space) table space, the operating system's file system manager
allocates and manages the space where the table is stored. The storage model typically consists
of many files, representing table objects, stored in the file system space. The user decides on the
location of the files (DB2 controls their names) and the file system is responsible for managing
them. By controlling the amount of data written to each file, the database manager distributes the
data evenly across the table space containers. By default, the initial table spaces created at
database creation time are SMS.
Each table has at least one SMS physical file associated with it.
In an SMS table space, a file is extended one page at a time as the object grows. If you need
improved insert performance, you can consider enabling multipage file allocation. This allows
the system to allocate or extend the file by more than one page at a time. For performance
reasons, if you will be storing multidimensional (MDC) tables in your SMS table space, you
should enable multipage file allocation. Run db2empfa to enable multipage file allocation. In a
partitioned database environment, this utility must be run on each database partition. Once
multipage file allocation is enabled, it cannot be disabled.
SMS table spaces are defined using the MANAGED BY SYSTEM option on the CREATE
DATABASE command, or on the CREATE TABLESPACE statement. You must consider two
key factors when you design your SMS table spaces:
Note Care must be taken when defining the containers. If there are existing files or
directories on the containers, an error (SQL0298N) is returned.
Note The SMS table space is full as soon as any one of its containers is full. Thus, it is
important to have the same amount of space available to each container.
To help distribute data across the containers more evenly, the database manager determines
which container to use first by taking the table identifier (1 in the above example) modulo the
number of containers. Containers are numbered sequentially, starting at 0.
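That starting-container rule can be sketched in a few lines (plain Python, not DB2 code; the table identifiers and container counts are made-up examples):

```python
def first_container(table_id: int, num_containers: int) -> int:
    # SMS containers are numbered sequentially from 0; the database
    # manager starts writing a table's data in container
    # (table_id mod num_containers).
    return table_id % num_containers

# Table identifier 1 in a 3-container SMS table space starts in
# container 1; table identifier 3 would start in container 0.
print(first_container(1, 3))  # 1
print(first_container(3, 3))  # 0
```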
5-10
In a DMS (Database Managed Space) table space, the database manager controls the storage
space. The storage model consists of a limited number of devices or files whose space is
managed by DB2. The database administrator decides which devices and files to use, and DB2
manages the space on those devices and files. The table space is essentially an implementation
of a special purpose file system designed to best meet the needs of the database manager.
A DMS table space containing user defined tables and data can be defined as:
A regular table space to store any table data and optionally index data
A large table space to store long field or LOB data or index data.
When designing your DMS table spaces and containers, you should consider the following:
The database manager uses striping to ensure an even distribution of data across all
containers.
The maximum size of regular table spaces is 64 GB for 4 KB pages; 128 GB for 8 KB
pages; 256 GB for 16 KB pages; and 512 GB for 32 KB pages. The maximum size of
large table spaces is 2 TB.
Unlike SMS table spaces, the containers that make up a DMS table space do not need to
be the same size; however, this is not normally recommended, because it results in
uneven striping across the containers, and sub-optimal performance. If any container is
full, DMS table spaces use available free space from other containers.
5-12
When a table space is created, its table space map is created and all of the initial containers are
lined up such that they all start in stripe 0. This means that data will be striped evenly across all
of the table space containers until the individual containers fill up.
The ALTER TABLESPACE statement lets you add a container to an existing table space or
extend a container to increase its storage capacity.
Adding a container that is smaller than the existing containers results in an uneven
distribution of data. This can cause parallel I/O operations (such as prefetching data) to
perform less efficiently than they otherwise could on containers of equal size.
When new containers are added to a table space or existing containers are extended, a rebalance
of the table space data may occur.
Rebalancing
The process of rebalancing when adding or extending containers involves moving table space
extents from one location to another, and it is done in an attempt to keep data striped within the
table space.
5-14
Note * Tables cannot be partitioned across table spaces in the DB2 UDB Enterprise Edition
server (but can be in the Enterprise - Extended Edition server). However, an index on
a table can be placed in a different table space than the table data.
In V8, the Enterprise Server Edition (ESE) product combines the Enterprise Edition
server and the Enterprise - Extended Edition server products.
5-15
Three table spaces are created by default when the database is created.
Other table spaces need to be created to hold other structures and data, such as a user temporary
space (for user temporary tables).
Although the three default table spaces are created as SMS type, it is better to specify
the table-space type during database creation, allowing you more latitude in data placement
and better performance.
5-16
As stated earlier, a container is a physical storage location, which could be a directory, device,
or file.
Directory Containers
An SMS table space uses only directory containers. Each SMS table space uses one or more
directory containers, but they must be specified at table-space create time. You cannot alter the
SMS table space to include other directories after it has been created.
You may add a directory container to an SMS table space only during a redirected restore.
Device Containers
A DMS table space can use device containers, but only on AIX, Windows NT, and Solaris
operating systems. These correspond to what Oracle calls raw devices. Using a logical
volume manager allows a physical disk to be partitioned into multiple devices for the
DB2 UDB instance. You can alter a DMS table space to add device containers after creation.
5-18
A page is the smallest unit of storage space and I/O for a table space in the instance. Rows of
DB2 UDB data are organized in blocks of data called pages. The page size can be 4 KB, 8 KB,
16 KB, or 32 KB.
An extent is a contiguous unit of space within a container of a table space. In this regard,
the page (Oracle data block) and extent concepts are similar in DB2 UDB and Oracle.
Page size can be specified in DB2 UDB, but block size in Oracle (determined by
DB_BLOCK_SIZE) was fixed for the database prior to Oracle 9i. With Oracle 9i up to four
additional nonstandard block sizes are allowed per database, and supported block sizes are 2KB,
4KB, 8KB, 16KB, and 32KB.
In DB2, when creating a table space, the default extent size is used (DB configuration parameter
DFT_EXTENT_SZ), or it can be overridden in the CREATE TABLESPACE statement.
5-19
The basic parameters needed to explicitly create a table space are the table-space name, how it is
managed, and the container(s). Optionally you can specify the use of the table space, such as
regular or user-temporary. You can also optionally specify page size, extent size, prefetch size,
and buffer pool name. In DB2 UDB sizes can be entered in four different units:
integer pages
integer K kilobytes
integer M megabytes
integer G gigabytes
Example of creating an SMS table space:
CREATE TABLESPACE retail_sales
MANAGED BY SYSTEM USING
('/dbdata/database/container1','/dbdata/database/container2')
PREFETCHSIZE 32
5-21
Basic disk storage requirements must be computed for the table spaces, and it must be
determined how they are allocated and of which type (SMS or DMS).
You must consider storage size for the system catalog tables, data tables, indexes, long and LOB
data types, and log space.
* Page size less 91 bytes overhead (including 15 bytes for first slot)
The number of pages for a table can be estimated by:
4K page size:
(number_of_rows / TRUNCATE(4020 / (average_row_size + 10))) * 1.1
8K page size:
(number_of_rows / TRUNCATE(8116 / (average_row_size + 10))) * 1.1
16K page size:
(number_of_rows / TRUNCATE(16308 / (average_row_size + 10))) * 1.1
32K page size:
(number_of_rows / TRUNCATE(32692 / (average_row_size + 10))) * 1.1
In DB2, a row must fit entirely on a single page; rows are not broken across pages using
row chaining (as they can be in Oracle).
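The formulas above can be checked with a short calculation (a Python sketch; the usable-bytes constants are taken from the formulas above, and the example row counts are made up):

```python
import math

def data_pages(number_of_rows, average_row_size, usable_bytes=4020):
    # Rows per page = TRUNCATE(usable_bytes / (average_row_size + 10));
    # the factor of 1.1 adds roughly 10% overhead.  usable_bytes is
    # 4020 for 4K pages (8116 / 16308 / 32692 for 8K / 16K / 32K).
    rows_per_page = math.trunc(usable_bytes / (average_row_size + 10))
    return (number_of_rows / rows_per_page) * 1.1

# 100,000 rows averaging 77 bytes, stored on 4K pages:
print(round(data_pages(100_000, 77)))  # 2391
```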
Other page sizes available are 8K, 16K, or 32K. A single table or index object can be as large as
512 gigabytes, if using a 32K page size.
Page size   Max row length (bytes)*   Max columns   Max size for regular DMS
4K          4,005                     500           64 gigabytes
8K          8,101                     1012          128 gigabytes
16K         16,293                    1012          256 gigabytes
32K         32,677                    1012          512 gigabytes
* Page size less 83 bytes overhead (including 15 bytes for first slot); the maximum size
of a row on a 4K page is still 4,005 bytes, even though it calculates to 4,013 bytes.
5-24
Indexes
For each unique index, the space needed can be estimated as:
(average_index_key_size + 8) * number_of_rows * 2
average_index_key_size is the byte count of each column in the index key. For VARCHAR
and VARGRAPHIC columns, use an average of the current data size, plus one byte.
The factor of 2 accounts for overhead, such as non-leaf pages and free space.
Add one extra byte for the null indicator for every column that allows NULLs.
The maximum amount of temporary space required during index creation can be estimated as:
(average_index_key_size + 8) * number_of_rows * 3.2
The factor of 3.2 accounts for index overhead plus the space required for sorting during
index creation.
For nonunique indexes, four bytes are required to store duplicate key entries. The estimates
shown above assume no duplicates.
For SMS, the minimum required space is 12 KB. For DMS, the minimum is the size of an
extent.
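Those index estimates reduce to quick arithmetic (a Python sketch of the formulas above; the example key size and row count are made up):

```python
def index_space(avg_index_key_size, number_of_rows):
    # Estimated space for a unique index; the factor of 2 covers
    # overhead such as non-leaf pages and free space.
    return (avg_index_key_size + 8) * number_of_rows * 2

def index_create_temp_space(avg_index_key_size, number_of_rows):
    # Peak temporary space during index creation; the factor of 3.2
    # covers index overhead plus the space needed for sorting.
    return (avg_index_key_size + 8) * number_of_rows * 3.2

# A 10-byte key over 50,000 rows:
print(index_space(10, 50_000))             # 1800000
print(index_create_temp_space(10, 50_000)) # 2880000.0
```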
LOB
Large object (LOB) data is stored in two separate objects (structured differently from other data
types):
LOB data objects
Data is stored in 64-megabyte areas that are broken up into segments whose sizes
are powers of two times 1024 bytes (1024 bytes, 2048 bytes, 4096 bytes, and so
on), up to 64 megabytes.
You can specify the COMPACT option of the CREATE TABLE and the ALTER TABLE
statements to reduce the amount of disk space used by LOB data. This allows the
LOB data to be split into smaller segments. This uses the minimum amount of
space to the nearest 1K boundary.
LOB allocation objects
Allocation and free space information is stored in 4K allocation pages separate from the actual
data. The overhead for these pages is calculated as one 4K page for every 64 gigabytes, plus one
4K page for every 8 megabytes.
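A sketch of that overhead calculation (Python; rounding each term up to a whole page is my assumption, not stated above):

```python
import math

def lob_allocation_pages(lob_bytes):
    # One 4K allocation page per 64 GB of LOB data, plus one 4K
    # allocation page per 8 MB of LOB data.
    pages_per_64gb = math.ceil(lob_bytes / (64 * 2**30))
    pages_per_8mb = math.ceil(lob_bytes / (8 * 2**20))
    return pages_per_64gb + pages_per_8mb

# 1 GB of LOB data: 1 + 128 = 129 allocation pages (about 516 KB).
print(lob_allocation_pages(2**30))  # 129
```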
The amount of space (in bytes) required for primary and secondary log files is:
((LOGPRIMARY + LOGSECOND) * (LOGFILSIZ + 2) * 4096) + 8192
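For example (a Python sketch; LOGFILSIZ is counted in 4K pages, and the parameter values below are just sample numbers):

```python
def log_space_bytes(logprimary, logsecond, logfilsiz):
    # Disk space for primary + secondary log files; the +2 pages per
    # file and the final 8192 bytes are the overhead terms from the
    # formula above.
    return ((logprimary + logsecond) * (logfilsiz + 2) * 4096) + 8192

# 3 primary + 2 secondary logs of 1000 pages each:
print(log_space_bytes(3, 2, 1000))  # 20529152
```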
5-26
You need to use several tools to understand how much disk space you are using for the database
objects. Within DB2 UDB, you can use the DB2 command list tablespaces show detail to view
the table space type, the number of used pages, the total pages allocated, the number of
free pages, the extent size, the page size, and the number of containers. However, if you
are using an SMS table space, all of the allocated pages are shown as used, and the free-page
value is not applicable. Because an SMS table space is system-managed, the information
reported is not as detailed as it is for a DMS table space.
To find more information, you can use the DB2 command list tablespace containers for
<tablespace_number> show detail. This provides the path to the storage container. With this
information, you can view the container using system tools, such as ls -al or du -k -s
<directory_name>.
You need to monitor the log-storage directory to ensure you have enough file system space to
hold the logs.
5-27
1.2 Is a DB2 UDB table space defined and controlled as part of the database, or as part of
the instance?
1.3 How does this differ from how a tablespace is created and controlled in Oracle?
1.4 If an SMS table space is created and it uses a directory as a container for physical data
storage, what controls the structure of the container and I/O to that container?
1.5 If a DMS table space is created and it uses a file as a container for physical data storage,
what controls the structure of the container and I/O to that container?
1.6 If a DMS table space is created and it uses a device as a container for physical data
storage, what controls the structure of the container and I/O to that container?
1.8 If you are using an SMS type of table space, how would you increase the physical storage
space, if needed?
1.9 What would need to be done to manually increase the storage size of an SMS type of
container?
1.10 Is there a performance consideration when comparing SMS and DMS types of table
spaces?
1.12 What is the difference between DB2 UDB and Oracle when an extent is created? How
is it sized?
The estimates made above do not allow for nullable overhead (1 byte). Indexes include
overhead. For DB2 UDB, index size (bytes) = (key + 8) * #_of_rows * 2.
2.2 In the migration scenario below, what types of table spaces would you use for the
objects, and why?
Object Object usage Table space type
Table 1 medium sized volatile table
Table 1 indexes volatile indexes
Table 2 small static table
Table 2 indexes static index
Table 3 large volatile table with LOB
Table 3 indexes volatile indexes
In this exercise you will use the Estimate Size wizard for a table, and compare its results to your
calculated values.
2.3 Use a DB2 Command Window to find the number of rows and average row size of the
employee table in the sample database.
Number of rows in the employee table 32_______
Average row size of employee table 77_____
2.4 Now calculate the size (in pages) required for 1000 rows in the employee table.
Use the following formula to calculate the storage size required for the employee table,
assuming a 4K page size.
Number of data pages =
(number_of_rows / TRUNCATE (4028 /(avge_row_size + 10))) * 1.1
Insert your calculated value in the table shown below.
2.5 Open the Control Center, and drill down in the left pane until you select the sample
database. Click on the Tables icon, then right click on the employee table. Select the
Estimate Size wizard.
2.6 What information do you see for Number of rows and Average row length? Why are
you shown that number of rows?
2.7 Currently, the Display size units is MB; change it to Pages. Click on the right-left arrow
combination on the right side of the display, near the column titles. Select the following
columns to display:
Name
Tablespace
Current size
Estimated size
Maximum size
2.9 Fill in the table below with your results, and be prepared to explain your conclusions.
Your Calculation Estimate size wizard
Avg. row size
Number of rows
Pages used
Number of rows
Pages used
Sales database (international sales of electronics parts, associated with the inventory
database)
customer table (columns to track ID, name, address, phone, and credit info)
orders (columns to track ID, items ordered, item quantity, item backorder, PO
number, ship instructions, and how paid)
items table (view that links to the parts table in the inventory database)
Use the default table space for the system catalog
Four other table spaces must be DMS type.
Containers for all DMS table spaces are devices.
[Diagram: Inventory DB and Sales DB storage layouts. Table space 6 (SMS) is USERTEMP1,
a user temporary area, with container /opt/inventory.]
Warning!
Do not use extended storage.
5.1 Open a DB2 Command Window, and change to your HOME directory. Create two
directories here, named SMS-space and DMS-space.
5.2 Change directory to your DMS-space directory, and create a file named tbspc1. This file
will be used as a container for a table space.
5.3 Using the DB2 Command Window, create a table space named SMS-TBSPC, using the
following criteria:
regular tablespace
pagesize 8K
SMS, using your SMS-space directory as a container
bufferpool 8K-BP
5.4 Now create a table space named DMS-TBSPC, using the following criteria:
regular tablespace
pagesize 16K
DMS, using your DMS-space/tbspc1 as a container
size of 20 MB (1280 16 K pages)
bufferpool 16K-BP
5.6 Create a new table in each table space with these specifics:
SMS-space DMS-space
Table name sms1 dms1
Column name col1 col1
Column data type varchar (800) varchar (800)
5.7 Get a new listing of the table space information and fill in the following table with the
new dimensions:
SMS-space DMS-space
Total pages
Used pages
Number of containers
Number of extents used
Size of container (ls -l) 8192 bytes (the DAT file) 20,971,520 bytes
5.9 Insert data into both tables, get a new table space list, and enter the storage size in the
following table:
SMS-space DMS-space
Total pages
Used pages
Number of containers
Number of extents used
Size of container (ls -l) 81920 bytes (the DAT file) 20,971,520 bytes
1.2 Is a DB2 UDB table space defined and controlled as part of the database, or as part of
the instance?
In DB2 UDB the table space is created for a database, and its usage is controlled by
that database.
1.3 How does this differ from how a tablespace is created and controlled in Oracle?
A tablespace in Oracle is created using the CREATE TABLESPACE statement in
SQL*Plus, or by using the GUI interface of the Oracle Enterprise Manager (OEM)
of Oracle 9i. In Oracle a tablespace is a logical storage container. The hierarchy is:
database is made up of tablespaces; tablespaces are made up of one or more data
files; tablespaces contain segments; segments are made up of one or more extents;
extents are contiguous sets of blocks.
1.4 If an SMS table space is created and it uses a directory as a container for physical data
storage, what controls the structure of the container and I/O to that container?
The operating system controls the structure and use of a directory-type container
and the scheduling and buffering of data I/O as it is passed to and from the
container.
1.5 If a DMS table space is created and it uses a file as a container for physical data storage,
what controls the structure of the container and I/O to that container?
The database manager controls the structure and use of a file-type container, while
the operating system performs I/O of data to and from the container.
1.8 If you are using an SMS type of table space, how would you increase the physical storage
space, if needed?
Generally, the operating system handles the allocated storage space for an SMS type
of container. The controlling criterion is whether there is enough space to grow in
the file system where the container resides. Adding more file space to that file
system would allow the container to grow.
1.9 What would need to be done to manually increase the storage size of an SMS type of
container?
You need to back up the database in question and use a redirected restore to a new
file system with more space. You cannot simply add more containers to an SMS table
space.
1.10 Is there a performance consideration when comparing SMS and DMS types of table
spaces?
Generally, an SMS table space using a directory container performs slower than a
DMS table space because of the operating system overhead used to manage the
containers.
1.11 Which type of table space allows you to separate data and indexes?
You can store the same kind of data in both types of table spaces. However, use
DMS table spaces to separate the index data, table data, and LOB data.
The estimates made above do not allow for nullable overhead (1 byte). Indexes include
overhead. For DB2 UDB, index size (bytes) = (key + 8) * #_of_rows * 2.
In this exercise you will use the Estimate Size wizard for a table, and compare its results to your
calculated values.
2.3 Use a DB2 Command Window to find the number of rows and average row size of the
employee table in the sample database.
CONNECT TO sample
SELECT COUNT(*) FROM employee
DESCRIBE TABLE employee
TERMINATE
Number of rows in the employee table 32_______
Average row size of employee table 77_____
2.4 Now calculate the size (in pages) required for 1000 rows in the employee table.
Use the following formula to calculate the storage size required for the employee table,
assuming a 4K page size.
Number of data pages =
(number_of_rows / TRUNCATE (4028 /(avge_row_size + 10))) * 1.1
Number of data pages = (number_of_rows / TRUNCATE (4028 /( 77 + 10))) *1.1
Number of data pages = (number_of_rows / 46) *1.1
Number of data pages = (1000 / 46) *1.1
Number of data pages = 23.9
Insert your calculated value in the table shown below.
2.5 Open the Control Center, and drill down in the left pane until you select the sample
database. Click on the Tables icon, then right click on the employee table. Select the
Estimate Size wizard.
2.7 Currently, the Display size units is MB; change it to Pages. Click on the right-left arrow
combination on the right side of the display, near the column titles. Select the following
columns to display:
Name
Tablespace
Current size
Estimated size
Maximum size
Note that 1 page is shown, even though 0 rows are displayed.
2.8 For the wizard to help you, you must update its view of the data, using the Run statistics
button. Follow the steps below:
a. Click on the Run statistics button.
b. On the Columns tab, click the Collect basic statistics on all columns radio
button.
c. Click the Index tab. Click on the Collect statistics for all indexes radio button.
d. Click on the Schedule tab, and click the Run now without saving task history
radio button.
e. Finally, click on the OK button at the bottom of the panel to get the statistics.
f. You will see a message indicating RUNSTATS worked correctly. Click Close.
g. Now click Refresh several times on the wizard panel - eventually the correct
current row count will be displayed.
h. Enter 1000 in the New total number of rows field, and click Refresh. Enter
your new estimated size in the table below.
Sales database (international sales of electronics parts, associated with the inventory
database)
customer table (columns to track ID, name, address, phone, and credit info)
orders (columns to track ID, items ordered, item quantity, item backorder, PO
number, ship instructions, and how paid)
items table (view that links to the parts table in the inventory database)
Use the default table space for the system catalog
Four other table spaces must be DMS type.
Containers for all DMS table spaces are devices.
[Diagram: Inventory DB and Sales DB storage layouts. Table space 6 (SMS) is USERTEMP1,
a user temporary area, with container /opt/inventory.]
Warning!
Do not use extended storage.
CONNECT TO sample
CREATE BUFFERPOOL "8K-BP" IMMEDIATE SIZE 80 PAGESIZE 8K
CREATE BUFFERPOOL "16K-BP" IMMEDIATE SIZE 40 PAGESIZE 16K
These two bufferpools will be used in the next exercise.
5.1 Open a DB2 Command Window, and change to your HOME directory. Create two
directories here, named SMS-space and DMS-space.
cd
mkdir SMS-space
mkdir DMS-space
5.2 Change directory to your DMS-space directory, and create a file named tbspc1. This file
will be used as a container for a table space.
cd DMS-space
touch tbspc1
5.3 Using the DB2 Command Window, create a table space named SMS-TBSPC, using the
following criteria:
regular tablespace
pagesize 8K
SMS, using your SMS-space directory as a container
bufferpool 8K-BP
CONNECT TO sample
CREATE REGULAR TABLESPACE "SMS-TBSPC" PAGESIZE 8 K
MANAGED BY SYSTEM USING ('$HOME/SMS-space')
BUFFERPOOL "8K-BP"
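The matching statement for step 5.4 is not shown above; a sketch using DB2's MANAGED BY
DATABASE clause, with the container size given in pages (1280 16K pages = 20 MB):

CREATE REGULAR TABLESPACE "DMS-TBSPC" PAGESIZE 16 K
MANAGED BY DATABASE USING (FILE '$HOME/DMS-space/tbspc1' 1280)
BUFFERPOOL "16K-BP"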
5.5 In the DB2 Command Window, get the detailed information about the table spaces you
just created. Fill in the table with the data you received. Also find the size of the
containers using the ls -l command.
LIST TABLESPACES SHOW DETAIL
SMS-space DMS-space
Total pages 1 1280
Extent size 32 pages 32 pages
Used pages 1 96
Page size 8192 bytes 16384 bytes
Number of containers 1 1
Number of extents 1 3
Size of container (ls -l) 0 bytes (the DAT file) 20,971,520 bytes
5.6 Create a new table in each table space with these specifics:
SMS-space DMS-space
Table name sms1 dms1
Column name col1 col1
Column data type varchar (800) varchar (800)
CREATE TABLE sms1 (col1 VARCHAR (800)) IN "SMS-TBSPC"
CREATE TABLE dms1 (col1 VARCHAR (800)) IN "DMS-TBSPC"
5.8 Create a file named load.data in your home directory, and make it executable.
cd
touch load.data
chmod 770 load.data
Edit the file to include the following:
db2 connect to sample
x=1
until [ $x = 501 ]
do
db2 "insert into sms1 values ('ABCDEFGHIJKLMNOPQRSTUVWXYZ')"
x=`expr $x + 1`
done
Note: the mark near expr and the end mark on the same line are the back-quote marks.
Execute the file (this will take some time to run).
./load.data
Edit the file, and change the table name to dms1, then execute it again.
This will put data into the two new tables.
6-2
DB2 UDB data types can be placed into these three categories:
Numeric
String (including LOB)
Date-time
6-3
Numeric data types are used to store various numerical values, such as decimal data, scientific
notation data, and integer data. The size of the value as well as the precision required help
determine which numeric data type to use.
String data types are used to store alphanumeric, character-type data. The particular data type to
choose for a column depends on the size of the character-type data and if that size is expected to
remain the same for all rows in the table.
There is a subcategory of string data types for storing large object (LOB) data, such as whole
documents, pictures, or sound information.
Date-time data types provide a means of storing date, time, and complete timestamp
information. These data types allow for various types of formatting.
6-4
Integer
Smallint is used for values ranging from -32,768 to 32,767 and provides precision of up to 5
digits (left of decimal). Smallint requires 2 bytes for storage.
Integer is used for values ranging from -2,147,483,648 to 2,147,483,647. A precision of 10
digits (left of decimal) is possible. Integer requires 4 bytes for storage.
Bigint is used for 64-bit integers, with values ranging from -9,223,372,036,854,775,808 to
9,223,372,036,854,775,807. Bigint requires 8 bytes for storage.
Floating Point
Real data type is used when floating point data (scientific notation) is required. It is
single-precision with a length between 1 and 24. This data type requires 4 bytes for storage.
Double / Double Precision data type is used when floating point data (scientific notation) is
required. It is double precision and has a length between 25 and 53. This data type requires 8
bytes for storage.
Note Float can be used for both real and double data types, depending on the size of the
value. See Appendix B for further details.
Decimal
Decimal/Numeric data type is used when you need a precision of 1 to 31 digits (default 5).
This data type requires (p/2) + 1 bytes for storage (where p is the number of digits of
precision).
With DECIMAL(p,s) and NUMERIC(p,s), the two components are precision and scale. If the
scale is zero, it can be omitted (e.g., DECIMAL(p)), but one of the integer types may be
preferred because of their speed and lower storage requirements.
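As a worked example of that storage rule (a Python sketch; the integer division mirrors dropping the fraction in p/2):

```python
def decimal_storage_bytes(precision):
    # DECIMAL(p,s) is stored packed: (p/2) + 1 bytes, with p/2
    # truncated to a whole number of bytes.
    return precision // 2 + 1

print(decimal_storage_bytes(5))   # 3 bytes for the default precision of 5
print(decimal_storage_bytes(15))  # 8
```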
6-6
Different Mappings
The Oracle data type NUMBER can be mapped to many DB2 types.
The type of mapping depends on whether the NUMBER is used to store:
an integer (NUMBER(p), or NUMBER(p,0))
a number with a fixed decimal point (NUMBER(p,s), s > 0)
a floating-point number (NUMBER).
An Oracle INTEGER is a synonym for NUMBER(38). The DB2 UDB INTEGER is a true operating
system integer.
Note that in DB2, unless you specify NOT NULL, another byte is required for the null indicator.
See Handling Nulls on page 6-17
6-8
Char data type is used when a fixed-length character string is required. It can store a length of
from 1 to 254 characters and requires the specified number of bytes (one byte for each character)
for storage.
Varchar data type is used when a variable-length character string is required. It can store a string
with a maximum length of 32,672 bytes. There is no optional minimum-length capability. You
must set the maximum size when creating a column with this data type. This is the only data
type that allows altering (using the ALTER TABLE command) after creation. You can alter the
maximum-length value.
Long varchar data type is used when a variable-length character string is required but the
varchar data type is not long enough. The maximum length of a long varchar column is 32,700
bytes.
6-9
Oracle CHAR has maximum length of 255 bytes in Oracle 7, and 2000 bytes in Oracle 8 & 9.
Oracle provides VARCHAR(n) to store variable-length strings up to n characters, as well as
VARCHAR2(n) for the same purpose (maximum is 2000 characters in Oracle 7, and 4000
characters in Oracle 8 & 9).
The storage for VARCHAR2 is truly varying-length, whereas VARCHAR uses a fixed array of
characters with an end marker. VARCHAR in Oracle is deprecated (subject to discontinuance in
the future), and generally Oracle DBAs use VARCHAR2.
Oracle applications often use VARCHAR2 for very small character strings. Generally, it is better
to port these fields to the fixed-length DB2 data type CHAR(n), as it is more efficient and takes
less storage than VARCHAR. In DB2 UDB on Unix, VARCHAR(n) uses n+4 bytes of storage, while
CHAR(n) uses only n bytes. (Note: in DB2/390, VARCHAR(n) uses n+2 bytes.)
CHAR should always be used for columns of 10 bytes or fewer, and probably should be used for
longer columns that are relatively full of non-blank data.
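The Unix storage rule above (VARCHAR(n) costs n+4 bytes, CHAR(n) costs n bytes) is easy to sketch. The function names below are illustrative only:

```python
def char_bytes(n: int) -> int:
    # CHAR(n) on DB2 UDB for Unix: exactly n bytes
    return n

def varchar_bytes(n: int) -> int:
    # VARCHAR(n) on DB2 UDB for Unix: n data bytes + 4 bytes of overhead
    # (DB2/390 uses n + 2 instead)
    return n + 4

# For a short column that is usually full, CHAR(10) beats VARCHAR(10):
print(char_bytes(10), varchar_bytes(10))  # 10 vs. 14
```

This is why the guideline above recommends CHAR for columns of 10 bytes or fewer.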
6-10
Clob (character large object) data type handles varying-length character data and can be SBCS
(single-byte character set) or MBCS (multibyte character set). This data type can store up to 2
gigabytes of character data.
Graphic stores double-byte character strings (characters that use two bytes of storage). This data
type uses 2 bytes for each character, is fixed length, and has a maximum length of 127
characters.
Vargraphic stores double-byte character strings (characters that use two bytes of storage). This
data type uses 2 bytes for each character, is variable length, and has a maximum size of 16,336
characters.
Long vargraphic stores double-byte character strings (characters that use two bytes of storage).
This data type uses 2 bytes for each character, is variable length, and has a maximum size of
16,350 characters.
Dbclob data type is a double-byte character large object used for columns with large amounts of
double-byte data (>32K) of varying length. This data type uses 2 bytes of storage for each
character.
Blob data type stores binary large object data strings. The maximum size of blob column is two
gigabytes.
6-11
Date data type is used to store the date in a variety of formats. This data type requires four bytes
for storage (packed) with a string length of ten bytes. The default format is MM/DD/YYYY, but
this can vary depending on the country code. The Oracle date data type maps directly to this.
Time data type is used to store the time in a variety of formats. This data type requires three
bytes for storage (packed) with a string length of 8 bytes. The default format is HH.MM.SS, but
this can vary depending on the country code.
Timestamp data type is used to store the date-time combination. This data type requires 10 bytes
for storage (packed) with a string length of 26 bytes. The only format for timestamp data is
YYYY-MM-DD-HH.MM.SS.NNNNNN.
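When preparing delimited files for a later load, the fixed 26-character timestamp layout can be produced from Python's datetime module (an illustrative sketch; the function name is not part of DB2):

```python
from datetime import datetime

def db2_timestamp(ts: datetime) -> str:
    """Render a datetime in DB2's YYYY-MM-DD-HH.MM.SS.NNNNNN form (26 characters)."""
    return ts.strftime("%Y-%m-%d-%H.%M.%S.%f")

ts = db2_timestamp(datetime(2002, 2, 13, 17, 45, 28, 562345))
print(ts)        # 2002-02-13-17.45.28.562345
print(len(ts))   # 26
```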
Note DB2 also has a noncategorized data type, DATALINK, that is used for external file
linking.
Oracle has other data types that need special consideration and
treatment:
LONG
RAW(n)
NCLOB
BFILE
TIMESTAMP(f) / TIMESTAMP(f) WITH {LOCAL} TIMEZONE
INTERVAL YEAR(y) TO MONTH(m) / INTERVAL DAY(d) TO SECOND(f)
User Defined Types (UDT)
ROWID / UROWID
6-12
6-15
where key is the primary key column and tablename is the table you wish to update.
The VALUE(MAX(key),0) expression returns the first non-null argument. Thus, if the table is empty, it
returns 0, and the key is incremented by one.
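The fall-back logic of VALUE(MAX(key),0) can be mirrored outside SQL. In this sketch, max_key stands for the result of SELECT MAX(key); the function name is illustrative:

```python
def next_key(max_key):
    """Mimic VALUE(MAX(key), 0) + 1: use 0 when MAX(key) is NULL (empty table)."""
    return (max_key if max_key is not None else 0) + 1

print(next_key(None))  # empty table -> 1
print(next_key(41))    # -> 42
```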
The CACHE parameter specifies the maximum number of sequence values that the database
manager preallocates and keeps in memory. If the server is brought down, cached sequence
numbers are lost.
Sequences are used in the following manner (incomplete code):
INSERT INTO orders (order_num, customer_num, ...)
VALUES (orderseq.NEXTVAL, 101, ...);
Note that the same order number is needed for the individual items. NEXTVAL causes the
sequence to be incremented, and PREVVAL returns the most recently generated value for the
specified sequence within the current session (Oracle uses CURRVAL to do this).
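The NEXTVAL/PREVVAL behavior described above can be modeled with a small class. This is an illustrative sketch only, ignoring caching and concurrency; it is not the engine's implementation:

```python
class Sequence:
    """Toy model of a DB2 sequence: NEXTVAL increments the sequence,
    PREVVAL repeats the value most recently generated in this session."""
    def __init__(self, start=1, increment=1):
        self._next = start
        self._increment = increment
        self._prev = None  # PREVVAL is undefined until NEXTVAL is used

    def nextval(self):
        self._prev = self._next
        self._next += self._increment
        return self._prev

    def prevval(self):
        if self._prev is None:
            raise RuntimeError("PREVVAL before first NEXTVAL in this session")
        return self._prev

orderseq = Sequence(start=1001)
order_num = orderseq.nextval()        # use for the order row
print(order_num, orderseq.prevval())  # same number reused for the item rows
```

This mirrors the order/items example: NEXTVAL supplies the new order number, and PREVVAL lets each item row reuse it.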
Once a table has been created, you cannot alter the table description to include an identity
column at a later point in time. You can alter an existing table to later include a generated
column using the db2gncol utility, but not an identity column.
6-16 Data Type Mapping
Handling Nulls
6-17
NULL is not a data type, but a data value. Specifically, a null is an unknown value. As such, nulls
are handled differently than non-null values. When columns containing null values are used in
calculations, the result is unknown. Special syntax is used in SQL statements to work with null
value data.
If null values are allowed in a column, that column requires an extra byte of storage to flag it as
nullable. This extra byte must be considered during space allocation for the table. This is the
only difference of NULL between Oracle and DB2 UDB.
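The extra null-indicator byte matters when estimating table space. A rough per-row calculation, with illustrative names and assuming fixed-length columns:

```python
def row_storage(columns):
    """Estimate row bytes: each (data_bytes, nullable) column costs one
    extra byte for the null indicator when it is nullable."""
    return sum(size + (1 if nullable else 0) for size, nullable in columns)

# CHAR(10) NOT NULL, nullable INTEGER, nullable DATE
print(row_storage([(10, False), (4, True), (4, True)]))  # 20
```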
You can specify a default value for a nullable column when creating the table. An insert on a
table with a NOT NULL column causes an error if data is not supplied for that column, or if a
default value is not specified.
6-18
Oracle data types and the corresponding DB2 data types:

DATE (note that the Oracle default DATE format is DD-MON-YY):
Use DB2 DATE if only the date (MM/DD/YYYY) is required.
Use DB2 TIME if only the time (HH:MM:SS) is required.
Use DB2 TIMESTAMP if both date and time (MM/DD/YYYY-HH:MM:SS.000000) are required.
Use the Oracle TO_CHAR() function to format a DATE for subsequent DB2 load.

VARCHAR2(n), n <= 4000:
Use DB2 VARCHAR(n), n <= 32672.

LONG, n <= 2 GB:
If n <= 32672 bytes, use VARCHAR(n).
If 32672 < n <= 32700 bytes, use LONG VARCHAR or CLOB.
If 32672 < n <= 2 GB, use CLOB(n).

RAW(n), n <= 255:
If n <= 254, use CHAR(n) FOR BIT DATA.
If n <= 32672, use VARCHAR(n) FOR BIT DATA.
If n <= 2 GB, use BLOB(n).

LONG RAW, n <= 2 GB:
If n <= 32672 bytes, use VARCHAR(n) FOR BIT DATA.
If 32672 < n <= 32700 bytes, use LONG VARCHAR FOR BIT DATA.
If n <= 2 GB, use BLOB(n).

BLOB, n <= 4 GB: if n <= 2 GB, use BLOB(n).
CLOB, n <= 4 GB: if n <= 2 GB, use CLOB(n).
NCLOB, n <= 4 GB: if n <= 2 GB, use DBCLOB(n/2).

NUMBER:
If the Oracle declaration is NUMBER(p) or NUMBER(p,0), use SMALLINT if 1 <= p <= 4; INTEGER if 5 <= p <= 9; BIGINT if 10 <= p <= 18.
If the Oracle declaration is NUMBER(p,s) with s > 0, use DECIMAL(p,s).
If the Oracle declaration is plain NUMBER, use DOUBLE / FLOAT(n) / REAL.
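The NUMBER rows of the mapping table can be expressed as a small decision function. This is a sketch (the function name is illustrative); it returns a suggested DB2 type name and picks DOUBLE for the floating-point case:

```python
def map_oracle_number(p=None, s=None):
    """Suggest a DB2 type for Oracle NUMBER(p,s), following the mapping table."""
    if p is None:                      # plain NUMBER: floating point
        return "DOUBLE"
    if s in (None, 0):                 # NUMBER(p) / NUMBER(p,0): integer
        if 1 <= p <= 4:
            return "SMALLINT"
        if 5 <= p <= 9:
            return "INTEGER"
        if 10 <= p <= 18:
            return "BIGINT"
        return f"DECIMAL({p})"         # larger integer precision
    return f"DECIMAL({p},{s})"         # fixed decimal point, s > 0

print(map_oracle_number(7))        # INTEGER
print(map_oracle_number(12, 2))    # DECIMAL(12,2)
print(map_oracle_number())         # DOUBLE
```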
Oracle data types DB2 UDB data types Type of match (exact, conditional, none)
Varchar / Varchar2 Varchar
Date Timestamp
Date Date
Number Decimal
Number Real
Char Char
Blob Blob
Clob Clob
Raw Blob
1.2 Can you automatically assume that Oracle CHAR = DB2 UDB CHAR? What about Oracle
VARCHAR / VARCHAR2 and DB2 UDB VARCHAR?
Varchar2
Clob / Blob
2.2 What kind of data type conversion tools might you use to accomplish the mapping?
Oracle data types, DB2 UDB data types, and type of match (exact, conditional, none):

Varchar / Varchar2 to Varchar: exact (subset), but other DB2 data types may be more suitable
Date to Timestamp: exact, but may have too much precision
Date to Date: conditional
Number to Decimal: exact, but other DB2 data types may be more suitable
Number to Real: conditional; NUMBER can hold larger values, and with more exact precision
Char to Char: almost exact, but differences in maximum length (254 vs. 255)
Blob to Blob: exact, but the DB2 data type is limited to 2 GB
Clob to Clob: exact, but the DB2 data type is limited to 2 GB
Raw to Blob: exact, but the DB2 data type is limited to 2 GB, and other data types may be
more suitable
1.2 Can you automatically assume that Oracle CHAR = DB2 UDB CHAR? What about Oracle
VARCHAR / VARCHAR2 and DB2 UDB VARCHAR?
No. DB2 UDB CHAR allows from 1 to 254 bytes, while Oracle allows from 1 to 255
bytes.
DB2 UDB VARCHAR allows up to 32,672 bytes, while Oracle allows from 0 to 255
bytes for VARCHAR and 2000 bytes for VARCHAR2.
7-2
The example tables and views that will be used in this course represent a small sporting goods
wholesaler. This sales database (storesdb) has the following tables:
Customer (PK: customer_num*)
Orders (PK: order_num*)
Items (composite PK: order_num, item_num)
Cust_Calls (PK: customer_num, call_dtime)
Manufact (PK: manu_code)
State (PK: code)
Call_type (PK: call_code)
Stock (composite PK: stock_num, manu_code)
Catalog (PK: catalog_num*)
This database is documented in Appendix F, The StoresDB Database. The origin of the data
for this database is IBM Informix. Columns marked with * are intended to be auto-numbered
(serial, generated, sequence). Appendix C, Example import and load Utilities Results, shows
the loading of this data into a student database for this course.
7-4
7-6
During table creation, it is good practice to defer the primary key and foreign key declarations
until after the table is created. This allows you to explicitly create indexes on the key columns
with your own naming convention; the keys will then use those indexes. Also, if you drop the
primary key or foreign key, the index remains for later use.
7-7
After creating a table, various object elements can be changed, such as adding a column,
dropping a constraint, or adding a key.
Indexes can be created in their own table space if DMS table spaces are used; this separation
of indexes from the data can be changed later, if desired.
Note You cannot alter column data types (except the size of a VARCHAR column).
There are differences between Oracle and DB2 UDB in the way you alter a table. You are very
limited in the elements that can be altered after creation. For example, you cannot drop a
column.
7-8
System temporary tables (not user temporary tables) are created implicitly in the system
tempspace1 table space.
Explicitly declared user temporary tables, such as summary tables for report purposes, must be
created in a previously defined user temporary table space (such as usertemp1).
Note User temporary tables must be placed in a table space defined as user temporary and
cannot be placed in TEMPSPACE1 which is the system temporary table space.
Views in DB2 UDB are similar to views in Oracle and the DB2
summary table has similarities to the Oracle materialized view
Example:
CREATE VIEW ca_cust_names AS
SELECT lname, fname
FROM customer
WHERE state = 'CA';
7-10
A view is a logical selection of data from the underlying table(s). A view is used to restrict
access to data or to simplify data retrieval in SQL statements (hide the detail of the SQL). A
view in DB2 UDB is created the same way as a view in Oracle. As with tables, the view is
created with the current schema as part of the object name.
If the REFRESH IMMEDIATE clause is used instead of REFRESH DEFERRED, then changes made to the
underlying table(s) by INSERT, UPDATE, or DELETE statements are cascaded to the
summary table. The content of the summary table, in this case, is the same as if the specified
full-select statement were processed. Summary tables do not themselves allow INSERT, UPDATE,
or DELETE statements.
Thus, summary tables with REFRESH IMMEDIATE behave like Oracle materialized views.
7-12
To determine disk space usage for database data tables or indexes, use the DB2 LIST TABLESPACES SHOW DETAIL command.
You can see storage particulars for all table spaces for that database, but only the userspace1
and other user table spaces contain the data and indexes.
Look for the information for these items:
Item
Tablespace ID
Name
Type (SMS or DMS)
Total pages
Usable Pages
Used Pages
Page Size
Extent Size (pages)
Prefetch Size (pages)
# of Containers
7-13
Tip You will find it easier to create a script file with your SQL in it and run that script
file from the DB2 command line. For example, you can put the CREATE TABLE
statement for the parent table in a file called parent.sql, and then execute it using the
DB2 command line. Execute it using:
db2 -tvf parent.sql
This way, you can make changes more easily without having to enter it all again.
1.4 Execute a few SELECT statements on all of these structures and examine the data.
2.3 Determine how much space the table structures for your storesdb database took. Look
for the following:
Item Without Data
Table space ID 2
Name USERSPACE1
Type (SMS or DMS) System managed space
Total pages
Usable Pages
Used Pages
Page Size
Extent Size (pages)
Prefetch Size (pages)
# of Containers
1.4 Execute a few SELECT statements on all of these structures and examine the data.
SELECT * FROM parent;
SELECT * FROM child;
SELECT * FROM parent, child
WHERE parent.p1 = child.c3;
This exercise will help you learn to create tables in the storesdb database using some script files
provided for you. The differences will be addressed by viewing and comparing the two script
files.
You will add more structure to these tables in a later module, so make sure they are created
properly.
2.1 Create the storesdb tables in your DB2 UDB storesdb database using the storesdb.sql
script. In a system command window, change to your storesdb directory and execute the
storesdb.sql script file as shown below:
cd
cd storesdb
./storesdb.sql
To get help for the DB2 utility, execute:
db2 ?
12 record(s) selected.
You will also see the parent and child tables and the v1 view.
You should find that you have created the storesdb tables in your storesdb database.
You will load data into these tables in a later module.
8-2
The two data movement utilities used for data migration are import
and load:
Input data from files and insert it into the target tables
Use data files produced from another database in, for example,
delimited format
One file for each table
Comma delimited (other delimiters available)
8-3
There are a variety of data movement utilities provided in DB2 UDB, but the ones you can use
to migrate data from an Oracle database to a DB2 UDB database are either import or load.
These two utilities take input data from files and insert it into the target tables.
Other data movement utilities in DB2 UDB work with other database environments, such as
moving data to and from a DRDA (distributed relational database architecture) database system
(mainframe).
In this course, you will learn to use both the import and load utilities and determine the best one
to use for your data migration needs.
We will be using data from another database (storesdb) in delimited format (using the character
| as the field delimiter), since this is an efficient format for Unix/Linux systems and even for
NT/2000 systems. The same data has also been made available in the laboratory files in comma-
separated values (CSV) format for those who would prefer that format, if it better represents
typical data on your system. There is one file for each table.
8-4
The import utility can create and load data into a target table, provided you use the correct file
type for import and you have the proper authorities and privileges. In this course, assume that
the target table structure has already been created by the time you are ready to import data into
it.
The load utility requires that the target table structure is created before data can be loaded into it.
If you want to import or load data into a database, you must be connected to it and have the
proper privileges to insert data into the table in question. These include the SYSADM or
DBADM authorities, and the CONTROL or SELECT and INSERT privileges.
A special authority, LOAD, can be used only for the load utility. This can be used if the user
running the load utility does not have SYSADM or DBADM authorities. The user must still
have INSERT privilege in order to use the load utility.
8-5
8-6
The import utility uses the SQL INSERT statement to write data from an input file into a table or
view. If the target table or view already contains data, you can either replace or append to the
existing data.
Performing periodic COMMITs reduces the number of rows that are lost if a failure occurs during
the import operation. It also prevents the DB2 UDB logs from getting full when processing a
large input file.
You must be connected to the database that contains the target table
for the import:
Complete all transactions before executing an import
8-7
You must be connected to the database that contains the target table for the import. The import
utility issues a COMMIT or ROLLBACK statement, so you should complete all transactions before
executing the import utility.
The import utility can be started by:
The command line processor (CLP).
db2 "import from customer.unl of
del insert into inst001.customer"
The Import notebook in the Control Center.
From the Control Center, open the Tables folder for the proper database.
Select the table you want by clicking the right mouse button, then select Import
from the pop-up menu.
8-8
In this course, we will concentrate on using the command line version of the import and load
utilities. The DB2 UDB syntax that we will use for the import utility is:
db2 "import from <filename>
of <file_type> insert into <tablename>"
You can also specify the use of a message log, apply modifiers, specify column mapping,
specify commit count and restart count, and specify table-space usage.
For example:
db2 connect to storesdb
db2 "import from /home/inst001/customer.unl
of del modified by coldel|
savecount 100
messages /tmp/cust.msg
insert into customer"
This shows you the use of the pipe character as the column delimiter (default value is a comma).
You can also specify a character string delimiter (default is double quotation mark).
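Before running an import, it can help to inspect a coldel| file. Python's csv module handles the same column delimiter and character-string delimiter; this is a sketch, and the sample row is invented for illustration:

```python
import csv
import io

# A sample of what a coldel| DEL row might look like; the character string
# delimiter is the default double quotation mark.
sample = '101|"Ludwig"|"Pauli"|"All Sports Supplies"\n'

reader = csv.reader(io.StringIO(sample), delimiter="|", quotechar='"')
row = next(reader)
print(row)  # ['101', 'Ludwig', 'Pauli', 'All Sports Supplies']
```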
The load utility can move data from files, named pipes, or devices.
The following file types are supported:
Non-delimited ASCII format (ASC)
Delimited ASCII format (DEL)
Integrated exchange format (IXF), which is not produced by Oracle
8-9
The DB2 UDB load utility can move data from files, named pipes, or devices into a DB2 UDB
table. The data sources can reside on the same node as the database or on a remotely connected
client.
The target table must exist. If the target table already contains data, you can replace or append to
the existing data.
The load utility is faster than the import utility because it writes
formatted pages directly into the database.
There are three phases to the load process:
Load: data is written to the table
Build: indexes are created
Delete: rows that caused a unique key violation are removed
from the table and placed into the exception table, if specified
8-10
The load utility can quickly move large quantities of data into newly-created tables or into tables
that already contain data. The utility can handle all data types including large objects (LOBs)
and user-defined types (UDTs). The load utility is faster than the import utility because it writes
formatted pages directly into the database. The import utility performs SQL INSERTs. The load
utility does not fire triggers and does not perform referential or table constraint checking (other
than validating the uniqueness of the indexes).
Note Each deletion event is logged. If you have a large number of records that violate the
uniqueness condition, the log could fill up during the delete phase.
To load data into a table, you must have one of the following:
SYSADM authority
DBADM authority
LOAD authority on the database and INSERT, or INSERT and
DELETE privilege
8-13
To load data into a table, you must have one of the following:
SYSADM authority
DBADM authority
LOAD authority on the database and
INSERT privilege on the table when the load utility is invoked in INSERT mode,
TERMINATE mode (to terminate a previous load insert operation), or RESTART
mode (to restart a previous load insert operation)
INSERT and DELETE privilege on the table when the load utility is invoked in
REPLACE mode, TERMINATE mode (to terminate a previous load replace
operation), or RESTART mode (to restart a previous load replace operation)
INSERT privilege on the exception table, if such a table is used as part of the load
operation.
You must be connected to the database that contains the target table
for the load:
Complete all transactions before a load
8-14
You must be connected to the database that contains the target table for the load. The load utility
issues a COMMIT or ROLLBACK statement, so you should complete all transactions before
executing the load utility.
You can sort the data in the input file to reflect the desired load sequence. However, if clustering
is required, the data should be sorted on the clustering index before you attempt the load.
The load utility can be started by:
The command line processor (CLP).
db2 "load from customer.unl of del
modified by coldel| insert into customer"
The Load notebook in the Control Center.
From the Control Center, open the Tables folder for the proper database.
Select the table you want by clicking the right mouse button, then select Load
from the pop-up menu.
The load API
The DB2 load operation provides the ability to capture error information (and data) in
EXCEPTION tables.
Note Any rows rejected before the building of an index because of invalid data are not
inserted into the exception table.
Rows are appended to existing information in the exception table; this can include invalid rows
from previous load operations. If you want only the invalid rows from the current load
operation, you must remove the existing rows before invoking the utility.
Two methods of creating the exception table are shown here:
CREATE TABLE customerexc
LIKE customer;
ALTER TABLE customerexc
ADD COLUMN ts TIMESTAMP
ADD COLUMN msg CLOB(32K);
The Y's in the CONST_CHECKED column indicate that various constraints have been checked
by the system in our example.
The N in the STATUS column means the table is in a normal state.
For example:
db2 connect to storesdb
db2 "load from /home/inst001/customer.unl
of del modified by coldel|
insert into customer"
8-17
You can also: specify the use of a message log, apply modifiers, specify column mapping,
specify commit count and restart count, and specify table-space usage, such as:
db2 "load from <filename>
of <file_type> modified by coldel<value>
savecount <value>
messages <filename>
insert into <tablename>
for exception <exception_table>"
For example:
db2 connect to storesdb
db2 "load from /home/inst001/customer.unl
of del modified by coldel|
savecount 100
messages /tmp/cust.msg
insert into customer
for exception customerexc"
This shows you how to use the pipe character (|) as the field delimiter.
8-18
After a load operation, the loaded table may be in check-pending state if it has table check
constraints or referential integrity constraints defined on it.
The STATUS flag of the SYSCAT.TABLES entry for that loaded table indicates the check-
pending state of the table. For the loaded table to be usable, the STATUS must have a value of
N, indicating a normal state.
Use the SET INTEGRITY statement to remove the check-pending state. The SET INTEGRITY
statement checks a table for constraints violations, then takes the table out of check-pending
state.
By default, the SET INTEGRITY statement checks only the appended portion of the table for
constraints violations if all the load operations are performed in INSERT mode.
For example:
db2 "load from infile1.unl of del insert into table1"
db2 set integrity for table1 immediate checked
Only the appended portion of TABLE1 is checked for constraint violations, which is faster than
checking the entire table.
8-19
Use the LOAD QUERY command to check the status of a load operation during processing. You
must be connected to the same database with a separate CLP session. It can be used either by
local or remote users.
A user loading a large amount of data into the STOCK table should check the status of the load
operation.
The output of file /home/inst101/stock.tempmsg might look like the following:
SQL3500W The utility is beginning the "LOAD" phase at time
"02-13-2002 17:45:28.562345".
SQL3519W Begin Load Consistency Point. Input record count = "0".
SQL3520W Load Consistency Point was successful.
SQL3109N The utility is beginning to load data from file
"/home/inst101/stock.unl".
SQL3519W Begin Load Consistency Point. Input record count = "100"
SQL3520W Load Consistency Point was successful.
SQL3519W Begin Load Consistency Point. Input record count = "200"
SQL3520W Load Consistency Point was successful.
In Version 8, LOAD QUERY checks the status of a load operation during processing and returns
the table state. If a load is not processing, then the table state alone is returned. A
connection to the same database and a separate CLP session are required to
successfully invoke this command. It can be used by either local or remote users.
8-20
To determine disk space usage for database data tables or indexes, use the DB2 LIST TABLESPACES SHOW DETAIL command.
You can see storage particulars for all table spaces for that database, but only the USERSPACE1
and other user table spaces contain the data and indexes.
Look for the information for these items:
Item
table space ID
Name
Type (SMS or DMS)
Total pages
Usable Pages
Used Pages
Page Size
Extent Size (pages)
Prefetch Size (pages)
# of Containers
8-21
1.2 From your last exercise, which method would you use to load a small database, such as
storesdb? Explain your reasoning.
1.3 Which method would you use to load a larger database? Why?
2.3 Did you check to see that all rows were inserted? How many rows were inserted for each
table?
2.6 In order for you to practice with the load utility, you need to remove the data in the
tables. To do this, execute the delete.sql script.
2.8 What kind of information was provided during the load? You should see much more
information during a load than you did during an import.
2.12 Determine how much space the data load took by comparing the size of the
USERSPACE1 table space without data (from an exercise in the last module) to the table
space as it is now with data in it. Look for the following:
Item Without Data With Data
table space ID 2
Name USERSPACE1
Type (SMS or DMS) System managed space
Total pages 16
Usable Pages 16
Used Pages 16
Page Size 4096
Extent Size (pages) 32
Prefetch Size (pages) 32
# of Containers 1
2.13 How much space did the data and indexes for storesdb take?
Item Difference
Total pages
Usable Pages
Used Pages
1.2 From your last exercise, which method would you use to load a small database, such as
storesdb? Explain your reasoning.
The storesdb database is very small, and would load quickly no matter which
method you used. For small databases such as this, it would be better to use the
import utility, because constraint checking is done as each row is inserted.
This means that after an import, the tables are not left in a check-pending
state.
1.3 Which method would you use to load a larger database? Why?
To load large databases, the load utility would be best to use. The load utility is faster
than the import utility, because it writes formatted pages directly into the database.
The load utility does not fire triggers and does not perform referential or table
constraint checking (other than validating the uniqueness of the indexes).
2.3 Did you check to see that all rows were inserted? How many rows were inserted for each
table?
table rows
state 52
manufact 9
call_type 5
stock 74
customer 28
orders 23
items 67
catalog 74
cust_calls 7
log_record 0
2.6 In order for you to practice with the load utility, you need to remove the data in the
tables. To do this, execute the delete.sql script.
./delete.sql
2.8 What kind of information was provided during the load? You should see much more
information during a load than you did during the import.
Information included in the output screen of the load utility includes the following:
Actual load command, including the input file and target table
Information about backup pending state
Indicates the start and end of the load phase (timestamped)
Information about the Load Consistency Point
Indicates the start and end of the build phase (timestamped)
Number of rows read
Number of rows skipped
Number of rows inserted
2.9 Did you check to see that all rows were inserted? How many rows were inserted for each
table?
table rows
state 52
manufact 9
call_type 5
stock 74
customer 28
orders 23
items 67
catalog 74
cust_calls 7
log_record 0
2.13 How much space did the data and indexes for storesdb take?
Item Difference
Total pages 25
Usable Pages 25
Used Pages 25
The data and indexes took 25 pages of 4096 bytes each for a total of 100 kilobytes.
9-2
9-3
When querying the database, you want the data to be returned to you as soon as possible.
Indexes are used to help speed up queries in several different ways.
Indexes provide the optimizer another way of retrieving data other than with a sequential scan.
You may only want 10 or so rows from a table that has 2 million rows. Without an index to find
the specific rows you want, the optimizer must scan all the pages of that table to find your rows.
With an index, only the requested rows are read and selected; it is not necessary to scan all rows.
Some queries return just the index key values. In this case the optimizer goes to the index nodes
and scans them in their already-sorted order (key-only search). There is no need to search
through the table for the values.
Indexes facilitate the process of joining multiple tables. When a column (or set of columns) is
defined with a primary key constraint, a unique index is created on it if one does not already
exist. When a column (or set of columns) is defined with a foreign key constraint, a nonunique
(duplicate-allowing) index is created on it if one does not already exist.
9-4
Storage Space
Storage space (in a user table space) required for each unique index can be estimated as:
(average_index_key_size + 8) * number_of_rows * 2
where:
The average_index_key_size is the byte count of each column in the index key.
The 2 is for overhead, such as nonleaf pages and free space.
Note For every column that allows NULLs, add one extra byte for the null indicator.
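Plugging numbers into the estimate above is straightforward. In this sketch, the note's extra byte per nullable key column is a parameter, and the names are illustrative:

```python
def index_space_bytes(avg_key_size, rows, nullable_cols=0):
    """Estimate unique-index space: (average key size + 8) * rows * 2,
    adding one byte to the key size per nullable key column."""
    return (avg_key_size + nullable_cols + 8) * rows * 2

# 10-byte key, no nullable columns, 2,000,000 rows:
print(index_space_bytes(10, 2_000_000))  # 72000000 bytes
```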
Leaf Pages
The following formula can be used to estimate the number of leaf pages. The accuracy of this
estimate depends largely on how well the averages reflect the actual data.
The number of leaf pages can be estimated as:
L = number_of_leaf_pages = X / (avg_number_of_keys_on_leaf_page)
where X is the total number of rows in the table.
You can estimate the original size of an index as:
(L + 2L / (average_number_of_keys_on_leaf_page)) * pagesize
Note For SMS, the minimum required space is 12K. For DMS, the minimum is one
extent.
For DMS table spaces, add all the sizes of the indexes for a table and round up to a multiple of
the extent size for the table space where the index is stored.
Remember to provide additional space for index growth due to INSERT/UPDATE activity, which
can result in page splits.
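The leaf-page formulas above can be combined into a single page estimate. This is a sketch with illustrative names: DMS rounding to the extent size follows the note above, and multiplying the result by the page size gives the size in bytes.

```python
import math

def index_size_pages(rows, keys_per_leaf, extent_size=None):
    """Estimate index size in pages: L leaf pages plus roughly
    2L / keys_per_leaf nonleaf pages; for DMS table spaces, round up
    to a whole number of extents."""
    leaf = math.ceil(rows / keys_per_leaf)
    pages = math.ceil(leaf + 2 * leaf / keys_per_leaf)
    if extent_size:
        pages = math.ceil(pages / extent_size) * extent_size
    return pages

print(index_size_pages(100_000, 200))                  # 505 pages
print(index_size_pages(100_000, 200, extent_size=32))  # 512 pages (16 extents)
```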
Processing costs
Using an index to search for data requires extra processing by the instance. The optimizer
calculates how much time and processing is required to use an index, factoring in the size of the
index. Then the optimizer decides whether or not to use the index. The optimizer generates a
query plan and, if that plan includes use of the index, the index is accessed, scanned, and the
data rows requested are retrieved.
9-7
Indexes can be defined as unique or nonunique, and can be defined on a single column or on
multiple columns in a table. Indexes can be defined as ascending (default) or descending. Also
by default, DB2 UDB indexes are created for forward scanning only. You must specify that you
want to allow reverse scanning, if so desired.
Indexes can also be created on calculated columns of a table; these indexes are maintained
automatically as data changes alter the calculated results.
Additional information beyond the key data is stored in an index (include). This extra data is not
considered in the search strategy, but if present, it must be considered for storage space.
Although indexes generally help data retrieval in the SELECT statements and facilitate join
conditions, there is maintenance overhead involved with each index during INSERT, UPDATE, or
DELETE statements.
By clustering a table by an index, the table data is sorted and reorganized into index-sorted
order.
In Oracle, the concept of index clusters is completely different. In the cluster database object,
tables can be stored in a prejoined manner: the rows of several different tables can be stored
together in the same data block. There is, however, no necessary physical relationship between a
particular block with one key value (e.g., 100) and those with the adjacent key values (99, 101).
9-8
Version 8 adds support for type-2 indexes. The primary advantages of type-2 indexes are:
They improve concurrency because the use of next-key locking is reduced to a
minimum. Most next-key locking is eliminated because a key is marked deleted instead
of being physically removed from the index page. For information about key locking,
refer to topics that discuss the performance implications of locks.
An index can be created on columns that have a length greater than 255 bytes.
A table must have only type-2 indexes before online table reorg and online table load
can be used against the table.
They are required for the new multidimensional clustering facility.
All new indexes are created as type-2 indexes, except when you add an index on a table that
already has type-1 indexes. In this case the new index will also be a type-1 index because you
cannot mix type-1 and type-2 indexes on a table.
All indexes created before Version 8 were type-1 indexes. To convert type-1 indexes to type-2
indexes, use the REORG INDEXES command. To find out what type of index exists for a
table, use the INSPECT command.
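These two commands might be used as follows (the table name and output file are illustrative):

```sql
-- Convert any type-1 indexes on the table to type-2
REORG INDEXES ALL FOR TABLE orders ALLOW WRITE ACCESS CONVERT;

-- Report, among other things, the index type for the table
INSPECT CHECK TABLE NAME orders RESULTS KEEP orders_inspect.out;
```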
9-10
Multidimensional clustering (MDC) provides a method for flexible, continuous, and automatic
clustering of data along multiple dimensions. This can result in significant improvements in the
performance of queries, as well as significant reduction in the overhead of data maintenance
operations such as reorganization, and index maintenance operations during insert, update and
delete operations. Multidimensional clustering is primarily intended for data warehousing and
large database environments, and it can also be used in online transaction processing (OLTP)
environments.
Multidimensional clustering enables a table to be physically clustered on more than one key, or
dimension, simultaneously.
Using a clustering index, DB2 attempts to maintain the physical order of data on pages in the
key order of the index, as records are inserted and updated in the table.
MDC benefits:
Clustering is extended to more than one dimension, or clustering key.
Range queries involving any, or any combination of, specified dimensions of the table
will benefit from clustering.
Not only will these queries access only those pages having records with the correct
dimension values, but these qualifying pages will also be grouped by extents.
9-12
When you create a table, you can specify one or more keys as dimensions along which to cluster
the data. Each of these dimensions can consist of one or more columns, as index keys do. A
dimension block index will be automatically created for each of the dimensions specified, and it
will be used by the optimizer to quickly and efficiently access data along each dimension. A
composite block index will also automatically be created, containing all dimension key columns,
and will be used to maintain the clustering of data over insert and update activity.
In an MDC table, every unique combination of dimension values forms a logical cell, which is
physically made up of blocks of pages, where a block is a set of consecutive pages on disk.
The set of blocks that contain pages with data having a certain key value of one of the dimension
block indexes is called a slice. Every page of the table is part of exactly one block, and all blocks
of the table consist of the same number of pages: the blocking factor. The blocking factor is
equal to extent size, so that block boundaries line up with extent boundaries.
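As a sketch, an MDC table clustered on two dimensions might be declared as follows (the column names and types are illustrative):

```sql
-- Each unique (yearandmonth, region) combination forms a logical cell;
-- dimension block indexes and the composite block index are created
-- automatically by DB2.
CREATE TABLE sales (
    customer_num INTEGER,
    amount       DECIMAL(10,2),
    yearandmonth INTEGER,
    region       CHAR(15)
)
ORGANIZE BY DIMENSIONS (yearandmonth, region);
```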
9-13
Consider an MDC table that records sales data for a national retailer. The table is clustered along
the dimensions YearAndMonth and Region. Records in the table are stored in blocks, which
contain an extent's worth of consecutive pages on disk. In the figure above, a block is
represented by a rectangle, and is numbered according to the logical order of allocated extents
in the table. The grid in the diagram represents the logical partitioning of these blocks, and each
square represents a logical cell. A column or row in the grid represents a slice for a particular
dimension. For example, all records containing the value South-central in the region column
are found in the blocks contained in the slice defined by the South-central column in the grid.
In fact, each block in this slice also only contains records having South-central in the region
field. Thus, a block is contained in this slice or column of the grid if and only if it contains
records having South-central in the region field.
A dimension block index is created on the YearAndMonth dimension, and another on the
Region dimension. Each dimension block index is structured in the same manner as a traditional
RID index, except that at the leaf level the keys point to a block identifier (BID) instead of a
record identifier (RID). Since each block contains potentially many pages of records, these block
indexes are much smaller than RID indexes and need only be updated as new blocks are needed
and therefore added to a cell, or existing blocks are emptied and therefore removed from a cell.
9-15
SMS table spaces are used by default during the database create operation. An SMS-type table
space requires that there is enough disk space in the file system where the SMS table space is
created. The table space grows as needed to accommodate the data stored in it, up to the size
allowed by the operating system. An SMS table space cannot contain long types of data.
DMS table spaces can be used to store the table data separately from the indexes on that table.
All indexes on a table are placed in the same table space. Placing indexes in their own table
space, away from the table data, helps to improve performance. Ideally, these table spaces are
separate physical devices, thereby spreading the I/O out among several disk drives. This
minimizes disk contention when accessing indexes and table data at the same time.
9-16
Index Include
Indexes in DB2 UDB allow nonkey values to be stored along with the index key values. This
included data is not part of the key itself, but it allows a key-only scan so that data pages do not
need to be read. Oracle does not have an equivalent INCLUDE mechanism.
The INCLUDE attribute requires extra space in the index nodes to accommodate the included
information.
Important!
The INCLUDE attribute can only be used on unique indexes.
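A sketch of an index carrying included (nonkey) data, assuming an orders table with these columns:

```sql
-- customer_num is carried in the leaf pages but is not part of the key,
-- enabling index-only access for queries that need both columns.
-- INCLUDE is permitted only on unique indexes.
CREATE UNIQUE INDEX ix_order_num
    ON orders (order_num)
    INCLUDE (customer_num);
```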
All of the data is included in the index node pages. This type of table is generally suitable only
for tables with relatively small rows. A primary key is required and the index is created on the
primary key.
FYI DB2 supports PCTFREE for data pages as well as index pages; this feature is important for use
with clustered indexes, so that inserts of new rows can be placed on the appropriate data page in
clustering order. Most DB2 datatypes are fixed length (thus differing from Oracle), and hence
PCTFREE for data pages is used mostly for new inserts.
Clustering
The DB2 command REORG can be used to cluster or recluster data into indexed order. An index
that is defined as a clustering index on a table helps DB2 UDB keep the data in more of an
indexed order during inserts, updates, and deletes of that data. The form of clustering is
persistent in DB2 UDB (but differs considerably from the Oracle notion of clustering). The
degree of clustering can be determined by viewing the clusterratio or clusterfactor columns of
the syscat.indexes system catalog table.
SQL Syntax
The SQL syntax of the DB2 UDB index creation is similar to the Oracle index creation, with the
following differences:
By default, index creation in DB2 UDB is set to scan in the forward direction only. If
you want dual-direction capability, you need to specify that you want to allow reverse
scans while creating the index.
The index can only be placed in a different table space from the data if it is created in
the CREATE TABLE statement.
Various other options and parameters of the CREATE INDEX SQL statement differ
subtly between the two products.
Create the unique index before defining the primary key constraint:
The primary key uses an existing index
The index uses your naming convention
The index remains in place after dropping a primary key
constraint
Create the nonunique (duplicate) index before defining the foreign key constraint:
The foreign key uses the index
The index uses your naming convention
The index remains in place after dropping a foreign key
constraint
9-20
When creating a table with a column (or columns) that you use as a primary key, it is good
practice to create a unique index on that column(s) before declaring the column(s) as the primary
key. This means the table structure and index are created before altering the table to include the
primary key.
This practice:
Ensures uniqueness on the key column(s)
Provides an index that uses your naming convention
Retains the index if the primary key constraint is later removed.
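The practice can be sketched as follows (the index and constraint names are illustrative):

```sql
-- 1. Create the index with your own name first...
CREATE UNIQUE INDEX ix_cust_num ON customer (customer_num);

-- 2. ...then add the primary key; since a matching unique index
--    already exists, DB2 reuses it rather than creating a new one.
ALTER TABLE customer ADD CONSTRAINT cust_pk PRIMARY KEY (customer_num);
```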
9-22
Suppose we have created other table spaces to use for our data and indexes, named:
dept1_data
dept1_idx
If that is the case, we could have created our employee table this way:
CREATE TABLE employee (
empno INTEGER NOT NULL,
f_name CHAR(20),
l_name CHAR(20),
address CHAR(30),
city CHAR(15),
state CHAR(2),
zip CHAR(10),
phone CHAR(14)
)
IN dept1_data
INDEX IN dept1_idx
NOT LOGGED INITIALLY;
Index Advisor
The DB2 UDB Index Advisor is a utility that can be used to determine which indexes should be
created. The Advisor can indicate which indexes would be best for a given query or workload, and
can also test an index without actually creating it.
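From the command line, the advisor can be invoked with db2advis, for example (the database name and query are illustrative):

```
db2advis -d storesdb -s "SELECT * FROM orders WHERE customer_num > 103"
```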
9-24
The privileges required to create an index include at least one of the following:
SYSADM or DBADM authority.
One of:
CONTROL privilege on the table
INDEX privilege on the table
and one of:
IMPLICIT_SCHEMA authority on the database, if the implicit or explicit schema
name of the index does not exist
CREATEIN privilege on the schema, if the schema name of the index refers to an
existing schema.
9-25
Detailed optimizer information is kept in explain tables separate from the actual access plan
itself. This information can be accessed from the explain tables by:
Writing queries against the explain tables
Using the db2exfmt tool
Using Visual Explain (to view explain snapshot information)
Visual Explain is a graphical tool that accesses the explain tables and provides information on
the optimizer access plans. Static and dynamic SQL statements can be analyzed with Visual
Explain.
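A sketch of populating the explain tables from the CLP and formatting the result with db2exfmt (the database name, query, and output file are illustrative):

```
db2 "EXPLAIN PLAN FOR SELECT * FROM orders WHERE customer_num > 103"
db2exfmt -d storesdb -1 -o explain.out
```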
Related Classes
DB2 Universal Database Administration Workshop for UNIX (CF211): Unit 7.6
(Explain)
Visual Explain is launched from the IBM DB2 UDB Control Center in
Windows by selecting: Start > Programs > IBM DB2 UDB
On Control Center:
Double-click your system
Double-click your instance
Double-click Databases
Right-click on your database
Select Explain SQL...
Enter your query
Check the box labeled Populate all columns in Explain tables
Click OK
9-27
In this course, you will learn to use Visual Explain from a Windows client, analyzing various
access plans in your storesdb database.
Visual Explain is launched from the IBM DB2 UDB Control Center, which is found on the IBM
DB2 UDB menu on the Windows Programs list by selecting: Start > Programs > IBM DB2
UDB menu.
Once Control Center is started, double-click on the system your instance is on, double-click on
the instance of choice, double-click on Databases, right-click on the database you want, and
select Explain SQL...
Enter your query in the box provided, check the Populate all columns in Explain tables box,
and click OK. This produces a graphical output of the access plan for your query.
Tip By double-clicking on the various blocks in the graphical output, you can get more
detailed information about the query. You can also review the access plans for
previously executed SQL statements.
9-28
The Visual Explain output graphic shown above was produced by the following SQL statement:
SELECT *
FROM customer, orders
WHERE customer.customer_num = orders.customer_num
AND customer.customer_num > 103
ORDER BY order_num;
9-29
9-30
1.3 Create an index with extra data included, but not indexed.
CREATE UNIQUE INDEX o_order_num
ON orders (order_num)
INCLUDE (customer_num) ALLOW REVERSE SCANS;
1.4 Explore how and where these indexes are stored in your SMS- and/or DMS-based
environment. Explain what other possibilities are open to a DB2 UDB administrator for
placement and allocation of index storage space.
1.5 Describe where and when various optional clauses are best used. Illustrate with
examples from the storesdb database. Be prepared to propose one or several situations
not yet discussed in class and defend your ideas.
Overall structure:
a. Learn to run Visual Explain and perform queries that explain how the inquiry is going
to be performed.
1.3 Create an index with extra data included, but not indexed:
CREATE UNIQUE INDEX o_order_num
ON orders (order_num)
INCLUDE (customer_num) ALLOW REVERSE SCANS;
1.5 Describe where and when various optional clauses are best utilized. Illustrate with
examples from the storesdb database. Be prepared to propose one or several situations
not yet discussed in class and defend your ideas.
Class discussion.
Consult the IBM DB2 Universal Database Administration Guide: Performance for
additional details.
10-2
10-3
There are several different kinds of constraints that can be added to tables in a DB2 UDB
database:
NOT NULL constraint
Data constraint on a column that requires known data to be inserted in the row
Ensures data is present in the column/row
Implemented using a one-byte nullable flag for every row
Example:
CREATE TABLE state (
state_code CHAR(2) NOT NULL,
state_descr CHAR(20) NOT NULL, ...
Note A NOT NULL constraint can only be specified on a column when a table is created
and cannot be added with the ALTER TABLE statement.
Note A unique constraint can only be added to column(s) that were originally created
with the NOT NULL constraint. Therefore, it may not be possible to add a unique
constraint with the ALTER TABLE statement.
Referential constraints
Referential constraints are used to impose referential integrity of the data between tables
in a parent-child relationship. These constraints are defined on a column or set of
columns as keys.
Primary key constraint
Referential constraint used on a parent table to enforce a parent-child
relationship
One primary key per table
May require multiple columns (composite) to ensure uniqueness
Requires the column(s) to be NOT NULL, imposes uniqueness
Causes an index to be created if one is not already present
Example:
ALTER TABLE customer
ADD CONSTRAINT cust_num_pk
PRIMARY KEY (cust_num);
Note A primary key constraint can only be added to column(s) that were originally
created with the NOT NULL constraint. Therefore, it may not be possible to add a
primary key constraint with the ALTER TABLE statement.
10-6
Example:
CREATE TABLE employee (
empnum INT NOT NULL,
lname CHAR(15),
mgrnum INT NOT NULL,
FOREIGN KEY (mgrnum)
REFERENCES person ON DELETE CASCADE);
10-7
You can specify the delete rule of a referential constraint when the referential constraint is
defined. You can specify NO ACTION, RESTRICT, CASCADE, or SET NULL (on NULL value
columns).
The delete rule is applied when a row of the parent table is deleted and that row has dependents
in the dependent table of the referential constraint. If the delete rule is:
RESTRICT or NO ACTION: an error occurs and no rows are deleted from the parent
CASCADE: the delete operation is propagated to the dependents of the parent table, in
addition to the row deleted from the parent table itself
SET NULL: each nullable column of the foreign key of each dependent of the parent
table is set to null
In Oracle, the delete rule of RESTRICT is the default. Oracle allows you to include the CASCADE
DELETE rule.
FYI DB2 also has an optional ON UPDATE clause that follows the ON DELETE clause and has two
alternate rules, NO ACTION and RESTRICT, but no CASCADE rule. This is similar to Oracle 9i.
Check constraints are used to check the validity of the data being
inserted or updated in a column of a table:
A check constraint defined on a table automatically applies to all
subtables of that table
A constraint name must be unique within the same table (but
not the database)
Check constraints are not checked for inconsistencies, duplicate
conditions, or equivalent conditions:
Can result in possible errors at execution time
10-8
A check constraint is used to check the validity of the data being inserted or updated in a column
of a table by evaluating the test condition on the column (the condition must not evaluate to
false; true or unknown are both acceptable). A check constraint defined on a table automatically
applies to all subtables of that table. The constraint is defined in the form of:
CONSTRAINT constraint-name <evaluation>
A constraint-name must be unique within the same table. However, the same constraint name
can be used on more than one table in the database. If the constraint name is omitted, an 18-
character identifier that is unique among those defined on the table is generated by the system.
When used with a PRIMARY KEY or UNIQUE constraint, the constraint-name may be used as the
name of an index that is created to support the constraint.
Note Defining triggers would be another way of enforcing business rules (not covered in
this module).
Check constraints are not checked for inconsistencies, duplicate conditions, or equivalent
conditions, so contradictory or redundant check constraints can be defined that result in possible
errors at execution time.
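For example, nothing prevents defining two contradictory constraints such as the following sketch (the table and constraint names are illustrative); every insert would then fail at execution time:

```sql
-- These two constraints can never both be satisfied
ALTER TABLE items ADD CONSTRAINT ck_qty_low  CHECK (quantity < 5);
ALTER TABLE items ADD CONSTRAINT ck_qty_high CHECK (quantity > 10);
```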
Constraint definitions use the same syntax for both Oracle and DB2
UDB.
Example:
ALTER TABLE orders
ADD CONSTRAINT ck_items_qty
CHECK (quantity >= 1 AND quantity <= 10);
10-9
Constraint definitions use the same syntax for both Oracle and DB2 UDB. Examples include:
ALTER TABLE items
ADD CONSTRAINT ck_items_qty
CHECK (quantity >=1 AND quantity <=10);
ALTER TABLE orders
ADD CONSTRAINT orders_fk1
FOREIGN KEY(customer_num)
REFERENCES customer
ON DELETE CASCADE;
ALTER TABLE customer
ADD CONSTRAINT pk_num
PRIMARY KEY(customer_num);
DB2 has an almost complete implementation of integrity constraints. A table can have named or
unnamed primary key, unique, referential, and user-defined check constraints. A referential
constraint can have no action, restrict, cascade, or set null options for deletes, but only the
restrict-style options (NO ACTION and RESTRICT) for updates.
Oracle implements referential constraints with restrict (default, and not expressed) and cascade
(CASCADE DELETE). Thus, for the most part, all Oracle syntax and functionality is available in
DB2 UDB without change to code; some UDB features are not available in Oracle.
10-10
Often, constraints are enforced by the logic in business applications and it is not desirable to use
system enforced constraints since re-verification of the constraints on insert, update and delete
operations can be costly. In this case, informational constraints are a better alternative.
Constraint Alteration
The enforcement attribute of a referential or check constraint can be altered to:
ENFORCED: Change the constraint to ENFORCED. The constraint is enforced by the
database manager during normal operations, such as insert, update, or delete.
NOT ENFORCED: Change the constraint to NOT ENFORCED. The constraint is not
enforced by the database manager during normal operations, such as insert, update, or
delete. This should only be specified if the table data is independently known to
conform to the constraint.
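A sketch of turning enforcement off for an existing check constraint (the table and constraint names are illustrative):

```sql
-- The application is trusted to keep the data valid; the optimizer
-- can still use the constraint definition for query optimization.
ALTER TABLE orders ALTER CHECK ck_items_qty NOT ENFORCED;
```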
10-11
1.2 What is the function of the ON DELETE NO ACTION clause in 1.1? What other possibilities are
available and how do they work?
1.3 What types of constraints can be added to an existing table without rebuilding the table?
What types of constraints can be applied only at the time of table creation?
1.4 Attempt to add the following constraint. Explain why it works or why it does not work:
Alter the customer table to add the check constraint valid_state so that only customers
in CA can be inserted.
1.2 What is the function of the ON DELETE NO ACTION clause in 1.1? What other
possibilities are available and how do they work?
The ON DELETE NO ACTION clause enforces the referential constraint. A row
in the parent table cannot be deleted if there are corresponding rows in the child
table. This clause can be replaced by ON DELETE RESTRICT (with the same
meaning). The other possibilities are:
ON DELETE CASCADE
ON DELETE SET NULL
1.4 Attempt to add the following constraint. Explain why it works or why it does not work:
Alter the customer table to add the check constraint valid_state so that only customers
in CA can be inserted.
ALTER TABLE customer
ADD CONSTRAINT valid_state
CHECK (state = 'CA');
11-2
11-3
The IBM Data Management Tools are specifically designed to enhance the performance of IBM
DB2 UDB Database. And the tools support the newest versions of the databases as they become
available, making it easy to migrate from version to version and still benefit from tools support.
The Data Management Tools can be launched from centralized control points, and can share data
and functions across tools. This adds up to increased ease-of-use, less training time, and higher
productivity for DBAs.
These tools have been developed under IBM's Autonomic Computing initiative to help reduce
complexity and improve quality of service through the advancement of self-managing
capabilities in computing environments, and are part of the company's continuing effort to
expand SMART (Self Managing and Resource Tuning) technology into the DB2 database arena.
The full list of available DB2 tools available as of Spring 2003 (from www-3.ibm.com/
software/data/db2imstools) is:
DB2 Application Recovery Tool, v1.2
DB2 Administration Tool, v4.1
DB2 Archive Log Compression Tool, v1.1
DB2 Automation Tool, v1.3
DB2 Bind Manager, v2.1
Note:
The SQL error code (from sqlca.sqlcode) is -204
The format SQL0204N, including the N (negative) suffix, is used for SQLCODE errors
The SQLSTATE (the alternate, ANSI-standard error code) is 42704
11-5
o A data type is being used. This error can occur for the following reasons:
- If "<name>" is qualified, then a data type with this name does not
exist in the database.
- The data type does not exist in the database with a create timestamp
earlier than the time the package was bound (applies to static
statements).
- If the data type is in the UNDER clause of a CREATE TYPE statement, the
type name may be the same as the type being defined, which is not valid.
This return code can be generated for any type of database object.
sqlcode: -204
sqlstate: 42704
11-7
db2look -d dbname -p
% No userid was specified, db2look tries to use Environment variable USER
% USER is: GLEN
% Creating DDL for table(s)
-- This CLP file was created using DB2LOOK Version 7.1
-- Timestamp: 06-Sep-2002 02:27:50 PM
-- Database Name: MYDATABASE
-- Database Manager Version: DB2/NT Version 7.1.0
-- Database Codepage: 1252
CONNECT TO MYDATABASE;
COMMIT WORK;
CONNECT RESET;
TERMINATE;
db2look -d dbname -e
% No userid was specified, db2look tries to use Environment variable USER
% USER is: GLEN
% Use plain text format
%
% Using database MYDATABASE
% Using userid GLEN
% Database Manager Version DB2/NT Version 7.1.0
% Database Codepage 1252
%
%********************************************
TABLE TEST
%********************************************
%
CREATOR GLEN
CARD -1
NPAGES -1
%
COLUMNS
%
NAME I
COLNO 0
TYPE INTEGER
LENGTH 4
NULLS Y
COLCARD -1
NUMNULLS -1
NFRQ -1
NQUN -1
LOW2KEY
HIGH2KEY
AVGCOLLEN -1
%
COLUMN DISTRIBUTION
%
%
INDICES
Syntax: db2look -d DBname [-u Creator] [-s] [-g] [-a] [-t Tname1 Tname2...TnameN]
[-p] [-o Fname] [-i userID] [-w password]
db2look -d DBname [-u Creator] [-a] [-e] [-t Tname1 Tname2...TnameN]
[-m] [-c] [-r] [-x] [-l] [-f] [-o Fname] [-i userID]
[-w password]
db2look [-h]
Installation Types
The setup program provides three setup options:
Custom: you may choose any one or more of the three components to install.
Typical: a typical install copies all files required during normal use of DB2 Table
Editor to the specified installation directory and system directories, as required.
Compact: the minimum set of files needed to run the user application is installed.
Setup.ini example
[Options]
AutoInstall=1
FileServerInstall=0
SetupType=0
InstallPath=C:\Programs\DB2 Table Editor
ProgramGroup=DB2 Table Editor
:WAIT
db2javit -j:"CC" -d:"CC" -c:"db2plug.zip;db2forms.jar" -w: -o:"-mx128m -ms32m"
-a:"%2 %3 %4 %5 %6 %7 %8 %9"
GOTO END
:END
Here is an example of db2cc after you have added the files if you are working with DB2 V7:
IF "%1" == "wait" GOTO WAIT
:WAIT
db2javit -j:"CC" -d:"CC" -c:"db2forms.jar" -w: -o:"-mx128m -ms32m" -a:"%2 %3
%4 %5 %6 %7 %8 %9"
GOTO END
:END
If you are running the Control Center as a Java applet, complete the following steps:
1. Copy the db2forms.jar file where the <codebase> tag points to in db2cc.htm.
2. Update db2cc.htm to include db2plug.zip and db2forms.jar in the archive list.
11-16
1.2 Perform some valid and invalid queries (i.e., SELECT from tables and columns that may
or may not exist).
1.3 Obtain the DDL that was used to create the orders table using db2look.
2.3 (If Internet access available) Use the DB2 Table Editor itself to select one of the
demonstration forms from the rocketsoftware.com website using File > Open from
Server > List ... Good, illustrative forms include:
DB2FORMS.Customer
DB2FORMS.Suppliers
DBE.Org
DBE.Movies
DBE.GetColleagues
2.4 Take note of how NULL values are currently displayed. Change the option as to how
NULL values are displayed (View > Options).
2.5 Use the DB2 Table Editor Developer to open one of the forms and locally edit this form
to rearrange the fields or add new features. Do not save the form to the server.
1.2 Perform some valid and invalid queries (i.e., SELECT from tables and columns that may
or may not exist).
db2 select * from customerrrrrrrrr
db2 ? SQL0204N
1.3 Obtain the DDL that was used to create the customer and the orders tables using db2look
(note that from the syntax chart you can ask for a list of tables).
db2look -d storesdb -t customer orders -e
12-2
12-3
The objective of this module is to teach you what you need to know to perform a successful
backup and recovery. Further information can be found in the DB2 UDB Administration
Workshop course.
Logging
Recovery
Recovery of lost tables
Point-in-time recovery
General backup and recovery techniques
12-4
Crash Recovery
The instance can simply stop and require restarting. No data is lost, except for transactions
that had not been committed at the time of the system failure. Recovery from this type of failure
can be automated using the DB configuration parameter AUTORESTART, which makes this
restart process automatic.
Log-retention logging:
Provides the capability of complete backup and recovery
Logs are archived as they are used
12-5
In DB2 UDB, there are two types of logging: circular and log retention logging.
Important!
All logging is done at the database level, not the instance level!
Circular Logging
Circular logging provides the ability to recover a database from a crash condition: it logs
transactions as they occur but does not back up the logs as they are used. Therefore, when a log
is reused (circular) the old transaction data in it is no longer available for recovery. A log is not
freed for reuse until all transactions in it are committed or rolled back. Primary logs are used for
the normal transaction logging, but when the primary logs are full and none can be freed for
reuse, a secondary log is created to continue logging operations. This is useful when large units
of work (transactions) occur and cause all of the primary log files to be used. The secondary logs
prevent the database server from hanging.
Dual logging was introduced in FixPak 3 of V7.2, but only for UNIX. In V8, it is
available for all environments.
Note When MIRRORLOGPATH is first enabled, it will not actually be used until the next
database startup. This is similar to the NEWLOGPATH configuration parameter.
If there is an error writing to either the active log path or the mirror log path, the database will
mark the failing path as "bad", write a message to the administration notification log, and write
subsequent log records to the remaining "good" log path only. DB2 will not attempt to use the
"bad" path again until the current log file is completed. When DB2 needs to open the next log
file, it will verify that this path is valid, and if so, will begin to use it.
If this path is not valid, DB2 will not attempt to use the path again until the next log file is
accessed for the first time. There is no attempt to synchronize the log paths, but DB2 keeps
information about access errors that occur, so that the correct paths are used when log files are
archived. If a failure occurs while writing to the remaining "good" path, the database shuts
down.
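Mirrored logging is enabled through the database configuration, for example (the database name and path are illustrative):

```
db2 update db cfg for storesdb using MIRRORLOGPATH /mirror/logs
```

As the note above indicates, the mirror path does not take effect until the next database startup.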
12-8
Planning the backup includes selecting a backup device, ensuring the user has the proper
authority to perform the backup, ensuring both the database and the instance have the correct
configuration parameters, and ensuring that the scheduling of the backup meets your backup and
recovery strategy.
You must have SYSADM, SYSCTRL, or SYSMAINT authority to use the BACKUP DATABASE
command.
The database can be local or remote, but the backup remains on the database server
unless a storage manager is used.
You can back up a database to a fixed disk, a tape, or a location managed by a storage
manager.
Note In this class, we will back up to disk and recover from disk.
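A minimal backup to disk might look like this (the database name and target path are illustrative):

```
db2 backup database storesdb to /backups
```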
Configuration Parameters
Some DBM configuration parameters and DB configuration
parameters need to be addressed with respect to backup and
recovery.
To view the instance parameters:
db2 get dbm cfg
To view parameters for a specific database:
db2 get db cfg for <database-name>
To change configuration parameters:
db2 update dbm cfg using backbufsiz 1000
12-9
There are some configuration parameters that need to be addressed with respect to backup and
recovery. These parameters are both instance-wide and database specific.
To determine the current setting for both the DBM and the DB configuration parameters, use the
DB2 command line processor utility to view and change them.
To view the instance parameters, use:
db2 get dbm cfg
To view parameters for a specific database, use:
db2 get db cfg for <database-name>
Database parameters:
LOGFILSIZ LOGPRIMARY
LOGSECOND LOGRETAIN
USEREXIT AUTORESTART
NUM_DB_BACKUPS REC_HIS_RETENTN
Database configuration switches:
Backup pending Database is consistent
Rollforward pending Restore pending
12-10
The configuration parameters for the instance (DBM) that may need to be changed are:
BACKBUFSZ Backup buffer default size (4K)
RESTBUFSZ Restore buffer default size (4K)
The parameters for the database (DB) that may need to be changed are:
LOGFILSIZ Log file size (4K)
LOGPRIMARY Number of primary log files
LOGSECOND Number of secondary log files
LOGRETAIN Log retain for recovery enabled (ON)
USEREXIT User exit for logging enabled (ON)
AUTORESTART Auto restart enabled (ON)
NUM_DB_BACKUPS Number of database backups to retain
REC_HIS_RETENTN Recovery history retention (days)
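For example, to enable roll-forward (log-retention) recovery on a database (the database name is illustrative):

```
db2 update db cfg for storesdb using LOGRETAIN ON USEREXIT ON
```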
12-11
You can use the backup command on the command line or the Control Center to perform a
database backup.
The pagesize for backup is always 4 kilobytes and should not be confused with the multiple
page sizes allowed in table spaces for database data.
When creating a backup image (or restoring one), the default buffer size is 1024
pages (of 4K size). This is important if you are using tape as your backup device: if
your tape drive uses variable block sizes, you must lower the buffer size to a range
that your tape drive supports.
Under supported Windows operating systems, you can back up to diskette.
For most versions of Linux, using the DB2 default buffer sizes for backup and restore
to a SCSI tape device results in the error message: SQL2025N, reason code 75. To
prevent the overflow of Linux internal SCSI buffers, use this formula:
12-12
To restore your database data to a new database, you must have either SYSADM or SYSCTRL authority. If you are restoring to an existing database, SYSMAINT authority is sufficient.
You can choose to restore the recovery history file (backup image file information), one or more
table spaces, or the whole database, either local or remote.
A database restore requires an exclusive connection. No other applications can be connected to
that database. A table-space restore can take place with applications connected to the database,
but the table space being restored requires an exclusive connection by the restore process. To use
table-space-level backup and restore, log-retention logging (archive) must be used.
Point-in-time recovery during a restore is supported via the roll forward part of the recovery.
If the table spaces for a database do not exist when doing a restore, use the REDIRECT option on
the restore command (redirected restore). You can modify the table-space container definitions
during the restore (to add more storage space, for example).
After a restore, the database (or parts of it) may be left in a roll forward pending state. This is
used to ensure data consistency in the database and prevents any activity against the structures in
that state (see the roll forward discussion on the next page).
12-13
DB2 UDB has the capability to restore one or more individual table spaces specified in a
comma-separated list. In the example above, the ONLINE keyword is optional and allows users to
connect to the database and access data in any table spaces other than the ones that are being
restored.
Note DB2 UDB also has the capability to restore a dropped table, as long as the DROPPED
TABLE RECOVERY option for a table space is set to ON. There are some restrictions;
for more detail on this feature see the DB2 UDB Administration Guide:
Implementation.
In V8, you can do a point-in-time restore using the local time instead of GMT.
Example:
db2 rollforward db storesdb to end of logs and stop
12-14
The rollforward utility rolls the log files forward, applying the transactions in those logs to the
database. Use the rollforward utility after a table space has been restored from a backup or if
any of the table spaces have been taken offline.
If a database has been restored and was set to use the rollforward recovery, the database remains
in the rollforward-pending state until the rollforward utility completes successfully.
Tip It is a good idea to perform a complete backup after a recovery and rollforward.
Note If you restore from a full offline database backup image, you can bypass the
rollforward-pending state during the recovery process.
12-15
Related Classes
DB2 UDB Administration Workshop
DB2 UDB Advanced Recovery and High Availability Workshop
1.1 Find out the current settings for backing up the storesdb database (archiving is done
at the database level, not at the instance level). Use DB2 to get the database manager
configuration parameters and the storesdb database configuration parameters.
Make the following changes to the current settings. (Other changes may be necessary for a
production database. These settings are only illustrative and apply to disk backup. Database
settings need only be made once or when changes are necessary and, thus, not every time that
backup is required.)
1.2 Some settings are at the instance level (DBM) and apply to all databases controlled by
the instance. Change the DBM BACKBUFSZ configuration parameter to 1000.
1.3 Some settings are at the database level (DB) and apply just to that database. Change the
storesdb LOGRETAIN configuration parameter to RECOVERY.
1.4 Changes to the DB configuration file do not take effect until the database is deactivated,
so terminate the storesdb database activity.
1.5 Verify that the changes were made by getting the DB configuration parameters for
storesdb.
2.2 Look at the backup history for the storesdb database using the DB2 list command
db2 list history backup all for storesdb | more
2.3 Perform a simple restore from a backup using the DB2 restore command.
db2 restore db storesdb from /home/inst###/backup
2.4 You will need to complete the rollforward of the database in order to use it.
db2 rollforward db storesdb to end of logs and stop
3.4 List your backup directory. You should see two backup files.
ls -l backup
3.5 Select the same row from storesdb and check the phone number value.
db2 connect to storesdb
db2 "SELECT * FROM customer
WHERE customer_num = 103"
You should see the 32-555-1212 phone number.
3.6 Stop the database. Restore the data from the backup performed in exercise 2.1.
db2 force applications all
db2 restore db storesdb from /home/inst###/backup
taken at <timestamp>
without rolling forward
where the <timestamp> is the timestamp from your backup, from exercise 2.1.
1.2 Some settings are at the instance level (DBM) and apply to all databases controlled by
the instance. Change the DBM BACKBUFSZ configuration parameter to 1000.
db2 update dbm cfg using backbufsz 1000
1.3 Some settings are at the database level (DB) and apply just to that database. Change the
storesdb LOGRETAIN configuration parameter to RECOVERY.
db2 update db cfg for storesdb using logretain recovery
1.4 Changes to the DB configuration file do not take effect until the database is deactivated,
so terminate the storesdb database activity.
db2 terminate
1.5 Verify that the changes were made by getting the DB configuration parameters for
storesdb.
Note: Exercises 2 & 3 have their solutions built into them, and thus those solutions are not
repeated here.
13-2
Tuning a server instance and a database must begin as part of installation and development
before the data is laid out on disk and before the very first lines of code are written. This part of
design is too often omitted, unfortunately, or left until after the system is operational and
problems have already crept in.
Our discussion of performance and tuning here is intended to get you ahead of the curve. During
design, implementation, and testing you have a unique opportunity to experiment with your
settings before the new database and server are put into production.
The emphasis in this course is on DB2 UDB features that you will use throughout the life of
your server and need to plan for rather than comparisons with Oracle. The comparisons, while
interesting, do not provide a basis for making decisions; each database server has its own
underlying architecture that is more critical to performance than individual features.
The key factors that you do have some control over are:
Application design: moving an existing design from one server to another without
adapting it, as if understanding the server were not important, is generally a big mistake.
Database design: using the features of the database software to implement a logical
design (tables, columns) and a physical design (and thus the placement of tables and indices).
Server architecture: using the features and parameters of your server to best effect.
13-3
This module demonstrates basic database performance tuning of DB2 UDB. The goal of this
module is to highlight DB2 UDB tuning parameters that have significant impact on the DB2
UDB instance and a given database. For example, determine whether to use an SMS- or DMS-
type of table space for your data and indexes.
As in Oracle, there are two basic places for performance tuning in DB2 UDB: the instance and
the database. The two products differ, however, in their approach to performance tuning.
In Oracle, you tune the instance from the operating system point of view, controlling the
resources allocated overall to the instance. The database is then tuned based on layout of data,
table schema, indexes, and the like.
In DB2 UDB, performance tuning for the instance is still based on operating system resource
usage, and the database tuning is somewhat based on data layout, table schema, and indexes.
Beyond that, each database can be tuned to use a specific part of the resources allocated to the
instance.
In DB2 UDB, each database has its own buffer pools and its own set of logs. These can be tuned
differently for each database.
13-4
The DB2 UDB database has a significantly greater capacity (and complexity) for tuning than
Oracle, which results in a greater degree of control over the use of system resources. In addition
to the increased number of tuning parameters in DB2 UDB, there is a significant amount of
interdependence between several of the instance configuration parameters and the database
configuration parameters, adding to the performance-tuning challenge.
[Diagram: DB2 UDB memory layout, showing Database Global Memory and its components, including the package cache, sort heap, buffer pools, and lock list.]
13-5
As shown in the above diagram, the Database Global Memory consists of:
Utility heap, backup buffer, and restore buffer
Extended memory storage
Database heap, log buffer, and catalog cache
Global package cache
Sort heap
Buffer pools
Lock list
It is beyond the scope of this module to define and explain each of these sections of memory.
However, the main point of the diagram is to show that DB2 UDB memory is complex and
adjustable. As part of the basic performance tuning of this module, you will be configuring
buffer pools, which can result in a significant gain in performance.
Oracle and DB2 UDB tuning deal with the same three components:
Memory
Disk
Processes
13-6
Even though the configuration names and settings may be different, a database engine is a
database engine, and both Oracle and DB2 UDB are tuned by manipulating the same three
components: memory, disk, and processes. The next few slides will concentrate on these three
areas.
13-7
RUNSTATS is used to update statistics about the physical characteristics of a table and the
associated indexes. These characteristics include number of records, number of pages, and
average record length. The optimizer uses these statistics when determining access paths to the
data.
This utility should be called when a table has had many updates, or after reorganizing a table.
It is recommended to run the RUNSTATS command:
On tables that have been modified considerably.
On tables that have been reorganized (using REORG, REDISTRIBUTE DATABASE
PARTITION GROUP).
When a new index has been created.
Before binding applications whose performance is critical.
When the prefetch quantity is changed.
The options chosen must depend on the specific table and the application. In general:
If the table is a very critical table in critical queries, is relatively small, or does not
change too much and there is not too much activity on the system itself, it may be worth
spending the effort on collecting statistics in as much detail as possible.
13-9
13-10
13-11
REORG Examples:
For a classic REORG TABLE like the default in DB2 Version 7, enter the following command:
db2 REORG TABLE employee INDEX empid ALLOW NO ACCESS
INDEXSCAN LONGLOBDATA
13-13
REORGCHK calculates statistics on the database to determine if tables or indexes, or both, need
to be reorganized or cleaned up.
This command does not display declared temporary table statistical information.
This utility does not support the use of nicknames.
Unless you specify the CURRENT STATISTICS option, REORGCHK gathers statistics on all
columns using the default options only. Specifically, column group statistics are not gathered,
and if LIKE statistics were previously gathered, they are not regathered by REORGCHK.
The statistics gathered depend on the kind of statistics currently stored in the catalog tables:
If detailed index statistics are present in the catalog for any index, table statistics and
detailed index statistics (without sampling) for all indexes are collected.
If detailed index statistics are not detected, table statistics as well as regular index
statistics are collected for every index.
If distribution statistics are detected, distribution statistics are gathered on the table. If
distribution statistics are gathered, the number of frequent values and quantiles are
based on the database configuration parameter settings.
REORGCHK calculates statistics obtained from eight different formulas to determine if
performance has deteriorated or can be improved by reorganizing a table or its indexes.
Table statistics:
13-16
BUFFPAGE
In DB2 UDB, buffer pools are part of the database and not part of the instance. Each database
has one default buffer pool with a default size of 1000 pages for UNIX systems and 250 pages
for Windows systems. Additional buffer pools can be created using the CREATE BUFFERPOOL
statement, and if the SIZE value is set to -1 then the BUFFPAGE value is used to size the buffer
pool. For example, the following SQL statement creates a buffer pool with a size defined by the
BUFFPAGE value (the buffer pool name bp1 is illustrative):
CREATE BUFFERPOOL bp1 SIZE -1
Theory Two:
A second theory holds that there should be at least two buffer pools:
One for all of the static tables
Another for the data and index pages
This configuration allows the static tables to be retained in memory, and the most active data and
index pages are retained as well.
Theory Three:
A third theory suggests that there should be only one buffer pool:
Some IBM case studies have demonstrated that one large buffer pool is just as efficient
as multiple buffer pools, and only the most active pages of a database ought to be
retained in memory regardless of their type.
Many IBM articles are devoted to the subject of buffer pool configuration, and there are many
opinions on what is best.
Important!
When tuning, adjust only one parameter at a time!
NUM_IOCLEANERS
This database configuration parameter is equivalent to the Oracle parameter db_writers
(Oracle 7) or db_writer_processes (Oracle 8 and later) and is the number of page cleaning
processes that are allocated at database startup. These are the processes that are triggered when
the CHNGPGS_THRESH is exceeded. The rule of thumb is to set this parameter to one or two
more than the number of physical disks on which the database resides. The range for this
parameter is from 0 to 255, and the default is 1.
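The rule of thumb above can be expressed as a tiny calculation (illustrative Python, not part of DB2):

```python
def recommended_iocleaners(physical_disks, extra=2):
    """Rule of thumb from the text: one or two more page cleaners than the
    number of physical disks, clamped to the parameter's 0-255 range."""
    return max(0, min(255, physical_disks + extra))


print(recommended_iocleaners(6))    # 8
print(recommended_iocleaners(300))  # 255 (the parameter maximum)
```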
At a maximum interval:
SOFTMAX for DB2 UDB
13-19
The three page-cleaning mechanisms in DB2 UDB are similar to the mechanisms found in some
form in other vendor database servers. In Oracle, for instance, Database Writer processes
(DBWR) handle the flushing of dirty pages from the buffer cache. As of Oracle 8i, different
block sizes can be used for different tablespaces.
Here, however, we discuss only the DB2 UDB mechanism in detail.
1. There is a mechanism to clean the buffer pool pages back to disk when a maximum
interval is exceeded. For DB2 UDB, this maximum interval is set using the DB
configuration parameter, SOFTMAX. This parameter is a percentage of logical log files
that can be filled between soft checkpoints. For example, if SOFTMAX is set to 200, then
up to 200% of one logical log file (that is, two logs) can be filled between soft
checkpoints. The checkpoint interval in DB2 UDB is thus measured in log space used
rather than in seconds, as one normally expects of an interval.
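The relationship between SOFTMAX and log space can be sketched as follows (an illustrative Python calculation assuming the 4 KB log page size; the parameter values are hypothetical):

```python
LOG_PAGE_SIZE = 4096  # DB2 log pages are 4 KB


def checkpoint_interval_bytes(softmax_pct, logfilsiz_pages):
    """SOFTMAX is a percentage of one log file's size; the soft checkpoint
    interval is measured in log space, not in seconds."""
    return int(softmax_pct / 100 * logfilsiz_pages * LOG_PAGE_SIZE)


# SOFTMAX = 200 with 1000-page log files: two full logs between checkpoints
interval = checkpoint_interval_bytes(200, 1000)
print(interval)                            # 8192000 bytes
print(interval // (1000 * LOG_PAGE_SIZE))  # 2 log files
```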
2. There is a mechanism to clean buffer pool pages back to disk when a percentage of the
buffer pool pages have been modified. This mechanism is triggered by the DB
configuration parameter CHNGPGS_THRESH.
3. There is a mechanism to clean the buffer pool pages back to disk when the buffer pool
is full of modified pages. When this condition occurs in DB2 UDB, the least-recently-
used page is victimized, or stolen, and cleaned back to disk. In order for this page to be
reused, its contents must first be written back to disk synchronously by the database agent
that needs the space.
Note * In DB2 UDB there is not a specific name for each type of page cleaning activity.
For example, asynchronous pages can include log pages depending on how the log
pages were cleaned to disk (either synchronously or asynchronously). The number of
pages written as a result of CHNGPGS_THRESH being exceeded is calculated by
taking asynchronous pages and subtracting log pages written asynchronously.
Victim pages are always called either victim pages or stolen pages.
CHNGPGS_THRESH
Set lower if victim/dirty page steals occur
NUM_IOCLEANERS
One or two more than the number of physical drives
13-21
The more active pages of the database that can be retained in memory, the better the query
performance. Therefore, the larger the buffer pool the better. However, a buffer pool that is too
large could cause memory paging at the OS level, so a good rule of thumb is to allocate the
largest buffer pool that is possible without causing paging.
If CHNGPGS_THRESH is set too high, the buffer pool fills up with modified pages faster than the
asynchronous page cleaning processes can clean them back to disk. Should the buffer pool
become full, the database agent processes that normally only handle SQL processing must take
over the process of page cleaning. When this occurs, the least-recently-used page in the pool is
declared a victim (is stolen) and is cleaned back to disk by the database agent as a synchronous
I/O event. If CHNGPGS_THRESH is set too low, then modifications to the pages are not allowed to
accumulate before they are written out to disk, and excessive I/O operations are performed.
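The trigger point implied by CHNGPGS_THRESH can be sketched as follows (illustrative Python; the 60 percent figure is used here only as an example value):

```python
def dirty_page_trigger(chngpgs_thresh_pct, bufferpool_pages):
    """Number of modified (dirty) pages at which asynchronous page
    cleaning is triggered by CHNGPGS_THRESH."""
    return bufferpool_pages * chngpgs_thresh_pct // 100


# With a 1000-page buffer pool and a 60 percent threshold:
print(dirty_page_trigger(60, 1000))  # cleaning starts at 600 dirty pages
```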
You can monitor for victim pages by creating an event monitor using the CREATE EVENT
MONITOR BUFFERPOOLS statement and then analyzing the results by using the System Monitor
GUI tool or the db2evmon command line tool.
13-22
Disk space is either allocated as SMS table space or DMS table space. Generally SMS table
space access is slower, since the I/O is passed through the operating system I/O buffers.
However, SMS table space is easier to set up. DMS table spaces can be faster since the Database
Manager directly controls the I/O to these table spaces when the containers are raw devices.
However, DMS table spaces require some additional steps to set up, since the device-type
containers must be predefined.
In DB2 UDB, containers are either directories, files, or raw devices.
Therefore, since SMS table spaces use directories as containers, SMS table spaces are always
cooked.
DMS table spaces are cooked if they use files as containers, and raw if they use devices as
containers.
DMS table spaces that use devices for containers have the best performance.
13-23
The page size that should be used for a DB2 UDB table space is dependent on four factors.
The first factor is the maximum row size. The page size of a table space should always exceed
the maximum row size for any table in the table space. The chart below specifies the maximum
row size for a given page size. It is possible to create a table in a table space whose page size is
smaller than the row size; however, query performance is seriously degraded.
The second factor is the maximum amount of data stored in a table. The chart below specifies
the maximum table size for a given table space page size:
Page Size Maximum Row Size Maximum Table Size
4 KB 4005 bytes 64 GB
8 KB 8101 bytes 128 GB
16 KB 16293 bytes 256 GB
32 KB 32677 bytes 512 GB
The third factor is the number of columns in a table. For a 4K page size, a table is limited to 500
columns. Should a table exceed this limitation, then the table should be moved to a table space
that has a page size of 8K, 16K, or 32K, in which case the maximum number of columns
allowed is 1012.
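The limits in the table and the column-count rule above can be combined into a small page-size chooser (an illustrative Python sketch, not a DB2 utility):

```python
# Limits taken from the table and text above (page sizes in KB)
PAGE_LIMITS = {
    4:  {"max_row": 4005,  "max_cols": 500},
    8:  {"max_row": 8101,  "max_cols": 1012},
    16: {"max_row": 16293, "max_cols": 1012},
    32: {"max_row": 32677, "max_cols": 1012},
}


def smallest_page_size_kb(row_bytes, num_columns):
    """Smallest table space page size whose row-size and column-count
    limits accommodate the table, or None if no page size does."""
    for kb in sorted(PAGE_LIMITS):
        lim = PAGE_LIMITS[kb]
        if row_bytes <= lim["max_row"] and num_columns <= lim["max_cols"]:
            return kb
    return None


print(smallest_page_size_kb(3000, 100))  # 4
print(smallest_page_size_kb(5000, 100))  # 8
print(smallest_page_size_kb(3000, 600))  # 8 (column count forces a larger page)
```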
13-25
The extent size for a table space reflects the number of pages of data that are written to a
container before switching to the next container. For SMS table spaces, space is only allocated
one page at a time. When the number of pages in an extent has been allocated, the engine
switches to the next container. For DMS table spaces, space is allocated as a full extent at a time,
which fills up as pages are used. When the extent is full, the engine switches to the next extent.
No matter which type of table space is used, extent size tuning is dependent on two factors.
Growth rate
The second factor is growth rate. For tables that are actively growing, a large extent size is
preferred. This reduces the number of extent allocation operations that the Database Manager
must perform.
13-27
Prefetching is the process of reading pages into the buffer pool prior to the user needing the
pages for a query. It is triggered by a sequential read of the table and is similar to the concept of
read-ahead.
Tuning the prefetch size is a straightforward process. First, the prefetch size should be a multiple
of the extent size. Second, IBM case studies have shown that prefetching works best at one or
two multiples of the extent size. There is almost no increase in performance for prefetch
multiples of three or more.
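The guideline above amounts to a one-line calculation (illustrative Python; extent and prefetch sizes are in pages):

```python
def prefetch_size(extent_size_pages, multiple=2):
    """Prefetch size as a multiple of the extent size. Per the text,
    multiples above two give almost no additional benefit."""
    assert multiple in (1, 2), "case studies suggest 1x or 2x the extent size"
    return extent_size_pages * multiple


print(prefetch_size(32))     # 64 pages
print(prefetch_size(32, 1))  # 32 pages
```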
13-28
Because the DB2 UDB engine runs as a set of cooperating single-threaded processes rather
than as one multithreaded process, there are numerous configuration parameters that control
the number of processes running.
At the Database Manager level, there are several configuration parameters that limit the number
of agents available to process the requests submitted by application connections. For example,
the parameter MAXAGENTS limits the maximum number of agents available to all the databases
on the instance.
At the Database level, there are also several configuration parameters that limit the number of
processes running. For example, the parameter MAXAPPLS limits the number of applications that
can be connected to the database at one time.
MAXAGENTS:
Maximum number of agents available to process requests
across all databases
MAXAPPLS:
Maximum number of applications that can connect to the
database
13-29
MAXAGENTS applies to the Database Manager level. It limits the maximum number of processes
that are available to perform all of the requests from all of the applications connected to all of
the databases on the instance. MAXAPPLS only applies to the database and acts to limit the
number of applications that connect to the database at one time. The two configuration
parameters are interrelated and resetting one may require resetting the other.
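One way to picture the interrelation (an illustrative Python sketch; this is a rough consistency check, not an official sizing formula):

```python
def maxagents_sufficient(maxagents, maxappls_per_db):
    """Rough check: MAXAGENTS (instance-wide) should at least cover the
    sum of MAXAPPLS over all databases that may be active concurrently."""
    return maxagents >= sum(maxappls_per_db)


# Two databases allowing 40 and 25 concurrent applications:
print(maxagents_sufficient(100, [40, 25]))  # True
print(maxagents_sufficient(50, [40, 25]))   # False: raising MAXAPPLS may
                                            # require raising MAXAGENTS too
```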
13-30
DB2 UDB has several mechanisms that monitor current conditions within the instance or
databases and can self-tune the instance. For example, package cache is a section of memory
used for storing query plans optimized at BIND time. The package cache monitors its memory
use, and, when memory runs low, it attempts to borrow unused memory from other sections.
Another example is the allocation of secondary logical logs. When the database runs out of
space in the primary logical logs, it automatically allocates secondary logical logs. The numbers
of both primary and secondary logical logs are tunable parameters. You will get a chance to see
secondary logical logs that are allocated in a lab exercise at the end of this module.
13-31
In order to prevent an engine crash as a result of running out of logical logs, if the DB2 UDB
Database Manager determines that there are no more primary logs available, it automatically
begins allocating secondary log files in order to continue operation. As each secondary log
becomes full, the Database Manager again checks to see if there are any primary logs that are
not current and are available to be overwritten. If a primary log is now available, the engine uses
it. If a primary log is not available, the engine will allocate another secondary log. When a
secondary log is no longer needed, the engine deallocates it. This process continues until the
Database Manager has allocated all the secondary logs allowed by the LOGSECOND parameter.
In V8, there is another parameter that can be set for log management:
MIRRORLOGPATH. Dual logging was introduced in 7.2, but only for UNIX. In that
version, dual logging was enabled by setting the DB2NEWLOGPATH2 registry
variable to Yes. See module 12 (Managing Backup and Recovery) in this course for
more information.
13-32
Command parameters:
USING input-keyword param-value
Valid input keywords and parameter values:
Keyword         Valid values                 Default  Explanation
mem_percent     1-100                        80       Percentage of memory to dedicate. If other
                                                      applications (other than the operating
                                                      system) are running on this server, set
                                                      this to less than 100.
workload_type   simple, mixed, complex       mixed    Simple workloads tend to be I/O intensive
                                                      and mostly transactions, whereas complex
                                                      workloads tend to be CPU intensive and
                                                      mostly queries.
num_stmts       1-1,000,000                  10       Number of statements per unit of work.
tpm             1-50,000                     60       Transactions per minute.
admin_priority  performance, recovery, both  both     Optimize for better performance (more
                                                      transactions per minute) or better
                                                      recovery time.
is_populated    yes, no                      yes      Is the database populated with data?
There are two primary tools with which you can access system
monitor information, each serving a different purpose:
The snapshot monitor enables you to capture a picture of the
state of database activity at a particular point in time
Event monitors log data as specified database events occur
For both snapshot and event monitors you have the option of:
storing monitor information in files or SQL tables
viewing it on screen (directing it to standard-out)
processing it with a client application
13-34
13-36
The snapshot monitor provides two categories of information for each level being monitored:
State
This includes information such as:
Current status of the database.
Information on the current or most recent unit of work.
List of locks being held by an application.
Status of an application.
Current number of connections to a database.
Most recent SQL statement performed by an application.
Run-time values for configurable system parameters.
Counters
These accumulate counts for activities from the time monitoring started until the time a
snapshot is taken, such as:
Number of deadlocks that have occurred.
Number of transactions performed on a database.
Amount of time an application has waited on locks.
13-37
You can view the current state of your application's monitor switches:
db2 GET MONITOR SWITCHES
13-39
The example below shows how you can obtain a list of the locks held by applications connected
to a database, using a database lock snapshot.
First, turn on the LOCK switch (UPDATE MONITOR SWITCHES), so that the time spent
waiting for locks is collected.
db2 UPDATE MONITOR SWITCHES USING LOCK ON
Connect to the database, start a transaction, then take the snapshot:
db2 CONNECT TO sample
db2 +c LIST TABLES FOR ALL
(this command requires locks on the database catalogs)
db2 GET SNAPSHOT FOR LOCKS ON sample
From this snapshot, you can see that there is currently one application connected to the
SAMPLE database, and it is holding five locks.
Locks held = 5
Applications currently connected = 1
Note that the time (Status change time) when the Application status became UOW Waiting is
returned as Not Collected, because the UOW switch is OFF.
13-42
Event monitors are used to collect information about the database and any connected
applications when specified events occur. An event monitor writes out database system monitor
data to either a file or a named pipe, when one of the following events occurs:
end of a transaction
end of a statement
a deadlock
start of a connection
end of a connection
database activation
database deactivation
end of a statement's subsection (when a database is partitioned)
a FLUSH EVENT MONITOR statement is issued.
An event monitor effectively provides the ability to obtain a trace of the activity on a database.
For example, a deadlock event monitor waits for a deadlock to occur; when one does, it collects
information about the applications involved and the locks in contention.
In V8, event monitor output can be directed to SQL tables, a file, or a named pipe.
13-44
To create an event monitor, use the CREATE EVENT MONITOR SQL statement. Event
monitors collect event data only when they are active. To activate or deactivate an event
monitor, use the SET EVENT MONITOR STATE SQL statement. The status of an event
monitor (whether it is active or inactive) can be determined by the SQL function
EVENT_MON_STATE.
When the CREATE EVENT MONITOR SQL statement is executed, the definition of the event
monitor it creates is stored in the following database system catalog tables:
SYSCAT.EVENTMONITORS: event monitors defined for the database.
SYSCAT.EVENTS: events monitored for the database.
SYSCAT.EVENTTABLES: target tables for table event monitors.
Each event monitor has its own private logical view of the instance's data in the monitor
elements. If a particular event monitor is deactivated and then reactivated, its view of these
counters is reset. Only the newly activated event monitor is affected; all other event monitors
will continue to use their view of the counter values (plus any new additions).
Application 1:
db2 CONNECT TO sample
db2 +c "INSERT INTO staff VALUES (1, 'Ofer', 1, 'Mgr', 0, 0, 0)"
DB20000I The SQL command completed successfully.
The +c option turns autocommit off for CLP. Application 1 is now holding an exclusive lock on
a row of the staff table.
Application 2:
db2 CONNECT TO sample
db2 +c "INSERT INTO department VALUES
('1', 'System Monitor', '1', 'A00', NULL)"
DB20000I The SQL command completed successfully.
Application 2 now has an exclusive lock on a row of the department table.
Application 1:
db2 +c "SELECT deptname FROM department"
Assuming cursor stability, Application 1 needs a share lock on each row of the department table
as the rows are fetched, but a lock on the last row cannot be obtained because Application 2 has
an exclusive lock on it. Application 1 enters a LOCK WAIT state, while it waits for the lock to
be released.
Application 2:
SQL0911N The current transaction has been rolled back
because of a deadlock or timeout.
Reason code "2". SQLSTATE=40001
At this point the event monitor logs a deadlock event to its target (in this case, file /tmp/dlocks).
Application 1 can now continue.
Because an event monitor buffers its output and this scenario did not generate enough event
records to fill a buffer, the event monitor values need to be flushed to the event monitor output
writer:
Monitor Session:
db2 "FLUSH EVENT MONITOR dlockmon BUFFER"
DB20000I The SQL command completed successfully.
The event trace is written as a binary file. It can now be formatted using the db2evmon tool:
Monitor Session:
db2evmon -path /tmp/dlocks
Reading /tmp/dlocks/00000000.evt . . .
13-48
You can use the Performance Configuration wizard to tune the performance of a database by
updating configuration parameters to match your business requirements.
From the Control Center, right-click the database you want to tune and select Configure
Performance Using Wizard (in V8, select Configuration Advisor).
Enter the following information on the various wizard panels:
Server panel: Select the proper Target Memory you want to use.
Workload panel: Optimize for data warehousing, order entry, or both.
Transactions panel: Select short or long transactions and enter the estimated
transactions per minute.
Priority panel: Optimize for faster transactions, faster recovery, or both.
Populated panel: Select whether the database is populated or not.
Connections panel: Enter the average number of connected local applications and the
average number of connected remote applications.
Isolation panel: Indicate the RR, RS, CS, or UR isolation level.
Schedule panel: Schedule the task to run later.
Results panel: Review your settings and change them if needed.
Click the Finish button to execute the changes or schedule the task.
13-50
The Health Center is a graphical interface that is used to view the overall health of
database systems. Using the Health Center, you can view details and
recommendations for alerts on health indicators and take the recommended actions
to resolve the alerts.
The Health Center provides the graphical interface to the Health Monitor. You use it to configure
the Health Monitor, and to see the rolled up alert state of your instances and database objects.
Using the Health Monitor's drill-down capability, you can access details about current alerts and
obtain a list of recommended actions that describe how to resolve the alert.
Health Monitor
Health Monitor information can be accessed through the Health Center, Web Health Center, the
CLP, or APIs. Health indicator configuration is available through these same tools.
The Health Monitor is a server-side tool that constantly monitors the health of the instance, even
without user interaction. If the Health Monitor finds that a defined threshold has been exceeded,
it issues an alert.
Health Indicators
A health indicator is a system characteristic that the Health Monitor checks. The Health Monitor
comes with a set of predefined thresholds for these health indicators. The Health Monitor checks
the state of your system against these health-indicator thresholds when determining whether to
issue an alert. Using the Health Center, commands, or APIs, you can customize the threshold
settings of the health indicators, and define who should be notified and what script or task
should be run if an alert is issued.
The following are the categories of health indicators:
Table space storage
Sorting
Database management system
Database
Logging
Application concurrency
Package and catalog caches, and workspaces
Memory
You can follow one of the recommended actions to address the alert. If the recommended action
is to make a database or database manager configuration change, a new value will be
recommended and you can implement the recommendation by clicking on a button. In other
cases, the recommendation will be to investigate the problem further by launching a tool, such as
the CLP or the new Memory Visualizer.
The Memory Visualizer helps you to uncover and fix memory-related problems on a DB2
instance. It uses visual displays and plotted graphs to help you understand memory components
and their relationships to one another. You can invoke it from a Health Center recommendation
or use it on its own as a monitoring tool.
There are several ways to start the Memory Visualizer:
Start > IBM DB2 > Monitoring Tools > Memory Visualizer
From the Control Center, right-click on the instance and select View memory usage
From the CLP, run the db2memvis program
With the Memory Visualizer, you can:
View or hide data in various columns on the memory utilization of selected components
for a DB2 instance and its databases.
Change settings for individual memory components by updating configuration
parameters.
Load performance data from a file into a Memory Visualizer window.
Save the performance data.
The Memory Visualizer window displays two views of data: a tree view (shown above) and a
historical view. A series of columns shows percentage threshold values for upper and lower
alarms and warnings. The columns also display real-time memory utilization.
Historical View
The historical view displays data for memory components selected in the tree view. The data
includes values for memory allocated and utilized, plotted graphs, as well as changes made to
the configuration parameters while the Memory Visualizer is running. The data is saved for a
specific period within the Memory Visualizer. You can save memory performance data to a
Memory Visualizer data file for tracking, comparing with other data, or troubleshooting.
Memory Graph
The memory graph displays plotted data for selected memory components in the Memory Usage
Plot. Each component in the graph is identified by a specific color, which also displays in the
Plot Legend column in the Memory Visualizer window. The graph also displays changes made
to the configuration parameters settings. The original value of the configuration parameter and
the new value setting appear in the graph, in addition to the time that the change was requested.
They become part of the history view that you can use in assessing memory performance.
Recovery Expert
Provides simplified, comprehensive, and automated recovery:
Course: Using the DB2 Recovery Expert Tool for Multiplatforms
Performance Expert
Performance Expert integrates performance monitoring, reporting, buffer
pool analysis, and a Performance Warehouse function into one tool,
providing a comprehensive view of DB2 performance-related information:
Course: Using the DB2 Performance Expert Tool
Recovery Expert
IBM DB2 Recovery Expert provides simplified, comprehensive, and automated recovery with
extensive diagnostics and SMART (self-managing and resource tuning) techniques to minimize
outage duration.
Provides targeted, flexible and automated recovery of database assets, even as systems
remain online.
Allows expert or novice DBAs to recover database objects safely, precisely and quickly
without having to resort to full disaster recovery processes.
Offers precision recovery options to support safe database development and
maintenance.
Has built-in self-managing and resource tuning (SMART) features that provide
intelligent analysis of altered, incorrect or missing database assets including tables,
indexes, or data and automates the process of rebuilding those assets to a correct point
in time, all without disruption to normal database or business operations.
Supports DB2 Version 7 and later on Microsoft Windows, HP-UX, Sun's Solaris
Operating Environment, IBM AIX and Linux.
Performance Expert
Performance Expert integrates performance monitoring, reporting, buffer pool analysis, and a
Performance Warehouse function into one tool. It optimizes the performance of IBM DB2
Universal Database by providing a comprehensive view of DB2 performance-related
information. It also provides you with reports, analysis, and recommendations.
In general, Performance Expert includes the following advanced capabilities:
Analyzes, controls and tunes the performance of DB2 and DB2 applications.
Provides expert analysis, a real-time online monitor, a wide range of reports for
analyzing and optimizing DB2 application and SQL statements.
Includes a Performance Warehouse for storing performance data and analysis tools.
Defines and applies analysis functions (rules of thumb, queries) to identify performance
bottlenecks.
Includes a starter set of smart features that provide recommendations for system tuning
to gain optimum throughput.
DB2 Buffer Pool Analyzer collects data and provides reports on related event activity,
to obtain information on current buffer pool behavior. It can provide these reports in the
form of tables, pie charts, and diagrams.
Monitors DB2 Connect Gateways including application and system related
information.
Supports DB2 on Microsoft Windows, HP-UX, Solaris Operating Environment, IBM
AIX, and Linux.
The IBM course available for this topic is:
Using the DB2 Performance Expert Tool (Course L1-274)
This course provides installation and usage training for Performance Expert V1.1 for
Multiplatform. The course trains the student to use the various GUI panels of the
Performance Expert tool on Windows.
Related Classes
For more training on performance tuning, consider attending the DB2 UDB
Performance Tuning and Monitoring Workshop (CF41). This four-day course
covers:
Using the System Monitor GUI
Structure of database global memory
Snapshot and event monitors
RUNSTATS, REORG and REBIND
Optimizer strategies and the Visual Explain GUI
You will be loading raw data into both an SMS-type of table space and a DMS-type of table
space. This will allow you to compare the usage of these two types of table spaces, from a
performance perspective.
1.2 Create a database called wisc_db using an SMS table space using
/export/home/inst###/sms_disk as the path for the disk.
1.3 Connect to the wisc_db database and look at the three default table spaces created for
wisc_db. Each one should be of type System managed space.
1.5 Using the create_onektup_sms.sql script, create the onektup table. This will be created
using all system defaults.
1.6 Look at the details for all of the table spaces again.
1.8 Examine disk allocation for the sms_disk subdirectory with the UNIX command
du -k -s sms_disk. The output is in kilobytes.
Kilobytes used = _______.
1.10 Again, examine the disk allocation for the sms_disk subdirectory with
du -k -s sms_disk. The output is in kilobytes.
Kilobytes used = ______
1.11 Look at the table-space detail for USERSPACE1 and note the following:
Pages used = ________
Number of containers = ________
In the next lab, you will load the database using DMS containers and compare the load times.
1.13 Create a DMS table space named onektbl for the onektup table using one container. Use
the raw device /dev/rdsk/inst###F. This raw device is 100 megabytes in size and has
been preallocated for your use. Use the defaults for all other settings.
1.14 Look at the table-space details for the table space, onektbl. Note the following:
Table space ID
Name ONEKTBL
Type Database managed space
Total pages
Useable pages
Used pages
Page size
Extent size (pages)
Prefetch size (pages)
# of containers
1.16 Now load 250,000 rows into the onektup table using the load_onektup.250K script in
your home directory. It will report the time for the load.
Load time = __________ (Your time will depend on hardware and box load.)
1.17 Look at the table-space details for the table space onektbl. Note the following:
Table space ID
Name ONEKTBL
Type Database managed space
Total pages
Useable pages
Used pages
Page size
Extent size (pages)
Prefetch size (pages)
# of containers
2.3 Modify the create_tenktup1_lab2.db2 script. Locate the create tablespace command.
Change this to use the raw devices preallocated for your student login. For example, if
you are logged in as inst101, your raw devices would be /dev/rdsk/stu101A through
/dev/rdsk/stu101F.
Warning!
Please be very careful with your device names so as not to steal someone else's
raw devices.
2.5 Check the size of the buffer pool created for this lab by using:
db2 "select * from syscat.bufferpools"
Two buffer pools should be shown:
IBMDEFAULTBP as bufferpoolid 1, size of 1000 NPAGES
BP_TENKTUP1 as bufferpoolid 2, size of 1000 NPAGES initially
2.6 Validate the creation and details for the tablespace tenktbl using:
db2 list tablespaces show detail
2.7 Run the load_tenktup1.sh script. This will load the tenktup1 table with 2.5 million
rows. It will also perform a row count for verification. (The load should only take around
a minute, depending on the resource load on the box.)
2.8 Validate that CHNGPGS_THRESH is set to 60 and NUM_IOCLEANERS is set to 1 for the
wisc_db. If not, set these values, then deactivate and reactivate the database.
2.10 Take a snapshot of the database using the db2 get snapshot for all databases command.
Record the Dirty page threshold cleaner triggers from this output in the chart. Note: you
must have previously turned on the bufferpool snapshot switch before you can take
snapshots: db2 update dbm cfg using dft_mon_bufpool on.
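The snapshot output is long, so a small filter helps pull out just the counter you record in the
chart. This is a sketch that assumes the typical V8 snapshot line format ("Dirty page threshold
cleaner triggers = n"); the function name is illustrative:

```shell
# Sketch: extract the trigger counter from snapshot output.
# Assumes the "name = value" line format of the snapshot report.
triggers() {
  grep "Dirty page threshold cleaner triggers" | awk -F'=' '{gsub(/ /, "", $2); print $2}'
}

# Usage against a live database:
#   db2 get snapshot for all databases | triggers
```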
2.11 Clean the buffer pool of all pages from the tenktup1 table.
(Note that with a small buffer pool this is not necessary, but a good practice for
benchmarking.)
2.12 Modify one parameter as asked for in the chart, and repeat at step 2.9 above. Note that
you need to use a different UPDATE string each time you rerun the UPDATE.
Run       NUM_IOCLEANERS  CHNGPGS_THRESH (%)  UPDATE time   Snapshot: dirty page
                                              in seconds    threshold cleaner triggers
Default         1                60           ________      ________
2               2                60           ________      ________
3               4                60           ________      ________
4               4                40           ________      ________
5               4                20           ________      ________
6               4                10           ________      ________
7               4                 5           ________      ________
Logical Logs
This lab shows how to change the location of the logical logs as well as how secondary logs are
utilized in DB2 UDB.
You will use a script (watch_logs.sh) that repeatedly reports the number of logs used by the
database. This is helpful because it will show the secondary log files that are allocated when
needed, then released and deallocated when no longer needed.
Scripts Needed:
watch_logs.sh - a script that repeatedly reports the number of logical logs in the current
log directory.
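The watch_logs.sh script itself is supplied in the lab tar file; conceptually, it just counts log
files in a loop. A minimal sketch of the idea, assuming DB2's SNNNNNNN.LOG log file naming:

```shell
# Count DB2 log files (S0000000.LOG, S0000001.LOG, ...) in a directory.
count_logs() {
  ls "$1"/S*.LOG 2>/dev/null | wc -l | tr -d ' '
}

# The real script presumably loops, along the lines of:
#   while true; do count_logs "$HOME/logs"; sleep 2; done
```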
Lab Steps
This lab assumes that the onektup table is still loaded with around 250,000 rows. If not, reload
it as you did in the previous lab. Validate the row count in the onektup table. There should be
around 250,000 rows. If you need to reload the onektup table, do so with the scripts in your
$HOME/perf_tun_labs/lab1 directory.
You will need two terminal windows opened for this lab.
3.1 In window A:
a. Change to your home directory if not already there.
b. Create a subdirectory in your home directory called logs.
c. Change to the perf_tun_labs directory in your home directory.
d. Untar the db2_perf_tun_lab_3.tar using the tar xvf db2_perf_tun_lab_3.tar
command. This will create a directory called lab3 and will place the
watch_logs.sh script file there.
e. Change the location of the logical logs for the wisc_db database to
/export/home/inst<student number>/logs (the directory you just created).
f. Change the number of primary logs to 5.
g. Change the number of secondary logs to 100.
h. Change the size of the log files to 500.
i. Break any connections to the database.
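Before applying these values, it helps to know the worst-case disk space they imply. LOGFILSIZ
is in 4 KB pages, so 5 primary plus 100 secondary logs of 500 pages each can grow to about
205 MB. A quick sketch of the arithmetic (the helper name is illustrative):

```shell
# Worst-case log disk usage in KB: (primary + secondary) * logfilsiz * 4 KB/page
log_kb() {
  echo $(( ($1 + $2) * $3 * 4 ))
}

log_kb 5 100 500   # this lab's settings -> 210000 KB (about 205 MB)
```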
3.2 In window B:
a. Change to the $HOME/perf_tun_labs/lab3 directory.
b. Run the watch_logs.sh using ./watch_logs.sh. This will watch the number of logs
existing in the $HOME/logs subdirectory. (Ignore any error messages as long as
the number of logs is reported accurately. You should see the number of logs equal
to the LOGPRIMARY parameter for the wisc_db.)
3.3 In window A:
a. Connect to the wisc_db database.
b. Run the following update, and watch the number of logs in window B:
time (db2 "update onektup set stringu1='<some string>'")
c. When the UPDATE is complete, the high number of logs should still be shown in
window B. These are not deallocated until the database is deactivated.
d. Break any connections to the database.
e. Deactivate the wisc_db database. The number of logs should go back down to the
LOGPRIMARY value, since the secondary logs have been deallocated.
Note In this exercise, your values of the various configuration parameters may
differ from those shown.
4.1 Open a DB2 Command Window, and get the current DBM and DB configuration
parameters, saving this information to files (dbm_cfg.txt and db_cfg.txt, respectively)
for further study.
4.2 Using the view program, open the dbm_cfg.txt file, and search for SHEAPTHRES.
Note its value.
SHEAPTHRES = ______________
4.3 In the DB2 Command Window, use the DB2 AUTOCONFIGURE utility to specify that
mem_percent be set to 50, and only for DB. Redirect your output to the auto_cfg.txt
file for later study.
4.4 Again in the command window, use the view program to look at the contents of the
auto_cfg.txt file. Search for the SHEAPTHRES parameter values, and list the old and
new values.
SHEAPTHRES - old = ______________
SHEAPTHRES - new = ______________
4.7 Are there other parameters, besides the memory ones, that are affected by your change?
List them.
Description                                        Parameter         Old Value   New Value
Maximum query degree of parallelism                MAX_QUERYDEGREE   _________   _________
Agent pool size                                    NUM_POOLAGENTS    _________   _________
Catalog cache size                                 CATALOGCACHE_SZ   _________   _________
Log file size                                      LOGFILSIZ         _________   _________
Number of secondary log files                      LOGSECOND         _________   _________
Percent of lock lists per application              MAXLOCKS          _________   _________
Number of I/O servers                              NUM_IOSERVERS     _________   _________
Percent log file reclaimed before soft checkpoint  SOFTMAX           _________   _________
4.8 In the open DB2 Command Window, stop and restart the instance.
4.9 Now check you parameter values and see if they got changed. Redirect your output to
new_dbm_cfg.txt and new_db_cfg.txt.
1.3 Connect to the wisc_db database and look at the three default table spaces created for
wisc_db. Each one should be of type System managed space.
db2 connect to wisc_db
db2 list tablespaces show detail
1.5 Using the create_onektup_sms.sql script, create the onektup table. This will be created
using all system defaults.
./create_onektup_sms.sql
1.6 Look at the details for all of the table spaces again.
db2 connect to wisc_db
db2 list tablespaces show detail
1.8 Examine disk allocation for the sms_disk subdirectory with the UNIX command
du -k -s sms_disk. The output is in kilobytes (this number may vary slightly).
du -k -s sms_disk
kilobytes used = 16894
1.9 Now load 250,000 rows into the onektup table. Use the load_onektup.250K script in
your home directory. It will report the time for the load. The real time is the clock time.
./load_onektup.250K
Load time = _______. (Your time will depend on hardware and box load.)
1.10 Again, examine the disk allocation for the sms_disk subdirectory with du -k -s
sms_disk. The output is in kilobytes.
du -k -s sms_disk
kilobytes used = 72522
1.11 Look at the table-space detail for USERSPACE1, and note the following:
db2 connect to wisc_db
db2 list tablespaces show detail
Pages used: 13899-4K pages
Number of containers: 1
In the next lab, you will load using DMS containers and compare the load times.
1.13 Create a DMS table space named onektbl for the onektup table using one container. Use
the raw device /dev/rdsk/inst###F. This raw device is 100 megabytes in size and has
been preallocated for your use. Use the defaults for all other settings.
db2 "create tablespace onektbl pagesize 4096
managed by database using
(device '/dev/rdsk/inst###F' 25000)"
1.14 Look at the table-space details for the table space, onektbl. Note the following:
db2 connect to wisc_db
db2 list tablespaces show detail
Table space ID 3
Name ONEKTBL
Type Database managed space
Total pages 25000
Useable pages 24992
Used pages 96
Page size 4096
Extent size (pages) 32
Prefetch size (pages) 32
# of containers 1
1.15 Create the onektup table in this DMS table space using the create_onektup_dms.sql
script.
./create_onektup_dms.sql
1.16 Now load 250,000 rows into the onektup table using the load_onektup.250K script in
your home directory. It will report the time for the load.
./load_onektup.250K
Load time = __________(Your time depends on hardware and box load.)
2.6 Validate the creation and details for the tablespace tenktbl using:
db2 list tablespaces show detail
2.7 Run the load_tenktup1.sh script. This will load the tenktup1 table with 2.5 million
rows. It will also perform a row count for verification. (The load should only take around
a minute, depending on the resource load on the box.)
./load_tenktup1.sh
2.8 Validate that CHNGPGS_THRESH is set to 60 and NUM_IOCLEANERS is set to 1 for the
wisc_db. If not, set these values, then deactivate and reactivate the database.
db2 get db cfg for wisc_db
2.11 Clean the buffer pool of all pages from the tenktup1 table.
db2 force application all
db2 terminate
db2 deactivate database wisc_db
db2 activate database wisc_db
(Note that with a small buffer pool this is not necessary, but a good practice for
benchmarking.)
2.12 Modify one parameter as asked for in the chart, and repeat at step 2.9 above. Note that
you want to use a different UPDATE string each time you rerun the UPDATE.
db2 update db cfg for wisc_db using <parameter> <value>
To make the changes effective, do the following:
db2 force application all
db2 terminate
db2 deactivate database wisc_db
db2 activate database wisc_db
Run       num_iocleaners  chngpgs_thresh (%)  UPDATE time   Snapshot: dirty page
                                              in seconds    threshold cleaner triggers
Default         1                60           ________      ________
2               2                60           ________      ________
3               4                60           ________      ________
4               4                40           ________      ________
5               4                20           ________      ________
6               4                10           ________      ________
7               4                 5           ________      ________
Logical Logs
3.1 In window A:
a. Change to your home directory if not already there.
cd
e. Change the location of the logical logs for the wisc_db database to
/export/home/inst<student number>/logs (the directory you just created).
db2 update database configuration \
for wisc_db using newlogpath \
/export/home/inst<student number>/logs
k. Activate the wisc_db database. This will create the appropriate number of initial
logical logs.
db2 activate database wisc_db
3.2 In window B:
a. Change to the $HOME/perf_tun_labs/lab3 directory.
cd $HOME/perf_tun_labs/lab3
b. Run the watch_logs.sh using ./watch_logs.sh. This will watch the number of logs
existing in the $HOME/logs subdirectory. (Ignore any error messages as long as
the number of logs is reported accurately. You should see the number of logs equal
to the LOGPRIMARY parameter for the wisc_db.)
./watch_logs.sh
3.3 In Window A:
a. Connect to the wisc_db database.
db2 connect to wisc_db
b. Run the following update, and watch the number of logs in window B:
time (db2 "update onektup set stringu1='<some string>'")
c. When the UPDATE is complete, the high number of logs should still be shown in
window B. These are not deallocated until the database is deactivated.
e. Deactivate the wisc_db database. The number of logs should go back down to the
LOGPRIMARY value, since the secondary logs have been deallocated.
db2 deactivate database wisc_db
Note In this exercise, your values of the various configuration parameters may
differ from those shown.
4.1 Open a DB2 Command Window, and get the current DBM and DB configuration
parameters, saving this information to files (dbm_cfg.txt and db_cfg.txt, respectively)
for further study.
db2 "GET DBM CFG" > dbm_cfg.txt
db2 "CONNECT TO sample"
db2 "GET DB CFG FOR sample" > db_cfg.txt
4.2 Using the view program, open the dbm_cfg.txt file, and search for SHEAPTHRES.
Note its value.
view dbm_cfg.txt
SHEAPTHRES = 20000
4.3 In the DB2 Command Window, use the DB2 AUTOCONFIGURE utility to specify that
mem_percent be set to 50, and only for DB. Redirect your output to the auto_cfg.txt
file for later study.
db2 "CONNECT TO sample"
db2 "AUTOCONFIGURE USING MEM_PERCENT 50 APPLY DB ONLY" > auto_cfg.txt
4.4 Again in the command window, use the view program to look at the contents of the
auto_cfg.txt file. Search for the SHEAPTHRES parameter values, and list the old and
new values.
view auto_cfg.txt
SHEAPTHRES - old = 20000
SHEAPTHRES - new = 2464
4.6 List the other memory parameters that will be changed, as specified in the auto_cfg.txt
file.
Description                       Parameter        Old Value    New Value
Sort heap threshold               SHEAPTHRES       20000        2464
Max size of appl. group mem set   APPGROUP_MEM_SZ  30000        9790
Max storage for lock list         LOCKLIST         100          374
Log buffer size                   LOGBUFSZ         8            19
Package cache size                PCKCACHESZ       MAXAPPLS*8   650
Sort list heap                    SORTHEAP         256          192
16K-BP                            Bufferpool size  40           125
8K-BP                             Bufferpool size  80           250
IBMDEFAULTBP                      Bufferpool size  1000         500
4.7 Are there other parameters, besides the memory ones, that are affected by your change?
List them.
Yes.
Description                                        Parameter        Old Value    New Value
Maximum query degree of parallelism                MAX_QUERYDEGREE  ANY          1
Agent pool size                                    NUM_POOLAGENTS   200 (calc.)  10
Catalog cache size                                 CATALOGCACHE_SZ  MAXAPPLS*4   58
Log file size                                      LOGFILSIZ        1000         1024
Number of secondary log files                      LOGSECOND        2            1
Percent of lock lists per application              MAXLOCKS         10           60
Number of I/O servers                              NUM_IOSERVERS    3            6
Percent log file reclaimed before soft checkpoint  SOFTMAX          100          120
4.9 Now check your parameter values to see whether they changed. Redirect your output to
new_dbm_cfg.txt and new_db_cfg.txt.
db2 "GET DBM CFG" > new_dbm_cfg.txt
db2 "CONNECT TO sample"
db2 "GET DB CFG FOR sample" > new_db_cfg.txt
Course Summary
This section provides you with a summary of knowledge gained in this course and resources you
can use to further your education on DB2 UDB subjects. Included are document sources and
other courses you can attend.
A CBT self-study course, Fast Path to DB2 UDB for Experienced Relational DBAs (CT28),
contains a superb explanation of privileges and is available for download, free of charge, at:
www.ibm.com/software/data/db2/selfstudy
You will be required to register for this free download copy.
A classroom course, DB2 Universal Database Administration Workshop, is available for the
following operating systems:
Linux (CF20)
UNIX (CF21)
Windows NT (CF23)
Solaris (CF27)
There are also a variety of advanced courses described in the next few pages.
The following courses are available to you. Most of these are classroom courses, but there are
several CBT courses included in the list.
These courses are considered advanced and should be taken only after mastering the basic
courses outlined on the previous pages.
http://www-3.ibm.com/services/learning/
The technical documents shown above are the basic references needed to properly
maintain and administer a DB2 UDB database. These documents are provided on CD-ROM
with the product, but hardcopy can be ordered through your IBM sales representative.
For certification:
DB2 UDB v7.1 Database Administration Certification Guide
Thank You!
Terminology Comparisons
Database Manager
Does not exist in Oracle
In DB2, this is the instance, which may contain many databases
Can be tuned via the database manager configuration file
SQL data types and their C/C++ and Java host-language equivalents:

Integer
  SMALLINT
    C/C++: short age = 32; short int year;
    Java: short
    sqllen: 2; sqltype: 500/501
    16-bit signed integer. Range -32,768 to 32,767; precision of 5 digits.
  INTEGER (INT)
    C/C++: long salary; int salary; long int deptno;
    Java: int
    sqllen: 4; sqltype: 496/497
    32-bit signed integer. Range -2,147,483,648 to 2,147,483,647; precision of 10 digits.
  BIGINT
    C/C++: long long serial_num; __int64 serial; sqlint64 serial;
    Java: long
    sqllen: 8; sqltype: 492/493
    64-bit signed integer.

Floating point
  REAL; FLOAT(n)
    C/C++: float bonus;
    Java: float
    sqllen: 4; sqltype: 480/481
    Single-precision floating point; 32-bit approximation of a real number.
    FLOAT(n) can be a synonym for REAL if 0 < n < 25.
  DOUBLE; DOUBLE PRECISION
    C/C++: double wage;
    Java: double
    sqllen: 8; sqltype: 480/481
    Double-precision floating point; 64-bit approximation of a real number.
    Range 0, -1.79769E+308 to -2.225E-307, and 2.225E-307 to 1.79769E+308.
    FLOAT(n) can be a synonym for DOUBLE if 24 < n < 54.

Decimal
  DECIMAL(p,s); DEC(p,s); NUMERIC(p,s); NUM(p,s)
    C/C++: double price;
    Java: java.math.BigDecimal
    sqllen: (p/2)+1; sqltype: 484/485
    Packed decimal. There is no exact equivalent for the SQL decimal type in C - use the
    C double data type. If precision/scale are not specified, the default is (5,0).
    Maximum precision is 31 digits, with a range between -10**31 + 1 and 10**31 - 1.
    Consider using char/decimal functions to manipulate packed decimal fields as char data.

Date/Time
  DATE
    C/C++: char dt[11]; or struct { short len; char data[10]; } dt;
    Java: java.sql.Date
    sqllen: 10; sqltype: 384/385
    Null-terminated character form (11 characters) or varchar struct form (10 characters);
    the struct can be divided as desired to obtain the individual fields.
    Example: 11/02/2000. Stored internally as a packed string of 4 bytes.
  TIME
    C/C++: char tm[9]; or struct { short len; char data[8]; } tm;
    Java: java.sql.Time
    sqllen: 8; sqltype: 388/389
    Null-terminated character form (9 characters) or varchar struct form (8 characters);
    the struct can be divided as desired to obtain the individual fields.
    Example: 19:21:39. Stored internally as a packed string of 3 bytes.

CLOB locator variable
  C/C++: sql type is clob_locator cref;
  Identifies CLOB entities residing on the server.
CLOB file reference variable
  C/C++: sql type is clob_file cFile;
  Descriptor for a file containing CLOB data.

Binary
  BLOB(n)
    C/C++: sql type is blob(1m) video;
    Java: Byte[] (JDBC 1.22); java.sql.Blob (JDBC 2.0)
    sqllen: n; sqltype: 404/405
    Non-null-terminated varying binary string with a 4-byte string length indicator.
    Use char[n] in struct form, where 1 <= n <= 2,147,483,647.
  BLOB locator variable
    C/C++: sql type is blob_locator bref;
    Identifies BLOB entities on the server.
  BLOB file reference variable
    C/C++: sql type is blob_file bFile;
    Descriptor for the file containing BLOB data.

Double-byte
  GRAPHIC(1); GRAPHIC(n)
    C/C++: sqldbchar dbyte; sqldbchar graphic1[n+1]; wchar_t graphic2[100];
    Java: String
    sqllen: 24; sqltype: 468/469
    sqldbchar is a single double-byte character string. GRAPHIC(n) is a fixed-length
    graphic string whose length n may range from 1 to 127; if the length specification is
    omitted, a length of 1 is assumed. Precompile with the WCHARTYPE NOCONVERT option.
Oracle-to-DB2 data type mappings:

DATE
  DB2: DATE, TIME, or TIMESTAMP
  If only MM/DD/YYYY is required, use DATE. If only HH:MM:SS is required, use TIME.
  If both date and time are required (MM/DD/YYYY-HH:MM:SS.000000), use TIMESTAMP.
  Use the Oracle TO_CHAR() function to format a DATE for subsequent DB2 load; note
  that the Oracle default DATE format is DD-MON-YY.

VARCHAR2(n), n <= 4000
  DB2: VARCHAR(n), n <= 32672

LONG, n <= 2 GB
  DB2: If n <= 32672 bytes, use VARCHAR(n). If 32672 < n <= 32700 bytes, use
  LONG VARCHAR or CLOB. If 32672 < n <= 2 GB, use CLOB.

RAW(n), n <= 255
  DB2: If n <= 254, use CHAR(n) FOR BIT DATA. If n <= 32672, use VARCHAR(n) FOR BIT
  DATA. If n <= 2 GB, use BLOB(n).

LONG RAW, n <= 2 GB
  DB2: If n <= 32672 bytes, use VARCHAR(n) FOR BIT DATA. If 32672 < n <= 32700 bytes,
  use LONG VARCHAR FOR BIT DATA. If n <= 2 GB, use BLOB(n).

BLOB, n <= 4 GB
  DB2: If n <= 2 GB, use BLOB(n).

CLOB, n <= 4 GB
  DB2: If n <= 2 GB, use CLOB(n).

NCLOB, n <= 4 GB
  DB2: If n <= 2 GB, use DBCLOB(n/2).

NUMBER
  DB2: SMALLINT, INTEGER, BIGINT, DECIMAL(p,s), or DOUBLE / FLOAT(n) / REAL
  If the Oracle declaration is NUMBER(p) or NUMBER(p,0), use SMALLINT if 1 <= p <= 4;
  INTEGER if 5 <= p <= 9; BIGINT if 10 <= p <= 18. If the Oracle declaration is
  NUMBER(p,s) with s > 0, use DECIMAL(p,s). If the Oracle declaration is plain NUMBER,
  use DOUBLE, FLOAT(n), or REAL.
DB2 does not have default input formats for dates or times. If table t1 has a single DATE column
and t2 has a single TIME column, each of these inserts three rows successfully:
INSERT INTO t1 VALUES ('1991-10-27'), ('10/27/1991'), ('27.10.1991');
INSERT INTO t2 VALUES ('13.30.05'), ('1:30 PM'), ('13:30:05');
Because DB2 does not accept the Oracle default date format, dates inserted into DB2 must first
be mapped to an acceptable format using a formatting function; for example,
TO_CHAR(OracleDate, 'MM/DD/YYYY')
DB2 default output date and time formats are determined by the country code of the application,
but can be overridden with the DATETIME parameter of the Bind and Prep commands.
The only timestamp format allowed in DB2 is YYYY-MM-DD-HH.MM.SS.MMMMMM (where
MMMMMM is microseconds). For more information on date and time formats in DB2, see the
Administration Guide index entry "date formats".
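Because the timestamp format is fixed, a simple pattern check can flag badly formatted values
before a load. A sketch using grep (the function name is illustrative):

```shell
# Return success if the argument matches DB2's timestamp format
# YYYY-MM-DD-HH.MM.SS.MMMMMM.
is_db2_ts() {
  echo "$1" | grep -Eq '^[0-9]{4}-[0-9]{2}-[0-9]{2}-[0-9]{2}\.[0-9]{2}\.[0-9]{2}\.[0-9]{6}$'
}

is_db2_ts "2003-11-07-13.30.05.000000" && echo valid
is_db2_ts "2003-11-07 13:30:05" || echo invalid
```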
Oracle built-in functions, such as ADD_MONTHS, NEXT_MONTH, and NEXT_DAY, can in some
cases be translated directly to equivalent or similar DB2 built-in functions. DB2 provides a suite
of durations and datetime arithmetic capability so, for example, ADD_MONTHS(mydate, 26)
can be mapped to mydate + 26 MONTHS. When an Oracle function does not have a DB2
equivalent, or the calling code must be unchanged, a DB2 user-defined function can be created
with the same name as the Oracle function.
NUMBER
The Oracle data type NUMBER can be mapped to many DB2 types. The type of mapping
depends on whether the NUMBER is used to store an integer (NUMBER(p), or NUMBER(p,0)), or a
number with a fixed decimal point (NUMBER(p,s), s > 0), or a floating-point number (NUMBER).
Another consideration is the space usage. Each DB2 type requires a different amount of space:
SMALLINT uses 2 bytes, INTEGER uses 4 bytes, and BIGINT uses 8 bytes. The space usage for
Oracle type NUMBER depends on the parameter used in the declaration. NUMBER, with the
default precision of 38 significant digits, uses 20 bytes of storage. Mapping NUMBER to
SMALLINT, for example, can save 18 bytes per column.
Note that in DB2, unless you specify NOT NULL, another byte is required for the null indicator.
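The storage comparison is easy to quantify. A sketch of the arithmetic, assuming the 20-byte
NUMBER figure above and the extra null-indicator byte for nullable DB2 columns (the helper
is illustrative):

```shell
# Per-column byte savings when mapping a default-precision Oracle NUMBER
# (20 bytes) to a DB2 integer type; add 1 byte in DB2 if the column is nullable.
savings() {  # args: db2_bytes nullable(0|1)
  echo $(( 20 - $1 - $2 ))
}

savings 2 0   # NUMBER -> SMALLINT NOT NULL: saves 18 bytes
savings 8 1   # NUMBER -> BIGINT, nullable:  saves 11 bytes
```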
DECIMAL
An Oracle NUMBER with non-zero scale (decimal places) should be mapped to DB2 data type
DECIMAL. DECIMAL is stored as packed decimal in DB2, with four bits used per decimal digit,
plus four bits for the sign, so a DECIMAL column with precision p takes (p/2 + 1) bytes. Decimal
processing in DB2 is less efficient than integer processing, so one of the integer data types
should be used where possible, particularly when the Oracle NUMBER has a scale of zero (no
decimal places).
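The (p/2 + 1) formula uses integer division, so an even precision costs the same number of
bytes as the next higher odd precision. A quick sketch (the helper is illustrative):

```shell
# Bytes needed for a DB2 DECIMAL(p,s) column: one nibble per digit plus a
# sign nibble, i.e. p/2 + 1 with integer division.
dec_bytes() {
  echo $(( $1 / 2 + 1 ))
}

dec_bytes 15   # DECIMAL(15,8) -> 8 bytes
dec_bytes 31   # maximum precision -> 16 bytes
```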
Here is an example of how a decimal value can be inserted into and retrieved from the database
using the CLI interface. The C program could use a double (SQL_C_DOUBLE) or another numeric
variable type to store the value, but the default CLI data type for DECIMAL is CHAR
(SQL_C_CHAR), and CHAR is used in the example.
SQLCHAR *dec_input = (SQLCHAR *) "-0001234.56780000"; /* input for DEC(15,8) col */
SQLINTEGER ind = SQL_NTS;   /* indicator: input string is null-terminated */
SQLCHAR dec_output[18];     /* output from DEC(15,8) col */
.....
/* Bind the character variable to the parameter marker that provides the value
   for updating a decimal column. The length of 18 (2nd-last parameter) covers
   the 15-digit precision plus 1 each for sign, decimal point, and null
   terminator. */
rc = SQLBindParameter(hstmt, 1, SQL_PARAM_INPUT, SQL_C_CHAR, SQL_DECIMAL, 15, 8,
                      dec_input, 18, &ind);
RAW
To simulate the Oracle RAW and LONG RAW data types, DB2 provides the FOR BIT DATA clause
for the CHAR, VARCHAR, and LONG VARCHAR data types. In addition, DB2 also provides the
BLOB data type to store up to 2 GB of binary data. Note that the hextoraw() and rawtohex()
functions are not provided in DB2, but similar behavior can be achieved with a distinct
user-defined type (UDT) and DB2 functions such as hex(), blob(), and cast().
Oracle8 extends the LONG type to BLOB and CLOB, which can be mapped directly to the BLOB
and CLOB data types in DB2 UDB.
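A minimal sketch of these mappings (the table and column names are hypothetical):

```sql
-- Oracle:  raw_col RAW(255),  long_raw_col LONG RAW
-- A possible DB2 UDB equivalent:
CREATE TABLE binary_demo (
    raw_col      VARCHAR(255) FOR BIT DATA,  -- counterpart of RAW(255)
    long_raw_col BLOB(2G)                    -- counterpart of LONG RAW, up to 2 GB
);

-- hex() gives an effect similar in spirit to rawtohex():
SELECT hex(raw_col) FROM binary_demo;
```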
Import
import from state.unl of del modified by coldel| insert into inst01.state
SQL3109N The utility is beginning to load data from file "state.unl".
SQL3110N The utility has completed processing. "52" rows were read from the
input file.
SQL3149N "52" rows were processed from the input file. "52" rows were
successfully inserted into the table. "0" rows were rejected.
SQL3110N The utility has completed processing. "9" rows were read from the
input file.
SQL3149N "9" rows were processed from the input file. "9" rows were
successfully inserted into the table. "0" rows were rejected.
SQL3110N The utility has completed processing. "5" rows were read from the
input file.
SQL3149N "5" rows were processed from the input file. "5" rows were
successfully inserted into the table. "0" rows were rejected.
SQL3110N The utility has completed processing. "74" rows were read from the
input file.
SQL3149N "74" rows were processed from the input file. "74" rows were
successfully inserted into the table. "0" rows were rejected.
SQL3110N The utility has completed processing. "28" rows were read from the
input file.
SQL3149N "28" rows were processed from the input file. "28" rows were
successfully inserted into the table. "0" rows were rejected.
SQL3110N The utility has completed processing. "23" rows were read from the
input file.
SQL3149N "23" rows were processed from the input file. "23" rows were
successfully inserted into the table. "0" rows were rejected.
SQL3110N The utility has completed processing. "67" rows were read from the
input file.
SQL3149N "67" rows were processed from the input file. "67" rows were
successfully inserted into the table. "0" rows were rejected.
SQL3110N The utility has completed processing. "74" rows were read from the
input file.
SQL3149N "74" rows were processed from the input file. "74" rows were
successfully inserted into the table. "0" rows were rejected.
SQL3110N The utility has completed processing. "7" rows were read from the
input file.
SQL3149N "7" rows were processed from the input file. "7" rows were
successfully inserted into the table. "0" rows were rejected.
SQL3110N The utility has completed processing. "0" rows were read from the
input file.
SQL3149N "0" rows were processed from the input file. "0" rows were
successfully inserted into the table. "0" rows were rejected.
SQL3110N The utility has completed processing. "9" rows were read from the
input file.
SQL3515W The utility has finished the "LOAD" phase at time "02-18-2002
13:27:02.011812".
SQL3110N The utility has completed processing. "52" rows were read from the
input file.
SQL3515W The utility has finished the "LOAD" phase at time "02-18-2002
13:27:02.213066".
SQL3110N The utility has completed processing. "5" rows were read from the
input file.
SQL3515W The utility has finished the "LOAD" phase at time "02-18-2002
13:27:02.402025".
SQL3110N The utility has completed processing. "28" rows were read from the
input file.
SQL3515W The utility has finished the "LOAD" phase at time "02-18-2002
13:27:02.613186".
SQL3110N The utility has completed processing. "7" rows were read from the
input file.
SQL3515W The utility has finished the "LOAD" phase at time "02-18-2002
13:27:02.791886".
SQL3110N The utility has completed processing. "23" rows were read from the
input file.
SQL3515W The utility has finished the "LOAD" phase at time "02-18-2002
13:27:02.961747".
SQL3110N The utility has completed processing. "74" rows were read from the
input file.
SQL3515W The utility has finished the "LOAD" phase at time "02-18-2002
13:27:03.161828".
SQL3110N The utility has completed processing. "67" rows were read from the
input file.
SQL3515W The utility has finished the "LOAD" phase at time "02-18-2002
13:27:03.418401".
SQL3110N The utility has completed processing. "74" rows were read from the
input file.
SQL3515W The utility has finished the "LOAD" phase at time "02-18-2002
13:27:03.653511".
SQL3515W The utility has finished the "BUILD" phase at time "02-18-2002
13:27:03.684799".
SQL3110N The utility has completed processing. "0" rows were read from the
input file.
Node type = Enterprise Server Edition with local and remote clients
Database territory = US
Database code page = 1208
Database code set = UTF-8
Database country/region code = 1
Backup pending = NO
IBM offers many courses for your information needs. Check this web site for more information
on these in-depth courses:
http://www-3.ibm.com/software/info/education
and
http://www.ibm.com/software/data/db2/selfstudy
Related Classes
CBT self study course, Fast Path to DB2 UDB for Experienced Relational DBAs
(CT28)
DB2 Universal Database Administration Workshop for UNIX (CF211)
DB2 UDB Advanced Admin Workshop (CF45)
Columns listed as SERIAL in the following pages are maintained with automatic numbering on
INSERT transactions. In Oracle databases this type of column is typically implemented using a
SEQUENCE.
In DB2 UDB the same effect is achieved using one of three techniques (see page 6-15):
Implement a trigger to generate a sequential number (older method)
Use a DB2 UDB sequence
Define an identity column for the table
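The last two techniques can be sketched as follows (the table below is a simplified, hypothetical version of the customer table):

```sql
-- Identity column: the database assigns the number on INSERT
CREATE TABLE customer (
    customer_num INTEGER GENERATED ALWAYS AS IDENTITY
                         (START WITH 101, INCREMENT BY 1),
    fname CHAR(15),
    lname CHAR(15)
);
INSERT INTO customer (fname, lname) VALUES ('Ludwig', 'Pauli');
-- customer_num is assigned automatically (101, 102, ...)

-- Sequence object: the closest analogue to an Oracle SEQUENCE
CREATE SEQUENCE cust_seq AS INTEGER START WITH 101 INCREMENT BY 1;
VALUES NEXT VALUE FOR cust_seq;   -- fetch the next number explicitly
```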
SERIAL CHAR (15) CHAR (15) CHAR(20) CHAR(20) CHAR(20) CHAR(15) CHAR(2) CHAR(5) CHAR(18)
101 Ludwig Pauli All Sports Supplies 213 Erstwild Court Sunnyvale CA 94086 408-789-8075
102 Carole Sadler Sports Spot 785 Geary St San Francisco CA 94117 415-822-1289
103 Philip Currie Phils Sports 654 Poplar P.O.Box 3498 Palo Alto CA 94303 650-328-4543
104 Anthony Higgins Play Ball! East Shopping Cntr. 422 Bay Road Redwood City CA 94026 650-368-1100
105 Raymond Vector Los Altos Sports 1899 La Loma Drive Los Altos CA 94022 650-776-3249
106 George Watson Watson & Son 1143 Carver Place Mountain View CA 94063 650-389-8789
107 Charles Ream Athletic Supplies 41 Jordan Avenue Palo Alto CA 94304 650-356-9876
108 Donald Quinn Quinns Sports 587 Alvarado Redwood City CA 94063 650-544-8729
109 Jane Miller Sport Stuff Mayfair Mart 7345 Ross Blvd. Sunnyvale CA 94086 408-723-8789
110 Roy Jaeger AA Athletics 520 Topaz Way Redwood City CA 94062 650-743-3611
111 Frances Keyes Sports Center 3199 Sterling Court Sunnyvale CA 94085 408-277-7245
112 Margaret Lawson Runners & Others 234 Wyandotte Way Los Altos CA 94022 650-887-7235
113 Lana Beatty Sportstown 654 Oak Grove Menlo Park CA 94025 650-356-9982
114 Frank Albertson Sporting Place 947 Waverly Place Redwood City CA 94062 650-886-6677
115 Alfred Grant Gold Medal Sports 776 Gary Avenue Menlo Park CA 94025 650-356-1123
116 Jean Parmelee Olympic City 1104 Spinosa Drive Mountain View CA 94040 650-534-8822
117 Arnold Sipes Kids Korner 850 Lytton Court Redwood City CA 94063 650-245-4578
118 Dick Baxter Blue Ribbon Sports 5427 College Oakland CA 94609 650-655-0011
119 Bob Shorter The Triathletes Club 2405 Kings Highway Cherry Hill NJ 08002 609-663-6079
120 Fred Jewell Century Pro Shop 6627 N. 17th Way Phoenix AZ 85016 602-265-8754
121 Jason Wallack City Sports Lake Biltmore Mall 350 W. 23rdSt Wilmington DE 19898 302-366-7511
122 Cathy OBrian The Sporting Life 543 Nassau Street Princeton NJ 08540 609-342-0054
123 Marvin Hanlon Bay Sports 10100 Bay Meadows Rd Suite 1020 Jacksonville FL 32256 904-823-4239
124 Chris Putnum Putnums Putters 4715 S. E. Adams Blvd Suite 909C Bartlesville OK 74006 918-355-2074
125 James Henry Total Fitness Sports 1450 Commonwealth Brighton MA 02135 617-232-4159
Ave.
126 Eileen Neelie Neelies Discount 2539 South Utica St Denver CO 80219 303-936-7731
Sports
127 Kim Satifer Big Blue Bike Shop Blue Island Square 12222 Blue Island NY 60406 312-944-5691
Gregory St
128 Frank Lessor Phoenix University Athletic Department 1817 N. Phoenix AZ 85008 602-533-1817
Thomas Rd
SERIAL DATE INTEGER CHAR(40) CHAR(1) CHAR(10) DATE DECIMAL(8,2) MONEY(6,2) DATE
1002 05/21/1998 101 PO on box; delivery back door only n 9270 05/26/1998 50.60 15.30 06/03/1998
1004 05/22/1998 106 ring bell twice y 8006 05/30/1998 95.80 19.20
1005 05/24/1998 116 call before delivery n 2865 06/09/1998 80.80 16.20 06/21/1998
1008 06/07/1998 110 closed Monday y LZ230 07/06/1998 45.60 13.80 07/21/1998
1009 06/14/1998 111 door next to grocery n 4745 06/21/1998 20.40 10.00 08/21/1998
1010 06/17/1998 115 deliver 776 King St. if no answer n 429Q 06/29/1998 40.60 12.30 08/22/1998
1014 06/25/98 106 ring bell, kick door loudly n 8052 07/03/98 40.60 12.30 07/10/98
1015 06/27/98 110 closed Mondays n MA003 07/16/98 20.60 6.30 08/31/98
1016 06/29/98 119 delivery entrance off Camp St. n PC6782 07/12/98 35.00 11.80
1017 07/09/98 120 north side of clubhouse n DM354331 07/13/98 60.00 18.00
1018 07/10/98 121 SW corner of Biltmore Mall n S22942 07/13/98 70.50 20.00 08/06/98
1019 07/11/98 122 closed til noon Mondays n Z55709 07/16/98 90.00 23.00 08/06/98
1021 07/23/98 124 ask for Elaine n C3288 07/25/98 40.00 12.00 08/22/98
1023 07/24/98 127 no deliveries after 3 p.m. n KF2961 07/30/98 60.00 18.00 08/22/98
In this appendix, you will learn how to connect to the lab exercise environment in the IBM DB2
classrooms:
Client Setup (Windows)
On the Windows platform, you have three ways to work with DB2:
Graphical User Interface
Command Line Processor
Command Window (using the CLP)
db2 [option-flag] { db2-command | sql-statement | ? [phrase | message | sql-state | class-code] }
The basic Command Line syntax for the CLP is shown above.
While the DB2 server is running, you can use the CLP to get command line help as shown
above.
You can also view PDF/HTML technical document files if they were installed with the server.
The IBM DB2 Command Reference document contains further information on using the CLP.
Non-interactive mode
db2 connect to eddb
db2 "select * from syscat.tables" | more
Interactive mode
db2
db2=> connect to eddb
db2=> select * from syscat.tables
Use the non-interactive mode if you need to issue OS commands while performing your tasks.
Use the DB2 LIST COMMAND OPTIONS command to view the Command Line Processor option
settings:
db2 list command options
Edit create.tab
commit work;
connect reset;
out.sel contents:
O
Oracle
  concurrency and locks 1-20
  isolation levels 1-20
  transactions 1-20
P
Package
  BIND, PREP 1-14
Page size 13-23
Partitioning 7-5
PREDICATE 1-26
Prefetch size 13-27
Privileges 4-8
Privileges, CONTROL 8-4
Privileges, INSERT 8-4
Privileges, SELECT 8-4
R
Registry variables 1-12
Restore
  recovery 12-12
Roll forward recovery utility 12-14
Run-Time Client 1-18, 2-3
S
SARGABLE 1-26
SCALAR 1-26
Schema 4-6
Security
  instance 1-19
Self-tuning 13-30
Sequences 6-15, F-2
SET INTEGRITY statement 8-18
SET NULL delete rule 10-7
StoresDB database
  call_type F-7
  catalog F-11
  cust_calls F-7
  customer F-3
  items F-5
  manufact F-7
  orders F-4
  stock F-8
String data types 6-8
SYSADM 8-4
SYSADM, SYSADM_GROUP 3-4
System Catalog tables 4-15
T
Table space 1-4, 5-5
  characteristics 5-14
Table space usage 5-15
Tuning the buffer pool 13-21
Types of DB2 UDB indexes 9-7
V
Visual Explain 9-27