
/***********************************************************************/
/* Document     : Oracle 8i,9i,10g queries, information, and tips      */
/* Doc. Version : 58                                                    */
/* File         : oracle9i10g.txt                                       */
/* Date         : 23-05-2008                                            */
/* Content      : Just a series of handy DBA queries.                   */
/* Compiled by  : Albert                                                 */
/***********************************************************************/
CONTENTS:
 0. Common data dictionary queries for sessions, locks, performance etc.
 1. DATA DICTIONARY QUERIES regarding files, tablespaces, logs
 2. NOTES ON PERFORMANCE
 3. Data dictionary queries regarding performance
 4. IMP and EXP, 10g IMPDP and EXPDP, and SQL*Loader Examples
 5. Add, Move AND Size Datafiles, logfiles, create objects etc.
 6. Install Oracle 92 on Solaris
 7. Install Oracle 9i on Linux
 8. Install Oracle 9.2.0.2 on OpenVMS
 9. Install Oracle 9.2.0.1 on AIX
 9. Installation Oracle 8i - 9i
10. CONSTRAINTS
11. DBMS_JOB and scheduled Jobs
12. Net8,9,10 / SQLNet
13. Data dictionary queries Rollback segments
14. Data dictionary queries regarding security, permissions
15. INIT.ORA parameters
16. Snapshots
17. Triggers
19. BACKUP RECOVERY, TROUBLESHOOTING
20. TRACING
21. Miscellaneous
22. DBA% and v$ views
23. TUNING
24. RMAN
25. UPGRADE AND MIGRATION
26. Some info on Rdb
27. Some info on IFS
28. Some info on 9iAS rel. 2
29 - 35  9iAS configurations and troubleshooting
30. BLOBS
31. BLOCK CORRUPTION
32. iSQL*Plus and EM 10g
33. ADDM
34. ASM and 10g RAC
35. CDC and Streams
36. X$ Tables
===========================================================================================
0. QUICK INFO/VIEWS ON SESSIONS, LOCKS, AND UNDO/ROLLBACK INFORMATION IN A SINGLE INSTANCE:
===========================================================================================
SINGLE INSTANCE QUERIES:
========================

-- ---------------------------
0.1 QUICK VIEW ON SESSIONS:
-- ---------------------------

SELECT substr(username, 1, 10), osuser, sql_address,
       to_char(logon_time, 'DD-MM-YYYY;HH24:MI'), sid, serial#, command,
       substr(program, 1, 30), substr(machine, 1, 30), substr(terminal, 1, 30)
FROM v$session;

SELECT sql_text, rows_processed from v$sqlarea where address='';

-- -------------------------
0.2 QUICK VIEW ON LOCKS: (use the sys.obj$ to find ID1:)
-- -------------------------

First, let's take a look at some important dictionary views with respect to locks:

SQL> desc v$lock;
 Name                Null?    Type
 ------------------- -------- ------------
 ADDR                         RAW(8)
 KADDR                        RAW(8)
 SID                          NUMBER
 TYPE                         VARCHAR2(2)
 ID1                          NUMBER
 ID2                          NUMBER
 LMODE                        NUMBER
 REQUEST                      NUMBER
 CTIME                        NUMBER
 BLOCK                        NUMBER
This view stores all information relating to locks in the database. The interesting columns
in this view are sid (identifying the session holding or acquiring the lock), type, and the
lmode/request pair.
Important possible values of type are TM (DML or Table Lock), TX (Transaction),
MR (Media Recovery), ST (Disk Space Transaction).
Exactly one of the lmode/request pair is either 0 or 1 while the other indicates the lock
mode. If lmode is not 0 or 1, then the session has acquired the lock; if request is other
than 0 or 1, the session is waiting to acquire the lock. The possible values for lmode and
request are:

  1: null
  2: Row Share (SS)
  3: Row Exclusive (SX)
  4: Share (S)
  5: Share Row Exclusive (SSX)
  6: Exclusive (X)

If the lock type is TM, the column id1 is the object's id, and the name of the object can
then be queried like so:

  select name from sys.obj$ where obj# = id1;

A lock type of JI indicates that a materialized view is being refreshed.

SQL> desc v$locked_object;
 Name                Null?    Type
 ------------------- -------- ------------
 XIDUSN                       NUMBER
 XIDSLOT                      NUMBER
 XIDSQN                       NUMBER
 OBJECT_ID                    NUMBER
 SESSION_ID                   NUMBER
 ORACLE_USERNAME              VARCHAR2(30)
 OS_USER_NAME                 VARCHAR2(30)
 PROCESS                      VARCHAR2(12)
 LOCKED_MODE                  NUMBER

SQL> desc dba_waiters;
 Name                Null?    Type
 ------------------- -------- ------------
 WAITING_SESSION              NUMBER
 HOLDING_SESSION              NUMBER
 LOCK_TYPE                    VARCHAR2(26)
 MODE_HELD                    VARCHAR2(40)
 MODE_REQUESTED               VARCHAR2(40)
 LOCK_ID1                     NUMBER
 LOCK_ID2                     NUMBER

SQL> desc v$transaction;
 Name                Null?    Type
 ------------------- -------- ------------
 ADDR                         RAW(8)
 XIDUSN                       NUMBER
 XIDSLOT                      NUMBER
 XIDSQN                       NUMBER
 UBAFIL                       NUMBER
 UBABLK                       NUMBER
 UBASQN                       NUMBER
 UBAREC                       NUMBER
 STATUS                       VARCHAR2(16)
 START_TIME                   VARCHAR2(20)
 START_SCNB                   NUMBER
 START_SCNW                   NUMBER
 START_UEXT                   NUMBER
 START_UBAFIL                 NUMBER
 START_UBABLK                 NUMBER
 START_UBASQN                 NUMBER
 START_UBAREC                 NUMBER
 SES_ADDR                     RAW(8)
 FLAG                         NUMBER
 SPACE                        VARCHAR2(3)
 RECURSIVE                    VARCHAR2(3)
 NOUNDO                       VARCHAR2(3)
 PTX                          VARCHAR2(3)
 NAME                         VARCHAR2(256)
 PRV_XIDUSN                   NUMBER
 PRV_XIDSLT                   NUMBER
 PRV_XIDSQN                   NUMBER
 PTX_XIDUSN                   NUMBER
 PTX_XIDSLT                   NUMBER
 PTX_XIDSQN                   NUMBER
 DSCN-B                       NUMBER
 DSCN-W                       NUMBER
 USED_UBLK                    NUMBER
 USED_UREC                    NUMBER
 LOG_IO                       NUMBER
 PHY_IO                       NUMBER
 CR_GET                       NUMBER
 CR_CHANGE                    NUMBER
 START_DATE                   DATE
 DSCN_BASE                    NUMBER
 DSCN_WRAP                    NUMBER
 START_SCN                    NUMBER
 DEPENDENT_SCN                NUMBER
 XID                          RAW(8)
 PRV_XID                      RAW(8)
 PTX_XID                      RAW(8)
Queries you can use in investigating locks:
===========================================

SELECT XIDUSN, OBJECT_ID, SESSION_ID, ORACLE_USERNAME, OS_USER_NAME, PROCESS
FROM v$locked_object;

SELECT d.OBJECT_ID, substr(OBJECT_NAME,1,20), l.SESSION_ID, l.ORACLE_USERNAME, l.LOCKED_MODE
FROM v$locked_object l, dba_objects d
WHERE d.OBJECT_ID=l.OBJECT_ID;

SELECT ADDR, KADDR, SID, TYPE, ID1, ID2, LMODE, BLOCK FROM v$lock;

SELECT a.sid, a.saddr, b.ses_addr, a.username, b.xidusn, b.used_urec, b.used_ublk
FROM v$session a, v$transaction b
WHERE a.saddr = b.ses_addr;

SELECT s.sid, l.lmode, l.block, substr(s.username, 1, 10), substr(s.schemaname, 1, 10),
       substr(s.osuser, 1, 10), substr(s.program, 1, 30), s.command
FROM v$session s, v$lock l
WHERE s.sid=l.sid;

SELECT p.spid, s.sid, p.addr, s.paddr, substr(s.username, 1, 10), substr(s.schemaname, 1, 10),
       s.command, substr(s.osuser, 1, 10), substr(s.machine, 1, 10)
FROM v$session s, v$process p
WHERE s.paddr=p.addr;

SELECT sid, serial#, command, substr(username, 1, 10), osuser, sql_address, LOCKWAIT,
       to_char(logon_time, 'DD-MM-YYYY;HH24:MI'), substr(program, 1, 30)
FROM v$session;

SELECT sid, serial#, username, LOCKWAIT from v$session;

SELECT v.SID, v.BLOCK_GETS, v.BLOCK_CHANGES, w.USERNAME, w.OSUSER, w.TERMINAL
FROM v$sess_io v, V$session w
WHERE v.SID=w.SID
ORDER BY v.SID;

SELECT * from dba_waiters;

SELECT waiting_session, holding_session, lock_type, mode_held FROM dba_waiters;

SELECT p.spid                      unix_spid,
       s.sid                       sid,
       p.addr, s.paddr,
       substr(s.username, 1, 10)   username,
       substr(s.schemaname, 1, 10) schemaname,
       s.command                   command,
       substr(s.osuser, 1, 10)     osuser,
       substr(s.machine, 1, 25)    machine
FROM v$session s, v$process p
WHERE s.paddr=p.addr
ORDER BY p.spid;
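A related query, given here as a minimal sketch (not from the original list), joins v$lock to
itself to show blockers and waiters directly: a holder with block=1 and a waiter requesting
the same id1/id2.

SELECT holder.sid  AS blocking_sid,
       waiter.sid  AS waiting_sid,
       holder.type AS lock_type,
       holder.id1, holder.id2, waiter.request
FROM   v$lock holder, v$lock waiter
WHERE  holder.block   = 1
AND    waiter.request > 0
AND    holder.id1     = waiter.id1
AND    holder.id2     = waiter.id2;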
Usage of v$session_longops:
===========================

SQL> desc v$session_longops;
 SID              NUMBER         Session identifier
 SERIAL#          NUMBER         Session serial number
 OPNAME           VARCHAR2(64)   Brief description of the operation
 TARGET           VARCHAR2(64)   The object on which the operation is carried out
 TARGET_DESC      VARCHAR2(32)   Description of the target
 SOFAR            NUMBER         The units of work done so far
 TOTALWORK        NUMBER         The total units of work
 UNITS            VARCHAR2(32)   The units of measurement
 START_TIME       DATE           The starting time of operation
 LAST_UPDATE_TIME DATE           Time when statistics last updated
 TIMESTAMP        DATE           Timestamp
 TIME_REMAINING   NUMBER         Estimate (in seconds) of time remaining for the operation to complete
 ELAPSED_SECONDS  NUMBER         The number of elapsed seconds from the start of operations
 CONTEXT          NUMBER         Context
 MESSAGE          VARCHAR2(512)  Statistics summary message
 USERNAME         VARCHAR2(30)   User ID of the user performing the operation
 SQL_ADDRESS      RAW(4 | 8)     Used with the value of the SQL_HASH_VALUE column to identify the
                                 SQL statement associated with the operation
 SQL_HASH_VALUE   NUMBER         Used with the value of the SQL_ADDRESS column to identify the
                                 SQL statement associated with the operation
 SQL_ID           VARCHAR2(13)   SQL identifier of the SQL statement associated with the operation
 QCSID            NUMBER         Session identifier of the parallel coordinator

This view displays the status of various operations that run for longer than 6 seconds
(in absolute time). These operations currently include many backup and recovery functions,
statistics gathering, and query execution, and more operations are added for every Oracle
release. To monitor query execution progress, you must be using the cost-based optimizer
and you must:
- Set the TIMED_STATISTICS or SQL_TRACE parameter to true
- Gather statistics for your objects with the ANALYZE statement or the DBMS_STATS package
You can add information to this view about application-specific long-running operations by
using the DBMS_APPLICATION_INFO.SET_SESSION_LONGOPS procedure.

Select 'long', to_char(l.sid), to_char(l.serial#), to_char(l.sofar), to_char(l.totalwork),
       to_char(l.start_time, 'DD-Mon-YYYY HH24:MI:SS'),
       to_char(l.last_update_time, 'DD-Mon-YYYY HH24:MI:SS'),
       to_char(l.time_remaining), to_char(l.elapsed_seconds),
       l.opname, l.target, l.target_desc, l.message,
       s.username, s.osuser, s.lockwait
from v$session_longops l, v$session s
where l.sid = s.sid and l.serial# = s.serial#;

Select 'long', to_char(l.sid), to_char(l.serial#), to_char(l.sofar), to_char(l.totalwork),
       to_char(l.start_time, 'DD-Mon-YYYY HH24:MI:SS'),
       to_char(l.last_update_time, 'DD-Mon-YYYY HH24:MI:SS'),
       s.username, s.osuser, s.lockwait
from v$session_longops l, v$session s
where l.sid = s.sid and l.serial# = s.serial#;

select substr(username,1,15), target, to_char(start_time, 'DD-Mon-YYYY HH24:MI:SS'),
       SOFAR, substr(MESSAGE,1,70)
from v$session_longops;

select USERNAME, to_char(start_time, 'DD-Mon-YYYY HH24:MI:SS'),
       substr(message,1,90), to_char(time_remaining)
from v$session_longops;
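As a minimal sketch of the DBMS_APPLICATION_INFO.SET_SESSION_LONGOPS procedure mentioned
above (the operation name and the 10,000-iteration loop are hypothetical), a batch job can
publish its own progress into v$session_longops:

DECLARE
  l_rindex    BINARY_INTEGER := dbms_application_info.set_session_longops_nohint;
  l_slno      BINARY_INTEGER;
  l_totalwork NUMBER := 10000;                 -- hypothetical total amount of work
BEGIN
  FOR i IN 1 .. l_totalwork LOOP
    -- ... one unit of application work would be done here ...
    dbms_application_info.set_session_longops(
      rindex    => l_rindex,
      slno      => l_slno,
      op_name   => 'Nightly batch load',       -- hypothetical operation name
      sofar     => i,
      totalwork => l_totalwork,
      units     => 'rows');
  END LOOP;
END;
/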
9i and 10G note:
================
Oracle has a view inside the Oracle data buffers. The view is called v$bh, and while v$bh was
originally developed for Oracle Parallel Server (OPS), the v$bh view can be used to show the
number of data blocks in the data buffer for every object type in the database.
The following query is especially exciting because you can now see what objects are consuming
the data buffer caches. In Oracle9i, you can use this information to segregate tables to
separate RAM buffers with different blocksizes.
Here is a sample query that shows data buffer utilization for individual objects in the
database. Note that this script uses an Oracle9i scalar sub-query, and will not work in
pre-Oracle9i systems unless you comment out column c3.

column c0 heading 'Owner'                     format a15
column c1 heading 'Object|Name'               format a30
column c2 heading 'Number|of|Buffers'         format 999,999
column c3 heading 'Percentage|of Data|Buffer' format 999,999,999

select owner                                      c0,
       object_name                                c1,
       count(1)                                   c2,
       (count(1)/(select count(*) from v$bh))*100 c3
from   dba_objects o, v$bh bh
where  o.object_id = bh.objd
and    o.owner not in ('SYS','SYSTEM','AURORA$JIS$UTILITY$')
group by owner, object_name
order by count(1) desc;

-- ----------------------------
0.3 QUICK VIEW ON TEMP USAGE:
-- ----------------------------

select total_extents, used_extents, total_extents, current_users, tablespace_name
from v$sort_segment;

select username, user, sqladdr, extents, tablespace from v$sort_usage;

SELECT b.tablespace, ROUND(((b.blocks*p.value)/1024/1024),2),
       a.sid||','||a.serial# SID_SERIAL, a.username, a.program
FROM sys.v_$session a, sys.v_$sort_usage b, sys.v_$parameter p
WHERE p.name = 'db_block_size'
AND a.saddr = b.session_addr
ORDER BY b.tablespace, b.blocks;
-- --------------------------------
0.4 QUICK VIEW ON UNDO/ROLLBACK:
-- --------------------------------

SELECT substr(username, 1, 10), substr(terminal, 1, 10), substr(osuser, 1, 10),
       t.start_time, r.name, t.used_ublk "ROLLB BLKS", log_io, phy_io
FROM   sys.v_$transaction t, sys.v_$rollname r, sys.v_$session s
WHERE  t.xidusn = r.usn
AND    t.ses_addr = s.saddr;

SELECT substr(n.name, 1, 10), s.writes, s.gets, s.waits, s.wraps, s.extents,
       s.status, s.optsize, s.rssize
FROM V$ROLLNAME n, V$ROLLSTAT s
WHERE n.usn=s.usn;

SELECT substr(r.name, 1, 10) "RBS", s.sid, s.serial#, s.taddr, t.addr,
       substr(s.username, 1, 10) "USER", t.status, t.cr_get, t.phy_io, t.used_ublk,
       t.noundo, substr(s.program, 1, 15) "COMMAND"
FROM sys.v_$session s, sys.v_$transaction t, sys.v_$rollname r
WHERE t.addr = s.taddr AND t.xidusn = r.usn
ORDER BY t.cr_get, t.phy_io;

SELECT substr(segment_name, 1, 20), substr(tablespace_name, 1, 20), status,
       INITIAL_EXTENT, NEXT_EXTENT, MIN_EXTENTS, MAX_EXTENTS, PCT_INCREASE
FROM DBA_ROLLBACK_SEGS;

select 'FREE', count(*) from sys.fet$
union
select 'USED', count(*) from sys.uet$;

-- Quick view active transactions
SELECT NAME, XACTS "ACTIVE TRANSACTIONS"
FROM V$ROLLNAME, V$ROLLSTAT
WHERE V$ROLLNAME.USN = V$ROLLSTAT.USN;

SELECT to_char(BEGIN_TIME, 'DD-MM-YYYY;HH24:MI'), to_char(END_TIME, 'DD-MM-YYYY;HH24:MI'),
       UNDOTSN, UNDOBLKS, TXNCOUNT, MAXCONCURRENCY AS "MAXCON"
FROM V$UNDOSTAT
WHERE trunc(BEGIN_TIME)=trunc(SYSDATE);

select TO_CHAR(MIN(Begin_Time),'DD-MON-YYYY HH24:MI:SS') "Begin Time",
       TO_CHAR(MAX(End_Time),'DD-MON-YYYY HH24:MI:SS')   "End Time",
       SUM(Undoblks)       "Total Undo Blocks Used",
       SUM(Txncount)       "Total Num Trans Executed",
       MAX(Maxquerylen)    "Longest Query(in secs)",
       MAX(Maxconcurrency) "Highest Concurrent TrCount",
       SUM(Ssolderrcnt),
       SUM(Nospaceerrcnt)
from V$UNDOSTAT;

SELECT used_urec
FROM v$session s, v$transaction t
WHERE s.audsid=sys_context('userenv', 'sessionid') and s.taddr = t.addr;
(used_urec = Used Undo records)

SELECT a.sid, a.username, b.xidusn, b.used_urec, b.used_ublk
FROM v$session a, v$transaction b
WHERE a.saddr = b.ses_addr;

SELECT v.SID, v.BLOCK_GETS, v.BLOCK_CHANGES, w.USERNAME, w.OSUSER, w.TERMINAL
FROM v$sess_io v, V$session w
WHERE v.SID=w.SID
ORDER BY v.SID;
-- ---------------------------------
0.5 SOME EXPLANATIONS:
-- ---------------------------------

-- explanation of "COMMAND":
 1: CREATE TABLE            2: INSERT                3: SELECT                4: CREATE CLUSTER
 5: ALTER CLUSTER           6: UPDATE                7: DELETE                8: DROP CLUSTER
 9: CREATE INDEX           10: DROP INDEX           11: ALTER INDEX          12: DROP TABLE
13: CREATE SEQUENCE        14: ALTER SEQUENCE       15: ALTER TABLE          16: DROP SEQUENCE
17: GRANT                  18: REVOKE               19: CREATE SYNONYM       20: DROP SYNONYM
21: CREATE VIEW            22: DROP VIEW            23: VALIDATE INDEX       24: CREATE PROCEDURE
25: ALTER PROCEDURE        26: LOCK TABLE           27: NO OPERATION         28: RENAME
29: COMMENT                30: AUDIT                31: NOAUDIT              32: CREATE DATABASE LINK
33: DROP DATABASE LINK     34: CREATE DATABASE      35: ALTER DATABASE       36: CREATE ROLLBACK SEGMENT
37: ALTER ROLLBACK SEGMENT 38: DROP ROLLBACK SEGMENT 39: CREATE TABLESPACE   40: ALTER TABLESPACE
41: DROP TABLESPACE        42: ALTER SESSION        43: ALTER USER           44: COMMIT
45: ROLLBACK               46: SAVEPOINT            47: PL/SQL EXECUTE       48: SET TRANSACTION
49: ALTER SYSTEM SWITCH LOG 50: EXPLAIN             51: CREATE USER          52: CREATE ROLE
53: DROP USER              54: DROP ROLE            55: SET ROLE             56: CREATE SCHEMA
57: CREATE CONTROL FILE    58: ALTER TRACING        59: CREATE TRIGGER       60: ALTER TRIGGER
61: DROP TRIGGER           62: ANALYZE TABLE        63: ANALYZE INDEX        64: ANALYZE CLUSTER
65: CREATE PROFILE         66: DROP PROFILE         67: ALTER PROFILE        68: DROP PROCEDURE
69: DROP PROCEDURE         70: ALTER RESOURCE COST  71: CREATE SNAPSHOT LOG  72: ALTER SNAPSHOT LOG
73: DROP SNAPSHOT LOG      74: CREATE SNAPSHOT      75: ALTER SNAPSHOT       76: DROP SNAPSHOT
79: ALTER ROLE             85: TRUNCATE TABLE       86: TRUNCATE CLUSTER     88: ALTER VIEW
91: CREATE FUNCTION        92: ALTER FUNCTION       93: DROP FUNCTION        94: CREATE PACKAGE
95: ALTER PACKAGE          96: DROP PACKAGE         97: CREATE PACKAGE BODY  98: ALTER PACKAGE BODY
99: DROP PACKAGE BODY

-- explanation of locks:
Locks:
  0, 'None',           /* Mon Lock equivalent */
  1, 'Null',           /* N */
  2, 'Row-S (SS)',     /* L */
  3, 'Row-X (SX)',     /* R */
  4, 'Share',          /* S */
  5, 'S/Row-X (SRX)',  /* C */
  6, 'Exclusive',      /* X */
  to_char(b.lmode)

TX: enqueue, waiting
TM: DDL on object
MR: Media Recovery

A TX lock is acquired when a transaction initiates its first change and is held until the
transaction does a COMMIT or ROLLBACK. It is used mainly as a queuing mechanism so that
other sessions can wait for the transaction to complete.
TM per-table locks are acquired during the execution of a transaction when referencing a
table with a DML statement, so that the object is not dropped or altered during the
execution of the transaction, if and only if the dml_locks parameter is non-zero.

LOCKS: locks on user objects, such as tables and rows
LATCH: locks on system objects, such as shared data structures in memory and data dictionary rows
LOCKS - shared or exclusive
LATCH - always exclusive
UL = user locks, placed by application code using, for example, the DBMS_LOCK package

DML LOCKS: data manipulation: table lock, row lock
DDL LOCKS: preserve the structure of an object (no simultaneous DML and DDL statements)

DML locks:
  row lock (TX): for rows (insert, update, delete)
  row lock plus table lock: row lock, but also prevents DDL statements
  table lock (TM): acquired automatically on insert, update, delete, to prevent DDL on the table

table lock modes:
  S  : share lock
  RS : row share
  RSX: row share exclusive
  RX : row exclusive
  X  : exclusive (other transactions can only SELECT)

In the V$LOCK lmode column:
  0: None
  1: Null (NULL)
  2: Row-S (SS)
  3: Row-X (SX)
  4: Share (S)
  5: S/Row-X (SSX)
  6: Exclusive (X)
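As a minimal illustrative sketch (not part of the original query list), the mode list above
can be applied directly in a query that decodes the held and requested modes in v$lock:

SELECT sid, type, id1, id2,
       DECODE(lmode,   0,'None', 1,'Null', 2,'Row-S (SS)', 3,'Row-X (SX)',
                       4,'Share', 5,'S/Row-X (SSX)', 6,'Exclusive', to_char(lmode))   held_mode,
       DECODE(request, 0,'None', 1,'Null', 2,'Row-S (SS)', 3,'Row-X (SX)',
                       4,'Share', 5,'S/Row-X (SSX)', 6,'Exclusive', to_char(request)) requested_mode,
       block
FROM   v$lock
ORDER BY sid;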
Internal Implementation of Oracle Locks (Enqueue)

The Oracle server uses locks to provide concurrent access to shared resources, whereas it
uses latches to provide exclusive and short-term access to memory structures inside the SGA.
Latches also prevent more than one process from executing the same piece of code which
another process might be executing. A latch is also a simple lock, which provides serialized
and exclusive-only access to a memory area in the SGA. Oracle doesn't use latches to provide
shared access to resources because that would increase CPU usage. Latches are used for big
memory structures and allow the operations required for locking the sub-structures.

Shared resources can be tables, transactions, redo threads, etc. An enqueue can be local or
global. In a single instance, enqueues are local to that instance. There are also global
enqueues, like the ST enqueue, which must be held before any space transaction can occur on
any tablespace in RAC. ST enqueues are held only for dictionary-managed tablespaces.

These Oracle locks are generally known as enqueues, because whenever a session requests a
lock on any shared resource structure, its lock data structure is queued onto one of the
linked lists attached to that resource structure (the resource structure is discussed later).

Before proceeding further with this topic, here is a little background on Oracle locks.
Oracle locks can be applied to compound and simple objects like tables and the cache buffer.
Locks can be held in different modes like shared, exclusive, null, sub-shared, sub-exclusive
and shared sub-exclusive. Depending on the type of object, different modes are applied. For
example, for a compound object like a table with rows, all of the above-mentioned modes could
be applicable, whereas for simple objects only the first three are applicable. These lock
modes don't have any importance of their own; what matters is how they are used by the
subsystem. These lock modes (compatibility between locks) define how a session will get a
lock on that object.
-- Explanation of Waits:

SQL> desc v$system_event;
 Name
 ------------------------
 EVENT
 TOTAL_WAITS
 TOTAL_TIMEOUTS
 TIME_WAITED
 AVERAGE_WAIT
 TIME_WAITED_MICRO

v$system_event
This view displays the count (total_waits) of all wait events since startup of the instance.
If timed_statistics is set to true, the sum of the wait times for all events is also displayed
in the column time_waited. The unit of time_waited is one hundredth of a second. Since 10g, an
additional column (time_waited_micro) measures wait times in millionths of a second.
total_waits where event='buffer busy waits' is equal to the sum of count in v$waitstat.
v$enqueue_stat can be used to break down waits on the enqueue wait event. While this view
totals all events for the instance, v$session_event shows the same wait events per session.

select event, total_waits, time_waited
from v$system_event
where event like '%file%'
order by total_waits desc;

column c1 heading 'Event|Name'             format a30
column c2 heading 'Total|Waits'            format 999,999,999
column c3 heading 'Seconds|Waiting'        format 999,999
column c4 heading 'Total|Timeouts'         format 999,999,999
column c5 heading 'Average|Wait|(in secs)' format 99.999

ttitle 'System-wide Wait Analysis|for current wait events'

select event              c1,
       total_waits        c2,
       time_waited / 100  c3,
       total_timeouts     c4,
       average_wait / 100 c5
from sys.v_$system_event
where event not in (
      'dispatcher timer',
      'lock element cleanup',
      'Null event',
      'parallel query dequeue wait',
      'parallel query idle wait - Slaves',
      'pipe get',
      'PL/SQL lock timer',
      'pmon timer',
      'rdbms ipc message',
      'slave wait',
      'smon timer',
      'SQL*Net break/reset to client',
      'SQL*Net message from client',
      'SQL*Net message to client',
      'SQL*Net more data to client',
      'virtual circuit status',
      'WMON goes to sleep')
AND event not like 'DFS%'
and event not like '%done%'
and event not like '%Idle%'
AND event not like 'KXFX%'
order by c2 desc;
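A small sketch along the same lines (the bind variable :sid is just a placeholder): the
per-session counterpart can be queried from v$session_event like this:

select sid, event, total_waits, total_timeouts, time_waited, average_wait
from   v$session_event
where  sid = :sid
order by time_waited desc;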
Create table beg_system_event as select * from v$system_event
Run workload through system or user task
Create table end_system_event as select * from v$system_event
Issue SQL to determine true wait events
drop table beg_system_event;
drop table end_system_event;

SELECT b.event,
       (e.total_waits - b.total_waits)       total_waits,
       (e.total_timeouts - b.total_timeouts) total_timeouts,
       (e.time_waited - b.time_waited)       time_waited
FROM beg_system_event b, end_system_event e
WHERE b.event = e.event;

Cumulative info, after startup:
-------------------------------
SELECT * FROM v$system_event WHERE event = 'enqueue';

SELECT * FROM v$sysstat WHERE class=4;

select c.name, a.addr, a.gets, a.misses, a.sleeps,
       a.immediate_gets, a.immediate_misses, a.wait_time, b.pid
from v$latch a, v$latchholder b, v$latchname c
where a.addr = b.laddr(+)
and a.latch# = c.latch#
order by a.latch#;
-- ----------------------------------------------------------------
0.6 QUICK INFO ON HIT RATIO, SHARED POOL etc..
-- ----------------------------------------------------------------

Hit ratio:

SELECT (1-(pr.value/(dbg.value+cg.value)))*100
FROM   v$sysstat pr, v$sysstat dbg, v$sysstat cg
WHERE  pr.name  = 'physical reads'
AND    dbg.name = 'db block gets'
AND    cg.name  = 'consistent gets';

SELECT * FROM V$SGA;

-- free memory shared pool:
SELECT * FROM v$sgastat WHERE name = 'free memory';

-- hit ratio shared pool:
SELECT gethits, gets, gethitratio FROM v$librarycache WHERE namespace = 'SQL AREA';

SELECT SUM(PINS) "EXECUTIONS", SUM(RELOADS) "CACHE MISSES WHILE EXECUTING"
FROM V$LIBRARYCACHE;

SELECT sum(sharable_mem) FROM v$db_object_cache;

-- finding literals in SP:
SELECT substr(sql_text,1,50) "SQL", count(*), sum(executions) "TotExecs"
FROM v$sqlarea
WHERE executions < 5
GROUP BY substr(sql_text,1,50)
HAVING count(*) > 30
ORDER BY 2;
-- ---------------------------------------
0.7 Quick Table and object information
-- ---------------------------------------

SELECT distinct substr(t.owner, 1, 25), substr(t.table_name,1,50),
       substr(t.tablespace_name,1,20), t.chain_cnt, t.logging, s.relative_fno
FROM dba_tables t, dba_segments s
WHERE t.owner not in ('SYS','SYSTEM','OUTLN','DBSNMP','WMSYS','ORDSYS','ORDPLUGINS','MDSYS','CTXSYS','XDB')
AND t.table_name=s.segment_name
AND s.segment_type='TABLE'
AND s.segment_name like 'CI_PAY%';

SELECT substr(segment_name, 1, 30), segment_type, substr(owner, 1, 10),
       extents, initial_extent, next_extent, max_extents
FROM dba_segments
WHERE extents > max_extents - 100
AND owner not in ('SYS','SYSTEM');

SELECT segment_name, owner, tablespace_name, extents
FROM   dba_segments
WHERE  owner='SALES'     -- use the correct schema here
AND    extents > 700;

SELECT owner, substr(object_name, 1, 30), object_type, created, last_ddl_time, status
FROM dba_objects
WHERE OWNER='RM_LIVE';
-- or: WHERE created > SYSDATE-1;

SELECT owner, substr(object_name, 1, 30), object_type, created, last_ddl_time, status
FROM dba_objects
WHERE status='INVALID';

Compare 2 owners:
-----------------
select table_name from dba_tables where owner='MIS_OWNER'
and table_name not in
(SELECT table_name from dba_tables where OWNER='MARPAT');

Table and column information:
-----------------------------
select substr(table_name, 1, 3) schema,
       table_name,
       column_name,
       substr(data_type,1,1) data_type
from user_tab_columns
where COLUMN_NAME='ENV_ID'       -- optional filter
and  (table_name like 'ALG%' or table_name like 'STG%' or table_name like 'ODS%'
      or table_name like 'DWH%' or table_name like 'MKM%')
order by decode(substr(table_name, 1, 3),
                'ALG', 10, 'STG', 20, 'ODS', 30, 'DWH', 40, 'MKM', 50, 60),
         table_name, column_id;

Check on existence of JServer:
------------------------------
select count(*) from all_objects where object_name = 'DBMS_JAVA';
-- should return a count of 3

-- --------------------------------------
0.8 QUICK INFO ON PRODUCT INFORMATION:
-- --------------------------------------

SELECT * FROM PRODUCT_COMPONENT_VERSION;
SELECT * FROM NLS_DATABASE_PARAMETERS;
SELECT * FROM NLS_SESSION_PARAMETERS;
SELECT * FROM NLS_INSTANCE_PARAMETERS;
SELECT * FROM V$OPTION;
SELECT * FROM V$LICENSE;
SELECT * FROM V$VERSION;

Oracle RDBMS releases:
----------------------
9.2.0.1 is the terminal release for Oracle 9i Rel. 2. Normally it's patched to 9.2.0.4.
        As from October, patches 9.2.0.5 and a little later 9.2.0.6 were available.
        9.2.0.4 is patch ID 3095277.
9.0.1.4 is the terminal release for Oracle 9i Rel. 1.
8.1.7   is the terminal release for Oracle8i. Additional patchsets exist.
8.0.6   is the terminal release for Oracle8.  Additional patchsets exist.
7.3.4   is the terminal release for Oracle7.  Additional patchsets exist.
IS ORACLE 32BIT or 64BIT?
-------------------------
Starting with version 8, Oracle began shipping 64bit versions of its RDBMS product on UNIX
platforms that support 64bit software.
IMPORTANT: 64bit Oracle can only be installed on Operating Systems that are 64bit enabled.
In general, if Oracle is 64bit, '64bit' will be displayed on the opening banners of Oracle
executables such as 'svrmgrl', 'exp' and 'imp'. It will also be displayed in the headers of
Oracle trace files. Otherwise, if '64bit' is not displayed at these locations, it can be
assumed that Oracle is 32bit.

Or from the OS level:
  % cd $ORACLE_HOME/bin
  % file oracle
...if 64bit, '64bit' will be indicated.
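A small sketch of the same check from within the database (the exact banner text varies per
release); the version banner usually carries the wordsize as well:

SELECT banner FROM v$version WHERE banner LIKE 'Oracle%';
-- e.g. "Oracle9i Enterprise Edition Release 9.2.0.4.0 - 64bit Production"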
To verify the wordsize of a downloaded patchset:
------------------------------------------------
The filename of the downloaded patchset usually dictates which version and wordsize of Oracle
it should be applied against. For instance: p1882450_8172_SOLARIS64.zip is the 8.1.7.2
patchset for 64bit Oracle on Solaris. Also refer to the README that is included with the
patch or patch set and this Note:
Win2k Server Certifications:
----------------------------
OS            Product               Certified With  Version  Status       Addtl. Info.  Components  Other  Install Issue
2000          10g                   N/A             N/A      Certified    Yes           None        None   None
2000          9.2 32-bit - Opteron  N/A             N/A      Certified    Yes           None        None   None
2000          9.2                   N/A             N/A      Certified    Yes           None        None   None
2000          9.0.1                 N/A             N/A      Desupported  Yes           None        N/A    N/A
2000          8.1.7 (8i)            N/A             N/A      Desupported  Yes           None        N/A    N/A
2000          8.1.6 (8i)            N/A             N/A      Desupported  Yes           None        N/A    N/A
2000, Beta 3  8.1.5 (8i)            N/A             N/A      Withdrawn    Yes           N/A         N/A    N/A
Solaris Server certifications:
------------------------------
OS    Product            Certified With  Version  Status       Addtl. Info.  Components  Other  Install Issue
9     10g 64-bit         N/A             N/A      Certified    Yes           None        None   None
8     10g 64-bit         N/A             N/A      Certified    Yes           None        None   None
10    10g 64-bit         N/A             N/A      Projected    None          N/A         N/A    N/A
9     9.2 64-bit         N/A             N/A      Certified    Yes           None        None   None
8     9.2 64-bit         N/A             N/A      Certified    Yes           None        None   None
10    9.2 64-bit         N/A             N/A      Projected    None          N/A         N/A    N/A
2.6   9.2                N/A             N/A      Certified    Yes           None        None   None
9     9.2                N/A             N/A      Certified    Yes           None        None   None
8     9.2                N/A             N/A      Certified    Yes           None        None   None
7     9.2                N/A             N/A      Certified    Yes           None        None   None
10    9.2                N/A             N/A      Projected    None          N/A         N/A    N/A
9     9.0.1 64-bit       N/A             N/A      Desupported  Yes           None        N/A    N/A
8     9.0.1 64-bit       N/A             N/A      Desupported  Yes           None        N/A    N/A
2.6   9.0.1              N/A             N/A      Desupported  Yes           None        N/A    N/A
9     9.0.1              N/A             N/A      Desupported  Yes           None        N/A    N/A
8     9.0.1              N/A             N/A      Desupported  Yes           None        N/A    N/A
7     9.0.1              N/A             N/A      Desupported  Yes           None        N/A    N/A
9     8.1.7 (8i) 64-bit  N/A             N/A      Desupported  Yes           None        N/A    N/A
8     8.1.7 (8i) 64-bit  N/A             N/A      Desupported  Yes           None        N/A    N/A
2.6   8.1.7 (8i)         N/A             N/A      Desupported  Yes           None        N/A    N/A
9     8.1.7 (8i)         N/A             N/A      Desupported  Yes           None        N/A    N/A
8     8.1.7 (8i)         N/A             N/A      Desupported  Yes           None        N/A    N/A
7     8.1.7 (8i)         N/A             N/A      Desupported  Yes           None        N/A    N/A
Everything below: desupported.

Oracle clients:
---------------
(Server-version versus client-version interoperability matrix for 10.1.0, 9.2.0, 9.0.1,
8.1.7, 8.1.6, 8.1.5, 8.0.6, 8.0.5 and 7.3.4, with entries Yes / Was / No.)
-- -----------------------------------------------------
0.9 QUICK INFO WITH REGARDS LOGS AND BACKUP RECOVERY:
-- -----------------------------------------------------

SELECT * from V$BACKUP;

SELECT file#, substr(name, 1, 30), status, checkpoint_change#     -- from the controlfile
FROM V$DATAFILE;

SELECT d.file#, d.status, d.checkpoint_change#, b.status, b.CHANGE#,
       to_char(b.TIME,'DD-MM-YYYY;HH24:MI'), substr(d.name, 1, 40)
FROM V$DATAFILE d, V$BACKUP b
WHERE d.file#=b.file#;

SELECT file#, substr(name, 1, 30), status, fuzzy, checkpoint_change#     -- from the file header
FROM V$DATAFILE_HEADER;

SELECT first_change#, next_change#, sequence#, archived, substr(name, 1, 40),
       COMPLETION_TIME, FIRST_CHANGE#, FIRST_TIME
FROM V$ARCHIVED_LOG
WHERE COMPLETION_TIME > SYSDATE -2;

SELECT recid, first_change#, sequence#, next_change# FROM V$LOG_HISTORY;

SELECT resetlogs_change#, checkpoint_change#, controlfile_change#, open_resetlogs
FROM V$DATABASE;

SELECT * FROM V$RECOVER_FILE;     -- Which file needs recovery
-- ------------------------------------------------------------------------------
0.10 QUICK INFO WITH REGARDS TO TABLESPACES, DATAFILES, REDO LOGFILES etc..:
-- ------------------------------------------------------------------------------

Online redo log information: V$LOG, V$LOGFILE:

SELECT l.group#, l.members, l.status, l.bytes, substr(lf.member, 1, 50)
FROM V$LOG l, V$LOGFILE lf
WHERE l.group#=lf.group#;

SELECT THREAD#, SEQUENCE#, FIRST_CHANGE#, FIRST_TIME, to_char(FIRST_TIME, 'DD-MM-YYYY;HH24:MI')
FROM V$LOG_HISTORY;
-- WHERE SEQUENCE#

SELECT GROUP#, ARCHIVED, STATUS FROM V$LOG;

-- tablespace free-used:
SELECT Total.name "Tablespace Name",
       Free_space,
       (total_space-Free_space) Used_space,
       total_space
FROM (SELECT tablespace_name, sum(bytes/1024/1024) Free_Space
      FROM sys.dba_free_space
      GROUP BY tablespace_name) Free,
     (SELECT b.name, sum(bytes/1024/1024) TOTAL_SPACE
      FROM sys.v_$datafile a, sys.v_$tablespace B
      WHERE a.ts# = b.ts#
      GROUP BY b.name) Total
WHERE Free.Tablespace_name = Total.name;

SELECT substr(file_name, 1, 70), tablespace_name FROM dba_data_files;

-----------------------------------------------
0.11 AUDIT Statements:
-----------------------------------------------

select v.sql_text, v.FIRST_LOAD_TIME, v.PARSING_SCHEMA_ID, v.DISK_READS,
       v.ROWS_PROCESSED, v.CPU_TIME, b.username
from v$sqlarea v, dba_users b
where v.FIRST_LOAD_TIME > '2008-05-12'
and v.PARSING_SCHEMA_ID=b.user_id
order by v.FIRST_LOAD_TIME;
------------------------------------------------
0.12 EXAMPLE OF DYNAMIC SQL:
------------------------------------------------

select 'UPDATE '||t.table_name||' SET '||c.column_name||'=REPLACE('||
       c.column_name||','''',CHR(7));'
from user_tab_columns c, user_tables t
where c.table_name=t.table_name
and t.num_rows>0
and c.DATA_LENGTH>10
and data_type like '%CHAR%'
ORDER BY t.table_name desc;

create public synonym EMPLOYEE for HARRY.EMPLOYEE;

select 'create public synonym '||table_name||' for CISADM.'||table_name||';'
from dba_tables where owner='CISADM';

select 'GRANT SELECT, INSERT, UPDATE, DELETE ON '||table_name||' TO CISUSER;'
from dba_tables where owner='CISADM';

select 'GRANT SELECT ON '||table_name||' TO CISREAD;'
from dba_tables where owner='CISADM';
------------------------------------------------
0.13 ORACLE MOST COMMON DATATYPES:
------------------------------------------------

Example: number as integer in comparison to smallint
-----------------------------------------------------

SQL> create table a (id number(3));
Table created.
SQL> create table b (id smallint);
Table created.
SQL> create table c (id integer);
Table created.

SQL> insert into a values (5);
1 row created.
SQL> insert into a values (999);
1 row created.
SQL> insert into a
  2  values
  3  (1001);
(1001)
*
ERROR at line 3:
ORA-01438: value larger than specified precision allowed for this column

SQL> insert into b values (5);
1 row created.
SQL> insert into b values (99);
1 row created.
SQL> insert into b values (999);
1 row created.
SQL> insert into b values (1001);
1 row created.
SQL> insert into b values (65536);
1 row created.
SQL> insert into b values (1048576);
1 row created.
SQL> insert into b values (1099511627776);
1 row created.
SQL> insert into b values (9.5);
1 row created.
SQL> insert into b values (100.23);
1 row created.

SQL> select * from b;
        ID
----------
         5
        99
       999
      1001
     65536
   1048576
1.0995E+12
        10
       100
9 rows selected.

smallint is really not that "small". Actually it's NUMBER(38): fractional values are rounded
to whole numbers, but the magnitude is practically unlimited.

SQL> insert into c values (5);
1 row created.
SQL> insert into c values (9999);
1 row created.
SQL> insert into c values (92.7);
1 row created.
SQL> insert into c values (1099511627776);
1 row created.

SQL> select * from c;
        ID
----------
         5
      9999
        93
1.0995E+12
=========================
1. NOTES ON PERFORMANCE:
=========================

1.1 POOLS:
==========

-- SHARED POOL:
-- ------------
A literal SQL statement is considered one which uses literals in the predicate(s) rather than
bind variables, where the value of the literal is likely to differ between various executions
of the statement.

Eg 1:  SELECT * FROM emp WHERE ename='CLARK';
       is used by the application instead of
       SELECT * FROM emp WHERE ename=:bind1;
The latter is regarded as a sharable SQL statement for this article, as it can be shared.

-- Hard Parse
If a new SQL statement is issued which does not exist in the shared pool, then it has to be
parsed fully. Eg: Oracle has to allocate memory for the statement from the shared pool, check
the statement syntactically and semantically, etc. This is referred to as a hard parse and is
very expensive in terms of both CPU used and the number of latch gets performed.

-- Soft Parse
If a session issues a SQL statement which is already in the shared pool AND it can use an
existing version of that statement, then this is known as a 'soft parse'. As far as the
application is concerned, it has asked to parse the statement.

If two statements are textually identical but cannot be shared, then these are called
'versions' of the same statement. If Oracle matches to a statement with many versions, it has
to check each version in turn to see if it is truly identical to the statement currently being
parsed. Hence high version counts are best avoided.

The best approach to take is that all SQL should be sharable, unless it is ad hoc or
infrequently used SQL where it is important to give the CBO as much information as possible
in order for it to produce a good execution plan.

-- Eliminating Literal SQL
If you have an existing application, it is unlikely that you could eliminate all literal SQL,
but you should be prepared to eliminate some if it is causing problems. By looking at the
V$SQLAREA view it is possible to see which literal statements are good candidates for
converting to use bind variables. The following query shows SQL in the SGA where there are a
large number of similar statements:

SELECT substr(sql_text,1,40) "SQL", count(*), sum(executions) "TotExecs"
FROM v$sqlarea
WHERE executions < 5
GROUP BY substr(sql_text,1,40)
HAVING count(*) > 30
ORDER BY 2;

The values 40, 5 and 30 are example values, so this query is looking for different statements
whose first 40 characters are the same, which have only been executed a few times each, and of
which there are at least 30 different occurrences in the shared pool. This query uses the idea
that it is common for literal statements to begin "SELECT col1,col2,col3 FROM table WHERE ..."
with the leading portion of each statement being the same.

-- Avoid Invalidations
Some specific operations will change the state of cursors to INVALIDATE. These operations
directly modify the context of the objects associated with the cursors: TRUNCATE, ANALYZE or
DBMS_STATS.GATHER_XXX on tables or indexes, and grant changes on underlying objects. The
associated cursors stay in the SQLAREA, but the next time they are referenced they have to be
reloaded and fully reparsed, so global performance is impacted.
The following query can help identify the cursors concerned:

SELECT substr(sql_text, 1, 40) "SQL", invalidations
from v$sqlarea
order by invalidations DESC;
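As a minimal sketch of the conversion itself (the emp table is just the example table used
above): in PL/SQL, referencing a variable in static SQL automatically turns it into a bind
variable, so repeated executions share one cursor.

DECLARE
  l_ename emp.ename%TYPE := 'CLARK';
  l_row   emp%ROWTYPE;
BEGIN
  -- l_ename is passed as a bind variable, not as a literal
  SELECT * INTO l_row FROM emp WHERE ename = l_ename;
END;
/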
-- CURSOR_SHARING parameter (8.1.6 onwards)
<Parameter:CURSOR_SHARING> is a new parameter introduced in Oracle 8.1.6. It should be used
with caution in this release. If this parameter is set to FORCE, then literals will be
replaced by system-generated bind variables where possible. For multiple similar statements
which differ only in the literals used, this allows the cursors to be shared even though the
application-supplied SQL uses literals. The parameter can be set dynamically at the system or
session level, thus:

  ALTER SESSION SET cursor_sharing = FORCE;
or
  ALTER SYSTEM SET cursor_sharing = FORCE;

or it can be set in the init.ora file.

Note: As the FORCE setting causes system-generated bind variables to be used in place of
literals, a different execution plan may be chosen by the cost based optimizer (CBO), as it
no longer has the literal values available to it when costing the best execution plan.

In Oracle9i, it is possible to set CURSOR_SHARING=SIMILAR. SIMILAR causes statements that may
differ in some literals, but are otherwise identical, to share a cursor, unless the literals
affect either the meaning of the statement or the degree to which the plan is optimized. This
enhancement improves the usability of the parameter for situations where FORCE would normally
cause a different, undesired execution plan. With CURSOR_SHARING=SIMILAR, Oracle determines
which literals are "safe" for substitution with bind variables. This will result in some SQL
not being shared in an attempt to provide a more efficient execution plan.
-- SESSION_CACHED_CURSORS parameter
<Parameter:SESSION_CACHED_CURSORS> is a numeric parameter which can be set at instance level
or at session level using the command:

  ALTER SESSION SET session_cached_cursors = NNN;

The value NNN determines how many 'cached' cursors there can be in your session. Whenever a
statement is parsed, Oracle first looks at the statements pointed to by your private session
cache: if a sharable version of the statement exists, it can be used. This provides a
shortcut access to frequently parsed statements that uses less CPU and far fewer latch gets
than a soft or hard parse.
To get placed in the session cache, the same statement has to be parsed 3 times within the
same cursor - a pointer to the shared cursor is then added to your session cache. If all
session cache cursors are in use, then the least recently used entry is discarded.
If you do not have this parameter set already, then it is advisable to set it to a starting
value of about 50. The statistics section of the bstat/estat report includes a value for
'session cursor cache hits', which shows whether the cursor cache is giving any benefit. The
size of the cursor cache can then be increased or decreased as necessary.
SESSION_CACHED_CURSORS are particularly useful with Oracle Forms applications when forms are
frequently opened and closed.
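The same 'session cursor cache hits' statistic can also be checked per session while the
instance runs; a minimal sketch:

SELECT s.sid, n.name, st.value
FROM   v$statname n, v$sesstat st, v$session s
WHERE  n.statistic# = st.statistic#
AND    st.sid       = s.sid
AND    n.name IN ('session cursor cache hits', 'parse count (total)')
ORDER BY s.sid, n.name;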
-- SHARED_POOL_RESERVED_SIZE parameter
There are quite a few notes explaining <Parameter:SHARED_POOL_RESERVED_SIZE> already in
circulation. The parameter was introduced in Oracle 7.1.5 and provides a means of reserving a
portion of the shared pool for large memory allocations. The reserved area comes out of the
shared pool itself.
From a practical point of view one should set SHARED_POOL_RESERVED_SIZE to about 10% of
SHARED_POOL_SIZE, unless either the shared pool is very large OR
SHARED_POOL_RESERVED_MIN_ALLOC has been set lower than the default value:
- If the shared pool is very large, then 10% may waste a significant amount of memory when a
  few Mb will suffice.
- If SHARED_POOL_RESERVED_MIN_ALLOC has been lowered, then many space requests may be
  eligible to be satisfied from this portion of the shared pool, and so 10% may be too little.
It is easy to monitor the space usage of the reserved area using the
<View:V$SHARED_POOL_RESERVED>, which has a column FREE_SPACE.
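For example (a minimal sketch of such a check), the reserved area can be inspected with:

SELECT free_space, used_space, requests, request_misses, request_failures
FROM   v$shared_pool_reserved;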
-- SHARED_POOL_RESERVED_MIN_ALLOC parameter
In Oracle8i this parameter is hidden. SHARED_POOL_RESERVED_MIN_ALLOC should generally be left
at its default value, although in certain cases values of 4100 or 4200 may help relieve some
contention on a heavily loaded shared pool.

-- SHARED_POOL_SIZE parameter
<Parameter:SHARED_POOL_SIZE> controls the size of the shared pool itself. The size of the
shared pool can impact performance. If it is too small, then it is likely that sharable
information will be flushed from the pool and then later need to be reloaded (rebuilt). If
there is heavy use of literal SQL and the shared pool is too large, then over time a lot of
small chunks of memory can build up on the internal memory freelists, causing the shared pool
latch to be held for longer, which in turn can impact performance. In this situation a
smaller shared pool may perform better than a larger one. This problem is greatly reduced in
8.0.6 and in 8.1.6 onwards due to the enhancement in <bug:986149>.
NB: The shared pool itself should never be made so large that paging or swapping occurs, as
performance can then decrease by many orders of magnitude.
-- _SQLEXEC_PROGRESSION_COST parameter (8.1.5 onwards)
This is a hidden parameter which was introduced in Oracle 8.1.5. The parameter is included
here as the default setting has caused some problems with SQL sharability. Setting this
parameter to 0 can avoid these issues, which result in multiple versions of statements in the
shared pool. Eg: Add the following to the init.ora file:

  # _SQLEXEC_PROGRESSION_COST is set to ZERO to avoid SQL sharing issues
  # See Note:62143.1 for details
  _sqlexec_progression_cost=0

Note that a side effect of setting this to '0' is that the V$SESSION_LONGOPS view is not
populated by long running queries.

-- MTS, Shared Server and XA
The multi-threaded server (MTS) adds to the load on the shared pool and can contribute to any
problems, as the User Global Area (UGA) resides in the shared pool. This is also true of XA
sessions in Oracle7, as their UGA is located in the shared pool. (In Oracle8/8i XA sessions
do NOT put their UGA in the shared pool.) In Oracle8 the Large Pool can be used for MTS,
reducing its impact on shared pool activity - however, memory allocations in the Large Pool
still make use of the "shared pool latch". See <Note:62140.1> for a description of the Large
Pool.
Using dedicated connections rather than MTS causes the UGA to be allocated out of process
private memory rather than the shared pool. Private memory allocations do not use the
"shared pool latch", and so a switch from MTS to dedicated connections can help reduce
contention in some cases.
In Oracle9i, MTS was renamed to "Shared Server". For the purposes of the shared pool, the
behaviour is essentially the same.

Useful SQL for looking at memory and Shared Pool problems
----------------------------------------------------------

SGA layout:
-----------
SELECT * FROM V$SGA;

free memory shared pool:
------------------------
SELECT * FROM v$sgastat WHERE name = 'free memory';

hit ratio shared pool:
----------------------
SELECT gethits, gets, gethitratio FROM v$librarycache WHERE namespace = 'SQL AREA';

SELECT SUM(PINS) "EXECUTIONS", SUM(RELOADS) "CACHE MISSES WHILE EXECUTING"
FROM V$LIBRARYCACHE;

SELECT sum(sharable_mem) FROM v$db_object_cache;

statistics:
-----------
SELECT class, value, name FROM v$sysstat;

Executions:
-----------
SELECT substr(sql_text,1,90) "SQL", count(*), sum(executions) "TotExecs"
FROM v$sqlarea
WHERE executions > 5
GROUP BY substr(sql_text,1,90)
HAVING count(*) > 10
ORDER BY 2;

The values 40, 5 and 30 are example values, so this query is looking for different statements
whose first 40 characters are the same, which have only been executed a few times each, and
of which there are at least 30 different occurrences in the shared pool. This query uses the
idea that it is common for literal statements to begin
"SELECT col1,col2,col3 FROM table WHERE ..." with the leading portion of each statement being
the same.
V$SQLAREA:
----------
SQL_TEXT            VARCHAR2(1000)  First thousand characters of the SQL text for the current cursor
SHARABLE_MEM        NUMBER          Amount of shared memory used by a cursor. If multiple child cursors
                                    exist, then the sum of all shared memory used by all child cursors.
PERSISTENT_MEM      NUMBER          Fixed amount of memory used for the lifetime of an open cursor. If
                                    multiple child cursors exist, the fixed sum of memory used for the
                                    lifetime of all the child cursors.
RUNTIME_MEM         NUMBER          Fixed amount of memory required during execution of a cursor. If
                                    multiple child cursors exist, the fixed sum of all memory required
                                    during execution of all the child cursors.
SORTS               NUMBER          Sum of the number of sorts that were done for all the child cursors
VERSION_COUNT       NUMBER          Number of child cursors that are present in the cache under this parent
LOADED_VERSIONS     NUMBER          Number of child cursors that are present in the cache and have their
                                    context heap (KGL heap 6) loaded
OPEN_VERSIONS       NUMBER          The number of child cursors that are currently open under this current parent
USERS_OPENING       NUMBER          The number of users that have any of the child cursors open
FETCHES             NUMBER          Number of fetches associated with the SQL statement
EXECUTIONS          NUMBER          Total number of executions, totalled over all the child cursors
USERS_EXECUTING     NUMBER          Total number of users executing the statement over all child cursors
LOADS               NUMBER          The number of times the object was loaded or reloaded
FIRST_LOAD_TIME     VARCHAR2(19)    Timestamp of the parent creation time
INVALIDATIONS       NUMBER          Total number of invalidations over all the child cursors
PARSE_CALLS         NUMBER          The sum of all parse calls to all the child cursors under this parent
DISK_READS          NUMBER          The sum of the number of disk reads over all child cursors
BUFFER_GETS         NUMBER          The sum of buffer gets over all child cursors
ROWS_PROCESSED      NUMBER          The total number of rows processed on behalf of this SQL statement
COMMAND_TYPE        NUMBER          The Oracle command type definition
OPTIMIZER_MODE      VARCHAR2(10)    Mode under which the SQL statement is executed
PARSING_USER_ID     NUMBER          The user ID of the user that has parsed the very first cursor under this parent
PARSING_SCHEMA_ID   NUMBER          The schema ID that was used to parse this child cursor
KEPT_VERSIONS       NUMBER          The number of child cursors that have been marked to be kept using the
                                    DBMS_SHARED_POOL package
ADDRESS             RAW(4)          The address of the handle to the parent for this cursor
HASH_VALUE          NUMBER          The hash value of the parent statement in the library cache
MODULE              VARCHAR2(64)    Contains the name of the module that was executing at the time that the
                                    SQL statement was first parsed, as set by calling DBMS_APPLICATION_INFO.SET_MODULE
MODULE_HASH         NUMBER          The hash value of the module that is named in the MODULE column
ACTION              VARCHAR2(64)    Contains the name of the action that was executing at the time that the
                                    SQL statement was first parsed, as set by calling DBMS_APPLICATION_INFO.SET_ACTION
ACTION_HASH         NUMBER          The hash value of the action that is named in the ACTION column
SERIALIZABLE_ABORTS NUMBER          Number of times the transaction fails to serialize, producing ORA-08177
                                    errors, totalled over all the child cursors
IS_OBSOLETE         VARCHAR2(1)     Indicates whether the cursor has become obsolete (Y) or not (N). This can
                                    happen if the number of child cursors is too large.
CHILD_LATCH         NUMBER          Child latch number that is protecting the cursor
V$SQL:
------
V$SQL lists statistics on the shared SQL area without the GROUP BY clause and contains one
row for each child of the original SQL text entered.

Column              Datatype        Description
SQL_TEXT            VARCHAR2(1000)  First thousand characters of the SQL text for the current cursor
SHARABLE_MEM        NUMBER          Amount of shared memory used by this child cursor (in bytes)
PERSISTENT_MEM      NUMBER          Fixed amount of memory used for the lifetime of this child cursor (in bytes)
RUNTIME_MEM         NUMBER          Fixed amount of memory required during the execution of this child cursor
SORTS               NUMBER          Number of sorts that were done for this child cursor
LOADED_VERSIONS     NUMBER          Indicates whether the context heap is loaded (1) or not (0)
OPEN_VERSIONS       NUMBER          Indicates whether the child cursor is locked (1) or not (0)
USERS_OPENING       NUMBER          Number of users executing the statement
FETCHES             NUMBER          Number of fetches associated with the SQL statement
EXECUTIONS          NUMBER          Number of executions that took place on this object since it was brought
                                    into the library cache
USERS_EXECUTING     NUMBER          Number of users executing the statement
LOADS               NUMBER          Number of times the object was either loaded or reloaded
FIRST_LOAD_TIME     VARCHAR2(19)    Timestamp of the parent creation time
INVALIDATIONS       NUMBER          Number of times this child cursor has been invalidated
PARSE_CALLS         NUMBER          Number of parse calls for this child cursor
DISK_READS          NUMBER          Number of disk reads for this child cursor
BUFFER_GETS         NUMBER          Number of buffer gets for this child cursor
ROWS_PROCESSED      NUMBER          Total number of rows the parsed SQL statement returns
COMMAND_TYPE        NUMBER          Oracle command type definition
OPTIMIZER_MODE      VARCHAR2(10)    Mode under which the SQL statement is executed
OPTIMIZER_COST      NUMBER          Cost of this query given by the optimizer
PARSING_USER_ID     NUMBER          User ID of the user who originally built this child cursor
PARSING_SCHEMA_ID   NUMBER          Schema ID that was used to originally build this child cursor
KEPT_VERSIONS       NUMBER          Indicates whether this child cursor has been marked to be kept pinned in
                                    the cache using the DBMS_SHARED_POOL package
ADDRESS             RAW(4)          Address of the handle to the parent for this cursor
TYPE_CHK_HEAP       RAW(4)          Descriptor of the type check heap for this child cursor
HASH_VALUE          NUMBER          Hash value of the parent statement in the library cache
PLAN_HASH_VALUE     NUMBER          Numerical representation of the SQL plan for this cursor. Comparing one
                                    PLAN_HASH_VALUE to another easily identifies whether or not two plans
                                    are the same (rather than comparing the two plans line by line).
CHILD_NUMBER        NUMBER          Number of this child cursor
MODULE              VARCHAR2(64)    Contains the name of the module that was executing at the time that the
                                    SQL statement was first parsed, which is set by calling DBMS_APPLICATION_INFO.SET_MODULE
MODULE_HASH         NUMBER          Hash value of the module listed in the MODULE column
ACTION              VARCHAR2(64)    Contains the name of the action that was executing at the time that the
                                    SQL statement was first parsed, which is set by calling DBMS_APPLICATION_INFO.SET_ACTION
ACTION_HASH         NUMBER          Hash value of the action listed in the ACTION column
SERIALIZABLE_ABORTS NUMBER          Number of times the transaction fails to serialize, producing ORA-08177
                                    errors, per cursor
OUTLINE_CATEGORY    VARCHAR2(64)    If an outline was applied during construction of the cursor, then this
                                    column displays the category of that outline. Otherwise the column is left blank.
CPU_TIME            NUMBER          CPU time (in microseconds) used by this cursor for parsing/executing/fetching
ELAPSED_TIME        NUMBER          Elapsed time (in microseconds) used by this cursor for parsing/executing/fetching
OUTLINE_SID         NUMBER          Outline session identifier
CHILD_ADDRESS       RAW(4)          Address of the child cursor
SQLTYPE             NUMBER          Denotes the version of the SQL language used for this statement
REMOTE              VARCHAR2(1)     (Y/N) Identifies whether the cursor is remote mapped or not
OBJECT_STATUS       VARCHAR2(19)    Status of the cursor (VALID/INVALID)
LITERAL_HASH_VALUE  NUMBER          Hash value of the literals which are replaced with system-generated bind
                                    variables and are to be matched, when CURSOR_SHARING is used. This is not
                                    the hash value for the SQL statement. If CURSOR_SHARING is not used, then
                                    the value is 0.
LAST_LOAD_TIME      VARCHAR2(19)
IS_OBSOLETE         VARCHAR2(1)     Indicates whether the cursor has become obsolete (Y) or not (N). This can
                                    happen if the number of child cursors is too large.
CHILD_LATCH         NUMBER          Child latch number that is protecting the cursor
Checking for high version counts:
---------------------------------
SELECT address, hash_value, version_count, users_opening, users_executing,
       substr(sql_text,1,40) "SQL"
FROM v$sqlarea
WHERE version_count > 10;

"Versions" of a statement occur where the SQL is character-for-character identical but the
underlying objects or binds etc. are different.

Finding statement/s which use lots of shared pool memory:
----------------------------------------------------------
SELECT substr(sql_text,1,60) "Stmt", count(*),
       sum(sharable_mem)  "Mem",
       sum(users_opening) "Open",
       sum(executions)    "Exec"
FROM v$sql
GROUP BY substr(sql_text,1,60)
HAVING sum(sharable_mem) > 20000;

SELECT substr(sql_text,1,100) "Stmt", count(*),
       sum(sharable_mem)  "Mem",
       sum(users_opening) "Open",
       sum(executions)    "Exec"
FROM v$sql
GROUP BY substr(sql_text,1,100)
HAVING sum(executions) > 200;

SELECT substr(sql_text,1,100) "Stmt", count(*), sum(executions) "Exec"
FROM v$sql
GROUP BY substr(sql_text,1,100)
HAVING sum(executions) > 200;

where MEMSIZE is about 10% of the shared pool size in bytes. This should show if there are
similar literal statements, or multiple versions of a statement, which account for a large
portion of the memory in the shared pool.
1.2 statistics:
---------------
Rule based / Cost based
- apply EXPLAIN PLAN in query
- ANALYZE COMMAND:

  ANALYZE TABLE EMPLOYEE COMPUTE STATISTICS;
  ANALYZE TABLE EMPLOYEE COMPUTE STATISTICS FOR ALL INDEXES;
  ANALYZE INDEX scott.indx1 COMPUTE STATISTICS;
  ANALYZE TABLE EMPLOYEE ESTIMATE STATISTICS SAMPLE 10 PERCENT;
  ANALYZE TABLE EMPLOYEE DELETE STATISTICS;

- DBMS_UTILITY.ANALYZE_SCHEMA() procedure:

  DBMS_UTILITY.ANALYZE_SCHEMA (
     schema            VARCHAR2,
     method            VARCHAR2,
     estimate_rows     NUMBER   DEFAULT NULL,
     estimate_percent  NUMBER   DEFAULT NULL,
     method_opt        VARCHAR2 DEFAULT NULL);

  DBMS_UTILITY.ANALYZE_DATABASE (
     method            VARCHAR2,
     estimate_rows     NUMBER   DEFAULT NULL,
     estimate_percent  NUMBER   DEFAULT NULL,
     method_opt        VARCHAR2 DEFAULT NULL);

  method = compute, estimate, delete

To execute:
  exec DBMS_UTILITY.ANALYZE_SCHEMA('CISADM','COMPUTE');
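Since DBMS_STATS.GATHER_XXX was mentioned earlier as the other way to gather optimizer
statistics, here is a minimal sketch of the equivalent calls (CISADM and EMPLOYEE are just
the example schema and table used above):

  exec DBMS_STATS.GATHER_SCHEMA_STATS(ownname => 'CISADM', cascade => TRUE);
  exec DBMS_STATS.GATHER_TABLE_STATS(ownname => 'CISADM', tabname => 'EMPLOYEE', cascade => TRUE);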
1.3 Storage parameters:
-----------------------
segment: pctfree, pctused, number and size of extents in the STORAGE clause
- very low updates : pctfree low
- if updates, oltp : pctfree 10, pctused 40
- if only inserts  : pctfree low
1.4 rebuild indexes on a regular basis:
---------------------------------------
alter index SCOTT.EMPNO_INDEX rebuild
tablespace INDEX
storage (initial 5M next 5M pctincrease 0);

You should next use the ANALYZE TABLE ... COMPUTE STATISTICS command.

1.5 Is an index used in a query?:
---------------------------------
The WHERE clause of a query must use the 'leading column' of (one of the) index(es).
Suppose an index 'indx1' exists on EMPLOYEE(city, state, zip).
Suppose a user issues the query: SELECT .. FROM EMPLOYEE WHERE state='NY'
Then this query will not use that index! Therefore you must pay attention to the leading
(cardinal) column of any index, as in the sketch below.
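A minimal sketch (hypothetical EMPLOYEE table):

  CREATE INDEX indx1 ON employee (city, state, zip);

  -- Can use indx1: the predicate includes the leading column CITY
  SELECT * FROM employee WHERE city = 'NEW YORK' AND state = 'NY';

  -- Will generally not use indx1 as a normal range scan: the leading column is missing
  SELECT * FROM employee WHERE state = 'NY';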
1.6 set transaction parameters:
-------------------------------
ONLY ORACLE 7,8,8i:
Suppose you must perform an action which will generate a lot of redo and rollback. If you
want to influence which rollback segment will be used in your transactions, you can use the
statement:

  set transaction use rollback segment SEGMENT_NAME

1.7 Reduce fragmentation of a dictionary managed tablespace:
-------------------------------------------------------------
alter tablespace DATA coalesce;
1.8 Normalisation of tables:
----------------------------
The more the tables are 'normalized', the higher the performance cost for queries joining those tables.

1.9 Commit after every N rows:
------------------------------
declare
  i number := 0;
  cursor s1 is SELECT * FROM tab1 WHERE col1 = 'value1' FOR UPDATE;
begin
  for c1 in s1 loop
    update tab1 set col1 = 'value2' WHERE current of s1;
    i := i + 1;
    if i > 1000 then
      commit;
      i := 0;
    end if;
  end loop;
  commit;
end;
/

-- ------------------------------
CREATE TABLE TEST
( ID    NUMBER(10)   NULL,
  DATUM DATE         NULL,
  NAME  VARCHAR2(10) NULL
);

declare
  i number := 1000;
begin
  -- commit after every record
  while i>1 loop
    insert into TEST values (1, sysdate+i, 'joop');
    i := i - 1;
    commit;
  end loop;
  commit;
end;
/

-- ------------------------------
CREATE TABLE TEST2
( i     NUMBER       NULL,
  ID    NUMBER(10)   NULL,
  DATUM DATE         NULL,
  DAG   VARCHAR2(10) NULL,
  NAME  VARCHAR2(10) NULL
);

declare
  i number := 1;
  j date;
  k varchar2(10);
begin
  while i<1000000 loop
    j := sysdate+i;
    k := TO_CHAR(SYSDATE+i,'DAY');
    insert into TEST2 values (i, 1, j, k, 'joop');
    i := i + 1;
    commit;
  end loop;
  commit;
end;
/

-- ------------------------------
CREATE TABLE TEST3
( ID    NUMBER(10)   NULL,
  DATUM DATE         NULL,
  DAG   VARCHAR2(10) NULL,
  VORIG VARCHAR2(10) NULL,
  NAME  VARCHAR2(10) NULL
);

declare
  i number := 1;
  j date;
  k varchar2(10);
  l varchar2(10);
begin
  while i<1000 loop
    j := sysdate+i;
    k := TO_CHAR(SYSDATE+i,'DAY');
    l := TO_CHAR(SYSDATE+i-1,'DAY');
    insert into TEST3 (ID,DATUM,DAG,VORIG,NAME) values (i, j, k, l, 'joop');
    i := i + 1;
    commit;
  end loop;
  commit;
end;
/

1.10 Explain plan command, autotrace:
-------------------------------------

1. explain plan command:
------------------------
First execute the utlxplan.sql script. This script will create the PLAN_TABLE table, needed for
storage of the execution plan data. Now it is possible to do the following:

-- optionally, delete the former plan data
DELETE FROM plan_table WHERE statement_id = 'XXX';
COMMIT;

-- now you can run the query that is to be analyzed
EXPLAIN PLAN SET STATEMENT_ID = 'XXX' FOR
SELECT * FROM EMPLOYEE WHERE city > 'Y%';

To view the results, you can use the utlxpls.sql script.
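From 9i Release 2 onward you can also display the plan with DBMS_XPLAN instead of utlxpls.sql;
a minimal sketch:

EXPLAIN PLAN SET STATEMENT_ID = 'XXX' FOR
SELECT * FROM EMPLOYEE WHERE city > 'Y%';

SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY());                     -- last statement explained
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY('PLAN_TABLE','XXX'));   -- a specific statement_id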
2. set autotrace on / off
-------------------------
This also uses the PLAN_TABLE, and the PLUSTRACE role must exist.
If needed, the plustrce.sql script can be run (as SYS).

Remark: execution plan / access path with a join query:
- nested loop: one table is the driving table (full table scan or index access), and the second
  table is accessed via an index on that second table, based on the WHERE clause.
- merge join: if there is no usable index, all rows are fetched, sorted, and then joined into a
  result set.
- hash join: certain init.ora parameters must be set (HASH_JOIN_ENABLED=TRUE, HASH_AREA_SIZE=,
  or via ALTER SESSION SET HASH_JOIN_ENABLED=TRUE). Usually very effective when joining a small
  table to a large table. The small table becomes the driving table in memory, and what follows
  is an algorithm resembling the nested loop. A hash join can also be forced with a hint:

SELECT /*+ USE_HASH(COMPANY) */ COMPANY.Name, SUM(Dollar_Amount)
FROM   COMPANY, SALES
WHERE  COMPANY.Company_ID = SALES.Company_ID
GROUP BY COMPANY.Name;
3. SQL trace and TKPROF
-----------------------
SQL trace can be activated via init.ora or via:

ALTER SESSION SET SQL_TRACE=TRUE;

DBMS_SYSTEM.SET_SQL_TRACE_IN_SESSION(sid, serial#, TRUE);
DBMS_SYSTEM.SET_SQL_TRACE_IN_SESSION(12, 398, TRUE);
DBMS_SYSTEM.SET_SQL_TRACE_IN_SESSION(12, 398, FALSE);
DBMS_SUPPORT.START_TRACE_IN_SESSION(12,398);

Turn SQL tracing on in session 448. The trace information will get written to user_dump_dest.
SQL> exec dbms_system.set_sql_trace_in_session(448,2288,TRUE);

Turn SQL tracing off in session 448:
SQL> exec dbms_system.set_sql_trace_in_session(448,2288,FALSE);

Init.ora:
MAX_DUMP_FILE_SIZE  in OS blocks
SQL_TRACE=TRUE      (can produce very large files; applies to all sessions)
USER_DUMP_DEST=     location of the trace files
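The raw trace file from USER_DUMP_DEST is then formatted with tkprof; a minimal sketch
(the trace and report file names are assumed examples):

tkprof ora9i_ora_12345.trc trace_report.txt explain=scott/tiger sys=no sort=prsela,exeela,fchela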
1.12 If the CBO does not use the best access path: hints in the query:
-----------------------------------------------------------------------
Goal hints:          ALL_ROWS, FIRST_ROWS, CHOOSE, RULE
Access method hints: FULL, ROWID, CLUSTER, HASH, INDEX

SELECT /*+ INDEX(emp emp_pk) */ * FROM emp WHERE empno=12345;

SELECT /*+ RULE */ ename, dname
FROM emp, dept
WHERE emp.deptno=dept.deptno;
==============================================
3. Data dictionary queries regarding performance:
==================================================

3.1 Reads and writes in files:
------------------------------
V$FILEST
AT, V$DATAFILE - Relative File I/O (1) SELECT fs.file#, df.file#, substr(df.name
, 1, 50), fs.phyrds, fs.phywrts, df.status FROM v$filestat fs, v$datafile df WHE
RE fs.file#=df.file# - Relative File I/O (2) set pagesize 60 linesize 80 newpage
0 feedback off
ttitle skip center 'Datafile IO Weights' skip center
column Total_IO  format 999999999
column Weight    format 999.99
column file_name format A40 bre
ak on drive skip 2 compute sum of Weight on Drive SELECT substr(DF.Name, 1, 6) D
rive, DF.Name File_Name, FS.Phyblkrd+FS.Phyblkwrt Total_IO, 100*(FS.Phyblkrd+FS.
Phyblkwrt) / MaxIO Weight FROM V$FILESTAT FS, V$DATAFILE DF, (SELECT MAX(Phyblkr
d+Phyblkwrt) MaxIO FROM V$FILESTAT) WHERE DF.File#=FS.File# ORDER BY Weight desc
/ 3.2 undocumented init parameters: --------------------------------SELECT * FR
OM SYS.X$KSPPI WHERE SUBSTR(KSPPINM,1,1) = '_';

3.3 Will an index be used or not?
---------------------------------
Check DBA_TAB_COLUMNS.NUM_DISTINCT and DBA_TABLES.NUM_ROWS:
if num_distinct comes close to num_rows, an index is favoured over a full table scan.
Check DBA_INDEXES / USER_INDEXES.CLUSTERING_FACTOR:
if clustering_factor is close to the number of blocks, the table is well ordered with respect to the index.

3.4 Quick overview of the buffer cache hit ratio:
-------------------------------------------------
Hit ratio = (LR - PR) / LR
Suppose there are hardly any physical reads (PR), i.e. PR=0; then the hit ratio = LR/LR = 1,
and no blocks are read from disk. In practice the hit ratio should on average be > 0.8 - 0.9.
V$sess_io, v$sysstat and v$session can be queried to determine the hit ratio:

V$sess_io: SELECT sid, consistent_gets, physical_reads FROM v$sess_io;
V$session: SELECT sid, username FROM v$session;
V$sysstat: SELECT name, value FROM v$sysstat
           WHERE name IN ('db block gets', 'consistent gets', 'physical reads');
SELECT (1-(pr.value/(dbg.value+cg.value)))*100 FROM v$sysstat pr, v$sysstat dbg,
v$sysstat cg WHERE pr.name = 'physical reads' AND dbg.name = 'db block gets' AN
D cg.name = 'consistent gets';

-- more extensive query for the hit ratio
CLEAR
SET HEAD ON
SET VERIFY OFF
col HitRatio format 999.99        heading 'Hit Ratio'
col CGets    format 9999999999999 heading 'Consistent Gets'
col DBGets   format 9999999999999 heading 'DB Block Gets'
col PhyGets  format 9999999999999 heading 'Physical Reads'
SELECT substr(Username, 1, 10), v$sess_io.sid, consistent_gets, block_gets, phys
ical_reads, 100*(consistent_gets+block_gets-physical_reads)/ (consistent_gets+bl
ock_gets) HitRatio FROM v$session, v$sess_io WHERE v$session.sid = v$sess_io.sid
AND (consistent_gets+block_gets) > 0 AND Username is NOT NULL / SELECT 'Hit Rat
io' Database, cg.value CGets, db.value DBGets, pr.value PhyGets,
100*(cg.value+db.value-pr.value)/(cg.value+db.value) HitRatio FROM v$sysstat db,
v$sysstat cg, v$sysstat pr WHERE db.name = 'db block gets' AND cg.name = 'consi
stent gets' AND pr.name = 'physical reads' /
3.6 What are the active transactions?
-------------------------------------
SELECT
substr(username, 1, 10), substr(terminal, 1, 10), substr(osuser, 1, 10), t.star
t_time, r.name, t.used_ublk "ROLLB BLKS", decode(t.space, 'YES', 'SPACE TX', dec
ode(t.recursive, 'YES', 'RECURSIVE TX', decode(t.noundo, 'YES', 'NO UNDO TX', t.
status) )) status FROM sys.v_$transaction t, sys.v_$rollname r, sys.v_$session s
WHERE t.xidusn = r.usn AND t.ses_addr = s.saddr;

3.7 SIDs, resource load and locks:
----------------------------------
SELECT sid, lmode, ctime, block F
ROM v$lock SELECT s.sid, substr(s.username, 1, 10), substr(s.schemaname, 1, 10),
substr(s.osuser, 1, 10), substr(s.program, 1, 10), s.command, l.lmode, l.block
FROM v$session s, v$lock l WHERE s.sid=l.sid; SELECT l.addr, s.saddr, l.sid, s.s
id, l.type, l.lmode, s.status, substr(s.schemaname, 1, 10), s.lockwait, s.row_wa
it_obj# FROM v$lock l, v$session s WHERE l.addr=s.saddr SELECT sid, substr(owner
, 1, 10), substr(object, 1, 10) FROM v$access SID Session number that is accessi
ng an object OWNER Owner of the object OBJECT Name of the object TYPE Type ident
ifier for the object SELECT substr(s.username, 1, 10), s.sid, t.log_io, t.phy_io
FROM v$session s, v$transaction t WHERE t.ses_addr=s.saddr 3.8 latch use in SGA
(locks on processes):
---------------------
SELECT c.name,a.gets,a.misses,a.sleeps, a.immediate_gets,a.immediate_misses,b.pi
d FROM v$latch a, v$latchholder b, v$latchname c WHERE a.addr = b.laddr(+) AND a
.latch# = c.latch# AND (c.name like 'redo%' or c.name like 'row%') ORDER BY a.la
tch#; column latch_name format a40 SELECT name latch_name, gets, misses, round(d
ecode(gets-misses,0,1,gets-misses)/ decode(gets,0,1,gets),3) hit_ratio FROM v$la
tch WHERE name = 'redo allocation'; column latch_name format a40 SELECT name lat
ch_name, immediate_gets, immediate_misses, round(decode(immediate_gets-immediate
_misses,0,1, immediate_gets-immediate_misses)/ decode(immediate_gets,0,1,immedia
te_gets),3) hit_ratio FROM v$latch WHERE name = 'redo copy'; column name format
a40 column value format a10 SELECT name,value FROM v$parameter WHERE name in ('l
og_small_entry_max_size','log_simultaneous_copies', 'cpu_count');

-- latches and locks overview
set pagesize 23
set pause on
set pause 'Hit any key...'
col sid      format 999999
col serial#  format 999999
col username format a12 trunc
col process  format a8  trunc
col terminal format a12 trunc
col type     format a12 trunc
col lmode    format a4  trunc
col lrequest format a4  trunc
col object   format a73 trunc
SELECT s.sid, s.serial#, decode(s.process, null, decode(substr(p.username,1,1),
'?', upper(s.osuser), p.username), decode( p.username, 'ORACUSR ', upper(s.osuse
r), s.process) ) process, nvl(s.username, 'SYS ('||substr(p.username,1,4)||')')
username, decode(s.terminal, null, rtrim(p.terminal, chr(0)), upper(s.terminal))
terminal, decode(l.type, -- Long locks 'TM', 'DML/DATA ENQ', 'TX', 'TRANSAC ENQ
', 'UL', 'PLS USR LOCK', -- Short locks
'BL', 'BUF HASH TBL', 'CF', 'CONTROL FILE', 'CI', 'CROSS INST F', 'DF', 'DATA FI
LE ', 'CU', 'CURSOR BIND ', 'DL', 'DIRECT LOAD ', 'DM', 'MOUNT/STRTUP', 'DR', 'R
ECO LOCK ', 'DX', 'DISTRIB TRAN', 'FS', 'FILE SET ', 'IN', 'INSTANCE NUM', 'FI',
'SGA OPN FILE', 'IR', 'INSTCE RECVR', 'IS', 'GET STATE ', 'IV', 'LIBCACHE INV',
'KK', 'LOG SW KICK ', 'LS', 'LOG SWITCH ', 'MM', 'MOUNT DEF ', 'MR', 'MEDIA REC
VRY', 'PF', 'PWFILE ENQ ', 'PR', 'PROCESS STRT', 'RT', 'REDO THREAD ', 'SC', 'SC
N ENQ ', 'RW', 'ROW WAIT ', 'SM', 'SMON LOCK ', 'SN', 'SEQNO INSTCE', 'SQ', 'SEQ
NO ENQ ', 'ST', 'SPACE TRANSC', 'SV', 'SEQNO VALUE ', 'TA', 'GENERIC ENQ ', 'TD'
, 'DLL ENQ ', 'TE', 'EXTEND SEG ', 'TS', 'TEMP SEGMENT', 'TT', 'TEMP TABLE ', 'U
N', 'USER NAME ', 'WL', 'WRITE REDO ', 'TYPE='||l.type) type, decode(l.lmode, 0,
'NONE', 1, 'NULL', 2, 'RS', 3, 'RX', 4, 'S', 5, 'RSX', 6, 'X', to_char(l.lmode)
) lmode, decode(l.request, 0, 'NONE', 1, 'NULL', 2, 'RS', 3, 'RX', 4, 'S', 5, '
RSX', 6, 'X', to_char(l.request) ) lrequest, decode(l.type, 'MR', decode(u.name,
null, 'DICTIONARY OBJECT', u.name||'.'||o.name), 'TD', u.name||'.'||o.name, 'TM
', u.name||'.'||o.name, 'RW', 'FILE#='||substr(l.id1,1,3)|| ' BLOCK#='||substr(l
.id1,4,5)||' ROW='||l.id2, 'TX', 'RS+SLOT#'||l.id1||' WRP#'||l.id2, 'WL', 'REDO
LOG FILE#='||l.id1, 'RT', 'THREAD='||l.id1, 'TS', decode(l.id2, 0, 'ENQUEUE', 'N
EW BLOCK ALLOCATION'), 'ID1='||l.id1||' ID2='||l.id2) object FROM sys.v_$lock l,
sys.v_$session s, sys.obj$ o, sys.user$ u, sys.v_$process p WHERE s.paddr = p.a
ddr(+) AND l.sid = s.sid AND l.id1 = o.obj#(+) AND o.owner# = u.user#(+) AND l.t
ype <> 'MR' UNION ALL /*** LATCH HOLDERS ***/ SELECT s.sid, s.serial#, s.process
, s.username, s.terminal, 'LATCH', 'X', 'NONE', h.name||' ADDR='||rawtohex(laddr
) FROM sys.v_$process p, sys.v_$session s, sys.v_$latchholder h WHERE h.pid = p.
pid AND p.addr = s.paddr UNION ALL /*** LATCH WAITERS ***/ SELECT s.sid, s.seria
l#, s.process, s.username, s.terminal, 'LATCH', 'NONE', 'X', name||' LATCH='||p.
latchwait FROM sys.v_$session s, sys.v_$process p, sys.v_$latch l WHERE latchwai
t is not null AND p.addr = s.paddr
AND p.latchwait = l.addr
/
SELECT v.SID, v.BLOCK_GETS, v.BLOCK_CHANGES, w.USERNAME, w.OSUSER, w.TERMINAL
FROM v$sess_io v, V$session w
WHERE v.SID=w.SID
ORDER BY v.SID;

SQL> desc v$sess_io
 Name                 Type
 -------------------- ------
 SID                  NUMBER
 BLOCK_GETS           NUMBER
 CONSISTENT_GETS      NUMBER
 PHYSICAL_READS       NUMBER
 BLOCK_CHANGES        NUMBER
 CONSISTENT_CHANGES   NUMBER

SQL> desc v$session
(column name / datatype pairs)
 SADDR RAW(8), SID NUMBER, SERIAL# NUMBER, AUDSID NUMBER, PADDR RAW(8), USER# NUMBER,
 USERNAME VARCHAR2(30), COMMAND NUMBER, OWNERID NUMBER, TADDR VARCHAR2(16), LOCKWAIT VARCHAR2(16),
 STATUS VARCHAR2(8), SERVER VARCHAR2(9), SCHEMA# NUMBER, SCHEMANAME VARCHAR2(30), OSUSER VARCHAR2(30),
 PROCESS VARCHAR2(12), MACHINE VARCHAR2(64), TERMINAL VARCHAR2(30), PROGRAM VARCHAR2(48), TYPE VARCHAR2(10),
 SQL_ADDRESS RAW(8), SQL_HASH_VALUE NUMBER, SQL_ID VARCHAR2(13), SQL_CHILD_NUMBER NUMBER,
 PREV_SQL_ADDR RAW(8), PREV_HASH_VALUE NUMBER, PREV_SQL_ID VARCHAR2(13), PREV_CHILD_NUMBER NUMBER,
 PLSQL_ENTRY_OBJECT_ID NUMBER, PLSQL_ENTRY_SUBPROGRAM_ID NUMBER, PLSQL_OBJECT_ID NUMBER, PLSQL_SUBPROGRAM_ID NUMBER,
 MODULE VARCHAR2(48), MODULE_HASH NUMBER, ACTION VARCHAR2(32), ACTION_HASH NUMBER, CLIENT_INFO VARCHAR2(64),
 FIXED_TABLE_SEQUENCE NUMBER, ROW_WAIT_OBJ# NUMBER, ROW_WAIT_FILE# NUMBER, ROW_WAIT_BLOCK# NUMBER, ROW_WAIT_ROW# NUMBER,
 LOGON_TIME DATE, LAST_CALL_ET NUMBER, PDML_ENABLED VARCHAR2(3), FAILOVER_TYPE VARCHAR2(13), FAILOVER_METHOD VARCHAR2(10),
 FAILED_OVER VARCHAR2(3), RESOURCE_CONSUMER_GROUP VARCHAR2(32), PDML_STATUS VARCHAR2(8), PDDL_STATUS VARCHAR2(8),
 PQ_STATUS VARCHAR2(8), CURRENT_QUEUE_DURATION NUMBER, CLIENT_IDENTIFIER VARCHAR2(64),
 BLOCKING_SESSION_STATUS VARCHAR2(11), BLOCKING_INSTANCE NUMBER, BLOCKING_SESSION NUMBER,
 SEQ# NUMBER, EVENT# NUMBER, EVENT VARCHAR2(64), P1TEXT VARCHAR2(64), P1 NUMBER, P1RAW RAW(8),
 P2TEXT VARCHAR2(64), P2 NUMBER, P2RAW RAW(8), P3TEXT VARCHAR2(64), P3 NUMBER, P3RAW RAW(8),
 WAIT_CLASS_ID NUMBER, WAIT_CLASS# NUMBER, WAIT_CLASS VARCHAR2(64), WAIT_TIME NUMBER, SECONDS_IN_WAIT NUMBER,
 STATE VARCHAR2(19), SERVICE_NAME VARCHAR2(64), SQL_TRACE VARCHAR2(8), SQL_TRACE_WAITS VARCHAR2(5), SQL_TRACE_BINDS VARCHAR2(5)
SQL>
======================================================== 4. IMP and EXP, IMPDP a
nd EXPDP, and SQL*Loader Examples ==============================================
========== 4.1 EXPDP and IMPDP examples: =============================
New in Oracle 10g are the impdp and expdp utilities. EXPDP practice/practice P
ARFILE=par1.par EXPDP hr/hr DUMPFILE=export_dir:hr_schema.dmp LOGFILE=export_dir
:hr_schema.explog EXPDP system/******** PARFILE=c:\rmancmd\dpe_1.expctl Oracle 1
0g provides two new views, DBA_DATAPUMP_JOBS and DBA_DATAPUMP_SESSIONS that allo
w the DBA to monitor the progress of all DataPump operations. SELECT owner_name
,job_name ,operation ,job_mode ,state ,degree ,attached_sessions FROM dba_datapu
mp_jobs ; SELECT DPS.owner_name ,DPS.job_name ,S.osuser FROM dba_datapump_sessio
ns DPS ,v$session S WHERE S.saddr = DPS.saddr ; Example 1. EXPDP parfile -------
----------------JOB_NAME=NightlyDRExport DIRECTORY=export_dir DUMPFILE=export_di
r:fulldb_%U.dmp LOGFILE=export_dir:NightlyDRExport.explog FULL=Y PARALLEL=2 FILE
SIZE=650M CONTENT=ALL STATUS=30 ESTIMATE_ONLY=Y Example 2. EXPDP parfile, only f
or getting an estimate of export size ------------------------------------------
--------------------JOB_NAME=EstimateOnly DIRECTORY=export_dir LOGFILE=export_di
r:EstimateOnly.explog FULL=Y CONTENT=DATA_ONLY ESTIMATE=STATISTICS ESTIMATE_ONLY
=Y STATUS=60
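Once a job such as Example 1 (JOB_NAME=NightlyDRExport) is running, you can re-attach to it and
control it interactively; a minimal sketch (replace the password placeholder):

expdp system/password ATTACH=NightlyDRExport

Export> STATUS               -- show the progress of the running job
Export> STOP_JOB=IMMEDIATE   -- stop the job (it can be restarted later with START_JOB)
Export> KILL_JOB             -- abort the job and remove it from DBA_DATAPUMP_JOBS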
Example 3. EXPDP parfile, only 1 schema, writing to multiple files with %U varia
ble, limited to 650M -----------------------------------------------------------
---------------------------------JOB_NAME=SH_TABLESONLY DIRECTORY=export_dir DUM
PFILE=export_dir:SHONLY_%U.dmp LOGFILE=export_dir:SH_TablesOnly.explog SCHEMAS=S
H PARALLEL=2 FILESIZE=650M STATUS=60 Example 4. EXPDP parfile, multiple tables,
writing to multiple files with %U variable, limited ----------------------------
----------------------------------------------------------JOB_NAME=HR_PAYROLL_RE
FRESH DIRECTORY=export_dir DUMPFILE=export_dir:HR_PAYROLL_REFRESH_%U.dmp LOGFILE
=export_dir:HR_PAYROLL_REFRESH.explog STATUS=20 FILESIZE=132K CONTENT=ALL TABLES
=HR.EMPLOYEES,HR.DEPARTMENTS,HR.PAYROLL_CHECKS,HR.PAYROLL_HOURLY,HR.PAYROLL_SALARY,HR.PAYROLL_TRANSACTIONS

Example 5. EXPDP parfile, exports all objects in the HR schema, including metadata, as of just before midnight on April 10, 2005
----
--------------------------------------------------------------------------------
-----------------------------------JOB_NAME=HREXPORT DIRECTORY=export_dir DUMPFI
LE=export_dir:HREXPORT_%U.dmp
LOGFILE=export_dir:2005-04-10_HRExport.explog
SCHEMAS=HR
CONTENT=ALL
FLASHBACK_TIME="TO_TIMESTAMP('04-10-2005 23:59', 'MM-DD-YYYY HH24:MI')"

Example 6. IMPDP parfile, imports data +only+ into selected tables i
n the HR schema, Multiple dump files will be used ------------------------------
--------------------------------------------------------------------------------
------JOB_NAME=HR_PAYROLL_IMPORT DIRECTORY=export_dir DUMPFILE=export_dir:HR_PAY
ROLL_REFRESH_%U.dmp LOGFILE=export_dir:HR_PAYROLL_IMPORT.implog STATUS=20 TABLES
=HR.PAYROLL_CHECKS,HR.PAYROLL_HOURLY,HR.PAYROLL_SALARY,HR.PAYROLL_TRANSACTIONS
CONTENT=DATA_ONLY TABLE_EXISTS_ACTION=TRUNCATE
Example 7. IMPDP parfile,3 tables in the SH schema are the only tables to be ref
reshed,These tables will be truncated before loading ---------------------------
--------------------------------------------------------------------------------
-------------------DIRECTORY=export_dir JOB_NAME=RefreshSHTables DUMPFILE=export
_dir:fulldb_%U.dmp LOGFILE=export_dir:RefreshSHTables.implog STATUS=30 CONTENT=D
ATA_ONLY SCHEMAS=SH INCLUDE=TABLE:"IN('COUNTRIES','CUSTOMERS','PRODUCTS','SALES'
)" TABLE_EXISTS_ACTION=TRUNCATE Example IMPDP parfile,Generates SQLFILE output s
howing the DDL statements,Note that this code is +not+ executed! ---------------
--------------------------------------------------------------------------------
---------------DIRECTORY=export_dir JOB_NAME=GenerateImportDDL DUMPFILE=export_d
ir:hr_payroll_refresh_%U.dmp LOGFILE=export_dir:GenerateImportDDL.implog SQLFILE
=export_dir:GenerateImportDDL.sql INCLUDE=TABLE Example: schedule a procedure wh
ich uses DBMS_DATAPUMP -----------------------------------------------------BEGI
N DBMS_SCHEDULER.CREATE_JOB ( job_name => 'HR_EXPORT' ,job_type => 'PLSQL_BLOCK'
,job_action => 'BEGIN HR.SP_EXPORT;END;' ,start_date => '04/18/2005 23:00:00.00
0000' ,repeat_interval => 'FREQ=DAILY' ,enabled => TRUE ,comments => 'Performs H
R Schema Export nightly at 11 PM' );
END;
/

======================================
How to use the NETWORK_LINK parameter:
======================================

Note 1:
=======
Lora, the DBA at Acm
e Bank, is at the center of attention in a high-profile meeting of the bank's to
p management team. The objective is to identify ways of enabling end users to sl
ice and dice the data in the company's main data warehouse. At the meeting, one
idea presented is to create several small data marts, each based on a particular functional area, that
can each be used by specialized teams. To effectively implement the data mart ap
proach, the data specialists must get data into the data marts quickly and effic
iently. The challenge the team faces is figuring out how to quickly refresh the
warehouse data to the data marts, which run on heterogeneous platforms. And that
's why Lora is at the meeting. What options does she propose for moving the data
? An experienced and knowledgeable DBA, Lora provides the meeting attendees with
three possibilities, as follows: Using transportable tablespaces Using Data Pum
p (Export and Import) Pulling tablespaces This article shows Lora's explanation
of these options, including their implementation details and their pros and cons
. Transportable Tablespaces: Lora starts by describing the transportable tablesp
aces option. The quickest way to transport an entire tablespace to a target syst
em is to simply transfer the tablespace's underlying files, using FTP (file tran
sfer protocol) or rcp (remote copy). However, just copying the Oracle data files
is not sufficient; the target database must recognize and import the files and
the corresponding tablespace before the tablespace data can become available to
end users. Using transportable tablespaces involves copying the tablespace files
and making the data available in the target database. A few checks are necessar
y before this option can be considered. First, for a tablespace TS1 to be transp
orted to a target system, it must be self-contained. That is, all the indexes, p
artitions, and other dependent segments of the tables in the tablespace must be
inside the tablespace. Lora explains that if a set of tablespaces contains all t
he dependent segments, the set is considered to be self-contained. For instance,
if tablespaces TS1 and TS2 are to be transferred as a set and a table in TS1 ha
s an index in TS2, the tablespace set is self-contained. However, if another ind
ex of a table in TS1 is in tablespace TS3, the tablespace set (TS1, TS2) is not
self-contained. To transport the tablespaces, Lora proposes using the Data Pump
Export utility in Oracle Database 10g. Data Pump is Oracle's next-generation dat
a transfer tool, which replaces the earlier Oracle Export (EXP) and Import (IMP)
tools. Unlike those older tools, which use regular SQL to extract and insert da
ta, Data Pump uses proprietary APIs that bypass the SQL buffer, making the proce
ss extremely fast. In addition, Data Pump can extract specific objects, such as
a particular
stored procedure or a set of tables from a particular tablespace. Data Pump Expo
rt and Import are controlled by jobs, which the DBA can pause, restart, and stop
at will. Lora has run a test before the meeting to see if Data Pump can handle
Acme's requirements. Lora's test transports the TS1 and TS2 tablespaces as follo
ws: 1. Check that the set of TS1 and TS2 tablespaces is self- contained. Issue t
he following command: BEGIN SYS.DBMS_TTS.TRANSPORT_SET_CHECK ('TS1','TS2'); END;
2. Identify any nontransportable sets. If no rows are selected, the tablespaces
are self-contained: SELECT * FROM SYS.TRANSPORT_SET_VIOLATIONS; no rows selected
3. Ensure the tablespaces are read-only: SELECT STATUS FROM DBA_TABLESPACES WHE
RE TABLESPACE_NAME IN ('TS1','TS2'); STATUS --------READ ONLY READ ONLY 4. Trans
fer the data files of each tablespace to the remote system, into the directory /
u01/oradata, using a transfer mechanism such as FTP or rcp. 5. In the target dat
abase, create a database link to the source database (named srcdb in the line be
low). CREATE DATABASE LINK srcdb USING 'srcdb'; 6. In the target database, impor
t the tablespaces into the database, using Data Pump Import. impdp lora/lora123
TRANSPORT_DATAFILES="'/u01/oradata/ts1_1.dbf','/u01/oradata/ts2_1.dbf'" NETWORK_
LINK='srcdb'
TRANSPORT_TABLESPACES=\(TS1,TS2\) NOLOGFILE=Y This step makes the TS1 and TS2 ta
blespaces and their data available in the target database. Note that Lora doesn'
t export the metadata from the source database. She merely specifies the value s
rcdb, the database link to the source database, for the parameter NETWORK_LINK i
n the impdp command above. Data Pump Import fetches the necessary metadata from
the source across the database link and re-creates it in the target. 7. Finally,
make the TS1 and TS2 tablespaces in the source database read-write. ALTER TABLE
SPACE TS1 READ WRITE; ALTER TABLESPACE TS2 READ WRITE; Note 2: ======= One of th
e most significant characteristics of an import operation is its mode, because t
he mode largely determines what is imported. The specified mode applies to the s
ource of the operation, either a dump file set or another database if the NETWOR
K_LINK parameter is specified. The NETWORK_LINK parameter initiates a network im
port. This means that the impdp client initiates the import request, typically t
o the local database. That server contacts the remote source database referenced
by the database link in the NETWORK_LINK parameter, retrieves the data, and wri
tes it directly back to the target database. There are no dump files involved. I
n the following example, the source_database_link would be replaced with the nam
e of a valid database link that must already exist. impdp hr/hr TABLES=employees
DIRECTORY=dpump_dir1 NETWORK_LINK=source_database_link EXCLUDE=CONSTRAINT This
example results in an import of the employees table (excluding constraints) from
the source database. The log file is written to dpump_dir1, specified on the DI
RECTORY parameter.
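The scheduler example earlier in this section calls a stored procedure that drives Data Pump
through the DBMS_DATAPUMP API; a minimal sketch of such a procedure is given below. The procedure
name SP_EXPORT, the directory object EXPORT_DIR and the file names are assumptions taken from or
modeled on the earlier examples; to pull a schema across a database link instead (as in the
NETWORK_LINK discussion above) you would pass remote_link to DBMS_DATAPUMP.OPEN.

CREATE OR REPLACE PROCEDURE hr.sp_export AS
  h NUMBER;   -- Data Pump job handle
BEGIN
  -- open a schema-mode export job (add remote_link => 'srcdb' to pull over a db link)
  h := DBMS_DATAPUMP.OPEN(operation => 'EXPORT', job_mode => 'SCHEMA', job_name => 'HR_EXPORT_JOB');

  -- dump file and log file, written to the EXPORT_DIR directory object
  DBMS_DATAPUMP.ADD_FILE(handle => h, filename => 'hr_schema.dmp', directory => 'EXPORT_DIR',
                         filetype => DBMS_DATAPUMP.KU$_FILE_TYPE_DUMP_FILE);
  DBMS_DATAPUMP.ADD_FILE(handle => h, filename => 'hr_schema.explog', directory => 'EXPORT_DIR',
                         filetype => DBMS_DATAPUMP.KU$_FILE_TYPE_LOG_FILE);

  -- restrict the job to the HR schema
  DBMS_DATAPUMP.METADATA_FILTER(handle => h, name => 'SCHEMA_EXPR', value => 'IN (''HR'')');

  DBMS_DATAPUMP.START_JOB(h);
  DBMS_DATAPUMP.DETACH(h);
END;
/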
4.2 Export / Import examples: ============================= In all Oracle versio
ns 7,8,8i,9i,10g you can use the exp and imp utilities. exp system/manager file=
expdat.dmp compress=Y owner=(HARRY, PIET)
exp system/manager file=hr.dmp owner=HR indexes=Y exp system/manager file=expdat
.dmp TABLES=(john.SALES) imp system/manager file=hr.dmp full=Y buffer=64000 comm
it=Y imp system/manager file=expdat.dmp FROMuser=ted touser=john indexes=N commi
t=Y buffer=64000 imp rm_live/rm file=dump.dmp tables=(employee) imp system/manag
er file=expdat.dmp FROMuser=ted touser=john buffer=4194304 c:\> cd [oracle_db_ho
me]\bin c:\> set nls_lang=american_america.WE8ISO8859P15 # export NLS_LANG=AMERI
CAN_AMERICA.UTF8 # export NLS_LANG=AMERICAN_AMERICA.AL32UTF8 c:\> imp system/man
ager fromuser=mis_owner touser=mis_owner file=[yourexport.dmp]

From Oracle8i one can use the QUERY= export parameter to selectively unload a subset of the data
from a table. Look at this example:

exp scott/tiger tables=emp query=\"WHERE dep
tno=10\" -- Export metadata only: The Export utility is used to export the metad
ata describing the objects contained in the transported tablespace. For our exam
ple scenario, the Export command could be: EXP TRANSPORT_TABLESPACE=y TABLESPACE
S=ts_temp_sales FILE=jan_sales.dmp This operation will generate an export file,
jan_sales.dmp. The export file will be small, because it contains only metadata.
In this case, the export file will contain information describing the table tem
p_jan_sales, such as the column names, column datatype, and all other informatio
n that the target Oracle database will need in order to access the objects in ts
_temp_sales. $$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$
$$$$$$$$$$$$$$$ $ Extended example: ----------------CASE 1: ======= We create a
user Albert on a 10g DB. This user will create a couple of tables with referenti
al constraints (PK-FK relations). Then we will export this user, drop the user,
and do an import. See what we have after the import.

-- User:
create user albert identified by albert
default tablespace ts_cdc
temporary tablespace temp
QUOTA 10M ON sysaux
QUOTA 20M ON users
QUOTA 50M ON TS_CDC;
-- GRANTS: GRANT create session TO albert; GRANT create table TO albert; GRANT c
reate sequence TO albert; GRANT create procedure TO albert; GRANT connect TO alb
ert; GRANT resource TO albert; -- connect albert/albert -- create tables create
table LOC -- table of locations ( LOCID int, CITY varchar2(16), constraint pk_lo
c primary key (locid) ); create table DEPT -- table of departments ( DEPID int,
DEPTNAME varchar2(16), LOCID int, constraint pk_dept primary key (depid), constr
aint fk_dept_loc foreign key (locid) references loc(locid) ); create table EMP -
- table of employees ( EMPID int, EMPNAME varchar2(16), DEPID int, constraint pk
_emp primary key (empid), constraint fk_emp_dept foreign key (depid) references
dept(depid) );

-- show constraints:
SQL> select CONSTRAINT_NAME, CONSTRAINT_TYPE, TABLE_NAME, R_CONSTRAINT_NAME from user_constraints;

CONSTRAINT_NAME                C TABLE_NAME                     R_CONSTRAINT_NAME
------------------------------ - ------------------------------ ------------------------------
FK_EMP_DEPT                    R EMP                            PK_DEPT
FK_DEPT_LOC                    R DEPT                           PK_LOC
PK_LOC                         P LOC
PK_DEPT                        P DEPT
PK_EMP                         P EMP

-- insert some data:
INSERT INTO LOC  VALUES (1,'Amsterdam');
INSERT INTO LOC  VALUES (2,'Haarlem');
INSERT INTO LOC  VALUES (3,null);
INSERT INTO LOC  VALUES (4,'Utrecht');
INSERT INTO DEPT VALUES (1,'Sales',1);
INSERT INTO DEPT VALUES (2,'PZ',1);
INSERT INTO DEPT VALUES (3,'Management',2);
INSERT INTO DEPT VALUES (4,'RD',3);
INSERT INTO DEPT VALUES (5,'IT',4);
INSERT INTO EMP  VALUES (1,'Joop',1);
INSERT INTO EMP  VALUES (2,'Gerrit',2);
INSERT INTO EMP  VALUES (3,'Harry',2);
INSERT INTO EMP  VALUES (4,'Christa',3);
INSERT INTO EMP  VALUES (5,null,4);
INSERT INTO EMP  VALUES (6,'Nina',5);
INSERT INTO EMP  VALUES (7,'Nadia',5);
-- make an export C:\oracle\expimp>exp '/@test10g2 as sysdba' file=albert.dat ow
ner=albert Export: Release 10.2.0.1.0 - Production on Sat Mar 1 08:03:59 2008 Co
pyright (c) 1982, 2005, Oracle. All rights reserved.
Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 Producti
on With the Partitioning, OLAP and Data Mining options Export done in WE8MSWIN12
52 character set and AL16UTF16 NCHAR character set server uses AL32UTF8 characte
r set (possible charset conversion) About to export specified users ... . export
ing pre-schema procedural objects and actions . exporting foreign function libra
ry names for user ALBERT . exporting PUBLIC type synonyms . exporting private ty
pe synonyms . exporting object type definitions for user ALBERT About to export
ALBERT's objects ... . exporting database links . exporting sequence numbers . e
xporting cluster definitions . about to export ALBERT's tables via Conventional
Path ... . . exporting table DEPT 5 rows exported . . exporting table EMP 7 rows
exported . . exporting table LOC 4 rows exported . exporting synonyms . exporti
ng views . exporting stored procedures . exporting operators . exporting referen
tial integrity constraints
. exporting triggers . exporting indextypes . exporting bitmap, functional and e
xtensible indexes . exporting posttables actions . exporting materialized views
. exporting snapshot logs . exporting job queues . exporting refresh groups and
children . exporting dimensions . exporting post-schema procedural objects and a
ctions . exporting statistics Export terminated successfully without warnings. C
:\oracle\expimp> -- drop user albert SQL>drop user albert cascade - create user
albert See above -- do the import C:\oracle\expimp>imp '/@test10g2 as sysdba' fi
le=albert.dat fromuser=albert touser=albert Import: Release 10.2.0.1.0 - Product
ion on Sat Mar 1 08:09:26 2008 Copyright (c) 1982, 2005, Oracle. All rights rese
rved.
Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 Producti
on With the Partitioning, OLAP and Data Mining options Export file created by EX
PORT:V10.02.01 via conventional path import done in WE8MSWIN1252 character set a
nd AL16UTF16 NCHAR character set import server uses AL32UTF8 character set (poss
ible charset conversion) . importing ALBERT's objects into ALBERT . . importing
table "DEPT" 5 rows imported . . importing table "EMP" 7 rows imported . . impor
ting table "LOC" 4 rows imported About to enable constraints... Import terminate
d successfully without warnings. C:\oracle\expimp> - connect albert/albert SQL>
select * from emp;

     EMPID EMPNAME               DEPID
---------- ---------------- ----------
         1 Joop                      1
         2 Gerrit                    2
         3 Harry                     2
         4 Christa                   3
         5                           4
         6 Nina                      5
         7 Nadia                     5

7 rows selected.

SQL> select * from loc;

     LOCID CITY
---------- ---------------
         1 Amsterdam
         2 Haarlem
         3
         4 Utrecht

SQL> select * from dept;

     DEPID DEPTNAME              LOCID
---------- ---------------- ----------
         1 Sales                     1
         2 PZ                        1
         3 Management                2
         4 RD                        3
         5 IT                        4
-- show constraints:
SQL> select CONSTRAINT_NAME, CONSTRAINT_TYPE, TABLE_NAME, R_CONSTRAINT_NAME from user_constraints;

CONSTRAINT_NAME                C TABLE_NAME                     R_CONSTRAINT_NAME
------------------------------ - ------------------------------ ------------------------------
FK_DEPT_LOC                    R DEPT                           PK_LOC
FK_EMP_DEPT                    R EMP                            PK_DEPT
PK_DEPT                        P DEPT
PK_EMP                         P EMP
PK_LOC                         P LOC

Everything is back again.

CASE 2:
=======
We are not going to drop the user, but empty the tables:

SQL> alter table dept disable constraint FK_DEPT_LOC;
SQL> alter table emp  disable constraint FK_EMP_DEPT;
SQL> alter table dept disable constraint PK_DEPT;
SQL> alter table emp  disable constraint pk_emp;
SQL> alter table loc  disable constraint pk_loc;
SQL> truncate table emp;
SQL> truncate table loc;
SQL> truncate table dept; -- do the import C:\oracle\expimp>imp '/@test10g2 as s
ysdba' file=albert.dat ignore=y fromuser=albert touser=albert Import: Release 10
.2.0.1.0 - Production on Sat Mar 1 08:25:27 2008 Copyright (c) 1982, 2005, Oracl
e. All rights reserved.
Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 Producti
on With the Partitioning, OLAP and Data Mining options Export file created by EX
PORT:V10.02.01 via conventional path import done in WE8MSWIN1252 character set a
nd AL16UTF16 NCHAR character set import server uses AL32UTF8 character set (poss
ible charset conversion) . importing ALBERT's objects into ALBERT . . importing
table "DEPT" 5 rows imported . . importing table "EMP" 7 rows imported . . impor
ting table "LOC" 4 rows imported About to enable constraints... IMP-00017: follo
wing statement failed with ORACLE error 2270: "ALTER TABLE "EMP" ENABLE CONSTRAI
NT "FK_EMP_DEPT"" IMP-00003: ORACLE error 2270 encountered ORA-02270: no matchin
g unique or primary key for this column-list IMP-00017: following statement fail
ed with ORACLE error 2270: "ALTER TABLE "DEPT" ENABLE CONSTRAINT "FK_DEPT_LOC""
IMP-00003: ORACLE error 2270 encountered ORA-02270: no matching unique or primar
y key for this column-list Import terminated successfully with warnings. So the
data gets imported, but we have a problem with the FOREIGN KEYS:

SQL> select CONSTRAINT_NAME, CONSTRAINT_TYPE, TABLE_NAME, R_CONSTRAINT_NAME, STATUS from user_constraints;

CONSTRAINT_NAME                C TABLE_NAME                     R_CONSTRAINT_NAME              STATUS
------------------------------ - ------------------------------ ------------------------------ --------
FK_DEPT_LOC                    R DEPT                           PK_LOC                         DISABLED
FK_EMP_DEPT                    R EMP                            PK_DEPT                        DISABLED
PK_LOC                         P LOC                                                           DISABLED
PK_EMP                         P EMP                                                           DISABLED
PK_DEPT                        P DEPT                                                          DISABLED

alter table dept enable constraint pk_dept;
alter table emp  enable constraint pk_emp;
alter table loc  enable constraint pk_loc;
alter table dept enable constraint FK_DEPT_LOC;
alter table emp  enable constraint FK_EMP_DEPT;
alter table dept enable constraint PK_DEPT;

SQL> select CONSTRAINT_NAME, CONSTRAINT_TYPE, TABLE_NAME, R_CONSTRAINT_NAME, STATUS from user_constraints;

CONSTRAINT_NAME                C TABLE_NAME                     R_CONSTRAINT_NAME              STATUS
------------------------------ - ------------------------------ ------------------------------ --------
FK_DEPT_LOC                    R DEPT                           PK_LOC                         ENABLED
FK_EMP_DEPT                    R EMP                            PK_DEPT                        ENABLED
PK_DEPT                        P DEPT                                                          ENABLED
PK_EMP                         P EMP                                                           ENABLED
PK_LOC                         P LOC                                                           ENABLED

SQL>

Everything is back again.

$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$
What is exported?
-----------------
Tables, indexes, data, and database links get exported.

Example:
--------
exp system/manager file=oemuser.dmp owner=oemuser

Connected to: Oracle9i Enterprise Edition Release 9.0.1.4.0 - Production
With the Partitioning option
JServer Release 9.0.1.4.0 - Production
Export done in WE8MSWIN1252 character set and AL16UTF16 NCHAR character set

About to export specified users ...
. exporting pre-schema procedural objects and actions
. exporting foreign function library names for user OEMUSER
. exporting object type definitions for user OEMUSER
About to export OEMUSER's objects ...
. exporting database links
. exporting sequence numbers
. exporting cluster definitions
. about to export OEMUSER's tables via Conventional Path ...
. . exporting table                      CUSTOMERS          2 rows exported
. exporting synonyms
. exporting views
. exporting stored procedures
. exporting operators
. exporting referential integrity constraints
. exporting triggers
. exporting indextypes
. exporting bitmap, functional and extensible indexes
. exporting posttables actions
. exporting snapshots
. exporting snapshot logs
. exporting job queues
. exporting refresh groups and children
. exporting dimensions
. exporting post-schema procedural objects and actions
. exporting statistics
Export terminated successfully without warnings.

D:\temp>

Can one import tables to a different tablespace?
-------------------------------------------------
- Import the dump file using the INDEXFILE= option.
- Edit the indexfile: remove remarks and specify the correct tablespaces.
- Run this indexfile against your database; this will create the required tables in the
  appropriate tablespaces.
- Import the table(s) with the IGNORE=Y option.

Or change the default tablespace for the user:
- Revoke the "UNLIMITED TABLESPACE" privilege from the user.
- Revoke the user's quota from the tablespace from where the object was exported.
  This forces the import utility to create tables in the user's default tablespace.
- Make the tablespace to which you want to import the default tablespace for the user.
- Import the table.

Can one export to multipl
e files?/ Can one beat the Unix 2 Gig limit? -----------------------------------
---------------------------------FROM Oracle8i, the export utility supports mult
iple output files. exp SCOTT/TIGER FILE=D:\F1.dmp,E:\F2.dmp FILESIZE=10m LOG=sco
tt.log Use the following technique if you use an Oracle version prior to 8i: Cre
ate a compressed export on the fly. # create a named pipe mknod exp.pipe p
# read the pipe - output to zip file in the background gzip < exp.pipe > scott.e
xp.gz & # feed the pipe exp userid=scott/tiger file=exp.pipe ... Some famous Err
ors: ------------------Error 1: -------EXP-00008: ORACLE error 6550 encountered
ORA-06550: line 1, column 31: PLS-00302: component 'DBMS_EXPORT_EXTENSION' must
be declared 1. The errors indicate that $ORACLE_HOME/rdbms/admin/CATALOG.SQL and
$ORACLE_HOME/rdbms/admin/CATPROC.SQL Should be run again, as has been previousl
y suggested. Were these scripts run connected as SYS? Try SELECT OBJECT_NAME, OB
JECT_TYPE FROM DBA_OBJECTS WHERE STATUS = 'INVALID' AND OWNER = 'SYS'; Do you ha
ve invalid objects? Is DBMS_EXPORT_EXTENSION invalid? If so, try compiling it ma
nually: ALTER PACKAGE DBMS_EXPORT_EXTENSION COMPILE BODY; If you receive errors
during manual compilation, please show errors for further information. 2. Or pos
sibly different imp/exp versions are run to another version of the database. The
problem can be resolved by copying the higher version CATEXP.SQL and executed i
n the lesser version RDBMS. 3. Other fix: If there are problems in exp/imp from
single byte to multibyte databases: - Analyze which tables/rows could be affecte
d by national characters before running the export - Increase the size of affect
ed rows. - Export the table data once again. Error 2: -------EXP-00091: Exportin
g questionable statistics. Hi. This warning is generated because the statistics
are questionable due to the client character set difference from the server char
acter set. There is an article which discusses the causes of questionable statis
tics available via the MetaLink Advanced Search option by Doc ID: Doc ID: 159787
.1 9i: Import STATISTICS=SAFE If you do not want this conversion to occur, you n
eed to ensure the client NLS environment
performing the export is set to match the server. Fix ~~~~ a) If the statistics
of a table are not required to include in export take the export with parameter
STATISTICS=NONE Example: $exp scott/tiger file=emp1.dmp tables=emp STATISTICS=NO
NE b) In case, the statistics are need to be included can use STATISTICS=ESTIMAT
E or COMPUTE (default is Estimate). Error 3: -------EXP-00056: ORA-01403: EXP-00
056: ORA-01403: EXP-00000: ORACLE error 1403 encountered no data found ORACLE er
ror 1403 encountered no data found Export terminated unsuccessfully
You can't export any DB with an exp utility of a newer version. The exp version
must be equal or older than the DB version Doc ID </help/usaeng/Search/search.ht
ml>: Note:281780.1 Content Type: TEXT/PLAIN Subject: Oracle 9.2.0.4.0: Schema Ex
port Fails with ORA-1403 (No Data Found) on Exporting Cluster Definitions Creati
on Date: 29-AUG-2004 Type: PROBLEM Last Revision Date: 29-AUG-2004 Status: PUBLI
SHED The information in this article applies to: - Oracle Server - Enterprise Ed
ition - Version: 9.2.0.4 to 9.2.0.4 - Oracle Server - Personal Edition - Version
: 9.2.0.4 to 9.2.0.4 - Oracle Server - Standard Edition - Version: 9.2.0.4 to 9.
2.0.4 This problem can occur on any platform. ERRORS -----EXP-56 ORACLE error en
countered ORA-1403 no data found EXP-0: Export terminated unsuccessfully SYMPTOM
S -------A schema level export with the 9.2.0.4 export utility from a 9.2.0.4 or
higher release database in which XDB has been installed, fails when exporting t
he cluster definitions with: ... . exporting cluster definitions EXP-00056: ORAC
LE error 1403 encountered ORA-01403: no data found EXP-00000: Export terminated
unsuccessfully You can confirm that XDB has been installed in the database:
SQL> SELECT substr(comp_id,1,15) comp_id, status, substr(version,1,10) version,
substr(comp_name,1,30) comp_name FROM dba_registry ORDER BY 1; COMP_ID ---------
-----... XDB XML XOQ STATUS VERSION COMP_NAME ----------- ---------- -----------
------------------INVALID VALID LOADED 9.2.0.4.0 9.2.0.6.0 9.2.0.4.0 Oracle XML
Database Oracle XDK for Java Oracle OLAP API
You create a trace file of the ORA-1403 error: SQL> SHOW PARAMETER user_dump SQL
> ALTER SYSTEM SET EVENTS '1403 trace name errorstack level 3'; System altered.
-- Re-run the export SQL> ALTER SYSTEM SET EVENTS '1403 trace name errorstack of
f'; System altered. The trace file that was written to your USER_DUMP_DEST direc
tory, shows: ksedmp: internal or fatal error ORA-01403: no data found Current SQ
L statement for this session: SELECT xdb_uid FROM SYS.EXU9XDBUID You can confirm
that you have no invalid XDB objects in the database: SQL> SET lines 200 SQL> S
ELECT status, object_id, object_type, owner||'.'||object_name "OWNER.OBJECT" FRO
M dba_objects WHERE owner='XDB' AND status != 'VALID' ORDER BY 4,2; no rows sele
cted Note: If you do have invalid XDB objects, and the same ORA-1403 error occur
s when performing a full database export, see the solution mentioned in: [NOTE:2
55724.1] <ml2_documents.showDocument?p_id=255724.1&p_database_id=NOT> "Oracle 9i
: Full Export Fails with ORA-1403 (No Data Found) on Exporting Cluster Defintion
s" CHANGES ------You recently restored the database from a backup or you recreat
ed the controlfile, or you performed Operating System actions on your database t
empfiles. CAUSE
----The Temporary tablespace does not have any tempfiles. Note that the errors a
re different when exporting with a 9.2.0.3 or earlier export utility: . exportin
g cluster definitions EXP-00056: ORACLE error 1157 encountered ORA-01157: cannot
identify/lock data file 201 - see DBWR trace file ORA-01110: data file 201: 'M:
\ORACLE\ORADATA\M9201WA\TEMP01.DBF' ORA-06512: at "SYS.DBMS_LOB", line 424 ORA-0
6512: at "SYS.DBMS_METADATA", line 1140 ORA-06512: at line 1 EXP-00000: Export t
erminated unsuccessfully The errors are also different when exporting with a 9.2
.0.5 or later export utility: . exporting cluster definitions EXP-00056: ORACLE
error 1157 encountered ORA-01157: cannot identify/lock data file 201 - see DBWR
trace file ORA-01110: data file 201: 'M:\ORACLE\ORADATA\M9205WA\TEMP01.DBF' EXP-
00000: Export terminated unsuccessfully FIX --1. If the controlfile does not hav
e any reference to the tempfile(s), add the tempfile(s): SQL> SET lines 200 SQL>
SELECT status, enabled, name FROM v$tempfile; no rows selected SQL> ALTER TABLE
SPACE temp ADD TEMPFILE 'M:\ORACLE\ORADATA\M9204WA\TEMP01.DBF' REUSE; or: If the
controlfile has a reference to the tempfile(s), but the files are missing on di
sk, re-create the temporary tablespace, e.g.: SQL> SET lines 200 SQL> CREATE TEM
PORARY TABLESPACE temp2 TEMPFILE 'M:\ORACLE\ORADATA\M9204WA\TEMP201.DBF' SIZE 10
0m AUTOEXTEND ON NEXT 100M MAXSIZE 2000M; SQL> ALTER DATABASE DEFAULT TEMPORARY
TABLESPACE temp2; SQL> DROP TABLESPACE temp; SQL> CREATE TEMPORARY TABLESPACE te
mp TEMPFILE 'M:\ORACLE\ORADATA\M9204WA\TEMP01.DBF' SIZE 100m AUTOEXTEND ON NEXT
100M MAXSIZE 2000M; SQL> ALTER DATABASE DEFAULT TEMPORARY TABLESPACE temp; SQL>
SHUTDOWN IMMEDIATE SQL> STARTUP SQL> DROP TABLESPACE temp2 INCLUDING CONTENTS AN
D DATAFILES;
2. Now re-run the export. Other errors: ------------Doc ID </help/usaeng/Search/
search.html>: Note:175624.1 Content Type: TEXT/X-HTML Subject: Oracle Server - E
xport and Import FAQ Creation Date: 08-FEB-2002 Type: FAQ Last Revision Date: 16
-FEB-2005 Status: PUBLISHED PURPOSE ======= This Frequently Asked Questions (FAQ
) provides common Export and Import issues in the following sections: - GENERIC
- LARGE FILES - INTERMEDIA - TOP EXPORT DEFECTS - COMPATIBILITY - TABLESPACE - A
DVANCED QUEUING - TOP IMPORT DEFECTS - PARAMETERS - ORA-942 - REPLICATION - PERF
ORMANCE - NLS - FREQUENT ERRORS GENERIC ======= Question: What is actually happe
ning when I export and import data? See Note 61949.1 </metalink/plsql/showdoc?db
=NOT&id=61949.1> "Overview of Export and Import in Oracle7" Question: What is im
portant when doing a full database export or import? See Note 10767.1 </metalink
/plsql/showdoc?db=NOT&id=10767.1> "How to perform full system Export/Import" Que
stion: Can data corruption occur using export & import (version 8.1.7.3 to 9.2.0
)? See Note 199416.1 </metalink/plsql/showdoc?db=NOT&id=199416.1> "ALERT: EXP Ca
n Produce Dump File with Corrupted Data" Question: How to Connect AS SYSDBA when
Using Export or Import? See Note 277237.1 </metalink/plsql/showdoc?db=NOT&id=27
7237.1> "How to Connect AS SYSDBA when Using Export or Import" COMPATIBILITY ===
========== Question: Which version should I use when moving data between differe
nt database releases? See Note 132904.1 </metalink/plsql/showdoc?db=NOT&id=13290
4.1> "Compatibility Matrix for Export & Import Between Different Oracle Versions
" See Note 291024.1 </metalink/plsql/showdoc?db=NOT&id=291024.1> "Compatibility
and New Features when Transporting Tablespaces with Export and Import" See Note
76542.1 </metalink/plsql/showdoc?db=NOT&id=76542.1> "NT: Exporting from Oracle8,
Importing Into Oracle7" Question: How to resolve the IMP-69 error when importin
g into a database? See Note 163334.1 </metalink/plsql/showdoc?db=NOT&id=163334.1
> "Import Gets IMP-00069 when Importing 8.1.7 Export" See Note 1019280.102 </met
alink/plsql/showdoc?db=NOT&id=1019280.102> "IMP-69 on Import"
PARAMETERS ========== Question: What is the difference between a Direct Path and
a Conventional Path Export? See Note 155477.1 </metalink/plsql/showdoc?db=NOT&i
d=155477.1> "Parameter DIRECT: Conventional Path Export versus Direct Path Expor
t" Question: What is the meaning of the Export parameter CONSISTENT=Y and when s
hould I use it? See Note 113450.1 </metalink/plsql/showdoc?db=NOT&id=113450.1> "
When to Use CONSISTENT=Y During an Export" Question: How to use the Oracle8i/9i
Export parameter QUERY=... and what does it do? See Note 91864.1 </metalink/plsq
l/showdoc?db=NOT&id=91864.1> "Query= Syntax in Export in 8i" See Note 277010.1 <
/metalink/plsql/showdoc?db=NOT&id=277010.1> "How to Specify a Query in Oracle10g
Export DataPump and Import DataPump" Question: How to create multiple export du
mpfiles instead of one large file? See Note 290810.1 </metalink/plsql/showdoc?db
=NOT&id=290810.1> "Parameter FILESIZE - Make Export Write to Multiple Export Fil
es" PERFORMANCE =========== Question: Import takes so long to complete. How can
I improve the performance of Import? See Note 93763.1 </metalink/plsql/showdoc?d
b=NOT&id=93763.1> "Tuning Considerations when Import is slow" Question: Why has
export performance decreased after creating tables with LOB columns? See Note 28
1461.1 </metalink/plsql/showdoc?db=NOT&id=281461.1> "Export and Import of Table
with LOB Columns (like CLOB and BLOB) has Slow Performance" LARGE FILES ========
=== Question: Which commands to use for solving Export dump file problems on UNI
X platforms? See Note 30528.1 </metalink/plsql/showdoc?db=NOT&id=30528.1> "QREF:
Export/Import/SQL*Load Large Files in Unix - Quick Reference" Question: How to
solve the EXP-15 and EXP-2 errors when Export dump file is larger than 2Gb? See
Note 62427.1 </metalink/plsql/showdoc?db=NOT&id=62427.1> "2Gb or Not 2Gb - File
limits in Oracle" See Note 1057099.6 </metalink/plsql/showdoc?db=NOT&id=1057099.
6> "Unable to export when export file grows larger than 2GB" See Note 290810.1 <
/metalink/plsql/showdoc?db=NOT&id=290810.1> "Parameter FILESIZE - Make Export Wr
ite to Multiple Export Files" Question: How to export to a tape device by using
a named pipe? See Note 30428.1 </metalink/plsql/showdoc?db=NOT&id=30428.1> "Expo
rting to Tape on Unix System"
TABLESPACE ========== Question: How to transport tablespace between different ve
rsions? See Note 291024.1 </metalink/plsql/showdoc?db=NOT&id=291024.1> "Compatib
ility and New Features when Transporting Tablespaces with Export and Import" Que
stion: How to move tables to a different tablespace and/or different user? See N
ote 1012307.6 </metalink/plsql/showdoc?db=NOT&id=1012307.6> "Moving Tables Betwe
en Tablespaces Using EXPORT/IMPORT" See Note 1068183.6 </metalink/plsql/showdoc?
db=NOT&id=1068183.6> "How to change the default tablespace when importing using
the INDEXFILE option" Question: How can I export all tables of a specific tables
pace? See Note 1039292.6 </metalink/plsql/showdoc?db=NOT&id=1039292.6> "How to E
xport Tables for a specific Tablespace" ORA-942 ======= Question: How to resolve
an ORA-942 during import of the ORDSYS schema? See Note 109576.1 </metalink/pls
ql/showdoc?db=NOT&id=109576.1> "Full Import shows Errors when adding Referential
Constraint on Cartrige Tables" Question: How to resolve an ORA-942 during impor
t of a snapshot (log) into a different schema? See Note 1017292.102 </metalink/p
lsql/showdoc?db=NOT&id=1017292.102> "IMP-00017 IMP-00003 ORA-00942 USING FROMUSE
R/TOUSER ON SNAPSHOT [LOG] IMPORT" Question: How to resolve an ORA-942 during im
port of a trigger on a renamed table? See Note 1020026.102 </metalink/plsql/show
doc?db=NOT&id=1020026.102> "ORA-01702, ORA-00942, ORA-25001, When Importing Trig
gers" Question: How to resolve an ORA-942 during import of one specific table? S
ee Note 1013822.102 </metalink/plsql/showdoc?db=NOT&id=1013822.102> "ORA-00942:
ON TABLE LEVEL IMPORT" NLS === Question: Which effect has the client's NLS_LANG
setting on an export and import? See Note 227332.1 </metalink/plsql/showdoc?db=N
OT&id=227332.1> "NLS considerations in Import/Export - Frequently Asked Question
s" See Note 15656.1 </metalink/plsql/showdoc?db=NOT&id=15656.1> "Export/Import a
nd NLS Considerations" Question: How to prevent the loss of diacritical marks du
ring an export/import? See Note 96842.1 </metalink/plsql/showdoc?db=NOT&id=96842
.1> "Loss Of Diacritics When Performing EXPORT/IMPORT Due To Incorrect Character
sets" INTERMEDIA OBJECTS ================== Question: How to solve an EXP-78 whe
n exporting metadata for an interMedia Text index? See Note 130080.1 </metalink/
plsql/showdoc?db=NOT&id=130080.1> "Problems
with EXPORT after upgrading from 8.1.5 to 8.1.6" Question: I dropped the ORDSYS
schema, but now I get ORA-6550 and PLS-201 when exporting? See Note 120540.1 </m
etalink/plsql/showdoc?db=NOT&id=120540.1> "EXP-8 PLS-201 After Drop User ORDSYS"
ADVANCED QUEUING OBJECTS ======================== Question: Why does export sho
w ORA-1403 and ORA-6512 on an AQ object, after an upgrade? See Note 159952.1 </m
etalink/plsql/showdoc?db=NOT&id=159952.1> "EXP-8 and ORA-1403 When Performing A
Full Export" Question: How to resolve export errors on DBMS_AQADM_SYS and DBMS_A
Q_SYS_EXP_INTERNAL? See Note 114739.1 </metalink/plsql/showdoc?db=NOT&id=114739.
1> "ORA-4068 while performing full database export" REPLICATION OBJECTS ========
=========== Question: How to resolve import errors on DBMS_IJOB.SUBMIT for Repli
cation jobs? See Note 137382.1 </metalink/plsql/showdoc?db=NOT&id=137382.1> "IMP
-3, PLS-306 Unable to Import Oracle8i JobQueues into Oracle8" Question: How to r
eorganize Replication base tables with Export and Import? See Note 1037317.6 </m
etalink/plsql/showdoc?db=NOT&id=1037317.6> "Move Replication System Tables using
Export/Import for Oracle 8.X" FREQUENTLY REPORTED EXPORT/IMPORT ERRORS ========
================================ EXP-00002: Error in writing to export file Note
1057099.6 </metalink/plsql/showdoc?db=NOT&id=1057099.6> "Unable to export when
export file grows larger than 2GB" EXP-00002: error in writing to export file Th
e export file could not be written to disk anymore, probably because the disk is
full or the device has an error. Most of the time this is followed by a device
(filesystem) error message indicating the problem. Possible causes are file syst
ems that do not support a certain limit (eg. dump file size > 2Gb) or a disk/fil
esystem that ran out of space. EXP-00003: No storage definition found for segmen
t(%s,%s) (EXP-3 EXP-0) Note 274076.1 </metalink/plsql/showdoc?db=NOT&id=274076.1
> "EXP-00003 When Exporting From Oracle9i 9.2.0.5.0 with a Pre-9.2.0.5.0 Export
Utility" Note 124392.1 </metalink/plsql/showdoc?db=NOT&id=124392.1> "EXP-3 while
exporting Rollback Segment definitions during FULL Database Export" EXP-00067:
"Direct path cannot export %s which contains object or lob data." Note 1048461.6
</metalink/plsql/showdoc?db=NOT&id=1048461.6> "EXP-00067 PERFORMING DIRECT PATH
EXPORT"
EXP-00079: Data in table %s is protected (EXP-79) Note 277606.1 </metalink/plsql
/showdoc?db=NOT&id=277606.1> "How to Prevent EXP-00079 or EXP-00080 Warning (Dat
a in Table xxx is Protected) During Export" EXP-00091: Exporting questionable st
atistics Note 159787.1 </metalink/plsql/showdoc?db=NOT&id=159787.1> "9i: Import
STATISTICS=SAFE" IMP-00016: Required character set conversion (type %lu to %lu)
not supported Note 168066.1 </metalink/plsql/showdoc?db=NOT&id=168066.1> "IMP-16
When Importing Dumpfile into a Database Using Multibyte Characterset" IMP-00020
: Long column too large for column buffer size Note 148740.1 </metalink/plsql/sh
owdoc?db=NOT&id=148740.1> "ALERT: Export of table with dropped functional index
may cause IMP-20 on import" ORA-00904: Invalid column name (EXP-8 ORA-904 EXP-0)
Note 106155.1 </metalink/plsql/showdoc?db=NOT&id=106155.1> "EXP-00008 ORA-1003
ORA-904 During Export" Note 172220.1 </metalink/plsql/showdoc?db=NOT&id=172220.1
> "Export of Database fails with EXP-00904 and ORA-01003" Note 158048.1 </metali
nk/plsql/showdoc?db=NOT&id=158048.1> "Oracle8i Export Fails on Synonym Export wi
th EXP-8 and ORA-904" Note 130916.1 </metalink/plsql/showdoc?db=NOT&id=130916.1>
"ORA-904 using EXP73 against Oracle8/8i Database" Note 1017276.102 </metalink/p
lsql/showdoc?db=NOT&id=1017276.102> "Oracle8i Export Fails on Synonym Export wit
h EXP-8 and ORA-904" ORA-01406: Fetched column value was truncated (EXP-8 ORA-14
06 EXP-0) Note 163516.1 </metalink/plsql/showdoc?db=NOT&id=163516.1> "EXP-0 and
ORA-1406 during Export of Object Types" ORA-01422: Exact fetch returns more than
requested number of rows Note 221178.1 </metalink/plsql/showdoc?db=NOT&id=22117
8.1> "PLS-201 and ORA-06512 at 'XDB.DBMS_XDBUTIL_INT' while Exporting Database"
Note 256548.1 </metalink/plsql/showdoc?db=NOT&id=256548.1> "Export of Database w
ith XDB Throws ORA-1422 Error" ORA-01555: Snapshot too old Note 113450.1 </metal
ink/plsql/showdoc?db=NOT&id=113450.1> "When to Use CONSISTENT=Y During an Export
" ORA-04030: Out of process memory when trying to allocate %s bytes (%s,%s) (IMP
-3 ORA-4030 ORA-3113) Note 165016.1 </metalink/plsql/showdoc?db=NOT&id=165016.1>
"Corrupt Packages When Export/Import Wrapper PL/SQL Code" ORA-06512: at "SYS.DB
MS_STATS", line ... (IMP-17 IMP-3 ORA-20001 ORA-6512) Note 123355.1 </metalink/p
lsql/showdoc?db=NOT&id=123355.1> "IMP-17 and IMP-3 errors referring dbms_stats p
ackage during import" ORA-29344: Owner validation failed - failed to match owner
'SYS' Note 294992.1 </metalink/plsql/showdoc?db=NOT&id=294992.1> "Import DataPu
mp: Transport Tablespace Fails with ORA-39123 and 29344 (Failed to match owner S
YS)"
ORA-29516: Aurora assertion failure: Assertion failure at %s (EXP-8 ORA-29516 EX
P0) Note 114356.1 </metalink/plsql/showdoc?db=NOT&id=114356.1> "Export Fails Wit
h ORA-29516 Aurora Assertion Failure EXP-8" PLS-00103: Encountered the symbol ",
" when expecting one of the following ... (IMP-17 IMP-3 ORA-6550 PLS-103) Note 1
23355.1 </metalink/plsql/showdoc?db=NOT&id=123355.1> "IMP-17 and IMP-3 errors re
ferring dbms_stats package during import" Note 278937.1 </metalink/plsql/showdoc
?db=NOT&id=278937.1> "Import DataPump: ORA-39083 and PLS-103 when Importing Stat
istics Created with Non "." NLS Decimal Character" EXPORT TOP ISSUES CAUSED BY D
EFECTS =================================== Release : 8.1.7.2 and below Problem :
Export may fail with ORA-1406 when exporting object type definitions Solution :
apply patch-set 8.1.7.3 Workaround: no, see Note 163516.1 </metalink/plsql/show
doc?db=NOT&id=163516.1> "EXP-0 and ORA-1406 during Export of Object Types" Bug 1
098503 </metalink/plsql/showdoc?db=Bug&id=1098503> Release : Oracle8i (8.1.x) an
d Oracle9i (9.x) Problem : EXP-79 when Exporting Protected Tables Solution : thi
s is not a defect Workaround: N/A, see Note 277606.1 </metalink/plsql/showdoc?db
=NOT&id=277606.1> "How to Prevent EXP-00079 or EXP-00080 Warning (Data in Table
xxx is Protected) During Export" Bug 2410612 </metalink/plsql/showdoc?db=Bug&id=
2410612> Release : 8.1.7.3 and higher and 9.0.1.2 and higher Problem : Conventio
nal export may produce an export file with corrupt data Solution : 8.1.7.5 and 9
.2.0.x or check for Patch 2410612 <http://updates.oracle.com/ARULink/PatchDetail
s/process_form?patch_num=2410612> (for 8.1.7.x), 2449113 (for 9.0.1.x) Workaroun
d: yes, see Note 199416.1 </metalink/plsql/showdoc?db=NOT&id=199416.1> "ALERT: C
lient Program May Give Incorrect Query Results (EXP Can Produce Dump File with C
orrupted Data)" Release : Problem : for segment Solution : Workaround: "EXP-3 wh
ile Oracle8i (8.1.x) Full database export fails with EXP-3: no storage definitio
n found Oracle9i (9.x) yes, see Note 124392.1 </metalink/plsql/showdoc?db=NOT&id
=124392.1> exporting Rollback Segment definitions during FULL Database Export"
Bug 2900891 </metalink/plsql/showdoc?db=Bug&id=2900891> Release : 9.0.1.4 and be
low and 9.2.0.3 and below Problem : Export with 8.1.7.3 and 8.1.7.4 from Oracle9
i fails with invalid identifier SPOLICY (EXP-8 ORA-904 EXP-0) Solution : 9.2.0.4
or 9.2.0.5 Workaround: yes, see Bug 2900891 </metalink/plsql/showdoc?db=Bug&id=
2900891> how to recreate view sys.exu81rls Bug 2685696 </metalink/plsql/showdoc?
db=Bug&id=2685696> Release : 9.2.0.3 and below
: Export fails when exporting triggers in call to XDB.DBMS_XDBUTIL_INT (EXP-56 O
RA-1422 ORA-6512) Solution : 9.2.0.4 or check for Patch 2410612 <http://updates.
oracle.com/ARULink/PatchDetails/process_form?patch_num=2410612> (for 9.2.0.2 and
9.2.0.3) Workaround: yes, see Note 221178.1 </metalink/plsql/showdoc?db=NOT&id=
221178.1> "ORA-01422 ORA-06512: at "XDB.DBMS_XDBUTIL_INT" while exporting full d
atabase" Bug 2919120 </metalink/plsql/showdoc?db=Bug&id=2919120> Release : 9.2.0
.4 and below Problem : Export fails when exporting triggers in call to XDB.DBMS_
XDBUTIL_INT (EXP-56 ORA-1422 ORA-6512) Solution : 9.2.0.5 or check for Patch 291
9120 <http://updates.oracle.com/ARULink/PatchDetails/process_form?patch_num=2919
120> (for 9.2.0.4) Workaround: yes, see Note 256548.1 </metalink/plsql/showdoc?d
b=NOT&id=256548.1> "Export of Database with XDB Throws ORA-1422 Error" IMPORT TO
P ISSUES CAUSED BY DEFECTS =================================== Bug 1335408 </met
alink/plsql/showdoc?db=Bug&id=1335408> Release : 8.1.7.2 and below Problem : Bad
export file using a locale with a ',' decimal separator (IMP-17 IMP-3 ORA-6550
PLS-103) Solution : apply patch-set 8.1.7.3 or 8.1.7.4 Workaround: yes, see Note
123355.1 </metalink/plsql/showdoc?db=NOT&id=123355.1> "IMP-17 and IMP-3 errors
referring DBMS_STATS package during import" Bug 1879479 </metalink/plsql/showdoc
?db=Bug&id=1879479> Release : 8.1.7.2 and below and 9.0.1.2 and below Problem :
Export of a wrapped package can result in a corrupt package being imported (IMP-
3 ORA-4030 ORA-3113 ORA-7445 ORA-600[16201]). Solution : in Oracle8i with 8.1.7.
3 and higher; in Oracle9iR1 with 9.0.1.3 and higher Workaround: no, see Note 165
016.1 </metalink/plsql/showdoc?db=NOT&id=165016.1> "Corrupt Packages When Export
/Import Wrapper PL/SQL Code" Bug 2067904 </metalink/plsql/showdoc?db=Bug&id=2067
904> Release : Oracle8i (8.1.7.x) and 9.0.1.2 and below Problem : Trigger-name c
auses call to DBMS_DDL.SET_TRIGGER_FIRING_PROPERTY to fail during Import (IMP-17
IMP-3 ORA-931 ORA-23308 ORA-6512). Solution : in Oracle9iR1 with patchset 9.0.1
.3 Workaround: yes, see Note 239821.1 </metalink/plsql/showdoc?db=NOT&id=239821.
1> "ORA-931 or ORA-23308 in SET_TRIGGER_FIRING_PROPERTY on Import of Trigger in
8.1.7.x and 9.0.1.x" Bug 2854856 </metalink/plsql/showdoc?db=Bug&id=2854856> Rel
ease : Oracle8i (8.1.7.x) and 9.0.1.2 and below Problem : Schema-name causes cal
l to DBMS_DDL.SET_TRIGGER_FIRING_PROPERTY to fail during Import (IMP-17 IMP-3 OR
A-911 ORA-6512). Solution : in Oracle9iR2 with patchset 9.2.0.4 Workaround: yes,
see Note 239890.1 </metalink/plsql/showdoc?db=NOT&id=239890.1> "ORA-911 in SET_
TRIGGER_FIRING_PROPERTY on Import of Trigger in 8.1.7.x and Oracle9i"
4.3 SQL*Loader examples:
========================
SQL*Loader is used for loading data from text files into Oracle tables. The text file can have
fixed column positions or columns separated by a special character, for example a ",".
To call SQL*Loader:
sqlldr system/manager control=smssoft.ctl
sqlldr parfile=bonus.par

Example 1:
----------
BONUS.PAR:
userid=scott
control=bonus.ctl
bad=bonus.bad
log=bonus.log
discard=bonus.dis
rows=2
errors=2
skip=0

BONUS.CTL:
LOAD DATA
INFILE bonus.dat
APPEND
INTO TABLE BONUS
(name   position(01:08) char,
 city   position(09:19) char,
 salary position(20:22) integer external)

Now you can use the command:
$ sqlldr parfile=bonus.par

Example 2:
----------
LOAD1.CTL:
LOAD DATA
INFILE 'PLAYER.TXT'
INTO TABLE BASEBALL_PLAYER
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
(player_id,last_name,first_name,middle_initial,start_date)

SQLLDR system/manager CONTROL=LOAD1.CTL LOG=LOAD1.LOG
BAD=LOAD1.BAD DISCARD=LOAD1.DSC Example 3: another controlfile: ----------------
-------------SMSSOFT.CTL: LOAD DATA INFILE 'SMSSOFT.TXT' TRUNCATE INTO TABLE SMS
SOFTWARE FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"' (DWMACHINEID, SERIA
LNUMBER, NAME, SHORTNAME, SOFTWARE, CMDB_ID, LOGONNAME) Example 4: another contr
olfile: ------------------------------LOAD DATA INFILE * BADFILE 'd:\stage\loade
r\load.bad' DISCARDFILE 'd:\stage\loader\load.dsc' APPEND INTO TABLE TEST FIELDS
TERMINATED BY "<tab>" TRAILING NULLCOLS ( c1, c2 char, c3 date(8) "DD-MM-YY" )
BEGINDATA
1<tab>X<tab>25-12-00
2<tab>Y<tab>31-12-00
Note: The <tab> placeholder is only for illustration purposes; in the actual implementation, one
would use a real tab character, which is not visible.
- Conventional path load: When the DIRECT=Y parameter is not used, the conventional path is used.
  This means that essentially INSERT statements are used, triggers and referential integrity are
  in normal use, and that the buffer cache is used.
- Direct path load: The buffer cache is not used. Existing used blocks are not used; new blocks
  are written as needed. Referential integrity and triggers are disabled during the load.
  (A small invocation sketch follows after Example 5.)
Example 5: --------
-The following shows the control file (sh_sales.ctl) loading the sales table: LO
AD DATA INFILE sh_sales.dat APPEND INTO TABLE sales FIELDS TERMINATED BY "|" (PR
OD_ID, CUST_ID, TIME_ID, CHANNEL_ID, PROMO_ID, QUANTITY_SOLD, AMOUNT_SOLD) It ca
n be loaded with the following command:
$ sqlldr sh/sh control=sh_sales.ctl direct=true
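As a quick illustration of the conventional versus direct path distinction described above, the
same control file can be run either way. A small sketch (rows, bindsize and direct are standard
SQL*Loader command-line options; the values and file names are only examples):

# conventional path: ordinary INSERTs, triggers and referential integrity stay active
sqlldr scott/tiger control=bonus.ctl log=bonus_conv.log rows=1000 bindsize=1048576

# direct path: blocks are formatted and written directly, triggers/RI are disabled during the load
sqlldr scott/tiger control=bonus.ctl log=bonus_dir.log direct=true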
4.4 Creation of new table on basis of existing table: ==========================
=========================== CREATE TABLE EMPLOYEE_2 AS SELECT * FROM EMPLOYEE CR
EATE TABLE temp_jan_sales NOLOGGING TABLESPACE ts_temp_sales AS SELECT * FROM sa
les WHERE time_id BETWEEN '31-DEC-1999' AND '01-FEB-2000'; insert into t SELECT
* FROM t2; insert into DSA_IMPORT SELECT * FROM MDB_DW_COMPONENTEN@SALES
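A variation that is often handy: create an empty copy (structure only, no rows) by adding a
predicate that is never true. A minimal sketch, reusing the EMPLOYEE example table:

CREATE TABLE EMPLOYEE_EMPTY AS SELECT * FROM EMPLOYEE WHERE 1=2;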
4.5 Copy command to fetch data from a remote database:
=======================================================
set copycommit 1
set arraysize 1000
copy FROM HR/PASSWORD@loc create EMPLOYEE using SELECT * FROM employee WHERE state='NM
' 4.6 Simple differences between table versions: ===============================
=============== SELECT * FROM new_version MINUS SELECT * FROM old_version; SELEC
T * FROM old_version MINUS SELECT * FROM new_version;
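Both directions can be combined in one statement; a sketch assuming the same new_version and
old_version tables:

SELECT 'only in new' src, t.* FROM (SELECT * FROM new_version MINUS SELECT * FROM old_version) t
UNION ALL
SELECT 'only in old' src, t.* FROM (SELECT * FROM old_version MINUS SELECT * FROM new_version) t;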
======================================================= 5. Add, Move AND Size Da
tafiles, tablespaces, logfiles: ================================================
======= 5.1 ADD OR DROP REDO LOGFILE GROUP: ===================================
ADD: ---alter database
add logfile group 4 ('/db01/oracle/CC1/log_41.dbf', '/db02/oracle/CC1/log_42.dbf
') size 5M; ALTER DATABASE ADD LOGFILE ('/oracle/dbs/log1c.rdo', '/oracle/dbs/lo
g2c.rdo') SIZE 500K; ALTER DATABASE ADD LOGFILE ('/oracle/dbs/log1c.rdo', '/orac
le/dbs/log2c.rdo') SIZE 500K; Add logfile plus group: ALTER DATABASE ADD LOGFILE
GROUP 4 ('/dbms/tdbaeduc/educslot/recovery/redo_logs/redo04.log') SIZE 50M; ALT
ER DATABASE ADD LOGFILE GROUP 5 ('/dbms/tdbaeduc/educslot/recovery/redo_logs/red
o05.log') SIZE 50M; ALTER DATABASE ADD LOGFILE ('G:\ORADATA\AIRM\REDO05.LOG') SI
ZE 20M; DROP: -----An instance requires at least two groups of online redo log f
iles, regardless of the number of members in the groups. (A group is one or more
members.) -You can drop an online redo log group only if it is inactive. If you
need to drop the current group, first force a log switch to occur. ALTER DATABA
SE DROP LOGFILE GROUP 3; ALTER DATABASE DROP LOGFILE 'G:\ORADATA\AIRM\REDO02.LOG
'; 5.2 ADD REDO LOGFILE MEMBER: ============================ alter database add
logfile member '/db03/oracle/CC1/log_3c.dbf' to group 4; Note: More on ONLINE LO
GFILES: -------------------------------
-- Log Files Without Redundancy
LOGFILE GROUP 1 '/u01/oradata/redo01.log' SIZE 10M,
        GROUP 2 '/u02/oradata/redo02.log' SIZE 10M,
        GROUP 3 '/u03/oradata/redo03.log' SIZE 10M,
        GROUP 4 '/u04/oradata/redo04.log' SIZE 10M
-- Log Files With Redundancy LOGFILE GROUP 1 ('/u01/oradata/redo1a.log','/u05/or
adata/redo1b.log') SIZE 10M, GROUP 2 ('/u02/oradata/redo2a.log','/u06/oradata/re
do2b.log') SIZE 10M, GROUP 3 ('/u03/oradata/redo3a.log','/u07/oradata/redo3b.log
') SIZE 10M,
GROUP 4 ('/u04/oradata/redo4a.log','/u08/oradata/redo4b.log') SIZE 10M -- Relate
d Queries View information on log files SELECT * FROM gv$log; View information o
n log file history SELECT thread#, first_change#, TO_CHAR(first_time,'MM-DD-YY H
H12:MIPM'), next_change# FROM gv$log_history; -- Forcing log file switches ALTER
SYSTEM SWITCH LOGFILE; -- Clear A Log File If It Has Become Corrupt ALTER DATAB
ASE CLEAR LOGFILE GROUP <group_number>; This statement overcomes two situations
where dropping redo logs is not possible:
- If there are only two log groups
- The corrupt redo log file belongs to the current group
ALTER DATABASE CLEAR LOGFILE
GROUP 4; -- Clear A Log File If It Has Become Corrupt And Avoid Archiving ALTER
DATABASE CLEAR UNARCHIVED LOGFILE GROUP <group_number>; -- Use this version of c
learing a log file if the corrupt log file has not been archived. ALTER DATABASE
CLEAR UNARCHIVED LOGFILE GROUP 3; Managing Log File Groups Adding a redo log fi
le group ALTER DATABASE ADD LOGFILE ('<log_member_path_and_name>', '<log_member_
path_and_name>') SIZE <integer> <K|M>; ALTER DATABASE ADD LOGFILE ('/oracle/dbs/
log1c.rdo', '/oracle/dbs/log2c.rdo') SIZE 500K; Adding a redo log file group and
specifying the group number ALTER DATABASE ADD LOGFILE GROUP <group_number> ('<
log_member_path_and_name>') SIZE <integer> <K|M>; ALTER DATABASE ADD LOGFILE GRO
UP 4 ('c:\temp\newlog1.log') SIZE 100M; Relocating redo log files ALTER DATABASE
RENAME FILE '<existing_path_and_file_name>' TO '<new_path_and_file_name>'; conn
/ as sysdba SELECT member FROM v_$logfile; SHUTDOWN; host $ cp /u03/logs/log1a.
log /u04/logs/log1a.log $ cp /u03/logs/log1b.log /u05/logs/log1b.log
$ exit startup mount ALTER DATABASE RENAME FILE '/u03/logs/log1a.log' TO '/u04/o
radata/log1a.log'; ALTER DATABASE RENAME FILE '/u04/logs/log1b.log' TO '/u05/ora
data/log1b.log'; ALTER DATABASE OPEN host $ rm /u03/logs/log1a.log $ rm /u03/log
s/log1b.log $ exit SELECT member FROM v_$logfile; Drop a redo log file group ALT
ER DATABASE DROP LOGFILE GROUP <group_number>; ALTER DATABASE DROP LOGFILE GROUP
4; Managing Log File Members Adding log file group members ALTER DATABASE ADD L
OGFILE MEMBER '<log_member_path_and_name>' TO GROUP <group_number>; ALTER DATABA
SE ADD LOGFILE MEMBER '/oracle/dbs/log2b.rdo' TO GROUP 2; Dropping log file grou
p members ALTER DATABASE DROP LOGFILE MEMBER '<log_member_path_and_name>'; ALTER
DATABASE DROP LOGFILE MEMBER '/oracle/dbs/log3c.rdo'; Dumping Log Files Dumping
a log file to trace ALTER SYSTEM DUMP LOGFILE '<logfile_path_and_name>' DBA MIN
<file_number> <block_number> DBA MAX <file_number> <block_number>; or ALTER SYS
TEM DUMP LOGFILE '<logfile_path_and_name>' TIME MIN <value> TIME MAX <value> con
n uwclass/uwclass alter session set nls_date_format='MM/DD/YYYY HH24:MI:SS'; SEL
ECT SYSDATE FROM dual; CREATE TABLE test AS SELECT owner, object_name, object_ty
pe FROM all_objects WHERE SUBSTR(object_name,1,1) BETWEEN 'A' AND 'W'; INSERT IN
TO test
(owner, object_name, object_type) VALUES ('UWCLASS', 'log_dump', 'TEST'); COMMIT
; conn / as sysdba SELECT ((SYSDATE-1/1440)-TO_DATE('01/01/2007','MM/DD/YYYY'))*
86400 ssec FROM dual; ALTER SYSTEM DUMP LOGFILE 'c:\oracle\product\oradata\oraba
se\redo01.log' TIME MIN 579354757; Disable Log Archiving Stop log file archiving
The following is undocumented and unsupported and should be used only with grea
t care and following through tests. One might consider this for loading a data w
arehouse. Be sure to restart logging as soon as the load is complete or the syst
em will be at extremely high risk. The rest of the database remains unchanged. T
he buffer cache works in exactly the same way, old buffers get overwritten, old
dirty buffers get written to disk. It's just the process of physically flushing
the redo buffer that gets disabled. I used it in a very large test environment w
here I wanted to perform a massive amount of changes (a process to convert blobs
to clobs actually) and it was going to take days to complete. By disabling logg
ing, I completed the task in hours and if anything untoward were to have happene
d, I was quite happy to restore the test database back from backup. ~ the above
paraphrased from a private email from Richard Foote. conn / as sysdba SHUTDOWN;
STARTUP MOUNT EXCLUSIVE; ALTER DATABASE NOARCHIVELOG; ALTER DATABASE OPEN; ALTER
SYSTEM SET "_disable_logging"=TRUE;
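When the load is done, logging (and archiving, if that is how the database normally runs) must be
switched back on. A minimal sketch mirroring the statements above; "_disable_logging" is the same
undocumented parameter, so treat this with the same care:

ALTER SYSTEM SET "_disable_logging"=FALSE;
SHUTDOWN;
STARTUP MOUNT EXCLUSIVE;
ALTER DATABASE ARCHIVELOG;
ALTER DATABASE OPEN;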
5.3 RESIZE DATABASE FILE:
=========================
alter database datafile '/db05/oracle/CC1/data01.dbf' resize 400M;   (increase or decrease size)
(In 10g a BIGFILE tablespace can also be resized as a whole with ALTER TABLESPACE ... RESIZE;
for regular tablespaces, resize the individual datafiles as shown above.)

5.4 ADD FILE TO TABLESPACE:
===========================
alter tablespace DATA add datafile '/db05/oracle/CC1/data02.dbf' size 50M autoex
tend ON maxsize unlimited; 5.5 ALTER STORAGE FOR FILE: =========================
== alter database datafile '/db05/oracle/CC1/data01.dbf' autoextend ON maxsize u
nlimited;
alter database datafile '/oradata/temp/temp.dbf' autoextend off;
The AUTOEXTEND option cannot be turned OFF for the entire tablespace with a single command.
Each datafile within the tablespace must explicitly turn off the AUTOEXTEND option via the
ALTER DATABASE command, for example generated as shown below.
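A sketch of generating those per-datafile statements from the dictionary (the tablespace name
DATA is just an example):

SELECT 'alter database datafile '''||file_name||''' autoextend off;'
FROM   dba_data_files
WHERE  tablespace_name = 'DATA';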
5.6 MOVE OF DATA FILE:
======================
connect internal
shutdown
mv /db01/oracle/CC1/data01.dbf /db02/oracle/CC1
connect / as SYSDBA
startup mount CC1
alter database rename file '/db01/oracle
/CC1/data01.dbf' to '/db02/oracle/CC1/data01.dbf'; alter database open; alter da
tabase rename file '/dbms/tdbaplay/playdwhs/database/playdwhs/sysaux01.dbf' to '
/dbms/tdbaplay/playdwhs/database/default/sysaux01.dbf'; alter database rename fi
le '/dbms/tdbaplay/playdwhs/database/playdwhs/system01.dbf' to '/dbms/tdbaplay/p
laydwhs/database/default/system01.dbf'; alter database rename file '/dbms/tdbapl
ay/playdwhs/database/playdwhs/temp01.dbf' to '/dbms/tdbaplay/playdwhs/database/d
efault/temp01.dbf'; alter database rename file '/dbms/tdbaplay/playdwhs/database
/playdwhs/undotbs01.dbf' to '/dbms/tdbaplay/playdwhs/database/default/undotbs01.
dbf'; alter database rename file '/dbms/tdbaplay/playdwhs/database/playdwhs/user
s01.dbf' to '/dbms/tdbaplay/playdwhs/database/default/users01.dbf';
alter database rename file '/dbms/tdbaplay/playdwhs/database/playdwhs/redo01.log
' to '/dbms/tdbaplay/playdwhs/recovery/redo_logs/redo01.log'; alter database ren
ame file '/dbms/tdbaplay/playdwhs/database/playdwhs/redo02.log' to '/dbms/tdbapl
ay/playdwhs/recovery/redo_logs/redo02.log'; alter database rename file '/dbms/td
baplay/playdwhs/database/playdwhs/redo03.log' to '/dbms/tdbaplay/playdwhs/recove
ry/redo_logs/redo03.log'; 5.7 MOVE OF REDO LOG FILE: ==========================
connect internal
shutdown
mv /db05/oracle/CC1/redo01.dbf /db02/oracle/CC1
connect / as SYSDBA
sta
rtup mount CC1 alter database rename file '/db05/oracle/CC1/redo01.dbf' to '/db0
2/oracle/CC1/redo01.dbf'; alter database open; in case of problems: ALTER DATABA
SE CLEAR LOGFILE GROUP n example: -------shutdown immediate op Unix: mv /u01/ora
data/spltst1/redo01.log /u02/oradata/spltst1/ mv /u03/oradata/spltst1/redo03.log
/u02/oradata/spltst1/ startup mount pfile=/apps/oracle/admin/SPLTST1/pfile/init
.ora alter database rename file '/u01/oradata/spltst1/redo01.log' to '/u02/orada
ta/spltst1/redo01.log'; alter database rename file '/u03/oradata/spltst1/redo03.
log' to '/u02/oradata/spltst1/redo03.log'; alter database open;
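After the move it is worth checking what the controlfile now records for the redo log members:

SELECT group#, member FROM v$logfile;
SELECT group#, status, archived FROM v$log;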
5.8 Put a datafile or tablespace ONLINE or OFFLINE: ============================
======================= alter tablespace data offline; alter tablespace data onl
ine; alter database datafile 8 offline; alter database datafile 8 online; 5.9 AL
TER DEFAULT STORAGE: ========================== alter tablespace AP_INDEX_SMALL
default storage (initial 5M next 5M pctincrease 0); 5.10 CREATE TABLESPACE STORA
GE PARAMETERS: ========================================== locally managed 9i sty
le: -- autoallocate: ---------------CREATE TABLESPACE DEMO DATAFILE '/u02/oracle
/data/lmtbsb01.dbf' size 100M extent management local autoallocate; -- uniform s
ize, 1M is default: ------------------------------CREATE TABLESPACE LOBS DATAFIL
E 'f:\oracle\oradata\pegacc\lobs01.dbf' SIZE 3000M EXTENT MANAGEMENT LOCAL UNIFO
RM SIZE 64K; CREATE TABLESPACE LOBS2 DATAFILE 'f:\oracle\oradata\pegacc\lobs02.d
bf' SIZE 3000M EXTENT MANAGEMENT LOCAL UNIFORM SIZE 1M; CREATE TABLESPACE CISTS_
01 DATAFILE '/u04/oradata/pilactst/cists_01.dbf' SIZE 1000M EXTENT MANAGEMENT LO
CAL UNIFORM SIZE 128K; CREATE TABLESPACE CISTS_01 DATAFILE '/u01/oradata/spldev1
/cists_01.dbf' SIZE 400M EXTENT MANAGEMENT LOCAL UNIFORM SIZE 128K; CREATE TABLE
SPACE PUB DATAFILE 'C:\ORACLE\ORADATA\TEST10G\PUB.DBF' SIZE 50M EXTENT MANAGEMEN
T LOCAL AUTOALLOCATE SEGMENT SPACE MANAGEMENT AUTO; CREATE TABLESPACE STAGING DA
TAFILE 'C:\ORACLE\ORADATA\TEST10G\STAGING.DBF' SIZE 50M EXTENT MANAGEMENT LOCAL
AUTOALLOCATE SEGMENT SPACE MANAGEMENT AUTO; CREATE TABLESPACE RMAN DATAFILE 'C:\
ORACLE\ORADATA\RMAN\RMAN.DBF' SIZE 100M EXTENT MANAGEMENT LOCAL AUTOALLOCATE SEG
MENT SPACE MANAGEMENT AUTO;
CREATE TABLESPACE CISTS_01 DATAFILE '/u07/oradata/spldevp/cists_01.dbf' SIZE 120
0M EXTENT MANAGEMENT LOCAL UNIFORM SIZE 128K; CREATE TABLESPACE USERS DATAFILE '
/u06/oradata/splpack/users01.dbf' SIZE 50M EXTENT MANAGEMENT LOCAL UNIFORM SIZE
128K; CREATE TABLESPACE INDX DATAFILE '/u06/oradata/splpack/indx01.dbf' SIZE 100
M EXTENT MANAGEMENT LOCAL UNIFORM SIZE CREATE TEMPORARY TABLESPACE TEMP TEMPFILE
'/u07/oradata/spldevp/temp01.dbf' SIZE 200M EXTENT MANAGEMENT LOCAL UNIFORM SIZ
E 10M; ALTER DATABASE DEFAULT TEMPORARY TABLESPACE TEMP;
ALTER TABLESPACE CISTS_01 ADD DATAFILE '/u03/oradata/splplay/cists_02.dbf' SIZE
1000M EXTENT MANAGEMENT LOCAL UNIFORM SIZE 128K; ALTER TABLESPACE UNDOTBS ADD DA
TAFILE '/dbms/tdbaprod/prodross/database/default/undotbs03.dbf' SIZE 2000M; alte
r tablespace DATA add datafile '/db05/oracle/CC1/data02.dbf' size 50M autoextend
ON maxsize unlimited; -- segment management manual or automatic: -- -----------
---------------------------We can have a locally managed tablespace, but the seg
ment space management, via the free lists and the pct_free and pct_used paramete
rs, be still used manually. To specify manual space management, use the SEGMENT
SPACE MANAGEMENT MANUAL clause CREATE TABLESPACE INDX2 DATAFILE '/u06/oradata/bc
ict2/indx09.dbf' SIZE 5000M EXTENT MANAGEMENT LOCAL AUTOALLOCATE SEGMENT SPACE M
ANAGEMENT MANUAL; or if you want segement space management to be automatic: CREA
TE TABLESPACE INDX2 DATAFILE '/u06/oradata/bcict2/indx09.dbf' SIZE 5000M EXTENT
MANAGEMENT LOCAL AUTOALLOCATE SEGMENT SPACE MANAGEMENT AUTO; -- temporary tables
pace: -----------------------CREATE TEMPORARY TABLESPACE TEMP TEMPFILE '/u04/ora
data/pilactst/temp01.dbf' SIZE 200M
EXTENT MANAGEMENT LOCAL UNIFORM SIZE 10M; create user cisadm identified by cisad
m default tablespace cists_01 temporary tablespace temp; create user cisuser ide
ntified by cisuser default tablespace cists_01 temporary tablespace temp; create
user cisread identified by cisread default tablespace cists_01 temporary tables
pace temp; grant connect to cisadm; grant connect to cisuser; grant connect to c
isread; grant resource to cisadm; grant resource to cisuser; grant resource to c
isread;
CREATE TEMPORARY TABLESPACE TEMP TEMPFILE '/u04/oradata/bcict2/tempt01.dbf' SIZE
5000M EXTENT MANAGEMENT LOCAL UNIFORM SIZE 100M; alter tablespace TEMP add temp
file '/u04/oradata/bcict2/temp02.dbf' SIZE 5000M; alter tablespace UNDO add file
'/u04/oradata/bcict2/undo07.dbf' size 500M; ALTER DATABASE datafile '/u04/orada
ta/bcict2/undo07.dbf' RESIZE 3000M; CREATE TEMPORARY TABLESPACE TEMP2 TEMPFILE '
/u04/oradata/bcict2/temp01.dbf' SIZE 5000M EXTENT MANAGEMENT LOCAL UNIFORM SIZE
100M;
ALTER TABLESPACE TEMP ADD TEMPFILE '/u04/oradata/bcict2/tempt4.dbf' SIZE 5000M;
  1 /u03/oradata/bcict2/temp.dbf
  2 /u03/oradata/bcict2/temp01.dbf
  3 /u03/oradata/bcict2/temp02.dbf
ALTER DATABASE TEMPFILE '/u02/oracle/data/lmtemp02.dbf' DROP INCLUDING DATAFILES;
The extent management clause is optional for temporary tablespaces because all temporary
tablespaces are created with locally managed extents of a uniform size. The Oracle default for
SIZE is 1M. But if you want to specify another value for SIZE, you can do so as shown in the
above statement. The AUTOALLOCATE clause is not allowed for temporary tablespace
s. If you get errors: -----------------If the controlfile does not have any refe
rence to the tempfile(s), add the tempfile(s): SQL> SET lines 200 SQL> SELECT st
atus, enabled, name FROM v$tempfile; no rows selected SQL> ALTER TABLESPACE temp
ADD TEMPFILE 'M:\ORACLE\ORADATA\M9204WA\TEMP01.DBF' REUSE; or: If the controlfi
le has a reference to the tempfile(s), but the files are missing on disk, re-cre
ate the temporary tablespace, e.g.: SQL> SET lines 200 SQL> CREATE TEMPORARY TAB
LESPACE temp2 TEMPFILE 'M:\ORACLE\ORADATA\M9204WA\TEMP201.DBF' SIZE 100m AUTOEXT
END ON NEXT 100M MAXSIZE 2000M; SQL> ALTER DATABASE DEFAULT TEMPORARY TABLESPACE
temp2; SQL> DROP TABLESPACE temp; SQL> CREATE TEMPORARY TABLESPACE temp TEMPFIL
E 'M:\ORACLE\ORADATA\M9204WA\TEMP01.DBF' SIZE 100m AUTOEXTEND ON NEXT 100M MAXSI
ZE 2000M; SQL> ALTER DATABASE DEFAULT TEMPORARY TABLESPACE temp; SQL> SHUTDOWN I
MMEDIATE SQL> STARTUP SQL> DROP TABLESPACE temp2 INCLUDING CONTENTS AND DATAFILE
S;
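To verify which tablespace is the current database default temporary tablespace, and which
tempfiles it has (both views exist in 9i and later):

SELECT property_name, property_value
FROM   database_properties
WHERE  property_name = 'DEFAULT_TEMP_TABLESPACE';

SELECT file#, name, bytes/1024/1024 AS mb, status FROM v$tempfile;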
-- undo tablespace: -- ---------------CREATE UNDO TABLESPACE undotbs_02 DATAFILE
'/u01/oracle/rbdb1/undo0201.dbf' SIZE 2M REUSE AUTOEXTEND ON; ALTER SYSTEM SET
UNDO_TABLESPACE = undotbs_02; -- ROLLBACK TABLESPACE: -- -------------------crea
te tablespace RBS datafile '/disk01/oracle/oradata/DB1/rbs01.dbf' size 25M defau
lt storage ( initial 500K next 500K pctincrease 0 minextents 2 );
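To check which undo (or rollback) configuration is actually in use, something like this:

show parameter undo
SELECT tablespace_name, contents, status FROM dba_tablespaces WHERE contents = 'UNDO';
SELECT segment_name, tablespace_name, status FROM dba_rollback_segs;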
################################################################################
## ##### CREATE TABLESPACE "DRSYS" LOGGING DATAFILE '/u02/oradata/pegacc/drsys01
.dbf' SIZE 20M REUSE AUTOEXTEND ON NEXT 1024K MAXSIZE UNLIMITED EXTENT MANAGEMEN
T LOCAL SEGMENT SPACE MANAGEMENT AUTO ; CREATE TABLESPACE "INDX" LOGGING DATAFIL
E '/u02/oradata/pegacc/indx01.dbf' SIZE 100M REUSE AUTOEXTEND ON NEXT 1024K MAXS
IZE UNLIMITED EXTENT MANAGEMENT LOCAL SEGMENT SPACE MANAGEMENT AUTO ; CREATE TAB
LESPACE "TOOLS" LOGGING DATAFILE '/u02/oradata/pegacc/tools01.dbf' SIZE 100M REU
SE AUTOEXTEND ON NEXT 1024K MAXSIZE UNLIMITED EXTENT MANAGEMENT LOCAL SEGMENT SP
ACE MANAGEMENT AUTO ; CREATE TABLESPACE "USERS" LOGGING DATAFILE '/u02/oradata/p
egacc/users01.dbf' SIZE 1000M REUSE AUTOEXTEND ON NEXT 1024K MAXSIZE UNLIMITED E
XTENT MANAGEMENT LOCAL SEGMENT SPACE MANAGEMENT AUTO ; CREATE TABLESPACE "XDB" L
OGGING DATAFILE '/u02/oradata/pegacc/xdb01.dbf' SIZE 20M REUSE AUTOEXTEND ON NEX
T 1024K MAXSIZE UNLIMITED EXTENT MANAGEMENT LOCAL SEGMENT SPACE MANAGEMENT AUTO
; CREATE TABLESPACE "LOBS" LOGGING DATAFILE '/u02/oradata/pegacc/lobs01.dbf' SIZ
E 2000M REUSE AUTOEXTEND ON NEXT 1024K MAXSIZE UNLIMITED EXTENT MANAGEMENT LOCAL
UNIFORM SIZE 1M ; #############################################################
##################### #####
General form of a 8i type statement: CREATE TABLESPACE DATA DATAFILE 'G:\ORADATA
\RCDB\DATA01.DBF' size 100M EXTENT MANAGEMENT DICTIONARY default storage ( initi
al 512K next 512K minextents 1 pctincrease 0 ) minimum extent 512K logging onlin
e permanent; More info: ---------By declaring a tablespace as DICTIONARY manage
d, you are specifying that extent management for segments in this tablespace wil
l be managed using the dictionary tables sys.fet$ and sys.uet$. Oracle updates t
hese tables in the data dictionary whenever an extent is allocated, or freed for
reuse. This is the default
in Oracle8i when no extent management clause is used in the CREATE TABLESPACE st
atement. The sys.fet$ table is clustered in the C_TS# cluster. Because it is cre
ated without a SIZE clause, one block will be reserved in the cluster for each t
ablespace. Although, if a tablespace has more free extents than can be contained
in a single cluster block, then cluster block chaining will occur which can sig
nificantly impact performance on the data dictionary and space management transa
ctions in particular. Unfortunately, chaining in this cluster cannot be repaired
without recreating the entire database. Preferably, the number of free extents
in a tablespace should never be greater than can be recorded in the primary clus
ter block for that tablespace, which is about 500 free extents for a database wi
th an 8K database block size. Used extents, on the other hand, are recorded in t
he data dictionary table sys.uet$, which is clustered in the C_FILE#_BLOCK# clus
ter. Unlike the C_TS# cluster, C_FILE#_BLOCK# is sized on the assumption that se
gments will have an average of just 4 or 5 extents each. Unless your data dictio
nary was specifically customized prior to database creation to allow for more us
ed extents per segment, then creating segments with thousands of extents (like m
entioned in the previous section) will cause excessive cluster block chaining in
this cluster. The major dilemma with an excessive number of used and/or free ex
tents is that they can misrepresent the operations of the dictionary cache LRU m
echanism. Extents should therefore not be allowed to grow into the thousands, no
t because of the impact of full table scans, but rather the performance of the d
ata dictionary and dictionary cache. A Locally Managed Tablespace is a tablespac
e that manages its own extents by maintaining a bitmap in each datafile to keep
track of the free or used status of blocks in that datafile. Each bit in the bit
map corresponds to a block or a group of blocks. When the extents are allocated
or freed for reuse, Oracle simply changes the bitmap values to show the new stat
us of the blocks. These changes do not generate rollback information because the
y do not update tables in the data dictionary (except for tablespace quota infor
mation). This is the default in Oracle9i. If COMPATIBLE is set to 9.0.0, then th
e default extent management for any new tablespace is locally managed in Oracle9
i. If COMPATIBLE is less than 9.0.0, then the default extent management for any
new tablespace is dictionary managed in Oracle9i. While free space is represente
d in a bitmap within the tablespace, used extents are only recorded in the exten
t map in the segment header block of each segment, and if necessary, in addition
al extent map blocks within the segment. Keep in mind though, that this informat
ion is not cached in the dictionary cache.
It must be obtained from the database block every time that it is required, and
if those blocks are not in the buffer cache, that involves I/O and potentially l
ots of it. Take for example a query against DBA_EXTENTS. This query would be req
uired to read every segment header and every additional extent map block in the
entire database. It is for this reason that it is recommended that the number of
extents per segment in locally managed tablespaces be limited to the number of
rows that can be contained in the extent map with the segment header block. This
would be approximately - (db_block_size / 16) - 7. For a database with a db blo
ck size of 8K, the above formula would be 505 extents.
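A quick way to see how each tablespace in the database is managed (these columns exist in the
9i/10g data dictionary):

SELECT tablespace_name, extent_management, allocation_type, segment_space_management
FROM   dba_tablespaces
ORDER  BY tablespace_name;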
5.11 DEALLOCATE AND FIND UNUSED SPACE IN A TABLE:
=================================================
alter table emp deallocate unused;
alter table emp deallocate unused keep 100K;
alter table emp allocate extent ( size 100K datafile '/db05/oradata/CC1/user05.dbf');
This datafile must exist in the same tablespace.

-- using the dbms_space.unused_space package
declare
  var1 number; var2 number; var3 number; var4 number;
  var5 number; var6 number; var7 number;
begin
  dbms_space.unused_space('AUTOPROV1', 'MACADDRESS_INDEX', 'INDEX',
                          var1, var2, var3, var4, var5, var6, var7);
  dbms_output.put_line('OBJECT_NAME = YET ANOTHER BAD INDEX');
  dbms_output.put_line('TOTAL_BLOCKS ='||var1);
  dbms_output.put_line('TOTAL_BYTES ='||var2);
  dbms_output.put_line('UNUSED_BLOCKS ='||var3);
  dbms_output.put_line('UNUSED_BYTES ='||var4);
  dbms_output.put_line('LAST_USED_EXTENT_FILE_ID ='||var5);
  dbms_output.put_line('LAST_USED_EXTENT_BLOCK_ID ='||var6);
  dbms_output.put_line('LAST_USED_BLOCK ='||var7);
end;
/ 5.12 CREATE TABLE: ================== -- STORAGE PARAMETERS EXAMPLE: -- ------
--------------------create table emp ( id number, name varchar(2) ) tablespace u
sers pctfree 10 storage (initial 1024K next 1024K pctincrease 10 minextents 2);
ALTER a COLUMN: =============== ALTER TABLE GEWEIGERDETRANSACTIE MODIFY (VERBRUI
KTIJD DATE); -- Creation of new table on basis of existing table: -- -----------
-------------------------------------CREATE TABLE EMPLOYEE_2 AS SELECT * FROM EM
PLOYEE insert into t SELECT * FROM t2; insert into DSA_IMPORT SELECT * FROM MDB_
DW_COMPONENTEN@SALES -- Creation of a table with an autoincrement: -- ----------
-------------------------------CREATE SEQUENCE seq_customer INCREMENT BY 1 START
WITH 1 MAXVALUE 99999 NOCYCLE; CREATE SEQUENCE seq_employee INCREMENT BY 1 STAR
T WITH 1218 MAXVALUE 99999 NOCYCLE; CREATE SEQUENCE seq_a
INCREMENT BY 1 START WITH 1 MAXVALUE 99999 NOCYCLE; CREATE TABLE CUSTOMER ( CUST
OMER_ID NUMBER (10) NOT NULL, NAAM VARCHAR2 (30) NOT NULL, CONSTRAINT PK_CUSTOME
R PRIMARY KEY ( CUSTOMER_ID ) USING INDEX TABLESPACE INDX PCTFREE 10 STORAGE ( I
NITIAL 16K NEXT 16K PCTINCREASE 0 )) TABLESPACE USERS PCTFREE 10 PCTUSED 40 INIT
RANS 1 MAXTRANS 255 STORAGE ( INITIAL 80K NEXT 80K PCTINCREASE 0 MINEXTENTS 1 MA
XEXTENTS 2147483645 ) NOCACHE; CREATE OR REPLACE TRIGGER tr_CUSTOMER_ins BEFORE
INSERT ON CUSTOMER FOR EACH ROW BEGIN SELECT seq_customer.NEXTVAL INTO :NEW.CUST
OMER_ID FROM dual; END;
CREATE SEQUENCE seq_brains_verbruik INCREMENT BY 1 START WITH 1750795 MAXVALUE 1
00000000 NOCYCLE; CREATE OR REPLACE TRIGGER tr_PARENTEENHEID_ins BEFORE INSERT O
N PARENTEENHEID FOR EACH ROW BEGIN SELECT seq_brains_verbruik.NEXTVAL INTO :NEW.
VERBRUIKID FROM dual; END; 5.13 REBUILD OF INDEX: ====================== ALTER I
NDEX emp_pk REBUILD -- online 8.1.6 or higher NOLOGGING TABLESPACE INDEX_BIG PCTF
REE 10 STORAGE ( INITIAL 5M NEXT 5M pctincrease 0 ); ALTER INDEX emp_ename
INITRANS 5 MAXTRANS 10 STORAGE (PCTINCREASE 50); In situations where you have B*
-tree index leaf blocks that can be freed up for reuse, you can merge those leaf
blocks using the following statement:
ALTER INDEX vmoore COALESCE;

-- Basic example of creating an index:
CREATE INDEX emp_ename ON emp(ename)
TABLESPACE users
STORAGE (INITIAL 20K NEXT 20k PCTINCREASE 75)
PCTFREE 0;

If you have a LMT, you can just do:
create index cust_indx on customers(id) nologging;
This statement is without storage parameters.

-- Dropping an index:
DROP INDEX emp_ename;

5.14 MOVE TABLE TO OTHER TABLESPACE: ==========================
========== ALTER TABLE CHARLIE.CUSTOMERS MOVE TABLESPACE USERS2 5.15 SYNONYM (po
inter to an object): ==================================== example: create public
synonym EMPLOYEE for HARRY.EMPLOYEE; 5.16 DATABASE LINK: =================== CR
EATE PUBLIC DATABASE LINK SALESLINK CONNECT TO FRONTEND IDENTIFIED BY cygnusx1 U
SING 'SALES'; SELECT * FROM employee@MY_LINK; For example, using a database link
to database sales.division3.acme.com, a user or application can reference remot
e data as follows: SELECT * FROM scott.emp@sales.division3.acme.com; # emp table
in scott's schema SELECT loc FROM scott.dept@sales.division3.acme.com;
If GLOBAL_NAMES is set to FALSE, then you can use any name for the link to sales
.division3.acme.com. For example, you can call the link foo. Then, you can acces
s the remote database as follows: SELECT name FROM scott.emp@foo;  # link name different FROM global name
Synonyms for S
chema Objects: Oracle lets you create synonyms so that you can hide the database
link name FROM the user. A synonym allows access to a table on a remote databas
e using the same syntax that you would use to access a table on a local database
. For example, assume you issue the following query against a table in a remote
database: SELECT * FROM emp@hq.acme.com; You can create the synonym emp for emp@
hq.acme.com so that you can issue the following query instead to access the same
data: SELECT * FROM emp; View DATABASE LINKS: select substr(owner,1,10), substr
(db_link,1,50), substr(username,1,25), substr(host,1,40), created from dba_db_li
nks 5.17 TO CLEAR TABLESPACE TEMP: ============================== alter tablespa
ce TEMP default storage (pctincrease 0); alter session set events 'immediate tra
ce name DROP_SEGMENTS level TS#+1'; 5.18 RENAME OF OBJECT: =====================
= RENAME sales_staff TO dept_30; RENAME emp2 TO emp; 5.19 CREATE PROFILE: ======
============== CREATE PROFILE DEVELOP_FIN LIMIT SESSIONS_PER_USER 4 IDLE_TIME 30
; CREATE PROFILE PRIOLIMIT LIMIT SESSIONS_PER_USER 10; ALTER USER U_ZKN
PROFILE EXTERNLIMIT; ALTER PROFILE EXTERNLIMIT LIMIT PASSWORD_REUSE_TIME 90 PASS
WORD_REUSE_MAX UNLIMITED; ALTER PROFILE EXTERNLIMIT LIMIT SESSIONS_PER_USER 20 I
DLE_TIME 20; 5.20 RECOMPILE OF FUNCTION, PACKAGE, PROCEDURE: ===================
============================ ALTER FUNCTION schema.function COMPILE; example: AL
TER FUNCTION oe.get_bal COMPILE; ALTER PACKAGE schema.package COMPILE specificat
ion/body/package example ALTER PACKAGE emp_mgmt COMPILE PACKAGE; ALTER PROCEDURE
schema.procedure COMPILE; example ALTER PROCEDURE hr.remove_emp COMPILE; TO FIN
D OBJECTS: SELECT 'ALTER '||decode( object_type, 'PACKAGE SPECIFICATION' ,'PACKA
GE' ,'PACKAGE BODY' ,'PACKAGE' ,object_type) ||' '||owner ||'.'|| object_name ||
' COMPILE ' ||decode( object_type, 'PACKAGE SPECIFICATION' ,'SPECIFICATION' ,'PAC
KAGE BODY' ,'BODY' , NULL) ||';' FROM dba_objects WHERE status = 'INVALID'; 5.21
CREATE PACKAGE: ==================== A package is a set of related functions an
d / or routines. Packages are used to group together PL/SQL code blocks which ma
ke up a common application or are attached to a single business function. Packag
es consist of a specification and a body. The package specification lists the pu
blic interfaces to the blocks within the package body. The package body contains
the public and private PL/SQL blocks which make up the application, private blo
cks are not defined in the package specification and cannot be called by any rou
tine other than one defined within the package body. The benefits of packages ar
e that they improve the organisation of procedure
and function blocks, allow you to update the blocks that make up the package bod
y without affecting the specification (which is the object that users have right
s to) and allow you to grant execute rights once instead of for each and every b
lock. To create a package specification we use a variation on the CREATE command
, all we need put in the specification is each PL/SQL block header that will be
public within the package. An example follows:

CREATE OR REPLACE PACKAGE MYPACK1 AS
  PROCEDURE MYPROC1 (REQISBN IN NUMBER, MYVAR1 IN OUT CHAR, TCOST OUT NUMBER);
  FUNCTION MYFUNC1 RETURN NUMBER;
END MYPACK1;

To create a package body we now specify each PL/SQL block that makes up the package; note that
we are not creating these blocks separately (no CREATE OR REPLACE is required for the individual
procedure and function definitions). An example follows:

CREATE OR REPLACE PACKAGE BODY MYPACK1 AS
  PROCEDURE MYPROC1 (REQISBN IN NUMBER, MYVAR1 IN OUT CHAR, TCOST OUT NUMBER) IS
    TEMP_COST NUMBER(10,2);
  BEGIN
    SELECT COST INTO TEMP_COST FROM JD11.BOOK WHERE ISBN = REQISBN;
    IF TEMP_COST > 0 THEN
      UPDATE JD11.BOOK SET COST = (TEMP_COST*1.175) WHERE ISBN = REQISBN;
    ELSE
      UPDATE JD11.BOOK SET COST = 21.32 WHERE ISBN = REQISBN;
    END IF;
    TCOST := TEMP_COST;
    COMMIT;
  EXCEPTION
    WHEN NO_DATA_FOUND THEN
      INSERT INTO JD11.ERRORS (CODE, MESSAGE) VALUES (99, 'ISBN NOT FOUND');
  END MYPROC1;

  FUNCTION MYFUNC1 RETURN NUMBER IS
    RCOST NUMBER(10,2);
  BEGIN
    SELECT COST INTO RCOST FROM JD11.BOOK WHERE ISBN = 21;
    RETURN (RCOST);
  END MYFUNC1;
END MYPACK1;

You can execute
a public package block like this :EXECUTE :PCOST := JD11.MYPACK1.MYFUNC1 - WHER
E JD11 is the schema name that owns the package. You can use DROP PACKAGE and DR
OP PACKAGE BODY to remove the package objects FROM the database. CREATE OR REPLA
CE PACKAGE schema.package CREATE PACKAGE emp_mgmt AS
FUNCTION hire (last_name VARCHAR2, job_id VARCHAR2, manager_id NUMBER, salary NU
MBER, commission_pct NUMBER, department_id NUMBER) RETURN NUMBER; FUNCTION creat
e_dept(department_id NUMBER, location NUMBER) RETURN NUMBER; PROCEDURE remove_em
p(employee_id NUMBER); PROCEDURE remove_dept(department_id NUMBER); PROCEDURE in
crease_sal(employee_id NUMBER, salary_incr NUMBER); PROCEDURE increase_comm(empl
oyee_id NUMBER, comm_incr NUMBER); no_comm EXCEPTION; no_sal EXCEPTION; END emp_
mgmt; / Before you can call this package's procedures and functions, you must de
fine these procedures and functions in the package body. 5.22 View a view: =====
============ set long 2000 SELECT text FROM sys.dba_views WHERE view_name = 'CON
TROL_PLAZA_V';

5.23 ALTER SYSTEM:
==================
ALTER SYSTEM CHECKPOINT;
ALTER SYSTEM ENABLE/DISABLE RESTRICTED SESSION;
ALTER SYSTEM FLUSH SHARED_POOL;
ALTER SYSTEM SWITCH LOGFILE;
ALTER SYSTEM SUSPEND/RESUME;
ALTER SYSTEM SET RESOURCE_LIMIT = TRUE;
ALTER SYSTEM SET LICENSE_MAX_USERS = 300;
ALTER SYSTEM SET GLOBAL_NAMES=FALSE;
ALTER SYSTEM SET COMPATIBLE = '9.2.0' SCOPE=SPFILE;
5.24 HOW TO ENABLE OR DISABLE TRIGGERS: =======================================
Disable enable trigger: ALTER TRIGGER Reorder DISABLE; ALTER TRIGGER Reorder ENA
BLE; Or in 1 time for all triggers on a table: ALTER TABLE Inventory DISABLE ALL
TRIGGERS;
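To see which triggers are currently enabled or disabled on a table, a simple dictionary query
(owner and table name are just examples):

SELECT trigger_name, status
FROM   dba_triggers
WHERE  table_owner = 'SCOTT' AND table_name = 'INVENTORY';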
5.25 DISABLING AND ENABLING AN INDEX: ====================================== al
ter index HEAT_CUSTOMER_POSTAL_CODE unusable; alter index HEAT_CUSTOMER_POSTAL_C
ODE rebuild; 5.26 CREATE A VIEW: =================== CREATE VIEW v1 AS SELECT LP
AD(' ',40-length(size_tab.size_col)/2,' ') size_col FROM size_tab; CREATE VIEW X
AS SELECT * FROM gebruiker@aptest 5.27 MAKE A USER: ================= CREATE US
ER jward IDENTIFIED BY aZ7bC2 DEFAULT TABLESPACE data_ts QUOTA 100M ON test_ts Q
UOTA 500K ON data_ts TEMPORARY TABLESPACE temp_ts PROFILE clerk; GRANT connect T
O jward; create user jaap identified by jaap default tablespace users temporary
tablespace temp; grant connect to jaap; grant resource to jaap; Dynamic queries:
----------------- CREATE USER AND GRANT PERMISSION STATEMENTS -- dynamic querie
S SELECT 'CREATE USER '||USERNAME||' identified by '||USERNAME||' default tableS
pace '|| DEFAULT_TABLESPACE||' temporary tableSpace '||TEMPORARY_TABLESPACE||';'
FROM DBA_USERS WHERE USERNAME NOT IN ('SYS','SYSTEM','OUTLN','CTXSYS','ORDSYS',
'MDSYS'); SELECT 'GRANT CREATE SeSSion to '||USERNAME||';' FROM DBA_USERS WHERE
USERNAME NOT IN ('SYS','SYSTEM','OUTLN','CTXSYS','ORDSYS','MDSYS'); SELECT 'GRAN
T connect to '||USERNAME||';' FROM DBA_USERS WHERE USERNAME NOT IN ('SYS','SYSTE
M','OUTLN','CTXSYS','ORDSYS','MDSYS'); SELECT 'GRANT reSource to '||USERNAME||';
' FROM DBA_USERS
WHERE USERNAME NOT IN ('SYS','SYSTEM','OUTLN','CTXSYS','ORDSYS','MDSYS'); SELECT
'GRANT unlimited tableSpace to '||USERNAME||';' FROM DBA_USERS WHERE USERNAME N
OT IN ('SYS','SYSTEM','OUTLN','CTXSYS','ORDSYS','MDSYS'); Becoming another user:
====================== - Do the query: select 'ALTER USER '||username||' IDENTI
FIED BY VALUES '||''''|| password||''''||';' from dba_users; - change the passwo
rd - do what you need to do as the other account - change the password back to t
he original value -- grant <other roles or permissions> to <user> SELECT 'ALTER
TABLE RM_LIVE.'||table_name||' disable constraint '|| constraint_name||';' from
dba_constraints where owner='RM_LIVE' and CONSTRAINT_TYPE='R'; SELECT 'ALTER TAB
LE RM_LIVE.'||table_name||' disable constraint '|| constraint_name||';' from dba
_constraints where owner='RM_LIVE' and CONSTRAINT_TYPE='P'; 5.28 CREATE A SEQUEN
CE: ======================= Sequences are database objects from which multiple u
sers can generate unique integers. You can use sequences to automatically genera
te primary key values.
CREATE SEQUENCE <sequence name>
INCREMENT BY <increment number>
START WITH <start number>
MAXVALUE <maximum value>
CYCLE;
CREATE SEQUENCE department_seq INCREMENT BY 1 START WITH 1 MAXVALUE 99999 NOCYCL
E; 5.29 STANDARD USERS IN 9i: ========================== CTXSYS is the primary s
chema for interMedia. MDSYS, ORDSYS, and ORDPLUGINS are schemas required when in
stalling any of the cartridges.
MTSSYS is required for the Oracle Service for MTS and is specific to NT. OUTLN i
s an integral part of the database required for the plan stability feature in Or
acle8i. While the interMedia and cartridge schemas can be recreated by running t
heir associated scripts as needed, I am not 100% on the steps associated with th
e MTSSYS user. Unfortunately, the OUTLN user is created at database creation tim
e when sql.bsq is run. The OUTLN user owns the package OUTLN_PKG which is used t
o manage stored outlines and their outline categories. There are other tables (b
ase tables), indexes, grants, and synonyms related to this package. By default, the following
users are automatically created during database creation: SCOTT by script $ORACLE_HOM
E/rdbms/admin/utlsampl.sql OUTLN by script $ORACLE_HOME/rdbms/admin/sql.bsq Opti
onally: DBSNMP if Enterprise Manager Intelligent Agent is installed TRACESVR if
Enterprise Manager is installed AURORA$ORB$UNAUTHENTICATED \ AURORA$JIS$UTILITY$
-- if Oracle Servlet Engine (OSE) is installed OSE$HTTP$ADMIN / MDSYS if Oracle
Spatial option is installed ORDSYS if interMedia Audio option is installed ORDP
LUGINS if interMedia Audio option is installed CTXSYS if Oracle Text option is i
nstalled REPADMIN if Replication Option is installed LBACSYS if Oracle Label Sec
urity option is installed ODM if Oracle Data Mining option is installed ODM_MTR
idem OLAPSYS if OLAP option is installed WMSYS if Oracle Workspace Manager scrip
t owmctab.plb is executed. ANONYMOUS if catqm.sql catalog script for SQL XML man
agement XDB is executed

5.30 FORCED LOGGING:
====================
alter database force logging;
alter database no force logging;

If a database is in force logging mode, all changes, except those in temporary tablespaces,
will be logged, independently from any nologging specification. It is also possible to put
arbitrary tablespaces into force logging mode: alter tablespace <name> force logging.
Putting a database into force logging mode might take a while to complete, because Oracle
waits for ongoing unlogged (direct write) operations to finish.

alter database add supplemental log data;
ALTER DATABASE DROP SUPPLEMENTAL LOG DATA;
ALTER TABLESPACE TDBA_CDC NO FORCE LOGGING;
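To check the current force logging settings at database and tablespace level (both columns exist
from 9iR2 onwards):

SELECT force_logging FROM v$database;
SELECT tablespace_name, force_logging FROM dba_tablespaces;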
==================================================== ORACLE INSTALLATIONS ON SOL
ARIS, LINUX, AIX, VMS: ====================================================
6: Install on Solaris
7: Install on Linux
8: Install on OpenVMS
9: Install on AIX
================================== 6.1. Install Oracle 92 on Solaris: ==========
======================== 6.1 Tutorial 1: =============== Short Guide to install
Oracle 9.2.0 on SUN Solaris 8 --------------------------------------------------
-----------------------------The Oracle 9i Distribution can be found on Oracle T
echnet (http://technet.oracle.com) The following, short Installation Guide shows
how to install Oracle 9.2.0 for SUN Solaris 8. You may download our scripts to
create a database, we suggest this way and NOT using DBASSIST. Besides this scri
pts, you can download our SQLNET configuration files TNSNAMES.ORA. LISTENER.ORA
and SQLNET.ORA.
Check Hardware Requirements
Operating System Software Requirements
Java Runtime Environment (JRE)
Check Software Limits
Setup the Solaris Kernel
Create Unix Group dba
Create Unix User oracle
Setup ORACLE environment ($HOME/.profile) as follows
Install from CD-ROM ...
... or Unpacking downloaded installation files
Check oraInst.loc File
Install with Installer in interactive mode
Create the Database
Start Listener
Automatically Start / Stop the Database
Install Oracle Options (optional)
Download Scripts for Sun Solaris

For our installation, we used the following ORACLE_HOME and ORACLE_SID, please adjust these
parameters for your own environment.
ORACLE_HOME = /opt/oracle/product/9.2.0 ORACLE_SID = TYP2 ----------------------
---------------------------------------------------------Check Hardware Requirem
ents Minimal Memory: 256 MB Minimal Swap Space: Twice the amount of the RAM To d
etermine the amount of RAM memory installed on your system, enter the following
command. $ /usr/sbin/prtconf To determine the amount of SWAP installed on your s
ystem, enter the following command and multiply the BLOCKS column by 512. $ swap
-l Use the latest kernel patch from Sun Microsystems (http://sunsolve.sun.com)
Operating System Software Requirements Use the latest kernel patch from Sun Micr
osystems. - Download the Patch from: http://sunsolve.sun.com - Read the README F
ile included in the Patch - Usually the only thing you have to do is:
$ cd <patch cluster directory>
$ ./install_cluster
$ cat /var/sadm/install_data/<cluster name>_log
$ showrev -p
- Reboot the system To determine your current operating system information: $ un
ame -a To determine which operating system patches are installed: $ showrev -p T
o determine which operating system packages are installed: $ pkginfo -i [package
_name] To determine if your X-windows system is working properly on your local s
ystem, but you can redirect the X-windows output to another system. $ xclock To
determine if you are using the correct system executables:
$ /usr/bin/which make
$ /usr/bin/which ar
$ /usr/bin/which ld
$ /usr/bin/which nm
Each of the four commands above should point to the /usr/ccs/bin directory. If n
ot, add /usr/ccs/bin to the beginning of the PATH environment variable in the cu
rrent shell. Java Runtime Environment (JRE) The JRE shipped with Oracle9i is use
d by Oracle Java applications such as the Oracle Universal Installer and is the only
one supported. You should not modify this JRE, unless it is done through a patc
h provided by Oracle Support Services. The inventory can contain multiple versio
ns of the JRE, each of which can be used by one or more products or releases. Th
e Installer creates the oraInventory directory the first time it is run to keep
an inventory of products that it installs on your system as well as other instal
lation information. The location of oraInventory is defined in /var/opt/oracle/o
raInst.loc. Products in an ORACLE_HOME access the JRE through a symbolic link in
$ORACLE_HOME/JRE to the actual location of a JRE within the inventory. You shou
ld not modify the symbolic link. Check Software Limits Oracle9i includes native
support for files greater than 2 GB. Check your shell to determine whether it wi
ll impose a limit. To check current soft shell limits, enter the following comma
nd: $ ulimit -Sa To check maximum hard limits, enter the following command: $ ul
imit -Ha The file (blocks) value should be multiplied by 512 to obtain the maxim
um file size imposed by the shell. A value of unlimited is the operating system
default and is the maximum value of 1 TB.
Setup the Solaris Kernel
Set SEMMNS to the sum of the PROCESSES parameter for each Oracle database, adding the largest
one twice, then add an additional 10 for each database. For example, consider a system
that has three Oracle instances with the PROCESSES parameter in their initSID.o
ra files set to the following values: ORACLE_SID=TYP1, PROCESSES=100 ORACLE_SID=
TYP2, PROCESSES=100 ORACLE_SID=TYP3, PROCESSES=200
The value of SEMMNS is calculated as follows: SEMMNS = [(A=100) + (B=100)] + [(C
=200) * 2] + [(# of instances=3) * 10] = 630 Setting parameters too high for the
operating system can prevent the machine from booting up. Refer to Sun Microsys
tems Sun SPARC Solaris system administration documentation for parameter limits.
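To see the semaphore and shared memory values the running kernel actually uses, one way is
sysdef (a sketch; the exact output layout differs per Solaris release):

$ /usr/sbin/sysdef | grep -i shm
$ /usr/sbin/sysdef | grep -i sem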
*
* Kernel Parameters on our SUN Enterprise with 640MB for Oracle 9
*
set shmsys:shminfo_shmmax=4294967295
set shmsys:shminfo_shmmin=1
set shmsys:shminfo_shmmni=100
set shmsys:shminfo_shmseg=10
set semsys:seminfo_semmni=100
set semsys:seminfo_semmsl=100
set semsys:seminfo_semmns=2500
set semsys:seminfo_semopm=100
set semsys:seminfo_semvmx=32767

-- remarks: The parameter for shared memory (shminfo
_shmmax) can be set to the maximum value; it will not impact Solaris in any way.
The values for semaphores (seminfo_semmni and seminfo_semmns) depend on the num
ber of clients you want to collect data from. As a rule of the thumb, the values
should be set to at least (2*nr of clients + 15). You will have to reboot the s
ystem after making changes to the /etc/system file. Solaris doesn't automaticall
y allocate shared memory, unless you specify the value in /etc/system and reboot
. If I were you, I'd put lines in /etc/system that look something like this; only the first
value is *really* important. It specifies the maximum amount of shared memory to allocate.
I'd make this parameter about 70-75% of your physical RAM (assuming you have nothing else
running on this machine besides Oracle; if not, adjust down accordingly). This value will then
dictate your maximum SGA size as you build your database.
set shmsys:shminfo_shmmax=4294967295
set shmsys:shminfo_shmmin=1
set shmsys:shminfo_shmmni=100
set shmsys:shminfo_shmseg=10
set semsys:seminfo_semmsl=256
set semsys:seminfo_semmns=1024
set semsys:seminfo_semmni=400
-- end remarks

Create Unix Group dba
$ groupadd -g 400 dba
$ groupdel dba        (removes the group again, if needed)

Create Unix User oracle
$ useradd -u 400 -c "Oracle Owner" -d /export/home/oracle \
  -g "dba" -m -s /bin/ksh oracle

Setup ORACLE environment ($HOME/.profile) as follow
s # Setup ORACLE environment ORACLE_HOME=/opt/oracle/product/9.2.0; export ORACL
E_HOME ORACLE_SID=TYP2; export ORACLE_SID ORACLE_TERM=xterm; export ORACLE_TERM
TNS_ADMIN=/export/home/oracle/config/9.2.0; export TNS_ADMIN NLS_LANG=AMERICAN_A
MERICA.WE8ISO8859P1; export NLS_LANG ORA_NLS33=$ORACLE_HOME/ocommon/nls/admin/da
ta; export ORA_NLS33 LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib:/usr/openwin
/lib LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/dt/lib:/usr/ucblib:/usr/local/lib exp
ort LD_LIBRARY_PATH # Set up the search paths: PATH=/bin:/usr/bin:/usr/sbin:/opt
/bin:/usr/ccs/bin:/opt/local/GNU/bin PATH=$PATH:/opt/local/bin:/opt/NSCPnav/bin:
$ORACLE_HOME/bin PATH=$PATH:/usr/local/samba/bin:/usr/ucb:. export PATH # CLASSP
ATH must include the following JRE location(s): CLASSPATH=$ORACLE_HOME/JRE:$ORAC
LE_HOME/jlib:$ORACLE_HOME/rdbms/jlib CLASSPATH=$CLASSPATH:$ORACLE_HOME/network/j
lib Install from CD-ROM ... Usually the CD-ROM will be mounted automatically by
the Solaris Volume Manager, if not, do it as follows as user root. $ su root $ m
kdir /cdrom $ mount -r -F hsfs /dev/.... /cdrom exit or CTRL-D ... or Unpacking
downloaded installation files If you downloaded database installation files from
Oracle site (901solaris_disk1.cpio.gz, 901solaris_disk2.cpio.gz and 901solaris_
disk3.cpio.gz) gunzip them somewhere and you'll get three .cpio files. The best
way to download the huge files is to use the tool GetRight ( http://www.getright
.com/ )
$ cd <somewhere>
$ mkdir Disk1 Disk2 Disk3
$ cd Disk1
$ gunzip 901solaris_disk1.cpio.gz
$ cat 901solaris_disk1.cpio | cpio -icd
This will extract all the files for Disk1; repeat the steps for Disk2 and Disk3. Now
you should have three directories (Disk1, Disk2 and Disk3) containing installati
on files. Check oraInst.loc File If you used Oracle before on your system, then
you must edit the Oracle Inventory File, usually located in: /var/opt/oracle/ora
Inst.loc inventory_loc=/opt/oracle/product/oraInventory Install with Installer i
n interactive mode
Install Oracle 9i with Oracle Installer
$ cd /Disk1
$ DISPLAY=<Any X-Window Host>:0.0
$ export DISPLAY
$ ./runInstaller
example display: $ export DISPLAY=192.168.1.10:0.0
Answer the questions in the Installer, we use the
following install directories Inventory Location: /opt/oracle/product/oraInvento
ry Oracle Universal Installer in: /opt/oracle/product/oui Java Runtime Environme
nt in: /opt/oracle/product/jre/1.1.8 Edit the Database Startup Script /var/opt/o
racle/oratab TYP2:/opt/oracle/product/9.2.0:Y Create the Database Edit and save
the CREATE DATABASE File initTYP2.sql in $ORACLE_HOME/dbs, or create a symbolic-
Link from $ORACLE_HOME/dbs to your Location. $ cd $ORACLE_HOME/dbs $ ln -s /expo
rt/home/oracle/config/9.2.0/initTYP2.ora initTYP2.ora $ ls -l initTYP2.ora -> /e
xport/home/oracle/config/9.2.0/initTYP2.ora First start the Instance, just to te
st your initTYP2.ora file for correct syntax and system resources. $ cd /export/
home/oracle/config/9.2.0/ $ sqlplus /nolog SQL> connect / as sysdba SQL> startup
nomount SQL> shutdown immediate Now you can create the database
SQL> @initTYP2.sql SQL> shutdown immediate SQL> startup Check the Logfile: init
TYP2.log Start Listener $ lsnrctl start LSNRTYP2 Automatically Start / Stop the
Database To start the Database automatically on Boot-Time, create or use our Sta
rtup Scripts dbora and lsnrora (included in ora_config_sol_920.tar.gz), which mu
st be installed in /etc/init.d. Create symbolic Links from the Startup Directori
es. lrwxrwxrwx 1 root root S99dbora -> ../init.d/dbora* lrwxrwxrwx 1 root root S
99lsnrora -> ../init.d/lsnrora* Install Oracle Options (optional) You may want t
o install the following Options: Oracle JVM, Oracle XML, Oracle Spatial, Oracle Ultra Search,
Oracle OLAP, Oracle Data Mining, Example Schemas. Run the following script install_options.sh
to enable these options in the database. Before running this script, adjust the initSID.ora
parameters as follows for the build process. After this, you can reset the parameters to
smaller values.
parallel_automatic_tuning = false
shared_pool_size = 200000000
java_pool_size = 100000000
$ ./install_op
tions.sh Download Scripts for Sun Solaris These Scripts can be used as Templates
. Please note, that some Parameters like ORACLE_HOME, ORACLE_SID and PATH must b
e adjusted on your own Environment. Besides this, you should check the initSID.o
ra Parameters for your Database (Size, Archivelog, ...) 6.2 Environment oracle u
ser: ---------------------------typical profile for Oracle account on most unix
systems:
.profile
--------
MAIL=/usr/mail/${LOGNAME:?}
umask 022
EDITOR=vi; export EDITOR
ORACLE_BASE=/opt/app/oracle; export ORACLE_BASE
ORACLE_HOME=$ORACLE_BASE/product/9.2; export ORACLE_HOME
ORACLE_SID=OWS; export ORACLE_SID
ORACLE_TERM=xterm; export ORACLE_TERM
TNS_ADMIN=$ORACLE_HOME/network/admin; export TNS_ADMIN
NLS_LANG=AMERICAN_AMERICA.AL16UTF8; export NLS_LANG
ORA_NLS33=$ORACLE_HOME/ocommon/nls/admin/data; export ORA_NLS33
LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib:/usr/openwin/lib
LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/dt/lib:/usr/ucblib:/usr/local/lib
export LD_LIBRARY_PATH
PATH=.:/usr/bin:/usr/sbin:/sbin:/usr/ucb:/etc:$ORACLE_HOME/lib:/usr/oasys/bin:$ORACLE_HOME/bin:/usr/local/bin:
export PATH
PS1='$PWD >'
DISPLAY=172.17.2.128:0.0
export DISPLAY

/etc >more passwd
-----------------
root:x:0:1:Super-User:/:/sbin/sh
daemon:x:1:1::/:
bin:x:2:2::/usr/bin:
sys:x:3:3::/:
adm:x:4:4:Admin:/var/adm:
lp:x:71:8:Line Printer Admin:/usr/spool/lp:
uucp:x:5:5:uucp Admin:/usr/lib/uucp:
nuucp:x:9:9:uucp Admin:/var/spool/uucppublic:/usr/lib/uucp/uucico
smmsp:x:25:25:SendMail Message Submission Program:/:
listen:x:37:4:Network Admin:/usr/net/nls:
nobody:x:60001:60001:Nobody:/:
noaccess:x:60002:60002:No Access User:/:
nobody4:x:65534:65534:SunOS 4.x Nobody:/:
avdsel:x:1002:100:Albert van der Sel:/export/home/avdsel:/bin/ksh
oraclown:x:1001:102:Oracle owner:/export/home/oraclown:/bin/ksh
brighta:x:1005:102:Bright Alley:/export/home/brighta:/bin/ksh
customer:x:2000:102:Customer account:/export/home/customer:/usr/bin/tcsh

/etc >more group
----------------
root::0:root
other::1:
bin::2:root,bin,daemon
sys::3:root,bin,sys,adm
adm::4:root,adm,daemon
uucp::5:root,uucp
mail::6:root
tty::7:root,adm
lp::8:root,lp,adm
nuucp::9:root,nuucp
staff::10:
daemon::12:root,daemon
sysadmin::14:
smmsp::25:smmsp
nobody::60001:
noaccess::60002:
nogroup::65534:
dba::100:oraclown,brighta
oper::101:
oinstall::102:
===================================== 7. install Oracle 9i on Linux: ===========
========================== ==================== 7.1.Article 1: =================
=== The Oracle 9i Distribution can be found on Oracle Technet (http://technet.or
acle.com) The following short Guide shows how to install and configure Oracle 9.
2.0 on RedHat Linux 7.2 / 8.0 You may download our Scripts to create a database,
we suggest this way and NOT using DBASSIST. Besides these scripts, you can down
load our NET configuration files: LISTENER.ORA, TNSNAMES.ORA and SQLNET.ORA.

System Requirements
Create Unix Group dba
Create Unix User oracle
Setup Environment ($HOME/.bash_profile) as follows
Mount the Oracle 9i CD-ROM (only if you have the CD) ...
... or Unpacking downloaded installation files
Install with Installer in interactive mode
Create the Database
Create your own DB-Create Script (optional)
Start Listener
Automatically Start / Stop the Database
Setup Kernel Parameters ( if necessary )
Install Oracle Options (optional)
Download Scripts for RedHat Linux 7.2
For our installation, we used the following ORACLE_HOME AND ORACLE_SID, please a
djust these parameters for your own environment. ORACLE_HOME = /opt/oracle/produ
ct/9.2.0 ORACLE_SID = VEN1 -----------------------------------------------------
--------------------------System Requirements Oracle 9i needs Kernel Version 2.4
and glibc 2.2, which is included in RedHat Linux 7.2.
Component                     Check with ...     ... Output
Linux Kernel Version 2.4      rpm -q kernel      kernel-2.4.7-10
System Libraries              rpm -q glibc       glibc-2.2.4-19.3
Proc*C/C++                    rpm -q gcc         gcc-2.96-98

Create Unix Group dba
$ groupadd -g 400 dba

Create Unix User oracle
$ useradd -u 400 -c "Oracle Owner" -d /home/oracle \
  -g "dba" -m -s /bin/bash oracle

Setup Environment ($HOME/.bash_profile) as follows

# Setup ORACLE environment
ORACLE_HOME=/opt/oracle/product/9.2.0; export ORACLE_HOME
ORACLE_SID=VEN1; export ORACLE_SID
ORACLE_TERM=xterm; export ORACLE_TERM
ORACLE_OWNER=oracle; export ORACLE_OWNER
TNS_ADMIN=/home/oracle/config/9.2.0; export TNS_ADMIN
NLS_LANG=AMERICAN_AMERICA.WE8ISO8859P1; export NLS_LANG
ORA_NLS33=$ORACLE_HOME/ocommon/nls/admin/data; export ORA_NLS33
CLASSPATH=$ORACLE_HOME/jdbc/lib/classes111.zip
LD_LIBRARY_PATH=$ORACLE_HOME/lib; export LD_LIBRARY_PATH
### see JSDK: export CLASSPATH

# Set up JAVA and JSDK environment:
export JAVA_HOME=/usr/local/jdk
export JSDK_HOME=/usr/local/jsdk
CLASSPATH=$CLASSPATH:$JAVA_HOME/lib:$JSDK_HOME/lib/jsdk.jar
export CLASSPATH

# Set up the search paths:
PATH=$POSTFIX/bin:$POSTFIX/sbin:$POSTFIX/sendmail
PATH=$PATH:/usr/local/jre/bin:/usr/local/jdk/bin:/bin:/sbin:/usr/bin:/usr/sbin
PATH=$PATH:/usr/local/bin:$ORACLE_HOME/bin:/usr/local/jsdk/bin
PATH=$PATH:/usr/local/sbin:/usr/bin/X11:/usr/X11R6/bin:/root/bin
PATH=$PATH:/usr/local/samba/bin
export PATH

Mount the Oracle 9i CD-ROM (only if you have the CD) ...
Mount the CD-ROM as user root.

$ su root
$ mkdir /cdrom
$ mount -t iso9660 /dev/cdrom /cdrom
$ exit

... or Unpacking downloaded installation files
If you downloaded databa
se installation files from Oracle site (Linux9i_Disk1.cpio.gz, Linux9i_Disk2.cpi
o.gz and Linux9i_Disk3.cpio.gz) gunzip them somewhere and you'll get three .cpio
files. The best way to download the huge files is to use the tool GetRight ( ht
tp://www.getright.com/ )

$ cd <somewhere>
$ cpio -idmv < Linux9i_Disk1.cpio
$ cpio -idmv < Linux9i_Disk2.cpio
$ cpio -idmv < Linux9i_Disk3.cpio
Now you should have three directories (Disk1, Disk2 and Disk3) containing instal
lation files. Install with Installer in interactive mode Install Oracle 9i with
Oracle Installer:

$ cd Disk1
$ DISPLAY=<Any X-Window Host>:0.0
$ export DISPLAY
$ ./runInstaller
Answer the questions in the Installer, we use the following install directories
Inventory Location: /opt/oracle/product/oraInventory Oracle Universal Installer
in: /opt/oracle/product/oui Java Runtime Environment in: /opt/oracle/product/jre
/1.1.8 Edit the Database Startup Script /etc/oratab VEN1:/opt/oracle/product/9.2
.0:Y Create the Database Edit and save the CREATE DATABASE File initVEN1.sql in
$ORACLE_HOME/dbs, or create a symbolic-Link from $ORACLE_HOME/dbs to your Locati
on. $ cd $ORACLE_HOME/dbs $ ln -s /home/oracle/config/9.2.0/initVEN1.ora initVEN
1.ora $ ls -l initVEN1.ora -> /home/oracle/config/9.2.0/initVEN1.ora
First start the Instance, just to test your initVEN1.ora file for correct syntax
and system resources.

$ cd /home/oracle/config/9.2.0/
$ sqlplus /nolog
SQL> connect / as sysdba
SQL> startup nomount
SQL> shutdown immediate

Now you can create the database:
SQL> @initVEN1.sql
SQL> shutdown immediate
SQL> startup

Check the Logfile: initVEN1.log

Create your own DB-Create Script (optional) You can gener
ate your own DB-Create Script using the Tool: $ORACLE_HOME/bin/dbca Start Listen
er $ lsnrctl start LSNRVEN1 Automatically Start / Stop the Database To start the
Database automatically on Boot-Time, create or use our Startup Scripts dbora an
d lsnrora (included in ora_config_linux_901.tar.gz), which must be installed in
/etc/rc.d/init.d. Create symbolic Links from the Startup Directories in /etc/rc.
d (e.g. /etc/rc.d/rc2.d). lrwxrwxrwx 1 root root S99dbora -> ../init.d/dbora* lr
wxrwxrwx 1 root root S99lsnrora -> ../init.d/lsnrora* Setup Kernel Parameters (
if necessary ) Oracle9i uses UNIX resources such as shared memory, swap space, a
nd semaphores extensively for interprocess communication. If your kernel paramet
er settings are insufficient for Oracle9i, you will experience problems during i
nstallation and instance startup. The greater the amount of data you can store i
n memory, the faster your database will operate. In addition, by maintaining dat
a in memory, the UNIX kernel reduces disk I/O activity. Use the ipcs command to
obtain a list of the system's current shared memory and semaphore segments, and thei
r identification number and owner. You can modify the kernel parameters by using
the /proc file system. To modify kernel parameters using the /proc file system:
1. Log in as root user. 2. Change to the /proc/sys/kernel directory.
3. Review the current semaphore parameter values in the sem file using the cat o
r more utility # cat sem The output will list, in order, the values for the SEMM
SL, SEMMNS, SEMOPM, and SEMMNI parameters. The following example shows how the o
utput will appear. 250 32000 32 128 In the preceding example, 250 is the value o
f the SEMMSL parameter, 32000 is the value of the SEMMNS parameter, 32 is the va
lue of the SEMOPM parameter, and 128 is the value of the SEMMNI parameter. 4. Mo
dify the parameter values using the following command: # echo SEMMSL_value SEMMN
S_value SEMOPM_value SEMMNI_value > sem In the preceding command, all parameters
must be entered in order. 5. Review the current shared memory parameters using
the cat or more utility. # cat shared_memory_parameter In the preceding example,
the shared_memory_parameter is either the SHMMAX or SHMMNI parameter. The param
eter name must be entered in lowercase letters. 6. Modify the shared memory para
meter using the echo utility. For example, to modify the SHMMAX parameter, enter
the following: # echo 2147483648 > shmmax 7. Write a script to initialize these
values during system startup and include the script in your system init files.
Refer to the following table to determine if your system shared memory and semap
hore kernel parameters are set high enough for Oracle9i. The parameters in the f
ollowing table are the minimum values required to run Oracle9i with a single dat
abase instance. You can put the initialization in the file /etc/rc.d/rc.local #
Setup Kernel Parameters for Oracle 9i echo 250 32000 100 128 > /proc/sys/kernel/
sem echo 2147483648 > /proc/sys/kernel/shmmax echo 4096 > /proc/sys/kernel/shmmn
i

Install Oracle Options (optional)
You may want to install the following Options:
Oracle JVM
Oracle XML
Oracle Spatial
Oracle Ultra Search
Oracle OLAP
Oracle Data Mining
Example Schemas

Run the following script install_options.sh to enable these options in the database.
Before running this script, adjust the initSID.ora parameters as follows for the build
process. After this, you can reset the parameters to smaller values.

parallel_automatic_tuning = false
shared_pool_size = 200000000
java_pool_size = 100000000

$ ./install_options.sh

Download Scri
pts for RedHat Linux 7.2 These Scripts can be used as Templates. Please note, th
at some Parameters like ORACLE_HOME, ORACLE_SID and PATH must be adjusted on you
r own Environment. Besides this, you should check the initSID.ora Parameters for
your Database (Size, Archivelog, ...)
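A note on the create-database step above: the initVEN1.sql file itself is part of the
downloadable scripts and is not reproduced in these notes. Purely as an illustration of
what such a 9i create-database script contains, a heavily simplified sketch follows; the
file names, sizes and character set are placeholders and should be taken from your own
initSID.ora and storage layout.

connect / as sysdba
startup nomount pfile=/home/oracle/config/9.2.0/initVEN1.ora
CREATE DATABASE VEN1
   DATAFILE '/opt/oracle/oradata/VEN1/system01.dbf' SIZE 300M AUTOEXTEND ON
   DEFAULT TEMPORARY TABLESPACE temp
      TEMPFILE '/opt/oracle/oradata/VEN1/temp01.dbf' SIZE 100M
   UNDO TABLESPACE undotbs
      DATAFILE '/opt/oracle/oradata/VEN1/undotbs01.dbf' SIZE 200M
   CHARACTER SET WE8ISO8859P1
   LOGFILE GROUP 1 ('/opt/oracle/oradata/VEN1/redo01.log') SIZE 50M,
           GROUP 2 ('/opt/oracle/oradata/VEN1/redo02.log') SIZE 50M,
           GROUP 3 ('/opt/oracle/oradata/VEN1/redo03.log') SIZE 50M;
@?/rdbms/admin/catalog.sql
@?/rdbms/admin/catproc.sql

The real script in the download also creates additional tablespaces and runs further
catalog scripts; the sketch above only shows the core statement.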
==================== 7.2.Article 2: ==================== Installing Oracle9i (9.
2.0.5.0) on Red Hat Linux (Fedora Core 2) by Jeff Hunter, Sr. Database Administr
ator ---------------------------------------------------------------------------
----
Contents
Overview
Swap Space Considerations
Configuring Shared Memory
Configuring Semaphores
Configuring File Handles
Create Oracle Account and Directories
Configuring the Oracle Environment
Configuring Oracle User Shell Limits
Downloading / Unpacking the Oracle9i Installation Files
Update Red Hat Linux System - (Oracle Metalink Note: 252217.1)
Install the Oracle 9.2.0.4.0 RDBMS Software
Install the Oracle 9.2.0.5.0 Patchset
Post Installation Steps
Creating the Oracle Database
--------------------------------------------------------------------------
-----Overview The following article is a summary of the steps required to succes
sfully install the Oracle9i (9.2.0.4.0) RDBMS software on Red Hat Linux Fedora C
ore 2. Also
included in this article is a detailed overview for applying the Oracle9i (9.2.0
.5.0) patchset. Keep in mind the following assumptions throughout this article:
When installing Red Hat Linux Fedora Core 2, I install ALL components. (Everythi
ng). This makes it easier than trying to troubleshoot missing software component
s. As of March 26, 2004, Oracle includes the Oracle9i RDBMS software with the 9.
2.0.4.0 patchset already included. This will save considerable time since the pa
tchset does not have to be downloaded and installed. We will, however, be applyi
ng the 9.2.0.5.0 patchset. Although it is not required, it is recommend to apply
the 9.2.0.5.0 patchset. The post installation section includes steps for config
uring the Oracle Networking files, configuring the database to start and stop wh
en the machine is cycled, and other miscellaneous tasks. Finally, at the end of
this article, we will be creating an Oracle 9.2.0.5.0 database named ORA920 usin
g supplied scripts. ------------------------------------------------------------
-------------------Swap Space Considerations Ensure enough swap space is availab
le. Installing Oracle9i requires a minimum of 512MB of memory. (An inadequate am
ount of swap during the installation will cause the Oracle Universal Installer t
o either "hang" or "die") To check the amount of memory / swap you have allocate
d, type either: # free - OR # cat /proc/swaps - OR # cat /proc/meminfo | grep Me
mTotal If you have less than 512MB of memory (between your RAM and SWAP), you ca
n add temporary swap space by creating a temporary swap file. This way you do no
t have to use a raw device or even more drastic, rebuild your system. As root, m
ake a file that will act as additional swap space, let's say about 300MB: # dd i
f=/dev/zero of=tempswap bs=1k count=300000 Now we should change the file permiss
ions: # chmod 600 tempswap Finally we format the "partition" as swap and add it
to the swap space: # mke2fs tempswap # mkswap tempswap
# swapon tempswap
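Once the temporary swap file is active, a quick check with the commands already mentioned
above confirms the extra space is visible (output will of course differ per system):

# swapon -s
# free

The tempswap file should now appear in the swap list, and the swap total reported by free
should have grown by roughly 300MB.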
-------------------------------------------------------------------------------C
onfiguring Shared Memory The Oracle RDBMS uses shared memory in UNIX to allow pr
ocesses to access common data structures and data. These data structures and dat
a are placed in a shared memory segment to allow processes the fastest form of I
nterprocess Communications (IPC) available. The speed is primarily a result of p
rocesses not needing to copy data between each other to share common data and st
ructures - relieving the kernel from having to get involved. Oracle uses shared
memory in UNIX to hold its Shared Global Area (SGA). This is an area of memory w
ithin the Oracle instance that is shared by all Oracle background and foreground pro
cesses. It is important to size the SGA to efficiently hold the database buffer
cache, shared pool, redo log buffer as well as other shared Oracle memory struct
ures. Inadequate sizing of the SGA can have a dramatic decrease in performance o
f the database. To determine all shared memory limits you can use the ipcs comma
nd. The following example shows the values of my shared memory limits on a fresh
RedHat Linux install using the defaults: # ipcs -lm ------ Shared Memory Limits
-------max number of segments = 4096 max seg size (kbytes) = 32768 max total sh
ared memory (kbytes) = 8388608 min seg size (bytes) = 1 Let's continue this sect
ion with an overview of the parameters that are responsible for configuring the
shared memory settings in Linux. SHMMAX The SHMMAX parameter is used to define t
he maximum size (in bytes) for a shared memory segment and should be set large e
nough for the largest SGA size. If the SHMMAX is set incorrectly (too low), it i
s possible that the Oracle SGA (which is held in shared segments) may be limited
in size. An inadequate SHMMAX setting would result in the following: ORA-27123:
unable to attach to shared memory segment You can determine the value of SHMMAX
by performing the following: # cat /proc/sys/kernel/shmmax 33554432 As you can
see from the output above, the default value for SHMMAX is 32MB. This is often t
oo small to configure the Oracle SGA. I generally set the SHMMAX parameter to 2G
B.
NOTE: With a 32-bit Linux operating system, the default maximum size of the SGA
is 1.7GB. This is the reason I will often set the SHMMAX parameter to 2GB since
it requires a larger value for SHMMAX. On a 32-bit Linux operating system, witho
ut Physical Address Extension (PAE), the physical memory is divided into a 3GB u
ser space and a 1GB kernel space. It is therefore possible to create a 2.7GB SGA
, but you will need make several changes at the Linux operating system level by
changing the mapped base. In the case of a 2.7GB SGA, you would want to set the
SHMMAX parameter to 3GB. Keep in mind that the maximum value of the SHMMAX param
eter is 4GB.
To change the value of SHMMAX, you can use any of the following three methods:
This is the method I use most often. This method sets the SHMMAX on startup by insert
ing the following kernel parameter in the /etc/sysctl.conf startup file: # echo
"kernel.shmmax=2147483648" >> /etc/sysctl.conf If you wanted to dynamically alte
r the value of SHMMAX without rebooting the machine, you can make this change di
rectly to the /proc file system. This command can be made permanent by putting i
t into the /etc/rc.local startup file: # echo "2147483648" > /proc/sys/kernel/sh
mmax You can also use the sysctl command to change the value of SHMMAX: # sysctl
-w kernel.shmmax=2147483648 SHMMNI We now look at the SHMMNI parameters. This k
ernel parameter is used to set the maximum number of shared memory segments syst
em wide. The default value for this parameter is 4096. This value is sufficient
and typically does not need to be changed. You can determine the value of SHMMNI
by performing the following: # cat /proc/sys/kernel/shmmni 4096 SHMALL Finally,
we look at the SHMALL shared memory kernel parameter. This parameter controls t
he total amount of shared memory (in pages) that can be used at one time on the
system. In short, the value of this parameter should always be at least: ceil(SH
MMAX/PAGE_SIZE) The default size of SHMALL is 2097152 and can be queried using t
he following command: # cat /proc/sys/kernel/shmall 2097152 From the above outpu
t, the total amount of shared memory (in bytes) that can be used at one time on
the system is: SM = (SHMALL * PAGE_SIZE) = 2097152 * 4096 = 8,589,934,592 bytes
The default setting for SHMALL should be adequate for our Oracle installation. N
OTE: The page size in Red Hat Linux on the i386 platform is 4096 bytes. You can,
however, use bigpages which supports the configuration of larger memory page si
zes.
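As a recap of this section, the three shared memory parameters discussed above can be
collected in /etc/sysctl.conf so that they survive a reboot; a minimal sketch using the
example values from the text above:

# Oracle 9i shared memory settings (example values, adjust to your SGA size)
kernel.shmmax = 2147483648
kernel.shmmni = 4096
kernel.shmall = 2097152

The settings can then be loaded without a reboot with:
# sysctl -p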
-------------------------------------------------------------------------------C
onfiguring Semaphores Now that we have configured our shared memory settings, it
is time to take care of configuring our semaphores. A semaphore can be thought
of as a counter that is used to control access to a shared resource. Semaphores
provide low level synchronization between processes (or threads within a process
) so that only one process (or thread) has access to the shared segment, thereby
ensuring the integrity of that shared resource. When an application requests s
emaphores, it does so using "sets". To determine all semaphore limits, use the f
ollowing: # ipcs -ls ------ Semaphore Limits -------max number of arrays = 128 m
ax semaphores per array = 250 max semaphores system wide = 32000 max ops per sem
op call = 32 semaphore max value = 32767 You can also use the following command:
# cat /proc/sys/kernel/sem 250 32000 32 128 SEMMSL The SEMMSL kernel parameter
is used to control the maximum number of semaphores per semaphore set. Oracle re
commends setting SEMMSL to the largest PROCESS instance parameter setting in the
init.ora file for all databases hosted on the Linux system plus 10. Also, Oracl
e recommends setting the SEMMSL to a value of no less than 100. SEMMNI The SEMMN
I kernel parameter is used to control the maximum number of semaphore sets on th
e entire Linux system. Oracle recommends setting the SEMMNI to a value of no les
s than 100. SEMMNS The SEMMNS kernel parameter is used to control the maximum nu
mber of semaphores (not semaphore sets) on the entire Linux system. Oracle recom
mends setting the SEMMNS to the sum of the PROCESSES instance parameter setting
for each database on the system, adding the largest PROCESSES twice, and then fi
nally adding 10 for each Oracle database on the system. To summarize:

SEMMNS = sum of PROCESSES setting for each database on the system
         + (2 * [largest PROCESSES setting])
         + (10 * [number of databases on system])

To determine the maximum number of semaphores that can be allocated on a Linux
system, use the following calculation. It will be the lesser of:
SEMMNS
-or-
(SEMMSL * SEMMNI)
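As a worked example of the SEMMNS formula (the databases and PROCESSES values below are
invented purely for illustration): suppose the server hosts two databases, one with
PROCESSES=100 and one with PROCESSES=200. Then:

SEMMNS = (100 + 200) + (2 * 200) + (10 * 2) = 720

With the default SEMMSL=250 and SEMMNI=128, the ceiling of SEMMSL * SEMMNI = 32000 lies
far above this, so the full 720 semaphores can be allocated.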
SEMOPM
The SEMOPM kernel parameter is used to control the number of semaphore operation
s that can be performed per semop system call. The semop system call (function)
provides the ability to do operations for multiple semaphores with one semop sys
tem call. A semaphore set can have the maximum number of SEMMSL semaphores per s
emaphore set, and it is therefore recommended to set SEMOPM equal to SEMMSL. Oracle
recommends setting the SEMOPM to a value of no less than 100. Setting Semaphore
Kernel Parameters Finally, we see how to set all semaphore parameters using seve
ral methods. In the following, the only parameter I care about changing (raising
) is SEMOPM. All other default settings should be sufficient for our example ins
tallation. This is the method I use most often. This method sets all semaphore kerne
l parameters on startup by inserting the following kernel parameter in the /etc/
sysctl.conf startup file: # echo "kernel.sem=250 32000 100 128" >> /etc/sysctl.c
onf If you wanted to dynamically alter the value of all semaphore kernel paramet
ers without rebooting the machine, you can make this change directly to the /pro
c file system. This command can be made permanent by putting it into the /etc/rc
.local startup file: # echo "250 32000 100 128" > /proc/sys/kernel/sem You can a
lso use the sysctl command to change the value of all semaphore settings: # sysc
tl -w kernel.sem="250 32000 100 128" -------------------------------------------
------------------------------------Configuring File Handles When configuring ou
r Linux database server, it is critical to ensure that the maximum number of fil
e handles is large enough. The setting for file handles designates the number of
open files that you can have on the entire Linux system. Use the following comma
nd to determine the maximum number of file handles for the entire system: # cat
/proc/sys/fs/file-max 103062 Oracle recommends that the file handles for the ent
ire system be set to at least 65536. In most cases, the default for Red Hat Linu
x is 103062. I have seen others (Red Hat Linux AS 2.1, Fedora Core 1, and Red Ha
t version 9) that will only default to 32768. If this is the case, you will want
to increase this value to at least 65536. This is the method I use most often. This
method sets the maximum number of file handles (using the kernel parameter file
-max) on startup by inserting the following kernel parameter in the /etc/sysctl.
conf startup file:
# echo "fs.file-max=65536" >> /etc/sysctl.conf If you wanted to dynamically alte
r the value of all semaphore kernel parameters without rebooting the machine, yo
u can make this change directly to the /proc file system. This command can be ma
de permanent by putting it into the /etc/rc.local startup file: # echo "65536" >
/proc/sys/fs/file-max You can also use the sysctl command to change the maximum
number of file handles: # sysctl -w fs.file-max=65536 NOTE: It is also possible
to query the current usage of file handles using the following command: # cat /
proc/sys/fs/file-nr 1140 0 103062 In the above example output, here is an explan
ation of the three values from the file-nr command: Total number of allocated fi
le handles. Total number of file handles currently being used. Maximum number of
file handles that can be allocated. This is essentially the value of file-max -
(see above).
NOTE: If you need to increase the value in /proc/sys/fs/file-max, then make sure
that the ulimit is set properly. Usually for 2.4.20 it is set to unlimited. Ver
ify the ulimit setting by issuing the ulimit command: # ulimit unlimited
-------------------------------------------------------------------------------C
reate Oracle Account and Directories Now let's create the Oracle UNIX account and
all required directories: Login as the root user id. % su Create directories.
# mkdir -p /u01/app/oracle # mkdir -p /u03/app/oradata # mkdir -p /u04/app/orada
ta # mkdir -p /u05/app/oradata # mkdir -p /u06/app/oradata Create the UNIX Group
for the Oracle User Id. # groupadd -g 115 dba Create the UNIX User for the Orac
le Software. # useradd -u 173 -c "Oracle Software Owner" -d /u01/app/oracle -g "
dba" -m -s /bin/bash oracle # passwd oracle Changing password for user oracle. N
ew UNIX password: ************ BAD PASSWORD: it is based on a dictionary word Re
type new UNIX password: ************
passwd: all authentication tokens updated successfully. Change ownership of all
Oracle Directories to the Oracle UNIX User. # chown -R oracle:dba /u01 # chown -
R oracle:dba /u03 # chown -R oracle:dba /u04 # chown -R oracle:dba /u05 # chown
-R oracle:dba /u06 Oracle Environment Variable Settings NOTE: Ensure to set the
environment variable: LD_ASSUME_KERNEL=2.4.1 Failing to set the LD_ASSUME_KERNEL
parameter will cause the Oracle Universal Installer to hang!
Verify all mount points. Please keep in mind that all of the following mount poi
nts can simply be directories if you only have one hard drive. For our installat
ion, we will be using four mount points (or directories) as follows: /u01 : The
Oracle RDBMS software will be installed to /u01/app/oracle. /u03 : This mount po
int will contain the physical Oracle files: Control File 1 Online Redo Log File
- Group 1 / Member 1 Online Redo Log File - Group 2 / Member 1 Online Redo Log F
ile - Group 3 / Member 1 /u04 : This mount point will contain the physical Oracl
e files: Control File 2 Online Redo Log File - Group 1 / Member 2 Online Redo Lo
g File - Group 2 / Member 2 Online Redo Log File - Group 3 / Member 2 /u05 : Thi
s mount point will contain the physical Oracle files: Control File 3 Online Redo
Log File - Group 1 / Member 3 Online Redo Log File - Group 2 / Member 3 Online
Redo Log File - Group 3 / Member 3 /u06 : This mount point will contain the all
physical Oracle data files. This will be one large RAID 0 stripe for all Oracle
data files. All tablespaces including System, UNDO, Temporary, Data, and Index.
-------------------------------------------------------------------------------C
onfiguring the Oracle Environment After configuring the Linux operating environm
ent, it is time to setup the Oracle UNIX User ID for the installation of the Ora
cle RDBMS Software. Keep in mind that the following steps need to be performed b
y the oracle user id. Before delving into the details for configuring the Oracle
User ID, I packaged an archive of shell scripts and configuration files to assi
st
with the Oracle preparation and installation. You should download the archive "o
racle_920_installation_files_linux.tar" as the Oracle User ID and place it in hi
s HOME directory. Login as the oracle user id. % su - oracle Unpackage the conte
nts of the oracle_920_installation_files_linux.tar archive. After extracting the
archive, you will have a new directory called oracle_920_installation_files_lin
ux that contains all required files. The following set of commands describes how t
o extract the file and where to copy/extract all required files: $ id uid=173(or
acle) gid=115(dba) groups=115(dba) $ pwd /u01/app/oracle $ tar xvf oracle_920_in
stallation_files_linux.tar oracle_920_installation_files_linux/ oracle_920_insta
llation_files_linux/admin.tar oracle_920_installation_files_linux/common.tar ora
cle_920_installation_files_linux/dbora oracle_920_installation_files_linux/dbshu
t oracle_920_installation_files_linux/.bash_profile oracle_920_installation_file
s_linux/dbstart oracle_920_installation_files_linux/ldap.ora oracle_920_installa
tion_files_linux/listener.ora oracle_920_installation_files_linux/sqlnet.ora ora
cle_920_installation_files_linux/tnsnames.ora oracle_920_installation_files_linu
x/crontabORA920.txt $ cp oracle_920_installation_files_linux/.bash_profile ~/.ba
sh_profile $ tar xvf oracle_920_installation_files_linux/admin.tar $ tar xvf ora
cle_920_installation_files_linux/common.tar $ . ~/.bash_profile .bash_profile ex
ecuted $ -----------------------------------------------------------------------
--------Configuring Oracle User Shell Limits Many of the Linux shells (including
BASH) implement certain controls over certain critical resources like the numbe
r of file descriptors that can be opened and the maximum number of processes ava
ilable to a user's session. In most cases, you will not need to alter any of the
se shell limits, but if you find yourself getting errors when creating or maintaini
ng the Oracle database, you may want to read through this section. You can use t
he following command to query these shell limits: # ulimit -a
core file size (blocks, -c) 0 data seg size (kbytes, -d) unlimited file size (bl
ocks, -f) unlimited max locked memory (kbytes, -l) unlimited max memory size (kb
ytes, -m) unlimited open files (-n) 1024 pipe size (512 bytes, -p) 8 stack size
(kbytes, -s) 10240 cpu time (seconds, -t) unlimited max user processes (-u) 1638
3 virtual memory (kbytes, -v) unlimited Maximum Number of Open File Descriptors
for Shell Session Let's first talk about the maximum number of open file descrip
tors for a user's shell session. NOTE: Make sure that throughout this section, t
hat you are logged in as the oracle user account since this is the shell account
we want to test! Ok, you are first going to tell me, "But I've already altered
my Linux environment by setting the system wide kernel parameter /proc/sys/fs/fi
le-max". Yes, this is correct, but there is still a per user limit on the number
of open file descriptors. This typically defaults to 1024. To check that, use t
he following command: % su - oracle % ulimit -n 1024 If you wanted to change the
maximum number of open file descriptors for a user's shell session, you could e
dit the /etc/security/limits.conf as the root account. For your Linux system, yo
u would add the following lines: oracle soft nofile 4096 oracle hard nofile 1010
62 The first line above sets the soft limit, which is the number of files handle
s (or open files) that the Oracle user will have after logging in to the shell a
ccount. The hard limit defines the maximum number of file handles (or open files
) are possible for the user's shell account. If the oracle user account starts t
o receive error messages about running out of file handles, then the number of file
handles should be increased for the oracle user by raising the hard limit setting.
You can increase the value of
this parameter to 101062 for the current session by using the following: % ulim
it -n 101062 Keep in mind that the above command will only affect the current sh
ell session. If you were to log out and log back in, the value would be set back
to its default for that shell session. NOTE: Although you can set the soft and
hard file limits higher, it is critical to understand to never set the hard limi
t for nofile for your shell account equal to /proc/sys/fs/file-max. If you were
to do this, your shell session could use up all of the file descriptors for the
entire Linux system, which means that the entire Linux system would run out of f
ile descriptors. At this point, you would not be able to initiate any new logins
since the system would not be able to open any PAM modules, which are required
for login. Notice that I set my hard limit to 101062 and not 103062. In short, I
am leaving 2000 spare! We're not totally done yet. We still need to ensure that
pam_limits is configured in the /etc/pam.d/system-auth file. The steps defined
below should already be
performed with a normal Red Hat Linux installation, but should still be validate
d! The PAM module will read the /etc/security/limits.conf file. You should have
an entry in the /etc/pam.d/system-auth file as follows: session required /lib/se
curity/$ISA/pam_limits.so I typically validate that my /etc/pam.d/system-auth fi
le has the following two entries: session required /lib/security/$ISA/pam_limits
.so session required /lib/security/$ISA/pam_unix.so Finally, let's test our new
settings for the maximum number of open file descriptors for the oracle shell se
ssion. Logout and log back in as the oracle user account then run the following
commands. Let's first check all current soft shell limits: $ ulimit -Sa core fil
e size (blocks, -c) 0 data seg size (kbytes, -d) unlimited file size (blocks, -f
) unlimited max locked memory (kbytes, -l) unlimited max memory size (kbytes, -m
) unlimited open files (-n) 4096 pipe size (512 bytes, -p) 8 stack size (kbytes,
-s) 10240 cpu time (seconds, -t) unlimited max user processes (-u) 16383 virtua
l memory (kbytes, -v) unlimited Finally, let's check all current hard shell limi
ts: $ ulimit -Ha core file size (blocks, -c) unlimited data seg size (kbytes, -d
) unlimited file size (blocks, -f) unlimited max locked memory (kbytes, -l) unli
mited max memory size (kbytes, -m) unlimited open files (-n) 101062 pipe size (5
12 bytes, -p) 8 stack size (kbytes, -s) unlimited cpu time (seconds, -t) unlimit
ed max user processes (-u) 16383 virtual memory (kbytes, -v) unlimited The soft
limit is now set to 4096 while the hard limit is now set to 101062. NOTE: There
may be times when you cannot get access to the root user account to change the /
etc/security/limits.conf file. You can set this value in the user's login script
for the shell as follows: su - oracle cat >> ~oracle/.bash_profile << EOF ulimi
t -n 101062 EOF
NOTE: For this section, I used the BASH shell. The session values will not alway
s be the same for other shells.
Maximum Number of Processes for Shell Session This section is very similar to th
e previous section, "Maximum Number of Open File Descriptors for Shell Session"
and deals with the same concept of soft limits and hard limits as well as config
uring pam_limits. For most default Red Hat Linux installations, you will not nee
d to be concerned with the maximum number of user processes as this value is gen
erally high enough! NOTE: For this section, I used the BASH shell. The session v
alues will not always be the same for other shells. Let's start by querying the
current limit of the maximum number of processes for the oracle user: % su - ora
cle % ulimit -u 16383 If you wanted to change the soft and hard limits for the m
aximum number of processes for the oracle user, (and for that matter, all users)
, you could edit the /etc/security/limits.conf as the root account. For your Lin
ux system, you would add the following lines: oracle soft nproc 2047 oracle hard
nproc 16384 NOTE: There may be times when you cannot get access to the root use
r account to change the /etc/security/limits.conf file. You can set this value i
n the user's login script for the shell as follows: su - oracle cat >> ~oracle/.
bash_profile << EOF ulimit -u 16384 EOF
Miscellaneous Notes To check all current soft shell limits, enter the following
command: $ ulimit -Sa core file size (blocks, -c) 0 data seg size (kbytes, -d) u
nlimited file size (blocks, -f) unlimited max locked memory (kbytes, -l) unlimit
ed max memory size (kbytes, -m) unlimited open files (-n) 4096 pipe size (512 by
tes, -p) 8 stack size (kbytes, -s) 10240 cpu time (seconds, -t) unlimited max us
er processes (-u) 16383 virtual memory (kbytes, -v) unlimited To check maximum h
ard limits, enter the following command: $ ulimit -Ha core file size (blocks, -c
) unlimited data seg size (kbytes, -d) unlimited file size (blocks, -f) unlimite
d max locked memory (kbytes, -l) unlimited
max memory size (kbytes, -m) unlimited open files (-n) 101062 pipe size (512 byt
es, -p) 8 stack size (kbytes, -s) unlimited cpu time (seconds, -t) unlimited max
user processes (-u) 16383 virtual memory (kbytes, -v) unlimited The file (block
s) value should be multiplied by 512 to obtain the maximum file size imposed by
the shell. A value of unlimited is the operating system default and typically ha
s a maximum value of 1 TB. NOTE: Oracle9i Release 2 (9.2.0) includes native supp
ort for files greater than 2 GB. Check your shell to determine whether it will i
mpose a limit.
-------------------------------------------------------------------------------D
ownloading / Unpacking the Oracle9i Installation Files Most of the actions throu
ghout the rest of this document should be done as the "oracle" user account unle
ss otherwise noted. If you are not logged in as the "oracle" user account, do so
now. Download Oracle9i from Oracle's OTN Site. (If you do not currently have an
account with Oracle OTN, you will need to create one. This is a FREE account!)
http://www.oracle.com/technology/software/products/oracle9i/htdocs/linuxsoft.htm
l Download the following files to a temporary directory (i.e. /u01/app/oracle/or
ainstall: ship_9204_linux_disk1.cpio.gz (538,906,295 bytes) (cksum - 245082434)
ship_9204_linux_disk2.cpio.gz (632,756,922 bytes) (cksum - 2575824107) ship_9204
_linux_disk3.cpio.gz (296,127,243 bytes) (cksum - 96915247) Directions to extrac
t the files. Run "gunzip <filename>" on all the files. % gunzip ship_9204_linux_
disk1.cpio.gz Extract the cpio archives with the command: "cpio -idmv < <filenam
e>" % cpio -idmv < ship_9204_linux_disk1.cpio NOTE: Some browsers will uncompres
s the files but leave the extension the same (gz) when downloading. If the above
steps do not work for you, try skipping step 1 and go directly to step 2 withou
t changing the filename. % cpio -idmv < ship_9204_linux_disk1.cpio.gz
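If the normal two-step extraction works for you, a small shell loop can run it over all
three disks in one go (a sketch using the file names listed above):

% for f in ship_9204_linux_disk?.cpio.gz; do gunzip $f; done
% for f in ship_9204_linux_disk?.cpio; do cpio -idmv < $f; done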
You should now have three directories called "Disk1, Disk2 and Disk3" containing
the Oracle9i Installation files: /Disk1 /Disk2
/Disk3
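If there is any doubt about the integrity of the downloads, the files can also be checked
against the cksum values listed above, for example:

% cksum ship_9204_linux_disk1.cpio.gz

The first number printed should match the checksum given for that file (245082434 for disk1).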
-------------------------------------------------------------------------------U
pdate Red Hat Linux System - (Oracle Metalink Note: 252217.1) The following RPMs
, all of which are available on the Red Hat Fedora Core 2 CDs, will need to be u
pdated as per the steps described in Metalink Note: 252217.1 "Requirements for I
nstalling Oracle 9iR2 on RHEL3". All of these packages will need to be installed
as the root user: From Fedora Core 2 / Disk #1 # cd /mnt/cdrom/Fedora/RPMS # rp
m -Uvh libpng-1.2.2-22.i386.rpm From Fedora Core 2 / Disk #2 # cd /mnt/cdrom/Fed
ora/RPMS # rpm -Uvh gnome-libs-1.4.1.2.90-40.i386.rpm From Fedora Core 2 / Disk
#3 # cd /mnt/cdrom/Fedora/RPMS # rpm -Uvh compat-libstdc++-7.3-2.96.126.i386.rpm
# rpm -Uvh compat-libstdc++-devel-7.3-2.96.126.i386.rpm # rpm -Uvh compat-db-4.
1.25-2.1.i386.rpm # rpm -Uvh compat-gcc-7.3-2.96.126.i386.rpm # rpm -Uvh compat-
gcc-c++-7.3-2.96.126.i386.rpm # rpm -Uvh openmotif21-2.1.30-9.i386.rpm # rpm -Uv
h pdksh-5.2.14-24.i386.rpm From Fedora Core 2 / Disk #4 # cd /mnt/cdrom/Fedora/R
PMS # rpm -Uvh sysstat-5.0.1-2.i386.rpm Set gcc296 and g++296 in PATH Put gcc296
and g++296 first in $PATH variable by creating the following symbolic links: #
mv /usr/bin/gcc /usr/bin/gcc323 # mv /usr/bin/g++ /usr/bin/g++323 # ln -s /usr/b
in/gcc296 /usr/bin/gcc # ln -s /usr/bin/g++296 /usr/bin/g++ Check hostname Make
sure the hostname command returns a fully qualified host name by amending the /e
tc/hosts file if necessary: # hostname Install the 3006854 patch: The Oracle / L
inux Patch 3006854 can be downloaded here. # unzip p3006854_9204_LINUX.zip # cd
3006854 # sh rhel3_pre_install.sh ----------------------------------------------
---------------------------------Install the Oracle 9.2.0.4.0 RDBMS Software As
the "oracle" user account: Set your DISPLAY variable to a valid X Windows displa
y.
% DISPLAY=<Any X-Windows Host>:0.0 % export DISPLAY NOTE: If you forgot to set t
he DISPLAY environment variable and you get the following error: Xlib: connectio
n to ":0.0" refused by server Xlib: Client is not authorized to connect to Serve
r you will then need to execute the following command to get "runInstaller" work
ing again: % rm -rf /tmp/OraInstall If you don't do this, the Installer will han
g without giving any error messages. Also make sure that "runInstaller" has stop
ped running in the background. If not, kill it. Change directory to the Oracle i
nstallation files you downloaded and extracted. Then run: runInstaller. $ su - o
racle $ cd orainstall/Disk1 $ ./runInstaller Initializing Java Virtual Machine f
rom /tmp/OraInstall2004-05-02_08-4513PM/jre/bin/java. Please wait... Screen Name
Response Welcome Screen: Click "Next" Inventory Location: Click "OK" UNIX Group
Name: Use "dba" Root Script Window: Open another window, login as the root user
id, and run "/tmp/orainstRoot.sh". When the script has completed, return to the
dialog from the Oracle Installer and hit Continue. File Locations: Leave the "So
urce Path" at its default setting. For the Destination name, I like to use "OraH
ome920". You can leave the Destination path at it's default value which should b
e "/u01/app/oracle/product/9.2.0". Available Products: Select "Oracle9i Database
9.2.0.4.0" and click "Next" Installation Types: Select "Enterprise Edition (2.8
4GB)" and click "Next" Database Configuration: Select "Software Only" and click
"Next" Summary: Click "Install" Running root.sh script. When the "Link" phase is
complete, you will be prompted to run the $ORACLE_HOME/root.sh script as the "r
oot" user account. Shutdown any started Oracle processes The Oracle Universal In
staller will succeed in starting some Oracle programs, in particular the Oracle
HTTP Server (Apache), the Oracle Intelligent Agent, and possibly the Oracle TNS L
istener. Make sure all programs are shutdown before attempting to continue in in
stalling the Oracle 9.2.0.5.0 patchset: % $ORACLE_HOME/Apache/Apache/bin/apachec
tl stop
% agentctl stop % lsnrctl stop -------------------------------------------------
------------------------------Install the Oracle 9.2.0.5.0 Patchset Once you hav
e completed installing of the Oracle9i (9.2.0.4.0) RDBMS software, you should no
w apply the 9.2.0.5.0 patchset. NOTE: The details and instructions for applying
the 9.2.0.5.0 patchset in this article is not absolutely necessary. I provide it
here simply as a convenience for those who do want to apply the latest patchset
. The 9.2.0.5.0 patchset can be downloaded from Oracle Metalink: Patch Number: 3
501955 Description: ORACLE 9i DATABASE SERVER RELEASE 2 - PATCH SET 4 VERSION 9.
2.0.5.0 Product: Oracle Database Family Release: Oracle 9.2.0.5 Select a Platfor
m or Language: Linux x86 Last Updated: 26-MAR-2004 Size: 313M (328923077 bytes)
Use the following steps to install the Oracle10g Universal Installer and then th
e Oracle 9.2.0.5.0 patchset. To start, let's unpack the Oracle 9.2.0.5.0 to a te
mporary directory: % cd orapatch % unzip p3501955_9205_LINUX.zip % cpio -idmv <
9205_lnx32_release.cpio Next, we need to install the Oracle10g Universal Install
er into the same $ORACLE_HOME we used to install the Oracle9i RDBMS software. NO
TE: Using the old Universal Installer that was used to install the Oracle9i RDBM
S software, (OUI release 2.2), cannot be used to install the 9.2.0.5.0 patchset
and higher!
Starting with the Oracle 9.2.0.5.0 patchset, Oracle requires the use of the Orac
le10g Universal Installer to apply the 9.2.0.5.0 patchset and to perform all sub
sequent maintenance operations on the Oracle software $ORACLE_HOME. Let's get th
is thing started by installing the Oracle10g Universal Installer. This must be d
one by running the runInstaller that is included with the 9.2.0.5.0 patchset we
extracted in the above step: % cd orapatch/Disk1 % ./runInstaller -ignoreSysPrer
eqs Starting Oracle Universal Installer... Checking installer requirements...
Checking operating system version: must be redhat-2.1, UnitedLinux-1.0, redhat-3
, SuSE-7 or SuSE-8 Failed <<<< >>> Ignoring required pre-requisite failures. Con
tinuing... Preparing to launch Oracle Universal Installer from /tmp/OraInstall20
04-08-30_0748-15PM. Please wait ... Oracle Universal Installer, Version 10.1.0.2
.0 Production Copyright (C) 1999, 2004, Oracle. All rights reserved. Use the fol
lowing options in the Oracle Universal Installer to install the Oracle10g OUI: S
creen Name Response Welcome Screen: Click "Next" File Locations: The "Source Pat
h" should be pointing to the products.xml file by default. For the Destination n
ame, choose the same one you created when installing the Oracle9i software. The
name we used in this article was "OraHome920" and the destination path should be
"/u01/app/oracle/product/9.2.0". Select a Product to Install: Select "Oracle Un
iversal Installer 10.1.0.2.0" and click "Next" Summary: Click "Install"
Exit from the Oracle Universal Installer. Correct the runInstaller symbolic link
bug. (Bug 3560961) After the installation of Oracle10g Universal Installer, the
re is a bug that does NOT update the $ORACLE_HOME/bin/runInstaller symbolic link
to point to the new 10g installation location. Since the symbolic link does not
get updated, the runInstaller command still points to the old installer (2.2) a
nd will be run instead of the new 10g installer. To correct this, you will need
to manually update the $ORACLE_HOME/bin/runInstaller symbolic link: % cd $ORACLE
_HOME/bin % ln -s -f $ORACLE_HOME/oui/bin/runInstaller.sh runInstaller We now in
stall the Oracle 9.2.0.5.0 patchset by executing the newly installed 10g Univers
al Installer: % cd % runInstaller -ignoreSysPrereqs Starting Oracle Universal In
staller... Checking installer requirements... Checking operating system version:
must be redhat-2.1, UnitedLinux-1.0, redhat-3, SuSE-7 or SuSE-8 Failed <<<< >>>
Ignoring required pre-requisite failures. Continuing...
Preparing to launch Oracle Universal Installer from /tmp/OraInstall2004-08-30_07
59-30PM. Please wait ... Oracle Universal Installer, Version 10.1.0.2.0 Producti
on Copyright (C) 1999, 2004, Oracle. All rights reserved. Here is an overview of
the selections I made while performing the 9.2.0.5.0 patchset install: Screen N
ame Response Welcome Screen: Click "Next" File Locations: The "Source Path" shou
ld be pointing to the products.xml file by default. For the Destination name, ch
oose the same one you created when installing the Oracle9i software. The name we
used in this article was "OraHome920" and the destination path should be "/u01/
app/oracle/product/9.2.0". Select a Product to Install: Select "Oracle 9iR2 Patc
hsets 9.2.0.5.0" and click "Next" Summary: Click "Install"
Running root.sh script. When the Link phase is complete, you will be prompted to
run the $ORACLE_HOME/root.sh script as the "root" user account. Go ahead and ru
n the root.sh script. Exit Universal Installer Exit from the Universal Installer
and continue on to the Post Installation section of this article.
-------------------------------------------------------------------------------P
ost Installation Steps After applying the Oracle 9.2.0.5.0 patchset, we should p
erform several miscellaneous tasks like configuring the Oracle Networking files
and setting up startup and shutdown scripts for then the machine is cycled. Conf
iguring Oracle Networking Files: I already included sample configuration files (
contained in the oracle_920_installation_files_linux.tar file) that can be simpl
y copied to their proper location and started. Change to the oracle HOME directo
ry and copy the files as follows:

% cd
% cd oracle_920_installation_files_linux
% cp ldap.ora $ORACLE_HOME/network/admin/
% cp tnsnames.ora $ORACLE_HOME/network/admin/
% cp sqlnet.ora $ORACLE_HOME/network/admin/
% cp listener.ora $ORACLE_HOME/network/admin/
% cd
% lsnrctl start

Update /etc/oratab:
The dbora script (below) relies on an entry in the /etc/oratab. Perform the foll
owing actions as the oracle user account: % echo "ORA920:/u01/app/oracle/product
/9.2.0:Y" >> /etc/oratab Configuring Startup / Shutdown Scripts: Also included i
n the oracle_920_installation_files_linux.tar file is a script called dbora. Thi
s script can be used by the init process to startup and shutdown the database wh
en the machine is cycled. The following tasks will need to be performed by the r
oot user account: % su # cp /u01/app/oracle/oracle_920_installation_files_linux/
dbora /etc/init.d
# chmod 755 /etc/init.d/dbora
# ln -s /etc/init.d/dbora /etc/rc3.d/S99dbora
# ln -s /etc/init.d/dbora /etc/rc4.d/S99dbora
# ln -s /etc/init.d/dbora /etc/rc5.d/S99dbora
# ln -s /etc/init.d/dbora /etc/rc0.d/K10dbora
# ln -s /etc/init.d/dbora /etc/rc6.d/K10dbora
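For reference, a minimal dbora script along these lines (only a sketch; the actual script
shipped in oracle_920_installation_files_linux.tar may differ) typically just wraps the
standard dbstart and dbshut utilities, which act on the instances listed in /etc/oratab:

#!/bin/bash
# dbora (sketch): start/stop Oracle and the listener as the oracle user
ORA_OWNER=oracle
ORA_HOME=/u01/app/oracle/product/9.2.0

case "$1" in
  start)
    su - $ORA_OWNER -c "$ORA_HOME/bin/dbstart"         # dbstart reads /etc/oratab
    su - $ORA_OWNER -c "$ORA_HOME/bin/lsnrctl start"
    ;;
  stop)
    su - $ORA_OWNER -c "$ORA_HOME/bin/lsnrctl stop"
    su - $ORA_OWNER -c "$ORA_HOME/bin/dbshut"          # dbshut reads /etc/oratab
    ;;
esac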
-------------------------------------------------------------------------------C
reating the Oracle Database Finally, let's create an Oracle9i database. This can
be done using scripts that I already included with the oracle_920_installation_
files_linux.tar download. The scripts are included in the ~oracle/admin/ORA920/c
reate directory. To create the database, perform the following steps: % su - ora
cle % cd admin/ORA920/create % ./RUN_CRDB.sh After starting the RUN_CRDB.sh, the
re will be no screen activity until the database creation is complete. You can,
however, bring up a new console window to the Linux database server as the oracle
user account, navigate to the same directory you started the database creation
from, and tail the crdb.log log file. $ telnet linux3 ... Fedora Core release 2
(Tettnang) Kernel 2.6.5-1.358 on an i686 login: oracle Password: xxxxxx .bash_pr
ofile executed [oracle@linux3 oracle]$ cd admin/ORA920/create [oracle@linux3 cre
ate]$ tail -f crdb.log
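Once the creation script finishes, a quick sanity check from SQL*Plus (a sketch; any 9i
dictionary view will do) confirms that the new ORA920 instance is open:

$ sqlplus /nolog
SQL> connect / as sysdba
SQL> select instance_name, status from v$instance;
SQL> select name, log_mode from v$database;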
===================================== 8. Install Oracle 9.2.0.2 on OpenVMS: ====
================================= VMS: ====
Using OUI to install Oracle9i Release 2 on an OpenVMS System We have a PC runnin
g Xcursion and a 16 Processor GS1280 with the 2 built-in disks In the examples w
e booted on disk DKA0: Oracle account is on disk DKA100. Oracle and the database
will be installed on DKA100. Install disk MUST be ODS-5. Installation uses the
9.2 downloaded from the Oracle website. It comes in a Java JAR file. Oracle ship
s a JRE with its product. However, you will have to install Java on OpenVMS so y
ou can unpack the 9.2 JAR file that comes from the Oracle website Unpack the JAR
file as described on the Oracle website. This will create two .BCK files. Follo
w the instructions in the VMS_9202_README.txt file on how to restore the 2 backu
p save sets. When the two backup save sets files are restored, you should end up
with two directories: [disk1] directory [disk2] directory These directories wil
l be in the root of a disk. In this example they are in the root of DKA100. The
OUI requires X-Windows. If the Alpha system you are using does not have a graphi
c head, use a PC with an X-Windows terminal such as Xcursion. During this instal
l we discovered a problem: Instructions tell you to run @DKA100:[disk1]runinstal
ler. This will not work because the RUNINSTALLER.COM file is not in the root of
DKA100:[disk1]. You must first copy RUNINSTALLER.COM from the dka100:[disk1.0000
00] directory into dka100:[disk1]: $ Copy dka100:[disk1.000000]runinstaller.com
dka100:[disk1] From a terminal window execute: @DKA100:[disk1]runinstaller - Ora
cle Installer starts Start the installation Click Next to start the installation
. - Assign name and directory structure for the Oracle Home ORACLE_HOME Assign a
name for your Oracle home. Assign the directory structure for the home, for exa
mple Ora_home
Dka100:[oracle.oracle9] This is where the OUI will install Oracle. The OUI will
create the directories as necessary - Select product to install Select Database.
Click Next. - Select type of installation Select Enterprise Edition (or Standar
d Edition or Custom). Click Next. - Enable RAC Select No. Click Next. - Database
summary View list of products that will be installed. Click Install. - Installa
tion begins Installation takes from 45 minutes to an hour. Installation ends Cli
ck Exit. Oracle is now installed in DKA100:[oracle.oracle9]. To create the first
database, you must first set up Oracle logicals. To do this use a terminal and
execute @[.oracle9]orauser . The tool to create and manage databases is DBCA. On
the terminal, type DBCA to launch the Database Assistant. Welcome to Database C
onfiguration Assistant DBCA starts. Click Next. Select an operation Select Creat
e a Database. Click Next. Select a template Select New Database. Click Next. Ent
er database name and SID Enter the name of the database and Oracle System Identi
fier (SID): In this example, the database name is DB9I. The SID is DB9I1. Click
Next. Select database features Select which demo databases are installed. In the
example, we selected all possible databases. Click Next. Select default node Se
lect the node in which you want your database to operate by default. In the exam
ple, we selected Shared Server Mode. Click Next. Select memory In the example, w
e selected the default. Click Next. Specify database storage parameters Select t
he device and directory. Use the UNIX device syntax.
For example, DKA100:[oracle.oracle9.database] would be: /DKA100/oracle/oracle9/d
atabase/ In the example, we kept the default settings. Click Next. Select databa
se creation options Creating a template saves time when creating a database. Cli
ck Finish. Create a template Click OK. Creating and starting Oracle Instance The
database builds. If it completes successfully, click Exit. If it does not compl
ete successfully, build it again. Running the database Enter show system to see the Or
acle database up and running. Set up some files to start and stop the database.
Example of a start file: This command sets the logicals to manage the database:
$ @dka100:[oracle.oracle9]orauser db9i1
The next line starts the Listener (needed for client connects). The final lines start
the database (a sketch of such a start file, and of a matching stop file, is given at
the end of this section).
Stop database example: Example of how to stop the database.
Test database server: Use the Enterprise Ma
nager console to test the database server. Oracle Enterprise Manager Enter addre
ss of server and SID. Name the server. Click OK. Databases connect information S
elect database. Enter system account and password. Change connection box to AS SYSD
BA. Click OK. Open database Database is opened and exposed. Listener Listener autom
atically picks up the SID from the database. Start Listener before database and
the SID will display in the Listener. If you start the database before the Liste
ner, the SID may not appear immediately. To see if the SID is registered in the
Listener, enter: $lsnrctl stat Alter a user User is altered: SQL> alter user oe
identified by oe account unlock; SQL> exit Preferred method is to use the Enterp
rise Manager Console.
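As referred to under "Example of a start file" above, the start and stop command files are
not reproduced in these notes. A rough sketch of the steps involved (run interactively or
wrapped in a command procedure; device and directory names must be adapted to your system):

Start (sketch):
$ @dka100:[oracle.oracle9]orauser db9i1
$ lsnrctl start
$ sqlplus /nolog
SQL> connect / as sysdba
SQL> startup
SQL> exit

Stop (sketch):
$ @dka100:[oracle.oracle9]orauser db9i1
$ sqlplus /nolog
SQL> connect / as sysdba
SQL> shutdown immediate
SQL> exit
$ lsnrctl stop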
================================================== 9. Installation of Oracle 9i
on AIX and other UNIX ================================================== AIX: ==
== 9.1 Installation of Oracle 9i on AIX Doc ID: Note:201019.1 Content Type: TEXT
/PLAIN Subject: AIX: Quick Start Guide - 9.2.0 RDBMS Installation Creation Date: 25-JUN-2002 Ty
pe: REFERENCE Last Revision Date: 14-APR-2004 Status: PUBLISHED Quick Start Guid
e Oracle9i Release 2 (9.2.0) RDBMS Installation AIX Operating System Purpose ===
==== This document is designed to be a quick reference that can be used when ins
talling Oracle9i Release 2 (9.2.0) on an AIX platform. It is NOT designed to rep
lace the Installation Guide or other documentation. A familiarity with the AIX O
perating System is assumed. If more detailed information is needed, please see t
he Appendix at the bottom of this document for additional resources. Each step s
hould be done in the order that it is listed. These steps are the bare minimum t
hat is necessary for a typical install of the Oracle9i RDBMS. Verify OS version
is certified with the RDBMS version ============================================
========== The following steps are required to verify your version of the AIX op
erating system is certified with the version of the RDBMS (Oracle9i Release 2 (9
.2.0)):

1. Point your web browser to http://metalink.oracle.com.
2. Click the "Certify & Availability" button near the left.
3. Click the "Certifications" button near the top middle.
4. Click the "View Certifications by Platform" link.
5. Select "IBM RS/6000 AIX" and click "Submit".
6. Select Product Group "Oracle Server" and click "Submit".
7. Select Product "Oracle Server - Enterprise Edition" and click "Submit".
8. Read any general notes at the top of the page.
9. Select "9.2 (9i) 64-bit" and click "Submit".
The "Status" column displays the certification status. The links in the "Addt'l
Info" and "Install Issue" columns may contain additional information relevant to
a given version. Note that if patches are listed under one of these links, your
installation is not considered certified unless you apply them. The "Addt'l Inf
o" link also contains information about available patchsets. Installation of pat
chsets is not required to be considered certified, but they are highly recommend
ed.
Pre-Installation Steps for the System Administrator
====================================================
The following steps are required to verify your operating system meets minimum
requirements for installation, and should be performed by the root user. For assistance
with system administration issues, please contact your system administrator or operating
system vendor. Use these steps to manually check the operating system requirements before
attempting to install Oracle RDBMS software, or you may choose to use the convenient
"Unix InstallPrep script" which automates these checks for you. For more information
about the script, including download information, please review the following article:
Note:189256.1 UNIX: Script to Verify Installation Requirements for Oracle 9.x version of RDBMS

The InstallPrep script currently does not check requirements for AIX5L systems.
The Following Steps Need to be Performed by the Root User:

1. Configure Operating System Resources:
   Ensure that the system has at least the following resources:
   - 400 MB in /tmp *
   - 256 MB of physical RAM memory
   - Two times the amount of physical RAM memory for Swap/Paging space
     (On systems with more than 2 GB of physical RAM memory, the requirements for
     Swap/Paging space can be lowered, but Swap/Paging space should never be less
     than physical RAM memory.)
   * You may also redirect /tmp by setting the TEMP environment variable. This is only
     recommended in rare circumstances where /tmp cannot be expanded to meet free space
     requirements.
   (A command-line sketch to verify these resources is shown after step 8 below.)

2. Create an Oracle Software Owner and Group:
   Create an AIX user and group that will own the Oracle software. (user = oracle, group = dba)
   - Use the "smit security" command to create a new group and user.
   Please ensure that the user and group you use are defined in the local /etc/passwd (user)
   and /etc/group (group) files rather than resolved via a network service such as NIS.

3. Create a Softwar
e Mount Point and Datafile Mount Points: Create a mount point for the Oracle sof
tware installation. (at least 3.5 GB, typically /u01) Create a second, third, an
d fourth mount point for the database files. (typically /u02, /u03, and /u04) Us
e of multiple mount points is not required, but is highly recommended for best p
erformance and ease of
recoverability. 4. Ensure that Asynchronous Input Output (AIO) is "Available": U
se the following command to check the current AIO status: # lsdev -Cc aio Verify
that the status shown is "Available". If the status shown is "Defined", then ch
ange the "STATE to be configured at system restart" to "Available" after running
the following command: # smit chaio 5. Ensure that the math library is installe
d on your system: Use the following command to determine if the math library is
installed: # lslpp -l bos.adt.libm If this fileset is not installed and "COMMITT
ED", then you must install it from the AIX operating system CD-ROM from IBM. Wit
h the correct CD-ROM mounted, run the following command to begin the process to
load the required bos.adt.libm fileset: # smit install_latest AIX5L systems also
require the following filesets: # lslpp -l bos.perf.perfstat # lslpp -l bos.per
f.libperfstat 6. Download and install JDK 1.3.1 from IBM. At the time this artic
le was created, the JDK could be downloaded from the following URL: http://www.i
bm.com/developerworks/java/jdk/aix/index.html Please contact IBM Support if you
need assistance downloading or installing the JDK. 7. Mount the Oracle CD-ROM: M
ount the Oracle9i Release 2 (9.2.0) CD-ROM using the command: # mount -rv cdrfs
/dev/cd0 /cdrom 8. Run the rootpre.sh script: NOTE: You must shutdown ALL Oracle
database instances (if any) before running the rootpre.sh script. Do not run th
e rootpre.sh script if you have a newer version of an Oracle database already in
stalled on this system. Use the following command to run the rootpre.sh script:
# /cdrom/rootpre.sh
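The pre-installation checks in steps 1, 4 and 5 above can also be verified quickly from
the command line. A minimal sketch (standard AIX commands; the thresholds are the ones
listed in the steps above):

# --- sketch: verify OS resources before installing (run as root on AIX) ---
df -k /tmp                                        # free space in /tmp (need ~400 MB)
lsattr -El sys0 -a realmem                        # physical RAM in KB (need >= 256 MB)
lsps -a                                           # paging space (roughly 2x RAM, see step 1)
lsdev -Cc aio                                     # AIO should show "Available" (step 4)
lslpp -l bos.adt.libm                             # math library fileset, must be COMMITTED (step 5)
lslpp -l bos.perf.perfstat bos.perf.libperfstat   # additional AIX5L filesets (step 5)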
Installation Steps for the Oracle User
=======================================
The Following Steps Need to be Performed by the Oracle User:

1. Set Environment Variables
   Environment variables should be set in the login script for the oracle user. If the
   oracle user's default shell is the C-shell (/usr/bin/csh), then the login script will
   be named ".login". If the oracle user's default shell is the Bourne-shell (/usr/bin/bsh)
   or the Korn-shell (/usr/bin/sh or /usr/bin/ksh), then the login script will be named
   ".profile". In either case, the login script will be located in the oracle user's home
   directory ($HOME). The examples below assume that your software mount point is /u01.

   Parameter      Value
   ----------     -----------------------------
   ORACLE_HOME    /u01/app/oracle/product/9.2.0
   PATH           /u01/app/oracle/product/9.2.0/bin:/usr/ccs/bin:/usr/bin/X11:
                  (followed by any other directories you wish to include)
   ORACLE_SID     Set this to what you will call your database instance.
                  (typically 4 characters in length)
   DISPLAY        <ip-address>:0.0 (review Note:153960.1 for detailed information)

2. Set the umask:
   Set the oracle user's umask to "022" in your ".profile" or ".login" file.
   Example: umask 022

3. Verify the Environment
   Log off and log on as the oracle user to ensure all environment variables are set
   correctly. Use the following command to view them:
   % env | more

   Before attempting to run the Oracle Universal Installer (OUI), verify that you can
   successfully run the following command:
   % /usr/bin/X11/xclock

   If this does not display a clock on your display screen, please review the following
   article: Note:153960.1 FAQ: X Server testing and troubleshooting
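As an illustration of steps 1 and 2 above, a minimal fragment for the oracle user's
.profile could look as follows (a sketch only; the mount point, SID and display address
are assumptions, adjust them to your own environment):

# --- sketch: $HOME/.profile additions for the oracle user (Bourne/Korn shell) ---
ORACLE_HOME=/u01/app/oracle/product/9.2.0; export ORACLE_HOME
PATH=$ORACLE_HOME/bin:/usr/ccs/bin:/usr/bin/X11:$PATH; export PATH
ORACLE_SID=TEST; export ORACLE_SID           # your own instance name, typically 4 characters
DISPLAY=<ip-address>:0.0; export DISPLAY     # X display used by the installer
umask 022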
4. Start the Oracle Universal Installer and install the RDBMS software:
Use the following commands to start the installer:
% cd /tmp
% /cdrom/runInstaller

Respond to the installer prompts as shown below:
- When prompted for whether rootpre.sh has been run by root, enter "y". This should
  have been done in Pre-Installation step 8 above.
- At the "Welcome Screen", click Next.
- If prompted, enter the directory to use for the "Inventory Location". This can be any
  directory, but is usually not under ORACLE_HOME because the oraInventory is shared
  with all Oracle products on the system.
- If prompted, enter the "UNIX Group Name" for the oracle user (dba).
- At the "File Locations Screen", verify the Destination listed is your ORACLE_HOME
  directory. Also enter a NAME to identify this ORACLE_HOME. The NAME can be anything,
  but is typically "DataServer" and the first three digits of the version.
  For example: "DataServer920"
- At the "Available Products Screen", choose Oracle9i Database, then click Next.
- At the "Installation Types Screen", choose Enterprise Edition, then click Next.
- If prompted, click Next at the "Component Locations Screen" to accept the default
  directories.
- At the "Database Configuration Screen", choose the configuration based on how you
  plan to use the database, then click Next.
- If prompted, click Next at the "Privileged Operating System Groups Screen" to accept
  the default values (your current OS primary group).
- If prompted, enter the Global Database Name in the format "ORACLE_SID.hostname" at
  the "Database Identification Screen". For example: "TEST.AIXhost". The SID entry
  should be filled in with the value of ORACLE_SID. Click Next.
- If prompted, enter the directory where you would like to put datafiles at the
  "Database File Location Screen". Click Next.
- If prompted, select "Use the default character set" (WE8ISO8859P1) at the
  "Database Character Set Screen". Click Next.
- At the "Choose JDK Home Directory", enter the directory where you have previously
  installed the JDK 1.3.1 from IBM. This should have been done in Pre-Installation
  step 6 above.
- At the "Summary Screen", review your choices, then click Install.

The install will begin. Follow instructions regarding running "root.sh" and any other
prompts. When completed, the install will have created a default database, configured
a Listener, and started both for you.

Note: If you are having problems changing CD-ROMs when prompted to do so, please review
the following article: Note:146566.1 How to Unmount / Eject First Cdrom
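Once the installer has finished, a quick sanity check of the Listener and the default
database could look like this (a sketch; the output will differ per system):

% lsnrctl status                    # the new SID should be registered
% sqlplus /nolog
SQL> connect / as sysdba
SQL> select name, created, log_mode from v$database;
SQL> select banner from v$version;
SQL> exit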
Your Oracle9i Release 2 (9.2.0) RDBMS installation is now complete and ready for
use. Appendix A ========== Documentation is available from the following resour
ces: Oracle9i Release 2 (9.2.0) CD-ROM Disk1 -----------------------------------
----Mount the CD-ROM, then use a web browser to open the file "index.htm" locate
d at the top level directory of the CD-ROM. On this CD-ROM you will find the Ins
tallation Guide, Administrator's Reference, and other useful documentation. Orac
le Documentation Center --------------------------Point your web browser to the
following URL: http://otn.oracle.com/documentation/content.html Select the highe
st version CD-pack displayed to ensure you get the most up-to-date information.
Unattended install: ------------------Note 1: ------This note describes how to s
tart the unattended install of patch 9.2.0.5 on AIX 5L, which can be applied to
9.2.0.2, 9.2.0.3, 9.2.0.4 Shut down the existing Oracle server instance with nor
mal or immediate priority. For example, shutdown all instances (cleanly) if runn
ing Parallel Server. Stop all listener, agent and other processes running in or
against the ORACLE_HOME that will have the patch set installation. Run slibclean
(/usr/sbin/slibclean) as root to remove any currently unused modules in kernel
and library memory. To perform a silent installation requiring no user intervent
ion:
Copy the response file template provided in the response directory where you unp
acked the patch set tar file. Edit the values for all fields labeled as <Value R
equired> according to the comments and examples in the template. Start the Oracl
e Universal Installer from the directory described in Step 4 which applies to yo
ur situation. You should pass the full path of the response file template you ha
ve edited locally as the last argument with your own value of ORACLE_HOME and FR
OM_LOCATION. The following is an example of the command: % ./runInstaller -silen
t -responseFile full_path_to_your_response_file Run the $ORACLE_HOME/root.sh scr
ipt from a root session. If you are applying the patch set in a cluster database
environment, the root.sh script should be run in the same way on both the local
node and all participating nodes. Note 2: ------In order to make an unattended
install of 9.2.0.1 on Win2K: Running Oracle Universal Installer and Specifying a
Response File To run Oracle Universal Installer and specify the response file:
Go to the MS-DOS command prompt. Go to the directory where Oracle Universal Inst
aller is installed. Run the appropriate response file. For example, C:\program f
iles\oracle\oui\install> setup.exe -silent -nowelcome -responseFile filename

Where...      Description
filename      Identifies the full path of the specific response file.
-silent       Runs Oracle Universal Installer in complete silent mode. The Welcome window
              is suppressed automatically. This parameter is optional. If you use -silent,
              -nowelcome is not necessary.
-nowelcome    Suppresses the Welcome window that appears during installation. This
              parameter is optional.

Note 3:
-------
Unattended install of 9.2.0.5 on Win2K: To perform a silent installation requiri
ng no user intervention: Make a copy of the response file template provided in t
he response directory where you unzipped the patch set file. Edit the values for
all fields labeled as <Value Required> according to the comments and examples i
n the template. Start Oracle Universal Installer release 10.1.0.2 located in the
unzipped area of the patch set. For example, Disk1\setup.exe. You should pass t
he full path of the response file template you have edited locally as the last a
rgument with your own value of ORACLE_HOME and FROM_LOCATION. The syntax is as f
ollows: setup.exe -silent -responseFile ORACLE_BASE\ORACLE_HOME\response_file_pa
th
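Tying the AIX note above together, an unattended patch set installation might be
scripted roughly as follows (a sketch; the staging directory and response file name are
assumptions, and every <Value Required> field must still be edited by hand):

# as root, after stopping all instances, listeners and agents using this ORACLE_HOME:
/usr/sbin/slibclean
# as the oracle user:
cp /stage/9205/Disk1/response/patchset.rsp /tmp/patchset.rsp
vi /tmp/patchset.rsp             # set ORACLE_HOME, FROM_LOCATION and other required values
./runInstaller -silent -responseFile /tmp/patchset.rsp
# as root, when the installer has finished:
$ORACLE_HOME/root.sh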
=============================== 9.2 Oracle and UNIX and other OS: ==============
================= You have the following options for creating your new Oracle da
tabase: - Use the Database Configuration Assistant (DBCA). DBCA can be launched
by the Oracle Universal Installer, depending upon the type of install that you s
elect, and provides a graphical user interface (GUI) that guides you through the
creation of a database. You can choose not to use DBCA, or you can launch it as
a standalone tool at any time in the future to create a database. Run DBCA as:
% dbca
- Create the database manually from a script. If you already have existing
scripts for creating your database, you can still create your database manually.
However, consider editing your existing script to take advantage of new Oracle
features. Oracle provides a sample database creation script and a sample initial
ization parameter file with the database software files it distributes, both of
which can be edited to suit your needs. - Upgrade an existing database. In all c
ases, the Oracle software needs to be installed on your host machine.
9.1.1 Operating system dependencies:
------------------------------------
First, determine for this version of Oracle what OS settings must be made, and if any
patches must be installed. For example, on Linux, glibc 2.1.3 is needed with Oracle
version 8.1.7. Linux can be quite critical with respect to libraries in combination
with Oracle. You may also need to adjust shmmax (max size of a shared memory segment)
and similar kernel parameters:

# sysctl -w kernel.shmmax=100000000
# echo "kernel.shmmax = 100000000" >> /etc/sysctl.conf

Note: The text below is generic, but is also derived from an Oracle 8.1.7 installation
on Linux RedHat 6.2. If you do the 8.1.7 installation, the Java JDK 1.1.8 is also needed.
It can be downloaded from www.blackdown.org.
Download jdk-1.1.8_v3 (jdk118_v3-glibc-2.1.3.tar.bz2) into /usr/local:

tar xvjf jdk118_v3-glibc-2.1.3.tar.bz2
ln -s /usr/local/jdk118_v3 /usr/local/java

9.1.2 Environment variables:
----------------------------
Make sure you have the following environment variables set:

ON UNIX:
========
Example 1:
----------
ORACLE_BASE=/u01/app/oracle; export ORACLE_BASE
    (root of the Oracle software)
ORACLE_HOME=$ORACLE_BASE/product/8.1.5; export ORACLE_HOME
    (determines the directory containing the instance software)
ORACLE_SID=brdb; export ORACLE_SID
    (determines the name of the current instance)
ORACLE_TERM=xterm, vt100, ansi or something else; export ORACLE_TERM
ORA_NLSxx=$ORACLE_HOME/ocommon/nls/admin/data; export ORA_NLS
    (determines the NLS directory for data files in multiple languages)
NLS_LANG="Dutch_The Netherlands.WE8ISO8859P1"; export NLS_LANG
    (this specifies the language, territory and character set for the client applications)
LD_LIBRARY_PATH=/u01/app/oracle/product/8.1.7/lib; export LD_LIBRARY_PATH
PATH=$ORACLE_HOME/bin:/bin:/usr/bin:/usr/sbin:/bin; export PATH

Place these variables in the oracle user's profile file: .profile, or .bash_profile etc..
Example 2:
----------
/dbs01                            Db directory 1
/dbs01/app                        directory
/dbs01/app/oracle                 Oracle base directory     Constant $ORACLE_BASE
/dbs01/app/oracle/admin           Oracle admin directory    $ORACLE_ADMIN
/dbs01/app/oracle/product         directory
/dbs01/app/oracle/product/817     Oracle home directory     Constant $ORACLE_HOME
# LISTENER.ORA Network Configuration File: /dbs01/app/oracle/product/817/network
/admin/listener.ora # TNSNAMES.ORA Network Configuration File: /dbs01/app/oracle
/product/817/network/admin/tnsnames.ora

Example 3:
----------
/dbs01/app/oracle        Oracle software
/dbs02/oradata           database files
/dbs03/oradata           database files
..                       ..
/var/opt/oracle          network files
/opt/oracle/admin/bin

Example 4:
----------
Mount point   Device             Size (MB)   Purpose
/             /dev/md/dsk/d1     100         Unix root file system
/usr          /dev/md/dsk/d3     1200        Unix usr file system
/var          /dev/md/dsk/d4     200         Unix var file system
/home         /dev/md/dsk/d5     200         Unix opt file system
/opt          /dev/md/dsk/d6     4700        Oracle_Home
/u01          /dev/md/dsk/d7     8700        Oracle datafiles
/u02          /dev/md/dsk/d8     8700        Oracle datafiles
/u03          /dev/md/dsk/d9     8700        Oracle datafiles
/u04          /dev/md/dsk/d10    8700        Oracle datafiles
/u05          /dev/md/dsk/d110   8700        Oracle datafiles
/u06          /dev/md/dsk/d120   8700        Oracle datafiles
/u07          /dev/md/dsk/d123   8650        Oracle datafiles

Example 5:
----------
initBENE.ora    /opt/oracle/product/8.0.6/dbs
tnsnames.ora    /opt/oracle/product/8.0.6/network/admin
listener.ora    /opt/oracle/product/8.0.6/network/admin
alert log       /var/opt/oracle/bene/bdump
oratab          /var/opt/oracle
Example 6:
----------
ORACLE_BASE    /u01/app/oracle
ORACLE_HOME    $ORACLE_BASE/product/10.1.0/db_1
ORACLE_PATH    /u01/app/oracle/product/10.1.0/db_1/bin:.
               Note: The period adds the current working directory to the search path.
ORACLE_SID     SAL1
ORAENV_ASK     NO
SQLPATH        /home:/home/oracle:/u01/oracle
TNS_ADMIN      $ORACLE_HOME/network/admin
TWO_TASK       Function: Specifies the default connect identifier to use in the connect
               string. If this environment variable is set, you do not need to specify
               the connect identifier in the connect string. For example, if the TWO_TASK
               environment variable is set to sales, you can connect to a database using
               the CONNECT username/password command rather than the
               CONNECT username/password@sales command.
               Syntax: Any connect identifier.
               Example: PRODDB_TCP

To identify the SID and Oracle home directory for the instance that you want to shut down,
enter the following command:
Solaris:                    $ cat /var/opt/oracle/oratab
Other operating systems:    $ cat /etc/oratab

ON NT/2000:
===========
SET ORACLE_BASE=G:\ORACLE
SET ORACLE_HOME=G:\ORACLE\ORA81
SET ORACLE_SID=AIRM
SET ORA_NLSxxx=G:\ORACLE\ORA81\ocommon\nls\admin\data
SET NLS_LANG=AMERICAN_AMERICA.WE8ISO8859P1
ON OpenVMS: =========== When Oracle is installed on VMS, a root directory is cho
sen which is pointed to by the logical name ORA_ROOT. This directory can be plac
ed anywhere on the VMS system. The majority of code, configuration files and com
mand procedures are found below this root directory. When a new database is crea
ted a new directory is created in the root directory to store database specific
configuration files. This directory is called [.DB_dbname].
This directory will normally hold the system tablespace data file as well as the
database specific startup, shutdown and orauser files. The Oracle environment f
or a VMS user is set up by running the appropriate ORAUSER_dbname.COM file. This
sets up the necessary command symbols and logical names to access the various O
RACLE utilities. Each database created on a VMS system will have an ORAUSER file
in its home directory and will be named ORAUSER_dbname.COM, e.g. for a databas
e SALES the file specification could be: ORA_ROOT:[DB_SALES]ORAUSER_SALES.COM To
have the environment set up automatically on login, run this command file in yo
ur login.com file. To access SQLPLUS use the following command with a valid user
name and password: $ SQLPLUS username/password SQLDBA is also available on VMS a
nd can be invoked similarly:
$ SQLDBA username/password

9.1.3 OFA directory structure:
------------------------------
Stick to OFA. An example for database PROD:

/opt/oracle/product/8.1.6
/opt/oracle/product/8.1.6/admin/PROD
/opt/oracle/product/8.1.6/admin/pfile
/opt/oracle/product/8.1.6/admin/adhoc
/opt/oracle/product/8.1.6/admin/bdump
/opt/oracle/product/8.1.6/admin/udump
/opt/oracle/product/8.1.6/admin/adump
/opt/oracle/product/8.1.6/admin/cdump
/opt/oracle/product/8.1.6/admin/create
/u02/oradata/PROD
/u03/oradata/PROD
/u04/oradata/PROD
etc..

Example mountpoints and disks:
------------------------------
Mount point   Device             Size (MB)   Purpose
/             /dev/md/dsk/d1     100         Unix root file system
/usr          /dev/md/dsk/d3     1200        Unix usr file system
/var          /dev/md/dsk/d4     200         Unix var file system
/home         /dev/md/dsk/d5     200         Unix opt file system
/opt          /dev/md/dsk/d6     4700        Oracle_Home
/u01          /dev/md/dsk/d7     8700        Oracle datafiles
/u02          /dev/md/dsk/d8     8700        Oracle datafiles
/u03          /dev/md/dsk/d9     8700        Oracle datafiles
/u04          /dev/md/dsk/d10    8700        Oracle datafiles
/u05          /dev/md/dsk/d110   8700        Oracle datafiles
/u06          /dev/md/dsk/d120   8700        Oracle datafiles
/u07          /dev/md/dsk/d123   8650        Oracle datafiles
9.1.4 Users and groups:
-----------------------
If you want to work with OS authentication, the init.ora must contain:
remote_login_passwordfile=none    (password file authentication via exclusive)

Required groups in UNIX: group dba. This must exist in the /etc/group file; often the
group oinstall is also needed.

groupadd dba
groupadd oinstall
groupadd oper

Now create the oracle user:
adduser -g oinstall -G dba -d /home/oracle oracle

# groupadd dba
# useradd oracle
# mkdir /usr/oracle
# mkdir /usr/oracle/9.0
# chown -R oracle:dba /usr/oracle
# touch /etc/oratab
# chown oracle:dba /etc/oratab

9.1.5 Mount points and disks:
-----------------------------
Create the mount points:

mkdir /opt/u01
mkdir /opt/u02
mkdir /opt/u03
mkdir /opt/u04

For a production environment these must be separate disks.
Now give ownership of these mount points to user oracle and group oinstall:

chown -R oracle:oinstall /opt/u01
chown -R oracle:oinstall /opt/u02
chown -R oracle:oinstall /opt/u03
chown -R oracle:oinstall /opt/u04

directories: drwxr-xr-x    oracle dba
files      : -rw-r-----    oracle dba
           : -rw-r--r--    oracle dba
chmod 644 *
chmod u+x filename
chmod ug+x filename

9.1.6 Test of user oracle:
--------------------------
Log in as user oracle and give the commands:
$ groups      shows the groups (oinstall, dba)
$ umask       should show 022; if not, put the line "umask 022" in the .profile

umask is the default mode of a file or directory when it is created.
rwxrwxrwx=777  rw-rw-rw-=666  rw-r--r--=644, which corresponds to umask 022.

Now change the .profile or .bash_profile of the user oracle. Place the environment
variables from 9.1 in the profile. Log out and back in as user oracle, and test the
environment:
% env
% echo $variablename
9.1.7 Oracle Installer for 8.1.x on Linux:
------------------------------------------
Log in as user oracle. Now run the Oracle installer:

Linux:
startx
cd /usr/local/src/Oracle8iR3
./runInstaller

or go to install/linux on the CD and run runIns.sh

A graphical setup now follows. Answer the questions. The installer may ask you to run
scripts such as orainstRoot.sh and root.sh. To do this: open a new window, then

su root
cd $ORACLE_HOME
./orainstRoot.sh

Installation of the database on Unix:
-------------------------------------
$ export PATH=$PATH:$ORACLE_HOME/bin $ export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$
ORACLE_HOME/lib $ dbca & or $ echo "db1:/usr/oracle/9.0:Y" >> /etc/oratab $ cd $O
RACLE_HOME/dbs $ cat initdw.ora |sed s/"#db_name = MY_DB_NAME"/"db_name = db1"/|
sed s/#control_files/control_files/ > initdb1.ora Start and create database : $
export PATH=$PATH:$ORACLE_HOME/bin $ export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$OR
ACLE_HOME/lib $ export ORACLE_SID=db1 $ sqlplus /nolog <<! connect / as sysdba s
tartup nomount create database db1 ! This creates a default database with files
in $ORACLE_HOME/dbs Now add the database meta data to actually make it useful :
$ sqlplus /nolog <<! connect / as sysdba @?/rdbms/admin/catalog # E.g: /apps/ora
cle/product/9.2/rdbms/admin @?/rdbms/admin/catproc ! Now create a user and give
it wide ranging permissions : $ sqlplus /nolog <<! connect / as sysdba create us
er myuser identified by password; grant create session,create any table to myuse
r; grant unlimited tablespace to myuser; ! 9.1.8 OS or Password Authentication:
------------------------------------- Preparing to Use OS Authentication To enab
le authentication of an administrative user using the operating system you must
do the following: Create an operating system account for the user. Add the user
to the OSDBA or OSOPER operating system defined groups. Ensure that the initiali
zation parameter, REMOTE_LOGIN_PASSWORDFILE, is set to NONE. This is the default
value for this parameter. A user can be authenticated, enabled as an administra
tive user, and connected to a local database by typing one of the following SQL*
Plus commands:
CONNECT / AS SYSDBA CONNECT / AS SYSOPER For a remote database connection over a
secure connection, the user must also specify the net service name of the remot
e database: CONNECT /@net_service_name AS SYSDBA CONNECT /@net_service_name AS S
YSOPER OSDBA: unix : dba windows: ORA_DBA OSOPER: unix : oper windows: ORA_OPER
-- Preparing to Use Password File Authentication To enable authentication of an
administrative user using password file authentication you must do the following
: Create an operating system account for the user. If not already created, Creat
e the password file using the ORAPWD utility: ORAPWD FILE=filename PASSWORD=pass
word ENTRIES=max_users Set the REMOTE_LOGIN_PASSWORDFILE initialization paramete
r to EXCLUSIVE. Connect to the database as user SYS (or as another user with the
administrative privilege). If the user does not already exist in the database,
create the user. Grant the SYSDBA or SYSOPER system privilege to the user: GRANT
SYSDBA to scott; This statement adds the user to the password file, thereby ena
bling connection AS SYSDBA. For example, user scott has been granted the SYSDBA
privilege, so he can connect as follows: CONNECT scott/tiger AS SYSDBA 9.1.9 Cre
ate a 9i database:
--------------------------
Step 1: Decide on Your Instance Identifier (SID)
Step 2: Establish the Database Administrator Authentication Method
Step 3: Create the Initialization Parameter File
Step 4: Connect to the Instance
Step 5: Start the Instance
Step 6: Issue the CREATE DATABASE Statement
Step 7: Create Additional Tablespaces
Step 8: Run Scripts to Build Data Dictionary Views
Step 9: Run Scripts to Install Additional Options (Optional)
Step 10: Create a Server Parameter File (Recommended)
Step 11: Back Up the Database

Step 1:
-------
% ORACLE_SID=ORATEST; export ORACLE_SID

Step 2:
-------
see above.

Step 3: init.ora
----------------
Note DB_CACHE_SIZE in 10g:
Parameter type : Big integer
Syntax         : DB_CACHE_SIZE = integer [K | M | G]
Default value  : If SGA_TARGET is set: If the parameter is not specified, then the
                 default is 0 (internally determined by the Oracle Database). If the
                 parameter is specified, then the user-specified value indicates a
                 minimum value for the memory pool. If SGA_TARGET is not set, then the
                 default is either 48 MB or 4 MB * number of CPUs * granule size,
                 whichever is greater.
Modifiable     : ALTER SYSTEM
Basic          : No

Oracle10g Obsolete Oracle SGA Parameters
Using AMM via the sga_target parameter rende
rs several parameters obsolete. Remember, you can continue to perform manual SGA
tuning if you like, but if you set sga_target, then these parameters will defau
lt to zero: db_cache_size - This parameter determines the number of database blo
ck buffers in the Oracle SGA and is the single most important parameter in Oracl
e memory. db_xk_cache_size - This set of parameters (with x replaced by 2, 4, 8,
16, or 32) sets the size for specialized areas of the buffer area used to store
data from tablespaces with varying blocksizes. When these are set,
they impose a hard limit on the maximum size of their respective areas. db_keep_
cache_size - This is used to store small tables that perform full table scans. T
his data buffer pool was a sub-pool of db_block_buffers in Oracle8i. db_recycle_
cache_size - This is reserved for table blocks from very large tables that perfo
rm full table scans. This was buffer_pool_keep in Oracle8i. large_pool_size - Th
is is a special area of the shared pool that is reserved for SGA usage when usin
g the multi-threaded server. The large pool is used for parallel query and RMAN
processing, as well as setting the size of the Java pool. log_buffer - This para
meter determines the amount of memory to allocate for Oracle's redo log buffers.
If there is a high amount of update activity, the log_buffer should be allocate
d more space. shared_pool_size - This parameter defines the pool that is shared
by all users in the system, including SQL areas and data dictionary caching. A l
arge shared_pool_size is not always better than a smaller shared pool. If your a
pplication contains non-reusable SQL, you may get better performance with a smal
ler shared pool. java_pool_size -- This parameter specifies the size of the memo
ry area used by Java, which is similar to the shared pool used by SQL and PL/SQL
. streams_pool_size - This is a new area in Oracle Database 10g that is used to
provide buffer areas for the streams components of Oracle. This is exactly the s
ame automatic tuning principle behind the Oracle9i pga_aggregate_target paramete
r that made these parameters obsolete. If you set pga_aggregate_target, then the
se parameters are ignored: sort_area_size - This parameter determines the memory
region that is allocated for in-memory sorting. When the v$sysstat value sorts
(disk) become excessive, you may want to allocate additional memory. hash_area_s
ize - This parameter determines the memory region reserved for hash joins. Start
ing with Oracle9i, Oracle Corporation does not recommend using hash_area_size un
less the instance is configured with the shared server option. Oracle recommends
that you enable automatic sizing of SQL work areas by setting pga_aggregate_tar
get hash_area_size is retained only for backward compatibility purposes.
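To illustrate the automatic tuning parameters discussed above, a short SQL*Plus sketch
for a 10g instance running from an spfile (the sizes are arbitrary examples):

SQL> show parameter sga_target
SQL> show parameter pga_aggregate_target
SQL> alter system set sga_target=288M scope=both;
SQL> alter system set pga_aggregate_target=96M scope=both;
-- with sga_target set, db_cache_size, shared_pool_size etc. only act as minimum values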
Sample Initialization Parameter File # Cache and I/O DB_BLOCK_SIZE=4096 DB_CACHE
_SIZE=20971520 # Cursors and Library Cache CURSOR_SHARING=SIMILAR OPEN_CURSORS=3
00 # Diagnostics and Statistics BACKGROUND_DUMP_DEST=/vobs/oracle/admin/mynewdb/
bdump CORE_DUMP_DEST=/vobs/oracle/admin/mynewdb/cdump TIMED_STATISTICS=TRUE
USER_DUMP_DEST=/vobs/oracle/admin/mynewdb/udump # Control File Configuration CON
TROL_FILES=("/vobs/oracle/oradata/mynewdb/control01.ctl", "/vobs/oracle/oradata/
mynewdb/control02.ctl", "/vobs/oracle/oradata/mynewdb/control03.ctl") # Archive
LOG_ARCHIVE_DEST_1='LOCATION=/vobs/oracle/oradata/mynewdb/archive' LOG_ARCHIVE_F
ORMAT=%t_%s.dbf LOG_ARCHIVE_START=TRUE # Shared Server # Uncomment and use first
DISPATCHERS parameter below when your listener is # configured for SSL # (listen
er.ora and sqlnet.ora) # DISPATCHERS = "(PROTOCOL=TCPS)(SER=MODOSE)", # "(PROTOC
OL=TCPS)(PRE=oracle.aurora.server.SGiopServer)" DISPATCHERS="(PROTOCOL=TCP)(SER=
MODOSE)", "(PROTOCOL=TCP)(PRE=oracle.aurora.server.SGiopServer)", (PROTOCOL=TCP)
# Miscellaneous COMPATIBLE=9.2.0 DB_NAME=mynewdb # Distributed, Replication and
Snapshot DB_DOMAIN=us.oracle.com REMOTE_LOGIN_PASSWORDFILE=EXCLUSIVE # Network
Registration INSTANCE_NAME=mynewdb # Pools JAVA_POOL_SIZE=31457280 LARGE_POOL_SI
ZE=1048576 SHARED_POOL_SIZE=52428800 # Processes and Sessions PROCESSES=150 # Re
do Log and Recovery FAST_START_MTTR_TARGET=300 # Resource Manager RESOURCE_MANAG
ER_PLAN=SYSTEM_PLAN # Sort, Hash Joins, Bitmap Indexes SORT_AREA_SIZE=524288 # A
utomatic Undo Management UNDO_MANAGEMENT=AUTO UNDO_TABLESPACE=undotbs Reasonable
10g init.ora: ------------------------
########################################### # Cache and I/O ####################
####################### db_block_size=8192 db_file_multiblock_read_count=16 ####
####################################### # Cursors and Library Cache ############
############################### open_cursors=300 ###############################
############ # Database Identification #########################################
## db_domain=antapex.org db_name=test10g #######################################
#### # Diagnostics and Statistics ########################################### ba
ckground_dump_dest=C:\oracle/admin/test10g/bdump core_dump_dest=C:\oracle/admin/
test10g/cdump user_dump_dest=C:\oracle/admin/test10g/udump #####################
###################### # File Configuration ####################################
####### control_files=("C:\oracle\oradata\test10g\control01.ctl", "C:\oracle\ora
data\test10g\control02.ctl", "C:\oracle\oradata\test10g\control03.ctl") db_recov
ery_file_dest=C:\oracle/flash_recovery_area db_recovery_file_dest_size=214748364
8 ########################################### # Job Queues #####################
###################### job_queue_processes=10 ##################################
######### # Miscellaneous ########################################### compatible
=10.2.0.1.0 ########################################### # Processes and Sessions
########################################### processes=150 #####################
###################### # SGA Memory ###########################################
sga_target=287309824 ########################################### # Security and
Auditing ########################################### audit_file_dest=C:\oracle/a
dmin/test10g/adump
remote_login_passwordfile=EXCLUSIVE ###########################################
# Shared Server ########################################### dispatchers="(PROTOC
OL=TCP) (SERVICE=test10gXDB)" ########################################### # Sort
, Hash Joins, Bitmap Indexes ########################################### pga_agg
regate_target=95420416 ########################################### # System Mana
ged Undo and Rollback Segments ########################################### undo_
management=AUTO undo_tablespace=UNDOTBS1 LOG_ARCHIVE_DEST=c:\oracle\oradata\log
LOG_ARCHIVE_FORMAT='arch_%t_%s_%r.dbf' Flash_recovery_area: location where RMAN s
tores diskbased backups
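Given the db_recovery_file_dest settings above, the flash recovery area can be inspected
and resized roughly as follows (a sketch; the new size is just an example):

SQL> show parameter db_recovery_file_dest
SQL> select name, space_limit, space_used from v$recovery_file_dest;
SQL> alter system set db_recovery_file_dest_size=4G scope=both;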
Step 4: Connect to the Instance: -------------------------------Start SQL*Plus a
nd connect to your Oracle instance AS SYSDBA. $ SQLPLUS /nolog CONNECT SYS/passw
ord AS SYSDBA Step 5: Start the Instance: --------------------------Start an ins
tance without mounting a database. Typically, you do this only during database c
reation or while performing maintenance on the database. Use the STARTUP command
with the NOMOUNT option. In this example, because the initialization parameter
file is stored in the default location, you are not required to specify the PFIL
E clause: STARTUP NOMOUNT At this point, there is no database. Only the SGA is c
reated and background processes are started in preparation for the creation of a
new database. Step 6: Issue the CREATE DATABASE Statement: --------------------
-----------------------To create the new database, use the CREATE DATABASE state
ment. The following statement creates database mynewdb: CREATE DATABASE mynewdb
USER SYS IDENTIFIED BY pz6r58
USER SYSTEM IDENTIFIED BY y1tz5p LOGFILE GROUP 1 ('/vobs/oracle/oradata/mynewdb/
redo01.log') SIZE 100M, GROUP 2 ('/vobs/oracle/oradata/mynewdb/redo02.log') SIZE
100M, GROUP 3 ('/vobs/oracle/oradata/mynewdb/redo03.log') SIZE 100M MAXLOGFILES
5 MAXLOGMEMBERS 5 MAXLOGHISTORY 1 MAXDATAFILES 100 MAXINSTANCES 1 CHARACTER SET
US7ASCII NATIONAL CHARACTER SET AL16UTF16 DATAFILE '/vobs/oracle/oradata/mynewd
b/system01.dbf' SIZE 325M REUSE EXTENT MANAGEMENT LOCAL DEFAULT TEMPORARY TABLES
PACE tempts1 DATAFILE '/vobs/oracle/oradata/mynewdb/temp01.dbf' SIZE 20M REUSE U
NDO TABLESPACE undotbs DATAFILE '/vobs/oracle/oradata/mynewdb/undotbs01.dbf' SIZ
E 200M REUSE AUTOEXTEND ON NEXT 5120K MAXSIZE UNLIMITED;
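A small sketch to check the result of the CREATE DATABASE statement before running the
dictionary scripts (only the fixed v$ views are available at this point):

SQL> select name, created, log_mode from v$database;
SQL> select group#, members, bytes/1024/1024 as mb from v$log;
SQL> select name from v$datafile;
SQL> select name from v$tempfile;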
Oracle 10g create statement: CREATE DATABASE playdwhs USER SYS IDENTIFIED BY cac
tus USER SYSTEM IDENTIFIED BY cactus LOGFILE GROUP 1 ('/dbms/tdbaplay/playdwhs/r
ecovery/redo_logs/redo01.log') SIZE 100M, GROUP 2 ('/dbms/tdbaplay/playdwhs/reco
very/redo_logs/redo02.log') SIZE 100M, GROUP 3 ('/dbms/tdbaplay/playdwhs/recover
y/redo_logs/redo03.log') SIZE 100M MAXLOGFILES 5 MAXLOGMEMBERS 5 MAXLOGHISTORY 1
MAXDATAFILES 100 MAXINSTANCES 1 CHARACTER SET US7ASCII NATIONAL CHARACTER SET A
L16UTF16 DATAFILE '/dbms/tdbaplay/playdwhs/database/default/system01.dbf' SIZE 5
00M REUSE EXTENT MANAGEMENT LOCAL SYSAUX DATAFILE '/dbms/tdbaplay/playdwhs/datab
ase/default/sysaux01.dbf' SIZE 300M REUSE DEFAULT TEMPORARY TABLESPACE temp TEMP
FILE '/dbms/tdbaplay/playdwhs/database/default/temp01.dbf' SIZE 1000M REUSE UNDO
TABLESPACE undotbs DATAFILE '/dbms/tdbaplay/playdwhs/database/default/undotbs01
.dbf' SIZE 1000M REUSE AUTOEXTEND ON MAXSIZE UNLIMITED; CONNECT SYS/password AS
SYSDBA -- create a user tablespace to be assigned as the default tablespace for
users CREATE TABLESPACE users LOGGING DATAFILE '/u01/oracle/oradata/mynewdb/user
s01.dbf' SIZE 25M REUSE AUTOEXTEND ON NEXT 1280K MAXSIZE UNLIMITED EXTENT MANAGE
MENT LOCAL;
-- create a tablespace for indexes, separate from user tablespace CREATE TABLESP
ACE indx LOGGING DATAFILE '/u01/oracle/oradata/mynewdb/indx01.dbf' SIZE 25M REUS
E AUTOEXTEND ON NEXT 1280K MAXSIZE UNLIMITED EXTENT MANAGEMENT LOCAL; For inform
ation about creating tablespaces, see Chapter 8, " Managing Tablespaces". Step 9
: Run Scripts to Build Data Dictionary Views Run the scripts necessary to build
views, synonyms, and PL/SQL packages: CONNECT SYS/password AS SYSDBA @/u01/oracl
e/rdbms/admin/catalog.sql @/u01/oracle/rdbms/admin/catproc.sql EXIT catalog.sql
All databases Creates the data dictionary and public synonyms for many of its vi
ews Grants PUBLIC access to the synonyms catproc.sql All databases Runs all scri
pts required for, or used with PL/SQL catclust.sql Real Application Clusters Cre
ates Real Application Clusters data dictionary views Oracle supplies other scrip
ts that create additional structures you can use in managing your database and c
reating database applications. These scripts are listed in Table B-2. See Also:
Your operating system-specific Oracle documentation for the exact names and loca
tions of these scripts on your operating system Table B-2 Creating Additional Da
ta Dictionary Structures Script Name Needed For Run By Description catblock.sql
Performance management SYS Creates views that can dynamically display lock depen
dency graphs catexp7.sql Exporting data to Oracle7 SYS Creates the dictionary vi
ews needed for the Oracle7 Export utility to export data from the Oracle Databas
e in Oracle7 Export file format caths.sql Heterogeneous Services SYS Installs pa
ckages for administering heterogeneous services catio.sql Performance management
SYS Allows I/O to be traced on a table-by-table basis catoctk.sql Security SYS
Creates the Oracle Cryptographic Toolkit package catqueue.sql Advanced Queuing C
reates the dictionary objects required for Advanced Queuing catrep.sql Oracle Re
plication SYS Runs all SQL scripts for enabling database replication catrman.sql
Recovery Manager RMAN or any user with GRANT_RECOVERY_CATALOG_OWNER role Create
s recovery manager tables and views (schema) to establish an external recovery c
atalog for the backup, restore, and recovery functionality provided by the Recov
ery Manager (RMAN) utility
dbmsiotc.sql Storage management Any user Analyzes chained rows in index-organize
d tables dbmsotrc.sql Performance management SYS or SYSDBA Enables and disables
generation of Oracle Trace output dbmspool.sql Performance management SYS or SYS
DBA Enables DBA to lock PL/SQL packages, SQL statements, and triggers into the s
hared pool userlock.sql Concurrency control SYS or SYSDBA Provides a facility fo
r user-named locks that can be used in a local or clustered environment to aid i
n sequencing application actions utlbstat.sql and utlestat.sql Performance monit
oring SYS Respectively start and stop collecting performance tuning statistics u
tlchn1.sql Storage management Any user For use with the Oracle Database. Creates
tables for storing the output of the ANALYZE command with the CHAINED ROWS opti
on. Can handle both physical and logical rowids. utlconst.sql Year 2000 complian
ce Any user Provides functions to validate that CHECK constraints on date column
s are year 2000 compliant utldtree.sql Metadata management Any user Creates tabl
es and views that show dependencies between objects utlexpt1.sql Constraints Any
user For use with the Oracle Database. Creates the default table (EXCEPTIONS) f
or storing exceptions from enabling constraints. Can handle both physical and lo
gical rowids. utlip.sql PL/SQL SYS Used primarily for upgrade and downgrade oper
ations. It invalidates all existing PL/SQL modules by altering certain dictionar
y tables so that subsequent recompilations will occur in the format required by
the database. It also reloads the packages STANDARD and DBMS_STANDARD, which are
necessary for any PL/SQL compilations. utlirp.sql PL/SQL SYS Used to change fro
m 32-bit to 64-bit word size or vice versa. This script recompiles existing PL/S
QL modules in the format required by the new database. It first alters some data
dictionary tables. Then it reloads the packages STANDARD and DBMS_STANDARD, whi
ch are necessary for using PL/SQL. Finally, it triggers a recompilation of all P
L/SQL modules, such as packages, procedures, and types. utllockt.sql Performance
monitoring SYS or SYSDBA Displays a lock wait-for graph, in tree structure form
at utlpwdmg.sql Security SYS or SYSDBA Creates PL/SQL functions for default pass
word complexity verification. Sets the default password profile parameters and e
nables password management features. utlrp.sql PL/SQL SYS Recompiles all existin
g PL/SQL modules that were previously in an INVALID state, such as packages, pro
cedures, and types. utlsampl.sql Examples SYS or any user with DBA role Creates
sample tables, such as emp and dept, and users, such as scott utlscln.sql Oracle
Replication Any user Copies a snapshot schema from another snapshot site utltkp
rf.sql Performance management SYS Creates the TKPROFER role to allow the TKPROF
profiling utility to be run by non-DBA users utlvalid.sql Partitioned tables Any
user Creates tables required for storing output of ANALYZE TABLE ...VALIDATE ST
RUCTURE of a partitioned table utlxplan.sql Performance management Any user
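Most of the scripts in Table B-2 are simply run from SQL*Plus while connected as the
user shown in the "Run By" column, for example:

SQL> connect / as sysdba
SQL> @?/rdbms/admin/catblock.sql        -- lock dependency views
SQL> @?/rdbms/admin/utllockt.sql        -- show the current lock wait-for tree
SQL> @?/rdbms/admin/utlrp.sql           -- recompile INVALID PL/SQL objects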
+++++++
Please create on the pl003 the following two instances:
- playdwhs
- accpdwhs

And on the pl101 the following instance:
- proddwhs

Please conform to the current standard for filesystems. That is, all these databases go
on volume group roca_vg, with the following mount points underneath:
/dbms/tdba[env]/[env]dwhs/admin
/dbms/tdba[env]/[env]dwhs/database
/dbms/tdba[env]/[env]dwhs/recovery
/dbms/tdba[env]/[env]dwhs/export

(existing example:)
/dev/fslv32    0.25   0.23   7%   55   1%   /dbms/tdbaaccp/accproca/admin
/dev/fslv33   15.00  11.78  22%   17   1%   /dbms/tdbaaccp/accproca/database
/dev/fslv34    4.00   3.51  13%   12   1%   /dbms/tdbaaccp/accproca/recovery
/dev/fslv35    5.00   4.99   1%   10   1%   /dbms/tdbaaccp/accproca/export

1. FS:                                   SIZE (G)   LPs   PPs
/dbms/tdbaplay/playdwhs/admin            0.25       4     8
/dbms/tdbaplay/playdwhs/database         15         240   480
/dbms/tdbaplay/playdwhs/recovery         4          64    128
/dbms/tdbaplay/playdwhs/export           5          80    160
/dbms/tdbaaccp/accpdwhs/admin            0.25       4     8
/dbms/tdbaaccp/accpdwhs/database         15         240   480
/dbms/tdbaaccp/accpdwhs/recovery         4          64    128
/dbms/tdbaaccp/accpdwhs/export           5          80    160
/dbms/tdbaprod/proddwhs/admin            0.25       4     8
/dbms/tdbaprod/proddwhs/database         15         240   480
/dbms/tdbaprod/proddwhs/recovery         4          64    128
/dbms/tdbaprod/proddwhs/export           5          80    160
CREATE DATABASE mynewdb USER SYS IDENTIFIED BY pz6r58 USER SYSTEM IDENTIFIED BY
y1tz5p LOGFILE GROUP 1 ('/vobs/oracle/oradata/mynewdb/redo01.log') SIZE 100M, GR
OUP 2 ('/vobs/oracle/oradata/mynewdb/redo02.log') SIZE 100M, GROUP 3 ('/vobs/ora
cle/oradata/mynewdb/redo03.log') SIZE 100M MAXLOGFILES 5 MAXLOGMEMBERS 5 MAXLOGH
ISTORY 1 MAXDATAFILES 100 MAXINSTANCES 1 CHARACTER SET US7ASCII NATIONAL CHARACT
ER SET AL16UTF16 DATAFILE '/vobs/oracle/oradata/mynewdb/system01.dbf' SIZE 325M
REUSE EXTENT MANAGEMENT LOCAL DEFAULT TEMPORARY TABLESPACE tempts1 DATAFILE '/vo
bs/oracle/oradata/mynewdb/temp01.dbf'
SIZE 20M REUSE UNDO TABLESPACE undotbs DATAFILE '/vobs/oracle/oradata/mynewdb/un
dotbs01.dbf' SIZE 200M REUSE AUTOEXTEND ON NEXT 5120K MAXSIZE UNLIMITED; +++++++
Step 7: Create Additional Tablespaces: -------------------------------------To
make the database functional, you need to create additional files and tablespace
s for users. The following sample script creates some additional tablespaces: CO
NNECT SYS/password AS SYSDBA -- create a user tablespace to be assigned as the d
efault tablespace for users CREATE TABLESPACE users LOGGING DATAFILE '/vobs/orac
le/oradata/mynewdb/users01.dbf' SIZE 25M REUSE AUTOEXTEND ON NEXT 1280K MAXSIZE
UNLIMITED EXTENT MANAGEMENT LOCAL; -- create a tablespace for indexes, separate
from user tablespace CREATE TABLESPACE indx LOGGING DATAFILE '/vobs/oracle/orada
ta/mynewdb/indx01.dbf' SIZE 25M REUSE AUTOEXTEND ON NEXT 1280K MAXSIZE UNLIMITED
EXTENT MANAGEMENT LOCAL; EXIT Step 8: Run Scripts to Build Data Dictionary View
s: --------------------------------------------------Run the scripts necessary t
o build views, synonyms, and PL/SQL packages: CONNECT SYS/password AS SYSDBA @/v
obs/oracle/rdbms/admin/catalog.sql @/vobs/oracle/rdbms/admin/catproc.sql EXIT Do
not forget to run as SYSTEM the script /sqlplus/admin/pupbld.sql; @/dbms/tdbaac
cp/ora10g/home/sqlplus/admin/pupbld.sql @/dbms/tdbaaccp/ora10g/home/rdbms/admin/
catexp.sql The following table contains descriptions of the scripts: Script Desc
ription CATALOG.SQL: Creates the views of the data dictionary tables, the dynami
c performance views, and public synonyms for many of the views. Grants PUBLIC ac
cess to the synonyms. CATPROC.SQL: Runs all scripts required for or used with PL
/SQL.
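After catalog.sql, catproc.sql and pupbld.sql have run, a quick check that the dictionary
is in order might look like this (a sketch):

SQL> select comp_name, version, status from dba_registry;
SQL> select count(*) from dba_objects where status = 'INVALID';
-- if objects are INVALID, @?/rdbms/admin/utlrp.sql can be used to recompile them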
Step 10: Create a Server Parameter File (Recommended): -------------------------
----------------------------Oracle recommends you create a server parameter file
as a dynamic means of
maintaining initialization parameters. The following script creates a server par
ameter file from the text initialization parameter file and writes it to the def
ault location. The instance is shut down, then restarted using the server parame
ter file (in the default location). CONNECT SYS/password AS SYSDBA -- create the
server parameter file CREATE SPFILE='/vobs/oracle/dbs/spfilemynewdb.ora' FROM P
FILE='/vobs/oracle/admin/mynewdb/scripts/init.ora'; SHUTDOWN -- this time you wi
ll start up using the server parameter file CONNECT SYS/password AS SYSDBA START
UP EXIT CREATE SPFILE='/opt/app/oracle/product/9.2/dbs/spfileOWS.ora' FROM PFILE
='/opt/app/oracle/admin/OWS/pfile/init.ora'; CREATE SPFILE='/opt/app/oracle/prod
uct/9.2/dbs/spfilePEGACC.ora' FROM PFILE='/opt/app/oracle/admin/PEGACC/scripts/i
nit.ora'; CREATE SPFILE='/opt/app/oracle/product/9.2/dbs/spfilePEGTST.ora' FROM
PFILE='/opt/app/oracle/admin/PEGTST/scripts/init.ora'; 9.10 Oracle 9i licenses:
-----------------------Setting License Parameters Oracle no longer offers licens
ing by the number of concurrent sessions. Therefore the LICENSE_MAX_SESSIONS and
LICENSE_SESSIONS_WARNING initialization parameters have been deprecated. - name
d user licensing: If you use named user licensing, Oracle can help you enforce
this form of licensing. You can set a limit on the number of users created in th
e database. Once this limit is reached, you cannot create more users. Note: This
mechanism assumes that each person accessing the database has a unique user nam
e and that no people share a user name. Therefore, so that named user licensing
can help you ensure compliance with your Oracle license agreement, do not allow
multiple users to log in using the same user name. To limit the number of users
created in a database, set the LICENSE_MAX_USERS initialization parameter in the
database's initialization parameter file, as shown in the following example: LI
CENSE_MAX_USERS = 200 - per-processor licensing:
Oracle encourages customers to license the database on the per-processor licensi
ng model. With this licensing method you count up the number of CPUs in your com
puter, and multiply that number by the licensing cost of the database and databa
se options you need. Currently the Standard (STD) edition of the database is pri
ced at $15,000 per processor, and the Enterprise (EE) edition is priced at $40,0
00 per processor. The RAC feature is $20,000 per processor extra, and you need t
o add 22 percent annually for the support contract. It's possible to license the
database on a per-user basis, which makes financial sense if there'll never be
many users accessing the database. However, the licensing method can't be change
d after it is initially licensed. So if the business grows and requires signific
antly more users to access the database, the costs could exceed the costs under
the per-processor model. You also have to understand what Oracle corporation con
siders to be a user for licensing purposes. If 1,000 users acces
s the database through an application server, which only makes five connections
to the database, then Oracle will require that either 1,000 user licenses be pur
chased or that the database be licensed via the per-processor pricing model. The
Oracle STD edition is licensed at $300 per user (with a five user minimum), and
EE edition costs $800 per user (with a 25 user minimum). There is still an annu
al support fee of 22 percent, which should be budgeted in addition to the licens
ing fees. If the support contract is not paid each year, then the customer is no
t licensed to upgrade to the latest version of the database and must re-purchase
all of the licenses over again in order to upgrade versions. This section only
gives you a brief overview of the available licensing options and costs, so if y
ou have additional questions you really should contact an Oracle sales represent
ative Note about 10g init.ora: -----------------------PARALLEL_MAX_SERVERS=(> ap
ply or capture processes) Each capture process and apply process may use multipl
e parallel execution servers. The apply process by default needs two parallel se
rvers. So this parameter needs to be set to at least 2 even for a single non-parall
el apply process. Specify a value for this parameter to ensure that there are en
ough parallel execution servers. In our installation we went for 12 apply servers,
so we increased parallel_max_servers above this figure of 12. _kg
hdsidx_count=1 This parameter prevents the shared_pool from being divided among
CPUs LOG_PARALLELISM=1 This parameter must be set to 1 at each database that cap
tures events.

Parameters set using the DBMS_CAPTURE_ADM package:
Using the DBMS_CAPTURE_ADM.SET_PARAMETER procedure, there are 3 parameters that are
commonly used to affect the installation:
PARALLELISM=3 There may be only one logminer session for the whole ruleset and o
nly one enqueuer process that will push the objects. You can safely define as many as
3 capture execution processes per CPU. _CHECKPOINT_FREQUENCY=1 Increase the f
requency of logminer checkpoints especially in a database with significant LOB o
r DDL activity. A logminer checkpoint is requested by default every 10Mb of redo
mined. _SGA_SIZE Amount of memory available from the shared pool for logminer p
rocessing. The default amount of shared_pool memory allocated to logminer is 10M
b. Increase this value especially in environments where large LOBs are processed
.

9.11 Older Database installations:
----------------------------------
CREATE DATABASE Examples on 8.x

The easiest way to create an 8i or 9i database is using the "Database Configuration
Assistant". Using this tool, you are able to create a database and set up the NET
configuration and the listener in a graphical environment. It is also possible to use
a script running in sqlplus (8i, 9i) or svrmgrl (only in 8i).

Charactersets that are used a lot in Europe: WE8ISO8859P15, WE8MSWIN1252

Example 1:
----------
$ SQLPLUS /nolog
CONNECT username/password AS sysdba
STARTUP NOMOUNT PFILE=<path to init.ora>

-- Create database
CREATE DATABASE rbdb1
    CONTROLFILE REUSE
    LOGFILE '/u01/oracle/rbdb1/redo01.log' SIZE 1M REUSE,
            '/u01/oracle/rbdb1/redo02.log' SIZE 1M REUSE,
            '/u01/oracle/rbdb1/redo03.log' SIZE 1M REUSE,
            '/u01/oracle/rbdb1/redo04.log' SIZE 1M REUSE
    DATAFILE '/u01/oracle/rbdb1/system01.dbf' SIZE 10M REUSE
      AUTOEXTEND ON NEXT 10M MAXSIZE 200M
    CHARACTER SET WE8ISO8859P1;

run catalog.sql
run catproc.sql

-- Create another (temporary) system tablespace
CREATE ROLLBACK SEGMENT rb_temp STORAGE (INITIAL 100 k NEXT 250 k); -- Alter tem
porary system tablespace online before proceding ALTER ROLLBACK SEGMENT rb_temp
ONLINE; -- Create additional tablespaces ... -- RBS: For rollback segments -- US
ERs: Create user sets this as the default tablespace -- TEMP: Create user sets t
his as the temporary tablespace CREATE TABLESPACE rbs DATAFILE '/u01/oracle/rbdb
1/rbs01.dbf' SIZE 5M REUSE AUTOEXTEND ON NEXT 5M MAXSIZE 150M; CREATE TABLESPACE
users DATAFILE '/u01/oracle/rbdb1/users01.dbf' SIZE 3M REUSE AUTOEXTEND ON NEXT
5M MAXSIZE 150M; CREATE TABLESPACE temp DATAFILE '/u01/oracle/rbdb1/temp01.dbf'
SIZE 2M REUSE AUTOEXTEND ON NEXT 5M MAXSIZE 150M; -- Create rollback segments.
CREATE ROLLBACK SEGMENT rb1 STORAGE(INITIAL 50K NEXT 250K) tablespace rbs;
CREATE ROLLBACK SEGMENT rb2 STORAGE(INITIAL 50K NEXT 250K) tablespace rbs;
CREATE ROLLBACK SEGMENT rb3 STORAGE(INITIAL 50K NEXT 250K) tablespace rbs;
CREATE ROLLBACK SEGMENT rb4 STORAGE(INITIAL 50K NEXT 250K) tablespace rbs;
-- Bring new rollback segments online and drop the temporary system one ALTER RO
LLBACK SEGMENT rb1 ONLINE; ALTER ROLLBACK SEGMENT rb2 ONLINE; ALTER ROLLBACK SEG
MENT rb3 ONLINE; ALTER ROLLBACK SEGMENT rb4 ONLINE; ALTER ROLLBACK SEGMENT rb_te
mp OFFLINE; DROP ROLLBACK SEGMENT rb_temp ; Example 2: ---------connect internal
startup nomount pfile=/disk00/oracle/software/7.3.4/dbs/initDB1.ora create data
base "DB1" maxinstances 2 maxlogfiles 32 maxdatafiles 254 character set "US7ASCII"
datafile '/disk02/oracle/oradata/DB1/system01.dbf' size 128M autoextend on next 8M maxsize 256M
logfile group 1 ('/disk03/oracle/oradata/DB1/redo1a.log', '/disk04/oracle/oradat
a/DB1/redo1b.log') size 5M, group 2 ('/disk05/oracle/oradata/DB1/redo2a.log', '
/disk06/oracle/oradata/DB1/redo2b.log') size 5M REM * install data dictionary vi
ews @/disk00/oracle/software/7.3.4/rdbms/admin/catalog.sql @/disk00/oracle/softw
are/7.3.4/rdbms/admin/catproc.sql create rollback segment SYSROLL tablespace sys
tem storage (initial 2M next 2M minextents 2 maxextents 255); alter rollback seg
ment SYSROLL online; create tablespace RBS datafile '/disk01/oracle/oradata/DB1/
rbs01.dbf' size 25M default storage ( initial 500K next 500K pctincrease 0 minex
tents 2 ); create rollback segment RBS01 tablespace RBS storage (initial 512K n
ext 512K minextents 50); create rollback segment RBS02 tablespace RBS storage (i
nitial 500K next 500K minextents 2 optimal 1M); etc.. alter rollback segment RBS
01 online; alter rollback segment RBS02 online; etc.. create tablespace DATA dat
afile '/disk05/oracle/oradata/DB1/data01.dbf' size 25M default storage ( initial
500K next 500K pctincrease 0 maxextents UNLIMITED ); etc.. other tablespaces yo
u need run other scripts you need. alter user sys temporary tablespace TEMP; alt
er user system default tablespace TOOLS temporary tablespace TEMP; connect syste
m/manager @/disk00/oracle/software/7.3.4/rdbms/admin/catdbsyn.sql
@/disk00/oracle/software/7.3.4/rdbms/admin/pupbld.sql for PRODUCT_USER_PROFIL
E, SQLPLUS_USER_PROFILE Example 3: on NT/2000 8i best example: -----------------
--------------------Suppose you want a second database on a NT/2000 Server: 1. c
reate a service with oradim oradim -new -sid -startmode -pfile 2. sqlplus /nolog
(or use svrmgrl) startup nomount pfile="G:\oracle\admin\hd\pfile\init.ora" SVRM
GR> CREATE DATABASE hd LOGFILE 'G:\oradata\hd\redo01.log' SIZE 2048K, 'G:\oradat
a\hd\redo02.log' SIZE 2048K, 'G:\oradata\hd\redo03.log' SIZE 2048K MAXLOGFILES 3
2 MAXLOGMEMBERS 2 MAXLOGHISTORY 1 DATAFILE 'G:\oradata\hd\system01.dbf' SIZE 264
M 10240K MAXDATAFILES 254 MAXINSTANCES 1 CHARACTER SET WE8ISO8859P1 NATIONAL CHA
RACTER SET WE8ISO8859P1; @catalog.sql @catproc.sql Oracle 9i: ---------Example 1
: ---------CREATE DATABASE mynewdb USER SYS IDENTIFIED BY pz6r58 USER SYSTEM IDE
NTIFIED BY y1tz5p LOGFILE GROUP 1 ('/vobs/oracle/oradata/mynewdb/redo01.log') SI
ZE 100M, GROUP 2 ('/vobs/oracle/oradata/mynewdb/redo02.log') SIZE 100M, GROUP 3
('/vobs/oracle/oradata/mynewdb/redo03.log') SIZE 100M MAXLOGFILES 5 MAXLOGMEMBER
S 5 MAXLOGHISTORY 1 MAXDATAFILES 100 MAXINSTANCES 1 CHARACTER SET US7ASCII NATIO
NAL CHARACTER SET AL16UTF16 DATAFILE '/vobs/oracle/oradata/mynewdb/system01.dbf'
SIZE 325M REUSE EXTENT MANAGEMENT LOCAL DEFAULT TEMPORARY TABLESPACE tempts1
REUSE AUTOEXTEND ON NEXT
DATAFILE '/vobs/oracle/oradata/mynewdb/temp01.dbf' SIZE 20M REUSE UNDO TABLESPAC
E undotbs DATAFILE '/vobs/oracle/oradata/mynewdb/undotbs01.dbf' SIZE 200M REUSE
AUTOEXTEND ON NEXT 5120K MAXSIZE UNLIMITED;
9.2 Automatic start of Oracle at system boot:
=============================================
9.2.1 oratab:
-------------
Contents of ORATAB in /etc or /var/opt:
Example:
# $ORACLE_SID:$ORACLE_HOME:[N|Y]
# ORCL:/u01/app/oracle/product/8.0.5:Y
#
The oracle scripts to start and stop the database are: $ORACLE_HOME/bin/dbstart and
dbshut, or startdb and stopdb or something similar. These look in ORATAB to see which
databases must be started.

9.2.2 dbstart and dbshut:
-------------------------
The dbstart script will read oratab and also run tests to determine the Oracle version.
Apart from that, the core consists of: starting sqldba, svrmgrl or sqlplus, then doing
a connect, then issuing the startup command. A similar story applies to dbshut.

9.2.3 init, sysinit, rc:
------------------------
For an automatic start, add the proper entries in the /etc/rc2.d/S99dbstart
(or equivalent) file. During Unix startup, the scripts in /etc/rc2.d that begin with
an 'S' are executed, in alphabetical order. The Oracle database processes will be
started as (one of) the last processes. The file S99oracle is linked into this directory.

Contents of S99oracle:
su - oracle -c "/path/to/$ORACLE_HOME/bin/dbstart"                    # Start DBs
su - oracle -c "/path/to/$ORACLE_HOME/bin/lsnrctl start"              # Start listener
su - oracle -c "/path/to/$ORACLE_HOME/bin/namesctl start" (optional)  # Start OraNames

The dbstart script is a standard Oracle script. It looks in oratab for sids set to 'Y',
and will start those databases. Or customized via a customized startdb script:

ORACLE_ADMIN=/opt/oracle/admin; export ORACLE_ADMIN
su - oracle -c "$ORACLE_ADMIN/bin/startdb WPRD 1>$ORACLE_ADMIN/log/WPRD/startWPRD.$$ 2>&1"
su - oracle -c "$ORACLE_ADMIN/bin/startdb WTST 1>$ORACLE_ADMIN/log/WTST/startWTST.$$ 2>&1"
su - oracle -c "$ORACLE_ADMIN/bin/startdb WCUR 1>$ORACLE_ADMIN/log/WCUR/startWCUR.$$ 2>&1"
9.3 Stopping Oracle on Unix:
----------------------------
During the shutdown of Unix (shutdown -i 0), the scripts in the directory /etc/rc2.d that start with a 'K' are executed, in alphabetical order.
The Oracle database processes are among the first processes to be stopped. The file K10oracle is linked to /etc/rc2.d/K10oracle.

# Configuration File: /opt/oracle/admin/bin/K10oracle
ORACLE_ADMIN=/opt/oracle/admin; export ORACLE_ADMIN
su - oracle -c "$ORACLE_ADMIN/bin/stopdb WPRD 1>$ORACLE_ADMIN/log/WPRD/stopWPRD.$$ 2>&1"
su - oracle -c "$ORACLE_ADMIN/bin/stopdb WCUR 1>$ORACLE_ADMIN/log/WCUR/stopWCUR.$$ 2>&1"
su - oracle -c "$ORACLE_ADMIN/bin/stopdb WTST 1>$ORACLE_ADMIN/log/WTST/stopWTST.$$ 2>&1"

9.4 startdb and stopdb:
----------------------
Startdb [ORACLE_SID]
-------------------
This script is part of the S99Oracle script. It takes one parameter, ORACLE_SID.

# Configuration File: /opt/oracle/admin/bin/startdb
# Set general environment
. $ORACLE_ADMIN/env/profile
ORACLE_SID=$1
echo $ORACLE_SID
# Set RDBMS environment
. $ORACLE_ADMIN/env/$ORACLE_SID.env
# Start the database
sqlplus /nolog << EOF
connect / as sysdba
startup
EOF
# Start the listener
lsnrctl start $ORACLE_SID
# Start the intelligent agent for all instances
#lsnrctl dbsnmp_start

Stopdb [ORACLE_SID]
------------------
This script is part of the K10Oracle script. It takes one parameter, ORACLE_SID.

# Configuration File: /opt/oracle/admin/bin/stopdb
# Set general environment
. $ORACLE_ADMIN/env/profile
ORACLE_SID=$1
export ORACLE_SID
# RDBMS settings
. $ORACLE_ADMIN/env/$ORACLE_SID.env
# Stop the intelligent agent
#lsnrctl dbsnmp_stop
# Stop the listener
lsnrctl stop $ORACLE_SID
# Stop the database
sqlplus /nolog << EOF
connect / as sysdba
shutdown immediate
EOF
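To verify afterwards that the instances and listeners really went down (or came up), something like the following is commonly used:

ps -ef | grep pmon       # one pmon process per running instance
ps -ef | grep tnslsnr    # listener processes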
9.5 Batches:
-----------
The batches (jobs) are started by the Unix cron process.

# Batches (Oracle)
# Configuration File: /var/spool/cron/crontabs/root
# Format of lines:
# min hour daymo month daywk cmd
# # Dayweek 0=sunday, 1=monday...
0 9 * * 6 /sbin/sh /opt/oracle/admin/batches/bin/batches.sh >> /opt/oracle/admin/batches/log/batcheserroroutput.log 2>&1

# Configuration File: /opt/oracle/admin/batches/bin/batches.sh
# Setting ' BL_TRACE=T ; export BL_TRACE ' on the command line makes all commands visible.
case $BL_TRACE in T) set -x ;; esac
ORACLE_ADMIN=/opt/oracle/admin; export ORACLE_ADMIN
ORACLE_HOME=/opt/oracle/product/8.1.6; export ORACLE_HOME
ORACLE_SID=WCUR ; export ORACLE_SID
su - oracle -c ". $ORACLE_ADMIN/env/profile ; . $ORACLE_ADMIN/env/$ORACLE_SID.env; cd $ORACLE_ADMIN/batches/bin; sqlplus /NOLOG @$ORACLE_ADMIN/batches/bin/Analyse_WILLOW2K.sql 1> $ORACLE_ADMIN/batches/log/batches$ORACLE_SID.`date +"%y%m%d"` 2>&1"
ORACLE_SID=WCON ; export ORACLE_SID
su - oracle -c ". $ORACLE_ADMIN/env/profile ; . $ORACLE_ADMIN/env/$ORACLE_SID.env; cd $ORACLE_ADMIN/batches/bin; sqlplus /NOLOG @$ORACLE_ADMIN/batches/bin/Analyse_WILLOW2K.sql 1> $ORACLE_ADMIN/batches/log/batches$ORACLE_SID.`date +"%y%m%d"` 2>&1"

9.6 Autostart in NT/Win2K:
--------------------------
1) Older versions
   delete the existing instance FROM the command prompt:    oradim80 -delete -sid SID
   recreate the instance FROM the command prompt:           oradim -new -sid SID -intpwd <password> -startmode <auto> -pfile <path\initSID.ora>
   Execute the command file FROM the command prompt:        oracle_home\database\strt<sid>.cmd
   Check the log file generated FROM this execution:        oracle_home\rdbmsxx\oradimxx.log
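As a concrete sketch of the oradim syntax above (the SID, password and pfile path are only illustrative):

oradim -new -sid PROD1 -intpwd secret -startmode auto -pfile D:\oracle\admin\PROD1\pfile\initPROD1.ora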
2) NT Registry value HKEY_LOCAL_MACHINE\SOFTWARE\ORACLE\HOME0\ORA_SID_AUTOSTART
REG_EXPAND_SZ TRUE

9.7 Tools:
---------
Relink of Oracle:
-----------------
info:   showrev -p
        pkginfo -i
relink: make -f $ORACLE_HOME/rdbms/lib/ins_rdbms.mk install
        make -f $ORACLE_HOME/svrmgr/lib/ins_svrmgr.mk install
        make -f $ORACLE_HOME/network/lib/ins_network.mk install
        $ORACLE_HOME/bin/relink all

Relinking Oracle Background:
Applications for UNIX are generally not distributed as complete executables. Oracle, like many application vendors who create
products for UNIX, distributes individual object files, library archives of object files, and some source files
which then get 'relinked' at the operating system level during installation to
create usable executables. This guarantees a reliable integration with functions
provided by the OS system libraries. Relinking occurs automatically under these
circumstances: - An Oracle product has been installed with an Oracle provided i
nstaller. - An Oracle patch set has been applied via an Oracle provided installe
r. [Step 1] Log into the UNIX system as the Oracle software owner Typically this
is the user 'oracle'. [STEP 2] Verify that your $ORACLE_HOME is set correctly:
For all Oracle Versions and Platforms, perform this basic environment check firs
t: % cd $ORACLE_HOME % pwd ...Doing this will ensure that $ORACLE_HOME is set co
rrectly in your current environment. [Step 3] Verify and/or Configure the UNIX E
nvironment for Proper Relinking: For all Oracle Versions and UNIX Platforms: The
Platform specific environment variables LIBPATH, LD_LIBRARY_PATH, & SHLIB_PATH
typically are already set to include system library locations like '/usr/lib'. I
n most cases, you need only check what they are set to first, then add the
$ORACLE_HOME/lib directory to them where appropriate. i.e.: % setenv LD_LIBRARY_
PATH ${ORACLE_HOME}/lib:${LD_LIBRARY_PATH} (see [NOTE:131207.1] How to Set UNIX
Environment Variables for help with setting UNIX environment variables) If on SO
LARIS (Sparc or Intel) with: Oracle 7.3.X, 8.0.X, or 8.1.X: - Ensure that /usr/c
cs/bin is before /usr/ucb in $PATH % which ld ....should return '/usr/ccs/bin/ld
' If using 32bit(non 9i) Oracle, - Set LD_LIBRARY_PATH=$ORACLE_HOME/lib If using
64bit(non 9i) Oracle, - Set LD_LIBRARY_PATH=$ORACLE_HOME/lib - Set LD_LIBRARY_P
ATH_64=$ORACLE_HOME/lib64 Oracle 9.X.X (64Bit) on Solaris (64Bit) OS - Set LD_LI
BRARY_PATH=$ORACLE_HOME/lib32 - Set LD_LIBRARY_PATH_64=$ORACLE_HOME/lib Oracle 9
.X.X (32Bit) on Solaris (64Bit) OS - Set LD_LIBRARY_PATH=$ORACLE_HOME/lib [Step
4] For all Oracle Versions and UNIX Platforms: Verify that you performed Step 2
correctly: % env|pg ....make sure that you see the correct absolute path for $OR
ACLE_HOME in the variable definitions. [Step 5] Run the OS Commands to Relink Or
acle: Before relinking Oracle, shut down both the database and the listener. Ora
cle 8.1.X or 9.X.X -----------------------*** NEW IN 8i AND ABOVE *** A 'relink'
script is provided in the $ORACLE_HOME/bin directory. % cd $ORACLE_HOME/bin % r
elink ...this will display all of the command's options.

usage: relink <parameter>
accepted values for parameter:
  all               Every product executable that has been installed
  oracle            Oracle Database executable only
  network           net_client, net_server, cman
  client            net_client, plsql
  client_sharedlib  Client shared library
  interMedia        ctx
  ctx               Oracle Text utilities
  precomp           All precompilers that have been installed
  utilities         All utilities that have been installed
  oemagent          oemagent
                    Note: To give the correct permissions to the nmo and nmb executables, you must run the root.sh script after relinking oemagent.
  ldap              ldap, oid
Note: ldap option is available only from 9i. In 8i, you would have to manually r
elink ldap. You can relink most of the executables associated with an Oracle Ser
ver Installation by running the following command: % relink all
This will not relink every single executable Oracle provides (you can discern wh
ich executables were relinked by checking their timestamp with 'ls -l' in the $O
RACLE_HOME/bin directory). However, 'relink all' will recreate the shared librar
ies that most executables rely on and thereby resolve most issues that require a
proper relink. -or- Since the 'relink' command merely calls the traditional 'make
' commands, you still have the option of running the 'make' commands independent
ly: For executables: oracle, exp, imp, sqlldr, tkprof, mig, dbv, orapwd, rman, s
vrmgrl, ogms, ogmsctl % cd $ORACLE_HOME/rdbms/lib % make -f ins_rdbms.mk install
For executables: sqlplus % cd $ORACLE_HOME/sqlplus/lib % make -f ins_sqlplus.mk
install For executables: isqlplus % cd $ORACLE_HOME/sqlplus/lib % make -f ins_s
qlplus.mk install_isqlplus For executables: dbsnmp, oemevent, oratclsh % cd $ORACLE
_HOME/network/lib % make -f ins_oemagent.mk install For executables: names, name
sctl % cd $ORACLE_HOME/network/lib % make -f ins_names.mk install For executable
s: osslogin, trcasst, trcroute, onrsd, tnsping % cd $ORACLE_HOME/network/lib % m
ake -f ins_net_client.mk install For executables: tnslsnr, lsnrctl % cd $ORACLE_
HOME/network/lib % make -f ins_net_server.mk install For executables related to
ldap (for example Oracle Internet Directory): % cd $ORACLE_HOME/ldap/lib % make
-f ins_ldap.mk install Note: Unix Installation/OS: RDBMS Technical Forum Display
ed below are the messages of the selected thread. Thread Status: Closed From: Ra
y Stell 20-Apr-05 21:43 Subject: solaris upgrade RDBMS Version: 9.2.0.4 Operatin
g System and Version: Solaris 8 Error Number (if applicable): Product (i.e. SQL*
Loader, Import, etc.): Product Version: solaris upgrade I need to move a server
from solaris 5.8 to 5.9. Does this
require a new oracle 9.2.0 ee server install or relink or nothing at all? Thanks
. ------------------------------------------------------------------------------
-From: Samir Saad 21-Apr-05 03:28 Subject: Re : solaris upgrade You must relink
even if you find that the databases came up after Solaris upgrade and they seem
fine. As for the existing Oracle installations, they will all be fine. Samir. --
-----------------------------------------------------------------------------Fro
m: Oracle, soumya anand 21-Apr-05 10:59 Subject: Re : solaris upgrade Hello Ray,
As rightly pointed out by Samir, after an OS upgrade it is sufficient to relink the ex
ecutables. Regards, Soumya Note: troubles after relink: ------------------------
---If you see on AIX something that resembles the following: P522:/home/oracle $
lsnrctl exec(): 0509-036 Cannot load program lsnrctl because of the following er
rors: 0509-130 Symbol resolution failed for /usr/lib/libc.a[aio_64.o] because: 0
509-136 Symbol kaio_rdwr64 (number 0) is not exported from dependent module /uni
x. 0509-136 Symbol listio64 (number 1) is not exported from dependent module /un
ix. 0509-136 Symbol acancel64 (number 2) is not exported from dependent module /
unix. 0509-136 Symbol iosuspend64 (number 3) is not exported from dependent modu
le /unix. 0509-136 Symbol aio_nwait (number 4) is not exported from dependent mo
dule /unix. 0509-150 Dependent module libc.a(aio_64.o) could not be loaded. 0509
-026 System error: Cannot run a file that does not have a valid format. 0509-192
Examine .loader section symbols with the 'dump -Tv' command. If this occurs, yo
u have asynchronous I/O turned off. To turn on asynchronous I/O:
Run smitty chgaio and set STATE to be configured at system restart from defined
to available. Press Enter. Do one of the following: Restart your system. Run smi
tty aio and move the cursor to Configure defined Asynchronous I/O. Then press En
ter.
trace: -----truss -aef -o /tmp/trace svrmgrl To trace what a Unix process is doi
ng enter: truss -rall -wall -p <PID> truss -p $ lsnrctl dbsnmp_start NOTE: The "
truss" command works on SUN and Sequent. Use "tusc" on HP-UX, "strace" on Linux,
"trace" on SCO Unix or call your system administrator to find the equivalent co
mmand on your system. Monitor your Unix system: Logfiles: --------Unix message f
iles record all system problems like disk errors, swap errors, NFS problems, etc
. Monitor the following files on your system to detect system problems: tail -f
/var/adm/SYSLOG tail -f /var/adm/messages tail -f /var/log/syslog
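Besides the Unix message files, the Oracle alert log is usually monitored in the same way; a sketch, assuming the
background_dump_dest layout used elsewhere in this document:

tail -f /dbs01/app/oracle/admin/AMI_PRD/bdump/alert_AMI_PRD.log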
===============
10. CONSTRAINTS:
===============

10.1 index owner and table owner information: DBA_INDEXES
----------------------------------------------------------
set linesize 100

SELECT DISTINCT
  substr(owner, 1, 10)            as INDEX_OWNER,
  substr(index_name, 1, 40)       as INDEX_NAME,
  substr(tablespace_name, 1, 40)  as TABLE_SPACE,
  substr(index_type, 1, 10)       as INDEX_TYPE,
  substr(table_owner, 1, 10)      as TABLE_OWNER,
  substr(table_name, 1, 40)       as TABLE_NAME,
  BLEVEL, NUM_ROWS, STATUS
FROM DBA_INDEXES
order by INDEX_OWNER;

SELECT DISTINCT
  substr(owner, 1, 10)            as INDEX_OWNER,
  substr(index_name, 1, 40)       as INDEX_NAME,
  substr(index_type, 1, 10)       as INDEX_TYPE,
  substr(table_owner, 1, 10)      as TABLE_OWNER,
  substr(table_name, 1, 40)       as TABLE_NAME
FROM DBA_INDEXES
WHERE table_name='HEAT_CUSTOMER';

SELECT
  substr(owner, 1, 10)            as INDEX_OWNER,
  substr(index_name, 1, 40)       as INDEX_NAME,
  substr(index_type, 1, 10)       as INDEX_TYPE,
  substr(table_owner, 1, 10)      as TABLE_OWNER,
  substr(table_name, 1, 40)       as TABLE_NAME
FROM DBA_INDEXES
WHERE owner<>table_owner;
10.2 PK and FK constraint relations: ---------------------------------SELECT c.co
nstraint_type as TYPE, SUBSTR(c.table_name, 1, 40) as TABLE_NAME, SUBSTR(c.const
raint_name, 1, 40) as CONSTRAINT_NAME, SUBSTR(c.r_constraint_name, 1, 40) as REF
_KEY, SUBSTR(b.column_name, 1, 40) as COLUMN_NAME FROM DBA_CONSTRAINTS c, DBA_CO
NS_COLUMNS b WHERE c.constraint_name=b.constraint_name AND c.OWNER in ('TRIDION_
CM','TCMLOGDBUSER','VPOUSERDB') AND c.constraint_type in ('P', 'R', 'U'); SELECT
c.constraint_type as TYPE, SUBSTR(c.table_name, 1, 40) as TABLE_NAME, SUBSTR(c.
constraint_name, 1, 40) as CONSTRAINT_NAME, SUBSTR(c.r_constraint_name, 1, 40) a
s REF_KEY, SUBSTR(b.column_name, 1, 40) as COLUMN_NAME FROM DBA_CONSTRAINTS c, D
BA_CONS_COLUMNS b WHERE c.constraint_name=b.constraint_name AND c.OWNER='RM_LIVE
' AND c.constraint_type in ('P', 'R', 'U');

SELECT distinct
  c.constraint_type                   as TYPE,
  SUBSTR(c.table_name, 1, 40)         as TABLE_NAME,
  SUBSTR(c.constraint_name, 1, 40)    as CONSTRAINT_NAME,
  SUBSTR(c.r_constraint_name, 1, 40)  as REF_KEY
FROM DBA_CONSTRAINTS c, DBA_CONS_COLUMNS b
WHERE c.constraint_name=b.constraint_name
AND c.OWNER='RM_LIVE' AND c.constraint_type ='R';
------
----------------------------------------------------------------create table ref
tables (TYPE varchar2(32), TABLE_NAME varchar2(40), CONSTRAINT_NAME varchar2(40)
, REF_KEY varchar2(40), REF_TABLE varchar2(40));
insert into reftables (type,table_name,constraint_name,ref_key) SELECT distinct
c.constraint_type as TYPE, SUBSTR(c.table_name, 1, 40) as TABLE_NAME, SUBSTR(c.c
onstraint_name, 1, 40) as CONSTRAINT_NAME, SUBSTR(c.r_constraint_name, 1, 40) as
REF_KEY FROM DBA_CONSTRAINTS c, DBA_CONS_COLUMNS b WHERE c.constraint_name=b.co
nstraint_name AND c.OWNER='RM_LIVE' AND c.constraint_type ='R'; update reftables
set REF_TABLE=(select distinct table_name from dba_cons_columns where owner='RM
_LIVE' and CONSTRAINT_NAME=REF_KEY); -------------------------------------------
--------------------------SELECT c.constraint_type as TYPE, SUBSTR(c.table_name,
1, 40) as TABLE_NAME, SUBSTR(c.constraint_name, 1, 40) as CONSTRAINT_NAME, SUBS
TR(c.r_constraint_name, 1, 40) as REF_KEY FROM DBA_CONSTRAINTS c, DBA_CONS_COLUM
NS b WHERE c.constraint_name=b.constraint_name AND c.OWNER='RM_LIVE' AND c.const
raint_type ='R'; SELECT c.constraint_type as TYPE, SUBSTR(c.table_name, 1, 40) a
s TABLE_NAME, SUBSTR(c.constraint_name, 1, 40) as CONSTRAINT_NAME, SUBSTR(c.r_co
nstraint_name, 1, 40) as REF_KEY, (select b.table_name from dba_cons_columns whe
re b.constraint_name=c.r_constraint_name) as REF_TABLE FROM DBA_CONSTRAINTS c, D
BA_CONS_COLUMNS b WHERE
c.constraint_name=b.constraint_name AND c.OWNER='RM_LIVE' AND c.constraint_type
='R' or c.constraint_type ='P';

select c.constraint_name, c.constraint_type, c.table_name, c.r_constraint_name,
       o.constraint_name, o.column_name
from dba_constraints c, dba_cons_columns o
where c.constraint_name=o.constraint_name
and c.constraint_type='R' and c.owner='BRAINS';

SELECT 'SELECT * FROM '||c.table_name||' W
HERE '||b.column_name||' '|| c.search_condition FROM DBA_CONSTRAINTS c, DBA_CONS
_COLUMNS b WHERE c.constraint_name=b.constraint_name AND c.OWNER='BRAINS' AND c.
constraint_type = 'C'; SELECT 'ALTER TABLE PROJECTS.'||table_name||' enable cons
traint '|| constraint_name||';' FROM DBA_CONSTRAINTS WHERE owner='PROJECTS' AND
constraint_type='R'; SELECT 'ALTER TABLE BRAINS.'||table_name||' disable constra
int '|| constraint_name||';' FROM USER_CONSTRAINTS WHERE owner='BRAINS' AND cons
traint_type='R';
10.3 PK and FK constraint information: DBA_CONSTRAINTS
-------------------------------------------------------
-- owner and all foreign key constraints
SELECT SUBSTR(owner, 1, 10) as
OWNER, constraint_type as TYPE, SUBSTR(table_name, 1, 40) as TABLE_NAME, SUBSTR(
constraint_name, 1, 40) as CONSTRAINT_NAME, SUBSTR(r_constraint_name, 1, 40) as
REF_KEY, DELETE_RULE as DELETE_RULE, status FROM DBA_CONSTRAINTS WHERE OWNER='BR
AINS' AND constraint_type in ('R', 'P', 'U'); SELECT
SUBSTR(owner, 1, 10) as OWNER, constraint_type as TYPE, SUBSTR(table_name, 1, 30
) as TABLE_NAME, SUBSTR(constraint_name, 1, 30) as CONSTRAINT_NAME, SUBSTR(r_con
straint_name, 1, 30) as REF_KEY, DELETE_RULE as DELETE_RULE, status FROM DBA_CON
STRAINTS WHERE OWNER='BRAINS' AND constraint_type in ('R');

-- To determine the owner and all primary key constraints of a given user, on given objects,
-- use the same query with OWNER='desired_owner' AND constraint_type='P'

select owner, CONSTRAINT_NAME, CONSTRAINT_TYPE, TABLE_NAME, R_CONSTRAINT_NAME, STATUS
from dba_constraints
where owner='FIN_VLIEG' and constraint_type in ('P','R','U');
10.4 Find the index belonging to a given constraint: DBA_INDEXES, DBA_CONSTRAINTS
-----------------------------------------------------------
SELECT
  c.constraint_type                  as Type,
  substr(x.index_name, 1, 40)        as INDX_NAME,
  substr(c.constraint_name, 1, 40)   as CONSTRAINT_NAME,
  substr(x.tablespace_name, 1, 40)   as TABLESPACE
FROM DBA_CONSTRAINTS c, DBA_INDEXES x
WHERE c.constraint_name=x.index_name
AND c.constraint_name='UN_DEMO1';

SELECT
  c.constraint_type                  as Type,
  substr(x.index_name, 1, 40)        as INDX_NAME,
  substr(c.constraint_name, 1, 40)   as CONSTRAINT_NAME,
  substr(c.table_name, 1, 40)        as TABLE_NAME,
  substr(c.owner, 1, 10)             as OWNER
FROM DBA_CONSTRAINTS c, DBA_INDEXES x
WHERE c.constraint_name=x.index_name
AND c.owner='JOOPLOC';
10.5 Find the tablespace of a constraint or constraint owner:
--------------------------------------------------------------
SELECT
  substr(s.segment_name, 1, 40)     as Segmentname,
  substr(c.constraint_name, 1, 40)  as Constraintname,
  substr(s.tablespace_name, 1, 40)  as Tablespace,
  substr(s.segment_type, 1, 10)     as Type
FROM DBA_SEGMENTS s, DBA_CONSTRAINTS c
WHERE s.segment_name=c.constraint_name
AND c.owner='PROJECTS';

10.6 Retrieve index create statements:
--------------
---------------------DBA_INDEXES DBA_IND_COLUMNS SELECT substr(i.index_name, 1,
40) as INDEX_NAME, substr(i.index_type, 1, 15) as INDEX_TYPE, substr(i.table_nam
e, 1, 40) as TABLE_NAME, substr(c.index_owner, 1, 10) as INDEX_OWNER, substr(c.c
olumn_name, 1, 40) as COLUMN_NAME, c.column_position as POSITION FROM DBA_INDEXE
S i, DBA_IND_COLUMNS c WHERE i.index_name=c.index_name AND i.owner='SALES';
10.7 Enabling and disabling constraints:
----------------------------------------
-- enable:   alter table tablename enable constraint constraint_name
-- disable:  alter table tablename disable constraint constraint_name
-- example:
ALTER TABLE EMPLOYEE DISABLE CONSTRAINT FK_DEPNO;
ALTER TABLE EMPLOYEE ENABLE CONSTRAINT FK_DEPNO;
-- but this is also possible:
ALTER TABLE DEMO ENABLE PRIMARY KEY;
-- Disable all FK constraints of a schema in one go:
SELECT 'ALTER TABLE MIS_OWNER.'||table_
name||' disable constraint '|| constraint_name||';' FROM DBA_CONSTRAINTS WHERE o
wner='MIS_OWNER' AND constraint_type='R' AND TABLE_NAME LIKE 'MKM%';
SELECT 'ALTER TABLE MIS_OWNER.'||table_name||' enable constraint '|| constraint_
name||';' FROM DBA_CONSTRAINTS WHERE owner='MIS_OWNER' AND constraint_type='R' A
ND TABLE_NAME LIKE 'MKM%';

10.8 Creating a constraint initially disabled:
----------------------------------------------
This can be handy when, for example, loading a table in which duplicate values may occur.

ALTER TABLE CUSTOMERS ADD CONSTRAINT PK_CUST PRIMARY KEY (custid) DISABLE;

If it then turns out that enabling the constraint fails because duplicate records exist,
we can place these duplicate records in the EXCEPTIONS table:
1. create the EXCEPTIONS table:  @ORACLE_HOME\rdbms\admin\utlexcpt.sql
2. enable the constraint:        ALTER TABLE CUSTOMERS ENABLE PRIMARY KEY exceptions INTO EXCEPTIONS;
   The EXCEPTIONS table now contains the duplicate rows.
3. Which duplicate rows:         SELECT c.custid, c.name FROM CUSTOMERS c, EXCEPTIONS s WHERE c.rowid=s.row_id;

10.9 Use of PK and FK constraints:
----------------------------------
10.9.1: Example of normal use with DRI:

create table customers
( custid    number not null,
  custname  varchar(10),
  CONSTRAINT pk_cust PRIMARY KEY (custid)
);

create table contacts
( contactid    number not null,
  custid       number,
  contactname  varchar(10),
  CONSTRAINT pk_contactid PRIMARY KEY (contactid),
  CONSTRAINT fk_cust FOREIGN KEY (custid) REFERENCES customers(custid)
);

With this in place you cannot simply delete a row with a given custid from customers
if a row with the same custid exists in contacts.

10.9.2: Example with ON DELETE CASCADE:

create table contacts
( contactid    number not null,
  custid       number,
  contactname  varchar(10),
  CONSTRAINT pk_contactid PRIMARY KEY (contactid),
  CONSTRAINT fk_cust FOREIGN KEY (custid) REFERENCES customers(custid) ON DELETE CASCADE
);

The clause "ON DELETE SET NULL" can also be used.
Now it is possible to delete a row in customers while a matching custid exists in contacts:
the row in contacts is then deleted as well.

10.10 Procedures for insert, delete:
------------------------------------
As an example on the customers table:

CREATE OR REPLACE PROCEDURE newcustomer (custid NUMBER, custname VARCHAR) IS
BEGIN
  INSERT INTO customers values (custid,custname);
  commit;
END;
/

CREATE OR REPLACE PROCEDURE delcustomer (cust NUMBER) IS
BEGIN
  delete from customers where custid=cust;
  commit;
END;
/

10.11 User data dictionary views:
---------------------------------
We have already seen that for constraint information we mainly consult the views below:
DBA_TABLES, DBA_INDEXES, DBA_CONSTRAINTS, DBA_IND_COLUMNS, DBA_SEGMENTS
These, however, are for the DBA. Ordinary users can query information from the USER_ and ALL_ views.
USER_ : objects in the schema of the user
ALL_  : objects the user has access to

USER_TABLES,       ALL_TABLES
USER_INDEXES,      ALL_INDEXES
USER_CONSTRAINTS,  ALL_CONSTRAINTS
USER_VIEWS,        ALL_VIEWS
USER_SEQUENCES,    ALL_SEQUENCES
USER_CONS_COLUMNS, ALL_CONS_COLUMNS
USER_TAB_COLUMNS,  ALL_TAB_COLUMNS
USER_SOURCE,       ALL_SOURCE

cat, tab, col, dict

10.12 Create and drop index examples:
-------------------------------------
CREATE UNIQUE INDEX HEATCUST0 ON HEATCUST(CUSTTYPE)
TABLESPACE INDEX_SMALL
PCTFREE 10
STORAGE(INITIAL 163840 NEXT 163840 PCTINCREASE 0 );

DROP INDEX indexname

10.13 Check the height of indexes:
----------------------------------
Is an index rebuild necessary ?

SELECT index_name, owner, blevel, decode(blevel,0,'OK BLEVEL',1,'OK BL
EVEL', 2,'OK BLEVEL',3,'OK BLEVEL',4,'OK BLEVEL','BLEVEL HIGH') OK FROM dba_inde
xes WHERE owner='SALES' and blevel > 3; 10.14 Make indexes unusable (before a la
rge dataload): ------------------------------------------------------ Make Index
es unusable alter index HEAT_CUSTOMER_DISCON_DATE unusable; alter index HEAT_CUS
TOMER_EMAIL_ADDRESS unusable; alter index HEAT_CUSTOMER_POSTAL_CODE unusable; --
Enable Indexes again
alter index HEAT_CUSTOMER_DISCON_DATE rebuild; alter index HEAT_CUSTOMER_EMAIL_A
DDRESS rebuild; alter index HEAT_CUSTOMER_POSTAL_CODE rebuild;
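When loading into a table whose indexes have been made unusable (before rebuilding them as above), the loading session
usually also has to be told to ignore the unusable indexes; a sketch (in 10g skip_unusable_indexes is an instance
parameter that defaults to TRUE, in 9i it can be set at session level):

ALTER SESSION SET SKIP_UNUSABLE_INDEXES=TRUE;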
================================ 11. DBMS_JOB and scheduled Jobs: ==============
================== Used in Oracle 9i and lower versions. 11.1 SNP background pro
cess:
---------------------------
Scheduled jobs are possible when the SNP background process is activated. This can be done via the init.ora:
JOB_QUEUE_PROCESSES=1    number of SNP processes (SNP0, SNP1), max 36, used for replication and job queues
JOB_QUEUE_INTERVAL=60    check interval

11.2 DBMS_JOB package:
---------------------
DBMS
_JOB.SUBMIT() DBMS_JOB.REMOVE() DBMS_JOB.CHANGE() DBMS_JOB.WHAT() DBMS_JOB.NEXT_
DATE() DBMS_JOB.INTERVAL() DBMS_JOB.RUN() 11.2.1 DBMS_JOB.SUBMIT() -------------
---------There are actually two versions SUBMIT() and ISUBMIT() PROCEDURE DBMS_J
OB.SUBMIT (job OUT BINARY_INTEGER, what IN VARCHAR2, next_date IN DATE DEFAULT S
YSDATE, interval IN VARCHAR2 DEFAULT 'NULL', no_parse IN BOOLEAN DEFAULT FALSE);
PROCEDURE DBMS_JOB.ISUBMIT (job IN BINARY_INTEGER, what IN VARCHAR2, next_date
in DATE DEFAULT SYSDATE interval IN VARCHAR2 DEFAULT 'NULL', no_parse in BOOLEAN
DEFAULT FALSE); The difference between ISUBMIT and SUBMIT is that ISUBMIT speci
fies a job number,
whereas SUBMIT returns a job number generated by the DBMS_JOB package Look for s
ubmitted jobs: -----------------------select job, last_date, next_date, interval
, substr(what, 1, 50) from dba_jobs; Submit a job: -------------The jobnumber (i
f you use SUBMIT() ) will be derived from the sequence SYS.JOBSEQ Suppose you ha
ve the following procedure: create or replace procedure test1 is begin dbms_outp
ut.put_line('Hallo grapjas.'); end; / Example 1: ---------variable jobno number;
begin DBMS_JOB.SUBMIT(:jobno, 'test1;', Sysdate, 'Sysdate+1'); commit; end; / D
ECLARE jobno NUMBER; BEGIN DBMS_JOB.SUBMIT (job => jobno ,what => 'test1;' ,next
_date => SYSDATE ,interval => 'SYSDATE+1/24'); COMMIT; END; / So suppose you sub
mit the above job at 08.15h. Then the next, and first time, that the job will ru
n is at 09.15h. Example 2: ---------variable jobno number; begin DBMS_JOB.SUBMIT
(:jobno, 'test1;', LAST_DAY(SYSDATE+1), 'LAST_DAY(ADD_MONTHS(LAST_DAY(SYSDATE+1)
,1))'); commit; end;
/ Example 3: ---------VARIABLE jobno NUMBER BEGIN DBMS_JOB.SUBMIT(:jobno, 'DBMS_
DDL.ANALYZE_OBJECT(''TABLE'', ''CHARLIE'', ''X1'', ''ESTIMATE'', NULL, 50);', SY
SDATE, 'SYSDATE + 1'); COMMIT; END; / PRINT jobno JOBNO ---------14144 Example 4
: this job is scheduled every hour ------------------------------------------DEC
LARE jobno NUMBER; BEGIN DBMS_JOB.SUBMIT (job => jobno ,what => 'begin space_log
ger; end;' ,next_date => SYSDATE ,interval => 'SYSDATE+1/24'); COMMIT; END; / Ex
ample 5: Examples of intervals
-------------------------------
'SYSDATE + 7'                                                  : exactly seven days from the last execution
'SYSDATE + 1/48'                                               : every half hour
'NEXT_DAY(TRUNC(SYSDATE), ''MONDAY'') + 15/24'                 : every Monday at 3PM
'NEXT_DAY(ADD_MONTHS(TRUNC(SYSDATE, ''Q''), 3), ''THURSDAY'')' : first Thursday of each quarter
'TRUNC(SYSDATE + 1)'                                           : every day at 12:00 midnight
'TRUNC(SYSDATE + 1) + 8/24'                                    : every day at 8:00 a.m.
'NEXT_DAY(TRUNC(SYSDATE ), "TUESDAY" ) + 12/24'                : every Tuesday at 12:00 noon
'TRUNC(LAST_DAY(SYSDATE ) + 1)'                                : first day of the month at midnight
'TRUNC(ADD_MONTHS(SYSDATE + 2/24, 3 ), 'Q' ) - 1/24'           : last day of the quarter at 11:00 p.m.
'TRUNC(LEAST(NEXT_DAY(SYSDATE, "MONDAY"), NEXT_DAY(SYSDATE, "WEDNESDAY"),
       NEXT_DAY(SYSDATE, "FRIDAY") ) ) + 9/24'                 : every Monday, Wednesday, and Friday at 9:00 a.m.
--------------------------------------------------------------------------------
Example 6: ---------You have this testprocedure create or replace procedure test
1 as id_next number; begin select max(id) into id_next from iftest; insert into
iftest (id) values (id_next+1); commit; end; / Suppose on 16 July at 9:26h you d
o: variable jobno number; begin DBMS_JOB.SUBMIT(:jobno, 'test1;', LAST_DAY(SYSDA
TE+1), 'LAST_DAY(ADD_MONTHS(LAST_DAY(SYSDATE+1),1))'); commit; end; / select job
, to_char(this_date,'DD-MM-YYYY;HH24:MI'), to_char(next_date, 'DD-MM-YYYY;HH24:MI
') from dba_jobs; JOB TO_CHAR(THIS_DAT TO_CHAR(NEXT_DAT ---------- -------------
--- ---------------25 31-07-2004;09:26 Suppose on 16 July at 9:38h you do: varia
ble jobno number; begin DBMS_JOB.SUBMIT(:jobno, 'test1;', LAST_DAY(SYSDATE)+1, '
LAST_DAY(ADD_MONTHS(LAST_DAY(SYSDATE+1),1))'); commit; end; / JOB TO_CHAR(THIS_D
AT TO_CHAR(NEXT_DAT ---------- ---------------- ---------------25 31-07-2004;09:
26 26 01-08-2004;09:38
Suppose on 16 July at 9:41h you do: variable jobno number; begin DBMS_JOB.SUBMIT
(:jobno, 'test1;', SYSDATE, 'LAST_DAY(ADD_MONTHS(LAST_DAY(SYSDATE+1),1))'); comm
it; end; / JOB TO_CHAR(THIS_DAT TO_CHAR(NEXT_DAT ---------- ---------------- ---
------------27 31-08-2004;09:41 25 31-07-2004;09:26 26 01-08-2004;09:39
Suppose on 16 July at 9:46h you do: variable jobno number; begin DBMS_JOB.SUBMIT
(:jobno, 'test1;', SYSDATE, 'TRUNC(LAST_DAY(SYSDATE + 1/24 ) )'); commit; end; /
JOB TO_CHAR(THIS_DAT TO_CHAR(NEXT_DAT --------- ---------------- --------------
-27 31-08-2004;09:41 28 31-07-2004;00:00 25 31-07-2004;09:26 29 31-07-2004;00:00
-------------------------------------------------------------------------------
-----variable jobno number; begin DBMS_JOB.SUBMIT(:jobno, 'test1;', null, 'TRUNC
(LAST_DAY(SYSDATE ) + 1)' ); commit; end; / In the job definition, use two singl
e quotation marks around strings. Always include a semicolon at the end of the j
ob definition.
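A small sketch of that quoting rule (the gather_table_stats call and the schema/table names are only illustrative):

variable jobno number;
begin
  DBMS_JOB.SUBMIT(:jobno, 'dbms_stats.gather_table_stats(''SALES'',''CUSTOMERS'');', SYSDATE, 'SYSDATE+1');
  commit;
end;
/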
11.2.2 DBMS_JOB.REMOVE() -----------------------Removing a Job FROM the Job Queu
e To remove a job FROM the job queue, use the REMOVE procedure in the DBMS_JOB p
ackage.
The following statements remove job number 14144 FROM the job queue: BEGIN DBMS_
JOB.REMOVE(14144); END; / 11.2.3 DBMS_JOB.CHANGE() -----------------------In thi
s example, job number 14144 is altered to execute every three days: BEGIN DBMS_J
OB.CHANGE(14144, NULL, NULL, 'SYSDATE + 3'); END; / If you specify NULL for WHAT, NE
XT_DATE, or INTERVAL when you call the procedure DBMS_JOB.CHANGE, the current va
lue remains unchanged. 11.2.4 DBMS_JOB.WHAT() ---------------------You can alter
the definition of a job by calling the DBMS_JOB.WHAT procedure. The following e
xample changes the definition for job number 14144: BEGIN DBMS_JOB.WHAT(14144, '
DBMS_DDL.ANALYZE_OBJECT(''TABLE'', ''HR'', ''DEPARTMENTS'', ''ESTIMATE'', NULL,
50);'); END; / 11.2.5 DBMS_JOB.NEXT_DATE() --------------------------You can alt
er the next execution time for a job by calling the DBMS_JOB.NEXT_DATE procedure
, as shown in the following example: BEGIN DBMS_JOB.NEXT_DATE(14144, SYSDATE + 4
); END; / 11.2.6 DBMS_JOB.INTERVAL(): --------------------------The following ex
ample illustrates changing the execution interval for a job by calling the DBMS_
JOB.INTERVAL procedure:

execute dbms_job.interval(<job number>,'SYSDATE+(1/48)');

BEGIN
DBMS_JOB.INTERVAL(14144, 'NULL');
END;
/
In this last case (interval 'NULL'), the job will not run again after it successfully executes and it will be deleted FROM the job queue.

11.2.7 DBMS_JOB.BROKEN():
------------------------
A job is labeled a
s either broken or not broken. Oracle does not attempt to run broken jobs. Examp
le: BEGIN DBMS_JOB.BROKEN(10, TRUE); END; / Example: The following example marks
job 14144 as not broken and sets its next execution date to the following Monda
y: BEGIN DBMS_JOB.BROKEN(14144, FALSE, NEXT_DAY(SYSDATE, 'MONDAY')); END; / Exam
ple: exec DBMS_JOB.BROKEN( V_JOB_ID, true); Example: select JOB into V_JOB_ID fr
om DBA_JOBS where WHAT like '%SONERA%'; DBMS_SNAPSHOT.REFRESH( 'SONERA', 'C'); D
BMS_JOB.BROKEN( V_JOB_ID, false);

fix broken jobs:
---------------
/* Filename on companion disk: job5.sql */
CREATE OR REPLACE PROCEDURE job_fixer
AS
/*
|| calls DBMS_JOB.BROKEN to try and set
|| any broken jobs to unbroken
*/
/* cursor selects user's broken jobs */
CURSOR broken_jobs_cur
IS
SELECT job
FROM user_jobs
WHERE broken = 'Y';
BEGIN
FOR job_rec IN broken_jobs_cur
LOOP
DBMS_JOB.BROKEN(job_rec.job,FALSE);
END LOOP;
END job_fixer;

11.2.8 DBMS_JOB.RUN():
--------------
-------BEGIN DBMS_JOB.RUN(14144); END; / 11.3 DBMS_SCHEDULER: ------------------
-Used in Oracle 10g. BEGIN DBMS_SCHEDULER.create_job ( job_name => 'test_self_co
ntained_job', job_type => 'PLSQL_BLOCK', job_action => 'BEGIN DBMS_STATS.gather_
schema_stats(''JOHN''); END;', start_date => SYSTIMESTAMP, repeat_interval => 'f
req=hourly; byminute=0', end_date => NULL, enabled => TRUE, comments => 'Job cre
ated using the CREATE JOB procedure.'); End; / BEGIN DBMS_SCHEDULER.run_job (job
_name => 'TEST_PROGRAM_SCHEDULE_JOB', use_current_session => FALSE); END; / BEGI
N DBMS_SCHEDULER.stop_job (job_name => 'TEST_PROGRAM_SCHEDULE_JOB'); END; / Jobs
can be deleted using the DROP_JOB procedure: BEGIN DBMS_SCHEDULER.drop_job (job
_name => 'TEST_PROGRAM_SCHEDULE_JOB'); DBMS_SCHEDULER.drop_job (job_name => 'tes
t_self_contained_job'); END; /
Oracle 10g: ----------DBMS_JOB has been replaced by DBMS_SCHEDULER. Views: V_$SCH
EDULER_RUNNING_JOBS GV_$SCHEDULER_RUNNING_JOBS DBA_QUEUE_SCHEDULES USER_QUEUE_SC
HEDULES _DEFSCHEDULE DEFSCHEDULE AQ$SCHEDULER$_JOBQTAB_S AQ$_SCHEDULER$_JOBQTAB_
F AQ$SCHEDULER$_JOBQTAB AQ$SCHEDULER$_JOBQTAB_R AQ$SCHEDULER$_EVENT_QTAB_S AQ$_S
CHEDULER$_EVENT_QTAB_F AQ$SCHEDULER$_EVENT_QTAB AQ$SCHEDULER$_EVENT_QTAB_R DBA_S
CHEDULER_PROGRAMS USER_SCHEDULER_PROGRAMS ALL_SCHEDULER_PROGRAMS DBA_SCHEDULER_J
OBS USER_SCHEDULER_JOBS ALL_SCHEDULER_JOBS DBA_SCHEDULER_JOB_CLASSES ALL_SCHEDUL
ER_JOB_CLASSES DBA_SCHEDULER_WINDOWS ALL_SCHEDULER_WINDOWS DBA_SCHEDULER_PROGRAM
_ARGS USER_SCHEDULER_PROGRAM_ARGS ALL_SCHEDULER_PROGRAM_ARGS DBA_SCHEDULER_JOB_A
RGS USER_SCHEDULER_JOB_ARGS ALL_SCHEDULER_JOB_ARGS DBA_SCHEDULER_JOB_LOG DBA_SCH
EDULER_JOB_RUN_DETAILS USER_SCHEDULER_JOB_LOG USER_SCHEDULER_JOB_RUN_DETAILS ALL
_SCHEDULER_JOB_LOG ALL_SCHEDULER_JOB_RUN_DETAILS DBA_SCHEDULER_WINDOW_LOG DBA_SC
HEDULER_WINDOW_DETAILS ALL_SCHEDULER_WINDOW_LOG ALL_SCHEDULER_WINDOW_DETAILS DBA
_SCHEDULER_WINDOW_GROUPS ALL_SCHEDULER_WINDOW_GROUPS DBA_SCHEDULER_WINGROUP_MEMB
ERS ALL_SCHEDULER_WINGROUP_MEMBERS DBA_SCHEDULER_SCHEDULES USER_SCHEDULER_SCHEDU
LES ALL_SCHEDULER_SCHEDULES DBA_SCHEDULER_RUNNING_JOBS ALL_SCHEDULER_RUNNING_JOB
S USER_SCHEDULER_RUNNING_JOBS DBA_SCHEDULER_GLOBAL_ATTRIBUTE
ALL_SCHEDULER_GLOBAL_ATTRIBUTE DBA_SCHEDULER_CHAINS USER_SCHEDULER_CHAINS ALL_SC
HEDULER_CHAINS DBA_SCHEDULER_CHAIN_RULES USER_SCHEDULER_CHAIN_RULES ALL_SCHEDULE
R_CHAIN_RULES DBA_SCHEDULER_CHAIN_STEPS USER_SCHEDULER_CHAIN_STEPS ALL_SCHEDULER
_CHAIN_STEPS DBA_SCHEDULER_RUNNING_CHAINS USER_SCHEDULER_RUNNING_CHAINS ALL_SCHE
DULER_RUNNING_CHAINS
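To check the status and history of 10g scheduler jobs, queries along these lines can be used (assuming the standard
columns of DBA_SCHEDULER_JOBS and DBA_SCHEDULER_JOB_RUN_DETAILS):

SELECT job_name, enabled, state, last_start_date, next_run_date
FROM dba_scheduler_jobs;

SELECT job_name, status, actual_start_date, run_duration
FROM dba_scheduler_job_run_details
ORDER BY actual_start_date;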
==================
12. Net8 / SQLNet:
==================
In, for example, sql*plus you enter:
----------------
Username:    system
Password:    manager
Host String: XXX
----------------
NET8 on the client looks in TNSNAMES.ORA for the first entry XXX= (description.. protocol..host...port.. SERVICE_NAME=Y)
XXX is really an alias and is therefore arbitrary, although it usually matches the instance name
or database name you want to connect to. It could even be "pipo".
If XXX is not found, the client reports: ORA-12154 TNS: could not resolve SERVICE NAME
Next, NET8 uses the connect descriptor Y to contact the listener on the server that listens for Y.
If Y is not what the listener expects, the listener reports to the client:
TNS: listener could not resolve SERVICE_NAME in connect descriptor

12.1 sqlnet.ora example:
------------------------
SQLNET.AUTHENTICATION_SERVICES= (NTS)
NAMES.DIRECTORY_PATH= (TNSNAMES)

12.2 tnsnames.ora examples:
------------------------------
example 1.
DB1=
 (DESCRIPTION=
  (ADDRESS_LIST=
   (ADDRESS=(PROTOCOL=TCP)(HOST=STARBOSS)(PORT=1521))
  )
  (CONNECT_DATA=
   (SERVICE_NAME=DB1.world)
  )
 )

example 2.
DB1.world=
 (DESCRIPTION=
  (ADDRESS_LIST=
   (ADDRESS=(COMMUNITY=tcp.world)(PROTOCOL=TCP)(HOST=STARBOSS)(PORT=1521))
  )
  (CONNECT_DATA=(SID=DB1))
 )
DB2.world= (... )
DB3.world= (... )
etc..

example 3.
RCAT =
 (DESCRIPTION =
  (ADDRESS_LIST =
   (ADDRESS =
(PROTOCOL = TCP)(HOST = w2ktest)(PORT = 1521)) ) (CONNECT_DATA = (SERVICE_NAME
= rcat.antapex) ) )
12.3 listener.ora examples: -----------------------------Example 1: ---------
LISTENER= (DESCRIPTION= (ADDRESS=(PROTOCOL=TCP)(HOST=STARBOSS)(PORT=1521)) )
SID_LIST_LISTENER= (SID_LIST= (SID_DESC= (GLOBAL_DBNAME=DB1.world) (ORACLE_HOME=
D:\oracle8i) (SID_NAME=DB1) ) ) Example 2: ---------############## WPRD ########
############################################# LOG_DIRECTORY_WPRD = /opt/oracle/a
dmin/WPRD/network/log LOG_FILE_WPRD = WPRD.log TRACE_LEVEL_WPRD = OFF #ADMIN TRA
CE_DIRECTORY_WPRD = /opt/oracle/admin/WPRD/network/trace TRACE_FILE_WPRD = WPRD.
trc WPRD = (DESCRIPTION_LIST = (DESCRIPTION = (ADDRESS_LIST=(ADDRESS=(PROTOCOL=T
CP)(HOST=blnl01)(PORT=1521))))) SID_LIST_WPRD = (SID_LIST = (SID_DESC = (GLOBAL_
DBNAME = WPRD) (ORACLE_HOME = /opt/oracle/product/8.1.6) (SID_NAME = WPRD))) ###
########### WTST ##################################################### LOG_DIREC
TORY_WTST = /opt/oracle/admin/WTST/network/log LOG_FILE_WTST = WTST.log TRACE_LE
VEL_WTST = OFF #ADMIN TRACE_DIRECTORY_WTST = /opt/oracle/admin/WTST/network/trac
e TRACE_FILE_WTST = WTST.trc WTST = (DESCRIPTION_LIST = (DESCRIPTION = (ADDRESS_
LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=blnl01)(PORT=1522))))) SID_LIST_WTST = (SID_LI
ST = (SID_DESC = (GLOBAL_DBNAME = WTST) (ORACLE_HOME = /opt/oracle/product/8.1.6
) (SID_NAME = WTST))) Example 3: ---------# LISTENER.ORA Network Configuration F
ile: D:\oracle\ora901\NETWORK\ADMIN\listener.ora
# Generated by Oracle configuration tools. LISTENER = (DESCRIPTION_LIST = (DESCR
IPTION = (ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC0)) ) (DESCRIPTION = (ADDRESS
= (PROTOCOL = TCP)(HOST = missrv)(PORT = 1521)) ) ) SID_LIST_LISTENER = (SID_LIS
T = (SID_DESC = (SID_NAME = PLSExtProc) (ORACLE_HOME = D:\oracle\ora901) (PROGRA
M = extproc) ) (SID_DESC = (GLOBAL_DBNAME = o901) (ORACLE_HOME = D:\oracle\ora90
1) (SID_NAME = o901) ) (SID_DESC = (SID_NAME = MAST) (ORACLE_HOME = D:\oracle\or
a901) (PROGRAM = hsodbc) ) (SID_DESC = (SID_NAME = NATOPS) (ORACLE_HOME = D:\ora
cle\ora901) (PROGRAM = hsodbc) ) (SID_DESC = (SID_NAME = VRF) (ORACLE_HOME = D:\
oracle\ora901) (PROGRAM = hsodbc) ) (SID_DESC = (SID_NAME = DRILLS) (ORACLE_HOME
= D:\oracle\ora901) (PROGRAM = hsodbc) ) (SID_DESC = (SID_NAME = DDS) (ORACLE_H
OME = D:\oracle\ora901) (PROGRAM = hsodbc) ) (SID_DESC = (SID_NAME = IVP) (ORACL
E_HOME = D:\oracle\ora901) (PROGRAM = hsodbc) (SID_DESC = (SID_NAME = ALBERT) (O
RACLE_HOME = D:\oracle\ora901) (PROGRAM = hsodbc) )
) 12.4: CONNECT TIME FAILOVER: ---------------------------The connect-time failo
ver feature allows clients to connect to another listener if the initial connect
ion to the first listener fails. Multiple listener locations are specified in th
e client's tnsnames.ora file. If a connection attempt to the first listener fails
, a connection request to the next listener in the list is attempted. This featu
re increases the availablity of the Oracle service should a listener location be
unavailable. Here is an example of what a tnsnames.ora file looks like with con
nect-time failover enabled: ORCL= (DESCRIPTION= (ADDRESS_LIST= (ADDRESS=(PROTOCO
L=TCP)(HOST=DBPROD)(PORT=1521)) (ADDRESS=(PROTOCOL=TCP)(HOST=DBFAIL)(PORT=1521))
) (CONNECT_DATA=(SERVICE_NAME=PROD)(SERVER=DEDICATED) ) ) 12.5: CLIENT LOAD BAL
ANCING: ---------------------------Client Load Balancing is a feature that allow
s clients to randomly select from a list of listeners. Oracle Net moves through
the list of listeners and balances the load of connection requests across the a
vailable listeners. Here is an example of the tnsnames.ora entry that allows for
load balancing: ORCL= (DESCRIPTION= (LOAD_BALANCE=ON) (ADDRESS_LIST= (ADDRESS=(
PROTOCOL=TCP)(HOST=MWEISHAN-DELL)(PORT=1522)) (ADDRESS=(PROTOCOL=TCP)(HOST=MWEIS
HAN-DELL)(PORT=1521)) ) (CONNECT_DATA=(SERVICE_NAME=PROD)(SERVER=DEDICATED) ) )
Notice the additional parameter of LOAD_BALANCE. This enables load balancing bet
ween the two listener locations specified. 12.6: ORACLE SHARED SERVER:
--------------------------With the dedicated Server, each server process has a P
GA, outside the SGA When Shared Server is used, the user program area's are in t
he SGA in the large pool. With a few init.ora parameters, you can configure Shar
ed Server. 1. DISPATCHERS: The DISPATCHERS parameter defines the number of dispa
tchers that should start when the instance is started. For example, if you want
to configure 3 TCP/IP dispatchers and to IPC dispatchers, you set the parameters
as follows: DISPATCHERS="(PRO=TCP)(DIS=3)(PRO=IPC)(DIS=2)" For example, if you
have 500 concurrent TCP/IP connections, and you want each dispatcher to manage 5
0 concurrent connections, you need 10 dispatchers. You set your DISPATCHERS para
meter as follows: DISPATCHERS="(PRO=TCP)(DIS=10)" 2. SHARED_SERVERS: The Shared_S
ervers parameter specifies the minimum number of Shared Servers to start and ret
ain when the Oracle instance is started. View information about dispatchers and
shared servers with the following commands and queries: lsnrctl services SELECT
name, status, messages, idle, busy, bytes, breaks FROM v$dispatcher; 12.7: Keepi
ng Oracle connections alive through a Firewall: --------------------------------
-------------------------Implementing keep alive packets: SQLNET.INBOUND_CONNECT
_TIMEOUT
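A minimal server-side sqlnet.ora sketch for this; SQLNET.EXPIRE_TIME is the usual dead-connection-detection / keep-alive
parameter and is often combined with the inbound timeout (the values shown are only illustrative):

SQLNET.EXPIRE_TIME = 10                 # probe idle sessions every 10 minutes, which also generates traffic through the firewall
SQLNET.INBOUND_CONNECT_TIMEOUT = 120    # seconds a client gets to complete the connect/authentication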
Notes: ======= Note 1: -------
Doc ID: Subject: Type: Status: PURPOSE -------
Note:274130.1 Content Type: TEXT/PLAIN SHARED SERVER CONFIGURATION Creation Date
: 25-MAY-2004 BULLETIN Last Revision Date: 24-JUN-2004 PUBLISHED
This article discusses the configuration of shared servers on 9i DB. SHARE
D SERVER CONFIGURATION: =========================== 1. Add the parameter shared_
servers in the init.ora SHARED_SERVERS specifies the number of server processes
that you want to create when an instance is started up. If system load decreases
, this minimum number of servers is maintained. Therefore, you should take care
not to set SHARED_SERVERS too high at system startup. Parameter type Parameter c
lass Integer Dynamic: ALTER SYSTEM
2. Add the parameter DISPATCHERS in the init.ora DISPATCHERS configures dispatch
er processes in the shared server architecture. USAGE: ----DISPATCHERS = "(PROTO
COL=TCP)(DISPATCHERS=3)" 3. Save the init.ora file. 4. Change the connect string
in tnsnames.ora from ORACLE.IDC.ORACLE.COM = (DESCRIPTION = (ADDRESS_LIST = (AD
DRESS = (PROTOCOL = TCP)(HOST = xyzac)(PORT = 1521)) ) (CONNECT_DATA = (SERVER =
DEDICATED) (SERVICE_NAME = oracle) ) ) to ORACLE.IDC.ORACLE.COM = (DESCRIPTION
= (ADDRESS_LIST = (ADDRESS = (PROTOCOL = TCP)(HOST = xyzac)(PORT = 1521)) ) (CON
NECT_DATA =
) )
(SERVER = SHARED) (SERVICE_NAME = Oracle)
Change SERVER=SHARED. 5. Shutdown and startup the database. 6. Make a new connec
tion to database other than SYSDBA. (NOTE: SYSDBA will always acquire dedicated
connection by default.) 7. Check whether the connection is done through server s
erver. > Select server from v$session. SERVER --------DEDICATED DEDICATED DEDICA
TED SHARED DEDICATED NOTE: ==== The following parameters are optional (if not sp
ecified, Oracle selects defaults): MAX_DISPATCHERS: =============== Specifies th
e maximum number of dispatcher processes that can run simultaneously. SHARED_SER
VERS: ============== Specifies the number of shared server processes created whe
n an instance is started up. MAX_SHARED_SERVERS: ================== Specifies th
e maximum number of shared server processes that can run simultaneously. CIRCUIT
S: ======== Specifies the total number of virtual circuits that are available fo
r inbound and outbound network sessions. SHARED_SERVER_SESSIONS: ===============
======= Specifies the total number of shared server user sessions to allow. Sett
ing this parameter enables you to reserve user sessions for dedicated servers.
Other parameters affected by shared server that may require adjustment: LARGE_PO
OL_SIZE: =============== Specifies the size in bytes of the large pool allocatio
n heap. Shared server may force the default value to be set too high, causing pe
rformance problems or problems starting the database. SESSIONS: ======== Specifi
es the maximum number of sessions that can be created in the system. May need to
be adjusted for shared server.
12.8 password for the listener: ------------------------------Note 1: LSNRCTL> s
et password <password> where <password> is the password you want to use. To chan
ge a password, use "Change_Password" You can also designate a password when you
configure the listener with the Net8 Assistant. These passwords are stored in th
e listener.ora file and although they will not show in the Net8 Assistant, they
are readable in the listener.ora file. Note 2: The password can be set either by
specifying it through the command CHANGE_PASSWORD, or through a parameter in th
e listener.ora file. We saw how to do that through the CHANGE_PASSWORD command e
arlier. If the password is changed this way, it should not be specified in the l
istener.ora file. The password is not displayed anywhere. When supplying the pas
sword in the listener control utility, you must supply it at the Password: promp
t as shown above. You cannot specify the password in one line as shown below. LS
NRCTL> set password t0p53cr3t LSNRCTL> stop Connecting to (DESCRIPTION=(ADDRESS=
(PROTOCOL=IPC)(KEY=EXTPROC))) TNS-01169: The listener has not recognized the pas
sword LSNRCTL> Note 3: more correct method would be to password protect the list
ener functions. See the net8 admin guide for info but in short -- you can: LSNRC
TL> change_password
Old password: <just hit enter if you don't have one yet> New password: Reenter n
ew password: Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=slackdog)(P
ORT=1521))) Password changed for LISTENER The command completed successfully LSN
RCTL> set password Password: The command completed successfully LSNRCTL> save_co
nfig Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=slackdog)(PORT=1521
))) Saved LISTENER configuration parameters. Listener Parameter File /d01/home/o
racle8i/network/admin/listener.ora Old Parameter File /d01/home/oracle8i/network
/admin/listener.bak The command completed successfully LSNRCTL> Now, you need to
use a password to do various operations (such as STOP) but not others (such as
STATUS)
============================================= 13. Datadictionary queries Rollbac
k segments:
=============================================
13.1 name, location and status of rollback segments:
-------------------------------------------------
--SELECT substr(segment_name, 1, 10), substr(tablespace_name, 1, 20), status, IN
ITIAL_EXTENT, NEXT_EXTENT, MIN_EXTENTS, MAX_EXTENTS, PCT_INCREASE FROM DBA_ROLLB
ACK_SEGS;

13.2 impression of the number of active transactions per rollback segment:
---------------------------------------------------------------------------
number of active transactions: V$ROLLSTAT
name of the rollback segment:  V$ROLLNAME

SELECT n.name, s.xacts
FROM V$ROLLNAME n, V$ROLLSTAT s
WHERE n.usn=s.usn;    (usn=undo segment number)

13.3 size, name, extents, bytes of the rollback segments:
---------------------------
---------------------------------SELECT substr(segment_name, 1, 15), bytes/1024/
1024 Size_in_MB, blocks, extents, substr(tablespace_name, 1, 15) FROM DBA_SEGMEN
TS WHERE segment_type='ROLLBACK'; SELECT n.name, s.extents, s.rssize FROM V$ROLL
NAME n, V$ROLLSTAT s WHERE n.usn=s.usn;
Create Tablespace RBS datafile '/db1/oradata/oem/rbs.dbf' SIZE 200M AUTOEXTEND O
N NEXT 20M MAXSIZE 500M LOGGING DEFAULT STORAGE ( INITIAL 5M NEXT 5M MINEXTENTS
2 MAXEXTENTS 100 PCTINCREASE 0 ) ONLINE PERMANENT;

13.4 The optimal parameter:
---------------------------
SELECT n.name, s.optsize
FROM V$ROLLNAME n, V$ROLLSTAT s
WHERE n.usn=s.usn;

13.5 writes to rollback segments:
---------------------------------
Run the query at the start and at the end of the measurement, and compare the difference.

SELECT n.name, s.writes
FROM V$ROLLNAME n, V$ROLLSTAT s
WHERE n.usn=s.usn;

13.6 Who and which processes use the rollback segments:
--------------------------------------------------------
Query1: Query on v$lock, v$session, v$rollname

column rr heading 'RB Segment' format a15
column us heading 'Username'   format a10
column os heading 'OS user'    format a10
column te heading 'Terminal'   format a15
SELECT R.name rr, nvl(S.username, 'no transaction') us, S.Osuser os, S.Terminal
te FROM V$LOCK L, V$SESSION S, V$ROLLNAME R WHERE L.Sid=S.Sid(+) AND trunc(L.Id1
/65536)=R.usn AND L.Type='TX' AND L.Lmode=6 ORDER BY R.name / Query 2: SELECT r.
name "RBS", s.sid, s.serial#, s.username "USER", t.status, t.cr_get, t.phy_io, t
.used_ublk, t.noundo,
FROM WHERE AND ORDER /
substr(s.program, 1, 78) "COMMAND" sys.v_$session s, sys.v_$transaction t, sys.v
_$rollname r t.addr = s.taddr t.xidusn = r.usn BY t.cr_get, t.phy_io
13.7 Determining the minimum number of rollback segments:
----------------------------------------------------------
Determine in the init.ora, via "show parameter transactions":
transactions                       = a   (max number of transactions, say 100)
transactions_per_rollback_segment  = b   (allowed number of concurrent transactions per rbs, say 10)
minimum = a/b   (100/10 = 10)

13.8 Determining the minimum size of rollback segments:
--------------------------------------------------------
lts      = largest transaction size (normal production, not the occasional batch load)
min_size = minimum size of a rollback segment
min_size = lts * 100 / (100 - (40 {%free} + 15 {iaiu} + 5 {header}))
min_size = lts * 1.67
Say lts=700K, then the starting value for a rollback segment is about 700K * 1.67 = 1170K (round up, for example to 1400K).
====================================
===================== 14. Data dictionary queries regarding security, permissions:
=========================================================
14.1 user information in datadictionary --------------------------------------SE
LECT username, user_id, password FROM DBA_USERS WHERE username='Kees'; 14.2 defa
ult tablespace, account_status of users ----------------------------------------
-------SELECT username, default_tablespace, account_status FROM DBA_USERS; 14.3
tablespace quotas of users ------------------------------SELECT tablespace_name,
bytes, max_bytes, blocks, max_blocks FROM DBA_TS_QUOTAS WHERE username='CHARLIE
'; 14.4 Query the system privileges of a user: DBA_SYS_PRIVS -------------------
--------------------------------------
SELECT substr(grantee, 1, 15), substr(privilege, 1, 40), admin_option FROM DBA_S
YS_PRIVS WHERE grantee='CHARLIE'; SELECT * FROM dba_sys_privs WHERE grantee='Kee
s'; 14.5 Invalid objects in DBA_OBJECTS: -----------------------------------SELE
CT substr(owner, 1, 10), substr(object_name, 1, 40), substr(object_type, 1, 40),
status FROM DBA_OBJECTS WHERE status='INVALID'; 14.6 session information ------
-----------------SELECT sid, serial#, substr(username, 1, 10), substr(osuser, 1,
10), substr(schemaname, 1, 10), substr(program, 1, 15), substr(module, 1, 15),
status, logon_time, substr(terminal, 1, 15), substr(machine, 1, 15) FROM V$SESSI
ON; 14.7 kill a session ------------------alter system kill session 'SID, SERIAL
#'
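A small worked example (the sid and serial# shown are of course just illustrative; take them from the V$SESSION query above):

SELECT sid, serial#, username FROM v$session WHERE username='CHARLIE';

ALTER SYSTEM KILL SESSION '23,411';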
======================== 15. INIT.ORA parameters: ======================== 15.1
init.ora parameters en ARCHIVE MODE: ---------------------------------------LOG_
ARCHIVE_DEST=/oracle/admin/cc1/arch LOG_ARCHIVE_START=TRUE LOG_ARCHIVE_FORMAT=ar
chcc1_%s.log 10g: LOG_ARCHIVE_DEST=c:\oracle\oradata\log LOG_ARCHIVE_FORMAT=arch_%t_%s_%r.dbf other: LOG_ARCHIVE_DEST_1= LOG_ARCHIVE_DEST_2= LOG_ARCHIVE_MAX_P
ROCESSES=2
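With these parameters in place, the database itself still has to be put in ARCHIVELOG mode; a minimal sketch (as sysdba):

SHUTDOWN IMMEDIATE
STARTUP MOUNT
ALTER DATABASE ARCHIVELOG;
ALTER DATABASE OPEN;
ARCHIVE LOG LIST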
15.2 init.ora, performance and SGA:
-----------------------------------
SORT_AREA_SIZE                 = 65536       (per PGA, max sort area)
SORT_AREA_RETAINED_SIZE        = 65536       (size after sort)
PROCESSES                      = 100         (all processes)
DB_BLOCK_SIZE                  = 8192
DB_BLOCK_BUFFERS               = 3400        (DB_CACHE_SIZE in Oracle 9i)
SHARED_POOL_SIZE               = 52428800
LOG_BUFFER                     =
LARGE_POOL_SIZE                = 26214400
DBWR_IO_SLAVES                 =             (DB_WRITER_PROCESSES)
DB_WRITER_PROCESSES            = 2
LGWR_IO_SLAVES                 =
DB_FILE_MULTIBLOCK_READ_COUNT  = 16          (minimize io during table scans; it specifies the max number of blocks in one io operation during sequential reads)
BUFFER_POOL_RECYCLE            =
BUFFER_POOL_KEEP               =
TIMED_STATISTICS               = TRUE        (whether statistics related to time are collected or not)
OPTIMIZER_MODE                 = RULE, CHOOSE, FIRST_ROWS, ALL_ROWS
PARALLEL_MIN_SERVERS           = 2           (for Parallel Query and parallel recovery)
PARALLEL_MAX_SERVERS           = 4
RECOVERY_PARALLELISM           = 2           (set parallel recovery at database level)
SHARED_POOL_SIZE: in bytes or K or M SHARED_POOL_SIZE specifies (in bytes) the s
ize of the shared pool. The shared pool contains shared cursors, stored procedur
es, control structures, and other structures. If you set PARALLEL_AUTOMATIC_TUNI
NG to false, Oracle also allocates parallel execution message buffers from the s
hared pool. Larger values improve performance in multi-user systems. Smaller val
ues use less memory. You can monitor utilization of the shared pool by querying
the view V$SGASTAT. SHARED_POOL_RESERVED_SIZE: The parameter was introduced in O
racle 7.1.5 and provides a means of reserving a portion of the shared pool for l
arge memory allocations. The reserved area comes out of the shared pool itself.
From a practical point of view one should set SHARED_POOL_RESERVED_SIZE to about
10% of SHARED_POOL_SIZE unless either the shared pool is very large OR SHARED_P
OOL_RESERVED_MIN_ALLOC has been set lower than the default value:
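As mentioned above, shared pool utilization can be checked via V$SGASTAT; a small illustrative query:

SELECT pool, name, bytes
FROM v$sgastat
WHERE pool = 'shared pool'
ORDER BY bytes DESC;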
15.3 init.ora and jobs:
----------------------
JOB_QUEUE_PROCESSES=1    number of SNP processes (SNP0, SNP1), max 36, used for replication and job queues
JOB_QUEUE_INTERVAL=60    check interval

15.4 instance name, sid:
-----------------------
db_name        = CC1
global_names   = TRUE
instance_name  = CC1
db_domain      = antapex.net

15.5 other parameters:
-----------------------
OS_AUTHENT_PREFIX           = ""                  (default is OPS$)
REMOTE_OS_AUTHENTICATION    = TRUE or FALSE       (whether OS authentication over the network is allowed)
REMOTE_LOGIN_PASSWORDFILE   = NONE or EXCLUSIVE
distributed_transactions    = 0 or >0             (starts the RECO process)
aq_tm_processes             =                     (advanced queuing, message queues)
mts_servers                 =                     (number of shared server processes in multithreaded server)
mts_max_servers             =
audit_file_dest             = /dbs01/app/oracle/admin/AMI_PRD/adump
background_dump_dest        = /dbs01/app/oracle/admin/AMI_PRD/bdump
user_dump_dest              = /dbs01/app/oracle/admin/AMI_PRD/udump
core_dump_dest              = /dbs01/app/oracle/admin/AMI_PRD/cdump
resource_limit              = TRUE                (specifies whether resource limits in profiles are in effect / are enforced)
license_max_sessions        =                     (max number of concurrent user sessions)
license_sessions_warning    =                     (at this limit, warning in alert log)
license_max_users           =                     (maximum number of users that can be created in the database)
compatible                  = 8.1.7.0.0
control_files               = /dbs04/oradata/AMI_PRD/ctrl/cc1_01.ctl,
                              /dbs05/oradata/AMI_PRD/ctrl/cc1_02.ctl,
                              /dbs06/oradata/AMI_PRD/ctrl/cc1_03.ctl
db_files                    =                     (max number of data files opened)
java_pool_size              =
log_checkpoint_interval     = 150
log_checkpoint_timeout      = 1800
max_dump_file_size          = 10240
max_enabled_roles           = 40
nls_date_format             = "DD-MM-YYYY"
nls_language                = AMERICAN
nls_territory               = AMERICA
o7_dictionary_accessibility = TRUE
open_cursors                = 250
optimizer_max_permutations  = 1000
optimizer_mode              = CHOOSE
parallel_max_servers        = 5
pre_page_sga                = TRUE
service_names               = CC1
utl_file_dir                = /app01/oradata/cc1/utl_file
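The current value of any of these parameters can be checked at runtime, for example:

show parameter log_archive

SELECT name, value, isdefault
FROM v$parameter
WHERE name LIKE 'log_archive%';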
All init.ora parameters:
------------------------
PARAMETER                        DESCRIPTION
-----------------------------    -----------------------------------------
O7_DICTIONARY_ACCESSIBILITY      Version 7 Dictionary Accessibility support [TRUE | FALSE]
active_instance_count            Number of active instances in the cluster database [NUMBER]
aq_tm_processes                  Number of AQ Time Managers to start [NUMBER]
archive_lag_target               Maximum number of seconds of redos the standby could lose [NUMBER]
asm_diskgroups                   Disk groups to mount automatically [CHAR]
asm_diskstring                   Disk set locations for discovery [CHAR]
asm_power_limit                  Number of processes for disk rebalancing [NUMBER]
audit_file_dest                  Directory in which auditing files are to reside ['Path']
audit_sys_operations             Enable sys auditing [TRUE|FALSE]
audit_trail                      Enable system auditing [NONE|DB|DB_EXTENDED|OS]
background_core_dump            Core Size for Background Processes [partial | full]
background_dump_dest             Detached process dump directory [file_path]
backup_tape_io_slaves            BACKUP Tape I/O slaves [TRUE | FALSE]
bitmap_merge_area_size           Maximum memory allowed for BITMAP MERGE [NUMBER]
blank_trimming                   Blank trimming semantics parameter [TRUE | FALSE]
buffer_pool_keep                 Number of database blocks/latches in keep buffer pool [CHAR: (buffers:n, latches:m)]
buffer_pool_recycle              Number of database blocks/latches in recycle buffer pool [CHAR: (buffers:n, latches:m)]
circuits                         Max number of virtual circuits [NUMBER]
cluster_database                 If TRUE startup in cluster database mode [TRUE | FALSE]
cluster_database_instances       Number of instances to use for sizing cluster db SGA structures [NUMBER]
cluster_interconnects            Interconnects for RAC use [CHAR]
commit_point_strength            Bias this node has toward not preparing in a two-phase commit [NUMBER (0-255)]
compatible control_file_record_keep_time control_files core_dump_dest cpu_count
[NUMBER] create_bitmap_area_size cursor_sharing create_stored_outlines FALSE | c
ategory_name] cursor_space_for_time db_16k_cache_size db_2k_cache_size db_32k_ca
che_size db_4k_cache_size db_8k_cache_size db_block_buffers db_block_checking db
_block_checksum db_block_size db_cache_advice db_cache_size db_create_file_dest
db_create_online_log_dest_n ['Path'] db_domain * db_file_multiblock_read_count d
b_file_name_convert db_files db_flashback_retention_target minutes [NUMBER] db_k
eep_cache_size db_name db_recovery_file_dest db_recovery_file_dest_size db_recyc
le_cache_size db_unique_name db_writer_processes dblink_encrypt_login dbwr_io_sl
aves ddl_wait_for_locks FALSE] dg_broker_config_file1 dg_broker_config_file2
Database will be completely compatible with this software version [CHAR: 9.2.0.0
.0] Control file record keep time in days [NUMBER] Control file names list [file
_path,file_path..] Core dump directory [file_path] Initial number of cpu's for t
his instance Size of create bitmap buffer for bitmap index [INTEGER] Cursor shar
ing mode [EXACT | SIMILAR | FORCE] Create stored outlines for DML statements [TR
UE | Use more memory in order to get faster execution [TRUE | FALSE] Size of cac
he for 16K buffers [bytes] Size of cache for 2K buffers [bytes] Size of cache fo
r 32K buffers [bytes] Size of cache for 4K buffers [bytes] Size of cache for 8K
buffers [bytes] Number of database blocks to cache in memory [bytes: 8M or NUMBE
R of blocks (Ora7)] Data and index block checking [TRUE | FALSE] Store checksum
in db blocks and check during reads [TRUE | FALSE] Size of database block [bytes
] Buffer cache sizing advisory [internal use only] Size of DEFAULT buffer pool f
or standard block size buffers [bytes] Default database location ['Path_to_direc
tory'] Online log/controlfile destination (where n=1-5) Directory part of global
database name stored with CREATE DATABASE [CHAR] Db blocks to be read each IO [
NUMBER] Datafile name convert patterns and strings for standby/clone db [, ] Max
allowable # db files [NUMBER] Maximum Flashback Database log retention time in
Size of KEEP buffer pool for standard block size buffers [bytes] Database name s
pecified in CREATE DATABASE [CHAR] Default database recovery file location [CHAR
] Database recovery files size limit [bytes] Size of RECYCLE buffer pool for sta
ndard block size buffers [bytes] Database Unique Name [CHAR] Number of backgroun
d database writer processes to start [NUMBER] Enforce password for distributed l
ogin always be encrypted [TRUE | FALSE] DBWR I/O slaves [NUMBER] Disable NOWAIT
DML lock acquisitions [TRUE | Data guard broker configuration file #1 ['Path'] D
ata guard broker configuration file #2 ['Path']
dg_broker_start disk_asynch_io FALSE] dispatchers (MTS_dispatchers in Ora 8) dis
tributed_lock_timeout dml_locks drs_start FALSE] enqueue_resources event fal_cli
ent fal_server fast_start_io_target fast_start_mttr_target fast_start_parallel_r
ollback file_mapping fileio_network_adapters filesystemio_options fixed_date '20
00_12_30_24_59_00'] gc_files_to_locks gcs_server_processes start [NUMBER] global
_context_pool_size global_names hash_area_size Server)[bytes] hash_join_enabled
hi_shared_memory_address hs_autoregister ifile instance_groups instance_name ins
tance_number instance_type ASM] java_max_sessionspace_size java_pool_size java_s
oft_sessionspace_limit
Start Data Guard broker framework (DMON process) [TRUE | FALSE] Use asynch I/O f
or random access devices [TRUE | Specifications of dispatchers [CHAR] Number of
seconds a distributed transaction waits for a lock [Internal] Dml locks - one fo
r each table modified in a transaction [NUMBER] Start DG Broker monitor (DMON pr
ocess)[TRUE | Resources for enqueues [NUMBER] Debug event control - default null
string [CHAR] FAL client [CHAR] FAL server list [CHAR] Upper bound on recovery
reads [NUMBER] MTTR target of forward crash recovery in seconds [NUMBER] Max num
ber of parallel recovery slaves that may be used [LOW | HIGH | FALSE] Enable fil
e mapping [TRUE | FALSE] Network Adapters for File I/O [CHAR] IO operations on f
ilesystem files [Internal] Fix SYSDATE value for debugging[NONE or RAC/OPS - loc
k granularity number of global cache locks per file (DFS) [CHAR] Number of backg
round gcs server processes to Global Application Context Pool Size in Bytes [byt
es] Enforce that database links have same name as remote database [TRUE | FALSE]
Size of in-memory hash work area (Shared Enable/disable hash join (CBO) [TRUE |
FALSE] SGA starting address (high order 32-bits on 64-bit platforms) [NUMBER] E
nable automatic server DD updates in HS agent self-registration [TRUE | FALSE] I
nclude file in init.ora ['path_to_file'] List of instance group names [CHAR] Ins
tance name supported by the instance [CHAR] Instance number [NUMBER] Type of ins
tance to be executed RDBMS or Automated Storage Management [RDBMS | Max allowed
size in bytes of a Java sessionspace [bytes] Size in bytes of the Java pool [byt
es] Warning limit on size in bytes of a Java
job_queue_processes large_pool_size [bytes] ldap_directory_access SSL] license_m
ax_sessions license_max_users license_sessions_warning local_listener [CHAR] loc
k_name_space standby/primary database lock_sga log_archive_config log_archive_de
st log_archive_dest_n log_archive_dest_state_n log_archive_duplex_dest log_archi
ve_format log_archive_local_first | FALSE] log_archive_max_processes log_archive
_min_succeed_dest log_archive_start [TRUE | FALSE] log_archive_trace log_buffer
log_checkpoint_interval log_checkpoint_timeout between log_checkpoints_to_alert
FALSE] log_file_name_convert log_parallelism logmnr_max_persistent_sessions max_
commit_propagation_delay max_dispatchers max_dump_file_size bytes] max_enabled_r
oles [NUMBER] max_rollback_segments [NUMBER] max_shared_servers
sessionspace [NUMBER] Number of job queue slave processes [NUMBER] Size in bytes
of the large allocation pool RDBMS's LDAP access option [NONE | PASSWORD | Maxi
mum number of non-system user sessions (concurrent licensing) [NUMBER] Maximum n
umber of named users that can be created (named user licensing) [NUMBER] Warning
level for number of non-system user sessions [NUMBER] Define which listeners in
stances register with Used for generating lock names for assign each a unique na
me space [CHAR] Lock entire SGA in physical memory [Internal] Log archive config
[SEND|NOSEND] [RECEIVE|NORECEIVE] [ DG_CONFIG] Archive logs destination ['path_
to_directory'] Archive logging parameters (n=1-10) Enterprise Edition [CHAR] Arc
hive logging parameter status (n=1-10) [CHAR] Enterprise Edition [CHAR] Duplex a
rchival destination ['path_to_directory'] Archive log filename format [CHAR: "My
App%S.ARC"] Establish EXPEDITE attribute default value [TRUE Maximum number of a
ctive ARCH processes [NUMBER] Minimum number of archive destinations that must s
ucceed [NUMBER] Start archival process on SGA initialization Archive log tracing
level [NUMBER] Redo circular buffer size [bytes] Checkpoint threshold, # redo b
locks [NUMBER] Checkpoint threshold, maximum time interval checkpoints in second
s [NUMBER] Log checkpoint begin/end to alert file [TRUE | Logfile name convert p
atterns and strings for standby/clone db [, ] Number of log buffer strands [NUMB
ER] Maximum number of threads to mine [NUMBER] Max age of new snapshot in .01 se
conds [NUMBER] Max number of dispatchers [NUMBER] Maximum size (blocks) of dump
file [UNLIMITED or Max number of roles a user can have enabled Max number of rol
lback segments in SGA cache Max number of shared servers [NUMBER]
mts_circuits mts_dispatchers mts_listener_address mts_max_dispatchers mts_max_se
rvers mts_multiple_listeners mts_servers mts_service mts_sessions nls_calendar [
CHAR] nls_comp ANSI] nls_currency nls_date_format nls_date_language nls_dual_cur
rency nls_iso_currency nls_language nls_length_semantics nls_nchar_conv_excp nls
_numeric_characters nls_sort nls_territory nls_time_format nls_time_tz_format nl
s_timestamp_format nls_timestamp_tz_format object_cache_max_size_percent object_
cache_optimal_size olap_page_pool_size open_cursors open_links open_links_per_in
stance optimizer_dynamic_sampling optimizer_features_enable optimizer_index_cach
ing optimizer_index_cost_adj optimizer_max_permutations optimizer_mode ALL_ROWS]
oracle_trace_collection_name oracle_trace_collection_path oracle_trace_collecti
on_size oracle_trace_enable oracle_trace_facility_name oracle_trace_facility_pat
h
Max number of circuits [NUMBER] Specifications of dispatchers [CHAR] Address(es)
of network listener [CHAR] Max number of dispatchers [NUMBER] Max number of sha
red servers [NUMBER] Are multiple listeners enabled? [TRUE | FALSE] Number of sh
ared servers to start up [NUMBER] Service supported by dispatchers [CHAR] max nu
mber of shared server sessions [NUMBER] NLS calendar system name (Default=GREGOR
IAN) NLS comparison, Enterprise Edition [BINARY | NLS local currency symbol [CHA
R] NLS Oracle date format [CHAR] NLS date language name (Default=AMERICAN) [CHAR
] Dual currency symbol [CHAR] NLS ISO currency territory name override the defau
lt set by NLS_TERRITORY [CHAR] NLS language name (session default) [CHAR] Create
columns using byte or char semantics by default [BYTE | CHAR] NLS raise an exce
ption instead of allowing implicit conversion [CHAR] NLS numeric characters [CHA
R] Case-sensitive or insensitive sort [Language] language may be BINARY, BINARY_
CI, BINARY_AI, GERMAN, GERMAN_CI, etc NLS territory name (country settings) [CHA
R] Time format [CHAR] Time with timezone format [CHAR] Time stamp format [CHAR]
Timestamp with timezone format [CHAR] Percentage of maximum size over optimal of
the user session's ob [NUMBER] Optimal size of the user session's object cache
in bytes [bytes] Size of the olap page pool in bytes [bytes] Max # cursors per s
ession [NUMBER] Max # open links per session [NUMBER] Max # open links per insta
nce [NUMBER] Optimizer dynamic sampling [NUMBER] Optimizer plan compatibility (o
racle version e.g. 8.1.7) [CHAR] Optimizer index caching percent [NUMBER] Optimi
zer index cost adjustment [NUMBER] Optimizer maximum join permutations per query
block [NUMBER] Optimizer mode [RULE | CHOOSE | FIRST_ROWS | Oracle Oracle Oracl
e Oracle Oracle Oracle TRACE TRACE TRACE Trace TRACE TRACE default collection na
me [CHAR] collection path [CHAR] collection file max. size [NUMBER] enabled/disa
bled [TRUE | FALSE] default facility name [CHAR] facility path [CHAR]
os_authent_prefix os_roles FALSE] parallel_adaptive_multi_user
Prefix for auto-logon accounts [CHAR] Retrieve roles from the operating system [
TRUE |
Enable adaptive setting of degree for multiple user streams [TRUE | FALSE] paral
lel_automatic_tuning Enable intelligent defaults for parallel execution paramete
rs [TRUE | FALSE] parallel_execution_message_size Message buffer size for parall
el execution [bytes] parallel_instance_group Instance group to use for all paral
lel operations [CHAR] parallel_max_servers Maximum parallel query servers per in
stance [NUMBER] parallel_min_percent Minimum percent of threads required for par
allel query [NUMBER] parallel_min_servers Minimum parallel query servers per ins
tance [NUMBER] parallel_server If TRUE startup in parallel server mode [TRUE | F
ALSE] parallel_server_instances Number of instances to use for sizing OPS SGA st
ructures [NUMBER] parallel_threads_per_cpu Number of parallel execution threads
per CPU [NUMBER] partition_view_enabled Enable/disable partitioned views [TRUE |
FALSE] pga_aggregate_target Target size for the aggregate PGA memory consumed b
y the instance [bytes] plsql_code_type PL/SQL code-type [INTERPRETED | NATIVE] p
lsql_compiler_flags PL/SQL compiler flags [CHAR] plsql_debug PL/SQL debug [TRUE
| FALSE] plsql_native_c_compiler plsql native C compiler [CHAR] plsql_native_lib
rary_dir plsql native library dir ['Path_to_directory'] plsql_native_library_sub
dir_count plsql native library number of subdirectories [NUMBER] plsql_native_li
nker plsql native linker [CHAR] plsql_native_make_file_name plsql native compila
tion make file [CHAR] plsql_native_make_utility plsql native compilation make ut
ility [CHAR] plsql_optimize_level PL/SQL optimize level [NUMBER] plsql_v2_compat
ibility PL/SQL version 2.x compatibility flag [TRUE | FALSE] plsql_warnings PL/S
QL compiler warnings settings [CHAR] See also DBMS_WARNING and DBA_PLSQL_OBJECT_
SETTINGS pre_page_sga Pre-page sga for process [TRUE | FALSE] processes User pro
cesses [NUMBER] query_rewrite_enabled query_rewrite_integrity TRUSTED | ENFORCED
] rdbms_server_dn read_only_open_delayed recovery_parallelism remote_archive_ena
ble Allow rewrite of queries using materialized views if enabled [FORCE | TRUE |
FALSE] Perform rewrite using materialized views with desired integrity [STALE_T
OLERATED | RDBMS's Distinguished Name [CHAR] If TRUE delay opening of read only
files until first access [TRUE | FALSE] Number of server processes to use for pa
rallel recovery [NUMBER] Remote archival enable setting [RECEIVE[,SEND] |
FALSE | TRUE] remote_dependencies_mode
Remote-procedure-call dependencies mode parameter [TIMESTAMP | SIGNATURE] remote
_listener Remote listener [CHAR] remote_login_passwordfile Use a password file [
NONE | SHARED | EXCLUSIVE] remote_os_authent Allow non-secure remote clients to
use auto-logon accounts [TRUE | FALSE] remote_os_roles Allow non-secure remote c
lients to use os roles [TRUE | FALSE] replication_dependency_tracking Tracking d
ependency for Replication parallel propagation [TRUE | FALSE] resource_limit Mas
ter switch for resource limit [TRUE | FALSE] resource_manager_plan Resource mgr
top plan [Plan_Name] resumable_timeout Set resumable_timeout, seconds [NUMBER] r
ollback_segments Undo segment list [CHAR] row_locking Row-locking [ALWAYS | DEFA
ULT | INTENT] (Default=always) serial_reuse PLSQL|ALL|NULL] serializable service
_names session_cached_cursors session_max_open_files sessions sga_max_size sga_t
arget shadow_core_dump NONE] shared_memory_address shared_pool_reserved_size sha
red_pool_size shared_server_sessions shared_servers skip_unusable_indexes FALSE]
sort_area_retained_size sort_area_size smtp_out_server [server_clause] spfile s
p_name sql92_security sql_trace sqltune_category sql_version standby_archive_des
t standby_file_management star_transformation_enabled Reuse the frame segments [
DISABLE | SELECT|DML| Serializable [Internal] Service names supported by the ins
tance [CHAR] Number of cursors to save in the session cursor cache [NUMBER] Maxi
mum number of open files allowed per session [NUMBER] User and system sessions [
NUMBER] Max total SGA size [bytes] Target size of SGA [bytes] Core Size for Shad
ow Processes [PARTIAL | FULL | SGA starting address (low order 32-bits on 64-bit
platforms) [NUMBER] Size in bytes of reserved area of shared pool [bytes] Size
in bytes of shared pool [bytes] Max number of shared server sessions [NUMBER] Nu
mber of shared servers to start up [NUMBER] Skip unusable indexes if set to true
[TRUE | Size of in-memory sort work area retained between fetch calls [bytes] S
ize of in-memory sort work area [bytes] utl_smtp server and port configuration p
arameter Server parameter file [CHAR] Service Provider Name [CHAR] Require selec
t privilege for searched update/delete [TRUE | FALSE] Enable SQL trace [TRUE | F
ALSE] Category qualifier for applying hintsets [CHAR] Sql language version param
eter for compatibility issues [CHAR] Standby database archivelog destination tex
t string ['Path_to_directory'] If auto then files are created/dropped automatica
lly on standby [MANUAL | AUTO] Enable the use of star transformation
statistics_level streams_pool_size tape_asynch_io FALSE] thread timed_os_statist
ics timed_statistics FALSE] trace_enabled FALSE] tracefile_identifier transactio
n_auditing
[TRUE | FALSE | DISABLE_TEMP_TABLE] Statistics level [ALL | TYPICAL | BASIC] Siz
e in bytes of the streams pool [bytes] Use asynch I/O requests for tape devices
[TRUE | Redo thread to mount [NUMBER] Internal os statistic gathering interval i
n seconds [NUMBER] Maintain internal timing statistics [TRUE | Enable KST tracin
g (Internal parameter) [TRUE |
Trace file custom identifier [CHAR] Transaction auditing records generated in th
e redo log [TRUE | FALSE] transactions Max. number of concurrent active transact
ions [NUMBER] transactions_per_rollback_segment Number of active transactions pe
r rollback segment [NUMBER] undo_management undo_retention undo_suppress_errors
undo_tablespace use_indirect_data_buffers user_dump_dest utl_file_dir Instance r
uns in SMU mode if TRUE, else in RBU mode [MANUAL | AUTO] Undo retention in seco
nds [NUMBER] Suppress RBU errors in SMU mode [TRUE | FALSE] Use or switch undo t
ablespace [Undo_tbsp_name] Enable indirect data buffers (very large SGA on 32-bi
t platforms [TRUE | FALSE] User process dump directory ['Path_to_directory'] utl
_file accessible directories list utl_file_dir='Path1', 'Path2'.. or utl_file_di
r='Path1' # Must be utl_file_dir='Path2' # consecutive entries Policy used to si
ze SQL working areas [MANUAL |
workarea_size_policy AUTO]
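The current value of any of the parameters above can be checked in the standard v$parameter view, or with SHOW PARAMETER in SQL*Plus; a small sketch:

  SELECT name, value, isdefault
  FROM   v$parameter
  WHERE  name LIKE '%pool%'
  ORDER BY name;

  -- or, in SQL*Plus:
  SHOW PARAMETER shared_pool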
db_file_multiblock_read_count: The db_file_multiblock_read_count initialization
parameter determines the number of database blocks read in one I/O operation dur
ing a full table scan. The setting of this parameter can reduce the number of I/
O calls required for a full table scan, thus improving performance.

15.6 9i UNDO or ROLLBACK parameters:
------------------------------------
- UNDO_MANAGEMENT: If AUTO, use automatic undo management mode. If MANUAL, use manual undo management mode.
- UNDO_TABLESPACE: A dynamic parameter specifying the name of an undo tablespace to use.
- UNDO_RETENTION: A dynamic parameter specifying the length of time to retain undo. Default is 900 seconds.
- UNDO_SUPPRESS_ERRORS: If TRUE, suppress error messages if manual undo management SQL statements are issued when operating in automatic undo management mode. If FALSE, issue an error message. This is a dynamic parameter.

If your database is on manual undo management, you can still use the following 8i-type parameters:
- ROLLBACK_SEGMENTS: Specifies the rollback segments to be acquired at instance startup.
- TRANSACTIONS: Specifies the maximum number of concurrent transactions.
- TRANSACTIONS_PER_ROLLBACK_SEGMENT: Specifies the number of concurrent transactions that each rollback segment is expected to handle.
- MAX_ROLLBACK_SEGMENTS: Specifies the maximum number of rollback segments that can be online for any instance.

An example configuration is shown below.
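A minimal sketch of an automatic undo configuration in the init.ora, plus a query on the standard V$UNDOSTAT view to see undo usage per 10-minute interval (the tablespace name UNDOTBS1 is just an example, not from this document):

  undo_management = AUTO
  undo_tablespace = UNDOTBS1
  undo_retention  = 900

  SELECT begin_time, end_time, undoblks, txncount, maxquerylen
  FROM   v$undostat
  ORDER BY begin_time;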
15.7 Oracle 9i init file examples:
----------------------------------
Example 1:
----------
# Cache and I/O
DB_BLOCK_SIZE=4096
DB_CACHE_SIZE=20971520
# Cursors and Library Cache
CURSOR_SHARING=SIMILAR
OPEN_CURSORS=300
# Diagnostics and Statistics
BACKGROUND_DUMP_DEST=/vobs/oracle/admin/mynewdb/bdump
CORE_DUMP_DEST=/vobs/oracle/admin/mynewdb/cdump
TIMED_STATISTICS=TRUE
USER_DUMP_DEST=/vobs/oracle/admin/mynewdb/udump
# Control File Configuration
CONTROL_FILES=("/vobs/oracle/oradata/mynewdb/control01.ctl",
               "/vobs/oracle/oradata/mynewdb/control02.ctl",
               "/vobs/oracle/oradata/mynewdb/control03.ctl")
# Archive
LOG_ARCHIVE_DEST_1='LOCATION=/vobs/oracle/oradata/mynewdb/archive'
LOG_ARCHIVE_FORMAT=%t_%s.dbf
LOG_ARCHIVE_START=TRUE
# Shared Server
# Uncomment and use first DISPATCHERS parameter below when your listener is
# configured for SSL
# (listener.ora and sqlnet.ora)
# DISPATCHERS = "(PROTOCOL=TCPS)(SER=MODOSE)",
#               "(PROTOCOL=TCPS)(PRE=oracle.aurora.server.SGiopServer)"
DISPATCHERS="(PROTOCOL=TCP)(SER=MODOSE)",
            "(PROTOCOL=TCP)(PRE=oracle.aurora.server.SGiopServer)",
            (PROTOCOL=TCP)
# Miscellaneous
COMPATIBLE=9.2.0
DB_NAME=mynewdb
# Distributed, Replication and Snapshot
DB_DOMAIN=us.oracle.com
REMOTE_LOGIN_PASSWORDFILE=EXCLUSIVE
# Network Registration
INSTANCE_NAME=mynewdb
# Pools
JAVA_POOL_SIZE=31457280
LARGE_POOL_SIZE=1048576
SHARED_POOL_SIZE=52428800
# Processes and Sessions
PROCESSES=150
# Redo Log and Recovery
FAST_START_MTTR_TARGET=300
# Resource Manager
RESOURCE_MANAGER_PLAN=SYSTEM_PLAN
# Sort, Hash Joins, Bitmap Indexes
SORT_AREA_SIZE=524288
# Automatic Undo Management
UNDO_MANAGEMENT=AUTO
UNDO_TABLESPACE=undotbs

Example 2:
----------
##############################################################################
# Copyright (c) 1991, 2001 by Oracle Corporation
##############################################################################

###########################################
# Cache and I/O
###########################################
db_block_size=8192
db_cache_size=50331648

###########################################
# Cursors and Library Cache
###########################################
open_cursors=300

###########################################
# Diagnostics and Statistics
###########################################
background_dump_dest=D:\oracle\admin\iasdb\bdump
core_dump_dest=D:\oracle\admin\iasdb\cdump
timed_statistics=TRUE
user_dump_dest=D:\oracle\admin\iasdb\udump

###########################################
# Distributed, Replication and Snapshot
###########################################
db_domain=missrv.miskm.mindef.nl
remote_login_passwordfile=EXCLUSIVE

###########################################
# File Configuration
###########################################
control_files=("D:\oracle\oradata\iasdb\CONTROL01.CTL",
               "D:\oracle\oradata\iasdb\CONTROL02.CTL",
               "D:\oracle\oradata\iasdb\CONTROL03.CTL")

###########################################
# Job Queues
###########################################
job_queue_processes=4

###########################################
# MTS
###########################################
dispatchers="(PROTOCOL=TCP)(PRE=oracle.aurora.server.GiopServer)",
            "(PROTOCOL=TCP)(PRE=oracle.aurora.server.SGiopServer)"

###########################################
# Miscellaneous
###########################################
aq_tm_processes=1
compatible=9.0.0
db_name=iasdb

###########################################
# Network Registration
###########################################
instance_name=iasdb

###########################################
# Pools
###########################################
java_pool_size=41943040
shared_pool_size=33554432

###########################################
# Processes and Sessions
###########################################
processes=150

###########################################
# Redo Log and Recovery
###########################################
fast_start_mttr_target=300

###########################################
# Sort, Hash Joins, Bitmap Indexes
###########################################
pga_aggregate_target=33554432
sort_area_size=524288

###########################################
# System Managed Undo and Rollback Segments
###########################################
undo_management=AUTO
undo_tablespace=UNDOTBS
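In 9i an init.ora like the examples above can also be converted to a server parameter file (spfile), after which parameters can be changed persistently with ALTER SYSTEM. A small sketch (the file path is hypothetical):

  CREATE SPFILE FROM PFILE='/dbs01/app/oracle/admin/CC1/pfile/init.ora';

  -- change a parameter both in memory and in the spfile:
  ALTER SYSTEM SET shared_pool_size=64M SCOPE=BOTH;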
============== 17. Snapshots: ============== Snapshots allow you to replicate da
ta based on column- and/or row-level subsetting, while multimaster replication r
equires replication of the entire table. You need a database link to implement r
eplication.

17.1 Database link:
-------------------
In the "local" database, where the snapshot copy will reside, issue a statement like for example:

CREATE PUBLIC DATABASE LINK MY_LINK CONNECT TO HARRY IDENTIFIED BY password USING 'DB1';

The service name "DB1" is resolved via the tnsnames.ora into a connect descriptor, which provides the remote server name, protocol, and SID of the remote database. Now it is possible, for example, to SELECT from the table employee in the remote database "DB1":

SELECT * FROM employee@MY_LINK;

Two-phase commit (2PC) is also implemented:

update employee set amount=amount-100;
update employee@my_link set amount=amount+100;
commit; 17.2 Snapshots: --------------There are in general 2 styles of snapshots
available:

Simple snapshot: One to one replication of a remote table to a local
snapshot (=table). The refresh of the snapshot can be a complete refresh, with t
he refresh rate specified in the "create snapshot" command. Also a snapshot log
can be used at the remote original table in order to replicate only the transact
ion data. Complex snapshot: If multiple remote tables are joined in order to cre
ate/refresh a local snapshot, it is a "complex snapshot". Only complete refreshe
s are possible. If joins or complex query clauses are used, like group by, one c
an only use a "complex snapshot". -> Example COMPLEX snapshot: On the local data
base: CREATE SNAPSHOT EMP_DEPT_COUNT pctfree 5 tablespace SNAP storage (initial
100K next 100K pctincrease 0) REFRESH COMPLETE START WITH SYSDATE NEXT SYSDATE+7
AS SELECT DEPTNO, COUNT(*) Dept_count FROM EMPLOYEE@MY_LINK GROUP BY Deptno; Be
cause the records in this snapshot will not correspond one to one with the recor
ds in the master table (since the query contains a group by clause) this is a co
mplex snapshot. Thus the snapshot will be completely recreated every time it is
refreshed. -> Example SIMPLE snapshot: On the local database: CREATE SNAPSHOT EM
P_DEPT_COUNT pctfree 5 tablespace SNAP
storage (initial 100K next 100K pctincrease 0) REFRESH FAST START WITH SYSDATE N
EXT SYSDATE+7 AS SELECT * FROM EMPLOYEE@MY_LINK In this case the refresh fast cl
ause tells oracle to use a snapshot log to refresh the local snapshot. When a sn
apshotlog is used, only the changes to the master table are sent to the targets.
The snapshot log must be created in the master database (WHERE the original obj
ect is) create snapshot log on employee tablespace data storage (initial 100K ne
xt 100K pctincrease 0); Snapshot groups: ---------------A snapshot group in a re
plication system maintains a partial or complete copy of the objects at the targ
et master group. Snapshot groups cannot span master group boundaries. Figure 3-7
displays the correlation between Groups A and B at the master site and Groups A
and B at the snapshot site. Group A at the snapshot site (see Figure 3-7) conta
ins only some of the objects in the corresponding Group A at the master site. Gr
oup B at the snapshot site contains all objects in Group B at the master site. U
nder no circumstances, however, could Group B at the snapshot site contain objec
ts FROM Group A at the master site. As illustrated in Figure 3-7, a snapshot gro
up has the same name as the master group on which the snapshot group is based. F
or example, a snapshot group based on a "PERSONNEL" master group is also named "
PERSONNEL." In addition to maintaining organizational consistency between snapsh
ot sites and master sites, snapshot groups are required for supporting updateabl
e snapshots. If a snapshot does not belong to a snapshot group, then it must be
a read-only snapshot. A snapshot group is used to organize snapshots in a logica
l manner.

Refresh groups:
---------------
If 2 or more master tables which have a PK-FK relationship are replicated, it is possible that the 2 corresponding snapshots violate the referential integrity, because of different refresh times and schedules etc.. Related snapshots can be collected into refresh groups. The purpose of a refresh group is to coordinate the refresh schedules of its members. This is achieved via the DBMS_REFRESH package. The procedures in this package are MAKE, ADD, SUBTRACT, CHANGE, DESTROY, and REFRESH (see the example below). A refresh group could contain snapshots from more than one snapshot group.
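A minimal sketch of creating and refreshing a refresh group with DBMS_REFRESH (the group and snapshot names are hypothetical, not from this document):

  BEGIN
    DBMS_REFRESH.MAKE(
      name      => 'SALES.SALES_GRP',                        -- refresh group name
      list      => 'SALES.SNAP_ORDERS, SALES.SNAP_CUSTOMER', -- snapshots to keep consistent
      next_date => SYSDATE,
      interval  => 'SYSDATE + 1/24');                        -- refresh every hour
  END;
  /

  -- refresh all members of the group together, on demand:
  EXECUTE DBMS_REFRESH.REFRESH('SALES.SALES_GRP');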
Types of snapshots: ------------------Primary Key ----------Primary key snapshot
s are the default type of snapshot. They are updateable if the snapshot was crea
ted as part of a snapshot group and "FOR UPDATE" was specified when defining the
snapshot. Changes are propagated according to the row-level changes that have o
ccurred, as identified by the primary key value of the row (not the ROWID). The
SQL statement for creating an updateable, primary key snapshot might look like:
CREATE SNAPSHOT sales.customer FOR UPDATE AS SELECT * FROM sales.customer@dbs1.a
cme.com; Primary key snapshots may contain a subquery so that you can create a h
orizontally partitioned subset of data at the remote snapshot site. This subquer
y may be as simple as a basic WHERE clause or as complex as a multilevel WHERE E
XISTS clause. Primary key snapshots that contain a selected class of subqueries
can still be incrementally or fast refreshed. The following is a subquery snapsh
ot with a WHERE clause containing a subquery: CREATE SNAPSHOT sales.orders REFRE
SH FAST AS SELECT * FROM sales.orders@dbs1.acme.com o WHERE EXISTS (SELECT 1 FRO
M sales.customer@dbs1.acme.com c WHERE o.c_id = c.c_id AND zip = 19555); ROWID -
---For backwards compatibility, Oracle supports ROWID snapshots in addition to t
he default primary key snapshots. A ROWID snapshot is based on the physical row
identifiers (ROWIDs) of the rows in a master table. ROWID snapshots should be us
ed only for snapshots based on master tables FROM an Oracle7 database, and shoul
d not be used when creating new snapshots based on master tables FROM Oracle rel
ease 8.0 or greater databases. CREATE SNAPSHOT sales.customer REFRESH WITH ROWID
AS SELECT * FROM sales.customer@dbs1.acme.com;
Complex ------To be fast refreshed, the defining query for a snapshot must obser
ve certain restrictions. If you require a snapshot whose defining query is more
general and cannot observe the restrictions, then the snapshot is complex and ca
nnot be fast refreshed. Specifically, a snapshot is considered complex when the
defining query of the snapshot contains: A CONNECT BY clause Clauses that do not
comply with the requirements detailed in Table 3-1, "Restrictions for Snapshots
with Subqueries" A set operation, such as UNION, INTERSECT, or MINUS In most ca
ses, a distinct or aggregate function, although it is possible to have a distinc
t or aggregate function in the defining query and still have a simple snapshot S
ee Also: Oracle8i Data Warehousing Guide for more information about complex mate
rialized views. "Snapshot" is synonymous with "materialized view" in Oracle docu
mentation, and "materialized view" is used in the Oracle8i Data Warehousing Guid
e. The following statement is an example of a complex snapshot CREATE statement:
CREATE SNAPSHOT scott.snap_employees AS SELECT emp.empno, emp.ename FROM scott.
emp@dbs1.acme.com UNION ALL SELECT new_emp.empno, new_emp.ename FROM scott.new_e
mp@dbs1.acme.com; Read Only --------Any of the previously described types of sna
pshots can be made read-only by omitting the FOR UPDATE clause or disabling the
equivalent checkbox in the Replication Manager interface. Read-only snapshots us
e many of the same mechanisms as updateable snapshots, except that they do not n
eed to belong to a snapshot group. Snapshot Registration at a Master Site ------
-------------------------------At the master site, an Oracle database automatica
lly registers information about a snapshot based on its master table(s). The fo
llowing sections explain more about Oracle's snapshot registration mechanism. DB
A_REGISTERED_SNAPSHOTS and DBA_SNAPSHOT_REFRESH_TIMES dictionary views
You can query the DBA_REGISTERED_SNAPSHOTS data dictionary view to list the foll
owing information about a remote snapshot: The owner, name, and database that co
ntains the snapshot The snapshot's defining query Other snapshot characteristics
, such as its refresh method (fast or complete) You can also query the DBA_SNAPS
HOT_REFRESH_TIMES view at the master site to obtain the last refresh times for e
ach snapshot. Administrators can use this information to monitor snapshot activi
ty FROM master sites and coordinate changes to snapshot sites if a master table
needs to be dropped, altered, or relocated. Internal Mechanisms Oracle automatic
ally registers a snapshot at its master database when you create the snapshot, a
nd unregisters the snapshot when you drop it. Caution: Oracle cannot guarantee t
he registration or unregistration of a snapshot at its master site during the cr
eation or drop of the snapshot, respectively. If Oracle cannot successfully regi
ster a snapshot during creation, Oracle completes snapshot registration during a
subsequent refresh of the snapshot. If Oracle cannot successfully unregister a
snapshot when you drop the snapshot, the registration information for the snapsh
ot persists in the master database until it is manually unregistered. Complex sn
apshots might not be registered. Manual registration ------------------If necess
ary, you can maintain registration manually. Use the REGISTER_SNAPSHOT and UNREG
ISTER_SNAPSHOT procedures of the DBMS_SNAPSHOT package at the master site to add
, modify, or remove snapshot registration information. Snapshot Log -----------W
hen you create a snapshot log for a master table, Oracle creates an underlying t
able as the snapshot log. A snapshot log holds the primary keys and/or the ROWID
s of rows that have been updated in the master table. A snapshot log can also co
ntain filter columns to support fast refreshes of snapshots with subqueries. The
name of a snapshot log's table is MLOG$_master_table_name. The snapshot log is
created in the same schema as the target master table. One snapshot log can supp
ort multiple snapshots on its master table. As described in the previous section
, the internal trigger adds change information to the snapshot log whenever a DM
L transaction has taken place on the target
master table. There are three types of snapshot logs: Primary Key: The snapshot
records changes to the master table based on the primary key of the affected row
s. Row ID: The snapshot records changes to the master table based on the ROWID o
f the affected rows. Combination: The snapshot records changes to the master tab
le based on both the primary key and the ROWID of the affected rows. This snapsh
ot log supports both primary key and ROWID snapshots, which is helpful for mixed
environments. A combination snapshot log works in the same manner as the primar
y key and ROWID snapshot log, except that both the primary key and the ROWID of
the affected row are recorded. Though the difference between snapshot logs based
on primary keys and ROWIDs is small (one records affected rows using the primar
y key, while the other records affected rows using the physical ROWID), the prac
tical impact is large. Using ROWID snapshots and snapshot logs makes reorganizin
g and truncating your master tables difficult because it prevents your ROWID sna
pshots FROM being fast refreshed. If you reorganize or truncate your master tabl
e, your ROWID snapshot must be COMPLETE refreshed because the ROWIDs of the mast
er table have changed. To delete a snapshot log, execute the DROP SNAPSHOT LOG S
QL statement in SQL*Plus. For example, the following statement deletes the snaps
hot log for a table named CUSTOMERS in the SALES schema: DROP SNAPSHOT LOG ON sa
les.customers;

To truncate the master table and also purge its snapshot log, use:

TRUNCATE TABLE table_name PURGE SNAPSHOT LOG;
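For a single snapshot (outside a refresh group), a manual refresh can be done with DBMS_SNAPSHOT.REFRESH (in later versions also available as DBMS_MVIEW.REFRESH). A small sketch, reusing the snapshot names from the examples above; 'F' requests a fast refresh, 'C' a complete refresh:

  EXECUTE DBMS_SNAPSHOT.REFRESH('sales.orders', 'F');
  EXECUTE DBMS_SNAPSHOT.REFRESH('sales.customer', 'C');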
=============
18. Triggers:
=============
A trigger is a PL/SQL code block attached to a database table and executed when an event occurs on that table. Triggers are implicitly invoked by DML commands. Triggers are stored as text and compiled at execution time; because of this it is wise not to include much code in them, but to call out to previously stored procedures or packages, as this will greatly improve performance. You may not use COMMIT, ROLLBACK and SAVEPOINT statements within trigger blocks. Remember that triggers may be executed thousands of times for a large update, so they can seriously affect SQL execution performance.
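As an illustration of keeping a trigger body thin, a minimal sketch that only delegates to a stored package procedure (the table, package and procedure names are hypothetical, not from this document):

  CREATE OR REPLACE TRIGGER emp_audit_trg
  AFTER INSERT OR UPDATE OR DELETE ON emp
  FOR EACH ROW
  BEGIN
    -- all real work is done in an already compiled package procedure
    emp_audit_pkg.log_change(:OLD.empno, :NEW.empno);
  END;
  /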
Triggers may be called BEFORE or AFTER the following events: INSERT, UPDATE and DELETE.
Triggers may be STATEMENT or ROW types.
- STATEMENT triggers fire BEFORE or AFTER the execution of the statement that caused the trigger to fire.
- ROW triggers fire BEFORE or AFTER any affected row is processed.

An example of a statement trigger follows:

CREATE OR REPLACE TRIGGER MYTRIG1
BEFORE DELETE OR INSERT OR UPDATE ON JD11.BOOK
BEGIN
  IF (TRIM(TO_CHAR(SYSDATE,'DY')) IN ('SAT','SUN'))
     OR (TO_CHAR(SYSDATE,'hh24:mi') NOT BETWEEN '08:30' AND '18:30') THEN
    RAISE_APPLICATION_ERROR(-20500,'Table is secured');
  END IF;
END;

After the CREATE OR REPLACE statement is the object identifier (TRIGGER) and the object name (MYTRIG1). This trigger specifies that before any data change event on the BOOK table this PL/SQL code block will be compiled and executed. The user will not be allowed to update the table outside of normal working hours.

An example of a row trigger follows:

CREATE OR REPLACE TRIGGER MYTRIG2
AFTER DELETE OR INSERT OR UPDATE ON JD11.BOOK
FOR EACH ROW
BEGIN
  IF DELETING THEN
    INSERT INTO JD11.XBOOK (PREVISBN, TITLE, DELDATE)
    VALUES (:OLD.ISBN, :OLD.TITLE, SYSDATE);
  ELSIF INSERTING THEN
    INSERT INTO JD11.NBOOK (ISBN, TITLE, ADDDATE)
    VALUES (:NEW.ISBN, :NEW.TITLE, SYSDATE);
  ELSIF UPDATING ('ISBN') THEN
    INSERT INTO JD11.CBOOK (OLDISBN, NEWISBN, TITLE, UP_DATE)
    VALUES (:OLD.ISBN, :NEW.ISBN, :NEW.TITLE, SYSDATE);
  ELSE /* UPDATE TO ANYTHING ELSE THAN ISBN */
    INSERT INTO JD11.UBOOK (ISBN, TITLE, UP_DATE)
    VALUES (:OLD.ISBN, :NEW.TITLE, SYSDATE);
  END IF;
END;

In this case we have specified that the trigger will be executed after any data change event on any affected row. Within the PL/SQL block body we can check which update action is being performed for the currently affected row and take whatever action we feel is appropriate. Note that we can specify the old and new values of updated rows by prefixing column names with the
:OLD and :NEW qualifiers. ------------------------------------------------------
--------------------------
The following statement creates a trigger for the Emp_tab table:

CREATE OR REPLACE TRIGGER Print_salary_changes
BEFORE DELETE OR INSERT OR UPDATE ON Emp_tab
FOR EACH ROW
WHEN (new.Empno > 0)
DECLARE
  sal_diff number;
BEGIN
  sal_diff := :new.sal - :old.sal;
  dbms_output.put('Old salary: ' || :old.sal);
  dbms_output.put(' New salary: ' || :new.sal);
  dbms_output.put_line(' Difference ' || sal_diff);
END;
/

If you enter a SQL statement, such as the following:

UPDATE Emp_tab SET sal = sal + 500.00 WHERE deptno = 10;

Then, the trigger fires once for each row that is updated, and it prints the new and old salaries, and the difference.

CREATE OR REPLACE TRIGGER "SALES".HENKILOROOLI_CHECK2
AFTER INSERT OR UPDATE OR DELETE ON AH_HENKILOROOLI
BEGIN
  IF INSERTING OR DELETING THEN
    handle_delayed_triggers('AH_HENKILOROOLI', 'HENKILOROOLI_CHECK');
  END IF;
  IF INSERTING OR UPDATING OR DELETING THEN
    handle_delayed_triggers('AH_HENKILOROOLI', 'FRONTEND_FLAG');
  END IF;
END; A trigger is either a stored PL/SQL block or a PL/SQL, C, or Java procedure
associated with a table, view, schema, or the database itself. Oracle automatic
ally executes a trigger when a specified event takes place, which may be in the
form of a system event or a DML statement being issued against the table. Trigge
rs can be: -DML triggers on tables. -INSTEAD OF triggers on views. -System trigg
ers on DATABASE or SCHEMA: With DATABASE, triggers fire for each event for all u
sers; with SCHEMA, triggers fire for each event
for that specific user. BEFORE and AFTER Options The BEFORE or AFTER option in t
he CREATE TRIGGER statement specifies exactly when to fire the trigger body in r
elation to the triggering statement that is being run. In a CREATE TRIGGER state
ment, the BEFORE or AFTER option is specified just before the triggering stateme
nt. For example, the PRINT_SALARY_CHANGES trigger in the previous example is a B
EFORE trigger. INSTEAD OF Triggers The INSTEAD OF option can also be used in tri
ggers. INSTEAD OF triggers provide a transparent way of modifying views that can
not be modified directly through UPDATE, INSERT, and DELETE statements. These tr
iggers are called INSTEAD OF triggers because, unlike other types of triggers, O
racle fires the trigger instead of executing the triggering statement. The trigg
er performs UPDATE, INSERT, or DELETE operations directly on the underlying tabl
es.
CREATE TABLE Project_tab (
  Prj_level  NUMBER,
  Projno     NUMBER,
  Resp_dept  NUMBER);

CREATE TABLE Emp_tab (
  Empno      NUMBER NOT NULL,
  Ename      VARCHAR2(10),
  Job        VARCHAR2(9),
  Mgr        NUMBER(4),
  Hiredate   DATE,
  Sal        NUMBER(7,2),
  Comm       NUMBER(7,2),
  Deptno     NUMBER(2) NOT NULL);

CREATE TABLE Dept_tab (
  Deptno     NUMBER(2) NOT NULL,
  Dname      VARCHAR2(14),
  Loc        VARCHAR2(13),
  Mgr_no     NUMBER,
  Dept_type  NUMBER);
The following example shows an INSTEAD OF trigger for inserting rows into the MA
NAGER_INFO view. CREATE OR REPLACE VIEW manager_info AS SELECT e.ename, e.empno,
d.dept_type, d.deptno, p.prj_level, p.projno FROM Emp_tab e, Dept_tab d, Projec
t_tab p WHERE e.empno = d.mgr_no
AND
d.deptno = p.resp_dept;
CREATE OR REPLACE TRIGGER manager_info_insert INSTEAD OF INSERT ON manager_info
REFERENCING NEW AS n -- new manager information FOR EACH ROW DECLARE rowcnt numb
er; BEGIN SELECT COUNT(*) INTO rowcnt FROM Emp_tab WHERE empno = :n.empno; IF ro
wcnt = 0 THEN INSERT INTO Emp_tab (empno,ename) VALUES (:n.empno, :n.ename); ELS
E UPDATE Emp_tab SET Emp_tab.ename = :n.ename WHERE Emp_tab.empno = :n.empno; EN
D IF; SELECT COUNT(*) INTO rowcnt FROM Dept_tab WHERE deptno = :n.deptno; IF row
cnt = 0 THEN INSERT INTO Dept_tab (deptno, dept_type) VALUES(:n.deptno, :n.dept_
type); ELSE UPDATE Dept_tab SET Dept_tab.dept_type = :n.dept_type WHERE Dept_tab
.deptno = :n.deptno; END IF; SELECT COUNT(*) INTO rowcnt FROM Project_tab WHERE
Project_tab.projno = :n.projno; IF rowcnt = 0 THEN INSERT INTO Project_tab (proj
no, prj_level) VALUES(:n.projno, :n.prj_level); ELSE UPDATE Project_tab SET Proj
ect_tab.prj_level = :n.prj_level WHERE Project_tab.projno = :n.projno; END IF; E
ND;
FOR EACH ROW Option The FOR EACH ROW option determines whether the trigger is a
row trigger or a statement trigger. If you specify FOR EACH ROW, then the trigge
r fires once for each row of the table that is affected by the triggering statem
ent. The absence of the FOR EACH ROW option indicates that the trigger fires onl
y once for each applicable statement, but not separately for each row affected b
y the statement. For example, you define the following trigger: ----------------
---------------------------------------------------------------Note: You may nee
d to set up the following data structures for certain examples to work:

CREATE TABLE Emp_log (
  Emp_id     NUMBER,
  Log_date   DATE,
  New_salary NUMBER,
  Action     VARCHAR2(20));
-------------------------------------------------------------------------------C
REATE OR REPLACE TRIGGER Log_salary_increase AFTER UPDATE ON Emp_tab FOR EACH RO
W WHEN (new.Sal > 1000) BEGIN INSERT INTO Emp_log (Emp_id, Log_date, New_salary,
Action) VALUES (:new.Empno, SYSDATE, :new.SAL, 'NEW SAL'); END; Then, you enter
the following SQL statement: UPDATE Emp_tab SET Sal = Sal + 1000.0 WHERE Deptno
= 20; If there are five employees in department 20, then the trigger fires five
times when this statement is entered, because five rows are affected. The follo
wing trigger fires only once for each UPDATE of the Emp_tab table: CREATE OR REP
LACE TRIGGER Log_emp_update AFTER UPDATE ON Emp_tab BEGIN INSERT INTO Emp_log (L
og_date, Action) VALUES (SYSDATE, 'Emp_tab COMMISSIONS CHANGED'); END; Trigger S
ize The size of a trigger cannot be more than 32K. Valid SQL Statements in Trigg
er Bodies The body of a trigger can contain DML SQL statements. It can also cont
ain SELECT statements, but they must be SELECT... INTO... statements or the SELE
CT statement in the definition of a cursor. DDL statements are not allowed in th
e body of a trigger. Also, no transaction control statements are allowed in a tr
igger. ROLLBACK, COMMIT, and SAVEPOINT cannot be used. For system triggers, {CREA
TE/ALTER/DROP} TABLE statements and ALTER...COMPILE are allowed. Recompiling Tri
ggers Use the ALTER TRIGGER statement to recompile a trigger manually. For examp
le, the following statement recompiles the PRINT_SALARY_CHANGES trigger:
ALTER TRIGGER Print_salary_changes COMPILE;

Disable/enable a trigger:

ALTER TRIGGER Reorder DISABLE;
ALTER TRIGGER Reorder ENABLE;

Or in one statement for all triggers on a table:

ALTER TABLE Inventory DISABLE ALL TRIGGERS;
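To check which triggers exist on a table and whether they are enabled, the dictionary can be queried, for example:

  SELECT trigger_name, triggering_event, status
  FROM   user_triggers
  WHERE  table_name = 'INVENTORY';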
ALTER DATABASE rename GLOBAL_NAME TO NEW_NAME;
==================================== 19 BACKUP RECOVERY, TROUBLESHOOTING: ======
==============================
19.1 SCN:
---------
The control files and all datafiles contain the last SCN (System Change Number) after a checkpoint, for example via ALTER SYSTEM CHECKPOINT, a shutdown normal/immediate/transactional, or a log switch (initiated by the system or via ALTER SYSTEM SWITCH LOGFILE), ALTER TABLESPACE ... BEGIN BACKUP, etc..

At a checkpoint the following occurs:
-------------------------------------
- The database writer (DBWR) writes all modified database blocks in the buffer cache back to the datafiles, and
- the log writer (LGWR) or checkpoint process (CKPT) updates both the controlfile and the datafiles to indicate when the last checkpoint occurred (SCN).

Log switching causes a checkpoint, but a checkpoint does not cause a logswitch.
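To see the current checkpoint SCN as recorded in the controlfile and in the datafile headers, something like the following can be used (standard v$ views):

  ALTER SYSTEM CHECKPOINT;

  SELECT checkpoint_change# FROM v$database;

  SELECT file#, checkpoint_change#,
         to_char(checkpoint_time,'DD-MM-YYYY HH24:MI:SS') checkpoint_time
  FROM   v$datafile_header;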
LGWR writes logbuffers to the online redo log:
----------------------------------------------
- at commit
- when the redo log buffer is 1/3 full, or holds more than 1 MB of changes
- before DBWR writes modified blocks to datafiles

LOG_CHECKPOINT_INTERVAL init.ora parameter:
------------------------------------------The LOG_CHECKPOINT_INTERVAL init.ora p
arameter controls how often a checkpoint operation will be performed based upon
the number of operating system blocks that have been written to the redo log. If
this value is larger than the size of the redo log, then the checkpoint will on
ly occur when Oracle performs a log switch FROM one group to another, which is p
referred. NOTE: Starting with Oracle 8.1, LOG_CHECKPOINT_INTERVAL will be interp
reted to mean that the incremental checkpoint should not lag the tail of the log
by more than log_checkpoint_interval number of redo blocks. On most Unix system
s the operating system block size is 512 bytes. This means that setting LOG_CHEC
KPOINT_INTERVAL to a value of 10,000 (the default setting), causes a checkpoint
to occur after 5,120,000 (5M) bytes are written to the redo log. If the size of
your redo log is 20M, you are taking 4 checkpoints for each log. LOG_CHECKPOINT_
TIMEOUT init.ora parameter: -----------------------------------------The LOG_CHE
CKPOINT_TIMEOUT init.ora parameter controls how often a checkpoint will be perfo
rmed based on the number of seconds that have passed since the last checkpoint.
NOTE: Starting with Oracle 8.1, LOG_CHECKPOINT_TIMEOUT will be interpreted to me
an that the incremental checkpoint should be at the log position WHERE the tail
of the log was LOG_CHECKPOINT_TIMEOUT seconds ago. Checkpoint frequency impacts
the time required for the database to recover FROM an unexpected failure. Longer
intervals between checkpoints mean that more time will be required during datab
ase recovery. LOG_CHECKPOINTS_TO_ALERT init.ora parameter: ---------------------
----------------------The LOG_CHECKPOINTS_TO_ALERT init.ora parameter, when set
to a value of TRUE, allows you to log checkpoint start and stop times in the ale
rt log. This is very helpful in determining if checkpoints are occurring at the
optimal frequency and gives a chronological view of checkpoints and other databa
se activities occurring in the background. It is a misconception that setting LO
G_CHECKPOINT_TIMEOUT to a given value will initiate a log switch at that interva
l, enabling a recovery window used for a stand-by database configuration. Log sw
itches cause a checkpoint, but a checkpoint does not cause a log switch. The onl
y way to cause a log switch is manually with ALTER SYSTEM SWITCH LOGFILE or resi
zing the redo logs to cause more frequent log switches.

FAST_START_MTTR_TARGET init.ora parameter:
-------------------------------------FAST_START_MTTR_TARGET enables you to speci
fy the number of seconds the database takes to perform crash recovery of a singl
e instance. It is the number of seconds it takes to recover FROM crash recovery.
The lower the value, the more often DBWR will write the blocks to disk. FAST_ST
ART_MTTR_TARGET can be overridden by either FAST_START_IO_TARGET or LOG_CHECKPOI
NT_INTERVAL.
FAST_START_IO_TARGET init.ora paramater: ---------------------------------------
FAST_START_IO_TARGET (available only with the Oracle Enterprise Edition) specifi
es the number of I/Os that should be needed during crash or instance recovery. S
maller values for this parameter result in faster recovery times. This improveme
nt in recovery performance is achieved at the expense of additional writing acti
vity during normal processing. ARCHIVE_LAG_TARGET init.ora parameter: ----------
---------------------------The following initialization parameter setting sets t
he log switch interval to 30 minutes (a typical value). ARCHIVE_LAG_TARGET = 180
0
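A quick way to see how the instance is doing against FAST_START_MTTR_TARGET is the V$INSTANCE_RECOVERY view; a small sketch (the target value 300 is just an example):

  ALTER SYSTEM SET fast_start_mttr_target=300;

  SELECT target_mttr, estimated_mttr, recovery_estimated_ios
  FROM   v$instance_recovery;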
Note: More on SCN: ================== >>>> thread from asktom You Asked Tom, Wou
ld you tell me what snapshot too old error. When does it happen? What's the poss
ible causes? How to fix it? Thank you very much. Jane and we said... I think sup
port note <Note:40689.1> covers this topic very well: ORA-01555 "Snapshot too ol
d" - Detailed Explanation =================================================== Ov
erview ~~~~~~~~ This article will discuss the circumstances under which a query
can return the Oracle error ORA-01555 "snapshot too old (rollback segment too sm
all)". The article will then proceed to discuss actions that can be taken to avo
id the error and finally will provide
some simple PL/SQL scripts that illustrate the issues discussed. Terminology ~~~
~~~~~~~~ It is assumed that the reader is familiar with standard Oracle terminol
ogy such as 'rollback segment' and 'SCN'. If not, the reader should first read t
he Oracle Server Concepts manual and related Oracle documentation. In addition t
o this, two key concepts are briefly covered below which help in the understandi
ng of ORA-01555: 1. READ CONSISTENCY: ==================== This is documented in
the Oracle Server Concepts manual and so will not be discussed further. However
, for the purposes of this article this should be read and understood if not und
erstood already. Oracle Server has the ability to have multi-version read consis
tency which is invaluable to you because it guarantees that you are seeing a con
sistent view of the data (no 'dirty reads'). 2. DELAYED BLOCK CLEANOUT: ========
================== This is best illustrated with an example: Consider a transact
ion that updates a million row table. This obviously visits a large number of da
tabase blocks to make the change to the data. When the user commits the transact
ion Oracle does NOT go back and revisit these blocks to make the change permanen
t. It is left for the next transaction that visits any block affected by the upd
ate to 'tidy up' the block (hence the term 'delayed block cleanout'). Whenever O
racle changes a database block (index, table, cluster) it stores a pointer in th
e header of the data block which identifies the rollback segment used to hold th
e rollback information for the changes made by the transaction. (This is require
d if the user later elects to not commit the changes and wishes to 'undo' the ch
anges made.) Upon commit, the database simply marks the relevant rollback segmen
t header entry as committed. Now, when one of the changed blocks is revisited Or
acle examines the
header of the data block which indicates that it has been changed at some point.
The database needs to confirm whether the change has been committed or whether
it is currently uncommitted. To do this, Oracle determines the rollback segment
used for the previous transaction (from the block's header) and then determines
whether the rollback header indicates whether it has been committed or not. If i
t is found that the block is committed then the header of the data block is upda
ted so that subsequent accesses to the block do not incur this processing. This
behaviour is illustrated in a very simplified way below. Here we walk through th
e stages involved in updating a data block. STAGE 1 - No changes made Descriptio
n: This is the starting point. At the top of the data block we have an area used
to link active transactions to a rollback segment (the 'tx' part), and the roll
back segment header has a table that stores information upon all the latest tran
sactions that have used that rollback segment. In our example, we have two activ
e transaction slots (01 and 02) and the next free slot is slot 03. (Since we are
free to overwrite committed transactions.) Data Block 500 +----+--------------+
| tx | None | +----+--------------+ | row 1 | | row 2 | | ... .. | | row n | +-
------------------+ Rollback Segment Header 5 +----------------------+---------+
| transaction entry 01 |ACTIVE | | transaction entry 02 |ACTIVE | | transaction
entry 03 |COMMITTED| | transaction entry 04 |COMMITTED| | ... ... .. | ... | |
transaction entry nn |COMMITTED| +--------------------------------+
STAGE 2 - Row 2 is updated Description: We have now updated row 2 of block 500.
Note that the data block header is updated to point to the rollback segment 5, t
ransaction slot 3 (5.3) and that it is marked uncommitted (Active). Data Block 5
00 Rollback Segment Header 5 +----+--------------+ +----------------------+-----
----+ | tx |5.3uncommitted|-+ | transaction entry 01 |ACTIVE | +----+-----------
---+ | | transaction entry 02 |ACTIVE | | row 1 | +-->| transaction entry 03 |AC
TIVE | | row 2 *changed* | | transaction entry 04 |COMMITTED|
| ... .. | | row n | +------------------+
| ... ... .. | ... | | transaction entry nn |COMMITTED| +-----------------------
---------+
STAGE 3 - The user issues a commit Description: Next the user hits commit. Note
that all that this does is it updates the rollback segment header's correspondin
g transaction slot as committed. It does *nothing* to the data block. Data Block
500 Rollback Segment Header 5 +----+--------------+ +----------------------+---
------+ | tx |5.3uncommitted|--+ | transaction entry 01 |ACTIVE | +----+--------
------+ | | transaction entry 02 |ACTIVE | | row 1 | +--->| transaction entry 03
|COMMITTED| | row 2 *changed* | | transaction entry 04 |COMMITTED| | ... .. | |
... ... .. | ... | | row n | | transaction entry nn |COMMITTED| +--------------
----+ +--------------------------------+ STAGE 4 - Another user selects data blo
ck 500 Description: Some time later another user (or the same user) revisits dat
a block 500. We can see that there is an uncommitted change in the data block ac
cording to the data block's header. Oracle then uses the data block header to lo
ok up the corresponding rollback segment transaction table slot, sees that it ha
s been committed, and changes data block 500 to reflect the true state of the da
tablock. (i.e. it performs delayed cleanout).

Data Block 500
+----+--------------+
| tx | None         |
+----+--------------+
| row 1             |
| row 2             |
| ... ..            |
| row n             |
+--------------------+

Rollback Segment Header 5
+----------------------+---------+
| transaction entry 01 |ACTIVE   |
| transaction entry 02 |ACTIVE   |
| transaction entry 03 |COMMITTED|
| transaction entry 04 |COMMITTED|
| ...     ...       .. | ...     |
| transaction entry nn |COMMITTED|
+--------------------------------+

ORA-01555 Explanation
~~~~~~~~~~~~~~~~~~~~~
There are two fundamental causes of the error ORA-01555 that are a result of Oracle trying to attain a 'read consistent' image. These are:

o The rollback information itself is overwritten so that Oracle is unable to rollback the (committed) transaction entries to attain a sufficiently old enough version of
the block. o The transaction slot in the rollback segment's transaction table (s
tored in the rollback segment's header) is overwritten, and Oracle cannot rollba
ck the transaction header sufficiently to derive the original rollback segment t
ransaction slot. Note: If the transaction of User A is not committed, the rollba
ck segment entries will NOT be reused, but if User A commits, the entries become
free for reuse, and if a query of User B takes a lot of time, and "meet" those
overwritten entries, user B gets an error. Both of these situations are discusse
d below with the series of steps that cause the ORA-01555. In the steps, referen
ce is made to 'QENV'. 'QENV' is short for 'Query Environment', which can be thou
ght of as the environment that existed when a query is first started and to whic
h Oracle is trying to attain a read consistent image. Associated with this envir
onment is the SCN (System Change Number) at that time and hence, QENV 50 is the
query environment with SCN 50. CASE 1 - ROLLBACK OVERWRITTEN This breaks down in
to two cases: another session overwriting the rollback that the current session
requires or the case where the current session overwrites the rollback informati
on that it requires. The latter is discussed in this article because this is usu
ally the harder one to understand. Steps: 1. Session 1 starts query at time T1 a
nd QENV 50 2. Session 1 selects block B1 during this query 3. Session 1 updates
the block at SCN 51 4. Session 1 does some other work that generates rollback in
formation. 5. Session 1 commits the changes made in steps '3' and '4'. (Now othe
r transactions are free to overwrite this rollback information) 6. Session 1 rev
isits the same block B1 (perhaps for a different row). Now, Oracle can see from
the block's header that it has been changed and it is later than the required QE
NV (which was 50). Therefore we need to get an image of the block as of this QEN
V.
If an old enough version of the block can be found in the buffer cache then we w
ill use this, otherwise we need to rollback the current block to generate anothe
r version of the block as at the required QENV. It is under this condition that
Oracle may not be able to get the required rollback information because Session
1's changes have generated rollback information that has overwritten it and retu
rns the ORA-1555 error. CASE 2 - ROLLBACK TRANSACTION SLOT OVERWRITTEN 1. Sessio
n 1 starts query at time T1 and QENV 50 2. Session 1 selects block B1 during thi
s query 3. Session 1 updates the block at SCN 51 4. Session 1 commits the change
s (Now other transactions are free to overwrite this rollback information) 5. A
session (Session 1, another session or a number of other sessions) then use the
same rollback segment for a series of committed transactions. These transactions
each consume a slot in the rollback segment transaction table such that it even
tually wraps around (the slots are written to in a circular fashion) and overwri
tes all the slots. Note that Oracle is free to reuse these slots since all trans
actions are committed. 6. Session 1's query then visits a block that has been ch
anged since the initial QENV was established. Oracle therefore needs to derive a
n image of the block as at that point in time. Next Oracle attempts to lookup th
e rollback segment header's transaction slot pointed to by the top of the data b
lock. It then realises that this has been overwritten and attempts to rollback t
he changes made to the rollback segment header to get the original transaction s
lot entry. If it cannot rollback the rollback segment transaction table sufficie
ntly it will return ORA-1555 since Oracle can no longer derive the required vers
ion of the data block. It is also possible to encounter a variant of the transac
tion slot being overwritten when using block cleanout. This is briefly described
below :
Session 1 starts a query at QENV 50. After this another process updates the bloc
ks that Session 1 will require. When Session 1 encounters these blocks it determ
ines that the blocks have changed and have not yet been cleaned out (via delayed
block cleanout). Session 1 must determine whether the rows in the block existed
at QENV 50 or were subsequently changed. In order to do this, Oracle must look at
the relevant rollback segment transaction table slot to determine the committed
SCN. If this SCN is after the QENV then Oracle must try to construct an older v
ersion of the block and if it is before then the block just needs clean out to b
e good enough for the QENV. If the transaction slot has been overwritten and the
transaction table cannot be rolled back to a sufficiently old enough version th
en Oracle cannot derive the block image and will return ORA-1555. (Note: Normall
y Oracle can use an algorithm for determining a block's SCN during block cleanou
t even when the rollback segment slot has been overwritten. But in this case Ora
cle cannot guarantee that the version of the block has not changed since the sta
rt of the query). Solutions ~~~~~~~~~ This section lists some of the solutions t
hat can be used to avoid the ORA-01555 problems discussed in this article. It ad
dresses the cases where rollback segment information is overwritten by the same
session and when the rollback segment transaction table entry is overwritten. It
is worth highlighting that if a single session experiences the ORA-01555 and it
is not one of the special cases listed at the end of this article, then the ses
sion must be using an Oracle extension whereby fetches across commits are tolera
ted. This does not follow the ANSI model and in the rare cases where ORA-01555 i
s returned one of the solutions below must be used. CASE 1 - ROLLBACK OVERWRITTE
N 1. Increase size of rollback segment which will reduce the likelihood of overw
riting rollback information that is needed. 2. Reduce the number of commits (sam
e reason as 1).
3. Run the processing against a range of data rather than the whole table. (Same
reason as 1). 4. Add additional rollback segments. This will allow the updates
etc. to be spread across more rollback segments thereby reducing the chances of
overwriting required rollback information. 5. If fetching across commits, the co
de can be changed so that this is not done. 6. Ensure that the outer select does
not revisit the same block at different times during the processing. This can b
e achieved by : - Using a full table scan rather than an index lookup - Introduc
ing a dummy sort so that we retrieve all the data, sort it and
then sequentially visit these data blocks.
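A minimal sketch of point 6, assuming a hypothetical driving table BIG_TAB with
key column ID (the hint and the dummy ORDER BY only illustrate the idea; they
are not a prescribed method):

select /*+ FULL(t) */ id, col1, col2
from   big_tab t
where  status = 'N'
order by id;     -- dummy sort: all blocks are read (and sorted) once,
                 -- so the cursor does not revisit data blocks later on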
CASE 2 - ROLLBACK TRANSACTION SLOT OVERWRITTEN 1. Use any of the methods outline
d above except for '6'. This will allow transactions to spread their work across
multiple rollback segments therefore reducing the likelihood or rollback segmen
t transaction table slots being consumed. 2. If it is suspected that the block c
leanout variant is the cause, then force block cleanout to occur prior to the tr
ansaction that returns the ORA-1555. This can be achieved by issuing the followi
ng in SQL*Plus, SQL*DBA or Server Manager : alter session set optimizer_goal = r
ule; select count(*) from table_name; If indexes are being accessed then the pro
blem may be an index block and clean out can be forced by ensuring that all the
index is traversed. Eg, if the index is on a numeric column with a minimum value
of 25 then the following query will force cleanout of the index : select index_
column from table_name where index_column > 24; Examples ~~~~~~~~ Listed below a
re some PL/SQL examples that can be used to illustrate the ORA-1555 cases given
above. Before these PL/SQL examples will return this error the database must be
configured as follows :
o Use a small buffer cache (db_block_buffers). REASON: You do not want the sessi
on executing the script to be able to find old versions of the block in the buff
er cache which can be used to satisfy a block visit without requiring the rollba
ck information. o Use one rollback segment other than SYSTEM. REASON: You need t
o ensure that the work being done is generating rollback information that will o
verwrite the rollback information required. o Ensure that the rollback segment i
s small. REASON: See the reason for using one rollback segment. ROLLBACK OVERWRI
TTEN

rem * 1555_a.sql
rem * Example of getting ora-1555 "Snapshot too old" by session
rem * overwriting the rollback information required by the same session.

drop table bigemp;
create table bigemp (a number, b varchar2(30), done char(1));
drop table dummy1;
create table dummy1 (a varchar2(200));

rem * Populate the example tables.
begin
  for i in 1..4000 loop
    insert into bigemp values (mod(i,20), to_char(i), 'N');
    if mod(i,100) = 0 then
      insert into dummy1 values ('ssssssssssss');
      commit;
    end if;
  end loop;
  commit;
end;
/

rem * Ensure that table is 'cleaned out'.
select count(*) from bigemp;

declare
  -- Must use a predicate so that we revisit a changed block at a different time.
  -- If another tx is updating the table then we may not need the predicate.
  cursor c1 is select rowid, bigemp.* from bigemp where a < 20;
begin
  for c1rec in c1 loop
    update dummy1 set a = 'aaaaaaaa';
    update dummy1 set a = 'bbbbbbbb';
    update dummy1 set a = 'cccccccc';
    update bigemp set done = 'Y' where c1rec.rowid = rowid;
    commit;
  end loop;
end;
/

ROLLBACK TRANSACTION SLOT OVERWRITTEN

rem * 1555_b.sql - Example of getting ora-1555 "Snapshot too old" by
rem *              overwriting the transaction slot in the rollback
rem *              segment header. This just uses one session.

drop table bigemp;
create table bigemp (a number, b varchar2(30), done char(1));

rem * Populate demo table.
begin
  for i in 1..200 loop
    insert into bigemp values (mod(i,20), to_char(i), 'N');
    if mod(i,100) = 0 then
      commit;
    end if;
  end loop;
  commit;
end;
/

drop table mydual;
create table mydual (a number);
insert into mydual values (1);
commit;

rem * Cleanout demo table.
select count(*) from bigemp;

declare
  cursor c1 is select * from bigemp;
begin
  -- The following update is required to illustrate the problem if block
  -- cleanout has been done on 'bigemp'. If the cleanout (above) is commented
  -- out then the update and commit statements can be commented and the
  -- script will fail with ORA-1555 for the block cleanout variant.
  update bigemp set b = 'aaaaa';
  commit;

  for c1rec in c1 loop
    for i in 1..20 loop
      update mydual set a = a;
      commit;
    end loop;
  end loop;
end;
/
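On 9i/10g with automatic undo management, the practical counterpart of these
manual-rollback fixes is to size the undo tablespace and UNDO_RETENTION for the
longest running query. A minimal sketch (the retention value is only an example,
and SCOPE=BOTH assumes an spfile is in use):

-- How much undo the workload generates and the longest query seen, per 10-minute bucket:
select begin_time, end_time, undoblks, maxquerylen, ssolderrcnt
from   v$undostat
order by begin_time;

-- ssolderrcnt > 0 means ORA-01555 occurred in that interval.
alter system set undo_retention = 3600 scope=both;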
Special Cases ~~~~~~~~~~~~~ There are other special cases that may result in an
ORA-01555. These are given below but are rare and so not discussed in this artic
le : o Trusted Oracle can return this if configured in OS MAC mode. Decreasing L
OG_CHECKPOINT_INTERVAL on the secondary database may overcome the problem. o If
a query visits a data block that has been changed by using the Oracle discrete t
ransaction facility then it will return ORA-01555. o It is feasible that a rollb
ack segment created with the OPTIMAL clause may cause a query to return ORA-01555
if it has shrunk during the life of the query causing rollback segment informat
ion required to generate consistent read versions of blocks to be lost. Summary
~~~~~~~ This article has discussed the reasons behind the error ORA-01555 "Snaps
hot too old", has provided a list of possible methods to avoid the error when it
is encountered, and has provided simple PL/SQL scripts that illustrate the case
s discussed.
>>>>> thread about SCN Do It Yourself (DIY) Oracle replication Here's a demonstr
ation. First I create a simple table, called TBL_SRC. This is the table on which
we want to perform change-data-capture (CDC).

create table tbl_src
( x number primary key,
  y number
);

Next, I show a couple of CDC tables, and the trigger on TBL_SRC that will load
the CDC tables.

create table trx
( trx_id   varchar2(25) primary key,
  scn      number,
  username varchar2(30)
);

create table trx_detail
( trx_id    varchar(25),
  step_id   number,
  step_tms  date,
  old_x     number,
  old_y     number,
  new_x     number,
  new_y     number,
  operation char(1)
);

alter table trx_detail add constraint xp_trx_detail
  primary key (trx_id, step_id);

create or replace trigger b4_src
before insert or update or delete on tbl_src
for each row
DECLARE
  l_trx_id  VARCHAR2(25);
  l_step_id NUMBER;
BEGIN
  BEGIN
    l_trx_id  := dbms_transaction.local_transaction_id;
    l_step_id := dbms_transaction.step_id;
    INSERT INTO trx VALUES (l_trx_id, userenv('COMMITSCN'), USER);
  EXCEPTION
    WHEN dup_val_on_index THEN NULL;
  END;
  INSERT INTO trx_detail (trx_id, step_id, step_tms, old_x, old_y, new_x, new_y)
  VALUES (l_trx_id, l_step_id, SYSDATE, :OLD.x, :OLD.y, :NEW.x, :NEW.y);
END;
/
Let's see the magic in action. I'll insert a record. We'll see the 'provisional'
SCN in the TRX table. Then we'll commit, and see the 'true'/post-commit SCN: in
sert into tbl_src values ( 1, 1 ); 1 row created. select * from trx; TRX_ID SCN
USERNAME ------------------------- ---------- ------------------3.4.33402 373293
1665 CIDW commit; Commit complete. select * from trx;
TRX_ID SCN USERNAME ------------------------- ---------- ------------------3.4.3
3402 3732931668 CIDW Notice how the SCN "changed" from 3732931665 to 3732931668.
Oracle was doing some background transactions in between. And we can look at th
e details of the transaction: column step_id format 999,999,999,999,999,999,999;
/ TRX_ID STEP_ID STEP_TMS OLD_X OLD_Y NEW_X NEW_Y O ------------------------- -
--------------------------- --------- ------------------- ---------- ----------
3.4.33402 4,366,162,821,393,448 11-NOV-06 1 1 This approach works back to at lea
st Oracle 7.3.4. Not perfect, because it only captures DML. A TRUNCATE is DDL, a
nd that's not captured. For the actual implementation, I stored the before and a
fter values as CSV strings. For 9i or later, I'd use built-in Oracle functionali
ty.
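A small follow-up sketch (not part of the original thread) showing how the
captured changes could be read back by joining the two CDC tables created above:

select t.trx_id, t.scn, t.username,
       d.step_id, d.step_tms, d.old_x, d.old_y, d.new_x, d.new_y
from   trx t, trx_detail d
where  d.trx_id = t.trx_id
order by t.scn, d.step_id;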
19.2 init.ora parameters and ARCHIVE MODE:
-------------------------------------------
LOG_ARCHIVE_DEST=/oracle/admin/cc1/arch
LOG_ARCHIVE_DEST_1=d:\oracle\oradata\arc
LOG_ARCHIVE_START=TRUE
LOG_ARCHIVE_FORMAT=arc_%s.log
LOG_ARCHIVE_DEST_1=
LOG_ARCHIVE_DEST_2=
LOG_ARCHIVE_MAX_PROCESSES=2

19.3 Enabling or disabling archive mode:
-----------------------------------------
ALTER DATABASE ARCHIVELOG      (mounted, not open)
ALTER DATABASE NOARCHIVELOG    (mounted, not open)

19.4 Implementation backup in archive mode via OS script:
----------------------------------------------------------
19.4.1 OS backup script in unix
-------------------------------
###############################################
# Example archive log backup script in UNIX:  #
###############################################
# Set up the environment to point to the correct database
ORACLE_SID=CC1;  export ORACLE_SID
ORAENV_ASK=NO;   export ORAENV_ASK
. oraenv

# Backup the tablespaces
svrmgrl <<EOFarch1
connect internal
alter tablespace SYSTEM begin backup;
! tar -cvf /dev/rmt/0hc /u01/oradata/sys01.dbf
alter tablespace SYSTEM end backup;
alter tablespace DATA begin backup;
! tar -rvf /dev/rmt/0hc /u02/oradata/data01.dbf
alter tablespace DATA end backup;
etc .. ..
# Now we backup the archived redo logs before we delete them.
# We must briefly stop the archiving process in order that
# we do not miss the latest files for sure.
archive log stop;
exit
EOFarch1

# Get a listing of all archived files.
FILES=`ls /db01/oracle/arch/cc1/arch*.dbf`; export FILES

# Start archiving again
svrmgrl <<EOFarch2
connect internal
archive log start;
exit
EOFarch2

# Now backup the archived files to tape
tar -rvf /dev/rmt/0hc $FILES

# Delete the backed-up archived files
rm -f $FILES

# Backup the control file
svrmgrl <<EOFarch3
connect internal
alter database backup controlfile to '/db01/oracle/cc1/cc1controlfile.bck';
exit
EOFarch3
tar -rvf /dev/rmt/0hc /db01/oracle/cc1/cc1controlfile.bck

###############################
# End backup script example   #
###############################

19.5 Tablespaces and datafiles online/offline in non-archive and archive mode:
-------------------------------------------------------------------------------
Tablespace:
A tablespace can be taken offline in both archive mode and non-archive mode
without media recovery being needed, provided the NORMAL clause is used:
alter tablespace ... offline normal;
With the IMMEDIATE clause, recovery IS needed.

Datafile:
A datafile can be taken offline in archive mode. When the datafile is brought
back online, media recovery must be applied first. A datafile cannot be taken
offline in non-archive mode.

Backup mode:
When you issue ALTER TABLESPACE .. BEGIN BACKUP, it freez
es the datafile header. This is so that we know what redo logs we need to apply
to a given file to make it consistent. While you are backing up that file hot, w
e are still writing to it -- it is logically inconsistent. Some of the backed up
blocks could be from the SCN in place at the time the backup began -- others fr
om the time it ended and others from various points in between.

19.6 Recovery in archive mode:
------------------------------
19.6.1: Recovery when a current controlfile exists
===================================================
Media recovery after the loss of datafile(s) and the like is normally driven by
the SCN information in the controlfile.

A1: complete recovery:
----------------------
RECOVER DATABASE             (database not open)
RECOVER TABLESPACE DATA      (database open, except this tablespace)
RECOVER DATAFILE 5           (database open, except this datafile)

A2: incomplete recovery:
------------------------
time based:    recover database until time '1999-12-31:23.40.00'
cancel based:  recover database until cancel
change based:  recover database until change 60747681;

With both kinds of recovery the archived redo logs are applied. Always finish an
incomplete recovery with "alter database open resetlogs;" so that the stale
entries are purged from the online redo files.

19.6.2: Recovery without a current controlfile
===============================================
This is media recovery when no current controlfile exists. The controlfile then
contains an SCN that is too old compared to the SCNs in the archived redo logs.
You have to tell Oracle this via

  RECOVER DATABASE UNTIL CANCEL USING BACKUP CONTROLFILE;

Specifying "using backup controlfile" is effectively telling Oracle that you've
lost your controlfile, and thus SCNs in file headers cannot be compared to
anything. So Oracle will happily keep applying archives until you tell it to
stop (or run out).
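A minimal sketch of such an incomplete recovery session, assuming the datafiles
have already been restored from a hot backup (the timestamp is only an example):

STARTUP MOUNT;
RECOVER DATABASE UNTIL TIME '2008-05-23:14:00:00' USING BACKUP CONTROLFILE;
-- supply the archived logs as prompted, or type CANCEL to stop early
ALTER DATABASE OPEN RESETLOGS;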
19.7 Queries to find the SCN:
-----------------------------
Every redo log is associated with a high and a low SCN. SCNs can be found in
V$LOG_HISTORY, V$ARCHIVED_LOG, V$DATABASE, V$DATAFILE_HEADER and V$DATAFILE.

Queries:
--------
SELECT file#, substr(name, 1, 30), status, checkpoint_change#          -- SCN from the controlfile
FROM V$DATAFILE;

SELECT file#, substr(name, 1, 30), status, fuzzy, checkpoint_change#   -- SCN from the file header
FROM V$DATAFILE_HEADER;

SELECT first_change#, next_change#, sequence#, archived, substr(name, 1, 40)
FROM V$ARCHIVED_LOG;

SELECT recid, first_change#, sequence#, next_change#
FROM V$LOG_HISTORY;

SELECT resetlogs_change#, checkpoint_change#, controlfile_change#, open_resetlogs
FROM V$DATABASE;

SELECT * FROM V$RECOVER_FILE;   -- Which file needs recovery

Find the latest archived redologs:

SELECT name
FROM v$archived_log
WHERE sequence# = (SELECT max(sequence#) FROM v$archived_log
                   WHERE 1699499 >= first_change#);

sequence#          : the number of the archived redo log
first_change#      : first SCN in the archived redo log
next_change#       : last SCN in the archived redo log, and the first SCN of the next log
checkpoint_change# : latest actual SCN
FUZZY              : Y/N; if YES, the file contains changes that are later than the SCN in the header.

A datafile that contains a block whose SCN is more recent than the SCN of its
header is called a fuzzy datafile.

19.8 Archived redo logs needed for recovery:
--------------------------------------------
V$RECOVERY_LOG lists the archived logs that are needed for a recovery. You can
also use V$RECOVER_FILE to determine which files need to be recovered.

SELECT * FROM v$recover_file;

Here you find the FILE#, which you can then use against v$datafile and v$tablespace:

SELECT d.name, t.name
FROM v$datafile d, v$tablespace t
WHERE t.ts# = d.ts#
AND d.file# in (14,15,21);   # use values obtained FROM the V$RECOVER_FILE query

19.9 Example: recovery of one datafile:
---------------------------------------
Suppose one datafile is corrupt. Only that one file needs to be restored, after
which recovery is applied.

SVRMGRL>alter database datafile '/u01/db1/users01.dbf' offline;
$ cp /stage/users01.dbf /u01/db1
SVRMGRL>recover datafile '/u01/db1/users01.dbf';
(Oracle will come with a suggestion of which archived logfiles to apply)
SVRMGRL>alter database datafile '/u01/db1/users01.dbf' online;
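To see in advance which archived logs such a recovery will ask for (the
V$RECOVERY_LOG view mentioned in 19.8), a simple sketch:

SELECT thread#, sequence#, time, archive_name
FROM   v$recovery_log
ORDER BY sequence#;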
19.10 Example: recovery of the database:
----------------------------------------
Suppose multiple datafiles are lost. Restore the backup files first, then:

SVRMGRL>startup mount;
SVRMGRL>recover database;
(Oracle will apply the archived redo logfiles)
media recovery complete
SVRMGRL>alter database open;
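The same pattern can be applied per tablespace while the rest of the database
stays open; a sketch, assuming the DATA tablespace and the /stage restore
location from the example above:

SVRMGRL>alter tablespace DATA offline immediate;
$ cp /stage/data01.dbf /u02/oradata/data01.dbf
SVRMGRL>recover tablespace DATA;
SVRMGRL>alter tablespace DATA online;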
19.11 Restore to different disks:
---------------------------------
- alter database backup controlfile to trace;
- restore the files to the new location
- edit the controlfile with the new file locations
- save this as a .sql script and run it:  SVRMGRL>@new.sql

controlfile:

startup nomount
create controlfile reuse database "brdb" noresetlogs archivelog
    maxlogfiles 16
    maxlogmembers 2
    maxdatafiles 100
    maxinstances 1
    maxloghistory 226
logfile
    group 1 ('/disk03/db1/redo/redo01a.dbf', '/disk04/db1/redo/redo01b.dbf') size 2M,
    group 2 ('/disk03/db1/redo/redo02a.dbf', '/disk04/db1/redo/redo02b.dbf') size 2M
datafile
    '/disk04/oracle/db1/sys01.dbf',
    '/disk05/oracle/db1/rbs01.dbf',
    '/disk06/oracle/db1/data01.dbf',
    '/disk04/oracle/db1/index01.dbf'
character set 'us7ascii'
;
RECOVER DATABASE UNTIL CANCEL USING BACKUP CONTROLFILE;
ALTER DATABASE OPEN RESETLOGS;

19.12 Copy of a database to another server:
--------------------------------------------
1. Copy all files exactly from the one location to the other.
2. Source server: alter database backup controlfile to trace
3. Create a proper init.ora with references to the new server.
4. Edit the ASCII version of the controlfile from step 2 so that all disk
   locations point to the target:

STARTUP NOMOUNT
CREATE CONTROLFILE REUSE SET DATABASE "FSYS" RESETLOGS noARCHIVELOG
    MAXLOGFILES 8
    MAXLOGMEMBERS 4
    etc..
ALTER DATABASE OPEN resetlogs;

or

CREATE CONTROLFILE REUSE SET DATABASE "TEST" RESETLOGS ARCHIVELOG
..
#RECOVER DATABASE
ALTER DATABASE OPEN RESETLOGS;

or

CREATE CONTROLFILE REUSE DATABASE "PROD" NORESETLOGS ARCHIVELOG
..
..
RECOVER DATABASE
# All logs need archiving AND a log switch is needed.
ALTER SYSTEM ARCHIVE LOG ALL;
# Database can now be opened normally.
ALTER DATABASE OPEN;

5. SVRMGRL>@script

In case of problems: delete the original controlfiles and do not use REUSE.

Example create controlfile:
---------------------------
If you want another database name, use CREATE CONTROLFILE SET DATABASE.

STARTUP NOMOUNT
CREATE CONTROLFILE REUSE DA
TABASE "O901" RESETLOGS NOARCHIVELOG MAXLOGFILES 50 MAXLOGMEMBERS 5 MAXDATAFILES
100 MAXINSTANCES 1 MAXLOGHISTORY 113 LOGFILE GROUP 1 'D:\ORACLE\ORADATA\O901\RE
DO01.LOG' SIZE 100M, GROUP 2 'D:\ORACLE\ORADATA\O901\REDO02.LOG' SIZE 100M, GROU
P 3 'D:\ORACLE\ORADATA\O901\REDO03.LOG' SIZE 100M DATAFILE
'D:\ORACLE\ORADATA\O901\SYSTEM01.DBF', 'D:\ORACLE\ORADATA\O901\UNDOTBS01.DBF', '
D:\ORACLE\ORADATA\O901\CWMLITE01.DBF', 'D:\ORACLE\ORADATA\O901\DRSYS01.DBF', 'D:
\ORACLE\ORADATA\O901\EXAMPLE01.DBF', 'D:\ORACLE\ORADATA\O901\INDX01.DBF', 'D:\OR
ACLE\ORADATA\O901\TOOLS01.DBF', 'D:\ORACLE\ORADATA\O901\USERS01.DBF' CHARACTER S
ET UTF8 ; Voorbeeld controlfile: ---------------------STARTUP NOMOUNT CREATE CON
TROLFILE REUSE DATABASE "SALES" NORESETLOGS ARCHIVELOG MAXLOGFILES 5 MAXLOGMEMBE
RS 2 MAXDATAFILES 255 MAXINSTANCES 2 MAXLOGHISTORY 1363 LOGFILE GROUP 1 ( '/orad
ata/system/log/log1.log', '/oradata/dump/log/log1.log' ) SIZE 100M, GROUP 2 ( '/
oradata/system/log/log2.log', '/oradata/dump/log/log2.log' ) SIZE 100M DATAFILE
'/oradata/system/system.dbf', '/oradata/rbs/rollback.dbf', '/oradata/rbs/rollbig
.dbf', '/oradata/system/users.dbf', '/oradata/temp/temp.dbf', '/oradata/data_big
/ahp_lkt_data_small.dbf', '/oradata/data_small/ahp_lkt_data_big.dbf', '/oradata/
data_big/ahp_lkt_index_small.dbf', '/oradata/index_small/ahp_lkt_index_big.dbf',
'/oradata/data_small/maniin_ah_data_small.dbf', '/oradata/index_small/maniin_ah
_data_big.dbf', '/oradata/index_big/maniin_ah_index_small.dbf', '/oradata/index_
big/maniin_ah_index_big.dbf', '/oradata/index_big/fe_heat_data_big.dbf', '/orada
ta/data_small/fe_heat_index_big.dbf', '/oradata/data_small/eksa_data_small.dbf',
'/oradata/data_big/eksa_data_big.dbf', '/oradata/index_small/eksa_index_small.d
bf', '/oradata/index_big/eksa_index_big.dbf', '/oradata/data_small/provisioning_
data_small.dbf', '/oradata/data_small/softplan_data_small.dbf', '/oradata/index_
small/provisioning_index_small.dbf', '/oradata/system/tools.dbf', '/oradata/inde
x_small/fe_heat_index_small.dbf', '/oradata/data_small/softplan_data_big.dbf', '
/oradata/index_small/softplan_index_small.dbf', '/oradata/index_small/softplan_i
ndex_big.dbf',
'/oradata/data_small/fe_heat_data_small.dbf' ; # Recovery is required if any of
the datafiles are restored backups, # or if the last shutdown was not normal or
immediate. RECOVER DATABASE UNTIL CANCEL USING BACKUP CONTROLFILE; ALTER DATABAS
E OPEN RESETLOGS; 19.13 PROBLEMS DURING RECOVERY: ------------------------------
-
(Timeline sketch: between BEGIN BACKUP at t=t0 and END BACKUP, normal business
continues; the datafile headers are frozen at different SCNs as each tablespace
is backed up -- system=453, users=455, tools=459 -- with log switches in
between, and a CRASH occurs at t=t3.)

ORA-01194, ORA-01195:
---------------------
Note
1: ------Suppose the system comes with: ORA-01194: file 1 needs more recovery to
be consistent ORA-01110: data file 1: '/u03/oradata/tstc/dbsyst01.dbf' Either y
ou had the database in archive mode or in non archive mode: archive mode RECOVER
DATABASE UNTIL CANCEL USING BACKUP CONTROLFILE; ALTER DATABASE OPEN RESETLOGS;
non-archive mode: # RECOVER DATABASE UNTIL CANCEL USING BACKUP CONTROLFILE; ALTE
R DATABASE OPEN RESETLOGS; If you have checked that the scn's of all files are t
he same number, you might try in the init.ora file: _allow_resetlogs_corruption
= true ------Note 2: ------Problem Description
------------------You restored your hot backup and you are trying to do a point-
in-time recovery. When you tried to open your database you received the followin
g error: ORA-01195: online backup of file <name> needs more recovery to be consi
stent Cause: An incomplete recovery session was started, but an insufficient num
ber of redo logs were applied to make the file consistent. The reported file is
an online backup that must be recovered to the time the backup ended. Action: Ei
ther apply more redo logs until the file is consistent or restore the file from
an older backup and repeat the recovery. For more information about online backu
p, see the index entry "online backups" in the <Oracle7 Server Administrator's G
uide>. This is assuming that the hot backup completed error free. Solution Descr
iption -------------------Continue to apply the requested logs until you are abl
e to open the
database.
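A hedged way to see how far each restored hot-backup file still has to be
rolled forward is to look at V$BACKUP together with V$DATAFILE, for example:

-- Files with STATUS = 'ACTIVE' were in hot backup mode; CHANGE# is the SCN
-- frozen at BEGIN BACKUP, so redo from at least that SCN onward must be applied.
select b.file#, substr(d.name,1,40) name, b.status, b.change#, b.time
from   v$backup b, v$datafile d
where  b.file# = d.file#;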
Explanation ----------When you perform hot backups on a file, the file header is
frozen. For example, datafile01 may have a file header frozen at SCN #456. When
you backup the next datafile the SCN # may be different. For example the file h
eader for datafile02 may be frozen with SCN #457. Therefore, you must apply arch
ive logs until you reach the SCN # of the last file that was backed up. Usually,
applying one or two more archive logs will solve the problem, unless there was
a lot of activity on the database during the backup.
------Note 3: ------ORA-01194: file 1 needs more recovery to be consistent I am
working with a test server, I can load it again but I would like to know if this
kind of problem could be solved or not. Just to let you know, that I am new in
Oracle Database Administration. I ran a hot backup script, which deleted the old
ARCHIVE, logs at the end. After checking the script's log, I realized that the
hot backup was not successful and it deleted the Archives. I tried to startup th
e database and an error occurred; "ORA-01589: must use RESETLOGS or NORESETLOGS
option for database open" I tried to open it with the RESETLOGS option then anot
her error occurred; "ORA-01195: online backup of file 1 needs more recovery to b
e consistent" Just because, it was a test environment, I have never taken any co
ld backups. I still have hot backups. I don't know how to recover from those.
If anyone can tell me how to do it from SQLPLUS (SVRMGRL is not loaded), I would
really appreciate it. Thanks, Hi Hima, The following might help. You now have a
database that is operating like it's in noarchive mode since the logs are gone.
1. Mount the database. 2. Issue the following query: SELECT V1.GROUP#, MEMBER,
SEQUENCE#, FIRST_CHANGE# FROM V$LOG V1, V$LOGFILE V2 WHERE V1.GROUP# = V2.GROUP#
; This will list all your online redolog files and their respective sequence an
d first change numbers. 3. If the database is in NOARCHIVELOG mode, issue the qu
ery: SELECT FILE#, CHANGE# FROM V$RECOVER_FILE; If the CHANGE# is GREATER than t
he minimum FIRST_CHANGE# of your logs, the datafile can be recovered. 4. Recover
the datafile, after taking offline, you cannot take system offline which is the
file in error in your case. RECOVER DATAFILE '<full_path_file_name>' 5. Confirm
each of the logs that you are prompted for until you receive the message "Media
recovery complete". If you are prompted for a nonexisting archived log, Oracle
probably needs one or more of the online logs to proceed with the recovery. Comp
are the sequence number referenced in the ORA-280 message with the sequence numb
ers of your online logs. Then enter the full path name of one of the members of
the redo group whose sequence number matches the one you are being asked for. Ke
ep entering online logs as requested until you receive the message "Media recove
ry complete". 6. Bring the datafile online. No need for system. 7. If the databa
se is at mount point, open it. Then perform a full closed backup of the existing database.

------
Note 4:
------
Recover until time using backup controlfile
Hi, I am trying to perform an incomplete recovery to an arbitrary point in time
in the past. Eg. I want to go back five minutes. I have a hot backup of my datab
ase. (Tablespaces into hotbackup mode, copy files, tablespaces out of hotbackup
mode, archive current log, backup controlfile to a file and also to a trace). (y
ep im in archivelog mode as well) I shutdown the current database and blow the d
atafiles,online redo logs,controlfiles away. I restore my backup copy of the dat
abase - (just the datafiles) startup nomount and then run an edited controlfile
trace backup (with resetlogs). I then RECOVER DATABSE UNTIL TIME 'whenever' USIN
G BACKUP CONTROLFILE. I'm prompted for logs in the usual way but the recovery en
ds with an ORA-1547 Recover succeeded but open resetlogs would give the followin
g error. The next error is that datafile 1 (system ts) - would need more recover
y. Now metalink tells me that this is usually due to backups being restored that
are older than the archive redo logs - this isn't the case. I have all the arch
ive redo logs I need to cover the time the backup was taken up to the present. T
he time specified in the recovery is after the backup as well. What am I missing
here? Its driving me nuts. I'm off back to the docs again! Thanks in advance Ti
m ------------------------------------------------------------------------------
-From: Anand Devaraj 15-Aug-02 15:15 Subject: Re : Recover until time using back
up controlfile The error indicates that Oracle requires a few more scns to get a
ll the datafiles in sync. It is quite possible that those scns are present in th
e online redo logfiles which were lost. In such cases when Oracle asks for a non
-existent archive log, you should provide the complete path of the online log fi
le for the recovery to succeed. Since you dont have an online log file you shoul
d use RECOVER DATABASE UNTIL CANCEL USING BACKUP CONTROLFILE. In this case when
you exhaust all the archive log files, you issue the cancel command which will
automatically rollback all the incomplete transactions and get all the datafile
headers in sync with the controlfile. To do an incomplete recovery using time,yo
u usually require the online logfiles to be present. Anand ---------------------
----------------------------------------------------------From: Radhakrishnan pa
ramukurup 15-Aug-02 16:19 Subject: Re : Recover until time using backup controlf
ile

I am not sure whether you have missed this step or just missed it in the note.
You need to also switch the log at the end of the back up (I do as a matter of
practice), else you need the next log, which is not sure to be available in
case of a failure. Otherwise some of the changes needed to reach a consistent
state are still in the online log and you can never open until you reach a
consistent state.

Hope this helps ........

--
-----------------------------------------------------------------------------Fro
m: Mark Gokman 15-Aug-02 16:41 Subject: Re : Recover until time using backup con
trolfile To successfully perform incomplete recovery, you need a full db backup
that was completed prior to the point to which you want to recover, plus you nee
d all archive logs containing all SCNs up to the point to which you want to reco
ver. Applying these rules to your case, I have two questions: - are you recoveri
ng to the point in time AFTER the time the successful full backup was copleted?
- is there an archive log that was generated AFTER the time you specify in until
time? If both answers are yes, then you should have no problems. I actually rec
ently performed such a recovery several times. ---------------------------------
----------------------------------------------From: Tim Palmer 15-Aug-02 18:02 S
ubject: Re : Re : Recover until time using backup controlfile Thanks Guys! I thi
nk Mark has hit the nail on the head here. I was being an idiot! Ive ran this ex
ercise a few more times (with success) and I am convinced that what I was doing
was trying to recover to a point in time that basically was before the latest sc
n of any one file in the hot backup set I was using - convinced myself that I wa
snt, but I must have been..... perhaps I need a holiday! Thanks again Tim
-----------
--------------------------------------------------------------------From: Oracle
, Rowena Serna 16-Aug-02 15:44 Subject: Re : Recover until time using backup con
trolfile Thanks to mark for his input for helping you out. ------Note 5: ------O
RA-01547: warning: RECOVER succeeded but OPEN RESETLOGS would get error below OR
A-01152: file 2 was not restored from a sufficiently old backup ORA-01110: data
file 2: 'D:\ORACLE\ORADATA\<instance>\UNDOTBS01.DBF' File number, name and direc
tory may vary depending on Oracle configuration Details: Undo tablespace data de
scription In an Oracle database, Undo tablespace data is an image or snapshot of
the original contents of a row (or rows) in a table. This data is stored in Und
o segments (formerly Rollback segments in earlier releases of Oracle) in the Und
o tablespace. When a user begins to make a change to the data in a row in an Ora
cle table, the original data is first written to Undo segments in the Undo table
space. The entire process (including the creation of the Undo data) is recorded
in Redo logs before the change is completed and written in the Database Buffer C
ache, and then the data files via the database writer (DBWn) process. If the tra
nsaction does not complete due to some error or should there be a user decision
to reverse (rollback) the change, this Undo data is critical for the ability to
roll back or undo the changes that were made. Undo data also ensures a way to pr
ovide read consistency in the database. Read consistency means that if there is
a data change in a row of data that is not yet committed, a new query of this sa
me row or table will not display any of the uncommitted data to other users, but
will use the information from the Undo segments in the Undo tablespace to actua
lly construct and present a consistent view of the data that only includes commi
tted transactions or information. During recovery, Oracle uses its Redo logs to
play forward through transactions in
a database so that all lost transactions (data changes and their Undo data gener
ation) are replayed into the database. Then, once all the Redo data is applied t
o the data files, Oracle uses the information in the Undo segments to undo or ro
ll back all uncommitted transactions. Once recovery is complete, all data in the
database is committed data, the System Change Numbers (SCN) on all data files a
nd the control_files match, and the database is considered consistent. As for Or
acle 9i, the default method of Undo management is no longer manual, but automati
c; there are no Rollback segments in individual user tablespaces, and all Undo m
anagement is processed by the Oracle server, using the Undo tablespace as the co
ntainer to maintain the Undo segments for the user tablespaces in the database.
The tablespace that still maintains its own Rollback segments is the System tabl
espace, but this behavior is by design and irrelevant to the discussion here. If
this configuration is left as the default for the database, and the 5.022 or 5.
025 version of the VERITAS Backup Exec (tm) Oracle Agent is used to perform Orac
le backups, the Undo tablespace will not be backed up. If Automatic Undo Managem
ent is disabled and the database administrator (DBA) has modified the locations
for the Undo segments (if the Undo data is no longer in the Undo tablespace), th
is data may be located elsewhere, and the issues addressed by this TechNote may
not affect the ability to fully recover the database, although it is still recom
mended that the upgrade to the 5.026 Oracle Agent be performed. Scenario 1 The f
irst scenario would be a recovery of the entire database to a previous point-in-t
ime. This type of recovery would utilize the RECOVER DATABASE USING BACKUP CONTR
OLFILE statement and its customizations to restore the entire database to a poin
t before the entry of improper or corrupt data or to roll back to a point before
the accidental deletion of critical data. In this type of situation, the most c
ommon procedure for the restore is to just restore the entire online backup over
the existing Oracle files with the database shutdown. (See the Related Document
s section for the appropriate instructions on how to restore and recover an Orac
le database to a point-in-time using an online backup.) In this scenario, where
the entire database would be rolled back in time, an offline restore would inclu
de all data files, archived log files, and the backup control_file from
the tape or backup media. Once the RECOVER DATABASE USING BACKUP CONTROLFILE com
mand was executed, Oracle would begin the recovery process to roll forward throu
gh the Redo log transactions, and it would then roll back or undo uncommitted tr
ansactions. At the point when the recovery process started on the actual Undo ta
blespace, Oracle would see that the SCN of that tablespace was too high (in rela
tion to the record in the control_file). This would happen simply because the Un
do tablespace wasn't on the tape or backup media that was restored, so the origi
nal Undo tablespace wouldn't have been overwritten, as were the other data files
, during the restore operation. The failure would occur because the Undo tablesp
ace would still be at its SCN before the restore from backup (an SCN in the futu
re as related to the restored backup control_file). All other tablespaces and co
ntrol_files would be back at their older SCNs (not necessarily consistent yet),
and the Oracle server would respond with the following error messages: ORA-01547
: warning: RECOVER succeeded but OPEN RESETLOGS would get error below ORA-01152:
file 2 was not restored from a sufficiently old backup ORA-01110: data file 2:
'D:\ORACLE\ORADATA\<instance>\UNDOTBS01.DBF' At this point, the database cannot
be opened with the RESETLOGS option, nor in a normal mode. Any attempt to do so
yields the error referenced above. SQL> alter database open resetlogs; alter dat
abase open resetlogs * Error at line 1: ORA-01152: file 2 was not restored from
a sufficiently old backup ORA-01110: data file 2: 'D:\ORACLE\ORADATA\DRTEST\UNDO
TBS01.DBF' The only recourse here is to recover or restore an older backup that
contains an Undo tablespace, whether from an older online backup, or from a clos
ed or offline backup or copy of the database. Without this ability to acquire an
older Undo tablespace to rerun the recovery operation, it will not be possible
to start the database. At this point, Oracle Technical Support must be contacted
. Scenario 2 The second scenario would involve the actual corruption or loss of
the Undo tablespace's data files. If the Undo tablespace data is lost or corrupt
ed due to media failure or other internal logical error or user error, this data
/tablespace must be recovered. Oracle 9i does offer the ability to create a new
Undo tablespace and to alter the
Oracle Instance to use this new tablespace when deemed necessary by the DBA. One
of the requirements to accomplish this change, though, is that there cannot be
any active transactions in the Undo segments of the tablespace when it is time t
o actually drop it. In the case of data file corruption, uncommitted transaction
s in the database that have data in Undo segments can be extremely troublesome b
ecause the existence of any uncommitted transactions will lock the Undo segments
holding the data so that they cannot be dropped. This will be evidenced by an "
ORA-01548" error if this is attempted. This error, in turn, prevents the drop an
d recreation of the Undo tablespace, and thus prevents the successful recovery o
f the database. To overcome this problem, the transaction tables of the Undo seg
ments can be traced to provide details on transactions that Oracle is trying to
recover via rollback and these traces will also identify the objects that Oracle
is trying to apply the undo to. Oracle Doc ID: 94114.1 may be referenced to set
up a trace on the database startup so that the actual transactions that are loc
king the Undo segments can be identified and dropped. Dropping objects that cont
ain uncommitted transactions that are holding locks on Undo segments does entail
data loss, and the amount of loss depends on how much uncommitted data was in t
he Undo segments at the point of failure. When utilized, this trace is actually
monitoring or dumping data from the transaction tables in the headers of the Und
o segments (where the records that track the data in the Undo segments are locat
ed), but if the Undo tablespace's data file is actually missing, has been offlin
e dropped, or if these Undo segment headers have been corrupted, even the abilit
y to dump the transaction table data is lost and the only recourse at this point
may be to open the database, export, and rebuild. At this point, Oracle Technic
al Support must be contacted. Backup Exec Agent for Oracle 5.022 and 5.025 shoul
d be upgraded to 5.026 When using the 5.022 or 5.025 version of the Backup Exec
for Windows Servers Oracle Agent (see the Related Documents section for the appr
opriate instructions on how to identify the version of the Oracle Agent in use),
the Oracle Undo tablespace is not available for backup because the Undo tablesp
ace falls into the type category of Undo, and only tablespaces with a content ty
pe of PERMANENT are located and made available for backup. Normal full backups w
ith all Oracle components selected will run without error and will complete with
a successful status since the Undo tablespace is not actually flagged as a sele
ction. In most Oracle recovery situations, this absence of the Undo tablespace d
ata for restore would not
cause any problem because the original Undo tablespace is still available on the
database server. Restores of User tablespaces, which do not require a rollback
in time, would proceed normally since lost data or changes would be replayed bac
k into the database, and Undo data would be available to roll back uncommitted t
ransactions to leave the database in a consistent state and ready for user acces
s. However, in certain recovery scenarios, (in which a rollback in time or full
database recovery is attempted, or in the case of damaged or missing Undo tables
pace data files) this missing Undo data can result in the inability to properly
recover tablespaces back to a point in time, and could potentially render the dat
abase unrecoverable without an offline backup or the assistance of Oracle Techni
cal Support. The scenarios in this TechNote describe two examples (this does not
necessarily imply that these are the only scenarios) of how this absence of the
Undo tablespace on tape or backup media, and thus its inability to be restored,
can result in failure of the database to open and can result in actual data los
s. The only solution to the problems referenced within this TechNote is to upgra
de the Backup Exec for Windows Servers Oracle Agent to version 5.026, and to tak
e new offline (closed database) and then new online (running database) backups o
f the entire Oracle 9i database as per the Oracle Agent documentation in the Bac
kup Exec 9.0 for Windows Servers Administrator's Guide. Oracle 9i database backu
ps made with the 5.022 and 5.025 Agent that shipped with Backup Exec 9.0 for Win
dows Servers build 4367 or build 4454 should be considered suspect in the contex
t of the information provided in this TechNote. Note: The 5.022, 5.025, and 5.02
6 versions of the Oracle Agent are compatible with Backup Exec 8.6 for Windows N
T and Windows 2000, which includes support for Oracle 9i, as well as Backup Exec
9.0 for Windows Servers. See the Related Documents section for instructions on
how to identify the version of the Oracle Agent in use. ------Note 6: ------- Ba
ckup a) Consistent backups A consistent backup means that all data files and con
trol files are consistent to a point in time. I.e. they have the same SCN. This
is the only method of backup when the database is in NO Archive log mode.
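A minimal sketch of such a consistent (cold) backup; the paths are examples
only, and the complete file list should come from v$datafile, v$logfile and
v$controlfile:

Svrmgr> shutdown immediate
$ cp /u01/oradata/cc1/*.dbf /backup/cc1/
$ cp /u01/oradata/cc1/*.ctl /backup/cc1/
$ cp /u01/oradata/cc1/*.log /backup/cc1/
Svrmgr> startup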
b) Inconsistent backups An Inconsistent backup is possible only when the databas
e is in Archivelog mode and proper Oracle aware software is used. Most default b
ackup software can not backup open files. Special precautions need to be used an
d testing needs to be done. You must apply redo logs to the data files, in order
to restore the database to a consistent state. c) Database Archive mode The dat
abase can run in either Archivelog mode or noarchivelog mode. When you first cre
ate the database, you specify if it is to be in Archivelog mode. Then in the ini
t.ora file you set the parameter log_archive_start=true so that archiving will s
tart automatically on startup. If the database has not been created with Archive
log mode enabled, you can issue the command whilst the database is mounted, not
open. SVRMGR> alter database Archivelog;. SVRMGR> log archive start SVRMGR> alte
r database open SVRMGR> archive log list This command will show you the log mode
and if automatic archival is set. d) Backup Methods Essentially, there are two
backup methods, hot and cold, also known as online and offline, respectively. A
cold backup is one taken when the database is shut down. A hot backup is one taken
when the database is running. Commands for a hot backup: 1. Svrmgr>alter databa
se Archivelog Svrmgr> log archive start Svrmgr> alter database open 2. Svrmgr> a
rchive log list --This will show what the oldest online log sequence is. As a pr
ecaution, always keep the all archived log files starting from the oldest online
log sequence. 3. Svrmgr> Alter tablespace tablespace_name BEGIN BACKUP 4. --Usi
ng an OS command, backup the datafile(s) of this tablespace. 5. Svrmgr> Alter ta
blespace tablespace_name END BACKUP --- repeat step 3, 4, 5 for each tablespace.
6. Svrmgr> archive log list ---do this again to obtain the current log sequence
. You will want to make sure you have a copy of this redo log file. 7. So to for
ce an archived log, issue Svrmgr> ALTER SYSTEM SWITCH LOGFILE A better way to fo
rce this would be: svrmgr> alter system archive log current; 8. Svrmgr> archive
log list This is done again to check if the log file had been archived and to fi
nd the latest archived sequence number. 9. Backup all archived log files determi
ned from steps 2 and 8. Do not backup the online redo logs. These will contain t
he end-of-backup marker and can cause corruption if used during recovery. 10. Back
up the control file: Svrmgr> Alter database backup controlfile to 'filename' e)
Incremental backups These are backups that are taken on blocks that have been m
odified since the
last backup. These are useful as they don't take up as much space and time. Ther
e are two kinds of incremental backups Cumulative and Non cumulative. Cumulative
incremental backups include all blocks that were changed since the last backup
at a lower level. This one reduces the work during restoration as only one backu
p contains all the changed blocks. Noncumulative only includes blocks that were
changed since the previous backup at the same or lower level. Using rman, you is
sue the command "backup incremental level n" f) Support scenarios When the datab
ase crashes, you now have a backup. You restore the backup and then recover the
database. Also, don't forget to take a backup of the control file whenever there
is a schema change. RECOVERY ========= There are several kinds of recovery you
can perform, depending on the type of failure and the kind of backup you have. E
ssentially, if you are not running in archive log mode, then you can only recove
r the cold backup of the database and you will lose any new data and changes mad
e since that backup was taken. If, however, the database is in Archivelog mode y
ou will be able to restore the database up to the time of failure. There are thr
ee basic types of recovery: 1. Online Block Recovery. This is performed automati
cally by Oracle.(pmon) Occurs when a process dies while changing a buffer. Oracl
e will reconstruct the buffer using the online redo logs and writes it to disk.
2. Thread Recovery. This is also performed automatically by Oracle. Occurs when
an instance crashes while having the database open. Oracle applies all the redo
changes in the thread that occurred since the last time the thread was checkpoin
ted. 3. Media Recovery. This is required when a data file is restored from backu
p. The checkpoint count in the data files here are not equal to the check point
count in the control file. This is also required when a file was offlined withou
t checkpoint and when using a backup control file. Now let's explain a little ab
out Redo vs Rollback. Redo information is recorded so that all commands that too
k place can be repeated during recovery. Rollback information is recorded so tha
t you can undo changes made by the current transaction but were not committed. T
he Redo Logs are used to Roll Forward the changes made, both committed and non-
committed changes. Then from the Rollback segments, the undo information is used
to rollback the uncommitted changes. Media Failure and Recovery in Noarchivelog
Mode In this case, your only option is to restore a backup of your Oracle files
. The files you need are all datafiles, and control files. You only need to rest
ore the password file or parameter files if they are lost or are corrupted. Medi
a Failure and Recovery in Archivelog Mode In this case, there are several kinds
of recovery you can perform, depending on what has been lost. The three basic ki
nds of recovery are: 1. Recover database - here you use the recover database com
mand and the database must be closed and mounted. Oracle will recover all datafi
les that are online. 2. Recover tablespace - use the recover tablespace command.
The database can be open but the tablespace must be offline. 3. Recover datafil
e - use the recover datafile command. The database can be
open but the specified datafile must be offline. Note: You must have all archive
d logs since the backup you restored from, or else you will not have a complete
recovery. a) Point in Time recovery: A typical scenario is that you dropped a ta
ble at say noon, and want to recover it. You will have to restore the appropriat
e datafiles and do a point-in-time recovery to a time just before noon. Note: yo
u will lose any transactions that occurred after noon. After you have recovered
until noon, you must open the database with resetlogs. This is necessary to rese
t the log numbers, which will protect the database from having the redo logs tha
t weren't used be applied. The four incomplete recovery scenarios all work the s
ame: Recover database until time '1999-12-01:12:00:00'; Recover database until c
ancel; (you type in cancel to stop) Recover database until change n; Recover dat
abase until cancel using backup controlfile; Note: When performing an incomplete
recovery, the datafiles must be online. Do a select name, status from v$datafil
e to find out if there are any files which are offline. If you were to perform a
recovery on a database which has tablespaces offline, and they had not been tak
en offline in a normal state, you will lose them when you issue the open resetlo
gs command. This is because the data file needs recovery from a point before the
resetlogs option was used. b) Recovery without control file If you have lost th
e current control file, or the current control file is inconsistent with files t
hat you need to recover, you need to recover either by using a backup control fi
le command or create a new control file. You can also recreate the control file
based on the current one using the 'backup control file to trace' command which
will create a script for you to run to create a new one. Recover database using
backup control file command must be used when using a control file other that th
e current. The database must then be opened with resetlogs option. c) Recovery o
f missing datafile with rollback segment The tricky part here is if you are perf
orming online recovery. Otherwise you can just use the recover datafile command.
Now, if you are performing an online recovery, you must first ensure that in th
e init.ora file, you remove the parameter rollback_segments. Otherwise, oracle w
ill want to use those rollback segments when opening the database, but can't fin
d them and won't open. Until you recover the datafiles that contain the rollback
segments, you need to create some temporary rollback segments in order for new t
ransactions to work. Even if other rollback segments are ok, they will have to b
e taken offline. So, all the rollback segments that belong to the datafile need
to be recovered. If all the datafiles belonging to the tablespace rollback_data
were lost, you can now issue a recover tablespace rollback_data. Next bring the
tablespace online and check the status of the rollback segments by doing a selec
t segment_name, status from dba_rollback_segs; You will see the list of rollback
segments that are in status Need Recovery. Simply issue alter rollback segment
online command to complete. Don't forget to reset the rollback_segments paramete
r in the init.ora. d) Recovery of missing datafile without rollback segment Ther
e are three ways to recover in this scenario, as mentioned above. 1. recover dat
abase 2. recover datafile 'c:\orant\database\usr1orcl.ora' 3. recover tablespace
user_data e) Recovery with missing online redo logs Missing online redo logs me
ans that somehow you have lost your redo logs before they had a chance to archiv
ed. This means that crash recovery cannot be performed, so media recovery is req
uired instead. All datafiles will need to
be restored and rolled forward until the last available archived log file is ap
plied. This is thus an incomplete recovery, and as such, the recover database co
mmand is necessary. (i.e. you cannot do a datafile or tablespace recovery). As a
lways, when an incomplete recovery is performed, you must open the database with
resetlogs. Note: the best way to avoid this kind of a loss, is to mirror your o
nline log files. f) Recovery with missing archived redo logs If your archives ar
e missing, the only way to recover the database is to restore from your latest b
ackup. You will have lost any uncommitted transactions which were recorded in th
e archived redo logs. Again, this is why Oracle strongly suggests mirroring your
online redo logs and duplicating copies of the archives. g) Recovery with reset
logs option Reset log option should be the last resort, however, as we have seen
from above, it may be required due to incomplete recoveries. (recover using a b
ackup control file, or a point in time recovery). It is imperative that you back
up the database immediately after you have opened the database with reset log
s. The reason is that oracle updates the control file and resets log numbers, an
d you will not be able to recover from the old logs. The next concern will be if
the database crashes after you have opened the database with resetlogs, but hav
e not had time to backup the database. How to recover? Shut down the database Ba
ckup all the datafiles and the control file Startup mount Alter database open re
setlogs This will work, because you have a copy of a control file after the rese
tlogs point. Media failure before a backup after resetlogs. If a media failure s
hould occur before a backup was made after you opened the database using resetlo
gs, you will most likely lose data. The reason is because restoring a lost dataf
ile from a backup prior to the resetlogs will give an error that the file is fro
m a point in time earlier, and you don't have its backup log anymore. h) Recover
y with corrupted/missing rollback segments. If a rollback segment is missing or
corrupted, you will not be able to open the database. The first step is to find
out what object is causing the rollback to appear corrupted. If we can determine
that, we can drop that object. If we can't we will need to log an iTar to engag
e support. So, how do we find out if it's actually a bad object? 1. Make sure th
at all tablespaces are online and all datafiles are online. This can be checked
through v$datafile, under the status column. For tablespaces associated with the
datafiles, look in dba_tablespaces. If this doesn't show us anything, i.e., all
are online, then 2. Put the following in the init.ora: event = "10015 trace nam
e context forever, level 10" This event will generate a trace file that will rev
eal information about the transaction Oracle is trying to roll back and most imp
ortantly, what object Oracle is trying to apply the undo to. Stop and start the
database. 3. Check in the directory that is specified by the user_dump_dest para
meter (in the init.ora or show parameter command) for a trace file that was gene
rated at startup time. 4. In the trace file, there should be a message similar t
o: error recovery tx(#,#) object #. TX(#,#) refers to transaction information.
The object # is the same as the object_id in sys.dba_objects. 5. Use the followi
ng query to find out what object Oracle is trying to perform recovery on. select
owner, object_name, object_type, status from dba_objects where object_id = <obj
ect #>; 6. Drop the offending object so the undo can be released. An export or r
elying on a backup may be necessary to restore the object after the corrupted ro
llback segment goes away. 7. After dropping the object, put the rollback segment
back in the init.ora parameter rollback_segments, remove the event, and shutdow
n and startup the database. In most cases, the above steps will resolve the prob
lematic rollback segment. If this still does not resolve the problem, it may be
likely that the corruption is in the actual rollback segment. If in fact the rol
lback segment itself is corrupted, we should see if we can restore from a backup
. However, that isn't always possible, there may not be a recent backup etc. In
this case, we have to force the database open with the unsupported, hidden param
eters, you will need to log an iTar to engage support. Please note, that this is
potentially dangerous! When these are used, transaction tables are not read on
opening of the database Because of this, the typical safeguards associated with
the rollback segment are disabled. Their status is 'offline' in dba_rollback_seg
s. Consequently, there is no check for active transactions before dropping the r
ollback segment. If you drop a rollback segment which contains active transactio
ns then you will have logical corruption. Possibly this corruption will be in th
e data dictionary. If the rollback segment datafile is physically missing, has b
een offlined dropped, or the rollback segment header itself is corrupt, there is
no way to dump the transaction table to check for active transactions. So the o
nly thing to do is get the database open, export and rebuild. Log an iTar to eng
age support to help with this process. If you cannot get the database open, ther
e is no other alternative than restoring from a backup. i) Recovery with System
Clock change. You can end up with duplicate timestamps in the datafiles when a s
ystem clock changes. A solution here is to recover the database until time 'yyyy
-mm-dd:00:00:00', and set the time to be later than the when the problem occurre
d. That way it will roll forward through the records that were actually performe
d later, but have an earlier time stamp due to the system clock change. Performi
ng a complete recovery is optimal, as all transactions will be applied. j) Recov
ery with missing System tablespace. The only option is to restore from a backup.
k) Media Recovery of offline tablespace When a tablespace is offline, you canno
t recover datafiles belonging to this tablespace using recover database command.
The reason is because a recover database command will only recover online dataf
iles. Since the tablespace is offline, it thinks the datafiles are offline as we
ll, so even if you recover database and roll forward, the datafiles in this tabl
espace will not be touched. Instead, you need to perform a recover tablespace co
mmand. Alternatively, you could restore the datafiles from a cold backup, mount
the database and select from the v$datafile view to see if any of the datafiles
are offline. If they are, bring them online, and then you can perform a recover
database command. l) Recovery of Read-Only tablespaces If you have a current co
ntrol file, then recovery of read only tablespaces is no different than recoveri
ng read-write files. The issues with read-only tablespaces arise if you have to
use a backup control
file. If the tablespace is in read-only mode, and hasn't changed to read-write s
ince the last backup, then you will be able to media recovery using a backup con
trol file by taking the tablespace offline. The reason here is that when you are
using the backup control file, you must open the database with resetlogs. And w
e know that Oracle won't let you read files from before a resetlogs was done. How
ever, there is an exception with read-only tablespaces. You will be able to take
the datafiles online after you have opened the database. When you have tablespa
ces that switch modes and you don't have a current control file, you should use
a backup control file that recognizes the tablespace in read-write mode. If you
don't have a backup control file, you can create a new one using the create cont
rolfile command. Basically, the point here is that you should take a backup of t
he control file every time you switch a tablespace's mode.

Related errors: ORA-01547, ORA-01110, ORA-01588, ORA-00205.

OTHER ERRORS:
=============
1. Control file missing
ORA-00202: controlfile: 'g:\oradata\airm\control03.ctl'
ORA-27041: unable to open file
OSD-04002: unable to open file
O/S-Error: (OS 2) The system cannot find the file specified.
Sat May 24 20:02:40 2003 ORA-205 signalled during: alter database airm mount...
Solution: just copy one of the present control files over the missing one.

ORA-00214
---------
1. One control file is a different version
Solution: just copy one of the present control files over the different one.

19.13 Recovery from failed distributed transactions
---------------------------------------------------
alter system disable distributed recovery

ORA-2019, ORA-2058, ORA-2068, ORA-2050: FAILED DISTRIBUTED TRANSACTIONS
(see the note referenced below for step-by-step instructions on how to proceed)
The above errors indicate that there is a failed distributed transaction that needs to be manually cleaned up.
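As a sketch of the usual clean-up (the transaction id below is made up; take the real one from DBA_2PC_PENDING):

SQL> SELECT local_tran_id, global_tran_id, state, mixed FROM dba_2pc_pending;
SQL> ROLLBACK FORCE '1.23.456';    -- or COMMIT FORCE '1.23.456', depending on the remote outcome
SQL> EXECUTE DBMS_TRANSACTION.PURGE_LOST_DB_ENTRY('1.23.456');
SQL> COMMIT;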
See <Note 1012842.102> In some cases, the instance may crash before the solution
s are implemented. If this is the case, issue an 'alter system disable distribut
ed recovery' immediately after the database starts to allow the database to run
without having RECO terminate the instance.

19.14 Get a tablespace out of backup
mode:
-------------------------------------
SVRMGR> connect internal
SVRMGR> startup mount
SVRMGR> SELECT df.name, bk.time FROM v$datafile df, v$backup bk
     2> WHERE df.file# = bk.file# and bk.status = 'ACTIVE';
Shows the datafiles currently in a hot backup state.
SVRMGR> alter database datafile
     2> '/u03/oradata/PROD/devlPROD_1.dbf' end backup;
Do an "end backup" on those listed hot backup datafiles.
SVRMGR> alter database open;

19.15 Disk full, corrupt archive log
------------------------------------
Archive mandatory in log_archive_dest is unavailable and it's impossible to make a full recovery.
Workaround: Configure log_archive_min_succeed
_dest = 2 and do not use log_archive_duplex_dest.

19.16 ORA-1578 ORACLE data block corrupted (file # %s, block # %s)
------------------------------------------------------------------
SELECT segment_name, segment_type, owner, tablespace_name
FROM   sys.dba_extents
WHERE  file_id = &bad_file_id
AND    &bad_block_id BETWEEN block_id AND block_id + blocks - 1;
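For instance, if the alert log reported ORA-01578 against file # 7, block # 12345 (made-up numbers, purely for illustration), you would run:

SELECT segment_name, segment_type, owner, tablespace_name
FROM   sys.dba_extents
WHERE  file_id = 7
AND    12345 BETWEEN block_id AND block_id + blocks - 1;

The object returned is the corrupted one; if it is an index the usual remedy is simply to drop and recreate it, while a table may have to be restored or salvaged from a backup or export.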
19.17 Database does not start (1) SGADEF.DBF LK.DBF ----------------------------
---------------------Note:1034037.6 Subject: ORA-01102: WHEN STARTING THE DATABA
SE Type: PROBLEM Status: PUBLISHED Content Type: TEXT/PLAIN Creation Date: 25-JU
L-1997 Last Revision Date: 10-FEB-2000 Problem Description: ====================
You are trying to startup the database and you receive the following error:
ORA-01102: cannot mount database in EXCLUSIVE mode
Cause:  Some other instance has the database mounted exclusive or shared.
Action: Shutdown other instance or mount in a compatible mode.
scumnt: failed to lock /opt/oracle/product/8.0.6/dbs/lkSALES Fri Sep 13 14:29:19
2002 ORA-09968: scumnt: unable to lock file SVR4 Error: 11: Resource temporaril
y unavailable Fri Sep 13 14:29:19 2002 ORA-1102 signalled during: alter database
mount... Fri Sep 13 14:35:20 2002 Shutting down instance (abort) Problem Explan
ation: ==================== A database is started in EXCLUSIVE mode by default.
Therefore, the ORA-01102 error is misleading and may have occurred due to one of
the following reasons: - there is still an "sgadef<sid>.dbf" file in the "ORACL
E_HOME/dbs" directory - the processes for Oracle (pmon, smon, lgwr and dbwr) sti
ll exist - shared memory segments and semaphores still exist even though the dat
abase has been shutdown - there is a "ORACLE_HOME/dbs/lk<sid>" file Search Words
: ============= ORA-1102, crash, immediate, abort, fail, fails, migration Soluti
on Description: ===================== Verify that the database was shutdown clea
nly by doing the following: 1. Verify that there is not a "sgadef<sid>.dbf" file
in the directory "ORACLE_HOME/dbs". % ls $ORACLE_HOME/dbs/sgadef<sid>.dbf If th
is file does exist, remove it. % rm $ORACLE_HOME/dbs/sgadef<sid>.dbf 2. Verify t
hat there are no background processes owned by "oracle" % ps -ef | grep ora_ | g
rep $ORACLE_SID If background processes exist, remove them by using the Unix com
mand "kill". For example: % kill -9 <Process_ID_Number>
3. Verify that no shared memory segments and semaphores that are owned by "oracl
e" still exist % ipcs -b If there are shared memory segments and semaphores owne
d by "oracle", remove the shared memory segments % ipcrm -m <Shared_Memory_ID_Nu
mber> and remove the semaphores % ipcrm -s <Semaphore_ID_Number> NOTE: The examp
le shown above assumes that you only have one database on this machine. If you h
ave more than one database, you will need to shutdown all other databases before
proceeding with Step 4.
4. Verify that the "$ORACLE_HOME/dbs/lk<sid>" file does not exist 5. Startup the
instance Solution Explanation: ===================== The "lk<sid>" and "sgadef<
sid>.dbf" files are used for locking shared memory. It seems that even though no
memory is allocated, Oracle thinks memory is still locked. By removing the "sga
def" and "lk" files you remove any knowledge oracle has of shared memory that is
in use. Now the database can start. . 19.18 Rollback segment missing, active tr
ansactions -----------------------------------------------Note:1013221.6 Subject
: RECOVERING FROM A LOST DATAFILE IN A ROLLBACK TABLESPACE Type: PROBLEM Status:
PUBLISHED Content Type: TEXT/PLAIN Creation Date: 16-OCT-1995 Last Revision Dat
e: 18-JUN-2002

Solution 1:
-----------
Error scenario:
1. set transaction use rollback segment rb1;
2. INSERTs into ...
3. SHUTDOWN ABORT;
4. Delete file rb1.ora (simulate media errors; tablespace RB1 with segment rb1)
5. Restore a backup of the file

Recover:
1. Comment out the INIT.ORA ROLLBACK_SEGMENTS parameter, so ORACLE does not try to find the incorrect segment rb1
2. STARTUP MOUNT
3. ALTER DATABASE DATAFILE 'rb1.ora' OFFLINE;
4. ALTER DATABASE OPEN;  # now we are in business
5. CREATE ROLLBACK SEGMENT rbtemp TABLESPACE SYSTEM;  # we need a temporary RBS for the further steps
6. ALTER ROLLBACK SEGMENT rbtemp ONLINE;
7. RECOVER TABLESPACE RB1;
8. ALTER TABLESPACE RB1 ONLINE;
9. ALTER ROLLBACK SEGMENT rb1 ONLINE;
10. ALTER ROLLBACK SEGMENT rbtemp OFFLINE;
11. DROP ROLLBACK SEGMENT rbtemp;

Result: Successfully rollback uncommitted transactions, no suspect instance.
Solution 2: --------------INTRODUCTION -----------Rollback segments can be monit
ored through the data dictionary view, dba_rollback_segs. There is a status colu
mn that describes what state the rollback segment is currently in. Normal states
are either online or offline. Occasionally, the status of "needs recovery" will
appear. When a rollback segment is in this state, bringing the rollback segment
offline or online either through the alter rollback segment command or removing
it FROM the rollback_segments parameter in the init.ora usually has no effect.
UNDERSTANDING ------------A rollback segment falls into this status of needs rec
overy whenever Oracle tries to roll back an uncommitted transaction in its trans
action table and fails. Here are some examples of why a transaction may need to
rollback: 1-A user may do a dml transaction and decides to issue rollback 2-A sh
utdown abort occurs and the database needs to do an instance recovery in which c
ase, Oracle has to roll back all uncommitted transactions. When a rollback of a
transaction occurs, undo must be applied to the data block the modified row/s ar
e in. If for whatever reason, that data block is unavailable, the undo cannot be
applied. The result is a 'corrupted' rollback segment with the status of needs
recovery. What could be some reasons a datablock is unaccessible for undo? 1-If
a tablespace or a datafile is offline or missing. 2-If the object the datablock
belongs to is corrupted. 3-If the datablock that is corrupt is actually in the r
ollback segment itself rather than the object.
HOW TO RESOLVE IT ----------------1-MAKE sure that all tablespaces are online an
d all datafiles are online. This can be checked through v$datafile, under the st
atus column. For tablespaces associated with the datafiles, look in dba_tablespa
ces. If that still does not resolve the problem then 2-PUT the following in the
init.ora: event = "10015 trace name context forever, level 10". Setting this event
will generate a trace file that will reveal the necessary information about the
transaction Oracle is trying to roll back and most importantly, what object Orac
le is trying to apply the undo to. 3-SHUTDOWN the database (if normal does not w
ork, immediate, if that does not work, abort) and bring it back up. Note: An ora
-1545 may be encountered, or other errors. If the database cannot startup, conta
ct customer support at this point. 4-CHECK in the directory that is specified by
the user_dump_dest parameter (in the init.ora or show parameter command) for a
trace file that was generated at startup time. 5-IN the trace file, there should
be a message similar toerror recovery tx(#,#) object #. TX(#,#) refers to trans
action information. The object # is the same as the object_id in sys.dba_objects
. 6-USE the following query to find out what object Oracle is trying to perform
recovery on. SELECT owner, object_name, object_type, status FROM dba_objects WHE
RE object_id = <object #>; 7-THIS object must be dropped so the undo can be rele
ased. An export or relying on a backup may be necessary to restore the object af
ter the corrupted rollback segment goes away. 8-AFTER dropping the object, put t
he rollback segment back in the init.ora parameter rollback_segments, removed th
e event, and shutdown and startup the database. In most cases, the above steps w
ill resolve the problematic rollback segment. If this still does not resolve the
problem, it may be likely that the corruption is in the actual rollback segment
. At this point, if the problem has not been resolved, please contact customer s
upport. Solution 3: --------------Recovery FROM the loss of a Rollback segment d
atafile containing active
transactions How do I recover the datafile containing rollback segments having a
ctive transactions and if the backup is done with RMAN without using catalog. I
have tried the case study FROM the Oracle recovery handbook,but when i tried to
open the database after offlining the Rollback segment file I got the following
errors ORA-00604: error occurred at recursive SQL level 2 ORA-00376: file 2 cann
ot be read at this time ORA-01110:data file 2: '/orabackup/CCD1prod/oradata/rbs0
1CCD1prod.dbf' the status of the datafile was "Recover". Anyhow shutting down an
d starup mounting the database allows for the database or the datafile recovery,
but this was done through SVRMGRL. Here is whats happening. simulate the loss o
f datafile by removing FROM the os and shut down abort the database. mount the d
atabase so RMAN can restore the file, at this point offlining the file succeeds
but you cannot open the database. so the question is can we offline a rollback s
egment datafile containing active transactions and open the database ? How to pe
rform recovery in such case using an RMAN backup without using the catalog. I ap
preciate for any insight and tips into this issue. Madhukar FROM: Oracle, Tom Vi
llane 01-May-02 21:04 Subject: Re : Recovery FROM the loss of a Rollback segment
datafile containing active transactions Hi, The only supported way to recover F
ROM the loss of a rollback segment datafile containing a rollback segment with a
potentially active data dictionary transaction is to restore the datafile FROM
backup and roll forward to a point in time prior to the loss of the datafile (as
suming archivelog mode). Tom Villane Oracle Support Metalink Analyst FROM: Madhu
kar Yedulapuram 02-May-02 06:46 Subject: Re : Recovery FROM the loss of a Rollba
ck segment datafile containing active transactions Hi Tom, What does Rollforward
upto a time prior to the loss of the datafile got to do with the recovery,
are you suggesting this so that active transaction is not lost,is it possible ?
Because during the recovery the rollforward is followed by rollback and all the
active transactions FROM the rollback segment's transaction table will be rolled
back isnt it ? My question is if I have a active transaction in a rollback segm
ent and the file containing that rollback segment is lost and the database crash
ed or did a shutdown abort can we open the database after offlining the datafile
and commenting out the rollback_segments parameter in the init.ora parameter, I
tried to do it and got the errors which I mentioned earlier. So in this case I
have to do offline recovery only or what ? Thanks, madhukar FROM: Oracle, Tom Vi
llane 02-May-02 16:24 Subject: Re : Re : Recovery FROM the loss of a Rollback se
gment datafile containing active transactions Hi, You won't be able to open the
database if you lose a rollback segment datafile that contains an active transac
tion. You will have to: Restore a good backup of the file RECOVER DATAFILE '<nam
e>' ALTER DATABASE DATAFILE '<name>' ONLINE; The only way you would be able to o
pen the database is if the status of the rollback were OFFLINE, any other status
requires that you recover as noted before. As recovering FROM rollback corrupti
on needs to be done properly, you may want to log an iTAR if you have additional
questions. Regards Tom Villane Oracle Support Metalink Analyst FROM: Madhukar Y
edulapuram 03-May-02 07:22 Subject: Re : Recovery FROM the loss of a Rollback se
gment datafile containing active transactions Hi Tom, Thank you for the reply.yo
u said that the only way the database can be opened is if the status of the roll
back segment was offline,but what happens to an active transaction which was usi
ng this rollback segment, once the database is opened and the media recovery per
formed on the datafile,the database will show values which were part of an activ
e transaction and not committed,isnt this the logical corruption? madhukar
FROM: Madhukar Yedulapuram 05-May-02 08:14 Subject: Re : Recovery FROM the loss
of a Rollback segment datafile containing active transactions Tom, Can I get som
e reponse to my questions. Thank You, Madhukar FROM: Oracle, Tom Villane 07-May-
02 13:53 Subject: Re : Re : Recovery FROM the loss of a Rollback segment datafil
e containing active transactions Hi, Sorry for the confusion, I should not have
said "rolling forward to a point in time..." in my previous reply. No, there won
't be corruption or inconsistency. The redo logs will contain the information fo
r both committed and uncommitted transactions. Since this includes changes made
to rollback segment blocks, it follows that rollback data is also (indirectly) r
ecorded in the redo log. To recover FROM a loss of Datafiles in the SYSTEM table
space or datafiles with active rollback segments. You must perform closed databa
se recovery. -Shutdown the database -Restore the file FROM backup -Recover the d
atafile -Open the database. References: Oracle8i Backup and Recovery Guide, chap
ter 6 under "Losing Datafiles in ARCHIVELOG Mode ". Regards Tom Villane Oracle S
upport Metalink Analyst FROM: Madhukar Yedulapuram 07-May-02 22:23 Subject: Re :
Recovery FROM the loss of a Rollback segment datafile containing active transac
tions Hi Tom, After offlining the rollback segment containing active transaction
you can open the database and do the recovery and after that any active transac
tions should be rolled back and the data should not show up, but I performed the
following test and Oracle is showing logical corruption by showing data which w
as never committed. SVRMGR> create tablespace test_rbs datafile '/orabackup/CCD1
prod/oradata/test_rbs01.dbf' size 10M 2> default storage (initial 1M next 1M min
extents 1 maxextents 1024);
Statement processed. SVRMGR> create rollback segment test_rbs tablespace test_rb
s; Statement processed. SVRMGR> create table case5 (c1 number) tablespace tools;
Statement processed. SVRMGR> set transaction use rollback segment test_rbs; ORA
-01598: rollback segment 'TEST_RBS' is not online SVRMGR> alter rollback segment
test_rbs online; Statement processed. SVRMGR> set transaction use rollback segm
ent test_rbs; Statement processed. SVRMGR> insert into case5 values (5); 1 row p
rocessed. SVRMGR> alter rollback segment test_rbs offline; Statement processed.
SVRMGR> shutdown abort ORACLE instance shut down. SVRMGR> startup mount ORACLE i
nstance started. Total System Global Area 145981600 bytes Fixed Size 73888 bytes
Variable Size 98705408 bytes Database Buffers 26214400 bytes Redo Buffers 20987
904 bytes Database mounted. SVRMGR> alter database datafile '/orabackup/CCD1prod
/oradata/test_rbs01.dbf' offline; Statement processed. SVRMGR> alter database op
en; Statement processed. SVRMGR> recover tablespace test_rbs; Media recovery com
plete. SVRMGR> alter tablespace test_rbs online; Statement processed. SVRMGR> SE
LECT * FROM case5; C1 ---------5 1 row SELECTed. SVRMGR> alter rollback segment
test_rbs online; Statement processed. SVRMGR> SELECT * FROM case5; C1 ---------5
1 row SELECTed. SVRMGR> drop rollback segment test_rbs; drop rollback segment t
est_rbs * ORA-01545: rollback segment 'TEST_RBS' specified not available SVRMGR>
SELECT segment_name,status FROM dba_rollback_segs; SEGMENT_NAME STATUS --------
---------------------- ---------------SYSTEM ONLINE R0 OFFLINE R01 OFFLINE R02 O
FFLINE R03 OFFLINE
R04 OFFLINE R05 OFFLINE R06 OFFLINE R07 OFFLINE R08 OFFLINE R09 OFFLINE R10 OFFL
INE R11 OFFLINE R12 OFFLINE BIG_RB OFFLINE TEST_RBS ONLINE 16 rows SELECTed. SVR
MGR> drop rollback segment test_rbs; drop rollback segment test_rbs * ORA-01545:
rollback segment 'TEST_RBS' specified not available Here I have to bring the ro
llback segment offline to dropt it. Can this be explained or is this a bug,becau
se this caused logical corruption. FROM: Oracle, Tom Villane 10-May-02 13:19 Sub
ject: Re : Re : Recovery FROM the loss of a Rollback segment datafile containing
active transactions Hi, What you are showing is expected and normal, and not co
rruption. At the time that you issue the "alter rollback segment test_rbs online
;" Oracle does an implicit commit becuase any "ALTER" statement is considered DD
L and Oracle issues an implicit COMMIT before and after any data definition lang
uage (DDL)statement. Regards Tom Villane Oracle Support Metalink Analyst
-------------------------------------------------------------------------------F
ROM: Madhukar Yedulapuram 14-May-02 20:12 Subject: Re : Recovery FROM the loss o
f a Rollback segment datafile containing active transactions Hi Tom, So what you
are saying is the moment I say Alter rollback segment RBS# online,oracle will i
ssue an implicit commit,but if you look at my test just after performing the tab
lespace recovery (had only one datafile in the RBS tablespace which was offlined
before opening the database and doing the recovery), I brought the tablespace o
nline and did a SELECT FROM the table which was having the active transaction in
one of the rollback segments,so this statement has
issued an implicit commit and I could see the data which was never actually comm
itted,doesnt this contradict the Oracle's stance that only that data will be sho
wn which shown which is committed, I think this statement is true for Intance an
d Crash recovery,not for media recovery as the case in point proves,but still if
you say Oracle issues an implicit commit,then the stance of oracle is consisten
t. madhukar
FROM: Oracle, Tom Villane 15-May-02 18:30 Subject: Re : Re : Recovery FROM the l
oss of a Rollback segment datafile containing active transactions Hi, A slight c
orrection to what I posted, I should have said the implicit commit happened when
the rollback segment was altered offline. Whether it's an implicit commit (befo
re and after a DDL statement like CREATE, DROP, RENAME, ALTER) or if the user di
d the commit, or if the user exits the application (forces a commit). All of the
above are considered commits and the data will be saved. Regards Tom Villane Or
acle Support Metalink Analyst FROM: Madhukar Yedulapuram 16-May-02 23:17 Subject
: Re : Recovery FROM the loss of a Rollback segment datafile containing active t
ransactions Hi Tom, Thank You very much,so the moment i brought the RBS offline,
the transaction was committed and the data saved in the table,is that what you a
re saying. So the data was committed even before performing the recovery,so reco
very is essentially not applying anything in this case. madhukar
FROM: Oracle, Tom Villane 17-May-02 12:18 Subject: Re : Re : Recovery FROM the l
oss of a Rollback segment datafile containing active transactions Hi,
Yes, that is what happened. Regards Tom Villane Oracle Support Metalink Analyst
19.19 After backup you increase a datafile. ------------------------------------
-----
problem 2: "the backed up datafile size is smaller, and Oracle won't accept it for recovery." isn't a problem because we most certainly will accept that file. As a test you can do this (i just did):
o create a small 1m tablespace with a datafile.
o alter it and begin backup.
o copy the datafile
o alter it and end backup.
o alter the datafile and "autoextend on next 1m" it.
o create a table with an initial 2m extent. This will grow the datafile.
o offline the tablespace
o copy the 1m original file back.
o try to online it -- it'll tell you the file that needs recovery (it's already accepted the smaller file at this point)
o alter database recover datafile 'that file';
o alter the tablespace online again -- all is well.
As for the questions:
1) There is such a command -- "alter database create datafile". Here is an example I just ran through:

tkyte@TKYTE816> alter tablespace t begin backup;
Tablespace altered.

I copied the single datafile that is in T at this point.

tkyte@TKYTE816> alter tablespace t end backup;
Tablespace altered.

tkyte@TKYTE816> alter tablespace t add datafile 'c:\temp\t2.dbf' size 1m;
Tablespace altered.

So, I added a datafile AFTER the backup...

tkyte@TKYTE816> alter tablespace t offline;
Tablespace altered.

At this point, I went out and erased the two datafiles associated with T. I moved the copy of the one datafile in place...

tkyte@TKYTE816> alter tablespace t online;
alter tablespace t online
*
ERROR at line 1:
ORA-01113: file 9 needs media recovery
ORA-01110: data file 9: 'C:\TEMP\T.DBF'

So, it sees the copy is out of sync...

tkyte@TKYTE816> recover tablespace t;
ORA-00283: recovery session canceled due to errors
ORA-01157: cannot identify/lock data file 10 - see DBWR trace file
ORA-01110: data file 10: 'C:\TEMP\T2.DBF'

and now it tells of the missing datafile -- all we need do at this point is:

tkyte@TKYTE816> alter database create datafile 'c:\temp\t2.dbf';
Database altered.

tkyte@TKYTE816> recover tablespace t;
Media recovery complete.

tkyte@TKYTE816> alter tablespace t online;
Tablespace altered.

and we are back in business....
19.22 Setting Trace Events ------------------------database level via init.ora E
VENT="604 TRACE NAME ERRORSTACK FOREVER" EVENT="10210 TRACE NAME CONTEXT FOREVER
, LEVEL 10" session level ALTER SESSION SET EVENTS 'IMMEDIATE TRACE NAME BLOCKDU
MP LEVEL 67109037'; ALTER SESSION SET EVENTS 'IMMEDIATE TRACE NAME CONTROLF LEVE
L 10'; system trace dump file ALTER SESSION SET EVENTS 'IMMEDIATE TRACE NAME SYS
TEMSTATE LEVEL 10'; 19.23 DROP TEMP DATAFILE ----------------------SVRMGRL>start
up mount SVRMGRL>alter database open; ora-01157 cannot identify datafile 4 - fil
e not found ora-01110 data file 4 '/oradata/temp/temp.dbf' SVRMGRL>alter databas
e datafile '/oradata/temp/temp.dbf' offline drop; SVRMGRL>alter database open; S
VRMGRL>drop tablespace temp including contents; SVRMGRL>create tablespace temp d
atafile '....
19.24 SYSTEM DATAFILE RECOVERY ----------------------------- a normal datafile c
an be taken offline and the database started up. - the system file can be taken
offline but the database cannot start - restore a backup copy of the system file
- recover the file 19.25 Strange processes=.. and database does not start -----
-----------------------------------------------Does the PROCESSES initialization
parameter of init.ora depend on some other parameter ? We were getting the erro
r as maximum no of process (50) exceeded..... The value was initially set to 50,
so when the value was....changed to 200, and the database was restarted, it gav
e an error of "end-of-file on communication channel" The value was reduced to 15
0 & 100 and the same error was encountered.... when it was set back to 50, the d
atabase started.... Can anyone clear this up? Check your semaphore settings in /etc/system; try increasing seminfo_semmns.
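On Solaris, for example, the semaphore limits live in /etc/system; a sketch only (the exact values depend on your platform and on the sum of PROCESSES over all instances on the box, check the install guide for your release, and a reboot is needed for the change to take effect):

set semsys:seminfo_semmni=100
set semsys:seminfo_semmsl=256
set semsys:seminfo_semmns=1024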
19.26 ORA-00600
--------------
I work with ORACLE DB ver. 8.0.5 and received an error in alert.log: ksedmp: internal or fatal e
rror ORA-00600: internal error code, arguments: [12700], [3383], [41957137], [44
], [], [], [], [] oerr ora 600 00600, 00000, "internal error code, arguments: [%
s], [%s], [%s], [%s], [%s], [%s], [%s], [%s]" Cause: This is the generic interna
l error number for Oracle program exceptions. This indicates that a process has
encountered an exceptional condition. Action: Report as a bug - the first argume
nt is the internal error number Number [12700] indicates "invalid NLS parameter
value (%s)" Cause: An invalid or unknown NLS configuration parameter was specifi
ed. 19.27 segment has reached it's max_extents ---------------------------------
-------oracle later than 7.3.x
Version 7.3 and later: You can set the MAXEXTENTS storage parameter value to UNL
IMITED for any object. Rollback Segment ================ ALTER ROLLBACK SEGMENT
rollback_segment STORAGE ( MAXEXTENTS UNLIMITED); Temporary Segment ============
===== ALTER TABLESPACE tablespace DEFAULT STORAGE ( MAXEXTENTS UNLIMITED); Table
Segment ============= ALTER TABLE MANIIN_ASIAKAS STORAGE ( MAXEXTENTS UNLIMITED
); ALTER TABLE MANIIN_ASIAKAS STORAGE ( NEXT 5M ); Index Segment ============= A
LTER INDEX index STORAGE ( MAXEXTENTS UNLIMITED); Table Partition Segment ======
================= ALTER TABLE table MODIFY PARTITION partition STORAGE (MAXEXTEN
TS UNLIMITED);
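To spot segments that are getting close to their limit before they fail, a query along these lines can be used (the threshold of 10 remaining extents is arbitrary; the literal 2147483645 is how UNLIMITED typically shows up in the dictionary):

SELECT owner, segment_name, segment_type, tablespace_name, extents, max_extents
FROM dba_segments
WHERE max_extents < 2147483645
AND extents > max_extents - 10
ORDER BY owner, segment_name;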
19.28 max logs -------------Problem Description ------------------In the "alert.
log", you find the following warning messages: kccrsz: denied expansion of contr
olfile section 9 by 65535 record(s) the number of records is already at maximum
value (65535) krcpwnc: following controlfile record written over: RECID #520891
Recno 53663 Record timestamp ... kccrsz: denied expansion of controlfile section
9 by 65535 record(s) the number of records is already at maximum value (65535)
krcpwnc: following controlfile record written over: RECID #520892 Recno 53664 Re
cord timestamp The database is still running. The CONTROL_FILE_RECORD_KEEP_TIME
init parameter is set to 7. If you display the records used in the LOG HISTORY s
ection 9 of the controlfile: SQL> SELECT * FROM v$controlfile_record_section WHE
RE type='LOG HISTORY';

TYPE          RECORDS_TOTAL RECORDS_USED FIRST_INDEX LAST_INDEX LAST_RECID
------------- ------------- ------------ ----------- ---------- ----------
LOG HISTORY           65535        65535       33864      33863     520892

The number of RECORDS_USED has reached the maximum allowed in RECORDS_TOTAL.

Solution Description
--------------------
Set the CONTROL_FILE_RECORD_KEEP_TIME to 0:
* Insert the parameter CONTROL_FILE_RECORD_KEEP_TIME = 0 IN "INIT.ORA" -OR* Set
it momentarily if you cannot shut the database down now: SQL> alter system set c
ontrol_file_record_keep_time=0; Explanation ----------The default value for * th
e CONTROL_FILE_RECORD_KEEP_TIME is 7 days. SELECT value FROM v$parameter WHERE n
ame='control_file_record_keep_time'; VALUE ----7 * the MAXLOGHISTORY database pa
rameter has already reached the maximum of 65535 and it cannot be increased anym
ore. SQL> alter database backup controlfile to trace; => in the trace file, MAXL
OGHISTORY is 65535 The MAXLOGHISTORY increases dynamically when the CONTROL_FILE
_RECORD_KEEP_TIME is set to a value different FROM 0, but does not exceed 65535.
Once reached, the message appears in the alert.log warning you that a controlfi
le record is written over. 19.29 ORA-470 maxloghistory -------------------------
Problem Description: ==================== Instance cannot be started because of
ORA-470. LGWR has also died creating a trace file with an ORA-204 error. It is p
ossible that the maxloghistory limit of 65535 as specified in the controlfile ha
s been reached. Diagnostic Required: ==================== The following informat
ion should be requested for diagnostics: 1. LGWR trace file produced 2. Dump of
the control file - using the command: ALTER SESSION SET EVENTS 'immediate trace
name controlf level 10' 3. Controlfile contents, using the command: ALTER DATABA
SE BACKUP CONTROLFILE TO TRACE; Diagnostic Analysis: ==================== The fo
llowing observations will indicate that we have the maxloghistory limit of 65535
: 1. The Lgwr trace file should show the following stack trace: - in 8.0.3 and 8
.0.4, OSD skgfdisp returns ORA-27069, stack: kcrfds -> kcrrlh -> krcpwnc -> kccr
oc -> kccfrd -> kccrbl -> kccrbp - in 8.0.5 kccrbl causes SEGV before the call t
o skgfdisp with wrong block number. stack: kcrfds -> kcrrlh -> krcpwnc -> kccwnc
-> kccfrd -> kccrbl 2. FROM the 'dump of the controlfile': ...
... numerous lines omittted ... LOG FILE HISTORY RECORDS: (blkno = 0x13, size =
36, max = 65535, in-use = 65535, last-recid= 188706) ... the max value of 65535
reconfirms that the limit has been reached. 3. Further confirmation can be seen
FROM the controlfile trace: CREATE CONTROLFILE REUSE DATABASE "ORCL" NORESETLOGS
NOARCHIVELOG MAXLOGFILES 16 MAXLOGMEMBERS 2 MAXDATAFILES 50 MAXINSTANCES 1 MAXL
OGHISTORY 65535 ... Diagnostic Solution: =================== 1. Set control_file
_record_keep_time = 0 in the init.ora. This parameter specifies the minimum age
of a log history record in days before it can be reused. With the parameter set
to 0, reusable sections never expand and records are reused immediately as requi
red. [NOTE:1063567.6] <ml2_documents.showDocument?p_id=1063567.6&p_database_id=N
OT> gives a good description on the use of this parameter. 2. Mount the database
and retrieve details of online redo log files for use in step 6. Because the re
covery will need to roll forward through current online redo logs, a list of onl
ine log details is required to indicate which redo log is current. This can be o
btained using the following command: startup mount SELECT * FROM v$logfile; 3. O
pen the database. This is a very important step. Although the startup will fail,
it is a very important step before recreating the controlfile in step 5 and hen
se, enabling crash recovery to repair any incomplete log switch. Without this st
ep it may be impossible to recover the database. alter database open 4. Shutdown
the database, if it did not already crash in step 3. 5. Using the backup contro
lfile trace, recreate the controlfile with a smaller maxloghistory value. The MA
XLOGHISTORY section of the current control file cannot be extended beyond 65536
entries. The value should reflect the amount of log history that you wish to mai
ntain. An ORA-219 may be returned when the size of the controlfile, based on the
values of the MAX- parameters, is higher then the maximum allowable size. [NOTE
:1012929.6] <ml2_documents.showDocument?p_id=1012929.6&p_database_id=NOT> gives
a good step-by-step guide to recreating the control file. 6. Recover the databas
e. The database will automatically be mounted due to the recreation of the contr
olfile in step 5 : Recover database using backup controlfile; At the recovery pr
ompt apply the online logs in sequence by typing the unquoted full path and file
name of the online redo log to apply, as noted in step 2. After applying the cu
rrent redo log, you will receive the message 'Media Recovery Complete'. 7. Once
media recovery is complete, open the database as follows: alter database open re
setlogs; Note: keep recurring "Control file resized from"
> /dbms/tdbaplay/playroca/admin/dump/udump/playroca_ora_1548438.trc > Oracle Dat
abase 10g Enterprise Edition Release 10.2.0.3.0 - 64bit Production > With the Pa
rtitioning, OLAP and Data Mining options > ORACLE_HOME = /dbms/tdbaplay/ora10g/h
ome > System name: AIX > Node name: pl003 > Release: 3 > Version: 5 > Machine: 0
0CB560D4C00 > Instance name: playroca > Redo thread mounted by this instance: 1
> Oracle process number: 28 > Unix process pid: 1548438, image: oracle@pl003 (TN
S V1-V3) > > *** 2008-02-21 12:51:57.587 > *** ACTION NAME:(0000010 FINISHED67)
2008-02-21 12:51:57.583 > *** SERVICE NAME:(SYS$USERS) 2008-02-21 12:51:57.583 >
*** SESSION ID:(518.643) 2008-02-21 12:51:57.583 > Control file resized from 45
4 to 470 blocks > kccrsd_append: rectype = 28, lbn = 227, recs = 1128
19.30 Compatible init.ora change: -------------------------------Database files
have the COMPATIBLE version in the file header. If you set the parameter to a hi
gher value, all the headers will be updated at next database startup. This means
that if you shutdown your database, downgrade the COMPATIBLE parameter, and try
to restart your database, you'll receive an error message something like: ORA-0
0201: control file version 7.3.2.0.0 incompatible with ORACLE version 7.0.12.0.0
ORA-00202: control file: '/usr2/oracle/dbs/V73A/ctrl1V73A.ctl' In the above cas
e, database was running with COMPATIBLE 7.3.2.0. I commented out the parameter i
n init.ora, that is; kernel uses default 7.0.12.0 and returns an error before mo
unting since kernel cannot read the controlfile header. - You may only change th
e value of COMPATIBLE after a COLD Backup. - You may only change the value of CO
MPATIBLE if the database has been shutdown in NORMAL/IMMEDIATE mode. This parame
ter allows you to use a new release, while at the same time guaranteeing backwar
d compatibility with an earlier release (in case it becomes necessary to revert
to the earlier release). This parameter specifies the release with which Oracle7
Server must maintain compatibility. Some features of the current release may be
restricted. For example, if you are running release 7.2.2.0 with compatibility
set to 7.1.0.0 in order to guarantee compatibility, you will not be able to use
7.2 features. When using the standby database and feature, this parameter must h
ave the same value on the primary and standby databases, and the value must be 7
.3.0.0.0 or higher. This parameter
allows you to immediately take advantage of the maintenance improvements of a ne
w release in your production systems without testing the new functionality in yo
ur environment. The default value is the earliest release with which compatibili
ty can be guaranteed. Ie: It is not possible to set COMPATIBLE to 7.3 on an Orac
le8 database. ----------------Hi Tom, Just installed DB9.0.1, I tried to modify
parameter in init.ora file: compatible=9.0.0(default) to 8.1.0. After I restarte
d the 901 DB, I got error below when I login to sqlplus: ERROR: ORA-01033: ORACL
E initialization or shutdown in progress Anything wrong with that? If I change b
ack, everything is ok. The database could not start up. If you start the databas
e manually, from the command line -you would discover this. For example: idle> s
tartup pfile=initora920.ora ORACLE instance started. Total System Global Area 14
3725064 bytes Fixed Size 451080 bytes Variable Size 109051904 bytes Database Buf
fers 33554432 bytes Redo Buffers 667648 bytes Database mounted. ORA-00402: datab
ase changes by release 9.2.0.0.0 cannot be used by release 8.1.0.0.0 ORA-00405:
compatibility type "Locally Managed SYSTEM tablespace" ..... Generally, compatib
le cannot be set DOWN as you are already using new features many times that are
not compatible with the older release. You would have had to have created the database with 8.1 file formats (compatible set to 8.1 from the very beginning).
-----
------------------------19.31 ORA-27044: unable to write the header block of fil
e: --------------------------------------------------------Problem Description:
==================== When you manually switch redo logs, or when the log buffer
causes the redo threads to switch, you see errors similar to the following in yo
ur alert log: ... Fri Apr 24 13:42:00 1998 Thread 1 advanced to log sequence 170
Current log# 4 seq# 170 mem# 0: /.../rdlACPT04.rdl Fri Apr 24 13:42:04 1998 Err
ors in file /.../acpt_arch_15973.trc: ORA-202: controlfile: '/.../ctlACPT01.dbf'
ORA-27044: unable to write the header block of file SVR4 Error: 48: Operation n
ot supported Additional information: 3 Fri Apr 24 13:42:04 1998
kccexpd: controlfile resize from 356 to 368 block(s) denied by OS ... Note: The
particular SVR4 error observed may differ in your case and is irrelevant here. O
RA-00202: "controlfile: '%s'" Cause: This message reports the name file involved
in other messages. Action: See associated error messages for a description of t
he problem. ORA-27044: "unable to write the header block of file" Cause: write s
ystem call failed, additional information indicates which function encountered t
he error Action: check errno Solution Description: ===================== To work
around this problem you can: 1. Use a database blocksize smaller than 16k. This
may not be practical in all cases, and to change the db_block_size of a database
you must rebuild the database. - OR 2. Set the init.ora parameter CONTROL_FILE_
RECORD_KEEP_TIME equal to zero. This can be done by adding the following line to
your init.ora file: CONTROL_FILE_RECORD_KEEP_TIME = 0 The database must be shut
down and restarted to have the changed init.ora file read. Explanation: =======
===== This is [BUG:663726] <ml2_documents.showDocument?p_id=663726&p_database_id
=BUG>, which is fixed in release 8.0.6. The write of a 16K buffer to a control f
ile seems to fail during an implicit resize operation on the controlfile that ca
me as a result of adding log history records (V$LOG_HISTORY) when archiving an o
nline redo log after a log switch. Starting with Oracle8 the control file can gr
ow to a much larger size than it was able to in Oracle7. Bug 663726 <ml2_documen
ts.showDocument?p_id=663726&p_database_id=BUG> is only reproducible when the con
trol file needs to grow AND when the db_block_size = 16k. This has been tested o
n instances with a smaller database block size and the problem has not been able
to be reproduced.
Records in some sections in the control file are circularly reusable while recor
ds in other sections are never reused. CONTROL_FILE_RECORD_KEEP_TIME applies to
reusable sections. It specifies the minimum age in days that a record must have
before it can be reused. In the event a new record needs to be added to a reusab
le section and the oldest record has not aged enough, the record section expands
. If CONTROL_FILE_RECORD_KEEP_TIME is set to 0, then reusable sections never exp
and and records are reused as needed. 19.32 ORA-04031 error shared_pool: -------
-------------------------DIAGNOSING AND RESOLVING ORA-04031 ERROR For most appli
cations, shared pool size is critical to Oracle perfoRMANce. The shared pool hol
ds both the d ata dictionary cache and the fully parsed or compiled representati
ons of PL/SQL blocks and SQL statements. When any attempt to allocate a large pi
ece of contiguous memory in the shared pool fails Oracle first flushes all objec
ts that are not currently in use from the pool and the resulting free memory chu
nks are merged. If there is still not a single chunk large enough to satisfy the
request ORA-04031 is returned. The message that you will get when this error ap
pears is the following: Error: ORA 4031 Text: unable to allocate %s bytes of sha
red memory (%s,%s,%s) The ORA-04031 error is usually due to fragmentation in the
library cache or shared pool reserved space. Before of increasing the shared po
ol size consider to tune the application to use shared sql and tune SHARED_POOL_
SIZE, SHARED_POOL_RESERVED_SIZE, and SHARED_POOL_RESERVED_MIN_ALLOC. First deter
mine if the ORA-04031 was a result of fragmentation in the library cache or in t
he shared pool reserved space by issuing the following query: SELECT free_space,
avg_free_size, used_space, avg_used_size, request_failures, last_failure_size F
ROM v$shared_pool_reserved; The ORA-04031 is a result of lack of contiguous spac
e in the shared pool reserved space if: REQUEST_FAILURES is > 0 and LAST_FAILURE
_SIZE is > SHARED_POOL_RESERVED_MIN_ALLOC. To resolve this consider increasing S
HARED_POOL_RESERVED_MIN_ALLOC to lower the number of objects being cached into t
he shared pool reserved space and increase SHARED_POOL_RESERVED_SIZE and SHARED_
POOL_SIZE to increase the available memory in the shared pool reserved space. Th
e ORA-04031 is a result of lack of contiguous space in the library cache if: REQ
UEST_FAILURES is > 0 and LAST_FAILURE_SIZE is < SHARED_POOL_RESERVED_MIN_ALLOC o
r
REQUEST_FAILURES is 0 and LAST_FAILURE_SIZE is < SHARED_POOL_RESERVED_MIN_ALLOC
The first step would be to consider lowering SHARED_POOL_RESERVED_MIN_ALLOC to p
ut more objects into the shared pool reserved space and increase SHARED_POOL_SIZ
E. The V$SQLAREA view keeps information about every SQL statement and PL/SQL block executed
in the database. The following SQL can show you statements with literal values o
r candidates to include bind variables: SELECT substr(sql_text,1,40) "SQL", coun
t(*) , sum(executions) "TotExecs" FROM v$sqlarea WHERE executions < 5 GROUP BY s
ubstr(sql_text,1,40) HAVING count(*) > 30 ORDER BY 2;
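Alongside the reserved-pool query above, a quick look at the overall free memory in the shared pool helps distinguish fragmentation from a pool that is simply too small (a large free figure combined with ORA-04031 points at fragmentation):

SELECT pool, name, bytes
FROM v$sgastat
WHERE pool = 'shared pool'
AND name = 'free memory';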
19.33 ORA-4030 Out of memory:
-----------------------------
Possibly no memory left in Oracle, or the OS does not grant more memory. Also inspect the size of any swap file. The error is al
so reported if execute permissions are not in place on some procedure. 19.34 wro
ng permissions on oracle: ---------------------------------Hi, I am under very c
onfusing situation. I'm running database (8.1.7) My oracle is installed under ow
nership of userid "oracle" when i login with unix id "TEST" and give oracle_sid,
oracle_home,PATH variables and then do sqlplus sys after logging in when i give
"select file#,error from v$datafile_header;" for some file# i get error as "CAN
NOT READ HEADER" but when i login through other unix id and do the same thing. I
'm not getting any error.. This seems very very confusing, Could you tell me the
reason behind this?? Thank & Regards, Atul
Followup: sounds like you did not run the root.sh during the install and the per
missions on the oracle binaries are wrong. What does ls -l $ORACLE_HOME/bin/oracle look like? It should look like this:

$ ls -l $ORACLE_HOME/bin/oracle
-rwsr-s--x 1 ora920 ora920 51766646 Mar 31 13:03 /usr/oracle/ora920/bin/oracle

with the "s" bits set (compare: -rwsr-s--x 1 oracle dba 494456 Dec 7 1999 lsnrctl).
regardless of who I log in as, when you have a setuid program as the oracle bina
ry is, it'll be running "as the owner" tell me, what does ipcs -a show you, who
is the owner of the shared memory segments associated with the SGA. If that is n
ot Oracle -- you are "getting confused" somewhere for the s bit would ensure tha
t Oracle was the owner.
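If the s bits really are missing, re-running root.sh (or relinking) as the software owner is the clean fix; the quick manual repair, as a sketch (run as the oracle software owner), is:

$ cd $ORACLE_HOME/bin
$ chmod 6751 oracle
$ ls -l oracle        (should now show -rwsr-s--x)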
Some connection troubleshooting:
--------------------------------
19.35:
======
ORA-12545:
----------
This one is probably due to the fact that the IP or HOSTNAME in tnsnames is wrong.

ORA-12514:
----------
This one is probably due to the fact that the SERVICE_NAME in tnsnames is wrong or should be fully qualified with the domain name.

ORA-12154:
----------
This one is probably due to the fact that the alias you have used in the logon dialog box is wrong or should be fully qualified with the domain name.

ORA-12535:
----------
The TNS-12535 or ORA-12535 error is normally a timeout error associated with firewalls or slow networks.
+ It can also be an incorrect listener.ora parameter setting for the CONNECT_TIMEOUT_<listener_name> value specified.
+ In essence, the ORA-12535/TNS-12535 is a timing issue between the client and s
erver. ORA-12505: ---------TNS:listener does not currently know of SID given in
connect descriptor Note 1: ------Symptom: When trying to connect to Oracle the f
ollowing error is generated: ORA-12224: TNS: listener could not resolve SID give
n in connection description. Cause: The SID specified in the connection was not
found in the listeners tables. This error will be returned if the database instance
has not registered with the listener. Possible Remedy: Check to make sure that
the SID is correct. The SIDs that are currently registered with the listener can
be obtained by typing: LSNRCTL SERVICES <listener-name> These SIDs correspond t
o SID_NAMEs in TNSNAMES.ORA or DB_NAME in the initialisation file. Note 2: -----
-ORA-12505: TNS:listener could not resolve SID given in connect descriptor You a
re trying to connect to a database, but the SID is not known. Although it is pos
sible that a tnsping command succeeds, there might still a problem with the SID
parameter of the connection string. eg. C:>tnsping ora920 TNS Ping Utility for 3
2-bit Windows: Version 9.2.0.7.0 - Production Copyright (c) 1997 Oracle Corporat
ion. All rights reserved.
Used parameter files: c:\oracle\ora920\network\admin\sqlnet.ora Used TNSNAMES ad
apter to resolve the alias Attempting to contact (DESCRIPTION = (ADDRESS_LIST =
(ADDRESS = (PROTOCOL = TCP)(HOST = DEV01)(PORT = 2491))) (CONNECT_DATA = (SID =
UNKNOWN) (SERVER = DEDICATED))) OK (20 msec) As one can see, this is the connect
ion information stored in a tnsnames.ora file:
ORA920.EU.DBMOTIVE.COM = (DESCRIPTION = (ADDRESS_LIST = (ADDRESS = (PROTOCOL = T
CP)(HOST = DEV01)(PORT = 2491)) ) (CONNECT_DATA = (SID = UNKNOWN) (SERVER = DEDI
CATED) ) ) However, the SID UNKNOWN is not known by the listener at the database
server side. In order to test the known services by a listener, we can issue fo
llowing command at the database server side: C:>lsnrctl services LSNRCTL for 32-
bit Windows: Version 10.1.0.2.0 - Production Copyright (c) 1991, 2004, Oracle. A
ll rights reserved.
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=DEV01)(PORT=1521))) Serv
ices Summary... Service "ORA10G.eu.dbmotive.com" has 1 instance(s). Instance "OR
A10G", status UNKNOWN, has 1 handler(s) for this service... Handler(s): "DEDICAT
ED" established:0 refused:0 LOCAL SERVER Service "ORA920.eu.dbmotive.com" has 2
instance(s). Instance "ORA920", status UNKNOWN, has 1 handler(s) for this servic
e... Handler(s): "DEDICATED" established:0 refused:0 LOCAL SERVER Instance "ORA9
20", status READY, has 1 handler(s) for this service... Handler(s): "DEDICATED"
established:2 refused:0 state:ready LOCAL SERVER The command completed successfu
lly Know services are ORA10G and ORA920. Changing the SID in our tnsnames.ora to
a known service by the listener (ORA920) solved the problem.
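Equivalently, the tnsnames.ora entry can reference the service name the listener actually knows instead of a SID; a sketch based on the entries shown above:

ORA920.EU.DBMOTIVE.COM =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST = DEV01)(PORT = 2491))
    )
    (CONNECT_DATA =
      (SERVICE_NAME = ORA920.eu.dbmotive.com)
      (SERVER = DEDICATED)
    )
  )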
19.36 ORA-12560
---------------
Note 1:
-------
Oracle classifies this as a generic protocol adapter error. In my experience it indicates that the Oracle client does not know what instance to connect to or what TNS alias to use. Set the correct ORACLE_HOME and ORACLE_SID variables.
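On Windows the quickest checks, as a sketch (the SID ORCL and the matching service name are illustrative):

C:\> set ORACLE_SID=ORCL
C:\> net start | find "OracleService"
C:\> net start OracleServiceORCL        (only if the service is not running)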
Note 2:
-------
Doc ID: Note:73399.1 Subject: WINNT: ORA-12560 DB Start via SVRMGRL or SQL
*PLUS ORACLE_SID is set correctly Type: BULLETIN Status: PUBLISHED Content Type:
TEXT/PLAIN Creation Date: 28-JUL-1999 Last Revision Date: 14-JAN-2004 PURPOSE T
o assist in resolving ORA-12560 errors on Oracle8i. SCOPE & APPLICATION Support
Analysts and customers. RELATED DOCUMENTS PR:1070749.6 NOTE:1016454.102 <ml2_doc
uments.showDocument?p_id=1016454.102&p_database_id=NOT> TNS 12560 DB CREATE VIA
INSTALLATION OR CONFIGURATION ASSISTANT FAILS BUG:948671 <ml2_documents.showDocu
ment?p_id=948671&p_database_id=BUG> ORADIM SUCCSSFULLY CREATES AN UNUSABLE SID W
ITH NON-ALPHANUMERIC CHARACTER BUG:892253 <ml2_documents.showDocument?p_id=89225
3&p_database_id=BUG> ORA-12560 CREATING DATABASE WITH DB CONFIGURATION ASSISTANT
IF SID HAS NON-ALPHA If you encounter an ORA-12560 error when you try to start
Server Manager or SQL*Plus locally on your Windows NT server, you should first c
heck the ORACLE_SID value. Make sure the SID is correctly set, either in the Win
dows NT registry or in your environment (with a set command). Also, you must ver
ify that the service is running. See the entries above for more details. If you
have verified that ORACLE_SID is properly set, and the service is running, yet y
ou still get an ORA-12560, then it is possible that you have created an instance
with a non-alphanumeric character. The Getting Started Guide for Oracle8i on Wi
ndows NT documents that SID names can contain only alphanumerics, however if you
attempt to create a SID with an underscore or a dash on Oracle8i you are not pr
evented from doing so. The service will be created and started successfully, but
attempts to connect will fail with an ORA-12560. You must delete the instance a
nd recreate it with no special characters only alphanumerics are allowed in the
SID name. See BUG#948671, which was logged against 8.1.5 on Windows NT for this
issue. Note 3: ------Doc ID </help/usaeng/Search/search.html>: TEXT/PLAIN Note:1
19008.1 Content Type:
Subject: ORA-12560 Connecting to the Server on Unix - Troubleshooting Creation D
ate: 04-SEP-2000 Type: PROBLEM Last Revision Date: 20-MAR-2003 Status: PUBLISHED
PURPOSE ------This note describes some of the possible reasons for ORA-12560 er
rors connecting to server on Unix Box. The list below shows some of the causes,
the symptoms and the action to take. It is possible you will hit a cause not des
cribed here, in that case the information above should allow it to be identified
. SCOPE & APPLICATION ------------------Support Analysts and customers alike. OR
A-12560 CONNECTING TO THE SERVER ON UNIX - TROUBLESHOOTING ---------------------
--------------------------------------ORA-12560: TNS:protocol adapter error Caus
e: A generic protocol adapter error occurred. Action: Check addresses used for p
roper protocol specification. Before reporting this error, look at the error sta
ck and check for lower level transport errors. For further details, turn on trac
ing and re execute the operation. Turn off tracing when the operation is complet
e. This is a high-level error just reporting an error occurred in the actual tra
nsport layer. Look at the next error down the stack and process that. 1. ORA-125
00 ORA-12560 MAKING MULTIPLE CONNECTIONS TO DATABASE Problem: Trying to connect
to the database via listener and the ORA-12500 are prompted. You may see in the
listener.log ORA-12500 and ORA-12560:
ORA-12500: TNS:listener failed to start a dedicated server process
Cause: The process of starting up a dedicated server process failed. The executable could not be found or the environment may be set up incorrectly.
Action: Turn on tracing at the ADMIN level and re-execute the operation. Verify that the ORACLE Server executable is present and has execute permissions enabled. Ensure that the ORACLE environment is specified correctly in LISTENER.ORA. If the error persists, contact Worldwide Customer Support.
In many cases the error ORA-12500 is caused by a leak of resources on the Unix box. If you are able to connect to the database but randomly get the error, your operating system has reached the maximum value for some resource. If instead you get the error on the very first connection, the problem is more likely in the configuration of the system.
Solution: Finding the resource that is being exhausted is difficult; Note 2064862.102 <ml2_documents.showDocument?p_id=2064862.102&p_database_id=NOT> gives some suggestions for solving the problem. The query sketched below can also help.
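Where resource exhaustion is suspected, a quick check from SQL*Plus is sketched below (hedged: v$resource_limit exists from 8i onwards and only covers resources Oracle itself tracks); compare CURRENT_UTILIZATION and MAX_UTILIZATION against LIMIT_VALUE:

SQL> SELECT resource_name, current_utilization, max_utilization, limit_value
  2  FROM   v$resource_limit
  3  WHERE  resource_name IN ('processes', 'sessions');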
2. ORA-12538/ORA-12560 connecting to the database via SQL*Net
Problem: Trying to connect to the database via SQL*Net, the error ORA-12538 is prompted. In the trace file you can see:
nscall: error exit
nioqper: error from nscall
nioqper: nr err code: 0
nioqper: ns main err code: 12538
nioqper: ns (2) err code: 12560
nioqper: nt main err code: 508
nioqper: nt (2) err code: 0
nioqper: nt OS err code: 0
Solution: - Check the protocol used in the TNSNAMES.ORA by the connection string
- Ensure that the TNSNAMES.ORA you check is the one that is actually being used
by Oracle. Define the TNS_ADMIN environment variable to point to the TNSNAMES d
irectory. - Using the $ORACLE_HOME/bin/adapters command, ensure the protocol is
installed. Run the command without parameters to check if the protocol is instal
led, then run the command with parameters to see whether a particular tool/appli
cation contains the protocol symbols, e.g.:
  1. $ORACLE_HOME/bin/adapters
  2. $ORACLE_HOME/bin/adapters $ORACLE_HOME/bin/oracle
     $ORACLE_HOME/bin/adapters $ORACLE_HOME/bin/sqlplus
Explanation: If the protocol is not installed, every connection a
ttempting to use it will fail with ORA-12538 because the executable doesn't cont
ain the required protocol symbol/s. Error ORA-12538 may also be caused by an iss
ue with the '$ORACLE_HOME/bin/relink all' command. 'Relink All' does not relink
the sqlplus executable. If you receive error ORA-12538 when making a sqlplus con
nection, it may be for this reason. To relink sqlplus manually: $ su - oracle $
cd $ORACLE_HOME/sqlplus/lib $ make -f ins_sqlplus.mk install $ ls -l $ORACLE_HOM
E/bin/sqlplus --> should show a current date/time stamp 3. ORA-12546 ORA-12560 c
onnecting locally to the database Problem: Trying to connect to database locally
with a different account from the software owner, the error ORA-12546 i
s prompted. In the trace file you can see:
nioqper: error from nscall
nioqper: nr err code: 0
nioqper: ns main err code: 12546
nioqper: ns (2) err code: 12560
nioqper: nt main err code: 516
nioqper: nt (2) err code: 13
nioqper: nt OS err code: 0
Solution: Make sure the permissions of oracle executable are correct, this shoul
d be: 52224 -rwsr-sr-x 1 oracle dba 53431665 Aug 10 11:07 oracle
Explanation: The problem occurs due to an incorrect setting on the oracle execut
able. 4. ORA-12541 ORA-12560 TRYING TO CONNECT TO A DATABASE Problem: You are tr
ying to connect to a database using SQL*Net and receive the following error ORA-
12541 ORA-12560 after change the TCP/IP port in the listener.ora and you are usi
ng PARAMETER USE_CKPFILE_LISTENER in listener.ora. The following error struct ap
pears in the SQLNET.LOG: nr err code: 12203 TNS-12203: TNS:unable to connect to
destination ns main err code: 12541 TNS-12541: TNS:no listener ns secondary err
code: 12560 nt main err code: 511 TNS-00511: No listener nt secondary err code:
239 nt OS err code: 0 Solution: Check [NOTE:1061927.6] <ml2_documents.showDocume
nt?p_id=1061927.6&p_database_id=NOT> to resolve the problem. Explanation: If TCP
protocol is listed in the Listener.ora's ADDRESS_LIST section and the parameter
USE_CKPFILE_LISTENER = TRUE, the Listener ignores the TCP port number defined i
n the ADDRESS section and listens on a random port. RELATED DOCUMENTS ----------
------
Note:39774.1 <ml2_documents.showDocument?p_id=39774.1&p_database_id=NOT> LOG & TRACE Facilities on NET
Note:45878.1 <ml2_documents.showDocument?p_id=45878.1&p_database_id=NOT> SQL*Net Common Errors & Diagnostic Worksheet
Net8i Admin/Ch.11 Troubleshooting Net8 / Resolving the Most Common Error Messages
19.37 ORA-12637
---------------
Packet receive failed. A process was unable to re
ceive a packet from another process. Possible causes are: 1. The other process w
as terminated. 2. The machine on which the other process is running went down. 3
. Some other communications error occurred. Note 1: Just edit the file sqlnet.or
a and search for the string SQLNET.AUTHENTICATION_SERVICES. When it exists and is set
to = (TNS), change this to = (NONE). When it doesn't exist, add the string SQLNET.AUTHENTICATION_SERVICES = (NONE).

Note 2: What does SQLNET.AUTHENTICATION_SERVICES
do? SQLNET.AUTHENTICATION_SERVICES Purpose Use the parameter SQLNET.AUTHENTICATI
ON_SERVICES to enable one or more authentication services. If authentication has
been installed, it is recommended that this parameter be set to either none or
to one of the authentication methods. Default None Values Authentication Methods
Available with Oracle Net Services: none for no authentication methods. A valid
username and password can be used to access the database. all for all authentic
ation methods nts for Windows NT native authentication Authentication Methods Av
ailable with Oracle Advanced Security: kerberos5 for Kerberos authentication cyb
ersafe for Cybersafe authentication radius for RADIUS authentication dcegssapi f
or DCE GSSAPI authentication See Also: Oracle Advanced Security Administrator's
Guide Example SQLNET.AUTHENTICATION_SERVICES=(kerberos5, cybersafe) Note 3: ORA-
12637 for members of one NT group, using OPS$ login
Being "identified externally", users can work fine until the user is added to a
"wwwauthor" NT group to allow them to publish documents on Microsoft IIS (intran
et) -- then they get ORA-12637 starting the Oracle c/s application (document man
agement system). The environment is: Oracle 9.2.0.1.0 on Windows 2000 Advanced S
erver w. SP4, Windows 2003 domain controllers in W2K compatible mode, client wor
kstations with W2K and Win XP. Any hint will be appreciated. Problem solved. Spe
cific NT group (wwwauthor) which caused problems had existed already with specif
ic permissions, then it was dropped and created again with exactly the same name
(but, of course, with different internal ID). This situation have been identifi
ed as causing some kind of mess. A completely new group with different name has
been created. Note 4: ORA-12637 packet receive failure I added a second instance
to the Oracle server. Since then, on the server and all clients, I get ORA-1263
7 packet receive failure when I try to connect to this database. Why is this? He
llo Try commenting out the SQLNET.CRYPTO_SEED and SQLNET.AUTHENTICATION_SERVICES
in the server's SQLNET.ORA and on the client sqlnet file if they exist. Please
also verify that the server's LISTENER.ORA file contains the following parameter
: CONNECT_TIMEOUT_LISTENER=0 Note 5: Workaround is to turn off prespawned server
processes in "listener.ora". In the "listener.ora", comment out or delete the p
respawn parameters, i.e.:
SID_LIST_LISTENER =
  (SID_LIST =
    (SID_DESC =
      (SID_NAME = prd)
      (ORACLE_HOME = /raid/app/oracle/product/7.3.4)
#     (PRESPAWN_MAX = 99)
#     (PRESPAWN_LIST =
#       (PRESPAWN_DESC = (PROTOCOL = TCP) (POOL_SIZE = 1) (TIMEOUT = 30))
#     )
    )
  )

Note 6:
-------
Problem Description
-------------------
Connections to Oracle 9.2 u
sing a Cybersafe authenticated user fails on Solaris 2.6 with ORA-12637 and a co
re dump is generated.
)
Solution Description -------------------1) Shutdown Oracle, the listener and any
clients. 2) In $ORACLE_HOME/lib take a backup copy of the file sysliblist 3) Ed
it sysliblist. Move the -lthread entry to the beginning. So change from
  -lnsl -lsocket -lgen -ldl -lsched -lthread
to
  -lthread -lnsl -lsocket -lgen -ldl -lsched
4)
Do $ORACLE_HOME/bin/relink all Note 7:
fact: Oracle Server - Personal Edition 8.1 fact: MS Windows symptom: Starting Se
rver Manager (Svrmgrl) Fails symptom: ORA-12637: Packet Receive Failed cause: Or
acle's installer will set the authentication to (NTS) by default. However, if th
e Windows machine is not in a Domain where there is a Windows Domain Controller,
it will not be able to contact the KDC (Key Distribtion Centre) needed for Auth
entication. fix: Comment out SQLNET.AUTHENTICATION_SERVICES=(NTS) in sqlnet.ora
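For reference, the sqlnet.ora change described in Notes 1 and 7 amounts to something like the sketch below (keep a backup of the original file before editing it):

# SQLNET.AUTHENTICATION_SERVICES = (NTS)
SQLNET.AUTHENTICATION_SERVICES = (NONE)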
19.38 ORA-02058:
================
dba_2pc_pending: Lists all in-doubt distribute
d transactions. The view is empty until populated by an in-doubt transaction. Af
ter the transaction is resolved, the view is purged. SQL> SELECT LOCAL_TRAN_ID,
GLOBAL_TRAN_ID, STATE, MIXED, HOST, COMMIT# 2 FROM DBA_2PC_PENDING 3 / LOCAL_TRA
N_ID GLOBAL_TRAN_ID
---------------------- ---------------------------------------------------------
6.31.5950 1145324612.10D447310B5FCE408A296417959EBEEC00000000 SQL> select STATE,
MIXED, HOST, COMMIT# 2 FROM DBA_2PC_PENDING 3 / STATE MIX HOST
---------------- --- -----------------------------------------------------------
forced rollback no REBV\PGSS-TST-TCM SQL> select * from dba_2pc_neighbors; LOCAL
_TRAN_ID IN_ DATABASE
---------------------- --- -------------------------------------------------6.31
.5950 in O SQL> select state, tran_comment, advice from dba_2pc_pending; STATE T
RAN_COMMENT ---------------- ---------------------------------------------------
--------prepared SQL> rollback force '6.31.5950'; Rollback complete. SQL> commit
; Doc ID: Note:290405.1 Subject: ORA-30019 When Executing Dbms_transaction.Purge
_lost_db_entry Type: PROBLEM Status: MODERATED Content Type: TEXT/X-HTML Creatio
n Date: 11-NOV-2004 Last Revision Date: 16-NOV-2004 The information in this docu
ment applies to: Oracle Server - Enterprise Edition - Version: 9.2.0.5 This prob
lem can occur on any platform. Errors ORA-30019 Illegal rollback Segment operati
on in Automatic Undo mode Symptoms Attempting to clean up the pending transactio
n using DBMS_TRANSACTION.PURGE_LOST_DB_ENTRY, getting ora-30019: ORA-30019: Ille
gal rollback Segment operation in Automatic Undo mode Changes AUTO UNDO MANAGEME
NT is running Cause DBMS_TRANSACTION.PURGE_LOST_DB_ENTRY is not supported in AUT
O UNDO MANAGEMENT This is due to fact that "set transaction use rollback segment
.." cannot be done in AUM. Fix 1.) alter session set "_smu_debug_mode" = 4; 2.)
execute DBMS_TRANSACTION.PURGE_LOST_DB_ENTRY('local_tran_id');
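Putting the pieces of this section together, a hedged end-to-end sequence for clearing one in-doubt transaction looks like this (the transaction id 6.31.5950 is just the example used above; use COMMIT FORCE instead of ROLLBACK FORCE if the transaction should be committed, and note that the _smu_debug_mode step is only needed under Automatic Undo Management):

SQL> SELECT local_tran_id, state, mixed FROM dba_2pc_pending;
SQL> ROLLBACK FORCE '6.31.5950';
SQL> ALTER SESSION SET "_smu_debug_mode" = 4;
SQL> EXECUTE DBMS_TRANSACTION.PURGE_LOST_DB_ENTRY('6.31.5950');
SQL> COMMIT;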
19.39. ORA-600 [12850]:
=======================
Doc ID: Note:1064436.6
Content Type: TEXT/PLAIN
Subject: ORA-00600 [12850], AND ORA-00600 [15265]: WHEN SELECT OR DESCRIBE ON TABLE
Creation Date: 14-JAN-1999
Type: PROBLEM
Last Revision Date: 29-FEB-2000
Status: PUBLISHED
Problem Description: --------------------You are doing a describe or select on a
table and receive: ORA-600 [12850]: Meaning: 12850 occurs when it can't find th
e user who owns the object from the dictionary. If you try to delete the table,
you receive: ORA-600 [15625]: Meaning: The argument 15625 is occurring because s
ome index entry for the table is not found in obj$. Problem Explanation: -------
------------The data dictionary is corrupt. You cannot drop the tables in questi
on because the data dictionary doesn't know they exist. Search Words: ----------
--ORA-600 [12850] ORA-600 [15625] describe delete table Solution Description: --
------------------You need to rebuild the database. Solution Explanation: ------
--------------Since the table(s) cannot be accessed or dropped because of the da
ta dictionary corruption, rebuilding the database is the only option.
19.40 ORA-01092:
================
--------------------------------------------------------------------------------
Doc ID: Note:222132.1
Content Type: TEXT/PLAIN
Subject: ORA-01599 and ORA-01092 while starting database
Creation Date: 03-DEC-2002
Type: PROBLEM
Last Revision Date: 07-AUG-2003
Status: PUBLISHED

PURPOSE
-------
The purpose of this Note is to fix errors ORA-01599 & ORA-01092 when received at
startup. SCOPE & APPLICATION ------------------All DBAs, Support Analyst. Sympt
om(s) ~~~~~~~~~~ Starting the database gives errors similar to: ORA-01599: faile
d to acquire rollback segment (20), cache space is full (currently has (19) entr
ies) ORA-01092: ORACLE instance terminated Change(s) ~~~~~~~~~~ Increased shared
_pool_size parameter. Increased processes and/or sessions parameters. Cause ~~~~
~~~ Low value for max_rollback_segments The above changes changed the value for
max_rollback_segments internally.
Fix
~~~~
The value for max_rollback_segments is to be calculated as follows:
  max_rollback_segments = transactions / transactions_per_rollback_segment, or 30, whichever is greater
  transactions = sessions * 1.1
  sessions = (processes * 1.1) + 5
The default value for transactions_per_rollback_segment is 5.
1. Use these calculations to find the value for max_rollback_segments (see the worked example below).
2. Set it to this value or 30, whichever is greater.
3. Start the database up again with this corrected setting.
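A small worked example of the calculation above, using a purely hypothetical processes = 200:
  sessions              = (200 * 1.1) + 5 = 225
  transactions          = 225 * 1.1       = 247.5
  max_rollback_segments = 247.5 / 5       = 49.5, rounded up to 50 (greater than 30, so use 50)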
Reference info
~~~~~~~~~~~~~~
[BUG:2233336]
<ml2_documents.showDocument?p_id=2233336&p_database_id=BUG> RDBMS ERRORS AT STA
RTUP CAN CAUSE ODMA TO OMIT CLEANUP ACTIONS [NOTE:30764.1] <ml2_documents.showDo
cument?p_id=30764.1&p_database_id=NOT> -
Init.ora Parameter "MAX_ROLLBACK_SEGMENTS" Reference Note ----------------------
--------------------------------------------------------------------Doc ID </hel
p/usaeng/Search/search.html>: Note:1038418.6 Content Type: TEXT/PLAIN Subject: O
RA-01092 STARTING UP ORACLE RDBMS DATABASE Creation Date: 17NOV-1997 Type: PROBL
EM Last Revision Date: 06-JUL-1999 Status: PUBLISHED Problem Summary: ==========
====== ORA-01092 starting up Oracle RDBMS database. Problem Description: =======
============= When you startup your Oracle RDBMS database, you receive the follo
wing error: ORA-01092: ORACLE instance terminated. Disconnection forced. Problem
Explanation: ==================== Oracle cannot write to the alert_<SID>.log fi
le because the ownership and/or permissions on the BACKGROUND_DUMP_DEST director
y are incorrect. Solution Summary: ================= Modify the ownership and pe
rmissions of directory BACKGROUND_DUMP_DEST. Solution Description: =============
======== To allow oracle to write to the BACKGROUND_DUMP_DEST directory (contain
s alert_<SID>.log), modify the ownership of directory BACKGROUND_DUMP_DEST so th
at the oracle user (software owner) is the owner and make the permissions on dir
ectory BACKGROUND_DUMP_DEST 755. Follow these steps:
1. Determine the location of the BACKGROUND_DUMP_DEST parameter defined in the init<SID>.ora or config<SID>.ora files.
2. Login as root.
3. Change directory to the location of BACKGROUND_DUMP_DEST.
4. Change the owner of all the files and the directory to the software owner.
For example:
% chown oracle *
5. Change the permissions on the directory to 755.
% chmod 755 .
Solution Explanation:
=====================
Changing the ownershi
p and permissions of the BACKGROUND_DUMP_DEST directory, enables oracle to write
to the alert_<SID>.log file. --------------------------------------------------
------------------------Doc ID </help/usaeng/Search/search.html>: Note:273413.1
Content Type: TEXT/X-HTML Subject: Database Does not Start, Ora-00604 Ora-25153
Ora-00604 Ora-1092 Creation Date: 19-MAY-2004 Type: PROBLEM Last Revision Date:
04-OCT-2004 Status: MODERATED The information in this article applies to: Oracle
Server - Enterprise Edition - Version: 8.1.7.4 to 10.1.0.4 This problem can occ
ur on any platform. Errors ORA-1092 Oracle instance terminated. ORA-25153 Tempor
ary Tablespace is Empty ORA-604 error occurred at recursive SQL level <num> Symp
toms The database is not opening and in the alert.log the following errors are r
eported: ORA-00604: error occurred at recursive SQL level 1 ORA-25153: Temporary
Tablespace is Empty Error 604 happened during db open, shutting down database U
SER: terminating instance due to error 604 Instance terminated by USER, pid = xx
xxx ORA-1092 signalled during: alter database open... You might find SQL in the
trace file like: select distinct d.p_obj#,d.p_timestamp from sys.dependency$ d,
obj$ o where d.p_obj#>=:1 and d.d_obj#=o.obj# and o.status!=5 Cause In the case
where there's locally managed temp tablespace in the database,after controlfile
is re-created using the statement generated by "alter database backup controlfil
e to trace", the database can't be opened again because it complains that temp t
ablespace is empty. However no tempfiles can be added to the temp tablespace, no
r can the temp tablespace be dropped because the database is not yet open.
The query failed because of inadequate sort space(memory + disk) Fix We can incr
ease the sort_area_size and sort_area_retained_size to a very high value so that
the query completes. Then DB will open and we can take care of the TEMP tablesp
ace If the error still persists after increasing the sort_area_size and sort_are
a_retained_size to a high value, then the only remaining option is to restore and r
ecover. ------------------------------------------------------------------------
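Following up on the ORA-25153 case above: once the database opens, the TEMP tablespace can be repopulated simply by adding a tempfile back. A hedged sketch (path and size are hypothetical):

SQL> ALTER TABLESPACE temp ADD TEMPFILE '/u01/oradata/TEST/temp01.dbf' SIZE 500M REUSE;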
------Displayed below are the messages of the selected thread. Thread Status: Ac
tive From: Ronald Shaffer 17-Mar-05 19:23 Subject: Deleted OUTLN and now I get O
RA-1092 and ORA-18008 RDBMS Version: 10G Operating System and Version: RedHat ES
3 Error Number (if applicable): ORA-1092 and ORA-18008 Product (i.e. SQL*Loader
, Import, etc.): Product Version: Deleted OUTLN and now I get ORA-1092 and ORA-1
8008 One of our DBAs dropped the OUTLN user in 10G and now the instance will not
start. We get an ORA-18008 specifying the schema is missing and an ORA-1092 whe
n it attempts to OPEN. Startup mount is as far as we can get. Any experience wit
h this issue out there? Thanks... From: Fairlie Rego 23-Mar-05 01:26 Subject: Re
: Deleted OUTLN and now I get ORA-1092 and ORA-18008 Hi Ronald, You are hitting
bug 3786479 AFTER DROPPING THE OUTLN USER/SCHEMA, DB WILL NO LONGER OPEN.ORA-18
008 http://metalink.oracle.com/metalink/plsql/ml2_documents.showDocument?p_datab
ase_id =BUG&p_id=3786479 If this is still an issue file a Tar and get a backport
. Regards, Fairlie Rego --------------------------------------------------------
--------------------------
Displayed below are the messages of the selected thread. Thread Status: Closed F
rom: Henry Lau 06-Mar-03 10:38 Subject: ORA-01092 while alter database open RDBMS
Version: 9.0.1.3 Operating System and Version: Linux Redhat 7.1 Error Number (i
f applicable): ORA-01092 Product (i.e. SQL*Loader, Import, etc.): ORACLE DATABAS
E Product Version: 9.0.1.3 ORA-01092 while alter database open Hi, Since our undo
tbs is very large and we try to follow the Doc ID: 157278.1, we are trying to ch
ange the undotbs to a new one We try to 1. Create UNDO tablespace undotb2 datafi
le $ORACLE_HOME/oradata/undotb2.dbf size 300M 2. ALTER SYSTEM SET undo_tablespac
e=undotb2; 3. Change undo = undotb2; 4. Restart the database; 5. alter tablespac
e undotbs offline; 6. when we restart the database, it shows the following error
. SQL> startup mount pfile=$ORACLE_HOME/admin/TEST/pfile/init.ora ORACLE instanc
e started. Total System Global Area 386688540 bytes Fixed Size 280092 bytes Vari
able Size 318767104 bytes Database Buffers 67108864 bytes Redo Buffers 532480 by
tes Database mounted. SQL> alter database nomount; alter database nomount * ERRO
R at line 1: ORA-02231: missing or invalid option to ALTER DATABASE SQL> alter d
atabase open; alter database open * ERROR at line 1: ORA-01092: ORACLE instance
terminated. Disconnection forced I have checked the Log file as follow: SQL> /u0
1/oracle/product/9.0.1/admin/TEST/udump/ora_29151.trc
Oracle9i Release 9.0.1.3.0 - Production JServer Release 9.0.1.3.0 - Production O
RACLE_HOME = /u01/oracle/product/9.0.1 System name: Linux Node name: utxrho01.un
itex.com.hk Release: 2.4.2-2smp Version: #1 SMP Sun Apr 8 20:21:34 EDT 2001 Mach
ine: i686 Instance name: TEST Redo thread mounted by this instance: 1 Oracle pro
cess number: 9 Unix process pid: 29151, image: oracle@utxrho01.unitex.com.hk (TN
S V1-V3) *** SESSION ID:(8.3) 2003-03-06 17:25:38.615 Evaluating checkpoint for
thread 1 sequence 8 block 2 ORA-00376: file 2 cannot be read at this time ORA-01
110: data file 2: '/u01/oracle/product/9.0.1/oradata/TEST/undotbs01.dbf' ~ ~ ~ ~
Please help to check what the problem is ?? Thank you !! Regards, Henry From: O
racle, Pravin Sheth 07-Mar-03 09:31 Subject: Re : ORA-01092 while alter datbase
open Hi Henry, What you are seeing is bug 2360088, which is fixed in Oracle 9.2.
0.2. I suggest that you log an iSR (formerly iTAR) for a quicker solution for th
e problem. Regards Pravin ------------------------------------------------------
----------------------------
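A side note on the undo_tablespace switch described in the thread above: before taking the old undo tablespace offline or dropping it, check that none of its segments still hold active transactions. A hedged sketch (the tablespace name UNDOTBS is hypothetical):

SQL> SELECT segment_name, tablespace_name, status
  2  FROM   dba_rollback_segs
  3  WHERE  tablespace_name = 'UNDOTBS';

SQL> SELECT s.sid, s.username, t.used_ublk
  2  FROM   v$transaction t, v$session s
  3  WHERE  t.addr = s.taddr;

Only when the old segments show OFFLINE and no rows come back from the second query is it safe to drop the old undo tablespace.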
19.41 ORA-600 [qerfxFetch_01]
=============================
Note 1:
-------
Doc ID: Note:255881.1
Subject: ORA-600 [qerfxFetch_01]
Type: REFERENCE
Status: PUBLISHED
Content Type: TEXT/X-HTML
Creation Date: 10-NOV-2003
Last Revision Date: 12-NOV-2004
<Internal_Only> This note contains information that has not yet been reviewed by
the PAA Internals group or DDR. As such, the contents are not necessarily accur
ate and care should be taken when dealing with customers who have encountered th
is error. If you are going to use the information held in this note then please
take whatever steps are needed to in order to confirm that the information is ac
curate. Until the article has been set to EXTERNAL, we do not guarantee the cont
ents. Thanks. PAA Internals Group (Note - this section will be deleted as the no
te moves to publication) </Internal_Only> Note: For additional ORA-600 related i
nformation please read Note 146580.1 PURPOSE: This article represents a partiall
y published OERI note. It has been published because the ORA-600 error has been
reported in at least one confirmed bug. Therefore, the SUGGESTIONS section of th
is article may help in terms of identifying the cause of the error. This specifi
c ORA-600 error may be considered for full publication at a later date. If/when
fully published, additional information will be available here on the nature of
this error. <Internal_Only> PURPOSE: This article discusses the internal error "
ORA-600 [qerfxFetch_01]", what it means and possible actions. The information he
re is only applicable to the versions listed and is provided only for guidance.
ERROR: ORA-600 [qerfxFetch_01] VERSIONS: versions 9.2 DESCRIPTION: During databa
se operations, user interrupts need to be handled correctly. ORA-600 [qerfxFetch
_01] is raised when an interrupt has been trapped but has not been handled corre
ctly. FUNCTIONALITY: Fixed table row source.
IMPACT: NON CORRUPTIVE - No underlying data corruption. </Internal_Only> SUGGEST
IONS: If the Known Issues section below does not help in terms of identifying a
solution, please submit the trace files and alert.log to Oracle Support Services
for further analysis. Known Issues: Bug# 2306106 See Note 2306106.8 OERI:[qerfx
Fetch_01] possible - affects OEM Fixed: 9.2.0.2, 10.1.0.2 <Internal_Only> INTERN
AL ONLY SECTION - NOT FOR PUBLICATION OR DISTRIBUTION TO CUSTOMERS =============
=========================================================== Ensure that this not
e comes out on top in Metalink when searching for ora-600 qerfxFetch_01.
</Internal_Only>

Note 2:
---Doc ID </help/usaeng/Search/search.html>: Note:2306106.8 Content Type: TEXT/X
-HTML Subject: Support Description of Bug 2306106 Creation Date: 13-AUG-2003 Typ
e: PATCH Last Revision Date: 14-AUG-2003 Status: PUBLISHED Click here <javascrip
t:getdoc('NOTE:245840.1')> for details of sections in this note. Bug 2306106 OER
I:[qerfxFetch_01] possible - affects OEM This note gives a brief overview of bug
2306106. Affects: Product (Component) Oracle Server (RDBMS) Range of versions b
elieved to be affected Versions >= 9.2 but < 10G Versions confirmed as being aff
ected 9.2.0.1 Platforms affected Generic (all / most platforms affected) Fixed:
This issue is fixed in 9.2.0.2 (Server Patch Set) 10G Production Base Release Sy
mptoms: Error may occur <javascript:taghelp('TAGS_ERROR')> Internal Error may oc
cur (ORA-600) <javascript:taghelp('TAGS_OERI')> searched qerfxFetch_01 qerfxFetc
h_01 qerfxFetch_01 qerfxFetch_01
ORA-600 [qerfxFetch_01] Related To: (None Specified) Description ORA-600 [qerfxF
etch_01] possible Note 3: -------
affects OEM
Bug 2306106 is fixed in the 9.2.0.2 patchset. This bug is not published and thus
cannot be viewed externally in MetaLink. All it says on this bug is 'ORA-600 [q
erfxFetch_01] possible affects OEM'.

19.42 Undo corruption:
======================
Note 1:
-------
Doc ID: Note:2431450.8 Conten
t Type: TEXT/X-HTML Subject: Support Description of Bug 2431450 Creation Date: 0
8-AUG-2003 Type: PATCH Last Revision Date: 05-JAN-2004 Status: PUBLISHED Click h
ere <javascript:getdoc('NOTE:245840.1')> for details of sections in this note. B
ug 2431450 SMU Undo corruption possible on instance crash This note gives a brie
f overview of bug 2431450. Affects: Product (Component) (Rdbms) Range of version
s believed to be affected Versions >= 9 but < 10G Versions confirmed as being af
fected 9.0.1.4 9.2.0.3 Platforms affected Generic (all / most platforms affected
) Fixed: This issue is fixed in 9.0.1.5 iAS Patch Set 9.2.0.4 (Server Patch Set)
10g Production Base Release Symptoms: Corruption (Physical) <javascript:taghelp('TA
GS_CORR_PHY')> Internal Error may occur (ORA-600) <javascript:taghelp('TAGS_OERI
')> ORA-600 [kteuPropTime-2] / ORA-600 [4191] Related To: System Managed Undo De
scription SMU (System Managed Undo) Undo corruption possible on instance crash.
This can result in subsequent ORA-600 errors due to the undo corruption.

Note 2:
-------
Doc ID: Note:233864.1
Content Type: TEXT/X-HTML
Subject: ORA-600 [kteuproptime-2]
Creation Date: 28-MAR-2003
Type: R
EFERENCE Last Revision Date: 07-APR-2005 Status: PUBLISHED Note: For additional
ORA-600 related information please read Note 146580.1 </metalink/plsql/showdoc?d
b=NOT&id=146580.1> PURPOSE: This article discusses the internal error "ORA-600 [
kteuproptime-2]", what it means and possible actions. The information here is on
ly applicable to the versions listed and is provided only for guidance. ERROR: O
RA-600 [kteuproptime-2] VERSIONS: versions 9.0 to 9.2 DESCRIPTION: Oracle has en
countered an error propagating Extent Commit Times in the Undo Segment Header /
Extent Map Blocks, for System Managed Undo Segments The extent being referenced
is not valid. FUNCTIONALITY: UNDO EXTENTS IMPACT: INSTANCE FAILURE POSSIBLE PHYS
ICAL CORRUPTION SUGGESTIONS: If instance is down and fails to restart due to thi
s error then set the following parameter, which will gather additional informati
on to assist support in identifing the cause: # Dump Undo Segment Headers during
transaction recovery event="10015 trace name context forever, level 10" Restart
the instance and submit the trace files and alert.log to Oracle Support Service
s for further analysis. Do not set any other undo/rollback_segment parameters wi
thout direction from Support. Known Issues: Bug# 2431450 See Note 2431450.8 </me
talink/plsql/showdoc?db=NOT&id=2431450.8> SMU Undo corruption possible on instan
ce crash Fixed: 9.2.0.4, 10.1.0.2 Note 3: -------
Hi, apply patchset 9.2.0.2, bug 2431450 is fixed in 9.2.0.2 that made SMU (Syste
m Managed Undo) Undo corruption possible on instance crash. It's a very rare sce
nario : This will only cause a problem if there was an instance crash after a tr
ansaction committed but before it propogated the extent commit times to all its
extents AND there was a shrink of extents before the transaction could be recove
red. But still, this bug was not published (not for any particular reason except
it was found internal). Greetings, Note 4: ------From: Oracle, Ken Robinson 21-
Feb-03 17:44 Subject: Re : ORA-600 kteuPropTime-2 Forgot to mention the second b
ug for this....bug 2689239. Regards, Ken Robinson Oracle Server EE Analyst ORA-6
00 [4191] possible on shrink of system managed undo segment. Note 5: ------BUGBU
STER - System-managed undo segment corruption
Affects Versions: 9.2.0.1.0, 9.2.0.2.0, 9.2.0.3.0
Fixed in: Patch 2431450, 9.2.0.4.0
BUG# (if recognised): 2431450
This info.
correct on: 31-AUG-2003 Symptoms Oracle instance crashes and details of the ORA
-00600 error are written to the alert.log ORA-00600: internal error code, argume
nts: [kteuPropTime-2], [], [], [] Followed by Fatal internal error happened whil
e SMON was doing active transaction recovery. Then SMON: terminating instance du
e to error 600 Instance terminated by SMON, pid = 22972
This occurs as Oracle encounters an error when propagating Extent Commit Times i
n the Undo Segment Header Extent Map Blocks. It could be because SMON is over-en
thusiastic in shrinking extents in SMU segments. As a result, extent commit time
s do not get written to all the extents and SMON causes the instance to crash, l
eaving one or more of the undo segments corrupt. When opening the database follo
wing the crash, Oracle tries to perform crash recovery and encounters problems r
ecovering committed transactions stored in the corrupt undo segments. This leads
to more ORA-00600 errors and a further instance crash. The net result is that t
he database cannot be opened: "Error 600 happened during db open, shutting down
database" Workaround Until the corrupt undo segment can be identified and offlin
ed then unfortunately the database will not open. Identify the corrupt undo segm
ent by setting the following parameters in the init.ora file: _smu_debug_mode=1
event="10015 trace name context forever, level 10" (set event 10511) event="1051
1 trace name context forever, level 2" _smu_debug_mode simply collects diagnosti
c information for support purposes. Event 10015 is the undo segment recovery tra
cing event. Use this to identify corrupted rollback/undo segments when a databas
e cannot be started. With these parameters set, an attempt to open the database
will still cause a crash, but Oracle will write vital information about the corr
upt rollback/undo segments to a trace file in user_dump_dest. This is an extract
from such a trace file, revealing that undo segment number 6 (_SYSSMU6$) is cor
rupt. Notice that the information stored in the segment header about the number
of extents was inconsistent with the extent map. Recovering rollback segment _SY
SSMU6$ UNDO SEG (BEFORE RECOVERY): usn = 6 Extent Control Header ---------------
-------------------------------------------------Extent Header:: spare1: 0 spare
2: 0 #extents: 7 #blocks: 1934 last map 0x00805f89 #maps: 1 offset: 4080 Highwat
er:: 0x0080005b ext#: 0 blk#: 1 ext size: 7 #blocks in seg. hdr's freelists: 0 #
blocks below: 0 mapblk 0x00000000 offset: 0
Unlocked Map Header:: next 0x00805f89 #extents: 5 obj#: 0 flag: 0x40000000 Exten
t Map ----------------------------------------------------------------0x0080005a
length: 7 0x00800061 length: 8 0x0081ac89 length: 1024 0x00805589 length: 256 0
x00805a89 length: 256 Retention Table ------------------------------------------
----------------Extent Number:0 Commit Time: 1060617115 Extent Number:1 Commit T
ime: 1060611728 Extent Number:2 Commit Time: 1060611728 Extent Number:3 Commit T
ime: 1060611728 Extent Number:4 Commit Time: 1060611728 Comment out parameters u
ndo_management and undo_tablespace and set the undocumented _corrupted_rollback_
segments parameter to tell Oracle to ignore any corruptions and force the databa
se open: _corrupted_rollback_segments=(_SYSSMU6$) This time, Oracle will start a
nd open OK, which will allow you to check the status of the undo segments by que
rying DBA_ROLLBACK_SEGS. select segment_id, segment_name, tablespace_name, statu
s from dba_rollback_segs where owner='PUBLIC'; SEGMENT_ID ---------1 2 3 4 5 6 7
8 9 10 SEGMENT_NAME -----------_SYSSMU1$ _SYSSMU2$ _SYSSMU3$ _SYSSMU4$ _SYSSMU5
$ _SYSSMU6$ _SYSSMU7$ _SYSSMU8$ _SYSSMU9$ _SYSSMU10$ TABLESPACE_NAME -----------
---UNDOTS UNDOTS UNDOTS UNDOTS UNDOTS UNDOTS UNDOTS UNDOTS UNDOTS UNDOTS STATUS
---------------OFFLINE OFFLINE OFFLINE OFFLINE OFFLINE NEEDS RECOVERY OFFLINE OF
FLINE OFFLINE OFFLINE
SMON will complain every 5 minutes by writing entries to the alert.log as long a
s there are undo segments in need of recovery SMON: about to recover undo segmen
t 6 SMON: mark undo segment 6 as needs recovery At this point, you must either d
ownload and apply patch 2431450 or create private rollback segments.
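Pulling the steps of the note above together, the force-open stage boils down to an init.ora fragment like the sketch below (the segment name _SYSSMU6$ is just the example from the trace shown above; substitute whatever the 10015 trace identifies in your case, and remove these settings again once the cleanup is done):

# undo_management=AUTO        (commented out)
# undo_tablespace=UNDOTS      (commented out)
_smu_debug_mode=1
event="10015 trace name context forever, level 10"
event="10511 trace name context forever, level 2"
_corrupted_rollback_segments=(_SYSSMU6$)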
Note 6:
-------
Repair UNDO log corruption (Don Burleson)
In rare cases (usually DBA error)
the Oracle UNDO tablespace can become corrupted. This manifests with this error:
ORA-00376: file xx cannot be read at this time In cases of UNDO log corruption,
you must:
- Change the undo_management parameter from AUTO to MANUAL
- Create a new UNDO tablespace
- Drop the old UNDO tablespace
Dropping the corrupt UNDO tablespace can be tricky a
nd you may get the message: ORA-00376: file string cannot be read at this time T
o drop a corrupt UNDO tablespace:
1. Identify the bad segment:
select segment_name, status
from   dba_rollback_segs
where  tablespace_name = 'undotbs_corrupt'
and    status = 'NEEDS RECOVERY';

SEGMENT_NAME                   STATUS
------------------------------ ----------------
_SYSSMU22$                     NEEDS RECOVERY

2. Bounce the instance with the hidden parameter _offline_rollback_segments, specifying the bad segment name:
_OFFLINE_ROLLBACK_SEGMENTS=_SYSSMU22$
3. Bounce the database, nuke the corrupt segment and tablespace:
SQL> drop r
ollback segment "_SYSSMU22$"; Rollback segment dropped. SQL > drop tablespace un
dotbs including contents and datafiles; Tablespace dropped. Note 7: ------Someti
mes there can be trouble with an undo segment. Actually there might be something
with a normal object. PUT the following in the init.ora:
event = "10015 trace name context forever, level 10"
Setting this event will generate a trace file that will reveal the necessary inf
ormation about the transaction Oracle is trying to rollback and most importantly
, what object Oracle is trying to apply the undo to. USE the following query to
find out what object Oracle is trying to perform recovery on. select owner, obje
ct_name, object_type, status from dba_objects where object_id = <object #>; THIS
object must be dropped so the undo can be released. An export or relying on a b
ackup may be necessary to restore the object after the corrupted rollback segmen
t goes away.
19.43 ORA-1653
==============
Note 1:
-------
Doc ID: Note:151994.1
Content Type: TEXT/PLAIN
Subject: Overview Of ORA-01653: Unable To Extend Table %s.%s By %s In Tablespace %s
Creation Date: 12-JUL-2001
Type: TROUBLESHOOTING
Last Revision Date: 15-JUN-2004
Status: PUBLISHED

PURPOSE
-------
This bulletin is an overview of the ORA-1653 error message for dictionary-managed tablespaces.

SCOPE & APPLICATION
-------------------
It is for users requiring furthe
r information on ORA-01653 error message. When looking to resolve the error by u
sing any of the solutions suggested, please consult the DBA for assistance. Erro
r: ORA-01653 Text: unable to extend table %s.%s by %s in tablespace %s ---------
---------------------------------------------------------------------Cause: Fail
ed to allocate an extent for table segment in tablespace. Action: Use ALTER TABL
ESPACE ADD DATAFILE statement to add one or more files to the tablespace indicat
ed. Explanation: -----------This error does not necessarily indicate whether or
not you have enough space in the tablespace, it merely indicates that Oracle cou
ld not find a large enough area of free contiguous space in which to fit the nex
t extent.
Diagnostic Steps: ----------------1. In order to see the free space available fo
r a particular tablespace, you must use the view DBA_FREE_SPACE. Within this vie
w, each record represents one fragment of space. How the view DBA_FREE_SPACE can
be used to determine the space available in the database is described in: [NOTE
:121259.1] <ml2_documents.showDocument?p_id=121259.1&p_database_id=NOT> Using DB
A_FREE_SPACE 2. The DBA_TABLES view describes the size of next extent (NEXT_EXTE
NT) and the percentage increase (PCT_INCREASE) for all tables in the database. T
he "next_extent" size is the size of extent that is trying to be allocated (and
for which you have the error). When the extent is allocated : next_extent = next
_extent * (1 + (pct_increase/100)). The algorithm to allocate an extent for a segment is d
escribed in the Concept Guide Chapter : Data Blocks, Extents, and Segments - How
Extents Are Allocated 3. Look to see if any users have the tablespace in questi
on as their temporary tablespace. This can be checked by looking at DBA_USERS (T
EMPORARY_TABLESPACE). Possible solutions: ------------------- Manually Coalesce
Adjacent Free Extents ALTER TABLESPACE <tablespace name> COALESCE; The extents m
ust be adjacent to each other for this to work. - Add a Datafile: ALTER TABLESPA
CE <tablespace name> ADD DATAFILE '<full path and file name>' SIZE <integer> <k|
m>; - Resize the Datafile: ALTER DATABASE DATAFILE '<full path and file name>' R
ESIZE <integer> <k| m>; - Enable autoextend: ALTER DATABASE DATAFILE '<full path
and file name>' AUTOEXTEND ON MAXSIZE UNLIMITED; - Defragment the Tablespace: -
Lower "next_extent" and/or "pct_increase" size: ALTER <segment_type> <segment_n
ame> STORAGE ( next <integer> <k|m> pctincrease <integer>); - If the tablespace
is being used as a temporary tablespace, temporary segments may be still holding
the space. References: -----------
[NOTE:1025288.6] <ml2_documents.showDocument?p_id=1025288.6&p_database_id=NOT> H
ow to Diagnose and Resolve ORA-01650, ORA-01652, ORA-01653, ORA-01654, ORA-01688
: Unable to Extend < OBJECT > by %S in Tablespace [NOTE:1020090.6] <ml2_documen
ts.showDocument?p_id=1020090.6&p_database_id=NOT> Script to Report on Space in T
ablespaces [NOTE:1020182.6] <ml2_documents.showDocument?p_id=1020182.6&p_databas
e_id=NOT> Script to Detect Tablespace Fragmentation [NOTE:1012431.6] <ml2_docume
nts.showDocument?p_id=1012431.6&p_database_id=NOT> Overview of Database Fragment
ation [NOTE:121259.1] <ml2_documents.showDocument?p_id=121259.1&p_database_id=NO
T> Using DBA_FREE_SPACE [NOTE:61997.1] <ml2_documents.showDocument?p_id=61997.1&
p_database_id=NOT> SMON - Temporary Segment Cleanup and Free Space Coalescing
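Combining diagnostic steps 1 and 2 above, a hedged one-shot check of whether the next extent of a given table still fits in its tablespace (owner and table name are hypothetical):

SQL> SELECT t.next_extent,
  2         (SELECT max(f.bytes)
  3            FROM dba_free_space f
  4           WHERE f.tablespace_name = t.tablespace_name) AS largest_free_chunk
  5  FROM   dba_tables t
  6  WHERE  t.owner = 'SCOTT' AND t.table_name = 'EMP';

If largest_free_chunk is smaller than next_extent (times 1 + pct_increase/100), one of the solutions listed in this section is needed.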
Note 2: ------Doc ID </help/usaeng/Search/search.html>: Note:1025288.6 Content T
ype: TEXT/PLAIN Subject: How to Diagnose and Resolve ORA-01650,ORA-01652,ORA-016
53,ORA01654,ORA-01688 : Unable to Extend < OBJECT > by %S in Tablespace %S Creat
ion Date: 02-JAN-1997 Type: TROUBLESHOOTING Last Revision Date: 10-JUN-2004 Stat
us: PUBLISHED PURPOSE ------This document can be used to diagnose and resolve sp
ace management errors - ORA1650, ORA-1652, ORA-1653, ORA-1654 and ORA-1688. SCOP
E & APPLICATION ------------------You are working with the database and have enc
ountered one of the following errors: ORA-01650: unable to extend rollback segme
nt %s by %s in tablespace %s Cause: Failed to allocate extent for the rollback s
egment in tablespace. Action: Use the ALTER TABLESPACE ADD DATAFILE statement to
add one or more files to the specified tablespace. ORA-01652: unable to extend
temp segment by %s in tablespace %s Cause: Failed to allocate an extent for temp
segment in tablespace. Action: Use ALTER TABLESPACE ADD DATAFILE statement to a
dd one or more files to the tablespace indicated or create the object in other t
ablespace. ORA-01653: unable to extend table %s.%s by %s in tablespace %s Cause:
Failed to allocate extent for table segment in tablespace. Action: Use the ALTE
R TABLESPACE ADD DATAFILE statement to add one or more files to the specified ta
blespace. ORA-01654: unable to extend index %s.%s by %s in tablespace %s Cause:
Failed to allocate extent for index segment in tablespace. Action: Use the ALTER
TABLESPACE ADD DATAFILE statement to add one or more files to the specified tab
lespace.
ORA-01688: unable to extend table %s.%s partition %s by %s in tablespace %s Caus
e: Failed to allocate an extent for table segment in tablespace. Action: Use ALT
ER TABLESPACE ADD DATAFILE statement to add one or more files to the tablespace
indicated. How to Solve the Following Errors About UNABLE TO EXTEND ------------
-------------------------------------------An "unable to extend" error is raised
when there is insufficient contiguous space available to extend the object. A.
In order to address the UNABLE TO EXTEND issue, you need to get the following in
formation:
1. The largest contiguous space available for the tablespace:
SELECT max(bytes)
FROM   dba_free_space
WHERE  tablespace_name = '<tablespace name>';
The above query returns the largest available contiguous chunk of space. Please
note that if the tablespace you are concerned with is of type TEMPORARY, then pl
ease refer to [NOTE:188610.1] <ml2_documents.showDocument?p_id=188610.1&p_databa
se_id=NOT>. If this query is done immediately after the failure, it will show th
at the largest contiguous space in the tablespace is smaller than the next exten
t the object was trying to allocate. 2. => "next_extent" for the object => "pct_
increase" for the object => The name of the tablespace in which the object resid
es Use the "next_extent" size with "pct_increase" in the following formula to de
termine the size of extent that is trying to be allocated:
extent size = next_extent * (1 + (pct_increase/100))
e.g. next_extent = 512000, pct_increase = 50
=> extent size = 512000 * (1 + (50/100)) = 512000 * 1.5 = 768000

ORA-01650 Rollback Segment
==========================
SELECT next_extent, pct_increase, tablespace_name
FROM   dba_rollback_segs
WHERE  segment_name = '<rollback segment name>';
Note: pct_increase is only needed for early versions of Oracle, by default in la
ter versions pct_increase for a rollback segment is 0.

ORA-01652 Temporary Segment
===========================
SELECT next_extent, pct_increase, tablespace_name
FROM   dba_tablespaces
WHERE  tablespace_name = '<tablespace name>';
Temporary segments take the default storage clause of the tablespace in which th
ey are created. If this error is caused by a query, then try and ensure that the
query is tuned to perform its sorts as efficiently as possible. To find the own
er of a sort, please refer to [NOTE:1069041.6] <ml2_documents.showDocument?p_id=
1069041.6&p_database_id=NOT>

ORA-01653 Table Segment
=======================
SELECT next_extent, pct_increase, tablespace_name
FROM   dba_tables
WHERE  table_name = '<table name>' AND owner = '<owner>';

ORA-01654 Index Segment
=======================
SELECT next_extent, pct_increase, tablespace_name
FROM   dba_indexes
WHERE  index_name = '<index name>' AND owner = '<owner>';
ORA-01688 Table Partition
=========================
SELECT next_extent, pct_increase, tablespace_name
FROM   dba_tab_partitions
WHERE  partition_name = '<partition name>' AND table_owner = '<owner>';

B. Possible Solutions
There are several optio
ns for solving errors due to failure to extend: a. Manually Coalesce Adjacent Fr
ee Extents --------------------------------------ALTER TABLESPACE <tablespace na
me> COALESCE; The extents must be adjacent to each other for this to work. b. Ad
d a Datafile -------------ALTER TABLESPACE <tablespace name> ADD DATAFILE '<full
path and file name>' SIZE <integer> <k|m>; c. Lower "next_extent" and/or "pct_i
ncrease" size
---------------------------------------------For non-temporary and non-partition
ed segment problem: ALTER <segment_type> <segment_name> STORAGE ( next <integer>
<k|m> pctincrease <integer>); For non-temporary and partitioned segment problem
: ALTER TABLE <table_name> MODIFY PARTITION <partition_name> STORAGE ( next <int
eger> <k|m> pctincrease <integer>); For a temporary segment problem: ALTER TABLE
SPACE <tablespace name> DEFAULT STORAGE (initial <integer> next <integer> <k|m>
pctincrease <integer>); d. Resize the Datafile ------------------ALTER DATABASE
DATAFILE '<full path and file name>' RESIZE <integer> <k|m>; e. Defragment the T
ablespace ------------------------If you would like more information on fragment
ation, the following documents are available from Oracle WorldWide Support . (th
is is not a comprehensive list) [NOTE:1020182.6] <ml2_documents.showDocument?p_i
d=1020182.6&p_database_id=NOT> Script to Detect Tablespace Fragmentation [NOTE:1
012431.6] <ml2_documents.showDocument?p_id=1012431.6&p_database_id=NOT> Overview
of Database Fragmentation [NOTE:30910.1] <ml2_documents.showDocument?p_id=30910
.1&p_database_id=NOT> Recreating Database Objects Related Documents: ===========
======= [NOTE:15284.1] <ml2_documents.showDocument?p_id=15284.1&p_database_id=NO
T> Understanding and Resolving ORA-01547 <Note.151994.1> Overview Of ORA-01653 U
nable To Extend Table %s.%s By %s In Tablespace %s: <Note.146595.1> Overview Of
ORA-01654 Unable To Extend Index %s.%s By %s In Tablespace %s: [NOTE:188610.1] <
ml2_documents.showDocument?p_id=188610.1&p_database_id=NOT> DBA_FREE_SPACE Does
not Show Information about Temporary Tablespaces [NOTE:1069041.6] <ml2_documents
.showDocument?p_id=1069041.6&p_database_id=NOT> How to Find Creator of a SORT or
TEMPORARY SEGMENT or Users Performing Sorts for Oracle8 and 9 Search Words:
============= ORA-1650 ORA-1652 ORA-1653 ORA-1654 ORA-1688 ORA-01650 ORA-01652 O
RA-01653 ORA-01654 ORA-01688 1650 1652 1653 1654 1688
19.44: Other ORA- errors on 9i:
===============================
Doc ID: Note:201342.1
Content Type: TEXT/X-HTML
Subject: Top Internal Errors - Oracle Server Release 9.2.0
Creation Date: 27-JUN-2002
Type: BULLETIN
Last Revision Date: 24-MAY-2004
Status: PUBLISHED

Top Internal Errors - Oracle Server Release 9.2.0
Additional information or documentation on ORA-600 errors not listed here may be
available from the ORA-600 Lookup tool : <Note:153788.1 </metalink/plsql/showdo
c?db=Not&id=153788.1>> <Note:189908.1 </metalink/plsql/showdoc?db=Not&id=189908.
1>> Oracle9i Release 2 (9.2) Support Status and Alerts

ORA-600 [KSLAWE:!PWQ]
Possible bugs:
<Bug:3566420 </metalink/plsql/showdoc?db=Bug&id=3566420>> BACKGROUND PROCESS GOT OERI:KSLAWE:!PWQ AND INSTANCE CRASHES - Fixed in: 9.2.0.6, 10G
References:
<Note:271084.1 </metalink/plsql/showdoc?db=Not&id=271084.1>> ALERT: ORA-600[KSLAWE:!PWQ] RAISED IN V92040 OR V92050 ON SUN 64BIT ORACLE
ORA-600 [ksmals] Possible bugs: Fixed in: <Bug:2662683 </metalink/plsql/showdoc?
db=Bug&id=2662683>> ORA-7445 & HEAP CORRUPTION WHEN RUNNING APPS PROGRAM THAT DO
ES HEAVY INSERTS 9.2.0.4 References: <Note:247822.1 </metalink/plsql/showdoc?db=
Not&id=247822.1>> ORA-600 [ksmals]
ORA-600 [4000] Possible bugs: Fixed in: <Bug:2959556 </metalink/plsql/showdoc?db
=Bug&id=2959556>> STARTUP after an ORA701 fails with OERI[4000] 9.2.0.5, 10G <Bu
g:1371820 </metalink/plsql/showdoc?db=Bug&id=1371820>> OERI:4506 / OERI:4000 pos
sible against transported tablespace 8.1.7.4, 9.0.1.4, 9.2.0.1 References: <Note
:47456.1 </metalink/plsql/showdoc?db=Not&id=47456.1>> "trying to get dba of undo
segment header block from usn" ORA-600 [4454] Possible bugs: Fixed in: ORA-600
[4000]
<Bug:1402161 </metalink/plsql/showdoc?db=Bug&id=1402161>> long running job 8.1.7
.3, 9.0.1.3, 9.2.0.1
OERI:4411/OERI:4454 on
References: <Note:138836.1 </metalink/plsql/showdoc?db=Not&id=138836.1>>
ORA-600 [4454]
ORA-600 [kcbgcur_9] Possible bugs: Fixed in: <Bug:2722809 </metalink/plsql/showd
oc?db=Bug&id=2722809>> OERI:kcbgcur_9 on direct load into AUTO space managed seg
ment 9.2.0.4, 10G <Bug:2392885 </metalink/plsql/showdoc?db=Bug&id=2392885>> Dire
ct path load may fail with OERI:kcbgcur_9 / OERI:ktfduedel2 9.2.0.4, 10G <Bug:22
02310 </metalink/plsql/showdoc?db=Bug&id=2202310>> OERI:KCBGCUR_9 possible from
SMON dropping a rollback segment in locally managed tablespace 9.0.1.4, 9.2.0.1
<Bug:2035267 </metalink/plsql/showdoc?db=Bug&id=2035267>> OERI:KCBGCUR_9 possibl
e during TEMP space operations 9.0.1.3, 9.2.0.1 <Bug:1804676 </metalink/plsql/sh
owdoc?db=Bug&id=1804676>> OERI:KCBGCUR_9 possible from ONLINE REBUILD INDEX with
concurrent DML 8.1.7.3, 9.0.1.3, 9.2.0.1 <Bug:1785175 </metalink/plsql/showdoc?
db=Bug&id=1785175>> OERI:kcbgcur_9 from CLOB TO CHAR or BLOB TO RAW conversion 9
.2.0.2, 10G References: <Note:114058.1 </metalink/plsql/showdoc?db=Not&id=114058
.1>> [kcbgcur_9] "Block class pinning violation" ORA-600 [qerrmOFBu1], [1003] Po
ssible bugs: Fixed in: <Bug:2308496 </metalink/plsql/showdoc?db=Bug&id=2308496>>
LOGGING INTO ORACLE 7.3.4 DATABASE ORA-600
SQL*PLUS CRASH IN TTC
References: <Note:209363.1 </metalink/plsql/showdoc?db=Not&id=209363.1>> [qerrmO
FBu1] - "Error during remote row fetch operation <Note:207319.1 </metalink/plsql
/showdoc?db=Not&id=207319.1>> from Oracle 9.2 to Oracle7 are Not Supported ORA-6
00 [ktsgsp5] or ORA-600 [kdddgb2] Possible bugs: Fixed in: <Bug:2384289 </metali
nk/plsql/showdoc?db=Bug&id=2384289>> [435816] [2753588] & PROBABLE INDEX CORRUPT
ION 9.2.0.2
ORA-600 ALERT: Connections
ORA-600 [KDDDGB2]
References:
<Note:139037.1 </metalink/plsql/showdoc?db=Not&id=139037.1>> ORA-600 [kdddgb2]
<Note:139180.1 </metalink/plsql/showdoc?db=Not&id=139180.1>> ORA-600 [ktsgsp5]
<Note:197737.1 </metalink/plsql/showdoc?db=Not&id=197737.1>> ALERT: Corruption / Internal Errors possible after Upgrading to 9.2.0.1

19.45: ADJUST SCN:
==================
Note 1 Adjust SCN:
------------------
Doc ID: Note:30681.1 Subject: EVENT: ADJUST_SCN - Quick Reference Type: REFERENC
E Status: PUBLISHED Content Type: TEXT/PLAIN Creation Date: 20-OCT-1997 Last Rev
ision Date: 04-AUG-2000 Language: USAENG ADJUST_SCN Event ~~~~~~~~~~~~~~~~ *** W
ARNING *** This event should only ever be used under the guidance of an experien
ced Oracle analyst. If an SCN is ahead of the current database SCN, this indicat
es some form of database corruption. The database should be rebuilt after bumpin
g the SCN. **************** The ADJUST_SCN event is useful in some recovery situ
ations where the current SCN needs to be incremented by a large value to ensure
it is ahead of the highest SCN in the database. This is typically required if ei
ther: a. An ORA-600 [2662] error is signalled against database blocks or b. ORA-
1555 errors keep occurring after forcing the database open or ORA-604 / ORA-1555
errors occur during database open. (Note: If startup reports ORA-704 & ORA-1555
errors together then the ADJUST_SCN event cannot be used to bump the SCN as the
error is occurring during bootstrap. Repeated startup/shutdown attempts may help
if the SCN mismatch is small) or c. If a database has been forced open using _ALL
OW_RESETLOGS_CORRUPTION (See <Parameter:Allow_Resetlogs_Corruption> ) The ADJUST
_SCN event acts as described below. **NOTE: You can check that the ADJUST_SCN ev
ent has fired as it should write a message to the alert log in the form "Debuggi
ng event used to advance scn to %s". If this message is NOT present in the alert
log the event has probably not fired. ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ If the dat
abase will NOT open: ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Take a backup. You can use e
vent 10015 to trigger an ADJUST_SCN on database open: startup mount; alter sessi
on set events '10015 trace name adjust_scn level 1'; (NB: You can only use IMMED
IATE here on an OPEN database. If the
database is only mounted use the 10015 trigger to adjust SCN, otherwise you get
ORA 600 [2251], [65535], [4294967295] ) alter database open; If you get an ORA 6
00:2256 shutdown, use a higher level and reopen. Do *NOT* set this event in init
.ora or the instance will crash as soon as SMON or PMON try to do any clean up.
Always use it with the "alter session" command. ~~~~~~~~~~~~~~~~~~~~~~~~~~ If th
e database *IS* OPEN: ~~~~~~~~~~~~~~~~~~~~~~~~~~ You can increase the SCN thus:
alter session set events 'IMMEDIATE trace name ADJUST_SCN level 1'; LEVEL: Level
1 is usually sufficient - it raises the SCN to 1 billion (1024*1024*1024) Level
2 raises it to 2 billion etc... If you try to raise the SCN to a level LESS THA
N or EQUAL to its current setting you will get <OERI:2256> - See below. Ie: The
event steps the SCN to known levels. You cannot use the same level twice. Calcul
ating a Level from 600 errors: ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ To get a L
EVEL for ADJUST_SCN: a) Determine the TARGET scn: ora-600 [2662] See <OERI:2662>
ora-600 [2256] See <OERI:2256> Use TARGET >= blocks SCN Use TARGET >= Current S
CN
b) Multiply the TARGET wrap number by 4. This will give you the level to use in
the adjust_scn to get the correct wrap number. c) Next, add the following value
to the level to get the desired base value as well:

Add to Level   Base
~~~~~~~~~~~~   ~~~~~~~~~~~~
0              0
1              1073741824
2              2147483648
3              3221225472
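A hedged worked example of the level calculation: suppose an ORA-600 [2662] reports a dependent SCN with wrap 1 and base 1500000000 (purely hypothetical numbers). Step (b) gives 1 * 4 = 4; step (c) requires a base of at least 1500000000, so add 2 (base 2147483648), giving level 6:
alter session set events 'IMMEDIATE trace name ADJUST_SCN level 6';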
Note 2: Adjust SCN
------------------
Subject: OERR: 600 2662 Block SCN is ahead o
f Current SCN Creation Date: 21-OCT-1997 ORA-600 [2662] [a] [b] [c] [d] [e] Vers
ions: 7.0.16 - 8.0.5 Source: kcrf.h
=========================================================================== Mean
ing: There are 3 forms of this error. 4/5 argument forms The SCN found on a bloc
k (dependant SCN) was ahead of the current SCN. See below for this 1 Argument (b
efore 7.2.3): Oracle is in the process of writing a block to a log file. If the
calculated block checksum is less than or equal to 1 (0 and 1 are reserved) ORA-
600 [2662] is returned. This is a problem generating an offline immediate log ma
rker (kcrfwg). *NOT DOCUMENTED HERE* -------------------------------------------
-------------------------------Argument Description: Until version 7.2.3 this in
ternal error can be logged for two separate reasons, which we will refer to as t
ype I and type II. The two types can be distinguished by the number of arguments
: Type I has four or five arguments after the [2662]. Type II has one argument a
fter the [2662]. From 7.2.3 onwards type II no longer exists. Type I ~~~~~~ a. b
. c. d. e.
Current SCN WRAP Current SCN BASE dependant SCN WRAP dependant SCN BASE Where pr
esent this is the DBA where the dependant SCN came from. From kcrf.h: If the SCN
comes from the recent or current SCN then a dba of zero is saved. If it comes f
rom undo$ because the undo segment is not available then the undo segment number
is saved, which looks like a block from file 0. If the SCN is for a media recov
ery redo (i.e. block number == 0 in change vector), then the dba is for block 0
of the relevant datafile. If it is from another database for distribute xact the
n dba is DBAINF(). If it comes from a TX lock then the dba is really usn<<16+slo
t.
Type II ~~~~~~~ a. checksum -> log block checksum - zero if none (thread # in ol
d format) ----------------------------------------------------------------------
----Diagnosis: ~~~~~~~~~~ In addition to different basic types from above, there
are different situations and coherences where ORA-600 [2662] type 'I' can be ra
ised. For diagnosis we can split up startup-issues and no-startup-issues. Usuall
y the startup-issues are more critical. Getting started:
~~~~~~~~~~~~~~~~ (1) is the error raised during normal database operations (i.e.
when the database is up) or during startup of the database? (2) what is the SCN
difference [d]-[b] ( subtract argument 'b' from arg 'd')? (3) is there a fifth
argument [e] ? If so convert the dba to file# block# Is it a data dictionary obj
ect? (file#=1) If so find out object name with the help of reference dictionary
from second database (4) What is the current SQL statement? (see trace) Which ta
ble is refered to? Does the table match the object you found in step before? Be
careful at this point: there may be no relationship between DBA in [e] and real
source of problem (blockdump). Deeper analysis: ~~~~~~~~~~~~~~~~ - investigate t
race file this will be a user trace file normally but could be an smon trace too
- search for: 'buffer' ("buffer dba" in Oracle7 dumps, "buffer tsn" in Oracle8
dumps) this will bring you to a blockdump which usually represents the 'real' so
urce of OERI:2662 WARNING: There may be more than one buffer pinned to the proce
ss so ensure you check out all pinned buffers. -> does the blockdump match the d
ba from e.? -> what kind of blockdump is it? (a) rollbacksegment header (b) data
block (c) other SEE BELOW for EXAMPLES which demonstrate the sort of output you
may see in trace files and the things to check. Check list and possible causes ~
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - If Parallel Server check both nodes are using th
e same lock manager instance & point at the same control files. - If not Paralle
l Server check that 2 instances haven't mounted the same database (Is there a se
cond PMON process around ?? - shut down any other instances to be sure) Possible
causes: - doing an open resetlogs with _ALLOW_RESETLOGS_CORRUPTION enabled - a
hardware problem, like a faulty controller, resulting in a failed write to the c
ontrol file or the redo logs - restoring parts of the database from backup and n
ot doing the appropriate recovery - restoring a control file and not doing a REC
OVER DATABASE USING BACKUP CONTROLFILE - having _DISABLE_LOGGING set during cras
h recovery
- problems with the DLM in a parallel server environment - a bug Solutions: - if
the SCNs in the error are very close: Attempting a startup several times will b
ump up the dscn every time we open the database even if open fails. The database
will open when dscn=scn. - ** You can bump the SCN on open using <Event:ADJUST_
SCN> See [NOTE:30681.1] Be aware that you should really rebuild the database if
you use this option. - Once this has occurred you would normally want to rebuild
the database via exp/rebuild/imp as there is no guarantee that some other block
s are not ahead of time.

Articles:
~~~~~~~~~
Solutions:
  [NOTE:30681.1]   Details of the ADJUST_SCN Event
  [NOTE:1070079.6] alter system checkpoint
Possible Causes:
  [NOTE:1021243.6] CHECK INIT.ORA SETTING _DISABLE_LOGGING
  [NOTE:74903.1]   How to Force the Database Open (_ALLOW_RESETLOGS_CORRUPTION)
  [NOTE:41399.1]   Forcing the database open with `_ALLOW_RESETLOGS_CORRUPTION`
  [NOTE:851959.9]  OERI:2662 DURING CREATE SNAPSHOT AT MASTER SITE

Known Bugs:
~~~~~~~~~~~
Fixed In.   Bug No.      Description
---------+------------+---------------------------------------------------
7.0.14      BUG:153638
7.1.5       BUG:229873
7.1.3       Bug:195115   Miscalculation of SCN on startup for distributed TX ?
7.1.6.2.7   Bug:297197   Port specific Solaris OPS problem
7.3         Bug:336196   Port specific IBM SP AIX problem -> dlm issue
7.3.4.5     Bug:851959   OERI:2662 possible from distributed OPS select
--------------------------------------------------------------------------------
--------------------------------------------------------------------
Examples:
~~~~~~~~~
Below are some examples of this type of error and the information you will
see in the trace files.

~~~~~~~~~~
CASE (a)
~~~~~~~~~~
blockdump should look like this:

*** buffer dba: 0x05000002 inc: 0x00000001 seq: 0x0001a9c6
ver: 1 type: 1=KTU UNDO HEADER
Extent Control Header
-----------------------------------------------------------------
Extent Control:: inc#: 716918 tsn: 4 object#: 0
***

-> interpret:
   dba: 0x05000002 -> 83886082 (0x05000002) = file 5, block 2
   tsn: 4 -> this is rollback segment 4
   tsn: 4 -> this rollback segment is in tablespace 4

ORA-00600: internal error code, arguments: [2662], [0], [71183], [0], [71195], [83886082], [], []

-> [e] > 0 and represents the dba of the block which is in the trace
-> [d]-[b] = 71195 - 71183 = 12
-> convert [d] to hex: 71195 = 0x1161B, so this value can be found in the blockdump:

*** TRN TBL::
index  state  cflags  wrap#   uel     scn                dba
-----------------------------------------------------------------
...
0x4e   9      0x00    0x00d6  0xffff  0x0000.0001161b    0x00000000
...
***

-> possible cause: so in this
case the CURRENT SCN is LOWER than the SCN on this transaction ie: The current
SCN looks like it has decreased !! This could happen if the database is opened w
ith the _allow_resetlogs_corruption parameter -> If some recovery steps have jus
t been performed review these steps as the mismatch may be due to open resetlogs
with _allow_resetlogs_corruption enabled or similar. See <Parameter:Allow_Resetlogs_corruption>
for information on this parameter.
---------------------------------------------------------------------
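If you need to map the undo segment number seen in the CASE (a) dump above
(tsn: 4 -> rollback segment 4) to an actual rollback segment name, a quick check is
possible from SQL (a sketch; v$rollname only lists online rollback/undo segments):

   SELECT usn, name FROM v$rollname WHERE usn = 4;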
~~~~~~~~~~
CASE (b)
~~~~~~~~~~
blockdump looks like this:

*** buffer dba: 0x0100012f inc: 0x00000815 seq: 0x00000d48 ver: 1 type: 6=trans data
Block header dump: dba: 0x0100012f
 Object id on Block? Y
 seg/obj: 0xe  csc: 0x00.5fed6  itc: 2  flg: O  typ: 1 - DATA
 fsl: 0  fnx: 0x0

Itl   Xid                  Uba                 Flag   Lck  Scn/Fsc
0x01  0x0000.00b.0000036c  0x0100261c.0138.04  --U-   1    fsc 0x0000.0005fed7
0x02  0x0000.00a.0000037b  0x0100261d.0138.01  --U-   1    fsc 0x0000.0005fed4

data_block_dump
===============
...
***

interpret:
   dba: 0x0100012f -> 16777519 = (0x1, 0x12f) = file 1, block 303

*** SVRMGR> SELECT SEGMENT_NAME, SEGMENT_TYPE FROM DBA_EXTENTS
       2> WHERE FILE_ID = 1 AND 303 BETWEEN BLOCK_ID AND
       3> BLOCK_ID + BLOCKS - 1;
SEGMENT_NAME                   SEGMENT_TYPE
------------------------------ ----------------
UNDO$                          TABLE
1 row selected.
***

-> current sql-statement (trace):
*** update undo$ set name=:2,file#=:3,block#=:4,status$=:5,user#=:6,
    undosqn=:7,xactsqn=:8,scnbas=:9,scnwrp=:10,inst#=:11 where us#=:1
ksedmp: internal or fatal error
ORA-00600: internal error code, arguments: [2662], [0], [392916], [0], [392919], [0], [], []
***

-> e. = 0, info not available
-> d-b = 392919 - 392916 = 3
-> dba from blockdump matches the object from the current sql statement
-> convert [d] to hex: 392919 = 0x5FED7, so this value can be found in the blockdump
   -> see ITL slot 0x01!
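For dumps taken from Oracle8 and later (where a dba carries 10 bits of relative file
number and 22 bits of block number, rather than the older 1-byte-file layout used in
the Oracle7 trace above), dbms_utility can do the conversion for you. A small sketch;
&dba is a SQL*Plus substitution variable into which you supply the decimal dba value
from the error or trace:

   SELECT dbms_utility.data_block_address_file(&dba)  AS file#,
          dbms_utility.data_block_address_block(&dba) AS block#
   FROM   dual;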
--------------------------------------------------------------------------------
--------------------------------------------------------------------------------
--------------------------------------------------------------
Some more internals:
~~~~~~~~~~~~~~~~~~~~
I will try to give another example in order to answer the question whether the
current SCN decreased or the dependent SCN increased.

hypothesis: current SCN decreased
Evidence:
reproduced ORA-600 [2662] by aborting tx and using _allow_resetlog_corruption wh
ile open resetlogs. check database SCN before! Prerequisits: _allow_resetlogs_co
rruption = true in init<SID>.ora shutdown/startup db *** BEGIN TESTCASE SVRMGR>
drop table tx; Statement processed. SVRMGR> create table tx (scn# number); State
ment processed. SVRMGR> insert into tx values( userenv('COMMITSCN') ); 1 row pro
cessed. SVRMGR> select * from tx; SCN# ---------392942 1 row selected. *********
*** another session ************** SQL> connect scott/tiger Connected. SQL> upda
te emp set sal=sal+1; 13 rows processed. SQL> -- no commit here ****************
*************************** SVRMGR> insert into tx values( userenv('COMMITSCN')
); 1 row processed. SVRMGR> select * from tx; SCN# ---------392942 392943 2 rows
selected. -- so current SCN will be 392943 SVRMGR> shutdown abort ORACLE instan
ce shut down. -- this breaks tx SVRMGR> startup mount pfile=e:\jv734\initj734.or
a ORACLE instance started. Total System Global Area 11018952 bytes Fixed Size 35
760 bytes Variable Size 7698200 bytes Database Buffers 3276800 bytes Redo Buffer
s 8192 bytes Database mounted. SVRMGR> recover database until cancel; ORA-00279:
Change 392925 generated at 10/26/99 17:13:03 needed for thread 1 ORA-00289: Sug
gestion : e:\jv734\arch\arch_2.arc ORA-00280: Change 392925 for thread 1 is in s
equence #2
Specify log: {<RET>=suggested | filename | AUTO | CANCEL} cancel Media recovery
cancelled. SVRMGR> alter database open resetlogs; alter database open resetlogs
* ORA-00600: internal error code, arguments: [2662], [0], [392928], [0], [392931
], [0], [], [] *** END TESTCASE because we know current SCN before (392943) we s
ee, that current SCN has decreased after solving the problem with: shutdown abor
t/startup -> works SVRMGR> drop table tx; Statement processed. SVRMGR> create ta
ble tx (scn# number); Statement processed. SVRMGR> insert into tx values( useren
v('COMMITSCN') ); 1 row processed. SVRMGR> select * from tx; SCN# ---------39294
3
1 row selected.

so we have exactly reached the current SCN from before 'shutdown abort'.
So the current SCN was bumped up from 392928 to 392942.

Note 3: Adjust SCN
------------------
Doc ID:             Note:28929.1
Content Type:       TEXT/X-HTML
Subject:            ORA-600 [2662] "Block SCN is ahead of Current SCN"
Creation Date:      21-OCT-1997
Type:               REFERENCE
Last Revision Date: 15-OCT-2004
Status:             PUBLISHED

<Internal_Only>
This note contains information that was not reviewed by DDR. As such, the contents
are not necessarily accurate and care should be taken when dealing with customers
who have encountered this error. Thanks. PAA Internals Group
</Internal_Only>

Note: For additional ORA-600 related information please read Note 146580.1

PURPOSE:
This article discusses the internal error "ORA-600 [2662]", what it means and po
ssible actions. The information here is only applicable to the versions listed a
nd is provided only for guidance. ERROR: ORA-600 [2662] [a] [b] [c] [d] [e] VERS
IONS: versions 6.0 to 10.1 DESCRIPTION: A data block SCN is ahead of the current
SCN. The ORA-600 [2662] occurs when an SCN is compared to the dependent SCN sto
red in a UGA variable. If the SCN is less than the dependent SCN then we signal
the ORA-600 [2662] internal error.

ARGUMENTS:
Arg [a] Current SCN WRAP
Arg [b] Current SCN BASE
Arg [c] dependent SCN WRAP
Arg [d] dependent SCN BASE
Arg [e] Where present this is the DBA where the dependent SCN came from.
FUNCTIONALITY: File and IO buffer management for redo logs IMPACT: INSTANCE FAIL
URE POSSIBLE PHYSICAL CORRUPTION SUGGESTIONS: There are different situations whe
re ORA-600 [2662] can be raised. It can be raised on startup or duing database o
peration. If not using Parallel Server, check that 2 instances have not mounted
the same database. Check for SMON traces and have the alert.log and trace files
ready to send to support. Check the SCN difference [argument d]-[argument b]. If
the SCNs in the error are very close, then try to shutdown and startup the inst
ance several times. In some situations, the SCN increment during startup may per
mit the database to open. Keep track of the number of times you attempted a star
tup. If the Known Issues section below does not help in terms of identifying a s
olution, please submit the trace files and alert.log to Oracle Support Services
for further analysis.
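To make the "SCN difference" check above concrete, a worked example using the argument
values from CASE (b) earlier in this note (illustrative figures only):
[d] - [b] = 392919 - 392916 = 3. A gap this small suggests that a few repeated startup
attempts may be enough for the current SCN to catch up with the dependent SCN.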
Known Issues: Bug# 2899477 See Note 2899477.8 </metalink/plsql/showdoc?db=NOT&id
=2899477.8> Minimise risk of a false OERI[2662] Fixed: 9.2.0.5, 10.1.0.2 Bug# 27
64106 See Note 2764106.8 </metalink/plsql/showdoc?db=NOT&id=2764106.8> False OER
I[2662] possible on SELECT which can crash the instance Fixed: 9.2.0.5, 10.1.0.2
Bug# 2054025 See Note 2054025.8 </metalink/plsql/showdoc?db=NOT&id=2054025.8> O
ERI:2662 possible on new TEMPORARY index block Fixed: 9.0.1.3, 9.2.0.1 Bug# 8519
59 See Note 851959.8 </metalink/plsql/showdoc?db=NOT&id=851959.8> OERI:2662 poss
ible from distributed OPS select Fixed: 7.3.4.5 Bug# 647927 P See Note 647927.8
</metalink/plsql/showdoc?db=NOT&id=647927.8> Digital Unix ONLY: OERI:2662 could
occur under heavy load Fixed: 8.0.4.2, 8.0.5.0 <Internal_Only> INTERNAL ONLY SEC
TION - NOT FOR PUBLICATION OR DISTRIBUTION TO CUSTOMERS ========================
================================================ There were 2 forms of this erro
r until 7.2.3: Type I: 4/5 argument forms The SCN found on a block (dependent SC
N) is ahead of the current SCN. See below for this 1 Argument (before 7.2.3 only
): Oracle is in the process of writing a block to a log file. If the calculated
block checksum is less than or equal to 1 (0 and 1 are reserved) ORA-600 [2662]
is returned. This is a problem generating an offline immediate log marker (kcrfw
g). *NOT DOCUMENTED HERE*
Type II:
Type I
~~~~~~
a. Current SCN WRAP
b. Current SCN BASE
c. dependent SCN WRAP
d. dependent SCN BASE
e. Where pr
esent this is the DBA where the dependent SCN came from. From kcrf.h: If the SCN
comes from the recent or current SCN then a dba of zero is saved. If it comes f
rom undo$ because the undo segment is not available then the undo segment number
is saved, which looks like a block from file 0. If the SCN is for a media recov
ery redo (i.e. block number == 0 in change vector), then the dba is for block 0
of the relevant datafile. If it is from another database for a distributed trans
action then dba is DBAINF(). If it comes from a TX lock then the dba is really u
sn<<16+slot.
Type II ~~~~~~~ a. checksum -> log block checksum - zero if none (thread # in ol
d format) ----------------------------------------------------------------------
----Diagnosis: ~~~~~~~~~~ In addition to different basic types from above, there
are different situations where ORA-600 [2662] type I can be raised. Getting sta
rted: ~~~~~~~~~~~~~~~~ (1) is the error raised during normal database operations
(i.e. when the database is up) or during startup of the database? (2) what is t
he SCN difference [d]-[b] ( subtract argument 'b' from arg 'd')? (3) is there a
fifth argument [e] ? If so convert the dba to file# block# Is it a data dictiona
ry object? (file#=1) If so find out object name with the help of reference dicti
onary from second database (4) What is the current SQL statement? (see trace) Wh
ich table is refered to? Does the table match the object you found in previous s
tep? Be careful at this point: there may be no relationship between DBA in [e] a
nd the real source of problem (blockdump). Deeper analysis: ~~~~~~~~~~~~~~~~ (1)
investigate trace file: this will be a user trace file normally but could be an
smon trace too (2) search for: 'buffer' ("buffer dba" in Oracle7 dumps, "buffer
tsn" in Oracle8/Oracle9 dumps) this will bring you to a blockdump which usually
represents the 'real' source of OERI:2662 WARNING: There may be more than one b
uffer pinned to the process so ensure you check out all pinned buffers. -> does
the blockdump match the dba from e.? -> what kind of blockdump is it? (a) rollba
ck segment header (b) datablock (c) other Check list and possible causes ~~~~~~~
~~~~~~~~~~~~~~~~~~~~~~~ If Parallel Server check both nodes are using the same l
ock manager instance & point at the same control files. Possible causes: (1) doi
ng an open resetlogs with _ALLOW_RESETLOGS_CORRUPTION enabled
(2) a hardware problem, like a faulty controller, resulting in a failed write to
the control file or the redo logs (3) restoring parts of the database from back
up and not doing the appropriate recovery (4) restoring a control file and not d
oing a RECOVER DATABASE USING BACKUP CONTROLFILE (5) having _DISABLE_LOGGING set
during crash recovery (6) problems with the DLM in a parallel server environmen
t (7) a bug Solutions: (1) if the SCNs in the error are very close, attempting a
startup several times will bump up the dscn every time we open the database eve
n if open fails. The database will open when dscn=scn. (2)You can bump the SCN e
ither on open or while the database is open using <Event:ADJUST_SCN> (see Note 3
0681.1 </metalink/plsql/showdoc?db=NOT&id=30681.1>). Be aware that you should re
build the database if you use this option. Once this has occurred you would norm
ally want to rebuild the database via exp/rebuild/imp as there is no guarantee t
hat some other blocks are not ahead of time. Articles: ~~~~~~~~~ Solutions: Note
30681.1 </metalink/plsql/showdoc?db=NOT&id=30681.1> Details of the ADJUST_SCN E
vent Note 1070079.6 </metalink/plsql/showdoc?db=NOT&id=1070079.6> Alter System C
heckpoint Possible Causes: Note 1021243.6 </metalink/plsql/showdoc?db=NOT&id=102
1243.6> CHECK INIT.ORA SETTING _DISABLE_LOGGING Note 41399.1 </metalink/plsql/sh
owdoc?db=NOT&id=41399.1> Forcing the database open with `_ALLOW_RESETLOGS_CORRUP
TION` Note 851959.9 </metalink/plsql/showdoc?db=NOT&id=851959.9> OERI:2662 DURIN
G CREATE SNAPSHOT AT MASTER SITE Known Bugs: ~~~~~~~~~~~ Fixed In. Bug No. Descr
iption ---------+------------+--------------------------------------------------
-7.1.5 Bug 229873 </metalink/plsql/showdoc?db=Bug&id=229873> 7.1.3 Bug 195115 </
metalink/plsql/showdoc?db=Bug&id=195115> Miscalculation of SCN on startup for di
stributed TX ? 7.1.6.2.7 Bug 297197 </metalink/plsql/showdoc?db=Bug&id=297197> P
ort specific Solaris OPS problem 7.3 Bug 336196 </metalink/plsql/showdoc?db=Bug&
id=336196> Port specific IBM SP AIX problem -> dlm issue 7.3.4.5 Bug 851959 </me
talink/plsql/showdoc?db=Bug&id=851959> OERI:2662 possible from distributed OPS s
elect Not fixed Bug 2216823 </metalink/plsql/showdoc?db=Bug&id=2216823> OERI:266
2
reported when reusing tempfile with restored DB 8.1.7.4 Bug 2177050 </metalink/p
lsql/showdoc?db=Bug&id=2177050> leak possible (with tags "define var info"/"oact
oid info") can corrupt UGA and cause OERI:2662
OERI:729 space
--------------------------------------------------------------------------
</Internal_Only>

19.47: _allow_read_only_corruption
========
========================== If you have a media failure and for some reason (such
as having lost an archived log file) you cannot perform a complete recovery on
some datafiles, then you might need this parameter. It is new for 8i. Previously
there was only _allow_resetlogs_corruption which allowed you to do a RESETLOGS
open of the database in such situations. Of course, a database forced open in th
is way would be in a crazy state because the current SCN would reflect the exten
t of the incomplete recovery, but some datafiles would have blocks in the future
, which would lead to lots of nasty ORA-00600 errors (although there is an ADJUS
T_SCN event that could be used for relief). Once in this position, the only thin
g to do would be to do a full database export, rebuild the database, import and
then assess the damage. The new _allow_read_only_corruption provides a much clea
ner solution to the same problem. You should only use it if all other recovery o
ptions have been exhausted, and you cannot open the database read/write. Once ag
ain, the intent is to export, rebuild and import. Not pleasant, but sometimes be
tter than going back to an older usable backup and performing incomplete recover
y to a consistent state. Also, the read only open allows you to assess better wh
ich recovery option you want to take without committing you to either.

19.48: _allow_resetlogs_corruption
==================================
log problem: Try this approach to solve problems with redolog files:

1. Create a backup of all datafiles, redolog files and controlfiles.
2. Set the next initialization parameter in init.ora:
   _allow_resetlogs_corruption = true
3. Startup the database and try to open it.
4. If the database can't be opened, then mount it and try to issue:
   alter session set events '10015 trace name adjust_scn level 1';
   # or if the previous doesn't work increase the level, e.g.:
   alter session set events '10015 trace name adjust_scn level 4096';
5. alter database open

You can try with recover database until cancel and then open it with the resetlogs
option. With this procedure I successfully recovered from losing my redolog files.
Using event 10015 you are forcing an SCN jump that will eventually synchronize the
SCN values from your datafiles and controlfiles. The level controls how much the SCN
will be incremented. In the case of a 9.0.1 database I had, it worked only with 4096;
however it may be that even a level of 1 to 3 would make the SCN jump 1 million.
So you have to dump those headers and compare the SCNs inside before and after t
he event 10015. I was successful too in opening a db after losing the controlfile and
online redo logs; however, Oracle support made it pretty clear that the only us
age for the database afterwards is to do a full export and recreate it from that
. It would be better if Oracle support walks you through this procedure.
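A compact sketch of the sequence described in this section (illustrative only: the
pfile name, paths and the event level are assumptions, and this should only be
attempted on a database you intend to export and rebuild afterwards):

   -- in init<SID>.ora:
   --   _allow_resetlogs_corruption = true
   SQL> STARTUP MOUNT
   SQL> RECOVER DATABASE UNTIL CANCEL;
   CANCEL
   SQL> ALTER DATABASE OPEN RESETLOGS;
   -- if the open still fails with ORA-600 [2662], bump the SCN and retry:
   SQL> ALTER SESSION SET EVENTS '10015 trace name adjust_scn level 1';
   SQL> ALTER DATABASE OPEN;
   -- afterwards: full export, rebuild the database, import.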
19.49: ORA-01503: CREATE CONTROLFILE failed
============================================

ORA-01503: CREATE CONTROLFILE failed
ORA-01161: database name PEGACC in file h
eader does not match given name of PEGSAV ORA-01110: data file 1: '/u02/oradata/
pegsav/system01.dbf' Note 1: ======= Problem: You are attempting to recreate a c
ontrolfile with a 'createcontrolfile' script and the script fails with the follo
wing error when it tries to access one of the datafiles: ORA-1161, database name
<name> in file header does not match given name
You are certain that the file is good and that it belongs to that database. Solu
tion: Check the file's properties in Windows Explorer and verify that it is not
a "Hidden" file. Explanation: If you have set the "Show All Files' option under
Explorer, View, Options, you are able to see 'hidden' files that other users and
/or applications cannot. If any or all datafiles are marked as 'hidden' files, O
racle does not see them when it tries to recreate the controlfile. You must chan
ge the properties of the file by right-clicking on the file in Windows Explorer
and then deselecting the check box marked "Hidden" under the General tab. You sh
ould then be able to create the controlfile. References: Note 1084048.6 ORA-0150
3, ORA-01161: on Create Controlfile. Note 2: ======= This message may result, if
the db_name in the init.ora does not match with the set "db_name" given while c
reating the controlfile. Also, remove any old controlfiles present in the specif
ied directory. Thanks, Note 3: ======= We ran into a similar problem when trying
to create a new instance with datafiles from another database. The error comes
in the create control file statement. Oracle uses REUSE as the default option wh
en you do the alter database backup controlfile to trace. If you replace REUSE with
SET and use the new database name, Oracle will rewrite the header information in all
the database datafiles and you will be able to start up the instance. Hope this helps
. Note 4: ======= Try this command "CREATE CONTROLFILE SET DATABASE..." instead
of "CREATE CONTROLFILE REUSE DATABASE..." I think it would be better.
19.50. ORA-01031 ================ Note 1: ------The 'OSDBA' and 'OSOPER' groups
are chosen at installation time and usually both default to the group 'dba'. The
se groups are compiled into the 'oracle' executable and so are the same for all
databases running from a given ORACLE_HOME directory. The actual groups being us
ed for OSDBA and OSOPER can be checked thus: cd $ORACLE_HOME/rdbms/lib cat confi
g.[cs] The line '#define SS_DBA_GRP "group"' should name the chosen OSDBA group.
The line '#define SS_OPER_GRP "group"' should name the chosen OSOPER group. Not
e 2:
-------
Doc ID:             Note:69642.1
Content Type:       TEXT/PLAIN
Subject:            UNIX: Checklist for Resolving Connect AS SYSDBA Issues
Creation Date:      20-APR-1999
Type:               TROUBLESHOOTING
Last Revision Date: 31-DEC-2004
Status:             PUBLISHED

Introduction:
~~~~~~~~~~~~~
This bulleti
n lists the documented causes of getting
---> prompted for a password when trying to CONNECT as SYSDBA ---> errors such a
s ORA-01031, ORA-01034, ORA-06401, ORA-03113,ORA-09925, ORA-09817, ORA-12705, OR
A-12547 a) SQLNET.ORA Checks: --------------------1. The "sqlnet.ora" can be fou
nd in the following locations (listed by search order): $TNS_ADMIN/sqlnet.ora $H
OME/sqlnet.ora $ORACLE_HOME/network/admin/sqlnet.ora Depending upon your operati
ng system, it may also be located in: /var/opt/oracle/sqlnet.ora /etc/sqlnet.ora
A corrupted "sqlnet.ora" file, or one with security options set, will cause a '
connect internal' request to prompt for a password. To determine if this is the
problem, locate the "sqlnet.ora" that is being
used. The one being used will be the first one found according to the search ord
er listed above. Next, move the file so that it will not be found by this search
: % mv sqlnet.ora sqlnet.ora_save Try to connect internal again. If it still fai
ls, search for other "sqlnet.ora" files according to the search order listed abo
ve and repeat using the move command until you are sure there are no other "sqln
et.ora" files being used. If this does not resolve the issue, use the move comma
nd to put all the "sqlnet.ora" files back where they were before you made the ch
ange: % mv sqlnet.ora_save sqlnet.ora If moving the "sqlnet.ora" resolves the is
sue, then verify the contents of the file: a) SQLNET.AUTHENTICATION_SERVICES If
you are not using database links, comment this line out or try setting it to: SQ
LNET.AUTHENTICATION_SERVICES = (BEQ,NONE) b) SQLNET.CRYPTO_SEED This should not
be set in a "sqlnet.ora" file on UNIX. If it is, comment the line out. (This set
ting is added to the "sqlnet.ora" if it is built by one of Oracle's network confi
guration products shipped with client products) c) AUTOMATIC_IPC If this is set
to "ON" it can force a "TWO_TASK" connection. Try setting this to "OFF": AUTOMAT
IC_IPC = OFF 2. Set the permissions correctly in the "TNS_ADMIN" files. The envi
ronment variable TNS_ADMIN defines the directory where the "sqlnet.ora", "tnsnam
es.ora", and "listener.ora" files reside. These files must contain the correct p
ermissions, which are set when "root.sh" runs during installation. As root, run
"root.sh" or edit the permissions on the "sqlnet.ora", "tnsnames.ora", and "list
ener.ora" files by hand as follows: $ cd $TNS_ADMIN $ chmod 644 sqlnet.ora tnsna
mes.ora listener.ora
$ ls -l sqlnet.ora tnsnames.ora listener.ora
-rw-r--r--   1 oracle   dba    1628 Jul 12 15:25 listener.ora
-rw-r--r--   1 oracle   dba     586 Jun  1 12:07 sqlnet.ora
-rw-r--r--   1 oracle   dba   82274 Jul 12 15:23 tnsnames.ora
b) Software and Operating System Issues: ---------------------------------------
1. Be sure $ORACLE_HOME is set to the correct directory and does not have any ty
ping mistakes: % cd $ORACLE_HOME % pwd If this returns a location other than you
r "ORACLE_HOME" or is invalid, you will need to reset the value of this environm
ent variable: sh or ksh: ---------$ ORACLE_HOME=<path_to_ORACLE_HOME> $ export O
RACLE_HOME Example: $ ORACLE_HOME=/u01/app/oracle/product/7.3.3 $ export ORACLE_
HOME csh: ---% setenv ORACLE_HOME <path_to_ORACLE_HOME> Example: % setenv ORACLE
_HOME /u01/app/oracle/product/7.3.3 If your "ORACLE_HOME" contains a link or the
instance was started with the "ORACLE_HOME" set to another value, the instance
may try to start using the memory location that another instance is using. An ex
ample of this might be: You have "ORACLE_HOME" set to "/u01/app/oracle/product/7
.3.3" and start the instance. Then you do something like: % ln -s /u01/app/oracl
e/product/7.3.3 /u01/app/oracle/7.3.3 % setenv ORACLE_HOME /u01/app/oracle/7.3.3
% svrmgrl SVRMGR> connect internal If this prompts for a password then most lik
ely the combination of your "ORACLE_HOME" and "ORACLE_SID" hash to the same shar
ed memory address of another running instance. Otherwise you may be able to conn
ect internal but you will receive an ORA-01034 "Oracle not available" error. In
most cases using a link as part of your "ORACLE_HOME" is fine as long as you are
consistent. Oracle recommends that links not be used as part of the "ORACLE_HOM
E", but their use is supported. 2. Check that $ORACLE_SID is set to the correct
SID, (including capitalization),
and does not have any typos: % echo $ORACLE_SID Refer to Note:1048876.6 for more
information. 3. Ensure $TWO_TASK is not set. To check if "TWO_TASK" is set, do
the following: sh, ksh or on HP/UX only csh: ----------------------------env |gr
ep -i two - or echo $TWO_TASK csh: ---setenv |grep -i two If any lines are retur
ned such as: TWO_TASK= - or TWO_TASK=PROD You will need to unset the environment
variable "TWO_TASK": sh or ksh: ---------unset TWO_TASK csh: ---unsetenv TWO_TA
SK Example : $ TWO_TASK=V817 $ export TWO_TASK $ sqlplus /nolog SQL*Plus: Releas
e 8.1.7.0.0 - Production on Fri Dec 31 10:12:25 2004 (c) Copyright 2000 Oracle C
orporation. All rights reserved. SQL> conn / as sysdba ERROR: ORA-01031: insuffi
cient privileges $ unset TWO_TASK $ sqlplus /nolog SQL> conn / as sysdba Connect
ed. If you are running Oracle release 8.0.4, and upon starting "svrmgrl" you rec
eive an ORA-06401 "NETCMN: invalid driver designator" error, you should also uns
et two_task. The login connect string may be getting its value from the TWO_TASK
environment variable if this is set for the user.
4. Check the permissions on the Oracle executable: % cd $ORACLE_HOME/bin % ls -l
oracle ('ls -n oracle' should work as well)
The permissions should be rwsr-s--x, or 6751. If the permissions are incorrect,
do the following as the "oracle" software owner: % chmod 6751 oracle If you rece
ive an ORA-03113 "end-of-file on communication" error followed by a prompt for a
password, then you may also need to check the ownership and permissions on the
dump directories. These directories must belong to Oracle, group dba, (or the ap
propriates names for your installation). This error may occur while creating a d
atabase. Permissions should be: 755 (drwxr-xr-x)
Also, the alert.log must not be greater than 2 Gigabytes in size. When you start
up "nomount" an Oracle pseudo process will try to write the "alert.log" file in
"udump". When Oracle cannot do this (either because of permissions or because o
f the "alert.log" being greater than 2 Gigabytes in size), it will issue the ORA
-03113 error.

5. "osdba" group checks:

a. Make sure the operating system user issuing the CONNECT INTERNAL belongs to the
   "osdba" group as defined in the "$ORAC
LE_HOME/rdbms/lib/config.s" or "$ORACLE_HOME/rdbms/lib/config.c". Typically this
is set to "dba". To verify the operating system groups the user belongs to, do
the following: % id uid=1030(oracle) gid=1030(dba) The "gid" here is "dba" so th
e "config.s" or "config.c" may contain an entry such as: /* 0x0008 15 */ .ascii
"dba\0"
If these do not match, you either need to add the operating system user to the g
roup as it is seen in the "config" file, or modify the "config" file and relink
the "oracle" binary. Refer to entry [NOTE:50507.1] section 3 for more details. b
. Be sure you are not logged in as the "root" user and that the environment vari
ables "USER", "USERNAME", and "LOGNAME" are not set to "root". The "root" user i
s a special case and cannot connect to Oracle as the "internal" user unless the
effective group is changed to the "osdba" group, which is typically "dba". To do
this, either modify the "/etc/password" file (not recommended) or use the "newg
rp" command:
# newgrp dba "newgrp" always opens a new shell, so you cannot issue "newgrp" fro
m within a shell script. Keep this in mind if you plan on executing scripts as t
he "root" user. c. Verify that the "osdba" group is only listed once in the "/et
c/group" file: % grep dba /etc/group dba::1010: dba::1100: If more than one line
starting with the "osdba" group is returned, you need to remove the ones that a
re not correct. It is not possible to have more than one group use a group name.
d. Check that the oracle user uid and gid are matching with /etc/passwd and /et
c/group : $ id uid=500(oracle) gid=235(dba) $ grep oracle /etc/passwd oracle:x:5
00:235:oracle:/home/oracle:/bin/bash ^^^ $ grep dba /etc/group dba:x:253:oracle
^^^ The mismatch also causes an ORA-1031 error. 6. Verify that the file system i
s not mounted nosuid (no set-uid):

% mount
/u07 on /dev/md/dsk/d7 nosuid/read/write

If the filesystem is mounted "nosuid", as seen in this example, you will need to unmou
nt the filesystem and mount it without the "nosuid" option. Consult your operati
ng system documentation or your operating system vendor for instruction on modif
ying mount options. 7. Please read the following warning before you attempt to u
se the information in this step: ***********************************************
******************* * * * WARNING: If you remove segments that belong to a runni
ng * * instance you will crash the instance, and this may * * cause database cor
ruption. * * * * Please call Oracle Support Services for assistance * * if you h
ave any doubts about removing shared memory * * segments. * * * ****************
************************************************** If an instance crashed or was
killed off using "kill" there may be shared
memory segments hanging around that belong to the down instance. If there are no
other instances running on the machine you can issue:

% ipcs -b
T    ID      KEY          MODE          OWNER    GROUP    SEGSZ
Shared Memory:
m    0       0x50000ffe   --rw-r--r--   root     root     68
m    1601    0x0eedcdb8   --rw-r-----   oracle   dba      4530176
In this case the "ID" of "1601" is owned by "oracle" and if there are no other i
nstances running in most cases this can safely be removed: % ipcrm -m 1601 If yo
ur SGA is split into multiple segments you will have to remove all segments asso
ciated with the instance. If there are other instances running, and you are not
sure which memory segments belong to the failed instance, you can do the followi
ng: a. Shut down all the instances on the machine and remove whatever shared mem
ory still exists that is owned by the software owner. b. Reboot the machine. c.
If your Oracle software is release 7.3.3 or newer, you can connect into each ins
tance that is up and identify the shared memory owned by that instance: % svrmgr
l
SVRMGR> connect internal
SVRMGR> oradebug ipc

In Oracle8:
-----------
Area #0 `Fixed Size', containing Subareas 0-0
  Total size 000000000000b8c0, Minimum Subarea size 00000000
  Subarea  Shmid  Size              Stable Addr
  0        7205   000000000000c000  80000000

In Oracle7:
-----------
------------ Shared memory -------------
Seg Id   Address    Size
2016     80000000   4308992
Total: # of segments = 1, size = 4308992

Note the "Shmi
d" for Oracle8 and "Seg Id" for Oracle7 for each running instance. By process of
elimination find the segments that do not belong to an instance and remove them
. 8. If you are prompted for a password and then receive error ORA-09925 "unable
to create audit trail file" or error ORA-09817 "write to audit file failed", al
ong with "SVR4 Error: 28: No space left on device", do the following: Check your
"pfile". It is typically in the "$ORACLE_HOME/dbs" directory
and will be named "init<your_sid>.ora, where "<your_sid>" is the value of "ORACL
E_SID" in your environment. If the "init<your_sid>.ora" file has the "ifile" par
ameter set, you will also have to check the included file as well. You are looki
ng for the parameter "audit_file_dest". If "audit_file_dest" is set, change to t
hat directory; otherwise change to the "$ORACLE_HOME/rdbms/audit" directory, as
this is the default location for audit files. If the directory does not exist, c
reate it. Ensure that you have enough space to create the audit file. The audit
file is generally 600 bytes in size. If it does exist, verify you can write to t
he directory:

% touch afile

If you cannot create the file called "afile", you need to change the permissions
on your audit directory:

% chmod 751

9. If connect int
ernal prompts you for a password and then you receive an ORA-12705 "invalid or u
nknown NLS parameter value specified" error, you need to verify the settings for
"ORA_NLS", "ORA_NLS32", "ORA_NLS33" or "NLS_LANG". You will need to consult you
r Installation and Configuration Guide for the proper settings for these environ
ment variables.
10. If you have installed Oracle software and are trying to connect with Server
Manager to create or start the database, and receive a TNS-12571 "packet writer
failure" error, please refer to Note:1064635.6 11. If in SVRMGRL (Server Manager
line mode), you are running the "startup.sql" script and receive the following
error: ld:so.1: oracle_home/bin/svrmgrl fatal relocation error symbol not found
kgffiop RDBMS v7.3.2 is installed. RDBMS v8.0.4 is a separate "oracle_home", and
you are attempting to have it coexist. This is due to the wrong version of the
client shared library "libclntsh.so.1" being used at runtime. Verify environment
variable settings. You need to ensure that "ORACLE_HOME" and "LD_LIBRARY_PATH"
are set correctly. For C-shell, type: % setenv LD_LIBRARY_PATH $ORACLE_HOME/lib
% setenv ORACLE_HOME /u01/app/oracle/product/8.0.4 For Bourne or Korn shell, typ
e: $ $ $ $ LD_LIBRARY_PATH=$ORACLE_HOME/lib export LD_LIBRARY_PATH ORACLE_HOME=/
u01/app/oracle/product/8.0.4 export ORACLE_HOME
12. Ensure that the disk the instance resides on has not reached 100% capacity.
% df -k If it has reached 100% capacity, this may be the cause of 'connect inter
nal' prompting for a password. Additional disk space will need to be made availa
ble before 'connect internal' will work. For additional information refer to Not
e:97849.1 13. Delete process.dat and regid.dat files in $ORACLE_HOME/otrace/admi
n directory. Oracle Trace is enabled by default on 7.3.2 and 7.3.3 (depends on p
latform) This can caused high disk space usage by these files and cause a number
of apparently mysterious side effects. See Note:45482.1 for more details. 14. W
hen you get ora-1031 "Insufficient privileges" on connect internal after you sup
ply a valid password and you have multiple instances running from the same ORACL
E_HOME, be sure that if an instance has REMOTE_LOGIN_PASSWORDFILE set to exclusi
ve that the file $ORACLE_HOME/dbs/orapw<sid> does exist, otherwise it defaults t
o the use of the file orapw that consequently causes access problems for any oth
er database that has the parameter set to shared. Set the parameter REMOTE_LOGIN
_PASSWORDFILE to shared for all instances that share the common password file an
d create an exclusive orapw<sid> password files for any instances that have this
set to exclusive.
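Two quick SQL-side checks for the point above (a sketch; it assumes the instance is at
least started, and that password files follow the $ORACLE_HOME/dbs/orapw<sid> convention
mentioned in this item):

   -- how is the instance configured to use a password file?
   SQL> show parameter remote_login_passwordfile
   -- which users are actually recorded in the password file being used?
   SQL> SELECT * FROM v$pwfile_users;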
15. Check permissions on the /etc/passwd file (Unix only).
If Oracle cannot open the password file, connect internal fails with ORA-1031,
since Oracle is not able to verify if the user trying to connect is indeed in
the dba group.

Example:
--------
# chmod 711 /etc/passwd
# ls -ltr passwd
-rwx--x--x   1 root     sys          901 Sep 21 14:26 passwd

$ sqlplus '/ as sysdba'
SQL*Plus: Release 9.2.0.1.0 - Production on Sat Sep 21 16:21:18 2002
Copyright (c) 1982, 2002, Oracle Corporation. All rights reserved.
ERROR:
ORA-01031: insufficient privileges

Trussing sqlplus will also show the problem:

25338:  munmap(0xFF210000, 8192)                 = 0
25338:  lwp_mutex_wakeup(0xFF3E0778)             = 0
25338:  lwp_mutex_lock(0xFF3E0778)               = 0
25338:  time()                                   = 1032582594
25338:  open("/etc/passwd", O_RDONLY)            Err#13 EACCES
25338:  getrlimit(RLIMIT_NOFILE, 0xFFBE8B28)     = 0
c) Operating System Specific checks: -----------------------------------1. On Op
enVMS, check that the privileges have been granted at the Operating System level
: $ SET DEFAULT SYS$SYSTEM: $ RUN AUTHORIZE If the list returned by AUTHORIZE do
es not contain ORA_<SID>_DBA, or ORA_DBA, then you do not have the correct OS pr
ivileges to issue a connect internal. If ORA_<SID>_DBA was added AFTER ORA_DBA,
then ORA_DBA needs to be removed and granted again to be updated. Please refer t
o Note:1010852.6 for more details. 2. On Windows NT, check if DBA_AUTHORIZATION
is set to BYPASS in the registry. 3. On Windows NT, if you are able to connect i
nternally but then startup fails for some reason, successive connect internal at
tempts might prompt for a password. You may also receive errors such as: ORA-127
05: ORA-01012: LCC-00161: ORA-01031: invalid or unknown NLS parameter value spec
ified not logged on Oracle error (possible syntax error) insufficient privileges
Refer to entry Note:1027964.6 for suggestions on how to resolve this problem 4.
If you are using Multi-Threaded Server (MTS), make sure you are using a dedicate
d server connection. A dedicated server connection is required to start up or sh
utdown the database. Unless the database alias in the "TNSNAMES.ORA" file includ
es a parameter to make a dedicated server connection, it will make a shared conn
ection to a dispatcher. See Note:1058680.6 for more details. 5. On Solaris, if t
he file "/etc/.name_service_door" has incorrect permissions, Oracle cannot read
the file. You will receive a message that "The Oracle user cannot access "/etc/.
name_service_door" (permission denied). This file is a flavor of IPC specific to
Solaris which Oracle software is using This can also cause connect internal pro
blems. See entry Note:1066589.6 6. You are on Digital Unix, running SVRMGRL (Ser
ver Manager line mode), and you receive an ORA-12547 "TNS:lost contact" error an
d a password prompt. This problem occurs when using Parallel Server and the True
Cluster software together. If Parallel Server is not linked in, svrmgrl works a
s expected.
Oracle V8.0.5 requires an Operating System patch which previous versions of Orac
le did not require. The above patch allows svrmgrl to communicate with the TCR s
oftware. You can determine if the patch is applied by running: % nm /usr/ccs/lib
/libssn.a | grep adjust If this returns nothing, then you need to: 1. Obtain the
patch for TCR 1.5 from Digital. This patch is for the MC SCN and adds the symbo
l "adjustSequenceNumber" to the library /usr/ccs/lib/libssn.a. 2. Apply the patc
h. 3. Relink Oracle

Another possibility is that you need to raise the value of the kernel parameter
per-proc-stack-size; increasing it from its default value of 2097152 to 83886080
has resolved this problem.

7. You are on version 6.2 of the Silicon
Graphics UNIX (IRIX) operating system and you have recently installed RDBMS rel
ease 8.0.3. If you are logged on as "oracle/dba" and an attempt to log in to Ser
ver Manager using "connect/internal" prompts you for a password, you should refe
r to entry Note:1040607.6

8. On AIX 4.3.3, after applying ML5 or higher, you can no longer connect as internal,
or on 9.X '/ as sysdba' does not work either. This is a known AIX bug and it occurs
on all RS6000 ports including SP2. There are two workarounds and one solution. They
are as follows:

1) Use the mkpasswd command
to remove the index. This is valid until a new user is added to "/etc/passwd" o
r modified: # mkpasswd -v -d 2) Touch the "/etc/passwd" file. If the "/etc/passw
d" file is newer than the index it will not use the password file index: # touch
/etc/passwd 3) Obtain APAR IY22458 from IBM. Any questions about this APAR shou
ld be directed to IBM. d) Additional Information: -------------------------1. In
the "Oracle7 Administrator's Reference for UNIX", there is a note that states:
If REMOTE_OS_AUTHENT is set to true, users who are members of the dba group on t
he remote machine are able to connect as INTERNAL without a password. However, i
f you are connecting remotely, that is connecting via anything
except the bequeath adapter, you will be prompted for a password regardless of t
he value of "REMOTE_OS_AUTHENT". Refer to bug 644988 References: ~~~~~~~~~~~ [NO
TE:1048876.6] [NOTE:1064635.6] [NOTE:1010852.6] OR SERVER MANAGER [NOTE:1027964.
6] [NOTE:1058680.6] DATABASE [NOTE:1066589.6] [NOTE:1040607.6] [NOTE:97849.1] [N
OTE:50507.1] [NOTE:18089.1] [BUG:644988] WITHOUT PASSWORD
UNIX: Connect internal prompts for password after install ORA-12571: PACKET WRIT
ER FAILURE WHEN STARTING SVRMGR OPENVMS: ORA-01031: WHEN ISSUING "CONNECT INTERN
AL" IN SQL*DBA LCC-00161 AND ORA-01031 ON STARTUP ORA-00106 or ORA-01031 ERROR w
hen trying to STARTUP or SHUTDOWN UNIX: Connect Internal asks for password when
TWO_TASK is set SGI: ORA-01012 ORA-01031: WHEN USING SRVMGR AFTER 8.0.3 INSTALL
Connect internal Requires Password SYSDBA and SYSOPER Privileges in Oracle8 and
Oracle7 UNIX: Connect INTERNAL / AS SYSDBA Privilege on Oracle 7/8 REMOTE_OS_AUT
HENT=TRUE: NOT ALLOWING USERS TO CONNECT INTERNAL
Search Words: ~~~~~~~~~~~~~ svrmgrm sqldba sqlplus sqlnet remote_login_passwordf
ile
Note 3: ------ORA-01031: insufficient privileges Cause: An attempt was made to c
hange the current username or password without the appropriate privilege. This e
rror also occurs if attempting to install a database without the necessary opera
ting system privileges. Action: Ask the database administrator to perform the op
eration or grant the required privileges. Note 4: ------ORA-01031: insufficient
privileges In most cases, the user receiving this error lacks a privilege to cre
ate an object (such as a table, view, procedure and the like). Grant the require
d privilege like so: grant create table to user_lacking_privilege; Startup If so
meone receives this error while trying to startup the instance, the logged on us
er must belong to the ora_dba group on Windows or dba group on Unix. Note 5: ---
----
I am not sure it is the same, but I got this error today on Windows when
SQLNET.AUTHENTICATION_SERVICES in sqlnet.ora was set to NONE. Changing it to NTS
solved the problem.
19.51 ORA-00600: internal error code, arguments: [17059]: ======================
=================================== Note 1: ------Doc ID </help/usaeng/Search/se
arch.html>: Note:138554.1 Content Type: TEXT/PLAIN Subject: ORA-600 [17059] Crea
tion Date: 02-APR-2001 Type: REFERENCE Last Revision Date: 09-DEC-2004 Status: P
UBLISHED Note: For additional ORA-600 related information please read [NOTE:1465
80.1] <ml2_documents.showDocument?p_id=146580.1&p_database_id=NOT> PURPOSE: This
article discusses the internal error "ORA-600 [17059]", what it means and possi
ble actions. The information here is only applicable to the versions listed and
is provided only for guidance. ERROR: ORA-600 [17059] [a] VERSIONS: versions 7.1
to 10.1 DESCRIPTION: While building a table to hold the list of child cursor de
pendencies relating to a given parent cursor, we exceed the maximum possible siz
e of the table. ARGUMENTS: Arg [a] Object containing the table FUNCTIONALITY: Ke
rnel Generic Library cache manager IMPACT: PROCESS FAILURE NON CORRUPTIVE - No u
nderlying data corruption. SUGGESTIONS: One symptom of this error is that the se
ssion will appear to hang for a period of time prior to this error being reporte
d. If the Known Issues section below does not help in terms of identifying a sol
ution, please submit the trace files and alert.log to Oracle Support Services fo
r further analysis.

Issuing this SQL as SYS (SYSDBA) may help show any problem objects in the dictionary:

select do.obj#, po.obj#, p_timestamp, po.stime,
       decode(sign(po.stime-p_timestamp), 0, 'SAME', '*DIFFER*') X
from   sys.obj$ do, sys.dependency$ d, sys.obj$ po
where  P_OBJ#=po.obj#(+)
and    D_OBJ#=do.obj#
and    do.status=1           /* dependent is valid */
and    po.status=1           /* parent is valid */
and    po.stime!=p_timestamp /* parent timestamp not match */
order  by 2,1;

Normally the above select would return no rows. If any rows are returned the
listed dependent objects may n
eed recompiling. Known Issues: Bug# 3555003 See [NOTE:3555003.8] <ml2_documents.
showDocument?p_id=3555003.8&p_database_id=NOT> View compilation hangs / OERI:170
59 after DBMS_APPLY_ADM.SET_DML_HANDLER Fixed: 9.2.0.6 Bug# 2707304 See [NOTE:27
07304.8] <ml2_documents.showDocument?p_id=2707304.8&p_database_id=NOT> OERI:1705
9 / OERI:kqlupd2 / PLS-907 after adding partitions to Partitioned IOT Fixed: 9.2
.0.3, 10.1.0.2 Bug# 2636685 See [NOTE:2636685.8] <ml2_documents.showDocument?p_i
d=2636685.8&p_database_id=NOT> Hang / OERI:[17059] after adding a list value to
a partition Fixed: 9.2.0.3, 10.1.0.2 Bug# 2626347 See [NOTE:2626347.8] <ml2_docu
ments.showDocument?p_id=2626347.8&p_database_id=NOT> OERI:17059 accessing view a
fter ADD / SPLIT PARTITION Fixed: 9.2.0.3, 10.1.0.2 Bug# 2306331 See [NOTE:23063
31.8] <ml2_documents.showDocument?p_id=2306331.8&p_database_id=NOT> Hang / OERI[
17059] on view after SET_KEY or SET_DML_INVOKATION on base table Fixed: 9.2.0.2
Bug# 1115424 See [NOTE:1115424.8] <ml2_documents.showDocument?p_id=1115424.8&p_d
atabase_id=NOT> Cursor authorization and dependency lists too long - can impact
shared pool / OERI:17059 Fixed: 8.0.6.2, 8.1.6.2, 8.1.7.0 Bug# 631335 See [NOTE:
631335.8] <ml2_documents.showDocument?p_id=631335.8&p_database_id=NOT> OERI:1705
9 from extensive re-user of a cursor
Fixed: 8.0.4.2, 8.0.5.0, 8.1.5.0 Bug# 558160 See [NOTE:558160.8] <ml2_documents.
showDocument?p_id=558160.8&p_database_id=NOT> OERI:17059 from granting privilege
s multiple times Fixed: 8.0.3.2, 8.0.4.0, 8.1.5.0 Note 2: ------Doc ID </help/us
aeng/Search/search.html>: Note:234457.1 Content Type: TEXT/X-HTML Subject: ORA-6
00 [17059] Error When Compiling A Package Creation Date: 19-FEB-2003 Type: PROBLE
M Last Revision Date: 24-AUG-2004 Status: PUBLISHED
fact: fact: Oracle Server - Enterprise Edition fact: Partitioned Tables / Indexe
s symptom: ORA-600 [17059] Error When Compiling A Package symptom: When Compilin
g a Package symptom: The Package Accesses a Partitioned Table symptom: ORA-00600
: internal error code, arguments: [%s], [%s], [%s], [%s], [%s], [%s], [%s] sympt
om: internal error code, arguments: [17059], [352251864] symptom: Calling Locati
on kglgob symptom: Calling Location kgldpo symptom: Calling Location kgldon symp
tom: Calling Location pkldon symptom: Calling Location pkloud symptom: Calling L
ocation - phnnrl_name_resolve_by_loading cause: This is due to <bug:2073948 </me
talink/plsql/showdoc?db=bug&id=2073948>> fixed in 10i, and occurs when accessing
a partitioned table via a dblink within the package, where DDL (such as adding/
dropping partitions) is performed on the table.
fix:
This is fixed in 9.0.1.4, 9.2.0.2 & 10i. One-off patches are available for 8.1.7
.4. A workaround is to flush the shared pool. Note 3: ------Doc ID </help/usaeng
/Search/search.html>: Note:239796.1 Content Type: TEXT/PLAIN Subject: ORA-600 [1
7059] when querying dba_tablespaces, dba_indexes, dba_ind_partitions etc Creatio
n Date: 28-MAY-2003 Type: PROBLEM Last Revision Date: 13-AUG-2004 Status: PUBLIS
HED Problem: ~~~~~~~~ The information in this article applies to: Internal Error
ORA-600 [17059] when querying Data dictionary views like dba_tablespaces, dba_i
ndexes, dba_ind_partitions etc Symptom(s) ~~~~~~~~~~ While querying Data diction
ary views like dba_tablespaces, dba_indexes, dba_ind_partitions etc, getting int
ernal error ORA-600 [17059] Change(s) ~~~~~~~~~~ You probably altered some objec
ts or executed some cat*.sql scripts. Cause ~~~~~~~ Some SYS objects are INVALID
. Fix ~~~~ Connect SYS run $ORACLE_HOME/rdbms/admin/utlrp.sql and make sure all
the objects are valid.
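A small sketch of that check and fix, run as SYS (utlrp.sql is the standard
recompile-invalid-objects script referenced above):

   -- list invalid dictionary objects
   SELECT owner, object_name, object_type
   FROM   dba_objects
   WHERE  status = 'INVALID'
   AND    owner = 'SYS';

   -- recompile all invalid objects
   @?/rdbms/admin/utlrp.sql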
19.52: ORA-00600: internal error code, arguments: [17003]
=========================================================
Note 1:
-------
The inf
ormation in this article applies to: Oracle Forms - Version: 9.0.2.7 to 9.0.2.12
Oracle Server - Enterprise Edition - Version: 9.2 This problem can occur on any
platform. Errors ORA 600 "internal error code, arguments: [%s],[%s],[%s], [%s],
[%s], Symptoms
The following error occurs when compiling a form or library ( fmb / pll ) agains
t RDBMS 9.2 PL/SQL ERROR 0 at line 0, column 0 ORA-00600: internal error code, a
rguments: [17003], [0x11360BC], [275], [1], [], [], [], [] The error reproduces
everytime. Triggers / local program units in the form / library contain calls to
stored database procedures and / or functions. The error does not occur when co
mpiling against RDBMS 9.0.1 or lower. Cause This is a known bug / issue. The com
pilation error occurs when the form contains a call to a stored database functio
n / procedure which has two DATE IN variables receiving DEFAULT values such as S
YSDATE. Reference: <Bug:2713384> Abstract: INTERNAL ERROR [1401] WHEN COMPILE FU
NCTION WITH 2 DEFAULT DATE VARIABLES ON 9.2 Fix The bug is fixed in Oracle Forms
10g (9.0.4). There is no backport fix available for Forms 9i (9.0.2) To work-ar
ound, modify the offending calls to the stored database procedure/ functions so
that DEFAULT parameter values are not passed directly . For example, pass the DE
FAULT value SYSDATE indirectly to the stored database procedure/ function by fir
st assigning it to a local variable in the form. Note 2: ------Doc ID </help/usa
eng/Search/search.html>: Note:138537.1 Content Type: TEXT/PLAIN Subject: ORA-600
[17003] Creation Date: 02-APR-2001 Type: REFERENCE Last Revision Date: 15-OCT-2
004 Status: PUBLISHED Note: For additional ORA-600 related information please re
ad [NOTE:146580.1] <ml2_documents.showDocument?p_id=146580.1&p_database_id=NOT>
PURPOSE: This article discusses the internal error "ORA-600 [17003]", what it me
ans and possible actions. The information here is only applicable to the version
s listed and is provided only for guidance. ERROR: ORA-600 [17003] [a] [b] [c] V
ERSIONS: versions 7.0 to 10.1 DESCRIPTION:
The error indicates that we have tried to lock a library cache object by using t
he dependency number to identify the target object and have found that no such d
ependency exists. Under this situation we will raise an ORA-600 [17003] if the d
ependency number that we are using exceeds the number of entries in the dependen
cy table or the dependency entry is not marked as invalidated. ARGUMENTS: Arg [a
] Library Cache Object Handle Arg [b] Dependency number Arg [c] 1 or 2 (indicate
s where the error was raised internally) FUNCTIONALITY: Kernel Generic Library c
ache manager IMPACT: PROCESS MEMORY FAILURE NO UNDERLYING DATA CORRUPTION. SUGGE
STIONS: A common condition where this error is seen is problematic upgrades. If
a patchset has recently been applied, please confirm that there were no errors a
ssociated with this upgrade. Specifically, there are some XDB related bugs which
can lead to this error being reported. Known Issues: Bug# 2611590 See [NOTE:261
1590.8] <ml2_documents.showDocument?p_id=2611590.8&p_database_id=NOT> OERI:[1700
3] running XDBRELOD.SQL Fixed: 9.2.0.3, 10.1.0.2 Bug# 3073414 XDB may not work a
fter applying a 9.2 patch set Fixed: 9.2.0.5
19.53: ORA-00600: internal error code, arguments: [qmxiUnpPacked2], [121], [], [
], [], [], [], [] ==============================================================
==================== =============== Note 1. ------Doc ID: Note:222876.1 Content
Type: TEXT/PLAIN Subject: ORA-600 [qmxiUnpPacked2] Creation Date: 09-DEC-2002 T
ype: REFERENCE Last Revision Date: 15-OCT-2004 Status: PUBLISHED Note: For addit
ional ORA-600 related information please read [NOTE:146580.1]
PURPOSE: This article discusses the internal error "ORA-600 [qmxiUnpPacked2]", w
hat it means and possible actions. The information here is only applicable to th
e versions listed and is provided only for guidance. ERROR: ORA-600 [qmxiUnpPack
ed2] [a] VERSIONS: versions 9.2 to 10.1 DESCRIPTION: When unpickling an XOB or a
n array of XOBs an unexpected datatype was found. Generally due to XMLType data
that has not been successfully upgraded from a previous version. ARGUMENTS: Arg
[a] Type of XOB FUNCTIONALITY: Qernel xMl support Xob to/from Image IMPACT: PROC
ESS FAILURE NON CORRUPTIVE - No underlying data corruption. SUGGESTIONS: Please
review the following article on Metalink : [NOTE:235423.1] How to resolve ORA-60
0 [qmxiUnpPacked2] during upgrade If you still encounter the error having tried
the suggestions in the above article, or the article isn't applicible to your en
vironment then ensure that the upgrade to current version was completed succesfu
lly without error. If the Known Issues section below does not help in terms of i
dentifying a solution, please submit the trace files and alert.log to Oracle Sup
port Services for further analysis. Known Issues: Bug# 2607128 See [NOTE:2607128
.8] OERI:[qmxiUnpPacked2] if CATPATCH.SQL/XDBPATCH.SQL fails Fixed: 9.2.0.3 Bug#
2734234 CONSOLIDATION BUG FOR ORA-600 [QMXIUNPPACKED2] DURING CATPATCH.SQL 9.2.
0.2
Note 2.
-------
Doc ID:             Note:235423.1
Content Type:       TEXT/X-HTML
Subject:            How to resolve ORA-600 [qmxiUnpPacked2] during upgrade
Creation Date:      14-APR-2003
Type:               HOWTO
Last Revision Date: 18-MAR-2005
Status:             PUBLISHED
The information in this article applies to: Oracle 9.2.0.2 Multiple Platforms, 6
4-bit Symptom(s) ~~~~~~~~~~ ORA-600 [qmxiUnpPacked2] [] Cause ~~~~~ If the error
is seen after applying 9.2.0.2 on a 9.2.0.1 database or if using DBCA in 9.2.0.
2 to create a new database (which is using the 9.2.0.1 seed database) then it is
very likely that either shared_pool_size or java_pool_size was too small when c
atpatch.sql was executed. Error is generally seen as ORA-600: internal error cod
e, arguments: [qmxiUnpPacked2], [121]

There are 3 options to proceed from here:

Fix
~~~
Option 1
========
If your shared_pool_size and java_pool_size are less than 150Mb then do the following:
1/ Set your shared_pool_size and java_pool_size to 150Mb each. In some cases you may
   need to use larger pool sizes.
2/ Get the xdbpatch.sql script from Note 237305.1
3/ Copy xdbpatch.sql to $ORACLE_HOME/rdbms/admin/xdbpatch.sql, having taken a backup
   of the original file first
4/ Restart the instance with: startup migrate;
5/ spool catpatch
@?/rdbms/admin/catpatch.sql Option 2 ======== If you already have shared_pool_si
ze and java_pool_size set at greater than 150Mb then the problem may be caused b
y the shared memory allocated during the JVM upgrade is not released properly. I
n which case do the following :1/ Set your shared_pool_size and java_pool_size t
o 150Mb each. In some case you may need to use larger pool sizes. 2/ Get the xdb
patch.sql script from Note 237305.1 3/ Edit the xdbpatch.sql script and add the
following as the first line in the script:alter system flush shared_pool; 3/ Cop
y xdbpatch.sql to $ORACLE_HOME/rdbms/admin/xdbpatch.sql having taken a backup of
the original file first 3/ Restart the instance with: startup migrate; 4/ spool
catpatch @?/rdbms/admin/catpatch.sql Option 3 ======== If XDB is NOT in use and
there are NO registered XML Schemas an alternative is to drop, and maybe re-ins
tall XDB :1/ To drop the XDB subsystem connect as sys and run @?/rdbms/admin/cat
noqm.sql 2/ You can then run catpatch.sql to perform the upgrade startup migrate
; @?/rdbms/admin/catpatch.sql 3/ Once complete you may chose to re-install the X
DB subsystem, if so connect as sys and run catqm.sql @?/rdbms/admin/catqm.sql <X
DB_PASSWD> <TABLESPACE> <TEMP_TABLESPACE> If the error is seen during normal dat
abase operation, ensure that upgrade to current version was completed succesfull
y without error. Once this is confirmed attempt to reproduce the error, if succe
ssful forward ALERT.LOG, trace files and full error stack to Oracle Support Serv
ices for further analysis.
References ~~~~~~~~~~~ Bug 2734234 CONSOLIDATION BUG FOR ORA-600 [QMXIUNPPACKED2
] DURING CATPATCH.SQL 9.2.0.2 Note 237305.1 Modified xdbpatch.sql
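As a rough SQL*Plus sketch of Option 1 above (an illustration only: it assumes an spfile is in use, that the modified xdbpatch.sql from Note 237305.1 has already been copied into $ORACLE_HOME/rdbms/admin, and that 150M is actually large enough for the system in question):

connect / as sysdba
alter system set shared_pool_size=150M scope=spfile;
alter system set java_pool_size=150M scope=spfile;
shutdown immediate
startup migrate
spool catpatch
@?/rdbms/admin/catpatch.sql
spool off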
19.54 ORA-00600: internal error code, arguments: [kcbget_37], [1], [], [], [], [
], [], [] ======================================================================
============ ======= ORA-00600: internal error code, arguments: [kcbso1_1], [],
[], [], [], [], [], [] ORA-00600: internal error code, arguments: [kcbget_37], [
1], [], [], [], [], [], [] Doc ID: Note:2652771.8 Subject: Support Description o
f Bug 2652771 Type: PATCH Status: PUBLISHED Content Type: TEXT/X-HTML Creation D
ate: 13-AUG-2003 Last Revision Date: 14-AUG-2003
Bug 2652771 AIX: OERI[1100] / OERI[KCBGET_37] SGA corruption T
his note gives a brief overview of bug 2652771. Affects: Product (Component) Ora
cle Server (RDBMS) Range of versions believed to be affected Versions < 10G Vers
ions confirmed as being affected 8.1.7.4 9.2.0.2 Platforms affected Aix 64bit 5L
Aix 64bit 433 Fixed: This issue is fixed in 9.2.0.3 (Server Patch Set) Symptoms
: Memory Corruption Internal Error may occur (ORA-600) ORA-600 [1100] / ORA-600
[kcbget_37] Known Issues: Bug# 2652771 P See [NOTE:2652771.8] AIX: OERI[1100] /
OERI[KCBGET_37] SGA corruption Fixed: 9.2.0.3
19.55 ORA-00600: internal error code, arguments: [kcbzwb_4], [], [], [], [], [],
[], [] ========================================================================
========== ===== Doc ID: Note:4036717.8 Subject: Bug 4036717 - Truncate table in
exception handler can causes OERI:kcbzwb_4 Type: PATCH Status: PUBLISHED Conten
t Type: TEXT/X-HTML Creation Date: 25-FEB-2005 Last Revision Date: 09-MAR-2005
Bug 4036717 Truncate table in ex
ception handler can causes OERI:kcbzwb_4 This note gives a brief overview of bug
4036717. Affects: Product (Component) PL/SQL (Plsql) Range of versions believed
to be affected Versions < 10.2 Versions confirmed as being affected 10.1.0.3 Pl
atforms affected Generic (all / most platforms affected) Fixed: This issue is fi
xed in 9.2.0.7 (Server Patch Set) 10.1.0.4 (Server Patch Set) 10g Release 2 (fut
ure version) Symptoms: Related To: Internal Error May Occur (ORA-600) ORA-600 [k
cbzwb_4] PL/SQL Truncate Description Truncate table in exception handler can cau
se OERI:kcbzwb_4 with the fix for bug 3768052 installed. Workaround: Turn off or
deinstall the fix for bug 3768052. Note that the procedure containing the affec
ted transactional commands will have to be recompiled after backing out the bug
fix.
19.56 ORA-00600: internal error code, arguments: [kcbgtcr_6], [], [], [], [], []
, [], [] =======================================================================
=========== ====== Doc ID: Note:248874.1 Subject: ORA-600 [kcbgtcr_6] Type: REFE
RENCE Status: PUBLISHED Content Type: TEXT/X-HTML Creation Date: 18-SEP-2003 Las
t Revision Date: 25-MAR-2004
Note: For additional ORA-600 relate
d information please read Note 146580.1 PURPOSE: This article discusses the inte
rnal error "ORA-600 [kcbgtcr_6]", what it means and possible actions. The inform
ation here is only applicable to the versions listed and is provided only for gu
idance. ERROR: ORA-600 [kcbgtcr_6] [a] VERSIONS: versions 8.0 to 10.1 DESCRIPTIO
N: Two buffers have been found in the buffer cache that are both current and for
the same DBA (Data Block Address). We should not have two 'current' buffers for
the same DBA in the cache, if this is the case then this error is raised. ARGUM
ENTS: Arg [a] Buffer class Note that for Oracle release 9.2 and earlier there ar
e no additional arguments reported with this error. FUNCTIONALITY: Kernel Cache
Buffer management IMPACT: PROCESS FAILURE POSSIBLE INSTANCE FAILURE NON CORRUPTI
VE - No underlying data corruption. SUGGESTIONS: Retry the operation. Does the e
rror still occur after an instance bounce? If using 64bit AIX then ensure that m
inimum version in use is 9.2.0.3 or patch for Bug 2652771 has been applied. If t
he Known Issues section below does not help in terms of identifying a solution,
please submit the trace files and alert.log to Oracle Support Services for furth
er analysis. Known Issues: Bug 2652771 Shared data structures corrupted around l
atch code on 64bit AIX ports. Fixed 9.2.0.3 backports available for older versio
ns (8.1.7) from Metalink.
<Internal_Only> ORA-600 [kcbgtcr_6] Versions: 8.0.5 - 10.1 Meaning: We have two
'CURRENT' buffers for the same DBA. Argument Description: None -----------------
---------------------------------------------------------Explanation: We have id
entified two 'CURRENT' buffers for the same DBA in the cache, this is incorrect,
and this error will be raised. ------------------------------------------------
--------------------------Diagnosis: Check the trace file, this will show the bu
ffers i.e :BH (0x70000003ffe9800) file#: 39 rdba: 0x09c131e6 (39/78310) class 1
ba: 0x70000003fcf0000 set: 6 dbwrid: 0 obj: 11450 objn: 11450 hash: [70000000efa
9b00,70000004d53a870] lru: [70000000efa9b68,700000006fb8d68] ckptq: [NULL] fileq
: [NULL] st: XCURRENT md: NULL rsop: 0x0 tch: 1 LRBA: [0x0.0.0] HSCN: [0xffff.ff
ffffff] HSUB: [255] RRBA: [0x0.0.0] BH (0x70000000efa9b00) file#: 39 rdba: 0x09c
131e6 (39/78310) class 1 ba: 0x70000000e4f6000 set: 6 dbwrid: 0 obj: 11450 objn:
11450 hash: [70000004d53a870,70000003ffe9800] lru: [700000012fbaf68,70000003ffe
9868] ckptq: [NULL] fileq: [NULL] st: XCURRENT md: NULL rsop: 0x0 tch: 2 LRBA: [
0x0.0.0] HSCN: [0xffff.ffffffff] HSUB: [255] RRBA: [0x0.0.0] Here it is clear th
at we have two current buffers for the dba. Most likely cause for this is 64bit
AIX Bug 2652771. If this isn't the case check the error reproduces consistently
after bouncing the instance? Via SQLplus? What level of concurrency to reproduce
? Is a testcase available? Check OS memory for errors. -------------------------
-------------------------------------------------Source: kcb.c
Known Bugs: Bug 2652771 Shared data structures corrupted around latch code on 64
bit AIX ports. - Fixed 9.2.0.3, backports available for older versions.
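Not part of the note, but a quick way to check for the condition it describes from SQL*Plus (assuming SELECT access to the V$ views): V$BH exposes one row per buffer, so under normal circumstances the following should return no rows, while two 'xcur' rows for the same file#/block# would match the two-current-buffers situation shown in the trace above.

select file#, block#, count(*)
from   v$bh
where  status = 'xcur'
group  by file#, block#
having count(*) > 1;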
19.57 ORA-00600: internal error code, arguments: [1100], [0x7000002FDF83F40], [0
x7000002FDF83F40], [], [], [], [], [] ==========================================
======================================== =================================== Doc
ID: Note:138123.1 Subject: ORA-600 [1100] Type: REFERENCE Status: PUBLISHED Con
tent Type: TEXT/X-HTML Creation Date: 28-MAR-2001 Last Revision Date: 08-FEB-200
5 Note: For additional ORA-600 related information please read Note 146580.1 PUR
POSE: This article discusses the internal error "ORA-600 [1100]", what it means
and possible actions. The information here is only applicable to the versions li
sted and is provided only for guidance. ERROR: ORA-600 [1100] [a] [b] [c] [d] [e
] VERSIONS: versions 6.0 to 9.2 DESCRIPTION: This error relates to the managemen
t of standard double-linked (forward and backward) lists. Generally, if the list
is damaged an attempt to repair the links is performed. Additional information
will accompany this internal error. A dump of the link and often a core dump wil
l coincide with this error. This is a problem with a linked list structure in me
mory. FUNCTIONALITY: GENERIC LINKED LISTS IMPACT: PROCESS FAILURE POSSIBLE INSTA
NCE FAILURE IF DETECTED BY PMON PROCESS No underlying data corruption.
SUGGESTIONS: Known Issues: Bug# 3724548 See Note 3724548.8 OERI[kglhdunp2_2] / O
ERI[1100] under high load Fixed: 9.2.0.6, 10.1.0.4, 10.2 Bug# 3691672 + See Note
3691672.8 OERI[17067]/ OERI[26599] / dump (kgllkdl) from JavaVM / OERI:1100 fro
m PMON Fixed: 10.1.0.4, 10.2 Bug# 2652771 P See Note 2652771.8 AIX: OERI[1100] /
OERI[KCBGET_37] SGA corruption Fixed: 9.2.0.3 Bug# 1951929 See Note 1951929.8 O
RA-7445 in KQRGCU/kqrpfr/kqrpre possible Fixed: 8.1.7.3, 9.0.1.2, 9.2.0.1 Bug# 9
59593 See Note 959593.8 CTRL-C During a truncate crashes the instance Fixed: 8.1
.6.3, 8.1.7.0

Note 2:
-------
Doc ID: Note:3724548.8 S
ubject: Bug 3724548 - OERI[kglhdunp2_2] / OERI[1100] under high load Type: PATCH
Status: PUBLISHED Content Type: TEXT/X-HTML Creation Date: 24-SEP-2004 Last Rev
ision Date: 13-JAN-2005
Bug 372
4548 OERI[kglhdunp2_2] / OERI[1100] under high load This note gives a brief over
view of bug 3724548.
Affects: Product (Component) Oracle Server (Rdbms) Range of versions believed to
be affected Versions < 10.2 Versions confirmed as being affected 9.2.0.4 9.2.0.
5 Platforms affected Generic (all / most platforms affected) Fixed: This issue i
s fixed in 9.2.0.6 (Server Patch Set) 10.1.0.4 (Server Patch Set) 10g Release 2
(future version) Symptoms: Related To: Memory Corruption Internal Error May Occu
r (ORA-600) ORA-600 [kglhdunp2_2] ORA-600 [1100] (None Specified) Description Wh
en an instance is under high load it is possible for sessions to get ORA-600[KGL
HDUNP2_2] and ORA-600 [1100] errors. This can also show as a corrupt linked list
in the SGA. The full bug text (if published) can be seen at <Bug:3724548> (This
link will not work for UNPUBLISHED bugs) You can search for any interim patches
for this bug here <Patch:3724548> (This link will Error if no interim patches e
xist) 19.58 Compilation problems DBI DBD: =================================== We
upgraded Oracle from 8.1.6 to 9.2.0.5 and I tried to rebuild the DBD::Oracle mo
dule but it threw errors like: . gcc: unrecognized option `-q64' ld: 0711-736 ER
ROR: Input file /lib/crt0_64.o: XCOFF64 object files are not allowed in 32-bit m
ode. collect2: ld returned 8 exit status make: 1254-004 The error code from the
last command is 1. Stop. After some digging I found out that this is because the
machine is AIX 5.2 running under 32-bit and it is looking at the oracle's lib d
irectory which has 64 bit libraries. So after running "perl Makefile.PL", I edit
ed the Makefile 1. changing the references to Oracle's ../lib to ../lib32, 2. ch
anging change crt0_64.o to crt0_r.o. 3. Remove the -q32 and/or -q64 options from
the list of libraries to link with.
Now when I ran "make" it went smoothly, so did make test and make install. I ran
my own simple perl testfile which connects to the Oracle and gets some info and
it works fine. Now I have an application which can be customised to call perl s
cripts and when I call this test script from that application it fails with: ins
tall_driver(Oracle) failed: Can't load '/usr/local/perl/lib/site_perl/5.8.5/a ix
/auto/DBD/Oracle/Oracle.so' for module DBD::Oracle: 0509-022 Cannot load mod ule
/usr/local/perl/lib/site_perl/5.8.5/aix/auto/DBD/Oracle/Oracle.so. 0509-150 Dep
endent module /u00/oracle/product/9.2.0/lib/libclntsh.a(sh r.o) could not be loa
ded. 0509-103 The module has an invalid magic number. 0509-022 Cannot load modul
e /u00/oracle/product/9.2.0/lib/libclntsh.a. 0509-150 Dependent module /u00/orac
le/product/9.2.0/lib/libclntsh.a co uld not be loaded. at /usr/local/perl/lib/5.
8.5/aix/DynaLoader.pm line 230. at (eval 3) line 3 Compilation failed in require
at (eval 3) line 3. Perhaps a required shared library or dll isn't installed wh
ere expected at /opt/dscmdevc/src/udps/test_oracle_dbd.pl line 45 whats happenin
g here is that the application sets its own LIBPATH to include oracle's lib(inst
ead of lib32) in the beginning and that makes perl look at the wrong place for t
he file - libclntsh.a .Unfortunately it will take too long for the application d
evelopers to change this in their application and I am looking for a quick solut
ion. The test script is something like: use Env; use strict; use lib qw( /opt/ha
rvest/common/perl/lib ) ; #use lib qw( $ORACLE_HOME/lib32 ) ; use DBI; my $conne
ct_string="dbi:Oracle:"; my $datasource="d1ach2"; $ENV{'LIBPATH'} = "${ORACLE_HO
ME}/lib32:$ENV{'LIBPATH'}" ; . . my $dbh = DBI->connect($connect_string, $dbuser
, $dbpwd) or die "Can't connect to $datasource: $DBI::errstr"; . . Adding 'use l
ib' or using'$ENV{LIBPATH}' to change the LIBPATH is not working because I need
to make this work in this perl script and the "use DBI" is run (or whatever the
term is) in the compile-phase before the LIBPATH is set in the run-phase. I have
a work around for it: write a wrapper ksh script which exports the LIBPATH and
then calls the perl script which works fine but I was wondering if there is a wa
y to set the libpath or do something else inside the current perl script so that
it knows where to look for the right
library files inspite of the wrong LIBPATH? Or did I miss something when I chang
ed the Makefile and did not install everything right? Is there anyway I check th
is? (the make install didnot throw any errors) Any help or thoughts on this woul
d be much appreciated. Thanks! Rachana. note 12: -------P550:/ # find . -name "l
ibclnt*" -print ./apps/oracle/product/9.2/lib/libclntst9.a ./apps/oracle/product
/9.2/lib/libclntsh.a ./apps/oracle/product/9.2/lib32/libclntst9.a ./apps/oracle/
product/9.2/lib32/libclntsh.a ./apps/oracle/oui/bin/aix/libclntsh.so.9.0 P550:/
#
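As a concrete sketch of the ksh wrapper workaround mentioned above (the perl binary location and the script path are only examples taken from the post; adjust to your environment), the whole point is to put lib32 on LIBPATH before perl loads DBD::Oracle:

#!/bin/ksh
# wrapper: force the 32-bit Oracle client libraries onto LIBPATH, then run the script
export ORACLE_HOME=/u00/oracle/product/9.2.0
export LIBPATH=$ORACLE_HOME/lib32:$LIBPATH
exec perl /opt/dscmdevc/src/udps/test_oracle_dbd.pl "$@"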
19.59 Listener problem: IBM/AIX RISC System/6000 Error: 13: Permission denied --
--------------------------------------------------------------------------When s
tarting listener start listener TNS-12546: TNS:permission denied TNS-12560: TNS:
protocol adapter error TNS-00516: Permission denied IBM/AIX RISC System/6000 Err
or: 13: Permission denied Note 1: 'TNS-12531: TNS:cannot allocate memory' may be
misleading, it seems to be a permission problem (see also IBM/AIX RISC System/6
000 Error: 13: Permission denied). A possible reason is: Oracle (more specific t
he listener) is unable to read /etc/hosts, because of permission problems. So ho
st resolution is not possible. .. .. The problem really was in permissions of /etc/hosts on node2. It was -rw-r----- (640). Now it is -rw-rw-r-- (664) and everything goes ok. Thank you!
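A minimal check along the lines of the fix above (a sketch only; 644 is just a commonly used permission set for /etc/hosts, and the chmod must be done as root):

ls -l /etc/hosts        # should be readable by the user that starts the listener, e.g. -rw-r--r--
chmod 644 /etc/hosts    # as root, if it is not
lsnrctl stop
lsnrctl start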
19.60 Listener problem: IBM/AIX RISC System/6000 Error: 79: Connection refused -
----------------------------------------------------------------------------d0pl
anon@zb121l01:/data/oracle/d0planon/admin/home/$ lsnrctl LSNRCTL for IBM/AIX RIS
C System/6000: Version 10.2.0.3.0 - Production on 12-OCT2007 08:29:14 Copyright
(c) 1991, 2006, Oracle. All rights reserved.
Welcome to LSNRCTL, type "help" for information. LSNRCTL> status Connecting to (
ADDRESS=(PROTOCOL=tcp)(HOST=)(PORT=1521)) TNS-12541: TNS:no listener TNS-12560:
TNS:protocol adapter error TNS-00511: No listener IBM/AIX RISC System/6000 Error
: 79: Connection refused Answer 1: Check if the oracle user can read /etc/hosts
Answer 2: Maybe there are multiple instances of the listener, so if you try the
following LSNRCTL> status <listener_name> You might have a correct response.
19.61: 64BIT PRO*COBOL IS NOT THERE EVNN AFTER UPGRDING TO 9.2.0.3 ON AIX-5L BOX
-------------------------------------------------------------------------------
Bug No. 2859282
Filed: 19-MAR-2003
Updated: 01-NOV-2003
Product: Precompilers
Product Version: 9.2.0.3
Platform: AIX5L Based Systems (64-bit)
Platform Version: 5.*
Database Version: 9.2.0.3
Affects: Platforms Port-Specific
Severity: Severe Loss of Service
Status: Closed, Duplicate Bug
Base Bug: 2440385
Fixed in Product Version: No Data

Problem statement: 64BIT PRO*COBOL IS NOT THERE EVNN AFTER UPGRDING TO 9.2.0.3 ON AIX-5L BOX
*** 03/19/03 10:13 am *** 2889686.996 . ========================= PROBLEM: . 1.
Clear description of the problem encountered: . cst. has upgraded from 9.2.0.2 t
o 9.2.0.3 on a AIX 5L 64-Bit Box and is not seeing the 64-bit Procob executable.
Actually the same problem existed when upgraded from 9.2.0.1 to 9.2.0.2, but th
e one-off patch has been provided in the Bug#2440385 to resolve the issue. As pe
r the Bug, problem has been fixed in 9.2.0.3. But My Cst. is facing the same pro
blem on 9.2.0.3 also. . This is what the Cst. says ============================
This is the original bug # 2440385. The fix provides 64 bit versions of Pro*Cobo
l.There are two versions of the patch for the bug: one is for the 9.2.0.1 RDBMS
and the other is for 9.2.0.2. So the last time I hit this issue, I applied the 9
.2.0.2 RDBMS patch to the 9.2.0.1 install. The 9.2.0.2 patch also experienced th
e relinking problem on rtsora just like the 9.2.0.1 install did. I ignored the e
rror to complete the patch application. Then I used the patch for the 2440385 bu
g to get 64 bit procob/rtsora executables (the patch actually provides executabl
es rather than performing a successful relinking) to get the Pro*Cobol 1.8.77 pr
ecompiler to work with the MicroFocus Server Express 2.0.11 (64 bit) without enc
ountering "bad magic number" error. . The current install that I am performing I
've downloaded the Oracle 9.2.0.3 Pro*Cobol capability fix either so the rtsora
relinking fails as well. Thus I don't have a working Pro*Cobol precompiler to al
low me to generate our Cobol programs against the database. . 2. Pertinent confi
guration information (MTS/OPS/distributed/etc) . 3. Indication of the frequency
and predictability of the problem . 4. Sequence of events leading to the problem
. 5. Technical impact on the customer. Include persistent after effects. . ====
===================== DIAGNOSTIC ANALYSIS: . One-off patch should be provided on
top of 9.2.0.3 as provided on top of 9.2.0.2/9.2.0.1 . ========================
= WORKAROUND: . . ========================= RELATED BUGS: . 2440385 . ==========
=============== REPRODUCIBILITY:
. . . . 1. State if the problem is reproducible; indicate where and predictabili
ty 2. List the versions in which the problem has reproduced 9.2.0.3 3. List any
versions in which the problem has not reproduced
Further notes on PRO*COBOL: =========================== Note 1: ======= 9201,920
2,9203,9204,9205 32 bit cobol: procob32 or procob18_32. 64 bit cobol: procob or
procob18 PATCHES: 1. Patch 2663624: (Cobol patch for 9202 AIX 5L) --------------
--------------------------------PSE FOR BUG2440385 ON 9.2.0.2 FOR AIX5L PORT 212
Patchset Exception: 2663624 / Base Bug 2440385 #-------------------------------
-----------------------------------------# # DATE: November 26, 2002 # ---------
-------------# Platform Patch for : AIX Based Systems (Oracle 64bit) for 5L # Pr
oduct Version # : 9.2.0.2 # Product Patched : RDBMS # # Bugs Fixed by this patch
: # ------------------------# 2440385 : PLEASE PROVIDE THE PATCH FOR SUPPORTING
64BIT PRO*COBOL # # Patch Installation Instructions: # -------------------------
------# To apply the patch, unzip the PSE container file; # # % unzip p2440385_9
202_AIX64-5L.zip # # Set your current directory to the directory where the patch
# is located: # # % cd 2663624 # # Ensure that the directory containing the opa
tch script appears in # your $PATH; then enter the following command: # # % opat
ch apply
2. Patch 2440385:
-----------------
Results for Platform: AIX5L Based Systems (64-bit)

Patch    Description                                      Release  Updated      Size
2440385  Pro*COBOL: PATCH FOR SUPPORTING 64BIT PRO*COBOL  9.2.0.3  27-APR-2003  34M
2440385  Pro*COBOL: PATCH FOR SUPPORTING 64BIT PRO*COBOL  9.2.0.2  26-NOV-2002  17M
2440385  Pro*COBOL: PATCH FOR SUPPORTING 64BIT PRO*COBOL  9.2.0.1  01-OCT-2002  17M
3. Patch 3501955 9205: ---------------------Also includes 2440385. Note 2: =====
== Problem precompiling Cobol program under Oracle 9i...... Hi, we recently upgr
aded to 9i. However, we still have 32 bit Cobol, so we're using the procob18_32
precompiler to compile our programs. Some of my compiles have worked successfull
y. However, I'm receiving the follow error in one of my compiles: 1834 183400 01
IB0-STATUS PIC 9. 7SA 350 1834 ...................................^ PCC-S-0018:
Expected "PICTURE clause", but found "9" at line 1834 in file What's strange is
that if I compile the program against the same DB using procob instead of proco
b18_32, it compiles cleanly. I noticed in my compile that failed using procob18_
32, it had the following message: System default option values taken from: /u01/
app/oracle/product/9.2.0.4/precomp /admin/pcccob.cfg Yet, when I used procob, it
had this message: System default option values taken from: /u01/app/oracle/prod
uct/9.2.0.4/precomp /admin/pcbcfg.cfg .. .. Hi, I started using procob32 instead
of procob18_32, and that resolved my problem. Thanks for any help you may have
already started to provide. Provide the patch for supporting 64-bit Pro*COBOL.
Note 3: ======= Doc ID: Note:257934.1 Content Type: TEXT/X-HTML Subject: Pro*COB
OL Application Fails in Runtime When Using Customized old Make Files With Signal
11 (MF Errror 114) Creation Date: 20-NOV-2003 Type: PROBLEM Last Revision Date:
04-APR-2005 Status: MODERATED The information in this article applies to: Preco
mpilers - Version: 9.2.0.4 This problem can occur on any platform. Symptoms Afte
r upgrading from Oracle server and Pro*COBOL 9.2.0.3.0 to 9.2.0.4.0 application
are failing with cobol runtime error 114 when using 32-bit builds. Platform is A
IX 4.3.3 which does not support 64-bit builds with Micro Focus Server Express 2.
0.11. Execution error : file 'sample1' error code: 114, pc=0, call=1, seg=0 114
Attempt to access item beyond bounds of memory (Signal 11) Changes Upgraded from
9.2.0.3.0 to 9.2.0.4.0. Cause The customized old make files for building 32-bit
applications invoked the 64-bit precompilers procob or procob18 instead of proc
ob32 or procob18_32. Fix Use the Oracle Supplied make templates or change the cu
stomized old make files for 32-bit application builds $ORACLE_HOME/precomp/demo/
procob2/demo_procob_32.mk, $ORACLE_HOME/precomp/demo/procob/demo_procob_32.mk an
d $ORACLE_HOME/precomp/demo/procob/demo_procob18_32.mk invoke the wrong precompi
ler. To fix the problem add the following to $ORACLE_HOME/precomp/demo/procob2/d
emo_procob_32.mk: PROCOB=procob32 Using $ORACLE_HOME/precomp/demo/procob/demo_pr
ocob_32.mk: PROCOB_32=procob32 Using $ORACLE_HOME/precomp/demo/procob/demo_proco
b18_32.mk PROCOB18_32=procob18_32 The change can be added to the bottom of the m
ake file. References Bug 3220095 - Procobol App Fails114 Attempt To Access Item
Beyond Bounds Of Memory (Signal 11) Note 4: ======= Displayed below are the mess
ages of the selected thread.
Thread Status: Closed From: Jean-Daniel DUMAS 23-Nov-04 16:39 Subject: PROCOB18_
32 Problem at execution ORA-00933 PROCOB18_32 Problem at execution ORA-00933 We
try to migrate from Oracle 8.1.7.4 to Oracle 9.2.0.5. We've got problems with a
lot of procobol programs using host table variables in PL SQL blocks like: EXEC
SQL EXECUTE BEGIN FOR nIndice IN 1..:WI-NB-APPELS-TFO009S LOOP UPDATE tmp_editio
n_erreur SET mon_nb_dec = :WTI-S2-MON-NB-DEC (nIndice) WHERE mon_cod = :WTC-S2-M
ON-COD (nIndice) AND run_id = :WC-O-RUN-ID; END LOOP; END; END-EXEC At execution
, we've got "ORA-00933 SQL command not properly ended". The problem seems to app
ear only if the host table variable is used inside a SELECT,UPDATE or DELETE com
mand. For the INSERT VALUES command, it seems that we've got no problem. A worka
round consists to assign host table variables into oracle table variables and re
place inside SQL command host table variables by oracle table variables. But, as
we've got a lot a program like this, we don't enjoy to do this. Have somebody a
nother idea ? jddumas@eram.fr From: Oracle, Amit Joshi 05-Jan-05 06:26 Subject:
Re : PROCOB18_32 Problem at execution ORA-00933 Hi Please refer to bug 3802067 o
n Metalink. From the details provided , it seems you are hitting the same. Best
Regards Amit Joshi Note 5: ======= Re: Server Express 64bit and Oracle 9i proble
m (114) on AIX 5.2 Hi Wayne (and Panos) Apologies if you're aware of some of thi
s already, but I just wanted to clarify the steps involved in creating and execu
ting a Pro*COBOL application with Micro Focus Server Express on UNIX.
When installing Pro*COBOL on UNIX (as part of the main Oracle installation), you
need to have your COBOL environment setup, in order for the installer to relink
a COBOL RTS containing the Oracle support libraries (rtsora/rtsora32/rtsora64).
The 64-bit edition of Oracle 9i on AIX 5.x creates rtsora -- the 64-bit version
of the run-time -- and rtsora32 -- the 32-bit version of the run-time. It's imp
erative that you use the correct edition of Server Express, i.e. 32-bit or 64-bi
t -- note well, that these are separate products on this platform -- for the mod
e in which you wish to use Oracle. In addition, you need to ensure that LIBPATH
is set to point to the correct Oracle 'lib' directory -- $ORACLE_HOME/lib32 for
32-bit, or $ORACLE_HOME/lib for 64-bit If you wish to recreate those executables
, say if you've updated your COBOL environment since installing Oracle, then fro
m looking at the makefiles -ins_precomp.mk and env_precomp.mk -- then the effect
ive commands to use to re-link the run-time correctly are as follows (logged in
under your Oracle user ID) : either mode: <set up COBDIR, ORACLE_HOME, ORACLE_BA
SE, ORACLE_SID as appropriate for your installation> export PATH=$COBDIR/bin:$OR
ACLE_HOME/bin:$PATH 32-bit : export LIBPATH=$COBDIR/lib:$ORACLE_HOME/lib32:$LIBP
ATH cd $ORACLE_HOME/precomp/lib make LIBDIR=lib32 -f ins_precomp.mk EXE=rtsora32
rtsora32 64-bit: export LIBPATH=$COBDIR/lib:$ORACLE_HOME/lib:$LIBPATH cd $ORACL
E_HOME/precomp/lib make -f ins_precomp.mk rtsora Regarding precompiling your app
lication, Oracle provide two versions of Pro*COBOL. Again, you need to use the c
orrect one depending on whether you're creating a 32-bit or 64-bit application,
as the precompiler will generate different code. If invoking Pro*COBOL directly,
you need to use : 32-bit : procob32 / procob18_32 , e.g. procob32 myapp.pco cob
-it myapp.cob rtsora32 myapp.int or 64-bit : procob / procob18 , e.g. procob my
app.pco cob -it myapp.cob rtsora myapp.int If you're using Server Express 2.2 SP
1 or later, you can also compile using the Cobsql preprocessor, which will invok
e the correct version of Pro*COBOL under the covers, allowing for a single preco
mpile-compile step, e.g.
cob -ik myapp.pco -C "p(cobsql) csqlt==oracle8 endp" This method also aids debug
ging, as you will see the original source code while animating, rather than the
output from the precompiler. See the Server Express Database Access manual. Prio
r to SX 2.2 SP1, Cobsql only supported the creation of 32-bit applications. I ho
pe this helps -- if you're still having problems, please let me know. Regards, S
imonT. Re: Re: Server Express 64bit and Oracle 9i problem (114) on AIX 5.2 Hi Si
mon (and anyone else) Thanks for that. We still seem to be getting a very unusua
l error with our c ompiles in or makes. A bit of background: we are "upgrading"
from Oracle8i, SAS6, Solaris, MF COB OL 4.5 to AIX 5L, Oracle9i, SAS8 and MF Ser
ver Express COBOL. When we attempt to compile our COBOL it works fine. However i
f the COBOL has embedded Oracle SQL our procomp makes try to access ADA. We do n
ot use ADA. I thought this must have been included by accident; but can find no
flag or install option for it. So can you give us any clues as to why we are suf
fer ing an ADA plague :-)) Wayne Re: Server Express 64bit and Oracle 9i problem
(114) on AIX 5.2 Hi Wayne. On the surface, it appears as if you're not picking u
p the correct Pro*COBOL binary. If you invoke 'procob' from the command line, yo
u should see something along the lines of : Pro*COBOL: Release 9.2.0.4.0 - Produ
ction on Mon Apr 19 13:38:07 2004 followed by a list of Pro*COBOL options. Do yo
u see this, or do you see a different banner (say, Pro*ADA, or Pro*Fortran)? Ass
uming you see something other than a Pro*COBOL banner, then if you invoke 'whenc
e procob', does it show procob as being picked up from your Oracle bin directory
(/home/oracle/9.2.0/bin/procob in my case) ? If you're either not seeing the co
rrect Pro*COBOL banner, or it's not located in the correct directory, I'd sugges
t rebuilding the procob and procob32 binaries. Logged in under your Oracle user
ID, with the Oracle environment set up : cd $ORACLE_HOME/precomp/lib make -f ins
_precomp.mk procob32 procob
and then try your compilation process again. Regards, SimonT. Re: Re: Server Exp
ress 64bit and Oracle 9i problem (114) on AIX 5.2 Hi Simon Firstly, thanks for a
ll your help, it was greatly appreciated. We have the solution to our problem: T
he problem is resolved by modifying the line in the job from: make -f $SRC_DIR/p
rocob.mk COBS="$SRC_DIR/PFEM025A.cob SYSDATE.cob CNTLGET. cob" EXE=$SRC_DIR/PFEM
025A to make -f $SRC_DIR/procob.mk build COBS="$SRC_DIR/PFEM025A.cob SYSDATE.cob
CN TLGET.cob" EXE=$SRC_DIR/PFEM025A It appears this (build keyword) is not a re
quirement for the job to run on S olaris but is for AIX. All is working fine. Ch
eers Wayne

Note 6:
=======
Doc ID: Note:2440385.8  Content Type: TEXT/X-HTML
Subject: Support Description of Bug 2440385
Creation Date: 08-AUG-2003
Type: PATCH
Last Revision Date: 15-AUG-2003
Status: PUBLISHED

Bug 2440385 AIX: Support for 64 bit ProCobol This note gives a brief overv
iew of bug 2440385. Affects: Product (Component) Precompilers (Pro*COBOL) Range
of versions believed to be affected Versions >= 7 but < 10G Versions confirmed a
s being affected 9.2.0.3 Platforms affected Aix 64bit 5L Fixed: This issue is fi
xed in 9.2.0.4 (Server Patch Set) Symptoms: (None Specified) Related To: Pro* Pr
ecompiler Description Add support for 64 bit ProCobol
The full bug text (if published) can be seen at Bug 2440385 This link will not w
ork for UNPUBLISHED bugs. Note 7: ======= Displayed below are the messages of th
e selected thread. Thread Status: Closed From: Cathy Agada 18-Sep-03 21:40 Subje
ct: How do I relink rtsora for 64 bit processing How do I relink rtsora for 64 b
it processing I have the following error while relinking "rtsora" on AIX 5L/64bi
t platform on oracle 9.2.0.3 (I believe my patch is up-to-date). Our Micro Focus
compiler version is 2.0.11 $>make -f ins_precomp.mk relink EXENAME=rtsora /bin/
make -f ins_precomp.mk LIBDIR=lib32 EXE=/app/oracle/product/9.2.0/precomp/lib/rt
sora rtsora32 Linking /app/oracle/product/9.2.0/precomp/lib/rtsora cob64: bad ma
gic number: /app/oracle/product/9.2.0/precomp/lib32/cobsqlintf.o make: 1254-004
The error code from the last command is 1. Stop. make: 1254-004 The error code f
rom the last command is 2. My environment variable is as follows: COBDIR=/usr/lp
p/cobol LD_LIBRARY_PATH=$ORACLE_HOME/lib:/app/oracle/product/9.2.0/network/lib S
HLIB_PATH=$ORACLE_HOME/lib64:/app/oracle/product/9.2.0/lib32 I added 'define=bit
64' on precomp config file. Any ideas on what could be wrong. Thanks.
From: Oracle, Amit Chitnis 19-Sep-03 05:26 Subject: Re : How do I relink rtsora
for 64 bit processing Cathy, Support for 64 bit Pro*Cobol 9.2.0.3 on AIX 5.1 was
provided through one off patch for bug 2440385 You will need to download and ap
ply the patch for bug 2440385. ==OR== You can dowload and apply the latest 9.2.0
.4 patchset where the bug is fixed.
Thanks, Amit Chitnis. Note 8: ======= Doc ID: Note:215279.1 Content Type: TEXT/X
-HTML Subject: Building Pro*COBOL Programs Fails With "cob64: bad magic number:"
Creation Date: 08-APR-2003 Type: PROBLEM Last Revision Date: 15-APR-2003 Status
: PUBLISHED fact: Pro*COBOL 9.2.0.2 fact: Pro*COBOL 9.2.0.1 fact: AIX-Based Syst
ems (64-bit) symptom: Building Pro*COBOL programs fails symptom: cob64: bad magi
c number: %s symptom: /oracle/product/9.2.0/precomp/lib32/cobsqlintf.o cause: Bu
g 2440385 AIX: Support for 64 bit ProCobol fix: This is fixed in Pro*COBOL 9.2.0
.3 One-Off patch for Pro*COBOL 9.2.0.2 has been provided in Metalink Patch Numbe
r 2440385 Reference: How to Download a Patch from Oracle Note 9: ======= If you
wish to recreate those executables, say if you've updated your COBOL environment
since installing Oracle, then from looking at the makefiles -ins_precomp.mk and
env_precomp.mk -- then the effective commands to use to re-link the run-time co
rrectly are as follows (logged in under your Oracle user ID) : either mode: <set
up COBDIR, ORACLE_HOME, ORACLE_BASE, ORACLE_SID as appropriate for your install
ation> export PATH=$COBDIR/bin:$ORACLE_HOME/bin:$PATH 32-bit : export LIBPATH=$C
OBDIR/lib:$ORACLE_HOME/lib32:$LIBPATH
cd $ORACLE_HOME/precomp/lib make LIBDIR=lib32 -f ins_precomp.mk EXE=rtsora32 rts
ora32 64-bit: export LIBPATH=$COBDIR/lib:$ORACLE_HOME/lib:$LIBPATH cd $ORACLE_HO
ME/precomp/lib make -f ins_precomp.mk rtsora Note 10: ======== On 9.2.0.5, try t
o get the pro cobol patch for 9203. Then just copy the procobol files to the cob
ol directory.
19.62: ORA-12170:
=================
Connection Timeout.

Doc ID: Note:274303.1  Content Type: TEXT/X-HTML
Subject: Description of parameter SQLNET.INBOUND_CONNECT_TIMEOUT
Creation Date: 26-MAY-2004
Type: BULLETIN
Last Revision Date: 10-FEB-2005
Status: MODERATED
PURPOSE
-------
To specify the time, in seconds, for a client to c
onnect with the database server and provide the necessary authentication informa
tion. Description of parameter SQLNET.INBOUND_CONNECT_TIMEOUT ------------------
------------------------------------This parameter has been introduced in 9i ver
sion. This has to be configured in sqlnet.ora file. Use the SQLNET.INBOUND_CONNE
CT_TIMEOUT parameter to specify the time, in seconds, for a client to connect wi
th the database server and provide the necessary authentication information. If
the client fails to establish a connection and complete authentication in the ti
me specified, then the database server terminates the connection. In addition, t
he database server logs the IP address of the client and an ORA-12170: TNS:Conne
ct timeout occurred error message to the sqlnet.log file. The client receives ei
ther an ORA-12547: TNS:lost contact or
an ORA-12637: Packet receive failed error message. Without this parameter, a cli
ent connection to the database server can stay open indefinitely without authent
ication. Connections without authentication can introduce possible denial-of-ser
vice attacks, whereby malicious clients attempt to flood database servers with c
onnect requests that consume resources. To protect both the database server and
the listener, Oracle Corporation recommends setting this parameter in combinatio
n with the INBOUND_CONNECT_TIMEOUT_listener_name parameter in the listener.ora f
ile. When specifying values for these parameters, consider the following recomme
ndations: *Set both parameters to an initial low value. *Set the value of the IN
BOUND_CONNECT_TIMEOUT_listener_name parameter to a lower value than the SQLNET.I
NBOUND_CONNECT_TIMEOUT parameter. For example, you can set INBOUND_CONNECT_TIMEO
UT_listener_name to 2 seconds and INBOUND_CONNECT_TIMEOUT parameter to 3 seconds
. If clients are unable to complete connections within the specified time due to
system or network delays that are normal for the particular environment, then i
ncrement the time as needed. By default this parameter is not set (None). Example: SQLNET.INBOUND_CONNECT_TIMEOUT=3

RELATED DOCUMENTS
-----------------
Oracle9i Net Services Referenc
e Guide, Release 2 (9.2), Part Number A96581-02 SQLNET.EXPIRE_TIME: ------------
------Purpose: Determines time interval to send a probe to verify the session is
alive See Also: Oracle Advanced Security Administrator's Guide Default: None Mi
nimum Value: 0 minutes Recommended Value: 10 minutes Example: sqlnet.expire_time
=10 sqlnet.expire_time Enables dead connection detection, that is, after the spe
cified time (in minutes)
the server checks if the client is still connected. If not, the server process e
xits. This parameter must be set on the server
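Putting the two parameters together, a server-side configuration along the lines of this note (the listener name LISTENER is only an example; use your own listener name in the parameter suffix) might look like:

# sqlnet.ora on the database server
SQLNET.INBOUND_CONNECT_TIMEOUT = 3
SQLNET.EXPIRE_TIME = 10

# listener.ora on the database server
INBOUND_CONNECT_TIMEOUT_LISTENER = 2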
PROBLEM: Long query (20 minutes) returns ORA-01013 after about a minute. SOLUTIO
N: The SQLNET.ORA parameter SQLNET.EXPIRE_TIME was set to one (1). The paramete
r was changed to... SQLNET.EXPIRE_TIME=2147483647 This allowed the query to comp
lete. This is documented in the Oracle Troubleshooting manual on page 324. The m
anual part number is A54757.01. Keywords: SQLNET.EXPIRE_TIME,SQLNET.ORA,ORA-0101
3 sqlnet.expire_time should be set on the server. The server sends keep alive tr
affic over connections that have already been established. You won't need to cha
nge your firewall. sqlnet.expire_time is actually intended to test connections i
n order to allow oracle to clean up resources from connection that abnormally te
rminated. The architecture to do that means that the server will send a probe pa
cket to the client. That probe packet is viewed by the most firewalls as traffic
on the line. That will in short reset the idle timers on the firewall. If you h
appen to have the disconnects from idle timers then it may help. It was not inte
nded for that feature but it is a byproduct of the design. 19.63: Tracing SQLNET
: ====================== Note 1: -------
Doc ID: Note:219968.1 Subject: SQL*Net, Net8, Oracle Net Services - Tracing and
Logging at a Glance Type: BULLETIN Status: PUBLISHED Content Type: TEXT/X-HTML C
reation Date: 20-NOV-2002 Last Revision Date: 26-AUG-2003
TITLE -----
SQL*Net, Net8, Oracle Net Services - Tracing and Logging at a Glance. PURPOSE --
----The purpose of Oracle Net tracing and logging is to provide detailed informa
tion to track and diagnose Oracle Net problems such as connectivity issues, abno
rmal disconnection and connection delay. Tracing provides varying degrees of inf
ormation that describe connection-specific internal operations during Oracle Net
usage. Logging reports summary, status and error messages. Oracle Net Services
is the replacement name for the Oracle Networking product formerly known as SQL*
Net (Oracle7 [v2.x]) and Net8 (Oracle8/8i [v8.0/8.1]). For consistency, the term
Oracle Net is used thoughout this article and refers to all Oracle Net product
versions. SCOPE & APPLICATION ------------------The aim of this document is to o
verview SQL*Net, Net8, Oracle Net Services tracing and logging facilities. The i
ntended audience includes novice Oracle users and DBAs alike. Although only basi
c information on how to enable and disable tracing and logging features is descr
ibed, the document also serves as a quick reference. The document provides the r
eader with the minimum information necessary to generate trace and log files wit
h a view to forwarding them to Oracle Support Services (OSS) for further diagnos
is. The article does not intend to describe trace/log file contents or explain h
ow to interpret them. LOG & TRACE PARAMETER OVERVIEW ---------------------------
--
The following is an overview of Oracle Net trace and log parameters.

TRACE_LEVEL_[CLIENT|SERVER|LISTENER]      = [0-16|USER|ADMIN|SUPPORT|OFF]
TRACE_FILE_[CLIENT|SERVER|LISTENER]       = <FILE NAME>
TRACE_DIRECTORY_[CLIENT|SERVER|LISTENER]  = <DIRECTORY>
TRACE_UNIQUE_[CLIENT|SERVER|LISTENER]     = [ON|TRUE|OFF|FALSE]
TRACE_TIMESTAMP_[CLIENT|SERVER|LISTENER]  = [ON|TRUE|OFF|FALSE]   #Oracle8i+
TRACE_FILELEN_[CLIENT|SERVER|LISTENER]    = <SIZE in KB>          #Oracle8i+
TRACE_FILENO_[CLIENT|SERVER|LISTENER]     = <NUMBER>              #Oracle8i+
LOG_FILE_[CLIENT|SERVER|LISTENER]         = <FILE NAME>
LOG_DIRECTORY_[CLIENT|SERVER|LISTENER]    = <DIRECTORY NAME>
LOGGING_LISTENER                          = [ON|OFF]
TNSPING.TRACE_LEVEL                       = [0-16|USER|ADMIN|SUPPORT|OFF]
TNSPING.TRACE_DIRECTORY                   = <DIRECTORY>
NAMES.TRACE_LEVEL                         = [0-16|USER|ADMIN|SUPPORT|OFF]
NAMES.TRACE_FILE                          = <FILE NAME>
NAMES.TRACE_DIRECTORY                     = <DIRECTORY>
NAMES.TRACE_UNIQUE                        = [ON|OFF]
NAMES.LOG_FILE                            = <FILE NAME>
NAMES.LOG_DIRECTORY                       = <DIRECTORY>
NAMES.LOG_UNIQUE                          = [ON|OFF]
NAMESCTL.TRACE_LEVEL                      = [0-16|USER|ADMIN|SUPPORT|OFF]
NAMESCTL.TRACE_FILE                       = <FILE NAME>
NAMESCTL.TRACE_DIRECTORY                  = <DIRECTORY>
NAMESCTL.TRACE_UNIQUE                     = [ON|OFF]
Note: With the exception of parameters suffixed with LISTENER, all other paramet
er suffixes and prefixes [CLIENT|NAMES|NAMESCTL|SERVER|TNSPING] are fixed and ca
nnot be changed. For parameters suffixed with LISTENER, the suffix name should b
e the actual Listener name. For example, if the Listener name is PROD_LSNR, an e
xample trace parameter name would be TRACE_LEVEL_PROD_LSNR=OFF.

CONFIGURATION FILES
-------------------
Files required to enable Oracle Net tracing and logging features include:

Component                  Configuration file            Trace file
Oracle Net Listener        LISTENER.ORA                  LISTENER.TRC
Oracle Net - Client        SQLNET.ORA on client          SQLNET.TRC
Oracle Net - Server        SQLNET.ORA on server          SQLNET.TRC
TNSPING Utility            SQLNET.ORA on client/Server   TNSPING.TRC
Oracle Name Server         NAMES.ORA                     NAMES.TRC
Oracle NAMESCTL            SQLNET.ORA on server
Oracle Connection Manager  CMAN.ORA
CONSIDERATIONS WHEN USING LOGGING/TRACING --------------------------------------
--1. Verify which Oracle Net configuration files are in use. By default, Oracle
Net configuration files are sought and resolved from the following locations: TN
S_ADMIN environment variable (incl. Windows Registry Key) /etc or /var/opt/oracl
e (Unix) $ORACLE_HOME/network/admin (Unix) %ORACLE_HOME%/Network/Admin or %ORACL
E_HOME%/Net80/Admin (Windows) Note: User-specific Oracle Net parameters may also
reside in $HOME/sqlnet.ora file. An Oracle Net server installation is also a cl
ient. 2. Oracle Net tracing and logging can consume vast quantities of disk spac
e. Monitor for sufficient disk space when tracing is enabled. On some Unix opera
ting systems, /tmp is used for swap space. Although generally writable by all us
ers, this is not an ideal location for trace/log file generation. 3. Oracle Net
tracing should only be enabled for the duration of the issue at hand. Oracle Net
tracing should always be disabled after problem resolution. 4. Large trace/log
files place an overhead on the processes that generate them. In the absence of i
ssues, the disabling of tracing and/or logging will improve Oracle Net overall e
fficiency. Alternatively, regularly truncating log files will also improve effic
iency.
5. Ensure that the target trace/log directory is writable by the connecting user
, Oracle software owner and/or user that starts the Net Listener. LOG & TRACE PA
RAMETERS ---------------------This section provides a detailed description of ea
ch trace and log parameter. TRACE LEVELS TRACE_LEVEL_[CLIENT|SERVER|LISTENER] =
[0-16|USER|ADMIN|SUPPORT|OFF] Determines the degree to which Oracle Net tracing
is provided. Configuration file is SQLNET.ORA, LISTENER.ORA. Level 0 is disabled
- level 16 is the most verbose tracing level. Listener tracing requires the Net
Listener to be reloaded or restarted after adding trace parameters to LISTENER.
ORA. Oracle Net (client/server) tracing takes immediate effect after tracing par
ameters are added to SQLNET.ORA. By default, the trace level is OFF.

OFF      (equivalent to 0)  disabled - provides no tracing.
USER     (equivalent to 4)  traces to identify user-induced error conditions.
ADMIN    (equivalent to 6)  traces to identify installation-specific problems.
SUPPORT  (equivalent to 16) trace information required by OSS for troubleshooting.
TRACE FILE NAME TRACE_FILE_[CLIENT|SERVER|LISTENER] = <FILE NAME> Determines the
trace file name. Any valid operating system file name. Configuration file is SQ
LNET.ORA, LISTENER.ORA. Trace file is automatically appended with '.TRC'. Defaul
t trace file name is SQLNET.TRC, LISTENER.TRC. TRACE DIRECTORY TRACE_DIRECTORY_[
CLIENT|SERVER|LISTENER] = <DIRECTORY> Determines the directory in which trace fi
les are written. Any valid operating system directory name. Configuration file i
s SQLNET.ORA, LISTENER.ORA. Directory should be writable by the connecting user
and/or Oracle software owner. Default trace directory is $ORACLE_HOME/network/tr
ace. UNIQUE TRACE FILES TRACE_UNIQUE_[CLIENT|SERVER|LISTENER] = [ON|TRUE|OFF|FAL
SE] Allows generation of unique trace files per connection. Trace file names are
automatically appended with '_<PID>.TRC'. Configuration file is SQLNET.ORA, LIS
TENER.ORA. Unique tracing is ideal for sporadic issues/errors that occur infrequ
ently or randomly. Default value is OFF TRACE TIMING
TRACE_TIMESTAMP_[CLIENT|SERVER|LISTENER] = [ON|TRUE|OFF|FALSE] A timestamp in th
e form of [DD-MON-YY 24HH:MI:SS] is recorded against each operation traced by th
e trace file. Configuration file is SQLNET.ORA, LISTENER.ORA Suitable for hangin
g or slow connection issues. Available from Oracle8i onwards. Default value is OFF.
MAXIMUM TRACE FILE LENGTH TRACE_FILELEN_[CLIENT|SERVER|LISTENER] = <SIZE>
Determines the maximum trace file size in Kilobytes (Kb). Configuration file is
SQLNET.ORA, LISTENER.ORA. Available from Oracle8i onwards. Default value is UNL
IMITED. TRACE FILE CYCLING TRACE_FILENO_[CLIENT|SERVER|LISTENER] = <NUMBER> Dete
rmines the maximum number of trace files through which to perform cyclic tracing
. Configuration file is SQLNET.ORA, LISTENER.ORA. Suitable when disk space is li
mited or when tracing is required to be enabled for long periods. Available from
Oracle8i onwards. Default value is 1 (file). LOG FILE NAME LOG_FILE_[CLIENT|SER
VER|LISTENER] = <FILE NAME> Determines the log file name. May be any valid opera
ting system file name. Configuration file is SQLNET.ORA, LISTENER.ORA. Log file
is automatically appended with '.LOG'. Default log file name is SQLNET.LOG, LIST
ENER.LOG. LOG DIRECTORY LOG_DIRECTORY_[CLIENT|SERVER|LISTENER] = <DIRECTORY NAME
> Determines the directory in which log files are written. Any valid operating s
ystem directory name. Configuration file is SQLNET.ORA, LISTENER.ORA. Directory
should be writable by the connecting user or Oracle software owner. Default dire
ctory is $ORACLE_HOME/network/log. DISABLING LOGGING LOGGING_LISTENER = [ON|OFF]
Disables Listener logging facility. Configuration file is LISTENER.ORA. Default
value is ON. ORACLE NET TRACE/LOG EXAMPLES -----------------------------
CLIENT (SQLNET.ORA) trace_level_client = 16 trace_file_client = cli trace_direct
ory_client = /u01/app/oracle/product/9.0.1/network/trace trace_unique_client = o
n trace_timestamp_client = on trace_filelen_client = 100 trace_fileno_client = 2
log_file_client = cli log_directory_client = /u01/app/oracle/product/9.0.1/netw
ork/log tnsping.trace_directory = /u01/app/oracle/product/9.0.1/network/trace tn
sping.trace_level = admin SERVER (SQLNET.ORA) trace_level_server = 16 trace_file
_server = svr trace_directory_server = /u01/app/oracle/product/9.0.1/network/tra
ce trace_unique_server = on trace_timestamp_server = on trace_filelen_server = 1
00 trace_fileno_server = 2 log_file_server = svr log_directory_server = /u01/app
/oracle/product/9.0.1/network/log namesctl.trace_level = 16 namesctl.trace_file
= namesctl namesctl.trace_directory = /u01/app/oracle/product/9.0.1/network/trac
e namesctl.trace_unique = on LISTENER (LISTENER.ORA) trace_level_listener = 16 t
race_file_listener = listener trace_directory_listener = /u01/app/oracle/product
/9.0.1/network/trace trace_timestamp_listener = on trace_filelen_listener = 100
trace_fileno_listener = 2 logging_listener = off log_directory_listener = /u01/a
pp/oracle/product/9.0.1/network/log log_file_listener=listener NAMESERVER TRACE
(NAMES.ORA) names.trace_level = 16 names.trace_file = names names.trace_director
y = /u01/app/oracle/product/9.0.1/network/trace names.trace_unique = off CONNECT
ION MANAGER TRACE (CMAN.ORA) tracing = yes RELATED DOCUMENTS -----------------
Note 16658.1     (7) Tracing SQL*Net/Net8
Note 111916.1    SQLNET.ORA Logging and Tracing Parameters
Note 39774.1     Log & Trace Facilities on Net v2
Note 73988.1     How to Get Cyclic SQL*Net Trace Files when Disk Space is Limited
Note 1011114.6   SQL*Net V2 Tracing
Note 1030488.6   Net8 Tracing
Note 2: ------Doc ID: Note:39774.1 Subject: LOG & TRACE Facilities on NET v2. Ty
pe: FAQ Status: PUBLISHED Content Type: TEXT/X-HTML Creation Date: 25-JUL-1996 L
ast Revision Date: 31-JAN-2002
LOG AND TRACE FACILITIES ON SQL*NET V2 ====================================== Th
is article describes the log and trace facilities that can be used to examine ap
plication connections that use SQL*Net. This article is based on usage of SQL*NE
T v2.3. It explains how to invoke the trace facility and how to use the log and
trace information to diagnose and resolve operating problems. Following topics a
re covered below: o o o o o o What the log facility is What the trace facility i
s How to invoke the trace facility Logging and tracing parameters Sample log out
put Sample trace output
Note: Information in this section is generic to all operating system environment
s. You may require further information from the Oracle operating system-specific
documentation for some details of your specific operating environment. ________
________________________________ 1. What is the Log Facility? ==================
========== All errors encountered in SQL*Net are logged to a log file for evalua
tion by a network or database administrator. The log file provides additional in
formation for an administrator when the error on the screen is inadequate to und
erstand the failure. The log file, by way of the error stack, shows the state of
the TNS software at various layers. The properties of the log file are:
o o
Error information is appended to the log file when an error occurs. Generally, a
log file can only be replaced or erased by an administrator, although client lo
g files can be deleted by the user whose application created them. (Note that in
general it is bad practice to delete these files while the program using them i
s still actively logging.) Logging of errors for the client, server, and listene
r cannot be disabled. This is an essential feature that ensures all errors are r
ecorded. The Navigator and Connection Manager components of the MultiProtocol In
terchange may have logging turned on or off. If on, logging includes connection
statistics. The Names server may have logging turned on or off. If on, a Names s
erver's operational events are written to a specified logfile. You set logging p
arameters using the Oracle Network Manager.
o
o
o
________________________________________ 2. What is the Trace Facility? ========
====================== The trace facility allows a network or database administr
ator to obtain more information on the internal operations of the components of
a TNS network than is provided in a log file. Tracing an operation produces a de
tailed sequence of statements that describe the events as they are executed. All
trace output is directed to trace output files which can be evaluated after the
failure to identify the events that lead up to an error. The trace facility is
typically invoked during the occurrence of an abnormal condition, when the log f
ile does not provide a clear indication of the cause. Attention: The trace facil
ity uses a large amount of disk space and may have a significant impact upon sys
tem performance. Therefore, you are cautioned to turn the trace facility ON only
as part of a diagnostic procedure and to turn it OFF promptly when it is no lon
ger necessary. Components that can be traced using the trace facility are: o o o
o Network listener SQL*Net version 2 components - SQL*Net client - SQL*Net serv
er MultiProtocol Interchange components - the Connection Manager and pumps - the
Navigator Oracle Names Names server - Names Control Utility
The trace facility can be used to identify the following types of problems: - Di
fficulties in establishing connections
-
Abnormal termination of established connections Fatal errors occurring during th
e operation of TNS network components
________________________________________ 3. What is the Difference between Loggi
ng and Tracing? ====================================================== While log
ging provides the state of the TNS components at the time of an error, tracing p
rovides a description of all software events as they occur, and therefore provid
es additional information about events prior to an error. There are three levels
of diagnostics, each providing more information than the previous level. The th
ree levels are: 1. The reported error from Oracle7 or tools; this is the single
error that is commonly returned to the user. 2. The log file containing the stat
e of TNS at the time of the error. This can often uncover low level errors in in
teraction with the underlying protocols. 3. The trace file containing English st
atements describing what the TNS software has done from the time the trace sessi
on was initiated until the failure is recreated. When an error occurs, a simple
error message is displayed and a log file is generated. Optionally, a trace file
can be generated for more information. (Remember, however, that using the trace
facility has an impact on your system performance.) In the following example, t
he user failed to use Oracle Network Manager to create a configuration file, and
misspelled the word "PORT" as "POT" in the connect descriptor. It is not import
ant that you understand in detail the contents of each of these results; this ex
ample is intended only to provide a comparison. Reported Error (On the screen in
SQL*Forms): ERROR: ORA-12533: Unable to open message file (SQL-02113) Logged Er
ror (In the log file, SQLNET.LOG): *********************************************
******************* Fatal OSN connect error 12533, connecting to: (DESCRIPTION=(
CONNECT_DATA=(SID=trace)(CID=(PROGRAM=)(HOST=lala) (USER=ginger)))(ADDRESS_LIST=
(ADDRESS=(PROTOCOL=ipc) (KEY=bad_port))(ADDRESS=(PROTOCOL=tcp)(HOST=lala)(POT=15
21)))) VERSION INFORMATION: TNS for SunOS: Version 2.0.14.0.0 - Developer's Rele
ase Oracle Bequeath NT Protocol Adapter for SunOS: Version 2.0.14.0.0 - Develope
r's Release Unix Domain Socket IPC NT Protocol Adaptor for SunOS: Version 2.0.14
.0.0 - Developer's Release
TCP/IP NT Protocol Adapter for SunOS: Version 2.0.14.0.0 Developer's Release Tim
e: 07-MAY-93 17:38:50 Tracing to file: /home/ginger/trace_admin.trc Tns error st
ruct: nr err code: 12206 TNS-12206: TNS:received a TNS error while doing navigat
ion ns main err code: 12533 TNS-12533: TNS:illegal ADDRESS parameters ns seconda
ry err code: 12560 nt main err code: 503 TNS-00503: Illegal ADDRESS parameters n
t secondary err code: 0 nt OS err code: 0

Example of Trace of Error
-------------------------
The trace file, SQLNET.TRC at the USER level, contains the following information:

--- TRACE CONFIGURATION INFORMATION FOLLOWS ---
New trace stream is "/private1/oracle/trace_user.trc"
New trace level is 4
--- TRACE CONFIGURATION INFORMATION ENDS ---
--- PARAMETER SOURCE INFORMATION FOLLOWS --Attempted load of system pfile source
/private1/oracle/network/admin/sqlnet.ora Parameter source was not loaded Error
stack follows: NL-00405: cannot open parameter file Attempted load of local pfi
le source /home/ginger/.sqlnet.ora Parameter source loaded successfully -> PARAM
ETER TABLE LOAD RESULTS FOLLOW <Some parameters may not have been loaded See dum
p for parameters which loaded OK -> PARAMETER TABLE HAS THE FOLLOWING CONTENTS <
TRACE_DIRECTORY_CLIENT = /private1/oracle trace_level_client = USER TRACE_FILE_C
LIENT = trace_user --- PARAMETER SOURCE INFORMATION ENDS ----- LOG CONFIGURATION
INFORMATION FOLLOWS --Attempted open of log stream "/tmp_mnt/home/ginger/sqlnet
.log" Successful stream open --- LOG CONFIGURATION INFORMATION ENDS --Unable to
get data from navigation file tnsnav.ora local names file is /home/ginger/.tnsna
mes.ora system names file is /etc/tnsnames.ora -<ERROR>- failure, error stack fo
llows -<ERROR>- NL-00427: bad list -<ERROR>- NOTE: FILE CONTAINS ERRORS, SOME NA
MES MAY BE MISSING Calling address:
(DESCRIPTION=(CONNECT_DATA=(SID=trace)(CID=(PROGRAM=)(HOST=lala)(USER=ging er)))
)(HOST (ADDRESS_LIST=(ADDRESS=(PROTOCOL=ipc)(KEY=bad_port))(ADDRESS=(PROTOCOL=t
cp Getting local community information Looking for local addresses setup by nrig
la No addresses in the preferred address list TNSNAV.ORA is not present. No loca
l communities entry. Getting local address information Address list being proces
sed... No community information so all addresses are "local" Resolving address t
o use to call destination or next hop Processing address list... No community en
tries so iterate over address list This a local community access Got routable ad
dress information Making call with following address information: (DESCRIPTION=(
EMPTY=0)(ADDRESS=(PROTOCOL=ipc)(KEY=bad_port))) Calling with outgoing connect da
ta (DESCRIPTION=(CONNECT_DATA=(SID=trace)(CID=(PROGRAM=)(HOST=lala)(USER=ging (A
DDRESS_LIST=(ADDRESS=(PROTOCOL=tcp)(HOST=lala)(POT=1521)))) (DESCRIPTION=(EMPTY=
0)(ADDRESS=(PROTOCOL=ipc)(KEY=bad_port))) KEY = bad_port connecting... opening t
ransport... -<ERROR>- sd=8, op=1, resnt[0]=511, resnt[1]=2, resnt[2]=0 -<ERROR>-
unable to open transport -<ERROR>- nsres: id=0, op=1, ns=12541, ns2=12560; nt[0
]=511, nt[1]=2, nt[2]=0 connect attempt failed Call failed... Call made to desti
nation Processing address list so continuing Getting local community information
Looking for local addresses setup by nrigla No addresses in the preferred addre
ss list TNSNAV.ORA is not present. No local communities entry. Getting local add
ress information Address list being processed... No community information so all
addresses are "local" Resolving address to use to call destination or next hop
Processing address list... No community entries so iterate over address list Thi
s a local community access Got routable address information Making call with fol
lowing address information: (DESCRIPTION=(EMPTY=0)(ADDRESS=(PROTOCOL=tcp)(HOST=l
ala)(POT=1521))) Calling with outgoing connect data (DESCRIPTION=(CONNECT_DATA=(
SID=trace)(CID=(PROGRAM=)(HOST=lala)(USER=ging (ADDRESS_LIST=(ADDRESS=(PROTOCOL=
tcp)(HOST=lala)(POT=521)))) (DESCRIPTION=(EMPTY=0)(ADDRESS=(PROTOCOL=tcp)(HOST=l
ala)(POT=1521))) -<FATAL?>- failed to recognize: POT -<ERROR>- nsres: id=0, op=1
3, ns=12533, ns2=12560; nt[0]=503, nt[1]=0, nt[2]=0
Call failed... Exiting NRICALL with following termination result -1 -<ERROR>- er
ror from nricall -<ERROR>- nr err code: 12206 -<ERROR>- ns main err code: 12533
-<ERROR>- ns (2) err code: 12560 -<ERROR>- nt main err code: 503 -<ERROR>- nt (2
) err code: 0 -<ERROR>- nt OS err code: 0 -<ERROR>- Couldn't connect, returning
12533 In the trace file, note that unexpected events are preceded with an -<ERRO
R>- stamp. These events may represent serious errors, minor errors, or merely un
expected results from an internal operation. More serious and probably fatal err
ors are stamped with the -<FATAL?>- prefix. In this example trace file, you can
see that the root problem, the misspelling of "PORT," is indicated by the trace
line: -<FATAL?>- failed to recognize: POT Most tracing is very similar to this.
If you have a basic understanding of the events the components will perform, you
can identify the probable cause of an error in the text of the trace. _________
_______________________________ 4. Log File Names ================= Log files pr
oduced by different components have unique names. The default file names are:

  SQLNET.LOG    Contains client and/or server information
  LISTENER.LOG  Contains listener information
  INTCHG.LOG    Contains Connection Manager and pump information
  NAVGATR.LOG   Contains Navigator information
  NAMES.LOG     Contains Names server information
You can control the name of the log file. For each component, any valid string c
an be used to create a log file name. The parameters are of the form: LOG_FILE_c
omponent = string For example: LOG_FILE_LISTENER = TEST Some platforms have rest
rictions on the properties of a file name. See your Oracle operating system spec
ific manuals for platform specific restrictions.
_____________________________________ 5. Using Log Files ================== Foll
ow these steps to track an error using a log file: 1. Browse the log file for th
e most recent error that matches the error number you have received from the app
lication. This is almost always the last entry in the log file. Notice that an e
ntry or error stack in the log file is usually many lines in length. In the exam
ple earlier in this chapter, the error number was 12207. 2. Starting at the bott
om, look up to the first non-zero entry in the error report. This is usually the
actual cause. In the example earlier in this chapter, the last non-zero entry i
s the "ns" error 12560. 3. Look up the first non-zero entry in later chapters of
this book for its recommended cause and action. (For example, you would find th
e "ns" error 12560 under ORA-12560.) To understand the notation used in the erro
r report, see the previous chapter, "Interpreting Error Messages." 4. If that er
ror does not provide the desired information, move up the error stack to the sec
ond to last error and so on. 5. If the cause of the error is still not clear, tu
rn on tracing and re-execute the statement that produced the error message. The
use of the trace facility is described in detail later in this chapter. Be sure
to turn tracing off after you have re-executed the command. ____________________
____________________ 6. Using the Trace Facility =========================== The
steps used to invoke tracing are outlined here. Each step is fully described in
subsequent sections.

1. Choose the component to be traced from the list:
   o Client
   o Server
   o Listener
   o Connection Manager and pump (cmanager)
   o Navigator (navigator)
   o Names server
   o Names Control Utility
2. Save the existing trace file if you need to retain its information. By default,
   most trace files overwrite an existing one. The TRACE_UNIQUE parameter needs to be
   included in the appropriate configuration file if unique trace files are required;
   this appends the process id to each file name. For example: for Names server
   tracing, NAMES.TRACE_UNIQUE=ON needs to be set in the NAMES.ORA file; for the
   Names Control Utility, NAMESCTL.TRACE_UNIQUE=TRUE needs to be in SQLNET.ORA;
   TRACE_UNIQUE_CLIENT=ON goes in SQLNET.ORA for client tracing.
3. For any component, you can invoke the trace facility by editing the component
configuration file that corresponds to the component traced. The component conf
ig. files are SQLNET.ORA, LISTENER.ORA, INTCHG.ORA, and NAMES.ORA. 4. Execute o
r start the component to be traced. If the trace component configuration files a
re modified while the component is running, the modified trace parameters will t
ake effect the next time the component is invoked or restarted. Specifically for
each component:

CLIENT:   Set the trace parameters in the client-side SQLNET.ORA and invoke a client
          application, such as SQL*Plus, a Pro*C application, or any application
          that uses the Oracle network products.

SERVER:   Set the trace parameters in the server-side SQLNET.ORA. The next process
          started by the listener will have tracing enabled. The trace parameters
          must be created or edited manually.

LISTENER: Set the trace parameters in the LISTENER.ORA.

CONNECTION MANAGER: Set t
he trace parameters in INTCHG.ORA and start the Connection Manager from the Inte
rchange Control Utility or command line. The pumps are started automatically wit
h the Connection Manager, and their trace files are controlled by the trace para
meters for the Connection Manager.

NAVIGATOR: Again, set the trace parameters in INTCHG.ORA and start the Navigator.

NAMES SERVER: Trace parameters need to be set in NAMES.ORA; then start the Names server.

NAMES CONTROL UTILITY: Set the trace parameters in SQLNET.ORA and start the Names
Control Utility.

5. Be sure to turn
tracing off when you do not need it for a specific diagnostic purpose. _________
_______________________________ 7. Setting Trace Parameters ====================
======= The trace parameters are defined in the same configuration files as the
log parameters. Table below shows the configuration files for different network
components and the default names of the trace files they generate.
 --------------------------------------------------------
| Trace Parameters  | Configuration   |                  |
| Corresponding to  | File            | Output Files     |
|-------------------|-----------------|------------------|
| Client            | SQLNET.ORA      | SQLNET.TRC       |
| Server            |                 | SQLNET.TRC       |
| TNSPING Utility   |                 | TNSPING.TRC      |
| Names Control     |                 |                  |
|   Utility         |                 | NAMESCTL.TRC     |
|-------------------|-----------------|------------------|
| Listener          | LISTENER.ORA    | LISTENER.TRC     |
|-------------------|-----------------|------------------|
| Interchange       | INTCHG.ORA      |                  |
|   Connection      |                 |                  |
|     Manager       |                 | CMG.TRC          |
|   Pumps           |                 | PMP.TRC          |
|   Navigator       |                 | NAV.TRC          |
|-------------------|-----------------|------------------|
| Names server      | NAMES.ORA       | NAMES.TRC        |
|___________________|_________________|__________________|

The
configuration files for each component are located on the computer running that
component. The trace characteristics for two or more components of an Interchang
e are controlled by different parameters in the same configuration file. For exa
mple, there are separate sets of parameters for the Connection Manager and the N
avigator that determine which components will be traced, and at what level. Simi
larly, if there are multiple listeners on a single computer, each listener is co
ntrolled by parameters that include the unique listener name in the LISTENER.ORA
file. For each component, the configuration files contain the following informa
tion: o o o A valid trace level to be used (Default is OFF) The trace file name
(optional) The trace file directory (optional)
________________________________________ 7a. Valid SQLNET.ORA Diagnostic Paramet
ers
==========================================
The SQLNET.ORA caters for:
  o Client Logging & Tracing
  o Server Logging & Tracing
  o TNSPING utility
  o NAMESCTL program

 -----------------------------------------------------------------------------------
| PARAMETERS               | VALUES          | Example (DOS client, UNIX server)      |
|--------------------------|-----------------|----------------------------------------|
| Parameters for Client:   |                 |                                        |
| TRACE_LEVEL_CLIENT       | OFF/USER/ADMIN  | TRACE_LEVEL_CLIENT=USER                |
| TRACE_FILE_CLIENT        | string          | TRACE_FILE_CLIENT=CLIENT               |
| TRACE_DIRECTORY_CLIENT   | valid directory | TRACE_DIRECTORY_CLIENT=c:\NET\ADMIN    |
| TRACE_UNIQUE_CLIENT      | OFF/ON          | TRACE_UNIQUE_CLIENT=ON                 |
| LOG_FILE_CLIENT          | string          | LOG_FILE_CLIENT=CLIENT                 |
| LOG_DIRECTORY_CLIENT     | valid directory | LOG_DIRECTORY_CLIENT=c:\NET\ADMIN      |
|--------------------------|-----------------|----------------------------------------|
| Parameters for Server:   |                 |                                        |
| TRACE_LEVEL_SERVER       | OFF/USER/ADMIN  | TRACE_LEVEL_SERVER=ADMIN               |
| TRACE_FILE_SERVER        | string          | TRACE_FILE_SERVER=unixsrv_2345.trc     |
| TRACE_DIRECTORY_SERVER   | valid directory | TRACE_DIRECTORY_SERVER=/tmp/trace      |
| LOG_FILE_SERVER          | string          | LOG_FILE_SERVER=unixsrv.log            |
| LOG_DIRECTORY_SERVER     | valid directory | LOG_DIRECTORY_SERVER=/tmp/trace        |
|--------------------------|-----------------|----------------------------------------|
| Parameters for TNSPING:  |                 |                                        |
| TNSPING.TRACE_LEVEL      | OFF/USER/ADMIN  | TNSPING.TRACE_LEVEL=user               |
| TNSPING.TRACE_DIRECTORY  | directory       | TNSPING.TRACE_DIRECTORY=               |
|                          |                 |   /oracle7/network/trace               |
|--------------------------|-----------------|----------------------------------------|
| Parameters for Names Control Utility:      |                                        |
| NAMESCTL.TRACE_LEVEL     | OFF/USER/ADMIN  | NAMESCTL.TRACE_LEVEL=user              |
| NAMESCTL.TRACE_FILE      | file            | NAMESCTL.TRACE_FILE=nc_south.trc       |
| NAMESCTL.TRACE_DIRECTORY | directory       | NAMESCTL.TRACE_DIRECTORY=/o7/net/trace |
| NAMESCTL.TRACE_UNIQUE    | TRUE/FALSE      | NAMESCTL.TRACE_UNIQUE=TRUE or ON/OFF   |
 -----------------------------------------------------------------------------------

Note: You control log and trace parameters for the client through Oracle Network Manager.
You control log and trace parameters for the server by manually adding the desired
parameters to the SQLNET.ORA file. Parameters for Names Control Utility & TNSPING
Utility need to be added manually to the SQLNET.ORA file. You cannot create them using
Oracle Network Manager.
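For a quick illustration, here is a minimal client-side SQLNET.ORA sketch (not taken
from the manual; the file name and trace directory are just example values) that
enables user-level tracing with unique trace files, using the parameters above:

  # sqlnet.ora on the client -- diagnostic settings only (sketch)
  TRACE_LEVEL_CLIENT     = USER
  TRACE_FILE_CLIENT      = client
  TRACE_DIRECTORY_CLIENT = /tmp/trace
  TRACE_UNIQUE_CLIENT    = ON
  LOG_FILE_CLIENT        = client
  LOG_DIRECTORY_CLIENT   = /tmp/trace

Set TRACE_LEVEL_CLIENT back to OFF as soon as the diagnostic session is finished.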
__________________ 7b. Valid LISTENER.ORA Diagnostic Parameters ================
============================ The following table shows the valid LISTENER.ORA pa
rameters used in logging and tracing of the listener.

 --------------------------------------------------------------------------------------
| PARAMETERS               | VALUES          | Example (DOS client, UNIX server)        |
|--------------------------|-----------------|------------------------------------------|
| TRACE_LEVEL_LISTENER     | USER            | TRACE_LEVEL_LISTENER=OFF                 |
| TRACE_FILE_LISTENER      | string          | TRACE_FILE_LISTENER=LISTENER             |
| TRACE_DIRECTORY_LISTENER | valid directory | TRACE_DIRECTORY_LISTENER=$ORA_SQLNETV2   |
| LOG_FILE_LISTENER        | string          | LOG_FILE_LISTENER=LISTENER               |
| LOG_DIRECTORY_LISTENER   | valid directory | LOG_DIRECTORY_LISTENER=$ORA_ERRORS       |
 --------------------------------------------------------------------------------------
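A corresponding LISTENER.ORA sketch (again only an example; the trace directory is an
assumption) for tracing the listener:

  # listener.ora -- diagnostic settings only (sketch)
  TRACE_LEVEL_LISTENER     = ADMIN
  TRACE_FILE_LISTENER      = listener
  TRACE_DIRECTORY_LISTENER = /tmp/trace
  LOG_FILE_LISTENER        = listener
  LOG_DIRECTORY_LISTENER   = /tmp/trace

Reload or restart the listener (lsnrctl reload, or lsnrctl stop and start) so the new
settings take effect, and set the trace level back to OFF afterwards.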
_____________________________________ 7c. Valid INTCHG.ORA Diagnostic Parameters
========================================== The following table shows the valid
INTCHG.ORA parameters used in logging and tracing of the Interchange.
 -------------------------------------------------------------------------------------------
| PARAMETERS                | VALUES (default)     | Example (DOS client, UNIX server)      |
|---------------------------|----------------------|----------------------------------------|
| TRACE_LEVEL_CMANAGER      | OFF/USER/ADMIN       | TRACE_LEVEL_CMANAGER=USER              |
| TRACE_FILE_CMANAGER       | string (CMG.TRC)     | TRACE_FILE_CMANAGER=CMANAGER           |
| TRACE_DIRECTORY_CMANAGER  | valid directory      | TRACE_DIRECTORY_CMANAGER=C:\ADMIN      |
| LOG_FILE_CMANAGER         | string (INTCHG.LOG)  | LOG_FILE_CMANAGER=CMANAGER             |
| LOG_DIRECTORY_CMANAGER    | valid directory      | LOG_DIRECTORY_CMANAGER=C:\ADMIN        |
| LOGGING_CMANAGER          | OFF/ON               | LOGGING_CMANAGER=ON                    |
| LOG_INTERVAL_CMANAGER     | any no. of minutes   | LOG_INTERVAL_CMANAGER=60               |
|                           |   (60 minutes)       |                                        |
| TRACE_LEVEL_NAVIGATOR     | OFF/USER/ADMIN       | TRACE_LEVEL_NAVIGATOR=ADMIN            |
| TRACE_FILE_NAVIGATOR      | string (NAV.TRC)     | TRACE_FILE_NAVIGATOR=NAVIGATOR         |
| TRACE_DIRECTORY_NAVIGATOR | valid directory      | TRACE_DIRECTORY_NAVIGATOR=C:\ADMIN     |
| LOG_FILE_NAVIGATOR        | string (NAVGATR.LOG) | LOG_FILE_NAVIGATOR=NAVIGATOR           |
| LOG_DIRECTORY_NAVIGATOR   | valid directory      | LOG_DIRECTORY_NAVIGATOR=C:\ADMIN       |
| LOGGING_NAVIGATOR         | OFF/ON               | LOGGING_NAVIGATOR=OFF                  |
| LOG_LEVEL_NAVIGATOR       | ERRORS/ALL (ERRORS)  | LOG_LEVEL_NAVIGATOR=ERRORS             |
 -------------------------------------------------------------------------------------------

Note: The pump component shares the trace parameters of the Connection Manager, but it
generates a separate trace file with the unchangeable default name PMPpid.TRC.
_____________________________________ 7d. Valid NAMES.ORA Diagnostic Parameters
========================================= The following table shows the valid NA
MES.ORA parameters used in logging and tracing of the Names server.

 --------------------------------------------------------------------------------
| PARAMETERS             | VALUES (default)  | Example (DOS client, UNIX server)  |
|------------------------|-------------------|------------------------------------|
| NAMES.TRACE_LEVEL      | OFF/USER/ADMIN    | NAMES.TRACE_LEVEL=ADMIN             |
| NAMES.TRACE_FILE       | file (names.trc)  | NAMES.TRACE_FILE=nsrv3.trc          |
| NAMES.TRACE_DIRECTORY  | directory         | NAMES.TRACE_DIRECTORY=/o7/net/trace |
| NAMES.TRACE_UNIQUE     | TRUE/FALSE        | NAMES.TRACE_UNIQUE=TRUE or ON/OFF   |
| NAMES.LOG_FILE         | file (names.log)  | NAMES.LOG_FILE=nsrv1.log            |
| NAMES.LOG_DIRECTORY    | directory         | NAMES.LOG_DIRECTORY=/o7/net/log     |
 --------------------------------------------------------------------------------
_________________ 8. Example of a Trace File =========================== In the
following example, the SQLNET.ORA file includes the following line: TRACE_LEVEL_
CLIENT = ADMIN The following trace file is the result of a connection attempt th
at failed because the hostname is invalid.
The trace output is a combination of debugging aids for Oracle specialists and E
nglish information for network administrators. Several key events can be seen by
analyzing this output from beginning to end: (A) (B) (C) (D) (E) The client des
cribes the outgoing data in the connect descriptor used to contact the server. A
n event is received (connection request). A connection is established over the a
vailable transport (in this case TCP/IP). The connection is refused by the appli
cation, which is the listener. The trace file shows the problem, as follows: -<F
ATAL?>- ***hostname lookup failure! *** (F) Error 12545 is reported back to the
client.
If you look up Error 12545 in Chapter 3 of this Manual, you will find the follow
ing description: ORA-12545 TNS:Name lookup failure Cause: A protocol specific AD
DRESS parameter cannot be resolved. Action: Ensure the ADDRESS parameters have b
een entered correctly; the most likely incorrect value is the node name. ++++++
NOTE: TRACE FILE EXTRACT +++++++

--- TRACE CONFIGURATION INFORMATION FOLLOWS ---
New trace stream is "/private1/oracle/trace_admin.trc"
New trace level is 6
--- TRACE CONFIGURATION INFORMATION ENDS ---
++++++ NOTE: Loading Parameter files now. +++++++ --- PARAMETER SOURCE INFORMATI
ON FOLLOWS --Attempted load of system pfile source /private1/oracle/network/admi
n/sqlnet.ora Parameter source was not loaded Error stack follows: NL-00405: cann
ot open parameter file Attempted load of local pfile source /home/ginger/.sqlnet
.ora Parameter source loaded successfully -> PARAMETER TABLE LOAD RESULTS FOLLOW
<Some parameters may not have been loaded See dump for parameters which loaded
OK -> PARAMETER TABLE HAS THE FOLLOWING CONTENTS <TRACE_DIRECTORY_CLIENT = /priv
ate1/oracle trace_level_client = ADMIN TRACE_FILE_CLIENT = trace_admin
--- PARAMETER SOURCE INFORMATION ENDS --++++++ NOTE: Reading Parameter files. ++
+++++ --- LOG CONFIGURATION INFORMATION FOLLOWS --Attempted open of log stream "
/private1/oracle/sqlnet.log" Successful stream open --- LOG CONFIGURATION INFORM
ATION ENDS --Unable to get data from navigation file tnsnav.ora local names file
is /home/ginger/.tnsnames.ora system names file is /etc/tnsnames.ora initial re
try timeout for all servers is 500 csecs max request retries per server is 2 def
ault zone is [root] Using nncin2a() to build connect descriptor for (possibly re
mote) database. initial load of /home/ginger/.tnsnames.ora -<ERROR>- failure, er
ror stack follows -<ERROR>- NL-00405: cannot open parameter file -<ERROR>- NOTE:
FILE CONTAINS ERRORS, SOME NAMES MAY BE MISSING initial load of /etc/tnsnames.o
ra -<ERROR>- failure, error stack follows -<ERROR>- NL-00427: bad list -<ERROR>-
NOTE: FILE CONTAINS ERRORS, SOME NAMES MAY BE MISSING Inserting IPC address int
o connect descriptor returned from nncin2a(). ++++++ NOTE: Looking for Routing I
nformation. +++++++ Calling address: (DESCRIPTION=(CONNECT_DATA=(SID=trace)(CID=
(PROGRAM=)(HOST=lala) (USER=ginger)))(ADDRESS_LIST=(ADDRESS=(PROTOCOL=ipc (KEY=b
ad_host))(ADDRESS=(PROTOCOL=tcp)(HOST=lavender) (PORT=1521)))) Getting local com
munity information Looking for local addresses setup by nrigla No addresses in t
he preferred address list TNSNAV.ORA is not present. No local communities entry.
Getting local address information Address list being processed... No community
information so all addresses are "local" Resolving address to use to call destin
ation or next hop Processing address list... No community entries so iterate ove
r address list This a local community access Got routable address information ++
++++ NOTE: Calling first address (IPC). +++++++ Making call with following addre
ss information: (DESCRIPTION=(EMPTY=0)(ADDRESS=(PROTOCOL=ipc)(KEY=bad_host))) Ca
lling with outgoing connect data (DESCRIPTION=(CONNECT_DATA=(SID=trace)(CID=(PRO
GRAM=)(HOST=lala)
(USER=ginger)))(ADDRESS_LIST=(ADDRESS=(PROTOCOL=tcp) (HOST=lavender)(PORT=1521))
)) (DESCRIPTION=(EMPTY=0)(ADDRESS=(PROTOCOL=ipc)(KEY=bad_host))) KEY = bad_host
connecting... opening transport... -<ERROR>- sd=8, op=1, resnt[0]=511, resnt[1]=
2, resnt[2]=0 -<ERROR>- unable to open transport -<ERROR>- nsres: id=0, op=1, ns
=12541, ns2=12560; nt[0]=511, nt[1]=2, nt[2]=0 connect attempt failed Call faile
d... Call made to destination Processing address list so continuing ++++++ NOTE:
Looking for Routing Information. +++++++ Getting local community information Lo
oking for local addresses setup by nrigla No addresses in the preferred address
list TNSNAV.ORA is not present. No local communities entry. Getting local addres
s information Address list being processed... No community information so all ad
dresses are "local" Resolving address to use to call destination or next hop Pro
cessing address list... No community entries so iterate over address list This a
local community access Got routable address information ++++++ NOTE: Calling se
cond address (TCP/IP). +++++++ Making call with following address information: (
DESCRIPTION=(EMPTY=0)(ADDRESS=(PROTOCOL=tcp) (HOST=lavender)(PORT=1521))) Callin
g with outgoing connect data (DESCRIPTION=(CONNECT_DATA=(SID=trace)(CID=(PROGRAM
=)(HOST=lala) (USER=ginger)))(ADDRESS_LIST=(ADDRESS=(PROTOCOL=tcp) (HOST=lavende
r) (PORT=1521)))) (DESCRIPTION=(EMPTY=0)(ADDRESS=(PROTOCOL=tcp) (HOST=lavender)(
PORT=1521))) port resolved to 1521 looking up IP addr for host: lavender -<FATAL
?>- *** hostname lookup failure! *** -<ERROR>- nsres: id=0, op=13, ns=12545, ns2
=12560; nt[0]=515, nt[1]=0, nt[2]=0 Call failed... Exiting NRICALL with followin
g termination result -1 -<ERROR>- error from nricall -<ERROR>- nr err code: 1220
6 -<ERROR>- ns main err code: 12545 -<ERROR>- ns (2) err code: 12560 -<ERROR>- n
t main err code: 515 -<ERROR>- nt (2) err code: 0
-<ERROR>- nt OS err code: 0 -<ERROR>- Couldn't connect, returning 12545 Most tra
cing is very similar to this. If you have a basic understanding of the events th
e components will perform, you can identify the probable cause of an error in th
e text of the trace. 19.64 ORA-01595: error freeing extent (2) of rollback segme
nt (9)): =================================================================== Not
e 1: ORA-01595, 00000, "error freeing extent (%s) of rollback segment (%s))" Cau
se: Some error occurred while freeing inactive rollback segment extents. Action:
Investigate the accompanying error. Note 2: Two factors are necessary for this
to happen. A rollback segment has extended beyond OPTIMAL. There are two or more
transactions sharing the rollback segment at the time of the shrink. What happe
ns is that the first process gets to the end of an extent, notices the need to s
hrink and begins the recursive transaction to do so. But the next transaction bl
unders past the end of that extent before the recursive transaction has been com
mitted. The preferred solution is to have sufficient rollback segments to elimin
ate the sharing of rollback segments between processes. Look in V$RESOURCE_LIMIT
for the high-watermark of transactions. That is the number of rollback segments
you need. The alternative solution is to raise OPTIMAL to reduce the risk of th
e error.

Note 3: This error is harmless. You can try (and probably should) setting optimal
to null and maxextents to unlimited, which might minimize the frequency of these
errors. These errors sometimes happen when Oracle is shrinking the rollback segments
back to the optimal size. The undo data for the shrink is also kept in the rollback
segments, so when Oracle attempts to shrink the same rollback segment where it is
trying to write that undo, it throws this warning. It is not a failure per se,
since Oracle will retry and succeed.

19.65: OUI-10022: oraInventory cann
ot be used because it is in an invalid state ===================================
============================================ Note 1:
-------
If there are other products installed through the OUI, create a copy of the
oraInst.loc file (depending on the UNIX system, possibly in /etc or /var/opt/oracle).
Modify the inventory_loc parameter to point to a different location for the OUI to
create the oraInventory directory. Run the installer using the -invPtrLoc parameter
(eg: runInstaller -invPtrLoc /PATH/oraInst.loc). This will retain the existing
oraInventory directory and create a new one for use by the new product.

19.66: Failure to extend rollback segment because of 30036 condition
==================================================================== Not a seri
ous problem. Do some undo tuning (ORA-30036 means a transaction could not get space
in the undo tablespace; check the undo tablespace size and UNDO_RETENTION).
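As a starting point for that tuning, a sketch (assuming 10g automatic undo management;
the datafile name and the sizes are made up for the example):

SELECT begin_time, end_time, undoblks, maxquerylen, nospaceerrcnt   -- undo usage history
FROM   v$undostat
ORDER  BY begin_time;

-- possible follow-up actions, values are examples only:
ALTER DATABASE DATAFILE '/u01/oradata/DB/undotbs01.dbf' RESIZE 2G;  -- hypothetical file
ALTER SYSTEM SET undo_retention=10800 SCOPE=BOTH;                   -- 3 hours; needs an spfile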
19.67: ORA-06502: PL/SQL: numeric or value error: character string buffer too sm
all ============================================================================
====== = Note 1: Hi, I am having a strange problem with an ORA-06502 error I am
getting and don't understand why. I would expect this error to be quite easy to
fix, it would suggest that a variable is not large enough to cope with a value b
eing assigned to it. But I'm fairly sure that isn't the problem. Anyway I have a
stored procedure similar to the following: PROCEDURE myproc(a_user IN VARCHAR2,
p_1 OUT <my_table>.<my_first_column>%TYPE, p_2 OUT <my_table>.<my_second_column
>%TYPE) IS BEGIN SELECT my_first_column, my_second_column INTO p_1, p_2 FROM my_
table WHERE user_id = a_user; END; /
The procedure is larger than this, but using error_position variables I have tra
cked it down to one SQL statement. But I don't understand why I'm getting the OR
A-06502, because the variables I am selecting into are defined as the same types
as the columns I'm selecting. The variable I am selecting into is in fact a VAR
CHAR2(4), but if I replace the sql statement with p_1 := 'AB'; it still fails. I
t succeeds if I do p_1 := 'A'; Has anyone seen this before or anything similar t
hat they might be able to help me with please? Thanks, mtae. -- Answer 1: It is
the code from which you are calling it that has the problem, e.g. DECLARE v1 var
char2(1); v2 varchar2(1); BEGIN my_proc ('USER',v1,v2); END; / -- Answer 2 try t
his: PROCEDURE myproc(a_user IN VARCHAR2, p_1 OUT varchar2, p_2 OUT varchar2) IS
v_1 <my_table>.<my_first_column>%TYPE; v_2 <my_table>.<my_second_column>%TYPE;
BEGIN SELECT my_first_column, my_second_column INTO v_1, v_2 FROM my_table WHERE
user_id = a_user; p_1 := v_1; p_2 := v_2; END; /

-- Follow-up from the original poster (mtae, 07/28/2004):
It was the size of the variable that was being used as the actual parameter bein
g passed in. Feeling very silly, but thanks, sometimes you can look at a problem
too long.
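In other words, the fix is on the calling side: size the actual parameters to match
the columns. A minimal sketch, reusing the hypothetical my_table/myproc from the
question above:

DECLARE
  v1  my_table.my_first_column%TYPE;    -- anchored to the column, so always large enough
  v2  my_table.my_second_column%TYPE;
BEGIN
  myproc('USER', v1, v2);               -- no ORA-06502, the OUT values fit
END;
/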
19.68 ORA-00600: internal error code, arguments: [LibraryCacheNotEmptyOnClose],
[], [], [], [], [], [], [] =====================================================
============================= ======================== thread: see this error ev
ery time I shutdown a 10gR3 grid control database on 10.2.0.3 RDBMS, even though
all opmn and OMS processes are down. So far, I have not seen any problems, apar
t from the annoying shutdown warning. Note 365103.1 seems to indicate it can be
ignored: Cause This is due to unpublished Bug 4483084 'ORA-600 [LIBRARYCACHENOTE
MPTYONCLOSE]' This is a bug in that an ORA-600 error is reported when it is foun
d that something is still going on during shutdown. It does not indicate any dam
age or a problem in the system.
Solution At the time of writing, it is likely that the fix will be to report a m
ore meaningful external error, although this has not been finalised. The error i
s harmless so it is unlikely that this will be backported to 10.2. The error can
be safely ignored as it does not indicate a problem with the database. thread:
ORA-00600: internal error code, arguments: [LibraryCacheNotEmptyOnClose], [],[],
[], [], [], [], [] 14-DEC-06 05:15:35 GMT Hi, There is no patch available for t
he bug 4483084. You need to Ignore this error, as there is absolutely no impact
to the database due to this error. Thanks, Ram
thread: 19.69: ORA-12518 Tns: Listener could not hand off: ---------------------
---------------------------->>>> thread 1: Q: ORA-12518 Tns: Listener could not
hand off client connection

Posted: May 31, 2007
Q: Dear experts, please tell me how I can resolve ORA-12518 TNS: Listener could not
hand off client connection.
ORA-12518: TNS:listener could not hand off client connection

A: Your ser
ver is probably running out of memory and need to swap memory to disk. One cause
can be an Oracle process consuming too much memory. A possible workaround is to
set following parameter in the listener.ora and restart the listener: DIRECT_HA
NDOFF_TTC_LISTENER=OFF You might need to increase the value of large_pool_size.
Regards. >>>> thread 2: Q: Hi All, I'm using oracle 10g in window XP system. Jav
a programmers will be accessing the database. Frequently they will get "ORA-1251
8: TNS:listener could not hand off" error and through sqlplus also i'll get this
error. But, after sometime it works fine. I checked tnsnames.ora and listner.or
a files entry. they seems to be ok. i have used system name itself for HOST flag
instead of IP address. But still i'm getting this error. Can anybody tell me wh
at might be the problem? Thanks, A: From Oracle's error messages docco, we see -
-------
TNS-12518 TNS:listener could not hand off client connection Cause: The process o
f handing off a client connection to another process failed. Action: Turn on lis
tener tracing and re-execute the operation. Verify that the listener and databas
e instance are properly configured for direct handoff. If the problem persists,
contact Oracle Support Services. -------So what does the listener trace indicate
? A: Did you by any chance upgrade with SP2? If so, you could be running into fi
rewall problems - 1521 is open, the initial contact made, but the handoff to a r
andom (blocked!) port fails... -Regards, Frank van Bortel >>>> thread 3: Q: I in
stall Oracle9i and Oracle8i on Win2000 Server. I used Listener of 9i. My databas
e based on Oracle8i. I found this error: ORA-12518: TNS:listener could not hand
off client connection when I logged on to the database. If I restarted the database
and listener it ran, but after a few minutes it failed again. Can u help me???

A: Are you using MTS? First start the listener and then the database (both the
databases). Now check
the status of listener. if nothing works, try DIRECT_HANDOFF_TTC_<listener name>
= OFF in listener.ora. >>>> thread 4 Q: This weekend I installed Oracle Enterpr
ise 10g release 2 on Windows 2003 server. The server is a Xeon dual processor 2.
5MHz each with 3GB RAM and 300GB harddisk on RAID 1. The installation was fine,
I then installed our application on it, that went smoothly as well. I had 3 user
s logged in to test the installation and everything was ok. Today morning we had
100 users trying to login and some got access, but majority got the ORA error a
bove
and have no access. I checked the tnsnames.ora file and sqlnet.ora file, service
on the database all looks ok. I also restarted the listener service on the serv
er, but I still get this error message. I've also increased no of sessions to 10
00. Has anyone ever come across a issue like this in Oracle 10g. Regards A: I th
ink I've resolved the problem, majority of my users are away on easter so when t
hey return I will know whether this tweak has paid off or not.
Basically my SGA settings were quite high, so 60% of RAM was being used by SGA a
nd 40% by Windows. I basically reduced the total SGA to 800 MB and i've had no c
onnection problems, ever since. >>>> thread 5 ORA-12518: TNS:listener could not
hand off client connection Your server is probably running out of memory and nee
d to swap memory to disk. One cause can be an Oracle process consuming too much
memory. A possible workaround is to set following parameter in the listener.ora
and restart the listener: DIRECT_HANDOFF_TTC_LISTENER=OFF Should you be working
with Multi threaded server connections, you might need to increase the value of
large_pool_size. 19.70: Private strand flush not complete: ---------------------
--------------------- thread: Q: I just upgraded to Oracle 10g release 2 and I k
eep getting this error in my alert log Thread 1 cannot allocate new log, sequenc
e 509 Private strand flush not complete Current log# 2 seq# 508 mem# 0: /usr/loc
al/o1_mf_2_2cx5wnw5_.log Current log# 2 seq# 508 mem# 1: /usr/local/o1_mf_2_2cx5
wrjk_.log What causes the "private strand flush not complete" message? A: This i
s not a bug, it's the expected behavior in 10gr2. The "private strand flush
not complete" is a "noise" error,
and can be disregarded because it relates to internal cache redo file management
. Oracle Metalink note 372557.1 says that a "strand" is a new 10gr2 term for red
o latches. It notes that a strand is a new mechanism to assign redo latches to m
ultiple processes, and it's related to the log_parallelism parameter. The note s
ays that the number of strands depends on the cpu_count. When you switch redo lo
gs you will see this alert log message since all private strands have to be flus
hed to the current redo log.
-- thread: Q: HI, I'm using the Oracle 10g R2 in a server with Red Hat ES 4.0, a
nd i received the following message in alert log "Private strand flush not compl
ete", somebody knows this error? The part of log, where I found this error is: F
ri Feb 10 10:30:52 2006 Thread 1 advanced to log sequence 5415 Current log# 8 se
q# 5415 mem# 0: /db/oradata/bioprd/redo081.log Current log# 8 seq# 5415 mem# 1:
/u02/oradata/bioprd/redo082.log Fri Feb 10 10:31:21 2006 Thread 1 cannot allocat
e new log, sequence 5416 Private strand flush not complete Current log# 8 seq# 5
415 mem# 0: /db/oradata/bioprd/redo081.log Current log# 8 seq# 5415 mem# 1: /u02
/oradata/bioprd/redo082.log Thread 1 advanced to log sequence 5416 Current log#
13 seq# 5416 mem# 0: /db/oradata/bioprd/redo131.log Current log# 13 seq# 5416 me
m# 1: /u02/oradata/bioprd/redo132.log Thanks, A: Hi, Note:372557.1 has brief exp
lanation of this message. Best Regards, -- thread: Q: Hi, I`m having such info i
n alert_logfile... maybee some ideas or info... Private strand flush not complet
e What could this posible mean ??
Thu Feb 9 22:03:44 2006
Thread 1 cannot allocate new log, sequence 387
Private strand flush not complete
  Current log# 2 seq# 386 mem# 0: /path/redo02.log
Thread 1 advanced to log sequence 387
  Current log# 3 seq# 387 mem# 0: /path/redo03.log

Thanks

A:
see http://downloaduk.oracle.com/docs/cd/B19306_01/server.102/b14237/waitevents0
03.htm#sthref4478 regards log file switch (private strand flush incomplete) User
sessions trying to generate redo, wait on this event when LGWR waits for DBWR t
o complete flushing redo from IMU buffers into the log buffer; when DBWR is comp
lete LGWR can then finish writing the current log, and then switch log files. Wa
it Time: 1 second Parameters: None
Error message :Thread 1 cannot allocate new log --------------------------------
--------------
Note 1:
-------
Q: Hi, I am getting the error message "Thread 1 cannot allocate new log", sequence
40994. Can anyone help me out with how to overcome this problem? Give me a solution.
regards

A: Perhaps this will provide some guidance. Ri
ck Sometimes, you can see in your alert.log file, the following corresponding me
ssages: Thread 1 advanced to log sequence 248 Current log# 2 seq# 248 mem# 0: /p
rod1/oradata/logs/redologs02.log
Thread 1 cannot allocate new log, sequence 249 Checkpoint not complete
This message indicates that Oracle wants to reuse a redo log file, but the corre
sponding checkpoint associated is not terminated. In this case, Oracle must wait
until the checkpoint is completely realized. This situation may be encountered
particularly when the transactional activity is important. This situation may al
so be checked by tracing two statistics in the BSTAT/ESTAT report.txt file. The
two statistics are: - Background checkpoint started. - Background checkpoint com
pleted. These two statistics must not differ by more than one. If this is
not true, your database hangs on checkpoints. LGWR is unable to continue writing
the next transactions until the checkpoints complete. Three reasons may explain
this difference: - A frequency of checkpoints which is too high. - A checkpoint
s are starting but not completing - A DBWR which writes too slowly. The number o
f checkpoints completed and started as indicated by these statistics should be w
eighed against the duration of the bstat/estat report. Keep in mind the goal of
only one log switch per hour, which ideally should equate to one checkpoint per
hour as well. The way to resolve incomplete checkpoints is through tuning checkp
oints and logs: 1) Give the checkpoint process more time to cycle through the lo
gs - add more redo log groups - increase the size of the redo logs 2) Reduce the
frequency of checkpoints - increase LOG_CHECKPOINT_INTERVAL - increase size of
online redo logs 3) Improve the efficiency of checkpoints enabling the CKPT proc
ess
with CHECKPOINT_PROCESS=TRUE 4) Set LOG_CHECKPOINT_TIMEOUT = 0. This disables th
e checkpointing based on time interval. 5) Another means of solving this error i
s for DBWR to quickly write the dirty buffers on disk. The parameter linked to t
his task is: DB_BLOCK_CHECKPOINT_BATCH.
DB_BLOCK_CHECKPOINT_BATCH specifies the number of blocks which are dedicated ins
ide the batch size for writing checkpoints. When you want to accelerate the chec
kpoints, it is necessary to increase this value. Note 2: ------Q: Hi All, Lets g
enerate a good discussion thread for this database performance issue. Sometimes
this message is found in the alert log generated. Thread 1 advanced to log seque
nce xxx Current log# 2 seq# 248 mem# 0: /df/sdfds Thread 1 cannot allocate new l
og, sequence xxx Checkpoint not complete I would appreciate a discussion on the
following 1. What are the basic reasons for this warning 2. What is the preventi
ve measure to be taken / Methods to detect its occurance 3. What are the post oc
curance measures/solutions for this. Regards A: Increase size of your redo logs.
A: Amongst other reasons, this happens when redo logs are not sized properly. A
checkpoint could not be completed because a new log is trying to be allocated w
hile it is still in use (or hasn't been archived yet). This can happen if you ar
e running very long transactions that are producing large amounts of redo (which
you did not anticipate) and the redo logs are too small to handle it. If you ar
e not archiving, increasing the size of your logfiles should help (each log grou
p should have at least 2 members on separate disks). Also, be aware of what type
of hardware you are using. Typically, raid-5 is slower for writes than raid-1.
If you are archiving and have increased the size of the redo logs, also try addi
ng an additional arch process. I have read plenty of conflicting documentation o
n how to resolve this problem. One of the "solutions" is to increase the size of
your logbuffer. I have not found this to be helpful (for my particular database
s). In the future, make sure to monitor the ratio of redo log entries to request
s (it should be around 5000 to 1). If it slips below this ratio, you may want to
consider adding additional members to your log groups and increasing their size.

A: Configuring redo logs is an art and you may never achieve 100% of the time
that there is no waiting for available log files. But in my opinion, the best be
t for your situation is to add one (or more) redo log instead of increase the si
ze of the redo logs. Because even if your redo logs are huge, but if your disk c
ontroller is slow, a large transaction (for example, data loading) may use up al
l three redo logs before the first redo log completes the archive and becomes av
ailable, thus Oracle will halt until the archive is completed.
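To act on the advice above, a sketch (the group number, file names and the 512M size
are assumptions for illustration) to check the current redo configuration, add a
larger group and drop a small one once it is INACTIVE:

SELECT group#, thread#, bytes/1024/1024 AS mb, members, status FROM v$log;
SELECT group#, member FROM v$logfile ORDER BY group#;

ALTER DATABASE ADD LOGFILE GROUP 5
  ('/db/oradata/bioprd/redo051.log','/u02/oradata/bioprd/redo052.log') SIZE 512M;

ALTER SYSTEM SWITCH LOGFILE;           -- repeat until the old group is no longer CURRENT
ALTER DATABASE DROP LOGFILE GROUP 2;   -- only when v$log shows it INACTIVE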
19.71: tkcrrsarc: (WARN) Failed to find ARCH for message (message:0x10): -------
------------------------------------------------------------------ thread 1: Q:
tkcrrsarc: (WARN) Failed to find ARCH for message (message:0x1) tkcrrpa: (WARN)
Failed initial attempt to send ARCH message (message:0x1) Most reports speak of
a harmless message. Some reports refer to a bug affecting Oracle versions up to
10.2.0.2

19.72 ORA-600 12333
-------------------
thread 1:
ORA-600[12333] is reported with three additional numeric values when a request is
being received from a network packet and the request code in the packet is not
recognized. The three additional values report the invalid request values received.
The error may have a number of different root causes. For example, a network error
may have caused bad data to be received, or the client application may have sent
wrong data, or the data in the network buffer may have been overwritten. Since there
are many potential causes of this error, it is essential to have a reproducible
testcase to correctly diagnose the underlying cause. If operating system network
logs are available, it is advisable to check them for evidence of network failures
which may indicate network transmission problems.

thread 2: We just found ou
t that it was related to the Block option DML RETURNING VALUE in Forms 4.5. We set it t
o NO, and the problem was solved Thanks anyway thread 3: From: Oracle, Kalpana M
alligere 05-Oct-99 22:09 Subject: Re : ORA-00600: internal error code, arguments
: [12333], [0], [3], [81], [], [], [] Hello, An ORA-600 12333 occurs because the
re has been a client/server protocol violation. There can be many reasons for th
is: Network errors, network hardware problems, etc. Where do you see or when do
you get this error? Do you have any idea what was going on at the time of this e
rror? Which process received it, i.e., was it a background or user process? Were
you running sql*loader? Does this error have any adverse impact on the applicat
ion or database? We cannot generally progress unless there is reproducible test
case or reproducible environment. There are many bugs logged for this error whic
h are closed as 'could not reproduce'. In one such bug, the developer indicated
that "The problem does not normally have any bad side effects." So suggest you t
ry to isolate what is causing it as much as possible. The error can be due to un
derlying network problems as well. It is not indicative of a problem with the da
tabase itself.
19.73: SMON: Parallel transaction recovery tried: ------------------------------
------------------Note 1: ------Q: I was inserting 2.000.000 records in a table
and the connection has been killed. in my alert file I found the following messa
ge : "SMON: Parallel transaction recovery tried" here the content of the smon lo
g file:
Redo thread mounted by this instance: 1 Oracle process number: 6 Windows thread
id: 2816, image: ORACLE.EXE *** 2006-06-29 21:33:05.484 *** SESSION ID:(5.1) 200
6-06-29 21:33:05.453 *** 2006-06-29 21:33:05.484 SMON: Restarting fast_start par
allel rollback *** 2006-06-30 02:50:54.695 SMON: Parallel transaction recovery t
ried A: Hi, This is an expected message when cleanup is occurring and you have fa
st_start_parallel_rollback set to cleanup rollback segments after a failed trans
action Note 2: ------You get this message if SMON failed to generate the slave s
ervers necessary to perform a parallel rollback of a transaction. Check the valu
e for the parameter, FAST_START_PARALLEL_ROLLBACK (default is LOW). LOW limits t
he number of rollback processes to 2 * CPU_COUNT. HIGH limits the number of roll
back processes to 4 * CPU_COUNT. You may want to set the value of this parameter
to FALSE.

Note 3:
-------
Q: SMON: Par
allel transaction recovery tried We found above message in alert_sid.log file. A
: No need to worry about it. It is an informational message ... SMON started recovery
in parallel but it failed and was done in serial mode.

Note 4:
-------
The system monitor
process (SMON) performs recovery, if necessary, at instance startup. SMON is al
so responsible for cleaning up temporary segments that are no longer in use and
for coalescing contiguous free extents within dictionary managed tablespaces. If
any terminated transactions were skipped during instance recovery because of fi
le-read or offline errors, SMON recovers them when the tablespace or
file is brought back online. SMON checks regularly to see whether it is needed.
Other processes can call SMON if they detect a need for it. With Real Applicatio
n Clusters, the SMON process of one instance can perform instance recovery for a
failed CPU or instance.
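A short sketch for checking and, if desired, changing this behaviour (the view and
parameter are standard in 9i/10g; SCOPE=BOTH assumes an spfile is in use):

SHOW PARAMETER fast_start_parallel_rollback

SELECT usn, state, undoblockstotal, undoblocksdone    -- progress of the rollback SMON is doing
FROM   v$fast_start_transactions;

ALTER SYSTEM SET fast_start_parallel_rollback=FALSE SCOPE=BOTH;   -- force serial recovery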
19.74: KGX Atomic Operation: ============================
Oracle Database 10g Enterprise Edition Release 10.2.0.3.0 - 64bit Production Wit
h the Partitioning, OLAP and Data Mining options ORACLE_HOME = /dbms/tdbaaccp/or
a10g/home System name: AIX Node name: pl003 Release: 3 Version: 5 Machine: 00CB5
60D4C00 Instance name: accptrid Redo thread mounted by this instance: 1 Oracle p
rocess number: 16 Unix process pid: 2547914, image: oracle@pl003 (TNS V1-V3) ***
2008-03-20 07:22:28.571 *** SERVICE NAME:(SYS$USERS) 2008-03-20 07:22:28.570 **
* SESSION ID:(161.698) 2008-03-20 07:22:28.570 KGX cleanup... KGX Atomic Operati
on Log 700000036eb4350 Mutex 70000003f9adcf8(161, 0) idn 0 oper EXAM Cursor Pare
nt uid 161 efd 5 whr 26 slp 0 oper=DEFAULT pt1=700000039ce1c30 pt2=700000039ce1e
18 pt3=700000039ce2338 pt4=0 u41=0 stt=0 Note 1: ------Q: Hi there, Oracle has s
tarted using mutexes and it is said that they are more efficient as compared to
latches. Questions 1)What is mutex?I know mutex are mutual exclusions and they a
re the concept of multiple threads.What I want to know that how this concept is
implemented in Oracledatabase? 2) How they are better than latches?both are used
for low level locking so how one is better than the other? Any input is welcome
. Thanks and regards Aman....
A: 1) Simply put mutexes are memory structures. They are used to serialize the a
ccess to shared structures. IMHO their most important characteristics are two. F
irst, they can be taken in shared or exclusive mode. Second, getting a mutex can
be done in wait or no-wait mode. 2) The main advantages over latches are that m
utexes requires less memory and are faster to get and release. A: In Oracle, lat
ches and mutexes are different things and managed using different modules. KSL*
modules for latches and KGX* for mutexes. As Chris said, general mutex operatins
require less CPU instructions than latch operations (as they aren't as sophisti
cated as latches and don't maintain get/miss counts as latches do). But the main
scalability benefit comes from that there's a mutex structure in each child cur
sor handle and the mutex itself acts as cursor pin structure. So if you have a c
ursor open (or cached in session cursor cache) you don't need to get the library
cache latch (which was previously needed for changing cursor pin status), but y
ou can modify the cursor's mutex refcount directly (with help of pointers in ope
n cursor state area in sessions UGA). Therefore you have much higher scalability
when pinning/unpinning cursors (no library cache latching needed, virtually no
false contention) and no separate pin structures need to be allocated/maintained
. Few notes: 1) library cache latching is still needed for parsing etc, the mute
xes address only the pinning issue in library cache 2) mutexes are currently use
d for library cache cursors (not other objects like PL/SQL stored procs, table d
efs etc) 3) As mutexes are a generic mechanism (not library cache specific) they
're used in V$SQLSTATS underlying structures too 4) When mutexes are enabled, yo
u won't see cursor pins from X$KGLPN anymore (as X$KGLPN is a fixed table based
on the KGL pin array - which wouldn't be used for cursors anymore)
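To see mutex contention in practice on 10.2 you can query the mutex sleep statistics
(a sketch; V$MUTEX_SLEEP exists from 10.2 onwards):

SELECT mutex_type, location, sleeps, wait_time    -- which mutexes had to sleep
FROM   v$mutex_sleep
ORDER  BY sleeps DESC;

SELECT event, total_waits, time_waited            -- related cursor wait events
FROM   v$system_event
WHERE  event LIKE 'cursor:%';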
19.75: ktsmgtur(): TUR was not tuned for 361 secs: =============================
===================== [pl101][tdbaprod][/dbms/tdbaprod/prodrman/admin/dump/bdump
] cat prodrman_mmnl_1011950.trc /dbms/tdbaprod/prodrman/admin/dump/bdump/prodrma
n_mmnl_1011950.trc Oracle Database 10g Enterprise Edition Release 10.2.0.3.0 - 6
4bit Production With the Partitioning, OLAP and Data Mining options
ORACLE_HOME = /dbms/tdbaprod/ora10g/home System name: AIX Node name: pl101 Relea
se: 3 Version: 5 Machine: 00CB85FF4C00 Instance name: prodrman Redo thread mount
ed by this instance: 1 Oracle process number: 12 Unix process pid: 1011950, imag
e: oracle@pl101 (MMNL) *** 2008-03-25 06:58:08.841 *** SERVICE NAME:(SYS$BACKGRO
UND) 2008-03-25 06:58:08.811 *** SESSION ID:(105.1) 2008-03-25 06:58:08.811 ktsm
gtur(): TUR was not tuned for 361 secs What does this mean? Note 1: ------Tur is
a pathchecker, and if a SAN connection is lost, TUR will complain.
19.76: tkcrrpa: (WARN) Failed initial attempt to send ARCH message: ============
======================================================= > *** SERVICE NAME:() 20
08-03-22 14:56:43.590 > *** SESSION ID:(221.1) 2008-03-22 14:56:43.590 > Maximum
redo generation record size = 132096 bytes > Maximum redo generation change vec
tor size = 98708 bytes > tkcrrsarc: (WARN) Failed to find ARCH for message (mess
age:0x10) > tkcrrpa: (WARN) Failed initial attempt to send ARCH message (message
:0x10) No good answer yet.
19.77: Weird errors 1: ====================== In a trace file of an Oracle 10.2.
0.3 db on AIX 5.3 we can find: >>>> DATABASE CALLED PRODTRID: > OS pid = 3907726
> loadavg : 1.12 1.09 1.13 > swap info: free_mem = 49.16M rsv = 24.00M > alloc
= 2078.75M avail = 6144.00M swap_free = 4065.25M > F S UID PID PPID C PRI NI ADD
R SZ WCHAN TTY TIME CMD > 240001 A tdbaprod 3907726 1 0 60 20 1cfff7400 90692 06
:00:39 - 0:00 ora_m000_prodtrid > open: Permission denied
STIME
> 3907726: ora_m000_prodtrid > 0x00000001000f81e0 sskgpwwait(??, ??, ??, ??, ??)
+ ?? > 0x00000001000f5c54 skgpwwait(??, ??, ??, ??, ??) + 0x94 > 0x000000010010
ba00 ksliwat(??, ??, ??, ??, ??, ??, ??, ??) + 0x640 > 0x0000000100116744 kslwai
tns_timed(??, ??, ??, ??, ??, ??, ??, ??) + 0x24 > 0x0000000100170374 kskthbwt(0
x0, 0x7000000, 0x0, 0x0, 0x15ab3c, 0x28284288, 0xfffffff, 0x7000000) + 0x214 > 0
x0000000100116884 kslwait(??, ??, ??, ??, ??, ??) + 0x84 > 0x00000001002c8fb0 ks
vrdp() + 0x550 > 0x00000001041c8c34 opirip(??, ??, ??) + 0x554 > 0x0000000102ab4
ba8 opidrv(??, ??, ??) + 0x448 > 0x000000010409df30 sou2o(??, ??, ??, ??) + 0x90
> 0x0000000100000870 opimai_real(??, ??) + 0x150 > 0x00000001000006d8 main(??,
??) + 0x98 > 0x0000000100000360 __start() + 0x90 > *** 2008-04-01 06:01:43.294 A
t other instances we find: >>>> DATABASE CALLED PRODRMAN 06:01:41 - Check for ch
anges since lastscan in file: /dbms/tdbaprod/prodrman/admin/dump/bdump/prodrman_
cjq0_1003754.trc Warning: Errors detected in file /dbms/tdbaprod/prodrman/admin/
dump/bdump/prodrman_cjq0_1003754.trc > OS pid = 3997922 > loadavg : 1.00 1.09 1.
17 > swap info: free_mem = 62.76M rsv = 24.00M > alloc = 2087.91M avail = 6144.0
0M swap_free = 4056.09M > F S UID PID PPID C PRI NI ADDR SZ WCHAN STIME TTY TIME
CMD > 240001 A tdbaprod 3997922 1 4 62 20 1322c8400 91516 05:43:28 - 0:00 ora_j
000_prodrman > open: Permission denied > 3997922: ora_j000_prodrman > 0x00000001
000f81e0 sskgpwwait(??, ??, ??, ??, ??) + ?? > 0x00000001000f5c54 skgpwwait(??,
??, ??, ??, ??) + 0x94 > 0x000000010010ba00 ksliwat(??, ??, ??, ??, ??, ??, ??,
??) + 0x640 > 0x0000000100116744 kslwaitns_timed(??, ??, ??, ??, ??, ??, ??, ??)
+ 0x24 > 0x0000000100170374 kskthbwt(0x0, 0x0, 0x7000000, 0x7000000, 0x15ab10,
0x1, 0xfffffff, 0x7000000) + 0x214 > 0x0000000100116884 kslwait(??, ??, ??, ??,
??, ??) + 0x84 > 0x00000001021d4fcc kkjsexe() + 0x32c > 0x00000001021d5d58 kkjrd
p() + 0x478 > 0x00000001041c8bd0 opirip(??, ??, ??) + 0x4f0 > 0x0000000102ab4ba8
opidrv(??, ??, ??) + 0x448 > 0x000000010409df30 sou2o(??, ??, ??, ??) + 0x90 >
0x0000000100000870 opimai_real(??, ??) + 0x150 > 0x00000001000006d8 main(??, ??)
+ 0x98 > 0x0000000100000360 __start() + 0x90 > *** 2008-04-01 05:46:23.170
05:46:20 - Check for changes since lastscan in file: /dbms/tdbaprod/prodrman/adm
in/dump/bdump/prodrman_cjq0_1003754.trc Warning: Errors detected in file /dbms/t
dbaprod/prodrman/admin/dump/bdump/prodrman_cjq0_1003754.trc > > > > > > > > > >
> > > > > > > > > > /dbms/tdbaprod/prodrman/admin/dump/bdump/prodrman_cjq0_10037
54.trc Oracle Database 10g Enterprise Edition Release 10.2.0.3.0 - 64bit Product
ion With the Partitioning, OLAP and Data Mining options ORACLE_HOME = /dbms/tdba
prod/ora10g/home System name: AIX Node name: pl101 Release: 3 Version: 5 Machine
: 00CB85FF4C00 Instance name: prodrman Redo thread mounted by this instance: 1 O
racle process number: 10 Unix process pid: 1003754, image: oracle@pl101 (CJQ0) *
** 2008-04-01 05:46:17.709 *** SERVICE NAME:(SYS$BACKGROUND) 2008-04-01 05:44:28
.394 *** SESSION ID:(107.1) 2008-04-01 05:44:28.394 Waited for process J000 to i
nitialize for 60 seconds *** 2008-04-01 05:46:17.709 Dumping diagnostic informat
ion for J000:
>>>> DATABASE CALLED ACCPROSS 06:01:26 - Check for changes since lastscan in fil
e: /dbms/tdbaaccp/accpross/admin/dump/bdump/accpross_cjq0_1970272.trc Warning: E
rrors detected in file /dbms/tdbaaccp/accpross/admin/dump/bdump/accpross_cjq0_19
70272.trc > > > > > > > > > > > > > > > > > > > > /dbms/tdbaaccp/accpross/admin/
dump/bdump/accpross_cjq0_1970272.trc Oracle Database 10g Enterprise Edition Rele
ase 10.2.0.3.0 - 64bit Production With the Partitioning, OLAP and Data Mining op
tions ORACLE_HOME = /dbms/tdbaaccp/ora10g/home System name: AIX Node name: pl003
Release: 3 Version: 5 Machine: 00CB560D4C00 Instance name: accpross Redo thread
mounted by this instance: 1 Oracle process number: 10 Unix process pid: 1970272
, image: oracle@pl003 (CJQ0) *** 2008-04-01 06:01:21.210 *** SERVICE NAME:(SYS$B
ACKGROUND) 2008-04-01 06:00:48.099 *** SESSION ID:(217.1) 2008-04-01 06:00:48.09
9 Waited for process J001 to initialize for 60 seconds *** 2008-04-01 06:01:21.2
10 Dumping diagnostic information for J001:
> OS pid = 3645448 > loadavg : 1.28 1.18 1.16 > swap info: free_mem = 107.12M rs
v = 24.00M > alloc = 3749.61M avail = 6144.00M swap_free = 2394.39M > F S UID PI
D PPID C PRI NI ADDR SZ WCHAN STIME TTY TIME CMD > 240001 A tdbaaccp 3645448 1 8
64 20 7566c510 91844 05:59:48 - 0:00 ora_j001_accpross > open: Permission denie
d > 3645448: ora_j001_accpross > 0x00000001000f81e0 sskgpwwait(??, ??, ??, ??, ?
?) + ?? > 0x00000001000f5c54 skgpwwait(??, ??, ??, ??, ??) + 0x94 > 0x0000000100
10ba00 ksliwat(??, ??, ??, ??, ??, ??, ??, ??) + 0x640 > 0x0000000100116744 kslw
aitns_timed(??, ??, ??, ??, ??, ??, ??, ??) + 0x24 > 0x0000000100170374 kskthbwt
(0x0, 0x0, 0x7000000, 0x7000000, 0x16656c, 0x1, 0xfffffff, 0x7000000) + 0x214 >
0x0000000100116884 kslwait(??, ??, ??, ??, ??, ??) + 0x84 > 0x00000001021d4fcc k
kjsexe() + 0x32c > 0x00000001021d5d58 kkjrdp() + 0x478 > 0x00000001041c8bd0 opir
ip(??, ??, ??) + 0x4f0 > 0x0000000102ab4ba8 opidrv(??, ??, ??) + 0x448 > 0x00000
0010409df30 sou2o(??, ??, ??, ??) + 0x90 > 0x0000000100000870 opimai_real(??, ??
) + 0x150 > 0x00000001000006d8 main(??, ??) + 0x98 > 0x0000000100000360 __start(
) + 0x90 > *** 2008-04-01 06:01:26.792 >>>> DATABASE CALLED PRODROSS 05:15:00 -
Check for changes since lastscan in file: /dbms/tdbaprod/prodross/admin/dump/bdu
mp/prodross_cjq0_2068516.trc Warning: Errors detected in file /dbms/tdbaprod/pro
dross/admin/dump/bdump/prodross_cjq0_2068516.trc > > > > > > > > > > > > > > > >
> > > > > /dbms/tdbaprod/prodross/admin/dump/bdump/prodross_cjq0_2068516.trc Or
acle Database 10g Enterprise Edition Release 10.2.0.3.0 - 64bit Production With
the Partitioning, OLAP and Data Mining options ORACLE_HOME = /dbms/tdbaprod/ora1
0g/home System name: AIX Node name: pl101 Release: 3 Version: 5 Machine: 00CB85F
F4C00 Instance name: prodross Redo thread mounted by this instance: 1 Oracle pro
cess number: 10 Unix process pid: 2068516, image: oracle@pl101 (CJQ0) *** 2008-0
4-01 05:13:52.362 *** SERVICE NAME:(SYS$BACKGROUND) 2008-04-01 05:11:46.862 ***
SESSION ID:(217.1) 2008-04-01 05:11:46.861 Waited for process J000 to initialize
for 60 seconds *** 2008-04-01 05:13:52.362 Dumping diagnostic information for J
000: OS pid = 1855710
> loadavg : 1.08 1.15 1.20 > swap info: free_mem = 63.91M rsv = 24.00M > alloc =
2110.61M avail = 6144.00M swap_free = 4033.39M > F S UID PID PPID C PRI NI ADDR
SZ WCHAN STIME TTY TIME CMD > 240001 A tdbaprod 1855710 1 4 66 22 1cb2f5400 926
72 05:10:46 - 0:00 ora_j000_prodross > open: Permission denied > 1855710: ora_j0
00_prodross > 0x00000001000f81e0 sskgpwwait(??, ??, ??, ??, ??) + ?? > 0x0000000
1000f5c54 skgpwwait(??, ??, ??, ??, ??) + 0x94 > 0x000000010010ba00 ksliwat(??,
??, ??, ??, ??, ??, ??, ??) + 0x640 > 0x0000000100116744 kslwaitns_timed(??, ??,
??, ??, ??, ??, ??, ??) + 0x24 > 0x0000000100170374 kskthbwt(0x0, 0x0, 0x700000
0, 0x7000000, 0x15aab2, 0x1, 0xfffffff, 0x7000000) + 0x214 > 0x0000000100116884
kslwait(??, ??, ??, ??, ??, ??) + 0x84 > 0x00000001021d4fcc kkjsexe() + 0x32c >
0x00000001021d5d58 kkjrdp() + 0x478 > 0x00000001041c8bd0 opirip(??, ??, ??) + 0x
4f0 > 0x0000000102ab4ba8 opidrv(??, ??, ??) + 0x448 > 0x000000010409df30 sou2o(?
?, ??, ??, ??) + 0x90 > 0x0000000100000870 opimai_real(??, ??) + 0x150 > 0x00000
001000006d8 main(??, ??) + 0x98 > 0x0000000100000360 __start() + 0x90 > *** 2008
-04-01 05:13:59.017
06:01:42 - Check for changes since lastscan in file: /dbms/tdbaprod/prodroca/adm
in/dump/bdump/prodroca_cjq0_757946.trc Warning: Errors detected in file /dbms/td
baprod/prodroca/admin/dump/bdump/prodroca_cjq0_757946.trc > OS pid = 1867996 > l
oadavg : 1.00 1.09 1.17 > swap info: free_mem = 66.71M rsv = 24.00M > alloc = 20
87.91M avail = 6144.00M swap_free = 4056.09M > F S UID PID PPID C PRI NI ADDR SZ
WCHAN STIME TTY TIME CMD > 240001 A tdbaprod 1867996 1 3 65 22 1078c5400 92656
05:44:06 - 0:00 ora_j000_prodroca > open: Permission denied > 1867996: ora_j000_
prodroca > 0x00000001000f81e0 sskgpwwait(??, ??, ??, ??, ??) + ?? > 0x0000000100
0f5c54 skgpwwait(??, ??, ??, ??, ??) + 0x94 > 0x000000010010ba00 ksliwat(??, ??,
??, ??, ??, ??, ??, ??) + 0x640 > 0x0000000100116744 kslwaitns_timed(??, ??, ??
, ??, ??, ??, ??, ??) + 0x24 > 0x0000000100170374 kskthbwt(0x0, 0x0, 0x7000000,
0x7000000, 0x15ab10, 0x1, 0xfffffff, 0x7000000) + 0x214 > 0x0000000100116884 ksl
wait(??, ??, ??, ??, ??, ??) + 0x84 > 0x00000001021d4fcc kkjsexe() + 0x32c > 0x0
0000001021d5d58 kkjrdp() + 0x478 > 0x00000001041c8bd0 opirip(??, ??, ??) + 0x4f0
> 0x0000000102ab4ba8 opidrv(??, ??, ??) + 0x448 > 0x000000010409df30 sou2o(??,
??, ??, ??) + 0x90
> > > >
0x0000000100000870 opimai_real(??, ??) + 0x150 0x00000001000006d8 main(??, ??) +
0x98 0x0000000100000360 __start() + 0x90 *** 2008-04-01 05:46:23.398
06:01:42 - Check for changes since lastscan in file: /dbms/tdbaprod/prodtrid/adm
in/dump/bdump/prodtrid_mmon_921794.trc Warning: Errors detected in file /dbms/td
baprod/prodtrid/admin/dump/bdump/prodtrid_mmon_921794.trc > > > > > > > > > > >
> > > > > > > > > /dbms/tdbaprod/prodtrid/admin/dump/bdump/prodtrid_mmon_921794.
trc Oracle Database 10g Enterprise Edition Release 10.2.0.3.0 - 64bit Production
With the Partitioning, OLAP and Data Mining options ORACLE_HOME = /dbms/tdbapro
d/ora10g/home System name: AIX Node name: pl101 Release: 3 Version: 5 Machine: 0
0CB85FF4C00 Instance name: prodtrid Redo thread mounted by this instance: 1 Orac
le process number: 11 Unix process pid: 921794, image: oracle@pl101 (MMON) *** 2
008-04-01 06:01:39.797 *** SERVICE NAME:(SYS$BACKGROUND) 2008-04-01 06:01:39.385
*** SESSION ID:(106.1) 2008-04-01 06:01:39.385 Waited for process m000 to initi
alize for 60 seconds *** 2008-04-01 06:01:39.797 Dumping diagnostic information
for m000:
06:01:42 - Check for changes since lastscan in file: /dbms/tdbaprod/prodrman/adm
in/dump/bdump/alert_prodrman.log 06:01:42 - Check for changes since lastscan in
file: /dbms/tdbaprod/prodrman/admin/dump/udump/sbtio.log 06:01:42 - Check for ch
anges since lastscan in file: /dbms/tdbaprod/prodroca/admin/dump/bdump/alert_pro
droca.log 06:01:42 - Check for changes since lastscan in file: /dbms/tdbaprod/pr
odroca/admin/dump/udump/sbtio.log 06:01:42 - Check for changes since lastscan in
file: /dbms/tdbaprod/prodross/admin/dump/bdump/alert_prodross.log 06:01:42 - Ch
eck for changes since lastscan in file: /dbms/tdbaprod/prodross/admin/dump/udump
/sbtio.log 06:01:42 - Check for changes since lastscan in file: /dbms/tdbaprod/p
rodslot/admin/dump/bdump/alert_prodslot.log 06:01:42 - Check for changes since l
astscan in file: /dbms/tdbaprod/prodslot/admin/dump/udump/sbtio.log 06:01:42 - C
heck for changes since lastscan in file: /dbms/tdbaprod/prodtrid/admin/dump/bdum
p/alert_prodtrid.log 06:01:42 - Check for changes since lastscan in file: /dbms/
tdbaprod/prodtrid/admin/dump/udump/sbtio.log 06:01:42 - Check for changes since
lastscan in file: /dbms/tdbaprod/ora10g/home/network/log/listener.log
File /dbms/tdbaprod/ora10g/home/network/log/listener.log is changed, but no erro
rs detected
Note 1: ------Q: Hi, we're running oracle 10 on AIX 5.3 TL04. We're experiencing
some troubles with paging space. We've got 7 GB real mem and 10 GB paging space
, and sometimes the paging space occupation increases and it "freezes" the serve
r (no telnet nor console connection). We've seen oracle has shown this error:
CODE *** 2007-06-18 11:16:49.696 Dump diagnostics for process q002 pid 786600 wh
ich did not start after 120 seconds: (spawn_time:x10BF1F175 now:x10BF3CB36 diff:
x1D9C1) *** 2007-06-18 11:16:54.668 Dumping diagnostic information for q002: OS
pid = 786600 loadavg : 0.07 0.27 0.28 swap info: free_mem = 9.56M rsv = 40.00M a
lloc = 4397.23M avail = 10240.00M swap_free = 5842.77M skgpgpstack: fgets() time
d out after 60 seconds skgpgpstack: pclose() timed out after 60 seconds ERROR: p
rocess 786600 is not alive *** 2007-06-18 11:19:41.152 *** 2007-06-18 11:27:36.4
03 Process startup failed, error stack: ORA-27300: OS system dependent operation
:fork failed with status: 12 ORA-27301: OS failure message: Not enough space ORA
-27302: failure occurred at: skgpspawn3 So we think it's oracle's fault, but we'
re not sure. We're AIX guys, not oracle, so we're not sure about this. Can anyon
e confirm if this is caused by oracle? A: Looks like a bug. We are running on a
Windows 2003 Server Standard edition. I had the same problem. Server was not res
ponding anymore after the following errors: ORA-27300: OS system dependent opera
tion:spcdr:9261:4200 failed with status: 997 ORA-27301: OS failure message: Over
lapped I/O operation is in progress. ORA-27302: failure occurred at: skgpspawn A
nd later:
O/S-Error: (OS 1450) Insufficient system resources exist to complete the request
ed service. We are running the latest patchset 10.2.0.2 because of a big problem
in 10.2.0.1 (wrong parsing causes client memory problems: Pro*COBOL, PL/SQL Developer etc. crash because
Oracle made mistakes by skipping the parse process, going directly to execute and returning corrupted data
to the client). Tomorrow I will raise a level 1 TAR indicating we had a crash. The server is now running normally.
A: Oracle finally admitted there was a bug: BUG 5607984 ORACLE DOES NOT CLOSE TCP CONNECTIONS, REMAINS IN
CLOSE_WAIT STATE [On Windows 32-bit]. Patch 10 (patch number 5639232) is supposed to solve the problem for
10.2.0.2.0. We applied it Monday morning and everything is fine up to now. This bug is also supposed to be
solved in the 10.2.0.3.0 patchset that is available on the Metalink site.

Note 2:
-------
Q: question:
----------------------------------------------------------
My bdump re
ceived two error message traces this morning. One of the traces displays a lot of
detail, mainly as: *** SESSION ID:(822.1) 2007-02-11 00:35:06.147 Waited for pr
ocess J000 to initialize for 60 seconds *** 2007-02-11 00:35:20.276 Dumping diag
nostic information for J000: OS pid = 811172 loadavg : 0.55 0.42 0.44 swap info:
free_mem = 3.77M rsv = 24.50M alloc = 2418.36M avail = 6272.00M swap_free = 385
3.64M F S UID PID PPID C PRI NI ADDR SZ WCHAN STIME TTY TIME CMD 240001 A oracle
811172 1 0 60 20 5bf12400 86396 00:34:32 - 0:00 ora_j000_BAAN open: The file ac
cess permissions do not allow the specified action. Then whole bunch of the poin
ters and something like this "0x0000000100055800 kghbshrt(??, ??, ??, ??, ??, ??
) + 0x80" how do I find out what really went wrong? This error occured after I d
id an export pump of the DB, about 10 minutes later. This is first time I sae su
ch and the export pump has been for a year. My system is Oracle 10g R2 on AIX 5.
3L
Note 3:
-------
At least here you have an explanation about the Oracle processes:

pmon - The process monitor performs process recovery when a user process fails. PMON is responsible
       for cleaning up the cache and freeing resources that the process was using. PMON also checks on
       the dispatcher processes (described later in this table) and server processes and restarts them
       if they have failed.
mman - Used for internal database tasks.
dbw0 - The database writer writes modified blocks from the database buffer cache to the datafiles.
       Oracle Database allows a maximum of 20 database writer processes (DBW0-DBW9 and DBWa-DBWj).
       The initialization parameter DB_WRITER_PROCESSES specifies the number of DBWn processes.
       The database selects an appropriate default setting for this initialization parameter (or might
       adjust a user specified setting) based upon the number of CPUs and the number of processor groups.
lgwr - The log writer process writes redo log entries to disk. Redo log entries are generated in the
       redo log buffer of the system global area (SGA), and LGWR writes the redo log entries sequentially
       into a redo log file. If the database has a multiplexed redo log, LGWR writes the redo log entries
       to a group of redo log files.
ckpt - At specific times, all modified database buffers in the system global area are written to the
       datafiles by DBWn. This event is called a checkpoint. The checkpoint process is responsible for
       signalling DBWn at checkpoints and updating all the datafiles and control files of the database
       to indicate the most recent checkpoint.
smon - The system monitor performs recovery when a failed instance starts up again. In a Real Application
       Clusters database, the SMON process of one instance can perform instance recovery for other
       instances that have failed. SMON also cleans up temporary segments that are no longer in use and
       recovers dead transactions skipped during system failure and instance recovery because of
       file-read or offline errors. These transactions are eventually recovered by SMON when the
       tablespace or file is brought back online.
reco - The recoverer process is used to resolve distributed transactions that are pending due to a
       network or system failure in a distributed database. At timed intervals, the local RECO attempts
       to connect to remote databases and automatically complete the commit or rollback of the local
       portion of any pending distributed transactions.
cjq0 - Job Queue Coordinator (CJQ0). Job queue processes are used for batch processing. The CJQ0 process
       dynamically spawns job queue slave processes (J000...J999) to run the jobs.
d000 - Dispatchers are optional background processes, present only when the shared server configuration
       is used.
s000 - A shared server process, present only when the shared server configuration is used (see d000).
qmnc - Queue monitor background process. A queue monitor process which monitors the message queues.
       Used by Oracle Streams Advanced Queuing.
mmon - Performs various manageability-related background tasks.
mmnl - Performs frequent and light-weight manageability-related tasks, such as session history capture
       and metrics computation.
j000 - A job queue slave. (See cjq0)

Addition:
---------
Sep 13, 2006: Oracle Background Processes, incl. 10gR2
------------------------------------------------------

New in 10gR2
------------
PSP0 (new in 10
gR2) - Process SPawner - to create and manage other Oracle processes. NOTE: Ther
e is no documentation currently in the Oracle Documentation set on this process.
LNS1(new in 10gR2) - a network server process used in a Data Guard (primary) da
tabase. Further explanation, from "What's New in Oracle Data Guard?" in the Orac
le Data Guard Concepts and Administration 10g Release 2 (10.2) "During asynchronous
redo transmission, the network server (LNSn) process transmits redo data out of
the online redo log files on the primary database and no longer interacts direc
tly with the log writer process. This change in behavior allows
the log writer (LGWR) process to write redo data to the current online redo log
file and continue processing the next request without waiting for inter-process
communication or network I/O to complete."

New in 10gR1
------------
MMAN - Memory MANager - it serves as SGA Memory Broker and coordin
ates the sizing of the memory components, which keeps track of the sizes of the
components and pending resize operations. Used by Automatic Shared Memory Manage
ment feature. RVWR -Recovery Writer - which is responsible for writing flashback
logs which stores pre-image(s) of data blocks. It is used by Flashback database
feature in 10g, which provides a way to quickly revert an entire Oracle databas
e to the state it was in at a past point in time. - This is different from tradi
tional point in time recovery. - One can use Flashback Database to back out chan
ges that: - Have resulted in logical data corruptions. - Are a result of user er
ror. - This feature is not applicable for recovering the database in case of med
ia failure. - The time required for flashbacking a database to a specific time i
n past is DIRECTLY PROPORTIONAL to the number of changes made and not on the siz
e of the database. Jnnn - Job queue processes which are spawned as needed by CJQ
0 to complete scheduled jobs. This is not a new process. CTWR - Change Tracking
Writer (CTWR) which works with the new block changed tracking features in 10g fo
r fast RMAN incremental backups. MMNL - Memory Monitor Light process - which wor
ks with the Automatic Workload Repository new features (AWR) to write out full s
tatistics buffers to disk as needed. MMON - Memory MONitor (MMON) process - is a
ssociated with the Automatic Workload Repository new features used for automatic
problem detection and self-tuning. MMON writes out the required statistics for
AWR on a scheduled basis. M000 - MMON background slave (m000) processes. CJQn -
Job Queue monitoring process - which is initiated with the job_queue_processes p
arameter. This is not new. RBAL - It is the ASM related process that performs re
balancing of disk resources controlled by ASM. ARBx - These processes are manage
d by the RBAL process and are used to do the actual rebalancing of ASM controlle
d disk resources.
The number of ARBx processes invoked is directly influenced by the asm_power_lim
it parameter. ASMB - is used to provide information to and from the Cluster Sync
hronization Services used by ASM to manage the disk resources. It is also used t
o update statistics and provide a heartbeat mechanism. Changes about Queue Monit
or Processes The QMON processes are optional background processes for Oracle Str
eams Advanced Queueing (AQ) which monitor and maintain all the system and user o
wned AQ objects. These optional processes, like the job_queue processes, does no
t cause the instance to fail on process failure. They provide the mechanism for
message expiration, retry, and delay, maintain queue statistics, remove processe
d messages from the queue table and maintain the dequeue IOT. QMNx - Pre-10g QMO
N Architecture The number of queue monitor processes is controlled via the dynam
ic initialisation parameter AQ_TM_PROCESSES. If this parameter is set to a non-z
ero value X, Oracle creates that number of QMNX processes starting from ora_qmn0_<SID> (where <SID> is the
identifier of the database) up to ora_qmnX_<SID>; if the parameter is not specified or is set to 0, then
QMON processes are not created. There can b
e a maximum of 10 QMON processes running on a single instance. For example the p
arameter can be set in the init.ora as follows aq_tm_processes=1 or set dynamica
lly via alter system set aq_tm_processes=1; QMNC & Qnnn - 10g QMON Architecture
Beginning with release 10.1, the architecture of the QMON processes has been cha
nged to an automatically controlled coordinator slave architecture. The Queue Mo
nitor Coordinator, ora_qmnc_, dynamically spawns slaves named, ora_qXXX_, depend
ing on the system load up to a maximum of 10 in total. For version 10.01.XX.XX o
nwards it is no longer necessary to set AQ_TM_PROCESSES when Oracle Streams AQ o
r Streams is used. However, if you do specify a value, then that value is taken
into account. However, the number of qXXX processes can be different from what w
as specified by AQ_TM_PROCESSES. If AQ_TM_PROCESSES is not specified in versions
10.1 and above, QMNC only runs when you have AQ objects in your database.
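A quick way to see which of these background processes are actually running on your own instance is to query
V$BGPROCESS; a minimal sketch (the PADDR filter keeps only started processes, output differs per instance and version):

SELECT name, description
FROM   v$bgprocess
WHERE  paddr <> '00'        -- only background processes that are really started
ORDER  BY name;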
19.78: ORA-00600: internal error code, arguments: [13080], [], [], [], [], [], [
], []: =========================================================================
=========
When running the statement ALTER TABLE ... ENABLE CONSTRAINT, this ORA-00600 error appears.
19.79: WARNING: inbound connection timed out (ORA-3136): =======================
================================= Note 1: Q: WARNING: inbound connection timed o
ut (ORA-3136) this error appearing in Alert log . Please explain following:-----
---------1.How to overcome this error? 2.Is there any adverse effect in long run
? 3.Is it require to SHUTDOWN the DATABASE to solve it. A: A good dicussion at f
reelist.ora http://www.freelists.org/archives/oracle-l/08-2005/msg01627.html In
10gR2, the SQLNET.INBOUND_CONNECT_TIMEOUT parameter was given a default of 60 (seconds). Set the parameters
SQLNET.INBOUND_CONNECT_TIMEOUT and INBOUND_C
ONNECT_TIMEOUT_listenername to 0 (indefinite). A: What the error is telling you
is that a connection attempt was made, but the session authentication was not pr
ovided before SQLNET.INBOUND_CONNECT_TIMEOUT seconds. As far as adverse effects
in the long run, you have a user or process that is unable to connect to the dat
abase. So someone is unhappy about the database/application. Before setting SQLN
ET.INBOUND_CONNECT_TIMEOUT, verify that there is not a firewall or Network Addre
ss Translation (NAT) between the client and server. Those are common cause for O
RA-3136. Q: Subject: WARNING: inbound connection timed out (ORA-3136) I have bee
n getting like 50 of these error message a day in my alert_log the past couple o
f days. Anybody know what they mean? WARNING: inbound connection timed out (ORA-
3136) A: Yep this is annoying, especially if you have alert log monitors :(. I h
ad these when I first went to 10G... make these changes to get rid of them:
Listener.ora:  INBOUND_CONNECT_TIMEOUT_<LISTENER_NAME>=0   (for every listener)
Sqlnet.ora:    SQLNET.INBOUND_CONNECT_TIMEOUT=0
Then the errors stop...
Note 2: SQLNET.INBOUND_CONNECT_TIMEOUT Purpose Use the SQLNET.INBOUND_CONNECT_TI
MEOUT parameter to specify the time, in seconds, for a client to connect with th
e database server and provide the necessary authentication information. If the c
lient fails to establish a connection and complete authentication in the time sp
ecified, then the database server terminates the connection. In addition, the da
tabase server logs the IP address of the client and an ORA-12170: TNS:Connect ti
meout occurred error message to the sqlnet.log file. The client receives either
an ORA-12547: TNS:lost contact or an ORA-12637: Packet receive failed error mess
age. Without this parameter, a client connection to the database server can st
ay open indefinitely without authentication. Connections without authentication
can introduce possible denial-of-service attacks, whereby malicious clients atte
mpt to flood database servers with connect requests that consume resources. To p
rotect both the database server and the listener, Oracle Corporation recommends
setting this parameter in combination with the INBOUND_CONNECT_TIMEOUT_listener_
name parameter in the listener.ora file. When specifying values for these parame
ters, consider the following recommendations: Set both parameters to an initial
low value. Set the value of the INBOUND_CONNECT_TIMEOUT_listener_name parameter
to a lower value than the SQLNET.INBOUND_CONNECT_TIMEOUT parameter. For example,
you can set INBOUND_CONNECT_TIMEOUT_listener_name to 2 seconds and INBOUND_CONN
ECT_TIMEOUT parameter to 3 seconds. If clients are unable to complete connection
s within the specified time due to system or network delays that are normal for
the particular environment, then increment the time as needed.
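A minimal sketch of that recommended combination, assuming a listener simply named LISTENER (names and values
are only an illustration; tune them to what is normal for your network):

# sqlnet.ora on the database server:
SQLNET.INBOUND_CONNECT_TIMEOUT = 120

# listener.ora (per listener; keep it lower than the sqlnet.ora value):
INBOUND_CONNECT_TIMEOUT_LISTENER = 110

Reload or restart the listener (lsnrctl reload, or lsnrctl stop/start) after changing listener.ora.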
19.80 How to insert special symbols: ==================================== Note 1
: -------
Q: Hi, is there anyone who knows how to insert a value containing "&" into a table? Something like this:
insert into test_tab (test_field) values ('&test');
I tried ''&test' and many more, but none of them works. As far as I know Oracle tries to bind a value when it
encounters '&sth'... thanks in advance.
A: Try: set define off
Then execute your insert; a minimal sketch follows below.
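A minimal sketch in SQL*Plus (test_tab/test_field are the table and column from the question; the CHR(38)
variant is an alternative that works regardless of the DEFINE setting):

SQL> set define off
SQL> insert into test_tab (test_field) values ('&test');
SQL> set define on

-- alternative: build the ampersand with CHR(38) instead of typing it literally
SQL> insert into test_tab (test_field) values (chr(38) || 'test');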

19.81: SGA POLICY: Cache below reserve getting from component1:
===============================================================
19.82: AUTO SGA: Not free: ========================== Q: Hi, We have 10gr2 on wi
ndows server 2003 standard edition. The below errors are generated in mman trace
files every now and then.

AUTO SGA: Not free 0x2DFE78A8, 4, 1, 0
AUTO SGA: Not free 0x2DFE78A8, 4, 1, 0
AUTO SGA: Not free 0x2DFE795C, 4, 1, 0
AUTO SGA: Not free 0x2DFE7A10, 4, 1, 0
AUTO SGA: Not free 0x2DFE7AC4, 4, 1, 0
AUTO SGA: Not free 0x2DFE7B78, 4, 1, 0
AUTO SGA: Not free 0x2DFE7C2C, 4, 1, 0
AUTO SGA: Not free 0x2DFE7CE0, 4, 1, 0
AUTO SGA: Not free 0x2DFE7D94, 4, 1, 0
AUTO SGA: Not free 0x2DFF2708, 4, 1, 0
Metalink doesn't give much info either (BUG 5201883 for your reference). Did anybody happen to have come across
this issue and possibly resolve it? Any comments are appreciated.
A: This can be safely ignored. Since ASMM (Automatic Shared
Memory Management) is enabled at instance level, you might be hitting this bug.
Check Metalink note: 394026.1 Adding the Metalink note. A: As stated in the bug
description, either 1) ignore the messages and delete generated trace files peri
odically and/or 2) wait for patchset 10.2.0.4
=====================
20. DATABASE TRACING:
=====================

20.2 Oracle 10g:
================

20.2.1 Tracing a session in 10g:
--------------------------------
The current state of database and instance trace is reported in the data dic
tionary view DBA_ENABLED_TRACES.

SQL> desc DBA_ENABLED_TRACES
 Name                 Null?    Type
 -------------------- -------- ----------------
 TRACE_TYPE                    VARCHAR2(21)
 PRIMARY_ID                    VARCHAR2(64)
 QUALIFIER_ID1                 VARCHAR2(48)
 QUALIFIER_ID2                 VARCHAR2(32)
 WAITS                         VARCHAR2(5)
 BINDS                         VARCHAR2(5)
 INSTANCE_NAME                 VARCHAR2(16)
Note 1: 10g tracing quick start:
--------------------------------
Oracle released a few new facilities to help with tracing in 10g; here's a real quick wrap-up of the most
significant:

>>>> Using the new client identifier:

You can tag database sessions with a session identifier that can later be used to identify sessions to trace.
You can set the identifier like this:

begin
  dbms_session.set_identifier('GUY1');
end;
/

You can set this from a login trigger if you don't have access to the source code. To set trace on for a
matching client id, you use DBMS_MONITOR.CLIENT_ID_TRACE_ENABLE:

BEGIN
  DBMS_MONITOR.client_id_trace_enable (client_id => 'GUY1',
                                       waits     => TRUE,
                                       binds     => FALSE);
END;
/
You can add waits and/or bind variables to the trace file using the flags shown.
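A small follow-up sketch (using the same 'GUY1' identifier as above): verify the trace request in
DBA_ENABLED_TRACES, and switch it off again when you are done:

SELECT trace_type, primary_id, waits, binds
FROM   dba_enabled_traces;

BEGIN
  DBMS_MONITOR.client_id_trace_disable (client_id => 'GUY1');
END;
/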
>>>> Tracing by Module and/or action: Many Oracle-aware applications set Module
and action properties and you can use these to enable tracing as well. The serv
_mod_act_trace_enable method allows you to set the tracing on for sessions match
ing particular service, module, actions and (for clusters) instance identifiers.
You can see current values for these using the following query: SELECT DISTINCT
instance_name, service_name, module, action FROM gv$session JOIN gv$instance USI
NG (inst_id); INSTANCE_NAME ---------------ghrac11 ghrac11 ghrac11 ghrac13 ghrac
13 ghrac12 ghrac12 SERVICE_NA ---------SYS$USERS ghrac1 ghrac1 SYS$USERS ghrac1
ghrac1 SYS$USERS MODULE ACTION ------------------------------ -----------SQLNav5
.exe Spotlight On Oracle, classic 4.0 racgimon@mel601416.melquest.de v.mel.au.qs
ft (TNS Spotlight On Oracle, classic 4.0 SQL*Plus racgimon@mel601416.melquest.de
v.mel.au.qsft (TNS
So to generate traces for all SQL*plus sessions that connect to the cluster from
any instance, I could issue the following command:

BEGIN
  DBMS_MONITOR.serv_mod_act_trace_enable (service_name  => 'ghrac1',
                                          module_name   => 'SQL*Plus',
                                          action_name   => DBMS_MONITOR.all_actions,
                                          waits         => TRUE,
                                          binds         => FALSE,
                                          instance_name => NULL);
END;
/

>>>> Tracing using sid and serial

DBMS_MONITOR can enable traces for
specific sid and serial as you would expect:

SELECT instance_name, SID, serial#, module, action
  FROM gv$session JOIN gv$instance USING (inst_id)
 WHERE username = 'SYSTEM';

INSTANCE_NAME    SID        SERIAL#    MODULE       ACTION
---------------- ---------- ---------- ------------ ------------
ghrac11          184        13179      SQL*Plus
ghrac11          181        3353       SQLNav5.exe
ghrac13          181        27184      SQL*Plus
ghrac13          180        492        SQL*Plus
ghrac12          184        18601      SQL*Plus

BEGIN
  dbms_monitor.session_trace_enable (session_id => 180,
                                     serial_num => 492,
                                     waits      => TRUE,
                                     binds      => TRUE);
END;
/
The sid and serial need to be current now; unlike the other methods, this does not
setup a permanent trace request (simply because the sid and serial# will never b
e repeated). Also, you need to issue this from the same instance if you are in a
RAC cluster. Providing NULLs for sid and serial# traces the current session. >>
>> Finding and analyzing the trace: This hasn't changed much in 10g; the traces are
in the USER_DUMP_DEST directory, and you can analyze them using tkprof. The trcs
ess utility is a new addition that allows you to generate a trace based on mul
tiple input files and several other conditions. trcsess [output=<output file nam
e >] [session=<session ID>] [clientid=<clientid>] [service=<service name>] [acti
on=<action name>] [module=<module name>] <trace file names>
output=<output file name> To generate a single trace file combining all the entr
ies from the SQL*Plus sessions I traced earlier, then to feed them into tkprof f
or analysis, I would issue the following commands: [oracle@mel601416 udump]$ trc
sess module='SQL*Plus' *.trc output=sqlplus.trc [oracle@mel601416 udump]$ tkprof
sqlplus.trc sqlplus.prf TKPROF: Release 10.2.0.1.0 - Production on Wed Sep 27 1
4:47:51 2006
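To locate the trace file of one particular (dedicated server) session, a sketch like the one below can help;
the SID 184 is just one of the example sessions listed above, and the file name convention
<instance>_ora_<spid>.trc is the usual 10g default on Unix:

SELECT s.sid, s.serial#, p.spid
FROM   v$session s, v$process p
WHERE  s.paddr = p.addr
AND    s.sid   = 184;

SHOW PARAMETER user_dump_dest
-- the trace file is then <user_dump_dest>/<instance>_ora_<spid>.trc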
Note 2: ------Setting Up Tracing with DBMS_MONITOR The DBMS_MONITOR package has
routines for enabling and disabling statistics aggregation as well as for tracin
g by session ID, or tracing based upon a combination of service name, module nam
e, and action name. (These three are associated hierarchically: you can't specif
y an action without specifying the module and the service name, but you can spec
ify only the service name, or only the service name and module name.) The module
and action names, if available, come from within the application code. For exam
ple, Oracle E-Business Suite applications provide module and action names in the
code, so you can identify these by name in any of the Oracle Enterprise Manager
pages. (PL/SQL developers can embed calls into their applications by using the
DBMS_APPLICATION_INFO package to set module and action names.) Note that setting
the module, action, and other parameters such as client_id no longer causes a round-trip to the database;
these routines now piggyback on all calls from the applica
tion. The service name is determined by the connect string used to connect to a
service. User sessions not associated with a specific service are handled by sys
$users (sys$background is the default service for the background processes). Sin
ce we have a service and a module name, we can turn on tracing for this module a
s follows: SQL> exec dbms_monitor.serv_mod_act_trace_enable (service_name=>'test
env', module_name=>'product_update'); PL/SQL procedure successfully completed. W
e can turn on tracing for the client: SQL> exec dbms_monitor.client_id_trace_ena
ble (client_id=>'kimberly'); PL/SQL procedure successfully completed.
Note that all of these settings are persistent: all sessions associated with the serv
ice and module will be traced, not just the current sessions. To trace the SQL b
ased on the session ID, look at the Oracle Enterprise Manager Top Sessions page
, or query the V$SESSION view as you likely currently do. SQL> select sid, seria
l#, username from v$session;

       SID    SERIAL# USERNAME
---------- ---------- ------------
       133       4152 SYS
       137       2418 SYSMAN
       139         53 KIMBERLY
       140        561 DBSNMP
       141          4 DBSNMP
       ...
       168          1
       169          1
       170          1

28 rows selected.

With the session ID (SID) and serial number, you ca
n use DBMS_MONITOR to enable tracing for just this session: SQL> exec dbms_monit
or.session_trace_enable(139); exec dbms_monitor.session_trace_enable(81); PL/SQL
procedure successfully completed. The serial number defaults to the current ser
ial number for the SID (unless otherwise specified), so if that's the session an
d serial number you want to trace, you need not look any further. Also, by defau
lt, WAITS are set to true and BINDS to false, so the syntax above is effectively
the same as the following: SQL> exec dbms_monitor.session_trace_enable(session_
id=>139, serial_num=>53, waits=>true, binds=>false); Note that WAITS and BINDS a
re the same parameters that you might have set in the past using DBMS_SUPPORT an
d the 10046 event. If you're working in a production environment, at this point
you'd rerun the errant SQL or application, and the trace files would be created
accordingly.

Note 3: DBMS_MONITOR:
---------------------
The DBMS_MONITOR package
lets you use PL/SQL for controlling additional tracing and
statistics gathering. The chapter contains the following topics:

Subprogram                             Description
-------------------------------------  -----------------------------------------------------------------------
CLIENT_ID_STAT_DISABLE Procedure       Disables statistic gathering previously enabled for a given Client Identifier
CLIENT_ID_STAT_ENABLE Procedure        Enables statistic gathering for a given Client Identifier
CLIENT_ID_TRACE_DISABLE Procedure      Disables the trace previously enabled for a given Client Identifier globally for the database
CLIENT_ID_TRACE_ENABLE Procedure       Enables the trace for a given Client Identifier globally for the database
DATABASE_TRACE_DISABLE Procedure       Disables SQL trace for the whole database or a specific instance
DATABASE_TRACE_ENABLE Procedure        Enables SQL trace for the whole database or a specific instance
SERV_MOD_ACT_STAT_DISABLE Procedure    Disables statistic gathering enabled for a given combination of Service Name, MODULE and ACTION
SERV_MOD_ACT_STAT_ENABLE Procedure     Enables statistic gathering for a given combination of Service Name, MODULE and ACTION
SERV_MOD_ACT_TRACE_DISABLE Procedure   Disables the trace for ALL enabled instances for a given combination of Service Name, MODULE and ACTION name globally
SERV_MOD_ACT_TRACE_ENABLE Procedure    Enables SQL tracing for a given combination of Service Name, MODULE and ACTION globally unless an instance_name is specified
SESSION_TRACE_DISABLE Procedure        Disables the previously enabled trace for a given database session identifier (SID) on the local instance
SESSION_TRACE_ENABLE Procedure         Enables the trace for a given database session identifier (SID) on the local instance

--------------------------------------------------------------------------
-------------------------------------------- CLIENT_ID_STAT_ENABLE Procedure Thi
s procedure enables statistic gathering for a given Client Identifier. Statistic
s gathering is global for the database and persistent across instance starts and
restarts. That is, statistics are enabled for all instances of the same databas
e, including restarts. Statistics are viewable through V$CLIENT_STATS views. Syn
tax DBMS_MONITOR.CLIENT_ID_STAT_ENABLE( client_id IN VARCHAR2); Parameters Table
60-3 CLIENT_ID_STAT_ENABLE Procedure Parameters Parameter Description client_id
The Client Identifier for which statistic aggregation is enabled. Examples To e
nable statistic accumulation for a client with a given client ID: EXECUTE DBMS_M
ONITOR.CLIENT_ID_STAT_ENABLE('janedoe'); EXECUTE DBMS_MONITOR.CLIENT_ID_STAT_ENA
BLE('edp$jvl'); EXECUTE DBMS_MONITOR.CLIENT_ID_STAT_DISABLE('edp$jvl'); -- CLIEN
T_ID_STAT_DISABLE Procedure This procedure will disable statistics accumulation
for all instances and remove the accumulated results from V$CLIENT_STATS view en
abled by the CLIENT_ID_STAT_ENABLE Procedure. Syntax DBMS_MONITOR.CLIENT_ID_STAT
_DISABLE( client_id IN VARCHAR2); Parameters Parameter Description client_id The
Client Identifier for which statistic aggregation is disabled. Examples To disa
ble accumulation: EXECUTE DBMS_MONITOR.CLIENT_ID_STAT_DISABLE('janedoe');
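Once statistics gathering is enabled, the aggregated figures can be read back from the V$CLIENT_STATS view
mentioned above; a sketch, using the same 'janedoe' identifier as in the example:

SELECT stat_name, value
FROM   v$client_stats
WHERE  client_identifier = 'janedoe'
ORDER  BY stat_name;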
--------------------------------------------------------------------------------
-------------------------------------- CLIENT_ID_TRACE_DISABLE Procedure This pr
ocedure will disable tracing enabled by the CLIENT_ID_TRACE_ENABLE Procedure. Sy
ntax DBMS_MONITOR.CLIENT_ID_TRACE_DISABLE( client_id IN VARCHAR2); Parameters Ta
ble 60-4 CLIENT_ID_TRACE_DISABLE Procedure Parameters Parameter Description clie
nt_id The Client Identifier for which SQL tracing is disabled. Examples EXECUTE
DBMS_MONITOR.CLIENT_ID_TRACE_DISABLE ('janedoe'); edp$jvl
-- CLIENT_ID_TRACE_ENABLE Procedure This procedure will enable the trace for a g
iven client identifier globally for the database. Syntax DBMS_MONITOR.CLIENT_ID_
TRACE_ENABLE( client_id IN VARCHAR2, waits IN BOOLEAN DEFAULT TRUE, binds IN BOO
LEAN DEFAULT FALSE); Parameters Table 60-5 CLIENT_ID_TRACE_ENABLE Procedure Para
meters Parameter Description client_id Database Session Identifier for which SQL
tracing is enabled. waits If TRUE, wait information is present in the trace. bi
nds If TRUE, bind information is present in the trace. Usage Notes The trace wil
l be written to multiple trace files because more than one Oracle shadow process
can work on behalf of a given client identifier.
The tracing is enabled for all instances and persistent across restarts. Example
s EXECUTE DBMS_MONITOR.CLIENT_ID_TRACE_ENABLE('janedoe', TRUE,FALSE); EXECUTE DB
MS_MONITOR.CLIENT_ID_TRACE_ENABLE('albert'); EXECUTE DBMS_MONITOR.CLIENT_ID_TRAC
E_DISABLE ('albert'); ----------------------------------------------------------
------------------------------------------------------------ SERV_MOD_ACT_STAT_D
ISABLE Procedure This procedure will disable statistics accumulation and remove
the accumulated results from V$SERV_MOD_ACT_STATS view. Statistics disabling is
persistent for the database. That is, service statistics are disabled for instan
ces of the same database (plus dblinks that have been activated as a result of t
he enable). Syntax DBMS_MONITOR.SERV_MOD_ACT_STAT_DISABLE( service_name IN VARCH
AR2, module_name IN VARCHAR2, action_name IN VARCHAR2 DEFAULT ALL_ACTIONS); Para
meters Table 60-8 SERV_MOD_ACT_STAT_DISABLE Procedure Parameters Parameter Descr
iption service_name Name of the service for which statistic aggregation is disab
led. module_name Name of the MODULE. An additional qualifier for the service. It
is a required parameter. action_name Name of the ACTION. An additional qualifie
r for the Service and MODULE name. Omitting the parameter (or supplying ALL_ACTI
ONS constant) means enabling aggregation for all Actions for a given Server/Modu
le combination. In this case, statistics are aggregated on the module level. --
SERV_MOD_ACT_STAT_ENABLE Procedure This procedure enables statistic gathering fo
r a given combination of Service Name, MODULE and ACTION. Calling this procedure
enables statistic gathering for a hierarchical combination of Service name, MOD
ULE name, and ACTION name on all instances for the same database. Statistics are
accessible by means of the V$SERV_MOD_ACT_STATS view. Syntax DBMS_MONITOR.SERV_
MOD_ACT_STAT_ENABLE( service_name IN VARCHAR2, module_name IN VARCHAR2, action_n
ame IN VARCHAR2 DEFAULT ALL_ACTIONS);
Parameters Table 60-9 SERV_MOD_ACT_STAT_ENABLE Procedure Parameters Parameter De
scription service_name Name of the service for which statistic aggregation is en
abled. module_name Name of the MODULE. An additional qualifier for the service.
It is a required parameter. action_name Name of the ACTION. An additional qualif
ier for the Service and MODULE name. Omitting the parameter (or supplying ALL_AC
TIONS constant) means enabling aggregation for all Actions for a given Server/Mo
dule combination. In this case, statistics are aggregated on the module level. U
sage Notes Enabling statistic aggregation for the given combination of Service/M
odule/Action names is slightly complicated by the fact that the Module/Action va
lues can be empty strings which are indistinguishable from NULLs. For this reaso
n, we adopt the following conventions: A special constant (unlikely to be a real
action names) is defined: ALL_ACTIONS constant VARCHAR2 := '###ALL_ACTIONS'; Us
ing ALL_ACTIONS for a module specification means that aggregation is enabled for
all actions with a given module name, while using NULL (or empty string) means
that aggregation is enabled for an action whose name is an empty string. Example
s To enable statistic accumulation for a given combination of Service name and M
ODULE: EXECUTE DBMS_MONITOR.SERV_MOD_ACT_STAT_ENABLE( 'APPS1','PAYROLL'); To ena
ble statistic accumulation for a given combination of Service name, MODULE and A
CTION: EXECUTE DBMS_MONITOR.SERV_MOD_ACT_STAT_ENABLE('APPS1','GLEDGER','DEBIT_EN
TRY'); If both of the preceding commands are issued, statistics are accumulated
as follows: For the APPS1 service, because accumulation for each Service Name is
the default. For all actions in the PAYROLL Module. For the DEBIT_ENTRY Action
within the GLEDGER Module. -----------------------------------------------------
-----------------------------
------------------------------------- DATABASE_TRACE_ENABLE Procedure This proce
dure enables SQL trace for the whole database or a specific instance. Syntax DBM
S_MONITOR.DATABASE_TRACE_ENABLE( waits IN BOOLEAN DEFAULT TRUE, binds IN BOOLEAN
DEFAULT FALSE, instance_name IN VARCHAR2 DEFAULT NULL); Parameters Table 60-7 D
ATABASE_TRACE_ENABLE Procedure Parameters Parameter Description waits If TRUE, w
ait information will be present in the trace binds If TRUE, bind information wil
l be present in the trace instance_name If set, restricts tracing to the named i
nstance

EXECUTE dbms_monitor.database_trace_enable;
EXECUTE dbms_monitor.database_trace_disable;
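A sketch with the parameters spelled out (the instance name 'prod1' is only an example; omit instance_name to
trace the whole database):

EXECUTE DBMS_MONITOR.database_trace_enable(waits => TRUE, binds => FALSE, instance_name => 'prod1');
-- ... reproduce the problem ...
EXECUTE DBMS_MONITOR.database_trace_disable(instance_name => 'prod1');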
-- DATABASE_TRACE_DISABLE Procedure This procedure disables SQL trace for the wh
ole database or a specific instance. Syntax DBMS_MONITOR.DATABASE_TRACE_DISABLE(
instance_name IN VARCHAR2 DEFAULT NULL); Parameters Table 60-6 DATABASE_TRACE_D
ISABLE Procedure Parameters Parameter Description instance_name Disables tracing
for the named instance --------------------------------------------------------
------------------------------------------------------------SERV_MOD_ACT_TRACE_D
ISABLE Procedure This procedure will disable the trace at ALL enabled instances
for a given combination of Service Name, MODULE, and ACTION name globally. Synta
x

DBMS_MONITOR.SERV_MOD_ACT_TRACE_DISABLE(
   service_name   IN VARCHAR2,
   module_name    IN VARCHAR2,
   action_name    IN VARCHAR2 DEFAULT ALL_ACTIONS,
   instance_name  IN VARCHAR2 DEFAULT NULL);

Parameters
Table 60-10 SERV_MOD_ACT_TRACE_DISABLE Procedure Parameters Parameter Descriptio
n service_name Name of the service for which tracing is disabled. module_name Na
me of the MODULE. An additional qualifier for the service. action_name Name of t
he ACTION. An additional qualifier for the Service and MODULE name. instance_nam
e If set, this restricts tracing to the named instance_name.
Usage Notes Specifying NULL for the module_name parameter means that statistics
will no longer be accumulated for the sessions which do not set the MODULE attri
bute. Examples To enable tracing for a Service named APPS1: EXECUTE DBMS_MONITOR
.SERV_MOD_ACT_TRACE_ENABLE('APPS1', DBMS_MONITOR.ALL_MODULES, DBMS_MONITOR.ALL_A
CTIONS,TRUE, FALSE,NULL); To disable tracing specified in the previous step: EXE
CUTE DBMS_MONITOR.SERV_MOD_ACT_TRACE_DISABLE('APPS1'); To enable tracing for a g
iven combination of Service and MODULE (all ACTIONs): EXECUTE DBMS_MONITOR.SERV_
MOD_ACT_TRACE_ENABLE('APPS1','PAYROLL', DBMS_MONITOR.ALL_ACTIONS,TRUE,FALSE,NULL
); To disable tracing specified in the previous step: EXECUTE DBMS_MONITOR.SERV_
MOD_ACT_TRACE_DISABLE('APPS1','PAYROLL'); --------------------------------------
-----------------------------------------SERV_MOD_ACT_TRACE_ENABLE Procedure Thi
s procedure will enable SQL tracing for a given combination of Service Name, MOD
ULE and ACTION globally unless an instance_name is specified. Syntax DBMS_MONITO
R.SERV_MOD_ACT_TRACE_ENABLE(
   service_name   IN VARCHAR2,
   module_name    IN VARCHAR2 DEFAULT ANY_MODULE,
   action_name    IN VARCHAR2 DEFAULT ANY_ACTION,
   waits          IN BOOLEAN  DEFAULT TRUE,
   binds          IN BOOLEAN  DEFAULT FALSE,
   instance_name  IN VARCHAR2 DEFAULT NULL);

Parameters
Table 60-11 SERV_MOD_ACT_TRACE_ENABLE Procedure Parameters Parameter Description
service_name Name of the service for which tracing is enabled. module_name Name
of the MODULE. An optional additional qualifier for the service. action_name Na
me of the ACTION. An optional additional qualifier for the Service and MODULE na
me. waits If TRUE, wait information is present in the trace. binds If TRUE, bind
information is present in the trace. instance_name If set, this restricts traci
ng to the named instance_name.
Usage Notes The procedure enables a trace for a given combination of Service, MO
DULE and ACTION name. The specification is strictly hierarchical: Service Name o
r Service Name/MODULE, or Service Name, MODULE, and ACTION name must be specifie
d. Omitting a qualifier behaves like a wild-card, so that not specifying an ACTI
ON means all ACTIONs. Using the ALL_ACTIONS constant achieves the same purpose.
This tracing is useful when an application MODULE and optionally known ACTION is
experiencing poor service levels. By default, tracing is enabled globally for t
he database. The instance_name parameter is provided to restrict tracing to name
d instances that are known, for example, to exhibit poor service levels. Tracing
information is present in multiple trace files and you must use the trcsess too
l to collect it into a single file. Specifying NULL for the module_name paramete
r means that statistics will be accumulated for the sessions which do not set th
e MODULE attribute. Examples To enable tracing for a Service named APPS1: EXECUT
E DBMS_MONITOR.SERV_MOD_ACT_TRACE_ENABLE('APPS1', DBMS_MONITOR.ALL_MODULES, DBMS
_MONITOR.ALL_ACTIONS,TRUE, FALSE,NULL); To enable tracing for a given combinatio
n of Service and MODULE (all ACTIONs):
EXECUTE DBMS_MONITOR.SERV_MOD_ACT_TRACE_ENABLE('APPS1','PAYROLL', DBMS_MONITOR.A
LL_ACTIONS,TRUE,FALSE,NULL); ---------------------------------------------------
----------------------------SESSION_TRACE_DISABLE Procedure This procedure will
disable the trace for a given database session at the local instance. Syntax DBM
S_MONITOR.SESSION_TRACE_DISABLE( session_id IN BINARY_INTEGER DEFAULT NULL, seri
al_num IN BINARY_INTEGER DEFAULT NULL); Parameters Table 60-12 SESSION_TRACE_DIS
ABLE Procedure Parameters Parameter Description session_id Name of the service f
or which SQL trace is disabled. serial_num Serial number for this session.
Usage Notes If serial_num is NULL but session_id is specified, a session with a
given session_id is no longer traced irrespective of its serial number. If both
session_id and serial_num are NULL, the current user session is no longer traced
. It is illegal to specify NULL session_id and non-NULL serial_num. In addition,
the NULL values are default and can be omitted. Examples To enable tracing for
a client with a given client session ID: EXECUTE DBMS_MONITOR.SESSION_TRACE_ENAB
LE(7,4634, TRUE, FALSE); To disable tracing specified in the previous step: EXEC
UTE DBMS_MONITOR.SESSION_TRACE_DISABLE(7,4634); -------------------------------
------------------------------------------------SESSION_TRACE_ENABLE Procedure T
his procedure enables a SQL trace for the given Session ID on the local instance
Syntax DBMS_MONITOR.SESSION_TRACE_ENABLE( session_id IN BINARY_INTEGER DEFAULT
NULL, serial_num IN BINARY_INTEGER DEFAULT NULL, waits IN BOOLEAN DEFAULT TRUE,
binds IN BOOLEAN DEFAULT FALSE) Parameters
Table 60-13 SESSION_TRACE_ENABLE Procedure Parameters Parameter Description sess
ion_id Database Session Identifier for which SQL tracing is enabled. Specifying
NULL means that my current session should be traced. serial_num Serial number fo
r this session. Specifying NULL means that any session which matches session_id
(irrespective of serial number) should be traced. waits If TRUE, wait informatio
n is present in the trace. binds If TRUE, bind information is present in the tra
ce.
Usage Notes The procedure enables a trace for a given database session, and is s
till useful for client/server applications. The trace is enabled only on the ins
tance to which the caller is connected, since database sessions do not span inst
ances. This tracing is strictly local to an instance. If serial_num is NULL but
session_id is specified, a session with a given session_id is traced irrespectiv
e of its serial number. If both session_id and serial_num are NULL, the current
user session is traced. It is illegal to specify NULL session_id and non-NULL se
rial_num. In addition, the NULL values are default and can be omitted. Examples
To enable tracing for a client with a given client session ID: EXECUTE DBMS_MONI
TOR.SESSION_TRACE_ENABLE(7,4634, TRUE, FALSE); To disable tracing specified in t
he previous step: EXECUTE DBMS_MONITOR.SESSION_TRACE_ENABLE(82,30962); EXECUTE D
BMS_MONITOR.SESSION_TRACE_DISABLE(82,30962); Either EXECUTE DBMS_MONITOR.SESSION
_TRACE_ENABLE(5); or EXECUTE DBMS_MONITOR.SESSION_TRACE_ENABLE(5, NULL); traces
the session with session ID of 5, while either EXECUTE DBMS_MONITOR.SESSION_TRAC
E_ENABLE(); or EXECUTE DBMS_MONITOR.SESSION_TRACE_ENABLE(NULL, NULL); traces the
current user session. Also,
EXECUTE DBMS_MONITOR.SESSION_TRACE_ENABLE(NULL, NULL, TRUE, TRUE); traces the cu
rrent user session including waits and binds. The same can be also expressed usi
ng keyword syntax: EXECUTE DBMS_MONITOR.SESSION_TRACE_ENABLE(binds=>TRUE); Note
4: ------End-to-End Tracing A common approach to diagnosing performance problems
is to enable sql_trace to trace database calls and then analyze the output late
r using a tool such as tkprof. However, the approach has a serious limitation in
databases with shared server architecture. In this configuration, several share
d server processes are created to service the requests from the users. When user
BILL connects to the database, the dispatcher passes the connection to an avail
able shared server. If none is available, a new one is created. If this session
starts tracing, the calls made by the shared server process are traced. Now supp
ose that BILL's session becomes idle and LORA's session becomes active. At that
point the shared server originally servicing BILL is assigned to LORA's session.
At this point, the tracing information emitted is not from BILL's session, but
from LORA's. When LORA's session becomes inactive, this shared server may be ass
igned to another active session, which will have completely different informatio
n. In 10g, this problem has been effectively addressed through the use of end-to
-end tracing. In this case, tracing is not done only by session, but by an ident
ifiable name such as a client identifier. A new package called DBMS_MONITOR is a
vailable for this purpose. For instance, you may want to trace all sessions with
the identifier account_update. To set up the tracing, you would issue: exec DBM
S_MONITOR.CLIENT_ID_TRACE_ENABLE('account_update'); This command enables tracing
on all sessions with the identifier account_update. When BILL connects to the d
atabase, he can issue the following to set the client identifier: exec DBMS_SESS
ION.SET_IDENTIFIER ('account_update') Tracing is active on the sessions with the
identifier account_update, so the above session will be traced and a trace file
will be generated on the user dump destination directory. If another user conne
cts to the database and sets her client identifier to account_update, that sessi
on will be traced as well, automatically, without setting any other command insi
de the code. All sessions with the client identifier account_update will be trac
ed until the tracing is disabled by issuing: exec DBMS_MONITOR.CLIENT_ID_TRACE_D
ISABLE('account_update'); The resulting trace files can be analyzed by tkprof. H
owever, each session produces a different trace file. For proper problem diagnos
is, we are interested in the consolidated trace file; not individual ones.
How do we achieve that? Simple. Using a tool called trcsess, you can extract inf
ormation relevant to client identifier account_update to a single file that you
can run through tkprof. In the above case, you can go in the user dump destinati
on directory and run: trcsess output=account_update_trc.txt clientid=account_upd
ate * This command creates a file named account_update_trc.txt that looks like a
regular trace file but has information on only those sessions with client ident
ifier account_update. This file can be run through tkprof to get the analyzed ou
tput. Contrast this approach with the previous, more difficult method of collect
ing trace information. Furthermore, tracing is enabled and disabled by some vari
able such as client identifier, without calling alter session set sql_trace = tr
ue from that session. Another procedure in the same package, SERV_MOD_ACT_TRACE_
ENABLE, can enable tracing in other combinations such as for a specific service,
module, or action, which can be set by dbms_application_info package. Note 5: -
-----Generating SQL Trace Files Oracle Tips by Burleson Consulting The following
Tip is from the outstanding book "Oracle PL/SQL Tuning: Expert Secrets for High
Performance Programming" by Dr. Tim Hall, Oracle ACE of the year, 2006: There a
re numerous ways to enable, disable and vary the contents of this trace. The fol
lowing methods have been available for several versions of the database. -- All
versions. SQL> ALTER SESSION SET sql_trace=TRUE; SQL> ALTER SESSION SET sql_trac
e=FALSE; SQL> EXEC DBMS_SESSION.set_sql_trace(sql_trace => TRUE); SQL> EXEC DBMS
_SESSION.set_sql_trace(sql_trace => FALSE); SQL> ALTER SESSION SET EVENTS '10046
trace name context forever, level 8'; SQL> ALTER SESSION SET EVENTS '10046 trac
e name context off'; SQL> EXEC DBMS_SYSTEM.set_sql_trace_in_session(sid=>123, se
rial#=>1234, sql_trace=>TRUE); SQL> EXEC DBMS_SYSTEM.set_sql_trace_in_session(si
d=>123, serial#=>1234, sql_trace=>FALSE); SQL> EXEC DBMS_SYSTEM.set_ev(si=>123,
se=>1234, ev=>10046, le=>8, nm=>' '); SQL> EXEC DBMS_SYSTEM.set_ev(si=>123, se=>
1234, ev=>10046, le=>0, nm=>' ');
-- All versions, requires DBMS_SUPPORT package to be loaded. SQL> EXEC DBMS_SUPP
ORT.start_trace(waits=>TRUE, binds=>FALSE); SQL> EXEC DBMS_SUPPORT.stop_trace; S
QL> EXEC DBMS_SUPPORT.start_trace(sid=>123, serial=>1234, waits=>TRUE, binds=>FA
LSE); SQL> EXEC DBMS_SUPPORT.stop_trace(sid=>123, serial=>1234); The dbms_suppor
t package is not present by default, but can be loaded as the SYS user by execut
ing the @$ORACLE_HOME/rdbms/admin/dbmssupp.sql script. For methods that require
tracing levels, the following are valid values: 0 - No trace. Like switching sql
_trace off. 2 - The equivalent of regular sql_trace. 4 - The same as 2, but with
the addition of bind variable values. 8 - The same as 2, but with the addition
of wait events. 12 - The same as 2, but with both bind variable values and wait
events. The same combinations are possible for those methods with boolean parame
ters for waits and binds. With the advent of Oracle 10g, the SQL tracing options
have been centralized and extended using the dbms_monitor package. The examples
below show a few possible variations for enabling and disabling SQL trace in Or
acle 10g. -- Oracle 10g SQL> EXEC DBMS_MONITOR.session_trace_enable; SQL> EXEC D
BMS_MONITOR.session_trace_enable(waits=>TRUE, binds=>FALSE); SQL> EXEC DBMS_MONI
TOR.session_trace_disable; SQL> EXEC DBMS_MONITOR.session_trace_enable(session_i
d=>1234, serial_num=>1234); SQL> EXEC DBMS_MONITOR.session_trace_enable(session_
id =>1234, serial_num=>1234, waits=>TRUE, binds=>FALSE); SQL> EXEC DBMS_MONITOR.
session_trace_disable(session_id=>1234, serial_num=>1234); SQL> EXEC DBMS_MONITO
R.client_id_trace_enable(client_id=>'tim_hall'); SQL> EXEC DBMS_MONITOR.client_i
d_trace_enable(client_id=>'tim_hall', waits=>TRUE, binds=>FALSE); SQL> EXEC DBMS
_MONITOR.client_id_trace_disable(client_id=>'tim_hall'); SQL> EXEC DBMS_MONITOR.
serv_mod_act_trace_enable(service_name=>'db10g', module_name=>'test_api', action
_name=>'running'); SQL> EXEC DBMS_MONITOR.serv_mod_act_trace_enable(service_name
=>'db10g', module_name=>'test_api', action_name=>'running', waits=>TRUE, binds=>
FALSE); SQL> EXEC DBMS_MONITOR.serv_mod_act_trace_disable(service_name=>'db10g',
module_name=>'test_api', action_name=>'running'); The package provides the conv
entional session level tracing along with two new
variations. First, tracing can be enabled on multiple sessions based on the valu
e of the client_identifier column of the v$session view, set using the dbms_sess
ion package. Second, tracing can be activated for multiple sessions based on var
ious combinations of the service_name, module, action columns in the v$session v
iew, set using the dbms_application_info package, along with the instance_name i
n RAC environments. With all the possible permutations and default values, this
provides a high degree of flexibility. trcsess Activating trace on multiple sess
ions means that trace information is spread throughout many trace files. For thi
s reason Oracle 10g introduced the trcsess utility, allowing trace information f
rom multiple trace files to be identified and consolidated into a single trace f
ile. The trcsess usage is listed below.

trcsess [output=<output file name>] [session=<session ID>] [clientid=<clientid>]
        [service=<service name>] [action=<action name>] [module=<module name>]
        <trace file names>

output=<output file name>  output destination, default being standard output.
session=<session Id>       session to be traced. Session id is a combination of
                           session index & session serial number, e.g. 8.13.
clientid=<clientid>        clientid to be traced.
service=<service name>     service to be traced.
action=<action name>       action to be traced.
module=<module name>       module to be traced.
<trace_file_names>         space separated list of trace files, with wild card '*' supported.

With all these options, the consolidated trace file
can be as broad or as specific as needed. tkprof The SQL trace files produced by
the methods discussed previously can be read in their raw form, or they can be
translated by the tkprof utility into a more human readable form. The output bel
ow lists the usage notes from the tkprof utility in Oracle 10g.

$ tkprof
Usage: tkprof tracefile outputfile [explain= ] [table= ]
              [print= ] [insert= ] [sys= ] [sort= ]
  table=schema.tablename   Use 'schema.tablename' with 'explain=' option.
  explain=user/password    Connect to ORACLE and issue EXPLAIN PLAN.
  print=integer            List only the first 'integer' SQL statements.
  aggregate=yes|no
  insert=filename          List SQL statements and data inside INSERT statements.
  sys=no                   TKPROF does not list SQL statements run as user SYS.
  record=filename          Record non-recursive statements found in the trace file.
  waits=yes|no             Record summary for any wait events found in the trace file.
  sort=option              Set of zero or more of the following sort options:
    prscnt  number of times parse was called
    prscpu  cpu time parsing
    prsela  elapsed time parsing
    prsdsk  number of disk reads during parse
    prsqry  number of buffers for consistent read during parse
    prscu   number of buffers for current read during parse
    prsmis  number of misses in library cache during parse
    execnt  number of execute was called
    execpu  cpu time spent executing
    exeela  elapsed time executing
    exedsk  number of disk reads during execute
    exeqry  number of buffers for consistent read during execute
    execu   number of buffers for current read during execute
    exerow  number of rows processed during execute
    exemis  number of library cache misses during execute
    fchcnt  number of times fetch was called
    fchcpu  cpu time spent fetching
    fchela  elapsed time fetching
    fchdsk  number of disk reads during fetch
    fchqry  number of buffers for consistent read during fetch
    fchcu   number of buffers for current read during fetch
    fchrow  number of rows fetched
    userid  userid of user that parsed the cursor
$
The waits parameter was only added in Oracle 9i, so prior to this version wait i
nformation had to be read from the raw trace file. The values of bind variables
must be read from the raw files as they are not displayed in the tkprof output.
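A possible end-to-end sequence that ties the 10g pieces above together (the service, module
and action names are the ones used in the examples; the directory and file names are only
illustrative and should be adapted to your own environment):

SQL> EXEC DBMS_MONITOR.serv_mod_act_trace_enable(service_name=>'db10g', module_name=>'test_api', action_name=>'running', waits=>TRUE, binds=>FALSE);
-- run the workload for a while, then switch tracing off again:
SQL> EXEC DBMS_MONITOR.serv_mod_act_trace_disable(service_name=>'db10g', module_name=>'test_api', action_name=>'running');

$ cd /u01/admin/db10g/udump        # whatever user_dump_dest points to
$ trcsess output=test_api.trc service=db10g module=test_api action=running *.trc
$ tkprof test_api.trc test_api.prf sort=prsela,exeela,fchela sys=no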
20.2 OLDER ORACLE Versions 8,8i,9i: =================================== 20.2.1 T
race a session: ----------------------Examples: --------exec DBMS_SYSTEM.SET_SQL
_TRACE_IN_SESSION(sid, serial#, TRUE); exec DBMS_SYSTEM.SET_SQL_TRACE_IN_SESSION
(23, 54071, TRUE); DBMS_SYSTEM has some mysterious and apparently dangerous proc
edures in it. Obtaining any information about SET_EV and READ_EV was very diffic
ult and promises to be more difficult in the future since the package header is
no longer exposed in Oracle 8.0. In spite of Oracle's desire to keep DBMS_SYSTEM
"under wraps," I feel strongly that the SET_SQL_TRACE_IN_SESSION procedure is f
ar too valuable to be hidden away in obscurity. DBAs and developers need to find
out exactly what is happening at runtime when a user is experiencing unusual pe
rformance problems, and the SQL trace facility is one of the best tools availabl
e for discovering what
the database is doing during a user's session. This is especially useful when in
vestigating problems with software packages where source code (including SQL) is
generally unavailable. So how can we get access to the one program in DBMS_SYST
EM we want without exposing those other dangerous elements to the public? The an
swer, of course, is to build a package of our own to encapsulate DBMS_SYSTEM and
expose only what is safe. In the process, we can make DBMS_SYSTEM easier to use
as well. Those of us who are "keyboard-challenged" (or just plain lazy) would c
ertainly appreciate not having to type a procedure name with 36 characters. I've
created a package called trace to cover DBMS_SYSTEM and provide friendlier ways
to set SQL tracing on or off in other user's sessions. Here is the package spec
ification:

/* Filename on companion disk: trace.sql */
CREATE OR REPLACE PACKAGE trace
IS
   type rr_rec is record
   ( v_sid    number,
     v_serial number );
   r_rec rr_rec;
/*
|| Exposes DBMS_SYSTEM.SET_SQL_TRACE_IN_SESSION with easier to call programs
||
|| Author: John Beresniewicz, Savant Corp
|| Created: 07/30/97
||
|| Compilation Requirements:
||   SELECT on SYS.V_$SESSION
||   EXECUTE on SYS.DBMS_SYSTEM (or create as SYS)
||
|| Execution Requirements:
*/
/* turn SQL trace on by session id */ PROCEDURE Xon(sid_IN IN NUMBER); /* turn S
QL trace off by session id */ PROCEDURE off(sid_IN IN NUMBER); /* turn SQL trace
on by username */ PROCEDURE Xon(user_IN IN VARCHAR2); /* turn SQL trace off by
username */ PROCEDURE off(user_IN IN VARCHAR2); END trace;
The trace package provides ways to turn SQL tracing on or off by session id or u
sername. One thing that annoys me about DBMS_SYSTEM.SET_SQL_TRACE_IN_SESSION is
having to figure out and pass a session serial number into the procedure. There
should always be only one session per sid at any time connected to the database,
so trace takes care of figuring out the appropriate serial number behind the sc
enes. Another improvement (in my mind) is replacing the potentially confusing BO
OLEAN parameter sql_trace with two distinct procedures whose names indicate what
is being done. Compare the following commands, either of which might be used to
turn SQL tracing off in session 15 using SQL*Plus: SQL> execute trace.off(sid_I
N=>15); SQL> execute SYS.DBMS_SYSTEM.SET_SQL_TRACE_IN_SESSION(15,4567,FALSE); Th
e first method is both more terse and easier to understand. The xon and off proc
edures are both overloaded on the single IN parameter, with versions accepting e
ither the numeric session id or a character string for the session username. All
owing session selection by username may be easier than by sids. Why? Because sid
s are transient and must be looked up at runtime, whereas username is usually pe
rmanently associated with an individual. Beware, though, that multiple sessions
may be concurrently connected under the same username, and invoking trace.xon by
username will turn tracing on in all of them. Let's take a look at the trace pa
ckage body: /* Filename on companion disk: trace.sql */ CREATE OR REPLACE PACKA
GE BODY trace IS /* || Use DBMS_SYSTEM.SET_SQL_TRACE_IN_SESSION to turn tracing
on || or off by either session id or username. Affects all sessions || that matc
h non-NULL values of the user and sid parameters. */ PROCEDURE set_trace (sqltra
ce_TF BOOLEAN ,user IN VARCHAR2 DEFAULT NULL ,sid IN NUMBER DEFAULT NULL) IS BEG
IN /* || Loop through all sessions that match the sid and user || parameters and
set trace on in those sessions. The NVL || function in the cursor WHERE clause
allows the single || SELECT statement to filter by either sid OR user.
*/ FOR sid_rec IN (SELECT sid,serial# FROM v$session S WHERE S.type='USER' AND S
.username = NVL(UPPER(user),S.username) AND S.sid = NVL(sid,S.sid) ) LOOP SYS.DB
MS_SYSTEM.SET_SQL_TRACE_IN_SESSION (sid_rec.sid, sid_rec.serial#, sqltrace_TF);
END LOOP; END set_trace; /* || The programs exposed by the package all simply ||
call set_trace with different parameter combinations. */ PROCEDURE Xon(sid_IN I
N NUMBER) IS BEGIN set_trace(sqltrace_TF => TRUE, sid => sid_IN); END Xon; PROCE
DURE off(sid_IN IN NUMBER) IS BEGIN set_trace(sqltrace_TF => FALSE, sid => sid_I
N); END off; PROCEDURE Xon(user_IN IN VARCHAR2) IS BEGIN set_trace(sqltrace_TF =
> TRUE, user => user_IN); END Xon; PROCEDURE off(user_IN IN VARCHAR2) IS BEGIN s
et_trace(sqltrace_TF => FALSE, user => user_IN); END off; END trace; All of the
real work done in the trace package is contained in a single private procedure c
alled set_trace. The public procedures merely call set_trace with different para
meter combinations. This is a structure that many packages exhibit: private prog
rams with complex functionality exposed through public programs with simpler int
erfaces. One interesting aspect of set_trace is the cursor used to get session i
dentification data from V_$SESSION. I wanted to identify sessions for tracing by
either session id or username. I could have just defined two cursors on V_$SESS
ION with some conditional logic deciding which cursor to use, but that just did
not seem clean enough. After all, less code means fewer bugs. The solution I arr
ived at: make use of the NVL function to have a single cursor effectively ignore
either the sid or the user parameter when either is passed in as NULL. Since se
t_trace is always called with either sid or user, but not both, the NVLs act as
a kind of toggle on the cursor. I also supplied both the sid and user parameters
to set_trace with the default value of NULL so that only the parameter being us
ed for selection needs be passed in the call. Once set_trace was in place, the p
ublicly visible procedures were trivial. A final note about the procedure name "
xon": I wanted to use the procedure name "on," but ran afoul of the PL/SQL compi
ler since ON is a reserved word in SQL and PL/SQL. You can also try: Alter syste
m set sql_trace=true; Setting sql_trace=true is a prerequisite when using tkprof.

-- TRACING a session:
----------------------
Enable tracing a session to generate a trace file. This file can be formatted with TKPROF.

6.1 The following INIT
.ORA parameters must be set: #SQL_TRACE = TRUE USER_DUMP_DEST = <preferred direc
tory for the trace output> TIMED_STATISTICS = TRUE MAX_DUMP_FILE_SIZE = <optiona
l, determines trace output file size> 6.2 To enable the SQL trace facility for y
our current session, enter: ALTER SESSION SET SQL_TRACE = TRUE; or use DBMS_SUPP
ORT.START_TRACE_IN_SESSION( SID , SERIAL# ); DBMS_SUPPORT.STOP_TRACE_IN_SESSION(
SID , NULL ); DBMS_SYSTEM.SET_SQL_TRACE_IN_SESSION(sid, serial#, TRUE); DBMS_SU
PPORT.START_TRACE_IN_SESSION(86,43326); To enable the SQL trace facility for you
r instance, set the value of the SQL_TRACE initialization parameter to TRUE. Sta
tistics will be collected for all sessions. Once the SQL trace facility has been
enabled for the instance, you can disable it for an individual session by enter
ing: ALTER SESSION SET SQL_TRACE = FALSE;
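(Aside, not in the original text.) Two session settings that are usually set before switching
SQL trace on, so that timings are actually recorded and the trace file is not truncated:

SQL> ALTER SESSION SET timed_statistics = TRUE;
SQL> ALTER SESSION SET max_dump_file_size = unlimited;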
6.3 Examples of TKPROF

TKPROF ora53269.trc ora53269.prf SORT=(PRSDSK, EXEDSK, FCHDSK) PRINT=10

To analyze the sql statements:
1. tkprof ora_11598.trc myfilename
2. tkprof ora_11598.trc /tmp/myfilename
3. tkprof ora_11598.trc /tmp/myfilename explain=ap/ap
4. tkprof ora_23532.trc myfilename explain=po/po sort=execpu
7 STATSPACK: -----------Statspack is a set of SQL, PL/SQL, and SQL*Plus scripts
that allow the collection, automation, storage, and viewing of performance data
(see Table 2). The installation script (statscre.sql) calls several other script
s in order to create the entire Statspack environment. (Note: You should run onl
y the installation script, not the base scripts that statscre.sql invokes.) All
the scripts you need for installing and running Statspack are in the ORACLE_HOME
/rdbms/admin directory for UNIX platforms and in %ORACLE_HOME%\rdbms\admin for M
icrosoft Windows NT systems. The simplest interactive way to take a snapshot is
to log in to SQL*Plus as the owner perfstat and execute the statspack.snap proce
dure: SQL> connect perfstat/perfstat SQL> execute statspack.snap; You can use db
ms_job to automate statistics collection. The file statsauto.sql contains an exa
mple of how to do this, scheduling a snapshot every hour. When you create a job
by using dbms_job, Oracle assigns the job a unique number that you can use for c
hanging or removing the job. In order to use dbms_job to schedule snapshots auto
matically, you must set the job_queue_processes initialization parameter to grea
ter than 0 in the init.ora file: # Set to enable the job-queue process to start.
# This allows dbms_job to schedule automatic # statistics collection, using Sta
tspack job_queue_processes=1 Change the interval of statistics collection by usi
ng the dbms_job.interval procedure: execute dbms_job.interval(<job number>, 'SYS
DATE+(1/48)'); In this case, 'SYSDATE+(1/48)' causes the statistics to be gathered
each 1/48 day, i.e. every half hour. To stop and remove the automatic-collection job:
execute dbms_job.remove(<job number>); Install Statspack: CREATE USER perfstat i
dentified by perfstat default tableSpace TOOLS temporary tableSpace TEMP; GRANT
GRANT GRANT GRANT CREATE SeSSion to PERFSTAT; connect to PERFSTAT; reSource to P
ERFSTAT; unlimited tableSpace to PERFSTAT;
sqlplus sys --- Install Statspack -- Enter tablespace names when prompted -@?/rd
bms/admin/spcreate.sql --- Drop Statspack -- Reverse of spcreate.sql --- @?/rdbm
s/admin/spdrop.sql -The spcreate.sql install script automatically calls 3 other
scripts needed: spcusr - creates the user and grants privileges spctab - creates
the tables spcpkg - creates the package Check each of the three output files pr
oduced (spcusr.lis, spctab.lis, spcpkg.lis) by the installation to ensure no err
ors were encountered, before continuing on to the next step. Using Statspack (ga
thering data): sqlplus perfstat --- Take a perfoRMANce snapshot -execute statspa
ck.snap; --- Get a list of snapshots -column snap_time format a21 SELECT snap_id
,to_char(snap_time,'MON dd, yyyy hh24:mm:ss') snap_time FROM sp$snapshot; -NOTE:
To include important timing information set the init.ora parameter timed_statis
tics to true. To examine the change in instancewide statistics between two time
periods, the
SPREPORT.SQL file is run while connected to the PERFSTAT user. The SPREPORT.SQL
command file is located in the rdbms/admin directory of the Oracle home.
You are prompted for the following: The beginning snapshot ID The ending snapsho
t ID The name of the report text file to be created
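A minimal sketch of automating the snapshots with DBMS_JOB and then producing a report,
along the lines of the statsauto.sql example mentioned above (the half-hour interval and
the connect string are only examples):

SQL> connect perfstat/perfstat
SQL> variable jobno number
SQL> begin
       dbms_job.submit(:jobno, 'statspack.snap;', SYSDATE, 'SYSDATE+(1/48)');
       commit;
     end;
     /
SQL> print jobno

-- compare two snapshots; you are prompted for the begin/end snapshot id and the report file name
SQL> @?/rdbms/admin/spreport.sql

-- stop and remove the automatic collection again, using the job number printed above
SQL> execute dbms_job.remove(:jobno);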
===========
21. Overig (Miscellaneous):
===========

21.1 NLS:
=========
Server side:
1. character set specification with CREATE DATABASE
2. the server can load multiple locales at runtime from files specified in
   $ export ORA_NLSxx=$ORACLE_HOME/ocommon/nls/admin/data
3. NLS init.ora parameters for the user sessions.
If clients using different character sets will access the database, then choose
a superset that includes all client character sets. Otherwise, character convers
ions may be necessary at the cost of increased overhead and potential data loss.
Client side:
1. the client has a local NLS environment setting
2. when the client connects to the database a session is created, and the NLS environment
   is built from the NLS init.ora parameters. If the NLS_LANG environment variable is set
   on the client, the client communicates it to the server session, so both are the same.
   If there is no NLS_LANG, the init.ora NLS parameters apply to the server session.
3. the session NLS settings can be changed via ALTER SESSION. This only affects the
   PL/SQL and SQL statements executed on the server.

init.ora parameters on the server    : affect the sessions on the server
environment variables on the client  : locale on the client, overrides the session defaults
ALTER SESSION statement              : changes the session, overrides init.ora
explicit in the SQL statement        : overrides everything

Example of an override: in init.ora:            on the client:

Examples:
---------
Example 1:
----------
ALTER SESSION SET nls_date_format = 'dd/mm/yy'
ALTER SESSION SET NLS_DATE
_FORMAT = 'DD-MON-YYYY' ALTER SESSION SET NLS_LANGUAGE='ENGLISH'; ALTER SESSION
SET NLS_LANGUAGE='NEDERLANDS'; export NLS_NUMERIC_CHARACTERS=',.' ALTER SESSION
SET NLS_NUMERIC_CHARACTERS=',.' ALTER SESSION SET NLS_TERRITORY=France; ALTER SE
SSION SET NLS_TERRITORY=America; In SQL functions: NLS parameters can be used ex
plicitly to hardcode NLS behavior within a SQL function. Doing so will override
the default values that are set for the session in the initialization parameter
file, set for the client with environment variables, or set for the session by t
he ALTER SESSION statement. For example: TO_CHAR(hiredate, 'DD/MON/YYYY', 'nls_d
ate_language = FRENCH') SELECT last_name FROM employees WHERE hire_date > TO_DAT
E('01-JAN-1999','DD-MON-YYYY', 'NLS_DATE_LANGUAGE = AMERICAN'); NLS_SORT=ENGLISH
ALTER SESSION SET NLS_SORT=FRENCH;
Example 2: ---------SQL> ALTER SESSION SET NLS_NUMERIC_CHARACTERS=',.' 2 ; Sessi
on altered. SQL> select * from ap2; NAME SAL ---------- ---------ap 12,53 piet 8
9,7
SQL> ALTER SESSION SET NLS_NUMERIC_CHARACTERS='.,'; Session altered. SQL> select
* from ap2; NAME SAL ---------- ---------ap 12.53 piet 89.7
priority:
---------
1. explicit in the SQL statement
2. ALTER SESSION
3. environment variable
4. init.ora
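A small illustration of this priority order (not from the original text, using only the DUAL
table): the explicit NLS parameter inside TO_CHAR wins over the ALTER SESSION setting.

SQL> ALTER SESSION SET NLS_DATE_LANGUAGE = 'DUTCH';
SQL> SELECT TO_CHAR(SYSDATE, 'DD Month YYYY') FROM dual;
     -- month name is returned in Dutch (session setting)
SQL> SELECT TO_CHAR(SYSDATE, 'DD Month YYYY', 'NLS_DATE_LANGUAGE = AMERICAN') FROM dual;
     -- month name is returned in English: the explicit parameter overrides the session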
NLS parameters, and where they can be set (init.ora on the server, environment
variable on the client, ALTER SESSION):

NLS_CALENDAR              init.ora, env, alter session
NLS_COMP                  init.ora, env, alter session
NLS_CREDIT                env
NLS_CURRENCY              init.ora, env, alter session
NLS_DATE_FORMAT           init.ora, env, alter session
NLS_DATE_LANGUAGE         init.ora, env, alter session
NLS_DEBIT                 env
NLS_ISO_CURRENCY          init.ora, env, alter session
NLS_LANG                  env
NLS_LANGUAGE              init.ora, alter session (on the client via NLS_LANG)
NLS_LIST_SEPARATOR        env
NLS_MONETARY_CHARACTERS   env
NLS_NCHAR                 env
NLS_NUMERIC_CHARACTERS    init.ora, env, alter session
NLS_SORT                  init.ora, env, alter session
NLS_TERRITORY             init.ora, alter session (on the client via NLS_LANG)
NLS_DUAL_CURRENCY         init.ora, env, alter session

DATA DICTIONARY VIEWS:
----------------------
Applications can check the session, instance, and database NLS parameters by
querying the following data dictionary views:

NLS_SESSION_PARAMETERS shows the NLS parameters and their values for the session
that is querying the view. It does not show information about the character set.

NLS_INSTANCE_PARAMETERS shows the current NLS instance parameters that have been
explicitly set and the values of the NLS instance parameters.
NLS_DATABASE_PARAMETERS shows the values of the NLS parameters that were used wh
en the database was created. Example: -------SQL> desc ap1; Name Null? ---------
-------------------------------- -------NAME SAL SQL> select * from ap1; NAME --
-------ap piet SAL ---------12,53 89,7 Type ---------------------------VARCHAR2(
10) NUMBER Type ------------VARCHAR2(10) VARCHAR2(10)
SQL> desc ap2; Name Null? ----------------------------------------- -------NAME
SAL SQL> select * from ap2; NAME SAL ---------- ---------ap 12.53 piet 89.7 SQL>
insert into ap2 2 select * from ap1; select * from ap1 * ERROR at line 2: ORA-0
1722: invalid number SQL> ALTER SESSION SET NLS_NUMERIC_CHARACTERS=',.'; Session
altered. SQL> insert into ap2 2 select * from ap1; 2 rows created. 21.2 More on
AL32UTF8, AL16UTF16, UTF8: ======================================= 1) What is t
he National Character Set?
-------------------------------------The National Character set (NLS_NCHAR_CHARA
CTERSET) is a character set which is defined in addition to the (normal) databas
e character set and is used for data stored in NCHAR, NVARCHAR2 and NCLOB column
s. Your current value for the NLS_NCHAR_CHARACTERSET can be found with this sele
ct: select value from NLS_DATABASE_PARAMETERS where parameter='NLS_NCHAR_CHARACT
ERSET'; You cannot have more than 2 charactersets defined in Oracle: The NLS_CHA
RACTERSET is used for CHAR, VARCHAR2, CLOB columns; The NLS_NCHAR_CHARACTERSET i
s used for NCHAR, NVARCHAR2, NCLOB columns. NLS_NCHAR_CHARACTERSET is defined wh
en the database is created and specified with the CREATE DATABASE command. The N
LS_NCHAR_CHARACTERSET defaults to AL16UTF16 if nothing is specified. From 9i onw
ards the NLS_NCHAR_CHARACTERSET can have only 2 values: UTF8 or AL16UTF16, which are
Unicode character sets. See Note 260893.1 Unicode character sets in the Oracle
database for more info about the difference between them. A lot of people think
that they *need* to use the NLS_NCHAR_CHARACTERSET to have Unicode support in
Oracle; this is not true. NLS_NCHAR_CHARACTERSET (NCHAR, NVARCHAR2) is in 9i always
Unicode, but you can perfectly well use "normal" CHAR and VARCHAR2 columns for
storing Unicode in a database that has an AL32UTF8 / UTF8 NLS_CHARACTERSET. See also
point 15. When trying to use another NATIONAL character set, the CREATE DATABASE c
ommand will fail with "ORA-12714 invalid national character set specified". The
character set identifier is stored with the column definition itself. 2) Which d
atatypes use the National Character Set? ---------------------------------------
----------There are three datatypes which can store data in the national charact
er set: NCHAR - a fixed-length national character set character string. The leng
th of the column is ALWAYS defined in characters (it always uses CHAR semantics)
NVARCHAR2 - a variable-length national character set character string. The lengt
h of the column is ALWAYS defined in characters (it always uses CHAR semantics)
NCLOB - stores national character set data of up to four gigabytes. Data is alwa
ys stored in UCS2 or AL16UTF16, even if the NLS_NCHAR_CHARACTERSET is UTF8. This
has very limited impact, for more info about this please see: Note 258114.1 <ht
tp://metalink.oracle.com/metalink/plsql/showdoc?db=NOT&id=258114.1> Possible act
ion for CLOB/NCLOB storage after 10g upgrade and if you use DBMS_LOB.LOADFROMFIL
E see Note 267356.1 <http://metalink.oracle.com/metalink/plsql/showdoc?db=NOT&id
=267356.1> Character set conversion when using DBMS_LOB
If you don't know what CHAR semantics is, then please read Note 144808.1 <http:/
/metalink.oracle.com/metalink/plsql/showdoc?db=NOT&id=144808.1> Examples and lim
its of BYTE and CHAR semantics usage If you use N-types, DO use the (N'...') syn
tax when coding it so that Literals are denoted as being in the national charact
er set by prepending letter 'N', for example: create table test(a nvarchar2(100)
); insert into test values(N'this is a NLS_NCHAR_CHARACTERSET string'); 3) How t
o know if I use N-type columns? --------------------------------------This selec
t list all tables containing a N-type column: select distinct OWNER, TABLE_NAME
from DBA_TAB_COLUMNS where DATA_TYPE in ('NCHAR','NVARCHAR2', 'NCLOB'); On a 9i
database created without (!) the "sample" schema you will see these rows (or fewer)
returned:

OWNER                          TABLE_NAME
------------------------------ ------------------------------
SYS                            ALL_REPPRIORITY
SYS                            DBA_FGA_AUDIT_TRAIL
SYS                            DBA_REPPRIORITY
SYS                            DEFLOB
SYS                            STREAMS$_DEF_PROC
SYS                            USER_REPPRIORITY
SYSTEM                         DEF$_LOB
SYSTEM                         DEF$_TEMP$LOB
SYSTEM                         REPCAT$_PRIORITY

9 rows selected.

These SYS and SYSTEM tables may contain data if you are using:
* Fine Grained Auditing -> DBA_FGA_AUDIT_TRAIL
* Advanced Replication -> ALL_REPPRIORITY, DBA_REPPRIORITY, USER_REPPRIORITY,
  DEF$_LOB, DEF$_TEMP$LOB and REPCAT$_PRIORITY
* Advanced Replication or Deferred Transactions functionality -> DEFLOB
* Oracle Streams -> STREAMS$_DEF_PROC

If you do have created the database with the DBCA and included the sample schema
then you will typically see:

OWNER                          TABLE_NAME
------------------------------ ------------------------------
OE                             BOMBAY_INVENTORY
OE                             PRODUCTS
OE                             PRODUCT_DESCRIPTIONS
OE                             SYDNEY_INVENTORY
OE                             TORONTO_INVENTORY
PM                             PRINT_MEDIA
SYS                            ALL_REPPRIORITY
SYS                            DBA_FGA_AUDIT_TRAIL
SYS                            DBA_REPPRIORITY
SYS                            DEFLOB
SYS                            STREAMS$_DEF_PROC
SYS                            USER_REPPRIORITY
SYSTEM                         DEF$_LOB
SYSTEM                         DEF$_TEMP$LOB
SYSTEM                         REPCAT$_PRIORITY

15 rows selected.
The OE and PM tables contain just sample data and can be dropped if needed. 4) S
hould I worry when I upgrade from 8i or lower to 9i or 10g? --------------------
------------------------------------------* When upgrading from version 7: The N
ational Character Set did not exist in version 7, so you cannot have N-type colu
mns. Your database will just have the -default- AL16UTF16 NLS_NCHAR_CHARACTERSET
declaration and the standard sys/system tables. So there is nothing to worry ab
out... * When upgrading from version 8 and 8i: - If you have only the SYS / SYST
EM tables listed in point 3) then you don't have USER data using N-type columns.
Your database will just have the -default- AL16UTF16 NLS_NCHAR_CHARACTERSET dec
laration after the upgrade and the standard sys/system tables. So there is nothi
ng to worry about... We recommend that you follow this note: Note 159657.1 <http
://metalink.oracle.com/metalink/plsql/showdoc?db=NOT&id=159657.1> Complete Upgra
de Checklist for Manual Upgrades from 8.X / 9.0.1 to Oracle9i - If you have more
tables than the SYS / SYSTEM tables listed in point 3) (and they are also not
the "sample" tables) then there are two possible cases: * Again, the next two poin
ts are *only* relevant when you DO have n-type USER data * a) Your current 8 / 8
i NLS_NCHAR_CHARACTERSET is in this list: JA16SJISFIXED , JA16EUCFIXED , JA16DBC
SFIXED , ZHT32TRISFIXED KO16KSC5601FIXED , KO16DBCSFIXED , US16TSTFIXED , ZHS16C
GB231280FIXED ZHS16GBKFIXED , ZHS16DBCSFIXED , ZHT16DBCSFIXED , ZHT16BIG5FIXED Z
HT32EUCFIXED Then the new NLS_NCHAR_CHARACTERSET will be AL16UTF16 and your data
will be converted to AL16UTF16 during the upgrade. We recommend that you follow
this note: Note 159657.1
<http://metalink.oracle.com/metalink/plsql/showdoc?db=NOT&id=159657.1> Complete
Upgrade Checklist for Manual Upgrades from 8.X / 9.0.1 to Oracle9i b) Your curre
nt 8 / 8i NLS_NCHAR_CHARACTERSET is UTF8: Then the new NLS_NCHAR_CHARACTERSET wi
ll be UTF8 and your data not be touched during the upgrade. We still recommend t
hat you follow this note: Note 159657.1 <http://metalink.oracle.com/metalink/pls
ql/showdoc?db=NOT&id=159657.1> Complete Upgrade Checklist for Manual Upgrades fr
om 8.X / 9.0.1 to Oracle9i c) Your current 8 / 8i NLS_NCHAR_CHARACTERSET is NOT
in the list of point a) and is NOT UTF8: Then your will need to export your data
and drop it before upgrading. We recommend that you follow this note: Note 1596
57.1 <http://metalink.oracle.com/metalink/plsql/showdoc?db=NOT&id=159657.1> Comp
lete Upgrade Checklist for Manual Upgrades from 8.X / 9.0.1 to Oracle9i For more
info about the National Character Set in Oracle8 see Note 62107.1 <http://metal
ink.oracle.com/metalink/plsql/showdoc?db=NOT&id=62107.1> 5) The NLS_NCHAR_CHARAC
TERSET is NOT changed to UTF8 or AL16UTF16 after upgrading to 9i. --------------
------------------------------------------------------------------------That may
happen if you have not set the ORA_NLS33 environment parameter correctly to the
9i Oracle_Home during the upgrade. Note 77442.1 <http://metalink.oracle.com/met
alink/plsql/showdoc?db=NOT&id=77442.1> ORA_NLS (ORA_NLS32, ORA_NLS33, ORA_NLS10)
Environment Variables explained. We recommend that you follow this note for the
upgrade: Note 159657.1 <http://metalink.oracle.com/metalink/plsql/showdoc?db=NO
T&id=159657.1> Complete Upgrade Checklist for Manual Upgrades from 8.X / 9.0.1 t
o Oracle9i Strongly consider to restore your backup and do the migration again o
r log a TAR, refer to this note and ask to assign the TAR to the NLS/globalizati
on team. That team can then assist you further. However please do note that not
all situations can be corrected, so you might be asked to do the migration again
...
6) Can I change the AL16UTF16 to UTF8 / I hear that there are problems with AL16
UTF16. -------------------------------------------------------------------------
-----------a) If you do *not* use N-types then there is NO problem at all with A
L16UTF16 because you are simply not using it and we strongly advice you the keep
the default AL16UTF16 NLS_NCHAR_CHARACTERSET.
b) If you *do* use N-types then there will be a problem with 8i clients and lowe
r accessing the N-type columns (note that you will NOT have a problem selecting
from "normal" non-N-type columns). More info about that is found there: Note 140
014.1 <http://metalink.oracle.com/metalink/plsql/showdoc?db=NOT&id=140014.1> ALE
RT Oracle8/8i to Oracle9i/10g using New "AL16UTF16" National Character Set Note
236231.1 <http://metalink.oracle.com/metalink/plsql/showdoc?db=NOT&id=236231.1>
New Character Sets Not Supported For Use With Developer 6i And Older Versions If
this is a situation you find yourself in we recommend to simply use UTF8 as NLS
_NCHAR_CHARACTERSET or create a second 9i db using UTF8 as NCHAR and use this as
"inbetween" between the 8i and the 9i db you can create views in this new datab
ase that do a select from the AL16UTF16 9i db the data will then be converted fr
om AL16UTF16 to UTF8 in the "inbetween" database and that can be read by oracle
8i This is one of the 2 reasons why you should use UTF8 as NLS_NCHAR_CHARACTERSE
T. If you are NOT using N-type columns with pre-9i clients then there is NO reas
on to go to UTF8. c) If you want to change to UTF8 because you are using transpo
rtable tablespaces from 8i database then check if are you using N-types in the 8
i database that are included in the tablespaces that you are transporting. selec
t distinct OWNER, TABLE_NAME from DBA_TAB_COLUMNS where DATA_TYPE in ('NCHAR','N
VARCHAR2', 'NCLOB'); If yes, then you have the second reason to use UTF8 as as N
LS_NCHAR_CHARACTERSET. If not, then leave it to AL16UTF16 and log a tar for the
solution of the ORA19736 and refer to this document. d) You are in one of the 2
situations where it's really needed to change from AL16UTF16 to UTF8, log a tar
so that we can assist you. provide: 1) the output from: select distinct OWNER, T
ABLE_NAME, COLUMN_NAME, CHAR_LENGTH from DBA_TAB_COLUMNS where DATA_TYPE in ('NC
HAR','NVARCHAR2', 'NCLOB'); 2) a CSSCAN output IMPORTANT: Please *DO* install th
e version 1.2 or higher from TechNet for you version. http://technet.oracle.com/
software/tech/globalization/content.html and use this. copy all scripts and exec
utables found in the zip file you downloaded to your oracle_home overwriting the
old versions.
Then run csminst.sql using these commands and SQL statements: cd $ORACLE_HOME/rd
bms/admin set oracle_sid=<your SID> sqlplus "sys as sysdba" SQL>set TERMOUT ON S
QL>set ECHO ON SQL>spool csminst.log SQL> START csminst.sql Check the csminst.lo
g for errors. Then run CSSCAN csscan FULL=Y FROMNCHAR=AL16UTF16 TONCHAR=UTF8 LOG
=Ncharcheck CAPTURE=Y ( note the usage of fromNchar and toNchar ) Upload the 3 r
esulting files and the output of the select while creating the tar important: Do
NOT use the N_SWITCH.SQL script, this will corrupt you NCHAR data !!!!!! 7) Is
the AL32UTF8 problem the same as the AL16UTF16 / do I need the same patches? ---
------------------------------------------------------------------------------No
, they may look similar but are 2 different issues. For information about the po
ssible AL32UTF8 issue please see Note 237593.1 <http://metalink.oracle.com/metal
ink/plsql/showdoc?db=NOT&id=237593.1> Problems connecting to AL32UTF8 databases
from older versions (8i and lower) 8) But I still want <characterset> as NLS_NCH
AR_CHARACTERSET, like I had in 8(i)! -------------------------------------------
-------------------------------------This is simply not possible. From 9i onward
s the NLS_NCHAR_CHARACTERSET can have only 2 values: UTF8 or AL16UTF16. Both UTF
8 and AL16UTF16 are unicode charactersets, so they can store whatever <character
set> you had as NLS_NCHAR_CHARACTERSET in 8(i). If you are not using N-types the
n keep the default AL16UTF16 or use UTF8, it doesn't matter if you don't use the
types. There is one condition in which this "limitation" can have an undesired
effect: when you are importing an Oracle8i Transportable Tablespace into Oracle9i
you can run into an ORA-19736 (as well with AL16UTF16 as with UTF8). In that case
log a TAR, refer to this note and ask to assign the TAR to the NLS/globalizatio
n team. That team can then assist you to work around this issue. 9) Do i need to
set NLS_LANG to AL16UTF16 when creating/using the NLS_NCHAR_CHARACTERSET ? ----
------------------------------------------------------------------------------
-------As clearly stated in Note 158577.1 <http://metalink.oracle.com/metalink/p
lsql/showdoc?db=NOT&id=158577.1> NLS_LANG Explained (How does Client-Server Char
acter Conversion Work?) point "1.2 What is this NLS_LANG thing anyway?" * NLS_LA
NG is used to let Oracle know what character set your client's OS is USING so that
Oracle can do (if needed) conversion from the client's character set to the data
base character set. NLS_LANG is a CLIENT parameter and has no influence on the da
tabase side. 10) I try to use AL32UTF8 as NLS_NCHAR_CHARACTERSET but it fails wi
th ORA-12714 -------------------------------------------------------------------
-----------From 9i onwards the NLS_NCHAR_CHARACTERSET can have only 2 values: UT
F8 or AL16UTF16. UTF8 is possible so that you can use it (when needed) for 8.x b
ackwards compatibility. In all other conditions AL16UTF16 is the preferred and b
est value. AL16UTF16 has the same Unicode revision as AL32UTF8, so there is no n
eed for AL32UTF8 as NLS_NCHAR_CHARACTERSET. 11) I have the message "( possible n
charset conversion )" during import. -------------------------------------------
----------------------------in the import log you see something similar to this:
Import: Release 9.2.0.4.0 - Production on Fri Jul 9 11:02:42 2004 Copyright (c)
1982, 2002, Oracle Corporation. All rights reserved. Connected to: Oracle9i Ent
erprise Edition Release 9.2.0.4.0 - 64bit Production JServer Release 9.2.0.4.0 -
Production Export file created by EXPORT:V08.01.07 via direct path import done
in WE8ISO8859P1 character set and AL16UTF16 NCHAR character set export server us
es WE8ISO8859P1 NCHAR character set (possible ncharset conversion) This is norma
l and is not a error condition. - If you do not use N-types then this is a pure
informative message. - But even in the case that you use N-types like NCHAR or N
CLOB then this is not a problem: * the database will convert from the "old" NCHA
R characterset to the new one automatically. (and - unlike the "normal" characte
rset - the NLS_LANG has no impact on this conversion during exp/imp) * AL16UTF16
or UTF8 (the only 2 possible values in 9i) are unicode characterset and so can
store any character... So no data loss is to be expected.
12) Can i use AL16UTF16 as NLS_CHARACTERSET ? ----------------------------------
-----------No, AL16UTF16 can only be used as NLS_NCHAR_CHARACTERSET in 9i and ab
ove. Trying to create a database with a AL16UTF16 NLS_CHARACTERSET will fail. 13
) I'm inserting <special character> in a Nchar or Nvarchar2 col but it comes bac
k as ? or ? ... ----------------------------------------------------------------
--------------------------------see point 13 in Note 227330.1 <http://metalink.o
racle.com/metalink/plsql/showdoc?db=NOT&id=227330.1> Character Sets & Conversion
- Frequently Asked Questions 14) Do i need to change the NLS_NCHAR_CHARACTERSET
in 8i to UTF8 BEFORE upgrading to 9i/10g? -------------------------------------
-----------------------------------------------------No, see point 4) in this no
te. 15) Having a UTF8 NLS_CHARACTERSET db is there a advantage to use AL16UTF16
Ntypes ? -----------------------------------------------------------------------
------------
There might be 2 reasons:
a) one possible advantage is storage (disk space). UTF8 uses 1 up to 3 bytes,
AL16UTF16 always 2 bytes. If you have a lot of non-western data (Cyrillic, Chinese,
Japanese, Hindi languages..) then it can be advantageous to use N-types for those
columns. For western data (English, French, Spanish, Dutch, German, Portuguese etc...)
UTF8 will use in most cases less
disk space than AL16UTF16. Note 260893.1 <http://metalink.oracle.com/metalink/p
lsql/showdoc?db=NOT&id=260893.1> Unicode character sets in the Oracle database T
his is not true for (N)CLOB, they are both encoded a internal fixed-width Unicod
e character set Note 258114.1 <http://metalink.oracle.com/metalink/plsql/showdoc
?db=NOT&id=258114.1> Possible action for CLOB/NCLOB storage after 10g upgrade so
they will use the same amount of disk space. b) other possible advantage is ext
ending the limits of CHAR semantics For a single-byte character set encoding, th
e character and byte length are the same. However, multi-byte character set enco
dings do not correspond to the bytes, making sizing the column more difficult. H
ence the reason why CHAR semantics was introduced. However, we still have some p
hysical underlying byte based limits, and development has chosen to allow the full usage
of the underlying limits. This results in the following table, giving the maximum
amount of characters occupying the MAX datalength that can be stored for a certain
datatype in 9i and up. The MAX column is the MAXIMUM amount of CHARACTERS that can
be stored occupying the MAXIMUM data length; seen that UTF8 and AL32UTF8 are VARYING
width charactersets, this means that a string of X chars can be X to X*3 (or X*4 for
AL32) bytes. The MIN column is the maximum size that you can *define* and that Oracle
can store if all data is the MINIMUM datalength (1 byte for AL32UTF8 and UTF8) for
that character set. N-types (NVARCHAR2, NCHAR) are *always* defined in CHAR semantics,
you cannot define them in BYTE. All numbers are CHAR definitions.

            UTF8 (1 to 3 bytes)   AL32UTF8 (1 to 4 bytes)   AL16UTF16 (2 bytes)
            MIN      MAX          MIN      MAX              MIN      MAX
CHAR        2000     666          2000     500              N/A      N/A
VARCHAR2    4000     1333         4000     1000             N/A      N/A
NCHAR       2000     666          N/A      N/A              1000     1000
NVARCHAR2   4000     1333         N/A      N/A              2000     2000
(N/A means not possible)

This means that if you try to store more than 666 characters that occupy 3 bytes each
in UTF8 in a UTF8 CHAR column, you will still get an ORA-01401: inserted value too
large for column (or from 10g onwards: ORA-12899: value too large for column) error,
even if you have defined the column as CHAR (2000 CHAR), so here it might be a good
idea to define that column as NCHAR, which will
raise the MAX to 1000 char's ... Note 144808.1 <http://metalink.oracle.com/metal
ink/plsql/showdoc?db=NOT&id=144808.1> Examples and limits of BYTE and CHAR seman
tics usage Disadvantages using N-types: * You might have some problems with olde
r clients if using AL16UTF16 see point 6) b) in this note * Be sure that you use
(AL32)UTF8 as NLS_CHARACTERSET , otherwise you will run into point 13 of this n
ote. * Do not expect a higher *performance* by using AL16UTF16, it might be fast
er on some systems, but that has more to do with I/O then with the database kern
el. * If you use N-types, DO use the (N'...') syntax when coding it so that Lite
rals are denoted as being in the national character set by prepending letter 'N'
, for example: create table test(a nvarchar2(100));
insert into test values(N'this is NLS_NCHAR_CHARACTERSET string'); Normally you
will choose to use VARCHAR (using a (AL32)UTF8 NLS_CHARACTERSET) for simplicity,
to avoid confusion and possible other limitations who might be imposed by your
application or programming language to the usage of N-types. 16) I have a messag
e running DBUA (Database Upgrade Assistant) about NCHAR type when upgrading from
8i . AL16UTF16 The default Oracle character set for the SQL NCHAR data type, wh
ich is used for the national character set. It encodes Unicode data in the UTF-1
6 encoding. AL32UTF8 An Oracle character set for the SQL CHAR data type, which i
s used for the database character set. It encodes Unicode data in the UTF-8 enco
ding. Unicode Unicode is a universal encoded character set that allows you infor
mation from any language to be stored by using a single character set. Unicode p
rovides a unique code value for every character, regardless of the platform, pro
gram, or language. Unicode database A database whose database character set is U
TF-8. Unicode code point A 16-bit binary value that can represent a unit of enco
ded text for processing and interchange. Every point between U+0000 and U+FFFF i
s a code point. Unicode datatype A SQL NCHAR datatype (NCHAR, NVARCHAR2, and NCL
OB). You can store Unicode characters in columns of these datatypes even if the
database character set is not Unicode. unrestricted multilingual support The abi
lity to use as many languages as desired. A universal character set, such as Uni
code, helps to provide unrestricted multilingual support because it supports a v
ery large character repertoire, encompassing most modern languages of the world.
UTFE A Unicode 3.0 UTF-8 Oracle database character set with 6-byte supplementar
y character support. It is used only on EBCDIC platforms.
UTF8
The UTF8 Oracle character set encodes characters in one, two, or three bytes. It is for
ASCII-based platforms. The UTF8 character set supports Unicode 3.0. Although specific
supplementary characters were not assigned code points in Unicode until version 3.1,
the code point range was allocated for supplementary characters in
Unicode 3.0. Supplementary characters are treated as two separate, user-defined
characters that occupy 6 bytes. UTF-8 The 8-bit encoding of Unicode. It is a var
iable-width encoding. One Unicode character can be 1 byte, 2 bytes, 3 bytes, or
4 bytes in UTF-8 encoding. Characters from the European scripts are represented
in either 1 or 2 bytes. Characters from most Asian scripts are represented in 3
bytes. Supplementary characters are represented in 4 bytes. UTF-16 The 16-bit en
coding of Unicode. It is an extension of UCS-2 and supports the supplementary ch
aracters defined in Unicode 3.1 by using a pair of UCS-2 code points. One Unicod
e character can be 2 bytes or 4 bytes in UTF-16 encoding. Characters (including
ASCII characters) from European scripts and most Asian scripts are represented i
n 2 bytes. Supplementary characters are represented in 4 bytes. wide character A
fixed-width character format that is useful for extensive text processing becau
se it allows data to be processed in consistent, fixed-width chunks. Wide charac
ters are intended to support internal character processing
Oracle started supporting Unicode based character sets in Oracle7. Here is a sum
mary of the Unicode character sets supported in Oracle:

+------------+---------+-----------------+
| Charset    | RDBMS   | Unicode version |
+------------+---------+-----------------+
| AL24UTFFSS | 7.2-8.1 | 1.1             |
| UTF8       | 8.0-10g | 2.1 (8.0-8.1.7) |
|            |         | 3.0 (8.1.7-10g) |
| UTFE       | 8.0-10g | 2.1 (8.0-8.1.7) |
|            |         | 3.0 (8.1.7-10g) |
| AL32UTF8   | 9.0-10g | 3.0 (9.0)       |
|            |         | 3.1 (9.2)       |
|            |         | 3.2 (10.1)      |
| AL16UTF16  | 9.0-10g | 3.0 (9.0)       |
|            |         | 3.1 (9.2)       |
|            |         | 3.2 (10.1)      |
+------------+---------+-----------------+

AL24UTFFSS
AL24UTFFSS was the first Unicode character set supported by Oracle. It was i
ntroduced in Oracle 7.2. The AL24UTFFSS encoding scheme was based on the Unicode
1.1 standard, which is now obsolete. AL24UTFFSS has been de-supported from Orac
le9i. The migration path for existing AL24UTFFSS databases is to upgrade the dat
abase to 8.0 or 8.1, then upgrade the character set to UTF8
before upgrading the database further to 9i or 10g. [NOTE:234381.1] <http://meta
link.oracle.com/metalink/plsql/ml2_documents.showDocument?p_id=234381. 1&p_datab
ase_id=NOT> Changing AL24UTFFSS to UTF8 - AL32UTF8 with ALTER DATABASE CHARACTER
SET UTF8 UTF8 was the UTF-8 encoded character set in Oracle8 and 8i. It followed
the Unicode 2.1 standard between Oracle 8.0 and 8.1.6, and was upgraded to Unic
ode version 3.0 for versions 8.1.7, 9i and 10g. To maintain compatibility with e
xisting installations this character set will remain at Unicode 3.0 in future Or
acle releases. Although specific supplementary characters were not assigned to U
nicode until version 3.1, the allocation for these characters were already defin
ed in 3.0. So if supplementary characters are inserted in a UTF8 database, it wi
ll not corrupt the actual data inside the database. They will be treated as 2 se
parate undefined characters, occupying 6 bytes in storage. We recommend that cus
tomers switch to AL32UTF8 for full supplementary character support. UTFE This is
the UTF8 database character set for the EBCDIC platforms. It has the same prope
rties as UTF8 on ASCII based platforms. The EBCDIC Unicode transformation format
is documented in Unicode Technical Report #16 UTF-EBCDIC. Which can be found at
http://www.unicode.org/unicode/reports/tr16/ AL32UTF8 This is the UTF-8 encoded
character set introduced in Oracle9i. AL32UTF8 is the database character set th
at supports the latest version (3.2 in 10g) of the Unicode standard. It also pro
vides support for the newly defined supplementary characters. All supplementary
characters are stored as 4 bytes. AL32UTF8 was introduced because when UTF8 was
designed (in the times of Oracle8) there was no concept of supplementary charact
ers, therefore UTF8 has a maximum of 3 bytes per character. Changing the design
of UTF8 would break backward compatibility, so a new character set was introduce
d. The introduction of surrogate pairs should mean that no significant architect
ure changes are needed in future versions of the Unicode standard, so the plan i
s to keep enhancing AL32UTF8 as necessary to support future version of the Unico
de standard, for example work is now underway to make sure we support Unicode 4.
0 in AL32UTF8 in the release after 10.1. AL16UTF16 This is the first UTF-16 enco
ded character set in Oracle. It was introduced in Oracle9i as the default nation
al character set (NLS_NCHAR_CHARACTERSET). AL16UTF16 supports the latest version
(3.2 in 10g) of the Unicode standard. It also provides support for the newly de
fined supplementary characters. All supplementary characters are stored as 4 byt
es. As with AL32UTF8, the plan is to keep enhancing AL16UTF16 as necessary to su
pport future version of the Unicode standard. AL16UTF16 cannot be used as a data
base character set (NLS_CHARACTERSET), only as the national character set (NLS_N
CHAR_CHARACTERSET). The database character set is used to identify and to hold S
QL, SQL metadata and PL/SQL source code. It must have either single byte 7-bit A
SCII or single byte EBCDIC as a subset, whichever is native to the deployment pl
atform. Therefore, it is not possible to use a fixed-width, multi-byte character
set (such as AL16UTF16) as the database character set. Trying to create a datab
ase with AL16UTF16 a characterset in 9i and up will give "ORA-12706: THIS CREATE
DATABASE CHARACTER SET IS NOT ALLOWED". Further reading
--------------All the above information is taken from the white paper "Oracle Un
icode database support". The paper itself contains much more information and is
available from: http://otn.oracle.com/tech/globalization/pdf/TWP_Unicode_10gR1.p
df References ---------The following URLs contain a complete list of hex values
and character descriptions for every Unicode character: Unicode Version 3.2: htt
p://www.unicode.org/Public/3.2-Update/UnicodeData3.2.0.txt Unicode Version 3.1:
http://www.unicode.org/Public/3.1-Update/UnicodeData3.1.0.txt Unicode Version 3.
0: http://www.unicode.org/Public/3.0-Update/UnicodeData3.0.0.txt Unicode Version
s 2.x: http://www.unicode.org/unicode/standard/versions/enumeratedversions.html
Unicode Version 1.1: http://www.unicode.org/Public/1.1-Update/UnicodeData1.1.5.t
xt A description of the file format can be found at: http://www.unicode.org/Publ
ic/UNIDATA/UnicodeData.html For a glossary of unicode terms, see: http://www.un
icode.org/glossary/ On above locations you can find the unicode standard, all ch
aracters are there referenced with their UCS-2 codepoint
Some further notes: =================== Note 1: ------Thanks for the detailed re
ply. > > >Furthermore the use of NLS columns on a utf8 database (al32utf8 would
be > better by the way) is > >subject to questions. Correct me if I'm wrong but
I believe that most > >asian character sets can be translated into utf8 without
loosing any > >information. The only exception to this statement is for surrogat
e pairs > >and that's the only difference between al32utf8 and utf8 in Oracle. >
>al32utf8 supports surrogate pairs. > > I found from Oracle documentation that
UTF8 supports surrogate pairs but > requires 6 bytes for surrogate pairs. I shou
ld have clarified : the jdbc drivers don't support these 6-bytes utf8 surrogate
pairs. That's the reason why we introduced al32utf8 as one of the native charact
er set (ascii, isolatin1, utf8, al32utf8, ucs2, al24utffss). Note 2: -------
> > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > >
AL32UTF8 The AL32UTF8 character set encodes characters in one to three bytes. Su
rrogate pairs require four bytes. It is for ASCII-based platforms. UTF8 The UTF8
character set encodes characters in one to three bytes. Surrogate pairs require
six bytes. It is for ASCII-based platforms. AL32UTF8 --------Advantages -------
--1. Surrogate pair Unicode characters are stored in the standard 4 bytes repres
entation, and there is no data conversion upon retrieval and insertion of those
surrogate characters. Also, the storage for those characters requires less disk
space than that of the same characters encoded in UTF8. Disadvantages ----------
--1. You cannot specify the length of SQL CHAR types in the number of characters
(Unicode code points) for surrogate characters. For example, surrogate characte
rs are treated as one code point rather than the standard of two code points. 2.
The binary order for SQL CHAR columns is different from that of SQL NCHAR colum
ns when the data consists of surrogate pair Unicode characters. As a result, CHA
R columns NCHAR columns do not always have the same sort for identical strings.
UTF8 ---Advantages ---------1. You can specify the length of SQL CHAR types as a
number of characters. 2. The binary order on the SQL CHAR columns is always the
same as that of the SQL NCHAR columns when the data consists of the same surrog
ate pair Unicode characters. As a result, CHAR columns and NCHAR columns have th
e same sort for identical strings. Disadvantages ------------1. Surrogate pair U
nicode characters are stored
> > > > > > > >
as 6 bytes instead of the 4 bytes defined by the Unicode standard. As a result,
Oracle has to convert data for those surrogate characters. I dont understand the
1st disadvantage of AL32UTF8 encoding !! If surrogate characters are considered
1 codepoint, then if I declare a CHAR column as of length 40 characters (codepo
ints) , then I can enter 40 surrogate characters.
Note 3:
-------
Universal Character Sets
========================

Character Set Name  Description                                   Language, Country or Region  Comments
==================  ============================================  ===========================  ===============
AL16UTF16           Unicode 3.1 UTF-16 Universal character set    Universal / Unicode          MB, EURO, FIXED
AL32UTF8            Unicode 3.1 UTF-8 Universal character set     Universal / Unicode          MB, ASCII, EURO
UTF8                Unicode 3.0 UTF-8 Universal character set,    Universal / Unicode          MB, ASCII, EURO
                    CESU-8 compliant
UTFE                EBCDIC form of Unicode 3.0 UTF-8               Universal / Unicode          MB, EURO
                    Universal character set

Note 4:
-------
WE8ISO is a single byte character set. It has 255 characters.
Korean data requires a multi-byte character set -- each character could be 1, 2,
3 or more bytes. It is a variable length encoding scheme. It has more than, way
more than 255 characters. I don't see it fitting into we8iso unless they use RA
W in which case it is just bytes, not characters at all. Note 5: ------Hi Tom, W
e migrated our DB 8.1.7 to 9.2.In 8.1.7 we used UTF8 character set.It remains sa
me in 9.2. We know that Oracle 9.2 doesn't have UTF8 but AL32UTF8. Can we keep t
his UTF8 or have to change to AL32UTF8. If we need to change, may we do it by :
alter database character set AL32UTF8 or we must use exp/imp utility? Regards Fo
llowup: what do you mean -- utf8 is still a valid character set?
Note 6: ------Hi Tom, We are migrating from oracle 8.1.6 to oracle 9 R2. We have
about 14 oracle instance. All instances have WE8ISO88591P1 character set. Our c
ompany is expanding globally so we are thinking to use unicode character set wit
h oracle 9. I have few questions on this issue. 1) What is the difference betwee
n UTF-8,UTF-16 Is AL32UTF8 and UTF-8 is same character set or they are different
? Is UTF-16 and AL16UTF16 is same character set or different ? 2) Which characte
r is super set of all character set? If there is any, Does oracle support that c
haracter set? 3) Do we have to change our pl/sql procedure if we move to unicode
database ? The reason for this question is our developer is using ascii charact
er for carrage return and line feed like chr(10) and chr(13) and some other asci
i character . 4) What is impact on CLOB ? 5) What will be the size of the databa
se? Our production DB size is currently 50GB. What it would be in unicode? Thank
s basically utf8 is unicode 3.0 support, utf16 is unicode 3.1 there is no super
super "top" set. Your plsql routines may will have to change -- your data model
may well have to change. You'll find that in utf, european characters (except as
cii -- 7bit data) all take 2 bytes. That varchar2(80) you have in your database?
It might only hold 40 characters of european data (or even less of other kinds o
f data). It is 80 bytes (you can use the new 9i syntax varchar2( N char ) -- it'
ll allocate in characters, not bytes). So, you could find your 80 character desc
ription field cannot hold 80 characters. You might find that x := a || b; fails
-- with string too long in your plsql code due to the increased size. You might f
ind that your string intensive routines run slower (substr(x,1,80) is no longer
byte 1 .. byte 80 -- Oracle has to look through the string to find where charact
ers start and stop -- it is more complex). chr(10) and chr(13) should work fine,
they are simple ASCII.
On clob -- same impact as on varchar2, same issues. Your database could balloon
to 200gb, but it will be somewhere between 50 and 200. As unicode is a VARYING W
IDTH encoding scheme, it is impossible to be precise -- it is not a fixed width
scheme, so we don't know how big your strings will get to be.
21.3 Oracle Rowid's ------------------Rowid's: Every table row has an internal r
owid which contains information about object_id, block_id, file#. Also you can q
uery on the "logical" number rownum. SQL> SELECT * FROM charlie.xyz; ID --------
1 2 NAME -------------------joop gerrit
SQL> SELECT rownum FROM charlie.xyz; ROWNUM --------1 2 SQL> SELECT rowid FROM S
ALES.xyz; ROWID -----------------AAAI92AAQAAAFXbAAA AAAI92AAQAAAFXbAAB - DBMS_RO
WID: DBMS_ROWID. Every row has a rowid. Every row has also an associated logical
"rownum" on which you can query. The rowid is an 18 byte structure that stores
the location of blockid WHERE the row is in. The old format is the restricted fo
rmat of Oracle 7 The new format is the extended format of Oracle 8, 8i format: O
OOOOOFFFBBBBBRRRR 000000=object_id FFF=relative datafile number BBBBB=block_id
RRR=row in block The dbms package DBMS_ROWID has several function to convert FRO
M the one format to the other. DBMS_ROWID EXAMPLES: -------------------SELECT DB
MS_ROWID.ROWID_TO_EXTENDED(ROWID,null,null,0), DBMS_ROWID.ROWID_TO_RESTRICTED(RO
WID,0), rownum FROM CHARLIE.XYZ; SELECT dbms_rowid.rowid_block_number(rowid) FRO
M emp WHERE ename = 'KING'; SELECT dbms_rowid.rowid_block_number(rowid) FROM TCM
LOGDBUSER.EVENTLOG WHERE id = 5; This example returns the ROWID for a row in the
EMP table, extracts the data object number FROM the ROWID, using the ROWID_OBJE
CT function in the DBMS_ROWID package, then displays the object number: DECLARE
object_no INTEGER; row_id ROWID; BEGIN SELECT ROWID INTO row_id FROM TCMLOGDBUSE
R.EVENTLOG WHERE id=5; object_no := dbms_rowid.rowid_object(row_id); dbms_output
.put_line('The obj. # is '|| object_no); END; / PL/SQL procedure successfully co
mpleted. SQL> set serveroutput on SQL> / The obj. # is 28954 PL/SQL procedure su
ccessfully completed. SQL> select * from dba_objects where object_id=28954; OWNE
R -----------------------------OBJECT_NAME -------------------------------------
---------------------SUBOBJECT_NAME OBJECT_ID DATA_OBJECT_ID -------------------
----------- ---------- -------------OBJECT_TYPE CREATED LAST_DDL_ TIMESTAMP ----
-------------- --------- --------- ------------------STATUS T G S ------- - - TC
MLOGDBUSER EVENTLOG
TABLE VALID
N N N
28954 28954 05-DEC-04 05-DEC-04 2004-12-05:22:26:10
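As a small addition (a sketch, using the same EVENTLOG example table as above), DBMS_ROWID can
also break a rowid down into its object, relative file, block and row number:

SELECT rowid,
       dbms_rowid.rowid_object(rowid)        object_id,
       dbms_rowid.rowid_relative_fno(rowid)  relative_fno,
       dbms_rowid.rowid_block_number(rowid)  block_no,
       dbms_rowid.rowid_row_number(rowid)    row_no
FROM   TCMLOGDBUSER.EVENTLOG
WHERE  id = 5;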
21.4 HETEROGENEOUS SERVICES: ---------------------------Generic connectivity is
intended for low-end data integration solutions requiring the ad hoc query capab
ility to connect from Oracle8i to non-Oracle database systems. Generic connectiv
ity is enabled by Oracle Heterogeneous Services, allowing you to connect to non-
Oracle systems with improved performance and throughput. Generic connectivity is
implemented as a Heterogeneous Services ODBC agent. An ODBC agent is included a
s part of your Oracle8i system. To access the non-Oracle data store using generi
c connectivity, the agent works with an ODBC driver. Oracle8i provides support f
or the ODBC driver interface. The driver that you use must be on the same machin
e as the agent. The non-Oracle data stores can reside on the same machine as Ora
cle8i or a different machine. Agent processes are usually started when a user se
ssion makes its first non-Oracle system access through a database link. These co
nnections are made using Oracle's remote data access software, Oracle Net Servic
es, which enables both client-server and server-server communication. The agent
process continues to run until the user session is disconnected or the database
link is explicitly closed. Multithreaded agents behave slightly differently. The
y have to be explicitly started and shut down by a database administrator instea
d of automatically being spawned by Oracle Net Services. Oracle has Generic Conn
ectivity agents for ODBC and OLE DB that enable you to use ODBC and OLE DB driver
s to access non-Oracle systems that have an ODBC or an OLE DB interface. Setup:
-----1. HS datadictonary ------------------To install the data dictionary tables
and views for Heterogeneous Services, you must run a script that creates all th
e Heterogeneous Services data dictionary tables, views, and packages. On most sy
stems the script is called caths.sql and resides in $ORACLE_HOME/rdbms/admin. Ch
eck for the existence of Heterogeneous Services data dictionary views. All normal standard preparations for HS need to be in place in Oracle 9i. To recap this
here, if you must install HS from scratch:
-
run caths.sql as SYS on Ora9i DB Server. The HS Agent will be installed as part
of 9i DB install. It will be started as part of the listener. On NT/2000, The ag
ent works with a OLEDB or ODBC driver to connect to target db The DB Server will
connect to the agent through NET8, which is why a tnsnames.ora and a listener.o
ra entry needs to be setup
You can also check on HS installation. Just check on existence of the HS% views
in the SYS schema, for example, SYS.HS_FDS_CLASS. 2. tnsnames.ora and listener.o
ra -------------------------------To initiate a connection to the non-Oracle sys
tem, the Oracle9i server starts an agent process through the Oracle Net listener
. For the Oracle9i server to be able to connect to the agent, you must configure
tnsnames.ora and listener.ora.
-------------------------------------------------------------------------------
tnsnames examples:

Sybase_sales =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = tcp)(HOST = dlsun206)(PORT = 1521))    -- local machine
    (CONNECT_DATA = (SERVICE_NAME = SalesDB))
    (HS = OK)
  )

TNSNAMES.ORA
hsmsql =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = tcp)(HOST = winhost)(PORT = 1521))   -- local machine
    )
    (CONNECT_DATA = (SID = msql))    -- needs to match the sid in listener.ora
    (HS = OK)
  )

TG4MSQL.WORLD =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = ukp15340)(PORT = 1528))
    (CONNECT_DATA = (SID = tg4msql))
    (HS = OK)
  )

-------------------------------------------------------------------------------
listener.ora examples:

LISTENER =
  (ADDRESS_LIST =
    (ADDRESS = (PROTOCOL = tcp)(HOST = dlsun206)(PORT = 1521))
  )
...
SID_LIST_LISTENER =
  (SID_LIST =
    (SID_DESC =
      (SID_NAME = SalesDB)
      (ORACLE_HOME = /home/oracle/megabase/9.0.1)
      (PROGRAM = tg4mb80)
      (ENVS = LD_LIBRARY_PATH=non_oracle_system_lib_directory)
    )
  )

LISTENER.ORA
LISTENER =
  (DESCRIPTION_LIST =
    (DESCRIPTION =
      (ADDRESS_LIST =
        (ADDRESS = (PROTOCOL = TCP)(HOST = winhost)(PORT = 1521))
      )
    )
  )

SID_LIST_LISTENER =
  (SID_LIST =
    (SID_DESC =
      (SID_NAME = msql)            <== needs to match the sid in tnsnames.ora
      (ORACLE_HOME = E:\Ora816)
      (PROGRAM = hsodbc)           <== hsodbc is the executable
    )
  )

3. create the initialization file:
----------------------------------
Create the Initialization file. Oracle supplies a sample initialization file nam
ed "inithsodbc.ora" which is stored in the $ORACLE_HOME\hs\admin directory. To c
reate an initialization file, copy the appropriate sample file and rename the fi
le to initHS_SID.ora. In this example the sid noted in the listener and tnsnames
is msql so our new initialization file is called initmsql.ora.

INITMSQL.ORA
# HS init parameters
#
HS_FDS_CONNECT_INFO    = msql          <= odbc data_source_name
HS_FDS_TRACE_LEVEL     = 0             <= trace levels 0 - 4 (4 is verbose)
HS_FDS_TRACE_FILE_NAME = hsmsql.trc    <= trace file name
#
# Environment variables required for the non-Oracle system
# set <envvar>=<value>
#

HS_FDS_SHAREABLE_NAME
Default value: none
Range of values: not applicable

HS_FDS_SHAREABLE_NAME: Specifies the full path name to the ODBC library. This parameter is required
when you are using generic connectivity to access data from an ODBC provider on a UNIX machine.
4. create a database link:
---------------------------
CREATE DATABASE LINK sales USING 'Sybase_sales';
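Once the link exists you can test it. The following is only a sketch: the remote table name "customer" is an assumption (it depends on the non-Oracle system, and ODBC sources often require quoted lowercase identifiers); the second query uses a standard HS dictionary view:

SELECT * FROM "customer"@sales;          -- query a remote table through the HS link
SELECT * FROM SYS.HS_FDS_CLASS;          -- confirm the HS data dictionary is installed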
Common Errors:
--------------
agtctl.exe / agentctl : agent control utility (errors ORA-28591 unable to access parameter file, ORA-28592 agent SID not set)
hsodbc.exe            : HS ODBC agent executable
caths.sql             : HS data dictionary script
What is the difference between agtctl and lsnrctl? (dbsnmp_start)

Error: ORA-28591
Text: agent control utility: unable to access parameter file
---------------------------------------Cause: The agent control utility was unab
le to access its parameter file. This could be because it could not find its adm
in directory or because permissions on directory were not correctly set. Action:
The agent control utility puts its parameter file in either the directory point
ed to by the environment variable AGTCTL_ADMIN or in the directory pointed to by
the environment variable TNS_ADMIN. Make sure that at least one of these enviro
nment variables is set and that it points to a directory that the agent has acce
ss to. SET AGTCTL_ADMIN=\OPT\ORACLE\ORA81\HS\ADMIN Error: ORA-28592 Text: agent
control utility: agent SID not set ---------------------------------------------
-----------------------------Cause: The agent needs to know the value of the AGE
NT_SID parameter before it can process any commands. If it does not have a value
for AGENT_SID then all commands will fail. Action: Issue the command SET AGENT_
SID <value> and then retry the command that failed.

Error:
------
fix:
Set the HS_FDS_TRACE_FILE_NAME to a filename:   HS_FDS_TRACE_FILE_NAME = test.log
or comment it out:                              #HS_FDS_TRACE_FILE_NAME

Error: incorrect characters
---------------------------
Change the HS_LANGUAGE to a correct NLS setting like AMERICAN_AMERICA.WE8MSWIN1252

Error: ORA-02085
----------------
HS_FDS_CONNECT_INFO    = <SystemDSN_name>
HS_FDS_TRACE_LEVEL     = 0
HS_FDS_TRACE_FILE_NAME = c:\hs.log
HS_DB_NAME             = exhsodbc         -- case sensitive
HS_DB_DOMAIN           = ch.oracle.com    -- case sensitive

ERROR: ORA-02085
----------------
SET GLOBAL_NAMES TRUE

ERROR: ORA-02068 and ORA-28511
------------------------------
LD_LIBRARY_PATH=/u06/home/oracle/support/network/ODBC/lib
If the LD_LIBRARY_PATH does not contain the path to the ODBC library, add the ODBC library path and
start the listener with this environment.

LD_LIBRARY_PATH=/u01/app/oracle/product/8.1.7/lib; export LD_LIBRARY_PATH

When the listener launches the agent hsodbc, the agent inherits the environment from the listener and
needs to have the ODBC library path in order to access the ODBC shareable file. The shareable file is
defined in the init<sid>.ora file located in the $ORACLE_HOME/hs/admin directory.
HS_FDS_SHAREABLE_NAME=/u06/home/oracle/support/network/ODBC/lib/libodbc.so
21.5 SET EVENTS: ---------------Note 1: ------- What is a database EVENT and how
does one set it? Oracle trace events are useful for debugging the Oracle databa
se server. The following two examples are simply to demonstrate syntax. Refer to
later notes on this page for an
explanation of what these particular events do. Events can be activated by eithe
r adding them to the INIT.ORA parameter file. E.g. event='1401 trace name errors
tack, level 12' ... or, by issuing an ALTER SESSION SET EVENTS command: E.g. alt
er session set events '10046 trace name context forever, level 4'; The alter ses
sion method only affects the user's current session, whereas changes to the INIT
.ORA file will affect all sessions once the database has been restarted. - What
database events can be set? The following events are frequently used by DBAs and
Oracle Support to diagnose problems: 10046 trace name context forever, level 4
Trace SQL statements and show bind variables in trace output. 10046 trace name c
ontext forever, level 8 This shows wait events in the SQL trace files 10046 trac
e name context forever, level 12 This shows both bind variable names and wait ev
ents in the SQL trace files

1401 trace name errorstack, level 12
1401 trace name errorstack, level 4
1401 trace name processstate
Dumps out trace information if an ORA-1401 "inserted value too large for column" error occurs.
The 1401 can be replaced by any other Oracle Server error code that you want to trace.
60 trace name errorstack level 10 Show where in the code Oracle gets a deadlock
(ORA-60), and may help to diagnose the problem. - The following list of events a
re examples only. They might be version specific, so please call Oracle before u
sing them: 10210 trace name context forever, level 10 10211 trace name context f
orever, level 10 10231 trace name context forever, level 10 These events prevent
database block corruptions 10049 trace name context forever, level 2 Memory pro
tect cursor 10210 trace name context forever, level 2 Data block check 10211 tra
ce name context forever, level 2 Index block check 10235 trace name context fore
ver, level 1 Memory heap check 10262 trace name context forever, level 300
Allow 300 bytes memory leak for connections - How can one dump internal database
structures? The following (mostly undocumented) commands can be used to obtain
information about internal database structures. -- Dump control file contents al
ter session set events 'immediate trace name CONTROLF level 10' / -- Dump file h
eaders alter session set events 'immediate trace name FILE_HDRS level 10' / -- D
ump redo log headers alter session set events 'immediate trace name REDOHDR leve
l 10' / -- Dump the system state -- NOTE: Take 3 successive SYSTEMSTATE dumps, w
ith 10 minute intervals alter session set events 'immediate trace name SYSTEMSTA
TE level 10' / -- Dump the process state alter session set events 'immediate tra
ce name PROCESSSTATE level 10' / -- Dump Library Cache details alter session set
events 'immediate trace name library_cache level 10' / -- Dump optimizer statis
tics whenever a SQL statement is parsed (hint: change statement or flush pool) a
lter session set events '10053 trace name context forever, level 1' / -- Dump a
database block (File/ Block must be converted to DBA address) -- Convert file an
d block number to a DBA (database block address). Eg: variable x varchar2; exec
:x := dbms_utility.make_data_block_address(1,12); print x alter session set even
ts 'immediate trace name blockdump level 50360894'
/

ALTER SESSION SET EVENTS '1652 trace name errorstack level 1';
or
alter system set events '1652 trace name errorstack level 1';
alter system set events '1652 trace name errorstack off';
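To round this off, a small sketch of switching the most common trace event (10046) on and off for your own session, and for another session via oradebug (assumes SYSDBA access; the OS process id 12345 is just an example):

-- own session
alter session set events '10046 trace name context forever, level 12';
-- ... run the statements to be traced ...
alter session set events '10046 trace name context off';

-- another session
SQL> oradebug setospid 12345
SQL> oradebug event 10046 trace name context forever, level 12
SQL> oradebug tracefile_name
SQL> oradebug event 10046 trace name context off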
Note 2:
-------
Doc ID:             Note:218105.1
Content Type:       TEXT/PLAIN
Subject:            Introduction to ORACLE Diagnostic EVENTS
Type:               BULLETIN
Creation Date:      11-NOV-2002
Last Revision Date: 20-NOV-2002
Status:             PUBLISHED

PURPOSE
-------
This document describes the different types of Oracle EVENT that exist to help c
ustomers and Oracle Support Services when investigating Oracle RDBMS related iss
ues. This note will only provide information of a general nature. Specific infor
mation on the usage of a given event should be provided by Oracle Support Servic
es or the Support related article that is suggesting the use of a given event. T
his note will not provide that level of detail. SCOPE & APPLICATION ------------
------The information held here is of use to Oracle DBAs, developers and Oracle
Support Services. Introduction to ORACLE Diagnostic EVENTS ---------------------
------------------Before proceeding, please review the following note as it cont
ain some important additional information on Events. [NOTE:75713.1] <ml2_documen
ts.showDocument?p_id=75713.1&p_database_id=NOT> "Important Customer information
about using Numeric Events" EVENTS are primarily used to produce additional diag
nostic information when insufficient information is available to resolve a given
problem. EVENTS are also used to workaround or resolve problems by changing Ora
cle's behaviour or enabling undocumented features. *WARNING* Do not use an Oracl
e Diagnostic Event unless directed to do so by Oracle Support Services or via a
Support related article on Metalink. Incorrect usage can result in disruptions t
o the database services. Setting EVENTS -------------There are a number of ways
in which events can be set. How you set an event depends on the nature of the ev
ent and the circumstances at the time. As stated above, specific information on
how you set a given event should be provided by Oracle Support Services or the S
upport related article that is suggesting the use of a given event. Most events
can be set using more than one of the following methods : o As INIT parameters o
In the current session o From another session using a Debug tool
INIT Parameters ~~~~~~~~~~~~~~~ Syntax: EVENT = "<event_name> <action>" Referenc
e: [NOTE:160178.1] <ml2_documents.showDocument?p_id=160178.1&p_database_id=NOT>
How to set EVENTS in the SPFILE Current Session ~~~~~~~~~~~~~~~ Syntax: ALTER SE
SSION SET EVENTS '<event_name> <action>'; From another Session using a Debug too
l ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ There are a number of debug tools : o
ORADEBUG o ORAMBX (VMS only) ORADEBUG : ======== Syntax: Prior to Oracle 9i, SVR
MGR> oradebug event <event_name> <action> Oracle 9i and above : SQL> oradebug ev
ent <event_name> <action> Reference: [NOTE:29786.1] <ml2_documents.showDocument?
p_id=29786.1&p_database_id=NOT> "SUPTOOL: ORADEBUG 7.3+ (Server Manager/SQLPLUS
Debug Commands)" [NOTE:1058210.6] <ml2_documents.showDocument?p_id=1058210.6&p_d
atabase_id=NOT> "HOW TO ENABLE SQL TRACE FOR ANOTHER SESSION USING ORADEBUG" ORA
MBX : on OpenVMS is still available and described under : ====== [NOTE:29062.1]
<ml2_documents.showDocument?p_id=29062.1&p_database_id=NOT> "SUPTOOL: ORAMBX (VM
S) - Quick Reference" This note will not enter into additional details on these
tools. EVENT Categories ----------------
The most commonly used events fall into one of four categories : o o o o Dump di
agnostic information on request Dump diagnostic information when an error occurs
Change Oracle's behaviour Produce trace diagnostic information as the instance
runs
Dump diagnostic information on request (Immediate Dump) ~~~~~~~~~~~~~~~~~~~~~~~~
~~~~~~~~~~~~~~ An immediate dump Event will result in information immediately be
ing written to a trace file. Some common immediate dump Events include : SYSTEMS
TATE, ERRORSTACK, CONTROLF, FILE_HDRS and REDOHDR These type of events are typic
ally set in the current session. For example: ALTER SESSION SET EVENTS 'IMMEDIAT
E trace name ERRORSTACK level 3'; Dump Diagnostic information when an error occu
rs (On-Error Dump) ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ The on-error
dump Event is similar to the immediate dump Event with the difference being tha
t the trace output is only produced when the given error occurs. You can use vir
tually any standard Oracle error to trigger this type of event. For example, an
ORA-942 "table or view does not exist" error does not include the name of the pr
oblem table or view. When this is not obvious from the application (due to its c
omplexity), then it can be difficult to investigate the source of the problem. H
owever, an On-Error dump against the 942 error can help narrow the search. These
type of events are typically set as INIT parameters. For example, using the 942
error : EVENT "942 trace name ERRORSTACK level 3" Once established, the next ti
me a session encounters an ORA-942 error, a trace file will be produced that sho
ws (amongst other information) the current SQL statement being executed. This cu
rrent SQL can now be checked and the offending table or view more easily discove
red. Change Oracle's behaviour ~~~~~~~~~~~~~~~~~~~~~~~~~ Instance behaviour can
be changed or hidden features can be enabled using these type of Event A common
event in this category is 10262 which is discussed in
[NOTE:21235.1] <ml2_documents.showDocument?p_id=21235.1&p_database_id=NOT> EVENT
: 10262 "Do not check for memory leaks" These type of events are typically set a
s INIT parameters. For example: EVENT "10262 trace name context forever, level 4
000" Produce trace diagnostic information as the instance runs (Trace Events) ~~
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Trace events produce dia
gnostic information as processes are running. They are used to gather additional
information about a problem. A common event in this category is 10046 which is
discussed in [NOTE:21154.1] <ml2_documents.showDocument?p_id=21154.1&p_database_
id=NOT> EVENT: 10046 "enable SQL statement tracing (including binds/waits)" Thes
e type of events are typically set as INIT parameters. For example: EVENT = "100
46 trace name context forever, level 12" Summary ------EVENT usage and syntax ca
n be very complex and due to the possible impact on the database, great care sho
uld be taken when dealing with them. Oracle Support Services (or a Support artic
le) should provide information on the appropriate method to be adopted and synta
x to be used when establishing a given event. If it is possible to do so, test a
n event against a development system prior to doing the same thing on a producti
on system. The misuse of events can lead to a loss of service. RELATED DOCUMENTS
----------------[NOTE:75713.1] <ml2_documents.showDocument?p_id=75713.1&p_datab
ase_id=NOT> Important Customer information about using Numeric Events [NOTE:2123
5.1] <ml2_documents.showDocument?p_id=21235.1&p_database_id=NOT> EVENT: 10262 "D
o not check for memory leaks" [NOTE:21154.1] <ml2_documents.showDocument?p_id=21
154.1&p_database_id=NOT> EVENT: 10046 "enable SQL statement tracing (including b
inds/waits)" [NOTE:160178.1] <ml2_documents.showDocument?p_id=160178.1&p_databas
e_id=NOT> How to set EVENTS in the SPFILE [NOTE:1058210.6] <ml2_documents.showDo
cument?p_id=1058210.6&p_database_id=NOT> HOW TO ENABLE SQL TRACE FOR ANOTHER SES
SION USING ORADEBUG [NOTE:29786.1] <ml2_documents.showDocument?p_id=29786.1&p_da
tabase_id=NOT> SUPTOOL: ORADEBUG 7.3+ (Server Manager/SQLPLUS Debug Commands)
[NOTE:29062.1] <ml2_documents.showDocument?p_id=29062.1&p_database_id=NOT> SUPTO
OL: ORAMBX (VMS) - Quick Reference
======================
22. DBA% and v$ views
======================
(all views listed below are owned by SYS)

NLS:
----
NLS_DATABASE_PARAMETERS
NLS_INSTANCE_PARAMETERS
NLS_SESSION_PARAMETERS

DBA:
----
DBA_2PC_NEIGHBORS DBA_2PC_PENDING DBA_ALL_TABLES DBA_ANALYZE_OBJECTS DBA_ASSOCIATIONS
DBA_AUDIT_EXISTS DBA_AUDIT_OBJECT DBA_AUDIT_SESSION DBA_AUDIT_STATEMENT DBA_AUDIT_TRAIL
DBA_CACHEABLE_OBJECTS DBA_CACHEABLE_TABLES DBA_CACHEABLE_TABLES_BASE DBA_CATALOG
DBA_CLUSTERS DBA_CLUSTER_HASH_EXPRESSIONS DBA_CLU_COLUMNS DBA_COLL_TYPES
DBA_COL_COMMENTS DBA_COL_PRIVS DBA_CONSTRAINTS DBA_CONS_COLUMNS DBA_CONTEXT
DBA_DATA_FILES DBA_DB_LINKS DBA_DEPENDENCIES DBA_DIMENSIONS DBA_DIM_ATTRIBUTES
DBA_DIM_CHILD_OF DBA_DIM_HIERARCHIES DBA_DIM_JOIN_KEY DBA_DIM_LEVELS DBA_DIM_LEVEL_KEY
DBA_DIRECTORIES
DBA_DMT_FREE_SPACE DBA_DMT_USED_EXTENTS DBA_ERRORS DBA_EXP_FILES DBA_EXP_OBJECTS
DBA_EXP_VERSION DBA_EXTENTS DBA_FREE_SPACE DBA_FREE_SPACE_COALESCED DBA_FREE_SP
ACE_COALESCED_TMP1 DBA_FREE_SPACE_COALESCED_TMP2 DBA_FREE_SPACE_COALESCED_TMP3 D
BA_IAS_CONSTRAINT_EXP DBA_IAS_GEN_STMTS DBA_IAS_GEN_STMTS_EXP DBA_IAS_OBJECTS DB
A_IAS_OBJECTS_BASE DBA_IAS_OBJECTS_EXP DBA_IAS_POSTGEN_STMTS DBA_IAS_PREGEN_STMT
S DBA_IAS_SITES DBA_IAS_TEMPLATES DBA_INDEXES DBA_INDEXTYPES DBA_INDEXTYPE_OPERA
TORS DBA_IND_COLUMNS DBA_IND_EXPRESSIONS DBA_IND_PARTITIONS DBA_IND_SUBPARTITION
S DBA_INTERNAL_TRIGGERS DBA_JAVA_POLICY DBA_JOBS DBA_JOBS_RUNNING DBA_LIBRARIES
DBA_LMT_FREE_SPACE DBA_LMT_USED_EXTENTS DBA_LOBS DBA_LOB_PARTITIONS DBA_LOB_SUBP
ARTITIONS DBA_METHOD_PARAMS DBA_METHOD_RESULTS DBA_MVIEWS DBA_MVIEW_AGGREGATES D
BA_MVIEW_ANALYSIS DBA_MVIEW_DETAIL_RELATIONS DBA_MVIEW_JOINS DBA_MVIEW_KEYS DBA_
NESTED_TABLES DBA_OBJECTS DBA_OBJECT_SIZE DBA_OBJECT_TABLES DBA_OBJ_AUDIT_OPTS D
BA_OPANCILLARY DBA_OPARGUMENTS DBA_OPBINDINGS DBA_OPERATORS DBA_OUTLINES DBA_OUT
LINE_HINTS
DBA_PARTIAL_DROP_TABS DBA_PART_COL_STATISTICS DBA_PART_HISTOGRAMS DBA_PART_INDEX
ES DBA_PART_KEY_COLUMNS DBA_PART_LOBS DBA_PART_TABLES DBA_PENDING_TRANSACTIONS D
BA_POLICIES DBA_PRIV_AUDIT_OPTS DBA_PROFILES DBA_QUEUES DBA_QUEUE_SCHEDULES DBA_
QUEUE_TABLES DBA_RCHILD DBA_REFRESH DBA_REFRESH_CHILDREN DBA_REFS DBA_REGISTERED
_SNAPSHOTS DBA_REGISTERED_SNAPSHOT_GROUPS DBA_REPAUDIT_ATTRIBUTE DBA_REPAUDIT_CO
LUMN DBA_REPCAT DBA_REPCATLOG DBA_REPCAT_REFRESH_TEMPLATES DBA_REPCAT_TEMPLATE_O
BJECTS DBA_REPCAT_TEMPLATE_PARMS DBA_REPCAT_TEMPLATE_SITES DBA_REPCAT_USER_AUTHO
RIZATIONS DBA_REPCAT_USER_PARM_VALUES DBA_REPCOLUMN DBA_REPCOLUMN_GROUP DBA_REPC
ONFLICT DBA_REPDDL DBA_REPFLAVORS DBA_REPFLAVOR_COLUMNS DBA_REPFLAVOR_OBJECTS DB
A_REPGENERATED DBA_REPGENOBJECTS DBA_REPGROUP DBA_REPGROUPED_COLUMN DBA_REPGROUP
_PRIVILEGES DBA_REPKEY_COLUMNS DBA_REPOBJECT DBA_REPPARAMETER_COLUMN DBA_REPPRIO
RITY DBA_REPPRIORITY_GROUP DBA_REPPROP DBA_REPRESOLUTION DBA_REPRESOLUTION_METHO
D DBA_REPRESOLUTION_STATISTICS DBA_REPRESOL_STATS_CONTROL DBA_REPSCHEMA DBA_REPS
ITES DBA_RGROUP DBA_ROLES DBA_ROLE_PRIVS DBA_ROLLBACK_SEGS
DBA_RSRC_CONSUMER_GROUPS DBA_RSRC_CONSUMER_GROUP_PRIVS DBA_RSRC_MANAGER_SYSTEM_P
RIVS DBA_RSRC_PLANS DBA_RSRC_PLAN_DIRECTIVES DBA_RULESETS DBA_SEGMENTS DBA_SEQUE
NCES DBA_SNAPSHOTS DBA_SNAPSHOT_LOGS DBA_SNAPSHOT_LOG_FILTER_COLS DBA_SNAPSHOT_R
EFRESH_TIMES DBA_SOURCE DBA_STMT_AUDIT_OPTS DBA_SUBPART_COL_STATISTICS DBA_SUBPA
RT_HISTOGRAMS DBA_SUBPART_KEY_COLUMNS DBA_SUMMARIES DBA_SUMMARY_AGGREGATES DBA_S
UMMARY_DETAIL_TABLES DBA_SUMMARY_JOINS DBA_SUMMARY_KEYS DBA_SYNONYMS DBA_SYS_PRI
VS DBA_TABLES DBA_TABLESPACES DBA_TAB_COLUMNS DBA_TAB_COL_STATISTICS DBA_TAB_COM
MENTS DBA_TAB_HISTOGRAMS DBA_TAB_MODIFICATIONS DBA_TAB_PARTITIONS DBA_TAB_PRIVS
DBA_TAB_SUBPARTITIONS DBA_TEMP_FILES DBA_TRIGGERS DBA_TRIGGER_COLS DBA_TS_QUOTAS
DBA_TYPES DBA_TYPE_ATTRS DBA_TYPE_METHODS DBA_UNUSED_COL_TABS DBA_UPDATABLE_COL
UMNS DBA_USERS DBA_USTATS DBA_VARRAYS DBA_VIEWS

V_$:
----
V_$ACCESS V_$ACTIVE_INSTANCES V_$AQ V_$AQ1 V_$ARCHIVE
V_$ARCHIVED_LOG V_$ARCHIVE_DEST V_$ARCHIVE_PROCESSES V_$BACKUP V_$BACKUP_ASYNC_I
O V_$BACKUP_CORRUPTION V_$BACKUP_DATAFILE V_$BACKUP_DEVICE V_$BACKUP_PIECE V_$BA
CKUP_REDOLOG V_$BACKUP_SET V_$BACKUP_SYNC_IO V_$BGPROCESS V_$BH V_$BSP V_$BUFFER
_POOL V_$BUFFER_POOL_STATISTICS V_$CIRCUIT V_$CLASS_PING V_$COMPATIBILITY V_$COM
PATSEG V_$CONTEXT V_$CONTROLFILE V_$CONTROLFILE_RECORD_SECTION V_$COPY_CORRUPTIO
N V_$DATABASE V_$DATAFILE V_$DATAFILE_COPY V_$DATAFILE_HEADER V_$DBFILE V_$DBLIN
K V_$DB_CACHE_ADVICE V_$DB_OBJECT_CACHE V_$DB_PIPES V_$DELETED_OBJECT V_$DISPATC
HER V_$DISPATCHER_RATE V_$DLM_ALL_LOCKS V_$DLM_CONVERT_LOCAL V_$DLM_CONVERT_REMO
TE V_$DLM_LATCH V_$DLM_LOCKS V_$DLM_MISC V_$DLM_RESS V_$DLM_TRAFFIC_CONTROLLER V
_$ENABLEDPRIVS V_$ENQUEUE_LOCK V_$EVENT_NAME V_$EXECUTION V_$FAST_START_SERVERS
V_$FAST_START_TRANSACTIONS V_$FILESTAT V_$FILE_PING V_$FIXED_TABLE V_$FIXED_VIEW
_DEFINITION V_$GLOBAL_BLOCKED_LOCKS V_$GLOBAL_TRANSACTION V_$HS_AGENT
V_$HS_PARAMETER V_$HS_SESSION V_$INDEXED_FIXED_COLUMN V_$INSTANCE V_$INSTANCE_RE
COVERY V_$KCCDI V_$KCCFE V_$LATCH V_$LATCHHOLDER V_$LATCHNAME V_$LATCH_CHILDREN
V_$LATCH_MISSES V_$LATCH_PARENT V_$LIBRARYCACHE V_$LICENSE V_$LOADCSTAT V_$LOADI
STAT V_$LOADPSTAT V_$LOADTSTAT V_$LOCK V_$LOCKED_OBJECT V_$LOCKS_WITH_COLLISIONS
V_$LOCK_ACTIVITY V_$LOCK_ELEMENT V_$LOG V_$LOGFILE V_$LOGHIST V_$LOGMNR_CONTENT
S V_$LOGMNR_DICTIONARY V_$LOGMNR_LOGS V_$LOGMNR_PARAMETERS V_$LOG_HISTORY V_$MAX
_ACTIVE_SESS_TARGET_MTH V_$MLS_PARAMETERS V_$MTS V_$MYSTAT V_$NLS_PARAMETERS V_$
NLS_VALID_VALUES V_$OBJECT_DEPENDENCY V_$OBSOLETE_PARAMETER V_$OFFLINE_RANGE V_$
OPEN_CURSOR V_$OPTION V_$PARALLEL_DEGREE_LIMIT_MTH V_$PARAMETER V_$PARAMETER2 V_
$PQ_SESSTAT V_$PQ_SLAVE V_$PQ_SYSSTAT V_$PQ_TQSTAT V_$PROCESS V_$PROXY_ARCHIVEDL
OG V_$PROXY_DATAFILE V_$PWFILE_USERS V_$PX_PROCESS V_$PX_PROCESS_SYSSTAT V_$PX_S
ESSION V_$PX_SESSTAT
V_$QUEUE V_$RECOVERY_FILE_STATUS V_$RECOVERY_LOG V_$RECOVERY_PROGRESS V_$RECOVER
Y_STATUS V_$RECOVER_FILE V_$REQDIST V_$RESERVED_WORDS V_$RESOURCE V_$RESOURCE_LI
MIT V_$ROLLNAME V_$ROLLSTAT V_$ROWCACHE V_$ROWCACHE_PARENT V_$ROWCACHE_SUBORDINA
TE V_$RSRC_CONSUMER_GROUP V_$RSRC_CONSUMER_GROUP_CPU_MTH V_$RSRC_PLAN V_$RSRC_PL
AN_CPU_MTH V_$SESSION V_$SESSION_CONNECT_INFO V_$SESSION_CURSOR_CACHE V_$SESSION
_EVENT V_$SESSION_LONGOPS V_$SESSION_OBJECT_CACHE V_$SESSION_WAIT V_$SESSTAT V_$
SESS_IO V_$SGA V_$SGASTAT V_$SHARED_POOL_RESERVED V_$SHARED_SERVER V_$SORT_SEGME
NT V_$SORT_USAGE V_$SQL V_$SQLAREA V_$SQLTEXT V_$SQLTEXT_WITH_NEWLINES V_$SQL_BI
ND_DATA V_$SQL_BIND_METADATA V_$SQL_CURSOR V_$SQL_SHARED_CURSOR V_$SQL_SHARED_ME
MORY V_$STATNAME V_$SUBCACHE V_$SYSSTAT V_$SYSTEM_CURSOR_CACHE V_$SYSTEM_EVENT V
_$SYSTEM_PARAMETER V_$SYSTEM_PARAMETER2 V_$TABLESPACE V_$TARGETRBA V_$TEMPFILE V
_$TEMPORARY_LOBS V_$TEMPSTAT V_$TEMP_EXTENT_MAP V_$TEMP_EXTENT_POOL V_$TEMP_PING
V_$TEMP_SPACE_HEADER V_$THREAD V_$TIMER V_$TRANSACTION V_$TRANSACTION_ENQUEUE V_
$TYPE_SIZE V_$VERSION V_$WAITSTAT V_$_LOCK
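To regenerate or extend such a listing yourself, a small sketch using only standard dictionary views:

SELECT view_name, owner
FROM   dba_views
WHERE  view_name LIKE 'DBA\_%' ESCAPE '\'
OR     view_name LIKE 'V\_$%' ESCAPE '\'
ORDER BY view_name;

SELECT table_name, comments FROM dictionary WHERE table_name LIKE 'DBA%';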
==========
23 TUNING:
==========

1. init.ora settings
--------------------
background_dump_dest = /var/opt/oracle/SALES/bdump
control_files = ( /oradata/arc/control/ctrl1SALES.ctl,
                  /oradata/temp/control/ctrl2SALES.ctl,
                  /oradata/rbs/control/ctrl3SALES.ctl )
db_block_size = 16384
db_name = SALES
db_block_buffers = 17500
db_block_checkpoint_batch = 16
db_files = 255
db_file_multiblock_read_count = 10
license_max_users = 170
#core_dump_dest = /var/opt/oracle/SALES/cdump
core_dump_dest = /oradata/rbs/cdump
distributed_transactions = 40
dml_locks = 1000
job_queue_processes = 2
log_archive_buffers = 20
log_archive_buffer_size = 256
log_archive_dest = /oradata/arc
log_archive_format = arcSALES_%s.arc
log_archive_start = true
log_buffer = 163840
log_checkpoint_interval = 1250
log_checkpoint_timeout = 1800
log_simultaneous_copies = 4
max_dump_file_size = 100240
max_enabled_roles = 50
oracle_trace_enable = true
open_cursors = 2000
open_links = 20
processes = 200
remote_os_authent = true
rollback_segments = (r1, r2, r3, rbig, rbig2)
sequence_cache_entries = 30
sequence_cache_hash_buckets = 23
shared_pool_size = 750M
sort_area_retained_size = 15728640
sort_area_size = 15728640
sql_trace = false
timed_statistics = true
resource_limit = true
user_dump_dest = /var/opt/oracle/SALES/udump
utl_file_dir = /var/opt/oracle/utl
utl_file_dir = /var/opt/oracle/utl/frontend

Important tuning parameters (example values):

SORT_AREA_SIZE                 = 65536       (per PGA, max sort area)
SORT_AREA_RETAINED_SIZE        = 65536       (size retained after the sort)
PROCESSES                      = 100         (all processes)
DB_BLOCK_SIZE                  = 8192
DB_BLOCK_BUFFERS               = 3400        (DB_CACHE_SIZE in Oracle 9i)
SHARED_POOL_SIZE               = 52428800
LOG_BUFFER                     = 26215400
LARGE_POOL_SIZE                = 4194304 / 8388608
DBWR_IO_SLAVES                 =             (see also DB_WRITER_PROCESSES)
DB_WRITER_PROCESSES            = 2
LGWR_IO_SLAVES                 =
DB_FILE_MULTIBLOCK_READ_COUNT  = 16          (minimizes io during table scans; specifies the max number of blocks in one io operation during a sequential read)
BUFFER_POOL_RECYCLE
BUFFER_POOL_KEEP
TIMED_STATISTICS               = TRUE        (whether statistics related to time are collected or not)
OPTIMIZER_MODE                 = RULE, CHOOSE, FIRST_ROWS, ALL_ROWS
PARALLEL_MIN_SERVERS           = 2           (for Parallel Query, and parallel recovery)
PARALLEL_MAX_SERVERS           = 4
RECOVERY_PARALLELISM           = 2           (sets the parallel recovery level on the database)

2. UTLBSTAT and UTLESTAT
------------------------
- if wanted, change the default tablespace of SYS to TOOLS
- set timed_statistics=true
- in $ORACLE_HOME/rdbms/admin you find utlbstat.sql and utlestat.sql
- to create the performance tables and insert the baseline: run utlbstat
- let the database run for some time to gather statistics
- run utlestat, which drops the tables and generates report.txt

3. STATSPACK:
-------------
Available as of 8.1.6

installation:
- connect internal
- @$ORACLE_HOME/rdbms/admin/statscre.sql
It will create user PERFSTAT who owns the new statistics tables.
You will be prompted for TEMP and DEFAULT tablespaces.

Gather statistics:
- connect perfstat/perfstat
- execute statspack.snap
Or use DBMS_JOB to schedule the generation of snapshots, as in the sketch below.

Create report:
- connect perfstat/perfstat
- @ORACLE_HOME/rdbms/admin/statsrep.sql
This will ask for beginning snapshot id and ending snapshot id. Then you can enter the filename for the report.
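A minimal sketch of the DBMS_JOB scheduling mentioned above (the hourly interval is just an example):

connect perfstat/perfstat

variable jobno number;
begin
  dbms_job.submit(:jobno, 'statspack.snap;', sysdate, 'sysdate + 1/24');   -- one snapshot per hour
  commit;
end;
/
print jobno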
4. QUERIES:
-----------

-- 4.1 HIT RATIO buffercache

SELECT (1-(pr.value/(dbg.value+cg.value)))*100
FROM   v$sysstat pr, v$sysstat dbg, v$sysstat cg
WHERE  pr.name  = 'physical reads'
AND    dbg.name = 'db block gets'
AND    cg.name  = 'consistent gets';

-- 4.2 redo noWait ratio

SELECT (req.value*5000)/entries.value
FROM   v$sysstat req, v$sysstat entries
WHERE  req.name = 'redo log space requests'
AND    entries.name = 'redo entries';
-- 4.3 Library cache and shared pool Overview memory: SELECT * FROM V$SGA; Free
memory shared pool: SELECT * FROM v$sgastat WHERE name = 'free memory'; How ofte
n an object has to be reloaded into the cache once it has been loaded SELECT sum
(pins) Executions, sum(reloads) Misses, sum(reloads)/sum(pins) Ratio FROM v$libr
arycache;
SELECT gethits,gets,gethitratio FROM v$librarycache WHERE namespace = 'SQL AREA'
; SELECT sum(sharable_mem) FROM v$db_object_cache; -- 4.4 TABLE OR INDEX REBUILD
NECESSARY? SELECT substr(segment_name, 1, 30), segment_type, substr(owner, 1, 1
0), extents, initial_extent, next_extent, max_extents FROM dba_segments WHERE ex
tents > max_extents - 100 AND owner not in ('SYS','SYSTEM'); SELECT index_name,
blevel, decode(blevel,0,'OK BLEVEL',1,'OK BLEVEL', 2,'OK BLEVEL',3,'OK BLEVEL',4
,'OK BLEVEL','BLEVEL HIGH') OK FROM dba_indexes WHERE owner='SALES'; EXAMPLE OF
A SCRIPT THAT YOU MIGHT SCHEDULE ONCE A DAY: -----------------------------------
--------------------- report 1. set linesize 500 set pagesize 500 set serveroutp
ut on set trimspool on spool d:\logs\ exec dbms_output.put_line('DAILY REPORT SA
LES DATABASE ON SERVER SUPER'); exec dbms_output.put_line('RUNTIME: '||to_char(S
YSDATE, 'DD-MM-YYYY;HH24:MI')); exec dbms_output.put_line('Please read all secti
ons carefully, takes only 1 minute.'); exec dbms_output.put_line(' '); exec dbms
_output.put_line('==================================================='); exec db
ms_output.put_line('SECTION 1: OBJECTS AND USERS'); exec dbms_output.put_line('=
=================================================='); exec dbms_output.put_line(
' '); exec dbms_output.put_line('-----------------------------------------------
----'); exec dbms_output.put_line('1.1 INVALID OBJECTS AS FOUND RIGHT NOW:'); ex
ec dbms_output.put_line(' '); SELECT substr(object_name, 1, 30), substr(object_t
ype, 1, 20), owner, status FROM dba_objects WHERE status='INVALID'; exec dbms_ou
tput.put_line(' '); exec dbms_output.put_line('Remark: If invalid objects are fo
und intervention is required.'); exec dbms_output.put_line(' '); exec dbms_outpu
t.put_line('---------------------------------------------------'); exec dbms_out
put.put_line('1.2 TABLE/INDEX REACHING MAX NO OF EXTENTS:'); exec dbms_output.pu
t_line(' '); SELECT substr(segment_name, 1, 30), segment_type, substr(owner, 1,
10), extents, initial_extent, next_extent, max_extents
FROM WHERE AND
dba_segments extents > max_extents - 50 owner not in ('SYS','SYSTEM');
exec dbms_output.put_line(' '); exec dbms_output.put_line('Remark: If objects ar
e found intervention is required.'); exec dbms_output.put_line(' '); exec dbms_o
utput.put_line('---------------------------------------------------'); exec dbms
_output.put_line('1.3 SKEWED or BAD INDEXES with blevel > 3:'); exec dbms_output
.put_line(' '); SELECT index_name, owner, blevel, decode(blevel,0,'OK BLEVEL',1,
'OK BLEVEL', 2,'OK BLEVEL',3,'OK BLEVEL',4,'OK BLEVEL','BLEVEL HIGH') OK FROM db
a_indexes WHERE owner in ('SALES','FRONTEND') and blevel > 3; exec exec exec exe
c exec exec dbms_output.put_line(' '); dbms_output.put_line('Remark: If indexes
are found rebuild is required.'); dbms_output.put_line(' '); dbms_output.put_lin
e('---------------------------------------------------'); dbms_output.put_line('
1.4. NEW OBJECTS CREATED SINCE YESTERDAY:'); dbms_output.put_line(' ');
SELECT owner, substr(object_name, 1, 30), object_type, created, last_ddl_time, s
tatus FROM dba_objects WHERE created > SYSDATE-5; exec exec exec exec dbms_outpu
t.put_line(' '); dbms_output.put_line('-----------------------------------------
----------'); dbms_output.put_line('1.5. NEW ORACLE USERS CREATED SINCE YESTERDA
Y:'); dbms_output.put_line(' ');
SELECT substr(username, 1, 20), account_status, default_tablespace, temporary_ta
blespace, created FROM dba_users WHERE created > SYSDATE -10; exec dbms_output.p
ut_line(' exec exec exec exec exec exec exec ');
dbms_output.put_line('==================================================='); dbm
s_output.put_line('SECTION 2: TABLESPACES, DATAFILES, ROLLBACK SEGS'); dbms_outp
ut.put_line('==================================================='); dbms_output.
put_line(' '); dbms_output.put_line('-------------------------------------------
--------'); dbms_output.put_line('2.1 FREE/USED SPACE OF TABLESPACES RIGHT NOW:'
); dbms_output.put_line(' ');
SELECT Total.name "Tablespace Name", Free_space, (total_space-Free_space) Used_s
pace, total_space FROM (SELECT tablespace_name, sum(bytes/1024/1024) Free_Space
FROM sys.dba_free_space GROUP BY tablespace_name ) Free, (SELECT b.name, sum(byt
es/1024/1024) TOTAL_SPACE
FROM sys.v_$datafile a, sys.v_$tablespace B WHERE a.ts# = b.ts# GROUP BY b.name
) Total WHERE Free.Tablespace_name = Total.name; exec dbms_output.put_line(' ');
exec dbms_output.put_line('REMARK: FOR MONTHLY INTERNET BILLING AT LEAST 50MB S
PACE MUST'); exec dbms_output.put_line('BE AVAILABLE IN EACH OF THE MANIIN% TABL
ESPACES. '); exec dbms_output.put_line(' '); exec dbms_output.put_line('--------
-------------------------------------------'); exec dbms_output.put_line('2.2 ST
ATUS DATABASE FILES RIGHT NOW:'); exec dbms_output.put_line(' '); SELECT substr(
file_name, 1, 50), tablespace_name, status FROM dba_data_files; exec exec exec e
xec exec exec dbms_output.put_line(' '); dbms_output.put_line('Remark: status of
all files should be available '); dbms_output.put_line(' '); dbms_output.put_li
ne('---------------------------------------------------'); dbms_output.put_line(
'2.3 STATUS ROLLBACK SEGMENTS RIGHT NOW:'); dbms_output.put_line(' ');
SELECT substr(segment_name, 1, 20), substr(tablespace_name, 1, 20), status, INIT
IAL_EXTENT, NEXT_EXTENT, MIN_EXTENTS, MAX_EXTENTS, PCT_INCREASE FROM DBA_ROLLBAC
K_SEGS; exec exec exec exec exec exec exec exec dbms_output.put_line(' '); dbms_
output.put_line('==================================================='); dbms_out
put.put_line('SECTION 3: PERFORMANCE STATS SINCE DATABASE STARTUP'); dbms_output
.put_line('==================================================='); dbms_output.pu
t_line(' '); dbms_output.put_line('---------------------------------------------
------'); dbms_output.put_line('3.1 ORACLE MEMORY (SGA LAYOUT):'); dbms_output.p
ut_line(' ');
SELECT * FROM V$SGA; exec exec exec exec dbms_output.put_line(' '); dbms_output.
put_line('---------------------------------------------------'); dbms_output.put
_line('3.2 FREE MEMORY SHARED POOL:'); dbms_output.put_line(' ');
SELECT * FROM v$sgastat WHERE name = 'free memory'; exec exec exec exec dbms_out
put.put_line(' '); dbms_output.put_line('---------------------------------------
------------'); dbms_output.put_line('3.3 LIBRARY (pl/sql) HIT RATIO:'); dbms_ou
tput.put_line(' ');
SELECT sum(pins) Executions, sum(reloads) Misses, sum(reloads)/sum(pins) Ratio F
ROM v$librarycache; exec dbms_output.put_line(' '); exec dbms_output.put_line('R
emark: above Ratio should be low ');
exec dbms_output.put_line('
');
exec dbms_output.put_line('---------------------------------------------------');
exec dbms_output.put_line('3.4 DATABASE BUFFERS HIT RATIO:');
exec dbms_output.put_line(' ');

SELECT (1-(pr.value/(dbg.value+cg.value)))*100
FROM   v$sysstat pr, v$sysstat dbg, v$sysstat cg
WHERE  pr.name  = 'physical reads'
AND    dbg.name = 'db block gets'
AND    cg.name  = 'consistent gets';

exec dbms_output.put_line(' ');
exec dbms_output.put_line('Remark: above Ratio should be high ');
exec dbms_output.put_line(' ');
exec dbms_output.put_line('---------------------------------------------------');
exec dbms_output.put_line('3.5 REDO BUFFERS WAITS:');
exec dbms_output.put_line(' ');

SELECT (req.value*5000)/entries.value
FROM   v$sysstat req, v$sysstat entries
WHERE  req.name = 'redo log space requests'
AND    entries.name = 'redo entries';

exec dbms_output.put_line(' ');
exec dbms_output.put_line('Remark: above Ratio should be very low ');
exec dbms_output.put_line(' ');
exec dbms_output.put_line('===================================================');
exec dbms_output.put_line('SECTION 4: LOCKS');
exec dbms_output.put_line('===================================================');
exec dbms_output.put_line(' ');
exec dbms_output.put_line('---------------------------------------------------');
exec dbms_output.put_line('4.1 OBJECT LOCKS RIGHT NOW:');
exec dbms_output.put_line(' ');

SELECT l.object_id                      object_id,
       l.session_id                     session_id,
       substr(l.oracle_username, 1, 10) username,
       substr(l.os_user_name, 1, 30)    osuser,
       l.process                        process,
       l.locked_mode                    lockmode,
       substr(o.object_name, 1, 20)     objectname
FROM   v$locked_object l, dba_objects o
WHERE  l.object_id = o.object_id;

exec dbms_output.put_line(' ');
exec dbms_output.put_line('---------------------------------------------------');
exec dbms_output.put_line('4.2 PERSISTENT LOCKS SINCE YESTERDAY:');
exec dbms_output.put_line(' ');
SELECT OBJECT_ID,SESSION_ID,USERNAME,OSUSER,PROCESS,LOCKMODE, OBJECT_NAME, to_ch
ar(DATUM, 'DD-MM-YYYY;HH24:MI') FROM PROJECTS.LOCKLIST WHERE DATUM > SYSDATE-2 O
RDER BY DATUM; exec dbms_output.put_line(' '); exec dbms_output.put_line('------
---------------------------------------------');
exec dbms_output.put_line('4.3 BLOCKED SESSIONS RIGHT NOW:'); exec dbms_output.p
ut_line(' '); SELECT s.sid sid, substr(s.username, 1, 10) username, substr(s.sch
emaname, 1, 10) schemaname, substr(s.osuser, 1, 10) osuser, substr(s.program, 1,
30) program, s.command command, l.lmode lockmode, l.block blocked FROM v$sessio
n s, v$lock l WHERE s.sid=l.sid and schemaname not in ('SYS','SYSTEM'); exec exe
c exec exec exec exec exec exec exec exec exec exec exec exec exec exit / dbms_o
utput.put_line(' '); dbms_output.put_line('=====================================
=============='); dbms_output.put_line('SECTION 5: ONLY NEEDED FOR oracle-dba ')
; dbms_output.put_line(' INFO NEEDED FOR RECOVERY '); dbms_output.put_line('====
==============================================='); dbms_output.put_line(' '); db
ms_output.put_line('scn datafiles: '); dbms_output.put_line('scn controlfiles: '
); dbms_output.put_line('latest 20 archived redo: '); dbms_output.put_line(' ');
dbms_output.put_line(' '); dbms_output.put_line('------------------------------
---------------------'); dbms_output.put_line('---------------------------------
------------------'); dbms_output.put_line('END REPORT 1'); dbms_output.put_line
('Thanks a lot for reading this report !!!');
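A sketch of how such a report could be scheduled on UNIX (all paths, the SID and the mail address are example values only):

# crontab entry: run every day at 07:00
0 7 * * * /home/oracle/scripts/daily_report.sh

# daily_report.sh
#!/bin/sh
ORACLE_SID=SALES;  export ORACLE_SID
ORACLE_HOME=/opt/oracle/product/9.2;  export ORACLE_HOME
$ORACLE_HOME/bin/sqlplus -s "/ as sysdba" @/home/oracle/scripts/daily_report.sql > /tmp/daily_report.txt
mailx -s "Daily report SALES" dba@example.com < /tmp/daily_report.txt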
======== 24 RMAN: ======== $$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$
$$$$$$$$$$$$$$$$$$$$$$$$$$$$$ $$$$$ =============== 24.1: RMAN 10g: ============
===
24.1.1 Create the catalog and register target database: ------------------------
------------------------------10g example: -----------Oracle 10.2 target databas
e is test10g.
Oracle 10.2 rman database is RMAN. Set up the catalog and register the target: R
MAN> create catalog tablespace "RMAN" recovery catalog created RMAN> exit Recove
ry Manager complete. C:\oracle>rman catalog=rman/rman@rman target=system/vga88nt
@test10g Recovery Manager: Release 10.2.0.1.0 - Production on Wed Feb 27 21:31:0
2 2008 Copyright (c) 1982, 2005, Oracle. All rights reserved.
connected to target database: TEST10G (DBID=899275577) connected to recovery cat
alog database RMAN> register database; database registered in recovery catalog s
tarting full resync of recovery catalog full resync complete
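Note that before the CREATE CATALOG above can run, a catalog owner normally has to exist in the RMAN database. A minimal sketch (user name, password, datafile path and tablespace are example values only):

SQL> CREATE TABLESPACE rman DATAFILE '/oradata/rman/rman01.dbf' SIZE 100M;
SQL> CREATE USER rman IDENTIFIED BY rman
     DEFAULT TABLESPACE rman TEMPORARY TABLESPACE temp
     QUOTA UNLIMITED ON rman;
SQL> GRANT connect, resource, recovery_catalog_owner TO rman;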
24.1.2 Backup and recovery examples 10g RMAN: ----------------------------------
----------Good Examples using RMAN on 10g: ------------------------------->>>> F
ull Backup First we configure several persistant parameters for this instance: R
MAN> configure retention policy to recovery window of 5 days; RMAN> configure de
fault device type to disk; RMAN> configure controlfile autobackup on; RMAN> conf
igure channel device type disk format 'C:\Oracle\Admin\W2K2\Backup%d_DB_%u_%s_%p
'; Next we perform a complete database backup using a single command: RMAN> run
{ 2> backup database plus archivelog; 3> delete noprompt obsolete; 4> } The reco
very catalog should be resyncronized on a regular basis so that changes to the d
atabase structure and presence of new archive logs is recorded. Some commands pe
rform partial and
full resyncs implicitly, but if you are in doubt you can perform a full resync u
sing the following command: RMAN> resync catalog; >>>> Restore & Recover The Who
le Database If the controlfiles and online redo logs are still present a whole d
atabase recovery can be achieved by running the following script: run { shutdown
immediate; # use abort if this fails startup mount; restore database; recover d
atabase; alter database open; } This will result in all datafiles being restored
then recovered. RMAN will apply archive logs as necessary until the recovery is
complete. At that point the database is opened. If the tempfiles are still pres
ent you can issue a command like the following for each of them: sql "ALTER
TABLESPACE temp ADD TEMPFILE ''C:\Oracle\oradata\W2K2\temp01.dbf'' REUSE"; If t
he tempfiles are missing they must be recreated as follows: sql "ALTER TABLESPAC
E temp ADD TEMPFILE ''C:\Oracle\oradata\W2K2\temp01.dbf'' SIZE 100M AUTOEXTEND O
N NEXT 64K";
>>>> Restore & Recover A Subset Of The Database A subset of the database can be
restored in a similar fashion: run { sql 'ALTER TABLESPACE users OFFLINE IMMEDIA
TE'; restore tablespace users; recover tablespace users; sql 'ALTER TABLESPACE u
sers ONLINE'; } Recovering a Tablespace in an Open Database The following exampl
e takes tablespace TBS_1 offline, restores and recovers it, then brings it back
online: run { allocate channel dev1 type 'sbt_tape'; sql "ALTER TABLESPACE tbs_1
OFFLINE IMMEDIATE";
restore tablespace tbs_1; recover tablespace tbs_1; sql "ALTER TABLESPACE tbs_1
ONLINE"; } Recovering Datafiles Restored to New Locations The following example
allocates one disk channel and one media management channel to use datafile copi
es on disk and backups on tape, and restores one of the datafiles in tablespace
TBS_1 to a different location: run { allocate channel dev1 type disk; allocate c
hannel dev2 type 'sbt_tape'; sql "ALTER TABLESPACE tbs_1 OFFLINE IMMEDIATE"; set
newname for datafile 'disk7/oracle/tbs11.f' to 'disk9/oracle/tbs11.f'; restore
tablespace tbs_1; switch datafile all; recover tablespace tbs_1; sql "ALTER TABL
ESPACE tbs_1 ONLINE";
} >>>> Example backup to sbt: echo " run { allocate channel t1 type 'sbt_tape' p
arms 'ENV=(tdpo_optfile=/usr/tivoli/tsm/client/oracle/bin64/tdpo.opt)'; allocate
channel t2 type 'sbt_tape' parms 'ENV=(tdpo_optfile=/usr/tivoli/tsm/client/orac
le/bin64/tdpo.opt)'; backup full database ; backup (spfile) (current controlfile
) ; sql 'alter system archive log current'; backup archivelog all delete input ;
release channel t1; release channel t2; }
>>>> Incomplete Recovery As you would expect, RMAN allows incomplete recovery to
a specified time, SCN or sequence number: run { shutdown immediate; startup mou
nt; set until time 'Nov 15 2000 09:00:00'; # set until scn 1000; # alternatively
, you can specify SCN # set until sequence 9923; # alternatively, you can specif
y log sequence number restore database; recover database;
alter database open resetlogs; } The incomplete recovery requires the database t
o be opened using the RESETLOGS option. >>>> Disaster Recovery In a disaster sit
uation where all files are lost you can only recover to the last SCN in the arch
ived redo logs. Beyond this point the recovery would have to make reference to t
he online redo logs which are not present. Disaster recovery is therefore a type
of incomplete recovery. To perform disaster recovery connect to RMAN: C:>rman c
atalog=rman/rman@w2k1 target=sys/password@w2k2 Once in RMAN do the following: st
artup nomount; restore controlfile; alter database mount; From SQL*Plus as SYS g
et the last archived SCN using: SQL> SELECT archivelog_change#-1 FROM v$database
; ARCHIVELOG_CHANGE#-1 -------------------1048438 1 row selected. SQL>Back in RM
AN do the following: run { set until scn 1048438; restore database; recover data
base; alter database open resetlogs; } If the "until scn" were not set the follo
wing type of error would be produced once a redo log was referenced: RMAN-00571:
=========================================================== RMAN-00569: =======
======== ERROR MESSAGE STACK FOLLOWS =============== RMAN-00571: ===============
============================================ RMAN-03002: failure of recover comm
and at 03/18/2003 09:33:19 RMAN-06045: media recovery requesting unknown log: th
read 1 scn 1048439 With the database open all missing tempfiles must be replaced
: sql "ALTER TABLESPACE temp ADD TEMPFILE ''C:\Oracle\oradata\W2K2\temp01.dbf''
SIZE 100M AUTOEXTEND ON NEXT 64K";
Once the database is fully recovered a new backup should be performed. The recovered database will be registered in the catalog as a new incarnation. The current incarnation can be listed and altered using the following commands:

list incarnation;
reset database to incarnation x;

Lists And Reports
RMAN has extensive listing and reporting functionality allowing you to monitor your backups and maintain the recovery catalog. Here are a few useful commands:

>>>> Restoring a datafile
to another location: For example, if you restore datafile ?/oradata/trgt/tools0
1.dbf to its default location, then RMAN restores the file ?/oradata/trgt/tools0
1.dbf and overwrites any file that it finds with the same filename. If you run a
SET NEWNAME command before you restore a file, then RMAN creates a datafile cop
y with the name that you specify. For example, assume that you run the following
commands: SET NEWNAME FOR DATAFILE '?/oradata/trgt/tools01.dbf' TO '/tmp/tools0
1.dbf'; RESTORE DATAFILE '?/oradata/trgt/tools01.dbf'; In this case, RMAN create
s a datafile copy of ?/oradata/trgt/tools01.dbf named /tmp/tools01.dbf and recor
ds it in the repository. To change the name for datafile ?/oradata/trgt/tools01.
dbf to /tmp/tools01.dbf in the control file, run a SWITCH command so that RMAN c
onsiders the restored file as the current database file. For example: SWITCH DAT
AFILE '/tmp/tools01.dbf' TO DATAFILECOPY '?/oradata/trgt/tools01.dbf'; The SWITC
H command is the RMAN equivalent of the SQL statement ALTER DATABASE RENAME FILE
.

>>>> Archive logs

What is the purpose of, and what is the difference between, ALTER SYSTEM ARCHIVE LOG CURRENT and ALTER SYSTEM ARCHIVE LOG ALL?

# When the database is open, run the following SQL statement to force Oracle to switch out of the current log and archive it as well as all other unarchived logs:

ALTER SYSTEM ARCHIVE LOG CURRENT;

#
When the database is mounted, open, or closed, you can run the following SQL sta
tement to force Oracle to archive all noncurrent redo logs: ALTER SYSTEM ARCHIVE
LOG ALL;

A log switch does not mean that the redo is archived. When you execute "alter system archive log current" you force the current log to be archived, so it is safe: you are sure to have all the needed archived logs.

alter system archive log all: This command will archive all filled redo logs but will not archive the current log because it is not yet full.
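A small sketch to verify the effect from SQL*Plus (standard v$ views only):

SELECT group#, sequence#, status, archived FROM v$log;
SELECT max(sequence#) FROM v$archived_log WHERE archived = 'YES';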
>>>> LIST AND REPORT COMMANDS: ============= LIST COMMAND: ============= List co
mmands query the catalog or control file, to determine which backups or copies a
re available. List commands provide for basic information. Report commands can p
rovide for much more detail. About RMAN Reports Generated by the LIST Command Yo
u can control how the output is displayed by using the BY BACKUP and BY FILE opt
ions of the LIST command and choosing between the SUMMARY and VERBOSE options. -
- Example 1: Query on the incarnations of the target database RMAN> list incarna
tion of database;

RMAN-03022: compiling command: list

List of Database Incarnations
DB Key  Inc Key  DB Name  DB ID       CUR  Reset SCN  Reset Time
------- -------- -------- ----------- ---  ---------- ----------
1       2        AIRM     2092303715  YES  1          24-DEC-02
-- Example 2: Query on tablespace backups You can ask for lists of tablespace ba
ckups, as shown in the following example: RMAN> list backup of tablespace users;
-- Example 3: Query on database backups RMAN> list backup of database; -- Examp
le 4: Query on backup of archivelogs: RMAN> list backup of archivelog all; The p
rimary purpose of the LIST command is to determine which backups are available.
For example, you can list:
. Backups and proxy copies of a database, tablespace, datafile, archived redo lo
g, or control file . Backups that have expired . Backups restricted by time, pat
h name, device type, tag, or recoverability . Incarnations of a database By defa
ult, RMAN lists backups by backup, which means that it serially lists each backu
p or proxy copy and then identifies the files included in the backup. You can al
so list backups by file. By default, RMAN lists in verbose mode. You can also li
st backups in a summary mode if the verbose mode generates too much output.

Listing Backups by Backup
To list backups by backup, you connect to the target database and recovery catalog (if one), and then execute the LIST BACKUP command. Specify the desired objects with the listObjList clause. For example, enter:

LIST BACKUP;      # lists backup sets, image copies, and proxy copies
LIST BACKUPSET;   # lists only backup sets and proxy copies
LIST COPY;        # lists only disk copies

Example:
RMAN> LIST BACKUP OF DATABASE;

By default the LIST output is detailed, but you can also specify that RMAN display the output in summarized form. Specify the desired objects with the listObjectList or recordSpec clause. If you do not specify an object, then LIST BACKUP displays all backups. After connecting to the target database and recovery catalog (if you use one), execute LIST BACKUP, specifying the desired objects and options. For example:

LIST BACKUP SUMMARY;   # lists backup sets, proxy copies, and disk copies
You can also specify the EXPIRED keyword to identify those backups that were not
found during a crosscheck: LIST EXPIRED BACKUP SUMMARY; # Show all backup detai
ls list backup; ================ Report commands: ================
RMAN>report schema; Shows the physical structure of the target database. RMAN> r
eport obsolete; RMAN-03022: compiling command: report RMAN-06147: no obsolete ba
ckups found -- REPORT COMMAND: -- --------------About Reports of RMAN Backups Re
ports enable you to confirm that your backup and recovery strategy is in fact me
eting your requirements for database recoverability. The two major forms of REPO
RT used to determine whether your database is recoverable are: RMAN> REPORT NEED
BACKUP; Reports which database files need to be backed up to meet a configured
or specified retention policy Use the REPORT NEED BACKUP command to determine wh
ich database files need backup under a specific retention policy. With no argume
nts, REPORT NEED BACKUP reports which objects need backup under the currently co
nfigured retention policy. The output for a configured retention policy of REDUN
DANCY 1 is similar to this example: REPORT NEED BACKUP; RMAN retention policy wi
ll be applied to the command RMAN retention policy is set to redundancy 1 Report
of files with less than 1 redundant backups File #bkps Name ---- ----- --------
--------------------------------------------2 0 /oracle/oradata/trgt/undotbs01.d
bf RMAN> REPORT UNRECOVERABLE; Reports which database files require backup becau
se they have been affected by some NOLOGGING operation such as a direct-path ins
ert You can report backup sets, backup pieces and datafile copies that are obsol
ete, that is, not needed to meet a specified retention policy, by specifying the
OBSOLETE keyword. If you do not specify any other options, then REPORT OBSOLETE
displays the backups that are obsolete according to the current retention polic
y, as shown in the following example:
RMAN> REPORT OBSOLETE; In the simplest case, you could crosscheck all backups on
disk, tape or both, using any one of the following commands: RMAN> CROSSCHECK B
ACKUP DEVICE TYPE DISK; RMAN> CROSSCHECK BACKUP DEVICE TYPE SBT; RMAN> CROSSCHEC
K BACKUP; # crosschecks all backups on all devices The REPORT SCHEMA command list
s and displays information about the database files. After connecting RMAN to th
e target database and recovery catalog (if you use one), issue REPORT SCHEMA as
shown in this example: RMAN> REPORT SCHEMA;
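Related housekeeping: after a crosscheck (see the CROSSCHECK commands above), backups that were not found are marked EXPIRED and can be removed from the repository. A minimal sketch:

RMAN> CROSSCHECK BACKUP;
RMAN> LIST EXPIRED BACKUP SUMMARY;
RMAN> DELETE EXPIRED BACKUP;            -- asks for confirmation
RMAN> DELETE NOPROMPT EXPIRED BACKUP;   -- or, without prompting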
# Show items that need 7 days worth of # archivelogs to recover completely repor
t need backup days = 7 database; report need backup; # Show/Delete items not nee
ded for recovery report obsolete; delete obsolete; # Show/Delete items not neede
d for point-in-time # recovery within the last week report obsolete recovery win
dow of 7 days; delete obsolete recovery window of 7 days; RMAN> REPORT OBSOLETE
REDUNDANCY 2; RMAN> REPORT OBSOLETE RECOVERY WINDOW OF 5 DAYS; RMAN displays bac
kups that are obsolete according to those retention policies, regardless of the
actual configured retention policy.
# Show/Delete items with more than 2 newer copies available report obsolete redu
ndancy = 2 device type disk; delete obsolete redundancy = 2 device type disk; #
Show datafiles that cannot currently be recovered report unrecoverable database;
report unrecoverable tablespace 'USERS'; 24.1.3 More on Backup and recovery 10g
RMAN: -------------------------------------------24.1.3.1 About RMAN Backups:
---------------------------When you execute the BACKUP command in RMAN, you crea
te one or more backup sets or image copies. By default, RMAN creates backup sets
regardless of whether the destination is disk or a media manager. >>>About Imag
e Copies An image copy is an exact copy of a single datafile, archived redo log
file, or control file. Image copies are not stored in an RMAN-specific format. T
hey are identical to the results of copying a file with operating system command
s. RMAN can use image copies during RMAN restore and recover operations, and you
can also use image copies with non-RMAN restore and recovery techniques. To cre
ate image copies and have them recorded in the RMAN repository, run the RMAN BAC
KUP AS COPY command (or, alternatively, configure the default backup type for di
sk as image copies using CONFIGURE DEVICE TYPE DISK BACKUP TYPE TO COPY before p
erforming a backup). A database server session is used to create the copy, and t
he server session also performs actions such as validating the blocks in the fil
e and recording the image copy in the RMAN repository. You can also use an opera
ting system command such as the UNIX dd command to create image copies, though t
hese will not be validated, nor are they recorded in the RMAN repository. You ca
n use the CATALOG command to add image copies created with native operating syst
em tools in the RMAN repository. >>>Using RMAN-Created Image Copies If you run a
RESTORE command, then by default RMAN restores a datafile or control file to it
s original location by copying an image copy backup to that location. Image copi
es are chosen over backup sets because of the extra overhead of reading through
an entire backup set in search of files to be restored. However, if you need to
restore and recover a current datafile, and if you have an image copy of the dat
afile available on disk, then you do not actually need to have RMAN copy the ima
ge copy back to its old location. You can instead have the database use the imag
e copy in place, as a replacement for the datafile to be restored. The SWITCH co
mmand updates the RMAN repository to indicate that the image copy should now be tre
ated as the current datafile. Issuing the SWITCH command in this case is equival
ent to issuing the SQL statement ALTER DATABASE RENAME FILE. You can then perfor
m recovery on the copy. >>>User-Managed Image Copies RMAN can use image copies c
reated by mechanisms outside of RMAN, such as native
operating system file copy commands or third-party utilities that leave image co
pies of files on disk. These copies are known as user-managed copies or operatin
g system copies. The RMAN CATALOG command causes RMAN to inspect an existing ima
ge copy and enter its metadata into the RMAN repository. Once cataloged, these f
iles can be used like any other backup with the RESTORE or SWITCH commands. Some
sites store their datafiles on mirrored disk volumes, which permit the creation
of image copies by breaking a mirror. After you have broken the mirror, you can
notify RMAN of the existence of a new user-managed copy, thus making it a candi
date for a backup operation. You must notify RMAN when the copy is no longer ava
ilable, by using the CHANGE ... UNCATALOG command. For example, before resil
vering the mirror (not including other copies of the broken mirror), you must us
e a CHANGE ... UNCATALOG command to update the recovery catalog and indicate tha
t this copy is no longer available. >>>Storage of Backups on Disk and Tape RMAN
can create backups on disk or a third-party media device such as a tape drive. I
f you specify DEVICE TYPE DISK, then your backups are created on disk, in the fi
le name space of the target instance that is creating the backup. You can make a
backup on any device that can store a datafile. To create backups on non-disk m
edia, such as tape, you must use third-party media management software, and allo
cate channels with device types, such as SBT, that are supported by that softwar
e. >>>Backups of Archived Logs There are several features of RMAN backups specif
ic to backups of archived redo logs. Deletion of Archived Logs After Backups RMA
N can delete one or all copies of archived logs from disk after backing them up
to backup sets. If you specify the DELETE INPUT option, then RMAN backs up exact
ly one copy of each specified log sequence number and thread from an archive des
tination to tape, and then deletes the specific file it backed up while leaving
the other copies on disk. If you specify the DELETE ALL INPUT option, then RMAN
backs up exactly one copy of each specified log sequence number and thread, and
then deletes that log from all archive destinations. Note that there are special
considerations related to deletion of archived redo logs in standby database co
nfigurations. See Oracle Data Guard Concepts and Administration for details. >>>
Backups of Backup Sets
The RMAN BACKUP BACKUPSET command backs up previously created backup sets. Only
backup sets that were created on device type DISK can be backed up, and they can
be backed up to any available device type. Note: RMAN issues an error if you at
tempt to run BACKUP AS COPY BACKUPSET. The BACKUP BACKUPSET command uses the def
ault disk channel to copy backup sets from disk to disk. To back up from disk to
tape, you must either configure or manually allocate a non-disk channel. Uses f
or Backups of Backup Sets The BACKUP BACKUPSET command is a useful way to spread
backups among multiple media. For example, you can execute the following BACKUP
command weekly as part of the production backup schedule: # makes backup sets o
n disk BACKUP DEVICE TYPE DISK AS BACKUPSET DATABASE PLUS ARCHIVELOG; BACKUP DEV
ICE TYPE sbt BACKUPSET ALL; # copies backup sets on disk to tape In this way, yo
u ensure that all your backups exist on both disk and tape. You can also duplex
backups of backup sets, as in this example: BACKUP COPIES 2 DEVICE TYPE sbt BACK
UPSET ALL; (Again, control file autobackups are never duplexed.) You can also us
e BACKUP BACKUPSET to manage backup space allocation. For example, to keep more
recent backups on disk and older backups only on tape, you can regularly run the
following command: BACKUP DEVICE TYPE sbt BACKUPSET COMPLETED BEFORE 'SYSDATE-7
' DELETE INPUT; This command backs up backup sets that were created more than a
week ago from disk to tape, and then deletes them from disk. Note that DELETE IN
PUT here is equivalent to DELETE ALL INPUT; RMAN deletes all existing copies of
the backup set. If you duplexed a backup to four locations, then RMAN deletes al
l four copies of the pieces in the backup set. >>> Restoring Files with RMAN Use
the RMAN RESTORE command to restore the following types of files from disk or o
ther media: Database (all datafiles) Tablespaces Control files Archived redo log
s Server parameter files
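As an illustration only (a sketch; the tablespace name USERS is just an example), restoring and recovering a single tablespace with the database open typically looks like this:

RMAN> SQL 'ALTER TABLESPACE users OFFLINE IMMEDIATE';
RMAN> RESTORE TABLESPACE users;
RMAN> RECOVER TABLESPACE users;
RMAN> SQL 'ALTER TABLESPACE users ONLINE';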
Because a backup set is in a proprietary format, you cannot simply copy it as yo
u
would a backup database file created with an operating system utility; you must
use the RMAN RESTORE command to extract its contents. In contrast, the database
can use image copies created by the RMAN BACKUP AS COPY command without addition
al processing. RMAN automates the procedure for restoring files. You do not need
to go into the operating system, locate the backup that you want to use, and ma
nually copy files into the appropriate directories. When you issue a RESTORE com
mand, RMAN directs a server session to restore the correct backups to either: -
The default location, overwriting the files with the same name currently there -
A new location, which you can specify with the SET NEWNAME command To restore a
datafile, either mount the database or keep it open and take the datafile to be
restored offline. When RMAN performs a restore, it creates the restored files a
s datafile image copies and records them in the repository. The following table
describes the behavior of the RESTORE, SET NEWNAME, and SWITCH commands. >>>Data
file Media Recovery with RMAN The concept of datafile media recovery is the appl
ication of online or archived redo logs or incremental backups to a restored dat
afile in order to update it to the current time or some other specified time. Us
e the RMAN RECOVER command to perform media recovery and apply logs or increment
al backups automatically. RMAN Media Recovery: Basic Steps If possible, make the
recovery catalog available to perform the media recovery. If it is not availabl
e, or if you do not maintain a recovery catalog, then RMAN uses metadata from th
e target database control file. If both the control file and recovery catalog ar
e lost, then you can still recover the database --assuming that you have backups
of the datafiles and at least one autobackup of the control file. The generic s
teps for media recovery using RMAN are as follows: -Place the database in the ap
propriate state: mounted or open. For example, mount the database when performin
g whole database recovery, or open the database when performing online tablespac
e recovery. -To perform incomplete recovery, use the SET UNTIL command to specif
y the time, SCN, or log sequence number at which recovery terminates. Alternativ
ely, specify the UNTIL clause on the RESTORE and RECOVER commands. -Restore the
necessary files with the RESTORE command. -Recover the datafiles with the RECOVE
R command. -Place the database in its normal state. For example, open it or brin
g recovered
tablespaces online. RESTORE DATABASE; RECOVER DATABASE; >>> Corrupt Block recove
ry Although datafile media recovery is the principal form of recovery, you can a
lso use the RMAN BLOCKRECOVER command to perform block media recovery. Block med
ia recovery recovers an individual corrupt datablock or set of datablocks within
a datafile. In cases when a small number of blocks require media recovery, you
can selectively restore and recover damaged blocks rather than whole datafiles.
For example, you may discover the following messages in a user trace file: ORA-0
1578: ORA-01110: ORA-01578: ORA-01110: ORACLE data block corrupted (file # 7, bl
ock # 3) data file 7: '/oracle/oradata/trgt/tools01.dbf' ORACLE data block corru
pted (file # 2, block # 235) data file 2: '/oracle/oradata/trgt/undotbs01.dbf'
You can then specify the corrupt blocks in the BLOCKRECOVER command as follows:
BLOCKRECOVER DATAFILE 7 BLOCK 3 DATAFILE 2 BLOCK 235; >>> After a Database Resto
re and Recover, RMAN gives the error: RMAN-00571: ==============================
============================= RMAN-00569: =============== ERROR MESSAGE STACK FO
LLOWS =============== RMAN-00571: ==============================================
============= RMAN-03002: failure of backup command at 03/03/2008 11:13:06 RMAN-
06059: expected archived log not found, lost of archived log compromises recover
ability ORA-19625: error identifying file /dbms/tdbaeduc/educroca/recovery/archi
ve/arch_1_870_617116679.arch ORA-27037: unable to obtain file status IBM AIX RIS
C System/6000 Error: 2: No such file or directory

Note 1:
If you no longer have a particular archivelog file, you can let the RMAN catalog know this by issuing the following command at the rman prompt, after connecting to the rman catalog and the target database:

rman> change archivelog all crosscheck;

This will check the archivelog folder and then make the catalog agree with what is actually available.

rman> DELETE EXPIRED ARCHIVELOG ALL;
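Once the catalog agrees with reality, you would typically back up whatever archived logs really are on disk; a minimal sketch (DELETE INPUT removes the logs from disk after they are backed up, so use it only if that is the intention):

rman> BACKUP ARCHIVELOG ALL NOT BACKED UP 1 TIMES;
or
rman> BACKUP ARCHIVELOG ALL DELETE INPUT;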
Oracle Error :: RMAN-20011 target database incarnation is not current in recover
y catalog Cause the database incarnation that matches the resetlogs change# and
time of the mounted target database control file is not the current incarnation
of the database Action If "reset database to incarnation <key>" was used to make
an old incarnation current then restore the target database from a backup that
matches the incarnation and mount it. You will need to do "startup nomount" befo
re you can restore the control file using RMAN. Otherwise use "reset database to
incarnation <key>" to make the intended incarnation current in the recovery catalo
g. >>> Note about rman and tape sbt and recovery window: Suppose you have a rete
ntion period defined in rman, like for example CONFIGURE RETENTION POLICY TO RED
UNDANCY 3
This means that 3 backups need to be maintained by rman, and other backups are considered "obsolete". But those other backups beyond the retention setting are not expired or otherwise unusable: if they are still present, you can use them in a recovery. Besides this, it cannot be known beforehand how the tape subsystem will deal with rman commands like "delete obsolete". The tape subsystem probably has its own retention period, and you need many more details about all systems involved before you know what's going on.
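To see what RMAN itself considers available and still needed, independent of what the tape subsystem does, you can for instance run (a sketch):

RMAN> SHOW RETENTION POLICY;
RMAN> LIST BACKUP SUMMARY;
RMAN> CROSSCHECK BACKUP DEVICE TYPE SBT;   # asks the media manager whether the pieces still exist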
============================================= 24.1.3.2 ABOUT RMAN ERRORS / troub
leshooting: ============================================= Err 1: Missing archive
d redolog: ================================ Problem: If an archived redo is miss
ing, you might get a message similar like this: RMAN-00571: ====================
======================================= RMAN-00569: =============== ERROR MESSAG
E STACK FOLLOWS =============== RMAN-00571: ====================================
======================= RMAN-03002: failure of backup command at 03/05/2008 07:4
4:35 RMAN-06059: expected archived log not found, lost of archived log compromis
es recoverability ORA-19625: error identifying file /dbms/tdbaeduc/educroca/reco
very/archive/arch_1_817_617116679.arch ORA-27037: unable to obtain file status I
BM AIX RISC System/6000 Error: 2: No such file or directory
Solution: If archived redo logs are (wrongly) deleted/moved/compressed from disk
without being backed up, the rman catalog will not know this has happened, and
will keep attempting to backup the missing archived redo logs. That will cause r
man archived redo log backups to fail altogether with an error like: RMAN-06059:
expected archived log not found, lost of archived log compromises recoverabilit
y

If you can, you should bring back the missing archived redo logs to their origi
nal location and name, and let rman back them up. But if that is impossible, the
workaround is to crosscheck archivelog all, like: rman <<e1 connect target / connect
catalog username/password@catalog run { allocate channel c1 type disk ; crossche
ck archivelog all ; release channel c1 ; } e1 Or just go into rman and run the c
ommand: RMAN> crosscheck archivelog all;

You'll get output like this:

validation succeeded for archived log
archive log filename=D:REDOARCHARCH_1038.DBF recid=1017 stamp=611103638

for every archived log, as they are all checked on disk. That should fix the catalog; run an archivelog backup to make sure.
Err 2: online redo logs listed as archives: ====================================
=======
Testcase: a 10g 10.2.0.3 database shows, after a recovery with RESETLOGS, the following in v$archived_log. It looks as if it will stay there forever. The first query (columns SEQ#, FIRST, NEXT, DIFF, NAME, STATUS) listed the archived sequences 814-817 with status D, but the online redo log files redo01.log through redo05.log also showed up as entries (several with status A, one with a NEXT change# of about 2.8147E+14).

Another query, showing the change numbers and the resetlogs change#:

FIRST_CHANGE# NEXT_CHANGE#  SEQUENCE# RESETLOGS_CHANGE#
------------- ------------ ---------- -----------------
     17311785     17354662        815                 1
     17354662     17354674        816                 1
     17354674     17402531        817                 1
  -->17402532     17404154          1       -->17402532
     17404154     17404165          2          17402532
     17404165     17415733          3          17402532

We don't know what is going on here.

Err 3: High-level overview of RMAN error codes
==============================================
RMAN error codes are summarized in the table below.

0550-0999     Command-line interpreter
1000-1999     Keyword analyzer
2000-2999     Syntax analyzer
3000-3999     Main layer
4000-4999     Services layer
5000-5499     Compilation of RESTORE or RECOVER command
5500-5999     Compilation of DUPLICATE command
6000-6999     General compilation
7000-7999     General execution
8000-8999     PL/SQL programs
9000-9999     Low-level keyword analyzer
10000-10999   Server-side execution
11000-11999   Interphase errors between PL/SQL and RMAN
12000-12999   Recovery catalog packages
20000-20999   Miscellaneous RMAN error messages

Err 4: RMAN-03009 accompanied with an ORA- error:
==================================================
Q: Here is my problem: when trying to delete obsolete RMAN backupsets, I get an error:
RMAN> change backupset 698, 702, 704, 708 delete;

List of Backup Pieces
BP Key  BS Key  Pc# Cp# Status      Device Type  Piece Name
------- ------- --- --- ----------- ------------ -------------------
698     698     1   1   AVAILABLE   SBT_TAPE     df_546210555_706_1
702     702     1   1   AVAILABLE   SBT_TAPE     df_546296605_709_1
704     704     1   1   AVAILABLE   SBT_TAPE     df_546383776_712_1
708     708     1   1   AVAILABLE   SBT_TAPE     df_546469964_715_1
Do you really want to delete the above objects (enter YES or NO)? YES RMAN-00571
: =========================================================== RMAN-00569: ======
========= ERROR MESSAGE STACK FOLLOWS =============== RMAN-00571: ==============
============================================= RMAN-03009: failure of delete comm
and on ORA_MAINT_SBT_TAPE_1 channel at 03/02/2005 16:27:06 ORA-27191: sbtinfo2 r
eturned error Additional information: 2 What in the world does "Additional infor
mation: 2" mean? I can't find any more useful detail than this. A: Oracle Error
:: ORA-27191 sbtinfo2 returned error Cause sbtinfo2 returned an error. This happ
ens while retrieving backup file information from the media manager"s catalog. A
ction This error is returned from the media management software which is linked
with Oracle. There should be additional messages which explain the cause of the
error. This error usually requires contacting the media management vendor. A: --
-> ORA-27191 John Clarke: My guess is that "2" is an O/S return code, and in /us
r/sys/include/errno.h, you'll see that error# 2 is "no such file or directory. A
ccompanied with ORA-27191, I'd guess that your problem is that your tape library
doesn't currently have the tape(s) loaded and/or can't find them. Mladen Gogala
: Additional information 2 means that OS returned status 2. That is a "file not
found" error. In plain Spanglish, you cannot delete files from tape, only from t
he disk drives. Niall Litchfield: The source error is the ora-27191 error (http:
//downloadwest.oracle.com/docs/cd/B14117_01/server.101/b10744/e24280.htm#ORA-271
91) which suggests a tape library issue to me. You can search for RMAN errors us
ing the error search page as well http://otn.oracle.com/pls/db10g/db10g.error_se
arch?search=rman-03009, for example A: ---> RMAN-03009
RMAN-03009: failure of delete command on ORA_MAINT_SBT_TAPE_1 channel at date/ti
me RMAN-03009: failure of allocate command on t1 channel at date/time RMAN-03009
: failure of backup command on t1 channel at date/time etc.. -> Means most of th
e time that you have Media Management Library problems -> Can also mean that the
re is a problem with backup destination (disk not found, no space, tape not load
ed etc..) ERR 5: Test your Media Management API: ===============================
======= Testing the Media Management API On specified platforms, Oracle provides
a diagnostic tool called "sbttest". This utility performs a simple test of the
tape library by acting as the Oracle database server and attempting to communica
te with the media manager. Obtaining the Utility On UNIX, the sbttest utility is
located in $ORACLE_HOME/bin. Obtaining Online Documentation For online document
ation of sbttest, issue the following on the command line: % sbttest The program
displays the list of possible arguments for the program: Error: backup file nam
e must be specified Usage: sbttest backup_file_name # this is the only required
parameter <-dbname database_name> <-trace trace_file_name> <-remove_before> <-no
_remove_after> <-read_only> <-no_regular_backup_restore> <-no_proxy_backup> <-no
_proxy_restore> <-file_type n> <-copy_number n> <-media_pool n> <-os_res_size n>
<-pl_res_size n> <-block_size block_size> <-block_count block_count> <-proxy_fi
le os_file_name bk_file_name [os_res_size pl_res_size block_size block_count]> T
he display also indicates the meaning of each argument. For example, following i
s the description for two optional parameters: Optional parameters: -dbname spec
ifies the database name which will be used by SBT to identify the backup file. T
he default is "sbtdb" -trace specifies the name of a file where the Media Manage
ment software will write diagnostic messages.
Using the Utility
Use sbttest to perform a quick test of the media manager. The following explains how to interpret the output:

If sbttest returns 0, the program ran without error. In other words, the media manager is installed and can accept a data stream and return the same data when requested.
If sbttest returns non-0, the program encountered an error. Either the media manager is not installed or it is not configured correctly.

To use sbttest: Make sure the program is installed, inc
luded in your system path, and linked with Oracle by typing sbttest at the comma
nd line: % sbttest If the program is operational, you should see a display of th
e online documentation. Execute the program, specifying any of the arguments des
cribed in the online documentation. For example, enter the following to create t
est file some_file.f and write the output to sbtio.log: % sbttest some_file.f -t
race sbtio.log You can also test a backup of an existing datafile. For example,
this command tests datafile tbs_33.f of database PROD: % sbttest tbs_33.f -dbnam
e prod Examine the output. If the program encounters an error, it provides messa
ges describing the failure. For example, if Oracle cannot find the library, you
see: libobk.so could not be loaded. Check that it is installed properly, and tha
t LD_ LIBRARY_PATH environment variable (or its equivalent on your platform) inc
ludes the directory where this file can be found. Here is some additional inform
ation on the cause of this error: ld.so.1: sbttest: fatal: libobk.so: open faile
d: No such file or directory

ERR 6: RMAN-12004
=================
Hi, I'm facing this problem; any pointers will be of great help....

1. RMAN-00571: ================
===========================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS =============== RMAN-005
71: =========================================================== RMAN-00579: the
following error occurred at 12/16/2003 02:46:31 RMAN-10035: exception raised in
RPC: RMAN-10031: ORA-19624 occurred during call to DBMS_BACKUP_RESTORE.BACKUPPIE
CECREATE RMAN-03015: error occurred in stored script backup_db_full RMAN-03015:
error occurred in stored script backup_del_all_al RMAN-03007: retryable error oc
curred during execution of command: backup RMAN-12004: unhandled exception durin
g command execution on channel t1 RMAN-10035: exception raised in RPC: ORA-19506
: failed to create sequential file, name="l0f93ro5_1_1", parms="" ORA-27028: skg
fqcre: sbtbackup returned error ORA-19511: Error received from media manager lay
er, error text: sbtbackup: Failed to process backup file. RMAN-10031: ORA-19624
occurred during call to DBMS_BACKUP_RESTORE.BACKUPPIECECREATE OR another errorst
ack RMAN-12004: unhandled exception during command execution on channel disk13 R
MAN-10035: exception raised in RPC: ORA-19502: write error on file "/db200_backu
p/archive_log03/EDPP_ARCH0_21329_1_492222998", blockno 612353 (blocksize=1024) O
RA-27072: skgfdisp: I/O error HP-UX Error: 2: No such file or directory Addition
al information: 612353 RMAN-10031: ORA-19624 occurred during call to DBMS_BACKUP
_RESTORE.BACKUPPIECECREATE OR another errorstack RMAN-12004: unhandled exception
during command execution on channel ch00 RMAN-10035: exception raised in RPC: O
RA-19599: block number 691 is corrupt in controlfile C:\ORACLE\ORA90\DATABASE\SN
CFSUMMITDB.ORA RMAN-10031: ORA-19583 occurred during call to DBMS_BACKUP_RESTORE
.BACKUPPIECECREATE OR another errorstack Have managed to create a job to backup
my db, but I can't restore. I get the following: RMAN-03002: failure during comp
ilation of command RMAN-03013: command type: restore RMAN-03006: non-retryable e
rror occurred during execution of command: IRESTORE RMAN-07004: unhandled except
ion during command execution on channel BackupTest RMAN-10035: exception raised
in RPC: ORA-19573: cannot obtain exclusive enqueue for datafile 1 RMAN-10031: OR
A-19583 occurred during call to DBMS_BACKUP_RESTORE.RESTOREBACKUPPIECE Seems to
relate to corrupt or missing Oracle files. $$$$ ERR 7: ORA-27211
================ Q: Continue to get ORA-27211 Failed to load media management li
brary A: I had a remarkably similar experience a few months ago with Legato NetWor
ker and performed all of the steps you listed with the same results. The problem
turned out to be very simple. The SA installed the 64-bit version of the Legato
Networker client because it is a 64-bit server. However, we were running a 32-b
it version of Oracle on it. Installing the 32-bit client solved the problem. A:
Cause: User-supplied SBT_LIBRARY or libobk.so could not be loaded. Call to dlope
n for media library returned error. See Additional information for error code. A
ction: Retry the command with proper media library. Or re-install Media manageme
nt module for Oracle. A: Exact Error Message ORA-27211: Failed to load Media Man
agement Library on HP-UX system Details: Overview: The Oracle return code ORA-27
211 implies a failure to load a shared object library into process space. Oracle
Recovery Manager (RMAN) backups will fail with a message "ORA-27211: Failed to
load Media Management Library" if the SBT_LIBRARY keyword is defined and points
to an incorrect library name. The SBT_LIBRARY keyword must be set in the PARMS c
lause of the ALLOCATE CHANNEL statement in the RMAN script. This keyword is not
valid with the SEND command and is new to Oracle 9i. If this value is set, it ov
errides the default search path for the libobk library. By default, SBT_LIBRARY
is not set. Troubleshooting: If an ORA-27211 error is seen for an Oracle RMAN ba
ckup, it is necessary to review the Oracle RMAN script and verify if SBT_LIBRARY
is either not set or is set correctly. If set, the filename should be libobk.sl
for HP-UX 10, 11.00 and 11.11, but libobk.so for HP-UX 11.23 (ia64) clients. Ex
ample of an invalid entry for HP-UX 11.23 (ia64) clients: PARMS='SBT_LIBRARY=/us
r/openv/netbackup/bin/libobk.sl' Example of a correct entry for HP-UX 11.23 (ia6
4) clients: PARMS='SBT_LIBRARY=/usr/openv/netbackup/bin/libobk.so'
Master Server Log Files: n/a
Media Server Log Files: n/a
Client Log Files:
The RMAN log file on the client will show the following error message: RMAN-0057
1: =========================================== RMAN-00569: ======= ERROR MESSAGE
STACK FOLLOWS ======= RMAN-00571: =========================================== R
MAN-03009: failure of allocate command on ch00 channel at 05/21/2005 16:39:17 OR
A-19554: error allocating device, device type: SBT_TAPE, device name: ORA-27211:
Failed to load Media Management Library Additional information: 25 Resolution:
The Oracle return code ORA-27211 implies a failure to load a shared object libra
ry into process space. Oracle RMAN backups will fail with a message "ORA-27211:
Failed to load Media Management Library" if the SBT_LIBRARY keyword is defined a
nd points to an incorrect library name. To manually set the SBT_LIBRARY path, fo
llow the steps described below: 1. Modify the RMAN ALLOCATE CHANNEL statement in
the backup script to reference the HP-UX 11.23 library file directly: PARMS='SB
T_LIBRARY=/usr/openv/netbackup/bin/libobk.so' Note: This setting would be added
to each ALLOCATE CHANNEL statement. A restart of the Oracle instance is not need
ed for this change to take effect. 2. Run a test backup or wait for the next sch
eduled backup of the Oracle database
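For illustration, a sketch of how the PARMS setting typically appears inside a backup script (the channel name ch00 and the library path are examples only):

RUN {
  ALLOCATE CHANNEL ch00 DEVICE TYPE sbt
    PARMS 'SBT_LIBRARY=/usr/openv/netbackup/bin/libobk.so';
  BACKUP DATABASE PLUS ARCHIVELOG;
  RELEASE CHANNEL ch00;
}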
ERR8: More on DBMS_BACKUP_RESTORE: ================================== Note 1: Th
e dbms_backup_restore package is used as a PL/SQL command-line interface for rep
lacing native RMAN commands, and it has very little documentation. The Oracle do
cs note how to install and configure the dbms_backup_restore package: The DBMS_BACK
UP_RESTORE package is an internal package created by the dbmsbkrs.sql and prvtbk
rs.plb scripts. This package, along with the target database version of DBMS_RCV
MAN, is automatically installed in every Oracle database when the catproc.sql sc
ript is run. This package interfaces with the Oracle database server and the ope
rating system to provide the I/O services for backup and restore operations as d
irected by RMAN.
The docs also note that The DBMS_BACKUP_RESTORE package has a PL/SQL procedure to n
ormalize filenames on Windows NT platforms. Oracle DBA John Parker gives this examp
le of dbms_backup_restore to recover a controlfile: declare devtype varchar2(256
); done boolean; begin devtype:=dbms_backup_restore.deviceallocate( type=>'sbt_t
ape', params=>'ENV=(OB2BARTYPE=Oracle8,OB2APPNAME=rdcs,OB2BARLIST=ORA_RDCS_WEEKL
Y)', ident=>'t1'); dbms_backup_restore.restoresetdatafile; dbms_backup_restore.r
estorecontrolfileto('D:\oracle\ora81\dbs\CTL1rdcs.ORA'); dbms_backup_restore.res
torebackuppiece( 'ORA_RDCS_WEEKLY<rdcs_6222:596513521:1>.dbf', DONE=>done ); dbm
s_backup_restore.restoresetdatafile; dbms_backup_restore.restorecontrolfileto('D
:\DBS\RDCS\CTL2RDCS.ORA'); dbms_backup_restore.restorebackuppiece( 'ORA_RDCS_WEE
KLY<rdcs_6222:596513521:1>.dbf', DONE=>done ); dbms_backup_restore.devicedealloc
ate('t1'); end; Here are some other examples of using dbms_backup_restore: DECLA
RE devtype varchar2(256); done boolean; BEGIN devtype := dbms_backup_restore.Dev
iceAllocate (type => '',ident => 'FUN'); dbms_backup_restore.RestoreSetDatafile;
dbms_backup_restore.RestoreDatafileTo(dfnumber => 1,toname => 'D:\ORACLE_BASE\d
atafiles\SYSTEM01.DBF'); dbms_backup_restore.RestoreDatafileTo(dfnumber => 2,ton
ame => 'D:\ORACLE_BASE\datafiles\UNDOTBS.DBF'); --dbms_backup_restore.RestoreDat
afileTo(dfnumber => 3,toname => 'D:\ORACLE_BASE\datafiles\MYSPACE.DBF'); dbms_ba
ckup_restore.RestoreBackupPiece(done => done,handle => 'D:\ORACLE_BASE\RMAN_BACK
UP\MYDB_DF_BCK05H2LLQP_1_1', params => null); dbms_backup_restore.DeviceDealloca
te; END; / --restore archived redolog DECLARE devtype varchar2(256); done boolea
n; BEGIN devtype := dbms_backup_restore.DeviceAllocate (type => '',ident => 'FUN
'); dbms_backup_restore.RestoreSetArchivedLog(destination=>'D:\ORACLE_BASE\achiv
e\'); dbms_backup_restore.RestoreArchivedLog(thread=>1,sequence=>1); dbms_backup
_restore.RestoreArchivedLog(thread=>1,sequence=>2); dbms_backup_restore.RestoreA
rchivedLog(thread=>1,sequence=>3); dbms_backup_restore.RestoreBackupPiece(done =
> done,handle => 'D:\ORACLE_BASE\RMAN_BACKUP\MYDB_LOG_BCK0DH1JGND_1_1', params =
> null); dbms_backup_restore.DeviceDeallocate;
END; / Note 2: --------restore controlfile DECLARE devtype varchar2(256); done b
oolean; BEGIN devtype := dbms_backup_restore.DeviceAllocate(type => '',ident =>
'FUN'); dbms_backup_restore.RestoresetdataFile; dbms_backup_restore.RestoreContr
olFileto('D:\ORACLE_BASE\controlfiles\CONTROL01.CT L'); dbms_backup_restore.Rest
oreBackupPiece('D:\ORACLE_BASE\Rman_Backup\MYDB_DF_BCK0BH1 JBVA_1_1',done => don
e); dbms_backup_restore.RestoresetdataFile; dbms_backup_restore.RestoreControlFi
leto('D:\ORACLE_BASE\controlfiles\CONTROL02.CT L'); dbms_backup_restore.RestoreB
ackupPiece('D:\ORACLE_BASE\Rman_Backup\MYDB_DF_BCK0BH1 JBVA_1_1',done => done);
dbms_backup_restore.RestoresetdataFile; dbms_backup_restore.RestoreControlFileto
('D:\ORACLE_BASE\controlfiles\CONTROL03.CT L'); dbms_backup_restore.RestoreBacku
pPiece('D:\ORACLE_BASE\Rman_Backup\MYDB_DF_BCK0BH1 JBVA_1_1',done => done); dbms
_backup_restore.DeviceDeallocate; END; / --restore datafile DECLARE devtype varc
har2(256); done boolean; BEGIN devtype := dbms_backup_restore.DeviceAllocate (ty
pe => '',ident => 'FUN'); dbms_backup_restore.RestoreSetDatafile; dbms_backup_re
store.RestoreDatafileTo(dfnumber => 1,toname => 'D:\ORACLE_BASE\datafiles\SYSTEM
01.DBF'); dbms_backup_restore.RestoreDatafileTo(dfnumber => 2,toname => 'D:\ORAC
LE_BASE\datafiles\UNDOTBS.DBF'); --dbms_backup_restore.RestoreDatafileTo(dfnumbe
r => 3,toname => 'D:\ORACLE_BASE\datafiles\MYSPACE.DBF'); dbms_backup_restore.Re
storeBackupPiece(done => done,handle => 'D:\ORACLE_BASE\RMAN_BACKUP\MYDB_DF_BCK0
5H2LLQP_1_1', params => null); dbms_backup_restore.DeviceDeallocate; END; / --re
store archived redolog DECLARE devtype varchar2(256); done boolean; BEGIN devtyp
e := dbms_backup_restore.DeviceAllocate (type => '',ident => 'FUN');
dbms_backup_restore.RestoreSetArchivedLog(destination=>'D:\ORACLE_BASE\achive\')
; dbms_backup_restore.RestoreArchivedLog(thread=>1,sequence=>1); dbms_backup_res
tore.RestoreArchivedLog(thread=>1,sequence=>2); dbms_backup_restore.RestoreArchi
vedLog(thread=>1,sequence=>3); dbms_backup_restore.RestoreBackupPiece(done => do
ne,handle => 'D:\ORACLE_BASE\RMAN_BACKUP\MYDB_LOG_BCK0DH1JGND_1_1', params => nu
ll); dbms_backup_restore.DeviceDeallocate; END; /
ERR 9: RMAN-00554 initialization of internal recovery manager package failed: ==
===========================================================================
connected to target database: PLAYROCA (DBID=575215626) RMAN-00571: ============
=============================================== RMAN-00569: =============== ERRO
R MESSAGE STACK FOLLOWS =============== RMAN-00571: ============================
=============================== RMAN-00554: initialization of internal recovery
manager package failed RMAN-04004: error from recovery catalog database: ORA-031
35: connection lost contact keys: RMAN-00554 RMAN-04004 ORA-03135 ORA-3136 >>>>
In the alert log of the rman catalog database, we can find: WARNING: inbound connection timed out (ORA
-3136) Thu Mar 13 23:09:54 2008 >>>> In Net logs sqlnet.log we can find: Warning
: Errors detected in file /dbms/tdbaplay/ora10g/home/network/log/sqlnet.log > **
********************************************************************* > Fatal NI
connect error 12170. > > VERSION INFORMATION: > TNS for IBM/AIX RISC System/600
0: Version 10.2.0.3.0 - Production > TCP/IP NT Protocol Adapter for IBM/AIX RISC
System/6000: Version 10.2.0.3.0 - Production > Oracle Bequeath NT Protocol Adap
ter for IBM/AIX RISC System/6000: Version 10.2.0.3.0 - Production > Time: 18-MAR
-2008 23:01:43 > Tracing not turned on. > Tns error struct: > ns main err code:
12535 > TNS-12535: TNS:operation timed out > ns secondary err code: 12606 > nt m
ain err code: 0
> nt secondary err code: 0
> nt OS err code: 0
> Client address: (ADDRESS=(PROTOCOL=tcp)(HOST=57.232.4.123)(PORT=35844))

Note 1:
-------
RMAN-00554: initialization of internal recovery manager package failed Is a gene
ral error code. You must turn your attention to the codes underneath this one.
For example: RMAN-00571: =======================================================
==== RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS =============== RMA
N-00571: =========================================================== RMAN-00554:
initialization of internal recovery manager package failed RMAN-06003: ORACLE e
rror from target database: ORA-00210: cannot open the specified control file ORA
-00202: control file: '/devel/dev02/dev10g/standbyctl.ctl' RMAN-00571: =========
================================================== RMAN-00569: =============== E
RROR MESSAGE STACK FOLLOWS =============== RMAN-00571: =========================
================================== RMAN-00554: initialization of internal recove
ry manager package failed RMAN-04005: error from target database: ORA-01017: inv
alid username/password; Note 2: ------RMAN-04004: error from recovery catalog da
tabase: ORA-03135: connection lost contact
ERR 10: RMAN-00554 initialization of internal recovery manager package failed: =
============================================================================= St
arting backup at 17-MAY-08 released channel: t1 released channel: t2 RMAN-00571:
=========================================================== RMAN-00569: =======
======== ERROR MESSAGE STACK FOLLOWS =============== RMAN-00571: ===============
============================================ RMAN-03002: failure of backup comma
nd at 05/17/2008 23:30:13 RMAN-06004: ORACLE error from recovery catalog databas
e: RMAN-20242: specification does not match any archive log in the recovery cata
log Note 1: ------Oracle Error :: RMAN-20242
specification does not match any archivelog in the recovery catalog Cause No arc
hive logs in the specified archive log range could be found. Action Check the ar
chive log specifier. Note 2: ------Some of the common RMAN errors are: RMAN-2024
2: Specification does not match any archivelog in the recovery catalog. Add to R
MAN script: sql 'alter system archive log current'; Note 3: ------Q: RMAN-20242:
specification does not match any archive log in the recovery ca Posted: Feb 12,
2008 7:52 AM Reply A couple of archive log files were deleted from the OS. They
still show up in the list of archive logs in Enterprise Manager. I want to fix
this because now whenever I try to run a crosscheck command, I get the message:
RMAN-20242: specification does not match any archive log in the recovery catalog
I also tried to uncatalog those files, but got the same message. Any suggestion
s on what to do? Thanks! A: hi, from rman run the command list expired archivelo
g; if ther archives are in this list they will show, then i think you should do
a crosscheck archivelog all; then you should be able to delete them. regards Not
e 4:
------The RMAN error number would be helpful, but this is a common problem - RMA
N-20242 - and is addressed in detail in MetaLink notes. Either the name specific
ation (the one you entered) is wrong, or you could be using mismatched versions
between RMAN and the database (don't know since you didn't provide any version d
etails). Note 5: ------Q: Hi there! We are having problems with an Oracle backup
. The compiling of the backup command fails with the error message: RMAN-20242:
specification does not match any archivelog in the recovery catalog But RMAN is
only supposed to backup any archived logs that are there and then insert them in
the catalog... Did anybody experience anything similar? This is 8.1.7 on HP-UX
with Legato Networker. Thanks,

A: If I ask rman to back up archivelogs that are more than 2 days old and there are none, that's not an error. That is when I see it the most. Most companies will force a log switch after a set amount of time during the day, so that in a DR situation you don't lose a day's worth of redo that might still be hanging in a redo log if it gets lost.
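One common way to avoid the situation where no archivelog matches the specification is to force a log switch at the start of the backup script (see also Note 2 above); a minimal sketch:

RUN {
  SQL 'ALTER SYSTEM ARCHIVE LOG CURRENT';
  BACKUP ARCHIVELOG ALL;
}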
$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$
$$ $$$$$ Now we will do some tests of RMAN on a test system with Oracle 10g R2 Tes
t Case 1: ============ 10g Database test10g: TEST10G: startup mount pfile=c:\ora
cle\admin\test10g\pfile\init.ora alter database archivelog; archive log start;
alter database force logging; alter database add supplemental log data; alter da
tabase open; Files and tablespaces: >>> User albert creates table TEST CREATE TA
BLE test ( id number, name varchar2(10)); insert into test values (1,'test1'); c
ommit; >>> make full RMAN backup BACKUP 1: >>> Some time later, albert inserts s
econd record insert into test values (2,'test2'); commit; >>> make full RMAN bac
kup BACKUP 2: >>> Now investigate some SCN's: SQL> select CHECKPOINT_CHANGE#,CON
TROLFILE_SEQUENCE#,ARCHIVE_CHANGE#,CURRENT_SCN from v$database; CHECKPOINT_CHANG
E# CONTROLFILE_SEQUENCE# ARCHIVE_CHANGE# CURRENT_SCN ------------------ --------
------------- --------------- ----------888889 1745 889087 889154 SQL> select CH
ECKPOINT_CHANGE#,CONTROLFILE_SEQUENCE#,ARCHIVE_CHANGE#,CURRENT_SCN,archivelog_ch
ange# from v$database; CHECKPOINT_CHANGE# CONTROLFILE_SEQUENCE# ARCHIVE_CHANGE#
CURRENT_SCN ARCHIVELOG_CHANGE# ------------------ --------------------- -------
-------- ---------------------------889090 1748 889087 890538 889090 SQL> SELECT
DBMS_FLASHBACK.GET_SYSTEM_CHANGE_NUMBER() from dual; DBMS_FLASHBACK.GET_SYSTEM_
CHANGE_NUMBER() ----------------------------------------890599 SQL> select CHECK
POINT_CHANGE#,CONTROLFILE_SEQUENCE#,ARCHIVE_CHANGE#,CURRENT_SCN,archivelog_ch
ange# from v$database; CHECKPOINT_CHANGE# CONTROLFILE_SEQUENCE# ARCHIVE_CHANGE#
CURRENT_SCN ARCHIVELOG_CHANGE# ------------------ --------------------- --------
------- ---------------------------889090 1748 889087 890678 889090
SQL> select file#,CHECKPOINT_CHANGE#,LAST_CHANGE#,OFFLINE_CHANGE#,ONLINE_CHANGE#
,NAME from v$datafile; FILE# CHECKPOINT_CHANGE# LAST_CHANGE# OFFLINE_CHANGE# ONL
INE_CHANGE# NAME ---------- ------------------ ------------ --------------- ----
----------------------------------1 888936 534906 534907 C:\ORACLE\ORADATA\TEST1
0G\SYSTEM01.DBF 2 888936 534906 534907 C:\ORACLE\ORADATA\TEST10G\UNDOTBS01.DBF 3
888936 534906 534907 C:\ORACLE\ORADATA\TEST10G\SYSAUX01.DBF 4 888936 534906 534
907 C:\ORACLE\ORADATA\TEST10G\USERS01.DBF 5 888936 0 0 C:\ORACLE\ORADATA\TEST10G
\EXAMPLE01.DBF 6 888936 0 0 C:\ORACLE\ORADATA\TEST10G\TS_CDC.DBF 6 rows selected
. SQL> select RL_SEQUENCE#,RL_FIRST_CHANGE#,RL_NEXT_CHANGE# from V$BACKUP_FILES
...... 151 888889 889090 151 888889 889090 SQL> select SEQUENCE#,FIRST_CHANGE#,
STATUS from v$log; SEQUENCE# FIRST_CHANGE# STATUS ---------- ------------- -----
----------152 889090 CURRENT 150 887780 INACTIVE 151 888889 INACTIVE SQL> select
SEQUENCE#,FIRST_CHANGE#,NEXT_CHANGE# from v$log_history ......... 147 880266 88
2166 148 882166 882431 149 882431 887780 150 887780 888889 151 888889 889090
>>> Some time later, albert inserts third record insert into test values (3,'tes
t3'); commit; >>> shutdown database >>> delete a datafile data file 6: 'C:\ORACL
E\ORADATA\TEST10G\TS_CDC.DBF' >>> startup database SQL> alter database open; alt
er database open * ERROR at line 1: ORA-01157: cannot identify/lock data file 6
- see DBWR trace file ORA-01110: data file 6: 'C:\ORACLE\ORADATA\TEST10G\TS_CDC.
DBF' >>> RECOVER WITH RMAN RMAN> RESTORE DATABASE; RMAN> RECOVER DATABASE; >>> l
ogon as albert

SQL> select * from test;

        ID NAME
---------- ----------
         3 test3
         1 test1
         2 test2

>>> logon as system SQL> select CHECKPOINT_CHANGE#,CONTROLFILE_SEQUENCE#,ARCHIVE
_CHANGE#,CURRENT_SCN from v$database; CHECKPOINT_CHANGE# CONTROLFILE_SEQUENCE# A
RCHIVE_CHANGE# CURRENT_SCN ------------------ --------------------- ------------
--- ----------891236 1780 889087 891702
SQL> select file#,CHECKPOINT_CHANGE#,LAST_CHANGE#,OFFLINE_CHANGE#,ONLINE_CHANGE#,NAME from v$datafile;

     FILE# CHECKPOINT_CHANGE# LAST_CHANGE# OFFLINE_CHANGE# ONLINE_CHANGE# NAME
---------- ------------------ ------------ --------------- -------------- ----------------------------------------
         1             891236                       534906         534907 C:\ORACLE\ORADATA\TEST10G\SYSTEM01.DBF
         2             891236                       534906         534907 C:\ORACLE\ORADATA\TEST10G\UNDOTBS01.DBF
         3             891236                       534906         534907 C:\ORACLE\ORADATA\TEST10G\SYSAUX01.DBF
         4             891236                       534906         534907 C:\ORACLE\ORADATA\TEST10G\USERS01.DBF
         5             891236                            0              0 C:\ORACLE\ORADATA\TEST10G\EXAMPLE01.DBF
         6             891236                            0              0 C:\ORACLE\ORADATA\TEST10G\TS_CDC.DBF

6 rows selected.

SQL> select CHECKPOINT_CHANGE#,CONTROLFILE_SEQUENCE#,ARCHIVE_CHANGE#,CURRENT_SCN
,archivelog_ch ange# from v$database; CHECKPOINT_CHANGE# CONTROLFILE_SEQUENCE# A
RCHIVE_CHANGE# CURRENT_SCN ARCHIVELOG_CHANGE# ------------------ ---------------
------ --------------- ---------------------------893124 1785 889087 893131 8890
90 SQL> select file#,CHECKPOINT_CHANGE#,LAST_CHANGE#,OFFLINE_CHANGE#,ONLINE_CHAN
GE#,NAME from v$datafil e; FILE# CHECKPOINT_CHANGE# LAST_CHANGE# OFFLINE_CHANGE#
ONLINE_CHANGE# NAME ---------- ------------------ ------------ ---------------
--------------------------------------1 893124 534906 534907 C:\ORACLE\ORADATA\T
EST10G\SYSTEM01.DBF 2 893124 534906 534907 C:\ORACLE\ORADATA\TEST10G\UNDOTBS01.D
BF 3 893124 534906 534907 C:\ORACLE\ORADATA\TEST10G\SYSAUX01.DBF 4 893124 534906
534907 C:\ORACLE\ORADATA\TEST10G\USERS01.DBF 5 893124 0 0 C:\ORACLE\ORADATA\TES
T10G\EXAMPLE01.DBF 6 893124 0 0 C:\ORACLE\ORADATA\TEST10G\TS_CDC.DBF 6 rows sele
cted.

select THREAD#,SEQUENCE#,FIRST_CHANGE#,NEXT_CHANGE# from v$log_history;
....
   THREAD#  SEQUENCE# FIRST_CHANGE# NEXT_CHANGE#
         1        149        882431       887780
         1        150        887780       888889
         1        151        888889       889090
         1        152        889090       893499
         1        153        893499       895665
         1        154        895665       896834
         1        155        896834       898275
         1        156        898275       899008

select THREAD#,SEQUENCE#,FIRST_CHANGE#,NEXT_CHANGE# from v$archived_log;

   THREAD#  SEQUENCE# FIRST_CHANGE# NEXT_CHANGE#
         1        149        882431       887780
         1        149        882431       887780
         1        150        887780       888889
         1        150        887780       888889
         1        151        888889       889090
         1        151        888889       889090
         1        152        889090       893499
         1        152        889090       893499
         1        153        893499       895665
         1        153        893499       895665
         1        154        895665       896834
         1        154        895665       896834
         1        155        896834       898275
         1        155        896834       898275
         1        156        898275       899008
         1        156        898275       899008

END TESTCASE 1:
===============

$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$

-----------------------------------
The V$RMAN_OUTPUT memory-only view shows the output of a currently executing RMAN job, whereas the V$RMAN_STATUS control file view indicates the status of both executing and completed RMAN jobs. The V$BACKUP_FILES provides access to the information used as the basis of the LIST BACKUP and REPORT OBSOLETE commands.

Best views to obtain backup information are:
V$RMAN_STATUS
V$BACKUP_FILES
v$archived_log
v$log_history
v$database

You can also list backups by querying V$BACKUP_FILES and the RC_BACKUP_FILES recovery catalog view. These views provide access to the same information as the LIST BACKUPSET command.

-----------------------------------
Enhanced Reporting: RESTORE PREVIEW
The PREVIEW option to the RESTORE command can now tell you which backups will be accessed during a RESTORE operation.
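For example (a sketch):

RMAN> RESTORE DATABASE PREVIEW;
RMAN> RESTORE DATABASE PREVIEW SUMMARY;    # condensed listing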
--------------------------------->> To run RMAN commands interactively, start RM
AN and then type commands into the command-line interface. For example, you can
start RMAN from the UNIX command shell and then execute interactive commands as
follows: % rman TARGET SYS/oracle@trgt CATALOG rman/cat@catdb % rman TARGET=SYS/
oracle@trgt CATALOG=rman/cat@catdb --------------------------------->> Command f
iles In this example, a sample RMAN script is placed into a command file called
commandfile.rcv. You can run this file from the operating system command line an
d write the output into the log file outfile.txt as follows: % rman TARGET / CAT
ALOG rman/cat@catdb CMDFILE commandfile.rcv LOG outfile.txt --------------------
-------------Run the CONFIGURE DEFAULT DEVICE TYPE command to specify a default
device type for automatic channels. For example, you may make backups to tape mo
st of the time and only occasionally make a backup to disk. In this case, config
ure channels for disk and tape devices, but make sbt the default device type: CO
NFIGURE DEVICE TYPE DISK PARALLELISM 1; # configure device disk CONFIGURE DEVICE
TYPE sbt PARALLELISM 2; # configure device sbt CONFIGURE DEFAULT DEVICE TYPE TO
sbt; Now, RMAN will, by default, use sbt channels for backups. For example, if
you run the following command: BACKUP TABLESPACE users; RMAN only allocates chan
nels of type sbt during the backup because sbt is the default device.
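If you occasionally need the non-default device for a particular backup, you can override it on the command itself; a minimal sketch (the tablespace name is just an example):

RMAN> BACKUP DEVICE TYPE DISK TABLESPACE users;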
$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$
$$ $$$$$
24.2 Older RMAN stuff: 8,8i,9i: =============================== 24.1 Introductio
n: ------------------
Recovery Manager (RMAN) is an Oracle tool that allows you to back up, copy, rest
ore, and recover datafiles, control files, and archived redo logs. It is include
d with the Oracle server and does not require separate installation. You can inv
oke RMAN as a command line utility from the operating system (O/S) prompt or use
the GUI-based Enterprise Manager Backup Manager. RMAN uses "server sessions" t
o automate many of the backup and recovery tasks that were formerly performed ma
nually. For example, instead of requiring you to locate appropriate backups for
each datafile, copy them to the correct place using operating system commands, a
nd choose which archived logs to apply, RMAN manages these tasks automatically.
RMAN stores metadata about its backup and recovery operations in the recovery ca
talog, which is a centralized repository of information, or exclusively in the c
ontrol file. Typically, the recovery catalog is stored in a separate database. I
f you do not use a recovery catalog, RMAN uses the control file as its repositor
y of metadata. RMAN can be used on a database in ARCHIVELOG mode or NOARCHIVELOG mode. !!!! But, for open backups, the database MUST BE in ARCHIVELOG MODE. That's true
for Oracle 8, 8i, 9i and 10g. RMAN doesn't do a "begin backup". It is not necess
ary when you use RMAN. RMAN does an intelligent copy of the database blocks (as
opposed to a simple OS copy) and it ensures we do not copy a fractured block. Th
e whole purpose of the begin backup (of the OS type of backup) is to record more
info into the redo logs in the event an OS copy copies a "fractured block" - wh
ere the head and tail do not match (can happen since we are WRITING to the datab
ase at the same time the backup would be reading). When RMAN hits such a block -
- it re-reads it to get a clean copy.

How to start RMAN?
- You can invoke the RMAN utility from the unix or cmd prompt:
$ rman
RMAN>
Once started you will see the RMAN> prompt.
- Or you can give command line parameters along with the rman call:
% rman target sys/sys_pwd@prod1 catalog rman/rman@rcat 24.2 Types of commands, a
nd interactive mode or batch mode: ---------------------------------------------
-------------RMAN uses two basic types of commands: stand-alone commands and job
commands.
- The job commands always appear within the brackets of a run command. - The sta
nd-alone command can be issued right after the RMAN prompt. You can run RMAN in
interactive mode or batch mode - examples of interactive mode: RMAN> run { 2> al
locate channel d1 type disk; 3> backup database; 4> } RMAN> run { allocate chann
el c1 type disk; copy datafile 6 to 'F:\oracle\backups\oem01.cpy'; release chann
el c1; } RMAN> run { allocate channel c1 type disk; backup format 'F:\oracle\bac
kups\oem01.rbu' ( datafile 6 ); release channel c1; } RMAN> run { allocate chann
el c1 type 'sbt_tape'; restore database; recover database; } Note about 'channel
': You must allocate a 'channel' before you execute backup and recovery commands
. Each allocated channel establishes a connection from RMAN to a target database
by starting a server session on the instance. This server session performs the
backup and recovery operations. Only one RMAN session communicates with the allo
cated server sessions. You can allocate multiple channels, thus allowing a singl
e RMAN command to read or write multiple backups or image copies in parallel. Th
us, the number of channels that you allocate affects the degree of parallelism w
ithin a command. When backing up to tape you should allocate one channel for eac
h physical device, but when backing up to disk you can allocate as many channels
as necessary for maximum throughput.
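For instance, a sketch of a two-channel disk backup (the channel names d1 and d2 are arbitrary):

RMAN> run {
2> allocate channel d1 type disk;
3> allocate channel d2 type disk;
4> backup database;
5> }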
The simplest way to determine whether RMAN encountered an error is to examine its return code. RMAN returns 0 to the operating system if no errors occurred, 1 otherwise. For example, if you are running UNIX and using the C shell, RMAN outputs the return code into a shell variable called $status.
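For example, a minimal Bourne-shell sketch that tests the return code (the command file and log file names are placeholders):

#!/bin/sh
rman target / @b_whole_l0.rcv log rman_log.f
if [ $? -ne 0 ]; then
  echo "RMAN reported errors, check rman_log.f"
fi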
The second easiest way is to search the Recovery Manager output for the string R
MAN-00569, which is the message number for the error stack banner.
All RMAN errors are preceded by this error message. If you do not see an RMAN-00
569 message in the output, then there are no errors. - example of batch mode: Yo
u can type RMAN commands into a file, and then run the command file by specifyin
g its name on the command line. The contents of the command file should be ident
ical to commands entered at the command line. Suppose the commandfile is called
'b_whole_l0.rcv', then the rman call could be as in the following example: $ rma
n target / catalog rman/rman@rcat @b_whole_l0.rcv log rman_log.f Another example
: c:> rman target xxx/yyy@target rcvcat aaa/bbb@catalog cmdfile bkdb.scr msglog
bkdb.log
24.3. Recovery Manager Repository or RMAN Catalog: -----------------------------
--------------------Storage of the RMAN Repository in the Recovery Catalog, or e
xclusively in the target database controlfile: The RMAN repository is the collec
tion of metadata about your target databases that RMAN uses to conduct its backu
p, recovery, and maintenance operations. You can either create a recovery catalo
g in which to store this information, or let RMAN store it exclusively in the ta
rget database control file. Although RMAN can conduct all major backup and recov
ery operations using just the control file, some RMAN commands function only whe
n you use a recovery catalog. The recovery catalog is maintained solely by RMAN;
the target database never accesses it directly. RMAN propagates information abo
ut the database structure, archived redo logs, backup sets, and datafile copies
into the recovery catalog from the target database's control file. A single reco
very catalog is able to store information for multiple target databases. What is
in the recovery catalog? --------------------------------Datafile and archived
redo log backup sets and backup pieces. -Datafile copies. -Archived redo logs an
d their copies. -Tablespaces and datafiles on the target database. -Stored scrip
ts, which are named user-created sequences of RMAN and SQL commands. Resynchroni
zation of the Recovery Catalog ----------------------------------------The recov
ery catalog obtains crucial RMAN metadata from the target database control file.
Resynchronization of the recovery catalog ensures that the metadata that RMAN ob
tains from the control file stays current. Resynchronizations can be full or par
tial. In a partial resynchronization, RMAN reads the current control file to upd
ate changed data, but does not resynchronize metadata about the database physica
l schema: datafiles, tablespaces, redo threads, rollback segments (only if the d
atabase is open), and online redo logs. In a full resynchronization, RMAN update
s all changed records, including schema records. When you issue certain commands
in RMAN, the program automatically detects when it needs to perform a full or p
artial resynchronization and executes the operation as needed. You can also forc
e a full resynchronization by issuing a 'resync catalog' command. It is a good i
dea to run RMAN once a day or so and issue the resync catalog command to ensure
that the catalog stays current. Because the control file employs a circular reus
e system, backup and copy records eventually get overwritten. A single recovery
catalog is able to store information for multiple target databases. 24.4 Media M
anager: ------------------To utilize tape storage for your database backups, RMA
N requires a media manager. A media manager is a utility that loads, labels, and
unloads sequential media such as tape drives for the purpose of backing up and
recovering data. Note that Oracle does not need to connect to the media manageme
nt library (MML) software when it backs up to disk. Software that is compliant w
ith the MML interface enables an Oracle server session to issue commands to the
media manager to back up or restore a file. The media manager responds to the co
mmand by loading, labeling, or unloading the requested tape. 24.5 Backups: -----
-------When you execute the backup command, you create one or more backup sets.
A backup set, which is a logical construction, contains one or more physical bac
kup pieces. Backup pieces are operating system files that contain the backed up
datafiles, control files, or archived redo logs. You cannot split a file across
different backup sets or mix archived redo logs and datafiles into a single back
up set. A backup set is a complete set of backup pieces that constitute a full o
r
incremental backup of the objects specified in the backup command. Backup sets a
re in an RMAN-specific format; image copies, in contrast, are available for use without additional processing.

So, for example:
You can have a backupset 'backupset 1' containing just 1 datafile.
You can have a backupset 'backupset 2' containing many datafiles, as blocks.
You can have a backupset 'backupset 3' containing archived redologs.
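For example (a sketch; the file number and path are placeholders):

RMAN> backup database;                            # creates one or more backup sets
RMAN> copy datafile 6 to '/backup/users01.cpy';   # creates an image copy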
You can either let RMAN determine a unique name for the backup piece or use the
format parameter to specify a name. If you do not specify a filename, RMAN uses
the %U substitution variable to guarantee a unique name. The backup command prov
ides substitution variables that allow you to generate unique filenames. 24.6 St
arting RMAN Sessions: ---------------------------Example 1: connect to target da
tabase ------------------------------------$ ORACLE_SID=brdb;export ORACLE_SID $
rman RMAN>connect target sys/password RMAN .. connected Example 2: connect to ca
talog database -------------------------------------$rman RMAN>connect catalog r
man/rman
RMAN .. connected

Starting and stopping target database:
$ ORACLE_SID=brdb;export ORACLE_SID
$ rman
RMAN> connect target sys/password
RMAN .. connected
RMAN> startup     -- will start the target database
RMAN> shutdown    -- will stop the target database
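The target can also be brought to a specific state from the RMAN prompt, which is handy before a restore. A small sketch, using the same commands that appear in the recovery examples later on:

RMAN> shutdown immediate;
RMAN> startup mount;
RMAN> alter database open;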
Example 3: starting RMAN with command parameters: ------------------------------
------------------$ ORACLE_SID=brdb;export ORACLE_SID $ rman target sys/password
@prod1 catalog rman/rman@rcat
rman target sys/cactus@playroca catalog rman/cactus@playrman 24.7 Creating the R
ecovery Catalog: ----------------------------------- create a database for the R
ecovery Catalog, for example rcdb - create the user that will hold the catalog,
rman with password rman create user rman identified by rman default tablespace r
man temporary tablespace temp; - give the right permissions: grant connect, reso
urce, recovery_catalog_owner to rman; - create the catalog in database rcdb In 8
.0, to setup Recovery Catalog, you can run $ORACLE_HOME/rdbms/admin/catrman.sql
while connected to RMAN database. In 8.1 and later, to setup the Recovery Catalo
g, use the create catalog command. $ rman RMAN>connect catalog rman/rman RMAN-06
008 connected to recovery catalog database RMAN-06428 recovery catalog is not in
stalled RMAN>create catalog tablespace rman; RMAN-06431 recovery catalog created
You can expect something like the following to exist in the rcdb database: SQL>
select table_name, tablespace_name, owner 2 from dba_tables where owner='RMAN';
TABLE_NAME                     TABLESPACE_NAME                OWNER
------------------------------ ------------------------------ ------
AL                             DATA                           RMAN
BCB                            DATA                           RMAN
BCF                            DATA                           RMAN
BDF                            DATA                           RMAN
BP                             DATA                           RMAN
BRL                            DATA                           RMAN
BS                             DATA                           RMAN
CCB                            DATA                           RMAN
CCF                            DATA                           RMAN
CDF                            DATA                           RMAN
CKP                            DATA                           RMAN
CONFIG                         DATA                           RMAN
DB                             DATA                           RMAN
DBINC                          DATA                           RMAN
DF                             DATA                           RMAN
DFATT                          DATA                           RMAN
OFFR                           DATA                           RMAN
ORL                            DATA                           RMAN
RCVER                          DATA                           RMAN
RLH                            DATA                           RMAN
RR                             DATA                           RMAN
RT                             DATA                           RMAN
SCR                            DATA                           RMAN
SCRL                           DATA                           RMAN
TS                             DATA                           RMAN
TSATT                          DATA                           RMAN
XCF                            DATA                           RMAN
XDF                            DATA                           RMAN

28 rows selected.
SQL> select view_name, owner
  2  from dba_views where owner='RMAN';

8, 8i:
------
VIEW_NAME                      OWNER
------------------------------ -----
RC_ARCHIVED_LOG                RMAN
RC_BACKUP_CONTROLFILE          RMAN
RC_BACKUP_CORRUPTION           RMAN
RC_BACKUP_DATAFILE             RMAN
RC_BACKUP_PIECE                RMAN
RC_BACKUP_REDOLOG              RMAN
RC_BACKUP_SET                  RMAN
RC_CHECKPOINT                  RMAN
RC_CONTROLFILE_COPY            RMAN
RC_COPY_CORRUPTION             RMAN
RC_DATABASE                    RMAN
RC_DATABASE_INCARNATION        RMAN
RC_DATAFILE                    RMAN
RC_DATAFILE_COPY               RMAN
RC_LOG_HISTORY                 RMAN
RC_OFFLINE_RANGE               RMAN
RC_PROXY_CONTROLFILE           RMAN
RC_PROXY_DATAFILE              RMAN
RC_REDO_LOG                    RMAN
RC_REDO_THREAD                 RMAN
RC_RESYNC                      RMAN
RC_STORED_SCRIPT               RMAN
RC_STORED_SCRIPT_LINE          RMAN
RC_TABLESPACE                  RMAN

24 rows selected.

The recovery catalog is now installed in the database rcdb.

10g:
----
SQL> select view_name from dba_views where view_name like '%RMAN%'; VIEW_NAME --
---------------------------V_$RMAN_CONFIGURATION GV_$RMAN_CONFIGURATION V_$RMAN_
STATUS V_$RMAN_OUTPUT GV_$RMAN_OUTPUT V_$RMAN_BACKUP_SUBJOB_DETAILS V_$RMAN_BACK
UP_JOB_DETAILS V_$RMAN_BACKUP_TYPE MGMT$HA_RMAN_CONFIG RC_RMAN_OUTPUT RC_RMAN_BA
CKUP_SUBJOB_DETAILS RC_RMAN_BACKUP_JOB_DETAILS RC_RMAN_BACKUP_TYPE RC_RMAN_CONFI
GURATION RC_RMAN_STATUS 15 rows selected. SQL> select view_name, owner 2 from db
a_views where owner='RMAN'; XXXX RC_RMAN_OUTPUT RC_BACKUP_FILES RC_RMAN_BACKUP_S
UBJOB_DETAILS RC_RMAN_BACKUP_JOB_DETAILS RC_BACKUP_SET_DETAILS RC_BACKUP_PIECE_D
ETAILS RC_BACKUP_COPY_DETAILS RC_PROXY_COPY_DETAILS RC_PROXY_ARCHIVELOG_DETAILS
RC_BACKUP_DATAFILE_DETAILS RC_BACKUP_CONTROLFILE_DETAILS RC_BACKUP_ARCHIVELOG_DE
TAILS RC_BACKUP_SPFILE_DETAILS RC_BACKUP_SET_SUMMARY RC_BACKUP_DATAFILE_SUMMARY
RC_BACKUP_CONTROLFILE_SUMMARY RC_BACKUP_ARCHIVELOG_SUMMARY RC_BACKUP_SPFILE_SUMM
ARY RC_BACKUP_COPY_SUMMARY RC_PROXY_COPY_SUMMARY RC_PROXY_ARCHIVELOG_SUMMARY RC_
UNUSABLE_BACKUPFILE_DETAILS RC_RMAN_BACKUP_TYPE RC_DATABASE RC_DATABASE_INCARNAT
ION RC_RESYNC RC_CHECKPOINT RC_TABLESPACE RC_DATAFILE RC_TEMPFILE RC_REDO_THREAD
RC_REDO_LOG RC_LOG_HISTORY RC_ARCHIVED_LOG RC_BACKUP_SET RC_BACKUP_PIECE RC_BACK
UP_DATAFILE RC_BACKUP_CONTROLFILE RC_BACKUP_SPFILE RC_DATAFILE_COPY RC_CONTROLFI
LE_COPY RC_BACKUP_REDOLOG RC_BACKUP_CORRUPTION RC_COPY_CORRUPTION RC_OFFLINE_RAN
GE RC_STORED_SCRIPT RC_STORED_SCRIPT_LINE RC_PROXY_DATAFILE RC_PROXY_CONTROLFILE
RC_RMAN_CONFIGURATION RC_DATABASE_BLOCK_CORRUPTION RC_PROXY_ARCHIVEDLOG RC_RMAN
_STATUS 53 rows selected.
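Once target databases are registered (see 24.8), a quick sanity check on the catalog content can be done through the RC_* views listed above. A minimal sketch, run as the catalog owner; exact column names may differ slightly per release:

SQL> connect rman/rman@rcdb
SQL> select db_key, dbid, name from rc_database;

RC_DATABASE_INCARNATION can be used in the same way to see earlier incarnations.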
Compatibility: --------------If you use an 8.1.6 RMAN executable to execute the
"create catalog" command, then the recovery catalog is created as a release 8.1.
6 recovery catalog. Compatibility=8.1.6 You cannot use the 8.1.6 catalog with a
pre-8.1.6 release of the RMAN executable. If you use an 8.1.6 RMAN executable to
execute the "upgrade catalog" command, then the recovery catalog is upgraded fr
om a pre-8.1.6 release to a release 8.1.6 catalog. Compatibility=8.0.4 The 8.1.6
catalog is backwards compatible with older releases of the RMAN executable. To
view compatibility: SQL> SELECT value FROM config WHERE name='compatible'; Use a
n older RMAN to create the catalog. Use the newer RMAN to upgrade the catalog. Y
ou can always do: RMAN> configure compatible = 8.1.5; *** EXTRA: different RMAN
CATALOGS in 1 DATABASE ***
Different versions in one database: ----------------------------------In general
, the rules of RMAN compatibility are as follows: - The RMAN catalog schema vers
ion (tables/views) should be greater than or equal to the catalog database versi
on. - The RMAN catalog is backwards compatible with target databases from earlie
r releases. - The versions of the RMAN executable and the target database should
be the same - RMAN cannot create release 8.1 or later catalog schemas in 8.0 ca
talog databases.
Suppose you have 8.0.5 and 9i target databases. - create one 9i database rcdb -
create 2 tablespaces: RCAT80 and RCAT9I - create corresponding rman users Create
the 8.0.5 catalog in the 9.2.0 catalog database. # sql syntax for creating logi
cal catalog 8.0.5 structure. create tablespace RCAT80 datafile '/export/home/dfr
eneuil/D817F/ DATAFILES/rcat80_01.dbf' size 20M ; Create the 9.2.0 catalog in th
e 9.2.0 catalog database. # sql syntax for creating logical catalog 8i structure
. create tablespace RCAT9I datafile '/export/home/dfreneuil/D920F/ DATAFILES/rca
t9i_01.dbf' size 20M ; # sql syntax for creating catalog 8.0.5 user owner. creat
e user RMAN80 identified by rman80 default tablespace RCAT80 temporary tablespac
e temp quota unlimited on RCAT80 ; grant connect, resource,recovery_catalog_owne
r to rman80 ; # sql syntax for creating catalog 9i user owner. create user RMAN9
I identified by rman9i default tablespace RCAT9I temporary tablespace temp quota
unlimited on RCAT9I ; grant connect, resource,recovery_catalog_owner to rman9i
; - make tnsnames.ora OK - Create the 2 catalogs: 9.2.0 catalog views creation.
$ rman catalog rman9i/rman9i -- to connect locally. or $ rman catalog rman8i/rma
n9i@alias to connect through NET8.
RMAN> create catalog ; 8.0.5 catalog views creation. Since the catalogs database
is an 8.1.7 database, connect to the 8.0.5 catalog via 8.0.5 SQL*Plus. $ sqlplu
s rman80/rman80@alias_to_rcat80 --> connect from the target machine to the 8.0.5
catalog. SQL> @?/rdbms/admin/catrman.sql Backup an 8.0.5 database with 8.0.5 RM
AN into an 8.0.5 catalog in an 9.2.0 catalog database. $ rman rcvcat rman80/rman
80@V817 8.0.5 db ----> 8.0.5 RMAN ----> 8.0.5 catalog in 9.2.0 db 9.2.0 db ---->
9.2.0 RMAN ----> 9.2.0 catalog in 9.2.0 db *** END EXTRA ***
24.8 Registering and un-registering the target database: -----------------------
--------------------------------Register: --------Now we must 'register' the tar
get database. Suppose the target database is called 'airm'. Connect to the targe
t and the catalog: $ rman target / catalog rman/rman@rcdb or $ rman system/passw
@airm catalog rman/rman@rcdb RMAN-06005 connected to target database: AIRM RMAN-
06008 connected to recovery catalog database RMAN>register database And the airm
database will be registered in the catalog. If you connected to rcdb before the
registering and give the following queries before and after registering airm: S
QL> connect system/manager@rcdb Connected. before registering: SQL> select * fro
m rman.db;
no rows selected

after registering:
SQL> select * from rman.db;

    DB_KEY      DB_ID CURR_DBINC_KEY
---------- ---------- --------------
         1 2092303715              2

Unregister:
------
----It's best to unregister the backups from the catalog first: RMAN> list backu
p of database; RMAN-03022: compiling command: list shows possible backupsets wit
h their numbers, for example 989 RMAN> allocate channel for maintenance type dis
k; change backupset 989 delete; Next we un-register the target database. You wil
l not use rman, but a special procedure. You must use this procedure with the DB
_KEY and DB_ID parameters as values. In SQL*Plus: SQL>execute dbms_rcvcat.unregi
sterdatabase(1,2092303715) and the airm database will be unregistered. 24.9 Rese
t of the catalog: -------------------------If you have opened the target databas
e with the 'RESETLOGS' option, you have in fact created a new 'incarnation' of t
he database. This information must be 'told' to the recovery catalog via the 're
set database' command: $ rman target sys/passw catalog rman/rman@rcdb RMAN>reset
database;
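After the resetlogs and the 'reset database', you can check which incarnation the catalog now considers current. A small sketch (output layout varies per version):

$ rman target / catalog rman/rman@rcdb
RMAN> list incarnation;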
-- VALIDATE: -- ---------
You can use the VALIDATE option of the BACKUP command to verify that database fi
les exist and are in the correct locations, and have no physical or logical corr
uptions that would prevent RMAN from creating backups of them. When performing a
BACKUP... VALIDATE, RMAN reads the files to be backed up in their entirety, as
it would during a real backup. It does not, however, actually produce any backup
sets or image copies. If the backup validation discovers corrupt blocks, then R
MAN updates the V$DATABASE_BLOCK_CORRUPTION view with rows describing the corrup
tions. You can repair corruptions using block media recovery, documented in Orac
le Database Backup and Recovery Advanced User's Guide. After a corrupt block is
repaired, the row identifying this block is deleted from the view. For example,
you can validate that all database files and archived logs can be backed up by r
unning a command as follows: BACKUP VALIDATE DATABASE ARCHIVELOG ALL; The RMAN c
lient displays the same output that it would if it were really backing up the fi
les. If RMAN cannot validate the backup of one or more of the files, then it iss
ues an error message. For example, RMAN may show output similar to the following
: RMAN-00571: =========================================================== RMAN-0
0569: =============== ERROR MESSAGE STACK FOLLOWS =============== RMAN-00571: ==
========================================================= RMAN-03002: failure of
backup command at 08/29/2002 14:33:47 ORA-19625: error identifying file /oracle
/oradata/trgt/arch/archive1_6.dbf ORA-27037: unable to obtain file status SVR4 E
rror: 2: No such file or directory Additional information: 3
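If a BACKUP ... VALIDATE run does flag corrupt blocks, the V$DATABASE_BLOCK_CORRUPTION view mentioned above can be queried from SQL*Plus. A minimal sketch:

SQL> select file#, block#, blocks, corruption_change#, corruption_type
     from v$database_block_corruption;

Rows disappear from this view again once the blocks have been repaired, for example with block media recovery.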
-- CONTROLFILE AUTOBACKUP -- ---------------------Configuring Control File and S
erver Parameter File Autobackup RMAN can be configured to automatically back up
the control file and server parameter file whenever the database structure metad
ata in the control file changes and whenever a backup record is added. The autob
ackup enables RMAN to recover the database even if the current control file, cat
alog, and server parameter file are lost. Because the filename for the autobacku
p uses a well-known format, RMAN can search for it without access to a repositor
y, and then restore the server parameter file. After you have started the instan
ce with the restored server parameter file, RMAN can restore the control file fr
om an autobackup. After you mount the control file, the RMAN repository is avail
able and RMAN can restore the
datafiles and find the archived redo log. You can enable the autobackup feature
by running this command: CONFIGURE CONTROLFILE AUTOBACKUP ON; You can disable th
e feature by running this command: CONFIGURE CONTROLFILE AUTOBACKUP OFF; Backing
Up Control Files with RMAN You can back up the control file when the database i
s mounted or open. RMAN uses a snapshot control file to ensure a read-consistent
version. If CONFIGURE CONTROLFILE AUTOBACKUP is ON (by default it is OFF), then
RMAN automatically backs up the control file and server parameter file after ev
ery backup and after database structural changes. The control file autobackup co
ntains metadata about the previous backup, which is crucial for disaster recover
y. If the autobackup feature is not set, then you must manually back up the cont
rol file in one of the following ways: .Run BACKUP CURRENT CONTROLFILE .Include
a backup of the control file within any backup by using the INCLUDE CURRENT CONT
ROLFILE option of the BACKUP command .Back up datafile 1, because RMAN automatic
ally includes the control file and SPFILE in backups of datafile 1 Note: If the
control file block size is not the same as the block size for datafile 1, then t
he control file cannot be written into the same backup set as the datafile. RMAN
writes the control file into a backup set by itself if the block size is differ
ent. A manual backup of the control file is not the same as a control file autob
ackup. In manual backups, only RMAN repository data for backups within the curre
nt RMAN session is in the control file backup, and a manually backed-up control
file cannot be automatically restored.
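A minimal sketch that puts the autobackup in a known disk location and later restores from it. The '/backup' directory and the DBID are only example values (the DBID is the one from the registration example in 24.8), and SET DBID is only needed when restoring without a recovery catalog:

RMAN> CONFIGURE CONTROLFILE AUTOBACKUP ON;
RMAN> CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '/backup/cf_%F';

-- after loss of the current control file:
RMAN> STARTUP NOMOUNT;
RMAN> SET DBID 2092303715;
RMAN> RESTORE CONTROLFILE FROM AUTOBACKUP;
RMAN> ALTER DATABASE MOUNT;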
24.11 Create scripts: --------------------If you are connected to the target and
the catalog, you can create and store scripts in the catalog. Example:
== XXX RMAN> create script complet_bac1 { 2> allocate channel c1 type disk; 3> a
llocate channel c2 type disk; 4> backup database; 5> sql 'ALTER SYSTEM ARCHIVE L
OG ALL'; 6> backup archivelog all; 7> } RMAN-03022: compiling command: create sc
ript RMAN-03023: executing command: create script RMAN-08085: created script com
plet_bac1 To run such a script: $ rman target sys/passw@airm catalog rman/rman@r
cdb RMAN>run { execute script complet_bac1; } You can also replace a script: RMAN
>replace script b_whole_l0 { # back up whole database and archived logs allocate
channel d1 type disk; allocate channel d2 type disk; allocate channel d3 type d
isk; backup incremental level 0 tag b_whole_l0 filesperset 6 format '/dev/backup
/prod1/df/df_t%t_s%s_p%p' -- name of the backup piece (database); sql 'ALTER SYS
TEM ARCHIVE LOG CURRENT'; backup filesperset 20 format '/dev/backup/prod1/al/al_
t%t_s%s_p%p' (archivelog all delete input); } RMAN> SET CONTROLFILE AUTOBACKUP F
ORMAT FOR DEVICE TYPE DISK TO 'controlfile_%F'; RMAN> BACKUP AS COPY DATABASE; R
MAN> RUN { SET CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '/tmp/%F.bc
k'; BACKUP AS BACKUPSET DEVICE TYPE DISK DATABASE; }
24.12 Parallelization: ---------------------RMAN executes commands serially; that is
, it completes the current command before starting the next one. Parallelism is
exploited only within the context of a single command. Consequently, if you want
5 datafile copies, issue a single copy command specifying all 5 copies rather t
han 5 separate copy
commands. In the following example, you allocate 5 channels and then issue 5 separate copy commands, so all copy commands are performed one after the other.

run {
  allocate channel c1 type disk;
  allocate channel c2 type disk;
  allocate channel c3 type disk;
  allocate channel c4 type disk;
  allocate channel c5 type disk;
  copy datafile 22 to '/dev/prod/backup1/prod_tab5_1.dbf';
  copy datafile 23 to '/dev/prod/backup1/prod_tab5_2.dbf';
  copy datafile 24 to '/dev/prod/backup1/prod_tab5_3.dbf';
  copy datafile 25 to '/dev/prod/backup1/prod_tab5_4.dbf';
  copy datafile 26 to '/dev/prod/backup1/prod_tab6_1.dbf';
}
To get the copy command run in parallel, use the following command: run { alloca
te channel c1 type disk; allocate channel c2 type disk; allocate channel c3 type
disk; allocate channel c4 type disk; allocate channel c5 type disk; copy datafi
le 5 to '/dev/prod/backup1/prod_tab5_1.dbf', datafile 23 to '/dev/prod/backup1/p
rod_tab5_2.dbf', datafile 24 to '/dev/prod/backup1/prod_tab5_3.dbf', datafile 25
to '/dev/prod/backup1/prod_tab5_4.dbf', datafile 26 to '/dev/prod/backup1/prod_
tab6_1.dbf'; } 24.13 Creating backups: ----------------------1. Image copy and B
ackup set: ----------------------------- you can make 'image copies', which are
actual complete copies of database files, controlfiles, or archived redologs, to
disk. These are not stored in the special RMAN format, and can be used 'outside'
of rman if necessary. - you can make for example backups of database files in
a 'backup set' which are in the special rman format. You must use rman to proces
s them. Examples: - image copy, using the copy command:
RMAN>run { allocate channel c1 type disk; copy datafile 1 to '/staging/system01.
dbf', datafile 2 to '/staging/data01.dbf', datafile 3 to '/staging/users01.dbf',
current controlfile to '/staging/control1.ctl'; } RMAN> run { 2> allocate chann
el c1 type disk; 3> copy datafile 1 to 'df1.bak'; 4> } - backup set, using the b
ackup command: RMAN> run { allocate channel c1 type disk; backup tablespace user
s including current controlfile; } RMAN> run { 2> allocate channel c1 type disk;
3> backup tablespace system; 4> } RMAN> This example backs up the tablespace to
its default backup location, which is port-specific: on UNIX systems the locati
on is $ORACLE_HOME/dbs. Because you do not specify the format parameter, RMAN au
tomatically assigns the backup a unique filename.
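If you do not want the pieces in $ORACLE_HOME/dbs, add a format clause. A sketch, where '/backup' is just an assumed directory:

RMAN> run {
  allocate channel c1 type disk;
  backup tablespace users
  format '/backup/users_%U';
}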
2. Archive mode and No archive mode: -----------------------------------If the d
atabase is in ARCHIVELOG mode, then the target database can be open or closed; y
ou do not need to close the database cleanly (although Oracle recommends you do
so that the backup is consistent). If the database is in NOARCHIVELOG mode, then
you must close it cleanly prior to taking a backup. The following example shows
that a tablespace backup does not work if the database is open and in no archiv
e mode. RMAN> run { 2> allocate channel c1 type disk; 3> backup tablespace users
; 4> } RMAN-03022: compiling command: allocate RMAN-03023: executing command: al
locate
RMAN-08030: allocated channel: c1 RMAN-08500: channel c1: sid=17 devtype=DISK RM
AN-03022: compiling command: backup RMAN-03023: executing command: backup RMAN-0
8008: channel c1: starting full datafile backupset RMAN-08502: set_count=2 set_s
tamp=482962114 creation_time=10-JAN-03 RMAN-08010: channel c1: specifying datafi
le(s) in backupset RMAN-00571: =================================================
========== RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS =============
== RMAN-00571: =========================================================== RMAN-
03007: retryable error occurred during execution of command: backup RMAN-07004:
unhandled exception during command execution on channel c1 RMAN-10035: exception
raised in RPC: ORA-19602: cannot backup or copy active file in NOARCHIVELOG mod
e RMAN-10031: ORA-19624 occurred during call to DBMS_BACKUP_RESTORE.BACKUPDATAFI
LE 3. Names and sizes: ------------------Filenames for Backup Pieces: You can ei
ther let RMAN determine a unique name for the backup piece or use the format par
ameter to specify a name. If you do not specify a filename, RMAN uses the %U sub
stitution variable to guarantee a unique name. The backup command provides subst
itution variables that allow you to generate unique filenames. Number and Size o
f Backup Set: Use the backupSpec clause to list what you want to back up as well
as specify other useful options. The number and size of backup sets depends on:
The number of backupSpec clauses that you specify. The number of input files sp
ecified or implied in each backupSpec clause. The number of channels that you al
locate. The filesperset parameter, which limits the number of files for a backup
set. The setsize parameter, which limits the overall size in bytes of a backup
set. The most important rules in the algorithm for backup set creation are: Each
allocated channel that performs work in the backup job--that is, that is not id
le--generates at least one backup set. By default, this backup set contains one
backup piece. RMAN always tries to divide the backup load so that all allocated
channels have roughly the same amount of work to do. The maximum upper limit for
the number of files per backup set is determined by the filesperset parameter o
f the backup command.
The maximum upper limit for the size in bytes of a backup set is determined by t
he setsize parameter of the backup command. The filesperset parameter limits the
number of files that can go in a backup set. The default value of this paramete
r is calculated by RMAN as follows: RMAN compares the value 64 to the rounded-up
ratio of number of files / number of channels, and sets filesperset to the lower
value. For example, if you back up 70 files with one channel, RMAN divides 70/1
, compares this value to 64, and sets filesperset to 64 because it is the lower
value. The number of backup sets produced by RMAN is the rounded-up ratio of nu
mber of datafiles / filesperset. For example, if you back up 70 datafiles and fi
lesperset is 64, then RMAN produces 2 backup sets.

setsize:     Sets the maximum size in bytes of the backup set without specifying a limit
             to the number of files in the set.
filesperset: Sets a limit to the number of files in the backup set without specifying a
             maximum size in bytes of the set.
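As a sketch of how these knobs are typically combined (the path and values are only examples), forcing at most 4 files per backup set:

run {
  allocate channel d1 type disk;
  backup
    filesperset 4
    format '/backup/df_%d_t%t_s%s_p%p'
    (database);
}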
4. Examples: ------------ Backup and Recovery Database: ------------------------
------Other Examples: --------------$ rman target / catalog rman/rman@rcat To wr
ite the output to a log file, specify the file at startup. For example, enter: $
rman target / catalog rman/rman@rcat log /oracle/log/mlog.f Allocate one or mor
e channels of type disk or type 'sbt_tape'. This example backs up all the datafi
les as well as the control file. It does not specify a format parameter, so RMAN
gives each backup piece a unique name automatically and stores it in the port-s
pecific default location ($ORACLE_HOME/dbs on UNIX). Whole database backups auto
matically include the current control file, but the current control file does no
t contain a record of the whole database backup. To obtain a control file backup
with a record of the whole database backup, make a backup of the control file a
fter executing the whole database backup.
Include a backup of the control file within any backup by specifying the include
current controlfile option. Optionally, use the set duplex command to create mu
ltiple identical backupsets.

run {
  allocate channel ch1 type disk;
  backup database;
  sql 'ALTER SYSTEM ARCHIVE LOG CURRENT';  # archives current redo log as well as all unarchived logs
}
Optionally, use the format parameter to specify a filename for the backup piece.
For example, enter: run { allocate channel ch1 type disk; backup database forma
t '/oracle/backup/%U'; # %U generates a unique filename
}
Optionally, use the tag parameter to specify a tag for the backup. For example,
enter: run { allocate channel ch1 type 'sbt_tape'; backup database tag = 'weekly
_backup'; # gives the backup a tag identifier
} This script backs up the database and the archived redo logs: RMAN> run { allo
cate channel ch1 type disk; allocate channel ch2 type disk; backup database; sql
'ALTER SYSTEM ARCHIVE LOG ALL'; backup archivelog all; } RMAN> run { allocate c
hannel ch1 type disk; allocate channel ch2 type disk; backup format 'i:\backup\f
ull_db.bck' (database); sql 'ALTER SYSTEM ARCHIVE LOG CURRENT'; backup archivelo
g all; } - Backup tablespace: -------------------run { allocate channel ch1 type
disk; allocate channel ch2 type disk; allocate channel ch3 type disk;
backup filesperset = 3 tablespace inventory, sales include current controlfile;
} - Backup datafiles: ------------------run { allocate channel ch1 type disk; ba
ckup (datafile 1,2,3,4,5,6 filesperset 3) datafilecopy '/oracle/copy/tbs_1_c.f';
}
RMAN> run { allocate channel c1 type disk; copy datafile 6 to 'F:\oracle\backups
\oem01.cpy'; release channel c1; } RMAN> run { allocate channel c1 type disk; ba
ckup format 'F:\oracle\backups\oem01.rbu' ( datafile 6 ); release channel c1; }
RMAN> run { allocate channel ch1 type disk; allocate channel ch2 type disk; allo
cate channel ch3 type disk; backup (datafile 1,2,3 filesperset = 1 channel ch1)
(datafilecopy '/oracle/copy/cf.f' filesperset = 2 channel ch2) (archivelog from
logseq 100 until logseq 102 thread 1 filesperset = 3 channel ch3); } - Backup ar
chived redologs: --------------------------To back up archived logs, issue backu
p archivelog with the desired filtering options: run { allocate channel ch1 type
'sbt_tape'; backup archivelog all # Backs up all archived redo logs. delete inp
ut; # Optionally, delete the input logs
} You can also specify a range of archived redo logs by time, SCN, or log sequen
ce number. This example backs up all archived logs created more than 7 and less
than 30 days ago:

run {
  allocate channel ch1 type disk;
  backup archivelog from time 'SYSDATE-30' until time 'SYSDATE-7';
}
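In the same way a range can be selected by log sequence instead of time. A sketch, the sequence numbers being only an example:

run {
  allocate channel ch1 type disk;
  backup archivelog from logseq 100 until logseq 150 thread 1;
}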
- Incremental backups: ---------------------This example makes a level 0 backup
of the database: run { allocate channel ch1 type disk; backup incremental level
= 0 database; } This example makes a level 1 backup of the database: run { alloc
ate channel ch1 type disk; backup incremental level = 1 database; } Further exam
ples: -----------------Your database has to be in archive log mode for this scri
pt to work RMAN> run { 2> # backup the database to disk 3> allocate channel d1 t
ype disk; 4> backup 5> full 6> tag full_db 7> format '/backups/db_%t_%s_p%p' 8>
(database); 9> release channel d1; 10> } ---This script will backup all archive
logs. Your database has to be in archive log mode for this script to work. RMAN>
run { 2> allocate channel d1 type disk; 3> backup 4> format '/backups/log_t%t_s
%s_p%p' 5> (archivelog all); 6> release channel d1; 7> }
---This script will backup all the datafiles. resync catalog; run { allocate cha
nnel c1 type disk; copy datafile 1 to 'C:\rman1.dbf'; copy datafile 2 to 'C:\rma
n2.dbf'; copy datafile 3 to 'C:\rman3.dbf'; copy datafile 4 to 'C:\rman4.dbf'; c
opy datafile 5 to 'C:\rman5.dbf'; } exit echo exiting after successful hot backu
p using RMAN ----run { sql 'alter database close'; allocate channel d1 type disk
; backup full tag full_offline_backup format 'c:\backup\db_t%t_s%s_p%p' (databas
e); release channel d1; sql 'alter database open'; } 5. Complete Examples: -----
---------------*************************************************************** L
=0 BACKUP run { allocate channel d1 type disk; backup incremental level = 0 tag
db_whole_l0 format 'i:\backup\l0_%d_t%t_s%s_p%p' (database); sql 'ALTER SYSTEM A
RCHIVE LOG CURRENT'; backup format 'i:\backup\log_%d_t%t_s%s_p%p' (archivelog al
l); } or run { allocate channel d1 type disk; allocate channel d2 type disk; bac
kup incremental level = 0 tag db_whole_l0 format 'i:\backup\l0_%d_t%t_s%s_p%p' (
database channel d1);
sql 'ALTER SYSTEM ARCHIVE LOG CURRENT'; backup format 'i:\backup\log_%d_t%t_s%s_
p%p' (archivelog all channel d2); } L=1 BACKUP run { allocate channel d1 type di
sk; backup incremental level = 1 tag db_whole_l1 format 'i:\backup\l1_%d_t%t_s%s
_p%p' (database); sql 'ALTER SYSTEM ARCHIVE LOG CURRENT'; backup format 'i:\back
up\log_%d_t%t_s%s_p%p' (archivelog all); } *************************************
**************************** RMAN>create script db_whole_l0 { # back up whole da
tabase and archived logs allocate channel d1 type disk; backup incremental level
0 tag db_whole_l0 filesperset 15 format 'i:\backup\l0_%d_t%t_s%s_p%p' -- name o
f the backup piece (database); sql 'ALTER SYSTEM ARCHIVE LOG CURRENT'; backup fi
lesperset 20 format 'i:\backup\log_%d_t%t_s%s_p%p' (archivelog all delete input)
; } RMAN>create script db_whole_l1 { # back up whole database and archived logs
allocate channel d1 type disk; backup incremental level 1 tag db_whole_l0 filesp
erset 15 format 'i:\backup\l1_%d_t%t_s%s_p%p' -- name of the backup piece (datab
ase); sql 'ALTER SYSTEM ARCHIVE LOG CURRENT'; backup filesperset 20 format 'i:\b
ackup\log_%d_t%t_s%s_p%p' (archivelog all delete input); } On sunday : schedule
RMAN>run { execute script db_whole_l0; } Other days: schedule RMAN>run { execute
script db_whole_l1; }
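On unix, such level 0 / level 1 runs are typically driven from cron, with each command file containing the corresponding run { execute script ...; } block. A sketch; the paths, times, ORACLE_HOME location and the use of OS authentication (target /) are assumptions:

# crontab of the oracle user
00 02 * * 0   /u01/app/oracle/product/8.1.7/bin/rman target / catalog rman/rman@rcdb cmdfile /home/oracle/scripts/db_whole_l0.rcv msglog /home/oracle/logs/db_whole_l0.log
00 02 * * 1-6 /u01/app/oracle/product/8.1.7/bin/rman target / catalog rman/rman@rcdb cmdfile /home/oracle/scripts/db_whole_l1.rcv msglog /home/oracle/logs/db_whole_l1.log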
********************************************** replace script backup_all_archive
s { execute script alloc_all_disks; backup filesperset 50 format '/bkup/SID/%d_a
l_t%t_s%s_p%p' (archivelog all delete input); execute script rel_all_disks; } #
Incremental level 0 (whole) database backup # The control file is automatically
included each time file 1 of the # system tablespace is backed up. # replace scr
ipt backup_db_level_0_disk { # execute script alloc_all_disks; # set maxcorrupt
for datafile 1 to 0; run { allocate channel c2 type disk; backup incremental lev
el = 0 tag backup_db_level_0 # The skip inaccessible clause ensures the backup w
ill continue # if any of the datafiles are inaccessible. skip inaccessible files
perset 9 format 'i:\backup\L0_%d.bck' (database); sql 'alter system archive log
current'; execute script backup_all_archives; } ********************************
***************************** -- SUNDAY LEVEL 0 BACKUP run { allocate channel d1
type disk; setlimit channel d1 kbytes 2097150 maxopenfiles 32 readrate 200; set
maxcorrupt for datafile 1,2,3,4,5,6 to 0; backup incremental level 0 cumulative
skip inaccessible tag sunday_level_0 format 'c:\temp\df_t%t_s%s_p%p' database;
copy current controlfile to 'c:\temp\sunday.ctl'; sql 'alter system archive log
current'; backup format 'c:\temp\al_t%t_s%s_p%p' archivelog all delete input; re
lease channel d1; } -- MONDAY LEVEL 2 BACKUP run { allocate channel d1 type disk
; setlimit channel d1 kbytes 2097150 maxopenfiles 32 readrate 200;
set maxcorrupt for datafile 1,2,3,4,5,6 to 0; backup incremental level 2 cumulat
ive skip inaccessible tag monday_level_2 format 'c:\temp\df_t%t_s%s_p%p' databas
e; copy current controlfile to 'c:\temp\monday.ctl'; sql 'alter system archive l
og current'; backup format 'c:\temp\al_t%t_s%s_p%p' archivelog all delete input;
release channel d1; } -- TUESDAY LEVEL 2 BACKUP run { allocate channel d1 type
disk; setlimit channel d1 kbytes 2097150 maxopenfiles 32 readrate 200; set maxco
rrupt for datafile 1,2,3,4,5,6 to 0; backup incremental level 2 cumulative skip
inaccessible tag tueday_level_2 format 'c:\temp\df_t%t_s%s_p%p' database; copy c
urrent controlfile to 'c:\temp\tuesday.ctl'; sql 'alter system archive log curre
nt'; backup format 'c:\temp\al_t%t_s%s_p%p' archivelog all delete input; release
channel d1; } -- WEDNESDAY LEVEL 2 BACKUP run { allocate channel d1 type disk;
setlimit channel d1 kbytes 2097150 maxopenfiles 32 readrate 200; set maxcorrupt
for datafile 1,2,3,4,5,6 to 0; backup incremental level 2 cumulative skip inacce
ssible tag wednesday_level_2 format 'c:\temp\df_t%t_s%s_p%p' database; copy curr
ent controlfile to 'c:\temp\wednesday.ctl'; sql 'alter system archive log curren
t'; backup format 'c:\temp\al_t%t_s%s_p%p' archivelog all delete input; release
channel d1; } -- THURSDAY LEVEL 1 BACKUP run { allocate channel d1 type disk; se
tlimit channel d1 kbytes 2097150 maxopenfiles 32 readrate 200; set maxcorrupt fo
r datafile 1,2,3,4,5,6 to 0;
backup incremental level 1 cumulative skip inaccessible tag thursday_level_1 for
mat 'c:\temp\df_t%t_s%s_p%p' database; copy current controlfile to 'c:\temp\thur
sday.ctl'; sql 'alter system archive log current'; backup format 'c:\temp\al_t%t
_s%s_p%p' archivelog all delete input; release channel d1; } -- FRIDAY LEVEL 2 B
ACKUP run { allocate channel d1 type disk; setlimit channel d1 kbytes 2097150 ma
xopenfiles 32 readrate 200; set maxcorrupt for datafile 1,2,3,4,5,6 to 0; backup
incremental level 2 cumulative skip inaccessible tag friday_level_2 format 'c:\
temp\df_t%t_s%s_p%p' database; copy current controlfile to 'c:\temp\friday.ctl';
sql 'alter system archive log current'; backup format 'c:\temp\al_t%t_s%s_p%p'
archivelog all delete input; release channel d1; } -- SATURDAY LEVEL 2 BACKUP ru
n { allocate channel d1 type disk; setlimit channel d1 kbytes 2097150 maxopenfil
es 32 readrate 200; set maxcorrupt for datafile 1,2,3,4,5,6 to 0; backup increme
ntal level 2 cumulative skip inaccessible tag saturday_level_2 format 'c:\temp\d
f_t%t_s%s_p%p' database; copy current controlfile to 'c:\temp\saturday.ctl'; sql
'alter system archive log current'; backup format 'c:\temp\al_t%t_s%s_p%p' arch
ivelog all delete input; release channel d1; } 6. Third Party: ---------------
You can use rman in combination with third party storage managers. In this case,
rman is used with a MML library and possibly some API that uses it's own config
uration files, for example: backup.scr script: run {
allocate channel t1 type 'sbt_tape' parms 'ENV=(TDPO_OPTFILE=c:\RMAN\scripts\tdp
o.opt)'; allocate channel t2 type 'sbt_tape' parms 'ENV=(TDPO_OPTFILE=c:\RMAN\sc
ripts\tdpo.opt)'; backup filesperset 5 format 'df_%t_%s_%p' (database); release
channel t1; release channel t2;
}
run { allocate channel d1 type 'sbt_tape' connect 'internal/manager@scdb2' parms
'ENV=(TDPO_OPTFILE=/usr/tivoli/tsm/client/oracle/bin64/tdpo.opt)'; allocate cha
nnel d2 type 'sbt_tape' connect 'internal/manager@scdb1' parms 'ENV=(TDPO_OPTFIL
E=/usr/tivoli/tsm/client/oracle/bin64/tdpo.opt)'; backup format 'ctl_t%t_s%s_p%p
' tag cf (current controlfile); backup full filesperset 8 format 'db_t%t_s%s_p%p
' tag fulldb (database); release channel d1; release channel d2; } The PARMS par
ameter sends instructions to the media manager. For example, the following vendo
r-specific PARMS setting instructs the media manager to back up to a volume pool
called oracle_tapes: PARMS='ENV=(NSR_DATA_VOLUME_POOL=oracle_tapes)' parms='ENV
=(DSMO_FS=oracle)' Another example: RUN { ALLOCATE CHANNEL c1 DEVICE TYPE sbt PA
RMS='ENV=(NSR_SERVER=tape_srv,NSR_GROUP=oracle_tapes)'; }
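A quick way to verify that the media management library can be loaded at all is to allocate and release an sbt channel without backing anything up. A sketch, reusing the Tivoli tdpo.opt path from the examples above:

RMAN> run {
  allocate channel t1 type 'sbt_tape'
    parms 'ENV=(TDPO_OPTFILE=/usr/tivoli/tsm/client/oracle/bin64/tdpo.opt)';
  release channel t1;
}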
If you do not receive an error message, then Oracle successfully loaded the sha
red library. However, channel allocation can fail with the ORA-27211 error: To d
elete an old backup: run {
allocate channel for delete type 'sbt_tape' parms 'ENV=(TDPO_OPTFILE=c:\RMAN\scr
ipts\tdpo.opt)'; change backupset primary_key delete;
}

To schedule scripts:
--------------------
orcschedppim.cmd

rem ==================================================
rem orcsched.cmd
rem ==================================================
rem
rem ==================================================
rem set rman executable
rem ==================================================
set ora_exe=d:\oracle\ora81\bin\rman
rem ==================================================
rem set script and log directory
rem ==================================================
set ora_script_dir=d:\oracle\scripts\
set ora_script_dir=c:\progra~1\tivoli\tsm\agentoba\
rem ==================================================
rem run the backup script
rem ==================================================
%ora_exe% target system/manager@ppim rcvcat rman_db1/rman_db1@orcl cmdfile %ora_
script_dir%bkdbppim.scr msglog %ora_script_dir%bkdbppim.log bkdbppim.scr run {
allocate channel t1 type 'sbt_tape' parms 'ENV=(TDPO_OPTFILE=C:\Progra~1\Tivoli\
TSM\AgentOBA\tdpoppim.opt)'; allocate channel t2 type 'sbt_tape' parms 'ENV=(TDP
O_OPTFILE=C:\Progra~1\Tivoli\TSM\AgentOBA\tdpoppim.opt)'; backup filesperset 5 f
ormat 'df_%t_%s_%p' (database); release channel t1;
release channel t2; } -----------------------------------Remarks: -------The fol
lowing is what needs to be changed. - Old Way allocate channel for maintenance t
ype 'sbt_tape' parms 'ENV=(DSMO_NODE=tora, DSMI_ORC_CONFIG=/opt/tivoli/tsm/clien
t/oracle/bin/dsm.opt)' allocate channel t1 type 'sbt_tape' parms > 'ENV=(DSMO_NO
DE=rx_r50, > DSMI_CONFIG=/usr/tivoli/tsm/client/ba/bin/dsm.opt, > DSMO_PSWDPATH=
/usr/tivoli/tsm/client/oracle/bin, > DSMI_DIR=/usr/tivoli/tsm/client/ba/bin, > D
SMO_AVG_SIZE#00)'; > - New Way allocate channel for maintenance type 'sbt_tape'
parms 'ENV=(TDPO_OPTFILE=/opt/tivoli/tsm/client/oracle/bin/tdpo.opt)' Contents o
f tdpo.opt:

DSMI_ORC_CONFIG    /opt/tivoli/tsm/client/oracle/bin/dsm.opt
DSMI_LOG           /opt/tivoli/tsm/client/oracle/bin/tdpoerror.log
TDPO_FS            rman_fs
TDPO_NODE          tora
*TDPO_OWNER
TDPO_PSWDPATH      /opt/tivoli/tsm/client/oracle/bin
*TDPO_DATE_FMT     1
*TDPO_NUM_FMT      1
*TDPO_TIME_FMT     1
*TDPO_MGMT_CLASS2  mgmtclass2
*TDPO_MGMT_CLASS3  mgmtclass3
*TDPO_MGMT_CLASS4  mgmtclass4
It is recommended that TDP_NUM_BUFFERS be set to a value of 1 only. 7. Recovery: -----
------A restore can be as easy as: RMAN> RESTORE DATABASE; RMAN> RECOVER DATABAS
E;
Or a single tablespace: Restore the tablespace or datafile with the RESTORE comm
and, and recover it with the RECOVER command. (Use configured channels, or if de
sired, use a RUN block and allocate channels to improve performance of the RESTO
RE and RECOVER commands.) RMAN> RESTORE TABLESPACE users; RMAN> RECOVER TABLESPA
CE users; If RMAN reported no errors during the recovery, then bring the tablesp
ace back online: RMAN> SQL 'ALTER TABLESPACE users ONLINE';
Use the RMAN restore command to restore datafiles, control files, or archived re
do logs from backup sets or image copies. RMAN restores backups from disk or tap
e, but image copies only from disk. Restore files to either: - The default locat
ion, which overwrites the files with the same name. - A new location specified b
y the set newname command. Restoring the Database to its Default Location ------
---------------------------------------If you do not specify set newname command
s for the datafiles during a restore job, the database must be closed or the dat
afiles must be offline. RMAN> run { allocate channel c1 type 'sbt_tape'; restore
database; recover database; } run { set until logseq 5 thread 1; allocate auxil
iary channel dupdb1 type disk; duplicate target database to dupdb;
} Examples Restoring the Database to a point in time (same incarnation) --------
------------------------------------------------------------Example 1: ---------
RMAN> run
2> {
3>   set until time '23-DEC-2006 13:45:00';
4>   restore database;
5>   recover database;
6> }
Example 2: ---------To recover the database until a specified time, SCN, or log
sequence number: After connecting to the target database and, optionally, the re
covery catalog database, ensure that the database is mounted. If the database is
open, shut it down and then mount it: SHUTDOWN IMMEDIATE; STARTUP MOUNT; Determ
ine the time, SCN, or log sequence that should end recovery. For example, if you
discover that a user accidentally dropped a tablespace at 9:02 a.m., then you c
an recover to 9 a.m. --just before the drop occurred. You will lose all changes
to the database made after that time. You can also examine the alert.log to find
the SCN of an event and recover to a prior SCN. Alternatively, you can determin
e the log sequence number that contains the recovery termination SCN, and then r
ecover through that log. For example, query V$LOG_HISTORY to view the logs that
you have archived.

RECID      STAMP      THREAD#    SEQUENCE#  FIRST_CHAN FIRST_TIM NEXT_CHANG
---------- ---------- ---------- ---------- ---------- --------- ----------
         1  344890611          1          1      20037 24-SEP-02      20043
         2  344890615          1          2      20043 24-SEP-02      20045
         3  344890618          1          3      20045 24-SEP-02      20046

Perform the following operations within a RUN comma
nd: Set the end recovery time, SCN, or log sequence. If specifying a time, then
use the date format specified in the NLS_LANG and NLS_DATE_FORMAT environment va
riables. If automatic channels are not configured, then manually allocate one or
more channels. Restore and recover the database. The following example performs
an incomplete recovery until November 15 at 9 a.m. RUN { SET UNTIL TIME 'Nov 15
2002 09:00:00'; # SET UNTIL SCN 1000; # alternatively, specify SCN # SET UNTIL
SEQUENCE 9923; # alternatively, specify log sequence number RESTORE DATABASE; RE
COVER DATABASE;
} If recovery was successful, then open the database and reset the online logs:
ALTER DATABASE OPEN RESETLOGS;
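Remember that, when a recovery catalog is used, the resetlogs creates a new incarnation that must be recorded in the catalog (see 24.9). A sketch:

$ rman target / catalog rman/rman@rcdb
RMAN> reset database;
RMAN> list incarnation;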
Moving the Target Database to a New Host with the Same File System -------------
----------------------------------------------------A media failure may force yo
u to move a database by restoring a backup from one host to another. You can per
form this procedure so long as you have a valid backup and a recovery catalog or
control file. Because your restored database will not have the online redo logs
of your production database, you will need to perform incomplete recovery up to
the lowest SCN of the most recently archived redo log in each thread and then o
pen the database with the RESETLOGS option. To restore the database from HOST_A
to HOST_B with a recovery catalog: Copy the initialization parameter file for HO
ST_A to HOST_B using an operating system utility. Connect to the HOST_B target i
nstance and HOST_A recovery catalog. For example, enter: % rman target sys/chang
e_on_install@host_b catalog rman/rman@rcat Start the instance without mounting i
t: startup nomount Restore and mount the control file. Execute a run command wit
h the following subcommands: Allocate at least one channel. Restore the control
file. Mount the control file. run { allocate channel ch1 type disk; restore cont
rolfile; alter database mount;
}
Because there may be multiple threads of redo, use change-based recovery. Obtain
the SCN for recovery termination by finding the lowest SCN among the most recen
t archived redo logs for each thread. Start SQL*Plus and use the following query
to determine the necessary SCN: SELECT min(scn)
FROM (SELECT max(next_change#) scn FROM v$archived_log GROUP BY thread#); Execut
e a run command with the following sub-commands: Set the SCN for recovery termin
ation using the value obtained from the previous step. Allocate at least one cha
nnel. Restore the database. Recover the database. Open the database with the RES
ETLOGS option. run { set until scn = 500; # use appropriate SCN for incomplete r
ecovery allocate channel ch1 type 'sbt_tape'; restore database; recover database
; alter database open resetlogs;
} Moving the Target Database to a New Host with a different File System --------
------------------------------------------------------------Follow the procedure
as above, but now use the 'set newname' command. run { set until scn 500; # use
appropriate SCN for incomplete recovery allocate channel ch1 type disk; set new
name for datafile 1 to '/disk1/%U'; # rename each datafile manually set newname
for datafile 2 to '/disk1/%U'; set newname for datafile 3 to '/disk1/%U'; set ne
wname for datafile 4 to '/disk1/%U'; set newname for datafile 5 to '/disk1/%U';
set newname for datafile 6 to '/disk2/%U'; set newname for datafile 7 to '/disk2
/%U'; set newname for datafile 8 to '/disk2/%U'; set newname for datafile 9 to '
/disk2/%U'; set newname for datafile 10 to '/disk2/%U'; alter database mount; re
store database; switch datafile all; # points the control file to the renamed da
tafiles recover database; alter database open resetlogs;
}
Warning: restore with use catalog: If you issue switch commands, RMAN considers
the restored database as the target database, and the recovery catalog becomes c
orrupted. If you do not issue switch commands, RMAN considers the restored dataf
iles as image copies that are candidates for future restore operations.
restore with no catalog: If you issue switch commands, RMAN considers the restor
ed database as the target database. If you do not issue switch commands, the res
tore operation has no effect on the repository. Restoring a tablespace: --------
--------------
Suppose tablespace DATA_BIG has become unusable.

run {
  allocate channel ch1 type disk;
  restore tablespace data_big;
}
run {
  allocate channel ch1 type disk;
  recover tablespace data_big;
}
This script will perform datafile recovery RMAN> run { 2> allocate channel d1 ty
pe disk; 3> sql "alter tablespace users offline immediate"; 4> restore datafile
5; 5> recover datafile 5; 6> sql "alter tablespace users online"; 7> release cha
nnel d1; 8> }
RMAN> run { allocate channel ch1 type disk; restore database; recover database;
alter database open resetlogs; }
Duplicating the Target Database to a New Host:
----------------------------------------------
- create instance on second host
- create init.ora, password file etc..
- create similar directories on second host
- make sure net8 works from target and rman to second host
- startup nomount
- necessary archived redologs are present on second host
$ rman target sys/target_pwd@target_str catalog rman/cat_pwd@cat_str auxiliary s
ys/aux_pwd@aux_str
run {
allocate auxiliary channel ch1 type 'sbt_tape'; duplicate target database to dup
db nofilenamecheck;
} run { # allocate at least one auxiliary channel of type disk or tape allocate
auxiliary channel dupdb1 type 'sbt_tape'; . . . # set new filenames for the data
files set newname for datafile 1 TO '$ORACLE_HOME/dbs/dupdb_data_01.f'; set newn
ame for datafile 2 TO '$ORACLE_HOME/dbs/dupdb_data_02.f'; . . . # issue the dupl
icate command duplicate target database to dupdb # create at least two online re
do log groups logfile group 1 ('$ORACLE_HOME/dbs/dupdb_log_1_1.f', '$ORACLE_HOME
/dbs/dupdb_log_1_2.f') size 200K, group 2 ('$ORACLE_HOME/dbs/dupdb_log_2_1.f', '
$ORACLE_HOME/dbs/dupdb_log_2_2.f') size 200K;
}
24.14 Common RMAN errors: ------------------------What are the common RMAN error
s (with solutions)? Some of the common RMAN errors are: PROBLEM 1. ---------RMAN
-20242: Specification does not match any archivelog in the recovery catalog. Add
to RMAN script: sql 'alter system archive log current'; PROBLEM 2. ---------RMA
N-06089: archived log xyz not found or out of sync with catalog Execute from RMA
N: change archivelog all validate; PROBLEM 3. ---------fact: Oracle Server - Ent
erprise Edition 8 fact: Oracle Server - Enterprise Edition 9 fact: Recovery Mana
ger (RMAN) symptom: RMAN backup fails symptom: RMAN-10035: exception raised in R
PC symptom: ORA-19505: failed to identify file <file> symptom: ORA-27037: unable
to obtain file status
symptom: SVR4 error:2:no such file or directory cause: Datafile existed in previ
ous backup set, but has been subsequently removed or renamed. fix: Resync the RM
AN Catalog $ rman target sys/<passwd>@target catalog rman/<passwd>@catalog RMAN>
resync catalog; Or Validate the backup pieces. $ rman target sys/<passwd>@targe
t catalog rman/<passwd>@catalog RMAN> allocate channel for maintenance type disk
; RMAN> crosscheck backup; RMAN> resync catalog; PROBLEM 4. ---------RMAN> conne
ct target sys/change_on_install@TARGETDB RMAN-00569: ================error messa
ge stack follows RMAN-04005: error from target database: ORA-01017: invalid user
name/password; logon denied Problem Explanation: Recovery Manager automatically
requests a connection to the target database as SYSDBA. Solution Description: Re
covery Manager automatically requests a connection to the target database as SYS
DBA. In order to connect to the target database as SYSDBA, you must either: 1. B
e part of the operating system DBA group with respect to the target database. Th
is means that you have the ability to CONNECT INTERNAL to the target database wit
hout a password. - or 2. Have a password file setup. This requires the use of th
e "orapwd" command and the initialization parameter "remote_login_passwordfile".
See Chapter 1 of the Oracle8(TM) Server Administrator's Guide, Release 8.0 for
details. Note that changes to the password file will not take effect until after
the database is shutdown and restarted. For Unix, also ensure TWO_TASK is _not_
set. e.g. % env | grep -i two If set, unset it. % unsetenv TWO_TASK PROBLEM 5.
--------RMAN cannot connect to the target database through a multi-threaded serv
er (MTS)
dispatcher: it requires a dedicated server process Create a net service name in
the tnsnames.ora file that connects to the non-shared SID. For example, enter: i
nst1_ded = (description= (address=(protocol=tcp)(host=inst1_host)(port=1521)) (co
nnect_data=(service_name=inst1)(server=dedicated)) ) $ rman target sys/oracle@in
st1_ded catalog rman/rman@rcat PROBLEM 6. --------No MML library found. RMAN will
: 1. Attempts to load the library indicated by the SBT_LIBRARY parameter in the
ALLOCATE CHANNEL or CONFIGURE CHANNEL command. If the SBT_LIBRARY parameter is n
ot specified, then Oracle proceeds to the next step. 2. Attempts to load the def
ault media management library. The filename of the default library is operating
system specific. On UNIX, the library filename is $ORACLE_HOME/lib/libobk.so, wi
th the extension name varying according to platform: .so, .sl, .a, and so forth.
On Windows NT the library is named %ORACLE_HOME%\bin\orasbt.dll. If Oracle is u
nable to locate the MML library,then RMAN issues an ORA-27211 error and exits. W
henever channel allocation fails, Oracle writes a trace file to the USER_DUMP_DE
ST directory. The following shows sample output: SKGFQ OSD: Error in function sb
tinit on line 2278 SKGFQ OSD: Look for SBT Trace messages in file /oracle/rdbms/
log/sbtio.log SBT Initialize failed for /oracle/lib/libobk.so
24.15 RMAN 10g Notes: ---------------------
========================== 25. UPGRADE AND MIGRATION: ==========================
25.1 Version and release numbers:
---------------------------------
Oracle 7      -> 8, 8i, 9i
Oracle 8      -> 8i
Oracle 8.1.x  -> 8.1.y
Oracle 8, 8i  -> 9i

Upgrade:   move upward from one release in the same version to a higher release within
           the same base version, for example 8.1.6 -> 8.1.7
Migration: move to a different version, for example 7.4.3 -> 8.1.5
Patches:   bugfixes
Patchset:  smaller patches combined to latest patchset

Example version: 8.1.6.2 -> 8=version, 1=release number, 6=maintenance release number, 2=patch number

Exp Imp matrix:
---------------
1.
Migration to Oracle9i release 1 - 9.0.1.x : ------------------------------------
------Direct migration with a full database export and full database import is o
nly supported if the source database is: - Oracle7 : 7.3.4 - Oracle8 : 8.0.6 - O
racle8i: 8.1.5 or 8.1.6 or 8.1.7 Migration to Oracle9i release 2 - 9.2.0.x : ---
---------------------------------------Direct migration with a full database exp
ort and full database import is only supported if the source database is: - Orac
le7 : 7.3.4 - Oracle8 : 8.0.6 - Oracle8i: 8.1.7 - Oracle9i: 9.0.1 Tools that can
be used to migrate from one version to another: -------------------------------
------------------------------- exp/imp - MIG Migration Utility - ODMA Oracle Da
ta Migration Assistant There also exists the "Migration Workbench" for migrating
Access, SQL Server etc.. to Oracle. 25.2 Migration From 7 to 8,8i: ------------
-----------------Take into account the following:
- Changed standard directories of init, alert, dump
- Changed and obsolete init.ora parameters
- Changed and obsolete sqlnet.ora, tnsnames.ora and listener.ora parameters
- Rowid values have changed from "restricted" to "extended" format
Obsolete init.ora parameters:
  init_sql_files
  lm_domains
  lm_non_fault_tolerant
  parallel_default_max_scans
  parallel_default_scansize
  sequence_cache_hash_buckets
  serializable
  session_cached_cursors
  v733_plans_enabled

Changed init.ora parameters:
  compatible
  snapshot_refresh_interval -> job_queue_interval
  snapshot_refresh_process  -> job_queue_processes
  db_writers                -> dbwr_io_slaves
  user_dump_dest, background_dump_dest, ifile

Three main tools:
- exp/imp OWNER= or FULL exp/imp. In case
of a full exp/imp you must run catalog.sql of new database - Migration utility
This is a command line utility. From 7 to 8 or higher: the Rowid will not be cha
nged automatically. Migration utility will create a "conversion file" instance_n
ame.dbf Move this file to the /dbs directory of Oracle 8,9. Startup svrmgrl or s
qlplus alter database convert; alter database open resetlogs; - ODMA This tool u
ses a GUI. 25.2 Example Upgrade of 8.1.6 to 9 using ODMA: ----------------------
-----------------------1. Install the Oracle 9i software in it's own ORACLE_HOME
. 2. Prepare the original init.ora DB_DOMAIN=correct domain JOB_QUEUE_PROCESS=0
AQ_TM_PROCESSES=0 REMOTE_LOGIN_PASSWORDFILE=NONE 3. Resize the SYSTEM tablespace
to have more than 100M free 4. Prepare the system rollbacksegment to be big eno
ugh alter rollback segment system storage(maxextents 505 optimal null next 1M);
5. Verify that SYSTEM is the default tablespace for SYS and SYSTEM
6. Make sure there is no user MIGRATE. ODMA will use a user called MIGRATE.
7. Shutdown the database cleanly.
8. Make a backup
9. Setup the environment variables for the 9i software. Also, ODMA uses the java
GUI, just like the OUI 10. Start ODMA $ cd $ORACLE_HOME/bin $ odma 11. Basicall
y, follow the instructions. ODMA will ask you the instance that must be upgraded
. On unix, this is read from the oratab file. Then it will ask you to confirm bo
th the old and new ORACLE_HOME. It will also ask for the location of the init.or
a file. Then it will proceed in the upgrade. The upgrade is primarily about the
datadictionary. 12. When ODMA is ready, do the following: check the alert log an
d other logs Also check oratab, optionally run utlrp.sql to automatically rebuil
d any invalid objects. Check for invalid objects and check indexes. Analyze all
tables plus indexes. 25.3 Example Upgrade of 8.1.6 to 8.1.7: -------------------
-------------------1. Install the new Oracle software in a different $ORACLE_HOM
E For example $ cd $ORACLE_BASE $ cd product $ ls 8.1.6 8.1.7 Backup and shutdow
n the 8.1.6 database, and stop the listener 2. Set the correct env variables for
8.1.7 3. Create a softlink in the new $ORACLE_HOME/dbs to the init.ora in the $
ORACLE_BASE/admin/sid/pfile directory Startup the database with new Oracle relea
se 4. Startup the database using the new Oracle software sqlplus internal (or vi
a svrmgrl) startup restrict; 5. Run the upgrade script $ORACLE_HOME/rdbms/admin/
u0801060.sql This will also rebuild the datadictionary (catalog, catproc) 6. You
optionally run utlrp.sql to automatically rebuild any invalid objects 7. Change
on unix oratab for new $ORACLE_HOME 8. Change listener.ora for $ORACLE_HOME val
ue 9. Set COMPATIBLE in init.ora
10. Checks: check the alert log and other logs Also check oratab, optionally run
utlrp.sql to automatically rebuild any invalid objects. Check for invalid objec
ts and check indexes. Analyze all tables plus indexes. ===================== 26.
Some info on Rdb: ===================== Rdb is most often seen on Digital unix,
or OpenVMS VAX, or OpenVMS alpha, but there exists a port to NT / 2000 as well.
Samples directory: ------------------ digital unix: /usr/lib/dbs/vnn/examples -
OpenVMS: SQL$EXAMPLE In digital unix, to create a sample database: $/usr/lib/dbs
/sql/vnn/examples/personnel <database-form> <dir> <database-form>: S, M, MSDB <d
ir>: enter a directory where you want the database to be created. $/usr/lib/dbs/
sql/vnn/examples/personnel m /tmp/ Invoking SQL: -------------- In OpenVMS. Crea
te a symbol $ SQL:==$SQL$ $ SQL SQL> - In digital unix: $ SQL SQL> Attach to dat
abase: ------------------SQL>ATTACH 'FILENAME mf_personnel'; SQL>ATTACH 'FILENAM
E DISK$1:[GERALDO.DB]SUPPLIES MULTISCHEMA IS OFF' Detach from database: --------
------------SQL>exit $ or
SQL>DISCONNECT DEFAULT; SQL> Editing a SQL Statement: -----------------------SQL
>EDIT ... EXIT OpenVMS: Defining a Logical name for a database: ----------------
-------------------------------$ DEFINE SQL$DATABASE DISK01:[FIELDMAN.DBS]mf_per
sonnel You do not need to attach to the database anymore. Digital unix: Defining
a configuration parameter: ------------------------------------------------$ SQ
L_DATABASE /usr/fieldman/dbs/mf_personnel

SHOW Statements:
----------------
SQL> SHOW TABLES                    -- shows all tables
SQL> SHOW TABLE *
SQL> SHOW ALL TABLES
SQL> SHOW TABLE WORK_STATUS         -- displays info about table WORK_STATUS
SQL> SHOW VIEWS                     -- shows all views
SQL> SHOW VIEW CURRENT_SALARY       -- shows info about this view only
SQL> SHOW DOMAINS                   -- display all domains
SQL> SHOW DOMAIN DATE_DOM
SQL> SHOW INDEXES
SQL> SHOW INDEXES ON SALARY_HISTORY
SQL> SHOW INDEX DEG_EMP_ID
SQL> SHOW DATABASE                  -- returns the database name
SQL> SHOW STORAGE AREAS
Single file or multifile database: ---------------------------------A database t
hat stores tables in one file (file type .rdb) is a single file database. Altern
ately, you can have a database in which system information is stored in a databa
se root file (.rdb) and the data and metadata are stored in one or more storage
area files (type .rda). Single file: - a database root file which contains all u
ser data and information about the status of all database operations.
- a snapshot file (.snp file) which contains copies of rows (before images) that
are being modified by users updating the database. Multifile: - a database roo
t file which contains information about the status of all database operations. -
a storage area file, .rda file, for the system tables (RDB$SYSTEM) - one or mor
e .rda files for user data. - snapshot files for each .rda file and for the data
base root file. Create multifile database example: -----------------------------
----$ SQL SQL> CREATE DATABASE FILENAME mf_personnel_test cont> ALIAS MF_PERS co
nt> RESERVE 6 JOURNALS cont> RESERVE 15 STORAGE AREAS cont> DEFAULT STORAGE AREA
default_area cont> SYSTEM INDEX COMPRESSION IS ENABLED cont> CREATE STORAGE ARE
A default_area FILENAME default_area cont> CREATE STORAGE AREA RDB$SYSTEM FILENA
ME pers_system_area;

Datatypes:
----------
Rdb                                                Oracle
-------------------------------------------------  -----------------------
CHAR                                               CHAR, NCHAR
VARCHAR                                            VARCHAR2, NVARCHAR2
SMALLINT (16 bits)                                 NUMBER(L,P)
INTEGER (32 bits), can be used with a
  scale factor, INTEGER(2), BIGINT (64 bits)       NUMBER(L,P)
VARYING                                            RAW, LONG, LONG RAW
DATE ANSI (year, month, day), TIME, INTERVAL,
TIMESTAMP (year, month, day, hours, min, sec),
DATE VMS                                           DATE

ODBC for RDB:
--------------------------
The current driver version is 3.00.02.05 which doesn't work, and the older driver version (which does work) is 2.10.17.00 (DriverConf1 outputs attached).
--------------
I am trying to run a DTS job to import data from an Oracle 7.3 RDB (DEC) platform into SQL Server 2000.
I have an odbc connection set up and I am using it in MS Access 2000 to view the
table that I want to import. When I create the job in SQL Server, I can preview
the data and everything looks fine, as in the Access table, but when I try and
run the job I get an: [ORACLE][ODBC]Function Sequence Error error message. Any e
xperience with these type of errors and RDB. Thanks, John Campbell This can - I
understand - occur where the version of the ODBC drivers on the NT box with SQL
Server running is incompatible with the services running on the VMS box. I can't
remember the various numbers I'm afraid (or even where I found the stuff it was
some time ago). We're running VMS 7.2-1 and Oracle 7.3 and found that this prod
uced a similar error with the most recent version of the Oracle ODBC Drivers for
RdB - but we have no problems running the v2.10 drivers (v2.10.17 to be exact).
HTH
--------------
The ODBC driver for RDB uses SQSAPI32.ini

JInitiator:
-----------
Oracle has adapted this standard specifically for running Webforms. These adaptations relate to stability (bugfixes) and performance improvements, such as JAR file caching, incremental JAR file loading and applet caching. With JInitiator, Oracle Forms can be run in a browser (Webforms). JInitiator is not a JVM, but an extension of the JVM standard, with which Oracle Webforms can be run in a browser in a stable and supported way. JInitiator is only available for the Windows platform. At this moment it is not possible to run Webforms in the standard Microsoft JVM. JInitiator will not return in the next release. Webforms will be certified on the standard Java Plugin. The Microsoft JVM also conforms to this standard (without certification), so that in time Webforms should be able to run in a standard Microsoft Internet Explorer browser. This can only be stated with certainty after thorough testing, however.

Installing JInitiator: JInitiator is downloaded automatically from the Application Server on first use.
JInitiator can also be installed manually on the client machines.
============================
27. Some info on IFS
============================
F
irst some remarks about IFS in versions 9.0.2 and 9.0.3: 9.0.2 ===== In version
9.0.2, IFS (Internet File System) is a separate product. 9.0.3 ===== In version
9.0.3, CM SDK runs in conjunction with Oracle9i Application Server and an Oracle
9i database. The Oracle Content Management SDK (Oracle CM SDK) is the new name f
or the product formerly known as the Oracle Internet File System (Oracle 9iFS).
This new naming is official as of version 9.0.3. Oracle CM SDK runs in conjuncti
on with Oracle9i Application Server and an Oracle9i database. Written entirely i
n Java, Oracle CM SDK is an extensible content management system with file serve
r convenience. 27.1 IFS 9.0.2 --------------------------We first will turn our a
ttention to iFS 9.0.2: ---------------------------------------------The Oracle 9
i database stores all content that comprises the filesystem, from the files them
selves to metadata like owners and group information. On most occasions, 9iFS st
ores the files contents as LOB's in the database. Tools: ------ Oracle 9iFS Conf
iguration Assistant. Allows you to create a new 9iFS Domain, and add nodes etc..
- Oracle 9IFS Credential Manager Configuration Assistant. To change the default
credential manager to be applied to each user. - OEM for 9iAS website (9iAS Hom
e Page) You can manage 9iFS from the 9iAS OEM website. - OEM console (Oracle Ent
erprise Manager) You can manage 9iFS from the OEM console.
- Oracle 9iFS Manager Graphical java based interface on iFS. - Webinterface iFS
manager - Command line utilities ifsshell etc.. - Import/Export utility The Impo
rt/Export utility exports Oracle 9iFS objects (content and users) into an export
file. Domain: ------9iFS is organized in a Domain concept, with an administrati
ve Domain controller and possibly other nodes as members in the Domain. Reposito
ry: ----------All data managed by 9iFS resides in an 9i database schema, called
the 9iFS repository. You specify the database instance and schemaname during ins
tallation of 9iFS.

Commands:
---------
Stop iFS:
  Oracle Internet File System 1.1.x      : ORACLE_HOME\ifs1.1\bin\ifsstop.bat
  Oracle 9iFS 9.0.1 (and higher)         : ORACLE_HOME\9ifs\bin\ifsstopdomain.bat

Start the iFS OC4J instance (Windows NT or 2K):
  > ifsstartoc4j.bat

Start up the iFS domain controller process (Windows NT or 2K):
  > ifslaunchdc.bat

Start the iFS node processes (Windows NT or 2K):
  > ifslaunchnode.bat

Activate the iFS domain controller and nodes (Windows NT or 2K):
  > ifsstartdomain.bat

Here is a script example to run on Windows NT or 2K:

StartIfs902.bat
===============
D:\ora902\9ifs\bin\ifsstartoc4j.bat
start D:\ora902\9ifs\bin\ifslaunchdc.bat
start D:\ora902\9ifs\bin\ifslaunchdomain.bat
D:\ora902\9ifs\bin\ifsstartdomain -s myifshost:53140 ifssys
echo "iFS 902 started"
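A matching stop sketch (not in the original notes), using only the ifsstopdomain.bat command mentioned above; stopping the supporting OC4J instance, if required, would be a separate step:

StopIfs902.bat
==============
rem stops the iFS domain on this host (assumes the same Oracle home as above)
D:\ora902\9ifs\bin\ifsstopdomain.bat
echo "iFS 902 stopped"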
d" - Home: Oracle CM SDK must be installed in the Oracle9i Application Server, R
elease 2 home. Make sure to select the file location carefully; once installed,
the Oracle CM SDK software cannot be moved without deinstalling and reinstalling
. Oracle 9iFS requires an Oracle 9.0.2 home, which means you must install and co
nfigure Oracle9i Application Server, Release 2 in an Oracle home separate from t
hat of the database. The Oracle home can be on the same machine (resources allow
ing), or on a different machine. - Install with Oracle Universal Installer. Inst
allation and configuration of Oracle 9iFS starts from the Oracle Universal Insta
ller, the graphical user interface wizard that copies all necessary software to
the Oracle home on the target machine. The Oracle 9iFS Configuration tool launch
es automatically at the end of the Oracle Universal Installer process and guides
you through the process of identifying the Oracle database to be used for the O
racle Internet File System schema; selecting the type of authentication to use (
native Oracle 9iFS credential manager or Oracle Internet Directory for credentia
l management); and various other configuration tasks. The specific configuration
tasks vary, depending on the type of deployment (new Oracle 9iFS domain vs. add
itional Oracle 9iFS nodes, for example) - Starting install wizard again: ORACLE_
HOME\ifs\cmsdk\bin\ifsca.bat - connect to database: The Oracle CM SDK Configurat
ion Assistant attempts to make a connection as SYS AS SYSDBA using a database st
ring, and therefore needs the database to be configured with a password file.
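If the database does not yet have a password file, a minimal sketch for creating one on Windows (the file location, SID and password below are placeholders):

  D:\> orapwd file=D:\oracle\ora92\database\PWDiasdb.ora password=change_me entries=5

Also make sure remote_login_passwordfile=EXCLUSIVE is set in the init.ora/spfile, then restart the instance.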
- Directory service: Select either CMSDK Directory Service or Oracle Internet Dire
ctory Service for user authentication. The default Oracle Internet Directory sup
er user name/password is cn=orcladmin/welcome1.
The default Oracle Internet Directory root Oracle context is set to cn=OracleCon
text. - Launch Internet File System Manager from a Web browser: http://hostname.
mycompany.com:7778/cmsdk/admin Access paths and directory structure: -----------
-------------------------- Oracle FileSync Client Software: In addition to using
the networking protocols or client applications native to the Windows operating
system, Windows users can install and use Oracle FileSync to keep local directo
ries on a desktop machine and folders in Oracle CM SDK synchronized. Double-clic
k Setup.exe to run the installation program, or run O:\ifs\clients\filesync\setu
p.exe from the Windows Start...Run Menu. - CUP (Command-line Utilities Protocol)
Client The Oracle Command-line Utilities Protocol server enables administrators
and developers to perform a variety of tasks quickly and easily from a Windows
command-line or a UNIX shell. copy /ifs/clients/cmdline/win32 to a local directo
ry.
============================ 28. Some info on 9iAS rel. 2 ======================
====== 28.1 General Information: ========================= Oracle9i Application
Server (Oracle9iAS) is a part of the Oracle9i platform, a complete and integrate
d e-business platform. Oracle9i platform consists of: - Oracle9i Developer Suite
for developing applications - Oracle9i Application Server for deploying Interne
t applications - Oracle9i Database Server for storing content 9iAS is not just a
webserver. A webserver is only part of the 9iAS system. 9iAS offers OC4J (Oracl
e Containers for J2EE), portals, webserver and webcache, and BusinessIntelligenc
e and other components.
OC4J: ----The "core" of the AS (thus the application part), is the OC4J architec
ture. The OC4J infrastructure supports EJB, JSP and Servlet applications. Develo
pers can write J2EE applications, like EJB, Servlet and JSP applications, that w
ill run on 9iAS. OC4J itself is written in Java and runs on a Java virtual machi
ne. BusinessIntelligence: --------------------A set of services and client appli
cations that make reports and all types of analysis possible. For example, the '
Oracle Reports service' , an application in the middle tier, uses a queue for su
bmitted client requests. These request might create reports of a Datawarehouse i
n a Customer database etc...
28.1.1 Components:
------------------
There are 3 install types:
-J2EE and Web Cache
-Portal and Wireless
-BusinessIntelligence and Forms

Note: The Oracle 9iAS 9.0.2 Concepts and the 9iAS Install guides mention 3 install types, but the Admin guide Rel. 9.0.2 mentions 4 install types. The fourth additional one is "Unified Messaging". This enables you to integrate different types of messages into a single framework. It includes all of the components available in the Business Intelligence and Forms install type.

Component                        J2EE and Web Cache   Portal and Wireless   BusinessInt. and Forms
Oracle9iAS Web Cache             YES                  YES                   YES
Oracle HTTP Server               YES                  YES                   YES
Oracle9iAS Container for J2EE    YES                  YES                   YES
Oracle EM Web site               YES                  YES                   YES
Oracle9iAS Portal                no                   YES                   YES
Oracle9iAS Wireless              no                   YES                   YES
Oracle9iAS Discoverer            no                   no                    YES
Oracle9iAS Reports Services      no                   no                    YES
Oracle9iAS Clickstream Int.      no                   no                    YES
Oracle9iAS Forms Services        no                   no                    YES
Oracle9iAS Personalization       no                   no                    YES
28.1.2. Need of Oracle9iAS Infrastructure: -------------------------------------
----Prior to installing an instance of the "Portal and Wireless" or "Business In
telligence and Forms" install type, you must install and configure the Oracle9iA
S Infrastructure somewhere in your network, optimally on a separate computer. Th
e J2EE and Web Cache install type does not require Oracle9iAS Infrastructure. Yo
u can install single or multiple instances of Oracle9iAS install types, J2EE and
Web Cache, Portal and Wireless, and Business Intelligence and Forms, on the sam
e host, which is not a very realistic scenario. Multiple instances of different
Oracle9iAS install types, can use one instance of Oracle9iAS Infrastructure, an
d this could be a realistic scenario.
28.1.3. Metadata Repository in the Infrastructure: -----------------------------
--------------------The Oracle9iAS Infrastructure installation consists of: - Or
acle9iAS Metadata Repository: Pre-seeded database containing metadata needed to
run Oracle9iAS instances. - Oracle Internet Directory OID: Directory service tha
t enables sharing information about dispersed users and network resources. Oracl
e Internet Directory implements LDAP v3. - Oracle9iAS Single Sign-On SSO: Create
s an enterprise-wide user authentication to access multiple accounts and Oracle9
iAS applications. - Oracle Management Server OMS: Processes system management ta
sks and administers the distribution of these tasks across the network using the
Oracle Enterprise Manager Console. The Console and its three-tier architecture
can be used with the Oracle Enterprise Manager Web site to manage not only Oracl
e9iAS, but your entire Oracle environment. - J2EE and Web Cache: For internal us
e with Oracle9iAS Infrastructure. Not used for component application deployment.
Application server installations and their components use an infrastructure in
the following ways: -- Components and applications use the Single Sign-on servic
e provided by Oracle9iAS Single Sign-On.
-- Application server installations and components store configuration informati
on and user and group privileges in Oracle Internet Directory. -- Components use
schemas that reside in the metadata repository. SSO is required for "Portal and
Wireless" and "Business Intelligence and Forms" install types. Also required fo
r application server clustering with J2EE and Web Cache install type. 28.1.4. Cu
stomer database: -------------------------This could be any database on any Host
, containing business data. But, The following components require a customer dat
abase: Oracle9iAS Discoverer Oracle9iAS Personalization Oracle9iAS Unified Messa
ging If you configure any of these components during installation, their setup a
nd configuration will not be complete at the end of installation. You need to ta
ke additional steps to install and tune a customer database, load schemas into t
he database, and finish configuring the component to use the customer database.
28.1.5. Oracle Home: -------------------Oracle home is the directory in which Or
acle software is installed. Different Oracle versions always get their own Oracl
e Homes. Multiple instances of Oracle9iAS install types (J2EE and Web Cache, Bus
iness Intelligence and Forms, and Portal and Wireless) must be installed in sepa
rate Oracle homes on the same computer. You must install Oracle9iAS Infrastructu
re in its own Oracle home directory, preferably on a separate host. The Oracle9i
AS installation cannot exist in the same Oracle home as the Oracle9iAS Infrastru
cture installation. 28.1.6. Oracle9iAS Infrastructure Port Usage: --------------
------------------------------!! Oracle9iAS Infrastructure requires exclusive us
e of port 1521 Installation of Oracle9iAS Infrastructure requires exclusive use
of port 1521 on
your computer. If one of your current system applications uses this port, then c
omplete one of the following actions before installing Oracle9iAS Infrastructure
: If you have an existing application using port 1521, then reconfigure the exis
ting application to use another port. If you have an existing Oracle Net listene
r and an Oracle9i database, then proceed with the installation of Oracle9iAS Inf
rastructure. Your Oracle9iAS Infrastructure will use the existing Oracle Net lis
tener. If you have an existing Net8 listener in use by an Oracle8i database, the
n you must upgrade to the Oracle9i Net listener version by installing Oracle9iAS
Infrastructure. 28.1.6. Using the Oracle Enterprise Manager Console: ----------
-----------------------------------------The Oracle Enterprise Manager console p
rovides a wider view of your Oracle environment, beyond Oracle9iAS. Use the Cons
ole to automatically discover and manage databases, application servers, and Ora
cle applications across your entire network. The Console and its related compone
nts are installed with the Oracle Management Server as part of the Oracle9iAS In
frastructure installation option. The Console is part of the Oracle Management S
erver component of the Oracle9iAS Infrastructure. The Management Server, the Con
sole, and Oracle Agent are installed on the Oracle9iAS Infrastructure host, alon
g with the other infrastructure components. 28.1.7. Starting and Stopping the Or
acle Management Server on Windows: ---------------------------------------------
------------------------On Windows systems, use the Services control panel to st
art and stop the management server. The name of the service is in the following
format: OracleORACLE_HOMEManagementServer For example: OracleOraHome902Managemen
tServer 28.1.8. OEM Website: -------------------You can verify the Enterprise Ma
nager Web site is started by pointing your browser to the Web site URL. For exam
ple: http://hostname:1810

get console : http://hostname:1810
get welcome : http://hostname:7777
              http://127.0.0.1:1810
To start or stop the Enterprise Manager Web site on Windows, use the Services co
ntrol panel. The name of the service is in the following format: OracleORACLE_HO
MEEMwebsite Or Start the Enterprise Manager Web site (UNIX) ORACLE_HOME/bin/emct
l start (Windows) ORACLE_HOME\bin\emctl start Stop the Enterprise Manager Web si
te:
emctl stop

Example Services:
  Oracleias902Discoverer
  Oracleias902ProcessManager
  Oracleias902WebCache
  Oracleias902WebCacheAdmin
  Oracleinfra902Agent                    = Agent for Management Server
  Oracleinfra902EMWebsite                = Enterprise Manager Web site
  Oracleinfra902InternetDirectory_iasdb
  Oracleinfra902ManagementServer         = OEM Management Server
  Oracleinfra902ProcessManager
  OracleOraHome901TNSListener            = just the Listener
  OracleServiceIASDB                     = infra structure db
  OracleServiceO901                      = regular customer db

Note for Oracle 10g RDBMS EM DB console:
========================================
Sites:
------
Enterprise Manager Database Control URL - (dbname):
  http://hostname:1158/em
  http://127.0.0.1:1810
  http://127.0.0.1:1158

The iSQL*Plus URL is:     http://localhost:5561/isqlplus
The iSQL*Plus DBA URL is: http://localhost:5561/isqlplus/dba

emctl prompt tool:
------------------
C:\ora10g\product\10.2.0\db_1\NETWORK\ADMIN>emctl status dbconsole
Oracle Enterprise Manager 10g Database Control Release 10.2.0.1.0 Copyright (c)
1996, 2005 Oracle Corporation. All rights reserved. http://xpwsora:1158/em/conso
le/aboutApplication Oracle Enterprise Manager 10g is running. Logs are generated
in directory C:\ora10g\product\10.2.0\db_1/xpwsora_SPLCONF/sysman/log Services:
--------C:\ora10g\product\10.2.0\db_1\NETWORK\ADMIN>net start | find "Ora" Orac
leDBConsolesplconf OracleOraDb10g_home1iSQL*Plus OracleOraDb10g_home1TNSListener
OracleServiceSPLCONF C:\ora10g\product\10.2.0\db_1\NETWORK\ADMIN>
28.1.9. emctl tool : for controlling EM website:
------------------------------------------------
Enterprise manager homepage http://hostname:1810 can only be accessed if the EM website is running.

Usage:
  emctl start|stop|status
  emctl reload | upload
  emctl set credentials [<Target_name>[:<Target_Type>]]
  emctl gencertrequest
  emctl installcert [-ca|-cert] <certificate base64 text file>
  emctl set ssl test|on|off|password [<old password> <new password>]
  emctl set password <old password> <new password>
  emctl authenticate <pwd>
  emctl switch home [-silent <new_home>]
  emctl config <options>

  emctl start                      : Start the Enterprise Manager Web site.
  emctl stop                       : Stop the Enterprise Manager Web site (requires the ias_admin password).
  emctl status                     : Verify the status of the Enterprise Manager Web site.
  emctl set password new_password  : Reset the ias_admin password.
  emctl authenticate password      : Verify that the supplied password is the ias_admin password.

  emctl config options can be listed by typing "emctl config"

emctl status:
  C:\temp>emctl status
  EMD is up and running : 200 OK

28.1.10. OEMCTL tool: for controlling Management Server:
---------------------------------------------------------
EM control
D:\temp>oemctl
"Syntax: OEMCTL START OMS "
"        OEMCTL STOP OMS <EM Username>/<EM Password>"
"        OEMCTL STATUS OMS <EM Username>/<EM Password>[@<OMS-HostName>]"
"        OEMCTL PING OMS "
"        OEMCTL START PAGING [BootHost Name] "
"        OEMCTL STOP PAGING [BootHost Name] "
"        OEMCTL ENABLE EVENTHANDLER"
"        OEMCTL DISABLE EVENTHANDLER"
"        OEMCTL EXPORT EVENTHANDLER <filename>"
"        OEMCTL IMPORT EVENTHANDLER <filename>"
"        OEMCTL DUMP EVENTHANDLER"
"        OEMCTL IMPORT REGISTRY <filename> <Rep Username>/<Rep Password>@<RepAlias>"
"        OEMCTL EXPORT REGISTRY <Rep Username>/<Rep Password>@<RepAlias>"
"        OEMCTL CONFIGURE RWS"
28.1.11. The Intelligent Agent:
-------------------------------
The Oracle Intelligent Agent is installed whenever y
ou install Oracle9iAS on a host computer. For example, if you select the J2EE an
d Web Cache installation type, the Oracle Universal Installer installs Oracle En
terprise Manager Web site and the Oracle Intelligent Agent, along with the J2EE
and Web Cache software. This means the Intelligent Agent software is always avai
lable if you decide to use the Console and the Management Server to manage your
Oracle9iAS environment. The Console and Management Server are installed as part
of the Oracle9iAS Infrastructure. In most cases, you install the Infrastructure
on a dedicated host that can be used to centrally manage multiple application se
rver instances. The Infrastructure includes Oracle Internet Directory, Single Si
gn-On, the metadata repository, the Intelligent Agent, and Oracle Management Ser
ver. You only need to run the Intelligent Agent if you are using Oracle Manageme
nt Server in your enterprise. In order for Oracle Management Server to detect ap
plication server installations on a host, you must make sure the Intelligent Age
nt is started. Note that one Intelligent Agent is started per host and must be s
tarted after every system boot. 28.1.12. AGENTCTL: for controlling the Intellige
nt Agent: --------------------------------------------------------(UNIX) You can
run the following commands in the Oracle home of the primary installation (the
first installation on the host) to get status and start the Intelligent Agent:
ORACLE_HOME/bin/agentctl status agent ORACLE_HOME/bin/agentctl start agent (Wind
ows) You can check the status and start the Intelligent Agent using the Services
control panel. The name of the service is in the following format: OracleORACLE
_HOMEAgent (the executable is agntsrvc.exe) start the Intelligent Agent in the O
racle home of the primary installation: ORACLE_HOME/bin/agentctl start agent 28.
1.13. Backup and Restore: ---------------------------To ensure that you can make
a full recovery from media failures, you should perform regular backups of the
following: Application Server and Infrastructure Oracle Homes Oracle Internet Di
rectory Metadata Repository Customer Databases
You should perform regular backups of all files in the Oracle home of each appli
cation server and infrastructure installation in your enterprise using your pref
erred method of filesystem backup. Oracle Internet Directory offers command-line
tools for backing up and restoring the Oracle Internet Directory schema and sub
tree. The metadata repository is an Oracle9i Enterprise Edition Database that yo
u can back up and restore using several different tools and operating system com
mands. The customer databases can be backed up using any standard method, the sa
me way you would do for any other 9iEE database.
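As a sketch, the metadata repository (default SID iasdb) can be backed up with RMAN like any other 9i database; this assumes the instance runs in archivelog mode and the default (controlfile-based) RMAN configuration is acceptable:

  C:\> rman target sys/password@iasdb
  RMAN> backup database;
  RMAN> backup archivelog all;
  RMAN> list backup;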
Applications: ============= 28.2 Report services: --------------------Client con
tacts the Report Server:
- Web, through a URL
- Non-web, via rwclient
- requests go to a jobqueue
-users with webbrowser: the HTTP Server must be running, and you use the reports serv
let, a JSP, or CGI components on 9iAS The reports server must be running. - defa
ult it is an inprocess server httpd -> mod_oc4j {reports servlet} -> Reports Ser
ver - CGI httpd -> CGI -> Reports Server - starting from URL: http://machine:por
t/reports/rwservlet commandline: rwserver server=machinename - The servlet is pa
rt of the OC4J instance: OC4J_BI_FORMS - its possible to make it a service of it
s own: rwserver -install autostart=yes/no - verify the Reports Servlet and Serve
r Are Running: http://missrv/rwservlet/help (show help page with rwservlet comma
nd line arguments) http://machine:port/reports/rwservlet/showjobs?server=server_
name (show a listing of the jobqueue) IP:7778/reports/rwservlet/showenv http://<
hostname>:<port>/reports/rwservlet/getserverinfo? http://<hostname>:<port>/repor
ts/rwservlet/getserverinfo?authid=orcladmin/<passw ord of ias_admin> http://mach
inename/servlet/RWServlet/showmap?server=Rep60_servername - stopping Reports Ser
ver: commandline: rwserver server=machinename shutdown=normal/immediate authid=a
dmin/password Enterprise Manager: stop Reports Server The reports servlet uses t
he PORT parameter configured in the httpd.conf reports_user/welcome1 ias_admin/w
elcome1 orcladmin /welcome1 Reports Servlet url em username em password reports
store
: http://missrv:7778/reports/rwservlet : reports_user : welcome1 : d:\reports (c
hange in registry, key is REPORTS_PATH)
- Reports Server configuration files: ORACLE_HOME\reports\conf\server_name.conf
ORACLE_HOME\reports\dtd\rwserverconf.dtd ORACLE_HOME\reports\conf\rwbuilder.conf
ORACLE_HOME\reports\conf\rwservlet.properties

- Check the miskm property files in:
  $9ias_home\j2ee\OC4J_iFS_cmsdk\applications\brugpaneel\FrontOffice\WEBINF\classes.
  The following files are involved:
  misIfs.properties : parameters of the iFS interface/front office.
  miskm.properties  : parameters of the MIS Front Office applications
  XSQLConfig.xml    : XSQL parameters, must point to the mis_owner schema.
  JDBC is also used. The settings of this connection are in the file:
  $9ias_home\j2ee\OC4J_iFS_cmsdk\applications\brugpaneel\META-INF\data-sources.xml

miskm.properties:
----------------
# miskm.reports parameters are used in
order to display reports that are built # using Oracle Reports. # The action of
the hidden form. #miskm.reports.action=http://dgas40.mindef.nl/reports/rwservlet
miskm.reports.action=http://missrv.miskm.mindef.nl:7778/reports/rwservlet # The
schemaname/schemapassword@tns_names entry where the data is stored. #miskm.repo
rts.connectstring=mis_owner/mis_owner@miskm_demo miskm.reports.connectstring=mis
_owner/mis_owner@miskm_dev # The name of the Reports Server (after default insta
llation: rep_missrv) #miskm.reports.repserver=rep_dgas40 miskm.reports.repserver
=rep_missrv # The location where the output is placed on the server. miskm.repor
ts.destype=cache # The output of the the generated report (e.g html, pdf, etc.)
#miskm.reports.desformat=pdf miskm.reports.desformat=rtf&mimetype=application/ms
word # The reports server is a partner application, therefore a sso username/pas
sword # is required. miskm.reports.ssoauthid=reports_user/welcome1 - Reports Ser
ver configuration files: ORACLE_HOME\reports\conf\server_name.conf ORACLE_HOME\r
eports\dtd\rwserverconf.dtd ORACLE_HOME\reports\conf\rwbuilder.conf
ORACLE_HOME\reports\conf\rwservlet.properties (inprocess or standalone)

reports_server_name.conf
cgicmd.dat
jdbcpds.conf
proxyinfo.xml
rwbuilder.conf
rwserver.template
rwservlet.properties
textpds.conf
xmlpds.conf
in ORACLE_HOME/reports/conf

Reports Servlet 9i
Reports are built with the Reports Builder and must be stored in a directory on the application server (the default is d:\reports). To let the Reports Servlet know where all reports are stored, the Windows registry key REPORTS_PATH must be extended with the directory where the reports are stored. The servlet is part of the OC4J instance OC4J_BI_FORMS, so to use it, this instance must be started. The servlet uses Oracle SSO, and therefore an SSO user must be created that is able to use the servlet:
1. Go to http://missrv.miskm.mindef.nl:7777/oiddas
2. Log in as the portal user (default portal/welcome1)
3. Create a new user, for example: reports_user.
4. Grant this user the privilege "Allow resource management for Oracle Reports and Forms".
5. Verify that this user matches the key miskm.reports.ssoauthid in the file miskm.properties
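As an illustrative sketch (the report name and parameters are placeholders, not from the original notes), a report stored under REPORTS_PATH can then be requested through the servlet with a URL like:

  http://missrv.miskm.mindef.nl:7778/reports/rwservlet?server=rep_missrv&report=myreport.rdf&userid=mis_owner/mis_owner@miskm_dev&destype=cache&desformat=pdf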
28.3 Internet Directory and Single Sign-On: ------------------------------------
------Oracle Internet Directory, an LDAP directory, provides a single repository
and administration for user accounts. Oracle9iAS Single Sign-On enables users t
o login to Oracle9iAS and gain access to those applications for which they are a
uthorized, without requiring them to re-enter a user name and password for each
application. It is fully integrated with Oracle Internet Directory, which stores
user information. It supports LDAP-based user and password management through O
ID. Oracle Internet Directory is installed as part of the Oracle9iAS Infrastruct
ure installation. Oracle9iAS Single Sign-On is installed as part of the Oracle9i
AS Infrastructure installation.
SSO is Portal's authentication engine. In 9iAS all applications may use SSO. Wit
hout a functioning SSO, users will not be able to logon and use SSO. The first t
est following a failure to authenticate is to login directly using SSO: http://s
ervername:port/pls/orasso Examples: Single Sign-On Server : oasdocs.us.oracle.co
m:7777 Internet Directory : oasdocs.us.oracle.com:389 Infrastructure database :
iasdb.oasdocs.us.oracle.com missrv.miskm.mindef.nl:1521:iasdb In a start script,
you may find commands like the following to start the OID server: %INFRA_BIN%\o
idmon start %INFRA_BIN%\oidctl server=oidldapd instance=1 start In a stop script
, you may notice the following commands to stop the OID server: %INFRA_BIN%\oidc
tl server=oidldapd instance=1 stop %INFRA_BIN%\oidmon stop When oidctl is execut
ed, it connects to the database as user ODSCOMMON and simply inserts/updates row
s into a table ODS.ODS_PROCESS depending on the options used in the command. A r
ow is inserted if the START option is used, and updated if the STOP or RESTART o
ption is used. So there are no processes started at this point, and LDAP server
is not started. Both the listener/dispatcher process and server process are call
ed oidldapd on unix, and oidldapd.exe on NT. Oidmon is also a process (called oi
dmon on unix, oidmon.exe/oidservice.exe on windows). To control the processes (s
ervers) we need to have OID Monitor (oidmon) running. This monitor is often call
ed daemon or guardian process as well. When oidmon is running, it periodically c
onnects to the database and reads the ODS.ODS_PROCESS table in order to start/st
op/restart related processes. NOTE: Because the only task oidctl has is to inser
t / update table ODS.ODS_PROCESS in the database, it's obvious that the database
and listener have to be fully accessible when oidctl is used. Also, oidmon conn
ects periodically to the database. So the database and listener must be
accessible for oidmon to connect.

28.4 Example and default values:
--------------------------------
Information                                        Example Values                  Your Information
Oracle home location                               D:\ora9ias
Instance Name                                      instance1
ias_admin Password                                 welcome1
Single Sign-On Server HostName/server              oasdocs.us.oracle.com
Single Sign-On Port Number                         7777
Internet Directory Hostname/server                 oasdocs.us.oracle.com
Internet Directory Port Number                     389 / 4032
Internet Directory Username
  (the Oracle administrator)                       orcladmin, cn=orcladmin
Internet Directory Password                        welcome1
9iAS Metadata Repository                           oasdocs.us.oracle.com
9iAS Reports Services                              oasdocs.us.oracle.com
Outgoing Mail Server
http Server                                        oasdocs.us.oracle.com:7777
Metadata database connection string                oasdocs.us.oracle.com:1521:iasdb:iasdb.oasdocs.us.oracle.com

Oracle Universal Installer creates a file showing the port assignments during installation of Oracle9iAS components. This file is ORACLE_HOME\install\portlist.ini
It contains entries like the following default values:

Oracle HTTP Server port            = 7777
Oracle HTTP Server SSL port        = 4443
Oracle HTTP Server listen port     = 7778
Oracle HTTP Server SSL listen port = 4444
Oracle HTTP Server Jserv port      = 8007
Enterprise Manager Servlet port    = 1810
The ID username and password are defined in Oracle Internet Directory as either
the: - orcladmin (root user) - a user who is member of the IASAdmins group in Or
acle Internet Directory The SSO schema is now 'ORASSO' and the ORASSO user is re
gistered with OID after an infra install. The default user is 'orcladmin' with a
login of your ias_admin password. EM Website: http://<hostname.domain>:<port> (
port 1810 assigned by default) You will login using the 'ias_admin' username and
the password you entered during the Infrastructure installation.
SSO Login Page: http://<hostname.domain>:<port>/pls/orasso You will login using
the 'orcladmin' username and the password for the 'ias_admin'. The port will be
the HTTP Server port of your Infrastructure, (port 7777 by default) http://missr
v.miskm.mindef.nl:7777/pls/orasso OID_DAS Page: http://<hostname.domain>:<port>/
oiddas You will login using the 'orcladmin' username and the password for the 'i
as_admin'. The port will be the HTTP Server port of your Infrastructure, (port 7
777 by default). The OC4J_DAS component must be UP for this test to succeed.
28.5 Management tools: ---------------------28.5.1. OEM Website: ---------------
---You can access the Welcome Page by pointing your browser to the HTTP Server U
RL for your installation. For example, the default HTTP Server URL is: http://ho
stname:7777 This page offer many options to explore features of 9iAS. You can al
so go directly to the Oracle Enterprise Manager Web site using the following ins
tructions: http://hostname:1810 http://
Enterprise manager homepage http://hostname:1810 can only be accessed if the EM website
is running. This corresponds to a service like "Oracleinfra902EMWebsite". T
he username for the administrator user is ias_admin. The password is defined dur
ing the installation of Oracle9iAS. The default password is welcome1. Depending
upon the options you have installed, the Administration section of the Oracle9iA
S Instance Home Page provides additional features that allow you to perform the
following tasks: -Associate the current instance with an existing Oracle9iAS Inf
rastructure. -Configure additional Oracle9iAS components that have been installe
d, but not configured -Change the password or default schema for a component Sta
rt or stop on NT/W2K: To start or stop the Enterprise Manager Web site on Window
s, use the Services
control panel. The name of the service is in the following format: OracleORACLE_
HOMEEMwebsite For example, if the name of the Oracle Home is OraHome902, the ser
vice name is: OracleOraHome902EMWebsite You can also use net start OracleOraHome
902EMWebsite net stop OracleOraHome902EMWebsite Start or stop on UNIX: Start the
Enterprise Manager Web site: emctl start
Stop the Enterprise Manager Web site: emctl stop Or use the kill command if it d
oes not respond Changing the ias_admin Password: 1. Using Oracle Enterprise Mana
ger Web Site: Navigate to the Instance Home Page. Select Preferences in the top
right corner. This displays the Change Password Page. Enter the new password and
new password confirmation. Click OK. This resets the ias_admin password for all
application server installations on the host. Restart the Oracle Enterprise Man
ager Web site. 2. Using the emctl Command-Line Tool: To change the ias_admin use
r password using a command-line tool: Enter the following command in the Oracle
home of the primary installation (the first installation on the host): (UNIX) OR
ACLE_HOME/bin/emctl set password new_password (Windows) ORACLE_HOME\bin\emctl se
t password new_password For example: (UNIX) ORACLE_HOME/bin/emctl set password m
5b8r5 (Windows) ORACLE_HOME\bin\emctl set password m5b8r5 Restart the Enterprise
Manager Web site. The Enterprise Manager Web site relies on various technologie
s to discover, monitor, and administer the Oracle9iAS environment. These technol
ogies include: - Oracle Dynamic Monitoring Service (DMS) The Enterprise Manager
Web site uses DMS to gather performance data about your
Oracle9iAS components. - Oracle HTTP Server and Oracle Containers for J2EE (OC4J
) the Enterprise Manager Web site also uses HTTP Server and OC4J to deploy its m
anagement components. - Oracle Process Management Notification (OPMN) OPMN manag
es Oracle HTTP Server and OC4J processes within an application server instance.
It channels all events from different component instances to all components inte
rested in receiving them. - Distributed Configuration Management (DCM) This will
be used with clusters or farms. DCM manages configurations among application se
rver instances that are associated with a common Infrastructure (members of an O
racle9iAS farm). It enables Oracle9iAS cluster-wide deployment so you can deploy
an application to an entire cluster, or make a single host or instance configur
ation change applicable across all instances in a cluster. 28.5.2 OEM Console: -
-----------------The console is a non Web, Java tool, and part of the 3-tier OMS
architecture. See also section 28.1. The Oracle Enterprise Manager console prov
ides a wider view of your Oracle environment, beyond Oracle9iAS. Use the Console
to automatically discover and manage databases, application servers, and Oracle
applications across your entire network. The Console and its related components
are installed with the Oracle Management Server as part of the Oracle9iAS Infra
structure installation option. The Console is part of the Oracle Management Serv
er component of the Oracle9iAS Infrastructure. The Management Server, the Consol
e, and Oracle Agent are installed on the Oracle9iAS Infrastructure host, along w
ith the other infrastructure components. The Console offers advanced management
features, such as an Event system to notify administrators of changes in your en
vironment and a Job system to automate standard and repetitive tasks, such as ex
ecuting a SQL script or executing an operating system command. The Console and M
anagement Server are installed as part of the Oracle9iAS Infrastructure. Use the
OEMCTL commandline tool for controlling OMS. See section 28.1.10.
29. Starting and stopping 9iAS and components:
============================================== 29.1 Starting a simple Webcache/J
2EE installation: -------------------------------------------------Start the Ent
erprise Manager Web site. Even though you are not using the Web site, this ensur
es that the processes to support the dcmctl command-line tool are started. To st
art the Web site, execute the following command in the Oracle home of the primar
y installation on your host: (UNIX) ORACLE_HOME/bin/emctl start (Windows) ORACLE
_HOME\bin\emctl start Start Oracle HTTP Server and OC4J (the rest of the command
s in this section should be executed in the Oracle home of the J2EE and Web Cach
e instance): (UNIX) ORACLE_HOME/dcm/bin/dcmctl start (Windows) ORACLE_HOME\dcm\b
in\dcmctl start If Web Cache is configured, start Web Cache: (UNIX) ORACLE_HOME/
bin/webcachectl start (Windows) ORACLE_HOME\bin\webcachectl start
29.2 Starting and stopping Advanced 9iAS installations --------------------------
--------------------------Start/Stop Enterprise: ---------------------Starting a
n Application Server Enterprise: The order in which to start the pieces of an ap
plication server enterprise is as follows: 1. Start the infrastructure. If your
enterprise contains more than one infrastructure, start the primary infrastructu
re first. 2. Start customer databases. If your enterprise contains customer data
bases, you can start them using several methods, including SQL*Plus and Oracle E
nterprise Manager Console. Remember that iFS could also be installed into the cu
stomer database. 3. Start application server instances. You can start applicatio
n server instances in any order. If instances are part of a cluster, start them
as part of starting the cluster.
The order in which to stop the pieces of an application server enterprise is as
follows: 1. Stop application server instances. You can stop application server i
nstances in any order. If instances are part of a cluster, stop them as part of
stopping the cluster. 2. Stop customer databases. If your enterprise contains cu
stomer databases, you can stop them using several methods, including SQL*Plus an
d Oracle Enterprise Manager Console. 3. Stop the infrastructure. If your enterpr
ise contains more than one infrastructure, stop the primary infrastructure last.
Start/Stop Instance: -------------------Start: First you have started the infra
structure instance, and customer database instance. 1. Preliminary: - Enterprise
Manager Web Site (Required): The first step before starting an application serv
er instance is to ensure that the Enterprise Manager Web site is running on the
host. The Web site provides underlying processes required to run an application
server instance and must be running even if you intend to use command-line tools
to start your instance. There is one Enterprise Manager Web site per host. It r
esides in the primary installation (or first installation) on that host. The pri
mary installation can be an application server installation or an infrastructure
. This Web site usually listens on port 1810 and provides services to all applic
ation server instances and infrastructures on that host. To verify the status of
the Enterprise Manager Web site, run the following command in the Oracle home o
f the primary installation: (UNIX) ORACLE_HOME/bin/emctl status (Windows) ORACLE
_HOME\bin\emctl status To start the Enterprise Manager Web site, run the followi
ng command in the Oracle home of the primary installation: (UNIX) ORACLE_HOME/bi
n/emctl start (Windows) ORACLE_HOME\bin\emctl start Or on NT/W2K: net start Orac
leORACLE_HOMEEMwebsite
- Intelligent Agent (Optional) You only need to run the Intelligent Agent if you
are using Oracle Management Server in your enterprise. In order for Oracle Mana
gement Server to detect application server installations on a host, you must mak
e sure the Intelligent Agent is started. Note that one Intelligent Agent is star
ted per host and must be started after every system boot. (UNIX) You can run the
following commands in the Oracle home of the primary installation (the first in
stallation on the host) to get status and start the Intelligent Agent: ORACLE_HO
ME/bin/agentctl status agent ORACLE_HOME/bin/agentctl start agent (Windows) You
can check the status and start the Intelligent Agent using the Services control
panel. The name of the service is in the following format: OracleORACLE_HOMEAgen
t 2. Start the instance using OEM Website: You can start, stop, and restart all
types of application server instances using the Instance Home Page on the Enterp
rise Manager Web site. Or... 3. Start the 'J2EE and Web Cache' instance using co
mmands: Start OEM Website: ORACLE_HOME\bin\emctl start OracleORACLE_HOMEEMwebsit
e or net start
Start Oracle HTTP Server and OC4J: ORACLE_HOME\dcm\bin\dcmctl start If Web Cache
is configured, start Web Cache: ORACLE_HOME\bin\webcachectl start 4. Stop the '
J2EE and Web Cache' instance using commands: ORACLE_HOME\bin\webcachectl stop OR
ACLE_HOME\dcm\bin\dcmctl stop Start/Stop components: ---------------------You ca
n start, stop, and restart individual components using the Instance Home Page or
the component home page on the Enterprise Manager Web site. You can also start
and stop some components using command-line tools.
Oracle HTTP Server Start: ORACLE_HOME\dcm\bin\dcmctl start -ct ohs Stop : ORACLE
_HOME\dcm\bin\dcmctl stop -ct ohs Individual OC4J Instances Start: ORACLE_HOME\d
cm\bin\dcmctl start -co instance_name Stop : ORACLE_HOME\dcm\bin\dcmctl stop -co
instance_name All OC4J Instances Start: ORACLE_HOME\dcm\bin\dcmctl start -ct oc
4j Stop : ORACLE_HOME\dcm\bin\dcmctl stop -ct oc4j Web Cache Start: ORACLE_HOME\
bin\webcachectl start Stop : ORACLE_HOME\bin\webcachectl stop Reports Start: Sto
p : ORACLE_HOME\bin\rwserver server=name ORACLE_HOME\bin\rwserver server=name sh
utdown=yes
You cannot start or stop some components. The radio buttons in the Select column
on the Instance Home Page are disabled for these components, and their componen
t home pages do not have Start, Stop, or Restart buttons. Start/Stop the Infrast
ructure:
-----------------------------
No matter which procedure you use, starting an infrastructure involves performing the following steps in order:

Start the Metadata Repository = infrastructure database
Start OID, Oracle Internet Directory
Start the Enterprise Manager Web site
Start OHS, Oracle HTTP Server
Start the OC4J_DAS instance
Start Web Cache (optional)
Start Oracle Management Server and Intelligent Agent (optional)

No matter which procedure you use, stopping an infrastructure involves performing the following steps in order:

Stop all middle-tier application server instances that use the infrastructure
Stop Oracle Management Server and Intelligent Agent (optional)
Stop Web Cache (optional)
Stop OC4J instances
Stop Oracle HTTP Server
Stop Oracle Internet Directory
Stop the Metadata Repository
The next section describes how to start an infrastructure using command-line too
ls on Windows. Except where noted, all commands should be run in the Oracle home
of the infrastructure.
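As a rough consolidation of the steps that follow, a Windows batch sketch; the service name, SID and the helper script are assumptions based on the defaults used in these notes:

  rem start_infra.cmd -- consolidated sketch; adjust names for your installation
  set ORACLE_SID=iasdb
  lsnrctl start
  rem start_iasdb.sql is a small hypothetical helper: connect sys/password as sysdba; startup; exit
  sqlplus /nolog @start_iasdb.sql
  oidmon start
  oidctl server=oidldapd configset=0 instance=1 start
  net start Oracleinfra902EMWebsite
  dcmctl start -ct ohs
  dcmctl start -co OC4J_DAS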
-- ---------------------------------------------------------------------Start th
e metadata repository listener: ORACLE_HOME\bin\lsnrctl start -Set the ORACLE_SI
D environment variable to the metadata repository system identifier (default is
iasdb). You can set the ORACLE_SID system variable using the System Properties c
ontrol panel. -Start the metadata repository instance using SQL*Plus: ORACLE_HOM
E\bin\sqlplus /nolog sql> connect sys/password_for_sys as sysdba sql> startup sq
l> quit -- ---------------------------------------------------------------------
Start Oracle Internet Directory. Make sure the ORACLE_SID is set to the metadat
a repository system identifier (refer to previous step). Start the Oracle Intern
et Directory monitor: ORACLE_HOME\bin\oidmon start -Start the Oracle Internet Di
rectory server: ORACLE_HOME\bin\oidctl server=oidldapd configset=0 instance=n st
art where n is any instance number (1, 2, 3...) that is not in use. For example:
ORACLE_HOME\bin\oidctl server=oidldapd configset=0 instance=1 start -- --------
------------------------------------------------------------- Start the Enterpri
se Manager Web site. Even though you are using command-line, the Web site is req
uired because it provides underlying support for the command-line tools. The Web
site must be started after every system boot. You can check the status and star
t the Enterprise Manager Web site using the Services control panel. The name of
the service is in the following format: OracleORACLE_HOMEEMwebsite You can also
start the service using the following command line: net start WEB_SITE_SERVICE_N
AME -- ---------------------------------------------------------------------Star
t Oracle HTTP Server. ORACLE_HOME\dcm\bin\dcmctl start -ct ohs Note that startin
g Oracle HTTP Server also makes Oracle9iAS Single Sign-On
available. -- ------------------------------------------------------------------
--- Start the OC4J_DAS instance. ORACLE_HOME\dcm\bin\dcmctl start -co OC4J_DAS N
ote that the infrastructure instance contains other OC4J instances, such as OC4J
_home and OC4J_Demos, but these do not need to be started; their services are no
t required and incur unnecessary overhead. -- ----------------------------------
-----------------------------------Start Web Cache (optional). Web Cache is not
configured in the infrastructure by default, but if you have configured it, star
t it as follows: ORACLE_HOME\bin\webcachectl start -- --------------------------
------------------------------------------- Start Oracle Management Server and I
ntelligent Agent (optional). Perform these steps only if you have configured Ora
cle Management Server. Start Oracle Management Server: ORACLE_HOME\bin\oemctl st
art oms -- --------------------------------------------------------------------S
tart the Intelligent Agent. In order for Oracle Management Server to detect the
infrastructure and any other application server installations on this host, you
must make sure the Intelligent Agent is started. Note that one Intelligent Agent
is started per host and must be started after every reboot. You can check the s
tatus and start the Intelligent Agent using the Services control panel. The name
of the service is in the following format: OracleORACLE_HOMEAgent 30. Creating
a Database Access Descriptor (DAD) for mod_plsql: ------------------------------
--------------------------------Oracle HTTP Server contains the mod_plsql module
, which provides support for building PL/SQL-based applications on the Web. PL/SQ
L stored procedures retrieve data from a database and generate HTTP responses co
ntaining data and code to display in a Web browser. In order to use mod_plsql yo
u must install the PL/SQL Web Toolkit into a database and create a Database Acce
ss Descriptor (DAD) which provides mod_plsql with connection information for the
database.
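A minimal dads.conf entry sketch for illustration (the DAD name, schema, password and connect string are placeholders, not from the original notes):

  <Location /pls/mydad>
    SetHandler pls_handler
    Order deny,allow
    Allow from all
    PlsqlDatabaseUsername       scott
    PlsqlDatabasePassword       tiger
    PlsqlDatabaseConnectString  myhost:1521:orcl
    PlsqlAuthenticationMode     Basic
    PlsqlDefaultPage            scott.home
  </Location>

When dads.conf is edited by hand, the dcmctl updateConfig notification described in the next section typically still applies.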
31. Configuring HTTP Server, OC4J, and Web Cache: ------------------------------
-------------------You can use the OEM website in order to configure components
as HTTP Server, OC4J, and Web Cache, or you can manually edit configuration file
s. If you edit Oracle HTTP Server or OC4J configuration files manually, instead
of using the Enterprise Manager Web site, you must use the DCM command-line util
ity dcmctl to notify the DCM repository of the changes. Otherwise, your changes
will not go into effect and will not be reflected in the Enterprise Manager Web
site. Note that the dcmctl tool is located in: UNIX) ORACLE_HOME/dcm/bin/dcmctl
(Windows) ORACLE_HOME\dcm\bin\dcmctl

To notify DCM of changes made to:            Use this command:
Oracle HTTP Server configuration files       dcmctl updateConfig -ct ohs
OC4J configuration files                     dcmctl updateConfig -ct oc4j
All configuration files                      dcmctl updateConfig

- HTTP Server: You can configure Oracle HTTP Server using the
Oracle HTTP Server Home Page on the Oracle Enterprise Manager Web site. You can
perform tasks such as modifying directives, changing log properties, specifying
a port for a listener, modifying the document root directory, managing client re
quests, and editing server configuration files. You can access the Oracle HTTP S
erver Home Page in the Name column of the System Components table on the Instanc
e Home Page. - OC4J: You can configure Oracle9iAS Containers for J2EE (OC4J) usi
ng the Enterprise Manager Web site. You can use the Instance Home Page to create
and delete OC4J instances, each of which has its own OC4J Home Page. You can us
e each individual OC4J Home Page to configure the corresponding OC4J instance an
d its deployed applications. Creating an OC4J Instance. Every application server
instance has a default OC4J instance named OC4J_home. You can create additional
instances, each with a unique name, within an application server instance.
To create a new OC4J instance: - Navigate to the Instance Home Page on the Oracl
e Enterprise Manager Web site. Scroll to the System Components section. - Click
Create OC4J Instance. This opens the Create OC4J Instance Page. - In the Create
OC4J Instance Page, type a unique instance name in the OC4J instance name field.
Click Create. - A new OC4J instance is created with the name you provided. - Th
is OC4J instance shows up on the Instance Home Page in the System Components sec
tion. - The instance is initially in a stopped state and can be started any time
after creation. Each OC4J instance has its own OC4J Home Page which allows you
to configure global services and deploy applications to that instance.
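A command-line alternative sketch using dcmctl (the instance name OC4J_myapp is a placeholder):

  ORACLE_HOME\dcm\bin\dcmctl createComponent -ct oc4j -co OC4J_myapp
  ORACLE_HOME\dcm\bin\dcmctl start -co OC4J_myapp
  ORACLE_HOME\dcm\bin\dcmctl getState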
32. 9iAS CONFIG FILES:
------------------------------------------------------------------
32.1 9iAS Rel. 2 most obvious config files:
--------------------------------------------
Oracle HTTP Server:
-------------------
httpd.conf oracle_apache.conf access.conf magic mime.types mod_oc4j.conf srm.conf   in ORACLE_HOME/Apache/Apache/conf

JServ:
------
jserv.conf jserv.properties zone.properties   in ORACLE_HOME/Apache/Jserv/etc

mod_oradav:
-----------
moddav.conf   in ORACLE_HOME/Apache/oradav/conf

mod_plsql:
----------
cache.conf dads.conf   in ORACLE_HOME/Apache/modplsql/conf

Oracle9iAS Web Cache:
---------------------
internal.xml internal_admin.xml webcache.xml   in ORACLE_HOME/webcache

Oracle9iAS Reports Services:
----------------------------
reports_server_name.conf cgicmd.dat jdbcpds.conf proxyinfo.xml rwbuilder.conf rwserver.template rwservlet.properties textpds.conf xmlpds.conf   in ORACLE_HOME/reports/conf

Oracle9iAS Discoverer:
----------------------
configuration.xml   in ORACLE_HOME/j2ee/OC4J_BI_Forms/applications/discoverer/web/WEB-INF/lib
viewer_config.xml   in ORACLE_HOME/j2ee/OC4J_BI_Forms/applications/discoverer/web/viewer_files
plus_config.xml     in ORACLE_HOME/j2ee/OC4J_BI_Forms/applications/discoverer/web/plus_files
portal_config.xml   in ORACLE_HOME/j2ee/OC4J_BI_Forms/applications/discoverer/web/portal
pref.txt            in ORACLE_HOME/discoverer902/util
.reg_key.dc         in ORACLE_HOME/discoverer902/bin/.reg
--------------------------------------------32.2 9iAS Rel. 2 list of all .conf f
iles: --------------------------------------------Now as an example, follows a l
isting of all .conf configuration files of a real 9iAS Server. -- --------------
------------------------------------------------------ BEGIN LISTING FROM A REA
L LIFE 9iAS rel. 9.0.2 Server: -- ----------------------------------------------
(File sizes in bytes.)

Directory of D:\ORACLE\ias902\Apache\Apache\conf:
  access.conf (293), httpd.conf (46,178), mod_oc4j.conf (3,342), mod_osso.conf (517),
  oracle_apache.conf (811), srm.conf (305), wireless_sso.conf (551)              -- 51,997 bytes
Directory of D:\ORACLE\ias902\Apache\Apache\conf\osso:            osso.conf (433)
Directory of D:\ORACLE\ias902\Apache\Jserv\conf:                  jserv.conf (10,745)
Directory of D:\ORACLE\ias902\Apache\jsp\conf:                    ojsp.conf (594)
Directory of D:\ORACLE\ias902\Apache\modplsql\conf:               cache.conf (840), dads.conf (2,122), plsql.conf (1,598)
Directory of D:\ORACLE\ias902\Apache\oradav\conf:                 moddav.conf (785), oradav.conf (396)
Directory of D:\ORACLE\ias902\click\conf:                         click-apache.conf (427)
Directory of D:\ORACLE\ias902\click\conf\templates:               click-apache.conf (445)
Directory of D:\ORACLE\ias902\dcm\config:                         dcm.conf (186)
Directory of D:\ORACLE\ias902\dcm\config\plugins\apache:          httpd.conf (43,623)
Directory of D:\ORACLE\ias902\dcm\repository.install\dcm\config:  dcm.conf (185)
Directory of D:\ORACLE\ias902\forms90\server:                     forms90.conf (2,997)
Directory of D:\ORACLE\ias902\ldap\das:                           oiddas.conf (165)
Directory of D:\ORACLE\ias902\opmn\conf:                          ons.conf (45)
Directory of D:\ORACLE\ias902\portal\conf:                        portal.conf (1,407)
Directory of D:\ORACLE\ias902\RDBMS\demo:                         aqxml.conf (482)
Directory of D:\ORACLE\ias902\reports\conf:
  Copy (2) of rep_vbas99.conf (3,386), jdbcpds.conf (7,421), rep_vbas99.conf (3,386),
  textpds.conf (6,381), xmlpds.conf (454)                                        -- 21,028 bytes
Directory of D:\ORACLE\ias902\ultrasearch\webapp\config:          ultrasearch.conf (320)
Directory of D:\ORACLE\ias902\xdk\admin:                          xml.conf (294)

Directory of D:\ORACLE\infra902\Apache\Apache\conf:
  access.conf (293), httpd.conf (46,224), mod_oc4j.conf (1,500), mod_osso.conf (519),
  oracle_apache.conf (747), srm.conf (305)                                       -- 49,588 bytes
Directory of D:\ORACLE\infra902\Apache\Apache\conf\osso:          osso.conf (433)
Directory of D:\ORACLE\infra902\Apache\Jserv\conf:                jserv.conf (10,763)
Directory of D:\ORACLE\infra902\Apache\jsp\conf:                  ojsp.conf (598)
Directory of D:\ORACLE\infra902\Apache\modplsql\conf:             cache.conf (842), dads.conf (1,485), plsql.conf (1,606)
Directory of D:\ORACLE\infra902\Apache\oradav\conf:               moddav.conf (789), oradav.conf (2)
Directory of D:\ORACLE\infra902\dcm\config:                       dcm.conf (188)
Directory of D:\ORACLE\infra902\dcm\config\plugins\apache:        httpd.conf (43,623)
Directory of D:\ORACLE\infra902\dcm\repository.install\dcm\config: dcm.conf (187)
Directory of D:\ORACLE\infra902\ldap\das:                         oiddas.conf (165)
Directory of D:\ORACLE\infra902\oem_webstage:                     oem.conf (943)
Directory of D:\ORACLE\infra902\opmn\conf:                        ons.conf (45)
Directory of D:\ORACLE\infra902\RDBMS\demo:                       aqxml.conf (477)
Directory of D:\ORACLE\infra902\sqlplus\admin:                    isqlplus.conf (1,454)
Directory of D:\ORACLE\infra902\sso\conf:                         sso_apache.conf (154)
Directory of D:\ORACLE\infra902\ultrasearch\webapp\config:        ultrasearch.conf (324)
Directory of D:\ORACLE\infra902\xdk\admin:                        xml.conf (291)

Directory of D:\ORACLE\ora901\Apache\Apache\conf:
  access.conf (285), httpd.conf (43,205), oracle_apache.conf (472), srm.conf (297)  -- 44,259 bytes
Directory of D:\ORACLE\ora901\Apache\Jserv\conf:                  jserv.conf (6,710)
Directory of D:\ORACLE\ora901\Apache\jsp\conf:                    ojsp.conf (511)
Directory of D:\ORACLE\ora901\Apache\modose\conf:                 ose.conf (637)
Directory of D:\ORACLE\ora901\Apache\modplsql\cfg:                plsql.conf (318)
Directory of D:\ORACLE\ora901\BC4J:                               bc4j.conf (121)
Directory of D:\ORACLE\ora901\oem_webstage:                       oem.conf (682)
Directory of D:\ORACLE\ora901\rdbms\demo:                         aqxml.conf (326)
Directory of D:\ORACLE\ora901\sqlplus\admin:                      isqlplus.conf (1,476)
Directory of D:\ORACLE\ora901\ultrasearch\jsp\admin\config:       mod__ose.conf (10,681)
Directory of D:\ORACLE\ora901\xdk\admin:                          xml.conf (253)

Total Files Listed: 71 File(s), 321,045 bytes
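For reference, a hedged sketch of how such an overview can be produced from the Windows command prompt
(the drive and starting directory are just the ones used in this example; any subtree works):

  C:\> cd /d D:\ORACLE
  D:\ORACLE> dir *.conf /s

On Unix, something like "find $ORACLE_HOME -name '*.conf' -ls" gives a comparable overview.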
33. Deploying J2EE Applications:
--------------------------------
You can deploy J2EE applications using the OC4J Home Page on the Enterprise Manager Web site.
To navigate to an OC4J Home Page, do the following:
- Navigate to the Instance Home Page where the OC4J instance resides. Scroll to the System Components section.
- Select the OC4J instance in the Name column. This opens the OC4J Home Page for that OC4J instance.
- Scroll to the Deployed Applications section on the OC4J Home Page.

Clicking Deploy EAR File or Deploy WAR File starts the deployment wizard, which deploys the application
to the OC4J instance and binds any Web application to a URL context.

Your J2EE application can contain the following modules:
-- Web applications
   The Web applications module (WAR files) includes servlets and JSP pages.
-- EJB applications
   The EJB applications module (EJB JAR files) includes Enterprise JavaBeans (EJBs).
-- Client application contained within a JAR file

Now archive the JAR and WAR files that belong to an enterprise Java application into an EAR file
for deployment to OC4J. The J2EE specifications define the layout for an EAR file. The internal
layout of an EAR file should be as follows:

<appname>
 |-- META-INF
 |    |-- application.xml
 |-- EJB JAR file
 |-- WEB WAR file
 |-- Client JAR file

When you deploy an application within a WAR file, the application.xml file is created for the Web
application. When you deploy an application within an EAR file, you must create the application.xml
file within the EAR file. Thus, deploying a WAR file is an easier method for deploying a Web
application. (A minimal example application.xml is sketched below.)
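If you do have to write the application.xml yourself for an EAR, a minimal sketch could look like the
following. This is only an illustration: the names myapp.war, myejbs.jar and the context root /myapp
are made-up examples, not taken from a real deployment.

<?xml version="1.0"?>
<!DOCTYPE application PUBLIC "-//Sun Microsystems, Inc.//DTD J2EE Application 1.3//EN"
                             "http://java.sun.com/dtd/application_1_3.dtd">
<application>
  <display-name>myapp</display-name>
  <module>
    <web>
      <web-uri>myapp.war</web-uri>            <!-- the WAR file packaged inside the EAR -->
      <context-root>/myapp</context-root>     <!-- URL context the web application is bound to -->
    </web>
  </module>
  <module>
    <ejb>myejbs.jar</ejb>                     <!-- the EJB JAR file packaged inside the EAR -->
  </module>
</application>

The EAR then simply contains myapp.war, myejbs.jar and META-INF/application.xml as in the layout above.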
------------
34. Errors:
------------
TROUBLESHOOTING 9iAS Rel. 2
Version 2.0, 4 July 2004, Albert van der Sel
With a 9iAS Release 2 Full install (Business Intelligence install), a tremendous number of errors
might be encountered. Here you will find my own experiences, as well as some threads from metalink.

OPMN        = Oracle Process Manager and Notification Server
JAZN / JAAS = Oracle Application Server Java Authentication and Authorization Service
DCM         = Distributed Configuration Management

OPMN stands for 'oracle process management notification' and is Oracle's 'high availability' system.
OPMN monitors processes and brings them up again automatically if they go down. It is started when you
start the enterprise manager website with emctl start from the prompt in the infrastructure oracle home,
and doing this starts 2 opmn processes for each oracle home. OPMN consists of two components - Oracle
Process Manager and Oracle Notification System.

DCM stands for 'distributed configuration management' and is the framework by which all IAS R2
components hang together. DCM is a layer that ensures that if something is changed in one component,
others like Enterprise Manager are made aware as well. It is not a process as such, but rather a
generic term for a framework and utilities. It is controlled directly with the dcmctl command.

DMS = Dynamic Monitoring Services. These processes are started when you start ohs. DMS basically
gathers information on components.

Jserv: works in much the same way as R1, except oracle components no longer use this servlet
architecture, but use oc4j instead.

mod_plsql: works the same way as R1.

mod_oradav: oradav allows web folders to be shared with clients e.g. PC's and accessed as if they
were NT folders.

OC4J_DAS: is used by Portal for the management of users and groups. You access this via
http://machine:port/oiddas
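As a quick orientation on how these layers are driven from the command line, here is a hedged sketch
using only commands that also appear in the scripts and notes further on (the paths and the order are
examples for this particular install, not a prescription):

  REM check / start mid-tier components through DCM (stop EM first, see the warning in section 8 further on)
  cd /d D:\oracle\ias902\dcm\bin
  dcmctl getstate -v
  dcmctl start -ct ohs
  dcmctl start -ct oc4j

  REM OPMN-managed processes and the EM website, per ORACLE_HOME
  cd /d D:\oracle\infra902
  opmn\bin\opmnctl startall
  bin\emctl start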
============================ PART 1: GENERAL 9iAS ERRORS: ======================
====== 1. troubleshooting the targets.xml: =================================== I
f you change the HOSTNAME for the repository (infrastructure) database, then you
need to update the ssoServerMachineName property for the oracle SSO target in I
NFRA_ORACLE_HOME/sysman/emd/targets.xml The $ORACLE_HOME/sysman/emd/targets.xml
file is created during installation of 9iAS and includes descriptions of all cur
rently known targets. This file is used as the source of targets for the EM Webs
ite. sample targets.xml: - <Targets> - <Target TYPE="oracle_webcache" NAME="ias9
02dev.missrv.miskm.mindef.nl_Web Cache" DISPLAY_NAME="Web Cache"> <Property NAME
="HTTPPort" VALUE="7778" /> <Property NAME="logFileName" VALUE="webcache.log" />
<Property NAME="authrealm" VALUE="Oracle Web Cache Administrator" /> <Property
NAME="AdminPort" VALUE="4003" /> <Property NAME="HTTPProtocol" VALUE="http" /> <
Property NAME="logFileDir" VALUE="/sysman/log" /> <Property NAME="HTTPMachine" V
ALUE="missrv.miskm.mindef.nl" /> <Property NAME="HTTPQuery" VALUE="" /> <Propert
y NAME="controlFile" VALUE="d:\oracle\ias902/bin/webcachectl.exe" /> <Property N
AME="MonitorPort" VALUE="4005" /> <Property NAME="HTTPPath" VALUE="/" /> <Proper
ty NAME="authpwd" VALUE="98574abda4f0a0cadcfe3e420f09854b" ENCRYPTED="TRUE" /> <
Property NAME="authuser" VALUE="98574abda4f0a0cadcfe3e420f09854b" ENCRYPTED="TRU
E" /> - <CompositeMembership> <MemberOf TYPE="oracle_ias" NAME="ias902dev.missrv
.miskm.mindef.nl" ASSOCIATION="null" />
</CompositeMembership> </Target> + <Target TYPE="oracle_clkagtmgr" NAME="ias902d
ev.missrv.miskm.mindef.nl_Clickstream" DISPLAY_NAME="Clickstream Collector" ON_H
OST="missrv.miskm.mindef.nl"> - <CompositeMembership> <MemberOf TYPE="oracle_ias
" NAME="ias902dev.missrv.miskm.mindef.nl" /> </CompositeMembership> </Target> ..
.. - <Target TYPE="oracle_repserv" NAME="ias902dev.missrv.miskm.mindef.nl_Repor
ts:rep_missrv" DISPLAY_NAME="Reports:rep_missrv" VERSION="1.0" ON_HOST="missrv.m
iskm.mindef.nl"> <Property NAME="OracleHome" VALUE="d:\oracle\ias902" /> <Proper
ty NAME="UserName" VALUE="repadmin" /> <Property NAME="Servlet" VALUE="http://mi
ssrv.miskm.mindef.nl:7778/reports/rwservlet" /> <Property NAME="Server" VALUE="r
ep_missrv" /> <Property NAME="Password" VALUE="ced9a541f77e7df6" ENCRYPTED="TRUE
" /> <Property NAME="host" VALUE="missrv.miskm.mindef.nl" /> - <CompositeMembers
hip> <MemberOf TYPE="oracle_ias" NAME="ias902dev.missrv.miskm.mindef.nl" ASSOCIA
TION="null" /> </CompositeMembership> </Target> </Target> - <Target TYPE="oracle
_ifs" NAME="iFS_missrv.miskm.mindef.nl:1521:o901:IFSDP"> <Property NAME="DomainN
ame" VALUE="ifs://missrv.miskm.mindef.nl:1521:o901:IFSDP" /> <Property NAME="Ifs
RootHome" VALUE="d:\oracle\ias902\ifs" /> <Property NAME="SysadminUsername" VALU
E="system" /> <Property NAME="SysadminPassword" VALUE="973dc46d050ca537" ENCRYPT
ED="TRUE" /> <Property NAME="IfsHome" VALUE="d:\oracle\ias902\ifs\cmsdk" /> <Pro
perty NAME="SchemaPassword" VALUE="daeffdd4f05cd456" ENCRYPTED="TRUE" /> - <Comp
ositeMembership> <MemberOf TYPE="oracle_ias" NAME="ias902dev.missrv.miskm.mindef
.nl" /> </CompositeMembership> </Target> The above file stores amongst other thi
ngs, the encrypted passwords that EM uses for access to components. Search for o
racle_portal, oracle_repserv etc. Although encrypted, you can change these to be
a password in Englidh as long as you flag it ENCRYPTED=FALSE. This should only
be done for specific bug problems as recommended by oracle support. Do not chang
e these passwords for any other reason!! The following is a list of things to ch
eck when there appears to be a problem with targets.xml. 1. Check the permission
s on the active targets.xml file and restart all the infrastructure components (
database, listener, oid, emctl in that order). The targets.xml file should be ow
ned by the user who installed 9iAS and who starts emctl. Accidentally starting e
mctl as root recreates the targets.xml under root
ownership. Fix this by changing ownership on targets.xml and restarting emctl. 2
. Check which targets are listed, to ensure there is information on each expecte
d target. 3. 4. a. Check whether the hosts file and targets.xml have matching ho
stnames, and whether both have fully qualified hostnames. What should be done if
targets.xml is empty, or missing targets? Restore targets.xml from backup
b. Copy $ORACLE_HOME/sysman/emd/discoveredTargets.xml to $ORACLE_HOME/sysman/emd
/targets.xml, although it may not be complete if additional targets were install
ed following installation. See EM Website has no Entries for the 9iAS Instances
226226.1 and EM Web Site Fails to Display Application Servers 210552.1 and Login
as ias_admin to 9iAS R2 Enterprise Manager, A Blank List is Displayed for Targe
ts 209540.1 c. Check the amount of disk space available. See Bug 2508930 - TARGE
TS.XML IS EMPTY IF WE HAVE NO DISK SPACE. d. Reinstall. See De-Installing 9iAS R
elease 2 (9.0.2) From Unix Platforms 218277.1 5. Is there an Infrastructure and
Mid-Tier install on the system?
When installing both the infrastructure and a mid-tier on the same server (in di
fferent homes), the installation of the infrastructure creates the emtab file po
inting to its own home. During installation of the mid-tier, the mid-tier instal
lation routine uses the emtab file pointing to the infrastructure home so it kno
ws where to write configuration information required for the infrastructure EM W
ebsite, so it can see not only information concerning itself but also informatio
n related to the mid-tier. If the emtab file is removed/renamed after installati
on of the infrastructure but before installation of the mid-tier, a new emtab
file is created pointing to the mid-tier home. The configuration file routines
of the mid-tier installation therefore do not know about the existence of the in
frastructure and write the new configuration information into files in its own h
ome and not into the files in the infrastructure home. In addition to entries in
the targets.xml in the infrastructure home, other files such as the ias.propert
ies file in the infrastructure home are also updated with information concerning
the mid-tier.
Merging the targets.xml file from both homes may solve some of the display probl
ems, though they may not solve control of component issues due to incomplete con
figuration files in the infrastructure home. References to renaming the emtab fi
le should be disregarded when performing infrastructure/mid-tier installs on the
same server, and may have in fact been specific to certain platforms and specif
ic for certain circumstances. The EM Web Site is launched as a J2EE application.
The configuration files consist of many XML files and properties files. Here are some of those files:
  targets.xml
  emd.properties
  logging.properties
  iasadmin.properties
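For check 1 above, a minimal Unix sketch of repairing the ownership of targets.xml. The assumption that
the 9iAS software owner is the OS user 'oracle' in group 'dba' is just an example; use your own install
owner:

  # as root: inspect and, if needed, repair ownership of the active targets.xml
  ls -l $ORACLE_HOME/sysman/emd/targets.xml
  chown oracle:dba $ORACLE_HOME/sysman/emd/targets.xml

  # then, as the install owner, restart the infrastructure components in order
  # (start the infrastructure database first, e.g. with a sqlplus "startup" as in
  #  the start scripts further on in this chapter)
  lsnrctl start
  oidmon start
  oidctl server=oidldapd configset=0 instance=1 start
  emctl start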
2. Cleanly Restarting OID After A 9iAS 9.0.2 Crash: ============================
======================= A problem that often seems to happen when Oracle 9iAS 9.
0.2 crashes is that you can't seem to restart OID using OIDCTL. For example, a s
ituation might arise when a server is bounced without 9iAS being shut down clean
ly. When you reboot the PC, and use DCMCTL to check the status of the OC4J insta
nces prior to starting them, you get the following error message: C:\ocs_onebox\
infra\dcm\bin>dcmctl getState -V ADMN-202026 A problem has occurred accessing th
e Oracle9iAS infrastructure database. Base Exception: oracle.ias.repository.sche
ma.SchemaException:Unable to connect to Directory Server :javax.naming.Communica
tionException: markr.plusconsultancy.co. uk:4032 [Root exception is java.net.Con
nectException: Connection refused: connect] Please, refer to the base exception
for resolution, or call Oracle support. Or, when you watch an ias start script,
at the point oid get started, you will see C:\ocs_onebox\infra\bin>oidctl server
=oidldapd configset=0 instance=1 start which should startup an OID instance. How
ever, sometimes this fails to work and you get the error message: C:\ocs_onebox\
infra\bin>oidctl server=oidldapd configset=0 instance=1 start
*** Instance Number already in use. *** *** Please try a different Instance numb
er. *** oidmon is the 'monitor' process. It polls the database ( table ODS.ODS_P
ROCESS ) for new ldap server launch requests, and if it finds one, (also placed
there by oidctl as user ODSCOMMON ) , then it starts a 'dispatcher/listener proc
ess.' As such, oidctl does not actually start the ldap processes. Oidmon then sp
awns 'dispatcher' and 'server' oidldapd processes. What actually happens behind
the scenes is that a row is inserted or updated in the ODS.ODS_PROCESS table tha
t contains the instance name (which must be unique), the process ID, and a flag
called 'state', which has four values - 0, 1, 2 and 3 - which stand for stop, start
, running and restart. A second process, OIDMON, polls the ODS.ODS_PROCESS table
and when it finds a row with state=0, it reads the pid and stops the process. W
hen it finds a state=1, oidmon starts a new process and updates pid with a new p
rocess id. With state=2, oidmon reads the pid, and checks that the process with
the same pid is running. If it's not, oidmon starts a new process and updates th
e pid. Lastly, with state=3, oidmon reads the pid, stops the process, starts a n
ew one and updates the pid accordingly. If oidmon can't start the server for som
e reason, it retries 10 times, and if still unsuccessful, it deletes the row fro
m the ODS.ODS_PROCESS table. Therefore, OIDCTL only inserts or updates state inf
ormation, and OIDMON reads rows from ODS.ODS_PROCESS, and performs specified tas
ks based on the value of the state column. This all works fine except when 9iAS
crashes; when this happens, OIDMON exits but the OIDLDAPD processes are not kill
ed, and in addition, stray rows are often left in the ODS.ODS_PROCESS table that
are detected when you try to restart the oidldapd instance after a reboot.
The way to properly deal with this is to take the following steps (a worked sketch follows below):

1. Kill any stray OIDLDAPD processes still running (if you haven't rebooted the server since the crash).
2. Delete any rows in the ODS.ODS_PROCESS table: connect to the IASDB database as the ODS user, or as
   SYSTEM:
     select * from ODS.ODS_PROCESS;    (there should be at least one row)
     delete from ODS.ODS_PROCESS;
     commit;
3. Restart the OID instance again, using:
     C:\ocs_onebox\infra\bin>oidctl server=oidldapd configset=0 instance=1 start
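A worked sketch of the cleanup. The SID iasdb and the prompts are examples only; on Windows you would
kill the stray oidldapd processes via the Task Manager instead of with kill:

  # 1. find and kill any stray oidldapd processes (Unix example)
  ps -ef | grep oidldapd
  kill -9 <pid_of_stray_oidldapd>

  # 2. clean up the process table in the repository database
  sqlplus system/<password>@iasdb
  SQL> select * from ODS.ODS_PROCESS;     -- there should be at least one row
  SQL> delete from ODS.ODS_PROCESS;
  SQL> commit;
  SQL> exit

  # 3. restart the OID instance
  oidctl server=oidldapd configset=0 instance=1 start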
OID uses the configfile: $INFRA_ORACLE_HOME/network/admin/ldap.ora

Sample:

# LDAP.ORA Network Configuration File: d:\oracle\infra902\network\admin\ldap.ora
# Generated by Oracle configuration tools.
DEFAULT_ADMIN_CONTEXT = ""
DIRECTORY_SERVERS= (missrv.miskm.mindef.nl:4032:4031)
DIRECTORY_SERVER_TYPE = OID
3. Deobfuscate Errors After Reboot, Crash, or Network Change. ==================
=========================================== This can occur under these sce
narios: * A reboot has just occurred for the first time after 9iAS was installed
. (And, you had to change the /etc/hosts file during installation) OR * A system
crash occurred, and trying to recover. The 9iAS installation is placed on a machi
ne with the same hostname and IP address as before the crash occurred. OR * Hard
ware changes have occurred to the machine. (ie, CPU, NIC) AND * Everything was w
orking under the current 9iAS configuration. (A 9iAS configuration change causin
g this can be a different problem) There are different times when this error can
occur. But, it basically occurs when a change to the system has been done. This
can be after a reboot or a crash, but there is a difference on the machine befo
re and after the occurrence. It is usually a network configuration change that ha
s caused the problem. When you try to start the Oracle HTTP Server, the followin
g error might appear in the opmn logs: "Syntax error on line 6 of OH/Apache/Apac
he/conf/mod_osso.conf: Unable to deobfuscate the SSO server config file, OH/Apac
he/Apache/conf/osso/osso.conf, error Bad padding pattern detected in the last bl
ock." Most of the Mid-Tier components will fail to connect to the Infrastructure
, and will give the following error: "oracle.ias.repository.schema.SchemaExcepti
on:Password could not be retrieved" Possible solution: 1. Start Infrastructure D
B 2. Start the Infrastructure OID 3. Include $ORACLE_HOME/lib in the LD_LIBRARY_
PATH, SHLIB_PATH, or LIBPATH environment variable,
depending on your platform. -For AIX LIBPATH=$ORACLE_HOME/lib:$ORACLE_HOME/lib64
:$LIBPATH; export LIBPATH -For HPUX SHLIB_PATH=$ORACLE_HOME/lib32:$ORACLE_HOME/l
ib:$SHLIB_PATH; export SHLIB_PATH -For Solaris, Linux and Tru64 LD_LIBRARY_PATH=
$ORACLE_HOME/lib:$LD_LIBRARY_PATH; export LD_LIBRARY_PATH 4. Run the command to
reset the iAS password. Please use the SAME password, as we are not attempting t
o change the password you enter when signing onto EM. That is done with the emct
l utility. This command changes it internally, and we want to re-instate the cur
rent obfuscated password: resetiASpasswd.sh "cn=orcladmin" <orcladminpassword_gi
ven_before> <$ORACLE_HOME> Note: There is a resetiASpasswd.bat on Windows, to be
used the same way, just in case these steps are followed on Windows. The above
stated problem is specific to UNIX, but there may be occasions to run through th
e same steps. 5. Use the showPassword utility to obtain the password for the ora
sso user. Then, re-register the listener, being sure to add this information to
the ossoreg command in Step 6: -schema orasso -pass ReplaceWithPassword 6. Run t
he command to re-register mod_osso. * Make sure there are no spaces after the tr
ailing '\'s. If on Windows, use all one line, without the "\". * Replace the uppe
rcase with proper items * The following assumes the to-be registered http server
is on the mid-tier * If on Windows, use "SYSTEM", instead of "root" for -u java
-jar $ORACLE_HOME/sso/lib/ossoreg.jar \ -host $INFRA_HOST \ -sid iasdb \ -site_
name MID_HOST:MID_PORT\ -oracle_home_path $ORACLE_HOME \ -success_url http://MID
_HOST:MID_PORT/osso_login_success \ -logout_url http://MID_HOST:MID_PORT/osso_lo
gout_success \ -cancel_url http://MID_HOST:MID_PORT/ \ -home_url http://MID_HOST
:MID_PORT/ \ -config_mod_osso TRUE \ -u root \ -sso_server_version v1.2 \ \ -pas
s <ReplaceWithPassword> -port 1521 \
-schema orasso
NOTE: The following command will not work on 9iAS 9.0.2.0.x, unless a patched dc
m.jar has previously been applied with a patch (or 9.0.2.1). Since this cannot b
e run on previous versions, just proceed to step 8. 7. Run following commands on
the machine where the change occurred, (not the associated Mid-Tiers): a. Solar
is i. $ORACLE_HOME/dcm/bin/dcmctl resetHostInformation ii. $ORACLE_HOME/bin/emct
l set password <previous_password>
b. NT i. Make sure the Oracle9iAS is stopped ii. Edit %ORACLE_HOME%\sysman\j2ee\
config\jazn-data.xml iii.Search for ias_admin iv. Replace obfuscated text between <cre
dentials> and </credentials> with "!<password>" where "<password>" is the passwo
rd. Example: <credentials>!welcome1</credentials> v. Save the file. 8. Continue
starting 9iAS, as in Note 200475.1. The next step is: % dcmctl start -ct ohs Thi
s is what was originally failing. After successfully starting OHS, You may want
to take a backup of the deobfuscated information as described in Note215955.1.
3. Not able to access the Middle Tier from EM Website. =========================
============================= 3.1 --Thread Status: Active From: Ishaq Baig <mail
to:ishaq@alrabie.com>19-Nov-03 10:47 Subject: Enable to Access the Middle Tier I
nstance from EM Website RDBMS Version: 8.1.7 Operating System and Version: WIN2K
Service Pack3 Product (i.e., OAS, IAS, etc): IAS Product Version: 9.0.2 JDK Ver
sion: 1.3.1.9 Error number: Enable to Access the Middle Tier Instance from EM We
bsite Hi, We have a 9iAS (9.0.2) Infrastructure and Middle Tier instance running on ONE Box (Win2k).
Things were fine until, while trying to implement Single Sign-On, after making the changes as instructed
in Note:199072.1 we stopped the HTTP Server so that the change could take effect; but ever since we
stopped the HTTP Server we couldn't gain access to the Middle Tier Instance from the EM WEB SITE, the
page just hangs...... On the other hand the INFRASTRUCTURE instance is working fine. We even tried
starting the HTTP server through the DCM UTILITY; the following was the message: Content-Type: text/html
Response: 0 of 1 processes started. Check opmn log files such as ipm.log and ons.log for details.".
Resolve the indicat
ed problem at the Oracle9iAS instance where it occurred
then resync the instance. Remote Execute Exception 806212 oracle.ias.sysmgmt.excep
tion.ProcessMgmtException: OPMN operation failure at oracle.ias.sysmgmt.clusterm
anagement.OpmnAgent.validateOperation(Unknown Source) at oracle.ias.sysmgmt.clus
termanagement.OpmnAgent.startOHS(Unknown Source) at oracle.ias.sysmgmt.clusterma
nagement.StartInput.execute(Unknown Source) at oracle.ias.sysmgmt.clustermanagem
ent.ClusterManager.execute(Unknown Source) at oracle.ias.sysmgmt.task.ClusterMan
agementAdapter.execute(Unknown Source) at oracle.ias.sysmgmt.task.TaskMaster.exe
cute(Unknown Source) at oracle.ias.sysmgmt.task.TaskMasterReceiver.process(Unkno
wn Source) at oracle.ias.sysmgmt.task.TaskMasterReceiver.handle(Unknown Source)
at oracle.ias.sysmgmt.task.TaskMasterReceiver.run(Unknown Source) Any input is
highly appreciated, we need to get it up as soon as possible. Regards Ishaq Baig
From: Oracle, Rhoderick Butial <mailto:rhoderick.butial@oracle.com>19-Nov-03 14:
36 Subject: Re : Enable to Access the Middle Tier Instance from EM Website Hello
, What type of changes did you alter? Did you try restarting all of the other co
mponents on the mid tier? There should be some errors generated in the error_log
file, please post these errors in your next reply. You may want to review the f
ollowing notes: Note.236112.1 Wrong user supplied to ossoreg causing ADMN-906025
exception, 806212 Note.223586.1 Starting Oracle HTTP Server gives ADMN-906025 e
rror Note.222051.1 Starting Oracle HTTP Server gives ADMN-906025 Error Also, I n
oticed that you have listed your 9iAS version as 9.0.2, did you apply the latest
patchsets before implementing the changes? If not, you will need to apply the p
atchsets first before making the changes. Please review.. Note.215882.1 9iAS Rel
ease 2 Patching Recommendations Within the Version Lifecycle Thank you, Rod Orac
le Technical Support 3.2 --Displayed below are the messages of the selected thre
ad. Thread Status: Closed
From: Ron Miller <mailto:ron.miller@tccd.edu>28-Oct-03 16:13 Subject: EM Website
extremely slow for 9iAS RDBMS Version:: 9.0.1.3.0 Operating System and Version:
: AIX 4.3.3 Product (i.e. Trace, DB Diff, Expert, etc):: Oracle9i Application Se
rver Product Version:: 9.0.2.2.0 OEM Console Operating System and Version:: Wind
ows 2000 EM Website extremely slow for 9iAS When I use the EM website to access
the components of my 9i App server, the response time is very slow. It takes 2 o
r 3 minutes to go from screen to another. I have found information on this forum
that others are experiencing the same problem. The response from Oracle support
has been that this is a known problem and there is a bug, 2756262, which is to
be fixed in 9.0.4. However, I cannot find any information on when this release w
ill be available. It seems to keep getting pushed back. Does anyone know a relea
se date? Has anyone requested a backport of this fix to an earlier release? Than
ks for any response. From: Oracle, Kathy Ting <mailto:Kathy.Ting@oracle.com>29-O
ct-03 05:41 Subject: Re : EM Website extremely slow for 9iAS The base architectu
re is being redesigned. Due to the redesign, backports are not being accepted. Loo
k for a much improved EM website in future releases. Thank you for using
the MetaLink Forum, Kathy Oracle Support. From: Ron Miller <mailto:ron.miller@tc
cd.edu>29-Oct-03 14:52 Subject: Re : Re : EM Website extremely slow for 9iAS Tha
nks for the reply Kathy. I will look forward to the redesign since the current p
roduct is pretty much useless. From: Oracle, Kathy Ting <mailto:Kathy.Ting@oracl
e.com>29-Oct-03 22:04 Subject: Re : Re : Re : EM Website extremely slow for 9iAS
As do we. Thank you for using the MetaLink Forum, Kathy Oracle Support. 4. Expl
anation of IAS_ADMIN and ORCLADMIN Accounts
================================================== Note:244161.1 Subject: Explan
ation of IAS_ADMIN and ORCLADMIN Accounts Type: BULLETIN Status: PUBLISHED PURPO
SE ------To provide an explanation for the IAS_ADMIN and ORCLADMIN accounts that
are established with Oracle9i Application Server (9iAS) Release 2 (9.0.2.x). SC
OPE & APPLICATION ------------------Website Administrators installing and mainta
ining 9iAS Explanation of IAS_ADMIN and ORCLADMIN Accounts ---------------------
--------------------------There are two users that can create some confusion: ia
s_admin and orcladmin. However, the interaction is more or less internally manag
ed. You log into the EM Website with ias_admin, but use the orcladmin password a
fter initially installing 9iAS. So when changing the orcladmin password, you may
not get the results intended with the ias_admin login. But, if the obfuscation
gets skewed, we found you sometimes need to reinstate the password obfuscation b
etween the two with the resetiASpasswd script. This assumes the same password is
used, and no resulting changes are noted. The *change* occurred internally. The
se changes, and methods, can cause some confusion. You can actually change the E
M Website login separately with the emctl utility. Or, change the orcladmin user
name separately, depending on how you want to manage this. IAS_ADMIN Account --
--------------In EM 9.0.2 and 9.0.3, you will need to use the IAS_ADMIN account
to access the EM Website Home Page. This account is not known within the databas
e or to the Oracle Management Server. Instead, it is a new account used only for
access to the 9iAS Administration (EM) Web Site. The following note can be used
to supplement the Documentation and Release Notes dealing with modifying this p
assword: [NOTE:204182.1] <http://metalink.oracle.com/metalink/plsql/ml2_document
s.showDocument?p_id=204182. 1&p_database_id=NOT> How to Change the IAS_ADMIN pas
sword for Enterprise Manager NOTE: If you change the IAS_ADMIN password (as desc
ribed in Note:204182.1),
the ORCLADMIN password does not change. ORCLADMIN Account ----------------ORCLAD
MIN is used as a superuser account for administering 9iAS. During the initial in
stallation of 9iAS, the installer prompts you to create the IAS_ADMIN password.
This password is then also assigned to the ORCLADMIN account. To reset (not chan
ge) the ORCLADMIN password, you must run the script, ResetiASpasswd.sh. $ORACLE_
HOME/bin/resetiASpasswd.sh "cn=orcladmin" <orcladminpassword_given_before> <$ORA
CLE_HOME> Note: There is a resetiASpasswd.bat on Windows, to be used the same wa
y. If you suspect that the encryption is skewed, use the SAME password, to *rese
t* this. If you desire to change the password you enter when signing onto EM, us
e the emctl utility, (as described in Note:204182.1). If you actually want to ch
ange the ORCLADMIN password, you should use the Oracle Directory Manager, to mod
ify this super user. - Start the Directory Manager from $ORACLE_HOME/bin/oidadmi
n - In the navigator pane, expand Oracle Internet Directory Servers. - Select a
server. The group of tab pages for that server appear in the right pane. - Selec
t the System Passwords tab. This page displays the current user names and passwo
rds for each type of user. Note that passwords are not displayed in the password
fields. SUMMARY ------Is the goal to reset the internally encrypted ias_admin p
assword, change the actual orcladmin password, or just change the password when
logging onto EM? Thats the main question to ask. 1. To reset the internally encr
ypted ias_admin password, use the resetiASpasswd script, and use the same passwo
rd as previously given. 2. To change the orcladmin password, it is best to use t
he Oracle Directory Manager. Please see the Oracle Internet Directory Administra
tor's Guide for more information. 3. Change the EM website or emctl password: Wi
thin the EM Web Site...Preferences link...top right-hand side of the screen. Or,
on command line, using emctl.
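In short, a hedged cheat sheet of the three cases (the commands are the ones used elsewhere in this
chapter; check the exact emctl syntax for your release in Note 204182.1):

  REM 1. Reset the internal ias_admin/orcladmin obfuscation (use the SAME password):
  resetiASpasswd.sh "cn=orcladmin" <orcladminpassword_given_before> <$ORACLE_HOME>

  REM 2. Change the orcladmin password itself: use Oracle Directory Manager
  $ORACLE_HOME/bin/oidadmin

  REM 3. Change the EM website (ias_admin) login: EM Web Site "Preferences" link, or on the command line
  emctl set password <previous_password>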
RELATED DOCUMENTS ----------------[NOTE:234712.1] <http://metalink.oracle.com/me
talink/plsql/ml2_documents.showDocument?p_id=234712. 1&p_database_id=NOT> Managi
ng Schemas of the 9iAS Release 2 Metadata Repository [NOTE:253149.1] <http://met
alink.oracle.com/metalink/plsql/ml2_documents.showDocument?p_id=253149. 1&p_data
base_id=NOT> Resetting the Single Sign-On password for ORCLADMIN . 5. Password f
or ORASSO Database Schema ====================================== Password for OR
ASSO Database Schema goal: What is the password for the ORASSO database schema?
fact: Oracle9i Application Server Enterprise Edition 9.0.2 fact: Oracle9iAS Sing
le Sign-On 9.0.2 fix: During installation a random password is generated for the
ORASSO database schema. You need to look up this password in the Oracle Interne
t Directory. The following text is taken from the Interoperability Patch Readme
(a patch that was mandatory for 9.0.2.0.0 but is no longer needed for 9.0.2.0.1)
: If you do not know the password for the orasso schema, you can use the followi
ng procedure to determine the password: Note: Do not use the "alter user" SQL co
mmand to change the orasso password. If you need to change the orasso password,
use Enterprise Manager so that it can propagate the password to all components t
hat need to access orasso. Start up the Oracle Internet Directory administration
tool from infrastructure machine. prompt> $ORACLE_HOME/bin/oidadmin Log into th
e oidadmin tool using the OID administrator account (cn=orcladmin) for the Infra
structure installation. Username: cn=orcladmin Password: administrator_password
Server : host running Oracle Internet Directory and port number where Oracle Int
ernet Directory is listening The administrator password is the same as the ias_a
dmin password. The default port for Oracle Internet Directory is 389 (without SS
L). Navigate to the Single Sign-On schema (orasso) entry using the administration t
ool. > cn=orcladmin@OID_hostname:OID_port (for example: cn=orcladmin@infra.acme.
com:389) > Entry Management > cn=OracleContext > cn=Products
> cn=IAS > cn=Infrastructure Databases > orclReferenceName=Single Sign-On databa
se SID:Single Sign-On Server hostname (for example: orclReferenceName=iasdb:infr
a.acme.com) > orclResourceName=ORASSO Click the above entry and look for the orc
lpasswordattribute attribute value on the right panel. This value is the passwor
d for the orasso schema. NOTE: If you have multiple Infrastructures installed us
ing one Oracle Internet Directory, ensure that you are looking at the correct Si
ngle Sign-On database entry since all the infrastructure instances would have an
ORASSO schema entry, but only one of them is actually being used.
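Instead of clicking through oidadmin, the same attribute can also be read with a single ldapsearch.
A sketch (host, port and the orcladmin password are placeholders; this is essentially what the
showPassword.bat script in the next section wraps):

  ldapsearch -h <oid_host> -p <oid_port> -D "cn=orcladmin" -w <orcladmin_password>
     -b "cn=IAS Infrastructure Databases,cn=IAS,cn=Products,cn=OracleContext"
     -s sub "orclResourceName=ORASSO" orclpasswordattribute

(Enter the command on one line.)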
6. Windows Script to Determine orasso Password in 9iAS Release 2 (9.0.2) =======
================================================================= Note:205984.1
Subject: Windows Script to Determine orasso Password in 9iAS Release 2 (9.0.2) T
ype: BULLETIN Status: PUBLISHED PURPOSE ------The showPassword utility was devel
oped to avoid having to use the oidadmin tool to look up various OID passwords,
by using ldapsearch with Oracle9i Application Server (9iAS) Release 2 (9.0.2). A
s a script, varying on different environments, it is not supported by Oracle Sup
port Services. It is intended as an example, to aid in the understanding of the
product. SCOPE & APPLICATION ------------------9iAS Administrators and Windows A
dministrators Windows Script to Determine orasso Password in 9iAS Release 2 (9.0
.2)
--------------------------------------------------------------------
1. Paste the following script in a file named showPassword.bat and copy it in a directory.
   Please also ensure that ldapsearch is there in PATH on your windows machine.

8<8<8<8<8<8<8<8<8<8<8<8<8<8<8<8<8<8<8<8<8<8<8<8<8<8<
set OIDHOST=bldel18.in.oracle.com
set OIDPORT=4032
if "%1"== "" goto cont
if "%2"== "" goto cont
ldapsearch -h %OIDHOST% -p %OIDPORT% -D "cn=orcladmin" -w "%1" -b "cn=IAS Infrastructure Databases,cn=IAS,cn=Products,cn=OracleContext" -s sub "orclResourceName=%2" orclpasswordattribute
goto :end
:cont
echo Correct Syntax is
echo showpassword.bat orcladminpassword username
:end
8<8<8<8<8<8<8<8<8<8<8<8<8<8<8<8<8<8<8<8<8<8<8<8<8<8<

   Note that the "ldapsearch...orclpasswordattribute" command should be put on one line.

2. Edit the script and update with your own hostname and OID port:
     OIDHOST=bldel18.in.oracle.com
     OIDPORT=4032
3. Ensure that you have ldapsearch from the correct ORACLE_HOME in the PATH.
4. Check that OID is up and running before proceeding.
5. Run the script, and enter the schema name as: orasso, and the password value is shown.
   For example (all ONE line...may be easier to copy/paste from Notepad):

   C:\> showPassword.bat oracle1 orasso
   OrclResourceName=ORASSO,orclReferenceName=iasdb.bldel18.in.oracle.com,cn=IAS Infrastructure
   Databases,cn=IAS,cn=Products,cn=OracleContext
   orclpasswordattribute=Gbn3Fd24

   The orasso password in this example is Gbn3Fd24.
7. STARTING AND STOPPING 9iAS WITH SCRIPTS.
===========================================
----------------------------------------------------------------
7.1 From metalink:

a) StartInfrastructure.bat:

REM ######################################
############## REM #################################################### REM ## S
cript to start Infrastructure ## REM ## ## REM #################################
################### REM #################################################### REM
## REM ## Set environment variables for Infrastructure REM ####################
################################ set ORACLE_HOME=D:\IAS90201I set ORACLE_SID=IAS
DB set PATH=%ORACLE_HOME%\bin;%ORACLE_HOME%\dcm\bin;%ORACLE_HOME%\opmn\bin;%PATH
%; REM #####################################################
REM ## Start Oracle Internet Directory processes REM ###########################
########################## echo .....Starting %ORACLE_HOME% Internet Directory .
..... oidmon start oidctl server=oidldapd instance=1 start timeout 20 REM ######
############################################### REM ## Start Oracle HTTP Server
and OC4J processes REM ##################################################### ech
o .....Starting OHS and OC4J processes....... call dcmctl start -ct ohs call dcm
ctl start -ct oc4j REM ##################################################### REM
## Check OHS and OC4J processes are running REM ###############################
###################### echo .....Checking OHS and OC4J status..... call dcmctl g
etstate -v pause REM #################################################### b) Sta
rtMidTier.bat: REM #################################################### REM ####
################################################ REM ## Script to start MidTier
## REM ## ## REM #################################################### REM ######
############################################## REM ## REM ## Set environment var
iables for Midtier REM #################################################### set
ORACLE_HOME=D:\IAS90201J set PATH=%ORACLE_HOME%\bin;%ORACLE_HOME%\dcm\bin;%ORACL
E_HOME%\opmn\bin;%PATH%; REM ###################################################
## REM ## Start Oracle HTTP Server and OC4J processes REM ######################
############################### echo .....Starting OHS and OC4J processes.......
call dcmctl start -ct ohs call dcmctl start -ct oc4j REM ######################
############################### REM ## Check OHS and OC4J processes are running
REM ##################################################### echo .....Checking OHS
and OC4J status..... call dcmctl getstate -v REM ##############################
###################### REM ## Start Webcache REM ###############################
##################### echo .....Starting Webcache.......... webcachectl start RE
M #################################################### REM ## Start Enterprise M
anager Website REM #################################################### echo ...
..Starting EM Website..... net start Oracleias90201iEMWebsite echo ....Done paus
e REM #################################################### c) StopMidTier.bat:
REM #################################################### REM ###################
################################# REM ## Script to stop Midtier ## REM ## ## REM
#################################################### REM ######################
############################## REM ## REM ## Set environment variables for Midti
er REM #################################################### set ORACLE_HOME=D:\I
AS90201J set PATH=%ORACLE_HOME%\bin;%ORACLE_HOME%\dcm\bin;%ORACLE_HOME%\opmn\bin
;%PATH%; REM #################################################### REM ## Stop En
terprise Manager Website REM ###################################################
# echo .....Stopping EM Website..... net stop Oracleias90201iEMWebsite REM #####
############################################### REM ## Stop Webcache REM #######
############################################# echo .....Stopping %ORACLE_HOME% W
ebcache.......... webcachectl stop REM #########################################
########### REM ## Stop Oracle HTTP Server and OC4J processes REM ##############
###################################### echo .....Stopping %ORACLE_HOME% OHS and
OC4J........ dcmctl shutdown echo ....Done pause REM ###########################
######################### d)StopInfrastructure.bat: REM ########################
############################ REM ###############################################
##### REM ## Script to stop Infrastructure ## REM ## ## REM ####################
################################ REM ###########################################
######### REM ## REM ## Set environment variables for Infrastructure REM #######
############################################# set ORACLE_HOME=D:\IAS90201I set O
RACLE_SID=IASDB set PATH=%ORACLE_HOME%\bin;%ORACLE_HOME%\dcm\bin;%ORACLE_HOME%\o
pmn\bin;%PATH%; set EM_ADMIN_PWD=<your_pwd> REM ################################
#################### REM ## Stop Enterprise Manager Website REM ################
#################################### echo .....Stopping EM Website..... call emc
tl stop REM #################################################### REM ## Stop Ora
cle HTTP Server and OC4J processes REM #########################################
########### echo .....Stopping %ORACLE_HOME% OHS and OC4J........ call dcmctl sh
utdown REM ##################################################### REM ## Stop Ora
cle Internet Directory processes REM ###########################################
########## echo .....Stopping %ORACLE_HOME% Internet Directory ...... oidctl ser
ver=oidldapd configset=0 instance=1 stop
timeout 20 oidmon stop echo ....Done pause REM #################################
#################### -----------------------------------------------------------
----
7.2 Our scripts:

Starting:
=========
@ECHO OFF
TITLE Startup all
REM *******
*************************************************** REM Adjust the following val
ues set ORACLE_BASE=D:\oracle set IAS_HOME=%ORACLE_BASE%\ias902 set IAS_BIN=%IAS
_HOME%\bin set INFRA_HOME=%ORACLE_BASE%\infra902 set INFRA_BIN=%INFRA_HOME%\bin
REM ********************************************************** echo echo echo ec
ho echo echo echo echo *********************************************************
* Parameters used are: ORACLE_BASE = %ORACLE_BASE% IAS_HOME = %IAS_HOME% IAS_BIN
= %IAS_BIN% INFRA_HOME = %INFRA_HOME% INFRA_BIN = %INFRA_BIN% *****************
*****************************************
echo ********************************************************** echo "Starting u
p infra" echo ********************************************************** echo "S
tarting iasdb instance" echo connect sys/change_on_install as sysdba > $$tmp$$ e
cho startup >> $$tmp$$ echo exit >> $$tmp$$ %INFRA_BIN%\sqlplus /nolog < $$tmp$$
del $$tmp$$ echo "Starting Oracle Internet Directory..." %INFRA_BIN%\oidmon sta
rt %INFRA_BIN%\oidctl server=oidldapd instance=1 start timeout 10 echo "Starting
Enterprise manager Services..." net start Oracleinfra902EMWebsite echo "Startin
g OEM ..." net start Oracleinfra902ManagementServer rem net start Oracleinfra902
TNSListener net start Oracleinfra902Agent
echo "Starting up infra services..." %INFRA_HOME%\opmn\bin\opmnctl startall echo
********************************************************** echo "Done kickin' u
p infra!" echo ********************************************************** echo.
echo ********************************************************** echo "Starting a
ll mid tier services..." echo **************************************************
******** %IAS_HOME%\opmn\bin\opmnctl startall echo "Starting webcache..." %IAS_B
IN%\webcachectl start echo "Starting all services..." net start Oracleias902Disc
overer rem net start Oracleias902ProcessManager rem net start Oracleias902WebCac
heAdmin rem net start Oracleias902WebCache echo ********************************
************************** echo "Done starting up mid tier!" echo **************
******************************************** pause Stopping: ========= @ECHO OFF
TITLE Shutdown all REM ********************************************************
** REM Adjust the following values set ORACLE_BASE=D:\oracle set IAS_HOME=%ORACL
E_BASE%\ias902 set IAS_BIN=%IAS_HOME%\bin set INFRA_HOME=%ORACLE_BASE%\infra902
set INFRA_BIN=%INFRA_HOME%\bin REM *********************************************
************* echo echo echo echo echo echo echo echo **************************
******************************** Parameters used are: ORACLE_BASE = %ORACLE_BASE
% IAS_HOME = %IAS_HOME% IAS_BIN = %IAS_BIN% INFRA_HOME = %INFRA_HOME% INFRA_BIN
= %INFRA_BIN% **********************************************************
echo ********************************************************** echo "Shutting d
own mid tier..." echo **********************************************************
echo "Stopping all mid tier services..." %IAS_HOME%\opmn\bin\opmnctl stopall ech
o "Stopping webcache..." %IAS_BIN%\webcachectl stop echo "Stopping Discoverer se
rvice..." net stop Oracleias902Discoverer echo "Sanity stops for WebCache" net s
top Oracleias902WebCache net stop Oracleias902WebCacheAdmin echo ***************
******************************************* echo "Done shutting down mid tier!"
echo ********************************************************** echo. echo *****
***************************************************** echo "Shutting down Infras
tructure..." echo ********************************************************** ech
o "Stopping Enterprise Manager Website" call %INFRA_BIN%\emctl stop welcome1 ech
o "Stopping Enterprise Manager Management Console..." call %INFRA_BIN%\oemctl st
op oms sysman/sysman echo "Stopping Infra Services..." %INFRA_HOME%\opmn\bin\opm
nctl stopall echo "Stopping Oracle Internet Directory..." %INFRA_BIN%\oidctl ser
ver=oidldapd instance=1 stop timeout 10 %INFRA_BIN%\oidmon stop echo "Stopping i
nfra database..." echo connect sys/change_on_install as sysdba > $$tmp$$ echo sh
utdown immediate >> $$tmp$$ echo exit >> $$tmp$$ %INFRA_BIN%\sqlplus /nolog < $$
tmp$$ del $$tmp$$ echo "Stopping all Remaining NT Services..." rem net stop Orac
leinfra902TNSListener net stop Oracleinfra902Agent echo ************************
********************************** echo "Done shutting down infra!" echo *******
*************************************************** pause Starting BI: =========
=== @echo off
title Starting Oracle Reports rem **********************************************
********************** set IAS_HOME=d:\oracle\ias902 set IAS_BIN=%IAS_HOME%\bin
rem ******************************************************************** echo **
****************************************************************** echo Paramete
rs used: echo. echo IAS_HOME = %IAS_HOME% echo IAS_BIN = %IAS_BIN% echo ********
************************************************************ echo. echo ********
************************************************************ echo Bringing up OC
4J_BI_Forms (Business Intelligence/Forms) echo *********************************
*********************************** call %IAS_HOME%\dcm\bin\dcmctl start -co OC4
J_BI_Forms -v timeout 5 echo Check to see if the instance really started up: ech
o. call %IAS_HOME%\dcm\bin\dcmctl getReturnStatus echo Done starting up OC4J_BI_
FORMS... pause Starting CMSDK: =============== @echo off title Starting Oracle C
M SDK 9.0.3.1. rem *************************************************************
******* set IAS_HOME=d:\oracle\ias902 set IAS_BIN=%IAS_HOME%\bin rem ***********
********************************************************* echo *****************
*************************************************** echo Parameters used: echo.
echo IAS_HOME = %IAS_HOME% echo IAS_BIN = %IAS_BIN% echo ***********************
********************************************* echo. echo ***********************
********************************************* echo Bringing up Domain Controller
, note default password is: ifsdp echo *****************************************
*************************** call %IAS_HOME%\ifs\cmsdk\bin\ifsctl start echo Done
bringing up Domain Controller echo. echo **************************************
****************************** echo Bringing up OC4J Instance... echo **********
********************************************************** call %IAS_HOME%\dcm\b
in\dcmctl start -co OC4J_iFS_cmsdk -v timeout 5 echo Check to see if the instanc
e really started up: echo.
call %IAS_HOME%\dcm\bin\dcmctl getReturnStatus echo Done starting up OC4J Instan
ce. echo Done starting up CM SDK. pause
8. Warning: Stop EMD Before Using DCMCTL Utility. ==============================
=================== Note:207208.1 Subject: Warning: Stop EMD Before Using DCMCTL
Utility Type: BULLETIN Status: PUBLISHED PURPOSE ------Issue a warning for the
use of the dcmctl utility when administering the Oracle9i Application Server (9i
AS) Release 2 (9.0.2.0.x). There is now a Patch available which resolves the iss
ue of running DCM and EM at the same time. SCOPE & APPLICATION -----------------
-This article is intended for 9iAS Administrators. It gives a general descriptio
n of a problem that can occur when dcmctl is used without precautions. DCMCTL RE
STRICTIONS ------------------1. Do not use dcmctl while EMD (Enterprise Manager
Console/Website) is running. The dcmctl utility is issuing DCM commands to contr
ol the state of components in 9iAS. The same is done from the EMD, which is gene
rally reachable at the following URLs: http://yourserver:1810/emd/console http:/
/yourserver:1810/ When the dcmctl utility is used while EMD is running, this may
cause out-of-sync problems with your 9iAS instance. This is caused by only one
DCM daemon being available to 'listen' to requests.

How to Avoid Problems
---------------------
Stop EMD:
  $ emctl stop
Issue your command with dcmctl.
When you are done, restart EMD:
  $ emctl start
2. If an Infrastructure and Mid-Tier(s) are installed on same server, EM must be
stopped when issuing dcmctl from either the Infrastructure or a Mid-tier direct
ories. This is because EM is common to all 9iAS instances on the server. Stoppin
g multiple instances of EM across multiple servers is not necessary. The DCM/EM
concurrency conflict will only come into play with instances on a given machine
. 3. Do not issue multiple DCM commands at once, and do not issue a DCM command
while one might still be running. 4. If you start a component with DCM, it is re
commended to also stop it with DCM. If you start a component with the EM Website
, it is recommended to stop it with the EM Website. SOLUTION -------If out-of-sync
errors occur because of EM being up while using dcmctl, then a reinstall may be
necessary. Please apply the following patches in order to prevent this concurre
ncy problem from happening inadvertently: Patch 2542920 : 9iAS 9.0.2.1 Core Patc
hset Patch 2591631 : DCM/EM Concurrency Fix * * * * The 9.0.2.1 Patchset is a pr
e-requisite of the DCM Patch. Both patches should be applied to all associated 9
iAS Tiers. Please refer to the readme for important information. Future releases
(9.0.2.2+) will have this fix included.
9. MISCELLANEOUS: ================= 9.1 Change of hostname: --------------------
--If you change the HOSTNAME for the repository (infrastructure) database, then
you need to update the ssoServerMachineName property for the oracle SSO target i
n INFRA_ORACLE_HOME/sysman/emd/targets.xml If you change the PORT for the reposi
tory database, discoverer is affected; update the port for discodemo in tnsnames.
ora. 9.2 Files with IP in the name: ------------------------------
9.3 ldapcheck and ldapsearch examples:
--------------------------------------
List users and/or passwords: use ldapcheck and ldapsearch.

Example 1:
----------
ldapsearch -h uks799 -p 4032 -D "cn=orcladmin" -w your_ias_or_oid_password -b "cn=Users,dc=uk,dc=oracle,dc=com" -s sub -v "objectclass=*"
Example 2:
----------
9.4 dcmctl commands:
--------------------
On a simple 9iAS webcache/j2ee installation, you might try the following command:

F:\oracle\ias902\dcm\bin>dcmctl getstate -v

Current State for Instance: ias902dev.localhost

    Component        Type   Up Status   In Sync Status
  ===========================================================================
  1 home             oc4j   Up          True
  2 HTTP Server      ohs    Up          True
  3 OC4J_Demos       oc4j   Up          True
  4 OC4J_iFS_cmsdk   oc4j   Up          True

dcmctl getstate -ct ohs       - show status of ohs of the current instance ONLY.
dcmctl updateConfig           - attempt to update DCM's view of the world after a manual configuration change.
dcmctl getstate -v            - determines which components aren't starting.
dcmctl resyncInstance -force  - force resync of the instance.
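A hedged example of how these fit together after you have manually edited a configuration file
(for example httpd.conf) outside EM. Remember the warning in section 8 about stopping EM first;
the sequence itself is an assumption, built only from the commands listed above:

  emctl stop
  dcmctl updateConfig
  dcmctl getstate -v
  dcmctl resyncInstance -force      (only if getstate shows the instance out of sync)
  emctl start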
9.5 Fault tolerance:
====================
217368.1 from Metalink - "Advanced Configurations and Topologies for Enterprise Deployments of E-Business"

- Hot site Oracle disaster recovery configuration
- Oracle failover with Oracle standby database
- Oracle failover with Oracle9i Dataguard
- Oracle failover with Oracle9i TAF (Transparent Application Failover)
- Oracle failover with Oracle9i Real Application Clusters (RAC)

|----------------------------------|
|Machine A                         |
| |------------------------------| |
| |Instance A                    | |
| | - Cluster manager            | |
| | - Distributed Lock Manager   | |          |-------------|
| | - OS Shared Disk Driver      | |----------|             |
| |------------------------------| |          |             |
|----------------------------------|          |   Shared    |
          | interconnect                      |   Disk      |
|----------------------------------|          |  Subsystem  |
|Machine B                         |          |             |
| |------------------------------| |----------|             |
| |Instance B                    | |          |-------------|
| | - Cluster manager            | |
| | - Distributed Lock Manager   | |
| | - OS Shared Disk Driver      | |
| |------------------------------| |
|----------------------------------|

Note 1:
-------
Local Clustering Definition Local cluster is defined as two or more physical mac
hines (nodes) that share common disk storage and logical IP address. Clustered n
odes exchange cluster information over heartbeat link(s). Cluster software colle
cts information and checks the situation on both nodes. On error condition, soft
ware will execute a predefined script and switch the clustered services over to
a secondary machine. Oracle instance, as one of clustered services, will be swit
ched off together with listener process, and restarted on the secondary (survivi
ng) node. HA Oracle Agent HA Oracle Agent software controls Oracle database acti
vity on Sun Cluster nodes. The agent performs fault checking using two processes
on the local node and two process on the remote node by querying V$SYSSTAT tabl
e for active sessions. If the database has no active sessions, HA Agent will ope
n a test transaction (connect and execute in serial create, insert, update, drop
table commands). Return error codes from HA Agent have been validated against a
special action file on location /etc/opt/SUNWscor/haoracle_config_V1:

  # Action file for HA-DBMS Oracle fault monitor
  # State DBMS_er proc_di log_msg timeout int_err new_sta action
  co * * * * 1 * stop           Internal HA-DBMS Oracle error connecting to db
  on 28 * * * * di none         Session killed by DBA, will reconnect
  * 50 * * * * di takeover      O/S error occurred while obtaining an enqueue
  co 0 * * 1 0 * restart        A timeout has occurred during connect

- Takeover - cluster software will switch to another node.
- Stop     - cluster will stop DBMS.
- None     - no action taken.
- Restart  - database restarted locally on the same node.

HA Oracle Agent requires Oracle configuration files (listener.ora, oratab and tnsnames.ora) on a
unique predefined location /var/opt/oracle.

Note 2:
-------
You Asked:
If I want to use Oracle Fail Safe and Dataguard do the servers have to be clustered? Right now I have a
primary database on one server and a separate server for the logical standby database. I want automatic
failover, but it looks like Oracle Fail Safe requires clustered servers. The DATAGUARD manual mentions
that you can use ORACLE FAIL SAFE on the windows platform, but the ORACLE FAIL SAFE documentation
doesn't say squat about DATAGUARD or how to configure for it. Is there any documentation on this
subject that you can refer me to?

and we said...

Fail Safe is a clustering solution. The two (data guard & failsafe) are complementary but somewhat
orthogonal here. Failsafe is designed to keep the single database up and available -- in a single data
center. As long as that room exists -- failsafe keeps the database up. Data guard is a disaster
recovery solution. It is for when the room the data center is in "goes away" for whatever reason.
Data guard wants the machines to be independent (no clusters) of each other and separated by some
geographic distance. Failsafe, like 9i RAC, wants the machines to be tethered together - sitting right
next to each other in a cluster.

Failsafe is HA (high availability). Data guard is DR (disaster recovery).

Failsafe will give you automated failover. As long as the data center exists, that database is up.
With data guard -- you do not WANT automated failover (many *think* they do but
you don't). Do you really want your DR solution to kick in due to a WAN failure?
No, not really. For DR to take over, you want a human to say "yup, data center
burnt to the ground, lets head for the mountains". You do not want the DR site t
o kick in because "it thinks the primary site is gone" -- you need to tell it "t
he primary site is gone". In a cluster -- the machines are very aware of eachoth
er and automated failover is "safe". So, data guard's reference to failsafe is inc
idental. That failsafe doesn't talk about data guard is of no real consequence.
They are independent feature/functions.

Note 3: terms:
--------------

Note 4:
-------
FAQ RAC:
Real Application Clusters General RAC Is it supported to install CRS and RAC as
different users. (09-SEP-04) I have changed my spfile with alter system set <par
ameter_name> =.... scope=spfile. The spfile is on ASM storage and the database w
ill not start. (18-APR-04) Is it difficult to transition from Single Instance to
RAC? (18-JUL-05) What are the dependencies between OCFS and ASM in Oracle10g ?
(05-MAY-05) What software is necessary for RAC? Does it have a separate installa
tion CD to order? (05-MAY-05) Do we have to have Oracle RDBMS on all nodes? (02-
APR-04) What kind of HW components do you recommend for the interconnect? (02-AP
R-04) Is rcp and/or rsh required for normal RAC operation ? (06-NOV-03) Are ther
e any suggested roadmaps for implementing a new RAC installation? (26-NOV02) Wha
t is Cache Fusion and how does this affect applications? (26-NOV-02) Can I use i
SCSI storage with my RAC cluster? (13-JUL-05) Can I use RAC in a distributed tra
nsaction processing environment? (16-JUN-05) Is it a good idea to add anti-virus
software to my RAC cluster? (31-JAN-05) When configuring the NIC cards and swit
ch for a GigE Interconnect should it be set to FULL or Half duplex in RAC? (05-N
OV-04) What would you recommend to a customer, Oracle clusterware or Vendor Cluster
ware (I.E. MC Service Guard, HACMP, Sun Cluster, Veritas etc.) with Oracle Datab
ase 10g Real Application Clusters? (21-OCT-04) What is Standard Edition RAC? (01
-SEP-04) High Availability If I use Services with Oracle Database 10g, do I stil
l need to set up Load Balancing ? (16-JUN-05) Why do we have a Virtual IP (VIP)
in 10g? Why does it just return a dead connection when its primary node fails? (
12-MAR-04) I am receiving an ORA-29740 error. What should I do? (02-DEC-02) Can
RMAN backup Real Application Cluster databases? (26-NOV-02) What is Server-side
Transparent Application Failover (TAF) and how do I use it? (07-JUL-05) What is
CLB_GOAL and how should I set it? (16-JUN-05) Can I use TAF and FAN/FCF? (16-JUN
-05) What clients provide integration with FAN and FCF? (28-APR-05) What are my
options for load balancing with RAC? Why do I get an uneven number of connection
s on my instances? (15-MAR-05) Can our 10g VIP fail over from NIC to NIC as well
as from node to node ? (10-DEC04) Can I use ASM as mechanism to mirror the data
in an Extended RAC cluster? (18-OCT04) What does the Virtual IP service do? I u
nderstand it is for failover but do we need a separate network card? Can we use
the existing private/public cards? What would happen if we used the public ip? (
15-MAR-04) What do the VIP resources do once they detect a node has failed/gone
down? Are the VIPs automatically acquired, and published, or is manual intervent
ion required? Are VIPs mandatory? (15-MAR-04) Scalability I am seeing the wait e
vents 'ges remote message', 'gcs remote message', and/or 'gcs for action'. What
should I do about these? (02-APR-04) What are the changes in memory requirements
from moving from single instance to RAC? (02-DEC-02) What is the Load Balancing
Advisory? (16-JUN-05) What is Runtime Connection Load Balancing? (16-JUN-05) Ho
w do I enable the load balancing advisory? (16-JUN-05)
Manageability How do I stop the GSD? (22-MAR-04) How should I deal with space ma
nagement? Do I need to set free lists and free list groups? (16-JUN-03) I was in
stalling RAC and my Oracle files did not get copied to the remote node(s). What
went wrong? (26-NOV-02) What is the Cluster Verification Utility (cluvfy)? (16-J
UN-05) What versions of the database can I use the cluster verification utility
(cluvfy) with? (16-JUN-05) What are the implications of using srvctl disable for
an instance in my RAC cluster? I want to have it available to start if I need i
t but at this time to not want to run this extra instance for this database. (31
-MAR-05) Platform Specific How many nodes can be had in an HP/Sun/IBM/Compaq/NT/
Linux cluster? (21-OCT-04) Is crossover cable supported as an interconnect with
9iRAC/10gRAC on any platform ? (21-FEB-05) Is it possible to run RAC on logical
partitions (i.e. LPARs) or virtual separate servers. (18-MAY-04) Can the Oracle
Database Configuration Assistant (DBCA) be used to create a database with Verita
s DBE / AC 3.5? (10-JAN-03) How do I check RAC certification? (26-NOV-02) Where
I can find information about how to setup / install RAC on different platforms ?
(08-AUG-02) Is Veritas Storage Foundation 4.0 supported with RAC? (05-OCT-04) P
latform Specific -- Linux Is 3rd Party Clusterware supported on Linux such as Ve
ritas or Redhat? (11-MAY-05) Can you have multiple RAC $ORACLE_HOME's on Linux?
(19-JUL-05) After installing patchset 9013 and patch_2313680 on Linux, the start
up was very slow (20-DEC-04) Is CFS Available for Linux? (20-DEC-04) Where can I
find more information about hangcheck-timer module on Linux ? And how do we con
figure hangcheck-timer module ? (20-DEC-04) Can RAC 10g and 9i RAC be installed
and run on the same physical Linux cluster? (20-DEC-04) Is the hangcheck timer s
till needed with Oracle Database 10g RAC? (20-DEC-04) How to configure bonding o
n Suse SLES8. (29-NOV-04) How to configure bonding on Suse SLES9. (29-NOV-04) Pl
atform Specific -- Solaris Does RAC run faster with Sun-cluster or Veritas clust
er-ware? (these being alternatives with Sun hardware) Is there some clusterware
that would make RAC run faster? (20-DEC-04) Platform Specific -- HP-UX Is HMP su
pported with 10g on all HP platforms ? (20-DEC-04) Platform Specific -- Windows
Does the Oracle Cluster File System (OCFS) support network access through NFS or
Windows Network Shares? (27-JAN-05) Can I run my 9i RAC and RAC 10g on the same
Windows cluster? (01-JUL-05) My customer wants to understand what type of disk
caching they can use with their Windows RAC Cluster, the install guide tells the
m to disable disk caching? (31MAR-05) Platform Specific -- IBM AIX Do I need HAC
MP/GPFS to store my OCR/Voting file on a shared device. (20-DEC-04) Platform Spe
cific -- IBM-z/OS (Mainframe) Can I run Oracle RAC 10g on my IBM Mainframe Syspl
ex environment (z/OS)? (07-JUL05) Diagnosibility What are the cdmp directories i
n the background_dump_dest used for? (11-AUG-03)
EBusiness Suite with RAC What is the optimal migration path to be used while mig
rating the E-Business suite to RAC? (08-JUL-05) Is the Oracle E-Business Suite (
Oracle Applications) certified against RAC? (04JUN-03) Can I use TAF with e-Busi
ness in a RAC environment? (02-APR-03) How to configure concurrent manager in a
RAC environment? (20-SEP-02) Should functional partitioning be used with Oracle
Applications? (20-SEP-02) Which e-Business version is preferable? (20-SEP-02) C
an I use Automatic Undo Management with Oracle Applications? (20-SEP-02) Cluster
ed File Systems Can I use OCFS with SE RAC? (01-SEP-04) What are the maximum num
ber of nodes under OCFS on Linux ? (06-NOV-03) Where can I find documentation on
OCFS ? (06-NOV-03) What files can I put on Linux OCFS? (14-AUG-03) Is Sun QFS s
upported with RAC? What about Sun GFS? (19-JAN-05) Is Red Hat GFS(Global File Sy
stem) is certified by Oracle for use with Real Application Clusters? (22-NOV-04)
Oracle Clusterware (CRS) Is it possible to use ASM for the OCR and voting disk?
(19-JUL-05) Is it supported to rerun root.sh from the Oracle Clusterware instal
lation ? (05MAY-05) Is it supported to allow 3rd Party Clusterware to manage Ora
cle resources (instances, listeners, etc) and turn off Oracle Clusterware manage
ment of these? (05-MAY-05) What is the High Availability API? (05-MAY-05) How to
move the OCR location ? (24-MAR-04) Does Oracle Clusterware support application
vips? (11-JUL-05) Why is the home for Oracle Clusterware not recommended to be
subdirectory of the Oracle base directory? (11-JUL-05) Can I use Oracle Clusterw
are to provide cold failover of my 9i or 10g single instance Oracle Databases? (
01-JUL-05) How do I put my application under the control of Oracle Clusterware t
o achieve higher availability? (16-JUN-05) How do I protect the OCR and Voting i
n case of media failure? (05-MAY-05) How do I use multiple network interfaces to
provide High Availability for my interconnect with Oracle Clusterware? (06-APR-
05) How to Restore a Lost Voting Disk used by Oracle Clusterware 10g (02-DEC-04)
With Oracle Clusterware 10g, how do you backup the OCR? (02-DEC-04) Does the ho
stname have to match the public name or can it be anything else? (05NOV-04) Is i
t a requirement to have the public interface linked to ETH0 or does it only need
to be on a ETH lower than the private interface?: - public on ETH1 - private on
ETH2 (05-NOV-04) How do I restore OCR from a backup? On Windows, can I use ocop
y? (27-OCT-04) What should the permissions be set to for the voting disk and ocr
when doing a RAC Install? (22-OCT-04) Which processes access to OCR ? (22-OCT-0
4) Can I change the name of my cluster after I have created it when I am using O
racle Database 10g Clusterware? (05-OCT-04) Can I change the public hostname in
my Oracle Database 10g Cluster using Oracle Clusterware? (05-OCT-04) During CRS
installation, I am asked to define a private node name, and then on the next scr
een asked to define which interfaces should be used as private and public interf
aces. What information is required to answer these questions? (24-MAR-04)

Answers
=======

I have changed my spfile with alter system set <parameter_name> =.... scope=sp
file. The spfile is on
ASM storage and the database will not start.

How to recover: in $ORACLE_HOME/dbs:

  . oraenv <instance_name>
  sqlplus "/ as sysdba"
  startup nomount
  create pfile='recoversp' from spfile
  /
  shutdown immediate
  quit

Now edit the newly created pfile to change the parameter to something sensible. Then:

  sqlplus "/ as sysdba"
  startup pfile='recoversp'    (or whatever you called it in step one)
  create spfile='+DATA/GASM/spfileGASM.ora' from pfile='recoversp'
  /
  shutdown immediate
  startup
  quit

N.B. The name of the spfile is in your original init<instance_name>.ora, so adjust to suit.

Modified: 18-APR-04 Ref #: ID-5068
-------------------------------------------------------------------------------I
s it supported to install CRS and RAC as different users. Yes, CRS and RAC can b
e installed as different users. The CRS user and the RAC user must both have "oi
nstall" as their primary group, and the RAC user should be a member of the OSDBA
group. Modified: 09-SEP-04 Ref #: ID-5769 -------------------------------------
------------------------------------------Do we have to have Oracle RDBMS on all
nodes? Each node of a cluster will typically have the RDBMS and RAC software lo
aded on it, but not actual datafiles (these need to be available via shared disk
). For example, if you wish to run RAC on 2 nodes of a 4-node cluster, you would
need to install it on all nodes, but it would only need to be licensed on the t
wo nodes running the RAC database. Note that using a clustered file system, or N
AS storage can provide a configuration that does not necessarily require the Ora
cle binaries to be installed on all nodes. Modified: 02-APR-04 Ref #: ID-4024
-------------------------------------------------------------------------------W
hat kind of HW components do you recommend for the interconnect? The general rec
ommendation for the interconnect is to provide the highest bandwith interconnect
, together with the lowest latency protocol that is available for a given platfo
rm. In practice, Gigabit Ethernet with UDP has proven sufficient in every case i
t has been implemented, and tends to be the lowest common denominator across pla
tforms. Modified: 02-APR-04 Ref #: ID-4049 -------------------------------------
------------------------------------------Are there any suggested roadmaps for i
mplementing a new RAC installation? Yes, Oracle Support recommends the following
best practices roadmap to successfully implement RAC: A Smooth Transition to Re
al Application Clusters The Purpose of this document is to provide a best practi
ces road map to successfully implement Real Application Clusters. Modified: 26-N
OV-02 Ref #: ID-4062
-------------------------------------------------------------------------------W
hat is Cache Fusion and how does this affect applications? Cache Fusion is a new
parallel database architecture for exploiting clustered computers to achieve sc
alability of all types of applications. Cache Fusion is a shared cache architect
ure that uses high speed low latency interconnects available today on clustered
systems to maintain database cache coherency. Database blocks are shipped across
the interconnect to the node where access to the data is needed. This is accomp
lished transparently to the application and users of the system. Cache Fusion sc
ales to clusters with a large number of nodes. For more information about cache
fusion see the following links: Additional Information can be found at: Underst
anding 9i Real Application Clusters Cache Fusion. There is also a whitepaper "Ca
che Fusion Delivers Scalability" available at http://otn.oracle.com/products/or
acle9i/content.html Cache Fusion in the Oracle Documentation Modified: 26-NOV-02
Ref #: ID-4065
-------------------------------------------------------------------------------I
s it difficult to transition from Single Instance to RAC? If the cluster and the
cluster software are not present, these components must be installed and config
ured. The RAC option must be added using the Oracle Universal Installer, which n
ecessitates the existing DB instance must be shut down. There are no changes nec
essary on the user data within the database. However, a shortage of freelists an
d freelist groups can cause contention with header blocks of tables and indexes
as multiple instances vie for the same block. This may
cause a performance problem and require data partitioning. However, the need for these changes should be rare.
Recommendation: apply automatic space segment management to perform these change
s automatically. The free space management will replace the freelists and freeli
st groups and is better. The database requires one Redo thread and one Undo tabl
espace for each instance, which are easily added with SQL commands or with Enter
prise Manager tools. Datafiles will need to be moved to either a clustered file
system (CFS) or raw devices so that all nodes can access it. Also, the MAXINSTAN
CES parameter in the control file must be greater than or equal to number of ins
tances you will start in the cluster. For more detailed information, please see
Migrating from single-instance to RAC in the Oracle Documentation With Oracle Da
tabase 10g Release 2, $ORACLE_HOME/bin/rconfig tool can be used to convert Singl
e instance database to RAC. This tool takes an xml input file and converts the
Single Instance database whose information is provided in the xml. You can run t
his tool in "verify only" mode prior to performing actual conversion. This is do
cumented in the RAC admin book and a sample xml can be found $ORACLE_HOME/assist
ants/rconfig/sampleXMLs/ConvertToRAC.xml. Grid Control 10g Release 2 provides an
easy to use wizard to perform this function. Note: Please be aware that you may
hit bug 4456047 (shutdown immediate hangs) as you convert the database. The bug
is updated with a workaround, and the w/a is release noted as well. Modified
: 18-JUL-05 Ref #: ID-4101
-------------------------------------------------------------------------------W
hat are the dependencies between OCFS and ASM in Oracle10g ? In an Oracle Databa
se 10g RAC environment, there is no dependency between Automatic Storage Managem
ent (ASM) and Oracle Cluster File System (OCFS). OCFS is not required if you are
using Automatic Storage Management (ASM) for database files. You can use OCFS o
n Windows( Version 2 on Linux ) for files that ASM does not handle - binaries (s
hared oracle home), trace files, etc. Alternatively, you could place these files
on local file systems even though it's not as convenient given the multiple loc
ations. If you do not want to use ASM for your database files, you can still use
OCFS for database files in Oracle Database 10g. Please refer to ASM and OCFS Po
sitioning Modified: 05-MAY-05 Ref #: ID-4116 -----------------------------------
--------------------------------------------Is rcp and/or rsh required for norma
l RAC operation ? "rcp" and "rsh" are not required for normal RAC operation. H
owever, "rsh" and "rcp" need to be enabled for RAC and patchset installatio
n. In future releases, ssh will be used for these operations. Modified: 06-NOV-0
3 Ref #: ID-4117
-------------------------------------------------------------------------------W
hat software is necessary for RAC? Does it have a separate installation CD to or
der? Real Application Clusters is an option of Oracle Database and therefore par
t of the Oracle Database CD. With Oracle 9i, RAC is part of Oracle9i Enterprise
Edition. If you install 9i EE onto a cluster, and the Oracle Universal Installer
(OUI) recognizes the cluster, you will be provided the option of installing RAC
. Most UNIX platforms require an OSD installation for the necessary clusterware.
For Intel platforms (Linux and Windows), Oracle provides the OSD software withi
n the Oracle9i Enterprise Edition release. With Oracle Database 10g, RAC is an o
ption of EE and available as part of SE. Oracle provides Oracle Clusterware on i
ts own CD included in the database CD pack. Please check the certification matri
x (Note 184875.1) or with the appropriate platform vendor for more information.
@ Sent by Karin Brandauer Modified: 05-MAY-05 Ref #: ID-4132
-------------------------------------------------------------------------------W
hat is Standard Edition RAC? With Oracle Database 10g, a customer who has purcha
sed Standard Edition is allowed to use the RAC option within the limitations of
Standard Edition(SE). For licensing restrictions you should read the Oracle Data
base 10g License Doc. At a high level this means that you can have a max of 4 cp
us in the cluster, you must use ASM for all database files. Oracle Cluster File
System (OCFS) is not supported for use with SE RAC. Modified: 01-SEP-04 Ref #: I
D-5750 -------------------------------------------------------------------------
------Can I use iSCSI storage with my RAC cluster? For iSCSI, Oracle has made th
e statement that, as a block protocol, this technology does not require validati
on for single instance database. There are many early adopter customers of iSCSI
running Oracle9i and Oracle Database 10g. As for RAC, Oracle has chosen to vali
date the iSCSI technology (not each vendor's targets) for the 10g platforms - th
is has been completed for Linux, Unix and Windows. For Windows we have tested up
to 4 nodes - Any Windows iSCSI products that are supported by the host and stor
age device are supported by Oracle. No vendor-specific information will be poste
d on Certify. Modified: 13-JUL-05 Ref #: ID-5788 -------------------------------
------------------------------------------------What would you recommend to a custo
mer, Oracle clusterware or Vendor Clusterware (I.E. MC Service Guard, HACMP, Sun
Cluster, Veritas etc.) with Oracle Database 10g Real Application Clusters? You
will be installing and using Oracle Clusterware whether or not you use the Vendo
r Clusterware. The question
you need to ask is whether the Vendor Clusterware gives you something that Oracl
e Clusterware does not. Is the RAC database on the same server as the applicatio
n server? Are there any other processes on the same server as the database that
you require Vendor Clusterware to fail over to another server in the cluster if
the server it is running on fails? IF this is the case, you may want the vendor
clusterware, if not, why spend the extra money when Oracle Clusterware supplies
everything you need for the clustered database, included with your RAC license
. Modified: 21-OCT-04 Ref #: ID-5968 -------------------------------------------
------------------------------------When configuring the NIC cards and switch fo
r a GigE Interconnect should it be set to FULL or Half duplex in RAC? You've got
to use Full Duplex, regardless of RAC or not, but for all network communication
. Half Duplex means you can only either send OR receive at the same time. Modifi
ed: 05-NOV-04 Ref #: ID-6048 ---------------------------------------------------
----------------------------Is it a good idea to add anti-virus software to my R
AC cluster? For customers who choose to run anti-virus (AV) software on their da
tabase servers, they should be aware that the nature of AV software is that disk
IO bandwidth is reduced slightly as most AV software checks disk writes/reads.
Also, as the AV software runs, it will use CPU cycles that would normally be con
sumed by other server processes (e.g your database instance). As such, databases
will have faster performance when not using AV software. As some AV software is
known to lock the files whilst it scans, then it is a good idea to exclude the O
racle Datafiles/controlfiles/logfiles from a regular AV scan Modified: 31-JAN-05
Ref #: ID-6595 ----------------------------------------------------------------
---------------Can I use RAC in a distributed transaction processing environment
? YES. Best practice is that all tightly coupled branches of a distributed t
ransaction running on a RAC database must run on the same instance. Between tran
sactions and between services, transactions can be load balanced across all of t
he database instances. You can use services to manage DTP environments. By defin
ing the DTP property of a service, the service is guaranteed to run on one insta
nce at a time in a RAC database. All global distributed transactions performed t
hrough the DTP service are ensured to have their tightly-coupled branches runnin
g on a single RAC instance. Modified: 16-JUN-05 Ref #: ID-6864 -----------------
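
A minimal sketch of defining such a DTP service with DBMS_SERVICE (the service and network names are hypothetical; the dtp parameter is assumed to be available at your release level, as documented for 10gR2):

  SQL> exec dbms_service.create_service(service_name => 'batch_dtp', network_name => 'batch_dtp', dtp => true);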
--------------------------------------------------------------Why do we have a V
irtual IP (VIP) in 10g? Why does it just return a dead connection when its prima
ry node fails? Its all about availability of the application. When a node fails,
the VIP associated with it is supposed to be automatically
failed over to some other node. When this occurs, two things happen. (1) the new
node re-arps the world indicating a new MAC address for the address. For direct
ly connected clients, this usually causes them to see errors on their connection
s to the old address; (2) Subsequent packets sent to the VIP go to the new node,
which will send error RST packets back to the clients. This results in the clie
nts getting errors immediately. This means that when the client issues SQL to th
e node that is now down, or traverses the address list while connecting, rather
than waiting on a very long TCP/IP time-out (~10 minutes), the client receives a
TCP reset. In the case of SQL, this is ORA-3113. In the case of connect, the ne
xt address in tnsnames is used. Without using VIPs, clients connected to a node
that died will often wait a 10 minute TCP timeout period before getting an error
. As a result, you don't really have a good HA solution without using VIPs. Modi
fied: 12-MAR-04 Ref #: ID-4609 -------------------------------------------------
------------------------------If I use Services with Oracle Database 10g, do I s
till need to set up Load Balancing ? Yes, Services allow you granular definition
of workload and the DBA can dynamically define which instances provide the serv
ice. Connection Load Balancing still needs to be set up to allow the user connec
tions to be balanced across all instances providing a service. Modified: 16-JUN-
05 Ref #: ID-6731 --------------------------------------------------------------
-----------------Can RMAN backup Real Application Cluster databases? Absolutely.
RMAN can be configured to connect to all nodes within the cluster to paralleliz
e the backup of the database files and archive logs. If files need to be restore
d, using set AUTOLOCATE ON alerts RMAN to search for backed up files and archive
logs on all nodes. RAC with RMAN in the Oracle Documentation Modified: 26-NOV-0
2 Ref #: ID-4035
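
A minimal sketch of spreading the backup over two instances by allocating channels with per-node connect strings (the net service names and credentials are placeholders, and AUTOLOCATE handling depends on your release):

  RMAN> run {
          allocate channel ch1 device type disk connect 'sys/change_me@rac1';
          allocate channel ch2 device type disk connect 'sys/change_me@rac2';
          backup database plus archivelog;
        }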
-------------------------------------------------------------------------------I
am receiving an ORA-29740 error. What should I do? This error can occur when pr
oblems are detected on the cluster:

  Error:  ORA-29740
  Text:   evicted by member %s, group incarnation %s
  Cause:  This member was evicted from the group by another member of the cluster
          database for one of several reasons, which may include a communications
          error in the cluster, failure to issue a heartbeat to the control file, etc.
  Action: Check the trace files of other active instances in the c
luster group for indications of errors that caused a reconfiguration. For more i
nformation on troubleshooting this error, see the following Metalink note:
Note 219361.1 Troubleshooting ORA-29740 in a RAC Environment Modified: 02-DEC-02
Ref #: ID-4093
-------------------------------------------------------------------------------W
hat does the Virtual IP service do? I understand it is for failover but do we ne
ed a separate network card? Can we use the existing private/public cards? What w
ould happen if we used the public ip? The 10g Virtual IP Address (VIP) exists on
every RAC node for public network communication. All client communication shoul
d use the VIPs in their TNS connection descriptions. The TNS ADDRESS_LIST entry
should direct clients to VIPs rather than using hostnames. During normal runtime
, the behaviour is the same as hostnames, however when the node goes down or is
shutdown the VIP is hosted elsewhere on the cluster, and does not accept connect
ion requests. This results in a silent TCP/IP error and the client fails immedia
tely to the next TNS address. If the network interface fails within the node, th
e VIP can be configured to use alternate interfaces in the same node. The VIP mu
st use the public interface cards. There is no requirement to purchase additiona
l public interface cards (unless you want to take advantage of within-node card
failover.) Modified: 15-MAR-04 Ref #: ID-4636 ----------------------------------
---------------------------------------------What do the VIP resources do once t
hey detect a node has failed/gone down? Are the VIPs automatically acquired, and
published, or is manual intervention required? Are VIPs mandatory? When a node
fails, the VIP associated with the failed node is automatically failed over to o
ne of the other nodes in the cluster. When this occurs, two things happen: The n
ew node re-arps the world indicating a new MAC address for this IP address. For
directly connected clients, this usually causes them to see errors on their conn
ections to the old address; Subsequent packets sent to the VIP go to the new nod
e, which will send error RST packets back to the clients. This results in the cl
ients getting errors immediately. In the case of existing SQL connections, error
s will typically be in the form of ORA-3113 errors, while a new connection using
an address list will select the next entry in the list. Without using VIPs, cli
ents connected to a node that died will often wait for a TCP/IP timeout period b
efore getting an error. This can be as long as 10 minutes or more. As a result,
you don't really have a good HA solution without using VIPs. Modified: 15-MAR-04
Ref #: ID-4638 ----------------------------------------------------------------
---------------What are my options for load balancing with RAC? Why do I get an
uneven number of connections on my instances? All the types of load balancing av
ailable currently (9i-10g) occur at connect time. This means that it is very imp
ortant how one balances connections and what these connections do on a long term
basis. Since establishing connections can be very expensive for your applicatio
n, it is
good programming practice to connect once and stay connected. This means one nee
ds to be careful as to what option one uses. Oracle Net Services provides load b
alancing or you can use external methods such as hardware based or clusterware s
olutions. The following options exist:

Random: Either client side load balancing or hardware based methods will randomize the connections to the instances. On the negative side, this method is unaware of the load on the connections, or even whether they are up, meaning they might cause waits on TCP/IP timeouts.

Load Based: Server side load balancing (by the listener) redirects connections by default depending on the RunQ length of each of the instances. This is great for short lived connections, but terrible for persistent connections or login storms. Do not use this method for connections from connection pools or application servers.

Session Based: Server side load balancing can also be used to balance the number of connections to each instance. Session count balancing is the method used when you set the listener parameter prefer_least_loaded_node_<listener-name>=off. Note that the listener name is the actual name of the listener, which is different on each node in your cluster and is listener_<nodename> by default. Session based load balancing takes into account the number of sessions connected to each node and then distributes new connections to balance the number of sessions across the different nodes (see the listener.ora sketch after this answer). Modified: 15-MA
R-05 Ref #: ID-4940 ------------------------------------------------------------
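
A minimal listener.ora sketch of the session-count setting described in the answer above (LISTENER_NODE1 is a placeholder for the actual listener name on that node):

  # listener.ora on node 1
  PREFER_LEAST_LOADED_NODE_LISTENER_NODE1 = OFF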
-------------------Can I use ASM as mechanism to mirror the data in an Extended
RAC cluster? Yes, but it cannot replicate everything that needs replication. ASM
works well to replicate any object you can put in ASM. But you cannot put the O
CR or Voting Disk in ASM. In 10gR1 they can either be mirrored using a different
mechanism (which could then be used instead of ASM) or the OCR needs to be rest
ored from backup and the Voting Disk can be recreated. In the future we are look
ing at providing Oracle redundancy for both. Modified: 18-OCT-04 Ref #: ID-5948
-------------------------------------------------------------------------------C
an our 10g VIP fail over from NIC to NIC as well as from node to node? Yes, the 10g VIP implementation is capable of failing over within a node from NIC to NIC and back if the failed NIC is online again, and also we fail over between nodes. The NIC to NIC failover is fully redundant if redundant switches are installed. Modified: 10-DEC-04 Ref #: ID-6348
-------------------------------------------------------------------------------W
hat clients provide integration with FAN and FCF? With Oracle Database 10g Relea
se 1, JDBC clients (both thick and thin driver) are integrated with FAN by provi
ding FCF. With Oracle Database 10g Release 2, we have added ODP.NET and OCI. Oth
er applications can integrate with FAN by using the API to subscribe to the FAN
events. Modified: 28-APR-05 Ref #: ID-6735
-------------------------------------------------------------------------------W
hat is CLB_GOAL and how should I set it? CLB_GOAL is the connection load balanci
ng goal for a service. There are 2 options, CLB_GOAL_SHORT and CLB_GOAL_LONG (de
fault). Long is for applications that have long-lived connections. This is typic
al for connection pools and SQL*Forms sessions. Long is the default connection l
oad balancing goal. Short is for applications that have short-lived connections.
The GOAL for a service can be set with EM or DBMS_SERVICE. Note: You must still
configure load balancing with Oracle Net Services Modified: 16-JUN-05 Ref #: ID
-6854 --------------------------------------------------------------------------
-----Can I use TAF and FAN/FCF? With Oracle Database 10g Release 1, NO. With Ora
cle Database 10g Release 2, the answer is YES for OCI and ODP.NET, it is recomme
nded. For JDBC, you should not use TAF and FCF even with the Thick JDBC driver.
Modified: 16-JUN-05 Ref #: ID-6866 ---------------------------------------------
----------------------------------What is Server-side Transparent Application Fa
ilover (TAF) and how do I use it? Oracle Database 10g Release 2, introduces serv
er-side TAF when using services. After you create a service, you can use the dbm
s_service.modify_service pl/sql procedure to define the TAF policy for the servi
ce. Only the basic method is supported. Note this is different than the TAF poli
cy (traditional client TAF) that is supported by srvctl and EM Services page. If
your service has a server side TAF policy defined, then you do not have to enco
de TAF on the client connection string. If the instance where a client is connec
ted, fails, then the connection will be failed over to another instance in the c
luster that is supporting the service. All restrictions of TAF still apply. NOTE
: both the client and server must be 10.2 and aq_ha_notifications must be set to
true for the service. Sample code to modify service:

  execute dbms_service.modify_service (
    service_name        => 'gl.us.oracle.com',
    aq_ha_notifications => true,
    failover_method     => dbms_service.failover_method_basic,
    failover_type       => dbms_service.failover_type_select,
    failover_retries    => 180,
    failover_delay      => 5,
    clb_goal            => dbms_service.clb_goal_long);

Modified: 07-JUL-05 Ref #: ID-6912
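
To verify the attributes afterwards, something like the following should work (the column list is assumed from the 10gR2 DBA_SERVICES dictionary view; trim it if your release differs):

  SQL> select name, failover_method, failover_type, failover_retries, failover_delay, clb_goal
       from dba_services
       where name = 'gl.us.oracle.com';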
-------------------------------------------------------------------------------I
am seeing the wait events 'ges remote message', 'gcs remote message', and/or 'g
cs for action'. What should I do about these? These are idle wait events and can
be safely ignored. The 'ges remote message' might show up in a 9.0.1 statspack
report as one of the top wait events. To have this wait event not show up you c
an add this event to the
PERFSTAT.STATS$IDLE_EVENT table so that it is not listed in Statspack reports. M
odified: 02-APR-04 Ref #: ID-4092
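
A minimal sketch of adding the event to the Statspack idle-event list (assuming the standard single EVENT column of PERFSTAT.STATS$IDLE_EVENT):

  SQL> insert into perfstat.stats$idle_event (event) values ('ges remote message');
  SQL> commit;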
-------------------------------------------------------------------------------W
hat are the changes in memory requirements from moving from single instance to R
AC? If you are keeping the workload requirements per instance the same, then abo
ut 10% more buffer cache and 15% more shared pool is needed. The additional memo
ry requirement is due to data structures for coherency management. The values ar
e heuristic and are mostly upper bounds. Actual resource usage can be monitored b
y querying current and maximum columns for the gcs resource/locks and ges resour
ce/locks entries in V$RESOURCE_LIMIT. But in general, please take into considera
tion that memory requirements per instance are reduced when the same user popula
tion is distributed over multiple nodes. In this case, assuming:
  - the same user population
  - N = number of nodes
  - M = buffer cache for a single system
then the buffer cache per instance is roughly:
  (M / N) + ((M / N) * 0.10)   [ + extra memory to compensate for failed-over users ]
Thus, for example, with M=2G, N=2 and no extra memory for failed-over users:
  (2G / 2) + ((2G / 2) * 0.10) = 1G + 100M
Modified: 02-DEC-02 Ref #: ID-4030
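
A minimal sketch of the monitoring query mentioned above (the LIKE patterns are an assumption about how the gcs/ges entries are named in V$RESOURCE_LIMIT):

  SQL> select resource_name, current_utilization, max_utilization, initial_allocation, limit_value
       from v$resource_limit
       where resource_name like 'gcs%' or resource_name like 'ges%';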
-------------------------------------------------------------------------------W
hat is the Load Balancing Advisory? To assist in the balancing of application wo
rkload across designated resources, Oracle Database 10g Release 2 provides the L
oad Balancing Advisory. This Advisory monitors the current workload activity acr
oss the cluster and for each instance where a service is active; it provides a p
ercentage value of how much of the total workload should be sent to this instanc
e as well as service quality flag. The feedback is provided as an entry in the A
utomatic Workload Repository and a FAN event is published. Modified: 16-JUN-05 R
ef #: ID-6858 ------------------------------------------------------------------
-------------What is Runtime Connection Load Balancing? Runtime connection load
balancing enables the connection pool to route incoming work requests to the ava
ilable database connection that will provide it with the best service. This will
provide the best service times globally, and routing responds fast to changing
conditions in the system. Oracle has implemented runtime connection load balanci
ng with ODP.NET and JDBC connection pools. Runtime Connection Load Balancing is
tightly integrated with the automatic workload balancing features introduced wit
h Oracle Database 10g I.E. Services, Automatic Workload Repository, and the new
Load Balancing Advisory.
Modified: 16-JUN-05
Ref #: ID-6860
-------------------------------------------------------------------------------H
ow do I enable the load balancing advisory? The load balancing advisory requires
the use of services and Oracle Net connection load balancing. To enable it, on
the server: set a goal (service_time or throughput, for ODP.NET enable AQ_HA_NOT
IFICATIONS=>true, and set CLB_GOAL ) on your service. For client, you must be us
ing the connection pool. For JDBC, enable the datasource parameter FastConnectio
nFailoverEnabled. For ODP.NET enable the datasource parameter Load Balancing=tru
e. Modified: 16-JUN-05 Ref #: ID-6862 ------------------------------------------
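
A minimal sketch of setting the goal on a service with DBMS_SERVICE (the service name OLTP is hypothetical; the constants are the documented 10gR2 package constants):

  SQL> begin
         dbms_service.modify_service(
           service_name        => 'OLTP',
           goal                => dbms_service.goal_service_time,
           clb_goal            => dbms_service.clb_goal_short,
           aq_ha_notifications => true);
       end;
       /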
-------------------------------------How do I stop the GSD? If you are on 9.0 on
Unix you would issue:

  $ ps -ef | grep jre
  $ kill -9 <gsd process>

(Note: make sure that this is the process in use by GSD.) On Windows, stop the OracleGSDService.

If you are on 9.2 you would issue:

  $ gsdctl stop

Modified: 22-MAR-04 Ref #: ID-4
091
-------------------------------------------------------------------------------H
ow should I deal with space management? Do I need to set free lists and free lis
t groups? Manually setting free list groups is a complexity that is no longer re
quired. We recommend using Automatic Segment Space Management rather than trying
to manage space manually. Unless you are migrating from an earlier database ver
sion with OPS and have already built and tuned the necessary structures, Automat
ic Segment Space Management is the preferred approach. Automatic Segment Space M
anagement is NOT the default, you need to set it. For more information see: Auto
matic Space Segment Management in RAC Environments Modified: 16-JUN-03 Ref #: ID
-4074
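
A minimal sketch of creating an ASSM tablespace (the tablespace name and the '+DATA' ASM disk group are hypothetical; point the datafile at a CFS path or raw device if you are not on ASM):

  SQL> create tablespace users_assm datafile '+DATA' size 512m
       extent management local
       segment space management auto;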
--------------------------------------------------------------------------------
I was installing RAC and my Oracle files did not get copied to the remote node(s
). What went wrong? First make sure the cluster is running and is available on a
ll nodes. You should be able to see all nodes when running an 'lsnodes -v' comma
nd. If lsnodes shows that all members of the cluster are available, then you may
have an rcp/rsh problem on Unix or shares have not been configured on Windows.
You can test rcp/rsh on Unix by issuing the following from each node:

  [node1]/tmp> touch test.tst
  [node1]/tmp> rcp test.tst node2:/tmp
  [node2]/tmp> touch test.tst
  [node2]/tmp> rcp test.tst node1:/tmp

On Windows, ensure that each node has ad
ministrative access to all these directories within the Windows environment by r
unning the following at the command prompt:
  NET USE \\host_name\C$
Clustercheck.
exe also checks for this. More information can be found in the Step-by-Step RAC
notes available on Metalink. To find these search Metalink for 'Step-by-Step Ins
tallation of RAC'. Modified: 26-NOV-02 Ref #: ID-4094
-------------------------------------------------------------------------------W
hat are the implications of using srvctl disable for an instance in my RAC clust
er? I want to have it available to start if I need it but at this time to not wa
nt to run this extra instance for this database. During node reboot, any disable
d resources will not be started by the Clusterware, therefore this instance will
not be restarted. It is recommended that you leave the vip, ons,gsd enabled in
that node. For example, VIP address for this node is present in address list of
database services, so a client connecting to these services will still reach som
e other database instance providing that service via listener redirection. Just
be aware that by disabling an instance on a node, all that means is that the in
stance itself is not starting. However, if the database was originally created w
ith 3 instances, that means there are 3 threads of redo. So, while the instance
itself is disabled, the redo thread is still enabled, and will occasionally caus
e log switches. The archived logs for this 'disabled' instance would still be ne
eded in any potential database recovery scenario. So, if you are going to disabl
e the instance through srvctl, you may also want to consider disabling the redo
thread for that instance.
srvctl disable instance -d orcl -i orcl2
SQL> alter database disable public thread 2;

Do the reverse to enable the instance:

SQL> alter database enable public thread 2;
srvctl enable instance -d orcl -i orcl2

Modified: 31-MAR-05 Ref #: ID-6
672 ----------------------------------------------------------------------------
---What is the Cluster Verification Utility (cluvfy)? The Cluster Verification U
tility (CVU) is a validation tool that you can use to check all the important co
mponents that need to be verified at different stages of deployment in a RAC env
ironment. The wide domain of deployment of CVU ranges from initial hardware setu
p through fully operational cluster for RAC deployment and covers all the interm
ediate stages of installation and configuration of various components. Cluvfy do
es not take any corrective action following the failure of a verification task,
does not enter into areas of performance tuning or monitoring, does not perform
any cluster or RAC operation, and does not attempt to verify the internals of cl
uster database or cluster elements. Modified: 16-JUN-05 Ref #: ID-6850 ---------
----------------------------------------------------------------------What versi
ons of the database can I use the cluster verification utility (cluvfy) with? Th
e cluster verification utility is released with Oracle Database 10g Release 2 but
can also be used with Oracle Database 10g Release 1. Modified: 16-JUN-05 Ref #:
ID-6852 -----------------------------------------------------------------------
--------How many nodes can be had in an HP/Sun/IBM/Compaq/NT/Linux cluster? The
number of nodes supported is not limited by Oracle, but more generally by the cl
ustering software/hardware in question.

When using solely Oracle Clusterware: 63 nodes (9i or 10gR1).

When using a third party clusterware:
  Sun: 8
  HP UX: 16
  HP Tru64: 8
  IBM AIX:
    * 8 nodes for Physical Shared (CLVM) SSA disk
    * 16 nodes for Physical Shared (CLVM) non-SSA disk
    * 128 nodes for Virtual Shared Disk (VSD)
    * 128 nodes for GPFS
    * Subject to storage subsystem limitations
  Veritas: 8-16 nodes (check w/ Veritas)

Modified: 21-OCT-04 Ref #: ID-4047
-------------------------------------------------------------------------------W
here I can find information about how to setup / install RAC on different platfo
rms ? There is a roadmap for implementing Real Application Clusters' available a
t: A Smooth Transition to Real Application Clusters There are also Step-by-Step
notes available for each platform available on the Metalink 'Top Tech Docs' for
RAC: High Availability - Real Application Clusters Library Page Index Additional
information can be found on OTN: http://technet.oracle.com/products/oracle9i/co
ntent.html --> 'Oracle Real Application Clusters' Modified: 08-AUG-02 Ref #: ID-
4067
-------------------------------------------------------------------------------I
s it possible to run RAC on logical partitions (i.e. LPARs) or virtual separate
servers. Yes, it is possible. The E10K and other high end servers can be partiti
oned into domains of smaller sizes, each domain with its own CPU(s) and operatin
g system. Each domain is effectively a virtual server. RAC can be run on cluster
comprises of domains. The benefits of using this is similar to a regular cluste
r, any domain failure will have little effect on other domains. Besides, the man
agement of the cluster may be easier since there is only one physical server. No
te however, since one E10K is still just one server. There are single points of
failures. Any failures, such as back plane failure, that crumble the entire serv
er will shutdown the virtual cluster. That is the tradeoff users have to make in
how best to build a cluster database. Modified: 18-MAY-04 Ref #: ID-4075 ------
-------------------------------------------------------------------------How do
I check RAC certification? See the following Metalink note: Note 184875.1
How To Check The Certification Matrix for Real Application Clusters Please note
that certifications for Real Application Clusters are performed against the Oper
ating System and Clusterware versions. The corresponding system hardware is offe
red by System vendors and specialized Technology vendors. Some system vendors of
fer pre-installed, pre-configured RAC clusters. These are included below under t
he corresponding OS platform selection within the certification matrix.
Modified: 26-NOV-02
Ref #: ID-4095
-------------------------------------------------------------------------------C
an the Oracle Database Configuration Assistant (DBCA) be used to create a databa
se with Veritas DBE / AC 3.5? DBCA can be used to create databases on raw device
s in 9i RAC Release 1 and 9i Release 2. Standard database creation scripts using
SQL commands will work with file system and raw. DBCA cannot be used to create
databases on file systems on Oracle 9i Release 1. The user can choose to set up
a database on raw devices, and have DBCA output a script. The script can then be
modified to use cluster file systems instead. With Oracle 9i RAC Release 2 (Ora
cle 9.2), DBCA can be used to create databases on a cluster filesystem. If the O
RACLE_HOME is stored on the cluster filesystem, the tool will work directly. If
ORACLE_HOME is on local drives on each system, and the customer wishes to place
database files onto a cluster file system, they must invoke DBCA as follows: dbc
a -datafileDestination /oradata where /oradata is on the CFS filesystem. See 9iR
2 README and bug 2300874 for more info. Modified: 10-JAN-03 Ref #: ID-4124
-------------------------------------------------------------------------------I
s crossover cable supported as an interconnect with 9iRAC/10gRAC on any platform
? NO. CROSS OVER CABLES ARE NOT SUPPORTED. The requirement is to use a switch:
Detailed Reasons: 1) cross-cabling limits the expansion of RAC to two nodes 2) c
ross-cabling is unstable: a) Some NIC cards do not work properly with it. b) Ins
tability. We have seen different problems, e.g. ORA-29740, at configurations usin
g crossover cable, and other errors. Due to the benefits and stability provided
by a switch, and their affordability, this is the only supported configuration. P
lease see certify.us.oracle.com as well.
(content consolidated from that of Massimo Castelli, Roland Knapp and others) Mo
dified: 21-FEB-05 Ref #: ID-4150
-------------------------------------------------------------------------------I
s Veritas Storage Foundation 4.0 supported with RAC? Veritas Storage Foundation
4.0 is certified on AIX, Solaris and HPUX for 9i RAC and Oracle Database 10g RAC
. Veritas is production also on Linux, but it is not certified by Oracle. If cus
tomers choose Veritas on Linux, Oracle will support the Oracle products in the s
tack, but they do not qualify for Unbreakable Linux support. Modified: 05-OCT-04
Ref #: ID-5888 ----------------------------------------------------------------
---------------Is 3rd Party Clusterware supported on Linux such as Veritas or Re
dhat? No, Oracle RAC 10g does not support 3rd Party clusterware on Linux. This m
eans that if a cluster file system requires a 3rd party clusterware, the cluster
file system is not supported. Modified: 11-MAY-05 Ref #: ID-6743 --------------
-----------------------------------------------------------------Can you have mu
ltiple RAC $ORACLE_HOME's on Linux? No, there should be only one Oracle Cluster
Manager (ORACM) running on each node. All RAC databases should run out of the $O
RACLE_HOME that ORACM is installed in. Modified: 19-JUL-05 Ref #: ID-6931 ------
-------------------------------------------------------------------------After i
nstalling patchset 9013 and patch_2313680 on Linux, the startup was very slow Pl
ease carefully read the following new information about configuring Oracle Clust
er Management on Linux, provided as part of the patch README.

Three parameters affect the startup time:
  - soft_margin (defined at watchdog module load)
  - -m (watchdogd startup option)
  - WatchdogMarginWait (defined in nmcfg.ora)

WatchdogMarginWait is calculated using the formula:

  WatchdogMarginWait = soft_margin (converted to msec) + -m + 5000 (msec)   [the 5000 msec is hardcoded]

Note that soft_margin is measured in seconds, while -m and WatchdogMarginWait are measured in milliseconds.

Based on benchmarking, it is recommended to set soft_margin between 10 and 20 seconds. Use the same value for -m (converted to milliseconds) as used for soft_margin. Here is an example:

  soft_margin=10
  -m=10000
  WatchdogMarginWait = 10000 + 10000 + 5000 = 25000

If CPU utilization in your system is high and you experience unexpected node reboots, check the wdd.log file. If there are any 'ping came too late' messages, increase the value of the above parameters.

Modified: 20-DEC-04 Ref #: ID-4069
-------------------------------------------------------------------------------I
s CFS Available for Linux? Yes, OCFS (Oracle Cluster Filesystem) is now availabl
e for Linux. The following Metalink note has information for obtaining the lates
t version of OCFS: Note 238278.1 - How to find the current OCFS version for Linu
x Modified: 20-DEC-04 Ref #: ID-4089
-------------------------------------------------------------------------------W
here can I find more information about hangcheck-timer module on Linux ? And how
do we configure hangcheck-timer module ? In releases 9.2.0.2.0 and later, Oracl
e recommends using a new I/O fencing model -- HangCheck-Timer module. Hangcheck-
Timer module monitors the Linux kernel for long operating system hangs that coul
d affect the reliability of a RAC node. You can configure hangcheck-timer module
using 3 parameters -- hangcheck_tick, hangcheck_margin and MissCount. For more
details, please review Note :: 259487.1 Modified: 20-DEC-04 Ref #: ID-4179 -----
--------------------------------------------------------------------------Can RA
C 10g and 9i RAC be installed and run on the same physical Linux cluster? Yes -
CRS / CSS and oracm can coexist. Modified: 20-DEC-04 Ref #: ID-4408 ------------
-------------------------------------------------------------------Is the hangch
eck timer still needed with Oracle Database 10g RAC? YES! The hangcheck-timer mo
dule monitors the Linux kernel for extended operating system hangs that could af
fect the reliability of the RAC node ( I/O fencing) and cause database corruptio
n. To verify the hangcheck-timer module is running on every node: as root user:
/sbin/lsmod | grep hangcheck
If the hangcheck-timer module is not listed enter the following command as the r
oot user:

  /sbin/insmod hangcheck-timer hangcheck_tick=30 hangcheck_margin=180

To
ensure the module is loaded every time the system reboots, verify that the loca
l system startup file (/etc/rc.d/rc.local) contains the command above. For addit
ional information please review the Guide (5-41). Modified: 20-DEC-04 Ref #: ID-
6208 Oracle RAC Install and Configuration
-------------------------------------------------------------------------------H
ow to configure bonding on Suse SLES8. Please see note:291958.1 Modified: 29-NOV
-04 Ref #: ID-6288 -------------------------------------------------------------
------------------How to configure bonding on Suse SLES9. Please see note:291962
.1 Modified: 29-NOV-04 Ref #: ID-6290 ------------------------------------------
-------------------------------------Does RAC run faster with Sun-cluster or Ver
itas cluster-ware? (these being alternatives with Sun hardware) Is there some cl
usterware that would make RAC run faster? RAC scalability and performance are in
dependent of the clusterware. However, we recommend that the customer uses a ver
y fast memory based interconnect if one wants to optimize the performance. For E
xample, Sun can use FireLink, a very fast proprietary interconnect which is more
optimal for RAC, while Veritas is limited to using Gigabit Ethernet. Starting w
ith 10g there will be an alternative to SunCluster and Veritas Cluster than this
is Oracle CRS/CSS. Modified: 20-DEC-04 Ref #: ID-4088
-------------------------------------------------------------------------------I
s HMP supported with 10g on all HP platforms ?
  - 10g RAC + HMP + PA-RISC = yes
  - 10g RAC + HMP + Itanium: "Oracle has no plans and will likely never support RAC over HMP on IPF."
  - 10g RAC + UDP + Itanium = yes (even over Hyperfabric)
"Oracle recommends that HMP not be used. UDP is the recommended interconnect protocol across all platforms."
Modified: 20-DEC-04
Ref #: ID-5488
-------------------------------------------------------------------------------D
oes the Oracle Cluster File System (OCFS) support network access through NFS or
Windows Network Shares? No, in the current release the Oracle Cluster File Syste
m (OCFS) is not supported for use by network access approaches like NFS or Windo
ws Network Shares. Modified: 27-JAN-05 Ref #: ID-4122 --------------------------
-----------------------------------------------------My customer wants to unders
tand what type of disk caching they can use with their Windows RAC Cluster, the
install guide tells them to disable disk caching? If the write cache identified
is local to the node then that is bad for RAC. If the cache is visible to all no
des as a 'single cache', typically in the storage array, and is also 'battery ba
cked' then that is OK. Modified: 31-MAR-05 Ref #: ID-6670 ----------------------
---------------------------------------------------------Can I run my 9i RAC and
RAC 10g on the same Windows cluster? Yes but the 9i RAC database must have the
9i Cluster Manager and you must run Oracle Clusterware for the Oracle Database 1
0g. 9i Cluster Manager can coexsist with Oracle Clusterware 10g. Modified: 01-JU
L-05 Ref #: ID-6889 ------------------------------------------------------------
-------------------Do I need HACMP/GPFS to store my OCR/Voting file on a shared
device. The prerequisites doc for AIX clearly says: "If you are not using HACMP,
you must use a GPFS file system to store the Oracle CRS files" ==> this is a do
cumentation bug and this will be fixed with 10.1.0.3 ----On AIX it is important
to put the reserve_lock=no/reserve_policy =no_reserve in order to allow AIX to a
ccess the devices from more than one node simultaneously. Use the /dev/rhdisk de
vices (character special) for the crs and voting disk and change the attribute w
ith the command
chdev -l hdiskn -a reserve_lock=no (for ESS, EMC, HDS, CLARiiON, and MPIO-capabl
e devices you have to do an chdev -l hdiskn -a reserve_policy=no_reserve) Modifi
ed: 20-DEC-04 Ref #: ID-5288
-------------------------------------------------------------------------------C
an I run Oracle RAC 10g on my IBM Mainframe Sysplex environment (z/OS)? YES! The
re is no separate documentation for RAC on z/OS. What you would call "clusterwar
e" is built in to the OS and the native file systems are global. IBM z/OS docume
ntation explains how to set up a Sysplex Cluster; once the customer has done tha
t it is trivial to set up a RAC database. The few steps involved are covered in Chapter 14 of the Oracle for z/OS System Admin Guide. There is also an Install Guide for Oracle on z/OS, but I don't think th
ere are any RAC-specific steps in the installation. By the way, RAC on z/OS does
not use Oracle's clusterware (CSS/CRS/OCR). Modified: 07-JUL-05 Ref #: ID-6910
-------------------------------------------------------------------------------W
hat are the cdmp directories in the background_dump_dest used for? These directo
ries are produced by the diagnosability daemon process (DIAG). DIAG is a process related to RAC which, as one of its tasks, performs crash dumping. The DIAG proce
ss dumps out tracing to file when it discovers the death of an essential process
(foreground or background) in the local instance. A dump directory named someth
ing like cdmp_ is created in the bdump or background_dump_dest directory, and al
l the trace dump files DIAG creates are placed in this directory. Modified: 11-A
UG-03 Ref #: ID-4152 -----------------------------------------------------------
--------------------Is the Oracle E-Business Suite (Oracle Applications) certifi
ed against RAC? Yes. (There is no separate certification required for RAC.) M
odified: 04-JUN-03 Ref #: ID-4029 ----------------------------------------------
---------------------------------What is the optimal migration path to be used w
hile migrating the E-Business suite to RAC? Following is the recommended and mos
t optimal path to migrate your E-Business suite to a RAC environment: 1. Migrate th
e existing application to new hardware. (If applicable). 2. Use Clustered File S
ystem for all data base files or migrate all database files
to raw devices. (Use dd for Unix or ocopy for NT) 3. Install/upgrade to the late
st available e-Business suite. 4. Upgrade database to Oracle9i (Refer document 2
16550.1 on Metalink) 5. In step 4, install RAC option while installing Oracle9i
and use Installer to perform install for all the nodes. 6. Clone Oracle Applicat
ion code tree. Reference Documents: Oracle E-Business Suite Release 11i with 9i
RAC: Installation and Configuration : Metalink Note# 279956.1 E-Business Suite 1
1i on RAC : Configuring Database Load balancing & Failover: Metalink Note# 29465
2.1 Oracle E-Business Suite 11i and Database - FAQ : Metalink# 285267.1 Modified
: 08-JUL-05 Ref #: ID-4107
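Step 2 above mentions dd (Unix) or ocopy (NT) for moving database files to raw devices. A minimal sketch, assuming made-up file and raw device names (the raw device must be at least as large as the datafile, and the datafile must be offline or the database shut down while copying):

   # copy the datafile onto the raw device
   dd if=/u02/oradata/PROD/system01.dbf of=/dev/rlv_system01 bs=1024k
   # then, in SQL*Plus, point the controlfile at the new location:
   # SQL> ALTER DATABASE RENAME FILE '/u02/oradata/PROD/system01.dbf' TO '/dev/rlv_system01';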
-------------------------------------------------------------------------------H
ow to configure concurrent manager in a RAC environment? Large clients commonly
put the concurrent manager on a separate server now (in the middle tier) to redu
ce the load on the database server. The concurrent manager programs can be "tied" to a specific middle tier (e.g., you can have CMs running on more than one middle tier box). It is advisable to use specialized CMs. CM middle tiers are set up t
o point to the appropriate database instance based on product module being used.
Modified: 20-SEP-02 Ref #: ID-4108
-------------------------------------------------------------------------------S
hould functional partitioning be used with Oracle Applications? We do not recomm
end functional partitioning unless throughput on your server architecture demand
s it. Cache fusion has been optimized to scale well with nonpartitioned workload
. If your processing requirements are extreme and your testing proves you must p
artition your workload in order to reduce internode communications, you can use
Profile Options to designate that sessions for certain applications Responsibili
ties are created on a specific middle tier server. That middle tier server would
then be configured to connect to a specific database instance. To determine the
correct partitioning for your installation you would need to consider several f
actors like number of concurrent users, batch users, modules used, workload char
acteristics etc. Modified: 20-SEP-02 Ref #: ID-4109
-------------------------------------------------------------------------------W
hich e-Business version is preferable? Versions 11.5.5 onwards are certified wi
th Oracle9i and hence with Oracle9i RAC.
However we recommend the latest available version. Modified: 20-SEP-02 Ref #: ID
-4110
-------------------------------------------------------------------------------C
an I use Automatic Undo Management with Oracle Applications? Yes. In a RAC envir
onment we highly recommend it. Modified: 20-SEP-02 Ref #: ID-4111
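As an illustration (not part of the original FAQ answer), Automatic Undo Management in a RAC database is normally set up with a shared undo_management setting and one undo tablespace per instance; the SIDs and tablespace names below are only examples:

   *.undo_management=AUTO
   PROD1.undo_tablespace=UNDOTBS1
   PROD2.undo_tablespace=UNDOTBS2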
-------------------------------------------------------------------------------C
an I use TAF with e-Business in a RAC environment? TAF itself does not work with
e-Business suite due to Forms/TAF limitations, but you can configure the tns fa
ilover clause. On instance failure, when the user logs back into the system, the
ir session will be directed to a surviving instance, and the user will be taken
to the navigator tab. Their committed work will be available; any uncommitted wo
rk must be re-started. We also recommend you configure the forms error URL to id
entify a fallback middle tier server for Forms processes, if no router is availa
ble to accomplish switching across servers. Modified: 02-APR-03 Ref #: ID-4112
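A minimal sketch of such a tnsnames.ora entry with connect-time failover (host and service names are made up; in e-Business environments this entry is normally generated for you):

   PROD =
     (DESCRIPTION =
       (ADDRESS_LIST =
         (FAILOVER = ON)
         (LOAD_BALANCE = OFF)
         (ADDRESS = (PROTOCOL = TCP)(HOST = racnode1-vip)(PORT = 1521))
         (ADDRESS = (PROTOCOL = TCP)(HOST = racnode2-vip)(PORT = 1521))
       )
       (CONNECT_DATA = (SERVICE_NAME = PROD))
     )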
-------------------------------------------------------------------------------C
an I use OCFS with SE RAC? It is not supported to use OCFS with Standard Edition
RAC. All database files must use ASM (redo logs, recovery area, datafiles, cont
rol files etc). We recommend that the binaries and trace files (non-ASM supporte
d files) to be replicated on all nodes. This is done automatically by install. M
odified: 01-SEP-04 Ref #: ID-5748 ----------------------------------------------
---------------------------------What are the maximum number of nodes under OCFS
on Linux ? Oracle 9iRAC on Linux, using OCFS for datafiles, can scale to a maxi
mum of 32 nodes. Modified: 06-NOV-03 Ref #: ID-4118 ----------------------------
---------------------------------------------------Where can I find documentatio
n on OCFS ? For Main Page >>> http://oss.oracle.com/projects/ocfs/ For User Manu
al >>> http://oss.oracle.com/projects/ocfs/documentation/ For OCFS Files >>> htt
p://oss.oracle.com/projects/ocfs/files/supported/ Modified: 06-NOV-03 Ref #: ID-
4119 ---------------------------------------------------------------------------
-----
What files can I put on Linux OCFS? For optimal performance, you should only put
the following files on Linux OCFS: Datafiles Control Files Redo Logs Archive Lo
gs Shared Configuration File (OCR) Quorum / Voting File SPFILE Modified: 14-AUG-
03 Ref #: ID-4156
-------------------------------------------------------------------------------I
s Sun QFS supported with RAC? What about Sun GFS? Sun QFS is supported with Orac
le 9i RAC. Sun is planning to certify QFS with Oracle Database 10g and RAC but a
s of November 15,2004, this certification is "planned". For 9i, Software Stack d
etails: For SVM you need Solaris 9 9/04 (Solaris 9 update 7),SVM Patch 116669-03
(this is required SUN patch), Sun Cluster 3.1 Update 3, Oracle 9.2.0.5 + Oracle
patch 3366258 For SharedQFS you need Solaris 9 04/03 and above or Solaris 8 02/0
2 and above, QFS 4.2, Sun Cluster 3.1 Update 2 or above, Oracle 9.2.0.5 + Oracle
patch 3566420. In contrast, Sun GFS (Global File System) is supported for Oracle binaries and archive logs only, but NOT for database files. Modified: 19-JAN
-05 Ref #: ID-6128 -------------------------------------------------------------
------------------Is Red Hat GFS (Global File System) certified by Oracle for
use with Real Application Clusters? Sistina Cluster Filesystem is not part of th
e standard RedHat kernel and therefore is not certified under the unbreakable Li
nux but falls under a kernel extension. This however, does not mean that Oracle
RAC is not certified with it. As a fact, Oracle RAC does not certify against a f
ilesystem per se, but certifies against an operating system. If, as is the case
with Sistina filesystem, the filesystem is certified with the operating system,
this only means that the combination does not fall under the unbreakable Linux c
ombination and that Oracle does not provide direct support for, or fixes to, the filesystem in case of an error. The customer will have to contact the filesystem provider for supp
ort. Modified: 22-NOV-04 Ref #: ID-6228 ----------------------------------------
----------------------------------------
How to move the OCR location ? - stop the CRS stack on all nodes using "init.crs
stop" - Edit /var/opt/oracle/ocr.loc on all nodes and set up ocrconfig_loc=new
OCR device Restore from one of the automatic physical backups using ocrconfig -r
estore. - Run ocrcheck to verify. - reboot to restart the CRS stack. - additiona
l information can be found at http://stdoc.us.oracle.com/10/101/rac.101/b10765/s
torage.htm#i1016535 Modified: 24-MAR-04 Ref #: ID-4728 -------------------------
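A condensed sketch of the steps above (device and backup file names are only examples; run as root, and repeat the stop/edit on every node):

   # on all nodes
   init.crs stop
   # edit /var/opt/oracle/ocr.loc on all nodes, e.g.:
   #   ocrconfig_loc=/dev/raw/raw_ocr_new
   # restore one of the automatic physical backups into the new location
   ocrconfig -restore $ORA_CRS_HOME/cdata/crs/backup00.ocr
   ocrcheck
   # reboot to restart the CRS stack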
------------------------------------------------------Is it supported to rerun r
oot.sh from the Oracle Clusterware installation ? Rerunning root.sh after the in
itial install is expressly discouraged and unsupported. We strongly recommend no
t doing it. Modified: 05-MAY-05 Ref #: ID-4730 ---------------------------------
----------------------------------------------Is it supported to allow 3rd Party
Clusterware to manage Oracle resources (instances, listeners, etc) and turn off
Oracle Clusterware management of these? In 10g we do not support using 3rd Part
y Clusterware for failover and restart of Oracle resources. Oracle Clusterware r
esources should not be disabled. Modified: 05-MAY-05 Ref #: ID-6528 ------------
-------------------------------------------------------------------What is the H
igh Availability API? An application-programming interface to allow processes to
be put under the High Availability infrastructure that is part of the Oracle Cl
usterware distributed with Oracle Database 10g. A user written script defines ho
w Oracle Clusterware should start, stop and relocate the process when the cluste
r node status changes. This extends the high availability services of the cluste
r to any application running in the cluster. Oracle Database 10g Real Applicatio
n Clusters (RAC) databases and associated Oracle processes (E.G. listener) are a
utomatically managed by the clusterware. Modified: 05-MAY-05 Ref #: ID-6741 ----
---------------------------------------------------------------------------Is it
possible to use ASM for the OCR and voting disk? No, the OCR and voting disk mu
st be on raw or CFS (cluster filesystem). Modified: 19-JUL-05 Ref #: ID-6929 ---
----------------------------------------------------------------------------Duri
ng CRS installation, I am asked to define a private node name, and then on the n
ext screen asked to define which interfaces should be used as private and public
interfaces. What information is required to answer these questions? The private
names on the first screen determine which private interconnect will be used by
CSS.
Provide exactly one name that maps to a private IP address, or just the IP addre
ss itself. If a logical name is used, then the IP address this maps to can be ch
anged subsequently, but if an IP address is specified CSS will always use that
IP address. CSS cannot use multiple private interconnects for its communication
hence only one name or IP address can be specified. The private interconnect enf
orcement page determines which private interconnect will be used by the RAC inst
ances. It's equivalent to setting the CLUSTER_INTERCONNECTS init.ora parameter,
but is more convenient because it is a cluster-wide setting that does not have t
o be adjusted every time you add nodes or instances. RAC will use all of the int
erconnects listed as private in this screen, and they all have to be up, just as
their IP addresses have to be when specified in the init.ora parameter. RAC does
not fail over between cluster interconnects; if one is down then the instances
using them won't start. Modified: 24-MAR-04 Ref #: ID-4724
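For reference, a minimal sketch of setting CLUSTER_INTERCONNECTS by hand in an spfile (IP addresses and SIDs are only examples; the installer's interconnect classification screen normally makes this unnecessary):

   ALTER SYSTEM SET cluster_interconnects='10.0.0.1' SCOPE=spfile SID='PROD1';
   ALTER SYSTEM SET cluster_interconnects='10.0.0.2' SCOPE=spfile SID='PROD2';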
-------------------------------------------------------------------------------C
an I change the name of my cluster after I have created it when I am using Oracl
e Database 10g Clusterware? No, you must properly deinstall CRS and then re-inst
all. To properly de-install CRS, you MUST follow the directions in the Installat
ion Guide Chapter 10. This will ensure the ocr gets cleaned out. Modified: 05-OC
T-04 Ref #: ID-5890 ------------------------------------------------------------
-------------------Can I change the public hostname in my Oracle Database 10g Cl
uster using Oracle Clusterware? Hostname changes are not supported in CRS, unles
s you want to perform a deletenode followed by a new addnode operation. Modified
: 05-OCT-04 Ref #: ID-5892 -----------------------------------------------------
--------------------------What should the permissions be set to for the voting d
isk and ocr when doing a RAC Install? The Oracle Real Application Clusters insta
ll guide is correct. It describes the PRE INSTALL ownership/permission requireme
nts for ocr and voting disk. This step is needed to make sure that the CRS insta
ll succeeds. Please don't use those values to determine what the ownership/perm
ission should be POST INSTALL. The root script will change the ownership/permiss
ion of ocr and voting disk as part of install. The POST INSTALL permissions will
end up being : OCR - root:oinstall 640 Voting Disk - oracle:oinstall - 644 Modi
fied: 22-OCT-04 Ref #: ID-5988 -------------------------------------------------
------------------------------Which processes access the OCR? Oracle Cluster Reg
istry (OCR) is used to store the cluster configuration information among other t
hings. OCR needs to be accessible from all nodes in the cluster. If OCR became i
naccessible the CSS daemon would soon fail, and take down
the node. PMON never needs to write to OCR. To confirm if OCR is accessible, try
ocrcheck from your ORACLE_HOME and ORA_CRS_HOME. Modified: 22-OCT-04 Ref #: ID-
5990 ---------------------------------------------------------------------------
----How do I restore OCR from a backup? On Windows, can I use ocopy? The only re
commended way to restore an OCR from a backup is "ocrconfig -restore ". The ocop
y command will not be able to perform the restore action for OCR. Modified: 27-O
CT-04 Ref #: ID-6008 -----------------------------------------------------------
--------------------Does the hostname have to match the public name or can it be
anything else? When there is no vendor clusterware, only CRS, then the public n
ode name must match the host name. When vendor clusterware is present, it determ
ines the public node names, and the installer doesn't present an opportunity to
change them. So, when you have a choice, always choose the hostname. Modified: 0
5-NOV-04 Ref #: ID-6050 --------------------------------------------------------
-----------------------Is it a requirement to have the public interface linked t
o ETH0 or does it only need to be on a ETH lower than the private interface?: -
public on ETH1 - private on ETH2 There is no requirement for interface name orde
ring. You could have - public on ETH2 - private on ETH0 Just make sure you choos
e the correct public interface in VIPCA, and in the installer's interconnect cla
ssification screen. Modified: 05-NOV-04 Ref #: ID-6052 -------------------------
------------------------------------------------------How to Restore a Lost Voti
ng Disk used by Oracle Clusterware 10g Please read Note:279793.1 and for OCR Not
e:268937.1 Modified: 02-DEC-04 Ref #: ID-6308 ----------------------------------
---------------------------------------------With Oracle Clusterware 10g, how do
you backup the OCR? There is an automatic backup mechanism for OCR. The default
location is : $ORA_CRS_HOME\cdata\"clustername"\ To display backups : ocrconfig
-showbackup To restore a backup : ocrconfig -restore The automatic backup mecha
nism keeps up to about a week-old copy. So, if you want to retain a backup copy m
ore than that, then you should copy that "backup" file to some other name. Unfor
tunately there are a couple of bugs regarding backup file manipulation, and chan
ging default backup dir on Windows. These will be fixed in 10.1.0.4. OCR backups on Windows are absent. The only file in the backup directory is temp.ocr, which would be the last backup. You can restore this most recent backup by using the command ocrconfig -restore temp.ocr. If you want to take a logical copy of
OCR at any time use : ocrconfig -export , and use -import option to restore the
contents back. Modified: 02-DEC-04 Ref #: ID-6328
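Putting the commands above together, an illustrative sequence (paths and the cluster name are made up):

   ocrconfig -showbackup
   # keep a copy of a physical backup outside the default directory
   cp $ORA_CRS_HOME/cdata/crs/backup00.ocr /backups/ocr_backup00.ocr
   # restore a physical backup (CRS stack down on all nodes)
   ocrconfig -restore /backups/ocr_backup00.ocr
   # logical export / import
   ocrconfig -export /backups/ocr_export.dmp
   ocrconfig -import /backups/ocr_export.dmp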
-------------------------------------------------------------------------------H
ow do I protect the OCR and Voting in case of media failure? In Oracle Database
10g Release 1 the OCR and Voting device are not mirrored within Oracle,hence bot
h must be mirrored via a storage vendor method, like RAID 1. Starting with Oracl
e Database 10g Release 2 Oracle Clusterware will multiplex the OCR and Voting Di
sk (two for the OCR and three for the Voting). Please read Note:279793.1 and Not
e:268937.1 regarding backup and restore a lost Voting/OCR and FAQ 6238 regarding
OCR backup. Modified: 05-MAY-05 Ref #: ID-6612 --------------------------------
-----------------------------------------------How do I use multiple network int
erfaces to provide High Availability for my interconnect with Oracle Clusterware
? This needs to be done externally to Oracle Clusterware usually by some OS prov
ided nic bonding which gives Oracle Clusterware a single ip address for the inte
rconnect but provides failover across multiple nic cards. There are several artic
les in Metalink on how to do this. For example for Sun Solaris search for IPMP.
On Linux, read the doc on rac.us Configure Redundant Network Cards / Switches fo
r Oracle Database 10g Release 1 Real Application Cluster on Linux Modified: 06-A
PR-05 Ref #: ID-6680 -----------------------------------------------------------
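Purely as an illustration (this is not taken from the papers referenced above, and file locations differ per distribution), active-backup bonding on a RHEL-style Linux node might look like:

   # /etc/modprobe.conf
   alias bond0 bonding
   options bond0 miimon=100 mode=1

   # /etc/sysconfig/network-scripts/ifcfg-bond0
   DEVICE=bond0
   IPADDR=192.168.10.1
   NETMASK=255.255.255.0
   ONBOOT=yes
   BOOTPROTO=none

   # /etc/sysconfig/network-scripts/ifcfg-eth1   (and the same for eth2)
   DEVICE=eth1
   MASTER=bond0
   SLAVE=yes
   ONBOOT=yes
   BOOTPROTO=none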
--------------------How do I put my application under the control of Oracle Clus
terware to achieve higher availability? First write a control agent. It must acc
ept 3 different parameters: start (the control agent should start the application), check (the control agent should check the application), and stop (the control agent should stop the application). Secondly you must create a profile for your application using crs_profile. Thirdly you must register your application as a resource with Oracle Clusterware (crs_register). See the RAC Admin and Deployment Guide for details. Modified: 16-JUN-05 Ref #: ID-6846 --------------------------------
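A minimal sketch of those three steps (script path, resource name and option values are only examples; check crs_profile -help for the exact flags in your release):

   # 1. an action script that accepts start | check | stop
   #    /u01/crs/scripts/myapp.sh
   # 2. create a profile for the application resource
   crs_profile -create myapp -t application -a /u01/crs/scripts/myapp.sh -o ci=30,ra=5
   # 3. register it with Oracle Clusterware and start it
   crs_register myapp
   crs_start myapp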
-----------------------------------------------Can I use Oracle Clusterware to p
rovide cold failover of my 9i or 10g single instance Oracle Databases? Oracle do
es not provide the necessary wrappers to fail over single-instance databases usi
ng Oracle Clusterware 10g Release 2. But since it's possible for customers to us
e Oracle Clusterware to wrap arbitrary applications, it'd be possible for them t
o wrap single-instance databases this way. Modified: 01-JUL-05 Ref #: ID-6891
-------------------------------------------------------------------------------D
oes Oracle Clusterware support application vips? Yes, with Oracle Database 10g R
elease 2, Oracle Clusterware now supports an "application" vip. This is to suppo
rt putting applications under the control of Oracle Clusterware using the new hi
gh availability API and allow the user to use the same URL or connection string
regardless of which node in the cluster the application is running on. The appli
cation vip is a new resource defined to Oracle Clusterware and is a functional v
ip. It is defined as a dependent resource to the application. There can be many
vips defined, typically one per user application under the control of Oracle Clu
sterware. You must first create a profile (crs_profile), then register it with O
racle Clusterware (crs_register). The usrvip script must run as root. Modified:
11-JUL-05 Ref #: ID-6893 -------------------------------------------------------
------------------------Why is the home for Oracle Clusterware not recommended t
o be subdirectory of the Oracle base directory? If anyone other than root has wr
ite permissions to the parent directories of the CRS home, then they can give th
emselves root escalations. This is a security issue. The CRS home itself is a mi
x of root and non-root permissions, as appropriate to the security requirements.
Please follow the install docs about who is your primary group and what other g
roups you need to create and be a member of. Modified: 11-JUL-05 Ref #: ID-6915
--------------------------------------------------------------------------------
9.6 JRE:
========

JRE:
----
Oracle 9.2 uses JRE 1.3.1

- Java Compiler (javac): Compiles programs written in the Java programming language into bytecodes.
- Java Interpreter (java): Executes Java bytecodes. In other words, it runs programs written in the Java programming language.
- Java Runtime Interpreter (jre): Similar to the Java Interpreter (java), but intended for end users who do not require all the development-related options available with the java tool.
The PATH statement enables Windows to find the executables (javac, java, javadoc
, etc.) from any current directory. The CLASSPATH tells the Java virtual machine
and other applications (which are located in the "jdk_<version>\bin" directory)
where to find the class libraries, such as classes.zip file (which is in the li
b directory). Note 1: ------Suppose on a Solaris 5.9 machine with Oracle 9.2, we
search for jre: # find . -name "jre*" -print ./opt/app/oracle/product/9.2/inven
tory/filemap/jdk/jre ./opt/app/oracle/product/9.2/jdk/jre ./opt/app/oracle/jre .
/opt/app/oracle/jre/1.1.8/bin/sparc/native_threads/jre ./opt/app/oracle/jre/1.1.
8/bin/jre ./opt/app/oracle/jre/1.1.8/jre_config.txt ./usr/j2se/jre ./usr/iplanet
/console5.1/bin/base/jre ./usr/java1.2/jre Suppose on a AIX 5.2 machine with Ora
cle 9.2, we search for jre: ./apps/oracle/product/9.2/inventory/filemap/jdk/jre
./apps/oracle/product/9.2/inventory/filemap/jre ./apps/oracle/product/9.2/jdk/jr
e ./apps/oracle/product/9.2/jre ./apps/oracle/oraInventory/filemap/apps/oracle/j
re ./apps/oracle/oraInventory/filemap/apps/oracle/jre/1.3.1/jre ./apps/oracle/jr
e ./apps/oracle/jre/1.1.8/bin/jre ./apps/oracle/jre/1.1.8/bin/aix/native_threads
/jre ./apps/oracle/jre/1.3.1/jre ./apps/ora10g/product/10.2/jdk/jre ./apps/ora10
g/product/10.2/jre ./usr/java131/jre ./usr/idebug/jre
Note 2: ------jre - The Java Runtime Interpreter (Solaris) jre interprets (execu
tes) Java bytecodes. SYNOPSIS jre [ options ] classname <args> DESCRIPTION
The jre command executes Java class files. The classname argument is the name of
the class to be executed. Any arguments to be passed to the class must be place
d after the classname on the command line. Class paths for the Solaris version o
f the jre tool can be specified using the CLASSPATH environment variable or by u
sing the -classpath or -cp options. The Windows version of the jre tool ignores
the CLASSPATH environment variable. For both Solaris and Windows, the -cp option
is recommend for specifying class paths when using jre. OPTIONS -classpath path
(s) Specifies the path or paths that jre uses to look up classes. Overrides the
default or the CLASSPATH environment variable if it is set. If more than one pat
h is specified, they must be separated by colons. Each path should end with the
directory containing the class file(s) to be executed. However, if a file to be
executed is a zip or jar file, the path to that file must end with the file's na
me. Here is an example of an argument for -classpath that specifies three paths
consisting of the current directory and two additional paths: .:/home/xyz/classe
s:/usr/local/java/classes/MyClasses.jar -cp path(s) Prepends the specified path
or paths to the base classpath or path given by the CLASSPATH environment variab
le. If more than one path is specified, they must be separated by colons. Each p
ath should end with the directory containing the class file(s) to be executed. H
owever, if a file to be executed is a zip or jar file, the path to that file mus
t end with the file's name. Here is an example of an argument for -cp that speci
fies three paths consisting of the current directory and two additional paths: .
:/home/xyz/classes:/usr/local/java/classes/MyClasses.jar -help Print a usage mes
sage. -mx x Sets the maximum size of the memory allocation pool (the garbage col
lected heap) to x. The default is 16 megabytes of memory. x must be greater than
or equal to 1000 bytes. By default, x is measured in bytes. You can specify x i
n either kilobytes or megabytes by appending the letter "k" for kilobytes or the
letter "m" for megabytes. -ms x Sets the startup size of the memory allocation
pool (the garbage collected heap) to x. The default is 1 megabyte of memory. x m
ust be > 1000 bytes. By default, x is measured in bytes. You can specify x in ei
ther kilobytes or
megabytes by appending the letter "k" for kilobytes or the letter "m" for megaby
tes. -noasyncgc Turns off asynchronous garbage collection. When activated no gar
bage collection takes place unless it is explicitly called or the program runs o
ut of memory. Normally garbage collection runs as an asynchronous thread in para
llel with other threads. -noclassgc Turns off garbage collection of Java classes
. By default, the Java interpreter reclaims space for unused Java classes during
garbage collection. -nojit Specifies that any JIT compiler should be ignored an
d instead invokes the default Java interpreter. -ss x Each Java thread has two stacks: one for Java code and one for C code. The -ss option sets the maximum stack size that can be used by C code in a thread to x. Every thread that is spawned during the execution of the program passed to jre has x as its C stack size. The default units for x are bytes. The value of x must be greater than or equal to 1000 bytes. You can modify the meaning of x by appending either the letter "k" for kilobytes or the letter "m" for megabytes. The default stack size is 128 kilobytes ("-ss 128k").
-oss x Each Java thread has two stacks: one for Java code and one for C code. Th
e -oss option sets the maximum stack size that can be used by Java code in a thr
ead to x. Every thread that is spawned during the execution of the program passe
d to jre has x as its Java stack size. The default units for x are bytes. The va
lue of x must be greater than or equal to 1000 bytes. You can modify the meaning
of x by appending either the letter "k" for kilobytes or the letter "m" for meg
abytes. The default stack size is 400 kilobytes ("-oss 400k"). -v, -verbose Caus
es jre to print a message to stdout each time a class file is loaded. -verify Pe
rforms byte-code verification on the class file. Beware, however, that java -ver
ify does not perform a full verification in all situations. Any code path that i
s not actually executed by the interpreter is not verified. Therefore, java -ver
ify cannot be relied upon to certify class files unless all code paths in the cl
ass file are actually run. -verifyremote Runs the verifier on all code that is l
oaded into the system via a classloader. verifyremote is the default
for the interpreter. -noverify Turns verification off. -verbosegc Causes the gar
bage collector to print out messages whenever it frees memory. -DpropertyName=ne
wValue Defines a property value. propertyName is the name of the property whose
value you want to change and newValue is the value to change it to. For example,
this command line % jre -Dawt.button.color=green ... sets the value of the prop
erty awt.button.color to "green". jre accepts any number of -D options on the co
mmand line. ENVIRONMENT VARIABLES CLASSPATH You can use the CLASSPATH environmen
t variable to specify the path to the class file or files that you want to execu
te. CLASSPATH consists of a colon-separated list of directories that contain the
class files to be executed. For example: .:/home/xyz/classes If the file to be
executed is a zip file or a jar file, the path should end with the file name. Fo
r example: .:/usr/local/java/classes/MyClasses.jar SEE ALSO CLASSPATH Note 3: --
----Solaris: Installing IBM JRE, Version 1.3.1 To install JRE 1.3.1 on Solaris,
follow these steps: Log on as root. Insert the IBM Tivoli Access Manager for Sol
aris CD. Install the IBM JRE 1.3.1 package: pkgadd -d /cdrom/cdrom0/solaris -a /
cdrom/cdrom0/solaris/pddefault SUNWj3rt where -d /cdrom/cdrom0/solaris specifies
the location of the package and -a /cdrom/cdrom0/solaris/pddefault specifies th
e location of the installation administration script. Set the PATH environmental
variable: PATH=/usr/j2se/jre/bin:$PATH export PATH After you install IBM JRE 1.
3.1, no configuration is necessary. ############################################
###################################### # ========= 30 LOBS:
========= 30.1 General LOB info: ---------------------Note 1: ======= A LOB is a
Large Object. LOBs are used to store large, unstructured data, such as video, a
udio, photo images etc. With a LOB you can store up to 4 Gigabytes of data. They
are similar to a LONG or LONG RAW but differ from them in quite a few ways. LOB
s offer more features to the developer than a LONG or LONG RAW. The main differe
nces between the data types also indicate why you would use a LOB instead of a L
ONG or LONG RAW. These differences include the following:
- You can have more than one LOB column in a table, whereas you are restricted to just one LONG or LONG RAW column per table.
- When you insert into a LOB, the actual value of the LOB is stored in a separate segment (except for in-line LOBs) and only the LOB locator is stored in the row, thus making it more efficient from a storage as well as query perspective. With LONG or LONG RAW, the entire data is stored in-line with the rest of the table row.
- LOBs allow random access to their data, whereas with a LONG you have to go in for a sequential read of the data from beginning to end.
- The maximum length of a LOB is 4 Gig as compared to a 2 Gig limit on LONG.
- Querying a LOB column returns the LOB locator and not the entire value of the LOB. On the other hand, querying a LONG returns the entire value contained within the LONG column.
You can have two categories of LOBs based on their location with respect to the
database. The categories include internal LOBs and external LOBs. As the names s
uggest, internal LOBs are stored within the database, as table columns. External
LOBs are stored outside the database as operating system files. Only a referenc
e to the actual OS file is stored in the database. An internal LOB can also be p
ersistent or temporary depending on the life of the internal LOB. An internal LO
B can be one of three different data types as follows:
CLOB  - A Character LOB. Used to store character data.
BLOB  - A Binary LOB. Used to store binary, raw data.
NCLOB - A National Character LOB. It stores character data that corresponds to the national character set defined for the database.
The only external LOB data type in Oracle 8i is called a BFILE.
BFILE - Short for Binary File. These hold references to large binary data
stored as physical files in the OS outside the database. DBA_LOBS displays the B
LOBs and CLOBs contained in all tables in the database. BFILEs are stored outsid
e the database, so they are not described by this view. This view's columns are
the same as those in "ALL_LOBS". NCLOB and CLOB are both encoded in an internal fixed-width Unicode character set.
CLOB  = Character Large Object            4 Gigabytes
NCLOB = National Character Large Object   4 Gigabytes
BLOB  = Binary Large Object               4 Gigabytes
BFILE = pointer to binary file on disk    4 Gigabytes
- A limited number of BFILEs can be open simultaneously per session. The initial
ization parameter, SESSION_MAX_OPEN_FILES defines an upper limit on the number o
f simultaneously open files in a session. The default value for this parameter i
s 10. That is, you can open a maximum of 10 files at the same time per session i
f the default value is utilized. If you want to alter this limit, the database a
dministrator can change the value of this parameter in the init.ora file. For ex
ample: SESSION_MAX_OPEN_FILES=20 If the number of unclosed files exceeds the SES
SION_MAX_OPEN_FILES value then you will not be able to open any more files in th
e session. To close all open files, use the FILECLOSEALL call. - LOB locators Re
gardless of where the value of the internal LOB is stored, a locator is stored i
n the row. You can think of a LOB locator as a pointer to the actual location of
the LOB value. A LOB locator is a locator to an internal LOB while a BFILE loca
tor is a locator to an external LOB. When the term locator is used without an id
entifying prefix term, it refers to both LOB locators and BFILE locators. - Inte
rnal LOB Locators For internal LOBs, the LOB column stores a locator to the LOB'
s value which is stored in a database tablespace. Each LOB column/attribute for
a given row has its own distinct LOB locator and copy of the LOB value stored in
the database tablespace. - LOB Locator Operations Setting the LOB Column/Attrib
ute to contain a locator Before you can start writing data to an internal LOB, t
he LOB column/attribute must be made non-null, that is, it must contain a locato
r. Similarly, before you can start accessing the BFILE value,
the BFILE column/attribute must be made non-null. For internal LOBs, you can acc
omplish this by initializing the internal LOB to empty in an INSERT/UPDATE state
ment using the functions EMPTY_BLOB() for BLOBs or EMPTY_CLOB() for CLOBs and NC
LOBs. For external LOBs, you can initialize the BFILE column to point to an exte
rnal file by using the BFILENAME() function.
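For example, a small sketch (table, directory and file names are made up) showing how the locators are put in place at INSERT time:

   CREATE DIRECTORY doc_dir AS '/u01/app/documents';

   CREATE TABLE doc_store
   ( id      NUMBER,
     summary CLOB,
     photo   BLOB,
     source  BFILE );

   INSERT INTO doc_store (id, summary, photo, source)
   VALUES (1, EMPTY_CLOB(), EMPTY_BLOB(), BFILENAME('DOC_DIR','photo1.jpg'));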
Note 2: ======= From: Oracle, Kalpana Malligere 29-Aug-01 14:50 Subject: Re : Wh
at is my best LOB choice Hello, There are several articles/discussions available
in the MetaLink Repository which discuss LOBs, including BFILEs. They are acces
sible via the Search option and the following articles should assist you to make
you choice: 66431.1 LOBS - Storage, Redo and Performance Issues 66046.1 Oracle8
i: LOBs 107441.1 Comparison between LOBs, and LONG & LONG Raw Datatypes To find
any performance comparison between BFILEs and BLOBs, the best suggestion is to t
ry a small scale test. One of the customers wrote that his rule of thumb is that
a small number of large LOBs => bfile, and a large number of small LOBs => BLOB.
The BLOB datatype can store up to 4Gb of data. BLOBs can participate fully in t
ransactions. Changes made to a BLOB value by the DBMS_LOB package, PL/SQL, or th
e OCI can be committed or rolled back. The BFILE datatype stores unstructured bi
nary data (such as image files) in operating-system files outside the database.
A BFILE column or attribute stores a file locator that points to an external fil
e containing the data. BFILEs can also store up to 4Gb of data. However, BFILEs
are read-only; you cannot modify them. They support only random (not sequential
) reads, and they do not participate in transactions. The underlying operating s
ystem must maintain the file integrity and durability for BFILEs. The database a
dministrator must ensure that the file exists and that Oracle processes have ope
rating-system read permissions on the file. Your application will have an impact
on which is preferable. BFILEs will really help if your application is WEB base
d because you can access them through an anonymous FTP connection into the browser by passing
the URL to the HTML. You can also do this through a regular BLOB, but this would
make you drag the entire image through the Oracle server buffer cache everytime
it is requested. The separation of the backup can be beneficial especially if t
he the image files are mostly static. This reduces the backup volume of the data
base itself. You also don't need a special program for loading them into the dat
abase. You just copy the files to the OS and run a DML statement to add them. Th
is way you also avoid the redo created by inserting them as an internal BLOB. On
the other side of the coin, you will have to devise a file naming convention/di
rectory structure to prevent overwriting the BFILE's. You may want to do only on
e backup instead of both. With BLOBs, if you backup the database, you have every
thing needed. You won't be able to update a BFILE through the database, you will
always have to make modifications through the OS. LOB types can be replicated, b
ut not BFILE. The Oracle 8i Application Developer's Guide - Large Objects (LOBs)
, provides information on the various programmatic environments and how to opera
te on LOB and BFILE data. Questions on these capabilities should be posted to th
e appropriate forum (i.e. Oracle PL/SQL, Oracle Call Interface, Oracle Precompil
er, etc.). To answer your question, it depends on how you want to use the data.
A LOB is stored in line by default if it is less than 3,960 bytes, whereas an out-of-line LOB takes about 20 bytes per row. An inline LOB (i.e. one that is actua
lly stored in the row) is always logged, but an out-of-line can be made non-logg
ing. Preference is always to DISABLE STORAGE IN ROW, but if your LOBs are actual
ly very small, and the way you use them is sufficiently special then you may wan
t to store them in line. But if so, they could probably become simple varchar2(4
000). Note - the minimum size an out-of-line LOB can use is one Oracle block (pl
us a bit of extra space in the LOBINDEX). Thanks! Kalpana Oracle Technical Suppo
rt
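If you later decide to pull such an external file into an internal BLOB, a minimal PL/SQL sketch with DBMS_LOB (reusing the made-up doc_store table and DOC_DIR directory from the earlier example) could look like:

   DECLARE
     l_blob  BLOB;
     l_bfile BFILE := BFILENAME('DOC_DIR','photo2.jpg');
   BEGIN
     INSERT INTO doc_store (id, photo)
     VALUES (2, EMPTY_BLOB())
     RETURNING photo INTO l_blob;

     DBMS_LOB.FILEOPEN(l_bfile, DBMS_LOB.FILE_READONLY);
     DBMS_LOB.LOADFROMFILE(l_blob, l_bfile, DBMS_LOB.GETLENGTH(l_bfile));
     DBMS_LOB.FILECLOSE(l_bfile);
     COMMIT;
   END;
   /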
Note 3: ======= Doc ID </help/usaeng/Search/search.html>: Note:66431.1 Content T
ype: TEXT/PLAIN Subject: LOBS - Storage, Redo and Performance Issues Creation Da
te: 05NOV-1998 Type: BULLETIN Last Revision Date: 25-JUL-2002 Status: PUBLISHED
Introduction
~~~~~~~~~~~~ This is a short note on the internal storage of LOBs. The informati
on here is intended to supplement the documentation and other notes which descri
be how to use LOBS. The focus is on the storage characteristics and configuratio
n issues which can affect performance. There are 4 types of LOB: CLOB, BLOB, NCL
OB stored internally to Oracle BFILE stored externally The note mainly discusses
the first 3 types of LOB which as stored INTERNALLY within the Oracle DBMS. BFI
LE's are pointers to external files and are only mentioned briefly. Examples of
handling LOBs can be found in [NOTE:47740.1] <ml2_documents.showDocument?p_id=47
740.1&p_database_id=NOT> Attributes ~~~~~~~~~~ There are many attributes associa
ted with LOB columns. The aim here is to cover the fundamental points about each
of the main attributes. The attributes for each LOB column are specified using
the "LOB (lobcolname) STORE AS ..." syntax. A table containing LOBs (CLOB, NCLOB
and BLOB) creates 2 additional disk segments per LOB column - a LOBINDEX and a
LOBSEGMENT. These can be viewed, along with the LOB attributes, using the dictio
nary views: DBA_LOBS, ALL_LOBS or USER_LOBS, which give the columns:
OWNER         Table Owner
TABLE_NAME    Table name
COLUMN_NAME   Column name in the table
SEGMENT_NAME  Segment name of the LOBSEGMENT
INDEX_NAME    Segment name of the LOBINDEX
CHUNK         Chunk size (bytes)
PCTVERSION    PctVersion
CACHE         Cache option of the LOB segment (yes/no)
LOGGING       Logging mode of the LOB segment (yes/no)
IN_ROW        Whether storage in row is allowed (yes/no)
SELECT l.table_name as "TABLE", l.column_name as "COLUMN", l.segment_name as "SE
GMENT", l.index_name as "INDEX", l.chunk as "CHUNKSIZE", l.LOGGING, l.IN_ROW, t.
tablespace_name FROM DBA_LOBS l, DBA_TABLES t WHERE l.table_name=t.table_name AN
D l.owner in ('VPOUSERDB','TRIDION_CM'); Storage Parameters ~~~~~~~~~~~~~~~~~~ B
y default LOB segments are created in the same tablespace as the
base table using the tablespaces default storage details. You can specify the st
orage attributes of the LOB segments thus: Create table DemoLob ( A number, B cl
ob ) LOB(b) STORE AS lobsegname ( TABLESPACE lobsegts STORAGE (lobsegment storag
e clause) INDEX lobindexname ( TABLESPACE lobidxts STORAGE ( lobindex storage cl
ause ) ) ) TABLESPACE tables_ts STORAGE( tables storage clause ) ; CREATE TABLE
t_lob (DOCUMENT_NR NUMBER(16,0) NOT NULL, DOCUMENT_BLOB BLOB NOT NULL ) STORAGE
(INITIAL 100k NEXT 100K PCTINCREASE 0 MAXEXTENTS 100 ) TABLESPACE system lob (DO
CUMENT_BLOB) store as DOCUMENT_LOB (tablespace ts storage (initial 30K next 30K
pctincrease 30 maxextents 3) index (tablespace ts_index storage (initial 40K nex
t 40K pctincrease 40 maxextents 4))); In 8.0 the LOB INDEX can be stored separat
ely from the lob segment. If a tablespace is specified for the LOB SEGMENT then
the LOB INDEX will be placed in the same tablespace UNLESS a different tablespac
e is explicitly specified. Unless you specify names for the LOB segments system
generated names are used. In ROW Versus Out of ROW ~~~~~~~~~~~~~~~~~~~~~~~~ LOB
columns can be allowed to store data within the row or not as detailed below. Wh
ether in-line storage is allowed or not can ONLY be specified at creation time.
"STORE AS ( enable storage in row )" Allows LOB data to be stored in the TABLE s
egment provided it is less than about 4000 bytes. The actual maximum in-line LOB
is 3964 bytes. If the lob value is greater than 3964 bytes then the LOB data is
stored in the LOB SEGMENT (ie: out of line). An out of line LOB behaves as desc
ribed under 'disable storage in row' except that
if its size shrinks to 3964 or less the LOB can again be stored inline. When a L
OB is stored out-of-line in an 'enable storage in row' LOB column between 36 and
84 bytes of control data remain in-line in the row piece. In-line LOBS are subj
ect to normal chaining and row migration rules within Oracle. Ie: If you store a
3900 byte LOB in a row with a 2K block size then the row piece will be chained
across two or more blocks. Both REDO and UNDO are written for in-line LOBS as th
ey are part of the normal row data.
"STORE AS ( disable storage in row )" This option prevents any size of LOB from
being stored in-line. Instead a 20 byte LOB locator is stored in the ROW which g
ives a unique identifier for a LOB in the LOB segment for this column. The Lob L
ocator actually gives a key into the LOB INDEX which contains a list of all bloc
ks (or pages) that make up the LOB. The minimum storage allocation for an out of
line LOB is 1 Database BLOCK per LOB ITEM and may be more if CHUNK is larger th
an a single block. UNDO is only written for the column locator and LOB INDEX cha
nges. No UNDO is generated for pages in the LOB SEGMENT. Consistent Read is achi
eved by using page versions. Ie: When you update a page of a LOB the OLD page re
mains and a new page is created. This can appear to waste space but old pages ca
n be reclaimed and reused. CHUNK size ~~~~~~~~~~ "STORE AS ( CHUNK bytes ) " Can
ONLY be specified at creation time. In 8.0 values of CHUNK are in bytes and are
rounded to the next highest multiple of DB_BLOCK_SIZE without erroring. Eg: If
you specify a CHUNK of 3000 with a block size of 2K then CHUNK is set to 4096 by
tes. "bytes" / DB_BLOCK_SIZE determines the unit of allocation of blocks to an '
out of line' LOB in the LOB segment. Eg: if CHUNK is 32K and the LOB is 'disable
storage in row' then even if the LOB is only 10 bytes long 32K will be allocate
d in the LOB SEGMENT. CHUNK does NOT affect in-line LOBS.
PCTVERSION ~~~~~~~~~~ "STORE AS ( PCTVERSION n )" PCTVERSION can be changed afte
r creation using: ALTER TABLE tabname MODIFY LOB (lobname) ( PCTVERSION n ); PCT
VERSION affects the reclamation of old copies of LOB data. This affects the abil
ity to perform consistent read. If a session is attempting to use an OLD version
of a LOB and that version gets overwritten (because PCTVERSION is too small) th
en the user will typically see the errors: ORA-01555: snapshot too old: rollback
segment number with name "" too small ORA-22924: snapshot too old PCTVERSION ca
n prevent OLD pages being used and force the segment to extend instead. Do not e
xpect PCTVERSION to be an exact percentage of space as there is an internal fudg
e factor applied. CACHE ~~~~~ "STORE AS ( CACHE )" or "STORE AS ( NOCACHE )" Thi
s option can be changed after creation using: ALTER TABLE tabname MODIFY LOB (lo
bname) ( CACHE ); or ALTER TABLE tabname MODIFY LOB (lobname) ( NOCACHE ); With
NOCACHE set (the default) reads from and writes to the LOB SEGMENT occur using d
irect reads and writes. This means that the blocks are never cached in the buffe
r cache and the the Oracle shadow process performs the reads/writes itself. The
reads / writes show up under the wait events "direct path read" and "direct path
write" and multiple blocks can be read/written at a time (provided the caller i
s using a large enough buffer size). When set the CACHE option causes the LOB SE
GMENT blocks to be read / written via the buffer cache . Reads show up as "db fi
le sequential read" but unlike a table scan the blocks are placed at the most-re
cently-used end of the LRU chain. The CACHE option for LOB columns is different
to the CACHE option for tables as CACHE_SIZE_THRESHOLD does not limit the size
of LOB read into the buffer cache. This means that extreme caution is required o
therwise the read of a long LOB can effectively flush the cache. In-line LOBS ar
e not affected by the CACHE option as they reside in the actual table block (whi
ch is typically accessed via the buffer cache any way). The cache option can aff
ect the amount of REDO generated for out of line LOBS. With NOCACHE blocks are d
irect loaded and so entire block images are written to the REDO stream. If CHUNK
is also set then enough blocks to cover CHUNK are written to REDO. If CACHE is s
et then the block changes are written to REDO. Eg: In the extreme case 'DISABLE
STORAGE IN ROW NOCACHE CHUNK 32K' would write redo for the whole 32K even if the
LOB was only 5 characters long. CACHE would write a redo record describing the
5 byte change (taking about 100-200 bytes). LOGGING ~~~~~~~ "STORE AS ( NOCACHE
LOGGING )" or "STORE AS ( NOCACHE NOLOGGING )" This option can be changed after
creation but the LOGGING / NOLOGGING attribute must be prefixed by the NOCACHE o
ption. The CACHE option implicitly enables LOGGING. The default for this option
is LOGGING. If a LOB is set to NOCACHE NOLOGGING then updates to the LOB SEGMENT
are not logged to the redo logs. However, updates to in-line LOBS are still log
ged as normal. As NOCACHE operations use direct block updates then all LOB segme
nt operations are affected. NOLOGGING of the LOB segment means that if you have
to recover the database then sections of the LOB segment will be marked as corru
pt during recovery.
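For example, a sketch (table and column names are made up) of switching an existing LOB column to NOCACHE NOLOGGING for a bulk load and re-enabling logging afterwards:

   ALTER TABLE doc_store MODIFY LOB (photo) (NOCACHE NOLOGGING);
   -- ... perform the bulk load ...
   ALTER TABLE doc_store MODIFY LOB (photo) (CACHE);
   -- CACHE implicitly re-enables LOGGING; take a backup afterwards, since changes
   -- made while NOLOGGING was in effect cannot be recovered from the redo logs.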
Space required for updates ~~~~~~~~~~~~~~~~~~~~~~~~~~ If a LOB is out-of-line th
en updates to pages if the LOB cause new versions of those pages to be created.
Rollback is achieved by reverting back to the pre-updated page versions. This ha
s implications on the amount of space required when a LOB is being updated as th
e LOB SEGMENT needs enough space to hold both the OLD and NEW pages concurrently
in case your transaction rolls back. Eg: Consider the following: INSERT a large
LOB LOB SEGMENT extends take the new pages COMMIT; DELETE the above LOB The LOB
pages are not yet free as they will be needed in case of rollback. INSERT a new
LOB Hence this insert may require more space in the LOB SEGMENT COMMIT; Only af
ter this point could the deleted pages be used. Performance Issues ~~~~~~~~~~~~~
~~~~~~ Working with LOBs generally requires more than one round trip to the data
base. The application first has to obtain the locator and only then can perform
operations against that locator. This is true for inline or out of line LOBS. Th
e buffer size used to read / write the LOB can have a significant impact on perf
ormance, as can the SQL*Net packet sizes. Eg: With OCILobRead() a buffer size is
specified for handling the LOB. If this is small (say 2K) then there can be a r
ound trip to the database for each 2K chunk of the LOB. To make the issue worse
the server will
only fetch the blocks needed to satisfy the current request so may perform singl
e block reads against the LOB SEGMENT. If however a larger chunk size is used (s
ay 32K) then the server can perform multiblock operations and pass the data back
in larger chunks. There is a LOB buffering subsystem which can be used to help
improve the transfer of LOBs between the client and server processes. See the do
cumentation for details of this. BFILEs ~~~~~~ BFILEs are quite different to int
ernal LOBS as the only real storage issue is the space required for the inline l
ocator. This is about 20 bytes PLUS the length of the directory and filename ele
ments of the BFILENAME. The performance implications of the buffer size are the
same as for internal LOBS. References ~~~~~~~~~~ [NOTE:162345.1] <ml2_documents.
showDocument?p_id=162345.1&p_database_id=NOT> LOBS - Storage, Read-consistency a
nd Rollback Note 4: ======= Doc ID: Note:159995.1 Content Type: TEXT/X-HTML Subj
ect: Different Behaviors of Lob and Lobindex Segments in 8.0, 8i and 9i Creation
Date: 05-OCT-2001 Type: BULLETIN Last Revision Date: 27-MAR-2003 Status: PUBLIS
HED PURPOSE ------This bulletin lists the different behaviors of a lob index seg
ment regarding tablespace and storage values: -> When creating the table, the lo
b and lob index segments -> Altering the associated lob segment and/or lob index
segment. SCOPE & APPLICATION ------------------For all DBAs who manage differen
t versions of Oracle with databases containing LOB segments, and who need to mai
ntain the associated lob indexes. Under 8i and 9i In Oracle8i SQL Reference and
Oracle9i SQL Reference, it is clearly stated that: lob_index_clause This clause
is deprecated as of Oracle8i. Oracle generates an index for each LOB column. Ora
cle names and manages the LOB indexes internally. Although it is still possible
for you to specify this clause, Oracle Corporation strongly recommends that you
no longer do so. In any event, do not put the LOB index in a different tablespac
e from the LOB data. 1.Lob and lobindex specifications at table creation If you
create a new table in release 8i and 9i and specify a tablespace and storage val
ues for the LOB index for a non-partitioned table, the
tablespace specification and storage values are ignored. The LOB index is locate
d in the same tablespace as the LOB segment with the same storage values, except
the NEXT and MAXEXTENTS values. the NEXT value of the lobindex = INITIAL defaul
t value of the tablespace (LOB segment) the MAXEXTENTS value of the lobindex = u
nlimited value (2Gb) SQL> CREATE TABLE t_lob 2 (DOCUMENT_NR NUMBER(16,0) NOT NUL
L, 3 DOCUMENT_BLOB BLOB NOT NULL 4 ) 5 STORAGE 6 (INITIAL 100k 7 NEXT 100K 8 PCT
INCREASE 0 9 MAXEXTENTS 100 10 ) 11 TABLESPACE system 12 lob (DOCUMENT_BLOB) sto
re as DOCUMENT_LOB 13 (tablespace ts storage 14 (initial 30K next 30K pctincreas
e 30 maxextents 3) 15 index (tablespace ts_index storage 16 (initial 40K next 40
K pctincrease 40 maxextents 4))); Table created. SQL> select segment_name, segme
nt_type, tablespace_name, 2 initial_extent, next_extent, pct_increase, max_exten
ts 3 from user_segments;

SEGMENT_NAME              SEGMENT_TY  TABLESPA  INITIAL  NEXT_EXT  PCT_INC     MAX_EXT
------------------------  ----------  --------  -------  --------  -------  ----------
T_LOB                     TABLE       SYSTEM     102400    102400        0         100
SYS_IL0000020297C00002$$  LOBINDEX    TS          30720     10240       30  2147483645
DOCUMENT_LOB              LOBSEGMENT  TS          30720     30720       30           3

All
storage modifications are based on this original table t_lob. 2.Lob and lobinde
x storage modifications When you modify the storage values for the lob and lob i
ndex segments, the values of the lob index are kept as initially set, except the
PCT_INCREASE. The value of the lob segment PCTINCREASE spreads out on the lob i
ndex: SQL> alter table t_lob 2 modify lob (document_blob) 3 (storage (next 60K p
ctincrease 60 maxextents 6) 4 index (storage (next 70K pctincrease 70 maxextents
7))); Table altered. SQL> select segment_name, segment_type, tablespace_name, 2
initial_extent, next_extent, pct_increase, max_extents 3 from user_segments; SE
GMENT_NAME SEGMENT_TY TABLESPA INITIAL NEXT_EXT PCT_INC MAX_EXT ----------------
------- ----------- --------- -------- -------- ------- --------T_LOB TABLE SYST
EM 102400 102400 0 100 SYS_IL0000020297C00002$$ LOBINDEX TS 30720 10240 60 21474
83645 DOCUMENT_LOB LOBSEGMENT TS 30720 61440 60 6 3.Storage modifications of lob
segment only If you modify the storage values for the lob segment only, you get
the same behaviour: SQL> alter table t_lob 2 modify lob (document_blob)
3 (storage (next 60K pctincrease 60 maxextents 6)); Table altered. SQL> select s
egment_name, segment_type, tablespace_name, 2 initial_extent, next_extent, pct_i
ncrease, max_extents 3 from user_segments; SEGMENT_NAME SEGMENT_TY TABLESPA INIT
IAL NEXT_EXT PCT_INC MAX_EXT ----------------------- ----------- --------- -----
--- -------- ------- --------T_LOB TABLE SYSTEM 102400 102400 0 100 SYS_IL000002
0297C00002$$ LOBINDEX TS 30720 10240 60 2147483645 DOCUMENT_LOB LOBSEGMENT TS 30
720 61440 60 3 4.Storage modifications of lobindex segment only If you modify th
e storage values for the lob index segment only, nothing is altered: SQL> alter
table t_lob 2 modify lob (document_blob) 3 (index (storage (next 70K pctincrease
70 maxextents 7))) 4 ; Table altered. SQL> select segment_name, segment_type, t
ablespace_name, 2 initial_extent, next_extent, pct_increase, max_extents 3 from
user_segments; SEGMENT_NAME SEGMENT_TY TABLESPA INITIAL NEXT_EXT PCT_INC MAX_EXT
----------------------- ----------- --------- -------- -------- ------- -------
-T_LOB TABLE SYSTEM 102400 102400 0 100 SYS_IL0000020297C00002$$ LOBINDEX TS 307
20 10240 30 2147483645 DOCUMENT_LOB LOBSEGMENT TS 30720 30720 30 3 If you attemp
t to modify the storage values of the lob index directly, you get an error messa
ge: SQL> alter index SYS_IL0000020297C00002$$ storage (pctincrease 80); alter in
dex SYS_IL0000020297C00002$$ storage (pctincrease 80) * ERROR at line 1: ORA-228
64: cannot ALTER or DROP LOB indexes SQL> alter index SYS_IL0000020297C00002$$ r
ebuild storage (pctincrease 60); alter index SYS_IL0000020297C00002$$ rebuild st
orage (pctincrease 60) * ERROR at line 1: ORA-02327: cannot create index on expr
ession with datatype LOB Under 8.0 1.Lob and lobindex specifications at table cr
eation If you create a new table in release 8.0 and specify a tablespace for the
LOB index for a non-partitioned table, the tablespace specification and storage
values are encountered. The LOB index is located in the defined tablespace with
the user-defined storage values. SQL> CREATE TABLE t_lob 2 (DOCUMENT_NR NUMBER(
16,0) NOT NULL, 3 DOCUMENT_BLOB BLOB NOT NULL 4 ) 5 STORAGE 6 (INITIAL 100k 7 NE
XT 100K 8 PCTINCREASE 0 9 MAXEXTENTS 100 10 ) 11 TABLESPACE system 12 lob (DOCUM
ENT_BLOB) store as DOCUMENT_LOB
13 (tablespace ts storage 14 (initial 30K next 30K pctincrease 30 maxextents 3)
15 index (tablespace ts_index storage 16 (initial 40K next 40K pctincrease 40 ma
xextents 4))); Table created. SQL> select segment_name, segment_type, tablespace
_name, 2 initial_extent, next_extent, pct_increase, max_extents 3 from user_segm
ents; SEGMENT_NAME SEGMENT_TY TABLESPA INITIAL NEXT_EXT PCT_INC MAX_EXT --------
--------------- ----------- --------- -------- -------- ------- --------T_LOB TA
BLE SYSTEM 102400 102400 0 100 SYS_IL0000020297C00002$$ LOBINDEX TS_INDEX 40960
40960 40 4 DOCUMENT_LOB LOBSEGMENT TS 32768 30720 30 3 All storage modifications
are based on this original table t_lob. 2.Lob and lobindex storage modification
s When you modify the storage values for the lob and lob index segments, the val
ues for the lobindex are kept as initially set: SQL> alter table t_lob 2 modify
lob (document_blob) 3 (storage (next 60K pctincrease 60 maxextents 6) 4 index (s
torage (next 70K pctincrease 70 maxextents 7))); Table altered. SQL> select segm
ent_name, segment_type, tablespace_name, 2 initial_extent, next_extent, pct_incr
ease, max_extents
  3  from user_segments;

SEGMENT_NAME              SEGMENT_TY  TABLESPA  INITIAL  NEXT_EXT  PCT_INC    MAX_EXT
------------------------- ----------- --------- -------- --------- -------- ----------
T_LOB                     TABLE       SYSTEM      102400    102400        0        100
SYS_IL0000020297C00002$$  LOBINDEX    TS_INDEX     40960     40960       40          4
DOCUMENT_LOB              LOBSEGMENT  TS           32768     61440       60          6

3. Storage modifications of lob segment only

If you modify the storage v
alues for the lob segment only, you get the same behavior: SQL> alter table t_lo
b 2 modify lob (document_blob) 3 (storage (next 60K pctincrease 60 maxextents 6)
); Table altered. SQL> select segment_name, segment_type, tablespace_name, 2 ini
tial_extent, next_extent, pct_increase, max_extents
  3  from user_segments;

SEGMENT_NAME              SEGMENT_TY  TABLESPA  INITIAL  NEXT_EXT  PCT_INC    MAX_EXT
------------------------- ----------- --------- -------- --------- -------- ----------
T_LOB                     TABLE       SYSTEM      102400    102400        0        100
SYS_IL0000020297C00002$$  LOBINDEX    TS_INDEX     40960     40960       40          4
DOCUMENT_LOB              LOBSEGMENT  TS           32768     61440       60          6

Again, the lob segment storage values do not impact the lob index.

4. Storage modifications of lobindex segment only

I
f you modify the storage values for the lob index segment only, nothing is alter
ed: SQL> alter table t_lob 2 modify lob (document_blob) 3 (index (storage (next
70K pctincrease 70 maxextents 7))) 4 ; Table altered. SQL> select segment_name,
segment_type, tablespace_name, 2 initial_extent, next_extent, pct_increase, max_
extents
  3  from user_segments;

SEGMENT_NAME              SEGMENT_TY  TABLESPA  INITIAL  NEXT_EXT  PCT_INC    MAX_EXT
------------------------- ----------- --------- -------- --------- -------- ----------
T_LOB                     TABLE       SYSTEM      102400    102400        0        100
SYS_IL0000020297C00002$$  LOBINDEX    TS_INDEX     40960     40960       40          4
DOCUMENT_LOB              LOBSEGMENT  TS           32768     30720       30          3

If you at
tempt to modify the storage values of the lob index directly, you get an error m
essage: SQL> alter index SYS_IL0000020297C00002$$ storage (pctincrease 20); alte
r index SYS_IL0000020297C00002$$ storage (pctincrease 20) * ERROR at line 1: ORA
-22864: cannot ALTER or DROP LOB indexes Migration from 7 to 9i The "Oracle9i Da
tabase Migration Release 1 (9.0.1)" documentation states: LOB Index Clause If yo
u used the LOB index clause to store LOB index data in a tablespace separate fro
m the tablespace used to store the LOB, the index data is relocated to reside in
the same tablespace as the LOB. If you used Export/Import to migrate from Oracl
e7 to Oracle9i, the index data was relocated automatically during migration. How
ever, the index data was not relocated if you used the Migration utility or the
Oracle Data Migration Assistant. RELATED DOCUMENTS ----------------<Note:66431.1
> LOBS - Storage, Redo and Performance Issues <Bug:1353339> ALTER TABLE MODIFY D
EFAULT ATTRIBUTES LOB DOES NOT UPDATE LOB INDEX DEFAULT TS <Bug:1864548> LARGE L
OB INDEX SEGMENT SIZE <Bug:747326> ALTER TABLE MODIFY LOB STORAGE PARAMETER DOES
N'T WORK <Bug:1244654> UNABLE TO CHANGE STORAGE CHARACTERISTICS FOR LOB INDEXES N
ote 5: ======= Calculate sizes: Example ------SQL> create table my_lob 2 (idx nu
mber null, a_lob clob null, b_lob blob null) 3 storage (initial 20k maxextents 1
21 pctincrease 0 ) 4 lob (a_lob, b_lob) store as 5 ( storage ( initial 100k next
100K maxextents 999 pctincrease 0)); Table created. SQL> select object_name,obj
ect_type, object_id from user_objects order by 2;

OBJECT_NAME                              OBJECT_TYPE        OBJECT_ID
---------------------------------------- ------------------ ---------
SYS_LOB0000004017C00002$$                LOB                     4018
SYS_LOB0000004017C00003$$                LOB                     4020
MY_LOB                                   TABLE                   4017
SQL> select bytes, s.segment_name,s.segment_type 2 from dba_segments s 3 where s
.segment_name='MY_LOB';

     BYTES SEGMENT_NAME                   SEGMENT_TYPE
---------- ------------------------------ -----------------
     65536 MY_LOB                         TABLE

SQL> select sum(bytes), s.segment_name, s.segment_type
  2  from dba_lobs l, dba_segments s
  3  where s.segment_type = 'LOBSEGMENT'
  4  and l.table_name = 'MY_LOB'
  5  and s.segment_name = l.segment_name
  6  group by s.segment_name, s.segment_type;

SUM(BYTES) SEGMENT_NAME                   SEGMENT_TYPE
---------- ------------------------------ -----------------
    131072 SYS_LOB0000004017C00002$$      LOBSEGMENT
    131072 SYS_LOB0000004017C00003$$      LOBSEGMENT

Therefore the total size for the table MY_LOB is: 65536 (for
the table) + 131072 (for CLOB segment) + 131072 (for BLOB segment) => 327680 byt
es Note 6: ======= Doc ID: Note:268476.1 Subject: LOB Performance Guideline Type
: WHITE PAPER Status: PUBLISHED Content Type: TEXT/X-HTML Creation Date: 09-APR-
2004 Last Revision Date: 22-JUN-2004 LOB Performance Guidelines An Oracle White
Paper April 2004
Contents:
  Executive Overview
  LOB Overview
  Important Storage Parameters
    CHUNK - Definition, Points to Note, Recommendation
    In-line and Out-of-Line storage: ENABLE STORAGE IN ROW and DISABLE STORAGE IN ROW -
      Definition, Points to Note, Recommendation
    CACHE, NOCACHE - Definition, Points to Note, Recommendation
    Consistent Reads on LOBs: RETENTION and PCTVERSION - Definition, Points to Note, Recommendation
    LOGGING, NOLOGGING - Definition, Points to Note, Recommendation
  Performance Guideline - LOB Loading
    Points to Note
    Use array operations for LOB inserts
    Scalability problem - with LOB disable storage in row option
    Row Chaining problem - with the use of OCILobWrite API
    High number of consistent read blocks created and examined
    CPU time and Elapsed time - not reported accurately
    Reads/Writes are done one chunk at a time in synchronous way
    High CPU system time
    Buffer cache sizing problem
    Multi-byte character set conversion
    HWM enqueue contention
    RAC environment issues
    Other LOB performance related issues
  APPENDIX A - LONG API access to LOB datatype
  APPENDIX B - Migration from in-line to out-of-line (and out-of-line to in-line) storage
  APPENDIX C - How LOB data is stored
    In-line LOB - LOB size less than 3964 bytes
    In-line LOB - LOB size = 3965 bytes (1 byte greater than 3964)
    In-line LOB - LOB size greater than 12 chunk addresses
    Out-of-line LOBs - All LOB sizes
Executive Overview

This document gives a brief overview of Oracle's LOB data structure, emphasizing various storage parameter options, and describes scenarios where those storage parameters are best used. The purpose of the latter is to help readers select the appropriate LOB storage options. This paper assumes that most customers load LOB data once and retrieve it many times (less than 10% of DML is update and delete), so the performance guidelines provided here are for LOB loading. LOBs were designed to effici
ently store and retrieve large amounts of data. Small LOBs (< 1MB) perform bette
r than LONGs for inserts, and have comparable performance on selects. Large LOBs
perform better than LONGs in general. Oracle recommends the use of LOBs to stor
e unstructured or semi-structured data, and has provided a LONG API to allow eas
e of migration from LONGs to LOBs. Oracle plans to de-support LONGs in the futur
e. LOB Overview Whenever a table containing a LOB column is created, two segment
s are created to hold the specified LOB column. These segments are of type LOBSE
GMENT and LOBINDEX. The LOBINDEX segment is used to access LOB chunks/pages that
are stored in the LOBSEGMENT segment. CREATE TABLE foo (pkey NUMBER, bar BLOB);
SELECT segment_name, segment_type FROM user_segments;

9792 is the object_id of the parent table FOO (if a table has more than one LOB column, LOB segment names are generated differently; use the dba|user_lobs view to get the parent table association).

SEGMENT_NAME                        SEGMENT_TYPE
----------------------------------- ------------
FOO                                 TABLE
SYS_IL0000009792C00002$$            LOBINDEX
SYS_LOB0000009792C00002$$           LOBSEGMENT   (also referred to as LOB chunks/pages)
The LOBSEGMENT and the LOBINDEX segments are stored in the same tablespace as th
e table containing the LOB, unless otherwise specified.[1] Important Storage Par
ameters This section defines the important storage parameters of a LOB column (o
r a LOB attribute). For each definition we describe the effects of the parameter, and give recommendations on how to get better performance and to avoid errors.

CHUNK

Definition
CHUNK is the smallest unit of LOBSEGMENT allocation.
It is a multiple of DB_BLOCK_SIZE.

Points to Note
- For example, if the value of CHUNK is 8K and an inserted LOB is only 1K in size, then 1 chunk is allocated and 7K are wasted in that chunk. The CHUNK option does NOT affect in-line LOBs (see the definition in the next section).
- Choose an appropriate chunk size for best performance and to avoid space wastage. The maximum chunk size is 32K.
- The CHUNK parameter cannot be altered.
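As a hedged sketch (doc_store and lob_data are made-up names, not from the note above), CHUNK has to be chosen at creation time:

CREATE TABLE doc_store
( doc_id  NUMBER,
  doc     BLOB
)
LOB (doc) STORE AS
( TABLESPACE lob_data        -- hypothetical tablespace
  CHUNK 16K                  -- smallest allocation unit for the LOB segment; cannot be altered later
  ENABLE STORAGE IN ROW
);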
Recommendation Choose a chunk size for optimal performance and minimum space was
tage. For LOBs that are less than 32K, a chunk size that is 60% (or more) of the
LOB size is a good starting point. For LOBs larger than 32K, choose a chunk siz
e equal to the frequent update size. In-line and Out-of-Line storage: ENABLE STO
RAGE IN ROW and DISABLE STORAGE IN ROW

Definition
LOB storage is said to be in-line when the LOB data is stored with the other column data in the row. A LOB c
an only be stored inline if its size is less than ~4000 bytes. For in-line
LOB data, space is allocated in the table segment (the LOBINDEX and LOBSEGMENT s
egments are empty). LOB storage is said to be out-of-line when the LOB data is stored in CHUNK-sized blocks in the LOBSEGMENT segment, separate from the other columns' data. ENABLE STORAGE IN ROW allows LOB data to be stored in the table segment provided it is less than ~4000 bytes. DISABLE STORAGE IN ROW prevents LOB data from being stored in-line, regardless of the size of the LOB. Instead only a 20-byte LOB locator is stored with the other column data in the table segment.

Points to Note
- In-line LOBs are subject to normal chaining and row migration rules within Oracle. If you store a 3900 byte LOB in a row with a 2K block size, then the row will be chained across two or more blocks. Both REDO and UNDO are written for in-line LOBs as they are part of the normal row data. The CHUNK option does not affect in-line LOBs.
- With out-of-line storage, UNDO is written only for the LOB locator and LOBINDEX changes. No UNDO is generated for chunks/pages in the LOBSEGMENT. Consistent Read is achieved by using page versions (see the RETENTION or PCTVERSION options).
- DML operations on out-of-line LOBs can generate high amounts of redo, because redo is generated for the entire chunk. For example, in the extreme case, "DISABLE STORAGE IN ROW CHUNK 32K" would write redo for the whole 32K even if the LOB change was only 5 bytes.
- When in-line LOB data is updated, and the new LOB size is greater than 3964 bytes, it is migrated and stored out-of-line. If this migrated LOB is updated again and its size becomes less than 3964 bytes, it is not moved back in-line (except when the LONG API is used for the update).
- ENABLE|DISABLE STORAGE IN ROW parameters cannot be altered.
Recommendation Use ENABLE STORAGE IN ROW, except in cases where the LOB data is
not retrieved as much as the other columns' data. In this case, if the LOB data is stored out-of-line, the biggest gain is achieved while performing full table scans, as the operation does not retrieve the LOB's data.

CACHE, NOCACHE

Definition
The CACHE storage parameter causes LOB data blocks to be read/written via the buffer cache. With the NOCACHE storage parameter, LOB data is read/written using direct reads/writes. This means that the LOB data blocks are never in the buffer cache and the Oracle server process performs the reads/writes.

Points to Note
- With the CACHE option, LOB data reads show up as the wait event "db file sequential read"; writes are performed by the DBWR process. With the NOCACHE option, LOB data reads/writes show up as the wait events "direct path read (lob)"/"direct path write (lob)". The corresponding statistics are "physical reads direct (lob)" and "physical writes direct (lob)".
- In-line LOBs are not affected by the CACHE option as they reside with the other column data, which is typically accessed via the buffer cache.
- The CACHE option gives better read/write performance than the NOCACHE option.
- The CACHE option for LOB columns is different from the CACHE option for tables. This means that caution is required, otherwise the read of a large LOB can effectively flush the buffer cache.
- The CACHE|NOCACHE option can be altered.
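A quick, hedged way to see how existing LOB columns are defined, and to switch the caching mode (foo/bar as in the examples above; verify the USER_LOBS columns on your release):

SELECT table_name, column_name, cache, logging, in_row, chunk, pctversion, retention
FROM   user_lobs;

ALTER TABLE foo MODIFY LOB (bar) (NOCACHE);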
Recommendation Enable caching, except for cases where caching LOBs would severel
y impact performance for other online users, by forcing these users to perform d
isk reads rather than getting cache hits. Consistent Reads on LOBs: RETENTION an
d PCTVERSION Consistent Read (CR) on LOBs uses a different mechanism than that u
sed for other data blocks in Oracle. Older versions of the LOB are retained in t
he LOB segment and CR is used on the LOB index to access these older versions (f
or in-line LOBs which are stored in the table segment, the regular UNDO mechanis
m is used). There are two ways to control how long older versions are maintained
.

Definition
- RETENTION - time-based: this specifies how long older versions are to be retained.
- PCTVERSION - space-based: this specifies what percentage of the LOB segment is to be used to hold older versions.

Points to Note
- RETENTION is a keyword in the LOB column definition. No value can be specified for RETENTION; the RETENTION value is implicit. If a LOB is created with database compatibility set to 9.2.0.0 or higher, UNDO_MANAGEMENT=AUTO, and PCTVERSION is not explicitly specified, time-based retention is used. The LOB RETENTION value is always equal to the value of the UNDO_RETENTION database instance parameter.
- You cannot specify both PCTVERSION and RETENTION.
- PCTVERSION is applicable only to LOB chunks/pages allocated in LOBSEGMENTs. Other LOB-related data in the table column and the LOBINDEX segment use the regular undo mechanism.
- PCTVERSION=0: the space allocated for older versions of LOB data in LOBSEGMENTs can be reused by other transactions and can cause "snapshot too old" errors.
- PCTVERSION=100: the space allocated by older versions of LOB data can never be reused by other transactions. LOB data storage space is never reclaimed and it always increases.
- RETENTION and PCTVERSION can be altered.
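A hedged sketch of altering these settings on the foo/bar example (time-based retention requires automatic undo management and compatibility 9.2.0 or higher):

ALTER TABLE foo MODIFY LOB (bar) (RETENTION);      -- value follows UNDO_RETENTION
ALTER TABLE foo MODIFY LOB (bar) (PCTVERSION 20);  -- or space-based instead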
Recommendation
Time-based retention using the RETENTION keyword is preferred. A high value for RETENTION or PCTVERSION may be needed to avoid "snapshot too old" errors in environments with high concurrent read/write LOB access.

LOGGING, NOLOGGING

Definition
LOGGING: enables logging of LOB data changes to the redo logs. NOLOGGING: changes to LOB data (stored in LOBSEGMENTs) are not logged into the redo logs; however, in-line LOB changes are still logged as normal.

Points to Note
- The CACHE option implicitly enables LOGGING.
- If NOLOGGING was set, and if you have to recover the database, then sections of the LOBSEGMENT will be marked as corrupt during recovery (LOBINDEX changes are logged to the redo logs and are recovered, but the corresponding LOBSEGMENTs are not logged for recovery).
- LOGGING|NOLOGGING can be altered. The NOCACHE option is required to turn off LOGGING, e.g. (NOCACHE NOLOGGING).

Recommendation
Use NOLOGGING only when doing bulk loads or migrating from LONG to LOB. A backup is recommended after bulk operations.
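For example, a hedged sketch of a bulk-load sequence on the foo/bar table; NOCACHE is needed to turn off logging, and a backup should follow because the loaded LOB chunks are not in the redo stream:

ALTER TABLE foo MODIFY LOB (bar) (NOCACHE NOLOGGING);
-- ... run the bulk load (SQL*Loader, INSERT ... SELECT, etc.) ...
ALTER TABLE foo MODIFY LOB (bar) (CACHE);          -- CACHE implicitly re-enables LOGGING
-- back up the affected tablespace(s) afterwards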
Performance GUIDELINE
LOB Loading
In the rest of the document, you will notice LOB API and LONG API methods being
referenced many times. The difference between these APIs is as follows: LOB API:
the LOB data is accessed by first selecting the LOB locator. LONG API: the LOB
data is accessed without using the LOB locator. Points to Note Use array operati
ons for LOB inserts Scalability problem with LOB disable storage in row option B
UG 3180333 - LOB LOADING USING SQLLDR DOESN'T SCALE Problem scenario: 2 (or more
) concurrent sqlldr processes trying to load LOB data (LOB column defined with D
ISABLE STORAGE IN ROW). Loading will run almost serially. Serialization point is
getting a CR copy of the LOBINDEX block. Workaround: use ENABLE STORAGE IN ROW
even for LOBs whose size is greater than 3964 bytes. With ENABLE STORAGE IN ROW,
we store the first 12 chunk addresses in the table row and if the inserted LOB
data size can be addressed within these first 12 chunk addresses, then LOBINDEX
is empty. Generating a CR version of a table block is more efficient and, in s
ome cases, not required. This code path provides much better scalability. Please
note that if LOB data is larger than 12 chunk addresses, then we may see CR con
tention with the ENABLE STORAGE IN ROW option as well. Row Chaining problem with
the use of OCILobWrite API TAR 2760194.995 (UK) - LOADING SMALL (AVG LEN 1120)
CLOB DATA INTO TABLE PRODUCES MUCH CHAINING, WHY? Problem scenario: in 10gR1 (an
d older releases), SQL*Loader uses OCILobWrite API for LOB loading. This leads t
o a row chaining problem, as described below: CREATE TABLE foo (pkey NUMBER NOT
NULL, bar BLOB); Load 3 rows with LOB data size as 3700, 3000 and 3400 respectiv
ely. SQL*Loader loads the LOB columns, first by inserting empty_blob, and second
, by
writing the LOB data using the LOB locator. In the first step, the average row l
ength is pkey length + empty_blob length= 4 + 40 bytes = ~44 bytes. Assuming tha
t DB_BLOCK_SIZE=8192, these 3 rows can be inserted into one data block. In the s
econd step, loading LOB data, the 1st row, 3700 bytes of LOB, and the 2nd row, 3
000 bytes of LOB, can be inserted into the same block. However, for the 3rd row
of LOB data, there is no space left in that block, so the row must be chained. W
orkaround: the first workaround could be to increase the value of PCTFREE. It ma
y help solve this problem, but it unnecessarily wastes space. The second workaro
und is to write a loader program using the LONG API method (please note that an
enhancement request against sqlldr component is filed for this problem, and ther
e is a plan to fix it in the future release). High number of consistent read blo
cks created and examined BUG 3297800 - SQLLDR MAY NEED TO USE LONG API INTERFACE
FOR LOBS LESS THAN 2GB Problem scenario: 2 (or more) concurrent sqlldr processe
s loading LOB data in conventional mode. Using the LOB API method for loading th
e LOB data in a single user environment may also cause a high number of CR block
creation to occur. As mentioned earlier, loading the LOB data is performed in 2
steps. . In the first step, sqlldr inserts empty_blob for LOB columns. Then, wi
th this LOB locator, the LOB data is written using an OCILobWrite call. In a mul
ti-user loading environment, before OCILobWrite is invoked, if other loading pro
cesses change the data block, it may be required to examine the block and, if re
quired, a CR version of the block is created. Workaround: None, other than writi
ng a loader program using the LONG API method.
CPU time and Elapsed time - not reported accurately BUG 3504487 - DBMS_LOB/OCILo
b* CALL RESOURCE USAGE IS NOT REPORTED, AS THEY ARE NOT PART OF A CURSOR Problem
scenario: the work done using LOB API calls is not part of the cursor, so repor
ting resource usage while collecting statistics for the LOB workload, such as th
e CPU time or the elapsed time, may not be accurate. Example to illustrate this
situation: (We have already a table created as: CREATE TABLE foo (pkey NUMBER, b
ar BLOB);)

declare
  lob_loc blob;
  buffer  raw(32767);
  lob_amt binary_integer := 16384;
begin
  buffer := utl_raw.cast_to_raw(rpad('FF', 32767, 'FF'));
  for j in 1..10000 loop
    select bar into lob_loc from foo where pkey = j for update;
    dbms_lob.write(lob_loc, lob_amt, 1, buffer);
    commit;
  end loop;
  dbms_output.put_line('Write test finished ');
end;
/
After executing the above PL/SQL, query V$SQL to measure cpu_time and elapsed_time resource usage.

select sql_text, cpu_time/1000000, elapsed_time/1000000
from v$sql
where sql_text like '%foo%' or sql_text like '%dbms_lob%';

SQL_TEXT                                                      CPU_TIME/1000000 ELAPSED_TIME/1000000
------------------------------------------------------------- ---------------- --------------------
declare lob_loc blob; buffer raw(32767);
lob_amt binary_integer := 16384;
begin buffer := utl_raw.cast_to_raw(rpad('FF', 32767, 'FF'));
for j in 1..10000 loop select bar into lob_loc from foo
where pkey = j for update;
dbms_lob.write(lob_loc, lob_amt, 1, buffer);
commit; end loop;
dbms_output.put_line ('Write test finished '); end;                       19.54                19.28

SELECT bar from foo where pkey = :b1 for update                            5.00                 4.81
As you can see, the PL/SQL block took about 19.54 seconds in CPU time and 19.28
seconds in elapsed time respectively. Out of the 19.54 seconds, the SELECT stateme
nt contributed to 5.00 seconds, so the remaining 14 seconds (approximately) were
spent in dbms_lob.write. This is not reported, because the work done by dbms_lo
b.write is not part of a cursor. Similarly OCILOB API calls were not part of a c
ursor as well. Workaround: None Reads/Writes are done one chunk at a time in syn
chronous way BUG 3437770 - LOB DIRECT PATH READ/WRITES ARE LIMITED BY CHUNK SIZE
Problem scenario: The Oracle server process does NOCACHE LOB reads/writes using
a direct path mechanism. The limitation here is that reads/writes are done one c
hunk at a time in a synchronous way. Consider the example below: Assuming CHUNK
size=8K, DB_BLOCK_SIZE=2k, LOB data = 64K, 8 writes are done (each doing 4 block
s of write at a time) to load the entire LOB data, waiting for each write to com
plete before issuing another write. Workaround: use as many loader processes as
possible to maximize disk throughput. High CPU system time BUG 3437770 - LOB DIR
ECT PATH READ/WRITES ARE LIMITED BY CHUNK SIZE This is probably due to the above
limitation (reads/writes are done one chunk at time in synchronous way) Buffer
cache sizing problem Problem scenario: loading LOB data with the CACHE option wi
ll most likely fill up even a large buffer cache. Under this condition, a degrad
ation in the load rate can be seen if the database writer doesn?t keep up with t
he foreground free buffer requests. Workaround: follow the general instance tuni
ng guidelines - use asynchronous I/O (if not possible, use multiple db writer pr
ocesses) - stripe datafiles across many spindles - use the NOCACHE option The CA
CHE option will also force other online users to perform physical disk reads. Th
is can be avoided by using multiple block sizes. For example, keep online user o
bjects in a 4k (or 8k) block size tablespace and cached LOB data in an 8k (or 16k) block size tablespace. Allocate the required amount of buffer cache for each block size (e.g. db_4k_cache_size=500M, db_8k_cache_size=2000M).

Multi-byte
character set conversion BUG 3324897 - LOBS LESS THAN 3964 BYTES ARE STORED OUT-
-OF-LINE WHILE LOADING USING SQLLDR Problem scenario: When dealing with multi-by
te character set, additional bytes are required for CLOB data. This may cause cl
ient side CLOB data of ~ 4000 bytes, being stored out-of-line in the database. W
orkaround: None
Use array operations for LOB inserts HWM enqueue contention BUG 3537749 - HW ENQ
UEUE CONTENTION WHEN LOADING LOB DATA Problem scenario: given the large size of
LOB data (compare to relational table row size), blocks under HWM are filled rap
idly (under high concurrent load condition) and can cause HW enqueue contention.
Workaround: ASSM with larger extent size may help. RAC environment issues BUG 3
429986 - CONVENTIONAL LOAD OF LOB FROM 2 RAC NODE DO NOT SCALE DUE TO LOG FLUSH
LATENCIES Problem scenario: In a RAC environment, when loading LOB data into one
partition, you may notice contention on 1st level bitmap and LOB header segment
with ASSM. You may notice the same contention on a single instance (with a larg
e number of CPUs) with a high number of concurrent loaders. Workaround: loading
into separate partitions will avoid this situation. If this is not possible, use
range-hash partition instead of just range partitions. FREEPOOLS should help in
this situation, but it did not provide any improvement in our testing. Other LOB performance rel
ated issues BUG 3234751 - EXCESSIVE USAGE OF TEMP TS WHILE LOADING LOB USING SQL
LDR IN CONVENTIONAL MODE BUG 3230541 - LOB LOADING USING SQLLDR DIRECT PATH SLOW
ER THAN CONVENTIONAL BUG 3189083 - OPEN/CLOSE OF DATAFILE FOR EVERY LOB CHUNK WR
ITE

APPENDIX A

LONG API access to LOB datatype

Oracle provides
transparent access to LOBs from applications that use LONG and LONG RAW datatype
s. If your application uses DML (INSERT, UPDATE, DELETE) statements from OCI or
PL/SQL (PRO*C etc) for LONG or LONG RAW data, no application changes are require
d after the column is converted to a LOB. For example, you can SELECT a CLOB int
o a character variable, or a BLOB into a RAW variable. You can define a CLOB col
umn
as SQLT_CHR or a BLOB column as SQLT_BIN and select the LOB data directly into a
CHARACTER or RAW buffer without selecting out the locator first. The following
example demonstrates this concept: create table foo ( pkey number(10) not null,
bar long raw ); set serveroutput on declare in_buf raw(32767); out_buf raw(32767
); out_pkey number; begin in_buf := utl_raw.cast_to_raw (rpad('FF', 32767, 'FF')
); for j in 1..10 loop insert into foo values (j, in_buf) ; commit; end loop; db
ms_output.put_line ('Write test finished '); for j in 1..10 loop select pkey, ba
r into out_pkey, out_buf from foo where pkey=j ; end loop; dbms_output.put_line
('Read test finished '); end; / Now migrate LONG RAW column to BLOB column alter
table foo modify (bar blob); That works. alter table foo modify (bar long raw);
ERROR at line 1: ORA-22859: invalid modification of columns So that does not wo
rk. There are a few things the customer should note when doing the LONG to LOB migration. This ALTER TABLE migration statement runs serially in 9i (what about 8i, 10g?). Indexes need to be rebuilt and statistics recollected. After the LONG to LO
B migration, the above PL/SQL block will work without any modifications. Advance
d LOB features may require the use of the LOB API, described in the Oracle Docum
entation[2]
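A hedged follow-up sketch of the post-migration housekeeping mentioned above (the index name is made up for illustration):

-- after: alter table foo modify (bar blob);
-- rebuild indexes on the table and refresh optimizer statistics, e.g.:
ALTER INDEX foo_pk REBUILD;                        -- hypothetical index name
EXEC DBMS_STATS.GATHER_TABLE_STATS(USER, 'FOO');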
APPENDIX B

Migration from in-line to out-of-line (and out-of-line to in-line) storage

This section explains one major difference between the LOB API and LONG API methods. If a change to the in-line LOB data makes it larger than 3964 bytes, then it is automatically moved out of the table segment and stored out-of-line. If during future operations the LOB data shrinks to under 3964 bytes, it will remain out-of-line.
In other words, once a LOB is migrated out, it is always stored out-of-line irre
spective of its size, with the following exception scenario. Consider a scenario
where you used the LONG API to update the LOB datatype [..] begin in_buf := utl
_raw.cast_to_raw (rpad('FF', 3964, 'FF')); insert into foo values (1, in_buf) ;
commit; [..] Above LOB is stored in-line, update the LOB to a size more than 396
4 bytes [..] in_buf := utl_raw.cast_to_raw (rpad('FF', 4500, 'FF')); update foo
set bar=buffer where pkey=1; commit; [..] After the update LOB is stored out-of-
line, now update the LOB to a size smaller than 3964 bytes [..] in_buf := utl_ra
w.cast_to_raw (rpad('FF', 3000, 'FF')); update foo set bar=buffer where pkey=1;
commit; [..] LOB is stored in-line again. When using the LONG API for update, th
e older LOB is deleted (or space is reclaimed as per RETENTION or PCTVERSION set
ting) and a new LOB is created, with a new LOB locator. This is different from u
sing LOB API, where DML on LOB is possible only using the LOB locator (the LOB l
ocator doesn't change).

APPENDIX C

How LOB data is stored

The purpose of this sec
tion is to differentiate how the ENABLE STORAGE IN ROW
option is different from the DISABLE STORAGE IN ROW option for LOB data size gre
ater than 3964 bytes. It also highlights for customers when the LOBINDEX is really used (the following example scenarios assume Solaris OS and Oracle 9.2.0.4 32-bit).
In-line LOB - LOB size less than 3964 bytes

LOB can be NULL, EMPTY_BLOB, and actual LOB data.

create table foo ( pkey number(10) not null, bar BLOB )
lob (bar) store as (enable storage in row chunk 2k);

declare
  inbuf raw(3964);
begin
  inbuf := utl_raw.cast_to_raw(rpad('FF', 3964, 'FF'));
  insert into foo values (1, NULL);
  insert into foo values (2, EMPTY_BLOB() );
  insert into foo values (3, inbuf );
  commit;
end;
/

note: RPAD('-', 60, '-') ==> a string of 60 '-' characters

Now Foo table rows are:
Pkey=1 Bar=0 byte    (nothing is stored)
Pkey=2 Bar=36 byte   (10 byte metadata + 10 byte LobId + 16 byte Inode)
Pkey=3 Bar=4000 byte (36 byte + 3964 byte of LOB data); nothing stored in LOBINDEX and LOBSEGMENT
(LobId = LOB Locator)

In-line LOB - LOB size = 3965 by
tes (1 byte greater than 3964) LOB is defined as in-line, but actual data is gre
ater than 3964 bytes, so it is moved out - please note this is different from LOB bein
g defined as out-of-line. [..] inbuf := utl_raw.cast_to_raw(rpad('FF', 3965, 'FF
')); insert into foo values (4, inbuf ); [..]
Foo table row Pkey=4 Bar=40 bytes (36 byte + 4 byte for one chunk RDBA). Using t
his RDBA, we directly access LOB data in LOBSEGMENT. Nothing stored in LOBINDEX
(RDBA = Relative Database Block Address)

In-line LOB - LOB size greater than 12 ch
unk addresses With in-line LOB option, we store the first 12 chunk addresses in
the table row. This takes 84 bytes (36+4*12) of size in table row. LOBs that are
less than 12 chunks in size will not have entries in the LOBINDEX if ENABLE STO
RAGE IN ROW is used [..] inbuf := utl_raw.cast_to_raw(rpad('FF', 32767, 'FF'));
insert into foo values (5, inbuf ); [..] Here, we are inserting 32767 bytes of L
OB data, given our chunk size of 2k, we need approximately 16 blocks (32767/2048
). So we store first 12 chunk RDBAs in table row and the rest in LOBINDEX Foo ta
ble row Pkey=5 Bar=84 bytes (36 byte + 4*12 byte for first 12 chunk RDBA). Using
this RDBA, we directly access 12 LOB chunks in LOBSEGMENT. Then using the LobId
, we look up the LOBINDEX to get the rest of the LOB chunk RDBAs.

Out-of-line LOBs - All
LOB sizes With out-of-line LOB option, only LOB locator is stored in table row.
Using LOB locator, we lookup LOBINDEX and find the range of chunk RDBAs, using t
his RDBAs we read LOB data from LOBSEGMENT create table foo (pkey number(10) not
null, bar BLOB) lob (bar) store as (disable storage in row chunk 2k); [..] inbu
f := utl_raw.cast_to_raw(rpad('FF', 20, 'FF')); insert into foo values (6, inbuf
); [..] Foo table rows Pkey=6 Bar=20 bytes (10 byte metadata + 10 byte LobId).
Please note Inode and chunk RDBAs are stored in LOBINDEX.
LOB Performance Guidelines
April 2004 Author: V. Jegraj (Vinayagam.Djegaradjane) Acknowledgements: Vishy Ka
rra, Krishna Kunchithapadam, Cecilia Gervasio
Oracle Corporation World Headquarters 500 Oracle Parkway Redwood Shores, CA 9406
5 U.S.A. Worldwide Inquiries: Phone: +1.650.506.7000 Fax: +1.650.506.7200 www.or
acle.com Copyright (c) 2004 Oracle Corporation. All rights reserved.
-------------------------------------------------------------------------------[
1] In Oracle8i, users can specify storage parameters for LOB index, but from Ora
cle9i Database onwards, specifying storage parameters for a LOB index is ignored
without any error and the index is stored in the same tablespace as the LOB seg
ment, with an Oracle generated index name. [2] Large Objects (LOBs) in Oracle9i
Application Developer's Guide, DBMS_LOB package in Oracle9i Supplied PL/SQL Pack
ages and Types Reference, LOB and FILE Operations in Oracle Call Interface Progr
ammer's Guide. ----------------------------------------------------------------
---------------Copyright 2005, Oracle. All rights reserved. Legal Notices and Term
s of Use.

Note 7:
=======
Doc ID: Note:1071540.6
Subject: Converting a Long datatype to Clob in Oracle8i
Type: BULLETIN
Status: PUBLISHED
Content Type: TEXT/PLAIN
Creation Date: 27-MAY-1999
Last Revision Date: 24-JUN-2004

PURPOSE
This note describes the Oracle 8.1.x function that con
verts data stored in LONG and LONG RAW datatypes to CLOB and BLOB datatypes resp
ectively. This is done using the TO_LOB function. Converting a long datatype to
a Clob: ========================================= The TO_LOB function is provide
d in Oracle 8.1.x to convert LONG and LONG RAW datatypes to CLOB and BLOB dataty
pes respectively. Note: The TO_LOB function is not provided in Oracle 8.0.x. Ora
cle recommends that long datatypes be converted to CLOBs, NCLOB or BLOBs. Note:
When a LOB is stored in a table, the data (LOB VALUE) and a pointer to the data
called a LOB LOCATOR, are stored separately. The data may be stored along with t
he locator in the table itself or in a separate table. The LOB clause in the cre
ate table command can be used to specify whether an attempt should be made to st
ore data in the main table or a separate one. The LOB clause may also be used to
specify a separate tablespace and storage clause for both the LOB table and its
associated index. Example: SQL> create table long_data (c1 number, c2 long); Ta
ble created.

SQL> desc long_data
 Name                            Null?    Type
 ------------------------------- -------- ----
 C1                                       NUMBER
 C2                                       LONG
SQL> insert into long_data values 2 (1, 'This is some long data to be migrated t
o a CLOB'); 1 row created. Note: The TO_LOB function may be used in CREATE TABLE
AS SELECT or INSERT...SELECT statements: Example: SQL> create table test_lobs 2
(c1 number, c2 clob); Table created.

SQL> desc test_lobs
 Name                            Null?    Type
 ------------------------------- -------- ----
 C1                                       NUMBER
 C2                                       CLOB

SQL> insert into test_lobs
  2  select c1, to_lob(c2) from long_data;
1 row created.

SQL> select c2 from test_lobs;

C2
----------------------------------------------
This is some long data to be mi
grated to a CLOB References: =========== Oracle8i SQL Reference Volume 1 [NOTE:6
6046.1] Oracle8i: LOBs
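A small, hedged addition to the note above: TO_LOB also works in CREATE TABLE ... AS SELECT, which avoids creating the target table first (test_lobs2 is a made-up name):

SQL> create table test_lobs2 as
  2  select c1, to_lob(c2) c2 from long_data;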
30.2 How to access LOB data: ============================ 30.2.1 SQL DML: ------
--------Using SQL DML for Basic Operations on LOBs SQL DML provides basic operat
ions -- INSERT, UPDATE, SELECT, DELETE -- that let you make changes to the entir
e values of internal LOBs within the Oracle ORDBMS. To work with parts of intern
al LOBs, you will need to use one of the interfaces that have been developed to
handle more complex requirements. Oracle8 supports read-only operations on exter
nal LOBs. So if you need to update/write to external LOBs, you will have to deve
lop client-side applications suited to your needs. Suppose you have the following
table: create table multimedia_tab ( clip_id number, story clob, flsub nclob, p
hoto bfile, frame blob, sound blob, voiced_ref voiced_type, inseg_ntab inseg_typ
e, music bfile, map_obj map_typ );
create table multimedia_tab ( clip_id number, story clob, flsub nclob, photo bfi
le, frame blob, sound blob, music bfile ); The following INSERT statement popula
tes story with the character string 'JFK interview', sets flsub, frame and sound
to an empty value, sets photo to NULL, and initializes music to point to the fi
le 'JFK_interview' located under the logical directory 'AUDIO_DIR' (see the CREA
TE DIRECTORY command in the Oracle8i Reference). Character strings are inserted u
sing the default character set for the instance. INSERT INTO Multimedia_tab VALU
ES (101, 'JFK interview', EMPTY_CLOB(), NULL, EMPTY_BLOB(), EMPTY_BLOB(), NULL,
NULL, BFILENAME('AUDIO_DIR', 'JFK_interview'), NULL); Similarly, the LOB attribu
tes for the Map_typ column in Multimedia_tab can be initialized to NULL or set t
o empty as shown below. Note that you cannot initialize a LOB object attribute w
ith a literal. INSERT INTO Multimedia_tab VALUES (1, EMPTY_CLOB(), EMPTY_CLOB(),
NULL, EMPTY_BLOB(), EMPTY_BLOB(), NULL, NULL, NULL, Map_typ('Moon Mountain', 23
, 34, 45, 56, EMPTY_BLOB(), NULL)); SELECTing a LOB Performing a SELECT on a LOB
returns the locator instead of the LOB value. In the following PL/SQL fragment y
ou select the LOB locator for story and place it in the PL/SQL locator variable
Image1 defined in the program block. When you use PL/SQL DBMS_LOB functions to m
anipulate the LOB value, you refer to the LOB using the locator. DECLARE Image1
BLOB; ImageNum INTEGER := 101; BEGIN SELECT story INTO Image1 FROM Multimedia_ta
b WHERE clip_id = ImageNum; DBMS_OUTPUT.PUT_LINE('Size of the Image is: ' || DBM
S_LOB.GETLENGTH(Image1)); /* more LOB routines */ END;
DECLARE Image1 BLOB; ImageNum INTEGER := 101; BEGIN SELECT content INTO Image1 F
ROM binaries2 WHERE id = 1211; DBMS_OUTPUT.PUT_LINE('Size of the Image is: ' ||
DBMS_LOB.GETLENGTH(Image1)); /* more LOB routines */ END; / So you can retri
eve all kinds of info with DBMS_LOB
30.2.2 The EMPTY_BLOB and EMPTY_CLOB functions: --------------------------------
--------------The EMPTY_BLOB function returns an empty locator of type BLOB (bin
ary large object). The specification for the EMPTY_BLOB function is: FUNCTION EM
PTY_BLOB RETURN BLOB; You can call this function without any parentheses or with
an empty pair. Here are some examples: INSERT INTO family_member (name, photo)
VALUES ('Steven Feuerstein', EMPTY_BLOB()); DECLARE my_photo BLOB := EMPTY_BLOB;
BEGIN Use EMPTY_BLOB to initialize a BLOB to "empty." Before you can work with
a BLOB, either to reference it in SQL DML statements such as INSERTs or to assig
n it a value in PL/SQL, it must contain a locator. It cannot be NULL. The locato
r might point to an empty BLOB value, but it will be a valid BLOB locator. The E
MPTY_CLOB function returns an empty locator of type CLOB. The specification for
the EMPTY_CLOB function is: FUNCTION EMPTY_CLOB RETURN CLOB; You can call this f
unction without any parentheses or with an empty pair. Here are some examples: I
NSERT INTO diary (entry, text) VALUES (SYSDATE, EMPTY_CLOB()); DECLARE the_big_n
ovel CLOB := EMPTY_CLOB; BEGIN
Use EMPTY_CLOB to initialize a CLOB to "empty". Before you can work with a CLOB,
either to reference it in SQL DML statements such as INSERTs or to assign it a
value in PL/SQL, it must contain a locator. It cannot be NULL. The locator might
point to an empty CLOB value, but it will be a valid CLOB locator.
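A hedged sketch tying this to DBMS_LOB: insert a row with EMPTY_BLOB() and fetch the locator back in the same statement, so it can be written immediately (family_member is the example table above; treat the exact calls as an illustration):

DECLARE
  l_photo BLOB;
BEGIN
  INSERT INTO family_member (name, photo)
  VALUES ('Steven Feuerstein', EMPTY_BLOB())
  RETURNING photo INTO l_photo;                          -- a valid, empty BLOB locator
  DBMS_LOB.WRITEAPPEND(l_photo, 3, HEXTORAW('FFFFFF'));  -- append 3 bytes
  COMMIT;
END;
/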
30.2.3 DBMS_LOB --------------Simple example to get the length of a lob: DECLARE
Image1 BLOB; ImageNum INTEGER := 101; BEGIN SELECT content INTO Image1 FROM bin
aries2 WHERE id = 1211; DBMS_OUTPUT.PUT_LINE('Size of the Image is: ' || DBMS_LO
B.GETLENGTH(Image1)); /* more LOB routines */ END; / DBMS_LOB The DBMS_LOB packa
ge provides subprograms to operate on BLOBs, CLOBs, NCLOBs, BFILEs, and temporar
y LOBs. You can use DBMS_LOB to access and manipulate specific parts of a LOB
or complete LOBs. DBMS_LOB can read and modify BLOBs, CLOBs, and NCLOBs; it prov
ides read-only operations for BFILEs. The bulk of the LOB operations are provide
d by this package. Example: Load Text Files to CLOB then Write Back Out to Disk
- (PL/SQL) Overview The following example is part of the Oracle LOB Examples Col
lection. This example provides two PL/SQL procedures that demonstrate how to pop
ulate a CLOB column with a text file (an XML file) then write it back out to the
file system as a different file name. - Load_CLOB_From_XML_File: This PL/SQL pr
ocedure loads an XML file on disk to a CLOB column using a BFILE reference varia
ble. Notice that I use the new PL/SQL procedure DBMS_LOB.LoadCLOBFromFile(), int
roduced in Oracle 9.2, that handles uploading to a multi-byte UNICODE database.
- Write_CLOB_To_XML_File: This PL/SQL procedure writes the contents of the CLOB
column in the database piecewise back to the file system. Let's first take a loo
k at an example XML file: DatabaseInventoryBig.xml: <?xml version="1.0" ?> <!DOC
TYPE DatabaseInventory (View Source for full doctype...)> - <DatabaseInventory>
- <DatabaseName> <GlobalDatabaseName>production.iDevelopment.info</GlobalDatabas
eName> <OracleSID>production</OracleSID> <DatabaseDomain>iDevelopment.info</Data
baseDomain> <Administrator EmailAlias="jhunter" Extension="6007">Jeffrey Hunter<
/Administrator> <DatabaseAttributes Type="Production" Version="9i" /> <Comments>
The following database should be considered the most stable for up-to-date data.
The backup strategy includes running the database in Archive Log Mode and perfor
ming nightly backups. All new accounts need to be approved by the DBA Group befo
re being created.</Comments> </DatabaseName> - <DatabaseName> <GlobalDatabaseNam
e>development.iDevelopment.info</GlobalDatabaseName> <OracleSID>development</Ora
cleSID> <DatabaseDomain>iDevelopment.info</DatabaseDomain> <Administrator EmailA
lias="jhunter" Extension="6007">Jeffrey Hunter</Administrator> <Administrator Em
ailAlias="mhunter" Extension="6008">Melody Hunter</Administrator> <DatabaseAttri
butes Type="Development" Version="9i" /> <Comments>The following database should
contain all hosted applications. Production data will be exported on a weekly b
asis to ensure all development environments have stable and current data.</Comme
nts> </DatabaseName> - <DatabaseName> <GlobalDatabaseName>testing1.iDevelopment.
info</GlobalDatabaseName> <OracleSID>testing1</OracleSID> <DatabaseDomain>iDevel
opment.info</DatabaseDomain> <Administrator EmailAlias="jhunter" Extension="6007
">Jeffrey Hunter</Administrator> <Administrator EmailAlias="mhunter" Extension="
6008">Melody Hunter</Administrator> <Administrator EmailAlias="ahunter">Alex Hun
ter</Administrator> <DatabaseAttributes Type="Testing" Version="9i" /> <Comments
>The following database will host more than half of the testing for our hosting
environment.</Comments> </DatabaseName> - <DatabaseName> <GlobalDatabaseName>tes
ting2.iDevelopment.info</GlobalDatabaseName> <OracleSID>testing2</OracleSID> <Da
tabaseDomain>iDevelopment.info</DatabaseDomain> <Administrator EmailAlias="jhunt
er" Extension="6007">Jeffrey
Hunter</Administrator> <Administrator EmailAlias="mhunter" Extension="6008">Melo
dy Hunter</Administrator> <Administrator EmailAlias="ahunter">Alex Hunter</Admin
istrator> <DatabaseAttributes Type="Testing" Version="9i" /> <Comments>The follo
wing database will host a testing database to be HR department only.</Comments>
</DatabaseName> - <DatabaseName> <GlobalDatabaseName>testing3.iDevelopment.info<
/GlobalDatabaseName> <OracleSID>testing3</OracleSID> <DatabaseDomain>iDevelopmen
t.info</DatabaseDomain> <Administrator EmailAlias="jhunter" Extension="6007">Jef
frey Hunter</Administrator> <Administrator EmailAlias="mhunter" Extension="6008"
>Melody Hunter</Administrator> <Administrator EmailAlias="ahunter">Alex Hunter</
Administrator> <DatabaseAttributes Type="Testing" Version="9i" /> <Comments>The
following database will host a testing database to be Finance department only.</
Comments> </DatabaseName> - <DatabaseName> <GlobalDatabaseName>testing4.iDevelop
ment.info</GlobalDatabaseName> <OracleSID>testing4</OracleSID> <DatabaseDomain>i
Development.info</DatabaseDomain> <Administrator EmailAlias="jhunter" Extension=
"6007">Jeffrey Hunter</Administrator> <Administrator EmailAlias="mhunter" Extens
ion="6008">Melody Hunter</Administrator> <Administrator EmailAlias="ahunter">Ale
x Hunter</Administrator> <DatabaseAttributes Type="Testing" Version="9i" /> <Com
ments>The following database will host a testing database to be HQ department on
ly.</Comments> </DatabaseName> - <DatabaseName> <GlobalDatabaseName>testing5.iDe
velopment.info</GlobalDatabaseName> <OracleSID>testing5</OracleSID> <DatabaseDom
ain>iDevelopment.info</DatabaseDomain> <Administrator EmailAlias="jhunter" Exten
sion="6007">Jeffrey Hunter</Administrator> <Administrator EmailAlias="mhunter" E
xtension="6008">Melody Hunter</Administrator> <Administrator EmailAlias="ahunter
">Alex Hunter</Administrator> <DatabaseAttributes Type="Testing" Version="9i" />
<Comments>The following database will host a testing database to be Engineering
department only.</Comments> </DatabaseName> - <DatabaseName> <GlobalDatabaseNam
e>testing6.iDevelopment.info</GlobalDatabaseName> <OracleSID>testing6</OracleSID
> <DatabaseDomain>iDevelopment.info</DatabaseDomain> <Administrator EmailAlias="
jhunter" Extension="6007">Jeffrey Hunter</Administrator> <Administrator EmailAli
as="mhunter" Extension="6008">Melody Hunter</Administrator> <Administrator Email
Alias="ahunter">Alex Hunter</Administrator> <DatabaseAttributes Type="Testing" V
ersion="9i" /> <Comments>The following database will host a testing database to
be
used by the
IT department only.</Comments> </DatabaseName> - <DatabaseName> <GlobalDatabaseN
ame>testing7.iDevelopment.info</GlobalDatabaseName> <OracleSID>testing7</OracleS
ID> <DatabaseDomain>iDevelopment.info</DatabaseDomain> <Administrator EmailAlias
="jhunter" Extension="6007">Jeffrey Hunter</Administrator> <Administrator EmailA
lias="mhunter" Extension="6008">Melody Hunter</Administrator> <Administrator Ema
ilAlias="ahunter">Alex Hunter</Administrator> <DatabaseAttributes Type="Testing"
Version="9i" /> <Comments>The following database will host a testing database t
o be used Marketing department only.</Comments> </DatabaseName> - <DatabaseName>
<GlobalDatabaseName>testing8.iDevelopment.info</GlobalDatabaseName> <OracleSID>
testing8</OracleSID> <DatabaseDomain>iDevelopment.info</DatabaseDomain> <Adminis
trator EmailAlias="jhunter" Extension="6007">Jeffrey Hunter</Administrator> <Adm
inistrator EmailAlias="mhunter" Extension="6008">Melody Hunter</Administrator> <
Administrator EmailAlias="ahunter">Alex Hunter</Administrator> <DatabaseAttribut
es Type="Testing" Version="9i" /> <Comments>The following database will host a t
esting database to be used Purchasing department only.</Comments> </DatabaseName
> - <DatabaseName> <GlobalDatabaseName>testing9.iDevelopment.info</GlobalDatabas
eName> <OracleSID>testing9</OracleSID> <DatabaseDomain>iDevelopment.info</Databa
seDomain> <Administrator EmailAlias="jhunter" Extension="6007">Jeffrey Hunter</A
dministrator> <Administrator EmailAlias="mhunter" Extension="6008">Melody Hunter
</Administrator> <Administrator EmailAlias="ahunter">Alex Hunter</Administrator>
<DatabaseAttributes Type="Testing" Version="9i" /> <Comments>The following data
base will host a testing database to be used Accounts Payable department only.</
Comments> </DatabaseName> - <DatabaseName> <GlobalDatabaseName>testing10.iDevelo
pment.info</GlobalDatabaseName> <OracleSID>testing10</OracleSID> <DatabaseDomain
>iDevelopment.info</DatabaseDomain> <Administrator EmailAlias="jhunter" Extensio
n="6007">Jeffrey Hunter</Administrator> <Administrator EmailAlias="mhunter" Exte
nsion="6008">Melody Hunter</Administrator> <Administrator EmailAlias="ahunter">A
lex Hunter</Administrator> <DatabaseAttributes Type="Testing" Version="9i" /> <C
omments>The following database will host a testing database to be used DBA depar
tment for testing OEM.</Comments> </DatabaseName> - <DatabaseName> <GlobalDataba
seName>testing11.iDevelopment.info</GlobalDatabaseName> <OracleSID>testing11</Or
acleSID> <DatabaseDomain>iDevelopment.info</DatabaseDomain>
<Administrator EmailAlias="jhunter" Extension="6007">Jeffrey Hunter</Administrat
or> <Administrator EmailAlias="mhunter" Extension="6008">Melody Hunter</Administ
rator> <Administrator EmailAlias="ahunter">Alex Hunter</Administrator> <Database
Attributes Type="Testing" Version="9i" /> <Comments>The following database will
host a testing database to be used DBA department for testing XMLDB.</Comments>
</DatabaseName> - <DatabaseName> <GlobalDatabaseName>testing12.iDevelopment.info
</GlobalDatabaseName> <OracleSID>testing12</OracleSID> <DatabaseDomain>iDevelopm
ent.info</DatabaseDomain> <Administrator EmailAlias="jhunter" Extension="6007">J
effrey Hunter</Administrator> <Administrator EmailAlias="mhunter" Extension="600
8">Melody Hunter</Administrator> <Administrator EmailAlias="ahunter">Alex Hunter
</Administrator> <DatabaseAttributes Type="Testing" Version="9i" /> <Comments>Th
e following database will host a testing database to be used DBA department for
tuning.</Comments> </DatabaseName> - <DatabaseName> <GlobalDatabaseName>testing1
3.iDevelopment.info</GlobalDatabaseName> <OracleSID>testing13</OracleSID> <Datab
aseDomain>iDevelopment.info</DatabaseDomain> <Administrator EmailAlias="jhunter"
Extension="6007">Jeffrey Hunter</Administrator> <Administrator EmailAlias="mhun
ter" Extension="6008">Melody Hunter</Administrator> <Administrator EmailAlias="a
hunter">Alex Hunter</Administrator> <DatabaseAttributes Type="Testing" Version="
9i" /> <Comments>The following database will host a testing database to be used
DBA department for UAT.</Comments> </DatabaseName> - <DatabaseName> <GlobalDatab
aseName>testing14.iDevelopment.info</GlobalDatabaseName> <OracleSID>testing14</O
racleSID> <DatabaseDomain>iDevelopment.info</DatabaseDomain> <Administrator Emai
lAlias="jhunter" Extension="6007">Jeffrey Hunter</Administrator> <Administrator
EmailAlias="mhunter" Extension="6008">Melody Hunter</Administrator> <Administrat
or EmailAlias="ahunter">Alex Hunter</Administrator> <DatabaseAttributes Type="Te
sting" Version="9i" /> <Comments>The following database will host a testing data
base to be used DBA department for additional monitoring.</Comments> </DatabaseN
ame> - <DatabaseName> <GlobalDatabaseName>testing15.iDevelopment.info</GlobalDat
abaseName> <OracleSID>testing15</OracleSID> <DatabaseDomain>iDevelopment.info</D
atabaseDomain> <Administrator EmailAlias="jhunter" Extension="6007">Jeffrey Hunt
er</Administrator> <Administrator EmailAlias="mhunter" Extension="6008">Melody H
unter</Administrator> <Administrator EmailAlias="ahunter">Alex Hunter</Administr
ator> <DatabaseAttributes Type="Testing" Version="9i" />
<Comments>The following database will host a testing database to be used DBA dep
artment for testing upgrades.</Comments> </DatabaseName> - <DatabaseName> <Globa
lDatabaseName>testing16.iDevelopment.info</GlobalDatabaseName> <OracleSID>testin
g16</OracleSID> <DatabaseDomain>iDevelopment.info</DatabaseDomain> <Administrato
r EmailAlias="jhunter" Extension="6007">Jeffrey Hunter</Administrator> <Administ
rator EmailAlias="mhunter" Extension="6008">Melody Hunter</Administrator> <Admin
istrator EmailAlias="ahunter">Alex Hunter</Administrator> <DatabaseAttributes Ty
pe="Testing" Version="9i" /> <Comments>The following database will host a testin
g database to be used DBA department for certification testing.</Comments> </Data
baseName> - <DatabaseName> <GlobalDatabaseName>testing17.iDevelopment.info</Glob
alDatabaseName> <OracleSID>testing17</OracleSID> <DatabaseDomain>iDevelopment.in
fo</DatabaseDomain> <Administrator EmailAlias="jhunter" Extension="6007">Jeffrey
Hunter</Administrator> <Administrator EmailAlias="mhunter" Extension="6008">Mel
ody Hunter</Administrator> <Administrator EmailAlias="ahunter">Alex Hunter</Admi
nistrator> <DatabaseAttributes Type="Testing" Version="9i" /> <Comments>The foll
owing database will host a testing database to be used DBA department for testin
g of all ERP application modules.</Comments> </DatabaseName> - <DatabaseName> <G
lobalDatabaseName>testing18.iDevelopment.info</GlobalDatabaseName> <OracleSID>te
sting18</OracleSID> <DatabaseDomain>iDevelopment.info</DatabaseDomain> <Administ
rator EmailAlias="jhunter" Extension="6007">Jeffrey Hunter</Administrator> <Admi
nistrator EmailAlias="mhunter" Extension="6008">Melody Hunter</Administrator> <A
dministrator EmailAlias="ahunter">Alex Hunter</Administrator> <DatabaseAttribute
s Type="Testing" Version="9i" /> <Comments>The following database will host a te
sting database to be used DBA department for testing of all ERP application modu
les.</Comments> </DatabaseName> - <DatabaseName> <GlobalDatabaseName>testing19.i
Development.info</GlobalDatabaseName> <OracleSID>testing19</OracleSID> <Database
Domain>iDevelopment.info</DatabaseDomain> <Administrator EmailAlias="jhunter" Ex
tension="6007">Jeffrey Hunter</Administrator> <Administrator EmailAlias="mhunter
" Extension="6008">Melody Hunter</Administrator> <Administrator EmailAlias="ahun
ter">Alex Hunter</Administrator> <DatabaseAttributes Type="Testing" Version="9i"
/> <Comments>The following database will host a testing database to be used DBA
department for testing of all ERP application modules.</Comments> </DatabaseNam
e> - <DatabaseName> <GlobalDatabaseName>testing20.iDevelopment.info</GlobalDatab
aseName> <OracleSID>testing20</OracleSID>
<DatabaseDomain>iDevelopment.info</DatabaseDomain> <Administrator EmailAlias="jh
unter" Extension="6007">Jeffrey Hunter</Administrator> <Administrator EmailAlias
="mhunter" Extension="6008">Melody Hunter</Administrator> <Administrator EmailAl
ias="ahunter">Alex Hunter</Administrator> <DatabaseAttributes Type="Testing" Ver
sion="9i" /> <Comments>The following database will host a testing database to be
used DBA department for testing of all ERP application modules.</Comments> </Da
tabaseName> - <DatabaseName> <GlobalDatabaseName>testing21.iDevelopment.info</Gl
obalDatabaseName> <OracleSID>testing21</OracleSID> <DatabaseDomain>iDevelopment.
info</DatabaseDomain> <Administrator EmailAlias="jhunter" Extension="6007">Jeffr
ey Hunter</Administrator> <Administrator EmailAlias="mhunter" Extension="6008">M
elody Hunter</Administrator> <Administrator EmailAlias="ahunter">Alex Hunter</Ad
ministrator> <DatabaseAttributes Type="Testing" Version="9i" /> <Comments>The fo
llowing database will host a testing database to be used DBA department for test
ing of all ERP application modules.</Comments> </DatabaseName> - <DatabaseName>
<GlobalDatabaseName>testing22.iDevelopment.info</GlobalDatabaseName> <OracleSID>
testing22</OracleSID> <DatabaseDomain>iDevelopment.info</DatabaseDomain> <Admini
strator EmailAlias="jhunter" Extension="6007">Jeffrey Hunter</Administrator> <Ad
ministrator EmailAlias="mhunter" Extension="6008">Melody Hunter</Administrator>
<Administrator EmailAlias="ahunter">Alex Hunter</Administrator> <DatabaseAttribu
tes Type="Testing" Version="9i" /> <Comments>The following database will host a
testing database to be used DBA department for testing of all ERP application mo
dules.</Comments> </DatabaseName> - <DatabaseName> <GlobalDatabaseName>testing23
.iDevelopment.info</GlobalDatabaseName> <OracleSID>testing23</OracleSID> <Databa
seDomain>iDevelopment.info</DatabaseDomain> <Administrator EmailAlias="jhunter"
Extension="6007">Jeffrey Hunter</Administrator> <Administrator EmailAlias="mhunt
er" Extension="6008">Melody Hunter</Administrator> <Administrator EmailAlias="ah
unter">Alex Hunter</Administrator> <DatabaseAttributes Type="Testing" Version="9
i" /> <Comments>The following database will host a testing database to be used D
BA department for testing of all ERP application modules.</Comments> </DatabaseN
ame> + <DatabaseName> <GlobalDatabaseName>testing24.iDevelopment.info</GlobalDat
abaseName> <OracleSID>testing24</OracleSID> <DatabaseDomain>iDevelopment.info</D
atabaseDomain> <Administrator EmailAlias="jhunter" Extension="6007">Jeffrey Hunt
er</Administrator> <Administrator EmailAlias="mhunter" Extension="6008">Melody H
unter</Administrator> <Administrator EmailAlias="ahunter">Alex Hunter</Administr
ator>
by the
by the
by the
by the
<DatabaseAttributes Type="Testing" Version="9i" /> <Comments>The following datab
ase will host a testing database to be used DBA department for testing of all ER
P application modules.</Comments> </DatabaseName> + <DatabaseName> <GlobalDataba
seName>testing25.iDevelopment.info</GlobalDatabaseName> <OracleSID>testing25</Or
acleSID> <DatabaseDomain>iDevelopment.info</DatabaseDomain> <Administrator Email
Alias="jhunter" Extension="6007">Jeffrey Hunter</Administrator> <Administrator E
mailAlias="mhunter" Extension="6008">Melody Hunter</Administrator> <Administrato
r EmailAlias="ahunter">Alex Hunter</Administrator> <DatabaseAttributes Type="Tes
ting" Version="9i" /> <Comments>The following database will host a testing datab
ase to be used DBA department for testing of all ERP application modules.</Comme
nts> </DatabaseName> + <DatabaseName> <GlobalDatabaseName>testing26.iDevelopment
.info</GlobalDatabaseName> <OracleSID>testing26</OracleSID> <DatabaseDomain>iDev
elopment.info</DatabaseDomain> <Administrator EmailAlias="jhunter" Extension="60
07">Jeffrey Hunter</Administrator> <Administrator EmailAlias="mhunter" Extension
="6008">Melody Hunter</Administrator> <Administrator EmailAlias="ahunter">Alex H
unter</Administrator> <DatabaseAttributes Type="Testing" Version="9i" /> <Commen
ts>The following database will host a testing database to be used DBA department
for testing of all ERP application modules.</Comments> </DatabaseName> + <Datab
aseName> <GlobalDatabaseName>testing27.iDevelopment.info</GlobalDatabaseName> <O
racleSID>testing27</OracleSID> <DatabaseDomain>iDevelopment.info</DatabaseDomain
> <Administrator EmailAlias="jhunter" Extension="6007">Jeffrey Hunter</Administr
ator> <Administrator EmailAlias="mhunter" Extension="6008">Melody Hunter</Admini
strator> <Administrator EmailAlias="ahunter">Alex Hunter</Administrator> <Databa
seAttributes Type="Testing" Version="9i" /> <Comments>The following database wil
l host a testing database to be used DBA department for testing of all ERP appli
cation modules.</Comments> </DatabaseName> + <DatabaseName> <GlobalDatabaseName>
testing28.iDevelopment.info</GlobalDatabaseName> <OracleSID>testing28</OracleSID
> <DatabaseDomain>iDevelopment.info</DatabaseDomain> <Administrator EmailAlias="
jhunter" Extension="6007">Jeffrey Hunter</Administrator> <Administrator EmailAli
as="mhunter" Extension="6008">Melody Hunter</Administrator> <Administrator Email
Alias="ahunter">Alex Hunter</Administrator> <DatabaseAttributes Type="Testing" V
ersion="9i" /> <Comments>The following database will host a testing database to
be used DBA department for testing of all ERP application modules.</Comments> </
DatabaseName> - <DatabaseName> <GlobalDatabaseName>testing29.iDevelopment.info</
GlobalDatabaseName>
by the
by the
by the
by the
by the
<OracleSID>testing29</OracleSID> <DatabaseDomain>iDevelopment.info</DatabaseDoma
in> <Administrator EmailAlias="jhunter" Extension="6007">Jeffrey Hunter</Adminis
trator> <Administrator EmailAlias="mhunter" Extension="6008">Melody Hunter</Admi
nistrator> <Administrator EmailAlias="ahunter">Alex Hunter</Administrator> <Data
baseAttributes Type="Testing" Version="9i" /> <Comments>The following database w
ill host a testing database to be used DBA department for testing of all ERP app
lication modules.</Comments> </DatabaseName> - <DatabaseName> <GlobalDatabaseNam
e>testing30.iDevelopment.info</GlobalDatabaseName> <OracleSID>testing30</OracleS
ID> <DatabaseDomain>iDevelopment.info</DatabaseDomain> <Administrator EmailAlias
="jhunter" Extension="6007">Jeffrey Hunter</Administrator> <Administrator EmailA
lias="mhunter" Extension="6008">Melody Hunter</Administrator> <Administrator Ema
ilAlias="ahunter">Alex Hunter</Administrator> <DatabaseAttributes Type="Testing"
Version="9i" /> <Comments>The following database will host a testing database t
o be used DBA department for testing of all ERP application modules.</Comments>
</DatabaseName> - <DatabaseName> <GlobalDatabaseName>testing31.iDevelopment.info
</GlobalDatabaseName> <OracleSID>testing31</OracleSID> <DatabaseDomain>iDevelopm
ent.info</DatabaseDomain> <Administrator EmailAlias="jhunter" Extension="6007">J
effrey Hunter</Administrator> <Administrator EmailAlias="mhunter" Extension="600
8">Melody Hunter</Administrator> <Administrator EmailAlias="ahunter">Alex Hunter
</Administrator> <DatabaseAttributes Type="Testing" Version="9i" /> <Comments>Th
e following database will host a testing database to be used DBA department for
testing of all ERP application modules.</Comments> </DatabaseName> - <DatabaseNa
me> <GlobalDatabaseName>testing32.iDevelopment.info</GlobalDatabaseName> <Oracle
SID>testing32</OracleSID> <DatabaseDomain>iDevelopment.info</DatabaseDomain> <Ad
ministrator EmailAlias="jhunter" Extension="6007">Jeffrey Hunter</Administrator>
<Administrator EmailAlias="mhunter" Extension="6008">Melody Hunter</Administrat
or> <Administrator EmailAlias="ahunter">Alex Hunter</Administrator> <DatabaseAtt
ributes Type="Testing" Version="9i" /> <Comments>The following database will hos
t a testing database to be used DBA department for testing of all ERP applicatio
n modules.</Comments> </DatabaseName> - <DatabaseName> <GlobalDatabaseName>testi
ng33.iDevelopment.info</GlobalDatabaseName> <OracleSID>testing33</OracleSID> <Da
tabaseDomain>iDevelopment.info</DatabaseDomain> <Administrator EmailAlias="jhunt
er" Extension="6007">Jeffrey Hunter</Administrator> <Administrator EmailAlias="m
hunter" Extension="6008">Melody Hunter</Administrator>
by the
by the
by the
by the
<Administrator EmailAlias="ahunter">Alex Hunter</Administrator> <DatabaseAttribu
tes Type="Testing" Version="9i" /> <Comments>The following database will host a
testing database to be used DBA department for testing of all ERP application mo
dules.</Comments> </DatabaseName> - <DatabaseName> <GlobalDatabaseName>testing34
.iDevelopment.info</GlobalDatabaseName> <OracleSID>testing34</OracleSID> <Databa
seDomain>iDevelopment.info</DatabaseDomain> <Administrator EmailAlias="jhunter"
Extension="6007">Jeffrey Hunter</Administrator> <Administrator EmailAlias="mhunt
er" Extension="6008">Melody Hunter</Administrator> <Administrator EmailAlias="ah
unter">Alex Hunter</Administrator> <DatabaseAttributes Type="Testing" Version="9
i" /> <Comments>The following database will host a testing database to be used D
BA department for testing of all ERP application modules.</Comments> </DatabaseN
ame> - <DatabaseName> <GlobalDatabaseName>testing35.iDevelopment.info</GlobalDat
abaseName> <OracleSID>testing35</OracleSID> <DatabaseDomain>iDevelopment.info</D
atabaseDomain> <Administrator EmailAlias="jhunter" Extension="6007">Jeffrey Hunt
er</Administrator> <Administrator EmailAlias="mhunter" Extension="6008">Melody H
unter</Administrator> <Administrator EmailAlias="ahunter">Alex Hunter</Administr
ator> <DatabaseAttributes Type="Testing" Version="9i" /> <Comments>The following
database will host a testing database to be used DBA department for testing of
all ERP application modules.</Comments> </DatabaseName> - <DatabaseName> <Global
DatabaseName>testing36.iDevelopment.info</GlobalDatabaseName> <OracleSID>testing
36</OracleSID> <DatabaseDomain>iDevelopment.info</DatabaseDomain> <Administrator
EmailAlias="jhunter" Extension="6007">Jeffrey Hunter</Administrator> <Administr
ator EmailAlias="mhunter" Extension="6008">Melody Hunter</Administrator> <Admini
strator EmailAlias="ahunter">Alex Hunter</Administrator> <DatabaseAttributes Typ
e="Testing" Version="9i" /> <Comments>The following database will host a testing
database to be used DBA department for testing of all ERP application modules.<
/Comments> </DatabaseName> - <DatabaseName> <GlobalDatabaseName>testing37.iDevel
opment.info</GlobalDatabaseName> <OracleSID>testing37</OracleSID> <DatabaseDomai
n>iDevelopment.info</DatabaseDomain> <Administrator EmailAlias="jhunter" Extensi
on="6007">Jeffrey Hunter</Administrator> <Administrator EmailAlias="mhunter" Ext
ension="6008">Melody Hunter</Administrator> <Administrator EmailAlias="ahunter">
Alex Hunter</Administrator> <DatabaseAttributes Type="Testing" Version="9i" /> <
Comments>The following database will host a testing database to be used DBA depa
rtment for testing of all ERP application modules.</Comments> </DatabaseName> -
<DatabaseName>
by the
by the
by the
by the
by the
<GlobalDatabaseName>testing38.iDevelopment.info</GlobalDatabaseName> <OracleSID>
testing38</OracleSID> <DatabaseDomain>iDevelopment.info</DatabaseDomain> <Admini
strator EmailAlias="jhunter" Extension="6007">Jeffrey Hunter</Administrator> <Ad
ministrator EmailAlias="mhunter" Extension="6008">Melody Hunter</Administrator>
<Administrator EmailAlias="ahunter">Alex Hunter</Administrator> <DatabaseAttribu
tes Type="Testing" Version="9i" /> <Comments>The following database will host a
testing database to be used DBA department for testing of all ERP application mo
dules.</Comments> </DatabaseName> - <DatabaseName> <GlobalDatabaseName>testing39
.iDevelopment.info</GlobalDatabaseName> <OracleSID>testing39</OracleSID> <Databa
seDomain>iDevelopment.info</DatabaseDomain> <Administrator EmailAlias="jhunter"
Extension="6007">Jeffrey Hunter</Administrator> <Administrator EmailAlias="mhunt
er" Extension="6008">Melody Hunter</Administrator> <Administrator EmailAlias="ah
unter">Alex Hunter</Administrator> <DatabaseAttributes Type="Testing" Version="9
i" /> <Comments>The following database will host a testing database to be used D
BA department for testing of all ERP application modules.</Comments> </DatabaseN
ame> - <DatabaseName> <GlobalDatabaseName>testing40.iDevelopment.info</GlobalDat
abaseName> <OracleSID>testing40</OracleSID> <DatabaseDomain>iDevelopment.info</D
atabaseDomain> <Administrator EmailAlias="jhunter" Extension="6007">Jeffrey Hunt
er</Administrator> <Administrator EmailAlias="mhunter" Extension="6008">Melody H
unter</Administrator> <Administrator EmailAlias="ahunter">Alex Hunter</Administr
ator> <DatabaseAttributes Type="Testing" Version="9i" /> <Comments>The following
database will host a testing database to be used DBA department for testing of
all ERP application modules.</Comments> </DatabaseName> - <DatabaseName> <Global
DatabaseName>testing41.iDevelopment.info</GlobalDatabaseName> <OracleSID>testing
41</OracleSID> <DatabaseDomain>iDevelopment.info</DatabaseDomain> <Administrator
EmailAlias="jhunter" Extension="6007">Jeffrey Hunter</Administrator> <Administr
ator EmailAlias="mhunter" Extension="6008">Melody Hunter</Administrator> <Admini
strator EmailAlias="ahunter">Alex Hunter</Administrator> <DatabaseAttributes Typ
e="Testing" Version="9i" /> <Comments>The following database will host a testing
database to be used DBA department for testing of all ERP application modules.<
/Comments> </DatabaseName> - <DatabaseName> <GlobalDatabaseName>testing42.iDevel
opment.info</GlobalDatabaseName> <OracleSID>testing42</OracleSID> <DatabaseDomai
n>iDevelopment.info</DatabaseDomain> <Administrator EmailAlias="jhunter" Extensi
on="6007">Jeffrey Hunter</Administrator> <Administrator EmailAlias="mhunter" Ext
ension="6008">Melody
by the
by the
by the
by the
Hunter</Administrator> <Administrator EmailAlias="ahunter">Alex Hunter</Administ
rator> <DatabaseAttributes Type="Testing" Version="9i" /> <Comments>The followin
g database will host a testing database to be used DBA department for testing of
all ERP application modules.</Comments> </DatabaseName> - <DatabaseName> <Globa
lDatabaseName>testing43.iDevelopment.info</GlobalDatabaseName> <OracleSID>testin
g43</OracleSID> <DatabaseDomain>iDevelopment.info</DatabaseDomain> <Administrato
r EmailAlias="jhunter" Extension="6007">Jeffrey Hunter</Administrator> <Administ
rator EmailAlias="mhunter" Extension="6008">Melody Hunter</Administrator> <Admin
istrator EmailAlias="ahunter">Alex Hunter</Administrator> <DatabaseAttributes Ty
pe="Testing" Version="9i" /> <Comments>The following database will host a testin
g database to be used DBA department for testing of all ERP application modules.
</Comments> </DatabaseName> - <DatabaseName> <GlobalDatabaseName>testing44.iDeve
lopment.info</GlobalDatabaseName> <OracleSID>testing44</OracleSID> <DatabaseDoma
in>iDevelopment.info</DatabaseDomain> <Administrator EmailAlias="jhunter" Extens
ion="6007">Jeffrey Hunter</Administrator> <Administrator EmailAlias="mhunter" Ex
tension="6008">Melody Hunter</Administrator> <Administrator EmailAlias="ahunter"
>Alex Hunter</Administrator> <DatabaseAttributes Type="Testing" Version="9i" />
<Comments>The following database will host a testing database to be used DBA dep
artment for testing of all ERP application modules.</Comments> </DatabaseName> -
<DatabaseName> <GlobalDatabaseName>testing45.iDevelopment.info</GlobalDatabaseN
ame> <OracleSID>testing45</OracleSID> <DatabaseDomain>iDevelopment.info</Databas
eDomain> <Administrator EmailAlias="jhunter" Extension="6007">Jeffrey Hunter</Ad
ministrator> <Administrator EmailAlias="mhunter" Extension="6008">Melody Hunter<
/Administrator> <Administrator EmailAlias="ahunter">Alex Hunter</Administrator>
<DatabaseAttributes Type="Testing" Version="9i" /> <Comments>The following datab
ase will host a testing database to be used DBA department for testing of all ER
P application modules.</Comments> </DatabaseName> - <DatabaseName> <GlobalDataba
seName>testing46.iDevelopment.info</GlobalDatabaseName> <OracleSID>testing46</Or
acleSID> <DatabaseDomain>iDevelopment.info</DatabaseDomain> <Administrator Email
Alias="jhunter" Extension="6007">Jeffrey Hunter</Administrator> <Administrator E
mailAlias="mhunter" Extension="6008">Melody Hunter</Administrator> <Administrato
r EmailAlias="ahunter">Alex Hunter</Administrator> <DatabaseAttributes Type="Tes
ting" Version="9i" /> <Comments>The following database will host a testing datab
ase to be used DBA department for testing of all ERP application modules.</Comme
nts> </DatabaseName>
by the
by the
by the
by the
by the
- <DatabaseName> <GlobalDatabaseName>testing47.iDevelopment.info</GlobalDatabase
Name> <OracleSID>testing47</OracleSID> <DatabaseDomain>iDevelopment.info</Databa
seDomain> <Administrator EmailAlias="jhunter" Extension="6007">Jeffrey Hunter</A
dministrator> <Administrator EmailAlias="mhunter" Extension="6008">Melody Hunter
</Administrator> <Administrator EmailAlias="ahunter">Alex Hunter</Administrator>
<DatabaseAttributes Type="Testing" Version="9i" /> <Comments>The following data
base will host a testing database to be used DBA department for testing of all E
RP application modules.</Comments> </DatabaseName> - <DatabaseName> <GlobalDatab
aseName>testing48.iDevelopment.info</GlobalDatabaseName> <OracleSID>testing48</O
racleSID> <DatabaseDomain>iDevelopment.info</DatabaseDomain> <Administrator Emai
lAlias="jhunter" Extension="6007">Jeffrey Hunter</Administrator> <Administrator
EmailAlias="mhunter" Extension="6008">Melody Hunter</Administrator> <Administrat
or EmailAlias="ahunter">Alex Hunter</Administrator> <DatabaseAttributes Type="Te
sting" Version="9i" /> <Comments>The following database will host a testing data
base to be used DBA department for testing of all ERP application modules.</Comm
ents> </DatabaseName> - <DatabaseName> <GlobalDatabaseName>testing49.iDevelopmen
t.info</GlobalDatabaseName> <OracleSID>testing49</OracleSID> <DatabaseDomain>iDe
velopment.info</DatabaseDomain> <Administrator EmailAlias="jhunter" Extension="6
007">Jeffrey Hunter</Administrator> <Administrator EmailAlias="mhunter" Extensio
n="6008">Melody Hunter</Administrator> <Administrator EmailAlias="ahunter">Alex
Hunter</Administrator> <DatabaseAttributes Type="Testing" Version="9i" /> <Comme
nts>The following database will host a testing database to be used DBA departmen
t for testing of all ERP application modules.</Comments> </DatabaseName> - <Data
baseName> <GlobalDatabaseName>testing50.iDevelopment.info</GlobalDatabaseName> <
OracleSID>testing50</OracleSID> <DatabaseDomain>iDevelopment.info</DatabaseDomai
n> <Administrator EmailAlias="jhunter" Extension="6007">Jeffrey Hunter</Administ
rator> <Administrator EmailAlias="mhunter" Extension="6008">Melody Hunter</Admin
istrator> <Administrator EmailAlias="ahunter">Alex Hunter</Administrator> <Datab
aseAttributes Type="Testing" Version="9i" /> <Comments>The following database wi
ll host a testing database to be used DBA department for testing of all ERP appl
ication modules.</Comments> </DatabaseName> - <DatabaseName> <GlobalDatabaseName
>testing51.iDevelopment.info</GlobalDatabaseName> <OracleSID>testing51</OracleSI
D> <DatabaseDomain>iDevelopment.info</DatabaseDomain> <Administrator EmailAlias=
"jhunter" Extension="6007">Jeffrey Hunter</Administrator>
by the
by the
by the
by the
<Administrator EmailAlias="mhunter" Extension="6008">Melody Hunter</Administrato
r> <Administrator EmailAlias="ahunter">Alex Hunter</Administrator> <DatabaseAttr
ibutes Type="Testing" Version="9i" /> <Comments>The following database will host
a testing database to be used DBA department for testing of all ERP application
modules.</Comments> </DatabaseName> - <DatabaseName> <GlobalDatabaseName>testin
g52.iDevelopment.info</GlobalDatabaseName> <OracleSID>testing52</OracleSID> <Dat
abaseDomain>iDevelopment.info</DatabaseDomain> <Administrator EmailAlias="jhunte
r" Extension="6007">Jeffrey Hunter</Administrator> <Administrator EmailAlias="mh
unter" Extension="6008">Melody Hunter</Administrator> <Administrator EmailAlias=
"ahunter">Alex Hunter</Administrator> <DatabaseAttributes Type="Testing" Version
="9i" /> <Comments>The following database will host a testing database to be use
d DBA department for testing of all ERP application modules.</Comments> </Databa
seName> - <DatabaseName> <GlobalDatabaseName>testing53.iDevelopment.info</Global
DatabaseName> <OracleSID>testing53</OracleSID> <DatabaseDomain>iDevelopment.info
</DatabaseDomain> <Administrator EmailAlias="jhunter" Extension="6007">Jeffrey H
unter</Administrator> <Administrator EmailAlias="mhunter" Extension="6008">Melod
y Hunter</Administrator> <Administrator EmailAlias="ahunter">Alex Hunter</Admini
strator> <DatabaseAttributes Type="Testing" Version="9i" /> <Comments>The follow
ing database will host a testing database to be used DBA department for testing
of all ERP application modules.</Comments> </DatabaseName> - <DatabaseName> <Glo
balDatabaseName>testing54.iDevelopment.info</GlobalDatabaseName> <OracleSID>test
ing54</OracleSID> <DatabaseDomain>iDevelopment.info</DatabaseDomain> <Administra
tor EmailAlias="jhunter" Extension="6007">Jeffrey Hunter</Administrator> <Admini
strator EmailAlias="mhunter" Extension="6008">Melody Hunter</Administrator> <Adm
inistrator EmailAlias="ahunter">Alex Hunter</Administrator> <DatabaseAttributes
Type="Testing" Version="9i" /> <Comments>The following database will host a test
ing database to be used DBA department for testing of all ERP application module
s.</Comments> </DatabaseName> - <DatabaseName> <GlobalDatabaseName>testing55.iDe
velopment.info</GlobalDatabaseName> <OracleSID>testing55</OracleSID> <DatabaseDo
main>iDevelopment.info</DatabaseDomain> <Administrator EmailAlias="jhunter" Exte
nsion="6007">Jeffrey Hunter</Administrator> <Administrator EmailAlias="mhunter"
Extension="6008">Melody Hunter</Administrator> <Administrator EmailAlias="ahunte
r">Alex Hunter</Administrator> <DatabaseAttributes Type="Testing" Version="9i" /
> <Comments>The following database will host a testing database to be used DBA d
epartment for testing of all ERP application modules.</Comments>
by the
by the
by the
by the
by the
</DatabaseName> - <DatabaseName> <GlobalDatabaseName>testing56.iDevelopment.info
</GlobalDatabaseName> <OracleSID>testing56</OracleSID> <DatabaseDomain>iDevelopm
ent.info</DatabaseDomain> <Administrator EmailAlias="jhunter" Extension="6007">J
effrey Hunter</Administrator> <Administrator EmailAlias="mhunter" Extension="600
8">Melody Hunter</Administrator> <Administrator EmailAlias="ahunter">Alex Hunter
</Administrator> <DatabaseAttributes Type="Testing" Version="9i" /> <Comments>Th
e following database will host a testing database to be used DBA department for
testing of all ERP application modules.</Comments> </DatabaseName> - <DatabaseNa
me> <GlobalDatabaseName>testing57.iDevelopment.info</GlobalDatabaseName> <Oracle
SID>testing57</OracleSID> <DatabaseDomain>iDevelopment.info</DatabaseDomain> <Ad
ministrator EmailAlias="jhunter" Extension="6007">Jeffrey Hunter</Administrator>
<Administrator EmailAlias="mhunter" Extension="6008">Melody Hunter</Administrat
or> <Administrator EmailAlias="ahunter">Alex Hunter</Administrator> <DatabaseAtt
ributes Type="Testing" Version="9i" /> <Comments>The following database will hos
t a testing database to be used DBA department for testing of all ERP applicatio
n modules.</Comments> </DatabaseName> - <DatabaseName> <GlobalDatabaseName>testi
ng58.iDevelopment.info</GlobalDatabaseName> <OracleSID>testing58</OracleSID> <Da
tabaseDomain>iDevelopment.info</DatabaseDomain> <Administrator EmailAlias="jhunt
er" Extension="6007">Jeffrey Hunter</Administrator> <Administrator EmailAlias="m
hunter" Extension="6008">Melody Hunter</Administrator> <Administrator EmailAlias
="ahunter">Alex Hunter</Administrator> <DatabaseAttributes Type="Testing" Versio
n="9i" /> <Comments>The following database will host a testing database to be us
ed DBA department for testing of all ERP application modules.</Comments> </Datab
aseName> - <DatabaseName> <GlobalDatabaseName>testing59.iDevelopment.info</Globa
lDatabaseName> <OracleSID>testing59</OracleSID> <DatabaseDomain>iDevelopment.inf
o</DatabaseDomain> <Administrator EmailAlias="jhunter" Extension="6007">Jeffrey
Hunter</Administrator> <Administrator EmailAlias="mhunter" Extension="6008">Melo
dy Hunter</Administrator> <Administrator EmailAlias="ahunter">Alex Hunter</Admin
istrator> <DatabaseAttributes Type="Testing" Version="9i" /> <Comments>The follo
wing database will host a testing database to be used DBA department for testing
of all ERP application modules.</Comments> </DatabaseName> - <DatabaseName> <Gl
obalDatabaseName>testing60.iDevelopment.info</GlobalDatabaseName> <OracleSID>tes
ting60</OracleSID> <DatabaseDomain>iDevelopment.info</DatabaseDomain> <Administr
ator EmailAlias="jhunter" Extension="6007">Jeffrey
by the
by the
by the
by the
Hunter</Administrator> <Administrator EmailAlias="mhunter" Extension="6008">Melo
dy Hunter</Administrator> <Administrator EmailAlias="ahunter">Alex Hunter</Admin
istrator> <DatabaseAttributes Type="Testing" Version="9i" /> <Comments>The follo
wing database will host a testing database to be used by the Sales Force Automat
ion department.</Comments> </DatabaseName> </DatabaseInventory After downloading
After downloading the above XML file, create all Oracle database objects:

DROP TABLE test_clob CASCADE CONSTRAINTS
/
Table dropped.

CREATE TABLE test_clob (
    id          NUMBER(15)
  , file_name   VARCHAR2(1000)
  , xml_file    CLOB
  , timestamp   DATE
)
/
Table created.

CREATE OR REPLACE DIRECTORY EXAMPLE_LOB_DIR AS '/u01/app/oracle/lobs'
/
Directory created.

Now, let's define our two example procedures:

CREATE OR REPLACE PROCEDURE Load_CLOB_From_XML_File IS
    dest_clob   CLOB;
    src_clob    BFILE  := BFILENAME('EXAMPLE_LOB_DIR', 'DatabaseInventoryBig.xml');
    dst_offset  NUMBER := 1;
    src_offset  NUMBER := 1;
    lang_ctx    NUMBER := DBMS_LOB.DEFAULT_LANG_CTX;
    warning     NUMBER;
BEGIN
    DBMS_OUTPUT.ENABLE(100000);
    -- -----------------------------------------------------------------------
    -- THE FOLLOWING BLOCK OF CODE WILL ATTEMPT TO INSERT / WRITE THE CONTENTS
    -- OF AN XML FILE TO A CLOB COLUMN. IN THIS CASE, I WILL USE THE NEW
    -- DBMS_LOB.LoadCLOBFromFile() API WHICH *DOES* SUPPORT MULTI-BYTE
    -- CHARACTER SET DATA. IF YOU ARE NOT USING ORACLE 9iR2 AND/OR DO NOT NEED
    -- TO SUPPORT LOADING TO A MULTI-BYTE CHARACTER SET DATABASE, USE THE
    -- FOLLOWING FOR LOADING FROM A FILE:
    --     DBMS_LOB.LoadFromFile(
    --         DEST_LOB => dest_clob
    --       , SRC_LOB  => src_clob
    --       , AMOUNT   => DBMS_LOB.GETLENGTH(src_clob)
    --     );
    -- -----------------------------------------------------------------------
    INSERT INTO test_clob(id, file_name, xml_file, timestamp)
      VALUES (1001, 'DatabaseInventoryBig.xml', empty_clob(), sysdate)
      RETURNING xml_file INTO dest_clob;

    -- -------------------------------------
    -- OPENING THE SOURCE BFILE IS MANDATORY
    -- -------------------------------------
    DBMS_LOB.OPEN(src_clob, DBMS_LOB.LOB_READONLY);
    DBMS_LOB.LoadCLOBFromFile(
        DEST_LOB     => dest_clob
      , SRC_BFILE    => src_clob
      , AMOUNT       => DBMS_LOB.GETLENGTH(src_clob)
      , DEST_OFFSET  => dst_offset
      , SRC_OFFSET   => src_offset
      , BFILE_CSID   => DBMS_LOB.DEFAULT_CSID
      , LANG_CONTEXT => lang_ctx
      , WARNING      => warning
    );
    DBMS_LOB.CLOSE(src_clob);
    COMMIT;
    DBMS_OUTPUT.PUT_LINE('Loaded XML File using DBMS_LOB.LoadCLOBFromFile: (ID=1001).');
END;
/

SQL> @load_clob_from_xml_file.sql
Procedure created.

CREATE OR REPLACE PROCEDURE Write_CLOB_To_XML_File IS
    clob_loc          CLOB;
    buffer            VARCHAR2(32767);
    buffer_size       CONSTANT BINARY_INTEGER := 32767;
    amount            BINARY_INTEGER;
    offset            NUMBER(38);
    file_handle       UTL_FILE.FILE_TYPE;
    directory_name    CONSTANT VARCHAR2(80) := 'EXAMPLE_LOB_DIR';
    new_xml_filename  CONSTANT VARCHAR2(80) := 'DatabaseInventoryBig_2.xml';
BEGIN
    DBMS_OUTPUT.ENABLE(100000);
    -- ----------------
    -- GET CLOB LOCATOR
    -- ----------------
    SELECT xml_file INTO clob_loc
    FROM   test_clob
    WHERE  id = 1001;

    -- -------------------------------
    -- OPEN NEW XML FILE IN WRITE MODE
    -- -------------------------------
    file_handle := UTL_FILE.FOPEN(
                       location     => directory_name,
                       filename     => new_xml_filename,
                       open_mode    => 'w',
                       max_linesize => buffer_size);
    amount := buffer_size;
    offset := 1;

    -- ----------------------------------------------
    -- READ FROM CLOB XML / WRITE OUT NEW XML TO DISK
    -- ----------------------------------------------
    WHILE amount >= buffer_size
    LOOP
        DBMS_LOB.READ(
            lob_loc => clob_loc,
            amount  => amount,
            offset  => offset,
            buffer  => buffer);
        offset := offset + amount;
        UTL_FILE.PUT(file => file_handle, buffer => buffer);
        UTL_FILE.FFLUSH(file => file_handle);
    END LOOP;
    UTL_FILE.FCLOSE(file => file_handle);
END;
/

SQL> @write_clob_to_xml_file.sql
Procedure created.

Now let's test it:
SQL> set serveroutput on

SQL> exec Load_CLOB_From_XML_File
Loaded XML File using DBMS_LOB.LoadCLOBFromFile: (ID=1001).
PL/SQL procedure successfully completed.

SQL> exec Write_CLOB_To_XML_File
PL/SQL procedure successfully completed.

SQL> SELECT id, DBMS_LOB.GETLENGTH(xml_file) Length FROM test_clob;

        ID     LENGTH
---------- ----------
      1001      41113

SQL> host ls -l DatabaseInventory*
-rw-r--r--   1 oracle   dba      41113 Sep 20 15:02 DatabaseInventoryBig.xml
-rw-r--r--   1 oracle   dba      41113 Sep 20 15:48 DatabaseInventoryBig_2.xml
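As a quick spot-check that the CLOB really contains the document (a small sketch
only, using the test_clob row created above), DBMS_LOB.SUBSTR can show the first
part of the stored XML straight from SQL*Plus:

SQL> SELECT DBMS_LOB.SUBSTR(xml_file, 200, 1) AS first_200_chars
     FROM   test_clob
     WHERE  id = 1001;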
30.2.4 REMOTE SELECTS, INSERTS, UPDATES:
----------------------------------------
Valid operations on LOB columns in remote tables include:

CREATE TABLE t AS select * from table1@remote_site;
INSERT INTO t select * from table1@remote_site;
UPDATE t set lobcol = (select lobcol from table1@remote_site);
INSERT INTO table1@remote...
UPDATE table1@remote...
DELETE table1@remote...
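A minimal end-to-end sketch of the first two operations (the link name
remote_site, the credentials and the table names below are hypothetical, not
from the list above):

CREATE DATABASE LINK remote_site
  CONNECT TO scott IDENTIFIED BY tiger
  USING 'REMOTEDB';

-- copy structure and LOB data across in one statement
CREATE TABLE t AS SELECT * FROM table1@remote_site;

-- or append into an existing local copy
INSERT INTO t SELECT * FROM table1@remote_site;
COMMIT;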
30.2.5: Export a BLOB to a file with Java:
------------------------------------------
First we create a Java stored procedure that accepts a file name and a BLOB as
parameters:

CREATE OR REPLACE JAVA SOURCE NAMED "BlobHandler" AS
import java.lang.*;
import java.sql.*;
import oracle.sql.*;
import java.io.*;

public class BlobHandler {
  public static void ExportBlob(String myFile, BLOB myBlob) throws Exception
  {
    // Bind the image object to the database object
    // Open streams for the output file and the blob
    File binaryFile = new File(myFile);
    FileOutputStream outStream = new FileOutputStream(binaryFile);
    InputStream inStream = myBlob.getBinaryStream();

    // Get the optimum buffer size and use this to create the read/write buffer
    int size = myBlob.getBufferSize();
    byte[] buffer = new byte[size];
    int length = -1;

    // Transfer the data
    while ((length = inStream.read(buffer)) != -1) {
      outStream.write(buffer, 0, length);
      outStream.flush();
    }

    // Close everything down
    inStream.close();
    outStream.close();
  }
};
/
ALTER java source "BlobHandler" compile;
show errors java source "BlobHandler"

Next we publish the Java call specification so we can access it via PL/SQL:

CREATE OR REPLACE PROCEDURE ExportBlob (p_file IN VARCHAR2, p_blob IN BLOB)
AS LANGUAGE JAVA
NAME 'BlobHandler.ExportBlob(java.lang.String, oracle.sql.BLOB)';
/

Next we grant the Oracle JVM the relevant filesystem permissions:

EXEC Dbms_Java.Grant_Permission( 'SCHEMA-NAME', 'java.io.FilePermission', '<<ALL FILES>>', 'read, write, execute, delete');

Finally we can test it:

CREATE TABLE tab1 (col1 BLOB);
INSERT INTO tab1 VALUES(empty_blob());
COMMIT;

DECLARE
  v_blob BLOB;
BEGIN
  SELECT col1
  INTO   v_blob
  FROM   tab1;

  ExportBlob('c:\MyBlob',v_blob);
END;
/

30.2.6 Import into a BLOB from a file:
--------------------------------------
Import BLOB Contents

The following article presents a simple method for importing a file into a BLOB
datatype. First a directory object is created to point to the relevant
filesystem directory:

CREATE OR REPLACE DIRECTORY images AS 'C:\';

Next we create a table to hold the BLOB:

CREATE TABLE tab1 (col1 BLOB);

Finally we import the file into a BLOB datatype and insert it into the table:

DECLARE
  v_bfile BFILE;
  v_blob  BLOB;
BEGIN
  INSERT INTO tab1 (col1)
  VALUES (empty_blob())
  RETURN col1 INTO v_blob;

  v_bfile := BFILENAME('IMAGES', 'MyImage.gif');
  Dbms_Lob.Fileopen(v_bfile, Dbms_Lob.File_Readonly);
  Dbms_Lob.Loadfromfile(v_blob, v_bfile, Dbms_Lob.Getlength(v_bfile));
  Dbms_Lob.Fileclose(v_bfile);
  COMMIT;
END;
/

Hope this helps. Regards Tim...

30.2.7 Import into a CLOB from a file:
--------------------------------------
Import CLOB Contents

The following article presents a simple method for importing a file into a CLOB
datatype. First a directory object is created to point to the relevant
filesystem directory:

CREATE OR REPLACE DIRECTORY documents AS 'C:\';

Next we create a table to hold the CLOB:

CREATE TABLE tab1 (col1 CLOB);

Finally we import the file into a CLOB datatype and insert it into the table:

DECLARE
  v_bfile BFILE;
  v_clob  CLOB;
BEGIN
  INSERT INTO tab1 (col1)
  VALUES (empty_clob())
  RETURN col1 INTO v_clob;

  v_bfile := BFILENAME('DOCUMENTS', 'Sample.txt');
  Dbms_Lob.Fileopen(v_bfile, Dbms_Lob.File_Readonly);
  Dbms_Lob.Loadfromfile(v_clob, v_bfile, Dbms_Lob.Getlength(v_bfile));
  Dbms_Lob.Fileclose(v_bfile);
  COMMIT;
END;
/

Hope this helps. Regards Tim...

Note 5:
-------
You Asked (Jump to Tom's latest followup):
I have a table with a blob column. Is it possible to specify an extra storage
clause for this column?

and we said...
Yes, the following example is cut and pasted from the SQL Reference Manual,
the CREATE TABLE command:

CREATE TABLE lob_tab (col1 BLOB, col2 CLOB)
  STORAGE (INITIAL 512 NEXT 256)
  LOB (col1, col2) STORE AS
    (TABLESPACE lob_seg_ts
     STORAGE (INITIAL 6144 NEXT 6144)
     CHUNK 4
     NOCACHE LOGGING
     INDEX (TABLESPACE lob_index_ts
            STORAGE (INITIAL 256 NEXT 256)
           )
    );

The table will be stored in the user's default tablespace with (INITIAL 512
NEXT 256). The actual lob data will be in LOB_SEG_TS with (INITIAL 6144 NEXT
6144). The lob index built on the pages constituting the lob will be stored in
yet a 3'rd tablespace -- lob_index_ts with (INITIAL 256 NEXT 256).

lob storage recovery   May 07, 2004
Reviewer: bob from PA

Tom,
If the LOB tablespace is not backed up, can the table data (a different tablespa
ce) be recovered in a failure scenario? I know with TSPITR the process validates
that no objects cross tablespaces that are not included in the set being recove
red with the TSPITR check/validate function. This doesn't mean the tablespace won
't be recovered in the auxiliary db, it just means the automated process won't c
ontinue through to export the objects, and re-import unless you pass the check.
(or at least that was what happened in the test I ran). I am just curious about
what would happen to this table if its lob tablespace was lost and non-recoverab
le. can just the regular data be recovered
Followup: well, it's going to be problematic as the lob locators will point to "
garbage". You cannot really TSPITR a table with lobs without doing the *same* to
the lob segments. You'd have to sort of update the lobs to NULL and pull it man
ually -- but then I would ask "why have the lobs in the first place, must not be
very important"? so yes, we'd be able to get the scalar data back (complete rec
overy would be best here), update the lob to null and go forward with that.
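A rough sketch of that "update the lobs to NULL" step (the table and column
names here are hypothetical, not from the thread):

-- discard the unrecoverable LOB data but keep the scalar columns usable
UPDATE orders SET contract_doc = NULL;
COMMIT;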
30.3 Errors in LOB: =================== 30.3.1: ------Doc ID: Note:293515.1 Subj
ect: ORA-1578 ORA-26040 in a LOB segment - Script to solve the errors Type: PROB
LEM Status: PUBLISHED Content Type: TEXT/X-HTML Creation Date: 09-DEC-2004 Last
Revision Date: 25-FEB-2005 Purpose ============ - The purpose of this article is
to provide a script to solve errors ORA-1578 / ORA-26040 when a lob block is ac
cessed by a sql statement. - Note that the data inside the corrupted lob blocks
is not salvageable. This procedure will update the lob column with an empty lob
to avoid errors ORA-1578 / ORA-26040. - After applying this solution dbverify wo
uld still produce error DBV-200 until block marked as corrupted is reused and re
formatted.
Symptoms =========== - ORA-1578 and ORA-26040 are produced when accesing a lob c
olumn in a table: ORA-1578 : ORACLE data block corrupted (file # %s, block # %s)
ORA-26040: Data block was loaded using the NOLOGGING option - dbverify for the
datafile that produces the errors fails with: DBV-00200: Block, dba <dba number>
, already marked corrupted Example: dbv file=/oracle/oradata/data.dbf blocksize=
8192 DBV-00200: Block, dba 54528484, already marked corrupted ..... The dba can
be used to get the relative file number and block number: Relative File number:
SQL> select dbms_utility.data_block_address_file(54528484) from dual;

DBMS_UTILITY.DATA_BLOCK_ADDRESS_FILE(54528484)
----------------------------------------------
                                            13

Block Number:
SQL> select dbms_utility.data_block_address_block(54528484) from dual;

DBMS_UTILITY.DATA_BLOCK_ADDRESS_BLOCK(54528484)
-----------------------------------------------
                                           2532
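Both lookups can also be combined in one query (same dba value as above); this
is just a convenience sketch, not part of the original note:

SQL> select dbms_utility.data_block_address_file(54528484)  as file#,
            dbms_utility.data_block_address_block(54528484) as block#
     from   dual;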
Cause ========== - LOB segment has been defined as NOLOGGING - LOB Blocks were m
arked as corrupted by Oracle after a datafile restore / recovery. Identify the t
able referencing the lob segment - Example =====================================
==================== Error example when accessing the lob column by a sql statem
ent: ORA-01578 : ORACLE data block corrupted (file #13 block # 2532) ORA-01110 :
datafile 13: '/oracle/oradata/data.dbf' ORA-26040 : Data block was loaded using
the NOLOGGING option. 1. Query dba_extents to find out the lob segment name
select owner, segment_name, segment_type
from   dba_extents
where  file_id = 13
and    2532 between block_id and block_id + blocks - 1;

In our example it returned:
owner=SCOTT  segment_name=SYS_LOB0000029815C00006$$  segment_type=LOBSEGMENT

2. Query dba_lobs to identify the table_name and lob column name:

select table_name, column_name
from   dba_lobs
where  segment_name = 'SYS_LOB0000029815C00006$$'
and    owner = 'SCOTT';
In our example it returned:
table_name = EMP   column_name = EMPLOYEE_ID_LOB

Fix
======
1. Identify the table rowid's referencing the corrupted lob segment blocks by
   running the following plsql script:

rem ********************* Script begins here ********************
create table corrupted_data (corrupted_rowid rowid);
set concat #

declare
  error_1578 exception;
  pragma exception_init(error_1578,-1578);
  n number;
begin
  for cursor_lob in (select rowid r, &&lob_column from &table_owner.&table_with_lob) loop
    begin
      n := dbms_lob.instr(cursor_lob.&&lob_column, hextoraw('8899'));
    exception
      when error_1578 then
        insert into corrupted_data values (cursor_lob.r);
        commit;
    end;
  end loop;
end;
/
undefine lob_column
rem ********************* Script ends here ********************

When prompted for variable values and following our example:
Enter value for lob_column: EMPLOYEE_ID_LOB
Enter value for table_owner: SCOTT
Enter value for table_with_lob: EMP

2. Update the lob column with an empty lob to avoid ORA-1578 and ORA-26040:

SQL> set concat #
SQL> update &table_owner.&table_with_lob
     set    &lob_column = empty_blob()
     where  rowid in (select corrupted_rowid from corrupted_data);

If &lob_column is a CLOB datatype, replace empty_blob by empty_clob.

Reference
==============
Note 290161.1 - The Gains and Pains of Nologging Operations

30.3.2:
-------
Displayed
below are the messages of the selected thread. Thread Status: Closed From: Neil
Bullen 26-Mar-02 08:26 Subject: How do you alter NOLOGGING in lob index partiti
on RDBMS Version: 8.1.7.2.1 Operating System and Version: Compaq Tru64 Unix 5.2
Error Number (if applicable): Product (i.e. SQL*Loader, Import, etc.): Product V
ersion: How do you alter NOLOGGING in lob index partition I have discovered that
a lob index partition is set to NOLOGGING, how can I alter this to LOGGING. The
lob is set to CACHE and LOGGING, the index def_logging is set to NONE and the t
ablespace is set to LOGGING. ---------------------------------------------------
----------------------------From: Oracle, Rowena Serna 02-Apr-02 03:26 Subject:
Re : How do you alter NOLOGGING in lob index partition You could find the system
generated lobindex name and use the "alter index" command.
Regards, Rowena Serna Oracle Corporation ---------------------------------------
---------------------------------------From: Neil Bullen 03-Apr-02 23:42 Subject
: Re : How do you alter NOLOGGING in lob index partition Using alter index on a
lob segment index results in error ORA-22864 cannot ALTER or DROP LOB indexes, t
he solution I found was to alter the lob caching setting, even though dba_lobs s
howed the CACHE and LOGGING settings to be 'YES' by issuing the ALTER TABLE <tab
lename> MODIFY LOB(<lobname>) (CACHE); command all partitions of the associated
index were changed to LOGGING. What threw me was the CACHE and LOGGING settings
in dba_lobs already being set correctly, however resetting these again was the k
ey. ----------------------------------------------------------------------------
---From: Oracle, Rowena Serna 09-Apr-02 02:46 Subject: Re : How do you alter NOL
OGGING in lob index partition Thanks for updating. Regards, Rowena Serna Oracle
Corporation
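A quick way to see the effect of that workaround is to compare the LOGGING
attribute of the LOB index partitions before and after the ALTER. This is only
a sketch; the owner, table and LOB column names are hypothetical:

ALTER TABLE my_part_tab MODIFY LOB (my_lob_col) (CACHE);

SELECT ip.index_name, ip.partition_name, ip.logging
FROM   dba_ind_partitions ip,
       dba_lobs           l
WHERE  ip.index_name = l.index_name
AND    l.owner       = 'SCOTT'
AND    l.table_name  = 'MY_PART_TAB';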
30.3.4 exp/imp errors and LOBS: ------------------------------Note 1: ------Doc
ID: Note:48023.1 Subject: OERR: IMP 64 Definition of LOB was truncated by export
Type: REFERENCE Status: PUBLISHED Content Type: TEXT/PLAIN Creation Date: 07-NO
V-1997 Last Revision Date: 26-MAR-2001 Error: IMP 64 Text: Definition of LOB was
truncated by export -----------------------------------------------------------
---------------Cause: While producing the dump file, Export was unable to write
the * entire contents of a LOB. Import is therefore unable to * reconstruct the
contents of the LOB. The remainder of the * import of the current table will be
skipped. Action: Delete the offending row in the exported database and retry the
*
export.

Note 2:
-------
An export or import of a table with a Large Object (LOB) column has a slower
performance than an export or import of a table without LOB columns:

-- create two tables: TESTTAB1 with a VARCHAR2 column, and TESTTAB2 with a
-- CLOB column:
connect / as sysdba
create table scott.testtab1 (nr number, txt varchar2(2000));
create table scott.testtab2 (nr number, txt clob);

-- populate both tables with the same 500,000 rows:
declare
  x varchar2(50);
begin
  for i in 1..500000 loop
    x := 'This is a line with the number: ' || i;
    insert into scott.testtab1 values(i,x);
    insert into scott.testtab2 values(i,x);
    commit;
  end loop;
end;
/

-- export both tables:
% exp system/manager file=exp_testtab1.dmp  tables=scott.testtab1 direct=y
% exp system/manager file=exp_testtab1a.dmp tables=scott.testtab1
% exp system/manager file=exp_testtab2.dmp  tables=scott.testtab2

              No CLOB       No CLOB        With CLOB
              DIRECT        CONVENTIONAL   column
              ------------  ------------   ------------
8.1.7.4.0     0:13          0:20           7:49
9.2.0.4.0     0:14          0:18           7:37
9.2.0.5.0     0:12          0:15           7:03
10.1.0.2.0    0:16          0:31           7:15

Note 3:
-------
Doc ID: Note:157024.1
Subject: Insert/Import of Table with Lob Fails IMP-00003 ORA-3237
Type: PROBLEM        Status: PUBLISHED
Content Type: TEXT/X-HTML
Creation Date: 24-MAY-2001    Last Revision Date: 21-OCT-2003

fact: Oracle Server - Enterprise Edition
fact: Import Utility (IMP)
symptom: Import fails with error
symptom: Insert fails
symptom: Table with LOB column
symptom: Locally managed tablespace
symptom: IMP-00003: ORACLE error %lu encountered
symptom: IMP-00017: following statement failed with ORACLE error %lu:
symptom: ORA-03237: Initial Extent of specified size cannot be allocated cause:
Extent size specified for the tablespace is not large enough. fix: For LOBS, ens
ure that the extent size specification in the tablespace is least three times th
e db_block_size. For example: If the db_block_size is 8192, then the extent size
for the tablespace should be at least 24576. Explaination: Certain objects may
require larger extents by virtue of how they are built internally (Example: an R
BS requires at least four blocks and a LOB at least three). References: <Bug:118
6625> SQL Reference Guide, Create Tablespace

Note 4:
-------
Doc ID: Note:211721.1
Subject: Unable to Import Tables with LOB Columns
Type: PROBLEM        Status: PUBLISHED
Content Type: TEXT/X-HTML
Creation Date: 13-SEP-2002    Last Revision Date: 03-OCT-2003
fact: Oracle Server - Enterprise Edition 9+fact: Oracle Server - Enterprise Edit
ion 8.1 fact: Oracle Server - Enterprise Edition 8 fact: Import Utility (IMP) sy
mptom: Import fails symptom: ORA-01658: unable to create INITIAL extent for segm
ent in tablespace '%s' symptom: ORA-01652: unable to extend temp segment by %s i
n tablespace %s symptom: Table contains LOB column symptom: Problem does not occ
ur for tables without LOB columns cause: No LOB storage specifications were spec
ified on the table creation for those tables with LOB columns. LOB data is store
d both within and outwith the table depending on how much data the column contai
ns. A new database was created and the data reimported into a tablespace with 1.
7GB default initial extent size. The LOB storage outwith the table defaults to t
he initial extent of the tablespace and this storage requirement could not be fu
lfilled.

fix: As a user with dba privileges issue

  alter tablespace <tablespace_name> default storage (initial <x>M);

where <tablespace_name> and <x> are replaced with appropriate values. See
also : Note:1074731.6 ORA-01658 During 'Create Table' Statement Note 5: ------Do
c ID: Note:197699.1 Subject: "IMP-00003 ORA-00959 ON IMPORT OF TABLE WITH CLOB D
ATATYPES" Type: PROBLEM Status: PUBLISHED Content Type: TEXT/PLAIN Creation Date
: 31-MAY-2002 Last Revision Date: 29-AUG-2002 Problem Description --------------
----You are attempting to import a table that has CLOB datatype and you receive
the following errors: IMP-00003: ORACLE error 959 encountered ORA-00959: tablesp
ace <tablespace_name> does not exist Solution Description -------------------Cre
ate the table that has CLOB datatypes before the import, specifying tablespaces
that exist on the target system, and import using IGNORE=Y. Here is a simple exa
mple where you can get this problem and how to resolve it: I have a user "TEST"
with default tablespace has "USERS" Step-1: Create tst Tablespace ==============
=================== SQL> create tablespace tst datafile 'c:\temp\tst1.dbf' size
5m; Tablespace created. Step-2: Create table with CLOB datatype by login to "TES
T" user ================================================================= SQL> C
REATE TABLE "TEST"."PX2000" ("ID" NUMBER(*,0), "SUBMITDATE" DATE, "COMMENTS" VAR
CHAR2(4000),"RECOMMENDEDTIMELONG" CLOB) PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRAN
S 255 STORAGE(INITIAL 65536 FREELISTS 1 FREELIST GROUPS 1) TABLESPACE "TST" LOGG
ING LOB ("RECOMMENDEDTIMELONG") STORE AS (TABLESPACE "TST" ENABLE STORAGE IN ROW
CHUNK 8192 PCTVERSION 10 NOCACHE STORAGE(INITIAL 65536 FREELISTS 1 FREELIST GRO
UPS 1)) ; SQL> select table_name,tablespace_name from user_tables 2 where table_
name='PX2000'; TABLE_NAME TABLESPACE_NAME ------------------------------ -------
----------------------PX2000 TST SQL> select username,default_tablespace from us
er_users; USERNAME DEFAULT_TABLESPACE
------------------------------ -----------------------------TEST USERS Step-3: E
xport the Table ========================= exp test/test file=px2000.dmp tables=p
x2000 . . exporting table PX2000 0 rows exported Step-4: Drop the "TST" tablespa
ce including contents: Please note that 'AND datafiles' is a new option in versi
on 9i. Omit this clause if running version prior to 9i. ========================
==================================== SQL> drop tablespace tst including contents
and datafiles; Tablespace dropped. Step-5: Import the table back to test schema
============================================== imp test/test file=px2000.dmp ta
bles=px2000 IMP-00017: following statement failed with ORACLE error 959: "CREATE
TABLE "PX2000" ("ID" NUMBER(*,0), "SUBMITDATE" DATE, "COMMENTS" VARC" "HAR2(400
0), "RECOMMENDEDTIMELONG" CLOB) PCTFREE 10 PCTUSED 40 INITRANS 1 M" "AXTRANS 255
STORAGE(INITIAL 65536 FREELISTS 1 FREELIST GROUPS 1) TABLESPACE" " "TST" LOGGIN
G LOB ("RECOMMENDEDTIMELONG") STORE AS (TABLESPACE "TST" ENAB" "LE STORAGE IN RO
W CHUNK 8192 PCTVERSION 10 NOCACHE STORAGE(INITIAL 65536 F" "REELISTS 1 FREELIST
GROUPS 1))" IMP-00003: ORACLE error 959 encountered ORA-00959: tablespace 'TST'
does not exist Import terminated successfully with warnings. Step-6: Workaround
is to extract the DDL from the dumpfile,change the tablespace to target databas
e. Create the table manually and import with ignore=y option ===================
============================================================== = % imp test/test
file=px2000.dmp full=y show=y log=<logFile> Step-7: Use the logFile to pre-crea
te the table, then ignore object creation errors. ==============================
==================================================== == % imp test/test file=px2
000.dmp full=y ignore=y

Explanation
-----------
For most DDL (except for partitioned tables and tables with CLOB datatypes),
import will automatically create the objects in the user's default tablespace
if the specified tablespace does not exist. For DDL of tables with CLOB
datatypes and partitioned tables, an IMP-00003 and ORA-00959 will result if
the tablespace does not exist in the target dat
abase. References ---------- [NOTE:1058330.6] "IMP-00003 ORA-00959 ON IMPORT OF
PARTITIONED TABLE" [BUG:1982168] "IMP-3 / ORA-959 importing table with CLOB usin
g IGNORE=Y into variable width charset DB" [BUG:2398272] "IMPORT TABLE WITH CLOB
DATATYPE FAILS WITH IMP-3 AND ORA-959"
Oracle Utilities Manual.

Note 6:
-------
Displayed below are the messages of the s
elected thread. Thread Status: Closed From: Helmut Daiminger 12-Dec-00 21:50 Sub
ject: MOVE table with LOB column to another tablespace RDBMS Version: 8.1.6.1.2
Operating System and Version: Win2k, SP1 Error Number (if applicable): Product (
i.e. SQL*Loader, Import, etc.): Product Version: MOVE table with LOB column to a
nother tablespace Hi! I'm having a problem here: I want to move a table with a L
OB column (i.e. LOB index segment) to a different tablespace. In the beginning t
he table and the LOB segment were in the USERS tablespace. I then exported the t
able using the EXP tool. Then I revoked the user's quota to the USERS tablespace
and only gave him quota on the default tablespace. Then I run IMP and import th
at LOB-table. The table gets recreated in the new tablespace, but the creation o
f the LOB index fails with an error message that I don't have privileges to writ
e to the USERS tablespace. How do I completely move the table and the LOB index s
egment to a new tablespace? This is 8.1.6 on Windows 2000 Server. Thanks, Helmut
From: Oracle, Ken Robinson 14-Dec-00 21:05 Subject: Re : MOVE table with LOB co
lumn to another tablespace I believe you can do the following: ALTER TABLE foo M
OVE TABLESPACE new_tbsp STORAGE(new_storage) LOB (lobcol) STORE AS lobsegment (T
ABLESPACE new_tbsp STORAGE (new_storage)); Regards, Ken Robinson Oracle Server E
E Analyst
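After a move like that it is worth confirming from the dictionary where the
table segment and the LOB segment now live. A small sketch (FOO is the table
from Ken's example; run it as the owning user):

SELECT l.table_name, l.column_name, l.segment_name, s.tablespace_name
FROM   user_lobs     l,
       user_segments s
WHERE  s.segment_name = l.segment_name
AND    l.table_name   = 'FOO';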
Note 7:
-------
Doc ID: Note:176898.1
Subject: Import Fails with IMP-00032 and IMP-00008
Type: PROBLEM        Status: PUBLISHED
Content Type: TEXT/X-HTML
Creation Date: 15-FEB-2002    Last Revision Date: 24-JUN-2003
fact: Oracle Server - Enterprise Edition fact: Import Utility (IMP) symptom: IMP
-00032: SQL statement exceeded buffer length symptom: IMP-00008: unrecognized st
atement in the export file cause: The insert statement run when importing exceed
s the default or specified buffer size. For import of tables containing LONG, LO
B, BFILE, REF, ROWID, LOGICALROWID or type columns, rows are inserted individual
ly. The size of the buffer must be large enough to contain the entire row insert
ed. fix: Increase the buffer size, and make sure that it is big enough to contai
n the biggest row in the table(s) imported. For example: imp system/manager file
=test.dmp full=y log=test.log buffer= 10000000 Note 8: ------For tables with LOB
columns, make sure the tablespace already exists in the target database before
the import is done. Also, make sure the extent size is large enough. Note 9: ---
---With imp/exp I hit a problem that on remote database users tablespace is call
ed 'users', while on local it's 'users_data'. Now I have to go to documentation
to figure out if those stupid switches would save the day... Also with schlobs t
he elegant insert into t2 select * from t1@remote_db_link; doesn't work. I wonde
r why export/import is not plain sqlplus statements where I can just specify the
right 'where' clause... Followup: Yes, when you deal with multi segment objects
(tables with LOBS, partitioned table, IOTs with overflows for example), using E
XP/IMP is complicated if the target database doesn't have the same tablespace st
ructure. That is because the CREATE statement contains many tablespaces and IMP
will only "rewrite" the first
TABLESPACE in it (it will not put multi-tablespace objects into a single tablesp
ace, the object creation will fail of the tablespaces needed by that create do n
ot exist). I dealt with this issue in my book, imp .... full=y indexfile=temp.sq
l In temp.sql, you will have all of the DDL for indexes and tables. Simply delet
e all index creates and uncomment any table creates you want. Then, you can spec
ify the tablespaces for the various components -- precreate the objects and run
imp with ignore=y. The objects will now be populated. You are incorrect with the
"schlobs" comment (both in spelling and in conclusion). scott@ORA815.US.ORACLE.
COM> create table t ( a int, b blob ); Table created. scott@ORA815.US.ORACLE.COM
> desc t
 Name                                Null?    Type
 ----------------------------------- -------- ----------------
 A                                             NUMBER(38)
 B                                             BLOB

in there, I recommend you do an:
scott@ORA815.US.ORACLE.COM> select a, dbms_lob.getlength(b) from t;

no rows selected

scott@ORA815.US.ORACLE.COM> insert into t select x, y from t@ora8i.world;

1 row created.

scott@ORA815.US.ORACLE.COM> select a, dbms_lob.getlength(b) from t;

         A DBMS_LOB.GETLENGTH(B)
---------- ---------------------
         1               1000011

So, the "elegant insert into select * from" does work. imp/exp can be plain sqlplus statement
s -- use indexfile=y (if you get my book, I use this over and over in various pl
aces to get the DDL). In 9i, there is a simple stored procedure interface as wel
l. Note 10: -------Tom Without using the export import( show=y) Is there any que
ry to find out in which Tablespace the LOB column is stored Thanks in advance Fo
llowup:
select * from user_segments

you can join user_segments to user_lobs if you like as well. user_segments will
give you tablespace info. user_lobs will give you the lob segment name.

Note 11:
--------
IMP-00003 ORACLE error number encountered
Cause: Import encountered the referenced Oracle error.
Action: Look up the Oracle
message in the ORA message chapters of this manual, and take appropriate action.
IMP-00020 long column too large for column buffer size (number) Cause: The colu
mn buffer is too small. This usually occurs when importing LONG data. Action: In
crease the insert buffer size 10,000 bytes at a time (for example). Use this ste
p-by-step approach because a buffer size that is too large may cause a similar p
roblem. IMP-00064 Definition of LOB was truncated by export Cause: While produci
ng the dump file, Export was unable to write the entire contents of a LOB. Impor
t is therefore unable to reconstruct the contents of the LOB. The remainder of t
he import of the current table will be skipped. Action: Delete the offending row
in the exported database and retry the export. IMP-00070 Lob definitions in dum
p file are inconsistent with database. Cause: The number of LOBS per row in the
dump file is different than the number of LOBS per row in the table being popula
ted. Action: Modify the table being imported so that it matches the column attri
bute layout of the table that was exported. Note 12: -------we have a 10 Mill ro
ws table with a BLOB column in it the size of the lob varies: from 1K up ward to
a few megabytes, but most are in the 2K-3K range.
So currently, we have ENABLE STORAGE IN ROW. and want to do DISABLE STORAGE IN R
OW b/c we are starting to do a lot of range scan on the table. When we export/im
port the table and during import have moved all the lobs out of line.. the total
space used during the import bloated 5 times from a 2GIG tablespace into a 10GI
G tablespace??? Why? The database block size is 8K, running 9.2.0.6 with auto
segment management in the tablespace:

CREATE TABLESPACE "BLOB_DATA" LOGGING
  DATAFILE 'D:\ORACLE\ORADATA\TESTDB\BLOB_DATA01.ora' SIZE 2048M REUSE
  AUTOEXTEND OFF
  EXTENT MANAGEMENT LOCAL UNIFORM SIZE 8M
  SEGMENT SPACE MANAGEMENT AUTO

Note 13:
--------
To relocate tables using lobs:

Method 1:
=========
1. export data using exp cmd
2. drop all tables
3. create a new LOB tablespace
4. re-create all the tables with the LOB Storage clause, for example

   create table FOO
   ( col1 NUMBER
    ,col2 BLOB
   )
   tablespace DATA_TBLSPCE
   LOB ( col2 ) STORE AS col2_blob
   ( tablespace BLOB_TBLSPCE
     disable storage in row
     chunk 8192
     pctversion 10
     cache
     storage (initial 64K next 64K minextents 1 maxextents unlimited pctincrease 0)
   )

5. import data with ignore=y

Method 2:
=========
Doc ID: Note:
130814.1 Subject: How to move LOB Data to Another Tablespace Type: HOWTO Status:
PUBLISHED Content Type: TEXT/X-HTML Creation Date: 19-DEC-2000
Last Revision Date: 05-AUG-2003

Purpose
-------
The purpose of this article is to provide the syntax for altering the storage pa
rameters of a table that contains one or more LOB columns. Scope & Application -
-----------------This article will be useful for Oracle DBSs, Developers, and Su
pport Analysts. How to move LOB Data to Another Tablespace ---------------------
--------------------If you want to make no other changes to the table containing
a lob other than to rebuild it, use: ALTER TABLE foo MOVE; This will rebuild th
e table segment. It does not affect any of the lob segments associated with the
lob columns which is the desired optimization. If you want to change one or more
of the physical attibutes of the table containing the lob, however no attribute
s of the lob columns are to be changed, use the following syntax: ALTER TABLE fo
o MOVE TABLESPACE new_tbsp STORAGE(new_storage); This will rebuild the table seg
ment. It does not rebuild any of the lob segments associated with the lob column
s which is the desired optimization. If a table containing a lob needs no change
s to the physical attributes of the table segment, but you want to change one or
more lob segments; for example, you want to move the lob column to a new tables
pace as well as the lob's storage attributes, use the following syntax: ALTER TA
BLE foo MOVE LOB(lobcol) STORE AS lobsegment (TABLESPACE new_tbsp STORAGE (new_s
torage)); Note that this will also rebuild the table segment (although, in this
case, in the same tablespace and without changing the table segment physical att
ributes). If a table containing a lob needs changes to both the table attributes
as well as the lob attributes then use the following syntax: ALTER TABLE foo MO
VE TABLESPACE new_tbsp STORAGE(new_storage) LOB (lobcol) STORE AS lobsegment (TA
BLESPACE new_tbsp STORAGE (new_storage)); Explanation -----------
The 'ALTER TABLE foo MODIFY LOB (lobcol) ...' syntax does not allow for a change of tablespace:

ALTER TABLE my_lob MODIFY LOB (a_lob) (TABLESPACE new_tbsp);
(TABLESPACE new_tbsp)
 *
ORA-22853: invalid LOB storage option specification

You have to use the MOVE keyword instead as shown in the examples.

References
----------
Note 66431.1  LOBS - Storage, Redo and Performance Issues
Bug 747326    ALTER TABLE MODIFY LOB STORAGE PARAMETER DOESN'T WORK

Additional Search Words
-----------------------
ora-1735 ora-906 ora-2143 ora-22853 clob nclob blob

Method 3:
=========
MOVE doesn't support LONG datatypes. You can either convert them to LOBs and then move,
or do exp/imp of the table with the LONG column, or create the table with the LONG in
the locally managed tablespace and copy the data from the old table using a
PL/SQL loop or CTAS with TO_LOB in the locally managed tablespace.

SQL> desc t
 Name                                      Null?    Type
 ----------------------------------------- -------- ---------------
 X                                                   NUMBER(38)
 Y                                                   LONG

SQL> alter table t move;
alter table t move
*
ERROR at line 1:
ORA-00997: illegal use of LONG datatype

-- You can create the new table in the locally managed tablespace
SQL> create table t_lob tablespace users as select x, to_lob(y) y from t;

Table created.

SQL> desc t_lob
 Name                                      Null?    Type
 ----------------------------------------- -------- ---------------
 X                                                   NUMBER(38)
 Y                                                   CLOB

-- Now you can drop the old table and rename the new table
-- Or you can move the LOB table to the locally managed tablespace
SQL> alter table t_lob move;

Table altered.

-- Or you can precreate the new table with LONG in the locally managed tablespace and do exp/imp
-- export the LONG table
SQL> !exp / file=t.dmp tables=t compress=n

Export: Release 9.2.0.3.0 - Production on Tue Mar 2 09:32:30 2004
Copyright (c) 1982, 2002, Oracle Corporation.  All rights reserved.
Connected to: Oracle9i Enterprise Edition Release 9.2.0.3.0 - Production
With the Partitioning, OLAP and Oracle Data Mining options
JServer Release 9.2.0.3.0 - Production
Export done in WE8ISO8859P1 character set and AL16UTF16 NCHAR character set
About to export specified tables via Conventional Path ...
. . exporting table                              T          2 rows exported
Export terminated successfully without warnings.

-- just rename the old table for reference purposes
SQL> rename t to tbak;

Table renamed.

-- Create the LONG table in the locally managed tablespace
SQL> create table t(x int, y long) tablespace users;

Table created.

-- now import the data
SQL> !imp / file=t.dmp tables=t ignore=y

Import: Release 9.2.0.3.0 - Production on Tue Mar 2 09:33:43 2004
Copyright (c) 1982, 2002, Oracle Corporation.  All rights reserved.
Connected to: Oracle9i Enterprise Edition Release 9.2.0.3.0 - Production
With the Partitioning, OLAP and Oracle Data Mining options
JServer Release 9.2.0.3.0 - Production
Export file created by EXPORT:V09.02.00 via conventional path
import done in WE8ISO8859P1 character set and AL16UTF16 NCHAR character set
. importing OPS$ORACLE's objects into OPS$ORACLE
. . importing table                            "T"          2 rows imported
Import terminated successfully without warnings.

SQL> desc t
 Name                                      Null?    Type
 ----------------------------------------- -------- ---------------
 X                                                   NUMBER(38)
 Y                                                   LONG
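After moving tables and LOB columns around like this, it helps to confirm where the table and
LOB segments actually ended up. A small sketch against the standard DBA_LOBS and DBA_SEGMENTS
dictionary views (the owner 'SCOTT' is illustrative; adjust it to your schema):

SELECT l.table_name, l.column_name, l.segment_name AS lob_segment,
       s.tablespace_name, ROUND(s.bytes/1024/1024) AS mb
FROM   dba_lobs l, dba_segments s
WHERE  s.segment_name = l.segment_name
AND    s.owner        = l.owner
AND    l.owner        = 'SCOTT'
ORDER  BY l.table_name, l.column_name;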
Note 14:
--------
Doc ID: Note:225337.1   Content Type: TEXT/PLAIN
Subject: ORA-22285 ON ACCESSING THE BFILE COLUMN OF A TABLE
Creation Date: 08-JAN-2003   Type: PROBLEM
Last Revision Date: 17-DEC-2004   Status: PUBLISHED

Fact(s)
~~~~~~~
*The
directory alias for the relevant directory exists. *This condition might be enco
untered in general or particularly after successful export/import of 'table with
bfile column' from one schema to another. *Non-bfile columns of the table could
be accessed but not the bfile column. Symptom(s) ~~~~~~~~~~ Accessing the bfile
column of table gives the following errors: ORA-22285: non-existent directory o
r file for ..... ORA-06512: at "SYS.DBMS_LOB", line ... Diagnosis: ~~~~~~~~~~ --
create the exporting user schema and the table with bfile data-SQL>connect syst
em/manager SQL>create user test2 identified by test2 default tablespace users qu
ota 50 m on users / SQL>grant connect, create table, create any directory to tes
t2
/
SQL>conn test2/test2
SQL>create table test_lobs
( c1 number,
c2 clob, c3 bfile, c4 blob ) LOB (c2) STORE AS (ENABLE STORAGE IN ROW) LOB (c4)
STORE AS (DISABLE STORAGE IN ROW) / create two files (rec2.txt , rec3.txt) using
OS utilities in some directory say ( /tmp ) --create the directory alias -SQL>c
reate directory tmp_dir as '/tmp' / -- Populate the table-SQL>insert into test_l
obs values (1,null,null,null) / SQL>insert into test_lobs values (2,EMPTY_CLOB()
,BFILENAME('TMP_DIR','rec2.txt'),EMPTY_BLOB()) / SQL>insert into test_lobs value
s (3,'Some data for record3.', BFILENAME('TMP_DIR','rec2.txt'), '48656C6C6F'||UT
L_RAW.CAST_TO_RAW('there!')) / -- access the table-SQL>column len_c2 format 9999
SQL>column len_c3 format 9999 SQL>column len_c4 format 9999 SQL>select c1, DBMS
_LOB.GETLENGTH(c2) len_c2, DBMS_LOB.GETLENGTH(c3) len_c3, DBMS_LOB.GETLENGTH(c4)
len_c4 from test_lobs
/

        C1 LEN_C2 LEN_C3 LEN_C4
---------- ------ ------ ------
         1
         2      0    124      0
         3     22    124     11

-- carry out the schema level export --
$ ex
p system/manager file=exp44.dmp log=logr44.log owner=test2
IMPORTING DATABASE: create same two files (rec2.txt , rec3.txt) using OS utiliti
es in some directory say ( /tmp ) --create the directory alias --
SQL>conn system/manager SQL>create directory tmp_dir as '/tmp' / -- create the i
mporting user schema-SQL>create user test3 identified by test3 default tablespac
e users quota 50 m on users / SQL>grant connect, create table, create any direct
ory to test3 / --carry out the successful schema level import-$ imp system/manag
er fromuser=test2 touser=test3 file=exp44.dmp log=log44.log --try to access the
imported table as below (same statement as by the exporting user-SQL>select c1,
DBMS_LOB.GETLENGTH(c2) len_c2, DBMS_LOB.GETLENGTH(c3) len_c3, DBMS_LOB.GETLENGTH
(c4) len_c4 from test_lobs / ERROR: ORA-22285: non-existent directory or file fo
r GETLENGTH operation ORA-06512: at "SYS.DBMS_LOB", line 547 -- However non bfil
e columns could be accessed--
Cause ~~~~~ The importing user lacks the read access on the corresponding direct
ory/ directory alias. Solution(s) ~~~~~~~~~~~ grant read access on the correspo
nding directory to the user who tries to access the bfile table as below: SQL> c
onn system/manager Connected. SQL> grant read on directory tmp_dir to test3; ( p
lease see the example above ) Once the read permission is granted ,the bfile col
umn of the said table is accessible since the corresponding directory (/alias) i
s accessible. References: ~~~~~~~~~~~
[NOTE:66046.1] <ml2_documents.showDocument?p_id=66046.1&p_database_id=NOT>: LOBs
, Longs, and other Datatypes
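As a quick check after granting the read privilege, the following sketch (reusing the test_lobs
table and the row with c1 = 2 from the example above) verifies that the BFILE locator can actually
reach the file; DBMS_LOB.FILEEXISTS returns 1 when the file is found:

SET SERVEROUTPUT ON
DECLARE
  v_bfile BFILE;
BEGIN
  SELECT c3 INTO v_bfile FROM test_lobs WHERE c1 = 2;
  IF DBMS_LOB.FILEEXISTS(v_bfile) = 1 THEN
    DBMS_OUTPUT.PUT_LINE('BFILE accessible, length = ' || DBMS_LOB.GETLENGTH(v_bfile));
  ELSE
    DBMS_OUTPUT.PUT_LINE('Directory alias or file not accessible');
  END IF;
END;
/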
Note 15: -------Doc ID: Note:279722.1 Subject: IMPORT OF TABLE WITH LOB GENERATE
S CORE DUMP Type: PROBLEM Status: MODERATED Content Type: TEXT/X-HTML Creation D
ate: 31-JUL-2004 Last Revision Date: 02-AUG-2004
The information in this article applies to: Oracle Server - Enterprise Edition -
Version: 9.2.0.3 This problem can occur on any platform. Symptoms IMPORT OF TAB
LE WITH LOB GENERATES CORE DUMP Cause <Bug:3091499> Importing a table having a c
lob created with chunksize = 32k Error Details: ------------. importing DBAPIDB1
's objects into DBAPIDB1 . . importing table "TE2006"Segmentation fault Trace fr
om the Core Dump: -----------------------lmmstrmlr 44 lmmstmrg D4 lmmstmrg D4 lm
mstfree 104 lmmfree C0 impmfr 24 impplb 5BC impins 22B8 do_insert 48C imptabwrk
F4 impdta 41C impdrv 2D68 main 14 __start 94 Fix FIX: --Apply the patch for Bug:
3091499
WORKAROUND: ---------Before import, create the table with chunksize <= 16K and r
un import setting ignore=y References <BUG:3091499> - Import Of Table With Lob G
enerates Core Dump Note 16: keep LOBS at manageable size. -----------------------
-------------(1) Look at PCTVERSION: Since the LOB segments are usually very lar
ge, they are treated differently from other columns. While other columns can be
guaranteed to give consistent reads, these columns are not. This is because, it
is difficult to manage with LOB data rollback segments due to their size unlike
other columns. So they do not use rollback segments. Usually only one copy exist
s, so the queries reading that column may not get consistent reads while other q
ueries modify them. In these cases, the other queries will get "ORA-22924 snapsh
ot too old" errors. To maintain read consistency Oracle creates new LOB page ver
sions every time a lob changes. PCTVERSION is the percentage of all used LOB dat
a space that can be occupied by old versions of LOB data pages. As soon as old v
ersions of LOB data pages start to occupy more than the PCTVERSION amount of use
d LOB space, Oracle tries to reclaim the old versions and reuse them. In other w
ords, PCTVERSION is the percent of used LOB data blocks that is available for ve
rsioning old LOB data. The PCTVERSION can be set to the percentage of LOB's that
are occasionally updated. Often a table's LOB column gets the data up
loaded only once, but is read multiple times. Hence it is not necessary to keep
older versions of LOB data. It is recommended that this value be changed to "0".
By default PCTVERSION is set to 10%. So, most of the instances usually have it
set to 10%, it must be set to 0% explicitly. The value can be changed any time i
n a running system. Use the following query to find out currently set value for
PCTVERSION: SQL> select PCTVERSION from dba_lobs where TABLE_NAME = 'table_name'
and COLUMN_NAME='column_name'; PCTVERSION ---------10 PCTVERSION can be changed
using the following SQL (it can be run anytime in a
running system): ALTER TABLE FND_LOBS MODIFY LOB (FILE_DATA) ( PCTVERSION 0 ); N
ote 17: difference 9iR1 9iR2 with respect to Locally managed tablespace --------
---------------------------------------------------------------Doc ID: Note:1590
78.1 Subject: Cannot Create Table with LOB Column in Locally Managed Tablespace
Type: PROBLEM Status: PUBLISHED Content Type: TEXT/X-HTML Creation Date: 26-SEP-
2001 Last Revision Date: 04-AUG-2004 fact: Oracle Server - Enterprise Edition 9.
0.1 symptom: Creating new OEM repository fails symptom: Create table SMP_LMV_SEA
RCH_OBJECT fails symptom: ORA-03001: unimplemented feature symptom: Table with L
OB column cause: You try to create a LOB segment in a bitmapped (locally managed
) tablespace. This is a limitation for bitmapped segments in 9i. This is being d
ocumented in the SQL Reference- the restriction will be lifted in 9i Release 2.
fix: Create the table in a tablespace that was created with clause SEGMENT SPACE
MANAGEMENT MANUAL Note 18: -------In a trace file you either get ORA-00600: int
ernal error code, arguments: [kkdoilsn1], [], [], [], [], [], [], [] or ORA-0060
0: internal error code, arguments: [15265], [], [], [], [], [], [], [] descripti
on: in a 9.2 database, a table with lob and index segments was moved to another tablespace.

Explanation:
9202  2405258  Dictionary corruption / OERI:15265 from MOVE LOB to existing segment name

This is Bug 2405258, Corruption, Fixed: 9202
LOB Related (CLOB/BLOB/BFILE) Dictionary corruption / ORA-600 [15265] from MOVE
LOB to existing segment name. Eg:
  ALTER TABLE mytab MOVE LOB (artist_bio) STORE AS lobsegment (STORAGE(INITIAL 1M NEXT 1M));
corrupts the dictionary if "lobsegment" already exists.

Bug 2405258 Dictionary corruption / OERI:15265 from MOVE LOB
to existing segment name This note gives a brief overview of bug 2405258. Affec
ts: Product (Component) Oracle Server (RDBMS) Range of versions believed to be a
ffected Versions >= 8 but < 10G Versions confirmed as being affected 9.2.0.1 Pla
tforms affected Generic (all / most platforms affected) Fixed: This issue is fix
ed in 9.2.0.2 (Server Patch Set) 10G Production Base Release Symptoms: Corruptio
n (Dictionary) <javascript:taghelp('TAGS_CORR_DIC')> Internal Error may occur (O
RA-600) <javascript:taghelp('TAGS_OERI')> ORA-600 [15265] Related To: Datatypes
- LOBs (CLOB/BLOB/BFILE) Description Dictionary corruption / ORA-600 [15265] fro
m MOVE LOB to existing segment name. Eg: ALTER TABLE mytab MOVE LOB (artist_bio)
STORE AS lobsegment (STORAGE(INITIAL 1M NEXT 1M)); corrupts the dictionary if "
logsegment" already exists. ===================== 31. BLOCK CORRUPTION: ========
============= Note 1: ======= Doc ID </help/usaeng/Search/search.html>: Note:479
55.1 Content Type: TEXT/PLAIN Subject: Block Corruption FAQ Creation Date: 14-NO
V-1997 Type: FAQ Last Revision Date: 17-AUG-2004 Status: PUBLISHED ORACLE SERVER
------------BLOCK CORRUPTION ---------------FREQUENTLY ASKED QUESTIONS --------
-----------------25-JAN-2000 CONTENTS -------1. What does the error ORA-01578 me
an? 2. How to determine what object is corrupted?
3. 4. 5. 6. 7. 8. 9.
What are the recovery options if the object is a table? What are the recovery op
tions if the object is an index? What are the recovery options if the object is
a rollback segment? What are the recovery options if the object is a data dictio
nary object? What methods are available to assist in pro-actively identifying co
rruption? How can corruption be prevented? What are the common causes of corrupt
ion?
QUESTIONS & ANSWERS 1. What does the error ORA-01578 mean? An Oracle data block
is written in an internal binary format which conforms to a defined structure. T
he size of the physical data block is determined by the "init.ora" parameter DB_
BLOCK_SIZE set at the time of database creation. The format of the block is simi
lar regardless of the type of data contained in the block. Each formatted block
on disk has a wrapper which consists of a block header and footer. Unformatted b
locks should be zero throughout. Whenever a block is read into the buffer cache,
the block wrapper information is checked for validity. The checks include verif
ying that the block passed to Oracle by the operating system is the block reques
ted (data block address) and also that certain information stored in the block h
eader matches information stored in the block footer in case of a split (fractur
ed) block. On a read from disk, if an inconsistency in this information is found
, the block is considered to be corrupt and ORA-01578: ORACLE data block corrupt
ed (file # %s, block # %s)
is signaled where file# is the file ID of the Oracle datafile and block# is the
block number, in Oracle blocks, within that file. However, this does not always
mean that the block on disk is truly physically corrupt. That fact needs to be
confirmed. 2. How to determine what object is corrupted? The following query wil
l display the segment name, type, and owner: SELECT SEGMENT_NAME, SEGMENT_TYPE,
OWNER FROM SYS.DBA_EXTENTS WHERE FILE_ID = <f> AND <b> BETWEEN BLOCK_ID AND BLOC
K_ID + BLOCKS - 1; Where <f> is the file number and <b> is the block number repo
rted in the ORA-01578 error message. Suppose block 82817 from table 'USERS' is c
orrupt: SQL> select extent_id, block_id, blocks from dba_extents where segment_n
ame='USERS';

 EXTENT_ID   BLOCK_ID     BLOCKS
---------- ---------- ----------
         0      82817          8
         1      82825          8
         2      82833          8
         3      82841          8
         4      82849          8

SQL> SELECT SEGMENT_NAME, SEGMENT_TYPE, OWNER
  2  FROM SYS.DBA_EXTENTS
  3  WHERE FILE_ID = 9
  4  AND 82817 BETWEEN BLOCK_ID AND BLOCK_ID + BLOCKS - 1;

SEGMENT_NAME                   SEGMENT_TYPE       OWNER
------------------------------ ------------------ ------------------------------
USERS                          TABLE              VPOUSERDB

3. What a
re the recovery options if the object is a table? The following options exist fo
r resolving non-index block corruption in a table which is not part of the data
dictionary: o Restore and recover the database from backup (recommended). o Reco
ver the object from an export. o Select the data out of the table bypassing the
corrupted block(s). If the table is a Data Dictionary table, you should contact
Oracle Support Services. The recommended recovery option is to restore the datab
ase from backup. [NOTE:28814.1] <ml2_documents.showDocument?p_id=28814.1&p_datab
ase_id=NOT> contains information on how to handle ORA-1578 errors in Oracle7. Re
ferences: ----------[NOTE:28814.1] <ml2_documents.showDocument?p_id=28814.1&p_da
tabase_id=NOT> TECH ORA-1578 and Data Block Corruption in Oracle7 4. What are th
e recovery options if the object is an index?
If the object is an index which is not part of the data dictionary and the base
table does not contain any corrupt blocks, you can simply drop and recreate the
index. If the index is a Data Dictionary index, you should contact Oracle Suppor
t Services. The recommended recovery option is to restore the database from back
up. There is a possibility you might be able to drop the index and then recreate
it based on the original create SQL found in the administrative scripts. Oracle
Support Services will be able to make the determination as to whether this is a
viable option for you. 5. What are the recovery options if the object is a roll
back segment? If the object is a rollback segment, you should contact Oracle Sup
port Services. The recommended recovery option is to restore the database
from backup. 6. What are the recovery options if the object is a data dictionary
object? If the object is a Data Dictionary object, you should contact Oracle Su
pport Services. The recommended recovery option is to restore the database from
backup. If the object is an index on a Data Dictionary table, you might be able
to drop the index and then recreate it based on the original create SQL found in
the administrative scripts. Oracle Support Services will be able to make the de
termination as to whether this is a viable option. 7. What methods are available
to assist in pro-actively identifying corruption? ANALYZE TABLE/INDEX/CLUSTER .
.. VALIDATE STRUCTURE is a SQL command which can be executed against a table, in
dex, or cluster which scans every block and reports a failure upon encountering
any potentially corrupt blocks. The CASCADE option checks all associated indices
and verifies the 1 to 1 correspondence between data and index rows. This is the
most detailed block check available, but requires the database to be open. DB V
erify is a utility which can be run against a datafile of a database that will s
can every block in the datafile and generate a report identifying any potentiall
y corrupt blocks. DB Verify performs basic block checking steps, however it does
not provide the capability to verify the 1 to 1 correspondence between data and
index rows. It can be run when the database is closed. Export will read the blo
cks allocated to each table being exported and report any potential block corrup
tions encountered. References: ----------[NOTE:35512.1] <ml2_documents.showDocum
ent?p_id=35512.1&p_database_id=NOT> DBVERIFY - Database file Verification Utilit
y (7.3.2 onwards) 8. How can corruption be prevented? Unfortunately, there is no
way to totally eliminate the risk of corruption. You can only minimize the risk
and plan accordingly.

9. What are the common causes of corruption?

  o Bad I/O, H/W, Firmware.
  o Operating System I/O or caching problems.
  o Memory or paging problems.
  o Disk repair utilities.
  o Part of a datafile being overwritten.
  o Oracle incorrectly attempting to access an unformatted block.
  o Oracle or operating system bug.
Note 77587.1 <ml2_documents.showDocument?p_id=77587.1&p_database_id=NOT> discuss
es block corruptions in Oracle and how they are related to the underlying operat
ing system and hardware. References: ----------[NOTE:77587.1] <ml2_documents.sho
wDocument?p_id=77587.1&p_database_id=NOT> BLOCK CORRUPTIONS ON ORACLE AND UNIX N
ote 2: ======= ORA-00600: Internal message code, arguments: [01578] [...] [...]
[] [] []. ORA-01578: Oracle data block corrupted (file ..., block ...). Having e
ncountered the Oracle data block corruption, we must firstly investigate which d
atabase segment (name and type) the corrupted block is allocated to. Chances are
that the block belongs either to an index or to a table segment, since these tw
o type of segments fill the major part of our databases. The following query wil
l reveal the segment that holds the corrupted block identified by <filenumber> a
nd <blocknumber> (which were given to you in the error message): SELECT ds.* FRO
M dba_segments ds, sys.uet$ e WHERE ds.header_file=e.segfile# and ds.header_bloc
k=e.segblock# and e.file#=<filenumber> and <blocknumber> between e.block# and e.
block#+e.length-1; If the segment turns out to be an index segment, then the pro
blem can be very quickly solved. Since all the table data required for recreatin
g the index is still accessable, we can drop and recreate the index (since the b
lock will reformatted, when taken FROM the free-space list and reused for the in
dex). If the segment turns out to be a table segment a number of options for sol
ving the problem are available: - restore and recovery of datafile the block is
in - imp table - sql The last option involves using SQL to SELECT as much data a
s possible FROM the current corrupted table segment and save the SELECTed rows i
nto a new table. SELECTing data that is stored in segment blocks that preceede t
he corrupted block can be easily done using a full table scan (via a cursor). Ro
ws stored in blocks after the corrupted block cause a problem. A full table scan
will never reach these. However these rows can still be fetched using rowids (s
ingle row lookups).
2.1 Table was indexed Using an optimizer hint we can write a query that SELECTs
the rows FROM the table via an index scan (using rowid's), instead of via a full
table scan. Let's assume our table is named X with columns a, b and c. And tabl
e X is indexed uniquely on columns a and b by index X_I, the query would look li
ke: SELECT /*+index(X X_I) */ a, b, c FROM X; We must now exclude the corrupt bl
ock FROM being accessed to avoid the internal exception ORA-00600[01578]. Since
the blocknumber is a substring of the rowid ( ) this can very easily be achieved
: SELECT /*+index(X X_I) */ a, b, c FROM X WHERE rowid not like <corrupt_block_n
umber>||'.%.'||<file_number>; But it is important to realize that the WHERE-clau
se gets evaluated right after the index is accessed and before the table is acce
ssed. Otherwise we would still get the ORA-00600[01578] exception. Using the abo
ve query as a subquery in an insert statement we can restore all rows of still v
alid blocks to a new table. Since the index holds the actual column values of th
e indexed columns we could also use the index to restore all indexed columns of
rows that reside in the corrupt block. The following query, SELECT /*+index(X X_
I) */ a, b FROM X WHERE rowid like <corrupt_block_number>||'.%.'||<file_number>;
retrieves only indexed columns a and b FROM rows inside the corrupt block. The
optimizer will not access the table for this query. It can retrieve the column v
alues using the index segment only. Using this technique we are able to restore
all indexed column values of the rows inside the corrupt block, without accessin
g the corrupt block at all. Suppose in our example that column c of table X was
also indexed by index X_I2. This enables us to completely restore rows inside th
e corrupt block. First restore columns a and b using index X_I: create table X_a
_b(rowkey,a,b) as SELECT /*+index(X X_I) */ rowid, a, b FROM X WHERE rowid like
<corrupt_block_number>||'.%.'||<file_number>; Then restore column c using index
X_I2: create table X_c(rowkey,c) as SELECT /*+index(X X_I2) */ rowid, c FROM X W
HERE rowid like <corrupt_block_number>||'.%.'||<file_number>; And finally join t
he columns together using the restored rowid:
SELECT x1.a, x1.b, x2.c FROM X_a_b x1, X_c x2 WHERE x1.rowkey=x2.rowkey; In summ
ary: Indexes on the corrupted table segment can be used to restore all columns o
f all rows that are stored outside the corrupted data blocks. Of rows inside the
corrupted data blocks, only the columns that were indexed can be restored. We m
ight even be able to use an old version of the table (via Import) to further res
tore non-indexed columns of these records. 2.2 Table has no indexes This situati
on should rarely occur since every table should have a primary key and therefore
a unique index. However when no index is present, all rows of corrupted blocks
should be considered lost. All other rows can be retrieved using rowid's. Since
there is no index we must build a rowid generator ourselves. The SYS.UET$ table
shows us exactly which extents (file#, startblock, endblock) we need to inspect
for possible rows of our table X. If we make an estimate of the maximum number o
f rows per block for table X, we can build a PL/SQL-loop that generates possible
rowid's of records inside table X. By handling the 'invalid rowid' exception, a
nd skipping the corrupted data block, we can restore all rows except those insid
e the corrupted block.

declare
  v_rowid          varchar2(18);
  v_rec            X%rowtype;
  e_invalid_rowid  exception;
  pragma exception_init(e_invalid_rowid, -1410);
begin
  for v_uetrec in (SELECT file# file, block# start_block, block#+length#-1 end_block
                   FROM uet$
                   WHERE segfile#=6 and segblock#=64)  -- identifies our segment X
  loop
    for v_blk in v_uetrec.start_block..v_uetrec.end_block
    loop
      if not (v_uetrec.file = 6 and v_blk = 886)       -- 886 in file 6 is our corrupted block
      then
        for v_row in 0..200                            -- 200 is the maximum number of rows per block for segment X
        loop
          begin
            SELECT a, b, c into v_rec
            FROM   x
            WHERE  rowid = chartorowid('0000'||hex(v_blk)||'.'||hex(v_row)||'.'||hex(v_uetrec.file));
            insert into x_saved(a, b, c) values (v_rec.a, v_rec.b, v_rec.c);
            commit;
          exception
            when e_invalid_rowid then null;
          end;
        end loop;  /* row-loop */
      end if;
    end loop;  /* blk-loop */
  end loop;  /* uet-loop */
end;
/

The above code assumes that block id's never excee
d 4 hexadecimal places. A definition of the hex-function which is used in the ab
ove code can be found in the appendix. Note 3: ======= Doc ID </help/usaeng/Sear
ch/search.html>: Note:33405.1 Content Type: TEXT/PLAIN Subject: Extracting Data
from a Corrupt Table using SKIP_CORRUPT_BLOCKS or Event 10231 Creation Date: 24-
JAN-1996 Type: BULLETIN Last Revision Date: 13-SEP-2000 Status: PUBLISHED

This note is an extension to article [NOTE
:28814.1] <ml2_documents.showDocument?p_id=28814.1&p_database_id=NOT> about hand
ling block corruption errors where the block wrapper of a datablock indicates th
at the block is bad. (Typically for ORA-1578 errors). The details here will not
work if only the block internals are corrupt (eg: for ORA-600 or other errors).
Please read [NOTE:28814.1] <ml2_documents.showDocument?p_id=28814.1&p_database_i
d=NOT> before reading this note. Introduction ~~~~~~~~~~~~ This short article ex
plains how to skip corrupt blocks on an object either using the Oracle8i SKIP_CO
RRUPT table flag or the special Oracle event number 10231 which is available in
Oracle releases 7 through 8.1 inclusive. The information here explains how to us
e these options. Before proceeding you should: a) Be certain that the corrupt bl
ock is on a USER table. (i.e.: not a data dictionary table) b) Have contacted Or
acle Support Services and been advised to use event 10231 or the SKIP_CORRUPT fl
ag. c) Have decided how you are to recreate the table. Eg: Export , and disk spa
ce is available etc.. d) You have scheduled down-time to attempt the salvage ope
ration. e) Have a backup of the database. f) Have the SQL to rebuild the problem
table, its indexes constraints, triggers, grants etc... This SQL should include
relevant storage clauses.
What is event 10231 ? ~~~~~~~~~~~~~~~~~~~~~
This event allows Oracle to skip certain types of corrupted blocks on full table
scans ONLY hence allowing export or "create table as select" type operations to
retrieve rows from the table which are not in the corrupt block. Data in the co
rrupt block is lost. The scope of this event is limited for Oracle versions prio
r to Oracle 7.2 as it only allows you to skip 'soft corrupt' blocks. Most ORA 15
78 errors are a result of media corruptions and in such cases event 10231 is use
less. From Oracle 7.2 onwards the event allows you to skip many forms of media c
orrupt blocks in addition to soft corrupt blocks and so is far more useful. It i
s still *NOT* guaranteed to work. [NOTE:28814.1] <ml2_documents.showDocument?p_i
d=28814.1&p_database_id=NOT> describes alternatives which can be used if this ev
ent fails. What is the SKIP_CORRUPT flag ? ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ In Or
acle8i the functionality of the 10231 event has been externalised on a PER-SEGME
NT basis such that it is possible to mark a TABLE or PARTITION to skip over corr
upt blocks when possible. The flag is set or cleared using the DBMS_REPAIR packa
ge. DBA_TABLES has a SKIP_CORRUPT column which indicates if this flag is set for
an object or not. Setting the event or flag ~~~~~~~~~~~~~~~~~~~~~~~~~ The event
can either be set within the session or at database instance level. If you inte
nd to use a CREATE TABLE AS SELECT then setting the event in the session may suf
fice. If you want to EXPORT the table data then it is best to set the event at i
nstance level, or set the SKIP_CORRUPT table attribute if on Oracle8i. Oracle8i
~~~~~~~~ Connect as a DBA user and mark the table as needing to skip corrupt blo
cks thus: execute DBMS_REPAIR.SKIP_CORRUPT_BLOCKS('<schema>','<tablename>'); or
for a table partition: execute DBMS_REPAIR.SKIP_CORRUPT_BLOCKS('<schema>','<tabl
ename>'.'<partition>'); Now you should be able to issue a CREATE TABLE AS SELECT
operation against the corrupt table to extract data from all non-corrupt blocks
, or EXPORT the table. Eg: CREATE TABLE salvage_emp AS SELECT * FROM corrupt_emp
; To clear the attribute for a table use: execute DBMS_REPAIR.SKIP_CORRUPT_BLOCK
S('<schema>','<tablename>', flags=>dbms_repair.noskip_flag); execute DBMS_REPAIR
.SKIP_CORRUPT_BLOCKS('VPOUSERDB','USERS', flags=>dbms_repair.noskip_flag);
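To see whether this per-segment flag is currently set, DBA_TABLES can be queried directly
(a small sketch, reusing the schema/table from the example above):

SELECT owner, table_name, skip_corrupt
FROM   dba_tables
WHERE  owner = 'VPOUSERDB'
AND    table_name = 'USERS';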
Setting the event in a Session ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Connect to Oracle
as a user with access to the corrupt table and issue the command: ALTER SESSION
SET EVENTS '10231 TRACE NAME CONTEXT FOREVER, LEVEL 10'; Now you should be able
to issue a CREATE TABLE AS SELECT operation against the corrupt table to extract
data from all non-corrupt blocks, but an export would still fail as the event i
s only set within your current session. Eg: CREATE TABLE salvage_emp AS SELECT *
FROM corrupt_emp; Setting the event at Instance level ~~~~~~~~~~~~~~~~~~~~~~~~~
~~~~~~~~~~ This requires that the event be added to the init$ORACLE_SID.ora file
used to start the instance: shutdown the database Edit your init<SID>.ora start
up configuration file and ADD a line that reads: event="10231 trace name context
forever, level 10" Make sure this appears next to any other EVENT= lines in the
init.ora file. STARTUP If the instance fails to start check the syntax of the e
vent parameter matches the above exactly. Note the comma as it is important. SHO
W PARAMETER EVENT To check the event has been set in the correct place. You shou
ld see the initial portion of text for the line in your init.ora file. If not ch
eck which parameter file is being used to start the database. Select out the dat
a from the table using a full table scan operation. Eg: Use a table level export
or create table as select. Export Warning: If the table is very large then some
versions of export may not be able to write more than 2Gb of data to the export
file. See [NOTE:62427.1] <ml2_documents.showDocument?p_id=62427.1&p_database_id
=NOT> for general information on 2Gb limits in various Oracle releases. Salvagin
g data from the corrupt block itself ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
~
SKIP_CORRUPT and event 10231 extract data from good blocks but skip over corrupt
blocks. To extract information from the corrupt block there are three main opti
ons: - Select column data from any good indexes This is discussed towards the en
d of the following 2 articles: Oracle7 - using ROWID range scans [NOTE:34371.1]
<ml2_documents.showDocument?p_id=34371.1&p_database_id=NOT> Oracle8/8i - using R
OWID range scans [NOTE:61685.1] <ml2_documents.showDocument?p_id=61685.1&p_datab
ase_id=NOT> - See if Oracle Support can extract any data from HEX dumps of the c
orrupt block. - It may be possible to salvage some data using Log Miner Once you
have the data extracted ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Once you have the requ
ired data extracted either into an export file or into another table make sure y
ou have a valid database backup before proceeding. The importance of this cannot
be over-emphasised. Double check you have the SQL to rebuild the object and its
indexes etc.. Double check that you have any diagnostic information if requeste
d by Oracle support. Once you proceed with dropping the object certain informati
on is destroyed so it is important to capture it now. Now you can: If 10231 was
set at instance level: Remove the 'event' line from the init.ora file SHUTDOWN a
nd RESTART the database. SHOW PARAMETER EVENT Make sure the 10231 event is no lo
nger shown RENAME or DROP the problem table If you have space it is advisable to
RENAME the problem table rather than DROP it at this stage. Recreate the table.
Eg: By importing. Take special care to get the storage clauses correct when rec
reating the table. Create any indexes, triggers etc.. required Again take care w
ith storage clauses. Re-grant any access to the table. If you RENAMEd the origin
al table you can drop it once the new table has been tested. .
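A condensed sketch of the 8i salvage sequence described above (schema and table names are
illustrative; the storage clauses, indexes, triggers, grants and the rename/recreate of the
damaged table still have to be handled as described in the note):

-- as a DBA user: allow full scans of the table to skip corrupt blocks
EXEC DBMS_REPAIR.SKIP_CORRUPT_BLOCKS('SCOTT','EMP');

-- pull the surviving rows into a new table
CREATE TABLE scott.emp_salvage AS SELECT * FROM scott.emp;

-- clear the flag again afterwards
EXEC DBMS_REPAIR.SKIP_CORRUPT_BLOCKS('SCOTT','EMP', flags=>dbms_repair.noskip_flag);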
Note 4: Analyze table validate structure:
=========================================
validate structure table:
ANALYZE TABLE CHARLIE.CUSTOMERS VALIDATE STRUCTURE;

validate structure index:
ANALYZE INDEX CHARLIE.PK_CUST VALIDATE STRUCTURE;

If no corrupt blocks are found, the output is just "table analyzed".
If corrupt blocks are found, the generated trace file must be examined.

Note 5: DBVERIFY Utility:
=========================
From the OS prompt the dbv utility can be run to examine a datafile.

$ dbv FILE=/u02/oracle/cc1/data01.dbf BLOCKSIZE=8192

Note 6: DBMS_REPAIR package:
============================
The DBMS_REPAIR package is created by the dbmsrpr.sql script.

Step 1. Via ANALYZE TABLE you have found that one or more blocks of a table are corrupt.

Step 2. First use DBMS_REPAIR.ADMIN_TABLES to create the REPAIR_TABLE. This table will then
hold information about the blocks, and whether they have been marked as corrupt, etc.

declare
begin
  dbms_repair.admin_tables('REPAIR_TABLE', dbms_repair.repair_table, dbms_repair.create_action, 'USERS');
end;
/

Step 3. Now use the DBMS_REPAIR.CHECK_OBJECT procedure on the object to fill the
repair_table from step 2 with corruption information.

set serveroutput on
declare
  rpr_count int;
begin
  rpr_count := 0;
  dbms_repair.check_object('CHARLIE', 'CUSTOMERS', repair_table_name => 'REPAIR_TABLE', corrupt_count => rpr_count);
  dbms_output.put_line('repair_block_count: '||to_char(rpr_count));
end;
/

Note 7:
=======
Tom, If I h
ave this information:

select * from V$DATABASE_BLOCK_CORRUPTION;

     FILE#     BLOCK#     BLOCKS CORRUPTION_CHANGE# CORRUPTIO
---------- ---------- ---------- ------------------ ---------
        11      12357         12          197184960 LOGICAL

and

select * from v$backup_corruption;

     RECID      STAMP  SET_STAMP  SET_COUNT     PIECE#      FILE#     BLOCK#     BLOCKS CORRUPTION_CHANGE# MAR CO
---------- ---------- ---------- ---------- ---------- ---------- ---------- ---------- ------------------ --- --
         1  533835361  533835140       3089          1         11      12357         12          197184960 NO  LOGICAL

How can I get more details of what data resides in these blocks? and being
'Logical', can they be recovered without losing that data at all? Any extra details
would be appreciated. Thanks, Orlando

Followup:

select * from dba_extents where file_id = 11 and 12357 between block_id and block_id+blocks-1;

if it is somet
hing "rebuildable" -- like an index, drop and recreate might be the path of leas
t resistance, else you would go back to your backups -- to before this was detec
ted and restore that file/range of blocks (rman can do block level recovery) Tom
trace file generated by analyze contained
table scan: segment: file# 55 block# 229385 skipping corrupt block file# 55 bloc
k# 251372 This is repeated every day (analyzed each morning) but daily direct ex
port / import succeeds. SQL> select segment_type from dba_extents where file_id=
55 and 229385 between block_id and (block_id +( blocks -1)); SEGMENT_TYPE ------
---------------------------------TABLE $ dbv file=/u03/oradata/emu/emu_data_larg
e02.dbf \ blocksize=8192 logfile=/dbv.log DBVERIFY: Release 8.1.7.2.0 - Producti
on on Mon Aug 10 10:10:13 2004 (c) Copyright 2000 Oracle Corporation. All rights
reserved.
DBVERIFY - Verification starting : FILE = /u03/oradata/emu/emu_data_large02.dbf
Block Checking: DBA = 230938092, Block Type = KTB-managed data block
Found block already marked corrupted

DBVERIFY - Verification complete

Total Pages Examined         : 256000
Total Pages Processed (Data) : 253949
Total Pages Failing   (Data) : 0
Total Pages Processed (Index): 0
Total Pages Failing   (Index): 0
Total Pages Processed (Other): 11
Total Pages Empty            : 2040
Total Pages Marked Corrupt   : 0
Total Pages Influx           : 0
Any thoughts ? Thanks
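As a hedged follow-up to the thread above: since the followup notes that RMAN can do block-level
recovery, the specific block reported (file 11, block 12357) could be repaired along these lines
in 9i, assuming a usable backup of that datafile exists:

$ rman target /
RMAN> run {
2>   allocate channel ch1 type disk;
3>   blockrecover datafile 11 block 12357;
4> }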
Note 6:
-------
Detect And Correct Corruption

Oracle provides a number of methods to detect and repair corruption within datafiles:
  - DBVerify
  - ANALYZE .. VALIDATE STRUCTURE
  - DB_BLOCK_CHECKING
  - DBMS_REPAIR
  - Other repair methods

DBVerify
DBVerify is an external utility that allows valid
ation of offline datafiles. In addition to offline datafiles it can be used to c
heck the validity of backup datafiles: C:>dbv file=C:\Oracle\oradata\TSH1\system
01.dbf feedback=100 blocksize=4096 ANALYZE .. VALIDATE STRUCTURE The ANALYZE com
mand can be used to verify each data block in the analyzed object. If any corrup
tion is detected rows are added to the INVALID_ROWS table: -- Create the INVALID
_ROWS table. SQL> @C:\Oracle\901\rdbms\admin\UTLVALID.SQL -- Validate the table
structure. SQL> ANALYZE TABLE scott.emp VALIDATE STRUCTURE; -- Validate the tabl
e structure along with all it's indexes. SQL> ANALYZE TABLE scott.emp VALIDATE S
TRUCTURE CASCADE; -- Validate the index structure. SQL> ANALYZE INDEX scott.pk_e
mp VALIDATE STRUCTURE; DB_BLOCK_CHECKING When the DB_BLOCK_CHECKING parameter is
set to TRUE Oracle performs a walk through of the data in the block to check it
is self-consistent. Unfortunately block checking can add between 1 and 10% over
head to the server. Oracle recommends setting this parameter to TRUE if the overhead is acceptable.

DBMS_REPAIR
Unlike the previous methods discussed, the DBMS_RE
PAIR package allows you to detect and repair corruption. The process requires tw
o administration tables to hold a list of corrupt blocks and index keys pointing
to those blocks. These are created as follows: BEGIN Dbms_Repair.Admin_Tables (
table_name => 'REPAIR_TABLE', table_type => Dbms_Repair.Repair_Table, action =>
Dbms_Repair.Create_Action, tablespace => 'USERS'); Dbms_Repair.Admin_Tables ( t
able_name => 'ORPHAN_KEY_TABLE', table_type => Dbms_Repair.Orphan_Table, action
=> Dbms_Repair.Create_Action, tablespace => 'USERS'); END; / With the administra
tion tables built we are able to check the table of interest using the
CHECK_OBJECT procedure: SET SERVEROUTPUT ON DECLARE v_num_corrupt INT; BEGIN v_n
um_corrupt := 0; Dbms_Repair.Check_Object ( schema_name => 'SCOTT', object_name
=> 'DEPT', repair_table_name => 'REPAIR_TABLE', corrupt_count => v_num_corrupt);
Dbms_Output.Put_Line('number corrupt: ' || TO_CHAR (v_num_corrupt)); END; / Ass
uming the number of corrupt blocks is greater than 0 the CORRUPTION_DESCRIPTION
and the REPAIR_DESCRIPTION columns of the REPAIR_TABLE can be used to get more i
nformation about the corruption. At this point the corrupt blocks have been dete
cted, but are not marked as corrupt. The FIX_CORRUPT_BLOCKS procedure can be use
d to mark the blocks as corrupt, allowing them to be skipped by DML once the tab
le is in the correct mode: SET SERVEROUTPUT ON DECLARE v_num_fix INT; BEGIN v_nu
m_fix := 0; Dbms_Repair.Fix_Corrupt_Blocks ( schema_name => 'SCOTT', object_name
=> 'DEPT', object_type => Dbms_Repair.Table_Object, repair_table_name => 'REPAIR
_TABLE', fix_count=> v_num_fix); Dbms_Output.Put_Line('num fix: ' || to_char(v_n
um_fix)); END; / Once the corrupt table blocks have been located and marked all
indexes must be checked to see if any of their key entries point to a corrupt bl
ock. This is done using the DUMP_ORPHAN_KEYS procedure: SET SERVEROUTPUT ON DECL
ARE v_num_orphans INT; BEGIN v_num_orphans := 0; Dbms_Repair.Dump_Orphan_Keys (
schema_name => 'SCOTT', object_name => 'PK_DEPT', object_type => Dbms_Repair.Ind
ex_Object, repair_table_name => 'REPAIR_TABLE',
orphan_table_name=> 'ORPHAN_KEY_TABLE', key_count => v_num_orphans); Dbms_Output
.Put_Line('orphan key count: ' || to_char(v_num_orphans)); END; / If the orphan
key count is greater than 0 the index should be rebuilt. The process of marking
the table block as corrupt automatically removes it from the freelists. This can
prevent freelist access to all blocks following the corrupt block. To correct t
his the freelists must be rebuilt using the REBUILD_FREELISTS procedure: BEGIN D
bms_Repair.Rebuild_Freelists ( schema_name => 'SCOTT', object_name => 'DEPT', ob
ject_type => Dbms_Repair.Table_Object); END; / The final step in the process is
to make sure all DML statements ignore the data blocks marked as corrupt. This i
s done using the SKIP_CORRUPT_BLOCKS procedure: BEGIN Dbms_Repair.Skip_Corrupt_B
locks ( schema_name => 'SCOTT', object_name => 'DEPT', object_type => Dbms_Repai
r.Table_Object, flags => Dbms_Repair.Skip_Flag); END; / The SKIP_CORRUPT column
in the DBA_TABLES view indicates if this action has been successful. At this poi
nt the table can be used again but you will have to take steps to correct any da
ta loss associated with the missing blocks. Other Repair Methods Other methods t
o repair corruption include: Full database recovery. Individual datafile recover
y. Block media recovery (BMR), available in Oracle9i when using RMAN. Recreate t
he table using the CREATE TABLE .. AS SELECT command, taking care to avoid the c
orrupt blocks by restricting the where clause of the query. Drop the table and re
store it from a previous export. This may require some manual effort to replace
missing data. Hope this helps. Regards Tim... Note 7: -------
If you know the file number and the block number indicating the corruption, you
can salvage the data in the corrupt table by selecting around the bad blocks. Se
t event 10231 in the init.ora file to cause Oracle to skip software- and mediaco
rrupted blocks when performing full table scans: Event="10231 trace name context
forever, level 10" Set event 10233 in the init.ora file to cause Oracle to skip
software- and mediacorrupted blocks when performing index range scans: Event="1
0233 trace name context forever, level 10"

Note 8:
-------
Detecting and reporting data block corruption using the DBMS_REPAIR package:

Note: Note that this event can only be used if the block "wrapper" is marked corrupt.
Eg: If the block reports ORA-1578.

1. Create DBMS_REPAIR administration tables:
To create repair tables, run the below package.

SQL> EXEC DBMS_REPAIR.ADMIN_TABLES('REPAIR_ADMIN', 1, 1, 'REPTS');

Note that the table names are prefixed with REPAIR_ or ORPHAN_. If the second variable is 1
it will create a REPAIR_ key table, if it is 2 then it will create an ORPHAN_ key table.
If the third variable is 1 the package performs create operations, 2 performs purge (delete)
operations, 3 performs drop operations.

2. Scanning a specific table or index using the DBMS_REPAIR.CHECK_OBJECT procedure:
In the following example we check the employee table EMP, which belongs to the schema TEST,
for possible corruption. Let's assume that we have created our administration table called
REPAIR_ADMIN in schema SYS. To check the table block corruption use the following procedure:

SQL> VARIABLE A NUMBER;
SQL> EXEC DBMS_REPAIR.CHECK_OBJECT('TEST', 'EMP', NULL, 1, 'REPAIR_ADMIN', NULL, NULL, NULL, NULL, :A);
SQL> PRINT A;

To check which block is corrupted, check in the REPAIR_ADMIN table:
SQL> SELECT * FROM REPAIR_ADMIN;

3. Fixing the corrupt block using the DBMS_REPAIR.FIX_CORRUPT_BLOCKS procedure:

SQL> VARIABLE A NUMBER;
SQL> EXEC DBMS_REPAIR.FIX_CORRUPT_BLOCKS('TEST', 'EMP', NULL, 1, 'REPAIR_ADMIN', NULL, :A);
SQL> SELECT MARKED_CORRUPT FROM REPAIR_ADMIN;

If you select the EMP table now you still get the error ORA-1578.

4. Skipping corrupt blocks using the DBMS_REPAIR.SKIP_CORRUPT_BLOCKS procedure:

SQL> EXEC DBMS_REPAIR.SKIP_CORRUPT_BLOCKS('TEST', 'EMP', 1, 1);

Notice the effect of running the DBMS_REPAIR tool: you have lost some of the data. One main
advantage of this tool is that you can retrieve the data past the corrupted block; however
we have lost some data in the table.

5. This procedure is useful in identifying orphan keys in indexes that are pointing to
corrupt rows of the table:

SQL> EXEC DBMS_REPAIR.DUMP_ORPHAN_KEYS('TEST', 'IDX_EMP', NULL, 2, 'REPAIR_ADMIN', 'ORPHAN_ADMIN', NULL, :A);

If orphan keys are found you have to drop and re-create the index to avoid any
inconsistencies in your queries.

6. The last thing you need to do while using the DBMS_REPAIR package is to run the
DBMS_REPAIR.REBUILD_FREELISTS procedure to reinitialize the free list details in the
data dictionary views:

SQL> EXEC DBMS_REPAIR.REBUILD_FREELISTS('TEST', 'EMP', NULL, 1);

NOTE: Setting events 10210, 10211, 10212, and 10225 can be done by adding the following
line for each event in the init.ora file:

Event = "event_number trace name errorstack forever, level 10"

- When event 10210 is set, the data b
locks are checked for corruption by checking their integrity. Data blocks that d
on't match the format are marked as soft corrupt. - When event 10211 is set, the
index blocks are checked for corruption by checking their integrity. Index bloc
ks that don't match the format are marked as soft corrupt. - When event 10212 is
set, the cluster blocks are checked for corruption by checking their integrity.
Cluster blocks that don't match the format are marked as soft corrupt.
- When event 10225 is set, the fet$ and uset$ dictionary tables are checked for
corruption by checking their integrity. Blocks that don't match the format are m
arked as soft corrupt. - Set event 10231 in the init.ora file to cause Oracle to
skip software- and media-corrupted blocks when performing full table scans: Eve
nt="10231 trace name context forever, level 10" - Set event 10233 in the init.or
a file to cause Oracle to skip software- and media-corrupted blocks when perform
ing index range scans: Event="10233 trace name context forever, level 10" To dum
p the Oracle block you can use below command from 8.x on words: SQL> ALTER SYSTE
M DUMP DATAFILE 11 block 9; This command dumps datablock 9 in datafile11, into U
SER_DUMP_DEST directory. Dumping Redo Logs file blocks: SQL> ALTER SYSTEM DUMP L
OGFILE '/usr/oracle8/product/admin/udump/rl.log'; Rollback segments block corruption, i
t will cause problems (ORA-1578) while starting up the database. With support of
oracle, you can use the below underscore parameter to start up the database: _CORRUPTE
D_ROLLBACK_SEGMENTS=(RBS_1, RBS_2) DB_BLOCK_COMPUTE_CHECKSUM This parameter is n
ormally used to debug corruptions that happen on disk. The following V$ views conta
in information about blocks marked logically corrupt: V$BACKUP_CORRUPTION, V$COPY_CORRUPTION.
When this parameter is set, while reading a block from disk into the cache, Oracle will
compute the checksum again and compare it with the value that i
s in the block. If they differ, it indicates that the block is corrupted on disk
. Oracle makes the block as corrupt and signals an error. There is an overhead i
nvolved in setting this parameter.

DB_BLOCK_CACHE_PROTECT=TRUE
Oracle will catch stray writes made by processes in the buffer cache.

Oracle 9i new RMAN features:
Obtai
n the datafile numbers and block numbers for the corrupted blocks. Typically, yo
u obtain this output
from the standard output, the alert.log, trace files, or a media management inte
rface. For example, you may see the following in a trace file: ORA-01578: ORA-01
110: ORA-01578: ORA-01110: ORACLE data block corrupted (file # 9, block # 13) da
ta file 9: '/oracle/dbs/tbs_91.f' ORACLE data block corrupted (file # 2, block #
19) data file 2: '/oracle/dbs/tbs_21.f'
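As an aside (a sketch, 9i and later): blocks that RMAN has already found corrupt are recorded
in V$DATABASE_BLOCK_CORRUPTION, and the whole recorded list can be repaired in one command
instead of naming every block individually:

RMAN> BLOCKRECOVER CORRUPTION LIST;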
$rman target =rman/rman@rmanprod RMAN> run { 2> allocate channel ch1 type disk;
3> blockrecover datafile 9 block 13 datafile 2 block 19; 4> } Recovering Data bl
ocks Using Selected Backups: # restore from backupset BLOCKRECOVER DATAFILE 9 BL
OCK 13 DATAFILE 2 BLOCK 19 FROM BACKUPSET; # restore from datafile image copy BL
OCKRECOVER DATAFILE 9 BLOCK 13 DATAFILE 2 BLOCK 19 FROM DATAFILECOPY; # restore
from backupset with tag "mondayAM" BLOCKRECOVER DATAFILE 9 BLOCK 13 DATAFILE 2 B
LOCK 199 FROM TAG = mondayAM; # restore using backups made before one week ago B
LOCKRECOVER DATAFILE 9 BLOCK 13 DATAFILE 2 BLOCK 19 RESTORE UNTIL 'SYSDATE-7'; #
restore using backups made before SCN 100 BLOCKRECOVER DATAFILE 9 BLOCK 13 DATA
FILE 2 BLOCK 19 RESTORE UNTIL SCN 100; # restore using backups made before log s
equence 7024 BLOCKRECOVER DATAFILE 9 BLOCK 13 DATAFILE 2 BLOCK 19 RESTORE UNTIL
SEQUENCE 7024; Note 9: ======= Displayed below are the messages of the selected
thread. Thread Status: Closed From: nitinpawar@birlasunlife.com 23-Feb-05 11:51
Subject: ORA-01578 on system datafile RDBMS Version: Oracle9i Enterprise Edition
Release 9.2.0.1.0 Operating System and Version: Windows 2000 Error Number (if a
pplicable): ORA-01578 Product (i.e. SQL*Loader, Import, etc.): Product Version:
ORA-01578 on system datafile A data block in SYSTEM tablespace datafile is corru
pted.
The error has been occurring for the past 7 months. I noticed it recently when I to
ok over the support. The database is in archivelog mode. We don't have any old h
ot backups of the database files. Both export and alert log indicate corrupt blo
ck to be # 7873, but dbverify declares block #7875 to be corrupt. It seems there
is no object using the block. Following is the extract from the alert log. ***
Corrupt block relative dba: 0x00401ec1 (file 1, block 7873) Fractured block foun
d during buffer read Data in bad block type: 16 format: 2 rdba: 0x00401ec1 last
change scn: 0x0000.00007389 seq: 0x1 flg: 0x04 consistency value in tail: 0x2343
0601 check value in block header: 0x5684, computed block checksum: 0x396b spare1
: 0x0, spare2: 0x0, spare3: 0x0 *** Reread of rdba: 0x00401ec1 (file 1, block 78
73) found same corrupted data From: Oracle, Fahad Abdul Rahman 25-Feb-05 08:18 S
ubject: Re : ORA-01578 on system datafile Nitin, I would suggest you to relocate
the system datafiles to a new location on disk and see if the corruption is rem
oved. If the issue still persists, then I would suggest you to log a TAR with Oracle Support for further research.

========================
32. iSQL*Plus and EM 10g:
========================

32.1 iSQL*Plus:
===============

Note 1:
-------
How to start iSQL*Plus:
-----------------------
lsnrctl start
emctl start dbconsole
isqlplusctl start

http://localhost:5561/isqlplus/

Note 2:
-------
Doc ID: Note:281946.1 Content Type: TEXT/X-HTML Subject: How to Verify that iSQL
*Plus 10i is Running and How to Restart the Processes? Creation Date: 31-AUG-200
4 Type: HOWTO Last Revision Date: 06-APR-2005 Status: PUBLISHED The information
in this document applies to: SQL*Plus - Version: 10.1.0 Information in this docu
ment applies to any platform. Goal How to verify that iSQL*Plus 10i is running,
and how to restart the processes? Fix How to Verify that iSQL*Plus is running? =
====================================== UNIX Platform ------------------Check whe
ther the iSQL*Plus process is running by entering the following command: ps -eaf
|grep java The iSQL*Plus process looks something like the following: oracle 184
88 1 0 16:01:30 pts/8 0:36 $ORACLE_HOME/jdk/bin/java -Djava. awt.headless=true -
Doracle.oc4j.localhome=/ora Windows Platform -------------------------Check whet
her the iSQL*Plus process is running by opening the Windows services dialog from
the Control Panel and checking the status of the iSQL*Plus service. The iSQL*Pl
us service will be called "OracleOracle_Home_NameiSQL*Plus". How to Start and St
op iSQL*Plus? =============================== UNIX Platform -------------------T
o start iSQL*Plus, enter the command: $ORACLE_HOME/bin/isqlplusctl start To stop
iSQL*Plus, enter the command: $ORACLE_HOME/bin/isqlplusctl stop Windows Platfor
m -------------------------Use the Windows service to start and stop iSQL*Plus.
The service is set to start automatically on installation and when the operating
system is started.

Note 3:
-------
Doc ID: Note:281847.1   Content Type: TEXT/X-HTML
Subject: How do I configure or test iSQL*Plus 10i?
Creation Date: 30-AUG-2004   Type: HOWTO
Last Revision Date: 25-MAR-2005   Status: PUBLISHED

The information in
this document applies to: SQL*Plus - Version: 10.1.0.0 to 10.1.0 Information in
this document applies to any platform. Goal How do I configure or test iSQL*Plus after the install of Oracle Enterprise Edition 10i? Fix iSQL*Plus 10.x is auto
matically installed and configured with Enterprise Edition 10i. At the end of th
e installation process a file called $ORACLE_HOME/install/readme.txt has the inf
ormation needed to configure or test iSQL*Plus: readme.txt example: ------------
---The following J2EE Applications have been deployed and are accessible at the
URLs listed below. Your database configuration files have been installed in $ORACLE_HOME while other components selected for installation have been installed in $ORACLE_HOME\Db_1. Be cautious not to accidentally delete these configuration files.
files. Ultra Search URL: :5620/ultrasearch"http://<your host name>:5620/ultrasea
rch Ultra Search Administration Tool URL: :5620/ultrasearch/admin"http://<your h
ost name>:5620/ultrasearch/admin iSQL*Plus URL: :5560/isqlplus"http://<your host
name>:5560/isqlplus Enteprise Manager 10g Database Control URL: :5500/em"http:/
/<your host name>:5500/em ---------------The URL for your iSQL*Plus server is: :
port/isqlplus" target=_blankhttp://<your host name>:port /isqlplus :port/isqlplu
s/dba" target=_blankhttp://<your host name>:port /isqlplus/dba The port number i
s likely to be 5560. If this URL does not display the iSQL*Plus log in page, che
ck that iSQL*Plus has been started For more additional information about iSQL*Pl
us please check the following Metalink notes: Note 281947.1 How to Troubleshoot
iSQLPlus 10i when it is not Starting on Unix? Note 281946.1 How to Verify that iSQLPlus 10i is Running and How to Restart the Processes? Note 283114.1 How to co
nnect as sysdba/sysoper through iSQL*Plus in Oracle 10g Note 4: -------
Doc ID: Note:283114.1 Content Type: TEXT/X-HTML Subject: How to connect as sysdb
a/sysoper through iSQL*Plus in Oracle 10g Creation Date: 16-SEP-2004 Type: HOWTO
Last Revision Date: 12-JAN-2005 Status: MODERATED
This document is being delivered to you via Oracle Support's Rapid Visibility (R
aV) process, and therefore has not been subject to an independent technical revi
ew. The information in this document applies to: SQL*Plus - Version: 10.0.1 Info
rmation in this document applies to any platform. Goal Enabling iSQL*Plus DBA Ac
cess. Fix In order to connect as SYSDBA through iSQL*Plus you will have to use the iS
QL*Plus DBA URL. Given below is a sample DBA URL in iSQL*Plus:

http://Hostname:Port/isqlplus/dba
Enabling iSQL*Plus DBA Access ============================= To access the iSQL*P
lus DBA URL, you must set up the OC4J user manager. You can set up OC4J to use:
The XML-based provider type, jazn-data.xml The LDAP-based provider type, Oracle
Internet Directory This document discusses how to set up the iSQL*Plus DBA URL t
o use the XML-based provider. For information on how to set up the LDAP-based pr
ovider, see the Oracle9iAS Containers for J2EE documentation. To set up the iSQL
*Plus DBA URL ================================= 1. Create users for the iSQL*Plu
s DBA URL. 2. Grant the webDba role to users. 3. Test iSQL*Plus DBA Access The O
racle JAAS Provider, otherwise known as JAZN (Java AuthoriZatioN), is Oracle's i
mplementation of the Java Authentication and Authorization Service (JAAS). Oracl
e's JAAS Provider is referred to as JAZN in the remainder of this document. See
the Oracle9iAS Containers for J2EE documentation for more information about JAZN
, the Oracle JAAS Provider. Create and Manage Users for the iSQL*Plus DBA URL ==
=============================================== The actions available to manage
users for the iSQL*Plus DBA URL are: 1. Create users 2. List users
3. Grant the webDba role 4. Remove users 5. Revoke the webDba role 6. Change use
r passwords
You perform these actions from the $ORACLE_HOME/oc4j/j2ee/isqlplus/application-deployments/isqlplus directory. $JAVA_HOME is the location of your JDK (1.4 or abo
ve). It should be set to $ORACLE_HOME/jdk, but you may use another JDK. admin_pa
ssword is the password for the iSQL*Plus DBA realm administrator user, admin. Th
e password for the admin user is set to 'welcome' by default. You should change
this password as soon as possible. A JAZN shell option, and a command line optio
n are given for all steps. To start the JAZN shell, enter: $JAVA_HOME/bin/java -
Djava.security.properties=$ORACLE_HOME/sqlplus/admin/iplus/provider -jar $ORACL
E_HOME/oc4j/j2ee/home/jazn.jar -user "iSQL*Plus DBA/admin" -password admin_passw
ord -shell To exit the JAZN shell, enter: EXIT Create Users You can create multi
ple users who have access to the iSQL*Plus DBA URL. To create a user from the JA
ZN shell, enter: JAZN> adduser "iSQL*Plus DBA" username password To create a use
r from the command-line, enter: $JAVA_HOME/bin/java -Djava.security.properties=
$ORACLE_HOME/sqlplus/admin/iplus/provider -jar $ORACLE_HOME/oc4j/j2ee/home/jazn.
jar -user "iSQL*Plus DBA/admin" -password admin_password -adduser "iSQL*Plus DBA
" username password username and password are the username and password used to
log into the iSQL*Plus DBA URL. To create multiple users, repeat the above comma
nd for each user. List Users You can confirm that users have been created and ad
ded to the iSQL*Plus DBA realm. To confirm the creation of a user using the JAZN
shell, enter: JAZN> listusers "iSQL*Plus DBA" To confirm the creation of a user
using the command-line, enter: $JAVA_HOME/bin/java -Djava.security.properties=
$ORACLE_HOME/sqlplus/admin/iplus/provider -jar $ORACLE_HOME/oc4j/j2ee/home/jazn.
jar -user "iSQL*Plus DBA/admin" -password admin_password -listusers "iSQL*Plus D
BA"
The usernames you created are displayed. Grant Users the webDba Role Each user y
ou created above must be granted access to the webDba role. To grant a user acce
ss to the webDba role from the JAZN shell, enter: JAZN> grantrole webDba "iSQL*P
lus DBA" username To grant a user access to the webDba role from the command-lin
e, enter: $JAVA_HOME/bin/java -Djava.security.properties=$ORACLE_HOME/sqlplus/a
dmin/iplus/provider -jar $ORACLE_HOME/oc4j/j2ee/home/jazn.jar -user "iSQL*Plus D
BA/admin" -password admin_password -grantrole webDba "iSQL*Plus DBA" username Re
move Users To remove a user using the JAZN shell, enter: JAZN> remuser "iSQL*Plu
s DBA" username To remove a user using the command-line, enter: $JAVA_HOME/bin/j
ava -Djava.security.properties=$ORACLE_HOME/sqlplus/admin/iplus/provider -jar $
ORACLE_HOME/oc4j/j2ee/home/jazn.jar -user "iSQL*Plus DBA/admin" -password admin_
password -remuser "iSQL*Plus DBA" username Revoke the webDba Role To revoke a us
er's webDba role from the JAZN shell, enter: JAZN> revokerole webDba "iSQL*Plus
DBA" username To revoke a user's webDba role from the command-line, enter: $JAVA
_HOME/bin/java -Djava.security.properties=$ORACLE_HOME/sqlplus/admin/iplus/prov
ider -jar $ORACLE_HOME/oc4j/j2ee/home/jazn.jar -user "iSQL*Plus DBA/admin" -pass
word admin_password -revokerole "iSQL*Plus DBA" username Change User Passwords T
o change a user's password from the JAZN shell, enter: JAZN> setpasswd "iSQL*Plu
s DBA" username old_password new_password To change a user's password from the c
ommand-line, enter: $JAVA_HOME/bin/java -Djava.security.properties=$ORACLE_HOME
/sqlplus/admin/iplus/provider -jar $ORACLE_HOME/oc4j/j2ee/home/jazn.jar -user "i
SQL*Plus DBA/admin" -password admin_password -setpasswd "iSQL*Plus DBA" username
old_password new_password Test iSQL*Plus DBA Access Test iSQL*Plus DBA access b
y entering the iSQL*Plus DBA URL in your web browser: " target=_blankhttp://mach
ine_name.domain:5560/isqlplus/dba A dialog is displayed requesting authenticatio
n for the iSQL*Plus DBA URL. Log in as the user you created above. You may need
to restart iSQL*Plus for the changes to take effect.

What is a wire protocol ODB
C driver? ====================================
A DBMS is written using an application programming interface (API), which is spe
cific to that database. For example, an Oracle 9i database has its own version o
f the API specification (called Net9), which must run on each client application
. Developers write applications compliant to the ODBC specification and use ODBC
drivers to access the database. The ODBC driver communicates with the vendor's
native API. Then, the native API passes instructions to another vendor-specific
low-level API. Finally the wire protocol API communicates with the database. The
wire protocol architecture eliminates the need for the database's native API (f
or example, Net9), so the driver communicates directly to the database through t
he database's own wire level protocol. This effectively removes an entire commun
ication layer.
================================
33. ADDM and other 10g features:
================================

=========================
33.1 Flash_recovery_area:
=========================

Note 1:
-------
A flash recovery area is a directory, file system
, or Automatic Storage Management disk group that serves as the default storage
area for files related to recovery. Such files include Multiplexed copies of the
control file and online redo logs Archived redo logs and flashback logs RMAN ba
ckups Files created by RESTORE and RECOVER commands
Recovery components of the database interact with the flash recovery area to ens
ure that the database is completely recoverable using files in the flash recover
y area. The database manages the disk space in the flash recovery area, and when
there is not sufficient disk space to create new files, the database creates mo
re room automatically by deleting the minimum set of files from flash recovery a
rea that are obsolete, backed up to tertiary storage, or redundant. Note 2: ----
--Before any Flash Backup and Recovery activity can take place, the Flash Recove
ry
Area must be set up. The Flash Recovery Area is a specific area of disk storage
that is set aside exclusively for retention of backup components such as datafil
e image copies, archived redo logs, and control file autobackup copies. These fe
atures include: Unified Backup Files Storage. All backup components can be store
d in one consolidated spot. The Flash Recovery Area is managed via Oracle Manage
d Files (OMF), and it can utilize disk resources managed by Oracle Automated Sto
rage Management (ASM). In addition, the Flash Recovery Area can be configured fo
r use by multiple database instances if so desired. Automated Disk-Based Backup
and Recovery. Once the Flash Recovery Area is configured, all backup components
(datafile image copies, archived redo logs, and so on) are managed automatically
by Oracle. Automatic Deletion of Backup Components. Once backup components have
been successfully created, RMAN can be configured to automatically clean up fil
es that are no longer needed (thus reducing risk of insufficient disk space for
backups). Disk Cache for Tape Copies. Finally, if your disaster recovery plan in
volves backing up to alternate media, the Flash Recovery Area can act as a disk
cache area for those backup components that are eventually copied to tape. Flash
back Logs. The Flash Recovery Area is also used to store and manage flashback lo
gs, which are used during Flashback Backup operations to quickly restore a datab
ase to a prior desired state. Sizing the Flash Recovery Area. Oracle recommends
that the Flash Recovery Area should be sized large enough to include all files r
equired for backup and recovery. However, if insufficient disk space is availabl
e, Oracle recommends that it be sized at least large enough to contain any archi
ved redo logs that have not yet been backed up to alternate media. initializatio
n parameters: DB_RECOVERY_FILE_DEST_SIZE specifies the total size of all files t
hat can be stored in the Flash Recovery Area. Note that Oracle recommends settin
g this value first. DB_RECOVERY_FILE_DEST specifies the physical disk location w
here the Flashback Recovery Area will be stored. Oracle recommends that this be
a separate location from the database's datafiles, control files, and redo logs.
Also, note that if the database is using Oracle's new
Automatic Storage Management (ASM) feature, then the shared disk area that ASM m
anages can be targeted for the Flashback Recovery Area.

Examples:
---------
-- Listing 2.2: Setting up the Flash Recovery Area - open database
-- Be sure to set DB_RECOVERY_FILE_DEST_SIZE first ...
ALTER SYSTEM SET db_recovery_file_dest_size = 5G SCOPE=BOTH SID='*';
-- ... and then set DB_RECOVERY_FILE_DEST and DB_FLASHBACK_RETENTION_TARGET
ALTER SYSTEM SET db_recovery_file_dest = 'c:\oracle\fbrdata\zdcdb' SCOPE=BOTH SID='*';
ALTER SYSTEM SET db_flashback_retention_target = 2880;

http://download.oracle.com/docs/cd/B19306_01/backup.102/b14192/toc.htm

Note 3:
-------
Flashback Database Demo
An alternative strategy to the demo presented
here is to use Recovery Manager RMAN> FLASHBACK DATABASE TO SCN = <system_change
_number>; Dependent Objects GV_$FLASHBACK_DATABASE_LOG V_$FLASHBACK_DATABASE_LOG
GV_$FLASHBACK_DATABASE_LOGFILE V_$FLASHBACK_DATABASE_LOGFILE GV_$FLASHBACK_DATA
BASE_STAT V_$FLASHBACK_DATABASE_STAT Syntax 1: SCN FLASHBACK [STANDBY] DATABASE
[<database_name>] TO [BEFORE] SCN <system_change_number> Syntax 2: TIMESTAMP FLA
SHBACK [STANDBY] DATABASE [<database_name>] TO [BEFORE] TIMESTAMP <system_timesta
mp_value> Syntax 3: RESTORE POINT FLASHBACK [STANDBY] DATABASE [<database_name>]
TO [BEFORE] RESTORE POINT <restore_point_name> Flashback Syntax Elements OFF AL
TER DATABASE FLASHBACK OFF alter database flashback off; ON ALTER DATABASE FLASH
BACK ON alter database flashback on; Set Retention Target ALTER SYSTEM SET db_fl
ashback_retention_target = <number_of_minutes>; alter system set DB_FLASHBACK_RE
TENTION_TARGET = 2880; Start flashback on a tablespace ALTER TABLESPACE <tablesp
ace_name> FLASHBACK ON; alter tablespace example flashback on; Stop flashback on
a tablespace ALTER TABLESPACE <tablespace_name> FLASHBACK OFF; alter tablespace
example flashback off; Initialization Parameters Setting the location of the fl
ashback recovery area db_recovery_file_dest=/oracle/flash_recovery_area Setting
the size of the flashback recovery area db_recovery_file_dest_size=2147483648 Se
tting the retention time for flashback files (in minutes) -- 2 days
db_flashback_retention_target=2880 Demo conn / as sysdba SELECT flashback_on, lo
g_mode FROM gv$database; set linesize 121 col name format a30 col value format a
30 SELECT name, value FROM gv$parameter WHERE name LIKE '%flashback%'; shutdown
immediate; startup mount exclusive; alter database archivelog; alter database fl
ashback on; alter database open; SELECT flashback_on, log_mode FROM gv$database;
SELECT name, value FROM gv$parameter WHERE name LIKE '%flashback%'; -- 2 days a
lter system set DB_FLASHBACK_RETENTION_TARGET=2880; SELECT name, value FROM gv$p
arameter WHERE name LIKE '%flashback%'; SELECT estimated_flashback_size FROM gv$
flashback_database_log; As SYS As UWCLASS SELECT current_scn FROM gv$database; S
ELECT oldest_flashback_scn, oldest_flashback_time FROM gv$flashback_database_log
; create table t ( mycol VARCHAR2(20)) ROWDEPENDENCIES; INSERT INTO t VALUES ('A
BC');
INSERT INTO t VALUES ('DEF'); COMMIT; INSERT INTO t VALUES ('GHI'); COMMIT; SELE
CT ora_rowscn, mycol FROM t; SHUTDOWN immediate; startup mount exclusive; FLASHB
ACK DATABASE TO SCN 19513917; /* FLASHBACK DATABASE TO TIMESTAMP (SYSDATE-1/24);
FLASHBACK DATABASE TO TIMESTAMP timestamp'2002-11-05 14:00:00'; FLASHBACK DATAB
ASE TO TIMESTAMP to_timestamp('2002-11-11 16:00:00', 'YYYY-MM-DD HH24:MI:SS'); *
/ alter database open; alter database open resetlogs; conn uwclass/uwclass SELEC
T ora_rowscn, mycol FROM t; SELECT * FROM gv$flashback_database_stat; alter syst
em switch logfile; shutdown immediate; startup mount exclusive; alter database f
lashback off; alter database noarchivelog; alter database open; SELECT flashback
_on, log_mode FROM gv$database; host rman target sys/pwd@orabase RMAN> crosschec
k archivelog all; RMAN> delete archivelog all; RMAN> list archivelog all;
-- if out of disk space ORA-16014: log 2 sequence# 4163 not archived, no availab
le destinations ORA-00312: online log 2 thread 1: 'c:\oracle\oradata\orabase\red
o02.log' -- what happens The error ora-16014 is the real clue for this problem.
Once the archive destination becomes full the location also becomes invalid. Nor
mally Oracle does not do a recheck to see if space has been made available. -- t
hen shutdown abort; -- clean up disk space: then startup alter system archive lo
g all to '/oracle/flash_recovery_area/ORABASE/ARCHIVELOG';
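To keep an eye on how full the flash recovery area is (and how much of it is reclaimable)
before it fills up, the standard 10g dynamic views can be queried from SQL*Plus.
A minimal sketch (the per-file-type view is available from 10gR2 onwards):

-- Overall space usage of the flash recovery area
SELECT name, space_limit, space_used, space_reclaimable, number_of_files
FROM   v$recovery_file_dest;

-- Breakdown per file type, as percentages of the configured limit
SELECT file_type, percent_space_used, percent_space_reclaimable, number_of_files
FROM   v$flash_recovery_area_usage;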
========== 33.2 ADDM: ========== Note 1: =======
Doc ID: Note:250655.1 Content Type: TEXT/PLAIN Subject: How to use the Automatic
Database Diagnostic Monitor Creation Date: 09-OCT-2003 Type: BULLETIN Last Revi
sion Date: 10-JUN-2004 Status: PUBLISHED PURPOSE ------The purpose of this artic
le is to show an introduction on how to use the Automatic Database Diagnostic Mo
nitor feature. The ADDM consists of functionality built into the Oracle kernel t
o assist in making tuning an Oracle instance less elaborate. SCOPE & APPLICATION
------------------Audience Use : Oracle developers and DBAs : Using the Automat
ic Database Diagnostic Monitor feature as a first step in the creation of an aut
otunable database Level of detail : medium Limitation on use: none
USING THE AUTOMATIC DATABASE DIAGNOSTIC MONITOR --------------------------------
---------------
Introduction: ------------The Automatic Database Diagnostic Monitor (hereafter c
alled ADDM) is an integral part of the Oracle RDBMS capable of gathering perform
ance statistics and advising on changes to solve any existing performance issues
measured. For this it uses the Automatic Workload Repository ( hereafter called
AWR), a repository defined in the database to store database wide usage statist
ics at fixed size intervals (60 minutes). To make use of ADDM, a PL/SQL interfac
e called DBMS_ADVISOR has been implemented. This PL/SQL interface may be called
through the supplied $ORACLE_HOME/rdbms/admin/addmrpt.sql script, called directl
y, or used in combination with the Oracle Enterprise Manager application. Beside
s this PL/SQL package, a number of views (with names starting with the DBA_ADVIS
OR_ prefix) allow retrieval of the results of any actions performed with the DBM
S_ADVISOR API. The preferred way of accessing ADDM is through the Enterprise Man
ager interface, as it shows a complete performance overview including recommenda
tions on how to solve bottlenecks on a single screen. When accessing ADDM manual
ly, you should consider using the ADDMRPT.SQL script provided with your Oracle r
elease, as it hides the complexities involved in accessing the DBMS_ADVISOR pack
age. To use ADDM for advising on how to tune the instance and SQL, you need to m
ake sure that the AWR has been populated with at least 2 sets of performance dat
a. When the STATISTICS_LEVEL is set to TYPICAL or ALL the database will automati
cally schedule the AWR to be populated at 60 minute intervals. When you wish to
create performance snapshots outside of the fixed intervals, then you can use th
e DBMS_WORKLOAD_REPOSITORY package for this, like in: BEGIN DBMS_WORKLOAD_REPOSI
TORY.CREATE_SNAPSHOT('TYPICAL'); END; / The snapshots need be created before and
after the action you wish to examine. E.g. when examining a bad performing quer
y, you need to have performance data snapshots from the timestamps before the qu
ery was started and after the query finished. You may also change the frequency of the snapshots and the duration for which they are saved in the AWR. Use the DBMS_WORKLOAD_REPOSITORY package as in the followi
ng example: execute DBMS_WORKLOAD_REPOSITORY.MODIFY_SNAPSHOT_SETTINGS(interval=>
60,retention=>43200); Example: -------You can use ADDM through the PL/SQL API an
d query the various advisory views in SQL*Plus to examine how to solve performan
ce issues.
The example is based on the SCOTT account executing the various tasks. To allow
SCOTT to both generate AWR snapshots and submit ADDM recommendation jobs, he nee
ds to be granted proper access: CONNECT / AS SYSDBA GRANT ADVISOR TO scott; GRAN
T SELECT_CATALOG_ROLE TO scott; GRANT EXECUTE ON dbms_workload_repository TO sco
tt; Furthermore, the buffer cache size (DB_CACHE_SIZE) has been reduced to 24M.
The example presented makes use of a table called BIGEMP, residing in the SCOTT
schema. The table (containing about 14 million rows) has been created with: CONN
ECT scott/tiger CREATE TABLE bigemp AS SELECT * FROM emp; ALTER TABLE bigemp MOD
IFY (empno NUMBER); DECLARE n NUMBER; BEGIN FOR n IN 1..18 LOOP INSERT INTO bige
mp SELECT * FROM bigemp; END LOOP; COMMIT; END; / UPDATE bigemp SET empno = ROWN
UM; COMMIT; The next step is to generate a performance data snapshot: EXECUTE db
ms_workload_repository.create_snapshot('TYPICAL'); Execute a query on the BIGEMP
table to generate some load: SELECT * FROM bigemp WHERE deptno = 10; After this
, generate a second performance snapshot: EXECUTE dbms_workload_repository.creat
e_snapshot('TYPICAL'); The easiest way to get the ADDM report is by executing: @
?/rdbms/admin/addmrpt Running this script will show which snapshots have been ge
nerated, asks for the snapshot IDs to be used for generating the report, and wil
l generate the report containing the ADDM findings. When you do not want to use
the script, you need to submit and execute the ADDM task manually. First, query
DBA_HIST_SNAPSHOT to see which snapshots have been created. These snapshots will
be used by ADDM to generate recommendations: SELECT * FROM dba_hist_snapshot OR
DER BY snap_id;

SNAP_ID  DBID       INSTANCE_NUMBER  STARTUP_TIME               BEGIN_INTERVAL_TIME        END_INTERVAL_TIME          FLUSH_ELAPSED      SNAP_LEVEL  ERROR_COUNT
-------  ---------  ---------------  -------------------------  -------------------------  -------------------------  -----------------  ----------  -----------
      1  494687018                1  17-NOV-03 09.39.17.000 AM  17-NOV-03 09.39.17.000 AM  17-NOV-03 09.50.21.389 AM  +00000 00:00:06.6           1            0
      2  494687018                1  17-NOV-03 09.39.17.000 AM  17-NOV-03 09.50.21.389 AM  17-NOV-03 10.29.35.704 AM  +00000 00:00:02.3           1            0
      3  494687018                1  17-NOV-03 09.39.17.000 AM  17-NOV-03 10.29.35.704 AM  17-NOV-03 10.35.46.878 AM  +00000 00:00:02.1           1            0

Ma
rk the 2 snapshot IDs (such as the lowest and highest ones) for use in generatin
g recommendations. Next, you need to submit and execute the ADDM task manually,
using a script similar to:

DECLARE
  task_name VARCHAR2(30) := 'SCOTT_ADDM';
  task_desc VARCHAR2(30) := 'ADDM Feature Test';
  task_id   NUMBER;
BEGIN
  -- (1) create the task
  dbms_advisor.create_task('ADDM', task_id, task_name, task_desc, null);
  -- (2) set the task boundaries
  dbms_advisor.set_task_parameter('SCOTT_ADDM', 'START_SNAPSHOT', 1);
  dbms_advisor.set_task_parameter('SCOTT_ADDM', 'END_SNAPSHOT', 3);
  dbms_advisor.set_task_parameter('SCOTT_ADDM', 'INSTANCE', 1);
  dbms_advisor.set_task_parameter('SCOTT_ADDM', 'DB_ID', 494687018);
  -- (3) execute the task
  dbms_advisor.execute_task('SCOTT_ADDM');
END;
/

Here is the explanatio
n of the steps you need to take to successfully execute an ADDM job: 1) The firs
t step is to create the task. For this, you need to specify the name under which
the task will be known in the ADDM task system. Along with the name you can pro
vide a more readable description on what the job should do. The task type must b
e 'ADDM' in order to have it executed in the ADDM environment. 2) After having d
efined the ADDM task, you must define the boundaries within which the task needs
to be executed. For this you need to set the starting and ending snapshot IDs,
instance ID (especially necessary when running in a RAC environment), and databa
se ID for the newly created job. 3) Finally, the task must be executed.
When querying DBA_ADVISOR_TASKS you see the just created job:

SELECT * FROM dba_advisor_tasks;

OWNER  TASK_ID  TASK_NAME   DESCRIPTION        ADVISOR_NAME  CREATED    LAST_MODI  PARENT_TASK_ID  PARENT_REC_ID  READ_
-----  -------  ----------  -----------------  ------------  ---------  ---------  --------------  -------------  -----
SCOTT        5  SCOTT_ADDM  ADDM Feature Test  ADDM          17-NOV-03  17-NOV-03               0              0  FALSE

When the job has successfully completed, examine the recommendation
s made by ADDM by calling the DBMS_ADVISOR.GET_TASK_REPORT() routine, like in: S
ET LONG 1000000 PAGESIZE 0 LONGCHUNKSIZE 1000 COLUMN get_clob FORMAT a80 SELECT
dbms_advisor.get_task_report('SCOTT_ADDM', 'TEXT', 'TYPICAL') FROM sys.dual; The
recommendations supplied should be sufficient to investigate the performance is
sue, as in: DETAILED ADDM REPORT FOR TASK 'SCOTT_ADDM' WITH ID 5 ---------------
------------------------------------
Analysis Period:        17-NOV-2003 from 09:50:21 to 10:35:47
Database ID/Instance:   494687018/1
Snapshot Range:         from 1 to 3
Database Time:          4215 seconds
Average Database Load:  1.5 active sessions
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ FINDING
1: 65% impact (2734 seconds) -----------------------------------PL/SQL executio
n consumed significant database time. RECOMMENDATION 1: SQL Tuning, 65% benefit
(2734 seconds) ACTION: Tune the PL/SQL block with SQL_ID fjxa1vp3yhtmr. Refer to
the "Tuning PL/SQL Applications" chapter of Oracle's "PL/SQL User's Guide and R
eference" RELEVANT OBJECT: SQL statement with SQL_ID fjxa1vp3yhtmr BEGIN EMD_NOT
IFICATION.QUEUE_READY(:1, :2, :3); END; FINDING 2: 35% impact (1456 seconds) ---
--------------------------------SQL statements consuming significant database ti
me were found. RECOMMENDATION 1: SQL Tuning, 35% benefit (1456 seconds) ACTION:
Run SQL Tuning Advisor on the SQL statement with SQL_ID gt9ahqgd5fmm2. RELEVANT
OBJECT: SQL statement with SQL_ID gt9ahqgd5fmm2 and PLAN_HASH 547793521
UPDATE bigemp SET empno = ROWNUM FINDING 3: 20% impact (836 seconds) -----------
-----------------------The throughput of the I/O subsystem was significantly low
er than expected. RECOMMENDATION 1: Host Configuration, 20% benefit (836 seconds
) ACTION: Consider increasing the throughput of the I/O subsystem. Oracle's reco
mmended solution is to stripe all data file using the SAME methodology. You migh
t also need to increase the number of disks for better performance. RECOMMENDATI
ON 2: Host Configuration, 14% benefit (584 seconds) ACTION: The performance of f
ile D:\ORACLE\ORADATA\V1010\UNDOTBS01.DBF was significantly worse than other fil
es. If striping all files using the SAME methodology is not possible, consider s
triping this file over multiple disks. RELEVANT OBJECT: database file "D:\ORACLE
\ORADATA\V1010\UNDOTBS01.DBF" SYMPTOMS THAT LED TO THE FINDING: Wait class "User
I/O" was consuming significant database time. (34% impact [1450 seconds]) FINDI
NG 4: 11% impact (447 seconds) ----------------------------------Undo I/O was a
significant portion (33%) of the total database I/O. NO RECOMMENDATIONS AVAILABL
E SYMPTOMS THAT LED TO THE FINDING: The throughput of the I/O subsystem was sign
ificantly lower than expected. (20% impact [836 seconds]) Wait class "User I/O"
was consuming significant database time. (34% impact [1450 seconds]) FINDING 5:
9.9% impact (416 seconds) -----------------------------------Buffer cache writes
due to small log files were consuming significant database time. RECOMMENDATION
1: DB Configuration, 9.9% benefit (416 seconds) ACTION: Increase the size of th
e log files to 796 M to hold at least 20 minutes of redo information. SYMPTOMS T
HAT LED TO THE FINDING: The throughput of the I/O subsystem was significantly lo
wer than expected. (20% impact [836 seconds]) Wait class "User I/O" was consumin
g significant database time. (34% impact [1450 seconds]) FINDING 6: 9.2% impact
(387 seconds) -----------------------------------Individual database segments re
sponsible for significant user I/O wait were found.
RECOMMENDATION 1: Segment Tuning, 7.2% benefit (304 seconds) ACTION: Run "Segmen
t Advisor" on database object "SCOTT.BIGEMP" with object id 49634. RELEVANT OBJE
CT: database object with id 49634 ACTION: Investigate application logic involvin
g I/O on database object "SCOTT.BIGEMP" with object id 49634. RELEVANT OBJECT: d
atabase object with id 49634 RECOMMENDATION 2: Segment Tuning, 2% benefit (83 se
conds) ACTION: Run "Segment Advisor" on database object "SYSMAN.MGMT_METRICS_RAW
_PK" with object id 47084. RELEVANT OBJECT: database object with id 47084 ACTION
: Investigate application logic involving I/O on database object "SYSMAN.MGMT_ME
TRICS_RAW_PK" with object id 47084. RELEVANT OBJECT: database object with id 470
84 SYMPTOMS THAT LED TO THE FINDING: Wait class "User I/O" was consuming signifi
cant database time. (34% impact [1450 seconds]) FINDING 7: 8.7% impact (365 seco
nds) -----------------------------------Individual SQL statements responsible fo
r significant physical I/O were found. RECOMMENDATION 1: SQL Tuning, 8.7% benefi
t (365 seconds) ACTION: Run SQL Tuning Advisor on the SQL statement with SQL_ID
gt9ahqgd5fmm2. RELEVANT OBJECT: SQL statement with SQL_ID gt9ahqgd5fmm2 and PLAN
_HASH 547793521 UPDATE bigemp SET empno = ROWNUM RECOMMENDATION 2: SQL Tuning, 0
% benefit (0 seconds) ACTION: Tune the PL/SQL block with SQL_ID fjxa1vp3yhtmr. R
efer to the "Tuning PL/SQL Applications" chapter of Oracle's "PL/SQL User's Guid
e and Reference" RELEVANT OBJECT: SQL statement with SQL_ID fjxa1vp3yhtmr BEGIN
EMD_NOTIFICATION.QUEUE_READY(:1, :2, :3); END; SYMPTOMS THAT LED TO THE FINDING:
The throughput of the I/O subsystem was significantly lower than expected. (20%
impact [836 seconds]) Wait class "User I/O" was consuming significant database
time. (34% impact [1450 seconds]) FINDING 8: 8.3% impact (348 seconds) ---------
--------------------------Wait class "Configuration" was consuming significant d
atabase time. NO RECOMMENDATIONS AVAILABLE ADDITIONAL INFORMATION: Waits for fre
e buffers were not consuming significant database time. Waits for archiver proce
sses were not consuming significant database time. Log file switch operations we
re not consuming significant database time while waiting for checkpoint completi
on. Log buffer space waits were not consuming significant database
time. High watermark (HW) enqueue waits were not consuming significant database
time. Space Transaction (ST) enqueue waits were not consuming significant databa
se time. ITL enqueue waits were not consuming significant database time. ~~~~~~~
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ ADDITIONAL INF
ORMATION ---------------------An explanation of the terminology used in this rep
ort is available when you run the report with the 'ALL' level of detail. The ana
lysis of I/O performance is based on the default assumption that the average rea
d time for one database block is 5000 micro-seconds. Wait class "Administrative"
was not consuming significant database time. Wait class "Application" was not c
onsuming significant database time. Wait class "Cluster" was not consuming signi
ficant database time. Wait class "Commit" was not consuming significant database
time. Wait class "Concurrency" was not consuming significant database time. CPU
was not a bottleneck for the instance. Wait class "Network" was not consuming s
ignificant database time. Wait class "Scheduler" was not consuming significant d
atabase time. Wait class "Other" was not consuming significant database time. ==
=========================== END OF ADDM REPORT ======================
ADDM points out which events cause the performance problems to occur and suggest
s directions to follow to fix these bottlenecks. The ADDM recommendations show a
mongst others that the query on BIGEMP needs to be examined; in this case it sug
gests to run the Segment Advisor to check whether the data segment is fragmented
or not; it also advises to check the application logic involved in accessing th
e BIGEMP table. Furthermore, it shows the system suffers from I/O problems (whic
h is in this example caused by not using SAME and placing all database files on
a single disk partition). The findings are sorted descending by impact: the issu
es causing the greatest performance problems are listed at the top of the report
. Solving these issues will result in the greatest performance benefits. Also, i
n the section of the report ADDM indicates the areas that are not representing a
problem for the performance of the instance. In this example the database is rat
her idle. As such the Enterprise Manager notification job (which runs frequently
) is listed at the top. You need not worry about this job at all. Please notice
that the output of the last query may differ depending on what took place on you
r database at the time the ADDM recommendations were generated. RELATED DOCUMENT
S
last
----------------Oracle10g Database Performance Guide Release 1 (10.1) Oracle10g
Database Reference Release 1 (10.1) PL/SQL Packages and Types Reference Release
1 (10.1) Note 2: ======= To determine which segments will benefit from segment s
hrink, you can invoke Segment Advisor. alter table hr.employees enable row movem
ent; After the Segment Advisor has been invoked to give recommendations, the fin
dings are available in DBA_ADVISOR_FINDINGS and DBA_ADVISOR_RECOMMENDATIONS.

variable task_id number;

declare
  name   varchar2(100);
  descr  varchar2(500);
  obj_id number;
begin
  name  := '';
  descr := 'Check HR.EMPLOYEE';
  DBMS_ADVISOR.CREATE_TASK('Segment Advisor', :task_id, name, descr, NULL);
  DBMS_ADVISOR.CREATE_OBJECT(name, 'TABLE', 'HR', 'EMPLOYEES', NULL, NULL, obj_id);
  DBMS_ADVISOR.SET_TASK_PARAMETER(name, 'RECOMMEND_ALL', 'TRUE');
  DBMS_ADVISOR.EXECUTE_TASK(name);
end;
/

PL/SQL procedure successfully completed.

print task_id

TASK_ID
-------
      6

SELECT owner, task_id, task_na
me, type, message, more_info FROM DBA_ADVISOR_FINDINGS WHERE task_id=6;

OWNER  TASK_ID  TASK_NAME   TYPE         MESSAGE
-----  -------  ----------  -----------  --------------------------------------------------
RJB          6  TASK_00003  INFORMATION  Perform shrink, estimated savings is 107602 bytes.

In DBA_ADVISOR_ACTIONS, you can even find the exact SQL state
ment to shrink the
hr.employees segment. alter table hr.employees shrink space;
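The note above mentions DBA_ADVISOR_ACTIONS but does not show the query. A minimal
sketch (task_id 6 is the task from the example above; in 10g the generated statement
is typically held in one of the ATTR columns, for example ATTR1):

SELECT rec_id, action_id, command, attr1
FROM   dba_advisor_actions
WHERE  task_id = 6;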
==============================
34. ASM and RAC in Oracle 10g:
==============================

========
34.1 ASM
========

Note 1:
=======
Automatic Storage Management (ASM) in Oracle Database
10g With ASM, Automatic Storage Management, there is a separate lightweight 10g
database involved. This ASM database (+ASM), contains all metadata about the ASM
system. It also acts as the interface between the regular database and the file
systems. ASM will provide for presentation and implementation of a special files
ystem, on which a number of redundancy/availability and performance features are
implemented. In addition to the normal database background processes like CKPT,
DBWR, LGWR, SMON, and PMON, an ASM instance uses at least two additional backgr
ound processes to manage data storage operations. The Rebalancer process, RBAL,
coordinates the rebalance activity for ASM disk groups, and the Actual ReBalance
processes, ARBn, handle the actual rebalance of data extent movements. There ar
e usually several ARB background processes (ARB0, ARB1, and so forth). Every dat
abase instance that uses ASM for file storage, will also need two new processes.
The Rebalancer background process (RBAL) handles global opens of all ASM disks
in the ASM Disk Groups, while the ASM Bridge process (ASMB) connects as a foregr
ound process into the ASM instance when the regular database instance starts. AS
MB facilitates communication between the ASM instance and the regular database,
including handling physical file changes like data file creation and deletion.
ASMB exchanges messages between both servers for statistics update and instance
health validation. These two processes are automatically started by the database
instance when a new Oracle file type for example, a tablespace's datafile -- is
created on an ASM disk group. When an ASM instance mounts a disk group, it regi
sters the disk group and connect string with Group Services. The database instan
ce knows the name of the disk group, and can therefore use it to locate connect
information for the correct ASM instance. ======== Note 2: ======== Some termino
logy in RAC: CRS cluster ready services - Clusterware: For Oracle10g on Linux an
d Windows-based platforms, CRS co-exists with but does not inter-operate with ve
ndor clusterware. You may use vendor clusterware for all UNIX-based operating sy
stems except for Linux. Even though, many of the Unix platforms have their own c
lusterware products, you need to use the CRS software to provide the HA support
services. CRS (cluster ready services) supports services and workload management
and helps to maintain the continuous availability of the services. CRS also man
ages resources such as virtual IP (VIP) address for the node and the global serv
ices daemon. Note that the "Voting disks" and the "Oracle Cluster Registry", are
regarded as part of the CRS. OCR: The Oracle Cluster Registry (OCR) contains cl
uster and database configuration information for Real Application Clusters Clust
er Ready Services (CRS), including the list of nodes in the cluster database, th
e CRS application, resource profiles, and the authorizations for the Event Manag
er (EVM). The OCR can reside in a file on a cluster file system or on a shared r
aw device. When you install Real Application Clusters, you specify the location
of the OCR. OCFS: OCFS is a shared disk cluster filesystem. Version 1 released f
or Linux is specifically designed to alleviate the need for managing raw device
s. It can contain all the oracle datafiles, archive log files and controlfiles.
It is however not designed as a general purpose filesystem. OCFS2 is the next ge
neration of the Oracle Cluster File System for Linux. It is an
extent based, POSIX compliant file system. Unlike the previous release (OCFS), O
CFS2 is a general-purpose file system that can be used for shared Oracle home in
stallations making management of Oracle Real Application Cluster (RAC) installat
ions even easier. Among the new features and benefits are: Node and architecture
local files using Context Dependent Symbolic Links (CDSL) Network based pluggab
le DLM Improved journaling / node recovery using the Linux Kernel "JBD" subsyste
m Improved performance of meta-data operations (space allocation, locking, etc).
Improved data caching / locking (for files such as oracle binaries, libraries,
etc) - OCFS1 does NOT support a shared Oracle Home - OCFS2 does support a shared
Oracle Home Though ASM appears to be the intended replacement for Oracle Cluste
r File System (OCFS) for the Real Applications Cluster (RAC). ASM supports Oracl
e Real Application Clusters (RAC), so there is no need for a separate Cluster LV
M or a Cluster File System. So it boils down to:
- You use either OCFS2, or RAW, or ASM (preferably) for your database files.

Storage Option                  Oracle Clusterware   Database   Recovery area
------------------------------  -------------------  ---------  -------------
Automatic Storage Management    No                   Yes        Yes
Cluster file system (OCFS)      Yes                  Yes        Yes
Shared raw storage              Yes                  Yes        No

========
Note 3: ======== Automatic Storage Management (ASM) simplifies database administ
ration. It eliminates the need for you, as a DBA, to directly manage potentially
thousands of Oracle database files. It does this by enabling you to create disk
groups, which are comprised of disks and the files that reside on them. You onl
y need to manage a small number of disk groups. In the SQL statements that you u
se for creating database structures such as tablespaces, redo log and archive lo
g files, and control files, you specify file location in terms of disk groups. A
utomatic Storage Management then creates and manages the associated underlying f
iles for you. Automatic Storage Management extends the power of Oracle-managed f
iles. With Oracle-managed files, files are created and managed automatically for
you, but with Automatic Storage
Management you get the additional benefits of features such as mirroring and str
iping. The primary component of Automatic Storage Management is the disk group.
You configure Automatic Storage Management by creating disk groups, which, in yo
ur database instance, can then be specified as the default location for files cr
eated in the database. Oracle provides SQL statements that create and manage dis
k groups, their contents, and their metadata. A disk group consists of a groupin
g of disks that are managed together as a unit. These disks are referred to as A
SM disks. Files written on ASM disks are ASM files, whose names are automaticall
y generated by Automatic Storage Management. You can specify user-friendly alias
names for ASM files, but you must create a hierarchical directory structure for
these alias names. You can affect how Automatic Storage Management places files
on disks by specifying failure groups. Failure groups define disks that share c
omponents, such that if one fails then other disks sharing the component might a
lso fail. An example of what you might define as a failure group would be a set
of SCSI disks sharing the same SCSI controller. Failure groups are used to deter
mine which ASM disks to use for storing redundant data. For example, if two-way
mirroring is specified for a file, then redundant copies of file extents must be
stored in separate failure groups. If you would take a look at the v$datafile,
v$logfile, and v$controlfile of the regular Database, you would see information
like in the following example:

SQL> select file#, name from v$datafile;

     1  +DATA1/rac0/datafile/system.256.1
     2  +DATA1/rac0/datafile/undotbs.258.1
     3  +DATA1/rac0/datafile/sysaux.257.1
     4  +DATA1/rac0/datafile/users.259.1
     5  +DATA1/rac0/datafile/example.269.1
SQL> select name from v$controlfile; +DATA1/rac0/controlfile/current.261.3 +DATA
1/rac0/controlfile/current.260.3
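To illustrate how a regular database instance references ASM storage, creating a
tablespace only needs the disk group name; file naming and placement are handled by
ASM/OMF. A minimal sketch (the tablespace names are hypothetical; +DATA1 is the disk
group shown in the output above):

-- Explicitly place the datafile in a disk group
CREATE TABLESPACE asm_demo_ts DATAFILE '+DATA1' SIZE 100M;

-- Or point DB_CREATE_FILE_DEST at the disk group and omit the file specification
ALTER SYSTEM SET db_create_file_dest = '+DATA1';
CREATE TABLESPACE asm_demo_ts2;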
-- Initialization Parameters (init.ora or SPFILE) for ASM Instances The followin
g initialization parameters relate to an ASM instance. Parameters that start wit
h ASM_ cannot be set in database instances.

INSTANCE_TYPE
  Must be set to INSTANCE_TYPE = ASM.
  Note: This is the only required parameter. All other parameters take suitable
  defaults for most environments.

DB_UNIQUE_NAME
  Unique name for this group of ASM instances within the cluster or on a node.
  Default: +ASM (needs to be modified only if trying to run multiple ASM instances
  on the same node).

ASM_POWER_LIMIT
  The maximum power on an ASM instance for disk rebalancing. Can range from 1 to 11.
  1 is the lowest priority. Default: 1.
  See Also: "Tuning Rebalance Operations"

ASM_DISKSTRING
  Limits the set of disks that Automatic Storage Management considers for discovery.
  Default: NULL (this default causes ASM to find all of the disks in a
  platform-specific location to which it has read/write access). Example: /dev/raw/*

ASM_DISKGROUPS
  Lists the names of disk groups to be mounted by an ASM instance at startup, or
  when the ALTER DISKGROUP ALL MOUNT statement is used. Default: NULL (if this
  parameter is not specified, then no disk groups are mounted.)
Note: This parameter is dynamic and if you are using a server parameter file (SP
FILE), then you should rarely need to manually alter this value. Automatic Stora
ge Management automatically adds a disk group to this parameter when a disk grou
p is successfully mounted, and automatically removes a disk group that is specif
ically dismounted. However, when using a traditional text initialization paramet
er file, remember to edit the initialization parameter file to add the name of a
ny disk group that you want automatically mounted at instance startup, and remov
e the name of any disk group that you no longer want automatically mounted.
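For a traditional text init.ora of an ASM instance this amounts to something like the
following minimal sketch (the disk group names and discovery string are hypothetical
values, not taken from the examples above):

INSTANCE_TYPE   = ASM
ASM_DISKSTRING  = '/dev/raw/*'
ASM_DISKGROUPS  = 'DATA1', 'FRA1'
ASM_POWER_LIMIT = 1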
-- ASM Views: The ASM configuration can be viewed using the V$ASM_% views, which
often contain different information depending on whether they are queried from
the ASM instance, or a dependent database instance. Viewing ASM Instance Informa
tion Via SQL Queries Finally, there are several dynamic and data dictionary view
s available to view an ASM configuration from within the ASM instance itself:

ASM Dynamic Views (from the ASM instance):

V$ASM_ALIAS      Shows every alias for every disk group mounted by the ASM instance.
V$ASM_CLIENT     Shows which database instance(s) are using any ASM disk groups that
                 are being mounted by this ASM instance.
V$ASM_DISK       Lists each disk discovered by the ASM instance, including disks that
                 are not part of any ASM disk group.
V$ASM_DISKGROUP  Describes information about ASM disk groups mounted by the ASM instance.
V$ASM_FILE       Lists each ASM file in every ASM disk group mounted by the ASM instance.
V$ASM_OPERATION  Like its counterpart, V$SESSION_LONGOPS, it shows each long-running
                 ASM operation in the ASM instance.
V$ASM_TEMPLATE   Lists each template present in every ASM disk group mounted by the
                 ASM instance.
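For example, a quick look at disk group capacity and at the individual disks, run from
the ASM instance (a minimal sketch using standard columns of these views):

SELECT name, state, type, total_mb, free_mb
FROM   v$asm_diskgroup;

SELECT group_number, disk_number, name, path, total_mb, free_mb
FROM   v$asm_disk;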
-- Managing disk groups The SQL statements introduced in this section are only a
vailable in an ASM instance. You must first start the ASM instance. Creating dis
k group examples: Example 1: ---------Creating a Disk Group: Example The followi
ng examples assume that the ASM_DISKSTRING is set to '/devices/*'. Assume the fo
llowing: ASM disk discovery identifies the following disks in directory /devices
. /devices/diska1 /devices/diska2 /devices/diska3 /devices/diska4 /devices/diskb
1 /devices/diskb2 /devices/diskb3 /devices/diskb4 The disks diska1 - diska4 are
on a separate SCSI controller from disks diskb1 - diskb4. The following SQL*Plus s
ession illustrates starting an ASM instance and creating a disk group named dgro
up1.
% SQLPLUS /NOLOG SQL> CONNECT / AS SYSDBA SQL> CREATE DISKGROUP dgroup1 NORMAL R
EDUNDANCY 2 FAILGROUP controller1 DISK 3 '/devices/diska1', 4 '/devices/diska2',
5 '/devices/diska3', 6 '/devices/diska4', 7 FAILGROUP controller2 DISK 8 '/devi
ces/diskb1', 9 '/devices/diskb2', 10 '/devices/diskb3', 11 '/devices/diskb4'; In
this example, dgroup1 is composed of eight disks that are defined as belonging
to either failure group controller1 or controller2. Since NORMAL REDUNDANCY leve
l is specified for the disk group, then Automatic Storage Management provides re
dundancy for all files created in dgroup1 according to the attributes specified
in the disk group templates. For example, in the system default template shown i
n the table in "Managing Disk Group Templates", normal redundancy for the online
redo log files (ONLINELOG template) is two-way mirroring. This means that when
one copy of a redo log file extent is written to a disk in failure group control
ler1, a mirrored copy of the file extent is written to a disk in failure group c
ontroller2. You can see that to support normal redundancy level, at least two fa
ilure groups must be defined. Since no NAME clauses are provided for any of the
disks being included in the disk group, the disks are assigned the names of dgro
up1_0001, dgroup1_0002, ..., dgroup1_0008. Example 2: ---------CREATE DISKGROUP
disk_group_1 NORMAL REDUNDANCY FAILGROUP failure_group_1 DISK '/devices/diska1'
NAME diska1, '/devices/diska2' NAME diska2, FAILGROUP failure_group_2 DISK '/dev
ices/diskb1' NAME diskb1, '/devices/diskb2' NAME diskb2; Example 3: ---------At
some point in using OUI in installing the software, and creating a database, you
will see the following screen: ------------------------------------------------
----
|SPECIFY Database File Storage Option | | | | o File system | | Specify Database
file location: ######### | | | | o Automatic Storage Management (ASM) | | | | o
Raw Devices | | | | Specify Raw Devices mapping file: ########## | ------------
---------------------------------------Suppose that you have on a Linux machine
the following raw disk devices: /dev/raw/raw1 /dev/raw/raw2 /dev/raw/raw3 /dev/r
aw/raw4 /dev/raw/raw5 /dev/raw/raw6 8GB 8GB 6GB 6GB 6GB 6GB
Then you can choose ASM in the upper screen, and see the following screen, where
you can create the initial diskgroup and assign disks to it: ------------------
----------------------------------| Configure Automatic Storage Management | | D
isk Group Name: data1 | | Redundancy | o High o Normal o External | | Add member
Disks | |-------------------------------| | select Disk Path | | |[#] /dev/raw/
raw1 | | |[#] /dev/raw/raw2 | | |[ ] /dev/raw/raw3 | | |[ ] /dev/raw/raw4 | | --
-----------------------------| -------------------------------------------------
----- Mounting and Dismounting Disk Groups Disk groups that are specified in the
ASM_DISKGROUPS initialization parameter are mounted automatically at ASM instan
ce startup. This makes them available to all database instances running on the s
ame node as Automatic Storage Management. The disk groups are dismounted at ASM
instance shutdown. Automatic Storage Management also automatically mounts a disk
group when you initially create it, and dismounts a disk group if you drop it.
There may be times that you want to mount or dismount disk groups manually. For
these actions use
the ALTER DISKGROUP ... MOUNT or ALTER DISKGROUP ... DISMOUNT statement. You can
mount or dismount disk groups by name, or specify ALL. If you try to dismount a
disk group that contains open files, the statement will fail, unless you also s
pecify the FORCE clause. Example The following statement dismounts all disk grou
ps that are currently mounted to the ASM instance: ALTER DISKGROUP ALL DISMOUNT;
The following statement mounts disk group dgroup1: ALTER DISKGROUP dgroup1 MOUN
T;

========
Note 4:
========
-- Installing Oracle ASMLib for Linux:

ASMLib is a support library for the Automatic Storage Management feature of Oracle
Database 10g. This document is a set of tips for installing the Linux-specific ASM
library and its associated driver. This library is provided to enable ASM I/O to
Linux disks without the limitations of the standard Unix I/O API. The steps below
are steps that the system administrator must follow.
The ASMLib software is available from the Oracle Technology Network. Go to ASMLi
b download page and follow the link for your platform. You will see 4-6 packages
for your Linux platform. -The oracleasmlib package provides the actual ASM libr
ary. -The oracleasm-support package provides the utilities used to get the ASM d
river up and running. Both of these packages need to be installed. -The remainin
g packages provide the kernel driver for the ASM library. Each package provides
the driver for a different kernel. You must install the appropriate package for
the kernel you are running. Use the "uname -r" command to determine the version of the kernel. The oracleasm kernel driver package will have that version string i
n its name. For example, if you were running Red Hat Enterprise Linux 4 AS, and
the kernel you were using was the 2.6.9-5.0.5.ELsmp kernel, you would choose the
oracleasm-2.6.9-5.0.5-ELsmp package.
So, for example, to install these packages on RHEL4 on an Intel x86 machine, you might use the command:

rpm -Uvh oracleasm-support-2.0.0-1.i386.rpm \
         oracleasmlib-2.0.0-1.i386.rpm \
         oracleasm-2.6.9-5.0.5-ELsmp-2.0.0-1.i686.rpm

Once the command completes, ASMLib is now installed on the system.

-- Configuring ASMLib:
Now that the ASMLib software is installed, a few steps have to be taken by the s
ystem administrator to make the ASM driver available. The ASM driver needs to be
loaded, and the driver filesystem needs to be mounted. This is taken care of by
the initialization script, "/etc/init.d/oracleasm". Run the "/etc/init.d/oracle
asm" script with the "configure" option. It will ask for the user and group that
default to owning the ASM driver access point. If the database was running as t
he 'oracle' user and the 'dba' group, the output would look like this: [root@ca-
test1 /]# /etc/init.d/oracleasm configure Configuring the Oracle ASM library dri
ver. This will configure the on-boot properties of the Oracle ASM library driver
. The following questions will determine whether the driver is loaded on boot an
d what permissions it will have. The current values will be shown in brackets ('
[]'). Hitting without typing an answer will keep that current value. Ctrl-C will
abort. Default user to own the driver interface []: oracle Default group to own
the driver interface []: dba Start Oracle ASM library driver on boot (y/n) [n]:
y
Fix permissions of Oracle ASM disks on boot (y/n) [y]: y
Writing Oracle ASM library driver configuration             [ OK ]
Creating /dev/oracleasm mount point                         [ OK ]
Loading module "oracleasm"                                  [ OK ]
Mounting ASMlib driver filesystem                           [ OK ]
Scanning system for ASM disks                               [ OK ]
This should load the oracleasm.o driver module and mount the ASM driver filesyst
em. By selecting enabled = 'y' during the configuration, the system will always
load the module and mount the filesystem on boot. The automatic start can be ena
bled or disabled with the 'enable' and 'disable' options to /etc/init.d/oracleas
m:

[root@ca-test1 /]# /etc/init.d/oracleasm disable
Writing Oracle ASM library driver configuration             [ OK ]
Unmounting ASMlib driver filesystem                         [ OK ]
Unloading module "oracleasm"                                [ OK ]

[root@ca-test1 /]# /etc/init.d/oracleasm enable
Writing Oracle ASM library driver configuration             [ OK ]
Loading module "oracleasm"                                  [ OK ]
Mounting ASMlib driver filesystem                           [ OK ]
Scanning system for ASM disks                               [ OK ]

-- Making Disks Available to ASMLib:
The system administrator has one last task. Every disk that ASMLib is going to b
e accessing needs to be made available. This is accomplished by creating an ASM
disk. The /etc/init.d/oracleasm script is again used for this task: [root@ca-tes
t1 /]# /etc/init.d/oracleasm createdisk VOL1 /dev/sdg1 Creating Oracle ASM disk
"VOL1" [ OK ] Disk names are ASCII capital letters, numbers, and underscores. Th
ey must start with a letter. Disks that are no longer used by ASM can be unmarke
d as well: [root@ca-test1 /]# /etc/init.d/oracleasm deletedisk VOL1 Deleting Ora
cle ASM disk "VOL1" [ OK ]
Any operating system disk can be queried to see if it is used by ASM:

[root@ca-test1 /]# /etc/init.d/oracleasm querydisk /dev/sdg1
Checking if device "/dev/sdg1" is an Oracle ASM disk        [  OK  ]
[root@ca-test1 /]# /etc/init.d/oracleasm querydisk /dev/sdh1
Checking if device "/dev/sdh1" is an Oracle ASM disk        [FAILED]
Existing disks can be listed and queried: [root@ca-test1 /]# /etc/init.d/oraclea
sm listdisks VOL1 VOL2 VOL3 [root@ca-test1 /]# /etc/init.d/oracleasm querydisk V
OL1 Checking for ASM disk "VOL1"
[ OK ]
When a disk is added to a RAC setup, the other nodes need to be notified about i
t. Run the 'createdisk' command on one node, and then run 'scandisks' on every o
ther node:

[root@ca-test1 /]# /etc/init.d/oracleasm scandisks
Scanning system for ASM disks                               [ OK ]

-- Discovery Strings for Linux ASMLib:

ASMLib uses discovery strings
to determine what disks ASM is asking for. The generic Linux ASMLib uses glob s
trings. The string must be prefixed with "ORCL:". Disks are specified by name. A
disk created with the name "VOL1" can be discovered in ASM via the discovery st
ring "ORCL:VOL1". [ OK ]
Similarly, all disks that start with the string "VOL" can be queried with the di
scovery string "ORCL:VOL*". Disks cannot be discovered with path names in the di
scovery string. If the prefix is missing, the generic Linux ASMLib will ignore t
he discovery string completely, expecting that it is intended for a different AS
MLib. The only exception is the empty string (""), which is considered a full wi
ldcard. This is precisely equivalent to the discovery string "ORCL:*". NOTE: Onc
e you mark your disks with Linux ASMLib, Oracle Database 10g R1 (10.1) OUI will
not be able to discover your disks. It is recommended that you complete a Softwa
re Only install and then use DBCA to create your database (or use the custom ins
tall).
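On the ASM instance side, the ASMLib-marked disks are then picked up through the
discovery string. A minimal sketch (the 'ORCL:*' value follows the naming convention
described above):

-- In the ASM instance, point discovery at the ASMLib-managed disks
ALTER SYSTEM SET asm_diskstring = 'ORCL:*';

-- Disks marked with createdisk should now be visible
SELECT name, path, header_status
FROM   v$asm_disk;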
========
Note 5:
========
Automatic Storage Management (ASM) is a new feature that has been introduced in
Oracle 10g to simplify the storage of Oracle datafiles, controlfiles and logfiles.

Contents:
- Overview of Automatic Storage Management (ASM)
- Initialization Parameters and ASM Instance Creation
- Startup and Shutdown of ASM Instances
- Administering ASM Disk Groups (Disks, Templates, Directories, Aliases, Files,
  Checking Metadata)
- ASM Filenames
- ASM Views
- SQL and ASM
- Migrating to ASM Using RMAN
Overview of Automatic Storage Management (ASM) Automatic Storage Management (ASM
) simplifies administration of Oracle related files by allowing the administrato
r to reference disk groups rather than individual disks and files, which are man
aged by ASM. The ASM functionality is an extention of the Oracle Managed Files (
OMF) functionality that also includes striping and mirroring to provide balanced
and secure storage. The new ASM functionality can be used in combination with e
xisting raw and cooked file systems, along with OMF and manually managed files.
The ASM functionality is controlled by an ASM instance. This is not a full datab
ase instance, just the memory structures and as such is very small and lightweig
ht.
The main components of ASM are disk groups, each of which comprise of several ph
ysical disks that are controlled as a single unit. The physical disks are known
as ASM disks, while the files that reside on the disks are know as ASM files. Th
e locations and names for the files are controlled by ASM, but user-friendly ali
ases and directory structures can be defined for ease of reference. The level of
redundancy and the granularity of the striping can be controlled using template
s. Default templates are provided for each file type stored by ASM, but addition
al templates can be defined as needed. Failure groups are defined within a disk
group to support the required level of redundancy. For two-way mirroring you wou
ld expect a disk group to contain two failure groups so individual files are wri
tten to two locations. In summary ASM provides the following functionality: Mana
ges groups of disks, called disk groups. Manages disk redundancy within a disk g
roup. Provides near-optimal I/O balancing without any manual tuning. Enables man
agement of database objects without specifying mount points and filenames. Suppo
rts large files. Initialization Parameters and ASM Instance Creation The init.or
a / spfile initialization parameters that are of specific interest for an ASM in
stance are:

INSTANCE_TYPE  - Set to ASM or RDBMS depending on the instance type. The default is RDBMS.
DB_UNIQUE_NAME - Specifies a globally unique name for the database. This defaults to +ASM
                 but must be altered if you intend to run multiple ASM instances.
ASM_POWER_LIMIT - T
he maximum power for a rebalancing operation on an ASM instance. The valid value
s range from 1 to 11, with 1 being the default. The higher the limit the more re
sources are allocated resulting in faster rebalancing operations. This value is
also used as the default when the POWER clause is omitted from a rebalance opera
tion. ASM_DISKGROUPS - The list of disk groups that should be mounted by an ASM
instance during instance startup, or by the ALTER DISKGROUP ALL MOUNT statement.
ASM configuration changes are automatically reflected in this parameter. ASM_DI
SKSTRING - Specifies a value that can be used to limit the disks considered for
discovery. Altering the default value may improve the speed of disk group mount
time and the speed of adding a disk to a disk group. Changing the parameter to a
value which prevents the discovery of already mounted disks results in an error
. The default value is NULL allowing all suitable disks to be considered.
Incorrect usage of parameters in ASM or RDBMS instances results in ORA-15021 errors.

To create an ASM instance, first create a file called init+ASM.ora in the /tmp directory containing the following information:

INSTANCE_TYPE=ASM

Next, using SQL*Plus, connect to the idle instance:

export ORACLE_SID=+ASM
sqlplus / as sysdba

Create an spfile using the contents of the init+ASM.ora file:

SQL> CREATE SPFILE FROM PFILE='/tmp/init+ASM.ora';

File created.

Finally, start the instance with the NOMOUNT option:

SQL> startup nomount
ASM instance started

Total System Global Area  125829120 bytes
Fixed Size                  1301456 bytes
Variable Size             124527664 bytes
Database Buffers                  0 bytes
Redo Buffers                      0 bytes
SQL>
The ASM instance is now ready to use for creating and mounting disk groups. To shut down the ASM instance, issue the following command:

SQL> shutdown
ASM instance shutdown
SQL>

Once an ASM instance is present, disk groups can be used for the following parameters in database instances (INSTANCE_TYPE=RDBMS) to allow ASM file creation:

DB_CREATE_FILE_DEST
DB_CREATE_ONLINE_LOG_DEST_n
DB_RECOVERY_FILE_DEST
CONTROL_FILES
LOG_ARCHIVE_DEST_n
LOG_ARCHIVE_DEST
STANDBY_ARCHIVE_DEST

Startup and Shutdown of ASM Instances
ASM instances are started and stopped in a similar way to normal database instances. The options for the STARTUP command are:
FORCE   - Performs a SHUTDOWN ABORT before restarting the ASM instance.
MOUNT   - Starts the ASM instance and mounts the disk groups specified by the ASM_DISKGROUPS parameter.
NOMOUNT - Starts the ASM instance without mounting any disk groups.
OPEN    - This is not a valid option for an ASM instance.

The options for the SHUTDOWN command are:

NORMAL        - The ASM instance waits for all connected ASM instances and SQL sessions to exit, then shuts down.
IMMEDIATE     - The ASM instance waits for any SQL transactions to complete, then shuts down. It doesn't wait for sessions to exit.
TRANSACTIONAL - Same as IMMEDIATE.
ABORT         - The ASM instance shuts down instantly.
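A typical start/stop cycle of an ASM instance might look like this (a minimal sketch; which disk groups get mounted depends on ASM_DISKGROUPS):

export ORACLE_SID=+ASM
sqlplus / as sysdba
SQL> startup mount        -- starts the instance and mounts the disk groups in ASM_DISKGROUPS
SQL> shutdown immediate   -- dismounts the disk groups and stops the instance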
Administering ASM Disk Groups
Disk groups are created using the CREAT
E DISKGROUP statement. This statement allows you to specify the level of redunda
ncy:

NORMAL REDUNDANCY   - Two-way mirroring, requiring two failure groups.
HIGH REDUNDANCY     - Three-way mirroring, requiring three failure groups.
EXTERNAL REDUNDANCY - No mirroring for disks that are already protected using hardware mirroring or RAID.
In addition failure groups and preferred names for disks can be defined. If the
NAME clause is omitted the disks are given a system generated name like "disk_gr
oup_1_0001". The FORCE option can be used to move a disk from another disk group
into this one.

CREATE DISKGROUP disk_group_1 NORMAL REDUNDANCY
  FAILGROUP failure_group_1 DISK
    '/devices/diska1' NAME diska1,
    '/devices/diska2' NAME diska2,
  FAILGROUP failure_group_2 DISK
    '/devices/diskb1' NAME diskb1,
    '/devices/diskb2' NAME diskb2;

Disk groups can be deleted using the DROP DISKGROUP statement.

DROP DISKGROUP disk_group_1 INCLUDING CONTENTS;

Disks can be added or removed from disk groups using the ALTER DISKGROUP statement. Remember that the wildcard "*" can be used to reference disks, so long as the resulting string does not match a disk already used by an existing disk group.

-- Add disks.
ALTER DISKGROUP disk_group_1 ADD DISK
  '/devices/disk*3',
  '/devices/disk*4';

-- Drop a disk.
ALTER DISKGROUP disk_group_1 DROP DISK diska2;
Disks can be resized using the RESIZE clause of the ALTER DISKGROUP statement. The statement can be used to resize individual disks, all disks in a failure group, or all disks in the disk group. If the SIZE clause is omitted, the disks are resized to the size of the disk returned by the OS.

-- Resize a specific disk.
ALTER DISKGROUP disk_group_1 RESIZE DISK diska1 SIZE 100G;

-- Resize all disks in a failure group.
ALTER DISKGROUP disk_group_1 RESIZE DISKS IN FAILGROUP failure_group_1 SIZE 100G;

-- Resize all disks in a disk group.
ALTER DISKGROUP disk_group_1 RESIZE ALL SIZE 100G;

The UNDROP DISKS clause of the ALTER DISKGROUP statement allows pending disk drops to be undone. It will not revert drops that have completed, or disk drops associated with the dropping of a disk group.

ALTER DISKGROUP disk_group_1 UNDROP DISKS;

Disk groups can be rebalanced manually using the REBALANCE clause of the ALTER DISKGROUP statement. If the POWER clause is omitted, the ASM_POWER_LIMIT parameter value is used. Rebalancing is only needed when the speed of the automatic rebalancing is not appropriate.

ALTER DISKGROUP disk_group_1 REBALANCE POWER 5;

Disk groups are mounted at ASM instance startup and unmounted at ASM instance shutdown. Manual mounting and dismounting can be accomplished using the ALTER DISKGROUP statement as seen below.

ALTER DISKGROUP ALL DISMOUNT;
ALTER DISKGROUP ALL MOUNT;
ALTER DISKGROUP disk_group_1 DISMOUNT;
ALTER DISKGROUP disk_group_1 MOUNT;
Templates
Templates are named groups of attributes that can be applied to the files within a disk group. The following example shows how templates can be created, altered and dropped.

-- Create a new template.
ALTER DISKGROUP disk_group_1 ADD TEMPLATE my_template ATTRIBUTES (MIRROR FINE);

-- Modify a template.
ALTER DISKGROUP disk_group_1 ALTER TEMPLATE my_template ATTRIBUTES (COARSE);

-- Drop a template.
ALTER DISKGROUP disk_group_1 DROP TEMPLATE my_template;

Available attributes include:

UNPROTECTED - No mirroring or striping regardless of the redundancy setting.
MIRROR      - Two-way mirroring for normal redundancy and three-way mirroring for high redundancy. This attribute cannot be set for external redundancy.
COARSE      - Specifies lower granularity for striping. This attribute cannot be set for external redundancy.
FINE        - Specifies higher granularity for striping. This attribute cannot be set for external redundancy.

Directories
A directory hierarchy can be def
ined using the ALTER DISKGROUP statement to support ASM file aliasing. The follo
wing examples show how ASM directories can be created, modified and deleted.

-- Create a directory.
ALTER DISKGROUP disk_group_1 ADD DIRECTORY '+disk_group_1/my_dir';

-- Rename a directory.
ALTER DISKGROUP disk_group_1 RENAME DIRECTORY '+disk_group_1/my_dir' TO '+disk_group_1/my_dir_2';

-- Delete a directory and all its contents.
ALTER DISKGROUP disk_group_1 DROP DIRECTORY '+disk_group_1/my_dir_2' FORCE;

Aliases
Aliases allow you to reference ASM files using user-friendly names, rather than the fully qualified ASM filenames.

-- Create an alias using the fully qualified filename.
ALTER DISKGROUP disk_group_1 ADD ALIAS '+disk_group_1/my_dir/my_file.dbf' FOR '+disk_group_1/mydb/datafile/my_ts.342.3';

-- Create an alias using the numeric form filename.
ALTER DISKGROUP disk_group_1 ADD ALIAS '+disk_group_1/my_dir/my_file.dbf' FOR '+disk_group_1.342.3';

-- Rename an alias.
ALTER DISKGROUP disk_group_1 RENAME ALIAS '+disk_group_1/my_dir/my_file.dbf' TO '+disk_group_1/my_dir/my_file2.dbf';

-- Delete an alias.
ALTER DISKGROUP disk_group_1 DELETE ALIAS '+disk_group_1/my_dir/my_file.dbf';

Attempting to drop a system alias results in an error.

Files
Files are not deleted automatically if they are created using aliases, as they are not Oracle Managed Files (OMF), or if a recovery is done to a point-in-time before the file was created. For these circumstances it is necessary to manually delete the files, as shown below.

-- Drop a file using an alias.
ALTER DISKGROUP disk_group_1 DROP FILE '+disk_group_1/my_dir/my_file.dbf';

-- Drop a file using a numeric form filename.
ALTER DISKGROUP disk_group_1 DROP FILE '+disk_group_1.342.3';

-- Drop a file using a fully qualified filename.
ALTER DISKGROUP disk_group_1 DROP FILE '+disk_group_1/mydb/datafile/my_ts.342.3';
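Before dropping a file it can help to see what is actually stored in the disk group; a rough query run from the ASM instance (a sketch, not part of the original note) could be:

SQL> select a.name alias_name, f.type, f.bytes
     from v$asm_alias a, v$asm_file f
     where a.group_number = f.group_number
     and   a.file_number  = f.file_number;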
Checking Metadata
The internal consistency of disk group metadata can be checked in a number of ways using the CHECK clause of the ALTER DISKGROUP statement.

-- Check metadata for a specific file.
ALTER DISKGROUP disk_group_1 CHECK FILE '+disk_group_1/my_dir/my_file.dbf';

-- Check metadata for a specific failure group in the disk group.
ALTER DISKGROUP disk_group_1 CHECK FAILGROUP failure_group_1;

-- Check metadata for a specific disk in the disk group.
ALTER DISKGROUP disk_group_1 CHECK DISK diska1;

-- Check metadata for all disks in the disk group.
ALTER DISKGROUP disk_group_1 CHECK ALL;

ASM Views
The ASM configuration can be viewe
d using the V$ASM_% views, which often contain different information depending o
n whether they are queried from the ASM instance or a dependent database instance.

Viewing ASM Instance Information Via SQL Queries
Finally, there are several dynamic and data dictionary views available to view an ASM configuration from within the ASM instance itself:

-- ASM Dynamic Views: FROM ASM Instance

View Name        Description
V$ASM_ALIAS      Shows every alias for every disk group mounted by the ASM instance
V$ASM_CLIENT     Shows which database instance(s) are using any ASM disk groups that are being mounted by this ASM instance
V$ASM_DISK       Lists each disk discovered by the ASM instance, including disks that are not part of any ASM disk group
V$ASM_DISKGROUP  Describes information about ASM disk groups mounted by the ASM instance
V$ASM_FILE       Lists each ASM file in every ASM disk group mounted by the ASM instance
V$ASM_OPERATION  Like its counterpart, V$SESSION_LONGOPS, it shows each long-running ASM operation in the ASM instance
V$ASM_TEMPLATE   Lists each template present in every ASM disk group mounted by the ASM instance

I was also able to query the following dynamic views against my database instance to view the related ASM storage components of that instance:
-- ASM Dynamic Views: FROM Database Instance

View Name        Description
V$ASM_DISKGROUP  Shows one row per each ASM disk group that's mounted by the local ASM instance
V$ASM_DISK       Displays one row per each disk in each ASM disk group that is in use by the database instance
V$ASM_CLIENT     Lists one row per each ASM instance for which the database instance has any open ASM files
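For a quick look at space usage, the following query could be run in either the ASM instance or a database instance (a sketch, not part of the original note):

SQL> select group_number, name, state, type, total_mb, free_mb
     from v$asm_diskgroup;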
ASM Filenames
There are several ways to reference ASM files. Some forms are used during creation and some for referencing ASM files. The forms for file creation are incomplete, relying on ASM to create the fully qualified name, which can be retrieved from the supporting views. The forms of the ASM filenames are summarised below.

Filename Type                          Format
Fully Qualified ASM Filename           +dgroup/dbname/file_type/file_type_tag.file.incarnation
Numeric ASM Filename                   +dgroup.file.incarnation
Alias ASM Filenames                    +dgroup/directory/filename
Alias ASM Filename with Template       +dgroup(template)/alias
Incomplete ASM Filename                +dgroup
Incomplete ASM Filename with Template  +dgroup(template)

SQL and ASM
ASM filenames can be used in place of conventional filenames for most Oracle file types, including controlfiles, datafiles, logfiles etc. For example, the following command creates a new tablespace with a datafile in the disk_group_1 disk group.

CREATE TABLESPACE my_ts DATAFILE '+disk_group_1' SIZE 100M AUTOEXTEND ON;

Migrating to ASM Using RMAN
The fol
lowing method shows how a primary database can be migrated to ASM from a disk ba
sed backup.

1. Disable change tracking (only available in Enterprise Edition) if it is currently being used.

   SQL> ALTER DATABASE DISABLE BLOCK CHANGE TRACKING;

2. Shut down the database.

   SQL> SHUTDOWN IMMEDIATE

3. Modify the parameter file of the target database as follows:
   - Set the DB_CREATE_FILE_DEST and DB_CREATE_ONLINE_LOG_DEST_n parameters to the relevant ASM disk groups.
   - Remove the CONTROL_FILES parameter from the spfile so the control files will be moved to the DB_CREATE_* destination and the spfile gets updated automatically. If you are using a pfile, the CONTROL_FILES parameter must be set to the appropriate ASM files or aliases.

4. Start the database in nomount mode.

   RMAN> STARTUP NOMOUNT

5. Restore the controlfile into the new location from the old location.

   RMAN> RESTORE CONTROLFILE FROM 'old_control_file_name';

6. Mount the database.

   RMAN> ALTER DATABASE MOUNT;

7. Copy the database into the ASM disk group.

   RMAN> BACKUP AS COPY DATABASE FORMAT '+disk_group';

8. Switch all datafiles to the new ASM location.

   RMAN> SWITCH DATABASE TO COPY;

9. Open the database.

   RMAN> ALTER DATABASE OPEN;

10. Create new redo logs in ASM and delete the old ones (see the sketch below).

11. Enable change tracking if it was being used.

    SQL> ALTER DATABASE ENABLE BLOCK CHANGE TRACKING;

For more information see: Using Automatic Storage Management; Migrating a Database into ASM.
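Step 10 is not spelled out in the original note; a rough way to do it (a sketch, assuming a database with three redo log groups and DB_CREATE_ONLINE_LOG_DEST_n pointing at the ASM disk group) is:

SQL> ALTER DATABASE ADD LOGFILE;            -- new OMF group created in ASM
SQL> ALTER DATABASE ADD LOGFILE;
SQL> ALTER DATABASE ADD LOGFILE;
SQL> ALTER SYSTEM SWITCH LOGFILE;           -- repeat (plus checkpoint) until the old groups are INACTIVE
SQL> ALTER DATABASE DROP LOGFILE GROUP 1;   -- drop each old file system based group
SQL> ALTER DATABASE DROP LOGFILE GROUP 2;
SQL> ALTER DATABASE DROP LOGFILE GROUP 3;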

Note 6:
=======
Good example !!!! How to use Oracle 10g release 2 ASM on Linux:

[root@danaly etc]# fdisk /dev/cciss/c0d0

The number of cylinders for this disk is set to 8854. There is nothing wrong with that
, but this is larger than 1024, and could in certain setups cause problems with:
1) software that runs at boot time (e.g., old versions of LILO) 2) booting and
partitioning software from other OSs (e.g., DOS FDISK, OS/2 FDISK)

Command (m for help): p

Disk /dev/cciss/c0d0: 72.8 GB, 72833679360 bytes
255 heads, 63 sectors/track, 8854 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot         Start    End      Blocks   Id  System
/dev/cciss/c0d0p1 *        1     33      265041   83  Linux
/dev/cciss/c0d0p2         34    555     4192965   82  Linux swap
/dev/cciss/c0d0p3        556    686     1052257+  83  Linux
/dev/cciss/c0d0p4        687   8854    65609460    5  Extended
/dev/cciss/c0d0p5        687   1730     8385898+  83  Linux
/dev/cciss/c0d0p6       1731   2774     8385898+  83  Linux
/dev/cciss/c0d0p7       2775   3818     8385898+  83  Linux
/dev/cciss/c0d0p8       3819   4601     6289416   83  Linux
Command (m for help): n
First cylinder (4602-8854, default 4602):
Using default value 4602
Last cylinder or +size or +sizeM or +sizeK (4602-8854, default 8854): +20000M

Command (m for help): n
First cylinder (7035-8854, default 7035):
Using default value 7035
Last cylinder or +size or +sizeM or +sizeK (7035-8854, default 8854): +3000M

Command (m for help): n
First cylinder (7401-8854, default 7401):
Using default value 7401
Last cylinder or +size or +sizeM or +sizeK (7401-8854, default 8854): +3000M

Command (m for help): p

Disk /dev/cciss/c0d0: 72.8 GB, 72833679360 bytes
255 heads, 63 sectors/track, 8854 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot          Start    End      Blocks   Id  System
/dev/cciss/c0d0p1 *         1     33      265041   83  Linux
/dev/cciss/c0d0p2          34    555     4192965   82  Linux swap
/dev/cciss/c0d0p3         556    686     1052257+  83  Linux
/dev/cciss/c0d0p4         687   8854    65609460    5  Extended
/dev/cciss/c0d0p5         687   1730     8385898+  83  Linux
/dev/cciss/c0d0p6        1731   2774     8385898+  83  Linux
/dev/cciss/c0d0p7        2775   3818     8385898+  83  Linux
/dev/cciss/c0d0p8        3819   4601     6289416   83  Linux
/dev/cciss/c0d0p9        4602   7034    19543041   83  Linux
/dev/cciss/c0d0p10       7035   7400     2939863+  83  Linux
/dev/cciss/c0d0p11       7401   7766     2939863+  83  Linux
Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.

WARNING: Re-reading the partition table failed with error 16: Device or resource busy.
The kernel still uses the old table. The new table will be used at the next reboot.
Syncing disks.

[root@danaly data1]# /etc/init.d/oracleasm createdisk VOL5 /dev/cciss/c0d0p10
Marking disk "/dev/cciss/c0d0p10" as an ASM disk:        [ OK ]
[root@danaly data1]# /etc/init.d/oracleasm createdisk VOL6 /dev/cciss/c0d0p11
Marking disk "/dev/cciss/c0d0p11" as an ASM disk:        [ OK ]
[root@danaly data1]# /etc/init.d/oracleasm listdisks
VOL1
VOL2
VOL3
VOL4
VOL5
VOL6

(THE FOLLOWING QUERIES ARE ISSUED FROM THE ASM INSTANCE.)

[oracle@danaly ~]$ export ORACLE_SID=+ASM
[oracle@danaly ~]$ sqlplus "/ as sysdba"

SQL*Plus: Release 10.2.0.1.0 - Production on Sun Sep 3 00:28:09 2006
Copyright (c) 1982, 2005, Oracle.  All rights reserved.

Connected to an idle instance.

SQL> startup
ASM instance started

Total System Global Area   83886080 bytes
Fixed Size                  1217836 bytes
Variable Size              57502420 bytes
ASM Cache                  25165824 bytes
ASM diskgroups mounted
SQL> select group_number,disk_number,mode_status from v$asm_disk;

GROUP_NUMBER DISK_NUMBER MODE_STATUS
------------ ----------- -----------
           0           4 ONLINE
           0           5 ONLINE
           1           0 ONLINE
           1           1 ONLINE
           1           2 ONLINE
           1           3 ONLINE

6 rows selected.

SQL> select group_number,disk_number,mode_status,name from v$asm_disk;

GROUP_NUMBER DISK_NUMBER MODE_STATUS NAME
------------ ----------- ----------- ------------------------------
           0           4 ONLINE
           0           5 ONLINE
           1           0 ONLINE      VOL1
           1           1 ONLINE      VOL2
           1           2 ONLINE      VOL3
           1           3 ONLINE      VOL4

6 rows selected.

SQL> create diskgroup orag2 external redundancy disk 'ORCL:VOL5';

Diskgroup created.

SQL> select group_number,disk_number,mode_status,name from v$asm_disk;

GROUP_NUMBER DISK_NUMBER MODE_STATUS NAME
------------ ----------- ----------- ------------------------------
           0           5 ONLINE
           1           0 ONLINE      VOL1
           1           1 ONLINE      VOL2
           1           2 ONLINE      VOL3
           1           3 ONLINE      VOL4
           2           0 ONLINE      VOL5

6 rows selected.
(THE FOLLOWING QUERIES ARE ISSUED FROM THE DATABASE INSTANCE.)

[oracle@danaly ~]$ export ORACLE_SID=danaly
[oracle@danaly ~]$ sqlplus "/ as sysdba"

SQL*Plus: Release 10.2.0.1.0 - Production on Sun Sep 3 00:47:04 2006
Copyright (c) 1982, 2005, Oracle.  All rights reserved.

Connected to an idle instance.

SQL> startup
ORACLE instance started.

Total System Global Area  943718400 bytes
Fixed Size                  1222744 bytes
Variable Size             281020328 bytes
Database Buffers          654311424 bytes
Redo Buffers                7163904 bytes
Database mounted.
Database opened.

SQL> select name from v$datafile;

NAME
--------------------------------------------------------------------------------
+ORADG/danaly/datafile/system.264.600016955
+ORADG/danaly/datafile/undotbs1.265.600016969
+ORADG/danaly/datafile/sysaux.266.600016977
+ORADG/danaly/datafile/users.268.600016987

SQL> create tablespace eygle datafile '+ORAG2';

Tablespace created.

SQL> select name from v$datafile;

NAME
--------------------------------------------------------------------------------
+ORADG/danaly/datafile/system.264.600016955
+ORADG/danaly/datafile/undotbs1.265.600016969
+ORADG/danaly/datafile/sysaux.266.600016977
+ORADG/danaly/datafile/users.268.600016987
+ORAG2/danaly/datafile/eygle.256.600137647

[oracle@danaly log]$ export ORACLE_SID=+ASM
[oracle@danaly log]$ sqlplus "/ as sysdba"

SQL*Plus: Release 10.2.0.1.0 - Production on Sun Sep 3 01:36:37 2006
Copyright (c) 1982, 2005, Oracle.  All rights reserved.

Connected to:
Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
With the Partitioning, Oracle Label Security, OLAP and Data Mining Scoring Engine options

SQL> alter diskgroup orag2 add disk 'ORCL:VOL6';

Diskgroup altere
d.

============
Note 7: OMF
============

Using Oracle-managed files simplifies t
he administration of an Oracle database. Oracle-managed files eliminate the need
for you, the DBA, to directly manage the operating system files comprising an O
racle database. You specify operations in terms of database objects rather than
filenames. Oracle internally uses standard file system interfaces to create and delete files as needed for the following database structures:

- Tablespaces
- Online redo log files
- Control files

The following initialization parameters (init.ora/spfile) allow the database server to use the Oracle Managed Files feature:

- DB_CREATE_FILE_DEST
  Defines the location of the default file system directory where Oracle creates datafiles or tempfiles when no file specification is given in the creation operation. Also used as the default file system directory for online redo log and control files if DB_CREATE_ONLINE_LOG_DEST_n is not specified.

- DB_CREATE_ONLINE_LOG_DEST_n
  Defines the location of the default file system directory for online redo log files and control file creation when no file specification is given in the creation operation. You can use this initialization parameter multiple times, where n specifies a multiplexed copy of the online redo log or control file. You can specify up to five multiplexed copies.

Example:
DB_CREATE_FILE_DEST = '/u01/oradata/payroll'
DB_CREATE_ONLINE_LOG_DEST_1 = '/u02/oradata/payroll'
DB_CREATE_ONLINE_LOG_DEST_2 = '/u03/oradata/payroll'
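With those parameters set, files can be created and dropped without ever naming them; a minimal sketch (the tablespace name sales_ts is made up for illustration, not from the original note):

SQL> alter system set db_create_file_dest = '/u01/oradata/payroll' scope=both;
SQL> create tablespace sales_ts;                -- Oracle creates and names a 100M autoextend datafile
SQL> alter tablespace sales_ts add datafile;    -- adds another Oracle-managed datafile
SQL> drop tablespace sales_ts including contents and datafiles;  -- OMF files are removed from disk as well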
34.2 RAC 10g:
=============

===========================================
Note 1: High Level Overview Oracle 10g RAC
===========================================

- RAC Architecture Overview

Let's begin with a brief overview of RAC architecture
. A cluster is a set of 2 or more machines (nodes) that share or coordinate reso
urces to perform the same task. A RAC database is 2 or more instances running on
a set of clustered nodes, with all instances accessing a shared set of database
files. Depending on the O/S platform, a RAC database may be deployed on a clust
er that uses vendor clusterware plus Oracle's own clusterware (Cluster Ready Ser
vices), or on a cluster that solely uses Oracle's own clusterware. Thus, every R
AC sits on a cluster that is running Cluster Ready Services. srvctl is the prima
ry tool DBAs use to configure CRS for their RAC database and processes. - Cluste
r Ready Services and the OCR Cluster Ready Services, or CRS, is a new feature fo
r 10g RAC. Essentially, it is Oracle's own clusterware. On most platforms, Oracl
e supports vendor clusterware; in these cases, CRS interoperates with the vendor
clusterware, providing high availability support and service and workload manag
ement. On Linux and Windows clusters, CRS serves as the sole clusterware. In all
cases, CRS provides a standard cluster interface that is consistent across all
platforms. CRS consists of four processes (crsd, ocssd, evmd, and evmlogger) and
two disks: the Oracle Cluster Registry (OCR), and the voting disk. CRS manages
the following resources:

. The ASM instances on each node
. Databases
. The instances on each node
. Oracle Services on each node
. The cluster nodes themselves, including the following processes, or "nodeapps":
  . VIP
  . GSD
  . The listener
  . The ONS daemon
CRS stores information about these resources in the OCR. If the information in t
he OCR for one of these resources becomes damaged or inconsistent, then CRS is n
o longer able to manage that resource. Fortunately, the OCR automatically backs
itself up regularly and frequently.

10g RAC (10.2) uses, or depends on:
- Oracle Clusterware (10.2), formerly referred to as CRS "Cluster Ready Services" (10.1).
- Oracle's optional Cluster File System OCFS, or use ASM and RAW.
- Oracle Database extensions

RAC is "scale out" technology: just add com
modity nodes to the system. The key component is "cache fusion". Data are transf
erred from one node to another via very fast interconnects. Essential to 10g RAC
is a "Shared Cache" technology. Automatic Workload Repository (AWR) plays a rol
e also. The Fast Application Notification (FAN) mechanism that is part of RAC, p
ublishes events that describe the current service level being provided by each i
nstance, to AWR. The load balancing advisory information is then used to determi
ne the best instance to serve the new request.

- With RAC, ALL Instances of ALL nodes in a cluster access a SINGLE database.
- But every instance has its own UNDO tablespace, and REDO logs.

The Oracle Clusterware comprises several backgrou
nd processes that facilitate cluster operations. The Cluster Synchronization Ser
vice CSS, Event Management EVM, and Oracle Cluster components communicate with o
ther cluster components layers in the other instances within the same cluster da
tabase environment. Questions per implementation arise in the following points:
. Storage
. Computer Systems/Storage-Interconnect
. Database
. Application Server
. Public and Private networks
. Application Control & Display

On the Storage level, it can be said that 10g RAC supports:
- Automatic Storage Management (ASM)
- Oracle Cluster File System (OCFS)
- ??? Network File System (NFS) - limited (only theoretical actually)
- Disk raw partitions
- Third party cluster file systems

For application control and tools, it can be said that 10g RAC supports:
- OEM Grid Control      http://hostname:5500/em
  OEM Database Control  http://hostname:1158/em
- "srvctl" is a command line interface to manage the cluster configuration, for example, starting and stopping all nodes in one command.
- Cluster Verification Utility (cluvfy) can be used for an installation and sanity check.

Failure in Client connections:
Depending on the Net config
uration, type of connection, type of transaction etc.., Oracle Net services prov
ides a feature called "Transparent Application Failover", or TAF, which can fail over a client session to another backup connection (a tnsnames sketch follows after the HA/DR notes below).

About HA and DR:
- RAC is H
A , High Availability, that will keep things Up and Running in one site. - Data
Guard is DR, Disaster Recovery, and is able to mirror one site to another remote
site.
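A hedged sketch of what a TAF-enabled tnsnames.ora entry might look like (the service name and the rac1-vip/rac2-vip host names are made up for illustration, not from the original note):

RACDB =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST = rac1-vip)(PORT = 1521))
      (ADDRESS = (PROTOCOL = TCP)(HOST = rac2-vip)(PORT = 1521))
      (LOAD_BALANCE = yes)
    )
    (CONNECT_DATA =
      (SERVICE_NAME = racdb)
      (FAILOVER_MODE =
        (TYPE = SELECT)(METHOD = BASIC)(RETRIES = 20)(DELAY = 5)
      )
    )
  )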
===========================================================
Note 2: 10g RAC processes, services, daemons and start stop
===========================================================

CRS consists of four processes (crsd, ocssd, evmd, and evmlogger) and two disks: the Oracle Cluster Registry (OCR), and the voting disk. On most platforms, you may see the following processes:

oprocd  the Process Monitor Daemon
crsd    the CRS Daemon
ocssd   Oracle Cluster Synchronization Service Daemon
evmd    Event Manager Daemon

To start and stop CRS when the machine starts or shuts down, on unix there are rc scripts in place. You can also, as root, manually start, stop, enable or disable the services with:

/etc/init.d/init.crs start
/etc/init.d/init.crs stop
/etc/init.d/init.crs enable
/etc/init.d/init.crs disable

Or with:

# crsctl start crs
# crsctl stop crs
# crsctl enable crs
# crsctl disable crs
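To get a quick overview of the state of everything CRS manages, the crs_stat utility in the Clusterware bin directory can be used; a sketch of typical usage (the resource names shown are illustrative only, not from the original note):

# /u01/app/oracle/product/10.1.0/CRS10gHome/bin/crs_stat -t
Name           Type         Target    State     Host
------------------------------------------------------------
ora....B1.inst application  ONLINE    ONLINE    oc1
ora....B2.inst application  ONLINE    ONLINE    oc2
ora.RACDB.db   application  ONLINE    ONLINE    oc1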
==============================================
Note 3: Installation notes 10g RAC on Windows
==============================================

See the next note for installation on Linux.

3.1 Before you install:
-----------------------
Each node
in a cluster requires the following: > One private internet protocol (IP) addres
s for each node to serve as the private interconnect. The following must be true
for each private IP address: -It must be separate from the public network -It m
ust be accessible on the same network interface on each node -It must have a uni
que address on each node The private interconnect is used for inter-node communi
cation by both Oracle Clusterware and RAC. If the private address is available f
rom a network name server (DNS), then you can use that name. Otherwise, the priv
ate IP address must be available in each node's C:\WINNT\system32\drivers\etc\ho
sts file. > One public IP address for each node, to be used as the Virtual IP (V
IP) address for client connections and for connection failover. The name associa
ted with the VIP must be different from the default host name. This VIP must be
associated with the same interface name on every node that is part of your clust
er. In addition, the IP addresses that you use for all of the nodes that are par
t of a cluster must be from the same subnet. > One public fixed hostname address
for each node, typically assigned by the system administrator during operating
system installation. If you have a DNS, then register both the fixed IP and the
VIP address with DNS. If you do not have DNS, then you must make sure that the p
ublic IP and VIP addresses for all nodes are in each node's host file. For examp
le, with a two node cluster where each node has one public and one private inter
face, you might have the configuration shown in the following table for your net
work interfaces, where the hosts file is %SystemRoot%\system32\drivers\etc\hosts
:

Node  Interface Name  Type     IP Address     Registered In
rac1  rac1            Public   143.46.43.100  DNS (if available, else the hosts file)
rac1  rac1-vip        Virtual  143.46.43.104  DNS (if available, else the hosts file)
rac1  rac1-priv       Private  10.0.0.1       Hosts file
rac2  rac2            Public   143.46.43.101  DNS (if available, else the hosts file)
rac2  rac2-vip        Virtual  143.46.43.105  DNS (if available, else the hosts file)
rac2  rac2-priv       Private  10.0.0.2       Hosts file
The virtual IP addresses are assigned to the listener process. To enable VIP fai
lover, the configuration shown in the preceding table defines the public and VIP
addresses of both nodes on the same subnet, 143.46.43. When a node or interconn
ect fails, then the associated VIP is relocated to the surviving instance, enabl
ing fast notification of the failure to the clients connecting through that VIP.
If the application and client are configured with transparent application failo
ver options, then the client is reconnected to the surviving instance. To disabl
e Windows Media Sensing for TCP/IP, you must set the value of the DisableDHCPMed
iaSense parameter to 1 on each node. Disable Media Sensing by completing the fol
lowing steps on each node of your cluster: Use Registry Editor (Regedt32.exe) to
view the following key in the registry: HKEY_LOCAL_MACHINE\System\CurrentContro
lSet\Services\Tcpip\Parameters Add the following registry value: Value Name: Dis
ableDHCPMediaSense Data Type: REG_DWORD -Boolean Value: 1 - External shared disk
s for storing Oracle Clusterware and database files. The disk configuration opti
ons available to you are described in Chapter 3, "Storage Pre-Installation Tasks
". Review these options before you decide which storage option to use in your RA
C environment. However, note that when Database Configuration Assistant (DBCA) c
onfigures automatic disk backup, it uses a database recovery area which must be
shared. The database files and recovery files do not necessarily have to be loca
ted on the same type of storage. Determine the storage option for your system an
d configure the shared disk. Oracle recommends that you use Automatic Storage Ma
nagement (ASM) and Oracle Managed Files (OMF), or a cluster file system. If you
use ASM or a cluster file system, then you can also take advantage of OMF and ot
her Oracle Database 10g storage features. If you use RAC on Oracle Database 10g
Standard Edition, then you must use ASM.
If you use ASM, then Oracle recommends that you install ASM in a separate home f
rom the Oracle Clusterware home and the Oracle home. Oracle Database 10g Real Ap
plication Clusters installation is a two-phase installation. In phase one, use O
racle Universal Installer (OUI) to install Oracle Clusterware. In phase two, ins
tall the database software using OUI. When you install Oracle Clusterware or RAC
, OUI copies the Oracle software onto the node from which you are running it. If
your Oracle home is not on a cluster file system, then OUI propagates the softw
are onto the other nodes that you have selected to be part of your OUI installat
ion session.

- Shared Storage for Database Recovery Area
When you configure a database recovery area in a RAC environment, the recovery area must be on shared storage. When Database Configuration Assistant (DBCA) configures automatic disk backup, it uses a database recovery area that must be shared.
If the database files are stored on a cluster file system, then the recovery are
a can also be shared through the cluster file system. If the database files are
stored on an Automatic Storage Management (ASM) disk group, then the recovery ar
ea can also be shared through ASM. If the database files are stored on raw devic
es, then you must use either a cluster file system or ASM for the recovery area.
Note: ASM disk groups are always valid recovery areas, as are cluster file syst
ems. Recovery area files do not have to be in the same location where datafiles
are stored. For instance, you can store datafiles on raw devices, but use ASM fo
r the recovery area. Data files are not placed on NTFS partitions, because they
cannot be shared. Data files can be placed on Oracle Cluster File System (OCFS),
on raw disks using ASM, or on raw disks. - Oracle Clusterware You must provide
OUI with the names of the nodes on which you want to install Oracle Clusterware.
The Oracle Clusterware home can be either shared by all nodes, or private to ea
ch node, depending on your responses when you run OUI. The home that you select
for Oracle Clusterware must be different from the RAC-enabled Oracle home.
Versions of cluster manager previous to Oracle Database 10g were sometimes refer
red to as "Cluster Manager". In Oracle Database 10g, this function is performed
by an Oracle Clusterware component known as Cluster Synchronization Services (CSS
). The OracleCSService, OracleCRService, and OracleEVMService replace the servic
e known previous to Oracle Database 10g as OracleCMService9i.

3.2 cluvfy or runcluvfy.bat:
----------------------------
Once you have installed Oracle Clusterware
, you can use CVU by entering cluvfy commands on the command line. To use CVU be
fore you install Oracle Clusterware, you must run the commands using a command f
ile available on the Oracle Clusterware installation media. Use the following sy
ntax to run a CVU command run from the installation media, where media is the lo
cation of the Oracle Clusterware installation media and options is a list of one
or more CVU command options: media\clusterware\cluvfy\runcluvfy.bat options The
following code example is of a CVU help command, run from a staged copy of the
Oracle Clusterware directory downloaded from OTN into a directory called stage o
n your C: drive: C:\stage\clusterware\cluvfy> runcluvfy.bat comp nodereach -n no
de1,node2 -verbose For a quick test, you can run the following CVU command that
you would normally use after you have completed the basic hardware and software
configuration:

prompt> media\clusterware\cluvfy\runcluvfy.bat stage -post hwos -n node_l
ist Use the location of your Oracle Clusterware installation media for the media
value and a list of the nodes, separated by commas, in your cluster for node_li
st. Expect to see many errors if you run this command before you or your system
administrator complete the cluster pre-installation steps. On Oracle Real Applic
ation Clusters systems, each member node of the cluster must have user equivalen
cy for the Administrative privileges account that installs the database. This me
ans that the administrative privileges user account and password must be the sam
e on all nodes. - Checking the Hardware and Operating System Setup with CVU You
can use two different CVU commands to check your hardware and operating system c
onfiguration. The first is a general check of the configuration, and the second
specifically checks for the components required to install Oracle Clusterware. T
he syntax of the more general CVU command is:
cluvfy stage -post hwos -n node_list [-verbose]

where node_list is the names of the node
s in your cluster, separated by commas. However, because you have not yet instal
led Oracle Clusterware, you must execute the CVU command from the installation m
edia using a command like the one following. In this example, the command checks
the hardware and operating system of a two-node cluster with nodes named node1
and node2, using a staged copy of the installation media in a directory called s
tage on the C: drive:

C:\stage\clusterware\cluvfy> runcluvfy.bat stage -post hwos -n nod
e1,node2 -verbose You can omit the -verbose keyword if you do not wish to see de
tailed results listed as CVU performs each individual test. The following exampl
e is a command, without the -verbose keyword, to check for the readiness of the
cluster for installing Oracle Clusterware: C:\stage\clusterware\cluvfy> runcluvf
y.bat comp sys -n node1,node2 -p crs - Checking the Network Setup Enter a comman
d using the following syntax to verify node connectivity between all of the node
s for which your cluster is configured: cluvfy comp nodecon -n node_list [-verbo
se] - Verifying Cluster Privileges Before running Oracle Universal Installer, fr
om the node where you intend to run the Installer, verify that you have administ
rative privileges on the other nodes. To do this, enter the following command fo
r each node that is a part of the cluster: net use \\node_name\C$ where node_nam
e is the node name. If your installation will access drives in addition to the C
: drive, repeat this command for every node in the cluster, substituting the dri
ve letter for each drive you plan to use. For the installation to be successful,
you must use the same user name and password on each node in a cluster or use a
domain user name. If you use a domain user name, then log on under a domain wit
h a user name and password to which you have explicitly granted local administra
tive privileges on all nodes. 3.3 Shared disk considerations: ------------------
------------Preliminary Shared Disk Preparation Complete the following steps to
prepare shared disks for storage:
-- Disabling Write Caching You must disable write caching on all disks that will
be used to share data between nodes in your cluster. To disable write caching,
perform these steps:
1. Click Start, then click Settings, then Control Panel, then Administrative Tools, then Computer Management, then Device Manager, and then Disk drives.
2. Expand the Disk drives and double-click the first drive listed.
3. Under the Disk Properties tab for the selected drive, uncheck the option that enables the write cache.
4. Double-click each of the other drives listed in the Disk drives hive and disable the write cache as described in the previous step.

Caution: Any d
isks that you use to store files, including database files, that will be shared
between nodes, must have write caching disabled. -- Enabling Automounting for Wi
ndows 2003 If you are using Windows 2003, then you must enable disk automounting
, depending on the Oracle products you are installing and on other conditions. Y
ou must enable automounting when using:
- Raw partitions for Oracle Real Application Clusters (RAC)
- Cluster file system for Oracle Real Application Clusters
- Oracle Clusterware
- Raw partitions for a single-node database installation
- Logical drives for Automatic Storage Management (ASM)

To enable automounting:
1. Enter the following commands at a command prompt:
   c:\> diskpart
   DISKPART> automount enable
   Automatic mounting of new volumes enabled.
2. Type exit to end the diskpart session.
3. Repeat steps 1 and 2 for each node in the cluster.

3.4 Reviewing Storage Options for Oracle Clusterware, Database, and Recovery Files:
------------------------------------------------------------------------------------
This section describes su
pported options for storing Oracle Clusterware files, Oracle Database software,
and database files. -- Overview of Oracle Clusterware Storage Options
Note that Oracle Clusterware files include the Oracle Cluster Registry (OCR) and
the Oracle Clusterware voting disk. There are two ways to store Oracle Clusterw
are files: 1. Oracle Cluster File System (OCFS): The cluster file system Oracle
provides for the Windows and Linux communities. If you intend to store Oracle Cl
usterware files on OCFS, then you must ensure that OCFS volume sizes are at leas
t 500 MB each. 2. Raw storage: Raw logical volumes or raw partitions are created
and managed by Microsoft Windows disk management tools or by tools provided by
third party vendors. Note that you must provide disk space for one mirrored Orac
le Cluster Registry (OCR) file, and two mirrored voting disk files. -- Overview
of Oracle Database and Recovery File Options There are three ways to store Oracl
e Database and recovery files on shared disks: 1. Automatic Storage Management (
database files only): Automatic Storage Management (ASM) is an integrated, high-
performance database file system and disk manager for Oracle files. Because ASM
requires an Oracle Database instance, it cannot contain Oracle software, but you
can use ASM to manage database and recovery files. 2. Oracle Cluster File Syste
m (OCFS): Note that if you intend to use OCFS for your database files, then you
should create partitions large enough for the database files when you create par
titions for Oracle Clusterware Note: If you want to have a shared Oracle home di
rectory for all nodes, then you must use OCFS. 3. Raw storage: Note that you can
not use raw storage to store Oracle database recovery files. The storage option
that you choose for recovery files can be the same as or different to the option
you choose for the database files.

Storage Option                Oracle Clusterware  Database  Recovery area
----------------------------  ------------------  --------  -------------
Automatic Storage Management  No                  Yes       Yes
Cluster file system (OCFS)    Yes                 Yes       Yes
Shared raw storage            Yes                 Yes       No
-- Checking for Available Shared Storage with CVU To check for all shared file s
ystems available across all nodes on the cluster, use the following CVU command:
cluvfy comp ssa -n node_list Remember to use the full path name and the runcluv
fy.bat command on the installation media and include the list of nodes in your c
luster, separated by commas, for the node_list. The following example is for a s
ystem with two nodes, node1 and node2, and the installation media on drive F: F:
\clusterware\cluvfy> runcluvfy.bat comp ssa -n node1,node2 If you want to check
the shared accessibility of a specific shared storage type to specific nodes in
your cluster, then use the following command syntax: cluvfy comp ssa -n node_lis
t -s storageID_list In the preceding syntax, the variable node_list is the list
of nodes you want to check, separated by commas, and the variable storageID_list
is the list of storage device IDs for the storage devices managed by the file s
ystem type that you want to check.
=====================================
Note 4: Installation on Redhat Linux
=====================================

4.2 Prepare your nodes:
-----------------------

4.2.1 Sketch of a 2-node Linux cluster

  192.168.2.0
  -------------------------------------------------  public network
        |                                  |
  ------------                       ------------
  |InstanceA |   private Ethernet    |InstanceB |
  |          |-----------------------|          |
  |          |   192.168.1.0         |          |
  ------------   interconnect        ------------
        |                                  |
        |      SCSI bus or Fibre Channel   |
        ------------------------------------
                         |
                   -----------
                   |Shared   |  - has single DB on: ASM or OCFS or RAW
                   |Disk     |  - has OCR and Voting disk on: OCFS or RAW
                   |Storage  |  - has Recovery area on: ASM or OCFS (not RAW)
                   -----------

  Fig 4.1

4.2.2 Storage Options

Storage area                  Oracle Clusterware  Database  Recovery
----------------------------  ------------------  --------  --------
Automatic Storage Management  No                  Yes       Yes
Cluster file system (OCFS)    Yes                 Yes       Yes
Shared raw storage            Yes                 Yes       No
In the following, we will do an example installation on 3 nodes. 4.2.3 Install R
edhat on all nodes with all options. 4.2.4 create oracle user and groups dba, oi
nstall on all nodes. Make sure they all have the same UID and GUI. 4.2.5 Make su
re the user oracle has an appropriate .profile or .bash_profile 4.2.6 Every node
needs a private network connection and a public network connection (at least tw
o networkcards). 4.2.7 Linux kernel parameters: Most out of the box kernel param
eters (of RHELS 3,4,5) are set correctly for Oracle except a few. You should hav
e the following minimal configuration:

net.ipv4.ip_local_port_range  1024 65000
kernel.sem                    250 32000 100 128
kernel.shmmni                 4096
kernel.shmall                 2097152
kernel.shmmax                 2147483648
fs.file-max                   65536

You can check the most important parameters using the following command:

# /sbin/sysctl -a | egrep 'sem|shm|file-max|ip_local'
net.ipv4.ip_local_port_range = 1024 65000
kernel.sem = 250 32000 100 128
kernel.shmmni = 4096
kernel.shmall = 2097152
kernel.shmmax = 2147483648
fs.file-max = 65536
If some value should be changed, you can change the "/etc/sysctl.conf" file and
run the "/sbin/sysctl -p" command to change the value immediately. Every time th
e system boots, the init program runs the /etc/rc.d/rc.sysinit script. This scri
pt contains a command to execute sysctl using /etc/sysctl.conf to dictate the va
lues passed to the kernel. Any values added to /etc/sysctl.conf will take effect
each time the system boots.

4.2.8 Make sure ssh and scp are working on all nodes without asking for a password. Use ssh-keygen to arrange that (a sketch follows after the host file example below).

4.2.9 Example "/etc/hosts" on the nodes:

Suppose you have the following 3 hosts, with their associated public and private names:

public  private
oc1     poc1
oc2     poc2
oc3     poc3

Then this could be a valid host file on the nodes:

127.0.0.1      localhost.localdomain localhost
192.168.2.99   rhes30
192.168.2.166  oltp
192.168.2.167  mw
192.168.2.101  oc1    #public1
192.168.1.101  poc1   #private1
192.168.2.176  voc1   #virtual1
192.168.2.102  oc2    #public2
192.168.1.102  poc2   #private2
192.168.2.177  voc2   #virtual2
192.168.2.103  oc3    #public3
192.168.1.103  poc3   #private3
192.168.2.178  voc3   #virtual3
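The ssh user equivalence mentioned in 4.2.8 can be arranged roughly as follows (a sketch; run as the oracle user, node name oc2 taken from the example above):

% ssh-keygen -t rsa                                                  # on each node, accept defaults, empty passphrase
% cat ~/.ssh/id_rsa.pub | ssh oc2 'cat >> ~/.ssh/authorized_keys'    # append each node's key to every other node
% ssh oc2 date                                                       # should now return the date without prompting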
4.2.10 Example disk devices On all nodes, the shared disk devices should be acce
ssible through the same devices names.
Raw Device Name  Physical Device Name  Purpose
/dev/raw/raw1    /dev/sda1             ASM Disk 1: +DATA1
/dev/raw/raw2    /dev/sdb1             ASM Disk 1: +DATA1
/dev/raw/raw3    /dev/sdc1             ASM Disk 2: +RECOV1
/dev/raw/raw4    /dev/sdd1             ASM Disk 2: +RECOV1
/dev/raw/raw5    /dev/sde1             OCR Disk (on RAW device)
/dev/raw/raw6    /dev/sdf1             Voting Disk (on RAW device)

4.3 CRS installation:
---------------------
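Before running the installer, the raw device bindings from the table in 4.2.10 have to exist; on RHEL 3/4 this is commonly made persistent via /etc/sysconfig/rawdevices (a sketch under that assumption, not part of the original note):

# /etc/sysconfig/rawdevices
/dev/raw/raw1 /dev/sda1
/dev/raw/raw2 /dev/sdb1
/dev/raw/raw3 /dev/sdc1
/dev/raw/raw4 /dev/sdd1
/dev/raw/raw5 /dev/sde1
/dev/raw/raw6 /dev/sdf1

# service rawdevices restart
# chown oracle:dba /dev/raw/raw[1-6]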
4.3.1 First install CRS in its own home directory

First install CRS in its own home directory, e.g. CRS10gHome, apart from the Oracle home dir. As the oracle user:

./runInstaller

Screen 1: Specify File Locations
  Source
    Path: /install/crs10g/Disk1/stage/products.xml
  Destination
    Name: CRS10gHome
    Path: /u01/app/oracle/product/10.1.0/CRS10gHome

Screen 2: Cluster Configuration
  Cluster Name: lec1

  Public Node Name   Private Node Name
  oc1                poc1
  oc2                poc2
  oc3                poc3
In the next screen, you specify which of your networks is to be used as the publ
ic interface (to connect to the public network) and which will be used for the p
rivate interconnect to support cache fusion and the cluster heartbeat.

Screen 3: Private Interconnect Enforcement

  Interface Name   Subnet         Interface type
  eth0             192.168.2.0    Public
  eth1             192.168.1.0    Private

In the next screen, you specify /dev/raw/raw5 as the raw disk for the Oracle Cluster Registry.

Screen 4: Oracle Cluster Registry
  Specify OCR Location: /dev/raw/raw5

In a similar fashion you specify the location of the Voting Disk.

Screen 5: Voting Disk
  Specify Voting Disk: /dev/raw/raw6
You now have to execute the /u01/app/oracle/orainventory/orainstRoot.sh script o
n all Cluster Nodes as the root user. After this, you can continue with the othe
r window, and see an "Install Summary" screen. Now you click "Install" and the i
nstallation begins. Apart from the node you work on, the software will also be c
opied to the other nodes as well. After the installation is complete, you are on
ce again prompted to run a script as root on each node of the Cluster. This is t
he script "/u01/app/oracle/product/10.1.0/CRS10gHome/root.sh". -- The olsnodes c
ommand. After finishing the CRS installation, you can verify that the installation completed successfully by running the following command on any node:

# cd /u01/app/oracle/product/10.1.0/CRS10gHome/bin
# olsnodes -n
oc1     1
oc2     2
oc3     3
4.4 Database software installation:
-----------------------------------
You can install the database software into the same directory on each node. With OCFS2, you might do one install in a common shared directory for all nodes. Because CRS is already running, the OUI detects that, and because it is cluster aware, it provides you with the options to install a clustered implementation. You start the installation by running ./runInstaller as the oracle user on one node. For the most part, it looks the same as a single-instance installation. After the file location screen, that is source and destination, you will see this screen:

Specify Hardware Cluster Installation Mode
  o Cluster installation mode
      Node name
      [] oc1
      [] oc2
      [] oc3
  o Local installation (non cluster)

Most of the tim
e, you will do a "software only" installation, and create the database later wit
h the DBCA. For the first node only, after some time, the Virtual IP Configurati
on Assistant, VIPCA, will start. Here you can configure the Virtual IP adresses
you will use for application failover and the Enterprise Manager Agent. Here you
will select the Virtual IP's for all nodes. VIPCA only needs to run once per Cl
uster.

4.5 Creating the RAC database with DBCA:
----------------------------------------
Launching the DBCA for installing a RAC database is much the same as launching DBCA for a single instance. If DBCA detects cluster software installed, it gives you the option to install a RAC database or a single instance. As the oracle user:

% dbca &

Welcome to the database configuration assistant
  o Oracle Real Application Cluster database
  o Oracle single instance database

After selecting RAC, the next screen gives you the option to select nodes:

Select the nodes on which you want to create the cluster database.
The local node oc1 will always be used whether or not it is selected.
  Node name
  [] oc1
  [] oc2
  [] oc3

In the next screens, you can choose the type of database (oltp, dw etc..), and all other items, just like a single-instance install. At a certain point, you can choose to use ASM diskgroups, flash-recovery area etc..
===========================================
Note 5. RAC tools and utilities.
===========================================

Example 1: removing and adding a failed node
--------------------------------------------
Suppose, using the above example, th
at instance rac3 on node oc3, fails. Suppose that you need to repair the node (e
.g. harddisk crash). -- Remove the instance: % srvctl remove instance -d rac -i
rac3 Remove instance rac3 for the database rac (y/n)? y
-- Remove the node from the cluster:

# cd /u01/app/oracle/product/10.1.0/CRS10gHome/bin
# ./olsnodes -n
oc1     1
oc2     2
oc3     3
# cd ../install
# ./rootdeletenode.sh oc3,3
# cd ../bin
# ./olsnodes -n
oc1     1
oc2     2
#

Suppose that you have repaired host oc3. We now want to add it back into the cluster. Host oc3 has the OS newly ins
talled, and its /etc/host file is just like it is on the other nodes. -- Add the
node at the clusterware layer: From oc1 or oc2, go to the $CRS_Home/oui/bin dir
ectory, and run # ./addNode.sh A graphical screen pops up, and you are able to a
dd oc3 to the cluster. Al CRS files are copied to the new node. To start the ser
vices on the new node, you are then prompted to run "rootaddnode.sh" on the acti
ve node and "root.sh" on the new node.

# ./rootaddnode.sh
# ssh oc3
# cd /u01/app/oracle/product/10.1.0/CRS10gHome
# ./root.sh

-- Install the Oracle software on the new node:
Example 2: showing all nodes from a node
-----------------------------------------
# lsnodes -v

# cd /u01/app/oracle/product/10.1.0/CRS10gHome/bin
# ./olsnodes -n
oc1     1
oc2     2
oc3     3
Example 3: using srvctl
-----------------------
The Server Control (SRVCTL) utility is installed on each node by default. You can use SRVCTL to start and stop the database and instances, manage configuration information, and to move or remove instances and services. Some SRVCTL operations store configuration information in the OCR. SRVCTL performs other operations, such as starting and stopping instances, by sending requests to the Oracle Clusterware process CRSD, which then starts or stops the Oracle Clusterware resources. srvctl must be run from the $ORACLE_HOME of the RAC you are administering.

The basic format of a srvctl command is:

srvctl <command> <target> [options]

where command is one of
enable|disable|start|stop|relocate|status|add|remove|modify|getenv|setenv|unsetenv|config
and the target, or object, can be a database, instance, service, ASM instance, or the nodeapps.
-- Example 1: To view help:
% srvctl -h
% srvctl <command> -h

-- Example 2: To see the SRVCTL version number, enter
% srvctl -V

-- Example 3. Bring up the MYSID1
instance of the MYSID database. % srvctl start instance -d MYSID -i MYSID1 -- Ex
ample 4. Stop the MYSID database: all its instances and all its services, on all
nodes. % srvctl stop database -d MYSID The following command mounts all of the
non-running instances, using the default connection information: % srvctl start
database -d orcl -o mount -- Example 5. Stop the nodeapps on the myserver node.
NB: Instances and services also stop.
% srvctl stop nodeapps -n myserver -- Example 6. Add the MYSID3 instance, which
runs on the myserver node, to the MYSID clustered database. % srvctl add instanc
e -d MYSID -i MYSID3 -n myserver -- Example 7. Add a new node, the mynewserver n
ode, to a cluster. % srvctl add nodeapps -n mynewserver -o $ORACLE_HOME -A 149.1
81.201.1/255.255.255.0/eth1 (The -A flag precedes an address specification.) --
Example 8. To change the VIP (virtual IP) on a RAC node, use the command % srvct
l modify nodeapps -A new_address -- Example 9. Status of components . Find out w
hether the nodeapps on mynewserver are up. % srvctl status nodeapps -n mynewserv
er VIP is running on node: mynewserver GSD is running on node: mynewserver Liste
ner is not running on node: mynewserver ONS daemon is running on node: mynewserv
er . Find out whether the ASM is running:
% srvctl status asm -n docrac1 ASM instance +ASM1 is running on node docrac1. .
Find status of cluster database % srvctl status database -d EOPP Instance EOPP1
is running on node dbq0201 Instance EOPP2 is running on node dbq0102 % srvctl co
nfig database -d EOPP dbq0201 EOPP1 /ora/product/10.2.0/db dbq0102 EOPP2 /ora/pr
oduct/10.2.0/db % srvctl config service -d EOPP opp.et.supp PREF: EOPP1 AVAIL: E
OPP2 opp.et.grid PREF: EOPP1 AVAIL: EOPP2
-- Example 10. The following command and output show the expected configuration
for a three node database called ORCL. % srvctl config database -d ORCL server01
ORCL1 /u01/app/oracle/product/10.1.0/db_1 server02 ORCL2 /u01/app/oracle/produc
t/10.1.0/db_1
server03 ORCL3 /u01/app/oracle/product/10.1.0/db_1 -- Example 11. Disable the AS
M instance on myserver for maintenance. % srvctl disable asm -n myserver -- Exam
ple 12. Debugging srvctl Debugging srvctl in 10g couldn't be easier. Simply set
the SRVM_TRACE environment variable. % export SRVM_TRACE=true -- Example 13. Que
stion Version 10G RAC Q: how to add a listener to the nodeapps using the srvctl
command ?? or even if it can be added using srvctl ?? A: just edit listener.ora
on all concerned nodes and add entries ( the usual way). srvctl will automatical
ly make use of it. For example % srvctl start database -d SAMPLE will start data
base SAMPLE and its associated listener LSNR_SAMPLE. -- Example 14. Adding servi
ces.

% srvctl add database -d ORCL -o /u01/app/oracle/product/10.1.0/db_1
% srvctl add instance -d ORCL -i ORCL1 -n server01
% srvctl add instance -d ORCL -i ORCL2 -n server02
% srvctl add instance -d ORCL -i ORCL3 -n server03
-- Example 15. Administering ASM Instances with SRVCTL in RAC You can use SRVCTL
to add, remove, enable, and disable an ASM instance as described in the followi
ng procedure: Use the following to add configuration information about an existi
ng ASM instance: % srvctl add asm -n node_name -i asm_instance_name -o oracle_ho
me Use the following to remove an ASM instance: % srvctl remove asm -n node_name
[-i asm_instance_name] -- Example 16. Stop multiple instances. The following co
mmand provides its own connection information to shut down the two instances orc
l3 and orcl4 using the IMMEDIATE option: % srvctl stop instance -d orcl -i "orcl
3,orcl4" -o immediate -c "sysback/oracle as sysoper"
-- Example 17. Showing policies. Clusterware can automatically start your RAC da
tabase when the system restarts. You can use Automatic or Manual "policies", to
control whether clusterware restarts RAC. To display the current policy: % srvct
l config database -d database_name -a To change to another policy: % srvctl modi
fy database -d database_name -y policy_name -- Example 18. % srvctl start servic
e -d DITOB -- More examples % srvctl remove instance -d rac -i rac3 % srvctl dis
able instance -d orcl -i orcl2 % srvctl enable instance -d orcl -i orcl2
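A concrete illustration of Example 17 (the database name orcl is an assumption here); the valid policy names in 10gR2 are AUTOMATIC and MANUAL:
% srvctl config database -d orcl -a
% srvctl modify database -d orcl -y MANUAL
% srvctl modify database -d orcl -y AUTOMATIC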
Example 4: crsctl ----------------Use CRSCTL to Control Your Clusterware Oracle
Clusterware enables servers in an Oracle database Real Application Cluster to co
ordinate simultaneous workload on the same database files. The crsctl command pr
ovides administrators many useful capabilities. For example, with crsctl, you ca
n check Clusterware health, disable/enable Oracle Clusterware startup on boot, fi
nd information on the voting disk and check the Clusterware version, and more. 1
. Do you want to check the health of the Clusterware? # crsctl check crs CSS app
ears healthy CRS appears healthy EVM appears healthy 2. Do you want to reboot a
node for maintenance without Clusterware coming up on boot? ## Disable clusterwa
re on machine2 bootup: # crsctl disable crs ## Stop the database then stop clust
erware processes: # srvctl stop instance -d db -i db2 # crsctl stop crs # reboot ## Enab
le clusterware on machine bootup: # crsctl enable crs # crsctl start crs
# srvctl start instance -d db -i db2 3. Do you wonder where your voting disk is? # crsct
l query css votedisk 0. 0 /dev/raw/raw2 4. Do you need to find out what clusterw
are version is running on a server? # crsctl query crs softwareversion CRS softw
are version on node [db2] is [10.2.0.2.0] 5. Adding and Removing Voting Disks Yo
u can dynamically add and remove voting disks after installing Oracle RAC. Do th
is using the following commands where path is the fully qualified path for the a
dditional voting disk. Run the following command as the root user to add a votin
g disk: # crsctl add css votedisk path Run the following command as the root use
r to remove a voting disk: # crsctl delete css votedisk path
Example 5: cluvfy ----------------The Cluster Verification Utility pre- or post-v
alidates an Oracle Clusterware environment or configuration. We found the CVU ut
ility to be very useful for checking a cluster server environment for RAC. The C
VU can check shared storage, interconnects, server systems and user permissions.
The Universal Installer runs the verification utility at the end of the cluster
ware install. The utility can also be run from the command line with parameters
and options to validate components. For example, a script that verifies a clust
er using cluvfy is named runcluvfy.sh and is located on the /clusterware/cluvfy
directory in the installation area. This script unpacks the utility, sets enviro
nment variables and executes the verification command. This command verifies tha
t the hosts atlanta1, atlanta2 and atlanta3 are ready for a clustered database i
nstall of release 2. ./runcluvfy.sh stage -pre dbinst -n atlanta1,atlanta2,atlan
ta3 -r 10gR2 -osdba dba -verbose The results of the command above check user and gro
up equivalence across machines, connectivity, interface settings, system require
ments like memory, disk space and kernel settings and versions, required Linux p
ackage existence and so on. Any problems are reported as errors,
all successful checks are marked as passed. Many other aspects of the cluster ca
n be verified with this utility for Release 2 or Release 1. Some more examples:
-- Checking for Available Shared Storage with CVU To check for all shared file s
ystems available across all nodes on the cluster, use the following CVU command:
% cluvfy comp ssa -n node_list Remember to use the full path name and the runcl
uvfy.bat command on the installation media and include the list of nodes in your
cluster, separated by commas, for the node_list. The following example is for a
system with two nodes, node1 and node2, and the installation media on drive F:
% runcluvfy.bat comp ssa -n node1,node2 If you want to check the shared accessib
ility of a specific shared storage type to specific nodes in your cluster, then
use the following command syntax: % cluvfy comp ssa -n node_list -s storageID_li
st In the preceding syntax, the variable node_list is the list of nodes you want
to check, separated by commas, and the variable storageID_list is the list of s
torage device IDs for the storage devices managed by the file system type that y
ou want to check.
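Two further hedged examples, assuming a two-node cluster with nodes node1 and node2:
-- check node connectivity between the cluster nodes
% cluvfy comp nodecon -n node1,node2 -verbose
-- check the cluster after the Clusterware installation
% cluvfy stage -post crsinst -n node1,node2 -verbose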
================================= Note 6: Example tnsnames.ora in RAC ==========
=======================

Example 1:
----------
tnsnames.ora File

TEST =
  (DESCRIPTION =
    (LOAD_BALANCE = ON)
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST = testlinux1)(PORT = 1521))
      (ADDRESS = (PROTOCOL = TCP)(HOST = testlinux2)(PORT = 1521))
    )
    (CONNECT_DATA = (SERVICE_NAME = TEST))
  )

TEST1 =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (LOAD_BALANCE = ON)
      (ADDRESS = (PROTOCOL = TCP)(HOST = testlinux1)(PORT = 1521))
    )
    (CONNECT_DATA = (SERVICE_NAME = TEST)(INSTANCE_NAME = TEST1))
  )

TEST2 =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (LOAD_BALANCE = ON)
      (ADDRESS = (PROTOCOL = TCP)(HOST = testlinux2)(PORT = 1521))
    )
    (CONNECT_DATA = (SERVICE_NAME = TEST)(INSTANCE_NAME = TEST2))
  )

EXTPROC_CONNECTION_DATA =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC))
    )
    (CONNECT_DATA = (SID = PLSExtProc)(PRESENTATION = RO))
  )

LISTENERS_TEST =
  (ADDRESS = (PROTOCOL = TCP)(HOST = testlinux1)(PORT = 1521))
  (ADDRESS = (PROTOCOL = TCP)(HOST = testlinux2)(PORT = 1521))
Example 2:
----------
Connect-Time Failover

From the client's end, when your connection fails at one node or service, you can then do a look up from your tnsnames.ora file and go on seeking a connection with the other available node. Take this example of our 4-node VMware ESX 3.x Oracle Linux Servers:

FOKERAC =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = nick01.wolga.com)(PORT = 1521))
    (ADDRESS = (PROTOCOL = TCP)(HOST = nick02.wolga.com)(PORT = 1521))
    (ADDRESS = (PROTOCOL = TCP)(HOST = brian01.wolga.com)(PORT = 1521))
    (ADDRESS = (PROTOCOL = TCP)(HOST = brian02.wolga.com)(PORT = 1521))
    (CONNECT_DATA =
      (SERVICE_NAME = fokerac)
    )
  )

Here the first address in the list is tried at the client's end. Should the connection to nick01.wolga.com fail, then the next address, nick02.wolga.com, will be tried. This phenomenon is called connection-time failover. You could very well have
a 32-node RAC cluster monitoring the galactic system at NASA and thus have all t
hose nodes typed in your tnsnames.ora file. Moreover, these entries do not neces
sarily have to be part of the RAC cluster. So it is possible that you are using
Streams, Log Shipping or Advanced Replication to maintain your HA (High Availabi
lity) model. These technologies facilitate
continued processing of the database by such a HA (High Availability) model in a
non-RAC environment. In a RAC environment we know (and expect) the data to be t
he same across all nodes since there is only one database. Example 3: ---------T
AF (Transparent Application Failover) Transparent Application Failover actually
refers to a failover that occurs when a node or instance is unavailable due to a
n outage or other reason that prohibits a connection to be established on that n
ode. This can be set to on with the following parameter FAILOVER. Setting it to
ON will activate the TAF. It is turned on by default unless you set it to OFF to
disable it. Now, when you turn it on you have two types of connections availabl
e by the means of the FAILOVER_MODE parameter. The type can be session, which is
default or select. When the type is SESSION, if the instance fails, then the us
er is automatically connected to the next available node without the user's manual i
ntervention. The SQL statements need to be carried out again on the next node. H
owever, when you set the TYPE to SELECT, then if you are connected and are in th
e middle of your query, then your query will be restarted after you have been fa
iled over to the next available node. Take this example of our tnsnames.ora file
, (go to the section beginning with CONNECT_DATA):

  (CONNECT_DATA =
    (SERVER = DEDICATED)
    (SERVICE_NAME = fokerac.wolga.com)
    (FAILOVER_MODE =
      (TYPE = SELECT)
      (METHOD = BASIC)
      (RETRIES = 180)
      (DELAY = 5)
    )
  )
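To verify from the server side that TAF is configured and has kicked in for connected sessions, you can query the failover columns of (g)v$session; after a failover a relocated session shows FAILED_OVER = 'YES':

SELECT inst_id, username, failover_type, failover_method, failed_over
FROM gv$session
WHERE username IS NOT NULL;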
============================================== Note 7: Notes about Backup and Re
store of RAC ============================================== 7.1 Backing up Votin
g Disk: --------------------------Run the following command to backup a voting d
isk. Perform this operation on every voting disk as needed where 'voting_disk_na
me' is the name of the active voting disk, and 'backup_file_name' is the name of
the file to which you want to backup the voting disk contents:
# dd if=voting_disk_name of=backup_file_name When you use the dd command for mak
ing backups of the voting disk, the backup can be performed while the Cluster Re
ady Services (CRS) process is active; you do not need to stop the crsd.bin proce
ss before taking a backup of the voting disk. -- Adding and Removing Voting Disk
s You can dynamically add and remove voting disks after installing Oracle RAC. D
o this using the following commands where path is the fully qualified path for t
he additional voting disk. Run the following command as the root user to add a v
oting disk: # crsctl add css votedisk path Run the following command as the root
user to remove a voting disk: # crsctl delete css votedisk path
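A concrete sketch of such a backup, assuming the voting disk is the /dev/raw/raw2 device shown in the crsctl query css votedisk output earlier and that /orabackup is a backup directory of your choice (both are assumptions):

# crsctl query css votedisk
# dd if=/dev/raw/raw2 of=/orabackup/votedisk_raw2.bak
(the recovery in section 7.2 simply reverses the if= and of= arguments)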
7.2 Recovering Voting Disk: --------------------------Run the following command
to recover a voting disk where 'backup_file_name' is the name of the voting disk
backupfile, and 'voting_disk_name' is the name of the active voting disk: # dd
if=backup_file_name of=voting_disk_name 7.3 Backup and Recovery OCR: -----------
----------------Oracle Clusterware automatically creates OCR backups every 4 hou
rs. At any one time, Oracle Clusterware always retains the latest 3 backup copie
s of the OCR that are 4 hours old, 1 day old, and 1 week old. You cannot customi
ze the backup frequencies or the number of files that Oracle Clusterware retains
. You can use any backup software to copy the automatically generated backup fil
es at least once daily to a different device from where the primary OCR file res
ides. The default location for generating backups on Red Hat Linux systems is "C
RS_home/cdata/cluster_name" where cluster_name is the name of your cluster and C
RS_home is the home directory of your Oracle Clusterware installation. -- Viewin
g Available OCR Backups To find the most recent backup of the OCR, on any node i
n the cluster, use the following command:
# ocrconfig -showbackup -- Backing Up the OCR Because of the importance of OCR i
nformation, Oracle recommends that you use the ocrconfig tool to make copies of
the automatically created backup files at least once a day. In addition to using
the automatically created OCR backup files, you should also export the OCR cont
ents to a file before and after making significant configuration changes, such a
s adding or deleting nodes from your environment, modifying Oracle Clusterware r
esources, or creating a database. Exporting the OCR contents to a file lets you
restore the OCR if your configuration changes cause errors. For example, if you
have unresolvable configuration problems, or if you are unable to restart your c
luster database after such changes, then you can restore your configuration by i
mporting the saved OCR content from the valid configuration. To export the conte
nts of the OCR to a file, use the following command, where backup_file_name is t
he name of the OCR backup file you want to create: # ocrconfig -export backup_fi
le_name -- Recovering the OCR This section describes two methods for recovering
the OCR. The first method uses automatically generated OCR file copies and the s
econd method uses manually created OCR export files. In event of a failure, befo
re you attempt to restore the OCR, ensure that the OCR is unavailable. Run the f
ollowing command to check the status of the OCR: # ocrcheck If this command does
not display the message 'Device/File integrity check succeeded' for at least on
e copy of the OCR, then both the primary OCR and the OCR mirror have failed. You
must restore the OCR from a backup. -- Restoring the Oracle Cluster Registry fr
om Automatically Generated OCR Backups When restoring the OCR from automatically
generated backups, you first have to determine which backup file you will use f
or the recovery. To restore the OCR from an automatically generated backup on a
Red Hat Linux system: Identify the available OCR backups using the ocrconfig com
mand: # ocrconfig -showbackup Note:
You must be logged in as the root user to run the ocrconfig command. Review the
contents of the backup using the following ocrdump command, where file_name is t
he name of the OCR backup file: $ ocrdump -backupfile file_name As the root user
, stop Oracle Clusterware on all the nodes in your Oracle RAC cluster by executi
ng the following command: # crsctl stop crs Repeat this command on each node in
your Oracle RAC cluster. As the root user, restore the OCR by applying an OCR ba
ckup file that you identified in step 1 using the following command, where file_
name is the name of the OCR that you want to restore. Make sure that the OCR dev
ices that you specify in the OCR configuration exist, and that these OCR devices
are valid before running this command. # ocrconfig -restore file_name As the ro
ot user, restart Oracle Clusterware on all the nodes in your cluster by restarti
ng each node, or by running the following command: # crsctl start crs Repeat thi
s command on each node in your Oracle RAC cluster. Use the Cluster Verify Utilit
y (CVU) to verify the OCR integrity. Run the following command, where the -n all
argument retrieves a list of all the cluster nodes that are configured as part
of your cluster: $ cluvfy comp ocr -n all [-verbose] -- Recovering the OCR from
an OCR Export File Using the ocrconfig -export command enables you to restore th
e OCR using the -import option if your configuration changes cause errors. To re
store the previous configuration stored in the OCR from an OCR export file: Plac
e the OCR export file that you created previously with the ocrconfig -export com
mand in an accessible directory on disk. As the root user, stop Oracle Clusterwa
re on all the nodes in your Oracle RAC cluster by executing the following comman
d: # crsctl stop crs
Repeat this command on each node in your Oracle RAC cluster. As the root user, r
estore the OCR data by importing the contents of the OCR export file using the f
ollowing command, where file_name is the name of the OCR export file: # ocrconfi
g -import file_name As the root user, restart Oracle Clusterware on all the node
s in your cluster by restarting each node, or by running the following command:
# crsctl start crs Repeat this command on each node in your Oracle RAC cluster.
Use the CVU to verify the OCR integrity. Run the following command, where the -n
all argument retrieves a list of all the cluster nodes that are configured as p
art of your cluster: $ cluvfy comp ocr -n all [-verbose]
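A minimal end-to-end sketch of the OCR procedures above, with hypothetical file names and cluster name (the flags are the ones documented above):

-- daily logical export, as root
# ocrconfig -showbackup
# ocrconfig -export /orabackup/ocr_export.dmp

-- recovery, as root, after ocrcheck reports a failed OCR
# crsctl stop crs                                         (on every node)
# ocrconfig -restore $CRS_HOME/cdata/crs/backup00.ocr     -- from an automatic backup
     or
# ocrconfig -import /orabackup/ocr_export.dmp             -- from a manual export
# crsctl start crs                                        (on every node)
$ cluvfy comp ocr -n all -verbose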
7.4 RMAN snapshot controlfile: -----------------------------RMAN> SHOW SNAPSHOT
CONTROLFILE NAME; RMAN> CONFIGURE SNAPSHOT CONTROLFILE NAME TO 'ORACLE_HOME/dbf/
scf/snap_prod.cf';
================================= Note 8: Noticable items in 10g RAC ===========
====================== 8.1 SPFILE: ----------If an initialization parameter appl
ies to all instances, use *.<parameter> notation, otherwise prefix the parameter
with the name of the instance. For example: *.OPEN_CURSORS=500 prod1.OPEN_CURSO
RS=1000 8.2 Start and stop of RAC: --------------------------
8.2.1 Stopping RAC: ------------------#### NOTE 1: #### > Stop Oracle Clusterwar
e or Cluster Ready Services Processes If you are modifying an Oracle Clusterware
or Oracle Cluster Ready Services (CRS) installation, then shut down the followi
ng Oracle Database 10g services. Note: You must perform these steps in the order
listed. Shut down any processes in the Oracle home on each node that might be a
ccessing a database; for example, shut down Oracle Enterprise Manager Database C
ontrol. Note: Before you shut down any processes that are monitored by Enterpris
e Manager Grid Control, set a blackout in Grid Control for the processes that yo
u intend to shut down. This is necessary so that the availability records for th
ese processes indicate that the shutdown was planned downtime, rather than an un
planned system outage. Shut down all Oracle RAC instances on all nodes. To shut
down all Oracle RAC instances for a database, enter the following command, where
db_name is the name of the database: $ oracle_home/bin/srvctl stop database -d
db_name Shut down all ASM instances on all nodes. To shut down an ASM instance,
enter the following command, where node is the name of the node where the ASM in
stance is running: $ oracle_home/bin/srvctl stop asm -n node Stop all node appli
cations on all nodes. To stop node applications running on a node, enter the fol
lowing command, where node is the name of the node where the applications are ru
nning $ oracle_home/bin/srvctl stop nodeapps -n node Log in as the root user, an
d shut down the Oracle Clusterware or CRS process by entering the following comm
and on all nodes: # CRS_home/bin/crsctl stop crs #### END NOTE 1 #### #### NOTE
2: #### To stop process in an existing Oracle Real Application Clusters Database
, where you want to shut down
the entire database, complete the following steps. -- Shut Down Oracle Real Appl
ication Clusters Databases Shut down any existing Oracle Database instances on e
ach node, with normal or immediate priority. If Automatic Storage Management (AS
M) is running, then shut down all databases that use ASM, and then shut down the
ASM instance on each node of the cluster. Note: -- Stop All Oracle Processes St
op all listener and other processes running in the Oracle home directories where
you want to modify the database software. Note: If you shut down ASM instances,
then you must first shut down all database instances that use ASM, even if thes
e databases run from different Oracle homes. -- Stop Oracle Clusterware or Clust
er Ready Services Processes If you are modifying an Oracle Clusterware or Oracle
Cluster Ready Services (CRS) installation, then shut down the following Oracle
Database 10g services. Note: You must perform these steps in the order listed. S
hut down any processes in the Oracle home on each node that might be accessing a
database; for example, shut down Oracle Enterprise Manager Database Control. No
te: Before you shut down any processes that are monitored by Enterprise Manager
Grid Control, set a blackout in Grid Control for the processes that you intend t
o shut down. This is necessary so that the availability records for these proces
ses indicate that the shutdown was planned downtime, rather than an unplanned sy
stem outage. Shut down all Oracle RAC instances on all nodes. To shut down all O
racle RAC instances for a database, enter the following command, where db_name i
s the name of the database: $ oracle_home/bin/srvctl stop database -d db_name Sh
ut down all ASM instances on all nodes. To shut down an ASM instance, enter the
following command, where node is the name of the node where the ASM instance is
running: $ oracle_home/bin/srvctl stop asm -n node Stop all node applications on
all nodes. To stop node applications running on a node, enter the following com
mand, where node is the name of the node where the applications are running
$ oracle_home/bin/srvctl stop nodeapps -n node Log in as the root user, and shut
down the Oracle Clusterware or CRS process by entering the following command on
all nodes: # CRS_home/bin/crsctl stop crs #### END NOTE 2 #### Notes about Star
ting up: -----------------------
crsd  : Cluster Ready Services Daemon (CRSD)
ocssd : Oracle Cluster Synchronization Services Daemon (OCSSD), the CSS.
evmd  : Event
Manager Daemon (EVMD). evmlogger The CRSD manages the HA functionality by starti
ng, stopping, and failing over the application resources and maintaining the pro
files and current states in the Oracle Cluster Registry (OCR) whereas the OCSSD
manages the participating nodes in the cluster by using the voting disk. The OCS
SD also protects against the data corruption potentially caused by "split brain"
syndrome by forcing a machine to reboot.
>Linux: # cat /etc/inittab | grep crs h3:35:respawn:/etc/init.d/init.crsd run >
/dev/null 2>&1 </dev/null # cat /etc/inittab | grep evmd h1:35:respawn:/etc/init
.d/init.evmd run > /dev/null 2>&1 </dev/null # cat /etc/inittab | grep css h2:35
:respawn:/etc/init.d/init.cssd fatal > /dev/null 2>&1 </dev/null /etc/init.d> ls
-al *init* init.crs init.crsd init.cssd init.evmd # cat /etc/inittab .. .. h1:3
5:respawn:/etc/init.d/init.evmd run > /dev/null 2>&1 </dev/null h2:35:respawn:/e
tc/init.d/init.cssd fatal > /dev/null 2>&1 </dev/null h3:35:respawn:/etc/init.d/
init.crsd run > /dev/null 2>&1 </dev/null init.crsd -> calls crsd correct order
for stopping: Reverse order of startup. crsd should be shutdown
before cssd and evmd. evmd should be shutdown before cssd. init.crs stop: init.c
rsd init.evmd init.cssd init.crs start init.cssd autostart|manualstart ---------
---------------------------------links: http://dmx0201.nl.eu.abnamro.com:7900/wi
https://dmp0101.nl.eu.abnamro.com:1159/em -------------------------------------
------
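To tie together the stop/start order described in 8.2.1 above and the daemon notes, a minimal sketch for a hypothetical two-node cluster (the database name prod, the nodes node1/node2 and the use of ASM are assumptions):

$ srvctl stop database -d prod
$ srvctl stop asm -n node1
$ srvctl stop asm -n node2
$ srvctl stop nodeapps -n node1
$ srvctl stop nodeapps -n node2
# crsctl stop crs              -- as root, on each node

-- starting up is the reverse order:
# crsctl start crs             -- as root, on each node
$ srvctl start nodeapps -n node1
$ srvctl start nodeapps -n node2
$ srvctl start asm -n node1
$ srvctl start asm -n node2
$ srvctl start database -d prod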
============================ 35. ORACLE STREAMS AND CDC: =======================
===== 35.1 Data replication, Heterogeneous Services, Gateway. Streams: =========
======================================================= To connect Oracle to a n
on Oracle database: There are a couple of answers a) http://www.oracle.com/gatew
ays/ is the most complete. distributed query, distributed transactions -- 100% f
unctionality. Lets you treat DB2 as if it were an Oracle instance for all intent
s and purposes. b) generic connectivity. If you have ODBC on the SERVER (oracle
server) and can use that to connect to DB2, you can use generic connectivity. Le
ss functional then a) http://asktom.oracle.com/pls/asktom/f?p=100:11:::::P11_QUE
STION_ID:4406709207206 c) lastly, you can get their type4 (thin) jdbc (all java)
drivers and load them into Oracle. Then, you can write a java stored procedure
in Oracle that accesses DB2 over jdbc.
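A hedged sketch of option b), generic connectivity through an ODBC gateway (the SID db2odbc, the host oraserver, the DSN db2_dsn, the user and password are all assumptions; a matching SID_DESC entry for the HS agent must also exist in listener.ora):

-- $ORACLE_HOME/hs/admin/initdb2odbc.ora
HS_FDS_CONNECT_INFO = db2_dsn
HS_FDS_TRACE_LEVEL = off

-- tnsnames.ora on the Oracle server
db2odbc =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = oraserver)(PORT = 1521))
    (CONNECT_DATA = (SID = db2odbc))
    (HS = OK)
  )

-- in the Oracle database
CREATE DATABASE LINK db2link CONNECT TO "db2user" IDENTIFIED BY "password" USING 'db2odbc';
SELECT * FROM some_db2_table@db2link;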
35.2 Information on CDC: ========================
Change Data Capture can capture and publish committed change data in either of t
he following modes: -- Synchronous Triggers on the source database allow change
data to be captured immediately, as each SQL statement that performs a data mani
pulation language (DML) operation (INSERT, UPDATE, or DELETE) is made. In this m
ode, change data is captured as part of the transaction modifying the source tab
le. Synchronous Change Data Capture is available with Oracle Standard Edition an
d Enterprise Edition. -- Asynchronous By taking advantage of the data sent to th
e redo log files, change data is captured after a SQL statement that performs a
DML operation is committed. In this mode, change data is not captured as part of
the transaction that is modifying the source table, and therefore has no effect
on that transaction. Asynchronous Change Data Capture is available with Oracle
Enterprise Edition only. There are three modes of asynchronous Change Data Captu
re: HotLog, Distributed HotLog, and AutoLog. Asynchronous Change Data Capture is
built on, and provides a relational interface to, Oracle Streams. See Oracle St
reams Concepts and Administration for information on Oracle Streams. - Change ta
bles With any CDC mode, change tables are involved. A given change table contain
s the change data resulting from DML operations performed on a given source tabl
e. A change table consists of two things: the change data itself, which is store
d in a database table, ; and the system metadata necessary to maintain the chang
e table, which includes control columns. The publisher specifies the source colu
mns that are to be included in the change table. Typically, for a change table t
o contain useful data, the publisher needs to include the primary key column in
the change table along with any other columns of interest to subscribers. For ex
ample, suppose subscribers are interested in changes that occur to the UNIT_COST
and the UNIT_PRICE columns in the sh.costs table. If the publisher does not inc
lude the PROD_ID column in the change table, subscribers will know only that the
unit cost and unit price of some products have changed, but will be unable to d
etermine for which products these changes have occurred. There are optional and
required control columns. The required control columns are always included in a
change table; the optional ones are included if specified by the publisher when
creating
the change table. Control columns are managed by Change Data Capture.
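As an illustration of what a subscriber eventually sees (the view name costs_sub_view is an assumption; the control column names operation$, cscn$ and commit_timestamp$ are the ones used in the demos later in this section), a subscriber view exposes the selected source columns plus the control columns:

SELECT operation$, cscn$, commit_timestamp$, prod_id, unit_cost, unit_price
FROM costs_sub_view
ORDER BY cscn$;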
- Interface Change Data Capture includes the DBMS_CDC_PUBLISH and DBMS_CDC_SUBSC
RIBE packages, which provide easy-to-use publish and subscribe interfaces. - Pub
lish and Subscribe Model Most Change Data Capture systems have one person who ca
ptures and publishes change data; this person is the publisher. There can be mul
tiple applications or individuals that access the change data; these application
s and individuals are the subscribers. Change Data Capture provides PL/SQL packa
ges to accomplish the publish and subscribe tasks. -- TASKS: => These are the ma
in tasks performed by the publisher: . Determines the source databases and table
s from which the subscribers are interested in viewing change data, and the mode
(synchronous or one of the asynchronous modes) in which to capture the change d
ata. . Uses the Oracle-supplied package, DBMS_CDC_PUBLISH, to set up the system
to capture change data from the source tables of interest. . Allows subscribers
to have controlled access to the change data in the change tables by using the S
QL GRANT and REVOKE statements to grant and revoke the SELECT privilege on chang
e tables for users and roles. (Keep in mind, however, that subscribers use views
, not change tables directly, to access change data.) => These are the main task
s performed by the subscriber: The subscribers are consumers of the published ch
ange data. A subscriber performs the following tasks: > Uses the Oracle supplied
package, DBMS_CDC_SUBSCRIBE, to: . Create subscriptions A subscription controls
access to the change data from one or more source tables of interest within a s
ingle change set. A subscription contains one or more subscriber views. A subscr
iber view is a view that specifies the change data from a specific publication i
n a subscription. The subscriber is restricted to seeing change data that the pu
blisher has published and has granted the subscriber access to use. See "Subscri
bing to Change Data" for more information on choosing a method for specifying a
subscriber view.
. Notify Change Data Capture when ready to receive a set of change data A subscr
iption window defines the time range of rows in a publication that the subscribe
r can currently see in subscriber views. The oldest row in the window is called
the low boundary; the newest row in the window is called the high boundary. Each
subscription has its own subscription window that applies to all of its subscri
ber views. . Notify Change Data Capture when finished with a set of change data
> Uses SELECT statements to retrieve change data from the subscriber views.
-- Other items:

MODE                    CHANGE SOURCE REPRESENTED   SOURCE DATABASE   ASSOCIATED CHANGE SET
--------------------------------------------------------------------------------------------
Synchronous             Predefined SYNC_SOURCE      Local             Predefined SYNC_SET and
                                                                      publisher-defined
Async HotLog            Predefined HOTLOG_SOURCE    Local             Publisher-defined
Async Distr HotLog      Publisher-defined           Remote            Publisher-defined. Change sets
                                                                      must all be on the same
                                                                      staging database
Async AutoLog online    Publisher-defined           Remote            Publisher-defined. There can
                                                                      only be one change set in an
                                                                      AutoLog online change source
Asynchronous AutoLog    Publisher-defined           Remote            Publisher-defined
archive
-- Views intended for Publisher or Subscriber: CHANGE_SOURCES Describes existing
change sources.
CHANGE_PROPAGATIONS Describes the Oracle Streams propagation associated with a g
iven Distributed HotLog change source on the source database. This view is popul
ated on the source database for 10.2 change sources or on the staging database f
or 9.2 or 10.1 change sources. CHANGE_PROPAGATION_SETS Describes the Oracle Stre
ams propagation associated with a given Distributed HotLog change set on the sta
ging database. This view is populated on the source database for 10.2 change sou
rces or on the staging database for 9.2 or 10.1 change sources.
CHANGE_SETS Describes existing change sets. CHANGE_TABLES Describes existing cha
nge tables. DBA_SOURCE_TABLES Describes all published source tables in the datab
ase. DBA_PUBLISHED_COLUMNS Describes all published columns of source tables in t
he database. DBA_SUBSCRIPTIONS Describes all subscriptions. DBA_SUBSCRIBED_TABLE
S Describes all source tables to which any subscriber has subscribed. DBA_SUBSCR
IBED_COLUMNS Describes the columns of source tables to which any subscriber has
subscribed.
ALL_SOURCE_TABLES Describes all published source tables accessible to the curren
t user. USER_SOURCE_TABLES Describes all published source tables owned by the cu
rrent user. ALL_PUBLISHED_COLUMNS Describes all published columns of source tabl
es accessible to the current user. USER_PUBLISHED_COLUMNS Describes all publishe
d columns of source tables owned by the current user. ALL_SUBSCRIPTIONS Describe
s all subscriptions accessible to the current user. USER_SUBSCRIPTIONS Describes
all the subscriptions owned by the current user. ALL_SUBSCRIBED_TABLES Describe
s the source tables to which any subscription accessible to the current user has
subscribed. USER_SUBSCRIBED_TABLES Describes the source tables to which the cur
rent user has subscribed. ALL_SUBSCRIBED_COLUMNS Describes the columns of source
tables to which any subscription accessible to the current user has subscribed.
USER_SUBSCRIBED_COLUMNS Describes the columns of source tables to which the cur
rent user has subscribed. -- Adjusting Initialization Parameter Values When Orac
le Streams Values Change Asynchronous Change Data Capture uses an Oracle Streams
configuration for each change set. This Streams configuration consists of a Str
eams capture process and a Streams apply process, with an accompanying queue and
queue table. Each Streams configuration uses additional processes, parallel exe
cution servers, and memory. For details about the Streams
architecture, see Oracle Streams Concepts and Administration. Oracle Streams cap
ture and apply processes each have a parallelism parameter that is used to impro
ve performance. When a publisher first creates a change set, its capture paralle
lism value and apply parallelism value are each 1. If desired, a publisher can i
ncrease one or both of these values using Streams interfaces. If Oracle Streams
capture parallelism and apply parallelism values are increased after change sets
are created, the DBA (or DBAs in the case of the Distributed HotLog mode) must
adjust initialization parameter values accordingly. How these adjustments are ma
de vary slightly, depending on the mode of Change Data Capture being employed, a
s described in the following sections. -- Adjustments for HotLog and AutoLog Cha
nge Data Capture For HotLog and AutoLog change data capture, adjustments to init
ialization parameters are made on the staging database. Examples below demonstra
te how to obtain the current capture parallelism and apply parallelism values fo
r change set CHICAGO_DAILY. By default, each parallelism value is 1, so the amou
nt by which a given parallelism value has been increased is the returned value m
inus 1. Example 1 Obtaining the Oracle Streams Capture Parallelism Value for a C
hange Set SELECT cp.value FROM DBA_CAPTURE_PARAMETERS cp, CHANGE_SETS cset WHERE
cset.SET_NAME = 'CHICAGO_DAILY' AND cset.CAPTURE_NAME = cp.CAPTURE_NAME AND cp.
PARAMETER = 'PARALLELISM'; Example 2 Obtaining the Oracle Streams Apply Parallel
ism Value for a Change Set SELECT ap.value FROM DBA_APPLY_PARAMETERS ap, CHANGE_
SETS cset WHERE cset.SET_NAME = 'CHICAGO_DAILY' AND cset.APPLY_NAME = ap.APPLY_N
AME AND ap.parameter = 'PARALLELISM'; The staging database DBA must adjust the s
taging database initialization parameters as described in the following list to
accommodate the parallel execution servers and other processes and memory requir
ed for Change Data Capture: PARALLEL_MAX_SERVERS For each change set for which O
racle Streams capture or apply parallelism values were increased, increase the v
alue of this parameter by the increased Streams parallelism value. For example,
if the statement in Example 1 returns a value of 2, and the statement
in Example 2 returns a value of 3, then the staging database DBA should increase
the value of the PARALLEL_MAX_SERVERS parameter by (2-1) + (3-1), or 3 for the
CHICAGO_DAILY change set. If the Streams capture or apply parallelism values hav
e increased for other change sets, increases for those change sets must also be
made. PROCESSES For each change set for which Oracle Streams capture or apply pa
rallelism values were changed, increase the value of this parameter by the sum o
f increased Streams parallelism values. See the previous list item, PARALLEL_MAX
_SERVERS, for an example. STREAMS_POOL_SIZE For each change set for which Oracle
Streams capture or apply parallelism values were changed, increase the value of
this parameter by (10MB * (the increased capture parallelism value)) + (1MB * i
ncreased apply parallelism value). For example, if the statement in Example 1 re
turns a value of 2, and the statement in Example 2 returns a value of 3, then the staging database DBA should increase the value of the STREAMS_POOL_SIZE parameter by (10 MB * (2-1)) + (1 MB * (3-1)), or 12 MB, for the CHICAGO_DAILY change set. If the Oracle Streams capture or apply parallelism values have increased for other change sets, increases for those change sets must also be made.
See Oracle Streams Concepts and Administration for more information on Streams c
apture parallelism and apply parallelism values. See Oracle Database Reference f
or more information about database initialization parameters.
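A small sketch of the corresponding adjustments, assuming (hypothetically) that the current values are parallel_max_servers=10, processes=150 and streams_pool_size=200M, and using the +3, +3 and +12MB increases computed above:

alter system set parallel_max_servers=13 scope=BOTH;
alter system set processes=153 scope=SPFILE;          -- static parameter, effective after restart
alter system set streams_pool_size=212M scope=BOTH;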
Note 3: Oracle 10.2 Sync CDC Example:
=====================================

CDC Mode                                              : Synchronous CDC
Source table                                          : hr.cdc_demo
Change table                                          : cdcadmin.cdc_demo_ct
Table with change data added by a handler function    : hr.salary_history

conn / as sysdba

-- *NIX only
define _editor=vi

-- validate database parameters
archive log list                      -- Archive Mode
show parameter aq_tm_processes        -- min 3
show parameter compatible             -- must be 10.1.0 or above
show parameter global_names           -- must be TRUE
show parameter job_queue_processes    -- min 2, recommended 4-6
show parameter open_links             -- not less than the default 4
show parameter shared_pool_size       -- must be 0 or at least 200MB
show parameter streams_pool_size      -- min. 480MB (10MB/capture, 1MB/apply)
show parameter undo_retention         -- min. 3600 (1 hr.), default 900
-- Examples of altering initialization parameters alter system set aq_tm_process
es=3 scope=BOTH; alter system set compatible='10.2.0.1.0' scope=SPFILE; alter sy
stem set global_names=TRUE scope=BOTH; alter system set job_queue_processes=6 sc
ope=BOTH; alter system set open_links=4 scope=SPFILE; alter system set streams_p
ool_size=200M scope=BOTH; -- very slow if making smaller alter system set undo_r
etention=3600 scope=BOTH; /* JOB_QUEUE_PROCESSES (current value) + 2 PARALLEL_MA
X_SERVERS (current value) + (5 * (the number of change sets planned)) PROCESSES
(current value) + (7 * (the number of change sets planned)) SESSIONS (current va
lue) + (2 * (the number of change sets planned)) */ -- Retest parameter after mo
dification shutdown immediate; startup mount; alter database archivelog; -- impo
rtant alter database force logging; -- one option among several alter database a
dd supplemental log data; alter database open; -- validate archivelogging archiv
e log list alter system switch logfile; archive log list -- validate force and s
upplemental logging SELECT supplemental_log_data_min, supplemental_log_data_pk,
supplemental_log_data_ui, supplemental_log_data_fk, supplemental_log_data_all, f
orce_logging FROM gv$database; SELECT force_logging FROM dba_tablespaces;
desc dba_hist_streams_apply_sum SELECT apply_name, reader_total_messages_dequeue
d, reader_lag, server_total_messages_applied FROM dba_hist_streams_apply_sum; --
examine CDC related data dictionary objects SELECT table_name FROM dba_tables W
HERE owner = 'SYS' AND table_name LIKE 'CDC%$'; desc cdc_system$ SELECT * FROM c
dc_system$; Setup As SYS - Create Streams Administrators conn / as sysdba SELECT
* FROM dba_streams_administrator; CREATE USER cdcadmin IDENTIFIED BY cdcadmin D
EFAULT TABLESPACE users TEMPORARY TABLESPACE temp QUOTA 0 ON system QUOTA 10M ON
sysaux QUOTA 20M ON users; -- system privs GRANT create session TO cdcadmin; GR
ANT create table TO cdcadmin; GRANT create sequence TO cdcadmin; GRANT create pr
ocedure TO cdcadmin; GRANT dba TO cdcadmin; -- role privs GRANT execute_catalog_
role TO cdcadmin; GRANT select_catalog_role TO cdcadmin; -- object privileges GR
ANT execute ON dbms_cdc_publish TO cdcadmin; GRANT execute ON dbms_cdc_subscribe
TO cdcadmin; -- do also to HR -- streams specific priv execute dbms_streams_aut
h.grant_admin_privilege('CDCADMIN'); SELECT account_status, created FROM dba_use
rs WHERE username = 'CDCADMIN'; SELECT * FROM dba_sys_privs WHERE grantee = 'CDC
ADMIN'; SELECT username FROM dba_users u, streams$_privileged_user s
WHERE u.user_id = s.user#; SELECT * FROM dba_streams_administrator; Prepare Sche
ma Tables for CDC Replication conn / as sysdba alter user hr account unlock iden
tified by hr; connect hr/hr desc employees SELECT * FROM employees; -- create CD
C demo table CREATE TABLE cdc_demo AS SELECT * FROM employees; ALTER TABLE cdc_d
emo ADD CONSTRAINT pk_cdc_demo PRIMARY KEY (employee_id) USING INDEX PCTFREE 0;
-- a second way to implement supplemental logging ALTER TABLE cdc_demo ADD SUPPL
EMENTAL LOG DATA (ALL) COLUMNS; -- table to track salary history changes origina
ting in cdc_demo -CREATE TABLE salary_history ( employee_id NUMBER(6), first_nam
e VARCHAR2(20), last_name VARCHAR2(25), old_salary NUMBER(8,2), new_salary NUMBE
R(8,2), pct_change NUMBER(4,2), action_date DATE); SELECT table_name FROM user_t
ables; Instantiate Source Table conn cdcadmin/cdcadmin desc dba_capture_prepared
_tables SELECT table_name, scn, supplemental_log_data_pk, supplemental_log_data_
ui, supplemental_log_data_fk, supplemental_log_data_all FROM dba_capture_prepare
d_tables; dbms_capture_adm.prepare_table_instantiation( table_name IN VARCHAR2,
supplemental_logging IN VARCHAR2 DEFAULT 'keys');
Note: This procedure performs the synchronization necessary for instantiating th
e table at another database. This procedure records the lowest SCN of the table
for instantiation. SCNs subsequent to the lowest SCN for an object can be used f
or instantiating the object.
exec dbms_capture_adm.prepare_table_instantiation('HR.CDC_DEMO'); SELECT table_n
ame, scn, supplemental_log_data_pk PK, supplemental_log_data_ui UI, supplemental
_log_data_fk FK, supplemental_log_data_all "ALL" FROM dba_capture_prepared_table
s; Create Synchronous Change Set conn cdcadmin/cdcadmin col object_name format a
30 SELECT object_name, object_type FROM user_objects ORDER BY 2,1; dbms_cdc_publ
ish.create_change_set( change_set_name IN VARCHAR2, description IN VARCHAR2 DEFA
ULT NULL, change_source_name IN VARCHAR2, stop_on_ddl IN CHAR DEFAULT 'N', begin
_date IN DATE DEFAULT NULL, end_date IN DATE DEFAULT NULL); -- this may take a m
inute or two exec dbms_cdc_publish.create_change_set('CDC_DEMO_SET', 'Synchronou
s Demo Set', 'SYNC_SOURCE'); SELECT object_name, object_type FROM user_objects O
RDER BY 2,1;

conn / as sysdba
desc cdc_change_sets$

set linesize 121
col set_name format a20
col capture_name format a20
col queue_name format a20
col queue_table_name format a20
SELECT set_name, capture_name, queue_name, queue_table_name FROM cdc_change_sets
$; SELECT set_name, change_source_name, capture_enabled, stop_on_ddl, publisher
FROM change_sets; Create Change Table
conn cdcadmin/cdcadmin dbms_cdc_publish.create_change_table( owner IN VARCHAR2,
change_table_name IN VARCHAR2, change_set_name IN VARCHAR2, source_schema IN VAR
CHAR2, source_table IN VARCHAR2, column_type_list IN VARCHAR2, capture_values IN
VARCHAR2, -- BOTH, NEW, OLD rs_id IN CHAR, row_id IN CHAR, user_id IN CHAR, tim
estamp IN CHAR, object_id IN CHAR, source_colmap IN CHAR, target_colmap IN CHAR,
options_string IN VARCHAR2); BEGIN dbms_cdc_publish.create_change_table('CDCADM
IN', 'CDC_DEMO_CT', 'CDC_DEMO_SET', 'HR', 'CDC_DEMO', 'EMPLOYEE_ID NUMBER(6), FI
RST_NAME VARCHAR2(20), LAST_NAME VARCHAR2(25), SALARY NUMBER', 'BOTH', 'Y', 'Y',
'Y', 'N', 'N', 'Y', 'Y', ' TABLESPACE USERS pctfree 0 pctused 99'); END; / GRAN
T select ON cdc_demo_ct TO hr; conn / as sysdba SELECT set_name, change_source_n
ame, queue_name, queue_table_name FROM cdc_change_sets$; desc cdc_change_tables$
SELECT change_set_name, source_schema_name, source_table_name FROM cdc_change_t
ables$; conn cdcadmin/cdcadmin SELECT object_name, object_type FROM user_objects
ORDER BY 2,1; col high_value format a15 SELECT table_name, composite, partition
_name, high_value FROM user_tab_partitions; Create Subscription conn hr/hr dbms_
cdc_subscribe.create_subscription( change_set_name IN VARCHAR2,
description IN VARCHAR2, subscription_name IN VARCHAR2); exec dbms_cdc_subscribe
.create_subscription('CDC_DEMO_SET', 'Sync Capture Demo Set', 'CDC_DEMO_SUB'); c
onn / as sysdba

set linesize 121
col description format a30
col subscription_name format a20
col username format a10
SELECT subscription_name, handle, set_name, username, earliest_scn, description
FROM cdc_subscribers$; Subscribe to conn hr/hr and Activate Subscription
dbms_cdc_subscribe.subscribe( subscription_name IN VARCHAR2, source_schema IN VA
RCHAR2, source_table IN VARCHAR2, column_list IN VARCHAR2, subscriber_view IN VA
RCHAR2); BEGIN dbms_cdc_subscribe.subscribe('CDC_DEMO_SUB', 'HR', 'CDC_DEMO', 'E
MPLOYEE_ID, FIRST_NAME, LAST_NAME, SALARY', 'CDC_DEMO_SUB_VIEW'); END; / desc us
er_subscriptions SELECT set_name, subscription_name, status FROM user_subscripti
ons; SELECT set_name, subscription_name, status FROM dba_subscriptions; dbms_cdc
_subscribe.activate_subscription( subscription_name IN VARCHAR2); exec dbms_cdc_
subscribe.activate_subscription('CDC_DEMO_SUB'); SELECT set_name, subscription_n
ame, status FROM user_subscriptions; Create Procedure To Populate Salary History
Table conn hr/hr /* Create a stored procedure to populate the new HR.SALARY_HIS
TORY table. The procedure extends the subscription window of the CDC_DEMO_SUB subscription to get the most recent set of source table changes. It uses the subscriber's CDC_DEMO_SUB_VIEW view to scan the changes and insert them into the SALARY_H
ISTORY table. It then purges the subscription window to indicate that it is fini
shed with that set of changes. */
CREATE OR REPLACE PROCEDURE update_salary_history IS CURSOR cur IS SELECT * FROM
( SELECT 'I' opt, cscn$, rsid$, employee_id, first_name, last_name, 0 old_salar
y, salary new_salary, commit_timestamp$ FROM cdc_demo_sub_view WHERE operation$
= 'I ' UNION ALL SELECT 'D' opt, cscn$, rsid$, employee_id, first_name, last_nam
e, salary old_salary, 0 new_salary, commit_timestamp$ FROM cdc_demo_sub_view WHE
RE operation$ = 'D ' UNION ALL SELECT 'U' opt , v1.cscn$, v1.rsid$, v1.employee_
id, v1.first_name, v1.last_name, v1.salary old_salary, v2.salary new_salary, v1
.commit_timestamp$ FROM cdc_demo_sub_view v1, cdc_demo_sub_view v2 WHERE v1.oper
ation$ = 'UU' and v2.operation$ = 'UN' AND v1.cscn$ = v2.cscn$ AND v1.rsid$ = v2
.rsid$ AND ABS(v1.salary - v2.salary) > 0) ORDER BY cscn$, rsid$; percent NUMBER
; BEGIN --Step 1 Get the change (extend the window). dbms_cdc_subscribe.extend_w
indow('CDC_DEMO_SUB'); FOR rec IN cur LOOP IF rec.opt = 'I' THEN INSERT INTO sal
ary_history (employee_id, first_name, last_name, old_salary, new_salary, pct_cha
nge, action_date) VALUES (rec.employee_id, rec.first_name, rec.last_name, 0, rec
.new_salary, NULL, rec.commit_timestamp$); END IF; IF rec.opt = 'D' THEN INSERT
INTO salary_history (employee_id, first_name, last_name, old_salary, new_salary,
pct_change, action_date) VALUES (rec.employee_id, rec.first_name, rec.last_name
, rec.old_salary, 0, NULL, rec.commit_timestamp$); END IF; IF rec.opt = 'U' THEN
percent := (rec.new_salary - rec.old_salary) / rec.old_salary * 100; INSERT INT
O salary_history (employee_id, first_name, last_name, old_salary, new_salary, pc
t_change, action_date) VALUES (rec.employee_id, rec.first_name, rec.last_name, r
ec.old_salary, rec.new_salary, percent, rec.commit_timestamp$);
END IF; END LOOP; COMMIT; --Step 3 Purge the window of consumed data dbms_cdc_su
bscribe.purge_window('CDC_DEMO_SUB'); END update_salary_history; / desc dba_hist
_streams_apply_sum SELECT apply_name, reader_total_messages_dequeued, reader_lag
, server_total_messages_applied FROM dba_hist_streams_apply_sum; DML On Source T
able conn hr/hr SELECT employee_id, first_name, last_name, salary FROM cdc_demo
ORDER BY 1 DESC; SELECT employee_id, first_name, last_name, salary FROM cdc_demo
_sub_view; SELECT * FROM salary_history; UPDATE cdc_demo SET salary = salary+1 W
HERE employee_id = 100; COMMIT; SELECT employee_id,first_name,last_name,salary F
ROM cdc_demo_sub_view; exec update_salary_history; SELECT employee_id,first_name
,last_name,salary FROM cdc_demo_sub_view; SELECT * FROM salary_history; -- Captu
re Cleanup conn hr/hr exec dbms_cdc_subscribe.drop_subscription('CDC_DEMO_SUB');
conn / as sysdba -- reverse prepare table instantiation exec dbms_capture_adm.a
bort_table_instantiation('HR.CDC_DEMO'); -- drop the change table exec dbms_cdc_
publish.drop_change_table('CDCADMIN', 'CDC_DEMO_CT', 'Y');
-- drop the change set exec dbms_cdc_publish.drop_change_set('CDC_DEMO_SET'); co
nn hr/hr drop table salary_history purge; drop table cdc_demo purge; drop proced
ure update_salary_history; conn / as sysdba drop user cdcadmin;
Note 4: Oracle 10.2 ASync Hotlog CDC Example: ==================================
===========

conn / as sysdba

-- *NIX only
define _editor=vi

-- validate database parameters
archive log list                      -- Archive Mode
show parameter aq_tm_processes        -- min 3
show parameter compatible             -- must be 10.1.0 or above
show parameter global_names           -- must be TRUE
show parameter job_queue_processes    -- min 2, recommended 4-6
show parameter open_links             -- not less than the default 4
show parameter shared_pool_size       -- must be 0 or at least 200MB
show parameter streams_pool_size      -- min. 480MB (10MB/capture, 1MB/apply)
show parameter undo_retention         -- min. 3600 (1 hr.), default 900
-- Examples of altering initialization parameters alter system set aq_tm_process
es=3 scope=BOTH; alter system set compatible='10.2.0.1.0' scope=SPFILE; alter sy
stem set global_names=TRUE scope=BOTH; alter system set job_queue_processes=6 sc
ope=BOTH; alter system set open_links=4 scope=SPFILE; alter system set streams_p
ool_size=200M scope=BOTH; -- very slow if making smaller alter system set undo_r
etention=3600 scope=BOTH; /* JOB_QUEUE_PROCESSES (current value) + 2 PARALLEL_MA
X_SERVERS (current value) + (5 * (the number of change sets planned)) PROCESSES
(current value) + (7 * (the number of change sets planned)) SESSIONS (current va
lue) + (2 * (the number of change sets planned)) */ -- Retest parameter after mo
dification shutdown immediate; startup mount;
alter database archivelog; -- important alter database force logging; -- one opt
ion among several alter database add supplemental log data; alter database open;
-- validate archivelogging archive log list alter system switch logfile; archiv
e log list -- validate force and supplemental logging SELECT supplemental_log_da
ta_min, supplemental_log_data_pk, supplemental_log_data_ui, supplemental_log_dat
a_fk, supplemental_log_data_all, force_logging FROM gv$database; SELECT force_lo
gging FROM dba_tablespaces; -- examine existing queues desc dba_queues set col c
ol col linesize 121 owner format a6 queue_table format a25 user_comment format a
31
SELECT owner, name, queue_table, queue_type, user_comment FROM dba_queues ORDER
BY 1,4,2; -- examine existing streams desc dba_hist_streams_capture SELECT captu
re_name, total_messages_captured, total_messages_enqueued FROM dba_hist_streams_
capture; desc dba_hist_streams_apply_sum SELECT apply_name, reader_total_message
s_dequeued, reader_lag, server_total_messages_applied FROM dba_hist_streams_appl
y_sum; -- examine CDC related data dictionary objects SELECT table_name FROM dba
_tables WHERE owner = 'SYS' AND table_name LIKE 'CDC%$'; desc cdc_system$
SELECT * FROM cdc_system$; Setup As SYS - Create Streams Administrators conn / a
s sysdba SELECT * FROM dba_streams_administrator; CREATE USER cdcadmin IDENTIFIE
D BY cdcadmin DEFAULT TABLESPACE users TEMPORARY TABLESPACE temp QUOTA 0 ON syst
em QUOTA 10M ON sysaux QUOTA 20M ON users; -- system privs GRANT create session
TO cdcadmin; GRANT create table TO cdcadmin; GRANT create sequence TO cdcadmin;
GRANT create procedure TO cdcadmin; GRANT dba TO cdcadmin; -- role privs GRANT e
xecute_catalog_role TO cdcadmin; GRANT select_catalog_role TO cdcadmin; -- objec
t privileges GRANT execute ON dbms_cdc_publish TO cdcadmin; GRANT execute ON dbm
s_cdc_subscribe TO cdcadmin; -- required for this demo but not by CDC GRANT exec
ute ON dbms_lock TO cdcadmin; -- streams specific priv execute dbms_streams_auth
.grant_admin_privilege('CDCADMIN'); SELECT account_status, created FROM dba_user
s WHERE username = 'CDCADMIN'; SELECT * FROM dba_sys_privs WHERE grantee = 'CDCA
DMIN'; SELECT username FROM dba_users u, streams$_privileged_user s WHERE u.user
_id = s.user#; SELECT * FROM dba_streams_administrator; Prepare Schema Tables fo
r CDC Replication conn / as sysdba alter user hr account unlock identified by hr
;
connect hr/hr desc employees SELECT * FROM employees; -- create CDC demo table C
REATE TABLE cdc_demo AS SELECT * FROM employees; -- a second way to implement su
pplemental logging ALTER TABLE cdc_demo ADD SUPPLEMENTAL LOG DATA (ALL) COLUMNS;
-- table to track salary history changes originating in cdc_demo CREATE TABLE s
alary_history ( employee_id NUMBER(6) NOT NULL, job_id VARCHAR2(10) NOT NULL, de
partment_id NUMBER(4), old_salary NUMBER(8,2), new_salary NUMBER(8,2), percent_c
hange NUMBER(4,2), salary_action_date DATE); SELECT table_name FROM user_tables;
Instantiate Source Table conn / as sysdba desc dba_capture_prepared_tables SELE
CT table_name, scn, supplemental_log_data_pk, supplemental_log_data_ui, suppleme
ntal_log_data_fk, supplemental_log_data_all FROM dba_capture_prepared_tables; db
ms_capture_adm.prepare_table_instantiation( table_name IN VARCHAR2, supplemental
_logging IN VARCHAR2 DEFAULT 'keys'); Note: This procedure performs the synchron
ization necessary for instantiating the table at another database. This procedur
e records the lowest SCN of the table for instantiation. SCNs subsequent to the
lowest SCN for an object can be used for instantiating the object.
exec dbms_capture_adm.prepare_table_instantiation(table_name => 'HR.CDC_DEMO');
SELECT table_name, scn, supplemental_log_data_pk, supplemental_log_data_ui, supp
lemental_log_data_fk, supplemental_log_data_all FROM dba_capture_prepared_tables
; Create Asynchronous HotLog Change Set conn cdcadmin/cdcadmin
col object_name format a30 SELECT object_name, object_type FROM user_objects ORD
ER BY 2,1; dbms_cdc_publish.create_change_set( change_set_name IN VARCHAR2, desc
ription IN VARCHAR2 DEFAULT NULL, change_source_name IN VARCHAR2, stop_on_ddl IN
CHAR DEFAULT 'N', begin_date IN DATE DEFAULT NULL, end_date IN DATE DEFAULT NUL
L); -- this may take awhile don't be impatient exec dbms_cdc_publish.create_chan
ge_set('CDC_DEMO_SET', 'CDC Demo 2 Change Set', 'HOTLOG_SOURCE', 'Y', NULL, NULL
); -- here is why SELECT object_name, object_type FROM user_objects ORDER BY 2,1
; SELECT table_name, tablespace_name, iot_type FROM user_tables; col high_value
format a15 SELECT table_name, composite, partition_name, high_value FROM user_ta
b_partitions;

conn / as sysdba
desc cdc_change_sets$

set linesize 121
col set_name format a20
col capture_name format a20
col queue_name format a20
col queue_table_name format a20
SELECT set_name, capture_name, queue_name, queue_table_name FROM cdc_change_sets
$; SELECT set_name, change_source_name, capture_enabled, stop_on_ddl, publisher
FROM change_sets; SELECT process_type, name FROM streams$_process_params; Create
Change Table conn cdcadmin/cdcadmin dbms_cdc_publish.create_change_table( owner
IN VARCHAR2, change_table_name IN VARCHAR2,
  change_set_name   IN VARCHAR2,
  source_schema     IN VARCHAR2,
  source_table      IN VARCHAR2,
  column_type_list  IN VARCHAR2,
  capture_values    IN VARCHAR2,   -- BOTH, NEW, OLD
  rs_id             IN CHAR,
  row_id            IN CHAR,
  user_id           IN CHAR,
  timestamp         IN CHAR,
  object_id         IN CHAR,
  source_colmap     IN CHAR,
  target_colmap     IN CHAR,
  options_string    IN VARCHAR2);
BEGIN dbms_cdc_publish.create_change_table('CDCADMIN', 'CDC_DEMO_CT', 'CDC_DEMO_
SET', 'HR', 'CDC_DEMO', 'EMPLOYEE_ID NUMBER(6), FIRST_NAME VARCHAR2(20), LAST_NA
ME VARCHAR2(25), EMAIL VARCHAR2(25), PHONE_NUMBER VARCHAR2(20), HIRE_DATE DATE,
JOB_ID VARCHAR2(10), SALARY NUMBER, COMMISSION_PCT NUMBER, MANAGER_ID NUMBER, DE
PARTMENT_ID NUMBER', 'BOTH', 'N', 'N', 'N', 'N', 'N', 'N', 'Y', NULL); END; / ex
ec dbms_cdc_publish.alter_change_table('CDCADMIN', 'CDC_DEMO_CT', rs_id=>'Y'); G
RANT select ON cdc_demo_ct TO hr; conn / as sysdba SELECT set_name, change_sourc
e_name, queue_name, queue_table_name FROM cdc_change_sets$; desc cdc_change_tabl
es$ SELECT change_set_name, source_schema_name, source_table_name FROM cdc_chang
e_tables$; Enable Capture conn / as sysdba SELECT set_name, change_source_name,
capture_enabled FROM cdc_change_sets$; conn cdcadmin/cdcadmin dbms_cdc_publish.a
lter_change_set( change_set_name IN VARCHAR2, description IN VARCHAR2 DEFAULT NU
LL, remove_description IN CHAR DEFAULT 'N', enable_capture IN CHAR DEFAULT NULL,
recover_after_error IN CHAR DEFAULT NULL, remove_ddl IN CHAR DEFAULT NULL, stop
_on_ddl IN CHAR DEFAULT NULL);
exec dbms_cdc_publish.alter_change_set(change_set_name=>'CDC_DEMO_SET', enable_c
apture=> 'Y'); conn / as sysdba SELECT set_name, change_source_name, capture_ena
bled FROM cdc_change_sets$; Create Subscription conn hr/hr dbms_cdc_subscribe.cr
eate_subscription( change_set_name IN VARCHAR2, description IN VARCHAR2, subscri
ption_name IN VARCHAR2); exec dbms_cdc_subscribe.create_subscription('CDC_DEMO_S
ET', 'cdc_demo subx', 'CDC_DEMO_SUB');

conn / as sysdba

set linesize 121
col description format a30
col subscription_name format a20
col username format a10
SELECT subscription_name, handle, set_name, username, earliest_scn, description
FROM cdc_subscribers$; Subscribe to conn hr/hr and Activate Subscription
dbms_cdc_subscribe.subscribe( subscription_name IN VARCHAR2, source_schema IN VA
RCHAR2, source_table IN VARCHAR2, column_list IN VARCHAR2, subscriber_view IN VA
RCHAR2); BEGIN dbms_cdc_subscribe.subscribe('CDC_DEMO_SUB', 'HR', 'CDC_DEMO', 'E
MPLOYEE_ID, FIRST_NAME, LAST_NAME, EMAIL, PHONE_NUMBER, HIRE_DATE, JOB_ID, SALAR
Y, COMMISSION_PCT, MANAGER_ID, DEPARTMENT_ID', 'CDC_DEMO_SUB_VIEW'); END; / desc
user_subscriptions SELECT set_name, subscription_name, status FROM user_subscri
ptions; dbms_cdc_subscribe.activate_subscription( subscription_name IN VARCHAR2)
;
exec dbms_cdc_subscribe.activate_subscription('CDC_DEMO_SUB'); SELECT set_name,
subscription_name, status FROM user_subscriptions; Create Procedure To Populate
Salary History Table conn hr/hr /* Create a stored procedure to populate the new
HR.SALARY_HISTORY table. The procedure extends the subscription window of the C
DC_DEMO_SUB subscription to get the most recent set of source table changes. It uses the subscriber's CDC_DEMO_SUB_VIEW view to scan the changes and insert them int
o the SALARY_HISTORY table. It then purges the subscription window to indicate t
hat it is finished with that set of changes. */ CREATE OR REPLACE PROCEDURE upda
te_salary_history IS CURSOR cur IS SELECT * FROM ( SELECT 'I' opt, cscn$, rsid$,
employee_id, job_id, department_id, 0 old_salary, salary new_salary, commit_tim
estamp$ FROM cdc_demo_sub_view WHERE operation$ = 'I ' UNION ALL SELECT 'D' opt,
cscn$, rsid$, employee_id, job_id, department_id, salary old_salary, 0 new_sala
ry, commit_timestamp$ FROM cdc_demo_sub_view WHERE operation$ = 'D ' UNION ALL S
ELECT 'U' opt , v1.cscn$, v1.rsid$, v1.employee_id, v1.job_id, v1.department_id,
v1.salary old_salary, v2.salary new_salaryi, v1.commit_timestamp$ FROM cdc_demo
_sub_view v1, cdc_demo_sub_view v2 WHERE v1.operation$ = 'UO' and v2.operation$
= 'UN' AND v1.cscn$ = v2.cscn$ AND v1.rsid$ = v2.rsid$ AND ABS(v1.salary - v2.sa
lary) > 0) ORDER BY cscn$, rsid$; percent NUMBER; BEGIN -- Get the next set of c
hanges to the HR.CDC_DEMO source table dbms_cdc_subscribe.extend_window('CDC_DEM
O_SUB'); -- Process each change FOR rec IN cur LOOP IF rec.opt = 'I' THEN INSERT
INTO salary_history VALUES (rec.employee_id, rec.job_id, rec.department_id, 0,
rec.new_salary, NULL, rec.commit_timestamp$); END IF;
IF rec.opt = 'D' THEN INSERT INTO salary_history VALUES (rec.employee_id, rec.jo
b_id, rec.department_id, rec.old_salary, 0, NULL, rec.commit_timestamp$); END IF
; IF rec.opt = 'U' THEN percent := (rec.new_salary - rec.old_salary) / rec.old_s
alary * 100; INSERT INTO salary_history VALUES (rec.employee_id, rec.job_id, rec
.department_id, rec.old_salary, rec.new_salary, percent, rec.commit_timestamp$);
END IF; END LOOP; -- Indicate subscriber is finished with this set of changes d
bms_cdc_subscribe.purge_window('CDC_DEMO_SUB'); END update_salary_history; / Cre
ate Procedure To Wait For Changes /* Create function CDCADMIN.WAIT_FOR_CHANGES t
o enable this demo to run predictably. The asynchronous nature of CDC HotLog mod
e means that there is a delay for source table changes to appear in the CDC chan
ge table and the subscriber view. By default this procedure waits up to 3 minute
s for the change table and 1 additional minute for the subscriber view. This can
be adjusted if it is insufficient. The caller must specify the name of the chan
ge table and the number of rows expected to be in the change table. The caller m
ay also optionally specify a different number of seconds to wait for changes to
appear in the change table. */ conn cdcadmin/cdcadmin CREATE OR REPLACE FUNCTION
wait_for_changes ( rowcount NUMBER, -- number of rows to wait for maxwait_secon
ds NUMBER := 300)   -- maximum time to wait, in seconds
RETURN VARCHAR2 AUTHID CURRENT_USER
AS
  numrows      NUMBER := 0;          -- number of rows in change table
  slept        NUMBER := 0;          -- total time slept
  sleep_time   NUMBER := 3;          -- number of seconds to sleep
  return_msg   VARCHAR2(100);        -- informational message
  keep_waiting BOOLEAN := TRUE;      -- whether to keep waiting
BEGIN
  WHILE keep_waiting LOOP
    SELECT COUNT(*) INTO numrows FROM CDC_DEMO_CT;
-- Got expected number of rows IF numrows >= rowcount THEN keep_waiting := FALSE
; return_msg := 'Change table contains at least ' || TO_CHAR(rowcount) || ' rows
'; EXIT;
-- Reached maximum number of seconds to wait ELSIF slept > maxwait_seconds THEN
return_msg := ' - Timed out while waiting for the change table to reach ' || TO_
CHAR(rowcount) || ' rows'; EXIT; END IF; dbms_lock.sleep(sleep_time); slept := s
lept+sleep_time; END LOOP; -- additional wait time for changes to become availab
le to subscriber view dbms_lock.sleep(60); RETURN return_msg; END wait_for_chang
es; / Preparation for DML -- In a separate terminal window cd $ORACLE_BASE/admin
/ORCL/bdump tail -f alertorcl.log -- tailing the alert log allows us to watch lo
g miner at work -- open a SQL*Plus session as SYS desc gv$streams_capture set li
nesize 121 col state format a20 SELECT capture_name, logminer_id, state, total_m
essages_captured FROM gv$streams_capture; -- open a SQL*Plus session as SYS desc
gv$streams_apply_reader set linesize 121 col state format a20 SELECT apply_name
, state, total_messages_dequeued FROM gv$streams_apply_reader; DML On Source Tab
le conn hr/hr UPDATE cdc_demo SET salary = salary + 500 WHERE job_id = 'SH_CLERK
'; UPDATE cdc_demo SET salary = salary + 1000 WHERE job_id = 'ST_CLERK'; UPDATE
cdc_demo SET salary = salary + 1500 WHERE job_id = 'PU_CLERK'; COMMIT; INSERT IN
TO cdc_demo (employee_id, first_name, last_name, email, phone_number, hire_date,
job_id, salary, commission_pct, manager_id, department_id) VALUES (207, 'Mary',
'Lee', 'MLEE', '310.234.4590', TO_DATE('10-JAN-2003'), 'SH_CLERK', 4000, NULL,
121, 50);
INSERT INTO cdc_demo (employee_id, first_name, last_name, email, phone_number, h
ire_date, job_id, salary, commission_pct, manager_id, department_id) VALUES (208
, 'Karen', 'Prince', 'KPRINCE', '345.444.6756', TO_DATE('10-NOV-2003'), 'SH_CLER
K', 3000, NULL, 111, 50); INSERT INTO cdc_demo (employee_id, first_name, last_na
me, email, phone_number, hire_date, job_id, salary, commission_pct, manager_id,
department_id) VALUES (209, 'Frank', 'Gate', 'FGATE', '451.445.5678', TO_DATE('1
3-NOV-2003'), 'IT_PROG', 8000, NULL, 101, 50); INSERT INTO cdc_demo (employee_id
, first_name, last_name, email, phone_number, hire_date, job_id, salary, commiss
ion_pct, manager_id, department_id) VALUES (210, 'Paul', 'Jeep', 'PJEEP', '607.3
45.1112', TO_DATE('28-MAY-2003'), 'IT_PROG', 8000, NULL, 101, 50); COMMIT; Valid
ate Capture -- Expecting 94 rows to appear in the change table CDCADMIN.CDC_DEMO
_CT. This first -- capture may take a few minutes. Later captures should be subs
tantially faster. conn cdcadmin/cdcadmin SELECT wait_for_changes(94, 180) messag
e FROM dual; Another Test conn hr/hr /* The wait_for_changes function having ind
icated the changes have been populated apply the changes to the salary_history t
able */ exec update_salary_history; SELECT employee_id, job_id, department_id, o
ld_salary, new_salary, percent_change FROM salary_history ORDER BY 1, 4, 5;

delete from cdc_demo where first_name = 'Mary'  and last_name = 'Lee';
delete from cdc_demo where first_name = 'Karen' and last_name = 'Prince';
delete from cdc_demo where first_name = 'Frank' and last_name = 'Gate';
delete from cdc_demo where first_name = 'Paul'  and last_name = 'Jeep';
COMMIT;
update cdc_demo set salary = salary + 5000 where job_id = 'AD_VP'; update cdc_de
mo set salary = salary - 1000 where job_id = 'ST_MAN';
update cdc_demo set salary = salary - 500 where job_id = 'FI_ACCOUNT'; COMMIT; -
- Expecting 122 rows to appear in the change table CDCADMIN.CDC_DEMO_CT. -- (94
rows from the first set of DMLs and 28 from the second set) conn cdcadmin/cdcadm
in SELECT wait_for_changes(122, 180) message from dual; conn hr/hr exec update_s
alary_history SELECT employee_id, job_id, department_id, old_salary, new_salary,
percent_change FROM salary_history order by 1, 4, 5; Capture Cleanup conn hr/hr
exec dbms_cdc_subscribe.drop_subscription('CDC_DEMO_SUB'); conn / as sysdba --
reverse prepare table instantiation exec dbms_capture_adm.abort_table_instantiat
ion('HR.CDC_DEMO'); -- drop the change table exec dbms_cdc_publish.drop_change_t
able('CDCADMIN', 'CDC_DEMO_CT', 'Y'); -- drop the change set exec dbms_cdc_publi
sh.drop_change_set('CDC_DEMO_SET'); conn cdcadmin/cdcadmin drop function wait_fo
r_changes; SELECT COUNT(*) FROM user_objects; conn hr/hr drop table salary_histo
ry purge; drop table cdc_demo purge; drop procedure update_salary_history; conn
/ as sysdba drop user cdcadmin;
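
After this cleanup, a quick sanity check that nothing was left behind can be useful. A minimal sketch,
re-using the dictionary objects queried earlier in this note (adjust names to your own setup):

conn / as sysdba
-- the demo change set and its capture/apply processes should no longer be listed
SELECT set_name, change_source_name FROM cdc_change_sets$;
SELECT capture_name, status FROM dba_capture;
SELECT apply_name, status FROM dba_apply;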
Note 5: Oracle 9.2 CDC Example: =============================== -- Change table
example code -- by Jon Emmons
-- www.lifeaftercoffee.com
--
-- NOTE: This code is provided for educational purposes only! Use at your own risk.
-- I have only used this code on Oracle 9.2 Enterprise Edition.
-- Due to the way variables are handled, this should be run one command at a time,
-- but must be run all in the same SQL*Plus session.
-- Connect as a privileged user conn system -- Create scott if he doesn't alread
y exist CREATE user scott IDENTIFIED BY tiger DEFAULT tablespace users TEMPORARY
tablespace temp quota unlimited ON users; -- Grant scott appropriate privileges
GRANT connect TO scott; GRANT execute_catalog_role TO scott; GRANT select_catal
og_role TO scott; GRANT CREATE TRIGGER TO scott; -- Connect up as scott conn sco
tt/tiger -- Create Table CREATE TABLE scott.classes ( class_id NUMBER, class_tit
le VARCHAR2(30), class_instructor VARCHAR2(30), class_term_code VARCHAR2(6), cla
ss_credits NUMBER, CONSTRAINT PK_classes PRIMARY KEY (class_id ) ); -- Load some
data INSERT INTO classes VALUES (100, 'Reading', 'Jon', '200510', 3); INSERT IN
TO classes VALUES (101, 'Writing', 'Stacey', '200510', 4); INSERT INTO classes V
ALUES (102, 'Arithmetic', 'Laurianne', '200530', 3); commit; -- Confirm current
data SELECT * FROM classes; -- Set up the change table exec dbms_logmnr_cdc_publ
ish.create_change_table ('scott', 'classes_ct', 'SYNC_SET', 'scott', 'classes',
'class_id NUMBER, class_title VARCHAR2(30), class_instructor VARCHAR2(30), class
_term_code VARCHAR2(6), class_credits NUMBER', 'BOTH', 'Y', 'N', 'N', 'Y', 'N',
'Y', 'N', NULL);
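
-- Before subscribing, it can help to confirm that the change table was registered.
-- A small optional check; the view names below are assumptions based on the 9.2 CDC
-- dictionary (CHANGE_SETS, CHANGE_TABLES), so adjust if they differ in your release:
SELECT * FROM change_sets;
SELECT * FROM change_tables;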
-- Subscribe to the change table variable subhandle NUMBER; execute dbms_logmnr_
cdc_subscribe.get_subscription_handle (CHANGE_SET => 'SYNC_SET', DESCRIPTION =>
'Changes to classes table', SUBSCRIPTION_HANDLE => :subhandle); execute dbms_log
mnr_cdc_subscribe.subscribe (subscription_handle => :subhandle, source_schema =>
'scott', source_table => 'classes', column_list => 'class_id, class_title, clas
s_instructor, class_term_code, class_credits'); execute dbms_logmnr_cdc_subscrib
e.activate_subscription (SUBSCRIPTION_HANDLE => :subhandle); -- Now modify the t
able in a few different ways UPDATE classes SET class_title='Math' WHERE class_i
d=102; INSERT INTO classes VALUES (103, 'Computers', 'Ken', '200510', 1); INSERT
INTO classes VALUES (104, 'Racketball', 'Matt', '200530', 2); UPDATE classes SE
T class_credits=3 WHERE class_id=103; DELETE FROM classes WHERE class_title='Rea
ding'; commit; -- Confirm current data SELECT * FROM classes; -- Now lets check
out the change table variable viewname varchar2(40) execute dbms_logmnr_cdc_subs
cribe.extend_window (subscription_handle => :subhandle); execute dbms_logmnr_cdc
_subscribe.prepare_subscriber_view (SUBSCRIPTION_HANDLE => :subhandle, SOURCE_SC
HEMA => 'scott', SOURCE_TABLE => 'classes', VIEW_NAME => :viewname); print viewn
ame -- This little trick will move the bind variable :viewname into the -- subst
itution variable named subscribed_view COLUMN myview new_value subscribed_view n
oprint SELECT :viewname myview FROM dual; -- Examine the actual change data. You
could also look at the table in a -- browser such as TOAD for easier viewing.
SELECT * FROM &subscribed_view; -- Close the subscriber view execute dbms_logmnr
_cdc_subscribe.drop_subscriber_view (SUBSCRIPTION_HANDLE => :subhandle, SOURCE_S
CHEMA => 'scott', SOURCE_TABLE => 'classes'); -- Purge the window execute dbms_l
ogmnr_cdc_subscribe.purge_window (subscription_handle => :subhandle); -- If done
altogether, end the subscription execute dbms_logmnr_cdc_subscribe.drop_subscri
ption (subscription_handle => :subhandle); -- drop the change table exec dbms_lo
gmnr_cdc_publish.drop_change_table('scott', 'classes_ct', 'N'); -- Delete the ta
ble DROP TABLE scott.classes;
Note 6:
=======
DBMS_CDC_PUBLISH: In previous releases, this package was named DBMS_LOGMNR_CDC_PUBLISH.
Beginning with release 10g, the LOGMNR string has been removed from the name, resulting
in the name DBMS_CDC_PUBLISH. Although both variants of the name are still supported,
the variant with the LOGMNR string has been deprecated and may not be supported in a
future release.
The DBMS_CDC_PUBLISH package is used by a publisher to set up an Oracle Change D
ata Capture system to capture and publish change data from one or more Oracle re
lational source tables. Change Data Capture captures and publishes only committe
d data. Oracle Change Data Capture identifies new data that has been added to, u
pdated in, or removed from relational tables, and publishes the change data in a
form that is usable by subscribers. Typically, a Change Data Capture system has
one publisher who captures and publishes changes for any number of Oracle relat
ional source tables. The publisher then provides subscribers (applications or in
dividuals) with access to the published data.
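
A quick way to see what a publisher has made available, and who consumes it, is the CDC data
dictionary. A hedged sketch against the 10g views (run as a suitably privileged user; exact
columns vary a little per release):

SELECT source_schema_name, source_table_name FROM dba_source_tables;
SELECT set_name, username, status FROM dba_subscriptions;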
Note 7: ======= Oracle Tips by Burleson Oracle 10g Create the change tables The
dbms_cdc_publish.create_change_table procedure is used by the publisher user on
the staging database to create change tables. The publisher user creates one or
more change tables for each source table to be published, specifies which column
s should be included, and specifies the combination of before and after images o
f the change data to capture. To have more control over the physical properties
and tablespace properties of the change tables, the publisher can set the option
s_string field of the dbms_cdc_publish.create_change_table procedure. The option
s_string field can contain any option available on the CREATE TABLE statement. T
he following script creates a change table on the staging database that captures
changes made to a source table in the source database. The example uses the sam
ple table pl.project_history. BEGIN DBMS_CDC_PUBLISH.CREATE_CHANGE_TABLE( owner
=> 'cdcproj', change_table_name => 'PROJ_HIST_CT', change_set_name => 'PROJECT_D
AILY', source_schema => 'PL', source_table => 'PROJ_HISTORY', column_type_list =
> 'EMPLOYEE_ID NUMBER(6),START_DATE DATE, END_DATE DATE, PROJ_ID VARCHAR2(10), D
EPARTMENT_ID NUMBER(4)', capture_values => 'both', rs_id => 'y', row_id => 'n',
user_id => 'n', timestamp => 'n', object_id => 'n', source_colmap => 'n', target
_colmap => 'y', options_string => NULL); END; / PL/SQL procedure successfully co
mpleted. This example statement creates a change table named proj_hist_ct, withi
n change set project_daily. The column_type_list parameter is used to identify t
he columns captured by the change table. Remember that the source_schema and sou
rce_table parameters identify the schema and source table that reside in the sou
rce database, not the staging database.
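
To illustrate the options_string remark above, here is a hedged variation of the same call that also
controls the physical placement of the change table. The change table name PROJ_HIST_CT2 and the
tablespace cdc_data are made-up names for the example:

BEGIN
  DBMS_CDC_PUBLISH.CREATE_CHANGE_TABLE(
    owner             => 'cdcproj',
    change_table_name => 'PROJ_HIST_CT2',
    change_set_name   => 'PROJECT_DAILY',
    source_schema     => 'PL',
    source_table      => 'PROJ_HISTORY',
    column_type_list  => 'EMPLOYEE_ID NUMBER(6), START_DATE DATE, END_DATE DATE, PROJ_ID VARCHAR2(10), DEPARTMENT_ID NUMBER(4)',
    capture_values    => 'both',
    rs_id             => 'y',
    row_id            => 'n',
    user_id           => 'n',
    timestamp         => 'n',
    object_id         => 'n',
    source_colmap     => 'n',
    target_colmap     => 'y',
    options_string    => 'TABLESPACE cdc_data PCTFREE 5');
END;
/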
Note 8: Example using streams (1) ================================= http://blogs
.ittoolbox.com/oracle/guide/archives/oracle-streams-configurationchange-data-cap
ture-13501 I have been playing with Oracle Streams again lately. My goal is to c
apture changes in 10g and send them to a 9i database. Below is the short list fo
r setting up Change Data Capture using Oracle Streams. These steps are mostly fr
om the docs with a few tweaks I have added. This entry only covers setting up th
e local capture and apply. I'll add the propagation to 9i later this week or nex
t weekend. First the set up: we will use the HR account's Employee table. We'll
capture all changes to the Employee table and insert them into an audit table. I
'm not necessarily saying this is the way you should audit your database but it
makes a nice example. I'll also add a monitoring piece to capture process. I wan
t to be able to see exactly what is being captured when it is being captured. Yo
u will need to have sysdba access to follow along with me. Your database must al
so be in archivelog mode. The changes are picked up from the redo log. So, away
we go! The first step is to create our streams administrator. I will follow the
guidelines from the oracle docs exactly for this: - Connect as sysdba: sqlplus /
as sysdba - Create the streams tablespace (change the name and/or location to s
uit): create tablespace streams_tbs datafile 'c:\temp\stream_tbs.dbf' size 25M r
euse autoextend on maxsize unlimited; - Create our streams administrator: create
user strmadmin identified by strmadmin default tablespace streams_tbs quota unl
imited on streams_tbs; I haven't quite figured out why, but we need to grant our
administrator DBA privs.
I think this is a bad thing. There is probably a work around where I could do so
me direct grants instead but I haven't had time to track those down. grant dba t
o strmadmin; We also want to grant streams admin privs to the user. BEGIN SYS.DB
MS_STREAMS_AUTH.GRANT_ADMIN_PRIVILEGE( grantee => 'strmadmin', grant_privileges
=> true); END; / -The next steps we'll run as the HR user. conn hr/hr - Grant al
l access to the employee table to the streams admin: grant all on hr.employee to
strmadmin; - We also need to create the employee_audit table. Note that I am ad
ding three columns in this table that do not exist in the employee table.

CREATE TABLE employee(
  employee_id    NUMBER(6),
  first_name     VARCHAR2(20),
  last_name      VARCHAR2(25),
  email          VARCHAR2(25),
  phone_number   VARCHAR2(20),
  hire_date      DATE,
  job_id         VARCHAR2(10),
  salary         NUMBER(8,2),
  commission_pct NUMBER(2,2),
  manager_id     NUMBER(6),
  department_id  NUMBER(4));

ALTER TABLE employee ADD CONSTRAINT pk_employee_id PRIMARY KEY (employee_id);

INSERT INTO hr.employee VALUES(206, 'Albert', 'Sel','avds@antapex.org',NULL, '07-JUN-94',
  'AC_ACCOUNT', 777, NULL, NULL, 110);
COMMIT;

CREATE TABLE employee_audit(
  employee_id    NUMBER(6),
  first_name     VARCHAR2(20),
  last_name      VARCHAR2(25),
  email          VARCHAR2(25),
  phone_number   VARCHAR2(20),
  hire_date      DATE,
  job_id         VARCHAR2(10),
  salary         NUMBER(8,2),
  commission_pct NUMBER(2,2),
  manager_id     NUMBER(6),
  department_id  NUMBER(4),
  upd_date       DATE,
  user_name      VARCHAR2(30),
  action         VARCHAR2(30));

ALTER TABLE employee_audit ADD CONSTRAINT pk_employee_audit_id PRIMARY KEY (employee_id);

- Grant all access to the audit table to the streams admin user:

grant
all on hr.employee_audit to strmadmin; - We connect as the streams admin user: c
onn strmadmin/strmadmin We can create a logging table. You would NOT want to do
this in a high-volume production system. I am doing this to illustrate user defi
ned monitoring and show how you can get inside the capture process. CREATE TABLE
streams_monitor ( date_and_time TIMESTAMP(6) DEFAULT systimestamp, txt_msg CLOB
); - Here we create the queue. Unlike AQ, where you have to create a separate t
able, this step creates the queue and the underlying ANYDATA table. BEGIN DBMS_S
TREAMS_ADM.SET_UP_QUEUE( queue_table => 'strmadmin.streams_queue_table', queue_n
ame => 'strmadmin.streams_queue'); END; / - This just defines that we want to ca
pture DML and not DDL. BEGIN DBMS_STREAMS_ADM.ADD_TABLE_RULES( table_name => 'hr
.employee', streams_type => 'capture', streams_name => 'capture_emp', queue_name
=> 'strmadmin.streams_queue', include_dml => true, include_ddl => false, inclus
ion_rule => true); END;
/ | Possible errors on that statement: | | ERROR at line 1: | ORA-32593: databas
e supplemental logging attributes in flux | ORA-06512: at "SYS.DBMS_STREAMS_ADM"
, line 372 | ORA-06512: at "SYS.DBMS_STREAMS_ADM", line 312 | ORA-06512: at line
2 | | Oracle Error :: ORA-32593 | database supplemental logging attributes in f
lux| | | Cause | there is another process actively modifying the database wide s
upplemental logging attributes. | | Action | Retry the DDL or the LogMiner dicti
onary build that raised this error. | | Restarting the database worked for me. -
Tell the capture process that we want to know who made the change: BEGIN DBMS_CA
PTURE_ADM.INCLUDE_EXTRA_ATTRIBUTE( capture_name => 'capture_emp', attribute_name
=> 'username', include => true); END; / - We also need to tell Oracle where to
start our capture. Change the source_database_name to match your database. DECLA
RE iscn NUMBER; -- Variable to hold instantiation SCN value BEGIN iscn := DBMS_F
LASHBACK.GET_SYSTEM_CHANGE_NUMBER(); DBMS_APPLY_ADM.SET_TABLE_INSTANTIATION_SCN(
source_object_name => 'hr.employee', source_database_name => 'test10g', instant
iation_scn => iscn); END; / Note: To get the latest SCN from a database: SQL> se
lect DBMS_FLASHBACK.GET_SYSTEM_CHANGE_NUMBER() from dual; DBMS_FLASHBACK.GET_SYS
TEM_CHANGE_NUMBER() ----------------------------------------8854917
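
Related to the SCN query above: in 10g you can also translate between SCNs and wall-clock time,
which is handy when deciding where to instantiate. A small illustration using standard built-ins:

SELECT DBMS_FLASHBACK.GET_SYSTEM_CHANGE_NUMBER() AS current_scn FROM dual;
SELECT SCN_TO_TIMESTAMP(DBMS_FLASHBACK.GET_SYSTEM_CHANGE_NUMBER()) AS current_scn_as_ts FROM dual;
SELECT TIMESTAMP_TO_SCN(SYSDATE) AS scn_for_now FROM dual;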
And the fun part! This is where we define our capture procedure. I'm taking this
right from the docs but I'm adding a couple of steps. The following will be a user-defined procedure that determines what to do additionally when changes occur. CREATE OR REPL
ACE PROCEDURE emp_dml_handler(in_any IN ANYDATA) IS lcr SYS.LCR$_ROW_RECORD; rc
PLS_INTEGER; command VARCHAR2(30); old_values SYS.LCR$_ROW_LIST; BEGIN -- Access
the LCR rc := in_any.GETOBJECT(lcr); -- Get the object command type command :=
lcr.GET_COMMAND_TYPE(); -- I am inserting the XML equivalent of the LCR into the
monitoring table. insert into streams_monitor (txt_msg) values (command || DBMS
_STREAMS.CONVERT_LCR_TO_XML(in_any) ); -- Set the command_type in the row LCR to
INSERT lcr.SET_COMMAND_TYPE('INSERT'); -- Set the object_name in the row LCR to
EMP_DEL lcr.SET_OBJECT_NAME('EMPLOYEE_AUDIT'); -- Set the new values to the old
values for update and delete IF command IN ('DELETE', 'UPDATE') THEN -- Get the
old values in the row LCR old_values := lcr.GET_VALUES('old'); -- Set the old v
alues in the row LCR to the new values in the row LCR lcr.SET_VALUES('new', old_
values); -- Set the old values in the row LCR to NULL lcr.SET_VALUES('old', NULL
); END IF; -- Add a SYSDATE for upd_date lcr.ADD_COLUMN('new', 'UPD_DATE', ANYDA
TA.ConvertDate(SYSDATE)); -- Add a user column lcr.ADD_COLUMN('new', 'user_name'
, lcr.GET_EXTRA_ATTRIBUTE('USERNAME') ); -- Add an action column lcr.ADD_COLUMN(
'new', 'ACTION', ANYDATA.ConvertVarChar2(command)); -- Make the changes lcr.EXEC
UTE(true); commit; END; / - Create the DML handlers:
BEGIN DBMS_APPLY_ADM.SET_DML_HANDLER( object_name => 'hr.employee', object_type
=> 'TABLE', operation_name => 'INSERT', error_handler => false, user_procedure =
> 'strmadmin.emp_dml_handler', apply_database_link => NULL, apply_name => NULL);
END; / BEGIN DBMS_APPLY_ADM.SET_DML_HANDLER( object_name => 'hr.employee', obje
ct_type => 'TABLE', operation_name => 'UPDATE', error_handler => false, user_pro
cedure => 'strmadmin.emp_dml_handler', apply_database_link => NULL, apply_name =
> NULL); END; / BEGIN DBMS_APPLY_ADM.SET_DML_HANDLER( object_name => 'hr.employe
e', object_type => 'TABLE', operation_name => 'DELETE', error_handler => false,
user_procedure => 'strmadmin.emp_dml_handler', apply_database_link => NULL, appl
y_name => NULL); END; / - Create the apply rule. This tells streams, yet again,
that we in fact do want to capture changes. The second calls tells streams where
to put the info. Change the source_database_name to match your database. DECLAR
E emp_rule_name_dml VARCHAR2(30); emp_rule_name_ddl VARCHAR2(30); BEGIN DBMS_STR
EAMS_ADM.ADD_TABLE_RULES( table_name => 'hr.employee', streams_type => 'apply',
streams_name => 'apply_emp', queue_name => 'strmadmin.streams_queue', include_dm
l => true, include_ddl => false, source_database => 'test10g', dml_rule_name =>
emp_rule_name_dml, ddl_rule_name => emp_rule_name_ddl);
DBMS_APPLY_ADM.SET_ENQUEUE_DESTINATION( rule_name => emp_rule_name_dml, destinat
ion_queue_name => 'strmadmin.streams_queue'); END; / We don't want to stop apply
ing changes when there is an error, so: BEGIN DBMS_APPLY_ADM.SET_PARAMETER( appl
y_name => 'apply_emp', parameter => 'disable_on_error', value => 'n'); END; / -
Turn on the apply process: BEGIN DBMS_APPLY_ADM.START_APPLY( apply_name => 'appl
y_emp'); END; / - Turn on the capture process: BEGIN DBMS_CAPTURE_ADM.START_CAPT
URE( capture_name => 'capture_emp'); END; / - Connect as HR and make some change
s to Employees. sqlplus hr/hr INSERT INTO hr.employee VALUES(207, 'JOHN', 'SMITH
','JSMITH@MYCOMPANY.COM',NULL, '07-JUN-94', 'AC_ACCOUNT', 777, NULL, NULL, 110);
COMMIT; INSERT INTO hr.employee VALUES(208, 'Piet', 'Pietersen','JSMITH@MYCOMPA
NY.COM',NULL, '07-JUN-94', 'AC_ACCOUNT', 777, NULL, NULL, 110); COMMIT; INSERT I
NTO hr.employee VALUES(209, 'Piet', 'Pietersen','JSMITH@MYCOMPANY.COM',NULL, '07
-JUN-94', 'AC_ACCOUNT', 777, NULL, NULL, 110); COMMIT; UPDATE hr.employee SET sa
lary=5999
WHERE employee_id=206;
COMMIT;

DELETE FROM hr.employee WHERE employee_id=207;
COMMIT;

It takes a few seconds for the data to make it to the logs and then back into the system
to be applied. Run this query until you see data (remembering that it is not instantaneous):

SELECT employee_id, first_name, last_name, upd_date, action
FROM hr.employee_audit
ORDER BY employee_id;

Then you can log back into the s
treams admin account: sqlplus strmadmin/strmadmin View the XML LCR that we inser
ted during the capture process: set long 9999 set pagesize 0 select * from strea
ms_monitor; That's it! It's really not that much work to capture and apply chang
es. Of course, it's a little bit more work to cross database instances, but it's
not that much. Keep an eye out for a future entry where I do just that. One of
the things that amazes me is how little code is required to accomplish this. The
less code I have to write, the less code I have to maintain. Take care, LewisC
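
A follow-up on the monitoring part of this note: when the audit rows do not show up as quickly as
expected, a few status checks usually point at the culprit. A hedged set of queries against the
standard streams dictionary views (column availability varies slightly per release):

SELECT capture_name, status, captured_scn, applied_scn FROM dba_capture;
SELECT apply_name, status FROM dba_apply;
SELECT apply_name, source_transaction_id, error_message FROM dba_apply_error;
SELECT capture_name, state, total_messages_captured FROM gv$streams_capture;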
Note 9: Streams example (2) =========================== The entry builds directl
y on my last entry, Oracle Streams Configuration: Change Data Capture. This entr
y will show you how to propagate the changes you captured in that entry to a 9i
database. NOTE #1: I would recommend that you run the commands and make sure the
last entry works for you before trying the code in this entry. That way you wil
l need to debug as few moving parts as possible. NOTE #2: I have run this code w
indows to windows, windows to linux, linux to solaris and solaris to solaris.
The only time I had any problem at all was solaris to solaris. If you run into p
roblems with propagation running but not sending data, shutdown the source datab
ase and restart it. That worked for me. NOTE #3: I have run this code 10g to 10g
and 10g to 9i. It works without change between them. NOTE #4: If you are not su
re of the exact name of your database (including domain), use global_name, i.e.
select * from global_name; NOTE #5: Streams is not available with XE. Download a
nd install EE. If you have 1 GB or more of RAM on your PC, you can download EE a
nd use the DBCA to run two database instances. You do not physically need two ma
chines to get this to work. NOTE #6: I promise this is the last note. Merry Chri
stmas and/or Happy Holidays! Now for the fun part. As I mentioned above, you nee
d two instances for this. I called my first instance ORCL (how creative!) and I
called my second instance SECOND. It works for me! ORCL will be my source instan
ce and SECOND will be my target instance. You should already have the CDC code f
rom the last article running in ORCL. ORCL must be in archivelog mode to run CDC
. SECOND does not need archivelog mode. Having two databases running on a single
PC in archivelog mode can really beat up a poor IDE drive. You already created
your streams admin user in ORCL so now do the same thing in SECOND. The code bel
ow is mostly the same code that you ran on ORCL. I made a few minor changes in c
ase you are running both instances on a single PC:
sqlplus / as sysdba create tablespace streams_second_tbs datafile 'c:\temp\strea
m_2_tbs.dbf' size 25M reuse autoextend on maxsize unlimited; create user strmadm
in identified by strmadmin default tablespace streams_second_tbs quota unlimited
on streams_second_tbs; grant dba to strmadmin; Connect as strmadmin. You need t
o create an AQ table, AQ queue and then start the queue. That's what the code be
low does.
BEGIN
DBMS_AQADM.CREATE_QUEUE_TABLE( queue_table => 'lrc_emp_t', queue_payload_type =>
'sys.anydata', multiple_consumers => TRUE, compatible => '8.1');
DBMS_AQADM.CREATE_QUEUE( queue_name => 'lrc_emp_q', queue_table => 'lrc_emp_t');
DBMS_AQADM.START_QUEUE ( queue_name => 'lrc_emp_q'); END; / You also need to cre
ate a database link. You have to have one from ORCL to SECOND but for debugging,
I like a link in both. So, while you're in SECOND, create a link:
CREATE DATABASE LINK orcl.world CONNECT TO strmadmin IDENTIFIED BY strmadmin USI
NG 'orcl.world'; Log into ORCL as strmadmin and run the exact same command there
. Most of the setup for this is exactly the same between the two instances. Crea
te your link on this side also. CREATE DATABASE LINK second.world CONNECT TO str
madmin IDENTIFIED BY strmadmin USING 'second.world'; Ok, now we have running que
ues in ORCL and SECOND. While you are logged into ORCL, you will create a propag
ation schedule. You DO NOT need to run this in SECOND.
BEGIN DBMS_STREAMS_ADM.ADD_TABLE_PROPAGATION_RULES( table_name => 'hr.employees'
, streams_name => 'orcl_2_second', source_queue_name => 'strmadmin.lrc_emp_q', d
estination_queue_name => 'strmadmin.lrc_emp_q@second.world', include_dml => true
, include_ddl => FALSE, source_database => 'orcl.world'); END;
/ This tells the database to take the data in the local lrc_emp_q and send it to
the named destination queue. We're almost done with the propagation now. We jus
t need to change the code we wrote in the last article in our DML handler. Go ba
ck and review that code now. We are going to modify the EMP_DML_HANDLER so that
we get an enqueue block just above the execute statement: CREATE OR REPLACE PROC
EDURE emp_dml_handler(in_any IN ANYDATA) IS lcr SYS.LCR$_ROW_RECORD; rc PLS_INTE
GER; command VARCHAR2(30); old_values SYS.LCR$_ROW_LIST; BEGIN -- Access the LCR
rc := in_any.GETOBJECT(lcr); -- Get the object command type command := lcr.GET_
COMMAND_TYPE(); -- I am inserting the XML equivalent of the LCR into the monitor
ing table. insert into streams_monitor (txt_msg) values (command || DBMS_STREAMS
.CONVERT_LCR_TO_XML(in_any) ); -- Set the command_type in the row LCR to INSERT
lcr.SET_COMMAND_TYPE('INSERT'); -- Set the object_name in the row LCR to EMP_DEL
lcr.SET_OBJECT_NAME('EMPLOYEE_AUDIT'); -- Set the new values to the old values
for update and delete IF command IN ('DELETE', 'UPDATE') THEN -- Get the old val
ues in the row LCR old_values := lcr.GET_VALUES('old'); -- Set the old values in
the row LCR to the new values in the row LCR lcr.SET_VALUES('new', old_values);
-- Set the old values in the row LCR to NULL lcr.SET_VALUES('old', NULL); END I
F; -- Add a SYSDATE value for the timestamp column lcr.ADD_COLUMN('new', 'UPD_DA
TE', ANYDATA.ConvertDate(SYSDATE)); -- Add a user value for the timestamp column
lcr.ADD_COLUMN('new', 'user_name', lcr.GET_EXTRA_ATTRIBUTE('USERNAME') ); -- Ad
d an action column lcr.ADD_COLUMN('new', 'ACTION', ANYDATA.ConvertVarChar2(comma
nd));
DECLARE
  enqueue_options    DBMS_AQ.enqueue_options_t;
  message_properties DBMS_AQ.message_properties_t;
  message_handle     RAW(16);
  recipients         DBMS_AQ.aq$_recipient_list_t;
BEGIN
  recipients(1) := sys.aq$_agent('anydata_subscriber', 'strmadmin.lrc_emp_q@second.world', NULL);
  message_properties.recipient_list := recipients;

  DBMS_AQ.ENQUEUE(
    queue_name         => 'strmadmin.lrc_emp_q',
    enqueue_options    => enqueue_options,
    message_properties => message_properties,
    payload            => anydata.convertObject(lcr),
    msgid              => message_handle);
EXCEPTION
  WHEN OTHERS THEN
    insert into streams_monitor (txt_msg)
    values ('Anydata: ' || DBMS_UTILITY.FORMAT_ERROR_STACK );
END;

-- Make the changes
lcr.EXECUTE(true);
commit;
END;
/
The declaration section above created some variable required for an enqueue. We
created a subscriber (that's the name of the consumer). We will use that name to
dequeue the record in the SECOND instance. We then enqueued our LCR as an ANYDA
TA datatype. I put the exception handler there in case there are any problems wi
th our enqueue. That's all it takes. Insert some records into the HR.employees t
able and commit them. Then log into strmadmin@second and select * from the lrc_e
mp_t table. You should have as many records there as you inserted. There are not
a lot of moving parts so there aren't many things that will go wrong. Propagati
on is where I have the most troubles. You can query DBA_PROPAGATION to see if yo
u have any propagation errors. That's it for moving the data from 10g to 9i. In
my next article, I will show you how to dequeue the data and put it into the emp
loyee_audit table on the SECOND side. If you have any problems or any questions
please post them. Take care, LewisC
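
Following up on the DBA_PROPAGATION remark above, a hedged sketch of the checks to run when rows are
enqueued on ORCL but never arrive on SECOND (some columns, such as STATUS, only exist in later releases):

SELECT propagation_name, source_queue_name, destination_queue_name, destination_dblink,
       error_message, error_date
FROM   dba_propagation;

-- and on the receiving side, check that messages are landing in the queue table
SELECT COUNT(*) FROM strmadmin.lrc_emp_t;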
Note 10: CDC 9.2 ================ A change table is required for each source tab
le. The publisher uses the procedure DBMS_LOGMNR_CDC_PUBLISH .CREATE_CHANGE_TABL
E to create change tables, as shown in Listing 1. In this example, the change ta
bles corresponding to PRICE_LIST and SALES_TRAN are named CDC_PRICE_LIST and CDC
_SALES_TRAN respectively. This procedure creates a change table in a specified s
chema. execute DBMS_CDC_PUBLISH.CREATE_CHANGE_TABLE(OWNER => 'cdc1', \ CHANGE_TA
BLE_NAME => 'emp_ct', \ CHANGE_SET_NAME => 'SYNC_SET', \ SOURCE_SCHEMA => 'scott
', \ SOURCE_TABLE => 'emp', \ COLUMN_TYPE_LIST => 'empno number, ename varchar2(
10), job varchar2(9), mgr number, hiredate date, deptno number', \ CAPTURE_VALUE
S => 'both', \ RS_ID => 'y', \ ROW_ID => 'n', \ USER_ID => 'n', \ TIMESTAMP => '
n', \ OBJECT_ID => 'n',\ SOURCE_COLMAP => 'n', \ TARGET_COLMAP => 'y', \ OPTIONS
_STRING => NULL); This procedure adds columns to, or drops columns from, an exis
ting change table. EXECUTE DBMS_LOGMNR_CDC_PUBLISH.ALTER_CHANGE_TABLE (OWNER =>
'cdc1', \ CHANGE_TABLE_NAME => 'emp_ct', \ OPERATION => 'ADD', \ ADD_COLUMN_LIST =>
'', \ RS_ID => 'Y', \ ROW_ID => 'N', \ USER_ID => 'N', \ TIMESTAMP => 'N', \
OBJECT_ID => 'N', \ SOURCE_COLMAP => 'N', \ TARGET_COLMAP => 'N');
publisher to drop a subscriber view in the subscriber's schema. EXECUTE sys.DBM
S_CDC_SUBSCRIBE.DROP_SUBSCRIBER_VIEW( \ SUBSCRIPTION_HANDLE =>:subhandle, \ SOUR
CE_SCHEMA =>'scott', \ SOURCE_TABLE => 'emp');
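
For completeness, the matching cleanup call in the 9.2 (LOGMNR-prefixed) package removes a change
table again. A hedged sketch re-using the names from the example above:

EXECUTE DBMS_LOGMNR_CDC_PUBLISH.DROP_CHANGE_TABLE(OWNER => 'cdc1', \
  CHANGE_TABLE_NAME => 'emp_ct', \
  FORCE_FLAG => 'N');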
Note 11: Asktom thread ====================== You Asked
I am looking for specific example of setting up streams for bi-directional schem
a level replication. What are your thoughts on using Oracle Streams to implement
active-active configuration of databases for high availability? Thanks, Pratap

and we said...

replication is for replication. replication is definitely nothing I would consider for HA.
For HA there is:
o RAC -- active active servers in a room.
o Data Guard -- active/warm for failover in the event the room disappears.

Replication is a study in complexity. Update anywhere will make your application
o infinitely hard to design
o fairly impossible to test
o more fragile (more moving pieces, more things that can go wrong, which conflicts with your stated goal of HA)

I would not consider replication for HA in any circumstance. Data Guard is the feature you
are looking for.
Tom, Correct me if I am wrong. My understanding is that for a Dataguard failover
, manual intervention of the DBA is required. But if I have a replicated databas
e (2 masters - synchronous from primary to replicated database and asynchronous
the other way around - and only the primary being updated in normal circumstance
s), the failover would be automatic and does not require the DBA to be on site i
mmediately. Thanks Anandhi
Followup
March 8, 2004 - 8am US/Eastern:
the problem is you have to design your entire system from day 1 to be replicated
since when you "failover" (lose the ability to connect to db1) there will be QU
EUED transactions that have not yet taken place on db2 (eg: your users will say
"hey, I know i did that already and do it all over
again") when db1 "recovers" it'll push its transactions and db2 will push its tr
ansactions. bamm -- update conflicts. So, replication is a tool developers can u
se to build a replicated database. dataguard is a tool DBA's can use to set up a
highly available environment. they are not to be confused - you cannot replicat
e 3rd party applications like Oracle Apps, people soft, SAP, etc. You cannot rep
licate most custom developed applications without major design/coding efforts.
you can data guard ANYTHING. and yes, when failover is to take place, you want
a human deciding that. failover is something that happens in minutes, it is in r
esponse to a disaster (hence the name DR). It is a fire, an unrecoverable situat
ion. You do not want to failover because a system blue screened (wait for it to
reboot). You do not want to failover some people but not others (as would happen
with db1, db2 and siteA, siteB if siteA cannot route to db1 but can route to db
2 but siteB can still route to db1 - bummer, now you have transactions taking pl
ace on BOTH and unless you designed the systems to be "replicatable" you are in
a whole world of hurt) DR is something you want a human to be involved in. They need to pull the trigger.

Hi Tom, can you please provide a classification of streams and change data ca
pture. I guess the main difference is that streams covers event capture, transpo
rt (transformation) and consumption. CDC only the capture. But if you consider o
nly event capture, are there technical differences between streams and change da
ta capture? What was the main reason to make CDC as a separate product? thx Jaromir http://www.db-nemec.com
Followup
March 26, 2004 - 9am US/Eastern:
think of streams like a brick. think of CDC like a building made of brick.
streams can be used to build CDC. CDC is built on top of streams (async CDC is,
anyway; sync CDC is trigger based). they are complementary, not really competing.

Hi Tom, Is Oracle advanced queuing
(AQ) is renamed as Oracle Stream in 10g? Thanks Followup February 13, 2005 - 4pm
US/Eastern:
no, AQ is a foundation technology used in the streams implementation (and advance
d replication), but streams is not AQ.
Note 12: ANYDATA: ================= This datatype could be useful in an applicat
ion that stores generic attributes -- attributes you don't KNOW what the datatyp
es are until you actually run the code. In the past, we would have stuffed every
thing into a VARCHAR2 -- dates, numbers, everything. Now, you can put a date in
and have it stay as a date (and the system will enforce it is in fact a valid da
te and let you perform date operations on it -- if it were in a varchar2 -- some
one could put "hello world" into your "date" field) SQL> connect adm/vga88nt Con
nected. SQL> create table t ( x sys.anyData ); Table created. SQL> insert into t
values ( sys.anyData.convertNumber(5) ); 1 row created. SQL> SQL> insert into t
values ( sys.anyData.convertDate(sysdate) ); 1 row created. SQL> SQL> insert in
to t values ( sys.anyData.convertVarchar2('hello world') ); 1 row created. SQL>
commit;
Commit complete.

SQL> select t.x.gettypeName() typeName from t t;

TYPENAME
--------------------------------------------------------------------------
SYS.NUMBER
SYS.DATE
SYS.VARCHAR2

SQL> select * from t;

X()
--------------------------------------------------------------------------
ANYDATA()
ANYDATA()
ANYDATA()

Unfortunately, they don't have a method to display the contents of ANYDATA in
a query (most useful in programs that will fetch the data, figure out what it is
and do something with it -- eg: the application has some intelligence as to how
to handle the data) Fortunately we can write one tho: create or replace functio
n getData( p_x in sys.anyData ) return varchar2
as
  l_num      number;
  l_date     date;
  l_varchar2 varchar2(4000);
begin
  case p_x.gettypeName
    when 'SYS.NUMBER' then
      if ( p_x.getNumber( l_num ) = dbms_types.success )
      then
        l_varchar2 := l_num;
      end if;
    when 'SYS.DATE' then
      if ( p_x.getDate( l_date ) = dbms_types.success )
      then
        l_varchar2 := l_date;
      end if;
    when 'SYS.VARCHAR2' then
      if ( p_x.getVarchar2( l_varchar2 ) = dbms_types.success )
      then
        null;
      end if;
    else
      l_varchar2 := '** unknown **';
  end case;

  return l_varchar2;
end;
/
Function created.

select getData( x ) getdata from t;

GETDATA
-------------------
5
19-MAR-02
hello world
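
A similar display can also be had without the helper function by using the Access* member functions
of AnyData together with a CASE on the type name. A hedged sketch (assuming these member functions
are available in your release):

select t.x.gettypeName() type_name,
       case t.x.gettypeName()
         when 'SYS.NUMBER'   then to_char( t.x.accessNumber() )
         when 'SYS.DATE'     then to_char( t.x.accessDate() )
         when 'SYS.VARCHAR2' then t.x.accessVarchar2()
         else '** unknown **'
       end value_as_text
from t t;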
Note 12: Materialized Views
===========================

thread 1:
---------
create materialized view emp_rollback
enable query rewrite
as
select deptno, sum(sal) sal
from emp
group by deptno;
Now, given that all the necessary settings have been done (see the data warehous
ing guide for a comprehensive example) your end users can query: select deptno,
sum(sal) from emp where deptno in ( 10, 20) group by deptno; and the database en
gine will rewrite the query to go against the precomputed rollup, not the detail
s -- giving you the answer in a fraction of the time it would normally take. CRE
ATE MATERIALIZED VIEW LOG ON sales WITH SEQUENCE, ROWID (prod_id, cust_id, time_
id, channel_id, promo_id, quantity_sold, amount_sold) INCLUDING NEW VALUES; CREA
TE MATERIALIZED VIEW sum_sales PARALLEL BUILD IMMEDIATE REFRESH FAST ON COMMIT A
S SELECT s.prod_id, s.time_id, COUNT(*) AS count_grp, SUM(s.amount_sold) AS sum_
dollar_sales, COUNT(s.amount_sold) AS count_dollar_sales, SUM(s.quantity_sold) A
S sum_quantity_sales, COUNT(s.quantity_sold) AS count_quantity_sales FROM sales
s GROUP BY s.prod_id, s.time_id; This example creates a materialized view that c
ontains aggregates on a single table.
Because the materialized view log has been created with all referenced columns i
n the materialized view's defining query, the materialized view is fast refresha
ble. If DML is applied against the sales table, then the changes will be reflect
ed in the materialized view when the commit is issued. CREATE MATERIALIZED VIEW
cust_sales_mv PCTFREE 0 TABLESPACE demo STORAGE (INITIAL 16k NEXT 16k PCTINCREAS
E 0) PARALLEL BUILD IMMEDIATE REFRESH COMPLETE ENABLE QUERY REWRITE AS SELECT c.
cust_last_name, SUM(amount_sold) AS sum_amount_sold FROM customers c, sales s WH
ERE s.cust_id = c.cust_id GROUP BY c.cust_last_name; thread 2: --------Use the C
REATE MATERIALIZED VIEW statement to create a materialized view. A materialized
view is a database object that contains the results of a query. The FROM clause
of the query can name tables, views, and other materialized views. Collectively
these objects are called master tables (a replication term) or detail tables (a
data warehousing term). This reference uses "master tables" for consistency. The
databases containing the master tables are called the master databases. Note: T
he keyword SNAPSHOT is supported in place of MATERIALIZED VIEW for backward comp
atibility. thread 3: --------The following statement creates the primary-key mat
erialized view on the table emp located on a remote database. SQL> CREATE MATERI
ALIZED VIEW mv_emp_pk REFRESH FAST START WITH SYSDATE NEXT SYSDATE + 1/48 WITH P
RIMARY KEY AS SELECT * FROM emp@remote_db;
Materialized view created. Note: When you create a materialized view using the F
AST option you will need to create a view log on the master tables(s) as shown b
elow:
SQL> CREATE MATERIALIZED VIEW LOG ON emp; Materialized view log created. thread
4: --------Refreshing Materialized Views When creating a materialized view, you
have the option of specifying whether the refresh occurs ON DEMAND or ON COMMIT.
In the case of ON COMMIT, the materialized view is changed every time a transac
tion commits, thus ensuring that the materialized view always contains the lates
t data. Alternatively, you can control the time when refresh of the materialized
views occurs by specifying ON DEMAND. In this case, the materialized view can o
nly be refreshed by calling one of the procedures in the DBMS_MVIEW package. DBM
S_MVIEW provides three different types of refresh operations.

DBMS_MVIEW.REFRESH            Refresh one or more materialized views.
DBMS_MVIEW.REFRESH_ALL_MVIEWS Refresh all materialized views.
DBMS_MVIEW.REFRESH_DEPENDENT  Refresh all materialized views that depend on a specified
                              master table or materialized view, or a list of master
                              tables or materialized views.
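
A short usage sketch of those three calls, re-using the MV names from the examples above (exact
parameters are assumptions; check the package spec of your release):

-- complete refresh of a single MV
EXEC DBMS_MVIEW.REFRESH('MV_EMP_PK', 'C');

-- fast refresh of a list of MVs (one method character per MV in the list)
EXEC DBMS_MVIEW.REFRESH(list => 'SUM_SALES,CUST_SALES_MV', method => 'FF');

-- refresh all MVs that depend on a master table
DECLARE
  failures NUMBER;
BEGIN
  DBMS_MVIEW.REFRESH_DEPENDENT(failures, 'EMP', 'C');
END;
/

-- refresh every refreshable MV in the database
DECLARE
  failures NUMBER;
BEGIN
  DBMS_MVIEW.REFRESH_ALL_MVIEWS(failures);
END;
/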
Note 10:
========

/*****************************
****************************************************
* @author                 : Chandar
* @version                : 1.0
*
* Name of the Application : SetupStreams.sql
* Creation/Modification History :
*
*    Chandar   02-Feb-2003   Created
*
* Overview of Script:
* This SQL script sets up streams for bi-directional replication between two
* databases. Replication is set up for the table named tabone in strmuser schema
* created by the script in both the databases.
* Ensure that you have created a streams administrator before executing this scr
ipt. * The script StreamsAdminConfig.sql can be used to create a streams adminis
trator * and configure it. * After running this script you can use AddTable.sql
script to add another active * table to streams environment. *******************
*************************************************************** */ SET VERIFY OF
F SET ECHO OFF SPOOL streams_setup.log
--define variables to store global names of two databases variable site1 varchar
2(128); variable site2 varchar2(128); variable scn number; ---------------------
-------------------------------------------------------------- get TNSNAME , SYS
password and streams admin user details for both the databases ----------------
-----------------------------------------------------------------PROMPT -- TNSNA
ME for database 1 ACCEPT db1 PROMPT 'Enter TNS Name of first database :' PROMPT
-- SYS password for database 1 ACCEPT syspwddb1 PROMPT 'Enter password for sys u
ser of first database :' PROMPT -- Streams administrator username for database 1
ACCEPT strm_adm_db1 PROMPT 'Enter username for streams admin of first database
:' PROMPT -- Streams administrator password for database 1 ACCEPT strm_adm_pwd_d
b1 PROMPT 'Enter password for streams admin on first database :' PROMPT -- TNSNA
ME for database 2 ACCEPT db2 PROMPT 'Enter TNS Name of second database :' PROMPT
-- SYS password for database 2 ACCEPT syspwddb2 PROMPT 'Enter password for sys
user of second database :' PROMPT
-- Streams administrator username for database 2 ACCEPT strm_adm_db2 PROMPT 'Ent
er username for streams admin of second database :' PROMPT -- Streams administra
tor password for database 2 ACCEPT strm_adm_pwd_db2 PROMPT 'Enter password for s
treams admin on second database :' PROMPT PROMPT Connecting as SYS user to datab
ase 1 CONN sys/&syspwddb1@&db1 AS SYSDBA; -- Store global name in site1 variable
EXECUTE SELECT global_name INTO :site1 FROM global_name; PROMPT Granting execut
e privileges on dbms_lock and dbms_pipe to streams admin GRANT EXECUTE ON DBMS_L
OCK TO &strm_adm_db1; GRANT EXECUTE ON DBMS_PIPE to &strm_adm_db1; -- create a u
ser name strmuser and grant necessary privileges PROMPT Creating user named strm
user GRANT CONNECT, RESOURCE TO strmuser IDENTIFIED BY strmuser; PROMPT Connecti
ng as strmuser to database1 CONN strmuser/strmuser@&db1 -- create a sample table
named tabone for which the replication will be set up PROMPT PROMPT Creating ta
ble tabone CREATE TABLE tabone (id NUMBER(5) PRIMARY KEY, name VARCHAR2(50)); --
grant all permissions on tabone to stream administration PROMPT Adding supplemental logging for table tabone ALTER TABLE tabone ADD SUPPLEMENTAL LOG GROUP tabon
e_log_group ( id,name) ALWAYS; PROMPT Granting permissions on table tabone to st
reams administration GRANT ALL ON strmuser.tabone TO &strm_adm_db1; ------------
------------------------- Repeat above steps for database 2 --------------------
----------------
PROMPT Connecting as SYS user to database2 CONN sys/&syspwddb2@&db2 AS SYSDBA;
-- Store global name in site2 variable EXECUTE SELECT global_name INTO :site2 FR
OM global_name; PROMPT Granting execute privileges on dbms_lock and dbms_pipe to
streams admin GRANT EXECUTE ON DBMS_LOCK TO &strm_adm_db2; GRANT EXECUTE ON DBM
S_PIPE to &strm_adm_db2; -- create a user name strmuser and grant necessary priv
ileges PROMPT Creating user named strmuser GRANT CONNECT, RESOURCE TO strmuser I
DENTIFIED BY strmuser; PROMPT Connecting as strmuser CONN strmuser/strmuser@&db2
-- create a sample table named tabone for which the replication will be set up
PROMPT PROMPT Creating table tabone CREATE TABLE tabone (id NUMBER(5) PRIMARY KE
Y, name VARCHAR2(50)); PROMPT Adding supplemental logging for table tabone ALTER
TABLE tabone ADD SUPPLEMENTAL LOG GROUP tabone_log_group ( id,name) ALWAYS; -- g
rant all permissions on tabone to stream administration PROMPT Granting all perm
issions on tabone to streams administrator
GRANT ALL ON strmuser.tabone TO &strm_adm_db2; ---------------------------------
-------------------------------------------------- Set up replication for table
tabone from database 1 to database 2 using streams -----------------------------
------------------------------------------------------ connect as streams admin
to database 1 PROMPT Connecting as streams adimistrator to database 1 conn &strm
_adm_db1/&strm_adm_pwd_db1@&db1
-- create and set up streams queue at database 1 PROMPT PROMPT Creating streams
queue BEGIN DBMS_STREAMS_ADM.SET_UP_QUEUE( queue_table => 'strmuser_queue_table'
, queue_name => 'strmuser_queue', queue_user => 'strmuser'); END; / -- Add table
propagation rules for table tabone to propagate captured changes -- from databa
se 1 to database 2
PROMPT Adding propagation rules for table tabone
BEGIN
  DBMS_STREAMS_ADM.ADD_TABLE_PROPAGATION_RULES(
    table_name             => 'strmuser.tabone',
    streams_name           => 'db1_to_db2_prop',
    source_queue_name      => '&strm_adm_db1..strmuser_queue',
    destination_queue_name => '&strm_adm_db2..strmuser_queue@'||:site2,
    include_dml            => true,
    include_ddl            => true,
    source_database        => :site1);
END;
/

-- create a capture process and add table rules for table tabone to capture the
-- changes made to tabone in database 1
PROMPT Creating capture process at database 1 and adding table rules for table tabone.
BEGIN
  DBMS_STREAMS_ADM.ADD_TABLE_RULES(
    table_name   => 'strmuser.tabone',
    streams_type => 'capture',
    streams_name => 'capture_db1',
    queue_name   => '&strm_adm_db1..strmuser_queue',
    include_dml  => true,
    include_ddl  => true);
END;
/

-- create a database link to database 2 connecting as streams administrator
PROMPT Creating database link to database 2
DECLARE
  sql_command VARCHAR2(200);
BEGIN
  sql_command :='CREATE DATABASE LINK ' ||:site2|| ' CONNECT TO '||
    '&strm_adm_db2 IDENTIFIED BY &strm_adm_pwd_db2 USING ''&db2''';
  EXECUTE IMMEDIATE sql_command;
END;
/

-- get the current SCN of database 1
PROMPT Getting current SCN of database 1 EXECUTE :scn := DBMS_FLASHBACK.GET_SYST
EM_CHANGE_NUMBER(); -- connect to database 2 as streams administrator PROMPT Con
necting as streams administrator to database 2 conn &strm_adm_db2/&strm_adm_pwd_
db2@&db2 ----Set table instantiation SCN for table tabone at database 2 to curre
nt SCN of database 1 We need not use import/export for instantiation because tab
le tabone does not contain any data
PROMPT PROMPT Setting instantiation SCN for table tabone at database 2 BEGIN DBM
S_APPLY_ADM.SET_TABLE_INSTANTIATION_SCN(source_object_name => 'strmuser.tabone',
source_database_name => :site1, instantiation_scn => :scn); END; / -- create an
d set up streams queue at database 2 PROMPT Setting up streams queue at database
2 BEGIN DBMS_STREAMS_ADM.SET_UP_QUEUE( queue_table => 'strmuser_queue_table', q
ueue_name => 'strmuser_queue', queue_user => 'strmuser'); END; / -- create an ap
ply process and add table rules for table tabone to apply -- any changes propaga
ted from database 1 PROMPT Creating Apply process at database 2 and adding table
rules for table tabone
BEGIN DBMS_STREAMS_ADM.ADD_TABLE_RULES( table_name => 'strmuser.tabone', streams
_type => 'apply', streams_name => 'apply_db2', queue_name => '&strm_adm_db2..str
muser_queue', include_dml => true, include_ddl => true, source_database => :site
1); END; / -- start the apply process at database 2 PROMPT Starting the apply pr
ocess BEGIN DBMS_APPLY_ADM.START_APPLY( apply_name => 'apply_db2'); END; /
-- connect to database 1 as streams administrator PROMPT Connecting as streams a
dministrator to database 1 conn &strm_adm_db1/&strm_adm_pwd_db1@&db1 -- start th
e capture process PROMPT PROMPT Starting the capture process at database 1 BEGIN
DBMS_CAPTURE_ADM.START_CAPTURE( capture_name => 'capture_db1'); END; / -- make
dml changes to tabone to check if streams is working PROMPT Inserting row in tab
one at database 1 INSERT INTO strmuser.tabone VALUES(11,'chan'); COMMIT; -- wait
for some time so that changes are applied to database 2. EXECUTE DBMS_LOCK.SLEE
P(35); -------------------------------------------------------------------------
---------
-- Set up replication for table tabone from database 2 to database 1 using strea
ms -----------------------------------------------------------------------------
------ connect to database 2 as streams administrator PROMPT Connecting as strea
ms administrator to database 2 conn &strm_adm_db2/&strm_adm_pwd_db2@&db2 -- sele
ct table tabone to see if changes from database 1 are applied PROMPT PROMPT Sele
cting rows from tabone at database 2 to see if changes are propagated select * f
rom strmuser.tabone;
PROMPT PROMPT Setting up bi-directional replication of table tabone -- create a
database link to database 1 connecting as streams administrator PROMPT PROMPT Cr
eating database link from database 2 to database 1 DECLARE sql_command varchar2(
200); BEGIN sql_command :='CREATE DATABASE LINK ' ||:site1|| ' CONNECT TO '|| '&s
trm_adm_db1 IDENTIFIED BY &strm_adm_pwd_db1 USING ''&db1'''; EXECUTE IMMEDIATE s
ql_command; END; / -- Add table propagation rules for table tabone to propagate
capture changes -- from database 2 to database 1 PROMPT Adding table propagation
rules for tabone at database 2
BEGIN
  DBMS_STREAMS_ADM.ADD_TABLE_PROPAGATION_RULES(
    table_name             => 'strmuser.tabone',
    streams_name           => 'db2_to_db1_prop',
    source_queue_name      => '&strm_adm_db2..strmuser_queue',
    destination_queue_name => '&strm_adm_db1..strmuser_queue@'||:site1,
    include_dml            => true,
    include_ddl            => true,
    source_database        => :site2);
END;
/

-- create a capture process and add table rules for table tabone to
-- capture the changes made to tabone in database 2
PROMPT Creating capture process at database 2 and adding table rules for table t
abone BEGIN DBMS_STREAMS_ADM.ADD_TABLE_RULES( table_name => 'strmuser.tabone', s
treams_type => 'capture', streams_name => 'capture_db2', queue_name => '&strm_ad
m_db2..strmuser_queue', include_dml => true, include_ddl => true); END; / -- get
the current SCN of database 2 PROMPT Getting the current SCN of database 2 EXEC
UTE :scn := DBMS_FLASHBACK.GET_SYSTEM_CHANGE_NUMBER(); -- connect to database 1
as streams administrator PROMPT Connecting as streams administrator to database
1 CONN &strm_adm_db1/&strm_adm_pwd_db1@&db1 -- Set table instantiation SCN for t
able tabone at database 1 to current -- SCN of database 2 PROMPT PROMPT Setting
instantiation SCN for tabone at database 2 BEGIN DBMS_APPLY_ADM.SET_TABLE_INSTAN
TIATION_SCN(source_object_name => 'strmuser.tabone', source_database_name => :si
te2, instantiation_scn => :scn); END; / -- create an apply process and add table
rules for table tabone to apply -- any changes propagated from database 2 PROMP
T Creating apply process at database 1 and adding table rules for tabone BEGIN D
BMS_STREAMS_ADM.ADD_TABLE_RULES( table_name => 'strmuser.tabone', streams_type =
> 'apply', streams_name => 'apply_db1', queue_name => '&strm_adm_db1..strmuser_q
ueue',
include_dml => true, include_ddl => true, source_database => :site2); END; / --
start the apply process PROMPT Starting the apply process BEGIN DBMS_APPLY_ADM.S
TART_APPLY( apply_name => 'apply_db1'); END; / -- connect to database 2 as strea
ms administrator PROMPT Connecting to database 2 as streams administrator CONN &
strm_adm_db2/&strm_adm_pwd_db2@&db2; -- start the capture process PROMPT PROMPT
Starting the capture process at database 2 BEGIN DBMS_CAPTURE_ADM.START_CAPTURE(
capture_name => 'capture_db2'); END; / -- perform dml on tabone at database 2 t
o check if changes are propagated PROMPT Inserting a row into tabone at database
2 INSERT INTO strmuser.tabone VALUES(12,'kelvin'); COMMIT; -- wait for some tim
e so that changes are applied to database 1. EXECUTE DBMS_LOCK.SLEEP(35); -- con
nect to database 1 as streams administrator PROMPT Connecting as streams adminis
trator to database 1 CONN &strm_adm_db1/&strm_adm_pwd_db1@&db1 PROMPT Checking i
f the changes made at database 2 are applied at database 1 SELECT * FROM strmuse
r.tabone; SET ECHO OFF
SPOOL OFF PROMPT End of Script
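
Once the script reports End of Script, a hedged set of checks on either database confirms the
bi-directional setup is alive (standard streams dictionary views; run as the streams administrator):

SELECT capture_name, status FROM dba_capture;
SELECT apply_name, status FROM dba_apply;
SELECT propagation_name, destination_dblink, error_message FROM dba_propagation;
SELECT * FROM strmuser.tabone;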
Note 11: ======== The Data Propagator replication engine uses a change-capture m
echanism and a log to replicate data between a source system and a target system
. A capture process running on the source system captures changes as they occur
in the source tables and stores them temporarily in the change data tables. The
database administrator of the source database must ensure that the Data Propagat
or capture process is active on the source system. The apply process reads the c
hange data tables and applies the changes to the target tables. You can create a
Data Propagator subscription with the DB2 Everyplace XML Scripting tool. In DB2
Everyplace Version 8.2, you cannot create or configure Data Propagator subscrip
tions by using the DB2 Everyplace Sync Server Administration Console. You can use the
DB2 Everyplace Sync Server Administration Console only to view and assign Data P
ropagator subscriptions to subscription sets. All the Data Propagator subscripti
ons use the Data Propagator replication engine. Each Data Propagator replication
environment consists of a source system and a mirror system. The source system
contains the source database, the tables that you want to replicate, and the cap
ture process that is used to capture the data changes. The mirror system contain
s the mirror database and tables. DB2 Everyplace starts the apply process on the
mirror system. When capturing changes to the change data tables, the capture pr
ocess that is running on both the source system and the mirror system will consu
me processor resources and input/output resources. As a result of this additiona
l load on the source system, replication competes with the source applications f
or system resources. Additionally, with the Data Propagator engine, the number o
f moves that the changed data has to make between the tables in the mirror syste
m is higher than with the JDBC engine. As a result, the mirror database requires
a substantially larger logging space than the JDBC replication engine. Capacity
planners should balance the needs of the replication tasks and source applicati
on to determine the size of the source system accordingly. -How the Data Propaga
tor replication engine handles data changes to the source system When the source
application changes a table in the source system, the Data Propagator replication
engine first captures the changes, synchronizes them to the mirror system, and
then applies them to the target system (the mobile device).
-How the Data Propagator replication engine handles data changes to the client system
When the client application on the mobile device changes a table in the client
system, the Data Propagator replication engine
first synchronizes the changes to the mirror system, captures them, and then app
lies them to the source system.
Note 12: commitscn ================== The COMMIT SCN - an undocumented feature M
ay 1999 ------------------------------------------------------------------------
-------Try the following experiment: create table t (n1 number); insert into t v
alues (userenv('commitscn')); select n1 from t; N1 -------43526438 rem Wait a fe
w seconds if there are other people working rem on your system, or start a secon
d session execute a rem couple of small (but real) transactions and commits then
commit; select n1 from t; N1 -------43526441 Obviously your values for N1 will
not match the values above, but you should see that somehow the data you inserte
d into your table was not the value that was finally committed, so what's going
on ? The userenv('commitscn') function has to be one of the most quirky little u
ndocumented features of Oracle. You can only use it in a very restricted fashion
, but if you follow the rules the value that hits the database is the current va
lue of the SCN (System Commit Number), but when you commit your transaction the
number changes to the latest value of the SCN which is always just one less than
the commit SCN used by your transaction. Why on earth, you say, would Oracle pr
oduce such a weird function - and how on earth do they stop it from costing a fo
rtune in processing time. To answer the first question think replication. Back t
o the days of 7.0.9, when a client asked me to build a system which used asynchr
onous replication between London and New York; eventually I persuaded him this w
as not a good idea, especially on early release software when the cost to the bu
siness of an error would be around $250,000 per shot; nevertheless I did have to
demonstrate that in principle it was possible. The biggest problem, though, was
guaranteeing that transactions were applied at the remote site in exactly the s
ame order that they had been committed at the
local site; and this is precisely where Oracle uses userenv('commitscn'). Each t
ime a commit hits the database, the SCN is incremented, so each transaction is '
owned' by an SCN and no two transactions can belong to a single SCN - ultimately
the SCN generator is the single-thread through which all the database must pass
and be serialised. Although there is a small arithmetical quirk that the value
of the userenv('commitscn') is changed to one less than the actual SCN used to c
ommit the transaction, nevertheless each transaction gets a unique, correctly or
dered value for the function. If you have two transactions, the one with the lower
value of userenv('commitscn') is guaranteed to be the one that committed first.
So how does Oracle ensure that the cost of using this function is not prohibitive?
Well, you need to examine Oracle errors 1735 and 1721 in the
$ORACLE_HOME/rdbms/admin/mesg/oraus.msg file.
ORA-01721: USERENV(COMMITSCN) invoked more than once in a transaction
ORA-01735: USERENV('COMMITSCN') not allowed here
You may only
use userenv('commitscn') to update exactly one column of one row in a transacti
on, or insert exactly one value for one row in a transaction, and (just to add t
hat final touch of peculiarity) the column type has to be an unconstrained numbe
r type otherwise the subsequent change does not take place. --------------------
-----------------------------------------------------------Build Your Own Replic
ation: Given this strange function, here's the basis of what you have to do to w
rite your own replication code: create table control_table(sequence_id number, c
ommit_id number); begin transaction insert into control_table (sequence_id,commi
t_id) select meaningless_sequence.nextval, null from dual; -- save the value of
meaningless_sequence -- left as a language-specific exercise update control_tabl
e set commit_id = userenv('commitscn') where sequence_id = {saved value of meani
ngless_sequence}; -- now do all the rest of the work, and include the saved -- m
eaningless_sequence.currval in every row of every table commit; end transaction
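As a concrete illustration of the pseudo-code above, a minimal PL/SQL sketch could
look like this (the sequence, table and column names are only examples, not part of
the original article):

-- one-time setup
create sequence meaningless_sequence;
create table control_table (sequence_id number, commit_id number);

-- per transaction
declare
  v_seq number;
begin
  select meaningless_sequence.nextval into v_seq from dual;
  insert into control_table (sequence_id, commit_id) values (v_seq, null);
  -- userenv('commitscn') may be used only once per transaction, on exactly
  -- one column of one row, and the column must be an unconstrained NUMBER
  update control_table
     set commit_id = userenv('commitscn')
   where sequence_id = v_seq;
  -- ... do the real work of the transaction here, tagging every changed
  -- row with v_seq so it can later be matched to its commit_id ...
  commit;
end;
/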
If you now transport the changed data to the remote site, using the commit_id to
send the transactions in the correct order, and the sequence_id to find the cor
rect items of data, most of your problems are over. (Although you still have som
e messy details which are again left as an exercise.) Note 13:
======== Oracle stream not working as Logminer is down Posted: Dec 17, 2007 11:5
8 PM Reply Hi, Oracle streams capture process is not capturing any updates made
on table for which capture & apply process are configured. Capture process & app
ly process are running fine showing enabled as status & no error. But, No new re
cords are captured in streams_queue_table when I update record in table, which is conf
igured for capturing changes. This setup was working till I got ORA-01341: LogMiner
out-of-memory error in alert.log file. I guess logminer is not capturing the updat
es from redo log. Current Alert log is showing following lines for logminer init
process LOGMINER: Parameters summary for session# = 1 LOGMINER: Number of proce
sses = 3, Transaction Chunk Size = 1 LOGMINER: Memory Size = 10M, Checkpoint int
erval = 10M But same log was like this before LOGMINER: Parameters summary for s
ession# = 1 LOGMINER: Number of processes = 3, Transaction Chunk Size = 1 LOGMIN
ER: Memory Size = 10M, Checkpoint interval = 10M >>LOGMINER: session# = 1, reade
r process P002 started with pid=18 OS id=5812 >>LOGMINER: session# = 1, builder
process P003 started with pid=36 OS id=3304 >>LOGMINER: session# = 1, preparer p
rocess P004 started with pid=37 OS id=1496 We can clearly see reader, builder &
preparer process are not starting after I got Out of memory exception in log min
er. To allocate more space to logminer, I tried to setup tablespace to logminer
I got 2 exception which was contradicting each other error. SQL> exec DBMS_LOGMN
R.END_LOGMNR(); BEGIN DBMS_LOGMNR.END_LOGMNR(); END; * ERROR at line 1: >>ORA-01
307: no LogMiner session is currently active ORA-06512: at "SYS.DBMS_LOGMNR", li
ne 76 ORA-06512: at line 1 SQL> EXECUTE DBMS_LOGMNR_D.SET_TABLESPACE('logmnrts')
; BEGIN DBMS_LOGMNR_D.SET_TABLESPACE('logmnrts'); END; * ERROR at line 1: >>ORA-
01356: active logminer sessions found ORA-06512: at "SYS.DBMS_LOGMNR_D", line 23
2
ORA-06512: at line 1 When I tried stopping logminer exception was no logminer sessi
on is active, But when I tried to setup tablespace exception was active logminer sessi
ons found. I am not sure how to resolve this issue. Please let me know how to resol
ve this issue. Thanks

Re: Oracle stream not working as Logminer is down
Posted: Dec 19, 2007 3:34 AM in response to: sgurusam
The Logminer session associated with a capture process is a special kind of sess
ion which is called a "persistent session". You will not be able to stop it usin
g DBMS_LOGMNR. This package controls only non-persistent sessions. To stop the p
ersistent LogMiner session you must stop the capture process. However, I think y
our problem is more related to a lack of RAM space instead of tablespace (i. e,
disk) space. Try to increase the size of the SGA allocated to LogMiner, by setti
ng capture parameter _SGA_SIZE. I can see you are using the default of 10M, whic
h may be not enough for your case. Of course, you will have to increase the valu
es of init parameters streams_pool_size, sga_target/sga_max_size accordingly, to
avoid other memory problems. To set the _SGA_SIZE parameter, use the PL/SQL pro
cedure DBMS_CAPTURE_ADM.SET_PARAMETER. The example below would set it to 100Megs
:

begin
  DBMS_CAPTURE_ADM.set_parameter('<name of capture process>','_SGA_SIZE','100');
end;
/

I hope this helps.

Re: Oracle stream not working as Logminer is down
Posted: Jan 21, 2008 5:55 AM in response to: ilidioj
The other way round is to clear the archivelogs on your box. You can use rman
for doing the same.

Re: Oracle stream not working as Logminer is down
Posted: Jan 21, 2008 5:56 AM in response to: anoopS
The best way is to write a function for clearing up the archivelogs and schedule
it at regular intervals to avoid these kinds of errors.

Note 14:
========
> I ha
ve set up asyncronous hotlog change data capture from a 9iR2 mainframe > oracle
database to an AIX 10gR2 database. The mainframe process didn't work > and put t
he capture into Abended status. > > *** SESSION ID:(21.175) 2006-08-01 17:28:51.
777 > error 10010 in STREAMS process > ORA-10010: Begin Transaction > ORA-00308:
cannot open archived log '//'EDB.RL11.ARCHLOG.T1.S5965.DBF'' > ORA-04101: OS/39
0 implementation layer error > ORA-04101, FUNC=LOCATE , RC=8, RS=C5C7002A, ERROR
ID=1158 > ::OPIRIP: Uncaught error 447. Error stack::ORA-00447: fatal error in >
background > A-00308: cannot open archived log '//'EDB.RL11.ARCHLOG.T1.S5965.DB
F'' > ORA-04101: OS/390 implementation layer error > ORA-04101, FUNC=LOCATE , RC
=8, RS=C5C7002A, ERRORID=1158 > > This is because I had lower case characters in
the log file format in the > init.ora on the mainframe. The actual log file tha
t was created was a > completely different name. > > I shut down the database an
d fixed the init.ora. Switched the log file. I > dropped all the objects that I
created for CDC. I recreated the capture and > altered the start scn of the capt
ure to the current log which I found by > running: select to_char(max(first_chan
ge#)) from v$log; > > I created the other objects, but when I run > dbms_cdc_pub
lish.alter_hotlog_change_source to enable, it immediately changes > the capture
from disabled to abended status, and gives me the same error > message as above.
> > How do I get the capture out of abended status, and how do I get it to NOT
> try to find the old archive log file (which isn't there anyways)? > Any help w
ould be greatly appreciated!
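A possible first diagnostic step for a situation like the one above (a sketch using
the standard 10gR2 views, run on the staging/capture database): check why the
capture process abended and which archived logs it still expects to find:

SELECT capture_name, status, error_number, error_message,
       first_scn, start_scn, required_checkpoint_scn
FROM dba_capture;

SELECT r.name, r.first_scn, r.next_scn, r.sequence#
FROM dba_registered_archived_log r, dba_capture c
WHERE r.consumer_name = c.capture_name;

-- once the underlying problem is fixed, the capture can be restarted with:
-- exec dbms_capture_adm.start_capture('<capture name>');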
================================================================================
== ============ Note 15: Async CDC extended TEST: ==============================
==================================================== ============
Purpose: Test Async CDC Hotlog and solve errors 1. long running txn detected 2.
stop of capture Date : 26/02/2008 DB : 10.2.0.3 --------------------------------
---------------------------------------SOURCE TABLE OWNER: ALBERT SOURCE TABLE :
PERSOON PUBLISHER : publ_cdc CDC_SET : CDC_DEMO_SET SUBSCRIBER : subs_cdc CHANG
E TABLE : CDC_PERSOON CHANGE_SOURCE : SYNC_SOURCE ------------------------------
-----------------------------------------set ORACLE_HOME=C:\ora10g\product\10.2.
0\db_1 Init: -- specific: TEST10G: startup mount pfile=c:\oracle\admin\test10g\p
file\init.ora TEST10G2: startup mount pfile=c:\oracle\admin\test10g2\pfile\init.
ora -- common: alter database archivelog; archive log start; alter database forc
e logging; alter database add supplemental log data; alter database open; archiv
e log list                           -- Archive Mode
show parameter aq_tm_processes       -- min 3
show parameter compatible            -- must be 10.1.0 or above
show parameter global_names          -- must be TRUE
show parameter job_queue_processes   -- min 2, recommended 4-6
show parameter open_links            -- not less than the default 4
show parameter shared_pool_size      -- must be 0 or at least 200MB
show parameter streams_pool_size     -- min. 480MB (10MB/capture, 1MB/apply)
show parameter undo_retention        -- min. 3600 (1 hr.) (900)
-- Examples of altering initialization parameters alter system set aq_tm_process
es=3 scope=BOTH; alter system set compatible='10.2.0.1.0' scope=SPFILE; alter sy
stem set global_names=TRUE scope=BOTH; alter system set job_queue_processes=6 sc
ope=BOTH; alter system set open_links=4 scope=SPFILE; alter system set streams_p
ool_size=200M scope=BOTH; -- very slow if making smaller alter system set undo_r
etention=3600 scope=BOTH; /* JOB_QUEUE_PROCESSES (current value) + 2 PARALLEL_MA
X_SERVERS (current value) + (5 * (the number of change sets planned))
PROCESSES (current value) + (7 * (the number of change sets planned)) SESSIONS (
current value) + (2 * (the number of change sets planned)) */ ------------------
-----------------------------------------------------Admin Queries: connect / as
sysdba select * FROM DBA_SOURCE_TABLES; SELECT SET_NAME,CHANGE_SOURCE_NAME,BEGI
N_SCN,END_SCN,CAPTURE_ENABLED,PURGING,QUEUE_NAME FROM CHANGE_SETS; SELECT OWNER,
QUEUE_TABLE, TYPE, OBJECT_TYPE, RECIPIENTS FROM DBA_QUEUE_TABLES; SELECT SET_NA
ME,STATUS,EARLIEST_SCN,LATEST_SCN,to_char(LAST_PURGED, 'DD-MMYYYY;HH24:MI'), to_
char(LAST_EXTENDED, 'DD-MM-YYYY;HH24:MI'),SUBSCRIPTION_NAME FROM DBA_SUBSCRIPTIO
NS; SELECT PROPAGATION_SOURCE_NAME, PROPAGATION_NAME, STAGING_DATABASE, DESTINAT
ION_QUEUE FROM CHANGE_PROPAGATIONS; SELECT tablespace_name, force_logging FROM d
ba_tablespaces; SELECT supplemental_log_data_min, supplemental_log_data_pk, supp
lemental_log_data_ui, supplemental_log_data_fk, supplemental_log_data_all, force
_logging FROM gv$database; SELECT owner, name, QUEUE_TABLE, ENQUEUE_ENABLED, DEQ
UEUE_ENABLED FROM dba_queues; SELECT capture_name, total_messages_captured, tota
l_messages_enqueued, elapsed_enqueue_time FROM dba_hist_streams_capture; SELECT
apply_name, reader_total_messages_dequeued, reader_lag, server_total_messages_ap
plied FROM dba_hist_streams_apply_sum; SELECT table_name, scn, supplemental_log_
data_pk, supplemental_log_data_ui, supplemental_log_data_fk, supplemental_log_da
ta_all FROM dba_capture_prepared_tables; SELECT table_name, scn, supplemental_lo
g_data_pk, supplemental_log_data_ui, supplemental_log_data_fk, supplemental_log_
data_all FROM dba_capture_prepared_tables; SELECT DBMS_FLASHBACK.GET_SYSTEM_CHAN
GE_NUMBER() from dual; SELECT EQ_NAME,EQ_TYPE,TOTAL_WAIT#,FAILED_REQ#,CUM_WAIT_T
IME,REQ_DESCRIPTION
FROM
V_$ENQUEUE_STATISTICS WHERE CUM_WAIT_TIME>0 ;
SELECT set_name,capture_name,queue_name,queue_table_name,capture_enabled FROM cd
c_change_sets$; SELECT set_name,capture_name,capture_enabled FROM cdc_change_set
s$; SELECT set_name, CAPTURE_ENABLED, BEGIN_SCN, END_SCN,LOWEST_SCN,CAPTURE_ERRO
R FROM cdc_change_sets$; SELECT set_name, change_source_name, capture_enabled, s
top_on_ddl, publisher FROM change_sets; SELECT subscription_name, handle, set_na
me, username, earliest_scn, description FROM cdc_subscribers$; SELECT username F
ROM dba_users u, streams$_privileged_user s WHERE u.user_id = s.user#; SELECT ca
p.CAPTURE_NAME, cap.FIRST_SCN, cap.APPLIED_SCN, cap.REQUIRED_CHECKPOINT_SCN FROM
DBA_CAPTURE cap, CHANGE_SETS cset WHERE cset.SET_NAME = 'CDC_DEMO_SET' AND cap.
CAPTURE_NAME = cset.CAPTURE_NAME; SELECT r.SOURCE_DATABASE,r.SEQUENCE#,r.NAME,r.
DICTIONARY_BEGIN,r.DICTIONARY_END FROM DBA_REGISTERED_ARCHIVED_LOG r, DBA_CAPTUR
E c WHERE c.CAPTURE_NAME = 'CDC$C_CHANGE_SET_ALBERT' AND r.CONSUMER_NAME = c.CAP
TURE_NAME; SELECT CONSUMER_NAME,PURGEABLE,THREAD#, FIRST_SCN,NEXT_SCN, SEQUENCE#
FROM DBA_REGISTERED_ARCHIVED_LOG ----------------------------------------------
------------------------->>>>>>>>>>> connect / as sysdba Initial: -- TS CREATE T
ABLESPACE TS_CDC DATAFILE 'C:\ORACLE\ORADATA\TEST10G\TS_CDC.DBF' SIZE 50M EXTENT
MANAGEMENT LOCAL AUTOALLOCATE SEGMENT SPACE MANAGEMENT AUTO LOGGING FORCE LOGGI
NG; - USERS: create user albert identified by albert default tablespace ts_cdc t
emporary tablespace temp QUOTA 10M ON sysaux QUOTA 20M ON users QUOTA 50M ON ts_
cdc ;
create user publ_cdc identified by publ_cdc default tablespace ts_cdc temporary
tablespace temp QUOTA 10M ON sysaux QUOTA 20M ON users QUOTA 50M ON TS_CDC ; cre
ate user subs_cdc identified by subs_cdc default tablespace ts_cdc temporary tab
lespace temp QUOTA 10M ON sysaux QUOTA 20M ON users QUOTA 50M ON TS_CDC ; -- GRA
NTS: GRANT create session TO albert; GRANT create table TO albert; GRANT create
sequence TO albert; GRANT create procedure TO albert; GRANT connect TO albert; G
RANT resource TO albert;

GRANT create session TO publ_cdc;
GRANT create table TO publ_cdc;
GRANT create sequence TO publ_cdc;
GRANT create procedure TO publ_cdc;
GRANT connect TO publ_cdc;
GRANT resource TO publ_cdc;
GRANT dba TO publ_cdc;

GRANT create session TO subs_cdc;
GRANT create table TO subs_cdc;
GRANT create sequence TO subs_cdc;
GRANT create procedure TO subs_cdc;
GRANT connect TO subs_cdc;
GRANT resource TO subs_cdc;
GRANT dba TO subs_cdc;
GRANT execute_catalog_role TO publ_cdc; GRANT select_catalog_role TO publ_cdc; G
RANT execute_catalog_role TO subs_cdc; GRANT select_catalog_role TO subs_cdc; --
object privileges GRANT execute ON dbms_cdc_publish TO publ_cdc; GRANT execute
ON dbms_cdc_subscribe TO publ_cdc; GRANT execute ON dbms_lock TO publ_cdc; GRANT
execute ON dbms_cdc_publish TO subs_cdc; GRANT execute ON dbms_cdc_subscribe TO
subs_cdc; GRANT execute ON dbms_lock TO subs_cdc; execute dbms_streams_auth.gra
nt_admin_privilege('publ_cdc');
SQL> SELECT * 2 FROM dba_streams_administrator; USERNAME LOC ACC ---------------
--------------- --- --publ_cdc YES YES SQL> desc dba_streams_administrator; SQL>
SELECT username 2 FROM dba_users u, streams$_privileged_user s 3 WHERE u.user_i
d = s.user#; USERNAME -----------------------------publ_cdc --------------------
---------------------------------------------------CDC: ==== -- CREATE CHANGE_SE
T >>>>>>>>>>> connect albert/albert create table persoon ( userid number, name v
archar(30), lastname varchar(30), constraint pk_userid primary key (userid) ); G
RANT SELECT ON PERSOON TO publ_cdc; GRANT SELECT ON PERSOON TO subs_cdc; ALTER T
ABLE persoon ADD SUPPLEMENTAL LOG DATA (ALL) COLUMNS; >>>>>>>>>>> connect / as s
ysdba SQL> SELECT DBMS_FLASHBACK.GET_SYSTEM_CHANGE_NUMBER() from dual; DBMS_FLAS
HBACK.GET_SYSTEM_CHANGE_NUMBER() ----------------------------------------608789
exec dbms_capture_adm.prepare_table_instantiation(table_name => 'ALBERT.PERSOON');
SQL> SELECT table_name, scn, supplemental_log_data_pk, supplemental_log_data_ui
, 2 supplemental_log_data_fk, supplemental_log_data_all 3 FROM dba_capture_prepa
red_tables; TABLE_NAME SCN SUPPLEME SUPPLEME SUPPLEME SUPPLEME -----------------
------------- ---------- -------- -------- -------- --------
PERSOON
608809 IMPLICIT IMPLICIT IMPLICIT EXPLICIT
SQL> SELECT DBMS_FLASHBACK.GET_SYSTEM_CHANGE_NUMBER() from dual; DBMS_FLASHBACK.
GET_SYSTEM_CHANGE_NUMBER() ----------------------------------------608879
>>>>>>>>>>> connect publ_cdc/publ_cdc
(done as publisher!!)
exec dbms_cdc_publish.create_change_set('CDC_DEMO_SET', 'CDC Demo 2 Change Set',
'HOTLOG_SOURCE', 'Y', NULL, NULL); Note the 'HOTLOG_SOURCE' !! SQL> exec dbms_c
dc_publish.create_change_set('CDC_DEMO_SET', 'CDC Demo 2 Change Set', 'HOTLOG_SO
URCE ', 'Y', NULL, NULL); PL/SQL procedure successfully completed. Note: if you
need to drop a change set, use: DBMS_CDC_PUBLISH.DROP_CHANGE_SET(change_set_name
IN VARCHAR2);
>>>>>>>>>>> conn / as sysdba SQL> SELECT set_name, capture_name, queue_name, que
ue_table_name 2 FROM cdc_change_sets$; SET_NAME QUEUE_TABLE_NAME ---------------
-------------------------------------------SYNC_SET CDC_DEMO_SET CDC$T_CDC_DEMO_
SET CAPTURE_NAME -----------------------------------CDC$C_CDC_DEMO_SET CDC$Q_CDC
_DEMO_SET QUEUE_NAME
SQL> SQL> SELECT set_name, CAPTURE_ENABLED, BEGIN_SCN, END_SCN,LOWEST_SCN,CAPTUR
E_ERROR 2 FROM cdc_change_sets$; SET_NAME -----------------------------SYNC_SET
CDC_DEMO_SET C BEGIN_SCN END_SCN LOWEST_SCN C - ---------- ---------- ----------
Y 0 N N 0 N
SQL> SELECT set_name, CAPTURE_ENABLED, BEGIN_SCN, END_SCN,LOWEST_SCN,CAPTURE_ERR
OR 2 FROM cdc_change_sets$; SET_NAME C BEGIN_SCN END_SCN LOWEST_SCN C
------------------------------ - ---------- ---------- ---------- SYNC_SET Y 0 N
CDC_DEMO_SET N 0 N SQL> SELECT set_name, change_source_name, capture_enabled, s
top_on_ddl, publisher 2 FROM change_sets; SET_NAME -----------------------------
-----------------------------SYNC_SET CDC_DEMO_SET CHANGE_SOURCE_NAME C S PUBLIS
HER ------------------------------ - SYNC_SOURCE HOTLOG_SOURCE Y N N Y SYS
SQL> SQL> SELECT subscription_name, handle, set_name, username, earliest_scn, de
scription 2 FROM cdc_subscribers$; no rows selected -- CREATE CHANGE TABLE: >>>>
>>>>>>> conn publ_cdc/publ_cdc BEGIN dbms_cdc_publish.create_change_table('publ_
cdc', 'CDC_PERSOON', 'CDC_DEMO_SET', 'ALBERT', 'PERSOON', 'userid number, name v
archar(30), lastname varchar(30)', 'BOTH', 'Y', 'Y', 'Y', 'Y', 'N', 'N', 'Y', 'T
ABLESPACE TS_CDC'); END; / The publisher can use this procedure for asynchronous
and synchronous Change Data Capture. However, the default values for the follow
ing parameters are the only supported values for synchronous change sets: begin_
date, end_date, and stop_on_ddl. SQL> BEGIN 2 dbms_cdc_publish.create_change_tab
le('publ_cdc', 'CDC_PERSOON', 'CDC_DEMO_SET', 3 'ALBERT', 'PERSOON', 'userid num
ber, name varchar(30), lastname varchar(30)', 4 5 6 'BOTH', 'Y', 'Y', 'Y', 'Y',
'N', 'N', 'Y', END; / 'TABLESPACE TS_CDC');
PL/SQL procedure successfully completed. GRANT select ON CDC_PERSOON TO subs_cdc
; Note: To drop a change table use: -- drop the change table
DBMS_CDC_PUBLISH.DROP_CHANGE_TABLE( owner IN VARCHAR2, change_table_name IN VARC
HAR2, force_flag IN CHAR); exec dbms_cdc_publish.drop_change_table('publ_cdc','C
DC_PERSOON','Y'); >>>>>>>>>>> connect / as sysdba SQL> SELECT change_set_name, s
ource_schema_name, source_table_name 2 FROM cdc_change_tables$; CHANGE_SET_NAME
SOURCE_SCHEMA_NAME SOURCE_TABLE_NAME ------------------------------ ------------
----------------------------------------------CDC_DEMO_SET ALBERT PERSOON SQL> S
ELECT set_name,capture_name,capture_enabled 2 FROM cdc_change_sets$; SET_NAME CA
PTURE_NAME ------------------------------ -----------------------------SYNC_SET
CDC_DEMO_SET CDC$C_CDC_DEMO_SET >>>>>>>>>>> connect publ_cdc/publ_cdc exec dbms_
cdc_publish.alter_change_set(change_set_name=>'CDC_DEMO_SET', enable_capture=> '
Y'); SQL> exec dbms_cdc_publish.alter_change_set(change_set_name=>'CDC_DEMO_SET'
, enable_capture=> 'Y'); PL/SQL procedure successfully completed. >>>>>>>>>>> co
nnect / as sysdba SQL> SELECT set_name,capture_name,capture_enabled 2 FROM cdc_c
hange_sets$; SET_NAME CAPTURE_NAME ------------------------------ --------------
---------------SYNC_SET CDC_DEMO_SET CDC$C_CDC_DEMO_SET C Y Y C Y N
SQL> SELECT owner, name, QUEUE_TABLE, ENQUEUE_ENABLED, DEQUEUE_ENABLED 2 FROM db
a_queues; OWNER ENQUEUE DEQUEUE ------------------------------------------------
----------SYS NAME QUEUE_TABLE
-----------------------------------CDC$Q_CDC_DEMO_SET CDC$T_CDC_DEMO_SET
YES YES SYS NO NO ........ ........
AQ$_CDC$T_CDC_DEMO_SET_E
CDC$T_CDC_DEMO_SET
SQL> SELECT OWNER, QUEUE_TABLE, TYPE, OBJECT_TYPE, RECIPIENTS 2 FROM DBA_QUEUE_T
ABLES; OWNER QUEUE_TABLE TYPE OBJECT_TYPE
------------------------------ ------------------------------ ------------------
------------.......... .......... SYS CDC$T_CDC_DEMO_SET OBJECT SYS.ANYDATA ....
...... SQL> SELECT set_name, change_source_name, capture_enabled, stop_on_ddl, p
ublisher 2 FROM change_sets; SET_NAME ------------------------------------------
----------------SYNC_SET CDC_DEMO_SET CHANGE_SOURCE_NAME C S PUBLISHER ---------
--------------------- - SYNC_SOURCE HOTLOG_SOURCE Y N Y Y SYS
SQL> SQL> SELECT subscription_name, handle, set_name, username, earliest_scn, de
scription 2 FROM cdc_subscribers$; no rows selected >>>>>>>>>>> connect subs_cdc
/subs_cdc exec dbms_cdc_subscribe.create_subscription('CDC_DEMO_SET', 'cdc_demo
subx', 'CDC_DEMO_SUB'); SQL> exec dbms_cdc_subscribe.create_subscription('CDC_DE
MO_SET', 'cdc_demo subx', 'CDC_DEMO_SUB'); PL/SQL procedure successfully complet
ed. >>>>>>>>>>> connect / as sysdba SQL> SELECT subscription_name, handle, set_n
ame, username, earliest_scn, description 2 FROM cdc_subscribers$; SUBSCRIPTION_N
AME HANDLE SET_NAME USERNAME EARLIEST_SCN DESCRIP ------------------------------
---------- -------------------------------------------------------CDC_DEMO_SUB
1 CDC_DEMO_SET subs_cdc
1 cdc_dem Note: If you want to drop a subscription, use: DBMS_CDC_SUBSCRIBE.DROP
_SUBSCRIPTION(subscription_name IN VARCHAR2); DBMS_CDC_SUBSCRIBE.DROP_SUBSCRIPTI
ON('SUBSCRIPTION_ALBERT');
>>>>>>>>>> connect subs_cdc/subs_cdc BEGIN dbms_cdc_subscribe.subscribe('CDC_DEM
O_SUB', 'ALBERT', 'PERSOON', 'userid, name, lastname', 'CDC_DEMO_SUB_VIEW'); END
; / SQL> BEGIN 2 dbms_cdc_subscribe.subscribe('CDC_DEMO_SUB', 'ALBERT', 'PERSOON
', 3 'userid, name, lastname', 'CDC_DEMO_SUB_VIEW'); 4 END; 5 / PL/SQL procedure
successfully completed. SQL> SELECT set_name, subscription_name, status 2 FROM
user_subscriptions; SET_NAME SUBSCRIPTION_NAME S ------------------------------
------------------------------ CDC_DEMO_SET CDC_DEMO_SUB N exec dbms_cdc_subscri
be.activate_subscription('CDC_DEMO_SUB'); SQL> exec dbms_cdc_subscribe.activate_
subscription('CDC_DEMO_SUB'); PL/SQL procedure successfully completed. SQL> SELE
CT set_name, subscription_name, status 2 FROM user_subscriptions; SET_NAME SUBSC
RIPTION_NAME S ------------------------------ ------------------------------ CDC
_DEMO_SET CDC_DEMO_SUB A >>>>>>>>>>> connect albert/albert SQL> insert into pers
oon 2 values 3 (1,'piet','pietersen');
1 row created. SQL> commit; Commit complete. SQL> insert into persoon 2 values 3
(2,'jan','janssen'); 1 row created. SQL> commit; Commit complete.
>>>>>>>>>>>>>>> connect subs_cdc/subs_cdc exec dbms_cdc_subscribe.extend_window(
'CDC_DEMO_SUB'); SQL> select * from publ_cdc.CDC_PERSOON; OP CSCN$ COMMIT_TI XID
USN$ XIDSLT$ XIDSEQ$ RSID$ ROW_ID$ USERNAME$ -- ---------- --------- ----------
---------- ---------- --------------------------- ------------I 627180 27-FEB-08
2 44 323 1 AAAM1CAAGAAAAAQAAA ALBERT I 627232 27-FEB-08 10 7 326 2 AAAM1CAAGAAA
AAQAAB ALBERT SQL> select * from CDC_DEMO_SUB_VIEW; OP CSCN$ COMMIT_TI XIDUSN$ X
IDSLT$ XIDSEQ$ ROW_ID$ RSID$ TARGET_COLMAP$ -- ---------- --------- ---------- -
--------- ---------- --------------------------- ------------I 627180 27-FEB-08
2 44 323 AAAM1CAAGAAAAAQAAA 1 FE7F000000000000000000000000000 I 627232 27-FEB-08
10 7 326 AAAM1CAAGAAAAAQAAB 2 FE7F000000000000000000000000000 SQL> select OPERA
TION$, COMMIT_TIMESTAMP$, ROW_ID$, USERID, NAME from CDC_DEMO_SUB_VIEW; OP -I I
COMMIT_TI --------27-FEB-08 27-FEB-08 ROW_ID$ USERID NAME ------------------ ---
------- -----------------------------AAAM1CAAGAAAAAQAAA 1 piet AAAM1CAAGAAAAAQAA
B 2 jan
>>>>>>>>>>>>>> connect albert/albert
insert into persoon values (3,'kees','pot'); >>>>>>>>>>>>>> connect subs_cdc/sub
s_cdc SQL> select * from publ_cdc.CDC_PERSOON; OP CSCN$ COMMIT_TI XIDUSN$ XIDSLT
$ XIDSEQ$ RSID$ ROW_ID$ USERNAME$ -- ---------- --------- ---------- ----------
---------- --------------------------- ------------I 627180 27-FEB-08 2 44 323 1
AAAM1CAAGAAAAAQAAA ALBERT I 627232 27-FEB-08 10 7 326 2 AAAM1CAAGAAAAAQAAB ALBE
RT I 628175 27-FEB-08 9 16 351 10001 AAAM1CAAGAAAAAQAAC ALBERT SQL> select OPERA
TION$, COMMIT_TIMESTAMP$, ROW_ID$, USERID, NAME from CDC_DEMO_SUB_VIEW; OP -I I
COMMIT_TI --------27-FEB-08 27-FEB-08 ROW_ID$ USERID NAME ------------------ ---
------- -----------------------------AAAM1CAAGAAAAAQAAA 1 piet AAAM1CAAGAAAAAQAA
B 2 jan
exec dbms_cdc_subscribe.extend_window('CDC_DEMO_SUB'); SQL> exec dbms_cdc_subscr
ibe.extend_window('CDC_DEMO_SUB'); PL/SQL procedure successfully completed. SQL>
select OPERATION$, COMMIT_TIMESTAMP$, ROW_ID$, USERID, NAME from CDC_DEMO_SUB_V
IEW; OP -I I I COMMIT_TI --------27-FEB-08 27-FEB-08 27-FEB-08 ROW_ID$ USERID NA
ME ------------------ ---------- -----------------------------AAAM1CAAGAAAAAQAAA
1 piet AAAM1CAAGAAAAAQAAB 2 jan AAAM1CAAGAAAAAQAAC 3 kees
exec dbms_cdc_subscribe.purge_window('CDC_DEMO_SUB'); SQL> exec dbms_cdc_subscri
be.purge_window('CDC_DEMO_SUB'); PL/SQL procedure successfully completed. SQL> s
elect * from publ_cdc.CDC_PERSOON; OP CSCN$ COMMIT_TI XIDUSN$ XIDSLT$ XIDSEQ$ RS
ID$ ROW_ID$ USERNAME$ -- ---------- --------- ---------- ---------- ---------- -
---------
------------------ ------------I 627180 27-FEB-08 2 AAAM1CAAGAAAAAQAAA ALBERT I
627232 27-FEB-08 10 AAAM1CAAGAAAAAQAAB ALBERT I 628175 27-FEB-08 9 AAAM1CAAGAAAA
AQAAC ALBERT
44 7 16
323 326 351
1 2 10001
SQL> select OPERATION$, COMMIT_TIMESTAMP$, ROW_ID$, USERID, NAME from CDC_DEMO_S
UB_VIEW; no rows selected >>>>>>>>>>>>>> connect albert/albert Connected. SQL> i
nsert into persoon 2 values 3 (4,'joop','joopsen'); >>>>>>>>>>>>>>> connect subs
_cdc/subs_cdc Connected. SQL> select OPERATION$, COMMIT_TIMESTAMP$, ROW_ID$, USE
RID, NAME from CDC_DEMO_SUB_VIEW; no rows selected SQL> select * from publ_cdc.C
DC_PERSOON; OP CSCN$ COMMIT_TI XIDUSN$ XIDSLT$ XIDSEQ$ RSID$ ROW_ID$ USERNAME$ -
- ---------- --------- ---------- ---------- ---------- ------------------------
--- ------------I 627180 27-FEB-08 2 44 323 1 AAAM1CAAGAAAAAQAAA ALBERT I 627232
27-FEB-08 10 7 326 2 AAAM1CAAGAAAAAQAAB ALBERT I 628175 27-FEB-08 9 16 351 1000
1 AAAM1CAAGAAAAAQAAC ALBERT I 628841 27-FEB-08 5 3 350 20001 AAAM1CAAGAAAAAPAAA
ALBERT SQL> exec dbms_cdc_subscribe.extend_window('CDC_DEMO_SUB'); PL/SQL proced
ure successfully completed. SQL> select OPERATION$, COMMIT_TIMESTAMP$, ROW_ID$,
USERID, NAME from CDC_DEMO_SUB_VIEW; OP COMMIT_TI ROW_ID$ USERID NAME -- -------
-- ------------------ ---------- -----------------------------I 27-FEB-08 AAAM1C
AAGAAAAAPAAA 4 joop >>>>>>>>>>>>>> connect albert/albert
SQL> insert into persoon 2 values 3 (5,'gerrit','gerritsen'); 1 row created. SQL
> commit; >>>>>>>>>>>>>> connect subs_cdc/subs_cdc SQL> select * from publ_cdc.C
DC_PERSOON; OP CSCN$ COMMIT_TI XIDUSN$ XIDSLT$ XIDSEQ$ RSID$ ROW_ID$ USERNAME$ -
- ---------- --------- ---------- ---------- ---------- ------------------------
--- ------------I 636854 27-FEB-08 2 7 333 30001 AAAM1CAAGAAAAAOAAA ALBERT I 627
180 27-FEB-08 2 44 323 1 AAAM1CAAGAAAAAQAAA ALBERT I 627232 27-FEB-08 10 7 326 2
AAAM1CAAGAAAAAQAAB ALBERT I 628175 27-FEB-08 9 16 351 10001 AAAM1CAAGAAAAAQAAC
ALBERT I 628841 27-FEB-08 5 3 350 20001 AAAM1CAAGAAAAAPAAA ALBERT SQL> select OP
ERATION$, COMMIT_TIMESTAMP$, ROW_ID$, USERID, NAME from CDC_DEMO_SUB_VIEW; OP CO
MMIT_TI ROW_ID$ USERID NAME -- --------- ------------------ ---------- ---------
--------------------I 27-FEB-08 AAAM1CAAGAAAAAPAAA 4 joop SQL> exec dbms_cdc_sub
scribe.extend_window('CDC_DEMO_SUB'); PL/SQL procedure successfully completed. S
QL> select OPERATION$, COMMIT_TIMESTAMP$, ROW_ID$, USERID, NAME from CDC_DEMO_SU
B_VIEW; OP -I I COMMIT_TI --------27-FEB-08 27-FEB-08 ROW_ID$ USERID NAME ------
------------ ---------- -----------------------------AAAM1CAAGAAAAAOAAA 5 gerrit
AAAM1CAAGAAAAAPAAA 4 joop
>>>>>>>>>>>>>>>> connect albert/albert@test10g Connected. SQL> insert into perso
on 2 values 3 (6,'marie','bruinsma');
1 row created. SQL> commit; Commit complete. >>>>>>>>>>>>>>>> connect subs_cdc/s
ubs_cdc Connected. SQL> select * from publ_cdc.CDC_PERSOON; OP CSCN$ COMMIT_TI X
IDUSN$ XIDSLT$ XIDSEQ$ RSID$ ROW_ID$ USERNAME$ -- ---------- --------- ---------
- ---------- ---------- --------------------------- ------------I 636854 27-FEB-
08 2 7 333 30001 AAAM1CAAGAAAAAOAAA ALBERT I 643057 27-FEB-08 9 13 364 40001 AAA
M1CAAGAAAAAPAAB ALBERT I 627180 27-FEB-08 2 44 323 1 AAAM1CAAGAAAAAQAAA ALBERT I
627232 27-FEB-08 10 7 326 2 AAAM1CAAGAAAAAQAAB ALBERT I 628175 27-FEB-08 9 16 3
51 10001 AAAM1CAAGAAAAAQAAC ALBERT I 628841 27-FEB-08 5 3 350 20001 AAAM1CAAGAAA
AAPAAA ALBERT 6 rows selected. SQL> select OPERATION$, COMMIT_TIMESTAMP$, ROW_ID
$, USERID, NAME from CDC_DEMO_SUB_VIEW; OP -I I COMMIT_TI --------27-FEB-08 27-F
EB-08 ROW_ID$ USERID NAME ------------------ ---------- ------------------------
-----AAAM1CAAGAAAAAOAAA 5 gerrit AAAM1CAAGAAAAAPAAA 4 joop
SQL> exec dbms_cdc_subscribe.extend_window('CDC_DEMO_SUB'); PL/SQL procedure suc
cessfully completed. SQL> select OPERATION$, COMMIT_TIMESTAMP$, ROW_ID$, USERID,
NAME from CDC_DEMO_SUB_VIEW; OP -I I I COMMIT_TI --------27-FEB-08 27-FEB-08 27
-FEB-08 ROW_ID$ USERID NAME ------------------ ---------- ----------------------
-------AAAM1CAAGAAAAAOAAA 5 gerrit AAAM1CAAGAAAAAPAAB 6 marie AAAM1CAAGAAAAAPAAA
4 joop
>>>>>>>>>>> Now about RMAN: A redo log file used by Change Data Capture must rem
ain available on the staging database until Change Data Capture
has captured it. However, it is not necessary that the redo log file remain avai
lable until the Change Data Capture subscriber is done with the change data. To
determine which redo log files are no longer needed by Change Data Capture for a
given change set, the publisher alters the change set's Streams capture process
, which causes Streams to perform some internal cleanup and populates the DBA_LO
GMNR_PURGED_LOG view. The publisher follows these steps: Uses the following quer
y on the staging database to get the three SCN values needed to determine an app
ropriate new first_scn value for the change set, CHICAGO_DAILY: SELECT cap.CAPTU
RE_NAME, cap.FIRST_SCN, cap.APPLIED_SCN, cap.REQUIRED_CHECKPOINT_SCN FROM DBA_CA
PTURE cap, CHANGE_SETS cset WHERE cset.SET_NAME = 'CDC_DEMO_SET' AND cap.CAPTURE
_NAME = cset.CAPTURE_NAME; SQL> SELECT cap.CAPTURE_NAME, cap.FIRST_SCN, cap.APPL
IED_SCN, 2 cap.REQUIRED_CHECKPOINT_SCN 3 FROM DBA_CAPTURE cap, CHANGE_SETS cset
4 WHERE cset.SET_NAME = 'CDC_DEMO_SET' AND 5 cap.CAPTURE_NAME = cset.CAPTURE_NAM
E; CAPTURE_NAME FIRST_SCN APPLIED_SCN REQUIRED_CHECKPOINT_SCN ------------------
------------ ---------- ----------- ----------------------CDC$C_CDC_DEMO_SET 610
086 672502 665072
SQL> SELECT recid, first_change#, sequence#, next_change# 2 FROM V$LOG_HISTORY;
RECID FIRST_CHANGE# SEQUENCE# NEXT_CHANGE# ---------- ------------- ---------- -
----------1 534907 1 555371 2 555371 2 557968 ......... 68 648702 68 651777 69 6
51777 69 653085 70 653085 70 655053 71 655053 71 655234 72 655234 72 656658 73 6
56658 73 657846 74 657846 74 659879 75 659879 75 662288 76 662288 76 662292 77 6
62292 77 662297 78 662297 78 662312 79 662312 79 662322 80 662322 80 662329 81 6
62329 81 662337 --> 82 662337 82 664708 83 664708 83 665724 84 665724 84 670061
85 85 rows selected.
670061
85
674246
SQL> SELECT first_change#, next_change#, sequence#, archived, substr(name, 1, 40
) 2 FROM V$ARCHIVED_LOG; FIRST_CHANGE# NEXT_CHANGE# SEQUENCE# ARC SUBSTR(NAME,1,
40) ------------- ------------ ---------- --------------------------------------
---------------------570647 572710 19 YES C:\ORACLE\FLASH_RECOVERY_AREA\TEST10G\
AR ......... 657846 659879 74 YES C:\ORACLE\FLASH_RECOVERY_AREA\TEST10G\AR 65987
9 662288 75 YES C:\ORACLE\FLASH_RECOVERY_AREA\TEST10G\AR 662288 662292 76 YES C:
\ORACLE\FLASH_RECOVERY_AREA\TEST10G\AR 662292 662297 77 YES C:\ORACLE\FLASH_RECO
VERY_AREA\TEST10G\AR 662297 662312 78 YES C:\ORACLE\FLASH_RECOVERY_AREA\TEST10G\
AR 662312 662322 79 YES C:\ORACLE\FLASH_RECOVERY_AREA\TEST10G\AR 662322 662329 8
0 YES C:\ORACLE\FLASH_RECOVERY_AREA\TEST10G\AR 662329 662337 81 YES C:\ORACLE\FL
ASH_RECOVERY_AREA\TEST10G\AR 662337 664708 82 YES C:\ORACLE\FLASH_RECOVERY_AREA\
TEST10G\AR --> 664708 665724 83 YES C:\ORACLE\FLASH_RECOVERY_AREA\TEST10G\AR 665
724 670061 84 YES C:\ORACLE\FLASH_RECOVERY_AREA\TEST10G\AR 670061 674246 85 YES
C:\ORACLE\FLASH_RECOVERY_AREA\TEST10G\AR 104 rows selected.
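Based on the three SCN values returned above, a sketch of the follow-up step
(the first_scn value below is only an example taken from the required_checkpoint_scn
shown earlier; substitute the value from your own output):

-- raise the capture first_scn so that older archived logs become purgeable
BEGIN
  DBMS_CAPTURE_ADM.ALTER_CAPTURE(
    capture_name => 'CDC$C_CDC_DEMO_SET',
    first_scn    => 665072);
END;
/

-- archived logs that LogMiner/Streams no longer needs
SELECT file_name FROM dba_logmnr_purged_log;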
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> TEST: <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<< Tuesday
13:15:47 2008 in alertlog C001: long running txn detected, xid: 0x0003.02a.0000
0160 C001: long txn committed, xid: 0x0003.02a.00000160 Tuesday 16.00: >>>>>>>>>
>> connect albert/albert SQL> insert into persoon 2 values 3 (8,'appie','sel');
1 row created. SQL> commit; >>>>>>>>>>> connect subs_cdc/subs_cdc SQL> select OP
ERATION$,RSID$,USERID,NAME,LASTNAME from publ_cdc.CDC_PERSOON; OP RSID$ USERID N
AME LASTNAME -- ---------- ---------- ------------------------------------------
-----------------
I 30001 I 40001 I 50001 I 50002 indeed added I 1 I 2 I 10001 I 20001 8 r
ows selected.
5 6 7 8 1 2 3 4
gerrit marie lubbie appie piet jan kees joop
gerritsen bruinsma lubbie sel pietersen janssen pot joopsen
<--------
SQL> connect sys/vga88nt@test10g as sysdba Connected. SQL> SELECT cap.CAPTURE_NA
ME, cap.FIRST_SCN, cap.APPLIED_SCN, 2 cap.REQUIRED_CHECKPOINT_SCN 3 FROM DBA_CAP
TURE cap, CHANGE_SETS cset 4 WHERE cset.SET_NAME = 'CDC_DEMO_SET' AND 5 cap.CAPT
URE_NAME = cset.CAPTURE_NAME; CAPTURE_NAME FIRST_SCN APPLIED_SCN REQUIRED_CHECKP
OINT_SCN ------------------------------ ---------- ----------- -----------------
-----CDC$C_CDC_DEMO_SET 610086 683349 683184 Tuesday 20.00h: SQL> connect sys/vg
a88nt@test10g as sysdba Connected. SQL> SELECT cap.CAPTURE_NAME, cap.FIRST_SCN,
cap.APPLIED_SCN, 2 cap.REQUIRED_CHECKPOINT_SCN 3 FROM DBA_CAPTURE cap, CHANGE_SE
TS cset 4 WHERE cset.SET_NAME = 'CDC_DEMO_SET' AND 5 cap.CAPTURE_NAME = cset.CAP
TURE_NAME; CAPTURE_NAME FIRST_SCN APPLIED_SCN REQUIRED_CHECKPOINT_SCN ----------
-------------------- ---------- ----------- ----------------------CDC$C_CDC_DEMO
_SET 610086 690551 683349 NO LONG RUNNING TRANSACTIONS Tuesday 21.20h >>>>>>>>>>
>>>>> conn albert/albert SQL> connect albert/albert@test10g Connected. SQL> inse
rt into persoon 2 values 3 (10,'marietje','popje'); 1 row created. Wed 08.00h
>>>>>>>>>>>>>> conn subs_cdc/subs_cdc SQL> select * from albert.persoon; USERID
---------5 4 6 7 8 1 2 3 NAME -----------------------------gerrit joop marie lub
bie appie piet jan kees LASTNAME -------------gerritsen joopsen bruinsma lubbie
sel pietersen janssen pot
8 rows selected.
SQL> select OPERATION$,RSID$,USERID,NAME,LASTNAME from publ_cdc.CDC_PERSOON; OP
RSID$ USERID NAME LASTNAME -- ---------- ---------- ----------------------------
------------------------------I 50001 7 lubbie lubbie I 50002 8 appie sel SQL> s
elect OPERATION$, COMMIT_TIMESTAMP$, ROW_ID$, USERID, NAME from CDC_DEMO_SUB_VIE
W; no rows selected >>>>>>>>>>>>>>>>>>>>>>connect sys/vga88nt@test10g as sysdba
Connected. SQL> select * from DBA_SOURCE_TABLES; SOURCE_SCHEMA_NAME SOURCE_TABLE
_NAME ------------------------------ ------------------------ALBERT PERSOON Wed
08.30: >>>>>>>>>>>>>>>>>>>>>> connect albert/albert@test10g Connected. SQL> inse
rt into persoon 2 values 3 (9,'nadia','nadia'); 1 row created. SQL> commit; Comm
it complete. >>>>>>>>>>>>>>>>>>>>> connect subs_cdc/subs_cdc@test10g
Connected. SQL> select OPERATION$, COMMIT_TIMESTAMP$, ROW_ID$, USERID, NAME from
CDC_DEMO_SUB_VIEW; no rows selected SQL> exec dbms_cdc_subscribe.extend_window(
'CDC_DEMO_SUB'); PL/SQL procedure successfully completed. SQL> select OPERATION$
, COMMIT_TIMESTAMP$, ROW_ID$, USERID, NAME from CDC_DEMO_SUB_VIEW; OP -I I I COM
MIT_TI --------28-FEB-08 28-FEB-08 28-FEB-08 ROW_ID$ USERID NAME ---------------
--- ---------- -----------------------------AAAM1CAAGAAAAAPAAC 7 lubbie AAAM1CAA
GAAAAAPAAD 8 appie AAAM1CAAGAAAAAQAAD 9 nadia
SQL> select OPERATION$,RSID$,USERID,NAME,LASTNAME from publ_cdc.CDC_PERSOON; OP
RSID$ USERID NAME -- ---------- ---------- -------------------------------------
---------------------I 50001 7 lubbie I 50002 8 appie I 60001 9 nadia LASTNAME l
ubbie sel nadia
Wed 8.45: >>>>>>>>>>>>>>> connect albert/albert@test10g SQL> insert into persoon
2 values 3 (10,'lejah','lejah'); 1 row created. No commit done Wed 9:22:12 2008
C001: long running txn detected, xid: 0x0006.025.0000018f etc.. 12:02:31 2008 C
001: long running txn detected, xid: 0x0006.025.0000018f 12.15 COMMIT
RMAN FULL BACKUP MADE 12.30 SQL> insert into persoon
2 3
values (11,'mira','mira');
1 row created. No COMMIT 13:02:38 2008 C001: long running txn detected, xid: 0x0
00a.019.000001a2 13:12:38 2008 C001: long running txn detected, xid: 0x000a.019.
000001a2 13:22:40 2008 C001: long running txn detected, xid: 0x000a.019.000001a2
COMMIT 13:26:25 2008 C001: long txn committed, xid: 0x000a.019.000001a2 ? knllg
objinfo: MISSING Streams multi-version data dictionary 13.29: SQL> create table
persoon2 2 ( 3 userid number, 4 name varchar(30), 5 lastname varchar(30), 6 cons
traint pk_userid2 PRIMARY KEY (userid)); Table created. SQL> insert into persoon
2 2 values 3 (1,'piet','piet'); 1 row created. SQL> commit; Commit complete. 13.
46: SQL> insert into persoon2 2 values 3 (2,'karel','karel'); 1 row created. SQL
> NO COMMIT 16.00h NEVER LONG RUNNING TRANSACTION DETECTED. -- COMPLETELY INDEPE
NDENT OF CDC
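To correlate the "long running txn detected" messages in the alert log with actual
sessions, a query along these lines can be used (a sketch; the xid in the alert log
is xidusn.xidslot.xidsqn in hex):

SELECT s.sid, s.serial#, s.username, s.program,
       t.xidusn, t.xidslot, t.xidsqn, t.start_time, t.used_urec
FROM v$transaction t, v$session s
WHERE t.ses_addr = s.saddr;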
27 Feb 16.00h >>>>>>>>>>>> conn albert/albert SQL> insert into persoon 2 values
3 (12,'xyz','xyz'); 1 row created. SQL> commit; >>>>>>>>>>>>> conn subs_cdc/subs
_cdc SQL> select * from albert.persoon; USERID ---------5 4 6 7 8 1 2 3 9 10 11
12 NAME -----------------------------gerrit joop marie lubbie appie piet jan kee
s nadia lejah mira xyz LASTNAME -----------------------------gerritsen joopsen b
ruinsma lubbie sel pietersen janssen pot nadia lejah mira xyz
12 rows selected. SQL> select OPERATION$,COMMIT_TIMESTAMP$,USERNAME$,USERID,NAME
from publ_cdc.cdc_persoon; OP COMMIT_TI USERNAME$ USERID NAME -- --------- ----
-------------------------- --------------------------------------I 28-FEB-08 ALB
ERT 7 lubbie I 28-FEB-08 ALBERT 8 appie I 28-FEB-08 ALBERT 9 nadia I 29-FEB-08 A
LBERT 10 lejah I 29-FEB-08 ALBERT 11 mira I 29-FEB-08 ALBERT 12 xyz 6 rows selec
ted. SQL> select OPERATION$, COMMIT_TIMESTAMP$, ROW_ID$, USERID, NAME from CDC_D
EMO_SUB_VIEW; OP -I I COMMIT_TI --------28-FEB-08 28-FEB-08 ROW_ID$ USERID NAME
------------------ ---------- -----------------------------AAAM1CAAGAAAAAPAAC 7
lubbie AAAM1CAAGAAAAAPAAD 8 appie
I
28-FEB-08 AAAM1CAAGAAAAAQAAD
9 nadia
SQL> exec dbms_cdc_subscribe.extend_window('CDC_DEMO_SUB'); PL/SQL procedure suc
cessfully completed. SQL> select OPERATION$,COMMIT_TIMESTAMP$,USERNAME$,USERID,N
AME from publ_cdc.cdc_persoon; OP COMMIT_TI USERNAME$ USERID NAME -- --------- -
----------------------------- --------------------------------------I 28-FEB-08
ALBERT 7 lubbie I 28-FEB-08 ALBERT 8 appie I 28-FEB-08 ALBERT 9 nadia I 29-FEB-0
8 ALBERT 10 lejah I 29-FEB-08 ALBERT 11 mira I 29-FEB-08 ALBERT 12 xyz 6 rows se
lected. SQL> select OPERATION$, COMMIT_TIMESTAMP$, ROW_ID$, USERID, NAME from CD
C_DEMO_SUB_VIEW; OP -I I I I I I COMMIT_TI --------28-FEB-08 28-FEB-08 28-FEB-08
29-FEB-08 29-FEB-08 29-FEB-08 ROW_ID$ USERID NAME ------------------ ----------
-----------------------------AAAM1CAAGAAAAAPAAC 7 lubbie AAAM1CAAGAAAAAPAAD 8 a
ppie AAAM1CAAGAAAAAQAAD 9 nadia AAAM1CAAGAAAAAQAAE 10 lejah AAAM1CAAGAAAAAQAAF 1
1 mira AAAM1CAAGAAAAAQAAG 12 xyz
6 rows selected. SQL> exec dbms_cdc_subscribe.purge_window('CDC_DEMO_SUB'); PL/S
QL procedure successfully completed. SQL> select OPERATION$,COMMIT_TIMESTAMP$,US
ERNAME$,USERID,NAME from publ_cdc.cdc_persoon; OP COMMIT_TI USERNAME$ USERID NAM
E -- --------- ------------------------------ ----------------------------------
----I 28-FEB-08 ALBERT 7 lubbie I 28-FEB-08 ALBERT 8 appie I 28-FEB-08 ALBERT 9
nadia I 29-FEB-08 ALBERT 10 lejah I 29-FEB-08 ALBERT 11 mira I 29-FEB-08 ALBERT
12 xyz 6 rows selected. SQL> select OPERATION$, COMMIT_TIMESTAMP$, ROW_ID$, USER
ID, NAME from CDC_DEMO_SUB_VIEW;
no rows selected SQL> Note about "long running txn": ---------------------------
--I have found the following definition of a long running transaction: - A long-
running transaction is a transaction that has not received any LCRs for over 10
minutes. Open transactions (ie, transactions where the commit or rollback has no
t been received) without new LCRs in 10 minutes will spill to the apply spill ta
ble. In dba_apply_parameters you can find parameters of the apply process:

APPLY_NAME                     PARAMETER                      VALUE
------------------------------ ------------------------------ ----------
CDC$A_CHANGE_SET_ALBERT        ALLOW_DUPLICATE_ROWS           N
CDC$A_CHANGE_SET_ALBERT        COMMIT_SERIALIZATION           NONE
CDC$A_CHANGE_SET_ALBERT        DISABLE_ON_ERROR               Y
CDC$A_CHANGE_SET_ALBERT        DISABLE_ON_LIMIT               Y
CDC$A_CHANGE_SET_ALBERT        MAXIMUM_SCN                    INFINITE
CDC$A_CHANGE_SET_ALBERT        PARALLELISM                    1
CDC$A_CHANGE_SET_ALBERT        STARTUP_SECONDS                0
CDC$A_CHANGE_SET_ALBERT        TIME_LIMIT                     INFINITE
CDC$A_CHANGE_SET_ALBERT        TRACE_LEVEL                    0
CDC$A_CHANGE_SET_ALBERT        TRANSACTION_LIMIT              INFINITE
CDC$A_CHANGE_SET_ALBERT        TXN_LCR_SPILL_THRESHOLD        10000
CDC$A_CHANGE_SET_ALBERT        WRITE_ALERT_LOG                Y
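The listing above can be reproduced with a query along these lines (the apply name
is the one generated in this test):

SELECT apply_name, parameter, value
FROM dba_apply_parameters
WHERE apply_name = 'CDC$A_CHANGE_SET_ALBERT'
ORDER BY parameter;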
===============================================================================
TEST CASE: Export CDC objects from DATABASE TEST10G to DATABASE TEST10G2 - Make
database properties in TEST10G2 same as in TEST10G (example, archive logging, po
ols etc..) - Create same CDC related tablespaces - Create users in TEST10G2 DB -
GRANT ALL APPROPRIATE PERMISSIONS - Export from TEST10G - IMPORT INTO TEST10G2
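A possible (untested) sketch of the export/import step with Data Pump; whether the
CDC/Streams internal objects survive a schema-level export depends on the release,
so treat this only as a starting point. Directory, dumpfile and credentials are
placeholders:

expdp system/<password>@TEST10G  schemas=ALBERT,PUBL_CDC,SUBS_CDC directory=DATA_PUMP_DIR dumpfile=cdc_test.dmp logfile=cdc_exp.log
impdp system/<password>@TEST10G2 schemas=ALBERT,PUBL_CDC,SUBS_CDC directory=DATA_PUMP_DIR dumpfile=cdc_test.dmp logfile=cdc_imp.log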
==============================================================================
===============================================================================
PROBLEMS: ========= 1. long running txn detected: ----------------------------No
t serious. 2. RMAN-08137: WARNING: archive log not deleted as it is still needed
: ---------------------------------------------------------------------WARNING:
archive log not deleted as it is still needed Cause An archivelog that should ha
ve been deleted was not as it was required by Streams or Data Guard. The next me
ssage identifies the archivelog. Action This is an informational message. The ar
chivelog can be deleted after it is no longer needed. See the documentation for
Data Guard to alter the set of active Data Guard destinations. See the documenta
tion for Streams to alter the set of active streams. Starting backup at 27-FEB-0
8 channel t1: starting archive log backupset channel t1: specifying archive log(
s) in backup set input archive log thread=1 sequence=600 recid=570 stamp=6478205
34 channel t1: starting piece 1 at 27-FEB-08 channel t1: finished piece 1 at 27-
FEB-08 piece handle=ipj9q29g_1_1 tag=TAG20080227T233511 comment=API Version 2.0,
MMS Version 5.3.3.0 channel t1: backup set complete, elapsed time: 00:00:04 RMAN
-08137: WARNING: archive log not deleted as it is still needed archive log filen
ame=/dbms/tdbaaccp/accptrid/recovery/archive/arch_1_600_630505403.arch thread=1
sequence=600 Finished backup at 27-FEB-08 Thu Feb 28 00:00:01 2008 Starting back
up at 28-FEB-08 channel t1: starting archive log backupset channel t1: specifyin
g archive log(s) in backup set input archive log thread=1 sequence=600 recid=570
stamp=647820534 channel t1: starting piece 1 at 28-FEB-08 channel t1: finished
piece 1 at 28-FEB-08 piece handle=isj9q3o5_1_1 tag=TAG20080228T000004 comment=AP
I Version 2.0,MMS Version 5.3.3.0 channel t1: backup set complete, elapsed time:
00:00:04 RMAN-08137: WARNING: archive log not deleted as it is still needed arc
hive log filename=/dbms/tdbaaccp/accptrid/recovery/archive/arch_1_600_630505403.
arch thread=1 sequence=600
Finished backup at 28-FEB-08 Thu Feb 28 01:00:01 2008 Starting backup at 28-FEB-
08 channel t1: starting archive log backupset channel t1: specifying archive log
(s) in backup set input archive log thread=1 sequence=600 recid=570 stamp=647820
534 channel t1: starting piece 1 at 28-FEB-08 channel t1: finished piece 1 at 28
-FEB-08 piece handle=ivj9q78l_1_1 tag=TAG20080228T010004 comment=API Version 2.0
,MMS Version 5.3.3.0 channel t1: backup set complete, elapsed time: 00:00:04 cha
nnel t1: deleting archive log(s) archive log filename=/dbms/tdbaaccp/accptrid/re
covery/archive/arch_1_600_630505403.arch recid=570 stamp=647820534 Finished back
up at 28-FEB-08 Also handled. 3. ORA-00600: internal error code, arguments: [knl
cLoop-200], [], [], [], [], [], [], [] -----------------------------------------
---------------------------------------------LOGMINER: End mining logfile: /dbms
/tdbaaccp/accptrid/recovery/redo_logs/redo03.log Thu Feb 28 09:10:30 2008 LOGMIN
ER: Begin mining logfile: /dbms/tdbaaccp/accptrid/recovery/redo_logs/redo01.log
Thu Feb 28 09:21:01 2008 Thread 1 advanced to log sequence 608 Current log# 2 se
q# 608 mem# 0: /dbms/tdbaaccp/accptrid/recovery/redo_logs/redo02.log Thu Feb 28
09:21:01 2008 LOGMINER: End mining logfile: /dbms/tdbaaccp/accptrid/recovery/red
o_logs/redo01.log Thu Feb 28 09:21:01 2008 LOGMINER: Begin mining logfile: /dbms
/tdbaaccp/accptrid/recovery/redo_logs/redo02.log Thu Feb 28 09:22:46 2008 Errors
in file /dbms/tdbaaccp/accptrid/admin/dump/bdump/accptrid_c001_1491066.trc: ORA
-00600: internal error code, arguments: [knlcLoop-200], [], [], [], [], [], [],
[] Thu Feb 28 09:22:59 2008 Streams CAPTURE C001 with pid=25, OS id=1491066 stop
ped Thu Feb 28 09:22:59 2008 Errors in file /dbms/tdbaaccp/accptrid/admin/dump/b
dump/accptrid_c001_1491066.trc: ORA-00600: internal error code, arguments: [knlc
Loop-200], [], [], [], [], [], [], [] On AIX: IDENTIFIER TIMESTAMP T C RESOURCE_
NAME A63BEB70 0221101308 P S SYSPROC DESCRIPTION SOFTWARE PROGRAM ABNORMALLY TER
MINATED
SQL> select SEQUENCE#, FIRST_CHANGE#,STATUS,ARCHIVED from v$log; SEQUENCE# FIRST
_CHANGE# STATUS ARC ---------- ------------- ---------------- --607 9579856 INAC
TIVE YES 608 9594819 CURRENT NO 606 9579542 INACTIVE YES SQL> / SEQUENCE# FIRST_
CHANGE# STATUS ARC ---------- ------------- ---------------- --607 9579856 INACT
IVE YES 608 9594819 CURRENT NO 606 9579542 INACTIVE YES
Warning: Errors detected in file /dbms/tdbaaccp/accptrid/admin/dump/bdump/accptr
id_c001_1499336.trc > /dbms/tdbaaccp/accptrid/admin/dump/bdump/accptrid_c001_149
9336.trc > Oracle Database 10g Enterprise Edition Release 10.2.0.3.0 - 64bit Pro
duction > With the Partitioning, OLAP and Data Mining options > ORACLE_HOME = /d
bms/tdbaaccp/ora10g/home > System name: AIX > Node name: pl003 > Release: 3 > Ve
rsion: 5 > Machine: 00CB560D4C00 > Instance name: accptrid > Redo thread mounted
by this instance: 1 > Oracle process number: 23 > Unix process pid: 1499336, im
age: oracle@pl003 (C001) > > *** 2008-02-28 10:49:04.501 > *** SERVICE NAME:(SYS
$USERS) 2008-02-28 10:49:04.488 > *** SESSION ID:(195.1286) 2008-02-28 10:49:04.
488 > KnlcLoop: priorCkptScn currentCkptScn > 0x0000.00926d84 0x0000.00927073 >
knlcLoop: buf_txns_knlcctx:1:: lowest bufLcrScn:0x0000.00926cca > knlcPrintCharC
achedTxn:xid: 0x000b.005.000001ac > *** 2008-02-28 10:49:04.501 > ksedmp: intern
al or fatal error > ORA-00600: internal error code, arguments: [knlcLoop-200], [
], [], [], [], [], [], [] > OPIRIP: Uncaught error 447. Error stack: > ORA-00447
: fatal error in background process > ORA-00600: internal error code, arguments:
[knlcLoop-200], [], [], [], [], [], [], []
================================================================================
==
============ END OF Async CDC extended TEST: ===================================
=============================================== ============ exec dbms_cdc_subsc
ribe.extend_window('CHANGE_SET_ALBERT'); exec dbms_cdc_subscribe.purge_window('C
HANGE_SET_ALBERT'); exec DBMS_CDC_PUBLISH.DROP_CHANGE_SET('CHANGE_SET_ALBERT');
exec dbms_capture_adm.abort_table_instantiation('HR.CDC_DEMO');
-- drop the change set exec dbms_cdc_publish.drop_change_set('CDC_DEMO_SET');
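For completeness, a teardown sketch using the calls shown earlier, in dependency
order (subscription first, then change table, then change set); the names are the
ones used in this test:

-- as subs_cdc
exec dbms_cdc_subscribe.drop_subscription('CDC_DEMO_SUB');

-- as publ_cdc
exec dbms_cdc_publish.drop_change_table('publ_cdc','CDC_PERSOON','Y');
exec dbms_cdc_publish.drop_change_set('CDC_DEMO_SET');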
============= 26 X$ TABLES: ============= Listed below are some of the important
subsystems in the Oracle kernel. This table might help you to read those dreade
d trace files and internal messages. For example, if you see messages like this,
you will at least know where they come from: OPIRIP: Uncaught error 447. Error
stack: KCF: write/open error block=0x3e800 online=1 Kernel Subsystems: OPI Oracl
e Program Interface KK Compilation Layer - Parse SQL, compile PL/SQL KX Executio
n Layer - Bind and execute SQL and PL/SQL K2 Distributed Execution Layer - 2PC h
andling NPI Network Program Interface KZ Security Layer - Validate privs KQ Quer
y Layer RPI Recursive Program Interface KA Access Layer KD Data Layer KT Transac
tion Layer KC Cache Layer KS Services Layer KJ Lock Manager Layer KG Generic Lay
er KV Kernel Variables (eg. x$KVIS and X$KVII) S or ODS Operating System Depende
ncies
Where can one get a list of all hidden Oracle parameters?
Oracle initialization or INIT.ORA parameters with an underscore in front are hid
den or unsupported parameters. One can get a list of all hidden parameters by ex
ecuting this query: select * from SYS.X$KSPPI where substr(KSPPINM,1,1) = '_'; T
he following query displays parameter names with their current value: select a.k
sppinm "Parameter", b.ksppstvl "Session Value", c.ksppstvl "Instance Value" from
x$ksppi a, x$ksppcv b, x$ksppsv c where a.indx = b.indx and a.indx = c.indx and
substr(ksppinm,1,1)='_' order by a.ksppinm; Remember: Thou shall not play with
undocumented parameters! Oracle's x$ Tables See also: Speculation of X$ Table Na
mes x$ tables are the sql interface to viewing oracle's memory in the SGA. The n
ames for the x$ tables can be queried with select kqftanam from x$kqfta; x$activ
eckpt x$bh Information on buffer headers. Contains a record (the buffer header)
for each block in the buffer cache. This select statement lists how many blocks
are Available, Free and Being Used. select count(*), State from ( select decode
(state, 0, 'Free', 1, decode (lrba_seq, 0, 'Available', 'Being Used'), 3, 'Being
Used', state) State from x$bh ) group by state The meaning of state: 0 FREE no
valid block image 1 XCUR a current mode block, exclusive to this instance 2 SCUR
a current mode block, shared with other instances 3 CR a consistent read (stale
) block image 4 READ buffer is reserved for a block being read from disk 5 MREC
a block in media recovery mode 6 IREC a block in instance (crash) recovery mode
The meaning of tch: tch is the touch count. A high touch count indicates that th
e buffer is used often. Therefore, it will probably be at the head of the MRU li
st. See also touch count. The meaning of tim: touch time. class represents a val
ue
designated for the use of the block. lru_flag set_ds maps to addr on x$kcbwds. l
e_addr can be outer joined on x$le.le_addr. flag is a bit array. Bit if set 0 Bl
ock is dirty 4 temporary block 9 or 10 ping 14 stale 16 direct 524288 (=0x80000)
Block was read in a full table scan x$bufqm x$class_stat x$contex
t x$globalcontext x$hofp x$hs_session The x$kc... tables x$kcbbhs x$kcbmmav x$kc
bsc x$kcbwait x$kcbwbpd Buffer pool descriptor, the base table for v$buffer_pool
. How is the buffer cache split between the default, the recycle and the keep bu
ffer pool. x$kcbwds Set descriptor, see also x$kcbwbpd The column id can be join
ed with v$buffer_pool.id. The column bbwait corresponds to the buffer busy waits
wait event. Information on working set buffers addr can be joined with x$bh.set
_ds. set_id will be between lo_setid and hi_setid in v$buffer_pool for the relev
ant buffer pool. x$kccal x$kccbf x$kccbi x$kccbl x$kccbp x$kccbs x$kcccc x$kcccf
x$kccal x$kccbf x$kccbi x$kccbl x$kccbp x$kccbs x$kcccc x$kcccf x$kccdc x$kccdi
x$kccdl x$kccfc x$kccfe x$kccfn x$kccic

x$kccle
Controlfile logfile entry. Use "select max(lebsz) from x$kccle" to find out the
size of a log block. The log block size is the unit for the following init params:
log_checkpoint_interval, _log_io_size, and max_dump_file_size.

x$kcclh x$kccor

x$kcccp
Checkpoint Progress: The column cpodr_bno displays the current redo block number.
Multiplied with the OS block size (usually 512), it returns the amount of bytes of
redo currently written to the redo logs. Hence, this number is reset at each log
switch. x$kcccp can (together with x$kccle) be used to monitor the progress of the
writing of online redo logs. The following query does this:

  select le.leseq                  "Current log sequence No",
         100*cp.cpodr_bno/le.lesiz "Percent Full",
         cp.cpodr_bno              "Current Block No",
         le.lesiz                  "Size of Log in Blocks"
  from   x$kcccp cp, x$kccle le
  where  le.leseq = cp.cpodr_seq
  and    bitand(le.leflg,24) = 8;

bitand(le.leflg,24)=8 makes sure we get the current log group. A variation of this
SQL statement can be used to track how much redo is written by different DML
statements.

x$kccrs x$kccrt x$kccsl x$kcctf x$kccts x$kcfio x$kcftio x$kckce x$kckty x$kc
lcrst x$kcrfx x$kcrmf x$kcrmx x$kcrralg x$kcrrarch x$kcrrdest x$kcrrdstat x$kcrr
ms x$kcvfh x$kcvfhmrr x$kcvfhonl x$kcvfhtmp x$kdnssf The x$kg... tables KG stand
s for kernel generic x$kghlu This view shows one row per shared pool area. If th
ere's a java pool, an additional row is displayed. x$kgicc x$kgics x$kglcursor x
$kgldp x$kgllk This table lists all held and requested library object locks for
all sessions. It is more complete than v$lock. The column kglnaobj displays the
first 80 characters of the name of the object. select kglnaobj, kgllkreq from x$
kgllk x join v$session s on s.saddr = x.kgllkses; kgllkreq = 0 means, the lock i
s held, while kgllkreq > 0 means that the lock is requested.
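As a small sketch building on that join (run as SYS; it lists, for each library
object on which at least one session is waiting, both the holders and the waiters):

  select x.kglnaobj                                "Object",
         s.sid, s.username,
         decode(x.kgllkreq, 0, 'HOLDER', 'WAITER') "Role",
         x.kgllkmod                                "Mode held",
         x.kgllkreq                                "Mode requested"
  from   x$kgllk x, v$session s
  where  s.saddr = x.kgllkses
  and    x.kglnaobj in (select kglnaobj from x$kgllk where kgllkreq > 0)
  order  by x.kglnaobj, "Role";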
x$kglmem x$kglna x$kglna1 x$kglob Library Cache Object x$kglsim x$kglst x$kgskas
p x$kgskcft x$kgskcp x$kgskdopp x$kgskpft x$kgskpp x$kgskquep x$kjbl x$kjbr x$kj
drhv x$kjdrpcmhv x$kjdrpcmpf x$kjicvt x$kjilkft x$kjirft x$kjisft x$kjitrft x$kk
sbv x$kkscs x$kkssrd x$klcie x$klpt x$kmcqs x$kmcvc x$kmmdi x$kmmrd x$kmmsg x$km
msi x$knstacr x$knstasl x$knstcap x$knstmvr x$knstrpp x$knstrqu x$kocst The x$kq
... tables x$kqfco This table has an entry for each column of the x$tables and c
an be joined with x$kqfta. The column kqfcosiz indicates the size (in bytes?) of
the columns. select t.kqftanam "Table Name", c.kqfconam "Column Name", c.kqfcos
iz "Column Size" from x$kqfta t, x$kqfco c where t.indx = c.kqfcotab x$kqfdt x$k
qfsz x$kqfta
It seems that all x$table names can be retrieved with the following query. selec
t kqftanam from x$kqfta; This table can be joined with x$kqfco which contains th
e columns for the tables: select t.kqftanam "Table Name", c.kqfconam "Column Nam
e" from x$kqfta t, x$kqfco c where t.indx = c.kqfcotab x$kqfvi x$kqfvt x$kqlfxpl
x$kqlset x$kqrfp x$kqrfs x$kqrst x$krvslv x$krvslvs x$krvxsv The x$ks... tables
KS stands for kernel services. x$ksbdd x$ksbdp x$ksfhdvnt x$ksfmcompl x$ksfmele
m x$ksfmextelem x$ksfmfile x$ksfmfileext x$ksfmiost x$ksfmlib x$ksfmsubelem x$ks
fqp x$ksimsi x$ksled x$kslei x$ksles x$kslld x$ksllt x$ksllw x$kslwsc x$ksmfs x$
ksmfsv This SGA map. x$ksmge x$ksmgop x$ksmgsc x$ksmgst x$ksmgv x$ksmhp x$ksmjch
x$ksmjs x$ksmlru Memory least recently used Whenever a select is performed on x
$ksmlru, its content is reset! This table shows which memory allocations in the s
hared pool caused the throw out of the biggest memory chunks since it was last q
ueried.
x$ksmls

x$ksmmem
This 'table' seems to allow addressing (that is, reading - writing??) every byte in
the SGA. Since each row maps 4 bytes and the size of the SGA equals
select sum(value) from v$sga, the following query must return 0 (at least on a
four-byte word architecture; unknown for 8 bytes):

  select (select sum(value) from v$sga)
       - (select 4*count(*) from x$ksmmem) "Must be Zero!"
  from dual;

x$ksmsd x$ksmsp x$k
smsp_nwex x$ksmspr x$ksmss x$ksolsfts x$ksolsstat x$ksppcv x$ksppcv2 Contains th
e value kspftctxvl for each parameter found in x$ksppi. Determine if this value
is the default value with the column kspftctxdf. x$ksppi This table contains a r
ecord for all documented and undocumented (starting with an underscore) paramete
rs. select ksppinm from x$ksppi to show the names of all parameters. Join indx+1
with x$ksppcv2.kspftctxpn.

x$ksppo x$ksppsv x$ksppsv2 x$kspspfile x$ksqeq x$ksqrs

x$ksqst
Enqueue management statistics by type.
ksqstwat: the number of waits for the enqueue statistics class.
ksqstwtim: cumulated waiting time. This column is selected when
v$enqueue_stat.cum_wait_time is selected.
The types of classes are:
  BL      Buffer Cache Management
  CF      Controlfile Transaction
  CI      Cross-instance call invocation
  CU      Bind Enqueue
  DF      Datafile
  DL      Direct Loader index creation
  DM      Database mount
  DP      ???
  DR      Distributed Recovery
  DX      Distributed TX
  FB      Acquired when formatting a range of bitmap blocks for ASSM segments.
          id1=ts#, id2=relative dba
  FS      File Set
  IN      Instance number
  IR      Instance Recovery
  IS      Instance State
  IV      Library cache invalidation
  JD      Something to do with dbms_job
  JQ      Job queue
  KK      Redo log kick
  LA..LP  Library cache lock
  MD      Enqueue for Change Data Capture materialized view log (gotten internally
          for DDL on a snapshot log). id1=object# of the snapshot log.
  MR      Media recovery
  NA..NZ  Library cache pin
  PF      Password file
  PI      Parallel slaves
  PR      Process startup
  PS      Parallel slave synchronization
  SC      System commit number
  SM      SMON
  SQ      Sequence number enqueue
  SR      Synchronized replication
  SS      Sort segment
  ST      Space management transaction
  SV      Sequence number value
  SW      Suspend writes enqueue, gotten when someone issues
          "alter system suspend|resume"
  TA      Transaction recovery
  UL      User defined lock
  UN      User name
  US      Undo segment, serialization
  WL      Redo log being written
  XA      Instance attribute lock
  XI      Instance registration lock
  XR      Acquired for "alter system quiesce restricted"
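The per-type figures are exposed through v$enqueue_stat (9.2 and later), which is
built on x$ksqst; a quick way to see which enqueue types cause the most waiting:

  select eq_type, total_req#, total_wait#, cum_wait_time
  from   v$enqueue_stat
  order  by cum_wait_time desc;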
x$kstex x$ksull x$ksulop x$ksulv x$ksumysta x$ksupr x$ksuprlat x$ksurlmt

x$ksusd
Contains a record for all statistics.

x$ksuse x$ksusecon x$ksusecst x$ksusesta x$ksusgif x$ksusgsta x$ksusio x$ksutm
x$ksuxsinst x$ktadm x$targetrba

x$ktcxb
The SGA transaction table.

x$ktfbfe x$ktfthc x$ktftme x$ktprxrs x$ktprxrt x$ktrso x$ktsso x$ktstfc x$ktstssd
x$kttvs
Lists save undo for each tablespace: The column kttvstnm is the name of the tabl
espace that has saved undo. The column is null otherwise. x$kturd x$ktuxe Kernel
transaction, undo transaction entry x$kvis Has (among others) a row containing
the db block size: select kvisval from x$kvis where kvistag = 'kcbbkl' x$kvit x$
kwddef x$kwqpd x$kwqps x$kxfpdp x$kxfpns x$kxfpsst x$kxfpys x$kxfqsrow x$kxsbd x
$kxscc x$kzrtpd x$kzspr x$kzsrt x$le Lock element: contains an entry for each PC
M lock held for the buffer cache. x$le can be left outer joined to x$bh on le_ad
dr. x$le_stat x$logmnr_callback x$logmnr_contents x$logmnr_dictionary x$logmnr_l
ogfile x$logmnr_logs x$logmnr_parameters x$logmnr_process x$logmnr_region x$logm
nr_session x$logmnr_transaction x$nls_parameters x$option x$prmsltyx x$qesmmiwt
x$qesmmsga x$quiesce x$uganco x$version x$xsaggr x$xsawso x$xssinfo

A perl script to find x$ tables:

#!/usr/bin/perl -w
use strict;

# scan the oracle executable for strings that look like x$ table names
open O, ("/appl/oracle/product/9.2.0.2/bin/oracle");
open F, (">x");
my $l;
my $p = ' ' x 40;   # keep the tail of the previous chunk, so names that
                    # straddle a chunk boundary are not missed
my %x;
while (read (O, $l, 10000)) {
    $l = $p . $l;
    foreach ($l =~ /(x\$\w{3,})/g) { $x{$_}++; }
    $p = substr ($l, -40);
}
foreach (sort keys %x) { print F "$_\n"; }

Obviously, it is also possible to extract those names through x$kqfta.

===============
27 OTHER STUFF:
===============

27.1 How to retrieve DDL from sqlplus:
======================================

Use DBMS_METADATA.GET_DDL()

Examples:

  SELECT dbms_metadata.get_ddl('TABLE','EMPLOYEE','RM_LIVE') from dual;

  SQL> set pagesize 0
  SQL> set long 90000
  SELECT dbms_metadata.get_ddl('TABLE', table_name, 'RM_LIVE')
  FROM   DBA_TABLES
  WHERE  OWNER = 'RM_LIVE' and table_name like 'CDC_%';

More on this
procedure: If there is a task in Oracle for which the wheel has been reinvented
many times, it is that of generating database object DDL. There are numerous scr
ipts floating in different forums doing the same thing. Some of them work great,
while others work only until a specific version. Sometimes the DBAs prefer to c
reate the scripts themselves. Apart from the testing overhead, these scripts req
uire substantial insight into the data dictionary. As new versions of the databa
se are released, the scripts need to be modified to fit the new requirements. St
arting from Oracle 9i Release 1, the DBMS_METADATA package has put an official e
nd to all such scripting effort. This article provides a tour of the reverse eng
ineering features of the above package, with a focus on generating the creation
DDL of existing database objects. The article also has a section covering the is
sue of finding object dependencies.

Why do we need to reverse engineer object creation DDL?
We need it for several reasons:
- Database upgrade from earlier versions, when for various reasons export-import is
  the only way out. Huge databases would require a precreated structure, importing
  data with several parallel processes into individual tables.
- Moving development objects into production. The cleanest method is to reverse
  engineer the DDL of the existing objects and run it in production.
- Learning the various parameters that an object has been created with. When we
  create an object, we do not specify all the options, letting Oracle pick the
  defaults. We might want to view the defaults that have been picked up, or we
  might want to crosscheck the parameters of the object. For that we used to need
  Enterprise Manager, Toad, some other tool, or self-developed queries against the
  data dictionary. Now DBMS_METADATA gets us the clean, complete DDL with all
  options.

Modes of usage of the Metadata Package
1. A set of functions that can be used with SQL. This is known as the browsing
   interface. The functions in the browsing interface are GET_DDL,
   GET_DEPENDENT_DDL and GET_GRANTED_DDL.
2. A set of functions that can be used in PL/SQL, which is in fact a superset of
   (1). They support filtering, and optionally turning some clauses in the DDL on
   and off. The flexibilities provided by the programmer interface are rarely
   required. For general use the browsing interface is sufficient - more so if the
   programmer knows SQL well.

Retrieving DDL information by SQL
As
mentioned in the section above, GET_DDL, GET_DEPENDENT_DDL and GET_GRANTED_DDL a
re the three functions in this mode. The next few sections discuss them in detai
l. The objects on which the examples are tested are given in Table 9. GET_DDL Th
e general syntax of GET_DDL is GET_DDL(object_type, name, schema, version, model
, transform). Version, model and transform take the default values "COMPATIBLE",
"ORACLE", and "DDL" - further discussion of these is not in the scope of this a
rticle. object_type can be any of the object types given in Table 8 below. Table
1 shows a simple usage of the GET_DDL function to get all the tables of a schem
a. This function can only be used to fetch named objects, that is, objects with
type N or S in Table 8. We will see in a later section how the "/" at the end of
the DDL can be turned on by default.

Table 1 (DBMS_METADATA.GET_DDL Usage)

SQL> set head off
SQL> set long 1000
SQL> set pages 0
SQL> show user
USER is "REVRUN"
SQL> select DBMS_METADATA.GET_DDL('TABLE','EMPLOYEE')||'/' from dual;

CREATE TABLE "REVRUN"."EMPLOYEE" ( "LASTNAME" VARCHAR2(60) NOT NULL ENABLE, "FIR
STNAME" VARCHAR2(20) NOT NULL ENABLE, "MI" VARCHAR2(2),
"SUFFIX" VARCHAR2(10), "DOB" DATE NOT NULL ENABLE, "BADGE_NO" NUMBER(6,0), "EXEM
PT" VARCHAR2(1) NOT NULL ENABLE, "SALARY" NUMBER(9,2), "HOURLY_RATE" NUMBER(7,2)
, PRIMARY KEY ("BADGE_NO") USING INDEX PCTFREE 10 INITRANS 2 MAXTRANS 255 STORAG
E(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645 PCTINCREASE 0 FR
EELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT) TABLESPACE "SYSTEM" ENABLE ) PC
TFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255 NOCOMPRESS LOGGING STORAGE(INITIAL 6
5536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645 PCTINCREASE 0 FREELISTS 1 F
REELIST GROUPS 1 BUFFER_POOL DEFAULT) TABLESPACE "SYSTEM" / GET_DEPENDENT_DDL Th
e general syntax of GET_DEPENDENT_DDL is GET_DEPENDENT_DDL(object_type, base_obj
ect_name, base_object_schema, version, model, transform, object_count) Version,
model and transform take the default values "COMPATIBLE", "ORACLE" and "DDL", an
d are not discussed further. object_count takes the default of 10000 and can be l
eft like that for most cases. object_type can be any object of type D in Table 8
. base_object_name is the base object on which the object_type objects are depen
dent. The GET_DEPENDENT_DDL function allows the fetching of metadata for depende
nt objects with a single call. For some object types, other functions can be use
d for the same effect. For example, GET_DDL can be used to fetch an index by its
name or GET_DEPENDENT_DDL can be used to fetch the same index by specifying the
table on which it is defined. An added reason for using GET_DEPENDENT_DDL in th
is case might be that it gives the DDL of all dependent objects of that base obj
ect and the specific object type. Table 2 shows a simple usage of GET_DEPENDENT_
DDL.

Table 2 (GET_DEPENDENT_DDL example)

SQL> column aa format a132
SQL> select DBMS_METADATA.GET_DEPENDENT_DDL('TRIGGER','EMPLOYEE') aa from dual;

CREATE
OR REPLACE TRIGGER "REVRUN"."HOURLY_TRIGGER" before update of hourly_rate on emp
loyee for each row begin :new.hourly_rate:=:old.hourly_rate;end; ALTER TRIGGER "
REVRUN"."HOURLY_TRIGGER" ENABLE CREATE OR REPLACE TRIGGER "REVRUN"."SALARY_TRIGG
ER" before insert or update of salary on employee
for each row WHEN (new.salary > 150000) CALL check_sal(:new.salary) ALTER TRIGGE
R "REVRUN"."SALARY_TRIGGER" ENABLE GET_GRANTED_DDL The general syntax of GET_GRA
NTED_DDL is GET_GRANTED_DDL(object_type, grantee, version, model, transform, obj
ect_count) Version, model and transform take the default values "COMPATIBLE", "O
RACLE" and "DDL", and need no further discussion. object_count takes the default
of 10000, and can be left like that for most cases. grantee is the user to whom
the object_types have been granted. The object types that can work in
GET_GRANTED_DDL are the ones with type G in Table 8. Table 3 shows a simple usage
of the GET_GRANTED_DDL function.

Table 3 (GET_GRANTED_DDL Usage)

SQL> set long 99999
SQL> column aa format a132
SQL> select DBMS_METADATA.GET_GRANTED_DDL('OBJECT_GRANT','REVRUN_
USER') aa from dual; GRANT UPDATE ("SALARY") ON "REVRUN"."EMPLOYEE" TO "REVRUN_U
SER" GRANT UPDATE ("HOURLY_RATE") ON "REVRUN"."EMPLOYEE" TO "REVRUN_USER" GRANT
INSERT ON "REVRUN"."TIMESHEET" TO "REVRUN_USER" GRANT UPDATE ON "REVRUN"."TIMESH
EET" TO "REVRUN_USER"
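In the same way the system privilege and role grants received by a user can be
reverse engineered (a sketch for the same demo user; each call raises an error if
the grantee has no grants of that type):

  select DBMS_METADATA.GET_GRANTED_DDL('SYSTEM_GRANT','REVRUN_USER') from dual;
  select DBMS_METADATA.GET_GRANTED_DDL('ROLE_GRANT','REVRUN_USER')   from dual;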

Table 8 below classifies some common objects as Dependent
Object (D), Named Object (N) or Granted Object (G). Some objects exhibit more th
an one such property. For a complete list, refer to the Oracle Documentation. Ho
wever, the list below will meet most requirements. Metadata information retrieva
l by programmatic interface The programmatic interface is for fine-grained detai
led control on DDL generation. The list of procedures available for use in the p
rogrammatic interface is as follows:

  OPEN, SET_FILTER, SET_COUNT, GET_QUERY, SET_PARSE_ITEM, ADD_TRANSFORM,
  SET_TRANSFORM_PARAM, FETCH_xxx, CLOSE

To make use of this in
terface one must write a PLSQL block. Considering the fact that several CLOB col
umns are involved, this is not simple. However, the next section shows how to us
e the SET_TRANSFORM_PARAM function in SQL*Plus in order to perform most of the job
s done by this interface. If one adds simple SQL skills to it, the programmatic
interface can be bypassed in almost all cases. To get details of the programmati
c interface, the reader should refer to the documentation.
Using the SET_TRANSFORM_PARAM function in SQL Session This function determines h
ow the output of the DBMS_METADATA is displayed. The general syntax is SET_TRANS
FORM_PARAM(transform_handle, name, value). transform_handle for SQL Sessions is
DBMS_METADATA.SESSION_TRANSFORM name is the name of the transform, and value is
essentially TRUE or FALSE. Table 4 shows how to get the DDL of tables not contai
ning the word LOG in a good indented form and with SQL Terminator without a stor
age clause.

Table 4 (SET_TRANSFORM_PARAM Usage)

SQL> execute DBMS_METADATA.SET_TRANSFORM_PARAM(DBMS_METADATA.SESSION_TRANSFORM,'STORAGE',false);
PL/SQL procedure successfully completed.

SQL> execute DBMS_METADATA.SET_TRANSFORM_PARAM(DBMS_METADATA.SESSION_TRANSFORM,'PRETTY',true);
PL/SQL procedure successfully completed.

SQL> execute DBMS_METADATA.SET_TRANSFORM_PARAM(DBMS_METADATA.SESSION_TRANSFORM,'SQLTERMINATOR',true);
PL/SQL procedure successfully completed.

SQL> select dbms_metadata.get_ddl('TABLE',table_name) from user_tables
  2  where table_name not like '%LOG';

CREATE TABLE "REVRUN"."EMPLOYEE" ( "LASTNAME" VARCHAR2(60) NOT NUL
L ENABLE, "FIRSTNAME" VARCHAR2(20) NOT NULL ENABLE, "MI" VARCHAR2(2), "SUFFIX" V
ARCHAR2(10), "DOB" DATE NOT NULL ENABLE, "BADGE_NO" NUMBER(6,0), "EXEMPT" VARCHA
R2(1) NOT NULL ENABLE, "SALARY" NUMBER(9,2), "HOURLY_RATE" NUMBER(7,2), PRIMARY
KEY ("BADGE_NO") USING INDEX PCTFREE 10 INITRANS 2 MAXTRANS 255 TABLESPACE "SYST
EM" ENABLE ) PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255 NOCOMPRESS LOGGING TA
BLESPACE "SYSTEM" ; CREATE TABLE "REVRUN"."TIMESHEET" ( "BADGE_NO" NUMBER(6,0),
"WEEK" NUMBER(2,0), "JOB_ID" NUMBER(5,0), "HOURS_WORKED" NUMBER(4,2), FOREIGN KE
Y ("BADGE_NO")
REFERENCES "REVRUN"."EMPLOYEE" ("BADGE_NO") ENABLE ) PCTFREE 10 PCTUSED 40 INITR
ANS 1 MAXTRANS 255 NOCOMPRESS LOGGING TABLESPACE "SYSTEM" ; SQL> Thus we see how
a DDL requirement even with some filtering condition and a formatting requireme
nt was met by the SQL browsing interface along with SET_TRANSFORM_PARAM. Table 5
shows the name and meaning of the SESSION_TRANSFORM parameters.

Table 5 (SET_TRANSFORM_PARAM "name" Parameters)

PRETTY (all objects)
  If TRUE, format the output with indentation and line feeds. Defaults to TRUE.
SQLTERMINATOR (all objects)
  If TRUE, append a SQL terminator (; or /) to each DDL statement. Defaults to
  FALSE.
DEFAULT (all objects)
  Calling SET_TRANSFORM_PARAM with this parameter set to TRUE has the effect of
  resetting all parameters for the transform to their default values. Setting this
  FALSE has no effect. There is no default.
INHERIT (all objects)
  If TRUE, inherits session-level parameters. Defaults to FALSE. If an application
  calls ADD_TRANSFORM to add the DDL transform, then by default the only transform
  parameters that apply are those explicitly set for that transform handle. This
  has no effect if the transform handle is the session transform handle.
SEGMENT_ATTRIBUTES (TABLE and INDEX)
  If TRUE, emit segment attributes (physical attributes, storage attributes,
  tablespace, logging). Defaults to TRUE.
STORAGE (TABLE and INDEX)
  If TRUE, emit storage clause. (Ignored if SEGMENT_ATTRIBUTES is FALSE.) Defaults
  to TRUE.
TABLESPACE (TABLE and INDEX)
  If TRUE, emit tablespace. (Ignored if SEGMENT_ATTRIBUTES is FALSE.) Defaults to
  TRUE.
CONSTRAINTS (TABLE)
  If TRUE, emit all non-referential table constraints. Defaults to TRUE.
REF_CONSTRAINTS (TABLE)
  If TRUE, emit all referential constraints (foreign key and scoped refs).
  Defaults to TRUE.
CONSTRAINTS_AS_ALTER (TABLE)
  If TRUE, emit table constraints as separate ALTER TABLE (and, if necessary,
  CREATE INDEX) statements. If FALSE, specify table constraints as part of the
  CREATE TABLE statement. Defaults to FALSE. Requires that CONSTRAINTS be TRUE.
FORCE (VIEW)
  If TRUE, use the FORCE keyword in the CREATE VIEW statement. Defaults to TRUE.

DBMS_METADATA Security Model
The object views of the Oracle metadata
model implement security as follows:
- Non-privileged users can see the metadata only of their own objects.
- SYS and users with SELECT_CATALOG_ROLE can see all objects.
- Non-privileged users can also retrieve object and system privileges granted to
  them or by them to others. This also includes privileges granted to PUBLIC.
- If callers request objects they are not privileged to retrieve, no exception is
  raised; the object is simply not retrieved.
- If non-privileged users are granted some form of access to an object in someone
  else's schema, they will be able to retrieve the grant specification through the
  Metadata API, but not the object's actual metadata.

Finding objects tha
t are dependent on a given object This is another type of requirement. While dro
pping a seemingly unimportant table or procedure from a schema one might like to
know the objects that are dependent on this object. The data dictionary view DB
A_DEPENDENCIES or USER_DEPENDENCIES or ALL_DEPENDENCIES is the answer to these r
equirements. The columns of the ALL_DEPENDENCIES view are discussed in Table 6.
ALL_DEPENDENCIES describes dependencies between procedures, packages, functions,
package bodies, and triggers accessible to the current user, including dependen
cies on views created without any database links. Only tables are left out of th
is view. However for finding table dependencies we can use ALL_CONSTRAINTS. The
ALL_DEPENDENCIES view comes to the rescue in the very important area of finding
dependencies between stored code objects.

Table 6 (Columns of the ALL_DEPENDENCIES view)

Column                 Description
---------------------  -----------------------------------------------------------
OWNER                  Owner of the object
NAME                   Name of the object
TYPE                   Type of the object
REFERENCED_OWNER       Owner of the parent (referenced) object
REFERENCED_NAME        Name of the parent (referenced) object
REFERENCED_TYPE        Type of the referenced object
REFERENCED_LINK_NAME   Name of the link to the parent object (if remote)
SCHEMAID               ID of the current schema
DEPENDENCY_TYPE        Whether the dependency is a REF dependency (REF) or not (HARD)

Table 7 below shows how to use the above view to get the depen
dencies. The example shows a case where we might want to drop the procedure CHEC
K_SAL, but we would like to find any objects dependent on it. The query below sh
ows that a TRIGGER named SALARY_TRIGGER is dependent on it.

Table 7 (Use of the ALL_DEPENDENCIES view)

SQL> select name, type, owner
  2  from all_dependencies
  3  where referenced_owner = 'REVRUN'
  4  and referenced_name = 'CHECK_SAL';

NAME                           TYPE              OWNER
------------------------------ ----------------- ------------------------
SALARY_TRIGGER                 TRIGGER           REVRUN

CONCLUSION
This article is intended to give the minimum-effort answer to elementary and
intermediate level object dependency related issues; for advanced object dependency
issues it points in the direction of a solution. As Oracle keeps upgrading its
versions, the DBMS_METADATA interface and the ALL_DEPENDENCIES view will be
upgraded along with them, so the solutions developed along those lines
will persist.

Table 8 (Classifying common database objects as Named (N), Dependent (D),
         Granted (G) and Schema (S) objects)

CONSTRAINT (Constraints)                         SND
DB_LINK (Database links)                         SN
DEFAULT_ROLE (Default roles)                     G
FUNCTION (Stored functions)                      SN
INDEX (Indexes)                                  SND
MATERIALIZED_VIEW (Materialized views)           SN
MATERIALIZED_VIEW_LOG (Materialized view logs)   D
OBJECT_GRANT (Object grants)                     DG
PACKAGE (Stored packages)                        SN
PACKAGE_SPEC (Package specifications)            SN
PACKAGE_BODY (Package bodies)                    SN
PROCEDURE (Stored procedures)                    SN
ROLE (Roles)                                     N
ROLE_GRANT (Role grants)                         G
SEQUENCE (Sequences)                             SN
SYNONYM (Synonyms)                               S
SYSTEM_GRANT (System privilege grants)           G
TABLE (Tables)                                   SN
TABLESPACE (Tablespaces)                         N
TRIGGER (Triggers)                               SND
TYPE (User-defined types)                        SN
TYPE_SPEC (Type specifications)                  SN
TYPE_BODY (Type bodies)                          SN
USER (Users)                                     N
VIEW (Views)                                     SN

Table 9 (Creation script of the REVRUN Schema)

connect system/manager
drop user revrun cascade;
drop user revrun_user cascade;
drop user revrun_admin cascade;
create user revrun identified by revrun;
GRANT resource, connect, create session, create table, create procedure,
      create sequence, create trigger, create view, create synonym,
      alter session
TO revrun;
create user revrun_user identified by user1;
create user revrun_admin identified by admin1;
grant connect to revrun_user;
grant connect to revrun_admin;

connect revrun/revrun

Rem Creating employee tables...
create table employee
( lastname    varchar2(60) not null,
  firstname   varchar2(20) not null,
  mi          varchar2(2),
  suffix      varchar2(10),
  DOB         date not null,
  badge_no    number(6) primary key,
  exempt      varchar(1) not null,
  salary      number(9,2),
  hourly_rate number(7,2)
)
/
create table timesheet
( badge_no     number(6) references employee (badge_no),
  week         number(2),
  job_id       number(5),
  hours_worked number(4,2)
)
/
create table system_log
( action_time DATE,
  lastname    VARCHAR2(60),
  action      LONG
)
/
Rem grants...
grant update (salary,hourly_rate) on employee to revrun_user;
grant ALL on employee to revrun_admin with grant option;
grant insert,update on timesheet to revrun_user;
grant ALL on timesheet to revrun_admin with grant option;

Rem indexes...
create index i_employee_name on employee(lastname);
create index i_employee_dob on employee(DOB);
create index i_timesheet_badge on timesheet(badge_no);

Rem triggers
create or replace procedure check_sal( salary in number) as
begin
  return;  -- Demo code
end;
/
create or replace trigger salary_trigger
before insert or update of salary on employee
for each row
when (new.salary > 150000)
call check_sal(:new.salary)
/
create or replace trigger hourly_trigger
before update of hourly_rate on employee
for each row
begin
  :new.hourly_rate := :old.hourly_rate;
end;
/
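With the demo schema in place, one more quick illustration of the browsing
interface against it (assuming the REVRUN objects created above exist):

  -- DDL of all indexes defined on the EMPLOYEE table:
  select dbms_metadata.get_dependent_ddl('INDEX','EMPLOYEE','REVRUN') from dual;

  -- DDL of the demo procedure:
  select dbms_metadata.get_ddl('PROCEDURE','CHECK_SAL','REVRUN') from dual;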
SELECT substr(username, 1, 20), account_status, default_tablespace,
       temporary_tablespace, created
FROM   dba_users
WHERE  created > SYSDATE - 10;
=============
11g Features:
=============

Note 1:
-------
Question: What are the most important new performance features in Oracle Database 11g?

Answer: The three Result Caches: the SQL Result Cache, the PL/SQL Function Result
Cache and the OCI Client Result Cache. The SQL Result Cache stores the result of a
frequently executed SQL query statement in the SGA. The Query Optimizer itself
keeps track of which queries qualify, taking the DML and query frequency into
account. Especially queries on lookup tables benefit enormously from this. The
PL/SQL Function Result Cache does the same, but for PL/SQL functions. The OCI
Client Result Cache stores the query result on the client, so that for selected
SQL queries no network trip to the database is needed.
In addition there is SQL Plan Management, a feature that prevents SQL performance
regression by storing execution plans in the database as a baseline for future
execution plans. If a different execution plan becomes available later, for example
because an index has been created, such a new plan can only be accepted if it
actually leads to better performance. So the SQL optimizer has effectively become
self-learning.
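As a small illustration of the SQL Result Cache (11g syntax; the table name is made
up, and result_cache_max_size must be non-zero for the cache to be used):

  select /*+ result_cache */ country_id, count(*)
  from   customers
  group  by country_id;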
Question: What are the most important new Backup and Recovery features?

Answer: The Data Recovery Advisor: instead of working out yourself how best to
approach a recovery problem, you now simply ask RMAN for advice, and of course RMAN
can also carry out that advice for you. So 'advise failure' and 'repair failure' is
just about all an Oracle 11g DBA needs to know. As far as backup performance is
concerned, the RMAN multisection backup is an important improvement: it makes it
possible to back up a single file with multiple channels, to achieve intra-file
parallelism. A big time saver is the RMAN 'duplicate from active database' feature,
which makes it possible to duplicate a database without needing a previously stored
backup.
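A sketch of what this looks like in RMAN (11g; the auxiliary instance name DUPDB is
just an example and must have been prepared and be reachable for the duplicate to
work):

  RMAN> LIST FAILURE;
  RMAN> ADVISE FAILURE;
  RMAN> REPAIR FAILURE;

  RMAN> BACKUP DATAFILE 5 SECTION SIZE 500M;       # multisection backup

  RMAN> DUPLICATE TARGET DATABASE TO DUPDB FROM ACTIVE DATABASE;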
Question: What are the most important new security features?

Answer: Tablespace encryption is important when it comes to data protection. It
makes it possible to encrypt the entire contents of a tablespace, regardless of the
datatypes used. With this the data is not only protected inside the database, but
also against attacks that bypass the database. Another important improvement is the
new password algorithm and the possibility to store the DBA passwords in an LDAP
server, so that they can be managed centrally.
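A minimal sketch of tablespace encryption (11g; assumes an encryption wallet has
already been set up and opened, and the datafile name is just an example):

  CREATE TABLESPACE secure_ts
    DATAFILE '/u01/oradata/db11g/secure_ts01.dbf' SIZE 100M
    ENCRYPTION USING 'AES256'
    DEFAULT STORAGE (ENCRYPT);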
Question: What are the most important new data storage features?

Answer: The new LOB implementation (SecureFiles) is outright great, with much
better performance and a built-in encryption option, which offers many advantages
to anyone with LOBs in the database. The integration of NFS into the database,
Direct NFS, is also a feature with many performance advantages and more freedom of
choice regarding the underlying disk system: the Oracle 11g database can now talk
directly to an NFS server, without the NFS layer of the operating system in
between. Finally, the many new partitioning methods are simply overwhelming; just
about anything one could wish for is now possible. Perhaps the most important form
of partitioning, range partitioning, has been automated with the introduction of
interval partitioning, and the virtual column based partition key also offers many
new possibilities.
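Two short sketches of these storage features (11g syntax; table and column names
are made up for illustration):

  -- SecureFile LOB storage:
  create table documents
  ( doc_id   number,
    doc_body blob
  )
  lob (doc_body) store as securefile;

  -- Interval partitioning: Oracle adds new monthly partitions automatically.
  create table sales_il
  ( sale_id   number,
    sale_date date,
    amount    number(10,2)
  )
  partition by range (sale_date)
  interval (numtoyminterval(1,'MONTH'))
  ( partition p0 values less than (to_date('01-01-2008','DD-MM-YYYY')) );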