
Oracle Performance Tuning

Oracle Performance Tuning is a very important area. It is not just the DBA's responsibility; it is also
the responsibility of Oracle developers. Performance tuning should start before design
and should be tested continuously. We need a good understanding of how the Oracle
software works, because many problems occur simply because we are not aware of how
Oracle behaves internally.
In this section, we present Oracle performance tuning articles which will help you in
solving problems quickly.
Tuning Tools

Oracle Performance Tuning Tools (Oracle Explain Plan, Autotrace, tkprof)
We present here information about Oracle performance tuning tools such as Oracle
Explain Plan, Autotrace and tkprof, and take a deep dive into each of them.
Oracle Explain Plan
-The Oracle database internally creates a query execution plan in order to fetch the desired data
from the physical tables. The query execution plan is simply the set of methods by which the
database will access the data in the tables. This plan is crucial, as different execution plans
need different cost and time for the query execution
-It shows the execution path of a SQL statement
-It indicates which tables will be read first and in what order
-It indicates which indexes will be used, or whether new indexes are needed
-It helps verify that Oracle will retrieve the data the way you expect it to
How to find the Oracle Explain Plan?
You must have full access to a PLAN_TABLE, or create a PLAN_TABLE with the utlxplan.sql script
delivered by Oracle.
Insert the following command at the top of your SQL statement:
explain plan
set statement_id = 'x'
into plan_table
for
<your SQL statement>;
Then query the plan:
select lpad(' ', 2*level) || operation operations, options, object_name
from plan_table
where statement_id = 'x'
connect by prior id = parent_id and statement_id = 'x'
start with id = 1 and statement_id = 'x'
order by id;
or
SQL> set linesize 132
SQL> SELECT * FROM TABLE(dbms_xplan.display);
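A minimal end-to-end illustration (the statement_id 'demo1' and the sample EMP table are assumptions for this sketch):
EXPLAIN PLAN SET STATEMENT_ID = 'demo1' FOR
  SELECT ename, deptno FROM emp WHERE deptno = 10;

-- display the plan stored under that statement_id from the default PLAN_TABLE
SELECT * FROM TABLE(dbms_xplan.display(NULL, 'demo1'));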
How to Read?
-Read innermost out, top/down
-Join operations always require two sets. The order you read the sets is top down, so the first
set is the driving set and the second is the probed set. In the case of a nested loop, the first set
is the outer loop. In the case of a hash join, the first set is used to build the hash table.
-One join is performed at a time, so you only need to consider two sets and their join operation
at any one time.
What to look for ?
-Look for TABLE ACCESS (FULL)
-Costly if table is big
-Costly if performed many times
-Look for INDEX (RANGE SCAN)
-Expected for non-unique indexes
-Suspicious for unique indexes
Some info about joins
Nested Loops - Good for on-line screens and reports
-Read all rows from table 1
-Then access table 2 once for each row returned from table 1
-Fastest if:
the number of rows returned from table 1 is small
access to table 2 is inexpensive, meaning either a UNIQUE lookup or a SMALL range scan.
Merge Join
-Better than nested loops for joining a large number of rows

Hash Join - Common in parallel queries

-Often better than a merge join
-The smallest table is passed over and a hashing algorithm is applied to each row to create a hash
table and a bitmap.
-The second table is then passed over, the same hashing algorithm is applied, and each row is
checked for matches (i.e. joins).
-The bitmap is used as a quick lookup and is especially useful when the hash table is too large to fit
in memory.
Sort Merge Join
-Rows are produced from table 1 and are then sorted
-Rows are produced from table 2 and sorted by the same sort key as table 1
-Table 1 and 2 are NOT accessed concurrently
-Sorted rows from both sides are then merged together(joined)
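To see how these join methods show up in a plan, a hedged illustration using optimizer hints on sample EMP and DEPT tables (the hints only suggest a method to the optimizer):
-- force a nested loop join, then compare the plan with a hash join
SELECT /*+ USE_NL(e d) */   e.ename, d.dname
FROM   emp e, dept d
WHERE  e.deptno = d.deptno;

SELECT /*+ USE_HASH(e d) */ e.ename, d.dname
FROM   emp e, dept d
WHERE  e.deptno = d.deptno;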
Autotrace (Another Oracle Performance Tuning Tool)
Similar to Explain Plan
Provides the plan without having to run a separate query against PLAN_TABLE
Provides statistics
Note: Does not support bind variables
Syntax
set autotrace { off | on | traceonly }
[explain] [statistics]
Autotrace
SET AUTOTRACE OFF - the default; no report is generated
SET AUTOTRACE ON EXPLAIN - shows only the optimizer execution path
SET AUTOTRACE ON STATISTICS - shows only execution statistics
SET AUTOTRACE ON - includes both the plan and the statistics
SET AUTOTRACE TRACEONLY - same as ON but suppresses query output
Autotrace - the execution plan produced by Autotrace contains the following:
-Line number of each execution step
-The relationship between each step and its parent
-Each step of the plan
-Any database links or parallel servers used
-TKPROF, discussed next, will show row counts whereas Autotrace does not!
Autotrace: Example
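A minimal illustration, assuming autotrace has been set up (see the installation steps later in this section) and a sample EMP table exists:
SET AUTOTRACE TRACEONLY
SELECT ename FROM emp WHERE deptno = 10;
-- SQL*Plus now prints the execution plan followed by statistics such as
-- recursive calls, db block gets, consistent gets and physical reads,
-- while suppressing the query result itself.
SET AUTOTRACE OFF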
Trace Files and TKPROF
Regular Trace
Contains SQL, execution statistics, and execution plan
Advantages
Provides execution path
Provides row counts
Produces smallest flat file
Disadvantages
Cannot tell what values were supplied to the SQL at run time
If the statistics for a query are low but the run time is long, it cannot tell what events caused the long wait
Trace with Binds(level 4)
Regular Trace plus values supplied to the SQL statement via the local variables (i.e. binds)
Advantages
Provides execution path
Provides row counts
Can tell what values the statement was run for
Disadvantages
Again, if the run time is long and the statistics are low, it will be hard to tell why from this type of trace.
Produces a bigger flat file than the regular trace because bind variable information also has to be
stored.
Trace with Waits (level 8)
Regular trace plus timings of the database operations the SQL had to wait on in order to
complete, for example disk access timings.
Advantages
Provides execution path
Provides row counts
Can tell timings for all events associated with SQL.
Disadvantages
The trace file may easily hit its size limit because of all the information Oracle must write to it,
in which case only partial information is available in the trace file.
Bind variable information is not available
Trace with Binds and Waits (level 12)
Regular trace with both the wait and the bind information. Contains the most complete
information and produces the largest trace file.

Advantages
Provides execution path
Provides row counts
Can tell timings for all events associated with SQL.
Can tell what values the SQL was run with
Disadvantages
The trace file may easily hit its size limit because of all the information Oracle must write to it,
in which case only partial information is available in the trace file.
Formatting Your SQL Trace File with TKPROF
Trace files are unformatted dumps of data.
TKPROF is the tool used to format trace files for reading.
Syntax
tkprof {tracefile} {outputfile}
  [explain={user/passwd}] [table={tablename}]
  [print={n}] [insert={filename}] [sys={yes|no}] [sort={options}]
TKPROF Usage Best Practices
The following is the recommended usage:
tkprof {trace file} {output file} explain={usernm/passwd} print=? sort=prsela,exeela,fchela
Use explain= to get the plan in the report.
Use print= to report only on the first ? statements.
Use sort= to sort the statements with the longest elapsed times first (works with
timed_statistics=true).
Some more commands related to tracing
1. To trace any session (sid) from another session:
sys.dbms_system.set_ev(sid, serial#, event, level, name)
Example: SQL> execute sys.dbms_system.set_ev(8, 219, 10046, 12, '');
2. Gathering stats for an object in APPS:
exec apps.fnd_stats.GATHER_TABLE_STATS('APPLSYS','FND_CONCURRENT_REQUESTS',100,4);
3. Using oradebug:
oradebug setospid <os process id>
oradebug event 10046 trace name context forever, level 4
oradebug event 10046 trace name context off
oradebug close_trace
4. Using tkprof
This prints the top 10 SQL statements only:
tkprof <tracefile>.trc elaps.prf sys=no explain=apps/<password> sort=(prsela,exeela,fchela) print=10
This prints all the SQL statements:
tkprof <tracefile>.trc elaps.prf sys=no explain=apps/apps sort=prsela,exeela,fchela

Autotrace Utility
Autotrace is a handy tool provided by Oracle for getting the explain plan and execution
statistics.
You only need the query (and its bind variables, if any); with autotrace access we can get
all the useful information about its execution plan and statistics.
Autotrace Utility installation
1. cd [ORACLE_HOME]/rdbms/admin
2. Log into SQL*Plus as SYSTEM
3. Run @utlxplan
4. Run CREATE PUBLIC SYNONYM PLAN_TABLE FOR PLAN_TABLE;
5. Run GRANT ALL ON PLAN_TABLE TO PUBLIC;
6. Log in to SQL*Plus as SYS or as SYSDBA
7. Run @plustrce
8. Run GRANT PLUSTRACE TO PUBLIC;
Autotrace options
SET AUTOTRACE OFF: No AUTOTRACE report is generated. This is the default.
SET AUTOTRACE ON EXPLAIN: The AUTOTRACE report shows only the optimizer execution
path.
SET AUTOTRACE ON STATISTICS: The AUTOTRACE report shows only the SQL statement
execution statistics.
SET AUTOTRACE ON: The AUTOTRACE report includes both the optimizer execution path and
the SQL statement execution statistics.
SET AUTOTRACE TRACEONLY: This is like SET AUTOTRACE ON, but it suppresses the
printing of the user's query output, if any. This is very useful for queries returning many rows, so
we do not need to scroll through the output.
SET AUTOTRACE TRACEONLY STATISTICS: This is like SET AUTOTRACE TRACEONLY, but
it shows the statistics only and suppresses the explain plan output.
SET AUTOTRACE TRACEONLY EXPLAIN: This is like SET AUTOTRACE TRACEONLY, but it
shows the explain plan only and suppresses the statistics. For a SELECT this does not execute the
statement; it just parses it and shows the plan. INSERT/UPDATE statements are executed and
then the explain plan is shown.
Understanding Autotrace Output
Autotrace shows two things:
a) Explain plan: the plan for the query, showing exactly how the query will be executed in the
database. It shows the cost, cardinality and bytes for each step of the plan.
b) Statistics: many statistics are shown. Some of them are:
i) Recursive calls: number of SQL statements executed in order to execute your statement
ii) DB block gets: the number of blocks read from the buffer cache in current mode
iii) Consistent gets: the number of blocks read from the buffer cache in consistent read mode
iv) Redo size: redo generated by the SQL
v) Physical reads: number of disk reads

Oracle Performance: sql_trace parameter and its impact
An Oracle performance case study on sql_trace being set improperly:
My apps system was generating a lot of trace files and the mount point was filling up, resulting in
I/O issues. We cleaned up the trace files, but they came back again. Upon investigation we found
that somebody had mistakenly set the init.ora parameter sql_trace to true.
Because of this, every connection to the database was generating a trace file.
Once we set it to FALSE, far fewer trace files were created and the issue was resolved.
Setting sql_trace to true also resulted in a severe performance issue for the instance, so
appropriate care should be taken before setting this parameter. I would always recommend setting
it at the session level instead:
alter session set sql_trace=true;
If it still needs to be set at the system level, then the following should be taken care of to minimize
the performance impact:

-Maintaining at least 25% idle CPU capacity
-Maintaining adequate disk space for the USER_DUMP_DEST location
-Striping disk space over sufficient disks
About the sql_trace parameter

SQL_TRACE enables or disables the SQL trace facility. Setting this parameter to true provides
tuning information that you can use to improve performance. The value can also be changed for a
session using the DBMS_SYSTEM package.
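A hedged example of doing that for another session (the sid and serial# values are placeholders taken from V$SESSION; requires execute privilege on DBMS_SYSTEM):
EXEC sys.dbms_system.set_sql_trace_in_session(123, 4567, TRUE);
-- ... reproduce the problem, then switch tracing off again
EXEC sys.dbms_system.set_sql_trace_in_session(123, 4567, FALSE);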

How to turn on SQL trace (the 10046 event) in the Oracle database, and the trcsess and tkprof utilities
Tracing is an important part of Oracle database tuning. It helps capture all the problem
statements and wait events that are responsible for a slow-running session.
Oracle provides many quick ways to trace the local session or another user's session,
and to format the trace to make it readable.

Let us take a look at how to turn on SQL trace and the 10046 event in the Oracle database, and at
the trcsess and tkprof utilities.

SQL Trace, 10046 event


If you want to trace your own (local) session, here are the steps to turn on SQL trace.

Normal trace:
alter session set sql_trace = true;   -- to turn tracing on
alter session set sql_trace = false;  -- to turn tracing off
Full level with wait event and bind trace:
alter session set events '10046 trace name context forever, level 12';
To turn tracing off:
alter session set events '10046 trace name context off';
Same as normal trace:
exec DBMS_SESSION.set_sql_trace(sql_trace => TRUE);
exec DBMS_SESSION.set_sql_trace(sql_trace => FALSE);
If you want to trace another running session, here are the steps.
Normal trace:

execute dbms_system.set_sql_trace_in_session(sid, serial#, true);   -- to turn tracing on
execute dbms_system.set_sql_trace_in_session(sid, serial#, false);  -- to turn tracing off
Full level with wait event and bind trace:
execute dbms_system.set_ev(sid, serial#, 10046, 12, '');
To turn tracing off:
execute dbms_system.set_ev(sid, serial#, 10046, 0, '');
You must have noticed the 12 and 0 used in the statements above. These are tracing levels. Oracle
tracing has the following valid levels:

0 - No trace. Like switching sql_trace off.
2 - The equivalent of regular sql_trace. Provides the execution path and row counts, and produces
the smallest flat file.
4 - The same as 2, but with the addition of bind variable values.
8 - The same as 2, but with the addition of wait events (the database operation timings the SQL
waited on in order to complete, for example disk access timings).
12 - The same as 2, but with both bind variable values and wait events. Contains the most complete
information (execution path, row counts, timings for all events associated with the SQL, and the
values the SQL was run with) and produces the largest trace file.

There are other ways to do the tracing as well. Here are some of them.
1) ORADEBUG
This requires logging in as sysdba.

oradebug setospid 1111    -- debug the session with the specified OS process id
oradebug setorapid 1111   -- debug the session with the specified Oracle process id
oradebug event 10046 trace name context forever, level 4
oradebug event 10046 trace name context off   -- this disables the trace
oradebug close_trace      -- this closes the trace file
oradebug tracefile_name   -- this shows the name of the trace file
2) With Oracle 10g the SQL tracing options have been extended using the DBMS_MONITOR
package.
EXECUTE dbms_monitor.session_trace_enable;
which is similar to
ALTER SESSION SET EVENTS '10046 trace name context forever, level 2';
EXECUTE dbms_monitor.session_trace_enable(binds=>true);
which is similar to
ALTER SESSION SET EVENTS '10046 trace name context forever, level 4';
EXECUTE dbms_monitor.session_trace_enable(waits=>true);
which is similar to
ALTER SESSION SET EVENTS '10046 trace name context forever, level 8';
EXECUTE dbms_monitor.session_trace_enable(sid, serial#);
which is similar to
execute dbms_system.set_ev(sid, serial#, 10046, 2, '');
EXECUTE dbms_monitor.session_trace_enable(sid, serial#, binds=>true);
which is similar to
execute dbms_system.set_ev(sid, serial#, 10046, 4, '');

EXECUTE dbms_monitor.session_trace_enable(sid, serial#, waits=>true);
which is similar to
execute dbms_system.set_ev(sid, serial#, 10046, 8, '');
There is much other functionality available in dbms_monitor; we can also enable trace based on
the client identifier and in several other ways, as shown below.
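Hedged sketches of those other options (the client identifier, service name and module name below are illustrative values, not from this system):
EXEC DBMS_MONITOR.client_id_trace_enable(client_id => 'BATCH_USER', waits => TRUE, binds => FALSE);
EXEC DBMS_MONITOR.client_id_trace_disable(client_id => 'BATCH_USER');

EXEC DBMS_MONITOR.serv_mod_act_trace_enable(service_name => 'TEST', module_name => 'MY_MODULE', waits => TRUE, binds => TRUE);
EXEC DBMS_MONITOR.serv_mod_act_trace_disable(service_name => 'TEST', module_name => 'MY_MODULE');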
How to identify the trace files
We can identify the trace files using the spid of the session. The trace file will also contain the
sid,serial# pair near the start of the file.
The query below can be used to find the current session's trace file:
select c.value || '/' || d.instance_name || '_ora_' || a.spid || '.trc' trace
from v$process a, v$session b, v$parameter c, v$instance d
where a.addr = b.paddr
and b.audsid = userenv('sessionid')
and c.name = 'user_dump_dest'
/
There is another easy way to identify the trace file, called the trace file identifier:
alter session set tracefile_identifier='ORAC';   -- 'ORAC' is the identifier
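From 11g onwards the current session's trace file can also be looked up directly, for example:
SELECT value FROM v$diag_info WHERE name = 'Default Trace File';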
Example trace file
Trace file /app/oracle/TEST/diag/rdbms/test/TEST/trace/TEST_ora_13025_IDEN.trc
Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 64bit Production
With the Partitioning, Real Application Clusters, OLAP, Data Miningand Real Application Testing
options
ORACLE_HOME = /app/oracle/TEST/11.2.0/db
System name:    SunOS12
Node name:      sun12
Release:        5.10
Version:        Generic_144488-11
Machine:        sun4u
Instance name: TEST
Redo thread mounted by this instance: 1
Oracle process number: 202
Unix process pid: 13025, image: oracle@SunOS12


*** 2013-10-21 12:28:56.461
*** SESSION ID:(3875.17341) 2013-10-21 12:28:56.461
*** CLIENT ID:() 2013-10-21 12:28:56.461
*** SERVICE NAME:(TEST) 2013-10-21 12:28:56.461
*** MODULE NAME:(STANDARD@sunOS12(TNS V1-V3)) 2013-10-21 12:28:56.461
*** ACTION NAME:() 2013-10-21 12:28:56.461
CLOSE #4:c=0,e=11,dep=1,type=1,tim=8866891135825
=====================
PARSING IN CURSOR #5 len=356 dep=1 uid=173 oct=3 lid=173 tim=8866891136821 hv=2468783182 ad=4c70e4398 sqlid=0wmwsjy9kd92f
SELECT PROFILE_OPTION_ID, APPLICATION_ID, SITE_ENABLED_FLAG , APP_ENABLED_FLAG , RESP_ENABLED_FLAG ,
USER_ENABLED_FLAG, ORG_ENABLED_FLAG , SERVER_ENABLED_FLAG, SERVERRESP_ENABLED_FLAG, HIERARCHY_TYPE,
USER_CHANGEABLE_FLAG FROM FND_PROFILE_OPTIONS WHERE PROFILE_OPTION_NAME = :B1 AND
START_DATE_ACTIVE <= SYSDATE AND NVL(END_DATE_ACTIVE, SYSDATE) >= SYSDATE
END OF STMT
BINDS #5:
Bind#0
oacdty=01 mxl=128(80) mxlc=00 mal=00 scl=00 pre=00
oacflg=03 fl2=1206001 frm=01 csi=871 siz=128 off=0
kxsbbbfp=ffffffff7d677b48 bln=128 avl=21 flg=0
value=PER_BUSINESS_GROUP_ID
EXEC #5:c=0,e=1474,p=0,cr=0,cu=0,mis=0,r=0,dep=1,og=1,plh=1374985481,tim=8866891138224
FETCH #5:c=0,e=61,p=0,cr=3,cu=0,mis=0,r=1,dep=1,og=1,plh=1374985481,tim=8866891138429

CLOSE #5:c=0,e=5,dep=1,type=3,tim=8866891138503
=====================
PARSING IN CURSOR #4 len=230 dep=1 uid=173 oct=3 lid=173 tim=8866891138634 hv=3575592451 ad=3aeea3da0 sqlid=55dc767ajydh3
SELECT PROFILE_OPTION_VALUE FROM FND_PROFILE_OPTION_VALUES WHERE
PROFILE_OPTION_ID = :B4 AND APPLICATION_ID = :B3 AND LEVEL_ID = 10003 AND
LEVEL_VALUE = :B2 AND LEVEL_VALUE_APPLICATION_ID = :B1 AND
PROFILE_OPTION_VALUE IS NOT NULL
END OF STMT
BINDS #4:
Bind#0
oacdty=02 mxl=22(21) mxlc=00 mal=00 scl=00 pre=00
oacflg=03 fl2=1206001 frm=00 csi=00 siz=96 off=0
kxsbbbfp=ffffffff7d677b68 bln=22 avl=03 flg=05
value=1204
Bind#1
oacdty=02 mxl=22(21) mxlc=00 mal=00 scl=00 pre=00
oacflg=03 fl2=1206001 frm=00 csi=00 siz=0 off=24
kxsbbbfp=ffffffff7d677b80 bln=22 avl=02 flg=01
value=800
Bind#2
oacdty=02 mxl=22(21) mxlc=00 mal=00 scl=00 pre=00
oacflg=03 fl2=1206001 frm=00 csi=00 siz=0 off=48
kxsbbbfp=ffffffff7d677b98 bln=22 avl=04 flg=01
value=50334
Bind#3

oacdty=02 mxl=22(21) mxlc=00 mal=00 scl=00 pre=00


oacflg=03 fl2=1206001 frm=00 csi=00 siz=0 off=72
kxsbbbfp=ffffffff7d677bb0 bln=22 avl=01 flg=01
value=0
EXEC #4:c=0,e=377,p=0,cr=0,cu=0,mis=0,r=0,dep=1,og=1,plh=2802907561,tim=8866891138978
FETCH #4:c=0,e=26,p=0,cr=3,cu=0,mis=0,r=0,dep=1,og=1,plh=2802907561,tim=8866891139050
CLOSE #4:c=0,e=2,dep=1,type=3,tim=8866891139116
=====================
PARSING IN CURSOR #5 len=191 dep=1 uid=173 oct=3 lid=173 tim=8866891139308
hv=303338305 ad=3bedf0e48 sqlid=7qs7fx89194u1
SELECT PROFILE_OPTION_VALUE FROM FND_PROFILE_OPTION_VALUES WHERE
PROFILE_OPTION_ID = :B4 AND APPLICATION_ID = :B3 AND LEVEL_ID = :B2 AND
LEVEL_VALUE = :B1 AND PROFILE_OPTION_VALUE IS NOT NULL
END OF STMT
BINDS #5:
Bind#0
oacdty=02 mxl=22(21) mxlc=00 mal=00 scl=00 pre=00
oacflg=03 fl2=1206001 frm=00 csi=00 siz=96 off=0
kxsbbbfp=ffffffff7d673b78 bln=22 avl=03 flg=05
value=1204
Bind#1
oacdty=02 mxl=22(21) mxlc=00 mal=00 scl=00 pre=00
oacflg=03 fl2=1206001 frm=00 csi=00 siz=0 off=24
kxsbbbfp=ffffffff7d673b90 bln=22 avl=02 flg=01
value=800

Bind#2
oacdty=02 mxl=22(21) mxlc=00 mal=00 scl=00 pre=00
oacflg=03 fl2=1206001 frm=00 csi=00 siz=0 off=48
kxsbbbfp=ffffffff7d673ba8 bln=22 avl=04 flg=01
value=10001
Bind#3
oacdty=02 mxl=22(21) mxlc=00 mal=00 scl=00 pre=00
oacflg=03 fl2=1206001 frm=00 csi=00 siz=0 off=72
kxsbbbfp=ffffffff7d673bc0 bln=22 avl=01 flg=01
value=0
EXEC #5:c=0,e=377,p=0,cr=0,cu=0,mis=0,r=0,dep=1,og=1,plh=2802907561,tim=8866891139624
FETCH #5:c=0,e=26,p=0,cr=4,cu=0,mis=0,r=1,dep=1,og=1,plh=2802907561,tim=8866891139692
CLOSE #5:c=0,e=7,dep=1,type=3,tim=8866891139739
=====================
PARSING IN CURSOR #4 len=356 dep=1 uid=173 oct=3 lid=173 tim=8866891139952 hv=2468783182 ad=4c70e4398 sqlid=0wmwsjy9kd92f
SELECT PROFILE_OPTION_ID, APPLICATION_ID, SITE_ENABLED_FLAG ,
APP_ENABLED_FLAG , RESP_ENABLED_FLAG , USER_ENABLED_FLAG,
ORG_ENABLED_FLAG , SERVER_ENABLED_FLAG, SERVERRESP_ENABLED_FLAG,
HIERARCHY_TYPE, USER_CHANGEABLE_FLAG FROM FND_PROFILE_OPTIONS WHERE
PROFILE_OPTION_NAME = :B1 AND START_DATE_ACTIVE <= SYSDATE AND
NVL(END_DATE_ACTIVE, SYSDATE) >= SYSDATE
END OF STMT
BINDS #4:
Bind#0
oacdty=01 mxl=128(80) mxlc=00 mal=00 scl=00 pre=00

oacflg=03 fl2=1206001 frm=01 csi=871 siz=128 off=0


kxsbbbfp=ffffffff7d673b58 bln=128 avl=23 flg=05
value=PER_SECURITY_PROFILE_ID
EXEC #4:c=0,e=164,p=0,cr=0,cu=0,mis=0,r=0,dep=1,og=1,plh=1374985481,tim=8866891140097
FETCH #4:c=0,e=28,p=0,cr=3,cu=0,mis=0,r=1,dep=1,og=1,plh=1374985481,tim=8866891140171
CLOSE #4:c=0,e=2,dep=1,type=3,tim=8866891140218
=====================
PARSING IN CURSOR #5 len=230 dep=1 uid=173 oct=3 lid=173 tim=8866891140290
hv=3575592451 ad=3aeea3da0 sqlid=55dc767ajydh3
SELECT PROFILE_OPTION_VALUE FROM FND_PROFILE_OPTION_VALUES WHERE
PROFILE_OPTION_ID = :B4 AND APPLICATION_ID = :B3 AND LEVEL_ID = 10003 AND
LEVEL_VALUE = :B2 AND LEVEL_VALUE_APPLICATION_ID = :B1 AND
PROFILE_OPTION_VALUE IS NOT NULL
END OF STMT
BINDS #5:
Bind#0
oacdty=02 mxl=22(21) mxlc=00 mal=00 scl=00 pre=00
oacflg=03 fl2=1206001 frm=00 csi=00 siz=96 off=0
kxsbbbfp=ffffffff7d673b78 bln=22 avl=03 flg=05
value=2199
Bind#1
oacdty=02 mxl=22(21) mxlc=00 mal=00 scl=00 pre=00
oacflg=03 fl2=1206001 frm=00 csi=00 siz=0 off=24
kxsbbbfp=ffffffff7d673b90 bln=22 avl=02 flg=01
value=800

Bind#2
oacdty=02 mxl=22(21) mxlc=00 mal=00 scl=00 pre=00
oacflg=03 fl2=1206001 frm=00 csi=00 siz=0 off=48
kxsbbbfp=ffffffff7d673ba8 bln=22 avl=04 flg=01
value=50334
Bind#3
oacdty=02 mxl=22(21) mxlc=00 mal=00 scl=00 pre=00
oacflg=03 fl2=1206001 frm=00 csi=00 siz=0 off=72
kxsbbbfp=ffffffff7d673bc0 bln=22 avl=01 flg=01
value=0
EXEC #5:c=0,e=325,p=0,cr=0,cu=0,mis=0,r=0,dep=1,og=1,plh=2802907561,tim=8866891140599
FETCH #5:c=0,e=19,p=0,cr=3,cu=0,mis=0,r=0,dep=1,og=1,plh=2802907561,tim=8866891140659
CLOSE #5:c=0,e=1,dep=1,type=3,tim=8866891140710
=====================
PARSING IN CURSOR #4 len=191 dep=1 uid=173 oct=3 lid=173 tim=8866891140843 hv=303338305 ad=3bedf0e48 sqlid=7qs7fx89194u1
SELECT PROFILE_OPTION_VALUE
FROM FND_PROFILE_OPTION_VALUES WHERE PROFILE_OPTION_ID = :B4 AND
APPLICATION_ID = :B3 AND LEVEL_ID = :B2 AND LEVEL_VALUE = :B1 AND
PROFILE_OPTION_VALUE IS NOT NULL
END OF STMT
BINDS #4:
Bind#0
oacdty=02 mxl=22(21) mxlc=00 mal=00 scl=00 pre=00
oacflg=03 fl2=1206001 frm=00 csi=00 siz=96 off=0

kxsbbbfp=ffffffff7d673b78 bln=22 avl=03 flg=05


value=2199
Bind#1
oacdty=02 mxl=22(21) mxlc=00 mal=00 scl=00 pre=00
oacflg=03 fl2=1206001 frm=00 csi=00 siz=0 off=24
kxsbbbfp=ffffffff7d673b90 bln=22 avl=02 flg=01
value=800
Bind#2
oacdty=02 mxl=22(21) mxlc=00 mal=00 scl=00 pre=00
oacflg=03 fl2=1206001 frm=00 csi=00 siz=0 off=48
kxsbbbfp=ffffffff7d673ba8 bln=22 avl=04 flg=01
value=10001
Bind#3
oacdty=02 mxl=22(21) mxlc=00 mal=00 scl=00 pre=00
oacflg=03 fl2=1206001 frm=00 csi=00 siz=0 off=72
kxsbbbfp=ffffffff7d673bc0 bln=22 avl=01 flg=01
value=0
EXEC #4:c=0,e=324,p=0,cr=0,cu=0,mis=0,r=0,dep=1,og=1,plh=2802907561,tim=8866891141152
FETCH #4:c=0,e=17,p=0,cr=4,cu=0,mis=0,r=1,dep=1,og=1,plh=2802907561,tim=8866891141208
CLOSE #4:c=0,e=7,dep=1,type=3,tim=8866891141256

tkprof utility
The trace files obtained from the methods above are in raw form and can be converted into a more
readable format using the tkprof utility (Transient Kernel PROFile utility).

tkprof
Usage: tkprof tracefile outputfile [explain= ] [table= ]
              [print= ] [insert= ] [sys= ] [sort= ]
  table=schema.tablename   Use 'schema.tablename' with 'explain=' option.
  explain=user/password    Connect to ORACLE and issue EXPLAIN PLAN.
  print=integer            List only the first 'integer' SQL statements.
  aggregate=yes|no
  insert=filename          List SQL statements and data inside INSERT statements.
  sys=no                   TKPROF does not list SQL statements run as user SYS.
  record=filename          Record non-recursive statements found in the trace file.
  waits=yes|no             Record summary for any wait events found in the trace file.
  sort=option              Set of zero or more of the following sort options:
    prscnt  number of times parse was called
    prscpu  cpu time parsing
    prsela  elapsed time parsing
    prsdsk  number of disk reads during parse
    prsqry  number of buffers for consistent read during parse
    prscu   number of buffers for current read during parse
    prsmis  number of misses in library cache during parse
    execnt  number of times execute was called
    execpu  cpu time spent executing
    exeela  elapsed time executing
    exedsk  number of disk reads during execute
    exeqry  number of buffers for consistent read during execute
    execu   number of buffers for current read during execute
    exerow  number of rows processed during execute
    exemis  number of library cache misses during execute
    fchcnt  number of times fetch was called
    fchcpu  cpu time spent fetching
    fchela  elapsed time fetching
    fchdsk  number of disk reads during fetch
    fchqry  number of buffers for consistent read during fetch
    fchcu   number of buffers for current read during fetch
    fchrow  number of rows fetched
    userid  userid of user that parsed the cursor
Some examples
tkprof file.trc file.txt sys=no explain=userid/password sort=prsela,exeela,fchela
tkprof file.trc file.txt sys=no
Example content of tkprof file
TKPROF: Release 11.2.0.4.0 Production on Tue Jan 17 14:12:41 2013
Copyright (c) 1982, 2007, Oracle. All rights reserved.
Trace file: TEST_ora_15941.trc
Sort options: execpu fchcpu
********************************************************************************
count    = number of times OCI procedure was executed
cpu      = cpu time in seconds executing
elapsed  = elapsed time in seconds executing
disk     = number of physical reads of buffers from disk
query    = number of buffers gotten for consistent read
current  = number of buffers gotten in current mode (usually for update)
rows     = number of rows processed by the fetch or execute call
********************************************************************************
SQL ID: 6w821sggrtysx
Plan Hash: 2325776775
SELECT FUNCTION_NAME
FROM FND_USER_DESKTOP_OBJECTS
WHERE USER_ID = :b1 AND APPLICATION_ID = :b2 AND RESPONSIBILITY_ID = :b3 AND TYPE
= 'FUNCTION' AND ROWNUM <= 10 ORDER BY SEQUENCE
call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse               0.00       0.00
Execute             0.00       0.00
Fetch               0.00       0.00
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total               0.00       0.00
Misses in library cache during parse: 0
Optimizer mode: ALL_ROWS
Parsing user id: 173 (APPS)
Number of plan statistics captured: 1

Rows (1st) Rows (avg) Rows (max)  Row Source Operation
---------- ---------- ----------  ---------------------------------------------------
         1          1          1  SORT ORDER BY (cr=4 pr=0 pw=0 time=0 us cost=6 size=41 card=1)
         1          1          1   COUNT STOPKEY (cr=4 pr=0 pw=0 time=0 us)
         1          1          1    TABLE ACCESS BY INDEX ROWID FND_USER_DESKTOP_OBJECTS (cr=4 pr=0 pw=0 time=0 us cost=5 size=41 card=1)
         1          1          1     INDEX RANGE SCAN FND_USER_DESKTOP_OBJECTS_N1 (cr=3 pr=0 pw=0 time=0 us cost=3 size=0 card=3)(object id 33596)

Rows     Execution Plan
-------  ---------------------------------------------------
      0  SELECT STATEMENT   MODE: ALL_ROWS
      1   SORT (ORDER BY)
      1    COUNT (STOPKEY)
      1     TABLE ACCESS   MODE: ANALYZED (BY INDEX ROWID) OF FND_USER_DESKTOP_OBJECTS (TABLE)
      1      INDEX   MODE: ANALYZED (RANGE SCAN) OF FND_USER_DESKTOP_OBJECTS_N1 (INDEX)
Elapsed times include waiting on following events:
  Event waited on                          Times Waited  Max. Wait  Total Waited
  ---------------------------------------  ------------  ---------  ------------
  SQL*Net message to client                                   0.00          0.00
  SQL*Net message from client                         5       0.00          0.00
********************************************************************************
SQL ID: 276ut2y7ywqux
Plan Hash: 3856112528
select object_name, icon_name
from
fnd_desktop_objects
call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch        3      0.00       0.00          0          6          0          47
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        5      0.00       0.00          0          6          0          47

Misses in library cache during parse: 0
Optimizer mode: ALL_ROWS
Parsing user id: 173 (APPS)
Number of plan statistics captured: 1

Rows (1st) Rows (avg) Rows (max)  Row Source Operation
---------- ---------- ----------  ---------------------------------------------------
        47         47         47  TABLE ACCESS FULL FND_DESKTOP_OBJECTS (cr=6 pr=0 pw=0 time=0 us cost=2 size=1175 card=47)


Rows     Execution Plan
-------  ---------------------------------------------------
      0  SELECT STATEMENT   MODE: ALL_ROWS
     47   TABLE ACCESS   MODE: ANALYZED (FULL) OF FND_DESKTOP_OBJECTS (TABLE)
Elapsed times include waiting on following events:
  Event waited on                          Times Waited  Max. Wait  Total Waited
  ---------------------------------------  ------------  ---------  ------------
  SQL*Net message to client                                   0.00          0.00
  SQL*Net message from client                         4       0.00          0.00
********************************************************************************

trcsess utility
When using shared server sessions, many processes are involved. The trace pertaining to the
user session is scattered across different trace files belonging to different processes. This
makes it difficult to get a complete picture of the life cycle of a session.
The trcsess utility consolidates trace output from selected trace files based on several criteria
trcsess [output=output_file_name]

[session=session_id]
[clientid=client_id]
[service=service_name]
[action=action_name]
[module=module_name]
[trace_files]
trcsess output=main.trc service=TEST *.trc

After the consolidated trace file has been generated, you can run tkprof on it.
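For example, using the main.trc file produced above (the output file name here is illustrative):
tkprof main.trc main.prf sys=no sort=prsela,exeela,fchela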

How to run the sql tuning expert for a particular sql_id in the cursor cache

Steps for running SQL tuning on a particular sql_id in the cursor cache:
Create Tuning Task
DECLARE
  my_task_name VARCHAR2(30);
BEGIN
  my_task_name := DBMS_SQLTUNE.CREATE_TUNING_TASK(
    sql_id      => '<sql_id>',
    scope       => 'COMPREHENSIVE',
    time_limit  => 3600,
    task_name   => 'example -1',
    description => '<description>');
end;
/
Execute Tuning Task:
BEGIN
  DBMS_SQLTUNE.EXECUTE_TUNING_TASK( task_name => 'example -1');
end;
/
Get the summary of recommendations
SELECT DBMS_SQLTUNE.REPORT_TUNING_TASK('example -1') from DUAL;

How to accept the profile recommended by the sql tuning expert

DECLARE
  my_sqlprofile_name VARCHAR2(30);
BEGIN
  my_sqlprofile_name := DBMS_SQLTUNE.ACCEPT_SQL_PROFILE (
    task_name => 'example -1',
    name      => 'my_sql_profile');
END;
/
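To check whether the tuning task has finished before pulling the report, a query such as the following can be used (task name as used above):
SELECT task_name, status, execution_start, execution_end
FROM   dba_advisor_log
WHERE  task_name = 'example -1';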

Performance Terms explained

What is Logical I/O and Physical I/O in Oracle Database
Many times we get confused between Logical I/O and Physical I/O in the Oracle database.
I am trying to clear up the doubts about what Logical I/O and Physical I/O are in the Oracle database.
What is Logical I/O
It is a data block accessed from the Buffer Cache.
Its statistics are captured in db block gets, consistent gets, buffer is pinned count and session
logical reads.
An LIO may have incurred a PIO in order to get the block into the cache in the first place.
Db block gets are blocks read in current mode, i.e. the state of the block as it exists right now.
Consistent gets are blocks read from the buffer cache in a consistent (point-in-time) state.
These statistics are clearly visible as db block gets and consistent gets when Autotrace is executed.
We can find this for a statement from cr+cu in the raw trace data.
These statistics are clearly visible as query and current in the tkprof output.
What is Physical I/O

It is a data block requested from physical disk or the operating system.
It could be served from the operating system cache.
Data blocks must be placed into memory in order to be accessed to
satisfy the acquisition of rows for a query's result set.
Its statistics are captured in the physical reads statistic.
This statistic is clearly visible as physical reads when Autotrace is executed.
We can find this for a statement from pr in the raw trace data.
This statistic is clearly visible as disk in the tkprof output.

V$SYSSTAT

Use the following query (or similar) to get the information from this dictionary view:
select name, value from v$sysstat
where name in ('consistent gets','db block gets','physical reads');
Which is better, logical I/O or physical I/O, in terms of performance?
A physical I/O is not good for query performance. Whenever a physical I/O takes place,
Oracle tries to read the data block from disk, which is slow. The goal hence is to avoid
physical I/O as far as possible.
A logical I/O is considered better for performance (compared to physical I/O) because the
read happens from memory, where the data block has already been fetched from disk. So Oracle
does not need to go to disk to fetch the blocks for your query results. But it is important to note that
an excess of logical reads (I/O) is also not good or recommended. There are several reasons for
that:
1) a logical read might have resulted in a physical read to fetch the data block into the buffer
cache.
2) every time a block is read from the cache, a lock or latch is acquired on the cache, and hence
high logical reads mean higher chances of buffer cache contention.
So our goal should be to perform the least number of logical I/Os for a query in order to improve its
performance.

How to find session generating lots of redo


To find sessions generating lots of redo, you can use either of the following
methods. Both methods examine the amount of undo generated. When a transaction
generates undo, it will automatically generate redo as well.
1) Query V$SESS_IO. This view contains the column BLOCK_CHANGES, which indicates
how many blocks have been changed by the session. High values indicate a
session generating lots of redo.
The query you can use is:
SQL> SELECT s.sid, s.serial#, s.username, s.program,
2 i.block_changes
3 FROM v$session s, v$sess_io i
4 WHERE s.sid = i.sid
5 ORDER BY 5 desc, 1, 2, 3, 4;
Run the query multiple times and examine the delta between each occurrence
of BLOCK_CHANGES. Large deltas indicate high redo generation by the session.
This will need to be executed on all nodes of the database if it is RAC.
2) Query V$TRANSACTION. This view contains information about the amount of
undo blocks and undo records accessed by the transaction (as found in the
USED_UBLK and USED_UREC columns).

The query you can use is:


SQL> SELECT s.sid, s.serial#, s.username, s.program,
2 t.used_ublk, t.used_urec
3 FROM v$session s, v$transaction t
4 WHERE s.taddr = t.addr
5 ORDER BY 5 desc, 6 desc, 1, 2, 3, 4;
Run the query multiple times and examine the delta between each occurrence
of USED_UBLK and USED_UREC. Large deltas indicate high redo generation by
the session.
You use the first query when you need to check for programs generating lots of
redo when these programs activate more than one transaction. The latter query
can be used to find out which particular transactions are generating redo.

How to calculate the IOPS of an Oracle database
What is IOPS
IOPS (Input/Output Operations Per Second, pronounced eye-ops) is a common performance
measurement used to benchmark computer storage devices like hard disk drives (HDD), solid
state drives (SSD), and storage area networks (SAN)
How to calculate IOPS of an Oracle database
In an Oracle database we measure how many such requests are generated by the database
system. We have to check whether our hardware is capable of processing the requests
generated by the database in order to maintain performance. IOPS is calculated in an Oracle
database as the sum of Physical Read Total IO Requests Per Sec and Physical Write
Total IO Requests Per Sec. These values are available in the view
dba_hist_sysmetric_summary.
Here is a query on How to calculate IOPS of an oracle database
break on report
compute sum of Value on report
select metric_name, avg(average) as Value
from dba_hist_sysmetric_summary
where metric_name in ('Physical Read Total IO Requests Per Sec',
                      'Physical Write Total IO Requests Per Sec')
group by metric_name;
Some important terms related to IOPS in the metrics:
Physical read IO requests - Number of read requests for application activity (mainly buffer
cache and direct load operations) which read one or more database blocks per request. This is a
subset of the physical read total IO requests statistic.
Physical read total IO requests - Number of read requests which read one or more database
blocks for all instance activity including application, backup and recovery, and other utilities.
Physical write IO requests - Number of write requests for application activity (mainly buffer
cache and direct load operations) which wrote one or more database blocks per request.
Physical write total IO requests - Number of write requests which wrote one or more database
blocks for all instance activity including application activity, backup and recovery, and other
utilities.
Physical read total multi block requests - Number of large read requests which read multiple
database blocks for all instance activity.
Physical write total multi block requests - Number of large write requests which write multiple
database blocks for all instance activity.
Physical read total bytes - Total bytes read, for reads of one or more database blocks, for all
instance activity including application, backup and recovery, and other utilities.
Physical write total bytes - Total bytes written, for writes of one or more database blocks, for all
instance activity including application, backup and recovery, and other utilities.
To calculate small reads and writes:
Small Reads = Total Reads - Large Reads
Small Writes = Total Writes - Large Writes
These metrics, taken at two points in time, can also be used to calculate IOPS for small reads,
small writes, large reads, large writes, total bytes per second and so on.

What are DB time, Average Active Sessions and Active Sessions in Oracle
In this section we will define the terms Database Time, Active Session and Average Active
Sessions.
Database Time, or DB Time, is defined by Oracle as the total time spent by foreground sessions
executing database calls. This includes CPU time, IO time, and non-idle wait time. In other
words, it is the total time spent either actively working or actively waiting in a database call. An
Active Session in Oracle is defined as a session currently spending time in a database
call, and the average activity of a session is the ratio of active time to total wall clock time.
Average Active Sessions
At a macroscopic level, DB Time is the sum of DB time over all sessions. Average Active
Sessions, or AAS, represents the sum of active sessions over all sessions, at any given point in
time.
AAS = (DB Time / Elapsed Time)
The Average Active Sessions metric is important because it is the best single representation of your
database system load.
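Since ASH (covered later in this section) samples each active session roughly once per second, a rough approximation of AAS over the last hour can be computed as follows (a hedged sketch; requires the Diagnostics Pack licence):
SELECT ROUND(COUNT(*) / (60 * 60), 2) AS avg_active_sessions
FROM   v$active_session_history
WHERE  sample_time > SYSDATE - 1/24;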

Performance Lesson: What is a Bind Variable
What is a Bind Variable?
-It is a placeholder for a value.
-In PL/SQL it is simply a local or global variable:
DECLARE
  l_empl_id    DEPT.emp_id%TYPE;
  l_first_name DEPT.first_name%TYPE;
BEGIN
  SELECT first_name INTO l_first_name FROM DEPT WHERE id = l_empl_id;
END;
/
-In SQL*Plus it is a variable:
variable a number
BEGIN
  :a := 1000;
  dbms_output.put_line(:a);
END;
/
SQL without binds:
SELECT id FROM users WHERE user_id=875875;
SELECT id FROM users WHERE user_id=86986;
SQL with binds:
SELECT id FROM users WHERE user_id = :a_user_id;
When SQL statements with binds are used, they can be shared between multiple user sessions
and a soft parse is all that is needed to execute the statement. Hard parsing each statement is
quite painful and severely impacts the performance of the Oracle database.

How to create an ADDM task and check its report
In this post, I will explain how to create an ADDM task and check its report.
We are using 900 as the start snapshot and 950 as the end snapshot.
BEGIN
  -- Create an ADDM task.
  DBMS_ADVISOR.create_task (
    advisor_name => 'ADDM',
    task_name    => '900_950_AWR_SNAPSHOT',
    task_desc    => 'Advisor for snapshots 900 to 950.');

  -- Set the start and end snapshots.
  DBMS_ADVISOR.set_task_parameter (
    task_name => '900_950_AWR_SNAPSHOT',
    parameter => 'START_SNAPSHOT',
    value     => 900);

  DBMS_ADVISOR.set_task_parameter (
    task_name => '900_950_AWR_SNAPSHOT',
    parameter => 'END_SNAPSHOT',
    value     => 950);

  -- Execute the task.
  DBMS_ADVISOR.execute_task(task_name => '900_950_AWR_SNAPSHOT');
END;
/

-- Display the report.
SET LONG 100000
SET PAGESIZE 50000
SELECT DBMS_ADVISOR.get_task_report('900_950_AWR_SNAPSHOT') AS report
FROM   dual;
SET PAGESIZE 24
Related Views
The following views can be used to display the ADDM output without using Enterprise Manager
or the GET_TASK_REPORT function:
DBA_ADVISOR_TASKS - Basic information about existing tasks.
DBA_ADVISOR_LOG - Status information about existing tasks.
DBA_ADVISOR_FINDINGS - Findings identified for an existing task.
DBA_ADVISOR_RECOMMENDATIONS - Recommendations for the problems identified by an
existing task.
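For example, the findings for the task created above can be listed directly (a hedged sketch using the task name from this example):
SELECT finding_id, type, impact, message
FROM   dba_advisor_findings
WHERE  task_name = '900_950_AWR_SNAPSHOT'
ORDER  BY impact DESC;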

Oracle ASH (Active Session History) Useful Queries
Active Session History (ASH) was introduced in Oracle 10g. It samples the activity of each
active database session every second. The data is held in a buffer in memory in the
database. The design goal is to keep about an hour of history (your mileage will vary). If a session is
not active it will not be sampled. The in-memory buffer is exposed via a view called
v$active_session_history.
When an AWR snapshot is taken, 1 row in 10 from the ASH buffer is copied down into the AWR
repository. It can also be flushed to disk between snapshots when the buffer reaches 66% full,
so there is no missed data. The data is stored in WRH$_ACTIVE_SESSION_HISTORY and is
exposed via dba_hist_active_sess_history. It is enabled by default, but before you rush off to
use it, be aware that it is a licensed feature.
You can disable ASH by setting the underscore parameter:

alter system set "_ash_enabled"=false;

I want to emphasize that if a session is not active it will not be sampled. You can actually set the
parameter "_ash_enable_all" = TRUE to force all sessions, including idle sessions, to be
sampled.

Some important points in this regard:


1) Oracle provides a package called DBMS_APPLICATION_INFO. This allows you to set
two attributes, MODULE and ACTION, for a session. Those values then
appear in v$session and can be very useful in identifying which database sessions relate
to which part of an application. These values are also captured by ASH.
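A hedged example of tagging a session this way (the module and action names are illustrative):
BEGIN
  DBMS_APPLICATION_INFO.set_module(module_name => 'PAYROLL',
                                   action_name => 'MONTH_END_RUN');
END;
/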
2) The most active sessions in the last hour can be found using active session history:

SELECT sql_id, COUNT(*), ROUND(COUNT(*)/SUM(COUNT(*)) OVER(), 2) PCTLOAD
FROM gv$active_session_history
WHERE sample_time > SYSDATE - 1/24
AND session_type = 'BACKGROUND'
GROUP BY sql_id
ORDER BY COUNT(*) DESC;

SELECT sql_id, COUNT(*), ROUND(COUNT(*)/SUM(COUNT(*)) OVER(), 2) PCTLOAD
FROM gv$active_session_history
WHERE sample_time > SYSDATE - 1/24
AND session_type = 'FOREGROUND'
GROUP BY sql_id
ORDER BY COUNT(*) DESC;
3) To find the wait events a given session has been waiting on:

SELECT sample_time, event, wait_time
FROM gv$active_session_history
WHERE session_id = &1
AND session_serial# = &2;

4) Most I/O intensive SQL in the last hour:

SELECT sql_id, COUNT(*)
FROM gv$active_session_history ash, gv$event_name evt
WHERE ash.sample_time > SYSDATE - 1/24
AND ash.session_state = 'WAITING'
AND ash.event_id = evt.event_id
AND evt.wait_class = 'User I/O'
GROUP BY sql_id
ORDER BY COUNT(*) DESC;

5) Locking (enq: TX) information in the last 40 minutes:

col event for a22
col block_type for a18
col objn for a18
col otype for a10
col fn for 99
col sid for 9999
col bsid for 9999
col lm for 99
col p3 for 99999
col blockn for 99999
select
  to_char(sample_time,'HH:MI') st,
  substr(event,0,20) event,
  a.session_id sid,
  mod(a.p1,16) lm,
  a.p2,
  a.p3,
  nvl(o.object_name, a.current_obj#) objn,
  substr(o.object_type,0,10) otype,
  current_file# fn,
  current_block# blockn,
  a.sql_id,
  blocking_session bsid
from v$active_session_history a,
     all_objects o
where event like 'enq: TX%'
  and o.object_id (+) = a.current_obj#
  and sample_time > sysdate - 40/(60*24)
order by sample_time
/

6) Top SQL by time spent on CPU, waits and I/O:

select
  a.sql_id,
  sum(decode(a.session_state,'ON CPU',1,0)) CPU,
  sum(decode(a.session_state,'WAITING',1,0)) -
  sum(decode(a.session_state,'WAITING', decode(en.wait_class,'User I/O',1,0),0)) WAIT,
  sum(decode(a.session_state,'WAITING', decode(en.wait_class,'User I/O',1,0),0)) IO,
  sum(decode(a.session_state,'ON CPU',1,1)) TOTAL
from v$active_session_history a, v$event_name en
where a.sql_id is not null and en.event# = a.event#
group by a.sql_id
order by sum(decode(a.session_state,'ON CPU',1,1)) desc;

How To Generate ASH Report

Oracle provides a standard script, just like AWR, to analyze the ASH data.
I will explain how to generate ASH (Active Session History) reports from SQL*Plus. The same
can also be done using Enterprise Manager.

The script location is the same as for the other standard scripts: $ORACLE_HOME/rdbms/admin/


SQL> @$ORACLE_HOME/rdbms/admin/ashrpt.sql
Current Instance
~~~~~~~~~~~~~~~~
DB Id DB Name Inst Num Instance

848748484 TECHDB 1 TECHDB
Specify the Report Type
~~~~~~~~~~~~~~~~~~~~~~~
Enter html for an HTML report, or text for plain text
Defaults to html
Enter value for report_type: html
..
..
Defaults to -15 mins
Enter value for begin_time: 01/11/15 00:00:00
Report begin time specified: 01/11/15 00:00:00
Enter value for duration: 10
Report duration specified: 10
Using 11-Jan-15 00:00:00 as report begin time
Using 11-Jan-15 00:10:00 as report end time
Enter value for report_name: Example
Summary of All User Input
Format : HTML
DB Id : 848748484
Inst num : 1
Begin time : 11-Jan-15 00:00:00
End time : 11-Jan-15 00:10:00
Slot width : Default
Report targets : 0
Report name : Example
End of Report
</body></html>
Report written to Example

All about AWR (Automatic Workload Repository)
What is AWR?
Automatic Workload Repository (AWR) is a collection of persistent system performance
statistics owned by the SYS user. It resides in the SYSAUX tablespace. By default snapshots are
generated once every 60 minutes and maintained for 7 days. AWR reports are used to investigate
performance and other issues.
The scripts are available in $ORACLE_HOME/rdbms/admin:
awrrpt.sql - Depending on the reasons for collecting the report, the default snapshot range can be
used or, for a more focused view, a short 10-15 minute snapshot range.
awrrpti.sql - Displays statistics for a range of snapshot Ids on a specified database and instance.
awrsqrpt.sql - Displays statistics of a particular SQL statement for a range of snapshot Ids. Run
this report to inspect or debug the performance of a particular SQL statement.
awrsqrpi.sql - Displays statistics of a particular SQL statement for a range of snapshot Ids on a
specified database and instance.
awrddrpt.sql - Compares detailed performance attributes and configuration settings between two
selected time periods.
awrddrpi.sql - Compares detailed performance attributes and configuration settings between two
selected time periods on a specific database and instance.

Useful Operations on AWR


a) How to modify the AWR snapshot settings:
BEGIN
  DBMS_WORKLOAD_REPOSITORY.modify_snapshot_settings(
    retention => 21600,   -- minutes (21600 = 15 days); current value retained if NULL
    interval  => 60);     -- minutes; current value retained if NULL
END;
/

b) Dropping AWR snapshots in a range:

BEGIN
  DBMS_WORKLOAD_REPOSITORY.drop_snapshot_range(
    low_snap_id  => 40,
    high_snap_id => 80);
END;
/

c) Creating a SNAPSHOT Manually:


BEGIN
DBMS_WORKLOAD_REPOSITORY.create_snapshot();
END;
/
Workload Repository Views
V$ACTIVE_SESSION_HISTORY - Displays the active session history (ASH) sampled every second.
DBA_HIST_ACTIVE_SESS_HISTORY - Displays the history contents of the active session history.
DBA_HIST_BASELINE - Displays baseline information.
DBA_HIST_DATABASE_INSTANCE - Displays database environment information.
DBA_HIST_SNAPSHOT - Displays snapshot information.
DBA_HIST_SQL_PLAN - Displays SQL execution plans.
DBA_HIST_WR_CONTROL - Displays AWR settings.
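For example, the available snapshots (and hence the valid snap_id ranges for the scripts above) can be listed with a query like:
SELECT snap_id, begin_interval_time, end_interval_time
FROM   dba_hist_snapshot
ORDER  BY snap_id;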

Oracle DBA: Hanganalyze and system state dumps
Many times an Oracle database hangs due to locking or latching issues and nobody can log in to
the system. It becomes critical, and usually we do a shutdown abort to resolve the issue
immediately. However, it is good to take system state dumps / hanganalyze dumps at 1 minute
intervals before the shutdown abort, so that we can analyze the locking or latching issue
afterwards, find a fix for the problem and avoid it in the future. Oracle Support also usually asks
for these when the database hangs for some reason.
Here are the steps to take a system state dump.
In RAC system this need to be executed on all the instance
Syntax for system state dump:
Logon to sqlplus as sysdba
SQL> oradebug setmypid
SQL> oradebug unlimit
SQL> oradebug dump systemstate 10
.. Wait at least 1 min
SQL> oradebug dump systemstate 10
.. Wait at least 1 min
SQL> oradebug dump systemstate 10
Similarly, hanganalyze is a good utility for this analysis.
Here are the steps.

Syntax for hanganalyze for a single instance


Logon to sqlplus as sysdba
oradebug setmypid;
oradebug unlimit;
oradebug hanganalyze 3;
Syntax for hanganalyze for RAC
Logon to sqlplus as sysdba
oradebug setmypid;
oradebug unlimit;
oradebug setinst all
oradebug -g def hanganalyze 3
Sometimes it is not possible to even log in with sqlplus / as sysdba at the time of the hang. In that
situation, from 11g onwards, we can use the command below to connect:
sqlplus -prelim / as sysdba

Oracle Wait Events That Everyone Should Know, Part I
This is Part I in the series Oracle Wait Events That Everyone Should Know.
Here are some of the common Oracle wait events that everyone should know.
Wait events
You can find which event a session is waiting on using the following query:
select event from v$session_wait where sid = &1;
I will try to explain a few common Oracle wait events, their causes and resolutions.
enqueue
The process is waiting on an enqueue (a lock you can see in v$lock). This commonly occurs
when one user is trying to update a row in a table that is currently being updated by another
user. The blockers can be found using the following query:
select * from dba_waiters;
library cache pin
The process wants to pin an object in memory in the library cache for examination, ensuring no
other processes can update the object at the same time. This happens when you are compiling
or parsing a PL/SQL object or a view. Avoid compiling PL/SQL object or view at high usage time
to avoid this wait event

library cache load lock


The process is waiting for the opportunity to load an object or a piece of an object into the library
cache. (Only one process can load an object or a piece of an object at a time.)
latch free
The process is waiting for a latch held by another process. (This wait event does not apply to
processes that are spinning while waiting for a latch; when a process is spinning, it is not
waiting.).Latches are lightweight serialization devices used to coordinate multiuser access to
shared data structures, objects, and files.
Latches are locks designed to be held for extremely short periods of time, for example the time
it takes to modify an in-memory data structure. They are used to protect certain memory
structures, such as the database block buffer cache or the library cache in the shared pool.
Latches are typically requested internally in a 'willing to wait' mode. This means that if the latch
is not available, the requesting session will sleep for a short period of time and retry the
operation later. Other latches may be requested in an 'immediate' mode, which is similar in
concept to a SELECT FOR UPDATE NOWAIT, meaning that the process will go do something
else, such as try to grab an equivalent sibling latch that may be free, rather than sit and wait for
this latch to become available. Since many requestors may be waiting for a latch at the same
time, you may see some processes waiting longer than others. Latches are assigned rather
randomly, based on the luck of the draw, if you will. Whichever session asks for a latch right
after it was released will get it. There is no queue of latch waiters, just a mob of waiters constantly
retrying.
buffer busy waits
The process wants to access a data block that is currently not in memory, but another process
has already issued an I/O request to read the block into memory. (The process is waiting for the
other process to finish bringing the block into memory.). The hot blocks can be found using view
v$bh

Oracle Wait Events That Everyone Should Know, Part II
This is Part II in the series Oracle Wait Events That Everyone Should Know.
control file sequential read
The process is waiting for blocks to be read from a control file. This generally happens when:

-making a backup of the control files
-sharing information (between instances) from the control file
-reading other blocks from the control files
-reading the header block

If this is a major wait event, it means the control file location needs to be changed to a faster disk
location.

control file parallel write


The process has issued multiple I/O requests in parallel to write blocks to all control files, and is
waiting for all of the writes to complete.
log buffer space
The process is waiting for space to become available in the log buffer (Space becomes
available only after LGWR has written the current contents of the log buffer to disk.) This
typically happens when applications generate redo faster than LGWR can write it to disk.
This can also happen, if the I/O to disk where redo logs are located is slow
There should be no log buffer space waits as such in the database. Consider making the log
buffer bigger if it is small, or consider moving the log files to faster disks such as striped disks.
Select event, total_waits, total_timeouts, time_waited, average_wait
from v$system_event
where event = 'log buffer space';
Select sid, event, seconds_in_wait, state
from v$session_wait
where event = 'log buffer space';
Select name, value
from v$sysstat
where name in ('redo log space requests');
The pct_buff_alloc_retries should be zero or less than 0.01 (< 1%). If it is greater, consider
making the log buffer bigger or moving the log files to faster disks such as striped disks.
Select v1.value as redo_buff_alloc_retries, v2.value as redo_entries,
       trunc(v1.value/v2.value,4) as pct_buff_alloc_retries
from v$sysstat v1, v$sysstat v2
where v1.name = 'redo buffer allocation retries'
and v2.name = 'redo entries';
log file sequential read
The process is waiting for blocks to be read from the online redo log into memory. This primarily
occurs at instance startup and when the ARCH process archives filled online redo logs.
log file parallel write
The process is waiting for blocks to be written to all online redo log members in one group.
LGWR is typically the only process to see this wait event. It will wait until all blocks have been
written to all members.
log file sync
The process is waiting for LGWR to finish flushing the log buffer to disk. This occurs when a
user commits a transaction. (A transaction is not considered committed until all of the redo to
recover the transaction has been successfully written to disk.)
A slow LGWR process can introduce log file sync waits, which make users experience wait time
during commit or rollback. The log file parallel write and log file sync wait events are
interrelated and must be dealt with simultaneously.

We must try to allocate the redo logs to high performance disk (for example solid state disk). We
should also try to reduce the load on LGWR by reducing commits in the applications.
A manual hot backup can also introduce stress in the system by generating a lot of redo,
so avoid that during peak time.
Sometimes LGWR is starving for CPU resource. If the server is very busy, then LGWR can
starve for CPU too. This will lead to slower response from LGWR, increasing log file sync
waits. After all, these system calls and I/O calls must use CPU. In this case, log file sync is a
secondary symptom and resolving root cause for high CPU usage will reduce log file sync
waits.
Due to memory starvation issues, LGWR can also be paged out. This can lead to slower
response from LGWR too.

Oracle Wait Events That Everyone Should Know, Part III
Let's get going with the third part.
db file scattered read
The process has issued an I/O request to read a series of contiguous blocks from a data file into
the buffer cache, and is waiting for the operation to complete. This typically happens during a
full table scan or full index scan.
We should check whether the query really should be using a full table scan. Make sure the
optimizer stats are up to date. Use partition pruning to reduce the number of blocks visited.
If a query that has been running fine for a while suddenly clocks a lot of time on the db file
scattered read event and there hasn't been a code change, you might want to check whether
one or more indexes has been dropped or become unusable.
db file sequential read
The process has issued an I/O request to read one block from a data file into the buffer cache,
and is waiting for the operation to complete. This typically happens during an index lookup or a
fetch from a table by ROWID when the required data block is not already in memory. Do not be
misled by the confusing name of this wait event!
We should check whether the right indexes are being used; a wrong index can make the query
perform badly. Make sure the optimizer stats are up to date.
db file parallel read
The process has issued multiple I/O requests in parallel to read blocks from data files into
memory, and is waiting for all requests to complete. The documentation says this wait event
occurs only during recovery, but in fact it also occurs during regular activity when a process
batches many single block I/O requests together and issues them in parallel. (In spite of the
name, you will not see this wait event during parallel query or parallel DML. In those cases wait
events with PX in their names occur instead.)
db file parallel write
The process, typically DBWR, has issued multiple I/O requests in parallel to write dirty blocks
from the buffer cache to disk, and is waiting for all requests to complete.
direct path read, direct path write
The process has issued asynchronous I/O requests that bypass the buffer cache, and is waiting
for them to complete. These wait events typically involve sort segments.
SQL statements with operations that require sorts, such as ORDER BY, GROUP BY, UNION,
DISTINCT, and ROLLUP, write sort runs to the temporary tablespace when the input size is
larger than the work area in the PGA.
Make sure the optimizer stats are up to date and the query is using the right driving table. Check
whether the composite index's columns can be rearranged to match the ORDER BY clause and
avoid the sort entirely.
Make sure an appropriate value of PGA_AGGREGATE_TARGET is set. If possible use UNION ALL
instead of UNION.
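To judge whether the PGA work areas are sized well, the optimal/one-pass/multi-pass counters can be checked (a minimal sketch using v$sysstat):
-- a growing multipass count usually means PGA_AGGREGATE_TARGET is too small
select name, value
from v$sysstat
where name like 'workarea executions%';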
Shared pool latch
The shared pool latch is used to protect critical operations when allocating and freeing memory
in the shared pool. Contention for the shared pool and library cache latches is mainly due to
intense hard parsing. A hard parse applies to new cursors and to cursors that have been aged out
and must be re-executed.
The cost of parsing a new SQL statement is expensive both in terms of CPU requirements and
the number of times the library cache and shared pool latches may need to be acquired and
released.
Eliminating literal SQL is also very useful in reducing contention on the shared pool latch.
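One way to spot literal-heavy SQL is to group the shared pool contents by force matching signature (a sketch; the force_matching_signature column in v$sql exists from 10g onwards):
-- many cursors sharing one signature usually means the same statement issued with different literals
select force_matching_signature, count(*) as similar_cursors
from v$sql
where force_matching_signature <> 0
group by force_matching_signature
having count(*) > 10
order by similar_cursors desc;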

How to find which sid is doing full table scan
Many times the performance of the database will be slow. We first need to find out whether a full
table scan of any big table is going on.
We can use the query below to find out which sids are doing full table scans.
column user_process heading 'Name|SID' format a20;
column long_scans heading 'Long Scans' format 999,999,999;
column short_scans heading 'Short Scans' format 999,999,999;
column rows_retrieved heading 'Rows Retrieved' format 999,999,999;
set linesize 1000
set timing on
select ss.username||'('||se.sid||')' USER_PROCESS,
sum(decode(name,'table scans (short tables)',value)) SHORT_SCANS,
sum(decode(name,'table scans (long tables)',value)) LONG_SCANS,
sum(decode(name,'table scan rows gotten',value)) ROWS_RETRIEVED
from v$session ss, v$sesstat se, v$statname sn
where se.statistic# = sn.statistic#
and (name like '%table scans (short tables)%'
or name like '%table scans (long tables)%'
or name like '%table scan rows gotten%')
and se.sid = ss.sid
and ss.username is not null
group by ss.username||'('||se.sid||')'
order by LONG_SCANS desc
/
Optimizer

New Optimizer Feature with 11g
1) Invisible indexes
With the 11g release, an index can be made invisible. It is still maintained all the time and can be
made visible at any time.
alter index K invisible;
The optimizer will not use the index while it is invisible.
alter index K visible;
If we want queries in a session to use invisible indexes, we can set this parameter in the session:
optimizer_use_invisible_indexes=true
The queries in that session will then be able to use invisible indexes.
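A minimal sketch of the whole workflow (the index, table and column names are only illustrative):
-- create the index invisible so existing plans are not disturbed
create index emp_sal_idx on emp(sal) invisible;
-- let only the current session consider invisible indexes while testing
alter session set optimizer_use_invisible_indexes = true;
-- once satisfied, expose the index to every session
alter index emp_sal_idx visible;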
2) Extended stats on the table
If we use a function on a predicate column in the query, the optimizer is not able to estimate the
selectivity properly. With 11g we can create extended stats on the column for that function
(create_extended_stats is a function, so it is called in a query):
select dbms_stats.create_extended_stats(ownname => 'OWN', tabname => 'TAB', extension => '(FUNC(COL))') from dual;
The optimizer will then be able to find the correct selectivity.
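For example, if queries filter on UPPER of a column (the table and column names below are only illustrative), the extension can be created and is then picked up the next time statistics are gathered:
-- returns a system generated extension name such as SYS_STU...
select dbms_stats.create_extended_stats(user, 'EMP', '(UPPER(LAST_NAME))') from dual;
-- regather so the new extension gets its own statistics
exec dbms_stats.gather_table_stats(user, 'EMP', method_opt => 'for all columns size auto');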

How to use Optimizer hints


What is the Optimizer?
When a query is submitted to the Oracle server, the optimizer attempts to generate the best execution
plan for the SQL statement. The best execution plan is defined as the plan with the lowest cost
among all considered candidate plans. The cost computation accounts for factors of query
execution such as I/O, CPU, and communication.
The best method of execution depends on myriad conditions including how the query is written,
the size of the data set, the layout of the data, and which access structures exist. The optimizer
determines the best plan for a SQL statement by examining multiple access methods, such as
full table scan or index scans, and different join methods such as nested loops and hash joins.
What is an Optimizer Hint?
Optimizer hints are clauses used in SQL queries to force the optimizer to take a particular route,
such as using an index or a nested loop join. The syntax to use a hint is
/*+ <hint name> */
Some commonly used hints (a combined usage example follows this list):
SELECT /*+ ALL_ROWS */ empid, last_name, sal FROM emp;
SELECT /*+ FIRST_ROWS */ * FROM emp;
SELECT /*+ FIRST_ROWS(20) */ * FROM emp;
SELECT /*+ FIRST_ROWS(100) */ empid, last_name, sal FROM emp;
/*+ INDEX(table_alias index_name) */
/*+ INDEX_JOIN(emp index1 index2) */ /*+ INDEX_COMBINE(emp bitmap1 bitmap2) */
/*+ AND_EQUAL(emp index1 index2) */
/*+ dynamic_sampling(A 3) */
/*+ USE_NL(table1 table2) */
/*+ USE_MERGE(table1 table2) */
/*+ USE_HASH(table1 table2) */
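A combined example of hint usage is shown below. This is only a sketch; the dept join columns and the emp_n1 index name are assumptions for illustration:
select /*+ LEADING(e) USE_NL(e d) INDEX(e emp_n1) */
e.empid, e.last_name, d.dname
from emp e, dept d
where e.deptno = d.deptno
and e.empid = 10;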

Various Join Methods in Oracle


Performance is key for OLTP systems, and we should have good knowledge of the join methods
in Oracle to improve the performance of SQL queries.
Here are the join methods available in the Oracle database.

Nested Loops
-For each row in the first row source, access the rows from the second row source.
-Best for OLTP type transactions
-It will be fastest if the rows returned from the first table are small
-The optimizer first determines the driving table and designates it as the outer loop. This is the
driving row source. It produces a set of rows for driving the join condition. The row source can
be a table accessed using an index scan or a full table scan. The rows can also be produced by any
other operation; for example, the output of a nested loop join can be used as a row source.
-The optimizer designates the other table as the inner loop. This is iterated for every row returned from
the outer loop. This is an access operation on a table and ideally should be an index scan.
-The operation performed on the INNER table is repeated for every row returned from the OUTER table
-If the optimizer is choosing to use some other join method, you can use the USE_NL(A B) hint,
where A and B are the aliases of the tables being joined.
Plan
SELECT STATEMENT
  NESTED LOOPS
    TABLE ACCESS FULL OF OE_ORDER_LINES_ALL
    TABLE ACCESS BY INDEX ROWID OE_LINES_ALL
      INDEX RANGE SCAN OE_LINES_N1
Nested Loop outer join
-Similar to a nested loop
-Rows are returned even if the inner loop does not have any rows meeting the criteria
-Unlike the nested loop, which might be driven from either of the tables, this is a one-way join:
a = b(+) will always go to a before b; this may result in a more expensive plan (possibly non-NL)
-The (+) always goes on the deficient side
Sort Merge Join
-Rows are produced from first table and are then sorted
-Rows are produced from second table and sorted by the same sort key as first table
-Table A and B are NOT accessed concurrently
-Sorted rows from both sides are then merged together(joined)
-No concept of a driving table, so table order cannot affect the outcome
-Faster than Nested loops if:
rows produced from table 1 are large
access to table 2 requires a large range scan for each row in table 1
-To instruct the optimizer to use a sort merge join, apply the USE_MERGE hint. You might also
need to give hints to force an access path.
-There are situations where it is better to override the optimizer with the USE_MERGE hint.

Execution Plan
MERGE JOIN
  SORT JOIN
    TABLE ACCESS FULL OF OE_ORDER_LINES_ALL
  SORT JOIN
    TABLE ACCESS FULL OF OE_HEADERS
Hash Join
-Smallest table is passed over and a hashing algorithm is applied to each row to create a hash
table in memory.
-Second table is passed over and the same hashing algorithm applied to check for matches (ie.
joins)
-Faster than sort merge join:
sort operation required by sort merge join can be expensive if tables are large and not in any
matching order
-Apply the USE_HASH hint to instruct the optimizer to use a hash join when joining two tables
together.
-Ensure that hash_area_size is large enough to hold the smaller table in memory. Otherwise,
Oracle must write to the TEMP tablespace, slowing down the hash join
Plan
SELECT STATEMENT
  HASH JOIN
    TABLE ACCESS FULL OE_ORDER_LINES_ALL
    TABLE ACCESS FULL OE_HEADERS
Cartesian Join
-Generally expensive, as the result is the Cartesian product of the two tables
-Can result from one or more of the tables not having any join conditions to any other tables in the
statement
-Can occur even with a join
Plan
SELECT STATEMENT
  SORT UNIQUE
    MERGE JOIN CARTESIAN
      NESTED LOOPS
        TABLE ACCESS BY INDEX ROWID OE_ORDER_LINES_ALL
          INDEX RANGE SCAN OE_ORDER_LINES_ALL_N1
        TABLE ACCESS BY INDEX ROWID OE_HEADERS
          INDEX RANGE SCAN OE_HEADERS_N1
      SORT JOIN
        INDEX FAST FULL SCAN OE_HEADERS_N1
Statistics

Find indexes and assigned columns for a table
It is very common that you want to see the available indexes on a table, and sometimes you also
want to know the columns which are assigned to those indexes.
Following views would be helpful in these cases
dba_ind_columns
all_ind_columns
set pagesize 50000
set verify off
set echo off
col table_name head 'Table Name' format a20
col index_name head 'Index Name' format a25
col column_name head 'Column Name' format a30
break on table_name on index_name
select table_name, index_name, column_name
from all_ind_columns
where table_name like upper('&Table_Name')
order by table_name, index_name, column_position
/

How Table Monitoring and STATISTICS_LEVEL parameter are connected from 10g onwards
Before Oracle 10g, automated collection of statistics for objects that had become stale was
controlled by the setting of the MONITORING flag on the table.
Depending on the MONITORING flag, the GATHER_STATS_JOB job performed GATHER
EMPTY and GATHER STALE operations on the flagged objects.
From 10g onwards, the MONITORING and NOMONITORING keywords are deprecated and will be
ignored. The table-monitoring feature is now controlled by the STATISTICS_LEVEL parameter.
When STATISTICS_LEVEL is set to BASIC, monitoring is disabled on the table.
When STATISTICS_LEVEL is set to TYPICAL, then monitoring is enabled.
By default STATISTICS_LEVEL is set to TYPICAL and monitoring of tables is enabled.
It is strongly recommended to set STATISTICS_LEVEL to TYPICAL in 10g.
Monitoring tracks the approximate number of INSERT, UPDATE, and DELETE operations for
the table since the last time statistics were gathered. This information on changes made is
maintained in the SGA and periodically (about every 15 minutes) the SMON flushes the data
into the data dictionary tables. The data dictionary information is made visible through the views
DBA_TAB_MODIFICATIONS, ALL_TAB_MODIFICATIONS and USER_TAB_MODIFICATIONS.
Oracle uses these views to identify tables that have stale statistics.
Whenever there is 10% change in data in a table, Oracle considers its statistics to be stale.
Up to date statistics are important to generate good execution plans. Automatic statistics
collection job using DBMS_STATS packages depend on the monitoring data to determine when
to collect statistics on objects with stale statistics.
Prior to Oracle11g, the staleness threshold is hardcoded at 10%. This means that an object is
considered stale if the number of rows inserted,
updated or deleted since the last statistics gathering time is more than 10% of the number of
rows. There is no way to modify this value prior to Oracle 11g.
Starting with Oracle11g, the staleness threshold can be set using the STALE_PERCENT
statistics preference. This can be set globally using
DBMS_STATS.SET_GLOBAL_PREFS or at the table level using
DBMS_STATS.SET_TABLE_PREFS.
E.g. to modify the staleness threshold to 27% for table SCOTT in the current user's schema:
SQL> exec dbms_stats.set_table_prefs(null,'SCOTT','STALE_PERCENT','27');
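We can then verify the value now in effect with dbms_stats.get_prefs (a quick sketch, assuming 11g or later):
select dbms_stats.get_prefs('STALE_PERCENT', user, 'SCOTT') as stale_percent from dual;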

Oracle Tutorial: How to check Stale statistics
Stats can become stale when the data in the table changes substantially. Up to date statistics
are important to generate good execution plans.
How does Oracle decide if the stats have become stale?
Stats are considered stale when #(INSERTS + UPDATES + DELETES) >= 10% of
NUM_ROWS from dba_tables.
Parameter setting required to track the table changes
Prior to Oracle 10g, automated collection of statistics for objects that had become stale was
controlled by the setting of the MONITORING flag on the table.
Depending on the MONITORING flag, the GATHER_STATS_JOB job performed GATHER
EMPTY and GATHER STALE operations on the flagged objects.
In 10g the MONITORING and NOMONITORING keywords are deprecated and will be
ignored. The table-monitoring feature is now controlled by the STATISTICS_LEVEL parameter.
When STATISTICS_LEVEL is set to BASIC, monitoring is disabled on the table.
When STATISTICS_LEVEL is set to TYPICAL, then monitoring is enabled.
By default STATISTICS_LEVEL is set to TYPICAL and monitoring of tables is enabled. It is
strongly recommended to set STATISTICS_LEVEL to TYPICAL in 10g and above
By setting this parameter, Oracle tracks the approximate number of INSERT, UPDATE, and
DELETE operations for the table since the last time statistics were gathered. This information on
changes made is maintained in the SGA and periodically (about every 15 minutes) the SMON
flushes the data into the data dictionary tables. You can manually flush the information by calling
dbms_stats.FLUSH_DATABASE_MONITORING_INFO(). The data dictionary information is
made visible through the views: DBA_TAB_MODIFICATIONS, ALL_TAB_MODIFICATIONS and
USER_TAB_MODIFICATIONS.
Oracle uses these views to identify tables that have stale statistics.
Whenever there is 10% change in data in a table, Oracle considers its statistics to be stale.
How to check Stale statistics
The PL/SQL block below finds all the tables in the SCOTT schema that have stale stats.
SET SERVEROUTPUT ON
SQL> DECLARE
  ObjList dbms_stats.ObjectTab;
BEGIN
  DBMS_STATS.GATHER_SCHEMA_STATS(ownname => 'SCOTT', objlist => ObjList, options => 'LIST STALE');
  IF ObjList.COUNT > 0 THEN
    FOR k IN ObjList.FIRST .. ObjList.LAST
    LOOP
      dbms_output.put_line(ObjList(k).ownname || '.' || ObjList(k).ObjName || ' ' ||
                           ObjList(k).ObjType || ' ' || ObjList(k).partname);
    END LOOP;
  END IF;
END;
/
The SQL below can also be used to find the inserts, updates and deletes per table:
select u.timestamp,
t.last_analyzed,
u.table_name,
u.inserts,
u.updates,
u.deletes,
d.num_rows,
decode(d.num_rows, 0, 'Table Stats indicate No Rows',
nvl(to_char(((u.inserts + u.deletes + u.updates) / d.num_rows) * 100, '999.99'),
'Null Value in USER_TAB_MODIFICATIONS')
) percent
from user_tables t, user_tab_modifications u, dba_tables d
where u.table_name = t.table_name
and d.table_name = t.table_name
and d.owner = upper('&Owner')
and (u.inserts > 3000 or u.updates > 3000 or u.deletes > 3000)
order by t.last_analyzed
/
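A simpler check is also possible because the dictionary exposes a pre-computed staleness flag (a sketch; it assumes the stale_stats column in dba_tab_statistics, available in recent releases):
-- flush the in-memory monitoring data first so the flag is current
exec dbms_stats.flush_database_monitoring_info;
select owner, table_name, last_analyzed, stale_stats
from dba_tab_statistics
where owner = 'SCOTT'
and stale_stats = 'YES';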
Some other Important takeaways
Prior to Oracle11g, the staleness threshold is hard coded at 10%. This means that an object is
considered stale if the number of rows inserted, updated or deleted since the last statistics
gathering time is more than 10% of the number of rows. There is no way to modify this value
prior to Oracle 11g.
Starting with Oracle11g, the staleness threshold can be set using the STALE_PERCENT
statistics preference. This can be set globally using DBMS_STATS.SET_GLOBAL_PREFS or at
the table level using DBMS_STATS.SET_TABLE_PREFS.

How to find table where statistics are locked


You can use the query below to find tables where statistics are locked:
select owner, table_name, stattype_locked
from dba_tab_statistics
where stattype_locked is not null;

Now once we have found the objects, we can use the queries below to unlock them:
exec dbms_stats.unlock_schema_stats('SCHEMA_OWNER');
exec dbms_stats.unlock_table_stats('TABLE_OWNER','TABLE_NAME');
General Performance topics

sql performance tuning


SQL statements are written to retrieve/fetch data from the database. We want our SQL statements to
run fast and provide the results in seconds.
A poorly designed SQL statement can slow down the entire database and bring operations to a halt.
It is much harder to write efficient SQL than it is to write functionally correct SQL.
SQL performance tuning can significantly improve a system's health and performance.
The key to tuning SQL is to minimize the data it accesses to provide the result. We can
minimize the data it accesses by choosing an optimal search path.
A simple example would be:
select * from dept where emp=10
This query has to search the entire dept table to find the rows where emp=10, so it has to
access the full table.
If we create an index on the emp column, the query can just access the index and get the result,
so it accesses much less data.
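A minimal sketch of this idea, reusing the purely illustrative dept table and emp column from the example above:
create index dept_emp_idx on dept(emp);
-- the same query can now use an index range scan instead of a full table scan
select * from dept where emp = 10;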
Here are general tips for SQL performance tuning.
First, you must have all the required tools for SQL tuning: good information on tracing,
formatting the trace, explain plan, and reading the explain plan, plus good knowledge of the
various join methods available in Oracle and how to use them efficiently.
1) Read less data and be I/O efficient.
The more data a SQL statement reads, the more latches it needs to acquire, and the slower it
performs, so it should always do as few logical reads as possible.
Write sensible SQL statements with proper filters. Check the number of rows in the various
tables involved and find the best way to structure the SQL statement.
2) Use good indexes
B-tree indexes and bitmap indexes can be used to increase the performance of queries when
the data returned is a small fraction (roughly less than 10%) of the table. But we must be careful
while creating an index, as it also needs to be maintained for insert, update and delete operations,
so creating an index adds overhead. We must carefully examine the effect of creating the index.
3) Avoid SQL that disables the usage of indexes

SQL that Disables Indexes


-Functions ( to_char(), to_date(), etc. ): move the function to the constant/bind variable side
(see the sketch after this list)
-Type casting
In SQL:
where emp_no = 10 (emp_no is a varchar2)
In PL/SQL:
where emp_no = v_emp_num (v_emp_num is a number)
-Modifiers
and id + 0 = 111
and date + 1 = sysdate (try date = sysdate - 1)
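Here is the sketch referred to above for moving a function off the indexed column (hire_date is a hypothetical indexed date column on emp):
-- the function wrapped around the column disables a plain index on hire_date
select empid from emp
where to_char(hire_date,'YYYY-MM-DD') = '2023-01-01';
-- rewritten so the index on hire_date can be used
select empid from emp
where hire_date >= to_date('2023-01-01','YYYY-MM-DD')
and hire_date < to_date('2023-01-02','YYYY-MM-DD');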
4) Always use bind variables in the application. If you don't use bind variables, the SQL will be
parsed every time and will impact database performance. If it contains bind variables, the SQL
is cached and further executions will not require parsing, so the overall performance of
the system is improved (see the sketch below).
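A sketch of the difference in dynamic SQL inside PL/SQL (the emp table and its columns are assumed for illustration):
declare
v_name emp.last_name%type;
p_empid number := 101;
begin
-- concatenated literal: a new cursor is hard parsed for every distinct value
execute immediate 'select last_name from emp where empid = ' || p_empid into v_name;
-- bind variable: one shared cursor, soft parsed on repeated executions
execute immediate 'select last_name from emp where empid = :1' into v_name using p_empid;
end;
/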
5) UNION vs OR. Use UNION for queries with two clear execution paths, each returning a fairly
small number of rows. Don't use UNION for queries that are likely to return a large number of
rows, since all rows need to be sorted and most of them will be discarded. OR tends to disable
the index.
6) Use accurate statistics on the tables to get the optimal plan.
7) If you are using a function or expression in a condition, check whether there is a function-based
index on that column. If it is not present, the index will not be used.
8) Use EXISTS vs IN and NOT EXISTS vs NOT IN for correlated subqueries.
9) Avoid poor coding practices
a) Avoid Cartesian joins. Make sure all the tables in the query are needed and are
linked to each other.
b) Use DECODE to avoid multiple trips to the database.
c) Try to avoid outer joins.
d) Sometimes decomposing the logic into small parts makes the work faster.
10) If you are using a complex view, check whether the base tables can be used instead, as the view
tends to make performance worse.
11) Use UNION ALL vs UNION if you know that the data retrieved will not have duplicate rows.
12) Use hints to optimize the execution plan. Sometimes a hint can be used to change the
execution plan for the query so that it takes the most optimal path.
Sometimes bind peeking creates a bad plan, so in that case putting the necessary hint in place to
fix the plan helps in getting good performance every time.
The most common hints are:
/*+ LEADING(table_alias) */ specifies the table with which to start the join
/*+ FIRST_ROWS */ very good for on-line screens; favours NESTED LOOPS
/*+ INDEX(table_alias index_name) */ specifies the index you want to use. Note: if the index gets
dropped and recreated and the name changes, then the hint is no longer valid.
/*+ USE_NL(table_alias1 table_alias2) */ asks the optimizer to use a nested loop join for the two
tables specified
Avoid unnecessary hints and use them with care.

sql performance tuning some questions


Here are some SQL performance tuning examination questions.
1) following an ANALYZE command, which of the following data dictionary views will contain
statistics on the number of rows in a table
a dbms_rows
b dba_table_rows
c dba_tables
d dbs_statistics
Solution (c)
2) histograms are useful because they allow the optimizer to have a better idea of how the data
in an indexed columns is
a sorted
b named
c entered
d distributed
Solution (d)
3) the optimizer mode can be set at which of the following levels
a instance
b session
c statement
d all of the above
Solution (d)
4) the ability to store predefined execution plans so that the optimizer knows in advance which
execution plan is best is called
a stored outline
b materialized views
c hash cluster
d index organized tables
Solution (a)
5) which of the following types of indexes stores the indexed column's values and associated
rowids as a binary string
a b-tree index
b reverse key index

c bitmap index
d bitwise index
Solution (c)
6) which of the following factors would not make a column a good candidate for b-tree index
a the data in the column has low cardinality
b the column is frequently used in sql statement where clauses
c most queries on the table return only a small portion of all rows
d none of the above
Solution (a)
7) the process of preparing a statement for execution is called
a caching
b hashing
c parsing
d none of the above
Solution (c)
8) finding a statement already cached in the library cache is referred as
a cache hit
b cache miss
c cache match
d cache parse
Solution (a)
9) to determine if a new statement is a match for an existing statement already in the shared
pool, oracle compares each statement
a result set
b security
c execution plan
d hashed value
Solution (d)
10) in order for two statements to result in a cache hit, which of the following must be true
a the statements must use the same case, either upper, lower, or mixed
b the statements must be issued against the same table
c the statements must be on the same number of lines
d all of the above must be true
Solution (d)
11) which dynamic data dictionary view contains information about the library cache hit ratio
a v$rowcache
b v$librarycache
c v$dictionarycache
d all of the above

Solution (b)
12) according to oracle, what should the data dictionary hit ratio be for a well tuned oltp system
a more than 85 percent
b less than 85 percent
c between 50 and 60 percent
d none of the above
Solution (a)
