
http://oracledbasupport.co.uk/page/10/

How to retrieve entire SQL + Execution PLAN from Statspack


To retrieve the SQL text and execution plan, you need Statspack running at snapshot level 7.
http://orafaq.com/node/1405
1. sprepsql.sql
The SQL report (sprepsql.sql) is a report for a specific SQL statement. It is usually
run after examining the high-load SQL sections of the instance health report, and it
provides detailed statistics and data for a single SQL statement (as identified by the
Hash Value in the Statspack report).
2. When the Hash Value is known
- select * from STATS$SQLTEXT where hash_value = &&HASH_VALUE order by piece;  -- use the hash value from the Statspack report
- For an object, first locate its OBJECT#:
select * from sys.obj$ where name='TRANSACTION';
select b.snap_time,
       a.snap_id,
       a.plan_hash_value,
       a.object#,
       a.object_name,
       a.operation,
       a.options,
       a.cost,
       a.io_cost,
       a.cardinality,
       a.position,
       a.cpu_cost,
       a.optimizer,
       a.search_columns,
       a.bytes,
       a.distribution,
       a.temp_space,
       a.access_predicates,
       a.filter_predicates
from stats$sql_plan a, stats$snapshot b
where a.object# = '&&OBJECT_ID'
and a.snap_id = b.snap_id;
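STATS$SQLTEXT stores each statement split into fixed-length pieces, which is why the lookup above orders by PIECE. Reassembling the full statement client-side is just a sorted concatenation; a minimal Python sketch (the fragment tuples are illustrative, not real dictionary rows):

```python
def reassemble_sql(rows):
    """Concatenate STATS$SQLTEXT fragments (piece_number, sql_text piece)
    in PIECE order to recover the full statement."""
    return ''.join(text for _, text in sorted(rows))

# Illustrative fragments, as if fetched from STATS$SQLTEXT for one hash_value
rows = [(1, 'from employees '), (0, 'select * '), (2, 'where rownum < 10')]
print(reassemble_sql(rows))  # → select * from employees where rownum < 10
```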
Locate Hard-Hitting SQL from the Statspack Repository
1. Log in as the PERFSTAT user on the database.
The queries below will not work unless you log in as the PERFSTAT user.
2. Find the DBID using:
select dbid from stats$sql_summary;
3. Locate MIN(SNAP_ID) pBgnSnap and MAX(SNAP_ID) pEndSnap from:
select min(snap_id), max(snap_id), min(snap_time), max(snap_time)
from stats$snapshot
where to_number(to_char(snap_time,'HH24')) > 10
and to_number(to_char(snap_time,'HH24')) < 13
and trunc(snap_time)=trunc(sysdate);
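The window predicate in step 3 selects only snapshots taken between 10:00 and 13:00 today. The same filter, sketched in Python to make the logic explicit (the 10-13 window is just the illustrative default from the query above):

```python
from datetime import datetime, date

def in_snapshot_window(snap_time: datetime, lo: int = 10, hi: int = 13) -> bool:
    """Mirror of the SQL predicate: hour strictly between lo and hi,
    and the snapshot taken today (trunc(snap_time) = trunc(sysdate))."""
    return lo < snap_time.hour < hi and snap_time.date() == date.today()

now = datetime.now()
print(in_snapshot_window(now.replace(hour=11)))  # inside the window → True
print(in_snapshot_window(now.replace(hour=9)))   # before the window → False
```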
Show All SQL Stmts Ordered by Logical Reads
select e.hash_value "E.HASH_VALUE"
, e.module "Module"
, e.buffer_gets - nvl(b.buffer_gets,0) "Buffer Gets"
, e.executions - nvl(b.executions,0) "Executions"
, Round( decode( (e.executions - nvl(b.executions,0)), 0, to_number(NULL)
, (e.buffer_gets - nvl(b.buffer_gets,0)) /
(e.executions - nvl(b.executions,0)) ), 3) "Gets / Execution"
, Round(100*(e.buffer_gets - nvl(b.buffer_gets,0))/sp920.getGets(:pDbId,:pInstNum,:pBgnSnap,:pEndSnap,'NO'),3) "Percent of Total"
, Round((e.cpu_time - nvl(b.cpu_time,0))/1000000,3) "CPU (s)"
, Round((e.elapsed_time - nvl(b.elapsed_time,0))/1000000,3) "Elapsed (s)"
, Round(e.fetches - nvl(b.fetches,0)) "Fetches"
, sp920.getSQLText(e.hash_value, 400) "SQL Statement"
from stats$sql_summary e
, stats$sql_summary b
where b.snap_id(+) = :pBgnSnap
and b.dbid(+) = e.dbid
and b.instance_number(+) = e.instance_number
and b.hash_value(+) = e.hash_value
and b.address(+) = e.address
and b.text_subset(+) = e.text_subset
and e.snap_id = :pEndSnap
and e.dbid = :pDbId
and e.instance_number = :pInstNum
order by 3 desc
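Note the DECODE in the "Gets / Execution" column: it returns NULL when the execution delta between the two snapshots is zero, avoiding a divide-by-zero error. The same delta arithmetic as a Python sketch (the snapshot counter values are made up):

```python
def gets_per_execution(end_gets, begin_gets, end_execs, begin_execs):
    """Delta of buffer gets per delta of executions between two snapshots.

    Mirrors the SQL: NVL missing begin-snapshot values to 0, and return
    None (SQL NULL) when no executions happened in the interval.
    """
    execs = end_execs - (begin_execs or 0)
    if execs == 0:
        return None
    gets = end_gets - (begin_gets or 0)
    return round(gets / execs, 3)

print(gets_per_execution(1000, 400, 10, 5))  # 600 gets over 5 executions → 120.0
print(gets_per_execution(1000, 400, 5, 5))   # no new executions → None
```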
Show SQL Stmts where SQL_TEXT like '%...%'
select e.hash_value "E.HASH_VALUE"
, e.module "Module"
, e.buffer_gets - nvl(b.buffer_gets,0) "Buffer Gets"
, e.executions - nvl(b.executions,0) "Executions"
, Round( decode( (e.executions - nvl(b.executions,0)), 0, to_number(NULL)
, (e.buffer_gets - nvl(b.buffer_gets,0)) /
(e.executions - nvl(b.executions,0)) ), 3) "Gets / Execution"
, Round(100*(e.buffer_gets - nvl(b.buffer_gets,0))/sp920.getGets(:pDbId,:pInstNum,:pBgnSnap,:pEndSnap,'NO'),3) "Percent of Total"
, Round((e.cpu_time - nvl(b.cpu_time,0))/1000000,3) "CPU (s)"
, Round((e.elapsed_time - nvl(b.elapsed_time,0))/1000000,3) "Elapsed (s)"
, Round(e.fetches - nvl(b.fetches,0)) "Fetches"
, sp920.getSQLText(e.hash_value, 400) "SQL Statement"
from stats$sql_summary e
, stats$sql_summary b
where b.snap_id(+) = :pBgnSnap
and b.dbid(+) = e.dbid
and b.instance_number(+) = e.instance_number
and b.hash_value(+) = e.hash_value
and b.address(+) = e.address
and b.text_subset(+) = e.text_subset
and e.snap_id = :pEndSnap
and e.dbid = 2863128100
and e.instance_number = :pInstNum
and sp920.getSQLText(e.hash_value, 400) like '%ZPV_DATA%'
order by 3 desc
Locate Server Workload from Statspack for Days in the Past
Change a.statistic# to the respective value.
Stats for working hours:
select to_char(trunc(b.snap_time),'DD-MM-YYYY'), a.statistic#, a.name, sum(a.value)
from stats$sysstat a, stats$snapshot b
where a.snap_id = b.snap_id
and trunc(b.snap_time) > trunc(sysdate - 30)
and to_number(to_char(b.snap_time,'HH24')) > 8
and to_number(to_char(b.snap_time,'HH24')) < 18
and a.statistic# = 54
group by trunc(b.snap_time), a.statistic#, a.name
order by trunc(b.snap_time);
Locate the kind of stats you want to pull from Statspack (statistic# values can differ between Oracle versions, so always confirm with this lookup):
select * from STATS$SYSSTAT where name like '%XXX%';
9 session logical reads
Physical Reads
54 physical reads
56 physical reads direct
58 physical read bytes
39 physical read total bytes
42 physical write total bytes
66 physical write bytes
66 physical writes
CPU Related
355 OS Wait-cpu (latency) time
328 parse time cpu
8 recursive cpu usage
Rollback Related
176 transaction tables consistent read rollbacks
180 rollbacks only - consistent read gets
181 cleanouts and rollbacks - consistent read gets
187 transaction rollbacks
5 user rollbacks
239 IMU CR rollbacks
186 rollback changes - undo records applied

Summary report of ASM disk groups and Space Utilised
PURPOSE : Provide a summary report of all disk groups.
SET LINESIZE 145
SET PAGESIZE 9999
SET VERIFY off

COLUMN group_name           FORMAT a20          HEAD 'Disk Group|Name'
COLUMN sector_size          FORMAT 99,999       HEAD 'Sector|Size'
COLUMN block_size           FORMAT 99,999       HEAD 'Block|Size'
COLUMN allocation_unit_size FORMAT 999,999,999  HEAD 'Allocation|Unit Size'
COLUMN state                FORMAT a11          HEAD 'State'
COLUMN type                 FORMAT a6           HEAD 'Type'
COLUMN total_mb             FORMAT 999,999,999  HEAD 'Total Size (MB)'
COLUMN used_mb              FORMAT 999,999,999  HEAD 'Used Size (MB)'
COLUMN pct_used             FORMAT 999.99       HEAD 'Pct. Used'

break on report on disk_group_name skip 1
compute sum label "Grand Total: " of total_mb used_mb on report
SELECT
name group_name
, sector_size sector_size
, block_size block_size
, allocation_unit_size allocation_unit_size
, state state
, type type
, total_mb total_mb
, (total_mb - free_mb) used_mb
, ROUND((1- (free_mb / total_mb))*100, 2) pct_used
FROM
v$asm_diskgroup
ORDER BY
name;
Sample Report

Disk Group           Sector   Block   Allocation
Name                   Size    Size    Unit Size State       Type   Total Size (MB) Used Size (MB) Pct. Used
-------------------- ------- ------- ------------ ----------- ------ --------------- -------------- ---------
XYZ_REDO_DG01            512   4,096   16,777,216 MOUNTED     EXTERN          28,144          9,424     33.48
ABC_ARCH_DG00            512   4,096   16,777,216 MOUNTED     EXTERN         225,216         28,656     12.72
ABC_DATA_DG00            512   4,096   16,777,216 MOUNTED     EXTERN         450,432         88,800     19.71
ABC_FLBK_DG00            512   4,096   16,777,216 MOUNTED     EXTERN         112,608          4,848      4.31
ABC_REDO_DG00            512   4,096   16,777,216 MOUNTED     EXTERN          28,128          9,584     34.07
ABC_REDO_DG01            512   4,096   16,777,216 MOUNTED     EXTERN          28,128          9,456     33.62
                                                                    --------------- --------------
Grand Total:                                                              4,448,192      2,110,496
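The Pct. Used column is simply ROUND((1 - free_mb/total_mb)*100, 2) from the query above. A quick Python check of that arithmetic against the first sample row (free_mb = total_mb - used_mb):

```python
def pct_used(total_mb: int, free_mb: int) -> float:
    """Same arithmetic as the SQL column: ROUND((1 - free_mb/total_mb)*100, 2)."""
    return round((1 - free_mb / total_mb) * 100, 2)

# XYZ_REDO_DG01 from the sample report: 28,144 MB total, 9,424 MB used
print(pct_used(28144, 28144 - 9424))  # → 33.48
```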
Performance summary report of all disks contained within all ASM DiskGroups
---------

+----------------------------------------------------------------------------+
| Jeffrey M. Hunter |
|----------------------------------------------------------------------------|
| PURPOSE : Provide a summary report of all disks contained within all ASM |
| disk groups along with their performance metrics. |
| NOTE : As with any code, ensure to test this script in a development |
| environment before attempting to run it in production. |
+----------------------------------------------------------------------------+

SET LINESIZE 145
SET PAGESIZE 9999
SET VERIFY off

COLUMN disk_group_name FORMAT a20              HEAD 'Disk Group Name'
COLUMN disk_path       FORMAT a20              HEAD 'Disk Path'
COLUMN reads           FORMAT 999,999,999      HEAD 'Reads'
COLUMN writes          FORMAT 999,999,999      HEAD 'Writes'
COLUMN read_errs       FORMAT 999,999          HEAD 'Read|Errors'
COLUMN write_errs      FORMAT 999,999          HEAD 'Write|Errors'
COLUMN read_time       FORMAT 999,999,999      HEAD 'Read|Time'
COLUMN write_time      FORMAT 999,999,999      HEAD 'Write|Time'
COLUMN bytes_read      FORMAT 999,999,999,999  HEAD 'Bytes|Read'
COLUMN bytes_written   FORMAT 999,999,999,999  HEAD 'Bytes|Written'

break on report on disk_group_name skip 2
compute sum label "" of reads writes read_errs write_errs read_time write_time bytes_read bytes_written on disk_group_name
compute sum label "Grand Total: " of reads writes read_errs write_errs read_time write_time bytes_read bytes_written on report
SELECT
a.name disk_group_name
, b.path disk_path
, b.reads reads
, b.writes writes
, b.read_errs read_errs
, b.write_errs write_errs
, b.read_time read_time
, b.write_time write_time
, b.bytes_read bytes_read
, b.bytes_written bytes_written
FROM
v$asm_diskgroup a JOIN v$asm_disk b USING (group_number)
ORDER BY
a.name
/
Summary report of all disks contained within all disk Groups
--------

+----------------------------------------------------------------------------+
| From : Jeffrey M. Hunter |
| PURPOSE : Provide a summary report of all disks contained within all disk |
| groups. This script is also responsible for querying all |
| candidate disks - those that are not assigned to any disk |
| group. |
+----------------------------------------------------------------------------+

SET LINESIZE 145
SET PAGESIZE 9999
SET VERIFY off

COLUMN disk_group_name      FORMAT a20          HEAD 'Disk Group Name'
COLUMN disk_file_path       FORMAT a17          HEAD 'Path'
COLUMN disk_file_name       FORMAT a20          HEAD 'File Name'
COLUMN disk_file_fail_group FORMAT a20          HEAD 'Fail Group'
COLUMN total_mb             FORMAT 999,999,999  HEAD 'File Size (MB)'
COLUMN used_mb              FORMAT 999,999,999  HEAD 'Used Size (MB)'
COLUMN pct_used             FORMAT 999.99       HEAD 'Pct. Used'

break on report on disk_group_name skip 1
compute sum label "" of total_mb used_mb on disk_group_name
compute sum label "Grand Total: " of total_mb used_mb on report
SELECT
NVL(a.name, '[CANDIDATE]') disk_group_name
, b.path disk_file_path
, b.name disk_file_name
, b.failgroup disk_file_fail_group
, b.total_mb total_mb
, (b.total_mb - b.free_mb) used_mb
, ROUND((1- (b.free_mb / b.total_mb))*100, 2) pct_used
FROM
v$asm_diskgroup a RIGHT OUTER JOIN v$asm_disk b USING (group_number)
ORDER BY
a.name
/
Mastering ASMCMD
cd       Changes the current directory to the specified directory.
du       Displays the total disk space occupied by ASM files in the specified ASM directory and all its subdirectories, recursively.
exit     Exits ASMCMD.
find     Lists the paths of all occurrences of the specified name (with wildcards) under the specified directory.
ASMCMD> find +dgroup1 undo*
+dgroup1/SAMPLE/DATAFILE/UNDOTBS1.258.555341963
+dgroup1/SAMPLE/DATAFILE/UNDOTBS1.272.557429239
The following example returns the absolute path of all the control files in the +dgroup1/sample directory:
ASMCMD> find -t CONTROLFILE +dgroup1/sample *
+dgroup1/sample/CONTROLFILE/Current.260.555342185
+dgroup1/sample/CONTROLFILE/Current.261.555342183
ls       Lists the contents of an ASM directory, the attributes of the specified file, or the names and attributes of all disk groups.
lsct     Lists information about current ASM clients.
lsdg     Lists all disk groups and their attributes.
mkalias  Creates an alias for a system-generated filename.
mkdir    Creates ASM directories.
pwd      Displays the path of the current ASM directory.
rm       Deletes the specified ASM files or directories.
rmalias  Deletes the specified alias, retaining the file that the alias points to.
List the Connected Users and Machines on the Database
SELECT s.username, s.logon_time, s.machine, s.osuser, s.program
FROM v$session s, v$process p, sys.v_$sess_io si
WHERE s.paddr = p.addr(+) AND si.sid(+) = s.sid
AND s.machine like '%otau157%' order by 3;
SELECT s.username, s.logon_time, s.machine, s.osuser, s.program
FROM v$session s, v$process p, sys.v_$sess_io si
WHERE s.paddr = p.addr(+) AND si.sid(+) = s.sid
AND s.type='USER';
Display tablespace usage
column tsname      format a30        heading 'Tablespace Name'
column tbs_size_mb format 99999,999  heading 'Size|(MB)'
column used        format 99999,999  heading 'Used|(MB)'
column avail       format 99999,999  heading 'Free|(MB)'
column used_visual format a11        heading 'Used'
column pct_used    format 999        heading '% Used'

set linesize 1000;
set trimspool on;
set pagesize 32000;
set verify off;
set feedback off;

PROMPT
PROMPT *************************
PROMPT *** TABLESPACE STATUS ***
PROMPT *************************
SELECT df.tablespace_name tsname
, round(sum(df.bytes)/1024/1024) tbs_size_mb
, round(nvl(sum(e.used_bytes)/1024/1024,0)) used
, round(nvl(sum(f.free_bytes)/1024/1024,0)) avail
, rpad(' '||rpad('X',round(sum(e.used_bytes)
*10/sum(df.bytes),0), 'X'),11,'-') used_visual
, nvl((sum(e.used_bytes)*100)/sum(df.bytes),0) pct_used
FROM sys.dba_data_files df
, (SELECT file_id
, sum(nvl(bytes,0)) used_bytes
FROM sys.dba_extents
GROUP BY file_id) e
, (SELECT max(bytes) free_bytes
, file_id
FROM dba_free_space
GROUP BY file_id) f
WHERE e.file_id(+) = df.file_id
AND df.file_id = f.file_id(+)
GROUP BY df.tablespace_name
ORDER BY 6;
Will produce results like:

XYZ Live Database
===================
                                     Size       Used       Free
Tablespace Name                      (MB)       (MB)       (MB) Used        % Used
------------------------------ ---------- ---------- ---------- ----------- ------
STATSPACK                           2,048          0      2,047  ----------      0
TOOLS                               1,024          0      1,024  ----------      0
ACF_XYZ                             2,048          0      2,048  ----------      0
ACF_IABC                            2,048          3      2,045  ----------      0
UNDOTBS1                            1,024        337        449  XXX-------     33
SYSTEM                              1,024        557        467  XXXXX-----     54
SYSAUX                              5,000      2,738      1,032  XXXXX-----     55
USERS                              14,000      9,210      2,678  XXXXXXX---     66
UNDOTBS2                            1,024        703         20  XXXXXXX---     69
UNDOTBS3                            1,024        740          5  XXXXXXX---     72
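The Used bar comes from the RPAD expression in the query: roughly one 'X' per tenth of the tablespace used, padded to 11 characters with '-'. The same rendering sketched in Python, checked against the UNDOTBS1 row:

```python
def used_bar(used_mb: float, size_mb: float, width: int = 10) -> str:
    """ASCII usage bar mirroring rpad(' '||rpad('X',...,'X'), 11, '-') in the SQL:
    one 'X' per tenth of the tablespace used, '-' for the remainder."""
    x = round(used_mb * width / size_mb)
    return (' ' + 'X' * x).ljust(width + 1, '-')

print(used_bar(337, 1024))  # → ' XXX-------'  (matches the UNDOTBS1 row)
print(used_bar(0, 2048))    # → ' ----------'
```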
Locate deprecated parameters
You can determine deprecated parameters using the ISDEPRECATED column of the V$PARAMETER view.
Enabling ArchiveLog Mode in a RAC Environment
Login to one of the nodes (i.e. linux1) and disable the cluster instance parameter by setting
cluster_database to FALSE from the current instance:
$ sqlplus "/ as sysdba"
SQL> alter system set cluster_database=false scope=spfile sid='orcl1';
Shutdown all instances accessing the clustered database:
$ srvctl stop database -d orcl
Using the local instance, MOUNT the database:
$ sqlplus "/ as sysdba"

SQL> startup mount


Enable archiving:
SQL> alter database archivelog;
Re-enable support for clustering by modifying the instance parameter cluster_database to
TRUE from the current instance:
SQL> alter system set cluster_database=true scope=spfile sid='orcl1';
Shutdown the local instance:
SQL> shutdown immediate
Bring all instances back up using srvctl:
$ srvctl start database -d orcl
(Optional) Bring any services (i.e. TAF) back up using srvctl:
$ srvctl start service -d orcl
Login to the local instance and verify Archive Log Mode is enabled:
$ sqlplus "/ as sysdba"
SQL> archive log list
Database log mode Archive Mode
Automatic archival Enabled
Archive destination USE_DB_RECOVERY_FILE_DEST
Oldest online log sequence 83
Next log sequence to archive 84
Current log sequence 84
OCR administration utilities
[oracle@oradb3 oracle]$ ocrcheck
Status of Oracle Cluster Registry is as follows :
Device/File Name : /u01/oradata/OCRConfig.dbf
Device/File integrity check succeeded
Device/File Name : /u03/oradata/OCRConfig.dbf
Device/File integrity check succeeded
Cluster registry integrity check succeeded
Oracle performs an automatic backup of the OCR once every four hours while the system is up. To list the backups:
[root@oradb4 SskyClst]# ocrconfig -showbackup
The automatic backup (default) location can be changed using:
[root@oradb4 root]# ocrconfig -backuploc /u13/oradata/Sskyclst
Oracle Clusterware Administration Quick Reference
Sequence of events to bring a cluster database back up:
1. Start all node applications using srvctl start nodeapps -n (node)
2. Start the ASM instance using srvctl start asm -n (node)
3. Start the RAC instances using srvctl start instance -d (database) -i (instance)
4. Finish up by bringing the load balanced/TAF service online: srvctl start service -d orcl -s RAC
List of nodes and other information for all nodes participating in the cluster:
[oracle@oradb4 oracle]$ olsnodes
oradb4
oradb3
oradb2
oradb1
List all nodes participating in the cluster with their assigned node numbers:
[oracle@oradb4 tmp]$ olsnodes -n
oradb4 1
oradb3 2
oradb2 3
oradb1 4
List all nodes participating in the cluster with the private interconnect assigned to each node:
[oracle@oradb4 tmp]$ olsnodes -p
oradb4 oradb4-priv
oradb3 oradb3-priv
oradb2 oradb2-priv
oradb1 oradb1-priv
Check the health of the Oracle Clusterware daemon processes:
[oracle@oradb4 oracle]$ crsctl check crs
CSS appears healthy
CRS appears healthy
EVM appears healthy

Query and administer CSS vote disks:

[root@oradb4 root]# crsctl add css votedisk /u03/oradata/CssVoteDisk.dbf
Now formatting voting disk: /u03/oradata/CssVoteDisk.dbf
Read -1 bytes of 512 at offset 0 in voting device (CssVoteDisk.dbf)
successful addition of votedisk /u03/oradata/CssVoteDisk.dbf
For a dynamic state dump of the CRS:
[root@oradb4 root]# crsctl debug statedump crs
dumping State for crs objects
The dynamic state dump information is appended to the crsd log file located in the $ORA_CRS_HOME/log/oradb4/crsd directory.
Verify the Oracle Clusterware version:
[oracle@oradb4 log]$ crsctl query crs softwareversion
CRS software version on node [oradb4] is [10.2.0.0.0]
Verify the current version of Oracle Clusterware being used:
[oracle@oradb4 log]$ crsctl query crs activeversion
CRS active version on the cluster is [10.2.0.0.0]
How to add a New Node
1. Reconfigure listeners for the new node with netca.
2. Configure the OS and hardware for the new node.
3. Add the node to the cluster using the addnode.sh script run from ORA_CRS_HOME.
Add node http://www.idevelopment.info/data/Oracle/DBA_tips/Oracle10gRAC/CLUSTER_20.shtml
Remove Node http://www.idevelopment.info/data/Oracle/DBA_tips/Oracle10gRAC/CLUSTER_23.shtml
How to Analyze the Database for the Right Statistics?
Different DBAs have different views on the sampling percentage for ANALYZE. The Oracle documentation recommends a full analyze of the entire database, which is not feasible for most live systems.
In the past I had a performance issue on a database just over 300 GB. One table, ORDER_DATA, had 400 million rows. That one table pulled the entire system down a number of times, simply because it wasn't properly analyzed and Oracle didn't know the data distribution in the table. In technical terms, the data was skewed.
I was struggling to understand where things were going wrong: we were analyzing the entire table every night, but without hash buckets (histograms), and surprisingly the SQL execution was using the right index. After spending days and weeks investigating the issue, I re-analyzed it with the new Oracle API for histograms, and SQL which used to take between 15 and 60 minutes started running in less than 100 milliseconds.
What to look for?
- First check STATSPACK and find the most active tables.
- Analyze the most active tables once a week with 10-15% sampling.
- For BIG tables, start with 1% sampling and build up over a period of time.
I also observed that adding the parallel option can reduce the time taken significantly.
--- Added to 9i init.ora
-- parallel_automatic_tuning=true
-- parallel_max_servers=16
-- parallel_min_servers=4
-- Changed percent to 1, all idx cols, changed degree to 16 from 10
begin
dbms_stats.gather_table_stats
(ownname=>'USER', tabname=>'TABLE_NAME'
,estimate_percent => 1
,method_opt=>'for all indexed columns'
,degree=>16
,cascade=>TRUE);
end;
/
Other Examples
GATHER_DATABASE_STATS(estimate_percent, block_sample, method_opt, degree, granularity, cascade, stattab, statid, options, statown, gather_sys, no_invalidate, gather_temp, gather_fixed, stattype);
GATHER_INDEX_STATS(ownname, indname, partname, estimate_percent, stattab, statid, statown, degree, granularity, no_invalidate, stattype);
GATHER_SCHEMA_STATS(ownname, estimate_percent, block_sample, method_opt, degree, granularity, cascade, stattab, statid, options, statown, no_invalidate, gather_temp, gather_fixed);
SQL> begin
dbms_stats.gather_schema_stats
(ownname=>'DOTCOM', estimate_percent => 100, method_opt=>'for all indexed columns', degree=>16, cascade=>TRUE);
end;
/
GENERATE_STATS(ownname, objname, organized);
GATHER_SYSTEM_STATS(gathering_mode, interval, stattab, statid, statown);
GATHER_TABLE_STATS(ownname, tabname, partname, estimate_percent, block_sample, method_opt, degree, granularity, cascade, stattab, statid, statown, no_invalidate, stattype);
How to Backup/Export Oracle Optimizer Statistics into Table
Exporting and Importing Statistics
Caveats: Always export and import as the schema user who owns the tables. I wasted a week exporting as DBA for an XYZ user and then importing into a different system under a different username.
Statistics can be exported and imported from the data dictionary to user-owned tables. This enables you
to create multiple versions of statistics for the same schema. It also enables you to copy statistics from
one database to another database. You may want to do this to copy the statistics from a production
database to a scaled-down test database.
Note:
Exporting and importing statistics via DBMS_STATS is a distinct concept from the EXP and IMP utilities of the database; the DBMS_STATS export and import procedures do not use EXP and IMP dump files.
Before exporting statistics, you first need to create a table for holding the statistics. This statistics table
is created using the procedure DBMS_STATS.CREATE_STAT_TABLE. After this table is created, then you
can export statistics from the data dictionary into your statistics table using the
DBMS_STATS.EXPORT_*_STATS procedures. The statistics can then be imported using the
DBMS_STATS.IMPORT_*_STATS procedures.
Note that the optimizer does not use statistics stored in a user-owned table. The only statistics used by
the optimizer are the statistics stored in the data dictionary. In order to have the optimizer use the
statistics in a user-owned tables, you must import those statistics into the data dictionary using the
statistics import procedures.
In order to move statistics from one database to another, you must first export the statistics on the first
database, then copy the statistics table to the second database, using the EXP and IMP utilities or other
mechanisms, and finally import the statistics into the second database.
Note:
The EXP and IMP utilities export and import optimizer statistics from the database along with the table.
One exception is that statistics are not exported with the data if a table has columns with system-generated names.
Restoring Statistics Versus Importing or Exporting Statistics
The functionality for restoring statistics is similar in some respects to the functionality of importing and exporting statistics. In general, you should use the restore capability when:
* You want to recover older versions of the statistics, for example, to restore the optimizer behavior to an earlier date.
* You want the database to manage the retention and purging of statistics histories.
You should use the EXPORT/IMPORT_*_STATS procedures when:
* You want to experiment with multiple sets of statistics and change the values back and forth.
* You want to move the statistics from one database to another database. For example, moving statistics
from a production system to a test system.
* You want to preserve a known set of statistics for a longer period of time than the desired retention
date for restoring statistics.
1. Create the statistics table.
exec DBMS_STATS.CREATE_STAT_TABLE(ownname =>'SCHEMA_NAME' ,stat_tab => 'STATS_TABLE' ,
tblspace => 'STATS_TABLESPACE');
>>>>>>>> For 10G
exec DBMS_STATS.CREATE_STAT_TABLE(ownname =>'SYSTEM',stat_tab => 'STATS_TABLE');
>>>>>>>> For 9i and earlier
begin
DBMS_STATS.CREATE_STAT_TABLE('dba_admin','STATS_TABLE');
end;
2. Export statistics to statistics table
EXEC DBMS_STATS.EXPORT_SCHEMA_STATS('ORIGINAL_SCHEMA' ,'STATS_TABLE',NULL,'SYSTEM');
3. Import statistics into the data dictionary.
exec DBMS_STATS.IMPORT_SCHEMA_STATS('NEW_SCHEMA','STATS_TABLE',NULL,'SYSTEM');

4. Drop the statistics table.


exec DBMS_STATS.DROP_STAT_TABLE('SYSTEM','STATS_TABLE');
FOR 9i
begin
DBMS_STATS.CREATE_STAT_TABLE('dba_admin','STATISTICS_TABLE_060307');
end;
begin
DBMS_STATS.EXPORT_SCHEMA_STATS('SAPBP2' ,'STATISTICS_TABLE_060307',NULL,'DBA_ADMIN');
end;
SQL> exec DBMS_STATS.IMPORT_SCHEMA_STATS('SAGAR','STATISTICS_TABLE_060307',NULL,
'SAGAR');
PL/SQL procedure successfully completed.
>>>>>>>>> Monitor the Export Process >>>>>>>>
select count(*) from &STATS_NAME;
The stats table can grow quickly, so watch its size while the export is active:
select sum(bytes)/1000000 from dba_extents where segment_name='&TABLE_NAME';
Sample statistics at SAP BW System of size 4.2 Tera bytes
Time Elapsed for Export : 40 Mins
Total stats Table Size : 2GB
Time Elapsed for Import :
How to Validate that Stats are Reflected after exp/imp
select table_name, num_rows, blocks, empty_blocks,
avg_space, chain_cnt, avg_row_len
from dba_tables where owner='&USER';
Run this at both databases and check that the figures are very similar.
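If the two result sets are pulled into a client, the comparison can be automated; a hypothetical Python sketch that flags tables whose NUM_ROWS differ by more than a chosen tolerance (the table names and counts are illustrative):

```python
def stats_mismatches(src: dict, dst: dict, tolerance: float = 0.05) -> list:
    """Return tables whose NUM_ROWS differ by more than `tolerance`
    (as a fraction) between source and destination, or that are missing
    from the destination entirely."""
    bad = []
    for table, rows in src.items():
        other = dst.get(table)
        if other is None or (rows and abs(rows - other) / rows > tolerance):
            bad.append(table)
    return bad

src = {'ORDERS': 400_000_000, 'CUSTOMERS': 1_200_000}
dst = {'ORDERS': 400_000_000, 'CUSTOMERS': 900_000}
print(stats_mismatches(src, dst))  # → ['CUSTOMERS']
```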
Virtual Indexes in 9i & 10g
http://www.dbazine.com/blogs/blog-cf/chrisfoot/blogentry.2007-05-31.6959101573
Building our virtual index using the NOSEGMENT clause.
07:59:12 orcl> create index hr.emp2_emp_id_virtual on hr.employees2(employee_id) nosegment;
Index created.
Setting the hidden startup parameter "_use_nosegment_indexes" to TRUE so that our
session will recognize our new virtual index.
08:00:09 orcl> alter session set "_use_nosegment_indexes" = true;
Running our statement again to see if it will use our new virtual index. Check out the access path below.
The optimizer has chosen our virtual index.
1 select employee_id, a.department_id, b.department_name
2 from
3 hr.departments b, hr.employees2 a
4 where
5 a.department_id = b.department_id
6* and employee_id = 203
Execution Plan
----------------------------------------------------------
Plan hash value: 2516110069

----------------------------------------------------------------------------------------------------
| Id | Operation                    | Name                | Rows | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------------------------------
|  0 | SELECT STATEMENT             |                     |    1 |    25 |     3   (0)| 00:00:01 |
|  1 |  NESTED LOOPS                |                     |    1 |    25 |     3   (0)| 00:00:01 |
|  2 |   TABLE ACCESS BY INDEX ROWID| EMPLOYEES2          |    1 |     9 |     2   (0)| 00:00:01 |
|* 3 |    INDEX RANGE SCAN          | EMP2_EMP_ID_VIRTUAL |    1 |       |     1   (0)| 00:00:01 |
|  4 |   TABLE ACCESS BY INDEX ROWID| DEPARTMENTS         |    1 |    16 |     1   (0)| 00:00:01 |
|* 5 |    INDEX UNIQUE SCAN         | DEPT_ID_PK          |    1 |       |     0   (0)| 00:00:01 |
----------------------------------------------------------------------------------------------------

Setting "_use_nosegment_indexes" to FALSE. Note that the optimizer did NOT choose the virtual index.
08:01:09 orcl> alter session set "_use_nosegment_indexes" = false;
Session altered.
08:01:33 orcl> select employee_id, a.department_id, b.department_name
08:01:47 2 from
08:01:47 3 hr.departments b, hr.employees2 a
08:01:47 4 where
08:01:47 5 a.department_id = b.department_id
08:01:47 6 and employee_id = 203;
Execution Plan
----------------------------------------------------------
Plan hash value: 2641883601

---------------------------------------------------------------------------------------------
| Id | Operation                    | Name        | Rows | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------------------------
|  0 | SELECT STATEMENT             |             |    1 |    25 |   818   (3)| 00:00:10 |
|  1 |  NESTED LOOPS                |             |    1 |    25 |   818   (3)| 00:00:10 |
|* 2 |   TABLE ACCESS FULL          | EMPLOYEES2  |    1 |     9 |   817   (3)| 00:00:10 |
|  3 |   TABLE ACCESS BY INDEX ROWID| DEPARTMENTS |    1 |    16 |     1   (0)| 00:00:01 |
|* 4 |    INDEX UNIQUE SCAN         | DEPT_ID_PK  |    1 |       |     0   (0)| 00:00:01 |
---------------------------------------------------------------------------------------------

Executing DBMS_STATS to gather statistics on both the virtual and the standard index. I have run tests with statistics and without, and it does seem to affect virtual index access paths.
08:21:57 orcl> exec dbms_stats.gather_index_stats('HR', 'EMP2_EMP_ID_NON_VIRTUAL');
PL/SQL procedure successfully completed.
08:23:10 orcl> exec dbms_stats.gather_index_stats('HR', 'EMP2_EMP_ID_VIRTUAL');
PL/SQL procedure successfully completed.
Looking for information on indexes built on the EMPLOYEES2 table.
Oracle returns a row for the standard index but not the virtual index.
08:20:31 orcl> select index_name, last_analyzed from dba_indexes where
2* table_name = 'EMPLOYEES2'

INDEX_NAME                     LAST_ANAL
------------------------------ ---------
EMP2_EMP_ID_NON_VIRTUAL        31-MAY-07
Determining if we can find the virtual index in DBA_SEGMENTS. No success.
08:26:09 orcl> select segment_name, segment_type from dba_segments where segment_name like 'EMP2%';

SEGMENT_NAME         SEGMENT_TYPE
-------------------- ------------------
EMP2_EMP_ID_NON_VIRT INDEX
UAL
Looking for the virtual index in DBA_OBJECTS. Finally, we find some sort of evidence that the virtual index exists in the database!
08:30:21 orcl> col object_name for a30
08:30:29 orcl> r
1 select object_name, object_type, created, status, temporary
2* from dba_objects where object_name like 'EMP2%'

OBJECT_NAME                    OBJECT_TYPE         CREATED   STATUS  T
------------------------------ ------------------- --------- ------- -
EMP2_EMP_ID_NON_VIRTUAL        INDEX               31-MAY-07 VALID   N
EMP2_EMP_ID_VIRTUAL            INDEX               31-MAY-07 VALID   N
How to retrieve entire SQL + Execution PLAN from Statspack
To retrieve SQL plan , you need to have statspack running on level 7
http://orafaq.com/node/1405
1. sprepsql.sql
The SQL report (sprepsql.sql)is a report for a specific SQL statement.
The SQL report is usually run after examining the high-load SQL sections
of the instance health report.The SQL report provides detailed
statistics and data for a single SQL statement (as identified by the
Hash Value in Statspack report).
2. Hash Value is known
- Select * from STATS$SQLTEXT where hash_value='%from stats pack%' order by piece;
- For an Object first locate the OBJECT#
select * from sys.obj$ where name='TRANSACTION'
select snap_time
snap_id,
plan_hash_value,
OBJECT# ,
OBJECT_NAME ,
OPERATION ,
OPTIONS ,
COST ,
IO_COST ,
CARDINALITY ,
POSITION ,
CPU_COST ,
OPTIMIZER ,
SEARCH_COLUMNS ,
BYTES ,
DISTRIBUTION ,
TEMP_SPACE ,
ACCESS_PREDICATES ,
FILTER_PREDICATES
from stats$SQL_PLAN a , STATS$SNAPSHOT b where object#='&&OBJECT_ID' and a.snap_id=b.snap_id

Locate Hard hitting SQL from Statpack Reposistory


1. Login as PERFSTAT user on database.
It won't work unless U login as PERFSTAT user.
2. Find DBID using
"select dbid from stats$sql_summary"
3. Locate MIN(SNAP_ID) pBgnSnap & MAX(SNAP_ID) pEndSnap from
select min(snap_id),max(snap_id),min(snap_time),max(snap_time) from stats$snapshot
where to_number(to_char(snap_time,'HH24')) > 10 and to_number(to_char(snap_time,'HH24')) < 13 and
trunc(snap_time)=trunc(sysdate)
Show All SQL Stmts ordered by Logical Reads
select
  e.hash_value "E.HASH_VALUE"
, e.module "Module"
, e.buffer_gets - nvl(b.buffer_gets,0) "Buffer Gets"
, e.executions - nvl(b.executions,0) "Executions"
, Round( decode ((e.executions - nvl(b.executions, 0)), 0, to_number(NULL)
        , (e.buffer_gets - nvl(b.buffer_gets,0)) /
          (e.executions - nvl(b.executions,0))) ,3) "Gets / Execution"
, Round(100*(e.buffer_gets - nvl(b.buffer_gets,0))/sp920.getGets(:pDbID,:pInstNum,:pBgnSnap,:pEndSnap,'NO'),3) "Percent of Total"
, Round((e.cpu_time - nvl(b.cpu_time,0))/1000000,3) "CPU (s)"
, Round((e.elapsed_time - nvl(b.elapsed_time,0))/1000000,3) "Elapsed (s)"
, Round(e.fetches - nvl(b.fetches,0)) "Fetches"
, sp920.getSQLText ( e.hash_value , 400) "SQL Statement"
from stats$sql_summary e
   , stats$sql_summary b
where b.snap_id(+)         = :pBgnSnap
  and b.dbid(+)            = e.dbid
  and b.instance_number(+) = e.instance_number
  and b.hash_value(+)      = e.hash_value
  and b.address(+)         = e.address
  and b.text_subset(+)     = e.text_subset
  and e.snap_id            = :pEndSnap
  and e.dbid               = :pDbId
  and e.instance_number    = :pInstNum
order by 3 desc;
Show SQL Stmts where SQL_TEXT like '%'
select
  e.hash_value "E.HASH_VALUE"
, e.module "Module"
, e.buffer_gets - nvl(b.buffer_gets,0) "Buffer Gets"
, e.executions - nvl(b.executions,0) "Executions"
, Round( decode ((e.executions - nvl(b.executions, 0)), 0, to_number(NULL)
        , (e.buffer_gets - nvl(b.buffer_gets,0)) /
          (e.executions - nvl(b.executions,0))) ,3) "Gets / Execution"
, Round(100*(e.buffer_gets - nvl(b.buffer_gets,0))/sp920.getGets(:pDbID,:pInstNum,:pBgnSnap,:pEndSnap,'NO'),3) "Percent of Total"
, Round((e.cpu_time - nvl(b.cpu_time,0))/1000000,3) "CPU (s)"
, Round((e.elapsed_time - nvl(b.elapsed_time,0))/1000000,3) "Elapsed (s)"
, Round(e.fetches - nvl(b.fetches,0)) "Fetches"
, sp920.getSQLText ( e.hash_value , 400) "SQL Statement"
from stats$sql_summary e
   , stats$sql_summary b
where b.snap_id(+)         = :pBgnSnap
  and b.dbid(+)            = e.dbid
  and b.instance_number(+) = e.instance_number
  and b.hash_value(+)      = e.hash_value
  and b.address(+)         = e.address
  and b.text_subset(+)     = e.text_subset
  and e.snap_id            = :pEndSnap
  and e.dbid               = 2863128100
  and e.instance_number    = :pInstNum
  and sp920.getSQLText ( e.hash_value , 400) like '%ZPV_DATA%'
order by 3 desc;
Locate server workload from Statspack for days in the past
Change a.statistic# to the respective value.

Stats for working hours (08:00-18:00):

select to_char(trunc(b.snap_time),'DD-MM-YYYY'), a.statistic#, a.name, sum(a.value)
from stats$sysstat a, stats$snapshot b
where a.snap_id = b.snap_id
and trunc(b.snap_time) > trunc(sysdate - 30)
and to_number(to_char(b.snap_time,'HH24')) > 8
and to_number(to_char(b.snap_time,'HH24')) < 18
and a.statistic# = 54
group by trunc(b.snap_time), a.statistic#, a.name
order by trunc(b.snap_time);
Locate the kind of stats you want to pull from Statspack
(select * from STATS$SYSSTAT where name like '%XXX%')

Logical reads
9   session logical reads

Physical reads/writes
54  physical reads
56  physical reads direct
58  physical read bytes
39  physical read total bytes
42  physical write total bytes
66  physical write bytes
66  physical writes

CPU related
355 OS Wait-cpu (latency) time
328 parse time cpu
8   recursive cpu usage

Rollback related
176 transaction tables consistent read rollbacks
180 rollbacks only - consistent read gets
181 cleanouts and rollbacks - consistent read gets
187 transaction rollbacks
5   user rollbacks
239 IMU CR rollbacks
186 rollback changes - undo records applied
Sample report built using the SQL in the post Stats_Report.

Stats for the entire day:

select to_char(trunc(b.snap_time),'DD-MM-YYYY'), a.statistic#, a.name, sum(a.value)
from stats$sysstat a, stats$snapshot b
where a.snap_id = b.snap_id
and trunc(b.snap_time) > trunc(sysdate - 30)
and a.statistic# = 54
group by trunc(b.snap_time), a.statistic#, a.name
order by trunc(b.snap_time);
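One caveat worth noting: values in STATS$SYSSTAT are cumulative since instance startup, so summing them across snapshots overstates the daily work. A per-day delta (max minus min) may be closer to the actual workload; a hedged sketch, under the assumption that the instance was not restarted within a day:

```sql
-- Daily delta for a single statistic (here 54, physical reads).
-- A restart resets the cumulative counters and would make
-- the max-min delta for that day misleading.
select trunc(b.snap_time) snap_day,
       a.name,
       max(a.value) - min(a.value) daily_delta
from stats$sysstat a, stats$snapshot b
where a.snap_id = b.snap_id
and a.statistic# = 54
and b.snap_time > trunc(sysdate - 30)
group by trunc(b.snap_time), a.name
order by 1;
```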
How to monitor RMAN backups

Datafiles backed up during the past 24 hours:

SELECT dbfiles||' from '||numfiles "Datafiles backed up",
cfiles "Control Files backed up", spfiles "SPFiles backed up"
FROM (select count(*) numfiles from sys.v_$datafile),
(select count(*) dbfiles
from sys.v_$backup_datafile a, sys.v_$datafile b
where a.file# = b.file#
and a.completion_time > sysdate - 1),
(select count(*) cfiles from sys.v_$backup_datafile
where file# = 0 and completion_time > sysdate - 1),
(select count(*) spfiles from sys.v_$backup_spfile
where completion_time > sysdate - 1);
Archive log files backed up during the past 24 hours:
SELECT backedup||' from '||archived "Archlog files backed up",
ondisk "Archlog files still on disk"
FROM (select count(*) archived
from sys.v_$archived_log where completion_time > sysdate - 1),
(select count(*) backedup from sys.v_$archived_log
where backup_count > 0
and completion_time > sysdate - 1),
(select count(*) ondisk from sys.v_$archived_log
where archived = 'YES' and deleted = 'NO')
RMAN Backups Still Running:
SELECT to_char(start_time,'DD-MON-YY HH24:MI') "BACKUP STARTED",
sofar, totalwork,
elapsed_seconds/60 "ELAPSE (Min)",
round(sofar/totalwork*100,2) "Complete%"
FROM sys.v_$session_longops
WHERE compnam = 'dbms_backup_restore'
/
BACKUP STARTED,SOFAR,TOTALWORK,ELAPSE (Min),Complete%
27-JUN-07 09:54,755,45683,2.73333333333333,1.65
27-JUN-07 09:52,1283,10947,4.36666666666667,11.72
27-JUN-07 09:46,11275,11275,0.783333333333333,100
27-JUN-07 09:46,58723,58723,5.73333333333333,100
27-JUN-07 09:46,12363,12363,0.333333333333333,100
27-JUN-07 09:44,11115,11115,4.53333333333333,100
27-JUN-07 09:44,12371,12371,0.183333333333333,100
27-JUN-07 07:34,4706,4706,0.166666666666667,100
27-JUN-07 07:34,83729,83729,118.35,100
27-JUN-07 05:21,8433,8433,0.333333333333333,100
27-JUN-07 05:21,83729,83729,132.25,100