1 Introduction
Hi, the goal of this paper is not to make you an expert in tuning, but to introduce the several areas of the database you must tune; it is the help I would have appreciated receiving in time, so I think it will be useful for newcomers to Oracle tuning.
I tried not to include complex explanations, only the basics.
If you want to become a serious tuning DBA you have to read some books, some thousands of pages, and solve real performance tuning problems. There are dozens of points you must remember when tuning, including bugs. Here are as many as I could gather; I will be adding more points periodically.
If you don't know what, for example, an extent is, you can find it in the Concepts manual at http://tahiti.oracle.com/.
You can also search the papers from Oracle World, they are excellent, for example http://otn.oracle.com/products/manageability/database/conf, and I keep a brief list of expert sites on my page.

Don't forget the SYS schema is special, so you should not touch it. Don't analyze SYS tables or change the buffer cache settings on SYS tables ;). But you can pin (keep in memory) the frequently used packages that live in SYS.

This paper is oriented to the Oracle 9i database and is basically based on the documentation and on advice from asktom.oracle.com, organized like a to-do list.
Tom Kyte is one of the most reliable opinions you can get from Oracle Corporation on any database problem; send any question to http://asktom.oracle.com.

Report any bug to juancarlosreyesp@yahoo.com

Thanks to Daza Software
Thanks to oracle-l@freelists.org

For more information: 95% of the information here comes from http://tahiti.oracle.com/; it is amazing what you can find in the documentation, especially since in new releases some features are improved and extended.
And remember to read the platform-specific documentation; for example, the LOCK_SGA = TRUE parameter does not work in Oracle 9i on NT.

My web page: http://www.geocities.com/juancarlosreyesp/

Quick Reference Manual:


101 basic tips for tuning Oracle 9i
Juan Carlos Reyes Pacheco
Oracle Certified Professional
6 years of experience as DBA and Developer
Daza Software S. A.
2 Table of Contents

1 Introduction
2 Table of Contents
3 Tuning Goals: What and how to tune
4 THE DESIGN: database
4.1 Optimizer
4.1.1 Optimizer Index parameters
4.1.2 DB_FILE_MULTIBLOCK_READ_COUNT MUST BE SET
4.1.3 OPTIMIZER_FEATURES_ENABLE
4.1.4 OPTIMIZER_MAX_PERMUTATIONS (4-80000)
4.1.5 OPTIMIZER_MODE
4.1.6 Good HINTS
4.2 Statistics
4.2.1 Test database
4.2.2 Recalculating statistics
4.2.3 TIMED_STATISTICS
4.3 Structure
4.3.1 Block Size
4.3.2 Temporary operations
4.3.3 Locally Managed Tablespaces
4.3.4 Organizing data in tablespaces for faster disk access
4.3.5 Undo Tablespace
4.4 Memory
4.4.1 Adding more physical memory to the server
4.4.2 Cache
4.4.3 Frequently accessed stored procedures
4.4.4 PGA Memory Management
4.4.5 Large Pool
4.4.6 Shared Pool
4.4.7 Java Pool Size
4.4.8 Shared Servers
4.4.9 Tuning Memory
4.5 Tables
4.5.1 Order of the records in the table affects execution plan
4.5.2 Null Columns
4.5.3 Triggers
4.5.4 Partitioning data
4.5.5 LOBs
4.5.6 Delete all rows
4.5.7 Faster Updates
4.5.8 Faster Inserts
4.5.9 Table and index compression
4.5.10 Materialized Views
4.5.11 PARALLEL
4.6 Locks and Transactions
4.6.1 Locks
4.6.2 Transactions
4.7 Connections
4.7.1 Avoid connecting to the database for every process you do
5 Stored Procedures
5.1 Packages
5.2 Anonymous blocks
5.3 Pinning
6 Queries
6.1 The Execution Plan
6.1.1 Full Scan vs. Index Scan
6.1.2 Binding, Hard parse
6.1.3 Histograms
6.1.4 Soft parse
6.1.5 Open Cursors
6.1.6 Avoid unnecessary sorts
6.1.7 Fixed Execution plans
6.1.8 Try to get all in one query
6.1.9 Executing counts
6.1.10 Check queries
6.2 Indexes
6.2.1 Different index types
6.2.2 Cases that avoid the use of an index
6.2.3 Verify the use of index and full scan
6.2.4 How are you getting the data
6.2.5 Index for columns used in ORDER BY
6.2.6 Specify the columns in every query
6.2.7 Verify index usage
6.2.8 Order the table in the way it is most frequently queried
6.2.9 Proper order of columns in indexes
6.2.10 If you are using a function, create a function-based index
6.2.11 Rebuild indexes periodically
6.3 Unions
6.3.1 Indexes in join columns
6.3.2 EXISTS vs. IN subqueries
6.3.3 IN () does not take NULL values
7 PL/SQL
7.1 Execute Immediate vs. dbms_sql
7.2 Bulk Collect
7.3 Updating only the necessary
7.4 Executing several DML commands faster
7.5 Use analytic functions when possible
7.6 Null Values
8 Native Compilation in Java and C++
9 THE TUNING PROCESS
9.1 Check the Alert.log file
10 Performance Analysis
10.1 Timing
10.2 SQL Trace
10.2.1 recursive calls
10.2.2 db block gets
10.2.3 consistent gets
10.2.4 physical reads
10.2.5 redo size
10.2.6 bytes sent via SQL*Net to client
10.2.7 bytes received via SQL*Net from client
10.2.8 SQL*Net roundtrips to/from client
10.2.9 sorts (memory)
10.2.10 sorts (disk)
10.2.11 rows processed
10.2.12 Cardinality (card=)
10.2.13 Cost (cost=)
10.3 Gathering Statistics for all user activity in a period
10.4 Tom Kyte's Runstats: To compare two solutions
10.5 StatPacks
10.6 Waits
10.6.1 Waits by object
11 Backups
11.1 Full Backup
11.2 Archivelog Mode
11.3 RMAN
11.4 You can also use export and import to back up
11.5 Test your database backups ALWAYS
12 24x7
12.1 Tables and indexes
12.2 Backups
12.3 Recovery
12.3.1 Backup recovery
12.3.2 Instance Recovery
13 Other Parameters
13.1 MAX_ENABLED_ROLES
14 Hardware
14.1 Don't use RAID 5
15 Operating system
15.1 NT
15.1.1 Screen Savers
15.1.2 Defragmentation
15.1.3 Cache in NT4
16 Other considerations
16.1 Bugs and hidden parameters
16.2 Try not to apply techniques that work in other databases
16.3 Test your solutions
16.4 What to optimize
16.5 Patches
17 How to get specific information from your Oracle Database
17.1 Database Objects
17.2 Views for Tuning
17.3 Version
17.3.1 Interpreting the version number
17.4 Operating System
17.5 Options available on your database release
18 Bibliography
3 Tuning Goals: What and how to tune

I think the tuning goals are basically:
• Enough as the customer needs.
• The next step is enough so that they don't complain about execution time.
An excellent article: http://www.hotsos.com/downloads/registered/00000028.pdf
Going to specific tasks, we can see the following:
• Tune response time
• Tune throughput
• Tune disk access
• Tune waits
• Tune memory usage
• Database availability
• Database recovery
To tune something you must understand how the database works; if you don't know how it works, it is going to be really hard to understand how to tune it.
Additionally, if you implement some Oracle feature to tune, you must test it first; for example, there is a bug in Developer 6i that prevents the use of IOT tables.
An introduction to wait events can be read at http://otn.oracle.com/products/manageability/database/pdf/OWPerformanceMgmtPaper.pdf
4 THE DESIGN: database

4.1 Optimizer
The optimizer is what defines the optimum access path to get the data. To get that real optimum access, the database and the session must give the optimizer enough information about what it is doing. There are parameters which instruct the optimizer how to interpret the data and what to do.
The optimizer becomes more intelligent with every Oracle release, and it can ignore some hints, even when you specify them.

4.1.1 Optimizer Index parameters
Setting both parameters can be confusing; the default values are definitively wrong, but your system can work without problems without changing them.
Even though in Oracle 9i you can tune by gathering system statistics, this does not guarantee a perfect behavior.
It is possible you will have to set these parameters for every distinct workload, since each could require a distinct value (ALTER SYSTEM if this is based on time, day OLTP, night DSS; or ALTER SESSION if this is related to a specific process).
There can be situations where these parameters could solve a bad execution plan, so there is a possibility you could get a benefit from setting them. Setting these parameters incorrectly could cause incorrect execution paths. Use them only after you test their effects in a test database and see you are getting a real improvement.
In my personal experience it is better to set them; I think the default values are definitively wrong, but there are other experienced DBAs that never changed them and don't have problems with their execution plans in big and complex databases.
Here is some advice for setting them; you should read:
http://www.evdbt.com/SearchIntelligenceCBO.doc
http://www.oracleadvice.com/Tips/optind.htm
http://www.dbazine.com/jlewis18.shtml
4.1.1.1 OPTIMIZER_INDEX_COST_ADJ (1-10000)
Default value 100; a value of 100 indicates that accessing a table through an index costs the same as accessing the whole table, WHICH IS FALSE.
Suggested values:
10-30 for an OLTP database (many inserts)
50 for Data Warehousing (big queries, few inserts)
I suggest starting with a value of 10, and then you can increase it.

4.1.1.2 OPTIMIZER_INDEX_CACHING (0-99)
This parameter sets the probability of finding blocks accessed through an index in memory.
The default value is 0, which means that no block accessed through an index will be found in memory, WHICH IS FALSE. A value of 90 is advisable.

4.1.1.3 Example
An example that shows how an incorrect value in both parameters can change your execution plan.

SQL> select * from cuentas_me where cts_cuenta = '1';
Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=CHOOSE (Cost=2 Card=2 Bytes=288)
1 0 TABLE ACCESS (BY INDEX ROWID) OF 'CUENTAS_ME' (Cost=2 Card=2 Bytes=288)
2 1 INDEX (SKIP SCAN) OF 'CST_CTS_CUENTA' (UNIQUE) (Cost=3 Card=1) ****USES INDEX
Statistics
----------------------------------------------------------
24 consistent gets ** this is 24*8K that needed to be read

SQL> ALTER SESSION SET OPTIMIZER_INDEX_COST_ADJ = 10000;
SQL> ALTER SESSION SET OPTIMIZER_INDEX_CACHING = 0;
SQL> select * from cuentas_me where cts_cuenta = '1';
Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=CHOOSE (Cost=39 Card=2 Bytes=288)
1 0 TABLE ACCESS (FULL) OF 'CUENTAS_ME' (Cost=39 Card=2 Bytes=288) **** DOES A FULL SCAN
Statistics
----------------------------------------------------------
388 consistent gets ** this is 388*8K that needed to be read, more than 1,600% more

4.1.2 DB_FILE_MULTIBLOCK_READ_COUNT MUST BE SET
This parameter indicates the number of blocks that are read by the operating system at once; the optimizer uses this value to evaluate the cost of a full scan against access through an index.
This value is operating-system dependent, so it is better to ask for information about it.
In Windows NT the maximum I/O size is 128K; in consequence, with an 8K block size the value should be 128/8 = 16.
Tom Kyte does an excellent analysis of how to set this parameter in his book "Effective Oracle by Design".
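As an illustration only (the right number depends on your operating system and block size, as explained above), this is how you might set it for an 8K block size and a 128K maximum I/O size; you can also try it per session before changing it system-wide:

-- hypothetical values: 128K OS I/O size / 8K block size = 16
ALTER SESSION SET DB_FILE_MULTIBLOCK_READ_COUNT = 16;
ALTER SYSTEM  SET DB_FILE_MULTIBLOCK_READ_COUNT = 16;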
4.1.3 OPTIMIZER_FEATURES_ENABLE
With every Oracle release the optimizer changes the way it interprets the information, and new features are introduced. Because all these changes can cause old execution plans "to fail", Oracle offers a way to use a previous optimizer behavior.
It is not advisable to change this parameter.
4.1.4 OPTIMIZER_MAX_PERMUTATIONS (4-80000)
This parameter indicates the maximum number of permutations the optimizer analyzes before generating an execution plan when there are table joins. A value of 80,000 indicates there is no limit.
This value is used to reduce the time needed to get an execution plan, and to get a better execution plan in the tuning phase.
This parameter works together with the _OPTIMIZER_SEARCH_LIMIT parameter (NOTE: the underscore means this is a hidden parameter, only to be modified by experts).
What some experts do when tuning complex queries intensively is to set _OPTIMIZER_SEARCH_LIMIT equal to the number of tables in the query and increase OPTIMIZER_MAX_PERMUTATIONS to 80,000, to get the best execution plan. Once they get it, they end the optimization and use hints to force that best execution plan.
It is not advisable to change this parameter permanently.
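A minimal sketch of that tuning-session workflow, assuming the query under study joins six tables (remember the hidden parameter is only for experts, and none of this should be made permanent):

-- during a tuning session only
ALTER SESSION SET "_OPTIMIZER_SEARCH_LIMIT" = 6;
ALTER SESSION SET OPTIMIZER_MAX_PERMUTATIONS = 80000;
-- run the problem query, note the plan, then force that plan with hints
-- and end the session so the change does not persist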
4.1.5 OPTIMIZER_MODE
The optimizer chooses between a cost-based approach and a rule-based approach, depending on whether statistics are available.
CHOOSE
This is the default value; it uses database statistics to get the best execution plan.
ALL_ROWS
Cost-based approach to get the minimum resource use to complete the entire statement, for example for reports.
FIRST_ROWS_n
Cost-based approach, to get the best response time to return the first n rows; n can equal 1, 10, 100, or 1000. Oracle suggests using it, for example:
ALTER SESSION SET OPTIMIZER_MODE = FIRST_ROWS_10;
SELECT /*+ FIRST_ROWS(10) */ column FROM table
(The plain FIRST_ROWS hint is not suggested; FIRST_ROWS(n) is the improved version.)
FIRST_ROWS
Uses a mix of cost and heuristics to find a best plan for fast delivery of the first few rows.
Note: using heuristics sometimes leads the CBO to generate a plan whose cost is significantly larger than the cost of a plan without applying the heuristic. FIRST_ROWS is available for backward compatibility and plan stability.
RULE
This is discontinued from 10g, so it doesn't deserve attention; it was a method that used fixed rules to get the execution path, and as it didn't use statistics it was frequently inaccurate.

4.1.6 Good HINTS
A hint adds weight to a cost estimation toward some CBO behavior; this means that if, even with the hint, this is not a good execution path for the CBO, it will be ignored.
Tom Kyte has a good list of hints (hints you can use when needed); if a hint is not on the good list, it would be on the other kind of list (hints you should not use without a really good reason).

4.1.6.1 ALL_ROWS
Optimize a statement block for best throughput (minimum total resource consumption).
SELECT /*+ ALL_ROWS */ columns FROM table

4.1.6.2 FIRST_ROWS(n) or FIRST_ROWS
The hints FIRST_ROWS(n) (where n is any positive integer) or FIRST_ROWS instruct Oracle to optimize an individual SQL statement for fast response.
FIRST_ROWS(n)
Optimize to return the first n rows most efficiently.
SELECT /*+ FIRST_ROWS(7) */ columns FROM emp
The optimizer ignores this hint in DELETE and UPDATE statement blocks and in SELECT statement blocks that contain any of the following syntax:
• Set operators (UNION, INTERSECT, MINUS, UNION ALL)
• GROUP BY clause
• FOR UPDATE clause
• Aggregate functions
• DISTINCT operator
• ORDER BY clauses, when there is no index on the ordering columns
These statements cannot be optimized for best response time, because Oracle must retrieve all rows accessed by the statement before returning the first row.
If you specify either the ALL_ROWS or the FIRST_ROWS hint in a SQL statement, and if the data dictionary does not have statistics about the tables accessed by the statement, then the optimizer uses default statistical values (such as allocated storage for such tables) to estimate the missing statistics.

4.1.6.3 CHOOSE
Causes the optimizer to choose between the rule-based and cost-based approaches for a SQL statement. The optimizer bases its selection on the presence of statistics for the tables accessed by the statement.
4.1.6.4 (NO)REWRITE
The (NO)REWRITE hint forces the cost-based optimizer to rewrite (or not rewrite) a query in terms of materialized views, when possible, without cost consideration.

4.1.6.5 DRIVING_SITE
It is useful if you are using a distributed query.
SELECT /*+ DRIVING_SITE(table) */ * FROM table2, table@remote;
Without the hint, rows from table@remote are sent to the local site, and the join is executed there. With the hint, the rows are sent to the remote site, the query is executed there, and the result is returned to the local site.

4.1.6.6 (NO)PARALLEL
Specify the desired number of concurrent servers that can be used for a parallel operation.
SELECT /*+ PARALLEL(table, 3) */ ename ...
SELECT /*+ NOPARALLEL(table) */ ...

4.1.6.7 (NO)APPEND
APPEND enables direct-path (faster) inserts; NOAPPEND, conventional inserts.

4.1.6.8 CURSOR_SHARING_EXACT
If you had set CURSOR_SHARING to fix binding problems, you can use this hint to make a particular query behave as if CURSOR_SHARING were set to EXACT.

4.1.6.9 DYNAMIC_SAMPLING
Enables dynamic sampling if all of the following conditions are true:
• There is more than one table in the query.
• Some table has not been analyzed and has no indexes.
• The optimizer determines that a relatively expensive table scan would be required for this unanalyzed table.
Basically it lets the SQL optimizer interrogate the database table that is not analyzed but is used in a query with other tables, before parsing the query. So the database "can have a clue" as to the statistics regarding the unanalyzed table.
For Global Temporary Tables, use at least a level of 2 in order to get all unanalyzed tables (the GTT in this case) to be sampled (since 0 disables this and 1 doesn't do anything if an index exists).

4.1.6.10 CARDINALITY
It works for tables returned by procedures (table functions); setting it indicates the number of records you will get.
SELECT /*+ CARDINALITY(table 10) */ ...
If you are using it in a subquery, include a WHERE ROWNUM > 0 in the subquery; for more information read:
http://asktom.oracle.com/pls/ask/f?p=4950:8:::::F4950_P8_DISPLAYID:3779680732446

4.2 Statistics
The optimizer works based on statistics; if they are old or inaccurate you'll get a wrong execution plan.
You must recalculate statistics every time an important change has happened in the data:
• Periodically, based on normal changes in the database (make this automatic)
• After importing a big amount of data
• When the distinct values in primary columns change
• After creating indexes and tables

4.2.1 Test database
If this is the first time you gather statistics, you must remember that some databases have fixed execution plans, or other considerations can cause a statistics recalculation to create serious problems; as a general rule do it first in a test database, before doing it in the production database.

4.2.2 Recalculating statistics
Oracle recommends: DON'T USE ANALYZE to gather statistics, USE the DBMS_STATS package; this package gathers more statistics, especially for new features.
Don't execute DBMS_STATS on the SYS schema.
This package has dozens of options, like parallel execution, etc.; you must read about them. To gather all statistics:
EXEC DBMS_STATS.GATHER_DATABASE_STATS();
To gather statistics in a schema:
EXEC DBMS_UTILITY.ANALYZE_SCHEMA('ADM','COMPUTE');
To gather statistics in a schema more precisely (the one we use, because our database is small):
EXEC DBMS_STATS.GATHER_SCHEMA_STATS( OWNNAME=>'ADM', ESTIMATE_PERCENT=>100, METHOD_OPT=>'FOR ALL COLUMNS SIZE SKEWONLY');
Additionally there are several features about statistics, like gathering statistics automatically; you should investigate them.

4.2.3 TIMED_STATISTICS
The TIMED_STATISTICS parameter should be TRUE (it is its default value).
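For a single table that changed a lot (for example after a big load), a minimal sketch using the same ADM schema; the table name here is only illustrative, and CASCADE=>TRUE also gathers the statistics of its indexes:

BEGIN
  DBMS_STATS.GATHER_TABLE_STATS( OWNNAME => 'ADM', TABNAME => 'CUENTAS_ME',
    ESTIMATE_PERCENT => 100, METHOD_OPT => 'FOR ALL COLUMNS SIZE SKEWONLY',
    CASCADE => TRUE );   -- CASCADE also gathers the index statistics
END;
/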
4.3 Structure

4.3.1 Block Size
It depends on what data you get, the size of the rows, etc.
A good size is 8K for an OLTP or small DSS database. You can test different sizes and see if performance improves; remember you can set different block sizes in different tablespaces.
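A minimal sketch of a tablespace with a non-default block size (file name and sizes are only illustrative); a buffer cache for that block size must exist before the tablespace can be created:

-- a cache for 16K blocks is required first
ALTER SYSTEM SET DB_16K_CACHE_SIZE = 16M;
CREATE TABLESPACE TBL_DSS_16K
  DATAFILE 'D:\oraxxx\datafiles\DFL_DSS16K_xxx' SIZE 100M
  BLOCKSIZE 16K
  EXTENT MANAGEMENT LOCAL;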
4.3.2 Temporary operations
You should create a TEMPORARY tablespace; this is a special tablespace for temporary operations, optimized for that purpose: for example it doesn't save recovery information in the log files, which makes it faster.
Example:
CREATE DATABASE xxx
....
DEFAULT TEMPORARY TABLESPACE TBL_TEMP TEMPFILE
'D:\oraxxx\datafiles\DFL_TEMP_xxx'
SIZE 50M
REUSE AUTOEXTEND ON NEXT 640K MAXSIZE UNLIMITED
EXTENT MANAGEMENT LOCAL UNIFORM SIZE 640K
....
;
You can verify that it is temporary with the following query:
SQL> SELECT CONTENTS FROM DBA_TABLESPACES WHERE TABLESPACE_NAME = 'TBL_TEMP';
CONTENTS
---------
TEMPORARY
SQL>
To be effective you must assign the temporary tablespace to the users:
CREATE USER adm
DEFAULT TABLESPACE TBL_USERS
TEMPORARY TABLESPACE TBL_TEMP
PROFILE ADM_PROFILE
/
4.3.3 Locally Managed Tablespaces
Locally managed tablespaces have the following advantages over dictionary managed tablespaces (the old kind):
• Local management of extents automatically tracks adjacent free space, eliminating the need to coalesce free extents.
• Local management of extents avoids recursive space management operations. Such recursive operations can occur in dictionary managed tablespaces if consuming or releasing space in an extent results in another operation that consumes or releases space in a data dictionary table or rollback segment.
You specify it with the LOCAL clause.

4.3.3.1 Segment Space Management in Locally Managed Tablespaces
When you create a locally managed tablespace using the CREATE TABLESPACE statement, the SEGMENT SPACE MANAGEMENT clause lets you specify how free and used space within a segment is to be managed. Your choices are:
AUTO (SUGGESTED). This keyword tells Oracle that you want to use bitmaps to manage the free space within segments. A bitmap, in this case, is a map that describes the status of each data block within a segment with respect to the amount of space in the block available for inserting rows. As more or less space becomes available in a data block, its new state is reflected in the bitmap. Bitmaps enable Oracle to manage free space more automatically; thus, this form of space management is called automatic segment-space management.
MANUAL. This keyword tells Oracle that you want to use free lists for managing free space within segments. Free lists are lists of data blocks that have space available for inserting rows. MANUAL is the default.

4.3.3.2 Extents Management in Locally Managed Tablespaces
You can have Oracle manage extents for you automatically with the AUTOALLOCATE option (the default), or you can specify that the tablespace is managed with uniform extents of a specific size (UNIFORM SIZE).
If the tablespace is expected to contain objects of varying sizes requiring different extent sizes and having many extents, then AUTOALLOCATE is the best choice. If it is not important for you to have a lot of control over space allocation and deallocation, AUTOALLOCATE presents a simplified way for you to manage a tablespace. Some space may be wasted, but the benefit of having Oracle manage your space most likely outweighs this drawback.
On the other hand, if you want exact control over unused space, and you can predict exactly the space to be allocated for an object or objects and the number and size of extents, then UNIFORM is a good choice. This ensures that you will never have an unusable amount of space in your tablespace.
For example, the one we use is:
CREATE TABLESPACE TBL_readonly DATAFILE
'e:\OraxxxReadOnly\DFL_READONLY_xxx'
SIZE 100M
REUSE AUTOEXTEND ON NEXT 640K MAXSIZE UNLIMITED
SEGMENT SPACE MANAGEMENT AUTO
EXTENT MANAGEMENT LOCAL UNIFORM SIZE 64K ;
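For comparison, a minimal sketch of the AUTOALLOCATE variant with automatic segment-space management (the name and file are only illustrative):

CREATE TABLESPACE TBL_DATA DATAFILE
'D:\oraxxx\datafiles\DFL_DATA_xxx' SIZE 200M
REUSE AUTOEXTEND ON NEXT 10M MAXSIZE UNLIMITED
EXTENT MANAGEMENT LOCAL AUTOALLOCATE
SEGMENT SPACE MANAGEMENT AUTO;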
4.3.4 Organizing data in tablespaces for faster disk access
You create tablespaces to organize and distribute the contents, for example: read-only data, indexes, data, other kinds of data.
Then you can point all the datafiles of a tablespace to a specific hard disk.
To get increased performance you should use distinct physical hard disks for: tables and indexes, LOBs (columns), redo logs, operating system, Oracle software; this depends on your system and the kind of specific contention you can suffer.

4.3.4.1 Separating Data and Indexes
To get the data, the indexes and tables are read serially, so it is not true that you can get an important improvement by separating indexes and data into distinct tablespaces and physical disks.

4.3.5 Undo Tablespace
Oracle strongly recommends operating in automatic undo management mode. The database server can manage undo more efficiently, and automatic undo management mode is less complex to implement and manage.
To do it you must set the following parameters:
# Enable UNDO mode
UNDO_MANAGEMENT = AUTO
# Identify Undo Tablespace
UNDO_TABLESPACE = TBL_UNDO
# Define the undo retention time in seconds
UNDO_RETENTION = 56000
If an active transaction requires undo space and the undo tablespace does not have available space, the system starts reusing unexpired undo space. Such action can potentially cause some queries to fail with the "snapshot too old" error; the default is 900.
This parameter affects flashback behavior too.
An example of creating the undo datafile:
CREATE DATABASE xxx ....
UNDO TABLESPACE TBL_UNDO DATAFILE
'E:\oraxxx\datafiles\dfl_undo_xxx'
SIZE 50M
REUSE AUTOEXTEND ON NEXT 640K MAXSIZE UNLIMITED .... ;

4.4 Memory
Tuning memory means giving each area the precise size, giving more memory to what is more frequently accessed, to avoid that data being read from disk, etc.
An adequate execution path for a query is important, because it makes Oracle put in memory only the most important data.
First you should optimize reads from memory (LIOs, logical read calls) and then you should start to optimize reads from disk (PIOs, physical read calls).

4.4.1 Adding more physical memory to the server
There is a point where excessive data in memory starts to decrease performance, because excessive logical reads (reads of data in memory) consume CPU and latches.
Blocks in memory are not only stored in memory instead of disk; there are other processes that register and keep track of them.
Here is an excellent article from Hotsos, the registration is free:
http://www.hotsos.com/downloads/registered/00000006.pdf

4.4.2 Cache
4.4.2.1 Small Tables
To get frequently accessed (small) tables to stay in memory always, you must specify:
ALTER TABLE ADM.ACTF_UBIC_ME
STORAGE ( BUFFER_POOL KEEP);
To specify that rarely accessed tables go out of memory as soon as possible:
ALTER TABLE ADM.ACTF_UBIC_ME
STORAGE ( BUFFER_POOL RECYCLE);
You set the size of every cache with the following parameters; the sizes must be set based on your requirements and the memory available.
DB_CACHE_SIZE = 60M
DB_KEEP_CACHE_SIZE = 25M
DB_RECYCLE_CACHE_SIZE = 10M
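To help size DB_CACHE_SIZE you can look at V$DB_CACHE_ADVICE (the DB_CACHE_ADVICE parameter must be ON for it to be populated); a small, illustrative query:

SELECT size_for_estimate, estd_physical_read_factor, estd_physical_reads
FROM   v$db_cache_advice
WHERE  name = 'DEFAULT'
ORDER  BY size_for_estimate;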
4.4.3 Frequently accessed stored procedures
That making sure frequently executed stored procedures are always in memory improves performance: THIS IS FALSE. Still, to keep them in memory, create a trigger to be executed every time the database starts up:
CREATE OR REPLACE TRIGGER sys.tgr_startup
AFTER STARTUP ON DATABASE
DECLARE
BEGIN
SYS.DBMS_SHARED_POOL.KEEP( 'SCOTT.MANAGERS');
END;
/

4.4.4 PGA Memory Management
Enabling this option allows Oracle to size the SQL work areas, and saves the work of setting the several *_AREA_SIZE parameters.
You enable this functionality with this parameter:
WORKAREA_SIZE_POLICY = AUTO
And set the size of the memory Oracle will manage:
PGA_AGGREGATE_TARGET = 100M
From the memory available, deducting the memory used by the OS and other devices:
For OLTP: PGA_AGGREGATE_TARGET = (total_mem * 80%) * 20%
For DSS: PGA_AGGREGATE_TARGET = (total_mem * 80%) * 50%
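To check how automatic PGA management is doing, you can look at V$PGASTAT; a quick, illustrative query (rows such as "aggregate PGA target parameter" and "total PGA allocated" are a good start):

SELECT name, value, unit FROM v$pgastat;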
4.4.5 Large Pool
The large pool allocation heap is used in shared server systems for session memory, by parallel execution for message buffers, and by backup processes for disk I/O buffers. (Parallel execution allocates buffers out of the large pool only when PARALLEL_AUTOMATIC_TUNING is set to true.)
LARGE_POOL_SIZE = 8000000

4.4.6 Shared Pool
The shared pool contains shared cursors, stored procedures, control structures, and other structures. If you set PARALLEL_AUTOMATIC_TUNING to false, then Oracle also allocates parallel execution message buffers from the shared pool. Larger values improve performance in multi-user systems. Smaller values use less memory.
SHARED_POOL_SIZE = 50000000

4.4.7 Java Pool Size
It defines the memory used by Java in Oracle. If you don't use Java set it to JAVA_POOL_SIZE = 1000000; otherwise, for the installation, set JAVA_POOL_SIZE = 33000000.

4.4.8 Shared Servers
Shared servers is an interesting feature of Oracle, to get several clients sharing the same space of memory. This feature is useful when they do very short transactions and when they are not working all the time. Otherwise use dedicated server mode.
Shared server mode requires a different memory parameter setting.

4.4.9 Tuning Memory
Take a look at
http://download-west.oracle.com/docs/cd/B10501_01/server.920/a96533/memory.htm#44334

4.5 Tables

4.5.1 Order of the records in the table affects execution plan
The order of the records in a table affects the execution plan; periodically it could be useful to recreate the table in the order it is accessed, for example:
CREATE TABLE TEST2 AS SELECT * FROM TEST ORDER BY COLUMN1;

4.5.2 Null Columns
Columns that are usually null should be put at the end of the table, to save space.
A null column between other columns takes 1 byte, but if this column is at the end of the table it takes 0 bytes.

4.5.3 Triggers
The code in a trigger is parsed once for every execution; for example, 9 inserts (one by one, 9 parses) and one insert of 100 records (1 parse) will parse 10 times, because the code in the trigger is only cached for the duration of the trigger.
When the code is in a stored procedure and it is called from the trigger, it is only parsed once.
Pinning the object doesn't solve this problem.

4.5.4 Partitioning data
In Oracle Enterprise Edition you can partition tables based on several conditions; this speeds up especially full table scan access.

4.5.5 LOBs
Tables can have a lot of information in LOB types; you can save images, books, anything in a BLOB. That is why it is a good idea to store them in a different tablespace:
ALTER TABLE ADM.ANF_RATIOS_ME
ADD ( photo CLOB )
LOB (photo) STORE AS
(TABLESPACE TBL_BLOB);

4.5.6 Delete all rows
TRUNCATE TABLE is the fastest way; it is DDL and it doesn't generate rollback information.

4.5.7 Faster Updates
1) If you are executing millions of updates, a better option can be:
create table2 as select ... from table1;
drop table table1;
rename table2 to table1;
add indexes and constraints;
You can additionally use NOLOGGING tables (avoid redo generation) and the APPEND hint (avoid undo generation), but you must be aware of the consequences explained in "Faster Inserts". A sketch follows below.
2) If you know several rows already have the new value you want to set, do the following:
UPDATE TABLE SET COLUMN='VALUE' WHERE NOT COLUMN='VALUE';
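A minimal sketch of option 1) with illustrative table and column names; remember to recreate indexes, constraints, grants and triggers afterwards:

CREATE TABLE cuentas_new NOLOGGING AS
  SELECT cts_cuenta, saldo * 1.1 saldo   -- the "update" is done in the SELECT
  FROM   cuentas_old;
DROP TABLE cuentas_old;
RENAME cuentas_new TO cuentas_old;
-- recreate indexes, constraints, grants and triggers here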
4.5.8 Faster Inserts
0) Always, if possible, try to do it all in one command: INSERT INTO destiny SELECT * FROM source; you can use CASE to modify data in the same source SELECT.
1) To execute several inserts, deletes or updates you can disable logging; this means you will have to do a full backup after that, because it eliminates redo generation, which is needed for backups in archivelog mode. You can do it directly on the table:
ALTER TABLE ADM.ANF_RATIOS_ME NOLOGGING;
2) INSERT /*+ APPEND */ INTO ...; remember the indexes will still generate log information. This bypasses undo generation; your table will have to be committed before issuing this command, and after issuing it if you want to access it again. This is completely safe.
3) Analyze the use of import or load utilities to load that table or data; usually this is the fastest.
4) When you have to insert (if it does not exist) and update if it exists, you can use the MERGE command (a sketch follows this list).
5) If you are using loops to insert data, use bulk collect.
6) If you are not using the APPEND hint, and if it is possible in the logic of your program, you can commit frequently, to avoid the undo growing too much.
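For point 4, a minimal MERGE sketch (the table and column names are only illustrative); in 9i both the MATCHED and NOT MATCHED branches are required:

MERGE INTO cuentas_saldo d
USING nuevos_saldos s
ON ( d.cts_cuenta = s.cts_cuenta )
WHEN MATCHED THEN
  UPDATE SET d.saldo = s.saldo
WHEN NOT MATCHED THEN
  INSERT ( cts_cuenta, saldo ) VALUES ( s.cts_cuenta, s.saldo );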
4.5.9 Table and index compression
Usually by compressing tables and indexes you can get a benefit not only in space, but also in performance.
But for massive DML (updates, deletes, inserts) processing on compressed tables and compressed indexes, you could have a serious impact on performance, increasing waits.
The best is to test the performance impact you can suffer.

4.5.10 Materialized Views
A materialized view is a "table" where the contents of a query are summarized; this implies that queries can be rewritten, when possible, to use materialized views instead of tables.
This really helps to tune. Instead of reading a 1 MB table to get the sum of a column, for example, it reads it from a 1 KB table.

4.5.11 PARALLEL
In Oracle Enterprise Edition you can use PARALLEL processing on tables in several tasks; you should check this option too.

4.6 Locks and Transactions

4.6.1 Locks
Something very important to note in Oracle is the low cost of Oracle locks; they don't take as many resources as in other databases.
Automatic locks are enough; you don't need to create a sophisticated lock system, neither on tables nor on records, to guarantee the consistency of the data.
Oracle automatically shows consistent data, thanks to the logs: a query that started at 12:30 and took 3 hours always shows the data as it was at 12:30.
An exclusive lock will only hurt your system.

4.6.2 Transactions
Another advantage in Oracle is the possibility of having long transactions, with several thousands of modifications, before doing a commit.
The most advisable is to execute the COMMIT only when the whole transaction has been completed.
Unnecessary COMMITs are dangerous, because they can give inconsistent information.
4.7 Connections

4.7.1 Avoid connecting to the database for every process you do
Try to establish a permanent connection and use it.
Reconnecting every time has a performance price, because several queries and assignments for the new user connection are executed.

5 Stored Procedures

5.1 Packages
There are several reasons to use packages to group related functions and procedures, instead of simple functions or procedures; a few of them are:
• Better organization
• They can share variables
• Compiling a package body doesn't invalidate the dependent procedures calling it.
Definitively you should check their functionality and use them.

5.2 Anonymous blocks
It is always advisable to convert an anonymous block that is executed frequently into a procedure.
DECLARE
BEGIN
END;
Because every time you execute an anonymous block, it is parsed once per execution.
If you use a stored procedure (procedure/function/package), it is parsed once per session.

5.3 Pinning
When space in memory is needed, sometimes large procedures are taken out of memory, and when they are reloaded they are fragmented.
To avoid fragmentation you can pin them in memory, creating a startup trigger in the database. (You can pin triggers too.)
CREATE OR REPLACE TRIGGER sys.tgr_startup
AFTER STARTUP
ON DATABASE
DECLARE
BEGIN
SYS.DBMS_SHARED_POOL.KEEP( 'SYS.DBMS_STANDARD');
SYS.DBMS_SHARED_POOL.KEEP( 'ADM.TGR_TRANSAC_ME', 'R');
END;
/
6 Queries

6.1 The Execution Plan
The execution plan is the set of steps Oracle executes to get the data. (The SQL Trace section explains how to trace it.)
The most important thing initially is to see whether it uses an index or a full scan when necessary. For example:
0 SELECT STATEMENT Optimizer=CHOOSE (Cost=2 Card=2 Bytes=288)
1 0 TABLE ACCESS (BY INDEX ROWID) OF 'CUENTAS_ME' (Cost=2 Card=2 Bytes=288)
2 1 INDEX (SKIP SCAN) OF 'CST_CTS_CUENTA' (UNIQUE) (Cost=3 Card=1) ****USES INDEX

0 SELECT STATEMENT Optimizer=CHOOSE (Cost=39 Card=2 Bytes=288)
1 0 TABLE ACCESS (FULL) OF 'CUENTAS_ME' (Cost=39 Card=2 Bytes=288) **** DOES A FULL SCAN

card is the cardinality, the number of rows in a row set.
cost: the cost used by the CBO represents an estimate of the number of disk I/Os and the amount of CPU and memory used in performing an operation.

6.1.1 Full Scan vs. Index Scan
Both are good access paths; the idea is that if you are getting 80% of the rows from the table, a full scan is advisable, while if you are getting 1%, an index scan is the best.

6.1.2 Binding, Hard parse
Before every query, Oracle must parse the statement; this means it has to analyze it and find the best execution plan.
Once a statement is parsed, the following statements exactly like it don't parse, they simply execute.
For example a query
SELECT COLUMN FROM TABLE WHERE COLUMN='A';
becomes
SELECT COLUMN FROM TABLE WHERE COLUMN=:1;
So any query of the form SELECT COLUMN FROM TABLE WHERE COLUMN=:1 will reuse that parsed statement.
There is a problem, especially in non-Developer/2000 applications, because they don't bind the query, for example:
INCORRECT (in this case, the statement is parsed for every distinct value you query):
DECLARE
nReturn NUMBER;
BEGIN
EXECUTE IMMEDIATE 'SELECT COUNT(*) FROM ADM.TRANSAC_ME WHERE TRC_EMPRESAGESTION = ''DEF'' '
INTO nReturn;
DBMS_OUTPUT.PUT_LINE (nReturn);
END;
CORRECT (in this case, the statement is parsed once, and executed several times):
DECLARE
nReturn NUMBER;
cEmpresa VARCHAR2(3) := 'DEF';
BEGIN
EXECUTE IMMEDIATE 'SELECT COUNT(*) FROM ADM.TRANSAC_ME WHERE TRC_EMPRESAGESTION = :cEmpresa'
INTO nReturn
USING cEmpresa;
DBMS_OUTPUT.PUT_LINE (nReturn);
END;
To fix this situation there is the CURSOR_SHARING database parameter; the correct value is EXACT, but to fix all these situations temporarily it can be set to SIMILAR or FORCE.
CURSOR_SHARING = EXACT

6.1.3 Histograms
There are situations when we don't want to bind; for example when we always query a big table by sex, 20% of the table is male and 80% is female: if we query by male it should use an index, while if we query by female it should do a full scan.
In this situation we avoid the bind and gather statistics for the histogram.
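A minimal sketch of gathering a histogram for such a skewed column with DBMS_STATS (the table and column names here are only illustrative); remember the query must use literals, not binds, for the histogram to help:

BEGIN
  DBMS_STATS.GATHER_TABLE_STATS( OWNNAME => 'ADM', TABNAME => 'PERSONS',
    METHOD_OPT => 'FOR COLUMNS SEX SIZE 2' );  -- 2 buckets for the two values
END;
/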
6.1.4 Soft parse
SESSION_CACHED_CURSORS lets you specify the number of session cursors to cache. After the first "soft parse", subsequent "soft parse" calls will find the cursor in the cache and do not need to reopen the cursor. To get placed in the session cache, the same statement has to be parsed 3 times within the same cursor. Oracle uses a least recently used algorithm to remove entries in the session cursor cache to make room for new entries when needed.
Session cached cursors are a great help in reducing the latching that takes place due to excessive soft parsing (where a program parses, executes and closes a statement over and over).
The suggested value is a non-zero value.
SESSION_CACHED_CURSORS = 300

6.1.5 Open Cursors
This parameter determines the number of open cursors you can have; depending on the application you can set it to:
OPEN_CURSORS=800
In the same way you have another cursor parameter, which usually is set to false:
CURSOR_SPACE_FOR_TIME = FALSE
If you set it to true, this parameter does not close open cursors; especially if you are using Developer/2000, don't use it.
6.1.6 Avoid unnecessary sorts
Whenever possible use UNION ALL instead of UNION, because UNION sorts the data in order to eliminate identical duplicated records.
If you perform an operation on a column, create a function-based index for it.

6.1.7 Fixed Execution plans
Even when Oracle does hard work to define the best execution plan, there are several situations where you can get a better execution plan and set it using hints (IN UPPER CASE, and between /*+ */):
SELECT /*+ HINT */ COUNT(*) FROM adm.cuentas_me;
If there is a syntax error in the hint, no error is generated, it simply is not applied; and sometimes Oracle decides not to consider the hints during execution.
You could also create stored execution plans using stored outlines.

6.1.8 Try to get all in one query
It is better to get everything in only one query, to avoid unnecessary traffic to the server.
Always avoid something like
FOR a IN (SELECT x FROM y) LOOP   <-- this should be done using bulk ;)
  SELECT z FROM t WHERE u = x;
END LOOP;
Try to do this instead: SELECT x, z FROM y, t WHERE u = x
6.1.9 Executing counts
If you need to verify whether there is some row that satisfies your condition, instead of
SELECT COUNT(*) INTO nCount FROM TABLE WHERE COLUMN = 'A';
IF nCount = 0 THEN ....
do a
BEGIN
SELECT /*+ FIRST_ROWS(1) */ 'A' INTO nCount
FROM TABLE WHERE COLUMN = 'XX' ... AND ROWNUM = 1;
EXCEPTION WHEN NO_DATA_FOUND THEN NULL;
END;

SQL> SELECT COUNT(*) FROM HICUENTASF;
COUNT(*)
---------
4155458
Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=CHOOSE (Cost=379 Card=1)
1 0 SORT (AGGREGATE)
2 1 VIEW OF 'HICUENTASF' (Cost=379 Card=3919151)
3 2 UNION-ALL (PARTITION)
4 3 INDEX (FAST FULL SCAN) OF 'CST_HCF_FECHA_STATUS_RW' (NON-UNIQUE) (Cost=93 Card=1985907)
5 3 INDEX (FAST FULL SCAN) OF 'CST_HCF_FECHA_STATUS' (NON-UNIQUE) (Cost=288 Card=1933244)
Statistics
----------------------------------------------------------
7 recursive calls
0 db block gets
7966 consistent gets
7737 physical reads
0 redo size
249 bytes sent via SQL*Net to client
417 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
1 rows processed

SQL> RUN
1 SELECT COUNT(*)
2* FROM ( SELECT /*+ FIRST_ROWS(1) */ 'A' FROM HICUENTASF WHERE ROWNUM = 1)
COUNT(*)
---------
1
Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=HINT: FIRST_ROWS (Cost=381 Card=1)
1 0 SORT (AGGREGATE)
2 1 VIEW (Cost=381 Card=1)
3 2 COUNT (STOPKEY)
4 3 VIEW OF 'HICUENTASF' (Cost=381 Card=3919151)
5 4 UNION-ALL
6 5 INDEX (FAST FULL SCAN) OF 'CST_HCF_FECHA_STATUS_RW' (NON-UNIQUE) (Cost=93 Card=1985907)
7 5 INDEX (FAST FULL SCAN) OF 'CST_HCF_FECHA_STATUS' (NON-UNIQUE) (Cost=288 Card=1933244)
Statistics
----------------------------------------------------------
12 recursive calls
0 db block gets
71 consistent gets
0 physical reads
0 redo size
246 bytes sent via SQL*Net to client
417 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
1 sorts (memory)
0 sorts (disk)
1 rows processed

6.1.10 Check queries
An important source for the analysis of queries is the view V$SQLAREA; in the view V$SQLTEXT you can find the complete statement.
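For example, one simple way (among many) to list the heaviest statements from V$SQLAREA, ordered by logical reads:

SELECT * FROM
  ( SELECT sql_text, executions, buffer_gets, disk_reads
    FROM v$sqlarea
    ORDER BY buffer_gets DESC )
WHERE ROWNUM <= 10;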
6.2 Indexes

6.2.1 Different index types
6.2.1.1 Index Organized Tables (IOT)
These indexes are advised for tables which are always accessed through their primary keys.
As long as you don't use Forms Developer 6i to query them, you can use them, because there is a bug there.

6.2.1.2 Bitmapped indexes
This is a feature of Oracle Enterprise Edition, to use when you have a column with few distinct values.
But it can also give good results with columns with several distinct values; you can test it.

6.2.2 Cases that avoid the use of an index
There are several situations that definitively avoid the use of an index:
• !=, NOT, LIKE and <>
• NULL values are not indexed, but you can do a trick: in the WHERE clause use NVL(COLUMN,'~') and create a function-based index on this expression:
CREATE INDEX idxnull ON table ( NVL(COLUMN,'~') ASC );
• OR; try to use UNION instead.
In these situations you can avoid a full table scan by putting all the columns you need in an index; then Oracle will do a full index scan instead of a full table scan.

6.2.3 Verify the use of index and full scan
You must verify that when your query gets a small group of records it uses an index, and when it queries almost all the table it uses a full scan.

6.2.4 How are you getting the data
If you are getting all the data at once you can use the /*+ ALL_ROWS */ hint.
If you are getting only a few records use the /*+ FIRST_ROWS(x) */ hint.

6.2.5 Index for columns used in ORDER BY
There should be an index for the columns in the ORDER BY clause.

6.2.6 Specify the columns in every query
If the CBO finds all the columns in an index, then it queries only the index, not the table.
To optimize large tables try to put all the columns you query in the index.

6.2.7 Verify index usage
Using an index is not enough; you must verify it is using the right index.

6.2.8 Order the table in the way it is most frequently queried
For DSS (data warehousing), create the table in the order it is most frequently accessed.

6.2.9 Proper order of columns in indexes
The order of the columns in an index, even when the query executes in almost the same time, can cause lots of blocks to be read unnecessarily.

6.2.10 If you are using a function, create a function-based index
If you are using a function on an indexed column, you must create a function-based index; the function must always return the same value for the same parameters.
6.3.3 IN() NO take Null Values -- Efficient method, using a bulk bind
When comparing any value using IN with NULL values, IN don’t return SELECT Empno, Ename BULK COLLECT INTO Empno, Ename FORALL I IN 1..tCOC_CTACORR.count
TRUE hence, ignore null values as you can see in the table there are 310 FROM Emp_Tab WHERE Mgr = 7698; INSERT INTO FON.COMCLI_RW
null values in column TBL_DSM. -- Slower method, assigning each collection element within a loop. ( COC_CODCLI, COC_CTACORR, COC_FECHA,
SQL> select count(*) from daz.utl_tablas_me
counter := 1; COC_COMISION, COC_DSC, COC_COMISION_EXITO )
2 where tbl_dsm in FOR rec IN C LOOP
Empno(Counter) := rec.Empno;
VALUES
3 (select null from daz.utl_tablas_me);
COUNT(*) Ename(Counter) := rec.Ename; ( tCOC_CODCLI(i), tCOC_CTACORR(i), dFecha,
--------- Counter := Counter + 1; tCOC_COMISION(i), cDs, tCOC_COMISION_EXITO(i) )
0 END LOOP; ;
SQL> select count(*) from daz.utl_tablas_me where tbl_dsm is null;
END;
COUNT(*) 7.3 Updating only the necessary
--------- DECLARE Like in this example, I had seen several times is better update only the
359 TYPE NumList IS VARRA Y(20) OF NUMBER; necessary
A more clear example of this s ituation you can see in this way depts NumList := NumList(10, 30, 70); -- department numbers
select count(*) from dual where null in (null );
WHEN you have several columns with the values you want to update.
COUNT(*)
BEGIN Specially check the redo size.
---------
FORALL i IN depts.FIRST..depts.LAST
0 DELETE FROM emp WHERE deptno = depts(i);
END; SQL> UPDATE CUENTASF SET
7 PL/SQL 2 CUF_ AUTORIZACIONHOY=0,
Pl/sql is the SQL language used for Oracle, in the reality there is no a pure DECLARE 3 CUF_DEPOSITOHOY=0,
SQL language, every database adds its own options. Don’t waste time TYPE NumTab IS TABLE OF NUMBER(4) INDEX BY BINARY_INTEGER; 4 CUF_NRORETHOY=0,
TYPE NameTab IS TABLE OF CHAR(15) INDEX BY BINARY_INTEGER; 5 CUF_RETIROHOY = 0
trying to create “Standard” SQL could be executed in any database, else pnums NumTab;
concentrate in creating optimum for every database. 6 WHERE NOT CUF_STATUS = 'CE';
pnames NameTab;
BEGIN
147 filas actualizadas.
7.1 Execute Immediate vs. dbms_sql FOR j IN 1..5000 LOOP -- load index-by tables real: 90
dbms_sql, is a procedural API, lets you parse once, execute over and over. pnums(j) := j;
Execute Immediate NDS (native dynamic sql), easier to code but parses pnames(j) := ’Part No. ’ || TO_CHAR(j); Statistics
ONCE per execute always. END LOOP; ----------------------------------------------------------
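A minimal sketch of the parse-once / execute-many pattern described above, using a hypothetical table t(id, val); with EXECUTE IMMEDIATE the same statement would be submitted for parsing on every call:

DECLARE
   c   INTEGER := DBMS_SQL.OPEN_CURSOR;
   n   INTEGER;
BEGIN
   -- parse exactly once
   DBMS_SQL.PARSE(c, 'INSERT INTO t (id, val) VALUES (:1, :2)', DBMS_SQL.NATIVE);

   FOR i IN 1 .. 1000 LOOP
      -- bind and execute many times against the already parsed cursor
      DBMS_SQL.BIND_VARIABLE(c, ':1', i);
      DBMS_SQL.BIND_VARIABLE(c, ':2', 'row ' || TO_CHAR(i));
      n := DBMS_SQL.EXECUTE(c);
   END LOOP;

   DBMS_SQL.CLOSE_CURSOR(c);
END;
/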
7.2 Bulk Collect
There are two engines that run PL/SQL blocks and subprograms: the PL/SQL engine runs the procedural statements, while the SQL engine runs the SQL statements. During execution, every SQL statement causes a context switch between the two engines. Performance can be improved by reducing the number of context switches, using FORALL for bulk binds; without the bulk bind, PL/SQL sends a SQL statement to the SQL engine for each DML (insert, update, delete) row. You can use bulk binding with SELECT statements too (BULK COLLECT).

DECLARE
   TYPE Numlist IS VARRAY (100) OF NUMBER;
   Id NUMLIST := NUMLIST(7902, 7698, 7839);
BEGIN
   -- Efficient method, using a bulk bind
   FORALL i IN Id.FIRST..Id.LAST   -- bulk-bind the VARRAY
      UPDATE Emp_tab SET Sal = 1.1 * Sal
       WHERE Mgr = Id(i);

   -- Slower method, running the UPDATE statements within a regular loop
   FOR i IN Id.FIRST..Id.LAST LOOP
      UPDATE Emp_tab SET Sal = 1.1 * Sal
       WHERE Mgr = Id(i);
   END LOOP;
END;

DECLARE
   TYPE Var_tab IS TABLE OF VARCHAR2(20) INDEX BY BINARY_INTEGER;
   Empno VAR_TAB;
   Ename VAR_TAB;
   Counter NUMBER;
   CURSOR C IS SELECT Empno, Ename FROM Emp_tab WHERE Mgr = 7698;
BEGIN
   -- Efficient method, using a bulk bind
   SELECT Empno, Ename BULK COLLECT INTO Empno, Ename
     FROM Emp_Tab WHERE Mgr = 7698;

   -- Slower method, assigning each collection element within a loop.
   Counter := 1;
   FOR rec IN C LOOP
      Empno(Counter) := rec.Empno;
      Ename(Counter) := rec.Ename;
      Counter := Counter + 1;
   END LOOP;
END;

DECLARE
   TYPE NumList IS VARRAY(20) OF NUMBER;
   depts NumList := NumList(10, 30, 70);  -- department numbers
BEGIN
   FORALL i IN depts.FIRST..depts.LAST
      DELETE FROM emp WHERE deptno = depts(i);
END;

DECLARE
   TYPE NumTab IS TABLE OF NUMBER(4) INDEX BY BINARY_INTEGER;
   TYPE NameTab IS TABLE OF CHAR(15) INDEX BY BINARY_INTEGER;
   pnums NumTab;
   pnames NameTab;
BEGIN
   FOR j IN 1..5000 LOOP   -- load index-by tables
      pnums(j) := j;
      pnames(j) := 'Part No. ' || TO_CHAR(j);
   END LOOP;

   FORALL i IN 1..5000     -- use FORALL statement
      INSERT INTO parts VALUES (pnums(i), pnames(i));

   FOR i IN 1..5000 LOOP   -- use FOR loop
      INSERT INTO parts VALUES (pnums(i), pnames(i));
   END LOOP;
END;

To use bulk operations, read the PL/SQL User's Guide and Reference to learn about more of their features.

A real example:

DECLARE
   tCOC_CODCLI DBMS_SQL.NUMBER_TABLE;
   tCOC_CTACORR DBMS_SQL.NUMBER_TABLE;
   tCUOTAS DBMS_SQL.NUMBER_TABLE;
   tCOC_COMISION DBMS_SQL.NUMBER_TABLE;
   tCOC_COMISION_EXITO DBMS_SQL.NUMBER_TABLE;
BEGIN
   SELECT CUF_CODCLI, CUF_CTACORR, NVL(( CUF_DBCUO - CUF_CRCUO ),0 ) CUOTAS
     BULK COLLECT INTO tCOC_CODCLI, tCOC_CTACORR, tCUOTAS
     FROM CUENTASF WHERE NOT NVL(( CUF_DBCUO - CUF_CRCUO ),0 ) = 0;

   FOR i IN 1..tCOC_CTACORR.count LOOP
      tCOC_COMISION_EXITO(i) := ROUND( nPeso * nParComisionExito, 2 );
   END LOOP;

   FORALL i IN 1..tCOC_CTACORR.count
      INSERT INTO FON.COMCLI_RW
         ( COC_CODCLI, COC_CTACORR, COC_FECHA, COC_COMISION, COC_DSC, COC_COMISION_EXITO )
      VALUES
         ( tCOC_CODCLI(i), tCOC_CTACORR(i), dFecha, tCOC_COMISION(i), cDs, tCOC_COMISION_EXITO(i) );
END;

7.3 Updating only the necessary
As in this example, I have seen several times that it is better to update only the rows that actually need the change, when several columns already hold the values you want to set. Especially check the redo size.

SQL> UPDATE CUENTASF SET
  2  CUF_AUTORIZACIONHOY=0,
  3  CUF_DEPOSITOHOY=0,
  4  CUF_NRORETHOY=0,
  5  CUF_RETIROHOY = 0
  6  WHERE NOT CUF_STATUS = 'CE';

147 filas actualizadas.

real: 90

Statistics
----------------------------------------------------------
0      recursive calls
301    db block gets
171    consistent gets
0      physical reads
68804  redo size
405    bytes sent via SQL*Net to client
664    bytes received via SQL*Net from client
3      SQL*Net roundtrips to/from client
1      sorts (memory)
0      sorts (disk)
147    rows processed

SQL> rollback;
Rollback terminado.

real: 50

SQL> UPDATE CUENTASF SET
  2  CUF_AUTORIZACIONHOY=0,
  3  CUF_DEPOSITOHOY=0,
  4  CUF_NRORETHOY=0,
  5  CUF_RETIROHOY = 0
  6  WHERE NOT CUF_STATUS = 'CE' AND
  7  NOT (CUF_AUTORIZACIONHOY=0 AND CUF_DEPOSITOHOY=0 AND CUF_NRORETHOY=0 AND CUF_RETIROHOY = 0);

2 filas actualizadas.

real: 50

Statistics
----------------------------------------------------------
0      recursive calls
5      db block gets
171    consistent gets
0      physical reads
1056   redo size
405    bytes sent via SQL*Net to client
771    bytes received via SQL*Net from client
3      SQL*Net roundtrips to/from client
1      sorts (memory)
0      sorts (disk)
2      rows processed

SQL> rollback;

real: 10

7.4 Executing several DML commands faster
If you are executing several hundred inserts, you can use autocommit in SQL*Plus. This will commit periodically and improve your performance.

SQL> set autocommit 100
SQL> show autocommit
AUTOCOMMIT ON every 100 sentences DML

7.5 Use analytic functions when possible
Analytic functions are designed to get data more efficiently, comparing records and ranges of data previous or following to the current record.
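A hedged illustration of the idea in 7.5, using the classic EMP demo table: the top three salaries per department written with a correlated subquery, and then with the analytic RANK() function, which needs only one pass over the table.

-- Correlated-subquery style: EMP is visited again for every candidate row.
SELECT e.deptno, e.ename, e.sal
  FROM emp e
 WHERE 3 > (SELECT COUNT(*) FROM emp i
             WHERE i.deptno = e.deptno AND i.sal > e.sal);

-- Analytic style: one pass over EMP, the window does the comparison.
SELECT deptno, ename, sal
  FROM (SELECT deptno, ename, sal,
               RANK() OVER (PARTITION BY deptno ORDER BY sal DESC) AS sal_rank
          FROM emp)
 WHERE sal_rank <= 3;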
7.6 Null Values
Even when this is not strictly a performance issue, you should be very careful when using NULL values.

If A is:                      Evaluation result:
10     a IS NULL              FALSE
10     a IS NOT NULL          TRUE
NULL   a IS NULL              TRUE
NULL   a IS NOT NULL          FALSE
10     a = NULL               UNKNOWN
10     a != NULL              UNKNOWN
NULL   a = NULL               UNKNOWN
NULL   a != NULL              UNKNOWN
NULL   a = 10                 UNKNOWN
NULL   a != 10                UNKNOWN

NULL is different: a salary of 1000 means you get 1000, a salary of 0 means you get 0, but a salary of NULL means the value was never entered.

Here are some examples of the problems you can have using NULL values:

x := 5;
y := NULL;
IF x != y THEN    -- gives NULL, not TRUE
   ( NOT EXECUTED )
END IF;

a := NULL;
b := NULL;
IF a = b THEN     -- gives NULL, not TRUE
   ( NOT EXECUTED )
END IF;

To avoid this situation you could do the following:

x := 5;
y := NULL;
IF NVL(x,'~') != NVL(y,'~') THEN    -- gives TRUE
   ( EXECUTED )
END IF;

a := NULL;
b := NULL;
IF NVL(a,'~') = NVL(b,'~') THEN     -- gives TRUE
   ( EXECUTED )
END IF;

Or you could use IS NULL or IS NOT NULL before the comparison.

When you execute a query comparing values, the comparison ignores NULL values; you should use IS NULL. Remember that comparisons do not include NULL values; you can also put a default value in the columns.

SQL> select count(*) from adm.cuentas_me where cts_dsm like '%';

 COUNT(*)
---------
     2549

SQL> select count(*) from adm.cuentas_me where cts_dsm is null;

 COUNT(*)
---------
    13882

SQL> select count(*) from adm.cuentas_me;

 COUNT(*)
---------
    16431

8 Native Compilation in Java and C++
Usually you do in Java or C++ what you can't do with PL/SQL, or what needs to be somewhat faster; based on Tom Kyte's advice, you can get up to 2x in speed.

About Native Compilation:
http://download-west.oracle.com/docs/cd/B10501_01/appdev.920/a96590/adg10pck.htm#36041

Bryn Llewellyn and Chris Racicot show an improvement of 33% in computationally intensive procedures and 3% in procedures executing SQL statements.
http://otn.oracle.com/tech/pl_sql/pdf/PLSQL_9i_New_Features_Doc.pdf

9 THE TUNING PROCESS
Solving a problem:
• Locate the problem
• Determine the reason
• Solve
• Test the solution

9.1 Check the Alert.log file
The alert log is a file that must be checked periodically; several events are recorded there, like startup (including parameter values different from the default), shutdown, block corruption, etc.
You can find its location here:
select value from v$parameter where name = 'background_dump_dest'

10 Performance Analysis
There are several methods to analyze performance: tkprof, DBMS_PROFILER, runstats, etc. You can get more information in the documentation and in tuning books.

10.1 Timing
To get the elapsed time of your statements in SQL*Plus, set timing on:

SQL> set timing on
SQL> select count(*) from dual;

 COUNT(*)
---------
        1
real: 2

10.2 SQL Trace
The execution plan shows how Oracle will get the data. First execute the script %ORACLE_HOME%\RDBMS\ADMIN\UTLXPLAN.SQL in the server. After connecting with SQL*Plus to the database, run

SQL> set autotrace on;

To get all the statistics:

SQL> alter session set sql_trace=true;

Every time you execute a query you'll get its execution plan, for example:

SQL> select count(*) from adm.cuentas_me;

 COUNT(*)
---------
    17886

Execution Plan
----------------------------------------------------------
0      SELECT STATEMENT Optimizer=CHOOSE (Cost=5 Card=1)
1    0   SORT (AGGREGATE)
2    1     INDEX (FULL SCAN) OF 'IDX_CTS_CONTROL_PAGOS' (NON-UNIQUE) (Cost=48 Card=17886)

Statistics
----------------------------------------------------------
0      recursive calls
0      db block gets
46     consistent gets
46     physical reads
0      redo size
235    bytes sent via SQL*Net to client
417    bytes received via SQL*Net from client
2      SQL*Net roundtrips to/from client
0      sorts (memory)
0      sorts (disk)
1      rows processed
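If you only want the plan without running the statement, the same PLAN_TABLE created by UTLXPLAN.SQL can be queried directly; a minimal sketch (utlxpls.sql ships in the same rdbms/admin directory, and ? stands for ORACLE_HOME in SQL*Plus):

-- Create the plan table once per schema (the script mentioned above)
@?/rdbms/admin/utlxplan.sql

-- Ask the optimizer for the plan without executing the query
EXPLAIN PLAN FOR
  SELECT COUNT(*) FROM adm.cuentas_me;

-- Pretty-print the most recent plan from PLAN_TABLE
@?/rdbms/admin/utlxpls.sql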
You can execute a query a couple of times before comparing, because the first execution usually takes more time (it includes the parse and reading the data from disk).

10.2.1 recursive calls
SQL executed as a direct consequence of your query, for example extending a datafile.

10.2.2 db block gets
Database blocks read directly in current mode, as they are right now.

10.2.3 consistent gets
Database blocks read in consistent mode (as they were at a specific moment in time, usually when you started the query).

10.2.4 physical reads
Number of blocks read from disk. Total blocks read are "physical reads direct" plus reads into the buffer cache.

10.2.5 redo size
Amount of bytes generated for the redo process.

10.2.6 bytes sent via SQL*Net to client
Total number of bytes sent to the client.

10.2.7 bytes received via SQL*Net from client
Total number of bytes received from the client through Oracle Net.

10.2.8 SQL*Net roundtrips to/from client
Number of distinct sends to and from the client.

10.2.9 sorts (memory)
Number of ordering operations executed completely in memory.

10.2.10 sorts (disk)
Number of ordering operations requiring at least one disk read.

10.2.11 rows processed
Number of rows processed.

10.2.12 Cardinality (card=)
The number of records you get in each step of the plan.

10.2.13 Cost (cost=)
A unit Oracle calculates based on the server resources the query uses.

10.3 Gathering statistics for all user activity in a period
To analyze what a user does, without analyzing command by command, you can gather trace statistics. Create a trigger which will enable the gathering of statistics:

CREATE OR REPLACE TRIGGER SYS.TGR_ENABLE_TRACE
AFTER LOGON
ON DATABASE
BEGIN
  IF USER = 'SUP' THEN
    EXECUTE IMMEDIATE 'alter session set TIMED_STATISTICS=TRUE';
    EXECUTE IMMEDIATE 'alter session set STATISTICS_LEVEL=ALL';
    EXECUTE IMMEDIATE 'alter session set max_dump_file_size=UNLIMITED';
    EXECUTE IMMEDIATE 'ALTER SESSION SET EVENTS ''10046 TRACE NAME CONTEXT FOREVER, LEVEL 12''';
  END IF;
END;
/

Back up or delete your old files from the trace directory.

To get the spid of a session, execute

SELECT A.SPID
  FROM V$PROCESS A, V$SESSION B
 WHERE A.ADDR = B.paddr
   AND B.AUDSID = USERENV( 'sessionid' );

Search in the trace directory (for example E:\oraSID\trace) for the file sSID_ora_xxx.trc, then execute tkprof to analyze it:

TKPROF \sbbv_ora_740.trc C:\revisar.TXT

www.oraperf.com does not support analyzing these files.

View select * from v$statistics_level; to see the status of the statistics and advisories enabled with the parameter STATISTICS_LEVEL.
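tkprof accepts a few options worth knowing; a hedged example (file names are just placeholders) that sorts the statements by elapsed execute and fetch time, hides the recursive SYS statements, and records the raw SQL in a separate file:

tkprof erpe_ora_740.trc revisar.txt sort=exeela,fchela sys=no record=revisar.sql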
10.4 Tom Kyte's Runstats: to compare two solutions
This tool allows you to easily compare two distinct solutions. Don't forget to run each one once before executing the comparison, because the first time there is always extra parsing and disk reading.
You can get it from his site:
http://asktom.oracle.com/~tkyte/runstats.html
And here is an example of its use:
http://asktom.oracle.com/pls/ask/f?p=4950:8:::::F4950_P8_DISPLAYID:9496983726463

exec runstats_pkg.rs_start;
--Execute test1
exec runstats_pkg.rs_middle;
--Execute test2
exec runstats_pkg.rs_stop(500) /* only show those things that differ by at least 500 */;

There you can see the comparison between the two proposed solutions.

10.5 StatsPack
This tool gives you a larger amount of more precise information.
Create another tablespace only for StatsPack, for example:

CREATE TABLESPACE TBL_STATPACK DATAFILE 'C:\DFL_STATPAzCK_sfon' SIZE 100M
REUSE AUTOEXTEND ON NEXT 640k MAXSIZE UNLIMITED
SEGMENT SPACE MANAGEMENT AUTO
EXTENT MANAGEMENT LOCAL UNIFORM SIZE 64K;

(Don't forget to set the local database to the current one if you have more than one database on that server.)

conn sys/password as sysdba;
@c:\oracle\ora92\rdbms\admin\spcreate.sql
-- password PERFSTAT
-- default tablespace TBL_STATPACK
-- temporary tablespace TBL_TEMP

Once created, enable statistics:

ALTER SYSTEM SET TIMED_STATISTICS = TRUE;
ALTER SYSTEM SET STATISTICS_LEVEL = ALL;

Now StatsPack is installed. To start, take a snapshot (before the events you want to monitor):

EXECUTE STATSPACK.SNAP(i_snap_level=>7);

After those events happen, take the closing snapshot:

EXECUTE STATSPACK.SNAP(i_snap_level=>7);

Wait a moment; it can take some time. (i_snap_level=>7 gives you the most useful amount of information; you can set it to a lower level.)

Then, to get the report, execute

@C:\oracle\ora92\rdbms\admin\spreport.sql;

You will be asked for the period you want to analyze and for the output path and filename.

It is important to keep a baseline StatsPack report and save future reports, to compare statistics across time; they help to find new performance problems and their reasons faster.
You can also send the report to www.oraperf.com
http://www.oracle.com/oramag/oracle/00-Mar/o20tun.html
http://otn.oracle.com/oramag/oracle/03-jan/o13expert.html

10.6 How to read a StatsPack report
One of the most important things you must learn in order to tune is how to read trace and StatsPack reports. Trace reports don't really need a long explanation, but StatsPack reports do.
You must remember to take this report over a short interval, otherwise the statistics are averaged and lose meaning. The interval should be the time the problem is present; if the problem is present all the time, 10 minutes should be enough.

STATSPACK report for

DB Name         DB Id    Instance     Inst Num Release     Cluster Host
------------ ----------- ------------ -------- ----------- ------- ------------
ERPE          3639735049 erpe                1 9.2.0.1.0   NO      SRVDAZ03

              Snap Id     Snap Time      Sessions Curs/Sess Comment
              ------- ------------------ -------- --------- -------------------
Begin Snap:         1 20-Abr-04 10:19:03       16       5.4
  End Snap:         2 20-Abr-04 10:21:27       18      11.7
   Elapsed:               2.40 (mins)

Cache Sizes (end)
~~~~~~~~~~~~~~~~~
               Buffer Cache:       112M   Std Block Size:         8K
           Shared Pool Size:        48M       Log Buffer:       977K

Here you have basic information about the snapshot, the Oracle release and the memory configuration.

Load Profile
~~~~~~~~~~~~                     Per Second    Per Transaction
                              ---------------  ---------------
               Redo size:          12,461.78       358,899.20
           Logical reads:             447.10        12,876.40
           Block changes:              62.58         1,802.40
          Physical reads:              70.42         2,028.20
         Physical writes:               2.17            62.60
              User calls:               7.10           204.60
                  Parses:               9.09           261.80
             Hard parses:               1.63            47.00
                   Sorts:               4.85           139.80
                  Logons:               0.03             1.00
                Executes:              53.41         1,538.20
            Transactions:               0.03

Here you have, per second and per transaction:
Redo size, the amount of redo generated in bytes (data changed).
Logical reads (from memory), the number of times a current block (db block gets) or a consistent block (consistent gets) was read.
Block changes, the number of blocks changed.
Physical reads (from disk), the total number of blocks read from disk.
Physical writes, the number of writes to disk.
User calls, the number of user calls such as login, parse, fetch, or execute.
Parses, total parses (hard parses + soft parses).
Hard parses, the number of SQL commands parsed for the first time; you should bind correctly.
Sorts, the number of sorts.
Logons, the number of logons. They should be minimized.
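If you want snapshots collected continuously rather than by hand, Oracle ships a helper script, spauto.sql, that schedules an hourly STATSPACK.SNAP job through DBMS_JOB. A hedged sketch, assuming the default PERFSTAT password from the installation step above:

connect perfstat/perfstat
@C:\oracle\ora92\rdbms\admin\spauto.sql

-- To check the job that was created:
SELECT job, what, interval FROM user_jobs;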
log file single write 4 0 0 50 0.8 SELECT USR_GESTION FROM UTL_USUARIO_SIS WHERE USR_NOMSIS = :b
Executes, total number of calls (user, and recursive caused by internal log file parallel write 39 38 0 1 7.8 1
system or user calls) log file sync
db file parallel write
3
12
0
6
0
0
4
0
0.6
2.4 1,394 4 348.5 2.2 0.17 0.15 889807219
Transactions control file sequential read 20 0 0 0 4.0 Module: MOD_LOGON
SQL*Net more data to client 8 0 0 0 1.6 SELECT DB_OYM_VALIDA(:b1,:b2,:b3) FROM DUAL
% Blocks changed per Read: 14.00 Recursive Call %: 96.70 log file sequential read 4 0 0 0 0.8
Rollback per transaction %: 40.00 Rows per Sort: 30.86 latch free 1 0 0 0 0.2 1,344 448 3.0 2.1 0.06 0.10 2793984522
rdbms ipc message 269 197 765 2844 53.8 SELECT ASCII(SUBSTR(:b1,:b2,1)) FROM DUAL
SQL*Net message from client 126 0 10 77 25.2
%Blocks changed per Read, show the amount of blocks changed. SQL*Net message to client 127 0 0 0 25.4 843 319 2.6 1.3 0.06 0.68 787810128
------------------------------------------------------------- select /*+ rule */ bucket_cnt, row_cnt, cache_cnt, null_cnt, tim
Recursive Call %, percentage of recursive calls over the total sql executed, estamp#, sample_size, minimum, maximum, distcnt, lowval, hival,
generated as consecuence of user call or internal system processing. density, col#, spare1, spare2, avgcln from hist_head$ where obj#
=:1 and intcol#=:2
Rollback per transaction %, shows th e percentage of rollbacks per
SQL ordered by Gets for DB: ERPE Instance: erpe Snaps: 1 -2 688 12 57.3 1.1 0.03 0.59 3096853145
transactions. Minimize rollbacks, and investigate if they are frequent. -> End Buffer Gets Threshold: 10000 SELECT DSV_VERSION FROM ( SELECT DSV_VERSION FROM DS_
Rows per Sort%,, average number of rows per sort for all types of sorts -> Note that resources reported for PL/SQL includes the resources used by
all SQL statements called within the PL/SQL code. As individual SQL
VER WHERE UPPER( DSV_NOMBREUNIDAD ) = :b1
ER BY SUBSTR(DSV_VERSION,6,20) DESC ) WHERE ROWNUM=1
ORD
performed. statements are also reported, it is possible and valid for the summed
total % to exceed 100 464 18 25.8 0.7 0.13 0.11 2583796797
Instance Efficiency Percentages (Target 100%) SELECT MEM_FECHA_HOY, MEM_PROPIETARIO FROM VEW_MULTIEMPRESA
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ CPU Elapsd
Buffer Nowait %: 100.00 Redo NoWait %: 99.96 Buffer Gets Executions Gets per Exec %Total Time (s) Time (s) Hash Value 460 28 16.4 0.7 0.00 0.01 114078687
Buffer Hit %: 84.25 In -memory Sort %: 100.00 --------------- ------------ -------------- ------ -------- --------- ---------- select con#,obj#,rcon#,enabled,nvl(defer,0) from cdef$ where rob
Library Hit %: 96.40 Soft Parse %: 82.05 14,672 1 14,672.0 22.8 1.03 15.50 3711728688 j#=:1
Execute to Parse %: 82.98 Latch Hit %: 100.00 Module: SQL*Plus
Parse CPU to Parse Elapsd %: 59.17 % Non-Parse CPU: 75.32 BEGIN STATSPACK.SNAP(i_snap_level=>7); END; 432 6 72.0 0.7 0.00 0.04 3975431941
set role DAZADBA ide
Buffer nowait %, 10,009 2 5,004.5 15.5 0.33
SELECT TO_CHAR( ESE_FECHA, 'DD Month YYYY HH24:MI' ) || ' ' || E
3.49 217170230
402 1 402.0 0.6 0.05 0.63 992032462
SE_USURED, ESE_USURED || '/' || ESE_MAQUINA FROM SGR_EVENTOS_S SELECT DB_OYM_VALIDA('NAVEGADOR.PLL',:b1,'DAZ001') FROM DUAL
Shared Pool Statistics Begin End
------ ------ EGURIDAD WHERE ESE_USUDAZ = USER AND ESE_EVENTO = 'Ingreso al S
istema ' AND ESE_FECHA =( SELECT MAX(ESE_FECHA) FROM UTL_EVENTO 376 89 4.2 0.6 0.00 0.02 2085632044
Memory Usage %: 94.25 93.10
S_SEGURIDAD WHERE ESE_USUDAZ = USER AND ESE_EVENTO = 'Ingreso select intcol#,nvl(pos#,0),col# from ccol$ where con#=:1
% SQL with executions>1: 83.68 89.32
% Memory for SQL w/exec>1: 77.00 90.43
6,565 257 25.5 10.2 0.72 0.79 2014795777 348 1 348.0 0.5 0.03 0.04 1159648473
SELECT VALOR FROM DAZ.VEW_UTL_PARAMETROS WHERE PAR_NOMBRE = : SELECT DB_OYM_VALIDA('FORMS_ADM.PLL',:b1,'DAZ001') FROM DUAL
b1
348 1 348.0 0.5 0.03 0.04 1788477810
Top 5 Timed Events 4,638 1,546 3.0 7.2 0.23 0.16 1039632228 SELECT DB_OYM_VALIDA('FORMS_BBV.PLL',:b1,'DAZ001') FROM DUAL
~~~~~~~~~~~~~~~~~~ % Total SELECT user from sys.dual
Event Waits Time (s) Ela Time 348 1 348.0 0.5 0.05 0.04 1859090299
-------------------------------------------- ------------ ----------- -------- 3,432 12 286.0 5.3 0.39 0.39 3281169334 SELECT DB_OYM_VALIDA('FORMS.PLL',:b1,'DAZ001') FROM DUAL
CPU time 7 36.14 SELECT DB_LIC_VALIDA( :b1 ) FROM DUAL
db file scattered read 1,308 6 29.37
control file parallel write 53 2 12.22 3,352 838 4.0 5.2 0.03 0.07 3180290489
log buffer space 2 2 8.83 select /*+ all_rows */ count(1) from "DAZ"."DOC_SEGUIMIENTO" wh
db file sequential read 53 1 6.71 ere "DOC_CODDOC" = :1 and "SEG_EMPRESA" = :2
------------------------------------------------------------- SQL ordered by Gets for DB: ERPE Instance: erpe Snaps: 1 -2
3,339 1,113 3.0 5.2 0.33 0.26 446226751 -> End Buffer Gets Threshold: 10000
SELECT ASCII(SUBSTR(:b2,:b1,1)) FROM DUAL -> Note that resources reported for PL/SQL includes the resources used by
all SQL statements called within the PL/SQL code. As individual SQL
2,262 2 1,131.0 3.5 0.38 0.35 2357027530 statements are also reported, it is possible and valid for the summed
Wait Events for DB: ERPE Instance: erpe Snaps: 1 -2 SELECT DSI_OPERACION_BORR,DSI_CODCLIFON,DSI_FONDO1,DSI_IDFONDO1, total % to exceed 100
-> s - second DSI_FONDO2,DSI_IDFONDO2,DSI_FONDO3,DSI_IDFONDO3,DSI_FONDO4,DSI_I
-> cs - centisecond - 100th of a second DFONDO4,DSI_FONDO5,DSI_IDFONDO5,DSI_TOLERANCIA_RUC,DSI_IVA_EXTRA CPU Elapsd
-> ms - millisecond - 1000th of a second NJERO,DSI_CTAAGENTFONDO,DSI_CTALIQPPAGRCIVAYCOMN,DSI_CTAPATRIMON Buffer Gets Executions Gets per Exec %Total Time (s) Time (s) Hash Value
-> us - microsecond - 1000000th of a second IO,DSI_EGPAGOCOMIS,DSI_CTAABONORCIVAYCOM,DSI_CTAPXPRCIVA,DSI_INC --------------- ------------ -------------- ------ -------- --------- ----------
-> ordered by wait time desc, waits desc (idle events last) 348 1 348.0 0.5 0.05 0.04 2417110790
1,845 2 922.5 2.9 0.28 0.29 2474733692 SELECT DB_OYM_VALIDA('FORMS_SOA.PLL',:b1,'DAZ001') FROM DUAL
Avg SELECT DSI_FECHA_SIS,DSI_FECHA_INIC_GESTION,DSI_MODO_TC,DSI_DATE
Total Wait wait Waits _FORMAT,DSI_MULTIMONEDA,DSI_COMPROB_BORR,DSI_COD_SUCURSAL,DSI_FO 348 1 348.0 0.5 0.03 0.04 4097353177
Event Waits Timeouts Time (s) (ms) /txn RM_CTA,DSI_CORRELATIVO,DSI_CODMN,DSI_ULT_MES_CERR,DSI_NIVELES,DS SELECT DB_OYM_VALIDA('FINANCIERA.PLL',:b1,'DAZ001') FROM DUAL
---------------------------- ------------ ---------- ---------- ------ -------- I_IVA,DSI_IT,DSI_CTA_GANGEST,DSI_CTA_PERGEST,DSI_MONEDAFACTURA,D
db file scattered read 1,308 0 6 4 261.6 SI_LETRAMAYORES,DSI_LETRAANALITICAS,DSI_MODO_TC_FAC,DSI_PROPIETA 279 3 93.0 0.4 0.05 0.07 1971270413
control file parallel write 53 0 2 44 10.6 SELECT 'ALTER TABLE ' || OWNER || '.' || TABLE_NAME || ' ENAB
log buffer space 2 0 2 846 0.4 1,777 2 888.5 2.8 0.30 0.32 1578257067 LE CONSTRAINT ' || CONSTRAINT_NAME || ' ' CMD FROM DBA_CONS
db file sequential read 53 0 1 24 10.6 SELECT DSI_NFACTURAORDEN,DSI_CTA_CHDELPAIS,DSI_CTA_CHDELEXTR,DSI TRAINTS WHERE NOT STATUS = 'ENABLED'
log file sync 31 0 1 24 6.2 _FORMA_CONT_CLI,DSI_CTA_CLIENTESMN,DSI_CTA_CLIENTESME,DSI_CTA_AB
log file switch completion 2 0 0 141 0.4 ONO_CHEQUESME,DSI_CTA_ABONO_CHEQUESMN,DSI_NUMVALCBTE,DSI_LNUMERA 264 2 132.0 0.4 0.03 0.13 2350713235
log file single write 4 0 0 50 0.8 SINTIPO,DSI_CTA_RESULTACUM,DSI_DIAS_NOTIF_GAR,DSI_NOTIF_GAR_USU1 Module: MOD_PRINCIPAL
latch free 3 0 0 14 0.6 ,DSI_NOTIF_GAR_USU1_ON,DSI_NOTIF_GAR_USU2,DSI_NOTIF_GAR_USU2_ON, SELECT COUNT(*) FROM VERSINANT
log file parallel write 39 38 0 1 7.8
control file sequential read 123 0 0 0 24.6 1,624 812 2.0 2.5 0.19 0.16 1467004782 251 1 251.0 0.4 0.05 0.29 1425565754
SQL*Net more data to client 39 0 0 0 7.8 UPDATE DAZ.UTL_PARAMETROS_DEFINICION SET PAR_VALORDEFECTO = :b2
db file parallel write 12 6 0 0 2.4 WHERE PAR_NOMBRE = :b1
log file sequential read 4 0 0 0 0.8
SQL*Net message from client 1,059 0 488 461 211.8 244 3 81.3 0.4 0.05 0.03 3719019909
SQL*Net message to client 1,060 0 0 0 212.0 SQL ordered by Gets for DB: ERPE Instance: erpe Snaps: 1 -2 SELECT 'ALTER TRIGGER ' || OWNER || '.' || TRIGGER_NAME || '
------------------------------------------------------------- -> End Buffer Gets Threshold: 10000 ENABLE ' CMD FROM ALL_TRIGGERS WHERE STATUS = 'DISABLED'
-> Note that resources reported for PL/SQL includes the resource s used by
all SQL statements called within the PL/SQL code. As individual SQL 238 10 23.8 0.4 0.03 0.15 9565645
statements are also reported, it is possible and valid for the summed SELECT DSI_PROPIETARIO FROM DS_INIC
total % to exceed 100
Background Wait Events for DB: ERPE Instance: erpe Snaps: 1 -2 230 33 7.0 0.4 0.00 0.12 931956286
-> ordered by wait time desc, waits desc (idle events last) CPU Elapsd select grantee#,privilege#,nvl(col#,0),max(mod(nvl(option$,0),2)
Buffer Gets Executions Gets per Exec %Total Time (s) Time (s) Hash Value )from objauth$ where obj#=:1 group by grantee#,privilege#,nvl(co
Avg --------------- ------------ -------------- ------ -------- --------- ---------- l#,0) order by grantee#
Total Wait wait Waits SELECT USR_EMPRESA FROM UTL_USUARIO_SIS WHERE USR_NOMSIS = :b
Event Waits Timeouts Time (s) (ms) /txn 1 -------------------------------------------------------------
---------------------------- ------------ ---------- ---------- ------ --------
control file parallel write 53 0 2 44 10.6 1,460 730 2.0 2.3 0.16 0.13 2247476756
0 40 0.0 0.0 0.03 0.03 17641746 select /*+ rule */ bucket_cnt, row_cnt, cache_cnt, null_cnt, tim
SELECT :b1 + :b2 * DECODE(:b3,'A',10,'B',11,'C',12,'D',13,'E',14 estamp#, sample_size, minimum, maximum, distcnt, lowval, hival,
SQL ordered by Reads for DB: ERPE Instance: erpe Snaps: 1 -2 ,'F',15,:b3) FROM DUAL density, col#, spare1, spare2, avgcln from hist_head$ where obj#
-> End Disk Reads Threshold: 1000 =:1 and intcol#=:2
0 28 0.0 0.0 0.00 0.01 114078687
CPU Elapsd select con#,obj#,rcon#,enabled,nvl(defer,0) from cdef$ where rob 257 257 1.0 0.00 0.00 2014795777
Physical Reads Executions Reads per Exec %Total Time (s) Time (s) Hash Value j#=:1 SELECT VALOR FROM DAZ.VEW_UTL_PARAMETROS WHERE PAR_NOMBRE = :
--------------- ------------ -------------- ------ -------- --------- ---------- b1
9,728 2 4,864.0 95.9 0.33 3.49 217170230 0 1 0.0 0.0 0.00 0.00 125044122
SELECT TO_CHAR( ESE_FECHA, 'DD Month YYYY HH24:MI' ) || ' ' || E SELECT SEQ_SGR_EVENTOS_SEGURIDAD.NEXTVAL FROM DUAL 89 99 1.1 0.00 0.00 2085632044
SE_USURED, ESE_USURED || '/' || ESE_MAQUINA FROM SGR_EVENTOS_S select intcol#,nvl(pos#,0),col# from ccol$ where con#=:1
EGURIDAD WHERE ESE_USUDAZ = USER AND ESE_EVENTO = 'Ingreso al S 0 10 0.0 0.0 0.00 0.00 181436173
istema ' AND ESE_FECHA =( SELECT MAX(ESE_FECHA) FROM UTL_EVENTO select /*+ index(idl_sb4$ i_idl_sb41) +*/ max(version) from id 76 76 1.0 0.00 0.00 1966425544
S_SEGURIDAD WHERE ESE_USUDAZ = USER AND ESE_EVENTO = 'Ingreso l_sb4$ where obj#=:1 and version<=:2 and (part=0 or part=2) an select text from view$ where rowid=:1
d piece#=0
64 1 64.0 0.6 1.03 15.50 3711728688 74 74 1.0 0.00 0.00 189272129
Module: SQL*Plus 0 74 0.0 0.0 0.05 0.04 189272129 select o.owner#,o.name,o.namespace,o.remoteowner,o.linkname,o.su
BEGIN STATSPACK.SNAP(i_snap_level=>7); END; select o.owner#,o.name,o.namespace,o.remoteowner,o.linkname,o.su bname,o.dataobj#,o.flags from obj$ o where o.obj#=:1
bname,o.dataobj#,o.flags from obj$ o where o.obj#=:1
49 1 49.0 0.5 0.05 0.63 992032462 64 64 1.0 0.00 0.00 2091761008
SELECT DB_OYM_VALIDA('NAVEGADOR.PLL',:b1,'DAZ001') FROM DUAL 0 2 0.0 0.0 0.02 0.00 199384860 select condition from cdef$ where rowid=:1
Module: MOD_PRINCIPAL
49 12 4.1 0.5 0.03 0.59 3096853145 SELECT 'TC Com:' || LTRIM(TO_CHAR(TCS_TC_COMPRADOR,'999G999D9999 45 50 1.1 0.00 0.00 1930240031
SELECT DSV_VERSION FROM ( SELECT DSV_VERSION FROM DS_ ')) || ' TC Ven:' || LTRIM(TO_CHAR(TCS_TC_VENDEDOR,'999G999D99 select pos#,intcol#,col#,spare1,bo#,spare2 from icol$ where obj#
VER WHERE UPPER( DSV_NOMBREUNIDAD ) = :b1 ORD 99')) FROM TCS WHERE TCS_MONEDA = 'USD' AND TCS_FECHA = :b1 =:1
ER BY SUBSTR(DSV_VERSION,6,20) DESC ) WHERE ROWNUM=1
44 21 0.5 0.00 0.00 2591785020
9 3 3.0 0.1 0.05 0.07 1971270413 0 4 0.0 0.0 0.02 0.02 285257921 select obj#,type#,ctime,mtime,stime,status,dataobj#,flags,oid$,
SELECT 'ALTER TABLE ' || OWNER || '.' || TABLE_NAME || ' ENAB SELECT DB_UTL_GETPARAMETRO('ADM_PATH_FORM') FROM DUAL spare1, spare2 from obj$ where owner#=:1 and name=:2 and namespa
LE CONSTRAINT ' || CONSTRAINT_NAME || ' ' CMD FROM DBA_CONS ce=:3 and(remoteowner=:4 or remoteowner is null and :4 is null)a
TRAINTS WHERE NOT STATUS = 'ENABLED' 0 2 0.0 0.0 0.00 0.00 295601517 nd(linkname=:5 or linkname is null and :5 is null)and(subname=:6
SELECT VALUE FROM NLS_SESSION_PARAMETERS WHERE PARAMETER = 'N or subname is null and :6 is null)
7 257 0.0 0.1 0.72 0.79 2014795777
SELECT VALOR FROM DAZ.VEW_UTL_PARAMETROS WHERE PAR_NOMBRE = :
b1
5 2 2.5 0.0 0.02 0.11 638099598 SQL ordered by Reads for DB: ERPE Instance: erpe Snaps: 1 -2 SQL ordered by Executions for DB: ERPE Instance: erpe Snaps: 1 -2
Module: MOD_PRINCIPAL -> End Disk Reads Threshold: 1000 -> End Executions Threshold: 100
SELECT COUNT(*) FROM AGENDA WHERE TRUNC(AGE_FECHA) <= SYSDATE
AND AGE_NOTIFICAR = 'T' AND AGE_PARA = :b1 CPU Elapsd CPU per Elap per
Physical Reads Executions Reads per Exec %Total Time (s) Time (s) Hash Value Executions Rows Processed Rows per Exec Exec (s) Exec (s) Hash Value
5 12 0.4 0.0 0.05 0.03 3731325089 --------------- ------------ -------------- ------ -------- --------- ---------- ------------ --------------- ---------------- ----------- ---------- ----------
Module: MOD_PRINCIPAL LS_NUMERIC_CHARACTERS'
SELECT DUP_NOMBRE_PARAUSUARIO FROM DAZ.DEFINICION_UNIDAD_PROGR 40 40 1.0 0.00 0.00 17641746
AMACION WHERE DUP_NOMBRE = :b1 0 9 0.0 0.0 0.00 0.02 365454555 SELECT :b1 + :b2 * DECODE(:b3,'A',10,'B',11,'C',12,'D',13,'E',14
select cols,audit$,textlength,intcols,property,flags,rowid from ,'F',15,:b3) FROM DUAL
3 3 1.0 0.0 0.00 0.06 484076908 view$ where obj#=:1
UPDATE WMSYS.WM$WORKSPACES_TABLE SET FREEZE_STATUS='UNLOCKED',FR 40 40 1.0 0.00 0.00 4059714361
EEZE_MODE= NULL ,FREEZE_WRITER= NULL ,FREEZE_OWNER= NULL ,SESSIO 0 1 0.0 0.0 0.00 0.00 398896841 select type#,blocks,extents,minexts,maxexts,extsize,extpct,user#
N_DURATION=0 WHERE FREEZE_OWNER = :b1 || ',' || :b2 AND SESSI select count(*) from sys.job$ where next_date < :1 and (field1 = ,iniexts,NVL(lists,65535),NVL(groups,65535),cachehint,hwmincr, N
ON_DURATION = 1 :2 or (field1 = 0 and 'Y' = :3)) VL(spare1,0) from seg$ where ts#=:1 and file#=:2 and block#=:3
3 3 1.0 0.0 0.02 0.11 2108527011 0 1,113 0.0 0.0 0.33 0.26 446226751 39 394 10.1 0.00 0.00 2385919346
BEGIN logoff_proc(lt.getSid); END; SELECT ASCII(SUBSTR(:b2,:b1,1)) FROM DUAL select name,intcol#,segcol#,type#,length,nvl(precision#,0),decod
e(type#,2,nvl(scale,-127/*MAXSB1MINAL*/),178,scale,179,scale,180
1 319 0.0 0.0 0.06 0.68 787810128 0 5 0.0 0.0 0.30 0.29 456604738 ,scale,181,scale,182,scale,183,scale,231,scale,0),null$,fixedsto
select /*+ rule */ bucket_cnt, row_cnt, cache_cnt, null_cnt, tim BEGIN logon_proc; END; rage,nvl(deflength,0),default$,rowid,col#,property, nvl(charseti
estamp#, sample_size, minimum, maximum, distcnt, lowval, hival, d,0),nvl(charsetform,0),spare1,spare2,nvl(spare3,0) from col$ wh
density, col#, spare1, spare2, avgcln from hist_head$ where obj# 0 1 0.0 0.0 0.02 0.19 490117922
=:1 and intcol#=:2 INSERT INTO DAZ.DSUTL_PARAMETROS_DEFINICION ( PAR_NOMBRE, PAR_D 34 20 0.6 0.00 0.00 2843627416
update sys.aud$ set ses$actions=merge$actions(ses$actions,:3), s
1 33 0.0 0.0 0.00 0.12 931956286 ------------------------------------------------------------- pare2=nvl(spare2,:4) where sessionid=:1 and ses$tid=:2 and act io
n#=103 and (priv$used=:5 or priv$used is null and :5 is null)
33 1,117 33.8 0.00 0.00 931956286
select grantee#,privilege#,nvl(col#,0),max(mod(nvl(option$,0),2)
SQL ordered by Reads for DB: ERPE Instance: erpe Snaps: 1 -2 SQL ordered by Executions for DB: ERPE Instance: erpe Snaps: 1 -2 )from objauth$ where obj#=:1 group by grantee#,privilege#,nvl(co
-> End Disk Reads Threshold: 1000 -> End Executions Threshold: 100 l#,0) order by grantee#
CPU Elapsd CPU per Elap per 30 20 0.7 0.00 0.00 957616262
Physical Reads Executions Reads per Exec %Total Time (s) Time (s) Hash Value Executions Rows Processed Rows per Exec Exec (s) Exec (s) Hash Value select /*+ index(idl_char$ i_idl_char1) +*/ piece#,length,piece
--------------- ------------ -------------- ------ -------- --------- ---------- ------------ --------------- ---------------- ----------- ---------- ---------- from idl_char$ where obj#=:1 and part=:2 and version=:3 order by
select grantee#,privilege#,nvl(col#,0),max(mod(nvl(option$,0),2) 1,546 1,546 1.0 0.00 0.00 1039632228 piece#
)from objauth$ where obj#=:1 group by grantee#,privilege#,nvl(co SELECT user from sys.dual
l#,0) order by grantee# 30 40 1.3 0.00 0.00 1428100621
1,113 1,113 1.0 0.00 0.00 446226751 select /*+ index(idl_ub2$ i_idl_ub21) +*/ piece#,length,piece fr
1 2 0.5 0.0 0.02 0.05 2218243056 SELECT ASCII(SUBSTR(:b2,:b1,1)) FROM DUAL om idl_ub2$ where obj#=:1 and part=:2 and version=:3 order by pi
Module: MOD_PRINCIPAL ece#
SELECT TCS_TC_COMPRADOR,TCS_TC_VENDEDOR FROM TCS WHERE TCS_FE 838 838 1.0 0.00 0.00 3180290489
CHA = :b1 AND TCS_MONEDA = :b2 select /*+ all_rows */ count(1) from "DAZ"."DOC_SEGUIMIENTO" wh 30 20 0.7 0.00 0.00 3111103299
ere "DOC_CODDOC" = :1 and "SEG_EMPRESA" = :2 select /*+ index(idl_ub1$ i_idl_ub11) +*/ piece#,length,piece fr
1 3 0.3 0.0 0.05 0.03 3719019909 om idl_ub1$ where obj#=:1 and part=:2 and version=:3 order by pi
SELECT 'ALTER TRIGGER ' || OWNER || '.' || TRIGGER_NAME || ' 812 812 1.0 0.00 0.00 1467004782 ece#
ENABLE ' CMD FROM ALL_TRIGGERS WHERE STATUS = 'DISABLED' SELECT USR_EMPRESA FROM UTL_USUARIO_SIS WHERE USR_NOMSIS = :b
1 30 40 1.3 0.00 0.00 3218356218
1 3 0.3 0.0 0.03 0.03 3748531129 select /*+ index(idl_sb4$ i_idl_sb41) +*/ piece#,length,piece fr
SELECT /*+ ALL_ROWS IGNORE_WHERE_CLAUSE */ NVL(SUM(C1),0), NVL(S 730 730 1.0 0.00 0.00 2247476756 om idl_sb4$ where obj#=:1 and part=:2 and version=:3 order by pi
UM(C2),0), COUNT(DISTINCT C3) FROM (SELECT /*+ NOPARALLEL("S") * SELECT USR_GESTION FROM UTL_USUARIO_SIS WHERE USR_NOMSIS = :b ece#
/ 1 AS C1, 1 AS C2, "S"."FILE#" AS C3 FROM "SYS"."SEG$" SAMPLE B 1
LOCK (40.259740) "S") SAMPLESUB 29 0 0.0 0.00 0.00 1453445442
448 448 1.0 0.00 0.00 2793984522 select col#, grantee#, privilege#,max(mod(nvl(option$,0),2)) fro
0 10 0.0 0.0 0.03 0.15 9565645 SELECT ASCII(SUBSTR(:b1,:b2,1)) FROM DUAL m objauth$ where obj#=:1 and col# is not null group by privilege
SELECT DSI_PROPIETARIO FROM DS_INIC #, col#, grantee# order by col#, grantee#
319 200 0.6 0.00 0.00 787810128
28 202 7.2 0.00 0.00 114078687 % Total cleanout - number of ktugct calls 34 0.2
select con#,obj#,rcon#,enabled,nvl(defer,0) from cdef$ where rob Parse Calls Executions Parses Hash Value 6.8
------------ ------------ -------- ---------- cleanouts and rollbacks - consist 0 0.0 0.0
l#,0) order by grantee# cleanouts only - consistent read 2 0.0 0.4
cluster key scan block gets 7,341 51.0 1,468.2
28 39 2.14 2385919346 cluster key scans 4,185 29.1 837.0
SQL ordered by Executions for DB: ERPE Instance: erpe Snaps: 1 -2 select name,intcol#,segcol#,type#,length,nvl(pre cision#,0),decod commit cleanout failures: buffer 2 0.0 0.4
-> End Executions Threshold: 100 e(type#,2,nvl(scale,-127/*MAXSB1MINAL*/),178,scale,179,scale,180 commit cleanout failures: callbac 7 0.1 1.4
,scale,181,scale,182,scale,183,scale,231,scale,0),null$,fixedsto commit cleanouts 181 1.3 36.2
CPU per Elap per rage,nvl(deflength,0),default$,rowid,col#,property, nvl(charseti commit cleanouts successfully com 172 1.2 34.4
Executions Rows Processed Rows per Exec Exec (s) Exec (s) Hash Value d,0),nvl(charsetform,0),spare1,spare2,nvl(spare3,0) from col$ wh commit txn count during cleanout 19 0.1 3.8
------------ --------------- ---------------- ----------- ---------- ---------- consistent changes 0 0.0 0.0
j#=:1 24 29 1.83 1453445442 consistent gets 55,313 384.1 11,062.6
select col#, grantee#, privilege#,max(mod(nvl(option$,0),2)) fro consistent gets - examination 14,598 101.4 2,919.6
28 90 3.2 0.00 0.00 1536916657 m objauth$ where obj#=:1 and col# is not null group by privilege CPU used by this session 693 4.8 138.6
select con#,type#,condlength,intcols,robj#,rcon#,match#,re fact,n #, col#, grantee# order by col#, grantee# CPU used when call started 693 4.8 138.6
vl(enabled,0),rowid,cols,nvl(defer,0),mtime,nvl(spare1,0) from c CR blocks created 0 0.0 0.0
def$ where obj#=:1 23 24 1.76 2918884618 cursor authentications 124 0.9 24.8
select node,owner,name from syn$ where obj#=:1 data blocks consistent reads - un 0 0.0 0.0
27 0 0.0 0.00 0.00 2963598673 db block changes 9,012 62.6 1,802.4
select job, nvl2(last_date, 1, 0) from sys.job$ where (((:1 <= n 20 20 1.53 502510949 db block gets 9,069 63.0 1,813.8
select privilege#,level from sysauth$ connect by grantee#=prior DBWR buffers scanned 88 0.6 17 .6
-------- ----------------------------------------------------- privilege# and privilege#>0 start with grantee#=:1 and privilege DBWR checkpoint buffers written 312 2.2 62.4
#>0 DBWR checkpoints 1 0.0 0.2
DBWR free buffers found 88 0.6 17.6
19 28 1.45 114078687 DBWR lru scans 1 0.0 0.2
select con#,obj#,rcon#,enabled,nvl(defer,0) from cdef$ where rob DBWR make free requests 1 0.0 0.2
SQL ordered by Parse Calls for DB: ERPE Instance: erpe Snaps: 1 -2 j#=:1 DBWR summed scan depth 88 0.6 17.6
-> End Parse Calls Threshold: 1000 DBWR transaction table writes 0 0.0 0.0
19 28 1.45 1536916657 DBWR undo block writes 65 0.5 13.0
% Total select con#,type#,condlength,intcols,robj#,rcon#,match#,refact,n deferred (CURRENT) block cleanout 59 0.4 11.8
Parse Calls Executions Parses Hash Value vl(enabled,0),rowid,cols ,nvl(defer,0),mtime,nvl(spare1,0) from c dirty buffers inspected 1 0.0 0.2
------------ ------------ -------- ---------- def$ where obj#=:1 enqueue conversions 27 0.2 5.4
81 1,546 6.19 1039632228 enqueue releases 1,275 8.9 255.0
SELECT user from sys.dual 18 22 1.38 3844343967 enqueue requests 1,275 8.9 255.0
select i.obj#,i.ts#,i.file#,i.block#,i.intcols,i.type#,i.flags, enqueue waits 0 0.0 0.0
76 76 5.81 1966425544 i.property,i.pctfree$,i.initrans,i.maxtrans,i.blevel,i.leafcnt,i execute count 7,691 53.4 1,538.2
select text from view$ where rowid=:1 .distkey, i.lblkkey,i.dblkkey,i.clufac,i.cols,i.analyzetime,i.sa free buffer inspected 1 0.0 0.2
mplesize,i.dataobj#, nvl(i.degree,1),nvl(i.instances,1),i.rowcnt free buffer requested 10,384 72.1 2,076.8
64 64 4.89 2091761008 ,mod(i.pctthres$,256),i.indmethod#,i.trunccnt,nvl(c.unicols,0),n hot buffers moved to head of LRU 214 1.5 42.8
select condition from cdef$ where rowid=:1 immediate (CR) block cleanout app 2 0.0 0.4
15 45 1.15 1930240031 immediate (CURRENT) block cleanou 30 0.2 6.0
40 40 3.06 4059714361 select pos#,intcol#,col#,spare1,bo#,spare2 from icol$ where obj# index fetch by key 10,952 76.1 2,190.4
select type#,blocks,extents,minexts,maxexts,extsize,extpct,user# =:1
,iniexts,NVL(lists,65535),NVL(groups,65535),cachehint,hwmincr, N
VL(spare1,0) from seg$ where ts#=:1 and file#=:2 and block#=:3 15 89 1.15 2085632044
select intcol#,nvl(pos#,0),col# from ccol$ where con#=:1
34 319 2.60 787810128 Instance Activity Stats for DB: ERPE Instance: erpe Snaps: 1 -2
select /*+ rule */ bucket_cnt, row_cnt, cache_cnt, null_cnt, tim 15 15 1.15 2321865901
estamp#, sample_size, minimum, maximum, distcnt, lowval, hival, select l.col#, l.intcol#, l.lobj#, l.ind#, l.ts#, l.file#, l.blo Statistic Total per Second per Trans
density, col#, spare1, spare2, avgcln from hist_head$ where obj# ck#, l.chunk, l.pctversion$, l.flags, l.property, l.retention, l --------------------------------- ------------------ -------------- ------------
=:1 and intcol#=:2 .freepools from lob$ l where l.obj# = :1 order by l.intcol# asc index scans kdiixs1 6,371 44.2 1,274.2
leaf node splits 14 0.1 2.8
31 74 2.37 189272129 14 14 1.07 1491008679 leaf node 90-10 splits 6 0.0 1.2
select o.owner#,o.name,o.namespace,o.remoteowner,o.linkname,o.su select u.name,o.name, t.update$, t.insert$, t.delete$, t.enabled logons cumulative 5 0.0 1.0
bname,o.dataobj#,o.flags from obj$ o where o.obj#=:1 from obj$ o,user$ u,trigger$ t where t.baseobject=:1 and t.ob messages received 48 0.3 9.6
j#=o.obj# and o.owner#=u.user# order by o.obj# messages sent 48 0.3 9.6
31 44 2.37 2591785020 no buffer to keep pinned count 0 0.0 0.0
select obj#,type#,ctime,mt ime,stime,status,dataobj#,flags,oid$, no work - consistent read gets 29,962 208.1 5,992.4
spare1, spare2 from obj$ where owner#=:1 and name=:2 and namespa opened cursors cumulative 1,138 7.9 227.6
ce=:3 and(remoteowner=:4 or remoteowner is null and :4 is null)a parse count (failures) 1 0.0 0.2
nd(linkname=:5 or linkname is null and :5 is null)and(subname=:6 SQL ordered by Parse Calls for DB: ERPE Instance: erpe Snaps: 1 -2 parse count (hard) 235 1.6 47.0
or subname is null and :6 is null) -> End Parse Calls Threshold: 1000 parse count (total) 1,309 9.1 261.8
parse time cpu 171 1.2 34.2
30 30 2.29 957616262 % Total parse time elapsed 289 2.0 57.8
select /*+ index(idl_char$ i_idl_char1) +*/ piece#,length,piece Parse Calls Executions Parses Hash Value physical reads 10,141 70.4 2,028.2
from idl_char$ where obj#=:1 and part=:2 and version=:3 order by ------------ ------------ -------- ---------- physical reads direct 0 0.0 0.0
piece# physical reads direct (lob) 0 0.0 0.0
13 15 0.99 1159012319 physical write s 313 2.2 62.6
30 30 2.29 1428100621 select col#,intcol#,toid,version#,packed,intcols,intcol#s,flags, physical writes direct 0 0.0 0.0
select /*+ index(idl_ub2$ i_idl_ub21) +*/ piece#,length,piece fr physical writes non checkpoint 196 1.4 39.2
om idl_ub2$ where obj#=:1 and part=:2 and version=:3 order by pi ------------------------------------------------------------- pinned buffers inspected 0 0.0 0.0
ece# prefetched blocks 8,780 61.0 1,756.0
prefetched blocks aged out before 0 0.0 0.0
30 30 2.29 3111103299 process last non-idle time 5,412,354,092 37,585,792.3 ############
select /*+ index(idl_ub1$ i_idl_ub11) +*/ piece#,length,piece fr recursive calls 30,023 208.5 6,004.6
om idl_ub1$ where obj#=:1 and part=:2 and version=:3 order by pi Instance Activity Stats for DB: ERPE Instance: erpe Snaps: 1 -2 recursive cpu usage 482 3.4 96.4
ece# redo blocks written 3,596 25.0 719.2
Statistic Total per Second per Trans redo buffer allocation retries 4 0.0 0.8
30 30 2.29 3218356218 --------------------------------- ------------------ -------------- ------------ redo entries 4,640 32.2 928.0
select /*+ index(idl_sb4$ i_idl_sb41) +*/ piece#,length,piece fr active txn count during cleanout 32 0.2 6.4 redo log space requests 2 0.0 0.4
om idl_sb4$ where obj#=:1 and part=:2 and version=:3 order by pi background checkpoints completed 0 0.0 0.0 redo log space wait time 28 0.2 5.6
ece# background checkpoints started 1 0.0 0.2 redo ordering marks 4 0.0 0.8
background timeouts 165 1.2 33.0 redo size 1,794,496 12,461.8 358,899.2
28 33 2.14 931956286 buffer is not pinned count 38,379 266.5 7,675.8 redo synch time 80 0.6 16.0
select grantee#,privilege#,nvl(col#,0),max(mod(nvl(option$,0),2) buffer is pinned count 27,134 188.4 5,426.8 redo synch writes 32 0.2 6.4
)from objauth$ where obj#=:1 group by grantee#,privilege#,nvl(co bytes received via SQL*Net from c 99,907 693.8 19,981.4 redo wastage 11,320 78.6 2,264.0
bytes received via SQL*Net from d 0 0.0 0.0 redo write time 101 0.7 20.2
bytes sent via SQL*Net to client 182,314 1,266.1 36,462.8 redo writer latching time 0 0.0 0.0
bytes sent via SQL*Net to dblink 0 0.0 0.0 redo writes 39 0.3 7.8
calls to get snapshot scn: kcmgss 14,797 102.8 2,959.4 rollback changes - undo records a 0 0.0 0.0
SQL ordered by Parse Calls for DB: ERPE Instance: erpe Snaps: 1 -2 calls to kcmgas 76 0.5 15.2 rollbacks only - consistent read 0 0.0 0.0
-> End Parse Calls Threshold: 1000 calls to kcmgcs 32 0.2 6.4 rows fetched via callback 5,347 37.1 1,069.4
change write time 169 1.2 33.8 session connect time 5,412,354,092 37,585,792.3 ############
session cursor cache hits 283 2.0 56.6 P Buffers Hit % Gets Reads Writes Waits Waits Waits
session logical reads 64,382 447.1 12,876.4 --- ---------- ----- ----------- ----------- ---------- ------- -------- ------ Low High
session pga memory 2,324,4 48 16,142.0 464,889.6 D 8,008 88.8 90,585 10,141 313 0 0 0 Optimal Optimal Total Execs Optimal Execs 1 -Pass Execs M-Pass Execs
session pga memory max 10,456,436 72,614.1 2,091,287.2 K 4,004 100.0 97 0 0 0 0 0 ------- ------- -------------- ------------- ------------ ------------
session uga memory 1,109,772 7,706.8 221,954.4 R 2,002 0 0 0 0 0 0 8K 16K 621 621 0 0
session uga memory max 7,708,868 53,533.8 1,541,773.6 ------------------------------------------------------------- 16K 32K 10 10 0 0
shared hash latch upgrades - no w 5,731 39.8 1,146.2 32K 64K 1 1 0 0
sorts (disk) 0 0.0 0.0 Instance Recovery Stats for DB: ERPE Instance: erpe Snaps: 1 -2 64K 128K 3 3 0 0
sorts (memory) 699 4.9 139.8 -> B: Begin snapshot, E: End snapshot 256K 512K 2 2 0 0
sorts (rows) 21,571 149.8 4,314.2 512K 1024K 18 18 0 0
SQL*Net roundtrips to/from client 1,049 7.3 209.8 Targt Estd Log File Log Ckpt Log Ckpt -------------------------------------------------------------
SQL*Net roundtrips to/from dblink 0 0.0 0.0 MTTR MTTR Recovery Actual Target Size Timeout Interval
summed dirty queue length 0 0.0 0.0 (s) (s) Estd IOs Redo Blks Redo Blks Redo Blks Redo Blks Redo Blks PGA Memory Advisory for DB: ERPE Instance: erpe End Snap: 2
- ----- ----- ---------- ---------- ---------- ---------- ---------- ---------- -> When using Auto Memory Mgmt, minimally choose a pga_aggregate_target value
B 0 0 9569 9216 9216 19975 where Estd PGA Overalloc Count is 0
E 0 0 9158 9216 9216 23249
------------------------------------------------------------- Estd Extra Estd PGA Estd PGA
Instance Activity Stats for DB: ERPE Instance: erpe Snaps: 1 -2 PGA Target Size W /A MB W/A MB Read/ Cache Overalloc
Buffer Pool Advisory for DB: ERPE Instance: erpe End Snap: 2 Est (MB) Factr Processed Written to Disk Hit % Count
Statistic Total per Second per Trans -> Only rows with estimated physical reads >0 are displayed ---------- ------- ---------------- ---------------- -------- ----------
--------------------------------- ------------------ -------------- ------------ -> ordered by Block Size, Buffers For Estimate 13 0.1 23.3 8.5 73.0 2
switch current to new buffer 23 0.2 4.6 25 0.3 23.3 0.0 100.0 0
table fetch by rowid 14,814 102.9 2,962.8 Size for Size Buffers for Est Physical Estimated 50 0.5 23.3 0.0 100.0 0
table fetch continued row 336 2.3 67.2 P Estimate (M) Factr Estimate Read Factor Physical Reads 75 0.8 23.3 0.0 100.0 0
table scan blocks gotten 15,303 106.3 3,060.6 --- ------------ ----- ---------------- ------------- ------------------ 100 1.0 23.3 0.0 100.0 0
table scan rows gotten 248,283 1,724.2 49,656.6 K 8 .3 1,001 1.00 47 120 1.2 23.3 0.0 100.0 0
table scans (long tables) 5 0.0 1.0 D 8 .1 1,001 10.61 5,172,842 140 1.4 23.3 0.0 100.0 0
table scans (rowid ranges) 0 0.0 0.0 K 16 .5 2,002 1.00 47 160 1.6 23.3 0.0 100.0 0
table scans (short tables) 3,346 23.2 669.2 D 16 .3 2,002 8.74 4,257,541 180 1.8 23.3 0.0 100.0 0
transaction rollbacks 0 0.0 0.0 K 24 .8 3,003 1.00 47 200 2.0 23.3 0.0 100.0 0
user calls 1,023 7.1 204.6 D 24 .4 3,003 4.34 2,112,822 300 3.0 23.3 0.0 100.0 0
user commits 3 0.0 0.6 K 32 1.0 4,004 1.00 47 400 4.0 23.3 0.0 100.0 0
user rollbacks 2 0.0 0.4 D 32 .5 4,004 2.80 1,366,917 600 6.0 23.3 0.0 100.0 0
workarea executions - onepass 0 0.0 0.0 K 40 1.3 5,005 1.00 47 800 8.0 23.3 0.0 100.0 0
workarea executions - optimal 657 4.6 131.4 D 40 .6 5,005 1.93 940,840 -------------------------------------------------------------
write clones created in backgroun 0 0.0 0.0 K 48 1.5 6,006 1.00 47
write clones created in foregroun 0 0.0 0.0 D 48 .8 6,006 1.55 753,879
------------------------------------------------------------- K 56 1.8 7,007 1.00 47
D 56 .9 7,007 1.18 573,060
K 64 2.0 8,008 1.00 47 Rollback Segment Stats for DB: ERPE Instance: erpe Snaps: 1 -2
D 64 1.0 8,008 1.00 487,392 ->A high value for "Pct Waits" suggests more rollback segments may be required
K 72 2.3 9,009 1.00 47 ->RBS stats may not be accurate between begin and end snaps when using Auto Undo
Tablespace IO Stats for DB: ERPE Instance: erpe Snaps: 1 -2 D 72 1.1 9,009 0.95 462,176 managment, as RBS may be dynamically created and dropped as needed
->ordered by IOs (Reads + Writes) desc K 80 2.5 10,010 1.00 47
D 80 1.3 10,010 0.89 433,082 Trans Table Pct Undo Bytes
Tablespace K 88 2.8 11,011 1.00 47 RBS No Gets Waits Written Wraps Shrinks Extends
------------------------------ D 88 1.4 11,011 0.88 428,017 ------ -------------- ------- --------------- -------- -------- --------
Av Av Av Av Buffer Av Buf K 96 3.0 12,012 1.00 47 0 1.0 0.00 0 0 0 0
Reads Reads/s Rd(ms) Blks/Rd Writes Writes/s Waits Wt(ms) D 96 1.5 12,012 0.85 414,116 1 94.0 0.00 377,872 0 0 0
-------------- ------- ------ ------- ------------ -------- ---------- ------ K 104 3.3 13,013 1.00 47 2 877.0 0.00 241,942 0 0 0
TBL_USERS D 104 1.6 13,013 0.72 351,616 3 9.0 0.00 712 0 0 0
1,299 9 4.0 7.6 0 0 0 0.0 K 112 3.5 14,014 1.00 47 4 8.0 0.00 2,350 0 0 0
SYSTEM D 112 1.8 14,014 0.70 343,319 5 7.0 0.00 816 0 0 0
62 0 29.0 4.1 176 1 0 0.0 K 120 3.8 15,015 1.00 47 6 9.0 0.00 1,114 0 0 0
TBL_STATPACK D 120 1.9 15,015 0.66 324,030 7 7.0 0.00 438 0 0 0
0 0 0.0 72 1 0 0.0 K 128 4.0 16,016 1.00 47 8 7.0 0.00 562 0 0 0
TBL_UNDO D 128 2.0 16,016 0.66 319,289 9 7.0 0.00 438 0 0 0
0 0 0.0 65 0 0 0.0 K 136 4.3 17,017 1.00 47 10 7.0 0.00 500 0 0 0
------------------------------------------------------------- D 136 2.1 17,017 0.65 315,625 -------------------------------------------------------------
K 144 4.5 18,018 1.00 47
D 144 2.3 18,018 0.64 313,793
K 152 4.8 19,019 1.00 47
D 152 2.4 19,019 0.64 310,883
File IO Stats for DB: ERPE Instance: erpe Snaps: 1 -2 K 160 5.0 20,020 1.00 47 Rollback Segment Storage for DB: ERPE Instance: erpe Snaps: 1 -2
->ordered by Tablespace, File D 160 2.5 20,020 0.61 298,922 ->Optimal Size should be larger than Avg Active
-------------------------------------------------------------
Tablespace Filename RBS No Segment Size Avg Active Optimal Size Maximum Size
------------------------ ---------------------------------------------------- ------ --------------- --------------- --------------- ---------------
Av Av Av Av Buffer Av Buf 0 401,408 0 401,408
Reads Reads/s Rd(ms) Blks/Rd Writes Writes/s Waits Wt(ms) 1 5,365,760 555,095 5,365,760
-------------- ------- ------ ------- ------------ -------- ---------- ------ PGA Aggr Target Stats for DB: ERPE Instance: erpe Snaps: 1 -2 2 4,317,184 540,043 5,365,760
SYSTEM E:\ORAERPE\DATAFILES\DFL_SYS_ERPE -> B: Begin snap E: End snap (rows dentified with B or E contain data 3 4,317,184 440,947 4,317,184
62 0 29.0 4.1 176 1 0 which is absolute i.e. not diffed over the interval) 4 4,317,184 567,423 5,365,760
-> PGA cache hit % - percentage of W/A (WorkArea) data processed only in -memory 5 4,317,184 481,451 4,317,184
TBL_STATPACK E:\ORAERPE\DATAFILES\DFL_STATPACK_ERPE -> Auto PGA Target - actual workarea memory target 6 3,268,608 619,733 6,414,336
0 0 72 1 0 -> W/A PGA Used - amount of memory used for all Workareas (manual + auto) 7 5,365,760 492,645 5,365,760
-> %PGA W/A Mem - percentage of PGA memory allocated to workareas 8 4,317,184 470,036 5,365,760
TBL_UNDO E:\ORAERPE\DATAFILES\DFL_UNDO_ERPE -> %Auto W/A Mem - percentage of workarea memory controlled by Auto Mem Mgmt 9 2,220,032 471,052 5,365,760
0 0 65 0 0 -> %Man W/A Mem - percentage of workarea memory under manual control 10 4,317,184 435,859 4,317,184
-------------------------------------------------------------
TBL_USERS E:\ORAERPE\DATAFILES\DFL_USER_ERPE PGA Cache Hit % W/A MB Processed Extra W/A MB Read/Written
1,299 9 4.0 7.6 0 0 0 --------------- ---------------- -------------------------
100.0 22 0
-------------------------------------------------------------
%PGA %Auto %Man Latch Activity for DB: ERPE Instance: erpe Snaps: 1 -2
PGA Aggr Auto PGA PGA Mem W/A PGA W/A W/A W/A Global Mem ->"Get Requests", "Pct Get Miss" and "Avg Slps/Miss" are statistics for
Target(M) Target(M) Alloc(M) Used(M) Mem Mem Mem Bound(K) willing-to-wait latch get requests
- --------- --------- ---------- ---------- ------ ------ ------ ---------- ->"NoWait Requests", "Pct NoWait Miss" are for no-wait latch get requests
Buffer Pool Statistics for DB: ERPE Instance: erpe Snaps: 1 -2 B 100 75 20.0 0.0 .0 .0 .0 5,120 ->"Pct Misses" for both should be very close to 0.0
-> Standard block size Pools D: default, K: keep, R: recycle E 100 75 21.2 0.0 .0 .0 .0 5,120
-> Default Pools for other block sizes: 2k, 4k, 8k, 16k, 32k ------------------------------------------------------------- Pct Avg Wait Pct
Get Get Slps Time NoWait NoWait
Free Write Buffer PGA Aggr Target Histogram for DB: ERPE Instance: erpe Snaps: 1 -2 Latch Requests Miss /Miss (s) Requests Miss
Number of Cache Buffer Physical Physical Buffer Complete Busy -> Optimal Executions are purely in-memory operations ------------------------ -------------- ------ ------ ------ ------------ ------
active checkpoint queue 51 0.0 0 0 ------------------------------------------------------------- 96 2.0 95 10,060 33,416 1.0
archive control 4 0.0 0 0 5,254,195
archive process latch 3 0.0 0 0 -------------------------------------------------------------
cache buffer handles 48 0.0 0 0
cache buffers chains 141,301 0.0 0 19,178 0.0
cache buffers lru chain 20,695 0.0 0 82 0.0 Top 5 Logical Reads per Segment for DB: ERPE Instance: erpe Snaps: 1 -2
channel handle pool latc 7 0.0 0 0 -> End Segment Logical Reads Threshold: 10000
channel operations paren 101 0.0 0 0 SGA Memory Summary for DB: ERPE Instance: erpe Snaps: 1 -2
checkpoint queue latch 3,256 0.0 0 292 0.0 Subobject Obj. Logical
child cursor hash table 2,351 0.0 0 0 Owner Tablespace Object Name Name Type Reads %Total SGA regions Size in Bytes
Consistent RBA 37 0.0 0 0 ---------- ---------- -------------------- ---------- ----- ------------ ------- ------------------------------ ----------------
dml lock allocation 1,840 0.0 0 0 SYS SYSTEM DUAL TABLE 9,984 23.01 Database Buffers 117,440,512
dummy allocation 8 0.0 0 0 DAZ TBL_USERS SGR_EVENTOS_SEGURIDA TABLE 9,936 22.90 Fixed Size 453,392
enqueue hash chains 2,543 0.0 0 0 SYS SYSTEM USER$ TABLE 3,776 8.70 Redo Buffers 1,077,248
enqueues 767 0.0 0 0 SYS SYSTEM SEG$ TABLE 1,920 4.42 Variable Size 109,051,904
event group latch 4 0.0 0 0 SYS SYSTEM TS$ TABLE 1,552 3.58 ----------------
file number translation 693 0.0 0 0 ------------------------------------------------------------- sum 228,023,056
FOB s.o list latch 17 0.0 0 0 -------------------------------------------------------------
hash table column usage 0 0 10,023 0.0
hash table modification 1 0.0 0 0 Top 5 Physical Reads per Segment for DB: ERPE Instance: erpe Snaps: 1 -2
job_queue_processes para 3 0.0 0 0 -> End Segment Physical Reads Threshold: 1000 SGA breakdown difference for DB: ERPE Instance: erpe Snaps: 1 -2
lgwr LWN SCN 71 0.0 0 0
library cache 62,899 0.0 0 284 0.0 Subobject Obj. Physical Pool Name Begin value End value % Diff
library cache load lock 534 0.0 0 0 Owner Tablespace Object Name Name Type Reads %Total ------ ------------------------------ ---------------- ---------------- -------
library cache pin 29,880 0.0 0 0 ---------- ---------- -------------------- ---------- ----- ------------ ------- java free memory 25,448,448 25,448,448 0.00
library cache pin alloca 11,396 0.0 0 0 DAZ TBL_USERS SGR_EVENTOS_SEGURIDA TABLE 9,728 96.24 java memory in use 8,105,984 8,105,984 0.00
list of block allocation 15 0.0 0 0 SYS SYSTEM TAB$ TABLE 226 2.24 large free memory 8,388,608 8,388,608 0.00
messages 591 0.0 0 0 DAZ TBL_USERS UTL_TOOLBAR TABLE 79 .78 shared dictionary cache 1,610,880 1,610,880 0.00
mostly latch-free SCN 71 0.0 0 0 DAZ TBL_USERS DS_VER TABLE 49 .48 shared errors 42,824 43,048 0.52
multiblock read objects 2,662 0.0 0 0 SYS SYSTEM CDEF$ TABLE 9 .09 shared fixed allocation callback 180 180 0.00
ncodef allocation latch 2 0.0 0 0 ------------------------------------------------------------- shared free memory 3,856,888 4,630,552 20.06
object stats modificatio 427 0.0 0 0 shared joxlod: in ehe 518,604 518,604 0.00
post/wait queue 77 0.0 0 33 0.0 shared joxs heap init 4,220 4,220 0.00
process allocation 4 0.0 0 4 0.0 shared KGK heap 1,064 1,064 0.00
process group creation 7 0.0 0 0 shared kgl simulator 2,260,476 2,260,476 0.00
redo allocation 4,736 0.0 0 0 Dictionary Cache Stats for DB: ERPE Instance: erpe Snaps: 1 -2 shared KGLS heap 2,809,232 2,650,808 -5.64
redo copy 0 0 4,626 0.0 ->"Pct Misses" should be very low (< 2% in most cases) shared KQR L SO 136,192 136,192 0.00
redo writing 253 0.0 0 0 ->"Cache Usage" is the number of cache entries being used shared KQR M PO 1,783,368 1,847,888 3.62
row cache enqueue latch 53,027 0.0 0 0 ->"Pct SGA" is the ratio of usage to allocated size for that cache shared KQR M SO 121,884 121,884 0.00
row cache objects 64,252 0.0 0 325 0.0 shared KQR S PO 189,600 189,600 0.00
sequence cache 20 0.0 0 0 Get Pct Scan Pct Mod Final shared KQR S SO 24,668 24,668 0.00
session allocation 5,392 0.0 0 0 Cache Requests Miss Reqs Miss Reqs Usage shared KQR X PO 51,072 51,072 0.00
session idle bit 2,230 0.0 0 0 ------------------------- ------------ ------ ------- ----- -------- ---------- shared KSXR pending messages que 841,036 841,036 0.00
session switching 4 0.0 0 0 dc_histogram_data 67 11.9 0 0 59 shared KSXR receive buffers 1,033,000 1,033,000 0.00
session timer 50 0.0 0 0 dc_histogram_data_values 54 9.3 0 0 41 shared library cache 8,040,064 8,230,180 2.36
shared pool 33,815 0.0 1.0 0 0 dc_histogram_defs 1,673 19.1 0 0 1,095 shared miscellaneous 6,287,652 6,361,932 1.18
simulator hash latch 2,454 0.0 0 0 dc_object_ids 5,382 1.4 0 0 674 shared parameters 24,344 19,152 -21.33
simulator lru latch 679 0.0 0 11 0.0 dc_objects 919 4.8 0 0 1,198 shared PLS non-lib hp 2,068 2,068 0.00
sort extent pool 2 0.0 0 0 dc_profiles 5 0.0 0 0 2 shared PL/SQL DIANA 4,499,004 4,020,688 -10.63
SQL memory manager latch 1 0.0 0 45 0.0 dc_segments 571 7.0 0 0 268 shared PL/SQL MPCODE 1,104,724 946,728 -14.30
dc_sequences 1 0.0 0 1 5 shared PL/SQL PPCODE 27,616 27,616 0.00
dc_tablespaces 2,900 0.0 0 0 13 shared PL/SQL SOURCE 4,428 4,428 0.00
dc_user_grants 10,580 0.0 0 0 71 shared sim memory hea 127,064 127,064 0.00
dc_usernames 442 0.0 0 0 26 shared sql area 29,594,324 29,292,804 -1.02
Latch Activity for DB: ERPE Instance: erpe Snaps: 1 -2 dc_users 14,771 0.0 0 0 90 shared table definiti 6,312 6,348 0.57
->"Get Requests", "Pct Get Miss" and "Avg Slps/Miss" are statistics for ------------------------------------------------------------- shared trigger defini 2,508 2,968 18.34
willing-to-wait latch get requests shared trigger inform 1,760 1,728 -1.82
->"NoWait Requests", "Pct NoWait Miss" are for no-wait latch get requests shared trigger source 3,632 1,812 -50.11
->"Pct Misses" for both should be very close to 0.0 Library Cache Activity for DB: ERPE Instance: erpe Snaps: 1 -2 shared 1M buffer 2,098,176 2,098,176 0.00
->"Pct Misses" should be very low buffer_cache 117,440,512 117,440,512 0.00
Pct Avg Wait Pct fixed_sga 453,392 453,392 0.00
Get Get Slps Time NoWait NoWait Get Pct Pin Pct Invali- log_buffer 1,067,008 1,067,008 0.00
Latch Requests Miss /Miss (s) Requests Miss Namespace Requests Miss Requests Miss Reloads dations -------------------------------------------------------------
------------------------ -------------- ------ ------ ------ ------------ ------ --------------- ------------ ------ -------------- ------ ---------- --------
SQL memory manager worka 3,432 0.0 0 0 BODY 48 0.0 48 0.0 0 0
transaction allocation 14 0.0 0 0 CLUSTER 152 0.0 159 0.0 0 0
transaction branch alloc 2 0.0 0 0 SQL AREA 901 0.6 10,998 2.6 16 6
undo global data 213 0.0 0 0 TABLE/PROCEDURE 5,819 0.4 5,151 5.7 151 0 init.ora Parameters for DB: ERPE Instance: erpe Snaps: 1 -2
user lock 28 0.0 0 0 TRIGGER 14 0.0 14 14.3 2 0
-------------------------------------------------------- ----- ------------------------------------------------------------- End value
Parameter Name Begin value (if different)
----------------------------- ------------------------ --------- --------------
audit_sys_operations TRUE
audit_trail DB
Latch Sleep breakdown for DB: ERPE Instance: erpe Snaps: 1 -2 Shared Pool Advisory for DB: ERPE Instance: erpe End Snap: 2 background_dump_dest E:\ORAERPE\TRACE\BACKGROUND
-> ordered by misses desc -> Note there is often a 1:Many correlation between a single logical object compatible 9.2.0.1.0
in the Library Cache, and the physical number of memory objects associated control_files E:\ORAERPE\CONTROLFILES\CTL_ERPE0
Get Spin & with it. Therefore comparing the number of Lib Cache objects (e.g. in core_dump_dest E:\ORAERPE\TRACE\CDUMP
Latch Name Requests Misses Sleeps Sleeps 1 ->4 v$librarycache), with the number of Lib Cache Memory Objects is invalid cursor_sharing EXACT
-------------------------- -------------- ----------- ----------- ------------ cursor_space_for_time FALSE
shared pool 33,815 1 1 0/1/0/0/0 Estd db_block_size 8192
------------------------------------------------------------- Shared Pool SP Estd Estd Estd Lib LC Time db_cache_size 67108864
Size for Size Lib Cache Lib Cache Cache Time Saved Estd Lib Cache db_domain
Estim (M) Factr Size (M) Mem Ob j Saved (s) Factr Mem Obj Hits db_file_multiblock_read_count 16
----------- ----- ---------- ------------ ------------ ------- --------------- db_files 15
24 .5 26 4,677 33,278 1.0 5,242,330 db_keep_cache_size 33554432
Latch Miss Sources for DB: ERPE Instance: erpe Snaps: 1 -2 32 .7 33 5,002 33,328 1.0 5,246,193 db_name ERPE
-> only latches with sleeps are shown 40 .8 40 5,922 33,356 1.0 5,248,679 db_recycle_cache_size 16777216
-> ordered by name, sleeps desc 48 1.0 47 6,159 33,380 1.0 5,250,424 fast_start_mttr_target 0
56 1.2 60 6,972 33,389 1.0 5,251,380 instance_name ERPE
NoWait Waiter 64 1.3 67 7,149 33,394 1.0 5,252,262 java_pool_size 33554432
Latch Name Where Misses Sleeps Sleeps 72 1.5 74 7,320 33,406 1.0 5,253,086 job_queue_processes 1
------------------------ -------------------------- ------- ---------- -------- 80 1.7 81 8,149 33,412 1.0 5,253,588 large_pool_size 8388608
shared pool kghalo 0 1 0 88 1.8 88 8,881 33,416 1.0 5,254,134 license_max_users 70
16
log_archive_dest D:\ERPEARCHLOG
log_archive_start TRUE Recover damaged blocks. 15.2 NT specific
log_buffer 1000448 This is definetively the tool you should use to do backups. 15.2.1 Screen Savers
log_checkpoint_interval 0
max_dump_file_size UNLIMITED Any screen saver in NT consume excessive CPU on the server they MUST
max_enabled_roles 70 11.4 You too can use export and import to backup BE AVOIDED.
open_cursors 800 You can do incremental exports, but I’ll not recommend, usually you can 15.2.2
open_links 4 Cache In NT4
optimizer_index_caching 90 export all database and reimport to optimize, or to move one table or NT 4 don’t detect automatically the cache, by default it only the minimal.
optimizer_index_cost_adj 10
optimizer_max_permutations 2500 schema, but not for periodicall backups You have to set the SecondLevelDataCache using the registry editor.
pga_aggregate_target 104857600
processes 50
query_rewrite_enabled TRUE
11.5 Test your database backups ALWAYS 16 Other considerations
query_rewrite_integrity ENFORCED Test if you backups are burned without writing errors (compressing winrar
read_only_open_delayed FALSE
remote_dependencies_mode SIGNATURE and testing) 16.1 Bugs and hidden parameters
remote_login_passwordfile
session_cached_cursors
SHARED
1000
Periodically test your backup restauration works, this will test all. If you There are bugs and specially situations that will need you to fill a tar in
session_max_open_files 30 don’t want to do, you don’t have to do, but you can lose your job in a very metalink.oracle.com (paying support).
shared_pool_size 50331648 bad manner if you don’t and if restauration fails when needed. For example in 9i, a improvement using views,cuased all queryes failed, in
statistics_level ALL
timed_statistics TRUE that situation you could use _complex_view_merging=FALSE, hidden
undo_management
undo_retention
AUTO
56000 12 24x7 parameter (note the first letter is an underscore)
undo_tablespace TBL_UNDO Oracle offers several tools to keep always your database up. 24 hours and 7 You can see hidden parameter with the following view
user_dump_dest E:\ORAERPE\TRACE\USER
utl_file_dir * days a week.
workarea_size_policy AUTO
------------------------------------------------------------- Select ksppinm, ksppstvl from
12.1 Tables and indexes x$ksppi a, x$ksppsv b
End of Report rdbms_redefinition, allows severan online operation in tables and indexes where a.indx=b.indx and substr(ksppinm,1,1) = '_';
10.7 Waits
Here are better explanations than the ones I could give you:
http://otn.oracle.com/products/manageability/database/pdf/OWPerformanceMgmtPaper.pdf
http://www.dbspecialists.com/presentations/wait_events2.html
http://ww.dbazine.com/burleston8.html
http://download-west.oracle.com/docs/cd/B10501_01/server.920/a96533/instance_tune.htm#18211
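To get a quick first idea of where your instance is waiting, you can start with V$SYSTEM_EVENT (a minimal sketch; remember to ignore idle events such as 'SQL*Net message from client', and TIMED_STATISTICS must be TRUE for the timings to be populated):

SQL> select event, total_waits, time_waited
  2  from v$system_event
  3  order by time_waited desc;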
10.7.1 Waits by object
There is a new view in 9.2 that shows statistics by segment (a table is usually
one segment, unless it is partitioned).

SELECT OBJECT_NAME, STATISTIC_NAME, VALUE
FROM V$SEGMENT_STATISTICS
WHERE OBJECT_NAME = 'TABLE_NAME';

11 Backups
Oracle offers a rich and interesting set of ways to do backups.
But you must also know how to restore them, and you need to practice restoring
before you really have to do it.

11.1 Full Backup
If you shut down the database and copy all its files (control files, log files,
datafiles, etc.), you can later shut down the database again, put those files
back, and the database will work fine.
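For example, a cold backup could look roughly like this (the paths are only an example; make sure you copy every control file, redo log file and datafile of your database):

SQL> shutdown immediate
SQL> host xcopy E:\ORAERPE D:\BACKUP\ORAERPE /E /I
SQL> startup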
11.2 Archivelog Mode
Oracle, in archivelog mode, automatically saves the old redo logs to a directory.
If you know the exact SCN (the number that corresponds to every commit) you want
to return to, you can (after backing up everything, including all log files)
replace the current datafiles with previous datafiles from a backup, roll forward
(reapply) the changes, and get your database back as it was.
It avoids having to do full backups frequently.
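If your database is not yet in archivelog mode, enabling it looks roughly like this (with log_archive_start=TRUE and log_archive_dest set in your init.ora, as in the sample parameters above, so archiving is automatic in 9i):

SQL> shutdown immediate
SQL> startup mount
SQL> alter database archivelog;
SQL> alter database open;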
11.3 RMAN
RMAN is THE TOOL for backups. It allows hot backups (without shutting down the
database), but you must test it anyway; there are some bugs around.
It allows you to resume a long backup after a crash.
It can recover damaged blocks.
This is definitively the tool you should use to do backups.
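A minimal sketch, assuming you back up to the default disk destination (in a real installation you would also use a recovery catalog and back up the archived logs):

C:\> rman target /
RMAN> backup database;
RMAN> list backup;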
11.4 You can also use export and import for backups
You can do incremental exports, but I would not recommend them. Usually you
export the whole database and re-import it to reorganize it, or export to move
one table or schema, but not for periodical backups.
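For example (the SYSTEM password and the file names are only placeholders):

exp system/password file=full_erpe.dmp full=y log=exp_full.log
exp system/password file=daz.dmp owner=DAZ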
11.5 Test your database backups ALWAYS
Test that your backups are burned without writing errors (for example, compress
them with WinRAR and test the archive).
Periodically test that restoring your backups really works; this tests everything.
If you don't want to do it, you don't have to, but you can lose your job in a
very bad manner if you don't and the restore fails when it is needed.

12 24x7
Oracle offers several tools to keep your database always up, 24 hours a day and
7 days a week.

12.1 Tables and indexes
DBMS_REDEFINITION allows several online operations on tables and indexes, as in
the sketch below.
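A minimal sketch of an online table redefinition (the schema and table names are only examples; the interim table must already exist with the structure you want, and the table needs a primary key for the default options):

SQL> EXEC DBMS_REDEFINITION.CAN_REDEF_TABLE('DAZ', 'BIG_TABLE');
SQL> EXEC DBMS_REDEFINITION.START_REDEF_TABLE('DAZ', 'BIG_TABLE', 'BIG_TABLE_INTERIM');
SQL> EXEC DBMS_REDEFINITION.SYNC_INTERIM_TABLE('DAZ', 'BIG_TABLE', 'BIG_TABLE_INTERIM');
SQL> EXEC DBMS_REDEFINITION.FINISH_REDEF_TABLE('DAZ', 'BIG_TABLE', 'BIG_TABLE_INTERIM');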
12.2 Backups
RMAN hot backups let you take a full backup without needing to shut down the
database.

12.3 Recovery
12.3.1 Backup recover
The time needed to recover a database from a backup must be tested individually
on every database; there are several factors involved: different physical disks,
processor, etc.

12.3.2 Instance Recovery
Every time an instance crashes, there is a process that automatically recovers
it using the redo logs; the time this recovery takes can be tuned. You can read
about it here:
http://download-west.oracle.com/docs/cd/B10501_01/server.920/a96533/instreco.htm#442821
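For example, FAST_START_MTTR_TARGET sets a target crash-recovery time in seconds (300 is just an illustrative value), and V$INSTANCE_RECOVERY shows how close you are to it:

SQL> alter system set fast_start_mttr_target = 300;
SQL> select target_mttr, estimated_mttr from v$instance_recovery;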
13 Other Parameters
13.1 MAX_ENABLED_ROLES
If, like me, you got the impression that this parameter can severely affect
performance, that is not true: you can have dozens of roles and they will not
take too many resources.

14 Hardware
14.1 Don't use Raid 5
Using RAID 5 on a heavily updated database is disastrous for performance,
because it implies a high write penalty.
Oracle recommends using RAID 10 (a.k.a. RAID 0+1) for all systems that
experience significant updates.

15 Operating system
15.1 Defragmentation
Defragmenting the operating system's logical disks can give a small improvement;
you can use Diskeeper or another tool.

15.2 NT specific
15.2.1 Screen Savers
Any screen saver on an NT server consumes excessive CPU; they MUST BE AVOIDED.

15.2.2 Cache in NT4
NT 4 does not detect the second-level cache automatically; by default it only
uses the minimum. You have to set SecondLevelDataCache using the registry editor.
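The value is normally found under the following registry key (check it on your own server before changing anything):

HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management\SecondLevelDataCache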
16 Other considerations

16.1 Bugs and hidden parameters
There are bugs, and special situations, that will require you to file a TAR at
metalink.oracle.com (paying support).
For example, in 9i an improvement in view handling caused all my queries against
some views to fail; in that situation you could use _complex_view_merging=FALSE,
a hidden parameter (note the first character is an underscore).
You can see the hidden parameters with the following query (connected as SYS):

Select ksppinm, ksppstvl
from x$ksppi a, x$ksppsv b
where a.indx = b.indx and substr(ksppinm,1,1) = '_';

Remember hidden parameters are NOT SUPPORTED by Oracle if used without their
approval; they must be used through Oracle Support, only in exceptional
situations, and only good expertise allows the use of hidden parameters.

16.2 Don't apply techniques that work in other databases
For example, I had a serious problem because I didn't know that in SQL Server
stored procedures you have to check after each statement whether there was a
return value.
Carrying your old tuning techniques from one database to another as if they
worked the same could be the worst mistake you can make.

16.3 Test your solutions
Several techniques don't work as expected, for several reasons, including bugs;
some advice may not be the best for your specific database situation. There are
always exceptions to a rule.

16.4 What to optimize
You must optimize what is not fast enough; what defines how fast a process
should be is the client requirement.
For example, a 10-hour process that must run once a week, and could run in 20
hours without anybody complaining, is not a priority.
A 0.50-second update that runs hundreds of thousands of times a day is more
important.

16.5 Patches
You can get them at http://metalink.oracle.com/, but you must pay for support.

17 How to get specific information from your Oracle Database
Here is some useful information you will often need to get from your database.
Remember that almost all the information about the database (95% of it) is
accessible through views.
So it is important that you memorize some of them to search for information;
they are extremely useful.
17.1 Database Objects
There are several views showing information at three levels: DBA_TABLES (all
tables in the database, plus additional information), ALL_TABLES (all tables the
current user can access), USER_TABLES (tables owned by the current user).
Remember the DBA_ prefixed views always have more information.

DBA_OBJECTS has all objects; it is useful to find any object.
There is one view for each kind of object: DBA_TABLES, DBA_INDEXES,
DBA_SYNONYMS, etc.
Anyway, if you don't find the one you are looking for, you can search for it:
select * from dba_views where VIEW_NAME LIKE '%PRIV%';

17.2 Views for Tuning
There is a good list and explanation here:
http://download-west.oracle.com/docs/cd/B10501_01/server.920/a96533/sqlviews.htm#172
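For example, one view you will use all the time is V$SQLAREA, to find the heaviest SQL currently in the shared pool (a simple sketch):

SQL> select buffer_gets, disk_reads, executions, sql_text
  2  from v$sqlarea
  3  order by buffer_gets desc;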
17.3 Version
To get the Oracle version you are using:
SQL> select * from v$version;

BANNER
----------------------------------------------------------------
Oracle9i Release 9.2.0.1.0 - Production
PL/SQL Release 9.2.0.1.0 - Production
CORE 9.2.0.1.0 Production
TNS for 32-bit Windows: Version 9.2.0.1.0 - Production
NLSRTL Version 9.2.0.1.0 - Production

17.3.1 Interpreting the version number
Starting with Oracle 9i Release 2, a new numbering scheme is in effect, for
example:
9.2.0.1.0
9 Major release number
2 Maintenance release number
0 Application Server release number
1 Component-specific release number
0 Platform-specific release number
17.4 Operating System
SQL> select dbms_utility.port_string from dual;
PORT_STRING
--------------------------------------------------
IBMPC/WIN_NT -8.1.0
17.5 Options available in your database release
To see which options are available in your release
SQL> select * from v$option;
PARAMETER VALUE
---------------------------------------------------------------- ------
Partitioning FALSE
Objects TRUE
Real Application Clusters FALSE
Advanced replication FALSE
Bit-mapped indexes FALSE
……..
To see the status and version of your database components:

COMP_ID  COMP_NAME                     VERSION    STATUS  MODIFIED              CONTROL
SCHEMA   PROCEDURE                                                  STARTUP  PARENT_ID
----------------------------------------------------------------------------------------
CATALOG  Oracle9i Catalog Views        9.2.0.1.0  VALID   05-SEP-2002 14:42:59  SYS
SYS      DBMS_REGISTRY_SYS.VALIDATE_CATALOG

CATPROC  Oracle9i Packages and Types   9.2.0.1.0  VALID   05-SEP-2002 14:42:59  SYS
SYS      DBMS_REGISTRY_SYS.VALIDATE_CATPROC
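This output most likely comes from the DBA_REGISTRY view; a simpler query against it would be:

SQL> select comp_id, comp_name, version, status from dba_registry;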
18 Bibliography
Some books usually suggested to read are:
Oracle Concepts
Oracle Administration Guide
Oracle Application Developer
Oracle Performance Tuning

For a more detailed explanation, get Tom Kyte's "Effective Oracle by Design"
book and/or go to asktom.oracle.com