
PL/SQL Fundamentals Ramp-up

Contents
CONTENTS
INTRODUCTION
TOPICS COVERED
COLLECTIONS
BULK BIND AND BULK COLLECT WITH BULK EXCEPTIONS
TRIGGERS
PRAGMA
ANALYTIC FUNCTIONS
HIERARCHICAL QUERIES
GLOBAL TEMPORARY TABLES
QUERY OPTIMIZATION
PERFORMANCE TUNING TOPICS
CURSORS AND REF CURSORS
RECORDS
REDO LOGS, ROLLBACK SEGMENT, FLASHBACK
DATA DICTIONARY
MISCELLANEOUS TOPICS

INTRODUCTION
This document covers the Oracle PL/SQL fundamentals in detail, explained with the support of examples. The target audience of this document is developers who need to get ramped up on Oracle PL/SQL.

TOPICS COVERED
COLLECTIONS
BULK BIND AND BULK COLLECT WITH BULK EXCEPTIONS
TRIGGERS
PRAGMA
ANALYTIC FUNCTIONS
GLOBAL TEMPORARY TABLES
QUERY OPTIMIZATION
PERFORMANCE TUNING TOPICS
CURSORS AND REF CURSORS
RECORDS
REDO LOGS, ROLLBACK SEGMENT, FLASHBACK
DATA DICTIONARY
MISCELLANEOUS TOPICS
1) DBMS_ERROR_LOG
2) Purity Rules for Functions
3) Table partitioning
4) Function-based index
5) UNION vs. UNION ALL
6) How to compare data in two similar tables
7) Database modeling / Database designing and Normalization
8) Multi-table inserts in Oracle
9) Oracle tuning hints
10) EXERCISE


COLLECTIONS
1. Overview: A collection is an ordered group of elements, all of the same type. It is a general concept that encompasses lists, arrays, and other familiar data types. Each element has a unique subscript that determines its position in the collection. PL/SQL offers these collection types:
Index-by tables, also known as associative arrays, let you look up elements using arbitrary numbers and strings as subscript values. (They are similar to hash tables in other programming languages.)

Nested tables hold an arbitrary number of elements. They use sequential numbers as subscripts. You can define equivalent SQL types, allowing nested tables to be stored in database tables and manipulated through SQL.

Varrays (short for variable-size arrays) hold a fixed number of elements (although you can change the number of elements at runtime). They use sequential numbers as subscripts. You can define equivalent SQL types, allowing varrays to be stored in database tables. They can be stored and retrieved through SQL, but with less flexibility than nested tables.
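A minimal sketch contrasting the three kinds of declarations (each is covered in detail in the sections that follow):

Declare
  -- associative array: arbitrary (here string) subscripts, no initialization needed
  TYPE TYP_ASSOC IS TABLE OF NUMBER INDEX BY VARCHAR2(20);
  -- nested table: unbounded, sequential subscripts, must be initialized
  TYPE TYP_NT IS TABLE OF NUMBER;
  -- varray: bounded (here 3 elements), sequential subscripts, must be initialized
  TYPE TYP_VA IS VARRAY (3) OF NUMBER;
  assoc TYP_ASSOC;
  nt    TYP_NT := TYP_NT(10, 20);
  va    TYP_VA := TYP_VA(10, 20, 30);
Begin
  assoc('ten') := 10;           -- string key
  dbms_output.put_line(nt(2));  -- prints 20
  dbms_output.put_line(va(3));  -- prints 30
End;
/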

Although collections can have only one dimension, you can model multidimensional arrays by creating collections whose elements are also collections.


To use collections in an application, you define one or more PL/SQL types, then define variables of those types. You can define collection types in a procedure, function, or package. You can pass collection variables as parameters, to move data between client-side applications and stored subprograms. To look up data that is more complex than single values, you can store PL/SQL records or SQL object types in collections. Nested tables and varrays can also be attributes of object types.

2. Persistent and non-persistent collections

Index-by tables cannot be stored in database tables, so they are non-persistent. You cannot use them in a SQL statement; they are available only in PL/SQL blocks. Nested tables and varrays are persistent. You can use the CREATE TYPE statement to create them in the database, and you can read and write them from/to a database column. Nested tables and varrays must be initialized before you can use them.

3. Declarations

Nested Table

TYPE type_name IS TABLE OF element_type [NOT NULL];

With nested tables declared within PL/SQL, element_type can be any PL/SQL datatype except REF CURSOR. Nested tables declared in SQL (CREATE TYPE) have additional restrictions. They cannot use the following element types:

BINARY_INTEGER, PLS_INTEGER
BOOLEAN
LONG, LONG RAW
NATURAL, NATURALN
POSITIVE, POSITIVEN
REF CURSOR
SIGNTYPE
STRING
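For example (a hedged sketch; the exact error text varies by version), a SQL-level nested table of a PL/SQL-only type is rejected, while the same declaration is legal inside a PL/SQL block:

CREATE TYPE TYP_NT_BOOL AS TABLE OF BOOLEAN;
-- fails: BOOLEAN is a PL/SQL type, not a SQL type

Declare
  TYPE TYP_NT_BOOL IS TABLE OF BOOLEAN; -- legal in PL/SQL
  t TYP_NT_BOOL := TYP_NT_BOOL(TRUE, FALSE);
Begin
  Null;
End;
/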

PL/SQL

Declare
  TYPE TYP_NT_NUM IS TABLE OF NUMBER;

SQL

CREATE [OR REPLACE] TYPE TYP_NT_NUM IS TABLE OF NUMBER;

Varrays

TYPE type_name IS {VARRAY | VARYING ARRAY} (size_limit) OF element_type [NOT NULL];

size_limit is a positive integer literal representing the maximum number of elements in the array.

PL/SQL

Declare
  TYPE TYP_V_CHAR IS VARRAY (10) OF VARCHAR2 (20);

SQL

CREATE [OR REPLACE] TYPE TYP_V_CHAR IS VARRAY (10) OF VARCHAR2 (20);

Index-by Table

TYPE type_name IS TABLE OF element_type [NOT NULL]
  INDEX BY [BINARY_INTEGER | PLS_INTEGER | VARCHAR2 (size_limit)];

The key type can be numeric, either BINARY_INTEGER or PLS_INTEGER (9i). It can also be VARCHAR2 or one of its subtypes VARCHAR, STRING, or LONG. You must specify the length of a VARCHAR2-based key, except for LONG, which is equivalent to declaring a key type of VARCHAR2 (32760). The types RAW, LONG RAW, ROWID, CHAR, and CHARACTER are not allowed as keys for an associative array.

Declare
  TYPE TYP_TAB_VAR IS TABLE OF VARCHAR2 (50) INDEX BY BINARY_INTEGER;

4. Initialization

Only nested tables and varrays need initialization. To initialize a collection, you use the constructor of the collection, whose name is the same as the collection type.

Nested tables

Declare
  TYPE TYP_NT_NUM IS TABLE OF NUMBER;
  Nt_tab TYP_NT_NUM;
Begin
  Nt_tab := TYP_NT_NUM (5, 10, 15, 20);
End;

Varrays

Declare
  TYPE TYP_V_DAY IS VARRAY (7) OF VARCHAR2 (15);
  v_tab TYP_V_DAY;
Begin
  v_tab := TYP_V_DAY ('Sunday','Monday','Tuesday','Wednesday','Thursday','Friday','Saturday');
End;

It is not required to initialize all the elements of a collection. You can also initialize no element at all; in this case, use an empty constructor:

v_tab := TYP_V_DAY ();

This collection is empty, which is different from a NULL collection (not initialized).

Index-by Table

Declare
  TYPE TYP_TAB IS TABLE OF NUMBER INDEX BY PLS_INTEGER;
  my_tab TYP_TAB;
Begin
  my_tab (1) := 5;
  my_tab (2) := 10;
  my_tab (3) := 15;
End;

5. Handle the collection

While a collection (nested table or varray) is not initialized, it is not possible to manipulate it. You can test whether a collection is initialized:

Declare
  TYPE TYP_VAR_TAB is VARRAY (30) of varchar2 (100);
  tab1 TYP_VAR_TAB; -- declared but not initialized
Begin
  If Tab1 IS NULL Then
    -- NULL collection, we have to initialize it --
    Tab1 := TYP_VAR_TAB ('','','','','','','','','','');
  End if;
  -- Now, we can handle the collection --
End;

To access an element of a collection, we use a subscript value that identifies a unique element of the collection. The subscript is of type integer or varchar2.

Declare
  Type TYPE_TAB_EMP IS TABLE OF Varchar2 (60) INDEX BY BINARY_INTEGER;
  emp_tab TYPE_TAB_EMP;
  i pls_integer;
Begin
  For i in 0..10 Loop
    emp_tab( i+1 ) := 'Emp ' || ltrim( to_char( i ) );
  End loop;
End;

Declare
  Type TYPE_TAB_DAYS IS TABLE OF PLS_INTEGER INDEX BY VARCHAR2 (20);
  day_tab TYPE_TAB_DAYS;
Begin
  day_tab( 'Monday' ) := 10;
  day_tab( 'Tuesday' ) := 20;
  day_tab( 'Wednesday' ) := 30;
End;

It is possible to assign the values of one collection to another collection if they are of the same type.

Declare
  Type TYPE_TAB_EMP  IS TABLE OF EMP%ROWTYPE INDEX BY BINARY_INTEGER;
  Type TYPE_TAB_EMP2 IS TABLE OF EMP%ROWTYPE INDEX BY BINARY_INTEGER;
  tab1 TYPE_TAB_EMP;
  tab2 TYPE_TAB_EMP;
  tab3 TYPE_TAB_EMP2;
Begin
  tab2 := tab1; -- OK
  tab3 := tab1; -- Error : types not similar
End;

Comparing collections

Before the 10g release, collections could not be directly compared for equality or inequality. The 10g release allows some comparisons between collections: you can compare collections of the same type to verify whether they are equal or not equal.

DECLARE
  TYPE Colors IS TABLE OF VARCHAR2(64);
  primaries     Colors := Colors('Blue','Green','Red');
  rgb           Colors := Colors('Red','Green','Blue');
  traffic_light Colors := Colors('Red','Green','Amber');
BEGIN
  -- We can use = or !=, but not < or >.
  -- 2 collections are equal even if the members are not in the same order.
  IF primaries = rgb THEN
    dbms_output.put_line('OK, PRIMARIES & RGB have same members.');
  END IF;
  IF rgb != traffic_light THEN
    dbms_output.put_line('RGB & TRAFFIC_LIGHT have different members');
  END IF;
END;

6. COLLECTION Methods

We can use the following methods on a collection:

EXISTS
COUNT
LIMIT
FIRST and LAST
PRIOR and NEXT
EXTEND
TRIM
DELETE

A collection method is a built-in function or procedure that operates on collections and is called using dot notation:

collection_name.method_name[(parameters)]

Collection methods cannot be called from SQL statements. Only the EXISTS method can be used on a NULL collection; all other methods applied on a null collection raise the COLLECTION_IS_NULL error.
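A minimal sketch of that behavior:

Declare
  TYPE TYP_NT IS TABLE OF NUMBER;
  nt TYP_NT; -- declared but never initialized: atomically null
Begin
  If NOT nt.EXISTS(1) Then              -- legal on a null collection
    dbms_output.put_line('no element 1');
  End if;
  dbms_output.put_line(nt.COUNT);       -- raises COLLECTION_IS_NULL
Exception
  When COLLECTION_IS_NULL Then
    dbms_output.put_line('COLLECTION_IS_NULL raised');
End;
/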


6.1 EXISTS(index)

Returns TRUE if the index element exists in the collection, else it returns FALSE. Use this method to be sure you are doing a valid operation on the collection. This method does not raise the SUBSCRIPT_OUTSIDE_LIMIT exception if used on an element that does not exist in the collection.

If my_collection.EXISTS(10) Then
  My_collection.DELETE(10) ;
End if ;

6.2 COUNT

Returns the number of elements in a collection.

SQL> Declare
  2    TYPE TYP_TAB IS TABLE OF NUMBER;
  3    my_tab TYP_TAB := TYP_TAB( 1, 2, 3, 4, 5 );
  4  Begin
  5    Dbms_output.Put_line( 'COUNT = ' || To_Char( my_tab.COUNT ) ) ;
  6    my_tab.DELETE(2) ;
  7    Dbms_output.Put_line( 'COUNT = ' || To_Char( my_tab.COUNT ) ) ;
  8  End ;
  9  /
COUNT = 5
COUNT = 4

PL/SQL procedure successfully completed.

6.3 LIMIT

Returns the maximum number of elements that a varray can contain. Returns NULL for nested tables and index-by tables.

SQL> Declare
  2    TYPE TYP_ARRAY IS ARRAY (30) OF NUMBER;
  3    my_array TYP_ARRAY := TYP_ARRAY( 1, 2, 3 ) ;
  4  Begin
  5    dbms_output.put_line( 'Max array size is ' || my_array.LIMIT ) ;
  6  End;
  7  /
Max array size is 30

6.4 FIRST and LAST

Returns the first or last subscript of a collection.

If the collection is empty, FIRST and LAST return NULL.

SQL> Declare
  2    TYPE TYP_TAB IS TABLE OF NUMBER;
  3    my_tab TYP_TAB := TYP_TAB( 1, 2, 3, 4, 5 );
  4  Begin
  5    For i IN my_tab.FIRST .. my_tab.LAST Loop
  6      Dbms_output.Put_line( 'my_tab(' || Ltrim(To_Char(i)) || ') = ' || To_Char( my_tab(i) ) ) ;
  7    End loop ;
  8  End ;
  9  /
my_tab(1) = 1
my_tab(2) = 2
my_tab(3) = 3
my_tab(4) = 4
my_tab(5) = 5

PL/SQL procedure successfully completed.

SQL> Declare
  2    TYPE TYP_TAB IS TABLE OF PLS_INTEGER INDEX BY VARCHAR2(1);
  3    my_tab TYP_TAB;
  4  Begin
  5    For i in 65 .. 69 Loop
  6      my_tab( Chr(i) ) := i ;
  7    End loop ;
  8    Dbms_Output.Put_Line( 'First= ' || my_tab.FIRST || ' Last= ' || my_tab.LAST ) ;
  9  End ;
 10  /
First= A Last= E

PL/SQL procedure successfully completed.

6.5 PRIOR(index) and NEXT(index)

Returns the previous or next subscript of the index element. If the index element has no predecessor, PRIOR(index) returns NULL. Likewise, if index has no successor, NEXT(index) returns NULL.

SQL> Declare
  2    TYPE TYP_TAB IS TABLE OF PLS_INTEGER INDEX BY VARCHAR2(1) ;
  3    my_tab TYP_TAB ;
  4    c Varchar2(1) ;
  5  Begin
  6    For i in 65 .. 69 Loop
  7      my_tab( Chr(i) ) := i ;
  8    End loop ;
  9    c := my_tab.FIRST ; -- first element
 10    Loop
 11      Dbms_Output.Put_Line( 'my_tab(' || c || ') = ' || my_tab(c) ) ;
 12      c := my_tab.NEXT(c) ; -- get the successor element
 13      Exit When c IS NULL ; -- end of collection
 14    End loop ;
 15  End ;
 16  /
my_tab(A) = 65
my_tab(B) = 66
my_tab(C) = 67
my_tab(D) = 68
my_tab(E) = 69

PL/SQL procedure successfully completed.

Use the PRIOR() or NEXT() method to be sure that you do not access an invalid element:

SQL> Declare
  2    TYPE TYP_TAB IS TABLE OF PLS_INTEGER ;
  3    my_tab TYP_TAB := TYP_TAB( 1, 2, 3, 4, 5 );
  4  Begin
  5    my_tab.DELETE(2) ; -- delete an element of the collection
  6    For i in my_tab.FIRST .. my_tab.LAST Loop
  7      Dbms_Output.Put_Line( 'my_tab(' || Ltrim(To_char(i)) || ') = ' || my_tab(i) ) ;
  8    End loop ;
  9  End ;
 10  /
my_tab(1) = 1
Declare
*
ERROR at line 1:
ORA-01403: no data found
ORA-06512: at line 7

In this example, we get an error because one element of the collection was deleted. One solution is to use the PRIOR()/NEXT() method:

SQL> Declare
  2    TYPE TYP_TAB IS TABLE OF PLS_INTEGER ;
  3    my_tab TYP_TAB := TYP_TAB( 1, 2, 3, 4, 5 );
  4    v Pls_Integer ;
  5  Begin
  6    my_tab.DELETE(2) ;
  7    v := my_tab.first ;
  8    Loop
  9      Dbms_Output.Put_Line( 'my_tab(' || Ltrim(To_char(v)) || ') = ' || my_tab(v) ) ;
 10      v := my_tab.NEXT(v) ; -- get the next valid subscript
 11      Exit When v IS NULL ;
 12    End loop ;
 13  End ;
 14  /
my_tab(1) = 1
my_tab(3) = 3
my_tab(4) = 4
my_tab(5) = 5

PL/SQL procedure successfully completed.

Another solution is to test if the index exists before using it:

SQL> Declare
  2    TYPE TYP_TAB IS TABLE OF PLS_INTEGER ;
  3    my_tab TYP_TAB := TYP_TAB( 1, 2, 3, 4, 5 );
  4  Begin
  5    my_tab.DELETE(2) ;
  6    For i IN my_tab.FIRST .. my_tab.LAST Loop
  7      If my_tab.EXISTS(i) Then
  8        Dbms_Output.Put_Line( 'my_tab(' || Ltrim(To_char(i)) || ') = ' || my_tab(i) ) ;
  9      End if ;
 10    End loop ;
 11  End ;
 12  /
my_tab(1) = 1
my_tab(3) = 3
my_tab(4) = 4
my_tab(5) = 5

PL/SQL procedure successfully completed.

6.6 EXTEND[(n[,i])]

Used to extend a collection (add new elements).

EXTEND appends one null element to a collection.
EXTEND(n) appends n null elements to a collection.
EXTEND(n,i) appends n copies of the ith element to a collection.

SQL> Declare
  2    TYPE TYP_NES_TAB is table of Varchar2(20) ;
  3    tab1 TYP_NES_TAB ;
  4    i Pls_Integer ;
  5    Procedure Print( i in Pls_Integer ) IS
  6    BEGIN Dbms_Output.Put_Line( 'tab1(' || ltrim(to_char(i)) ||') = ' || tab1(i) ) ; END ;
  7    Procedure PrintAll IS
  8    Begin
  9      Dbms_Output.Put_Line( '* Print all collection *' ) ;
 10      For i IN tab1.FIRST..tab1.LAST Loop
 11        If tab1.EXISTS(i) Then
 12          Dbms_Output.Put_Line( 'tab1(' || ltrim(to_char(i)) ||') = ' || tab1(i) ) ;
 13        End if ;
 14      End loop ;
 15    End ;
 16  Begin
 17    tab1 := TYP_NES_TAB('One') ;
 18    i := tab1.COUNT ;
 19    Dbms_Output.Put_Line( 'tab1.COUNT = ' || i ) ;
 20    Print(i) ;
 21    -- the following line raises an error because the second index does not exist in the collection --
 22    -- tab1(2) := 'Two' ;
 23    -- Add one empty element --
 24    tab1.EXTEND ;
 25    i := tab1.COUNT ;
 26    tab1(i) := 'Two' ; Printall ;
 27    -- Add two empty elements --
 28    tab1.EXTEND(2) ;
 29    i := i + 1 ;
 30    tab1(i) := 'Three' ;
 31    i := i + 1 ;
 32    tab1(i) := 'Four' ; Printall ;
 33    -- Add three elements with the same value as element 1 --
 34    tab1.EXTEND(3,1) ;
 35    i := i + 3 ; Printall ;
 36  End;
/
tab1.COUNT = 1
tab1(1) = One
* Print all collection *
tab1(1) = One
tab1(2) = Two
* Print all collection *
tab1(1) = One
tab1(2) = Two
tab1(3) = Three
tab1(4) = Four
* Print all collection *
tab1(1) = One
tab1(2) = Two
tab1(3) = Three
tab1(4) = Four

tab1(5) = One
tab1(6) = One
tab1(7) = One

PL/SQL procedure successfully completed.

6.7 TRIM[(n)]

Used to decrease the size of a collection.

TRIM removes one element from the end of a collection.
TRIM(n) removes n elements from the end of a collection.

SQL> Declare
  2    TYPE TYP_TAB is table of varchar2(100) ;
  3    tab TYP_TAB ;
  4  Begin
  5    tab := TYP_TAB( 'One','Two','Three' ) ;
  6    For i in tab.first..tab.last Loop
  7      dbms_output.put_line( 'tab(' || ltrim( to_char( i ) ) || ') = ' || tab(i) ) ;
  8    End loop ;
  9    -- add 3 elements with the second element value --
 10    dbms_output.put_line( '* add 3 elements *' ) ;
 11    tab.EXTEND(3,2) ;
 12    For i in tab.first..tab.last Loop
 13      dbms_output.put_line( 'tab(' || ltrim( to_char( i ) ) || ') = ' || tab(i) ) ;
 14    End loop ;
 15    -- suppress the last element --
 16    dbms_output.put_line( '* suppress the last element *' ) ;
 17    tab.TRIM ;
 18    For i in tab.first..tab.last Loop
 19      dbms_output.put_line( 'tab(' || ltrim( to_char( i ) ) || ') = ' || tab(i) ) ;
 20    End loop ;
 21  End;
 22  /
tab(1) = One
tab(2) = Two
tab(3) = Three
* add 3 elements *
tab(1) = One
tab(2) = Two
tab(3) = Three
tab(4) = Two
tab(5) = Two
tab(6) = Two
* suppress the last element *
tab(1) = One
tab(2) = Two
tab(3) = Three

tab(4) = Two
tab(5) = Two

PL/SQL procedure successfully completed.

If you try to suppress more elements than the collection contains, you get a SUBSCRIPT_BEYOND_COUNT exception.

6.8 DELETE[(m[,n])]

DELETE removes all elements from a collection.
DELETE(n) removes the nth element from an associative array with a numeric key or a nested table. If the associative array has a string key, the element corresponding to the key value is deleted. If n is null, DELETE(n) does nothing.
DELETE(m,n) removes all elements in the range m..n from an associative array or nested table. If m is larger than n, or if m or n is null, DELETE(m,n) does nothing.

Caution: LAST returns the greatest subscript of a collection and COUNT returns the number of elements of a collection. If you delete some elements, LAST != COUNT.

Suppression of all the elements

SQL> Declare
  2    TYPE TYP_TAB is table of varchar2(100) ;
  3    tab TYP_TAB ;
  4  Begin
  5    tab := TYP_TAB( 'One','Two','Three' ) ;
  6    dbms_output.put_line( 'Suppression of all elements' ) ;
  7    tab.DELETE ;
  8    dbms_output.put_line( 'tab.COUNT = ' || tab.COUNT) ;
  9  End;
 10  /
Suppression of all elements
tab.COUNT = 0

PL/SQL procedure successfully completed.

Suppression of the second element

SQL> Declare
  2    TYPE TYP_TAB is table of varchar2(100) ;
  3    tab TYP_TAB ;
  4  Begin
  5    tab := TYP_TAB( 'One','Two','Three' ) ;
  6    dbms_output.put_line( 'Suppression of the 2nd element' ) ;
  7    tab.DELETE(2) ;
  8    dbms_output.put_line( 'tab.COUNT = ' || tab.COUNT) ;
  9    dbms_output.put_line( 'tab.LAST = ' || tab.LAST) ;
 10    For i IN tab.FIRST .. tab.LAST Loop
 11      If tab.EXISTS(i) Then
 12        dbms_output.put_line( tab(i) ) ;
 13      End if ;
 14    End loop ;
 15  End;
 16  /
Suppression of the 2nd element
tab.COUNT = 2
tab.LAST = 3
One
Three

PL/SQL procedure successfully completed.

Caution: For varrays, you can suppress only the last element. If the element does not exist, no exception is raised.

Main collection examples

DECLARE
  TYPE NumList IS TABLE OF NUMBER;
  nums NumList; -- atomically null
BEGIN
  /* Assume execution continues despite the raised exceptions. */
  nums(1) := 1;          -- raises COLLECTION_IS_NULL (1)
  nums := NumList(1,2);  -- initialize table
  nums(NULL) := 3;       -- raises VALUE_ERROR (2)
  nums(0) := 3;          -- raises SUBSCRIPT_OUTSIDE_LIMIT (3)
  nums(3) := 3;          -- raises SUBSCRIPT_BEYOND_COUNT (4)
  nums.DELETE(1);        -- delete element 1
  IF nums(1) = 1 THEN ... -- raises NO_DATA_FOUND (5)

7. Multi-level Collections

A collection is a one-dimension table. You can have multi-dimension tables by creating a collection of collections.

SQL> Declare
  2    TYPE TYP_TAB is table of NUMBER index by PLS_INTEGER ;
  3    TYPE TYP_TAB_TAB is table of TYP_TAB index by PLS_INTEGER ;
  4    tab1 TYP_TAB_TAB ;
  5  Begin
  6    For i IN 1 .. 3 Loop
  7      For j IN 1 .. 2 Loop
  8        tab1(i)(j) := i + j ;
  9        dbms_output.put_line( 'tab1(' || ltrim(to_char(i))
 10          || ')(' || ltrim(to_char(j))
 11          || ') = ' || tab1(i)(j) ) ;
 12      End loop ;
 13    End loop ;
 14  End;
 15  /
tab1(1)(1) = 2
tab1(1)(2) = 3
tab1(2)(1) = 3
tab1(2)(2) = 4
tab1(3)(1) = 4
tab1(3)(2) = 5

PL/SQL procedure successfully completed.

Collections of records

SQL> Declare
  2    TYPE TYP_TAB is table of DEPT%ROWTYPE index by PLS_INTEGER ;
  3    tb_dept TYP_TAB ;
  4    rec DEPT%ROWTYPE ;
  5    Cursor CDEPT IS Select * From DEPT ;
  6  Begin
  7    Open CDEPT ;
  8    Loop
  9      Fetch CDEPT Into rec ;
 10      Exit When CDEPT%NOTFOUND ;
 11      tb_dept(CDEPT%ROWCOUNT) := rec ;
 12    End loop ;
 13    For i IN tb_dept.FIRST .. tb_dept.LAST Loop
 14      dbms_output.put_line( tb_dept(i).DNAME || ' - ' || tb_dept(i).LOC ) ;
 15    End loop ;
 16  End;
 17  /
ACCOUNTING - NEW YORK
RESEARCH - DALLAS
SALES - CHICAGO
OPERATIONS - BOSTON

PL/SQL procedure successfully completed.

8. Collections and database tables


Nested tables and varrays can be stored in a database column of a relational or object table. To manipulate a collection from SQL, you have to create the types in the database with the CREATE TYPE statement.

Nested tables

CREATE [OR REPLACE] TYPE [schema.]type_name { IS | AS } TABLE OF datatype;

Varrays

CREATE [OR REPLACE] TYPE [schema.]type_name { IS | AS } { VARRAY | VARYING ARRAY } ( limit ) OF datatype;

One or several collections can be stored in a database column. Let's see an example with a relational table. You want to make a table that stores the invoices and the current invoice lines of the company. You need to define the invoice line type as follows:

-- type of invoice line --
CREATE TYPE TYP_LIG_ENV AS OBJECT (
  lig_num  Integer,
  lig_code Varchar2(20),
  lig_Pht  Number(6,2),
  lig_Tva  Number(3,1),
  ligQty   Integer
);

-- nested table of invoice lines --
CREATE TYPE TYP_TAB_LIG_ENV AS TABLE OF TYP_LIG_ENV ;

Then create the invoice table as follows:

-- table of invoices --
CREATE TABLE INVOICE (
  inv_num    Number(9),
  inv_numcli Number(6),
  inv_date   Date,
  inv_line   TYP_TAB_LIG_ENV  -- lines collection
)
NESTED TABLE inv_line STORE AS inv_line_table ;


You can query the USER_TYPES view to get information on the types created in the database.

-- show all types --
SQL> select type_name, typecode, attributes from user_types
  2  /

TYPE_NAME            TYPECODE     ATTRIBUTES
-------------------- ----------- ----------
TYP_LIG_ENV          OBJECT               5
TYP_TAB_LIG_ENV      COLLECTION           0

You can query the USER_COLL_TYPES view to get information on the collections created in the database.

-- show collections --
SQL> select type_name, coll_type, elem_type_owner, elem_type_name from user_coll_types
  2  /

TYPE_NAME            COLL_TYPE   ELEM_TYPE_OWNER   ELEM_TYPE_NAME
-------------------- ----------- ----------------- ---------------
TYP_TAB_LIG_ENV      TABLE       TEST              TYP_LIG_ENV

You can query the USER_TYPE_ATTRS view to get information on the collection attributes.

-- show collection attributes --
SQL> select type_name, attr_name, attr_type_name, length, precision, scale, attr_no
  2  from user_type_attrs
  3  /

TYPE_NAME       ATTR_NAME       ATTR_TYPE_     LENGTH  PRECISION      SCALE    ATTR_NO
--------------- --------------- ---------- ---------- ---------- ---------- ----------
TYP_LIG_ENV     LIG_NUM         INTEGER                                              1
TYP_LIG_ENV     LIG_CODE        VARCHAR2           20                                2
TYP_LIG_ENV     LIG_PHT         NUMBER                         6          2          3
TYP_LIG_ENV     LIG_TVA         NUMBER                         3          1          4
TYP_LIG_ENV     LIGQTY          INTEGER                                              5

Constraints on the collection attributes

You can enforce constraints on each attribute of a collection:

-- constraints on collection attributes --
alter table inv_line_table
  add constraint lignum_notnull CHECK( lig_num IS NOT NULL ) ;

alter table inv_line_table
  add constraint ligcode_unique UNIQUE( lig_code ) ;

alter table inv_line_table
  add constraint ligtva_check CHECK( lig_tva IN( 5.0,19.6 ) ) ;

Constraints on the whole collection

-- constraints on the whole collection --
alter table invoice
  add constraint invoice_notnull CHECK( inv_line IS NOT NULL ) ;

Check the constraints

SQL> select constraint_name, constraint_type, table_name
  2  from user_constraints
  3  where table_name IN ('INVOICE','INV_LINE_TABLE')
  4  order by table_name
  5  /

CONSTRAINT_NAME                C TABLE_NAME
------------------------------ - ------------------------------
LIGNUM_NOTNULL                 C INV_LINE_TABLE
LIGCODE_UNIQUE                 U INV_LINE_TABLE
LIGTVA_CHECK                   C INV_LINE_TABLE
SYS_C0011658                   U INVOICE
INVOICE_NOTNULL                C INVOICE

SQL> select constraint_name, column_name, table_name
  2  from user_cons_columns
  3  where table_name IN ('INVOICE','INV_LINE_TABLE')
  4  order by table_name
  5  /

CONSTRAINT_NAME                COLUMN_NAME          TABLE_NAME
------------------------------ -------------------- ------------------
LIGNUM_NOTNULL                 LIG_NUM              INV_LINE_TABLE
LIGCODE_UNIQUE                 LIG_CODE             INV_LINE_TABLE
LIGTVA_CHECK                   LIG_TVA              INV_LINE_TABLE
SYS_C0011658                   SYS_NC0000400005$    INVOICE
INVOICE_NOTNULL                SYS_NC0000400005$    INVOICE
INVOICE_NOTNULL                INV_LINE             INVOICE

6 rows selected.

8.1 Insertion

Add a line in the INVOICE table

Use the INSERT statement with all the constructors needed for the collection:

SQL> INSERT INTO INVOICE
  2  VALUES
  3  (
  4    1
  5    ,1000
  6    ,SYSDATE
  7    , TYP_TAB_LIG_ENV -- table of objects constructor
  8      (
  9        TYP_LIG_ENV( 1 ,'COD_01', 1000, 5.0, 1 ) -- object constructor
 10      )
 11  )
 12  /

1 row created.

Add a line to the collection

Use the INSERT INTO TABLE statement:

INSERT INTO TABLE
  ( SELECT the_collection FROM the_table WHERE ... )
  ...

The sub-query must return a single collection row.

SQL> INSERT INTO TABLE (SELECT inv_line FROM INVOICE WHERE inv_num = 1)
  2  VALUES( TYP_LIG_ENV( 2 ,'COD_02', 50, 5.0, 10 ) )
  3  /

1 row created.

Multiple inserts

You can add more than one element to a collection by using a SELECT statement instead of the VALUES keyword.

INSERT INTO TABLE (SELECT inv_line FROM INVOICE WHERE inv_num = 1)
SELECT nt.* FROM TABLE (SELECT inv_line FROM INVOICE WHERE inv_num = 1) nt
/

Update

Nested table

Use the UPDATE TABLE statement:

UPDATE TABLE
  ( SELECT the_collection FROM the_table WHERE ... ) alias
SET alias.col_name = ...
WHERE ...

The sub-query must return a single collection row.

Update a single row of the collection

SQL> UPDATE TABLE (SELECT inv_line FROM INVOICE WHERE inv_num = 1) nt
  2  SET nt.ligqty = 10
  3  WHERE nt.lig_num = 1
  4  /

1 row updated.

Update all the rows of the collection

SQL> UPDATE TABLE (SELECT inv_line FROM INVOICE WHERE inv_num = 1) nt
  2  SET nt.lig_pht = nt.lig_pht * .1
  3  /

2 rows updated.

VARRAY

It is not possible to update one element of a VARRAY collection with SQL. You cannot use the TABLE keyword for this purpose (because varrays are not stored in a particular table like nested tables). So, a single VARRAY element of a collection must be updated within a PL/SQL block:

-- varray of invoice lines --
CREATE TYPE TYP_VAR_LIG_ENV AS VARRAY(5) OF TYP_LIG_ENV ;

-- table of invoices with varray --
CREATE TABLE INVOICE_V (
  inv_num    Number(9),
  inv_numcli Number(6),
  inv_date   Date,
  inv_line   TYP_VAR_LIG_ENV
) ;

-- insert a row --
Insert into INVOICE_V Values (
  1, 1000, SYSDATE,
  TYP_VAR_LIG_ENV (
    TYP_LIG_ENV( 1, 'COD_01', 1000, 5, 1 ),
    TYP_LIG_ENV( 2, 'COD_02', 500, 5, 10 ),
    TYP_LIG_ENV( 3, 'COD_03', 10, 5, 100 )
  )
);

SQL> -- Query the varray collection --
SQL> Declare
  2    v_table TYP_VAR_LIG_ENV ;
  3    LC$Head Varchar2(200) ;
  4    LC$Lig Varchar2(200) ;
  5  Begin
  6    LC$Head := 'Num Code Pht Tva Qty' ;
  7    Select inv_line Into v_table From INVOICE_V Where inv_num = 1 For Update of inv_line ;
  8    dbms_output.put_line ( LC$Head ) ;
  9    For i IN v_table.FIRST .. v_table.LAST Loop
 10      LC$Lig := Rpad(To_char( v_table(i).lig_num ),3) || ' '
 11        || Rpad(v_table(i).lig_code, 10) || ' '
 12        || Rpad(v_table(i).lig_pht,10) || ' '
 13        || Rpad(v_table(i).lig_tva,10) || ' '
 14        || v_table(i).ligqty ;
 15      dbms_output.put_line( LC$Lig ) ;
 16    End loop ;
 17  End ;
 18  /
Num Code Pht Tva Qty
1   COD_01     1000       5          1
2   COD_02     500        5          10
3   COD_03     10         5          100

PL/SQL procedure successfully completed.

Update the second line of the varray to change the quantity:

SQL> Declare
  2    v_table TYP_VAR_LIG_ENV ;
  3  Begin
  4    Select inv_line
  5    Into v_table
  6    From INVOICE_V
  7    Where inv_num = 1
  8    For Update of inv_line ;
  9    v_table(2).ligqty := 2 ; -- update the second element
 10    Update INVOICE_V Set inv_line = v_table Where inv_num = 1 ;
 11  End ;
 12  /

PL/SQL procedure successfully completed.

Display the new varray:

SQL> Declare
  2    v_table TYP_VAR_LIG_ENV ;
  3    LC$Head Varchar2(200) ;
  4    LC$Lig Varchar2(200) ;
  5  Begin
  6    LC$Head := 'Num Code Pht Tva Qty' ;
  7    Select inv_line Into v_table From INVOICE_V Where inv_num = 1 For Update of inv_line ;
  8    dbms_output.put_line ( LC$Head ) ;
  9    For i IN v_table.FIRST .. v_table.LAST Loop
 10      LC$Lig := Rpad(To_char( v_table(i).lig_num ),3) || ' '
 11        || Rpad(v_table(i).lig_code, 10) || ' '
 12        || Rpad(v_table(i).lig_pht,10) || ' '
 13        || Rpad(v_table(i).lig_tva,10) || ' '
 14        || v_table(i).ligqty ;
 15      dbms_output.put_line( LC$Lig ) ;
 16    End loop ;
 17  End ;
 18  /
Num Code Pht Tva Qty
1   COD_01     1000       5          1
2   COD_02     500        5          2
3   COD_03     10         5          100

PL/SQL procedure successfully completed.

DELETE

Nested table

Use the DELETE FROM TABLE statement.

Delete a single collection row

DELETE FROM TABLE
  ( SELECT the_collection FROM the_table WHERE ... ) alias
WHERE alias.col_name = ...

SQL> DELETE FROM TABLE (SELECT inv_line FROM INVOICE WHERE inv_num = 1) nt
  2  WHERE nt.lig_num = 2
  3  /

1 row deleted.

Delete all the collection rows

SQL> DELETE FROM TABLE (SELECT inv_line FROM INVOICE WHERE inv_num = 1) nt
  2  /

1 row deleted.

Use of a PL/SQL record to handle the whole structure

SQL> Declare
  2    TYPE TYP_REC IS RECORD
  3    (
  4      inv_num    INVOICE.inv_num%Type,
  5      inv_numcli INVOICE.inv_numcli%Type,
  6      inv_date   INVOICE.inv_date%Type,
  7      inv_line   INVOICE.inv_line%Type -- collection line
  8    );
  9    rec_inv TYP_REC ;
 10    Cursor C_INV IS Select * From INVOICE ;
 11  Begin
 12    Open C_INV ;
 13    Loop
 14      Fetch C_INV into rec_inv ;
 15      Exit when C_INV%NOTFOUND ;
 16      For i IN 1 .. rec_inv.inv_line.LAST Loop -- loop through the collection lines
 17        dbms_output.put_line( 'Numcli/Date ' || rec_inv.inv_numcli || '/' || rec_inv.inv_date
 18          || ' Line ' || rec_inv.inv_line(i).lig_num
 19          || ' code ' || rec_inv.inv_line(i).lig_code || ' Qty '
 20          || To_char(rec_inv.inv_line(i).ligqty) ) ;
 21      End loop ;
 22    End loop ;
 23  End ;
 24  /
Numcli/Date 1000/11/11/05 Line 1 code COD_01 Qty 1
Numcli/Date 1000/11/11/05 Line 2 code COD_02 Qty 10

PL/SQL procedure successfully completed.

Varray

Varrays are more complicated to handle. It is not possible to delete a single element of a varray collection. To do the job, you need a PL/SQL block and a temporary varray that keeps only the lines that are not deleted.

SQL> Declare
  2    v_table TYP_VAR_LIG_ENV ;
  3    v_tmp v_table%Type := TYP_VAR_LIG_ENV() ;
  4    ind pls_integer := 1 ;
  5  Begin
  6    -- select the collection --
  7    Select inv_line
  8    Into v_table
  9    From INVOICE_V
 10    Where inv_num = 1
 11    For Update of inv_line ;
 12    -- Extend the temporary varray --
 13    v_tmp.EXTEND(v_table.LIMIT) ;
 14    For i IN v_table.FIRST .. v_table.LAST Loop
 15      If v_table(i).lig_num <> 2 Then
 16        v_tmp(ind) := v_table(i) ; ind := ind + 1 ;
 17      End if ;
 18    End loop ;
 19
 20    Update INVOICE_V Set inv_line = v_tmp Where inv_num = 1 ;
 21  End ;
 22  /

PL/SQL procedure successfully completed.

Display the new collection:

SQL> Declare
  2    v_table TYP_VAR_LIG_ENV ;
  3    LC$Head Varchar2(200) ;
  4    LC$Lig Varchar2(200) ;
  5  Begin
  6    LC$Head := 'Num Code Pht Tva Qty' ;
  7    Select inv_line Into v_table From INVOICE_V Where inv_num = 1 For Update of inv_line ;
  8    dbms_output.put_line ( LC$Head ) ;
  9    For i IN v_table.FIRST .. v_table.LAST Loop
 10      LC$Lig := Rpad(To_char( v_table(i).lig_num ),3) || ' '
 11        || Rpad(v_table(i).lig_code, 10) || ' '
 12        || Rpad(v_table(i).lig_pht,10) || ' '
 13        || Rpad(v_table(i).lig_tva,10) || ' '
 14        || v_table(i).ligqty ;
 15      dbms_output.put_line( LC$Lig ) ;
 16    End loop ;
 17  End ;
 18  /
Num Code Pht Tva Qty
1   COD_01     1000       5          1
3   COD_03     10         5          100

PL/SQL procedure successfully completed.

The second line of the varray has been deleted. Here is a procedure that does the job with any varray collection:

CREATE OR REPLACE PROCEDURE DEL_ELEM_VARRAY (
  PC$Table in Varchar2,  -- Main table name
  PC$Pk    in Varchar2,  -- PK to identify the main table row
  PC$Type  in Varchar2,  -- Varray TYPE
  PC$Coll  in Varchar2,  -- Varray column name
  PC$Index in Varchar2,  -- value of PK
  PC$Col   in Varchar2,  -- Varray column
  PC$Value in Varchar2   -- Varray column value to delete
) IS
  LC$Req Varchar2(2000);
Begin
  LC$Req := 'Declare'
    || ' v_table ' || PC$Type || ';'
    || ' v_tmp v_table%Type := ' || PC$Type || '() ;'
    || ' ind pls_integer := 1 ;'
    || ' Begin'
    || ' Select ' || PC$Coll || ' Into v_table'
    || ' From ' || PC$Table
    || ' Where ' || PC$Pk || '=''' || PC$Index || ''''
    || ' For Update of ' || PC$Coll || ';'
    || ' v_tmp.EXTEND(v_table.LIMIT) ;'
    || ' For i IN v_table.FIRST .. v_table.LAST Loop'
    || ' If v_table(i).' || PC$Col || '<>''' || PC$Value || ''' Then'
    || ' v_tmp(ind) := v_table(i) ; ind := ind + 1 ;'
    || ' End if ;'
    || ' End loop ;'
    || ' Update ' || PC$Table || ' Set ' || PC$Coll || ' = v_tmp Where ' || PC$Pk || '=''' || PC$Index || ''';'
    || ' End;' ;
  Execute immediate LC$Req ;
End ;
/

Let's delete the third element of the varray:

SQL> Begin
  2    DEL_ELEM_VARRAY
  3    (
  4      'INVOICE_V',
  5      'inv_num',
  6      'TYP_VAR_LIG_ENV',
  7      'inv_line',
  8      '1',
  9      'lig_num',
 10      '3'
 11    );
 12  End ;
 13  /

PL/SQL procedure successfully completed.

BULK BIND AND BULK COLLECT WITH BULK EXCEPTIONS


1. BULK COLLECT

This keyword asks the SQL engine to return all the rows in one or several collections before returning to the PL/SQL engine, so there is one single round-trip between the SQL and PL/SQL engines for all the rows. It can be used in SELECT INTO, FETCH INTO, and RETURNING INTO clauses, including with EXECUTE IMMEDIATE:

... BULK COLLECT Into collection_name [, collection_name, ...] [LIMIT max_lines] ;

LIMIT is used to limit the number of rows returned.

SQL> set serveroutput on
SQL> Declare
  2    TYPE TYP_TAB_EMP IS TABLE OF EMP.EMPNO%Type ;
  3    Temp_no TYP_TAB_EMP ; -- collection of EMP.EMPNO%Type
  4    Cursor C_EMP is Select empno From EMP ;
  5    Pass Pls_integer := 1 ;
  6  Begin
  7    Open C_EMP ;
  8    Loop
  9      -- Fetch the table 3 by 3 --
 10      Fetch C_EMP BULK COLLECT into Temp_no LIMIT 3 ;
 11      Exit When C_EMP%NOTFOUND ;
 12      For i In Temp_no.first..Temp_no.last Loop
 13        dbms_output.put_line( 'Pass ' || to_char(Pass) || ' Empno= ' || Temp_no(i) ) ;
 14      End loop ;
 15      Pass := Pass + 1 ;
 16    End Loop ;
 17  End ;
 18  /
Pass 1 Empno= 9999
Pass 1 Empno= 7369
Pass 1 Empno= 7499
Pass 2 Empno= 7521
Pass 2 Empno= 7566
Pass 2 Empno= 7654
Pass 3 Empno= 7698
Pass 3 Empno= 7782
Pass 3 Empno= 7788
Pass 4 Empno= 7839
Pass 4 Empno= 7844
Pass 4 Empno= 7876
Pass 5 Empno= 7900
Pass 5 Empno= 7902
Pass 5 Empno= 7934

PL/SQL procedure successfully completed.

You can use the LIMIT keyword to preserve your rollback segment:

Declare
  TYPE TYP_TAB_EMP IS TABLE OF EMP.EMPNO%Type ;
  Temp_no TYP_TAB_EMP ;
  Cursor C_EMP is Select empno From EMP ;
  max_lig Pls_Integer := 3 ;
Begin
  Open C_EMP ;
  Loop
    Fetch C_EMP BULK COLLECT into Temp_no LIMIT max_lig ;
    Forall i In Temp_no.first..Temp_no.last
      Update EMP set SAL = Round(SAL * 1.1) Where empno = Temp_no(i) ;
    Commit ; -- Commit every 3 rows
    Temp_no.DELETE ;
    Exit When C_EMP%NOTFOUND ;
  End Loop ;
End ;

BULK COLLECT can also be used to retrieve the result of a DML statement that uses the RETURNING INTO clause:

SQL> Declare
  2    TYPE TYP_TAB_EMPNO IS TABLE OF EMP.EMPNO%Type ;
  3    TYPE TYP_TAB_NOM IS TABLE OF EMP.ENAME%Type ;
  4    Temp_no TYP_TAB_EMPNO ;
  5    Tnoms TYP_TAB_NOM ;
  6  Begin
  7    -- Delete rows and return the result into the collections --
  8    Delete From EMP where sal > 3000
  9    RETURNING empno, ename BULK COLLECT INTO Temp_no, Tnoms ;
 10    For i in Temp_no.first..Temp_no.last Loop
 11      dbms_output.put_line( 'Fired employee : ' || To_char( Temp_no(i) ) || ' ' || Tnoms(i) ) ;
 12    End loop ;
 13  End ;
 14  /
Fired employee : 7839 KING

PL/SQL procedure successfully completed.

2. BULK BIND (FORALL)

FORALL index IN min_index .. max_index [SAVE EXCEPTIONS] sql_order

This instruction allows processing all the rows of a collection in a single pass:

SQL> Declare
  2    TYPE TYP_TAB_TEST IS TABLE OF TEST%ROWTYPE ;
  3    tabrec TYP_TAB_TEST ;
  4    CURSOR C_test is select A, B From TEST ;
  5  Begin
  6    -- Load the collection from the table --
  7    Select A, B BULK COLLECT into tabrec From TEST ;
  8
  9    -- Insert into the table from the collection --
 10    Forall i in tabrec.first..tabrec.last
 11      Insert into TEST values tabrec(i) ;
 12
 13    -- Update the collection values --
 14    For i in tabrec.first..tabrec.last Loop
 15      tabrec(i).B := tabrec(i).B * 2 ;
 16    End loop ;
 17
 18    -- Use of cursor --
 19    Open C_test ;
 20    Fetch C_test BULK COLLECT Into tabrec ;
 21    Close C_test ;
 22
 23  End ;
 24  /

Implementation and Restrictions

It is not allowed to use the FORALL statement with an UPDATE order that uses the SET ROW functionality:

SQL> Declare
  2    TYPE TAB_EMP is table of EMP%ROWTYPE ;
  3    emp_tab TAB_EMP ;
  4    Cursor CEMP is Select * From EMP ;
  5  Begin
  6    Open CEMP;
  7    Fetch CEMP BULK COLLECT Into emp_tab ;
  8    Close CEMP ;
  9
 10    Forall i in emp_tab.first..emp_tab.last
 11      Update EMP set row = emp_tab(i) where EMPNO = emp_tab(i).EMPNO ; -- ILLEGAL
 12
 13  End ;
 14  /
Update EMP set row = emp_tab(i) where EMPNO = emp_tab(i).EMPNO ; -- ILLEGAL
*
ERROR at line 11:
ORA-06550: line 11, column 52:
PLS-00436: implementation restriction: cannot reference fields of BULK In-BIND table of records

You have to use a standard FOR LOOP statement:

For i in emp_tab.first..emp_tab.last loop
  Update EMP set row = emp_tab(i) where EMPNO = emp_tab(i).EMPNO ;
End loop ;

Or use simple collections:

Declare
  TYPE TAB_EMPNO is table of EMP.EMPNO%TYPE ;
  TYPE TAB_EMPNAME is table of EMP.ENAME%TYPE ;
  no_tab TAB_EMPNO ;
  na_tab TAB_EMPNAME ;
  Cursor CEMP is Select EMPNO, ENAME From EMP ;
Begin
  Open CEMP;
  Fetch CEMP BULK COLLECT Into no_tab, na_tab ;
  Close CEMP ;

  Forall i in no_tab.first..no_tab.last
    Update EMP set ENAME = na_tab(i) where EMPNO = no_tab(i) ;
End ;

FORALL and Exceptions

If an error is raised by the FORALL statement, all the rows processed are rolled back. You can save the rows that raised an error (and not abort the process) with the SAVE EXCEPTIONS keyword. Every exception raised during execution is stored in the SQL%BULK_EXCEPTIONS collection. This is a collection of records composed of two attributes:

SQL%BULK_EXCEPTIONS(n).ERROR_INDEX, which contains the index number
SQL%BULK_EXCEPTIONS(n).ERROR_CODE, which contains the error code

The total number of errors raised by the FORALL instruction is stored in the SQL%BULK_EXCEPTIONS.COUNT attribute.

SQL> Declare
  2    TYPE TYP_TAB IS TABLE OF Number ;
  3    tab TYP_TAB := TYP_TAB( 2, 0, 1, 3, 0, 4, 5 ) ;
  4    nb_err Pls_integer ;
  5  Begin
  6    Forall i in tab.first..tab.last SAVE EXCEPTIONS
  7      Delete from EMP where SAL = 5 / tab(i) ;
  8  Exception
  9    When others then
 10      nb_err := SQL%BULK_EXCEPTIONS.COUNT ;
 11      dbms_output.put_line( to_char( nb_err ) || ' Errors ' ) ;
 12      For i in 1..nb_err Loop
 13        dbms_output.put_line( 'Index ' || to_char( SQL%BULK_EXCEPTIONS(i).ERROR_INDEX ) || ' Error : '
 14          || to_char( SQL%BULK_EXCEPTIONS(i).ERROR_CODE ) ) ;
 15      End loop ;
 16  End ;
 17  /
2 Errors
Index 2 Error : 1476
Index 5 Error : 1476

PL/SQL procedure successfully completed.

The %BULK_ROWCOUNT attribute

This is an index-by table that contains, for each SQL order, the number of rows processed. If no row is impacted, SQL%BULK_ROWCOUNT(n) equals 0.

SQL> Declare
  2    TYPE TYP_TAB_TEST IS TABLE OF TEST%ROWTYPE ;
  3    TYPE TYP_TAB_A IS TABLE OF TEST.A%TYPE ;
  4    TYPE TYP_TAB_B IS TABLE OF TEST.B%TYPE ;
  5    tabrec TYP_TAB_TEST ;
  6    taba TYP_TAB_A ;
  7    tabb TYP_TAB_B ;
  8    total Pls_integer := 0 ;
  9    CURSOR C_test is select A, B From TEST ;
 10  begin
 11    -- Load the collection from the table --
 12    Select A, B BULK COLLECT into tabrec From TEST ;
 13
 14    -- Insert rows --
 15    Forall i in tabrec.first..tabrec.last
 16      insert into TEST values tabrec(i) ;
 17
 18    For i in tabrec.first..tabrec.last Loop
 19      total := total + SQL%BULK_ROWCOUNT(i) ;
 20    End loop ;
 21
 22    dbms_output.put_line('Total insert : ' || to_char( total) ) ;
 23
 24    total := 0 ;
 25    -- Update rows --
 26    For i in tabrec.first..tabrec.last loop
 27      update TEST set row = tabrec(i) where A = tabrec(i).A ;
 28    End loop ;
 29
 30    For i in tabrec.first..tabrec.last Loop
 31      total := total + SQL%BULK_ROWCOUNT(i) ;
 32    End loop ;
 33
 34    dbms_output.put_line('Total update : ' || to_char( total) ) ;
 35
 36  End ;
 37  /
Total insert : 20
Total update : 20

PL/SQL procedure successfully completed.
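A related shortcut (a minimal hedged sketch, reusing tabrec from the example above): after a FORALL, SQL%ROWCOUNT holds the total number of rows processed by all iterations combined, so when only the total is needed the summing loop can be dropped:

Forall i in tabrec.first..tabrec.last
  insert into TEST values tabrec(i) ;
-- SQL%ROWCOUNT = sum of SQL%BULK_ROWCOUNT(i) over all iterations
dbms_output.put_line( 'Total insert : ' || SQL%ROWCOUNT ) ;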


TRIGGERS
Triggers are named PL/SQL blocks that are fired implicitly when the triggering event happens. There are 3 types of triggers (not covered here in detail):
1) DML triggers
2) Instead-of triggers
3) System triggers

Mutating Trigger error

Mutating means changing; the mutating trigger error says that the table is changing and that it is not allowed to read or modify the changing table. Mutating table exceptions occur when we try to reference the triggering table in a query from within row-level trigger code. This happens in two scenarios:
1) When we try to read or modify the triggering table inside a row-level trigger.
2) When we try to read or modify the key columns of a constraining table of the triggering table inside a row-level trigger.

Resolution to Mutating Trigger error

Don't use triggers - The best way to avoid the mutating table error is not to use triggers. While the object-oriented Oracle provides "methods" that are associated with tables, most savvy PL/SQL developers avoid triggers unless absolutely necessary.

Use an "after" trigger - If you must use a trigger, it's best to avoid the mutating table error by using an "after" trigger, to avoid the currency issues associated with a mutating table. For example, with a trigger "after update on xxx", the original update has completed and the table will not be mutating.

Re-work the trigger syntax - There are some other ways to avoid mutating tables with a combination of row-level and statement-level triggers. See the example below.

Use autonomous transactions - You can avoid the mutating table error by marking your trigger as an autonomous transaction, making it independent from the table that calls the procedure. It can be used only for reading the mutating table inside a trigger; a minimal sketch follows.
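A hedged sketch of the autonomous-transaction approach (trigger name is illustrative, using the TAB1 table of the example below): the trigger reads the table from an independent transaction, so no mutating-table error is raised, but it sees only the committed (pre-statement) data:

CREATE OR REPLACE TRIGGER tab1_read_trg
AFTER INSERT ON tab1
FOR EACH ROW
DECLARE
  PRAGMA AUTONOMOUS_TRANSACTION;
  l_count NUMBER;
BEGIN
  -- legal here: the autonomous transaction is independent of the
  -- transaction that is mutating TAB1, so ORA-04091 is not raised
  SELECT COUNT(*) INTO l_count FROM tab1;
  dbms_output.put_line('Committed rows before this statement: ' || l_count);
END;
/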


Example problem: Let's assume we need to audit the actions on a table and record the total number of records when the audit record was created. We might have a schema like:

CREATE TABLE tab1 (
  id          NUMBER(10)   NOT NULL,
  description VARCHAR2(50) NOT NULL
);

ALTER TABLE tab1 ADD (
  CONSTRAINT tab1_pk PRIMARY KEY (id)
);

CREATE SEQUENCE tab1_seq;

CREATE TABLE tab1_audit (
  id           NUMBER(10)   NOT NULL,
  action       VARCHAR2(10) NOT NULL,
  tab1_id      NUMBER(10),
  record_count NUMBER(10),
  created_time TIMESTAMP
);

ALTER TABLE tab1_audit ADD (
  CONSTRAINT tab1_audit_pk PRIMARY KEY (id)
);

ALTER TABLE tab1_audit ADD (
  CONSTRAINT tab1_audit_tab1_fk FOREIGN KEY (tab1_id) REFERENCES tab1(id)
);

CREATE SEQUENCE tab1_audit_seq;

Following best practices we place all our trigger code into a package as follows:

CREATE OR REPLACE PACKAGE trigger_api AS
  PROCEDURE tab1_row_change (p_id     IN tab1.id%TYPE,
                             p_action IN VARCHAR2);
END trigger_api;
/

CREATE OR REPLACE PACKAGE BODY trigger_api AS

  PROCEDURE tab1_row_change (p_id     IN tab1.id%TYPE,
                             p_action IN VARCHAR2) IS
    l_count NUMBER(10);
  BEGIN
    SELECT COUNT(*)
    INTO l_count
    FROM tab1;

    INSERT INTO tab1_audit (id, action, tab1_id, record_count, created_time)
    VALUES (tab1_audit_seq.NEXTVAL, p_action, p_id, l_count, SYSTIMESTAMP);
  END tab1_row_change;

END trigger_api;
/
SHOW ERRORS

Next we create the row-level trigger itself to catch any changes to the table:

CREATE OR REPLACE TRIGGER tab1_ariu_trg
AFTER INSERT OR UPDATE ON tab1
FOR EACH ROW
BEGIN
  IF inserting THEN
    trigger_api.tab1_row_change(p_id => :new.id, p_action => 'INSERT');
  ELSE
    trigger_api.tab1_row_change(p_id => :new.id, p_action => 'UPDATE');
  END IF;
END;
/
SHOW ERRORS

If we try to insert into the TAB1 table we might expect the insert to complete and the audit record to be created, but as you can see below this is not the case:

SQL> INSERT INTO tab1 (id, description) VALUES (tab1_seq.NEXTVAL, 'ONE');
INSERT INTO tab1 (id, description) VALUES (tab1_seq.NEXTVAL, 'ONE')
*
ERROR at line 1:
ORA-04091: table TIM_HALL.TAB1 is mutating, trigger/function may not see it
ORA-06512: at "TIM_HALL.TRIGGER_API", line 7
ORA-06512: at "TIM_HALL.TAB1_ARIU_TRG", line 3
ORA-04088: error during execution of trigger 'TIM_HALL.TAB1_ARIU_TRG'

We can get round this issue by using a combination of row-level and statement-level triggers. First we alter the TRIGGER_API package to store any data passed by the row-level trigger in a PL/SQL table. We also add a new statement-level procedure to process each of the rows in the PL/SQL table:

CREATE OR REPLACE PACKAGE trigger_api AS
  PROCEDURE tab1_row_change (p_id     IN tab1.id%TYPE,
                             p_action IN VARCHAR2);
  PROCEDURE tab1_statement_change;
END trigger_api;
/
SHOW ERRORS

CREATE OR REPLACE PACKAGE BODY trigger_api AS

  TYPE t_change_rec IS RECORD (
    id     tab1.id%TYPE,
    action tab1_audit.action%TYPE
  );

  TYPE t_change_tab IS TABLE OF t_change_rec;

  g_change_tab t_change_tab := t_change_tab();

  PROCEDURE tab1_row_change (p_id     IN tab1.id%TYPE,
                             p_action IN VARCHAR2) IS
  BEGIN
    g_change_tab.extend;
    g_change_tab(g_change_tab.last).id     := p_id;
    g_change_tab(g_change_tab.last).action := p_action;
  END tab1_row_change;

  PROCEDURE tab1_statement_change IS
    l_count NUMBER(10);
  BEGIN
    FOR i IN g_change_tab.first .. g_change_tab.last LOOP
      SELECT COUNT(*)
      INTO l_count
      FROM tab1;

      INSERT INTO tab1_audit (id, action, tab1_id, record_count, created_time)
      VALUES (tab1_audit_seq.NEXTVAL, g_change_tab(i).action, g_change_tab(i).id, l_count, SYSTIMESTAMP);
    END LOOP;
    g_change_tab.delete;
  END tab1_statement_change;

END trigger_api;
/
SHOW ERRORS

Our existing row-level trigger is fine, but we need to create a statement-level trigger to call our new procedure:

CREATE OR REPLACE TRIGGER tab1_asiu_trg
AFTER INSERT OR UPDATE ON tab1
BEGIN
  trigger_api.tab1_statement_change;
END;
/
SHOW ERRORS

The TAB1 inserts/updates will now work without mutation errors:

SQL> INSERT INTO tab1 (id, description) VALUES (tab1_seq.NEXTVAL, 'ONE');

1 row created.

SQL> INSERT INTO tab1 (id, description) VALUES (tab1_seq.NEXTVAL, 'TWO');

1 row created.

SQL> UPDATE tab1 SET description = description;

2 rows updated.

SQL> SELECT * FROM tab1;

        ID DESCRIPTION
---------- -----------
         2 ONE
         3 TWO

2 rows selected.

SQL> SELECT * FROM tab1_audit;

        ID ACTION        TAB1_ID RECORD_COUNT CREATED_TIME
---------- ---------- ---------- ------------ -------------------------
         1 INSERT              2            1 03-DEC-03 14.42.47.515589
         2 INSERT              3            2 03-DEC-03 14.42.47.600550
         3 UPDATE              2            2 03-DEC-03 14.42.49.178678
         4 UPDATE              3            2 03-DEC-03 14.42.49.179655

4 rows selected.

PRAGMA
Overview: PRAGMA signifies that the statement is a compiler directive. Pragmas are processed at compile time, not at run time; they pass information to the compiler. A pragma is an instruction to the Oracle compiler that tells it to do something. For example, you can tell Oracle to associate an error number that you choose with an exception variable that you choose. A pragma must go in the declarative section of your block.


Types of PRAGMA
1. AUTONOMOUS_TRANSACTION
2. EXCEPTION_INIT
3. RESTRICT_REFERENCES
4. SERIALLY_REUSABLE

Details:

1. AUTONOMOUS_TRANSACTION

An autonomous transaction is an independent transaction that is initiated by another transaction (the parent transaction). An autonomous transaction can modify data and commit or rollback, independent of the state of the parent transaction. The autonomous transaction must commit or roll back before the autonomous transaction ends and the parent transaction continues.

An autonomous transaction is defined in the declaration of a PL/SQL block. This can be an anonymous block, function, procedure, object method or trigger. This is done by adding the statement 'PRAGMA AUTONOMOUS_TRANSACTION;' anywhere in the declaration block. There isn't much involved in defining a PL/SQL block as an autonomous transaction. You simply include the following statement in your declaration section:

PRAGMA AUTONOMOUS_TRANSACTION;

Sample code:

PROCEDURE test_autonomous IS
  PRAGMA AUTONOMOUS_TRANSACTION;
BEGIN
  insert ....
  commit;
END test_autonomous;

2. EXCEPTION_INIT

The pragma EXCEPTION_INIT associates an exception name with an unnamed Oracle error number (for example, the missed not-null column value error has number 1400, but there is no name associated with this error). You can intercept any ORA- error and write a specific handler for it instead of using the OTHERS handler.

Usage Notes:

You can use EXCEPTION_INIT in the declarative part of any PL/SQL block, subprogram, or package. The pragma must appear in the same declarative part as its associated exception, somewhere after the exception declaration. Be sure to assign only one exception name to an error number.

Example:

DECLARE
  deadlock_detected EXCEPTION;
  PRAGMA EXCEPTION_INIT(deadlock_detected, -60);
BEGIN
  NULL; -- Some operation that causes an ORA-00060 error
EXCEPTION
  WHEN deadlock_detected THEN
    NULL; -- handle the error
END;

3. RESTRICT_REFERENCES

To be callable from SQL statements, a stored function must obey certain "purity" rules, which control side-effects. The same rules that apply to the function itself also apply to any functions or procedures that it calls. If any SQL statement inside the function body violates a rule, you get an error at run time (when the statement is parsed). To check for violations of the rules at compile time, you can use the compiler directive PRAGMA RESTRICT_REFERENCES. This pragma asserts that a function does not read and/or write database tables and/or package variables. Functions that do any of these read or write operations are difficult to optimize, because any call might produce different results or encounter errors.

Keyword and Parameter Description

DEFAULT - Specifies that the pragma applies to all subprograms in the package spec or object type spec. You can still declare the pragma for individual subprograms; such pragmas override the default pragma.

RNDS - Asserts that the subprogram reads no database state (does not query database tables).

RNPS - Asserts that the subprogram reads no package state (does not reference the values of packaged variables).

TRUST - Asserts that the subprogram can be trusted not to violate one or more rules. This value is needed for functions written in C or Java that are called from PL/SQL, since PL/SQL cannot verify them at run time.

WNDS - Asserts that the subprogram writes no database state (does not modify database tables).

WNPS - Asserts that the subprogram writes no package state (does not change the values of packaged variables).

Usage Notes:

You can declare the pragma RESTRICT_REFERENCES only in a package spec or object type spec. You can specify up to four constraints (RNDS, RNPS, WNDS, WNPS) in any order. To call a function from parallel queries, you must specify all four constraints. No constraint implies another.

When you specify TRUST, the function body is not checked for violations of the constraints listed in the pragma. The function is trusted not to violate them. Skipping these checks can improve performance.

If you specify DEFAULT instead of a subprogram name, the pragma applies to all subprograms in the package spec or object type spec (including the system-defined constructor for object types). You can still declare the pragma for individual subprograms, overriding the default pragma.

A RESTRICT_REFERENCES pragma can apply to only one subprogram declaration. A pragma that references the name of overloaded subprograms always applies to the most recent subprogram declaration.

Typically, you only specify this pragma for functions. If a function calls procedures, then you need to specify the pragma for those procedures as well.

Examples:

This example asserts that the function BALANCE writes no database state (WNDS) and reads no package state (RNPS). That is, it does not issue any DDL or DML statements and does not refer to any package variables, and neither do any procedures or functions that it calls. It might issue queries or assign values to package variables.

CREATE PACKAGE loans AS
  FUNCTION balance(account NUMBER) RETURN NUMBER;
  PRAGMA RESTRICT_REFERENCES (balance, WNDS, RNPS);
END loans;

Another example:

CREATE PACKAGE emp1 AS
  FUNCTION abc(p_empno NUMBER) RETURN NUMBER;
  PRAGMA RESTRICT_REFERENCES (abc, WNDS, RNPS);
END emp1;

CREATE or replace PACKAGE body emp1 AS
  FUNCTION abc(p_empno NUMBER) RETURN NUMBER is
  begin
    insert into employee (empno) values(p_empno); -- should not be there
    return 1;
  end abc;
END emp1;

The above body will give a compilation error, as it violates the rule of the pragma.

4. SERIALLY_REUSABLE

The pragma SERIALLY_REUSABLE indicates that the package state is needed only for the duration of one call to the server. After this call, the storage for the package variables can be reused, reducing the memory overhead for long-running sessions.

Usage Notes:

This pragma is appropriate for packages that declare large temporary work areas that are used once and not needed during subsequent database calls in the same session. You can mark a bodiless package as serially reusable. If a package has a spec and body, you must mark both; you cannot mark only the body.

The global memory for serially reusable packages is pooled in the System Global Area (SGA), not allocated to individual users in the User Global Area (UGA). That way, the package work area can be reused. When the call to the server ends, the memory is returned to the pool. Each time the package is reused, its public variables are initialized to their default values or to NULL.

Serially reusable packages cannot be accessed from database triggers or other PL/SQL subprograms that are called from SQL statements. If you try, Oracle generates an error.

Examples:

CREATE PACKAGE pkg1 IS
  PRAGMA SERIALLY_REUSABLE;
  num NUMBER := 0;
  PROCEDURE init_pkg_state(n NUMBER);
  PROCEDURE print_pkg_state;
END pkg1;
/

CREATE PACKAGE BODY pkg1 IS
  PRAGMA SERIALLY_REUSABLE;

  PROCEDURE init_pkg_state (n NUMBER) IS
  BEGIN
    pkg1.num := n;
  END;

  PROCEDURE print_pkg_state IS
  BEGIN
    DBMS_OUTPUT.PUT_LINE('Num: ' || pkg1.num);
  END;
END pkg1;
/
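A hedged sketch of the reset behavior using pkg1 above: state set in one server call is gone in the next call, because the package variables are re-initialized each time the package work area is reused.

SQL> BEGIN
  2    pkg1.init_pkg_state(10);
  3    pkg1.print_pkg_state;   -- prints Num: 10 (same server call)
  4  END;
  5  /
SQL> EXEC pkg1.print_pkg_state
-- new server call: prints Num: 0, the default value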


ANALYTIC FUNCTIONS
Overview:
The general syntax of an analytic function is:

Function(arg1, ..., argn) OVER ( [PARTITION BY <...>] [ORDER BY <...>] [<window_clause>] )

Analytic functions used with the PARTITION BY clause:
Max
Min
Sum
Avg
Count

Other functions include:
Lag
Lead
Rank
Dense_rank
Row_number
Coalesce

Details:

How are analytic functions different from group or aggregate functions?
SELECT deptno, COUNT(*) DEPT_COUNT
FROM emp
WHERE deptno IN (20, 30)
GROUP BY deptno;

    DEPTNO DEPT_COUNT
---------- ----------
        20          5
        30          6

2 rows selected.

Query-1

Consider Query-1 and its result. Query-1 returns departments and their employee count. Most importantly, it groups the records into departments in accordance with the GROUP BY clause.


As such, any non-"group by" column is not allowed in the select clause.

SELECT empno, deptno,
       COUNT(*) OVER (PARTITION BY deptno) DEPT_COUNT
FROM emp
WHERE deptno IN (20, 30);

     EMPNO     DEPTNO DEPT_COUNT
---------- ---------- ----------
      7369         20          5
      7566         20          5
      7788         20          5
      7902         20          5
      7876         20          5
      7499         30          6
      7900         30          6
      7844         30          6
      7698         30          6
      7654         30          6
      7521         30          6

11 rows selected.

Now consider the analytic function query (Query-2) and its result. Note the repeating values of DEPT_COUNT column. This brings out the main difference between aggregate and analytic functions. Though analytic functions give aggregate result they do not group the result set. They return the group value multiple times with each record. As such any other non-"group by" column or expression can be present in the select clause, for example, the column EMPNO in Query-2. Analytic functions are computed after all joins, WHERE clause, GROUP BY and HAVING are computed on the query. The main ORDER BY clause of the query operates after the analytic functions. So analytic functions can only appear in the select list and in the main ORDER BY clause of the query.

Query-2
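Because analytic functions are computed after the WHERE clause, you cannot filter on their result directly in the same query block; a common workaround (a minimal hedged sketch) is to wrap the query in an inline view:

SELECT *
FROM (SELECT empno, deptno,
             COUNT(*) OVER (PARTITION BY deptno) DEPT_COUNT
      FROM emp)
WHERE dept_count > 5; -- keeps employees of departments having more than 5 people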


In the absence of any PARTITION BY or <window_clause> inside the OVER( ) portion, the function acts on the entire record set returned by the where clause. Note the results of Query-3 and compare them with the result of the aggregate function query, Query-4.

SELECT empno, deptno,
       COUNT(*) OVER ( ) CNT
FROM emp
WHERE deptno IN (10, 20)
ORDER BY 2, 1;

     EMPNO     DEPTNO        CNT
---------- ---------- ----------
      7782         10          8
      7839         10          8
      7934         10          8
      7369         20          8
      7566         20          8
      7788         20          8
      7876         20          8
      7902         20          8

Query-3
SELECT COUNT(*) FROM emp WHERE deptno IN (10, 20);

  COUNT(*)
----------
         8

Query-4

How to break the result set in groups or partitions? It might be obvious from the previous example that the clause PARTITION BY is used to break the result set into groups. PARTITION BY can take any non-analytic SQL expression. Some functions support the <window_clause> inside the partition to further limit the records they act on. In the absence of any <window_clause> analytic functions are computed on all the records of the partition clause.
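None of the queries so far shows the <window_clause> itself; a minimal hedged sketch of one, computing a running salary total per department with a ROWS window:

SELECT empno, deptno, sal,
       SUM(sal) OVER (PARTITION BY deptno
                      ORDER BY hiredate
                      ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW) RUN_TOTAL
FROM emp;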


The functions SUM, COUNT, AVG, MIN and MAX are the common analytic functions whose result does not depend on the order of the records. Functions like LEAD, LAG, RANK, DENSE_RANK, ROW_NUMBER, FIRST, FIRST_VALUE, LAST and LAST_VALUE depend on the order of the records. In the next example we will see how to specify that.

How to specify the order of the records in the partition?
The answer is simple: by the ORDER BY clause inside the OVER( ) clause. This is different from the ORDER BY clause of the main query, which comes after the WHERE clause. In this section we go ahead and introduce the very useful functions LEAD, LAG, RANK, DENSE_RANK, ROW_NUMBER, FIRST, FIRST_VALUE, LAST and LAST_VALUE and show how each depends on the order of the records. The general syntax for specifying the ORDER BY clause in an analytic function is:

ORDER BY <sql_expr> [ASC | DESC] NULLS [FIRST | LAST]

The syntax is self-explanatory.

ROW_NUMBER, RANK and DENSE_RANK
All three functions assign integer values to the rows depending on their order, which is the reason for grouping them together. ROW_NUMBER( ) gives a running serial number to a partition of records. It is very useful in reporting, especially in places where different partitions have their own serial numbers. In Query-5, the function ROW_NUMBER( ) is used to give separate sets of running serials to the employees of departments 10 and 20, based on their HIREDATE.

SELECT empno, deptno, hiredate, ROW_NUMBER( ) OVER (PARTITION BY deptno ORDER BY hiredate NULLS LAST) SRLNO FROM emp WHERE deptno IN (10, 20) ORDER BY deptno, SRLNO;

EMPNO  DEPTNO  HIREDATE   SRLNO
-----  ------  ---------  -----
 7782      10  09-JUN-81      1
 7839      10  17-NOV-81      2
 7934      10  23-JAN-82      3
 7369      20  17-DEC-80      1
 7566      20  02-APR-81      2
 7902      20  03-DEC-81      3
 7788      20  09-DEC-82      4
 7876      20  12-JAN-83      5

8 rows selected.

Query-5 (ROW_NUMBER example)
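A common application of ROW_NUMBER is top-N per group. For instance, the earliest hire in each department (a sketch; ties, if any, are broken arbitrarily):

SELECT empno, deptno, hiredate
FROM (SELECT empno, deptno, hiredate,
             ROW_NUMBER() OVER (PARTITION BY deptno ORDER BY hiredate) srlno
      FROM emp)
WHERE srlno = 1;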

RANK and DENSE_RANK both assign a rank to records based on some column value or expression. In case of a tie of two records at position N, RANK gives both records position N, skips position N+1, and gives the next record position N+2; DENSE_RANK also gives both records position N but does not skip position N+1. Query-6 shows the usage of both RANK and DENSE_RANK. For DEPTNO 20 there are two contenders for the first position (EMPNO 7788 and 7902). Both RANK and DENSE_RANK declare them joint toppers. RANK then skips the next value, 2, and the next employee, EMPNO 7566, is given position 3. With DENSE_RANK there are no such gaps.

SELECT empno, deptno, sal, RANK() OVER (PARTITION BY deptno ORDER BY sal DESC NULLS LAST) RANK, DENSE_RANK() OVER (PARTITION BY deptno ORDER BY sal DESC NULLS LAST) DENSE_RANK FROM emp WHERE deptno IN (10, 20) ORDER BY 2, RANK;

EMPNO  DEPTNO   SAL  RANK  DENSE_RANK
-----  ------  ----  ----  ----------
 7839      10  5000     1           1
 7782      10  2450     2           2
 7934      10  1300     3           3
 7788      20  3000     1           1
 7902      20  3000     1           1
 7566      20  2975     3           2
 7876      20  1100     4           3
 7369      20   800     5           4

8 rows selected.

Query-6 (RANK and DENSE_RANK example)

LEAD and LAG
LEAD computes an expression on the following rows (rows which are going to come after the current row) and returns the value to the current row. The general syntax of LEAD is shown below:

LEAD (<sql_expr>, <offset>, <default>) OVER (<analytic_clause>)

<sql_expr> is the expression to compute from the leading row.
<offset> is the index of the leading row relative to the current row; it is a positive integer with default 1.
<default> is the value to return if <offset> points to a row outside the partition range.

The syntax of LAG is similar, except that the offset for LAG goes into the previous rows. Query-7 and its result show simple usage of the LAG and LEAD functions.

SELECT deptno, empno, sal,
       LEAD(sal, 1, 0) OVER (PARTITION BY deptno ORDER BY sal DESC NULLS LAST) NEXT_LOWER_SAL,
       LAG(sal, 1, 0) OVER (PARTITION BY deptno ORDER BY sal DESC NULLS LAST) PREV_HIGHER_SAL
FROM emp
WHERE deptno IN (10, 20)
ORDER BY deptno, sal DESC;

DEPTNO  EMPNO   SAL  NEXT_LOWER_SAL  PREV_HIGHER_SAL
------  -----  ----  --------------  ---------------
    10   7839  5000            2450                0
    10   7782  2450            1300             5000
    10   7934  1300               0             2450
    20   7788  3000            3000                0
    20   7902  3000            2975             3000
    20   7566  2975            1100             3000
    20   7876  1100             800             2975
    20   7369   800               0             1100

8 rows selected.

Query-7 (LEAD and LAG)

COALESCE
COALESCE returns the first non-null expr in the expression list. You must specify at least two expressions. If all occurrences of expr evaluate to null, then the function returns null. Oracle Database uses short-circuit evaluation: it evaluates each expr value and determines whether it is NULL, rather than evaluating all of the expr values before determining whether any of them is NULL. If all occurrences of expr are of numeric datatype or of any nonnumeric datatype that can be implicitly converted to a numeric datatype, then Oracle Database determines the argument with the highest numeric precedence, implicitly converts the remaining arguments to that datatype, and returns that datatype.

This function is a generalization of the NVL function. You can also use COALESCE as a variant of the CASE expression. For example, COALESCE (expr1, expr2) is equivalent to:

CASE WHEN expr1 IS NOT NULL THEN expr1 ELSE expr2 END

Similarly, COALESCE (expr1, expr2, ..., exprn), for n >= 3, is equivalent to:

CASE WHEN expr1 IS NOT NULL THEN expr1 ELSE COALESCE (expr2, ..., exprn) END

Examples:

The following example uses the sample oe.product_information table to organize a clearance sale of products. It gives a 10% discount to all products with a list price. If there is no list price, then the sale price is the minimum price. If there is no minimum price, then the sale price is 5:

SELECT product_id, list_price, min_price,
       COALESCE(0.9*list_price, min_price, 5) "Sale"
FROM product_information
WHERE supplier_id = 102050
ORDER BY product_id, list_price, min_price, "Sale";

PRODUCT_ID LIST_PRICE  MIN_PRICE       Sale
---------- ---------- ---------- ----------
      1769         48       43.2       43.2
      1770                    73         73
      2378        305        247      274.5
      2382        850        731        765
      3355                                5
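A simpler sketch against the familiar emp table (COMM is NULL for most employees, so the fallbacks kick in; the 10% figure here is purely illustrative):

SELECT ename, sal, comm,
       COALESCE(comm, sal * 0.1, 0) bonus
FROM emp;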

HIERARCHICAL QUERIES
A relational database does not store data in a hierarchical way. Then how do I get the data in a hierarchical manner? Here we get to know how to use the hierarchical querying feature that Oracle provides. This section talks about how you can interpret a hierarchical query conceptually and build hierarchical queries catering to your needs.

Using hierarchical queries, you can retrieve records from a table by their natural relationship, be it a family tree, an employee/manager tree or whatever. Tree walking enables you to construct a hierarchical tree if the relationship lies in the same table. For instance, the MGR column in the EMP table defines the managerial hierarchy. We shall take up the example of the EMP table in the SCOTT schema. Here KING is topmost in the hierarchy:

EMPNO  ENAME   JOB        MGR   HIREDATE
-----  ------  ---------  ----  ---------
 7369  SMITH   CLERK      7902  17-Dec-80
 7499  ALLEN   SALESMAN   7698  20-Feb-81
 7521  WARD    SALESMAN   7698  22-Feb-81
 7566  JONES   MANAGER    7839   2-Apr-81
 7654  MARTIN  SALESMAN   7698  28-Sep-81
 7698  BLAKE   MANAGER    7839   1-May-81
 7782  CLARK   MANAGER    7839   9-Jun-81
 7788  SCOTT   ANALYST    7566  19-Apr-87
 7839  KING    PRESIDENT        17-Nov-81
 7844  TURNER  SALESMAN   7698   8-Sep-81
 7876  ADAMS   CLERK      7788  23-May-87
 7900  JAMES   CLERK      7698   3-Dec-81
 7902  FORD    ANALYST    7566   3-Dec-81
 7934  MILLER  CLERK      7782  23-Jan-82

If we have to query the employees reporting to KING directly:

SELECT empno, ename, job, mgr, hiredate
FROM emp
WHERE mgr = 7839;

EMPNO  ENAME  JOB      MGR   HIREDATE
-----  -----  -------  ----  --------
 7566  JONES  MANAGER  7839  2-Apr-81
 7698  BLAKE  MANAGER  7839  1-May-81
 7782  CLARK  MANAGER  7839  9-Jun-81

But if we have to walk down the tree and check who all report to JONES, BLAKE and CLARK (recursively):

SELECT empno, ename, job, mgr, hiredate
FROM emp
START WITH mgr IS NULL
CONNECT BY PRIOR empno = mgr;


Let us quickly see the keywords used in this query.

START WITH: Specifies the root rows of the hierarchy, or in other words, where to start parsing from. This clause is necessary for true hierarchical queries.
CONNECT BY PRIOR: Explains the relationship between the parent and the child.
PRIOR: Used to achieve the recursive condition (the actual walking).

Direction of walking the tree
To explain more on the CONNECT BY clause: this determines whether you are walking from top to bottom or from bottom to top.

CONNECT BY PRIOR col_1 = col_2

If walking from top to bottom, col_1 is the parent key (the one which identifies the parent) and col_2 is the child key (the one which identifies the child). And here it is:

CONNECT BY PRIOR empno = mgr

SELECT empno, ename, job, mgr, hiredate, LEVEL
FROM emp
START WITH mgr IS NULL
CONNECT BY PRIOR empno = mgr;

gets me this result:

EMPNO  ENAME   JOB        MGR   HIREDATE   LEVEL
-----  ------  ---------  ----  ---------  -----
 7839  KING    PRESIDENT        17-Nov-81      1
 7566  JONES   MANAGER    7839   2-Apr-81      2
 7788  SCOTT   ANALYST    7566  19-Apr-87      3
 7876  ADAMS   CLERK      7788  23-May-87      4
 7902  FORD    ANALYST    7566   3-Dec-81      3
 7369  SMITH   CLERK      7902  17-Dec-80      4
 7698  BLAKE   MANAGER    7839   1-May-81      2
 7499  ALLEN   SALESMAN   7698  20-Feb-81      3
 7521  WARD    SALESMAN   7698  22-Feb-81      3
 7654  MARTIN  SALESMAN   7698  28-Sep-81      3
 7844  TURNER  SALESMAN   7698   8-Sep-81      3
 7900  JAMES   CLERK      7698   3-Dec-81      3
 7782  CLARK   MANAGER    7839   9-Jun-81      2
 7934  MILLER  CLERK      7782  23-Jan-82      3

If walking from bottom to top, col_1 should be the child key and col_2 should be the parent key:

CONNECT BY PRIOR mgr = empno
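For example, walking up from SMITH to the top of the tree (a sketch; with the standard SCOTT data this returns the chain SMITH, FORD, JONES, KING):

SELECT empno, ename, mgr, LEVEL
FROM emp
START WITH empno = 7369
CONNECT BY PRIOR mgr = empno;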

Using LEVEL
The LEVEL pseudocolumn shows the level, or rank, of a particular row in the hierarchical tree. The query below shows the level of KING and the level of the people reporting directly to him:

SELECT empno, ename, job, mgr, hiredate, LEVEL
FROM emp
WHERE LEVEL <= 2
START WITH mgr IS NULL
CONNECT BY PRIOR empno = mgr;

EMPNO  ENAME  JOB        MGR   HIREDATE   LEVEL
-----  -----  ---------  ----  ---------  -----
 7839  KING   PRESIDENT        17-Nov-81      1
 7566  JONES  MANAGER    7839   2-Apr-81      2
 7698  BLAKE  MANAGER    7839   1-May-81      2
 7782  CLARK  MANAGER    7839   9-Jun-81      2

Here LEVEL is used in the WHERE clause to restrict the records to the second level. LEVEL can also be used to format the output to form a graph structure:

SELECT LPAD(ename, LENGTH(ename) + LEVEL * 10 - 10, '-')
FROM emp
START WITH mgr IS NULL
CONNECT BY PRIOR empno = mgr;

KING
----------JONES
--------------------SCOTT
------------------------------ADAMS
--------------------FORD
------------------------------SMITH
----------BLAKE
--------------------ALLEN
--------------------WARD
--------------------MARTIN
--------------------TURNER
--------------------JAMES
----------CLARK
--------------------MILLER

Pruning branches/children
There might be business requirements to retrieve a hierarchical tree only partially and to prune branches. If you want to eliminate a particular row but still process its children, use the WHERE condition:


SELECT empno, ename, job, mgr, hiredate
FROM emp
WHERE ename <> 'JONES'
START WITH mgr IS NULL
CONNECT BY PRIOR empno = mgr;

This will remove JONES from the result set but will still retrieve SCOTT and FORD. To eliminate CLARK and his children as well, add the condition to the CONNECT BY clause instead:

SELECT empno, ename, job, mgr, hiredate
FROM emp
START WITH mgr IS NULL
CONNECT BY PRIOR empno = mgr AND ename <> 'CLARK';

Hierarchical Joins
We can write a join into a hierarchical query. The key to writing a hierarchical join successfully is to understand the order in which Oracle processes hierarchical queries. Conceptually, the following sequence of events occurs when you execute a hierarchical query containing a join:

1. The join is executed first, which means any join predicates are evaluated first.
2. The CONNECT BY processing is applied to the rows returned from the join operation.
3. Any filtering predicates from the WHERE clause are applied to the results of the CONNECT BY operation.

It is important to understand that the CONNECT BY processing runs on the results of the join. E.g.:

SELECT assembly_id id, parent_assembly parent, assembly_name name,
       bom.part_number part, current_inventory inventory
FROM bill_of_materials bom, part p
WHERE bom.part_number = p.part_number (+)
AND p.current_inventory < 500
START WITH assembly_id = 200
CONNECT BY parent_assembly = PRIOR assembly_id;
The WHERE clause contains two predicates: one for the join and one restricting query results to parts with low inventories. When this query executes, the following sequence of events occurs:

1. The bill_of_materials and part tables are joined, with part being the optional table. The bom.part_number = p.part_number (+) predicate is evaluated as part of this step, because it is a join condition.
2. CONNECT BY processing occurs, producing the hierarchical listing of assemblies and parts that make up an airplane.
3. The filtering predicate p.current_inventory < 500 is applied to restrict the query's final output to only those rows representing parts having low inventory levels.


You'll notice we used the old join syntax in the previous example. There is a reason for that: there is, unfortunately, a bug in Oracle9i Database involving the new JOIN clause and CONNECT BY queries. The following query uses the new JOIN clause to express the same join as in the previous query. Unfortunately, use of the newer join syntax causes all predicate evaluation to occur before CONNECT BY processing. In this case, the result is that the root row is filtered out too early, the CONNECT BY processing finds no root to start from, and the query returns no rows:

SELECT assembly_id id, parent_assembly parent, assembly_name name,
       bom.part_number part, current_inventory inventory
FROM bill_of_materials bom
LEFT OUTER JOIN part p ON bom.part_number = p.part_number
WHERE p.current_inventory < 500
START WITH assembly_id = 200
CONNECT BY parent_assembly = PRIOR assembly_id;

no rows selected

Finding the Path
Oracle9i introduced a new hierarchical function named SYS_CONNECT_BY_PATH. For any row in a hierarchy, this function gives you the complete path to that row from the root. SYS_CONNECT_BY_PATH takes two arguments: a column name and a separator character. For each row in the hierarchy, the function returns each value in the given column from the root row down to the current row. The values are separated by whatever separator character you specify. You can generate a path using any column; you aren't limited to the CONNECT BY column.

Example query:

SELECT emp.*, SYS_CONNECT_BY_PATH(ename, '/') name_path
FROM emp
START WITH mgr IS NULL
CONNECT BY PRIOR empno = mgr;

Output:
EMPNO  ENAME   JOB        MGR   HIREDATE    SAL  COMM  DEPTNO  NAME_PATH
-----  ------  ---------  ----  ---------  ----  ----  ------  ------------------
 7839  KING    PRESIDENT  -     17-NOV-81  5500  -         10  /KING
 7782  CLARK   MANAGER    7839  09-JUN-81  2695  -         10  /KING/CLARK
 7934  MILLER  CLERK      7782  23-JAN-82  1430  -         10  /KING/CLARK/MILLER
 7698  BLAKE   MANAGER    7839  01-MAY-81  3135  -         30  /KING/BLAKE
 7900  JAMES   CLERK      7698  03-DEC-81  1045  -         30  /KING/BLAKE/JAMES
 7844  TURNER  SALESMAN   7698  08-SEP-81  1650  0         30  /KING/BLAKE/TURNER
 7654  MARTIN  SALESMAN   7698  28-SEP-81  1375  1400      30  /KING/BLAKE/MARTIN
 7521  WARD    SALESMAN   7698  22-FEB-81  1375  500       30  /KING/BLAKE/WARD
 7499  ALLEN   SALESMAN   7698  20-FEB-81  1760  300       30  /KING/BLAKE/ALLEN
 7566  JONES   MANAGER    7839  02-APR-81  3273  -         20  /KING/JONES

New Features in Oracle Database 10g
There are three new CONNECT BY features in Oracle Database 10g. The examples are based on the same table, and use the same data, as those in the previous section.

CONNECT_BY_ISCYCLE
CONNECT_BY_ISLEAF
CONNECT_BY_ROOT

These are well-thought-out and welcome improvements to Oracle's hierarchical query support. These features solve common and long-standing problems inherent in querying hierarchical data, problems that are difficult to solve otherwise.

Getting the Root
The CONNECT_BY_ROOT operator enables you to reference root-row values from anywhere in a hierarchy. One use for CONNECT_BY_ROOT is to identify all products containing a given part. Suppose we work for a manufacturing company and have just discovered that part 1019 is defective, and the Consumer Product Safety Commission has ordered us to recall all products sold containing that part. We can take advantage of the new CONNECT_BY_ROOT operator:

E.g.

SELECT DISTINCT CONNECT_BY_ROOT assembly_id,
       CONNECT_BY_ROOT assembly_name
FROM bill_of_materials
WHERE part_number = 1019
START WITH parent_assembly IS NULL
CONNECT BY parent_assembly = PRIOR assembly_id;

CONNECT_BY_ROOTASSEMBLY_ID CONNECT_BY_ROOTASSEMBLY
-------------------------- -----------------------
                       100 Automobile
Getting the Leaf
The CONNECT_BY_ISLEAF flag tells whether the current row has child rows or not. It returns 1 if the row is a leaf node in the hierarchy, or 0 if the row has one or more child rows below it in the hierarchy.


E.g.

SELECT assembly_id,
       RPAD(' ', 2*(LEVEL-1)) || assembly_name assembly_name,
       quantity, CONNECT_BY_ISLEAF
FROM bill_of_materials
WHERE LEVEL = 2
START WITH assembly_id = 110
CONNECT BY parent_assembly = PRIOR assembly_id;

ASSEMBLY_ID ASSEMBLY_NAME             QUANTITY CONNECT_BY_ISLEAF
----------- ----------------------- ---------- -----------------
        111   Piston                         6                 1
        112   Air Filter                     1                 1
        113   Spark Plug                     6                 1
        114   Block                          1                 1
        115   Starter System                 1                 0

We can see two values for CONNECT_BY_ISLEAF. The value of 1 for Piston, Air Filter, Spark Plug and Block indicates that those assemblies are leaf nodes under which no more assemblies are to be found. Knowing that, we can adjust our display so the user knows not to bother drilling down on those elements. On the other hand, the Starter System has a CONNECT_BY_ISLEAF value of 0, indicating that there are still subassemblies to be retrieved.

Getting Rid of the Loop Error
Whenever we work with hierarchical data, there is the chance we might encounter a hierarchy that is circular. For example, someone might set the parent of an automobile to be a spark plug. An attempt to query the tree of assemblies for "Automobile" will then fail:

ERROR: ORA-01436: CONNECT BY loop in user data


When we get an error message like this, we can use the CONNECT_BY_ISCYCLE pseudocolumn to locate the row (or rows) causing the problem. To do that, we must also add the NOCYCLE keyword to the CONNECT BY clause, to prevent the database from following any loops in the hierarchy. E.g.

SELECT RPAD(' ', 2*(LEVEL-1)) || assembly_name assembly_name,
       quantity, CONNECT_BY_ISCYCLE
FROM bill_of_materials
START WITH assembly_id = 100
CONNECT BY NOCYCLE parent_assembly = PRIOR assembly_id;

ASSEMBLY_NAME                    QUANTITY CONNECT_BY_ISCYCLE
------------------------------ ---------- ------------------
Automobile                                                 0
  Combustion Engine                     1                  0
    Piston                              6                  0
    Air Filter                          1                  0
    Spark Plug                          6                  1
    Block                               1                  0

Note that CONNECT_BY_ISCYCLE returns a 1 for the "Spark Plug" row. When we use NOCYCLE, the database keeps track of its path through the hierarchy, constantly checking to ensure it is not following a loop. After going from "Automobile" to "Combustion Engine" to "Spark Plug", the database sees that "Spark Plug's" child is "Automobile," a row that is already in the path taken to get to "Spark Plug". Such a row represents a loop. NOCYCLE prevents the database from following the loop, and CONNECT_BY_ISCYCLE returns a 1 to identify the row in which the loop occurs. Now that we know where the problem is, we can fix it.

Final Tips
Oracle provides strong support for querying hierarchical data. Take advantage of these features! And when you do use them, keep the following points in mind:

Always write a START WITH clause to identify the root nodes you wish the query to traverse.
Be sure to properly identify the relationship between child and parent rows in the CONNECT BY clause.
Remember that join operations precede CONNECT BY operations. Don't write a join condition that filters out rows from the hierarchies you want to traverse. Writing such a condition will result in, at best, no results at all; at worst, you may get erroneous results.
Be careful of WHERE clauses in hierarchical queries that include joins, especially when a predicate references a column in the non-hierarchical table.

In short, START WITH and CONNECT BY are your tools for defining your hierarchy. After that, it's just a matter of being careful with joins and WHERE clauses.

GLOBAL TEMPORARY TABLES



Creating and Using Temporary Tables in Oracle
A useful feature for any type of programming is the ability to store and use temporary data. Oracle provides us this ability with temporary tables. These temporary tables are created just like any other table (using some special modifiers), and the data definition of the table is visible to all sessions, just like regular tables. The temporary aspect of these tables is in regard to the data: the data is temporary and is visible only to the session inserting it.

Creating a temporary table
The definition of a temporary table persists just like that of a permanent table, but it contains either session-specific or transaction-specific data. These two types control how temporary you want the data to be. A session gets bound to the temporary table with the first insert into the table. This binding goes away, and thus the data disappears, by issuing a truncate of the table or by ending either the session or the transaction, depending on the temporary table type.

Session-specific
Data stored in a session-specific temporary table exists for the duration of the session and is truncated (all rows deleted) when the session is terminated. This means that data can be shared between transactions in a single session. This type of temporary table is useful for client/server applications that have a persistent connection to the database. The DDL for creating a session-specific temporary table is:

CREATE GLOBAL TEMPORARY TABLE search_results
(search_id NUMBER,
 result_key NUMBER)
ON COMMIT PRESERVE ROWS;

Transaction-specific
Data stored in transaction-specific temporary tables is good for the duration of the transaction and will be truncated after each commit. This type of table allows only one transaction at a time: if there are several autonomous transactions in the scope of a single transaction, they must wait until the previous one commits. This type of temporary table can be used for client/server applications and is the best choice for Web applications, since Web-based applications typically use a connection pool for database connectivity. Here is an example DDL for creating a transaction-specific temporary table:

CREATE GLOBAL TEMPORARY TABLE search_results
(search_id NUMBER,
 result_key NUMBER)
ON COMMIT DELETE ROWS;
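A quick sketch of the session-specific behavior, using the ON COMMIT PRESERVE ROWS version of search_results defined above:

-- Session 1:
INSERT INTO search_results VALUES (1, 100);
COMMIT;
SELECT COUNT(*) FROM search_results;  -- returns 1: rows survive the commit

-- Session 2 (a different connection):
SELECT COUNT(*) FROM search_results;  -- returns 0: each session sees only its own rows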


What you can and can't do
There are certain features that are still available when using temporary tables, and there are specific restrictions that are primarily due to the temporary nature of the data.

Features of temporary tables:
Data is visible only to the session. The table definition is visible to all sessions.
In rolling back a transaction to a savepoint, the data will be lost but the table definition persists.
You can create indexes on temporary tables. The indexes created are also temporary, and the data in the index has the same session or transaction scope as the data in the table.
You can create views that access both temporary and permanent tables.
You can create triggers on a temporary table.
You can use the TRUNCATE command against the temporary table. This will release the binding between the session and the table but won't affect any other sessions that are using the same temporary table.
The export and import utilities handle the definition of the temporary table, but not the data.
If you call a procedure with an autonomous transaction that commits, that commit does not wipe out the session's data in a global temporary table, even if it is an ON COMMIT DELETE ROWS table.

Restrictions:
Temporary tables can't be index-organized, partitioned, or clustered.
You can't specify foreign key constraints.
Columns can't be defined as either varrays or nested tables.
You can't specify a tablespace in the storage clause. It will always use the temporary tablespace.
Parallel DML and queries aren't supported.
A temporary table must be either session- or transaction-specific; it can't be both.
Backup and recovery of a temporary table's data isn't available.
Data in a temporary table can't be exported using the Export utility.

Query Plans with Temporary Tables
Below is an example problem with global temporary tables. I have two queries:

Query 1: a query between a global temporary table and a table with 3 million records. This query takes about seven seconds to give the results.
Query 2: a query between a normal table and the table with 3 million records. This query takes less than one second to give the results.

When doing an explain plan on the query with a global temporary table (Query 1), I found out that a full scan is being done on the table with 3 million records. The full scan does not happen on the query with the normal table (Query 2). How can I reduce the time from seven seconds in Query 1? Why is a full scan being done on the table with 3 million records when it is joined with a global temporary table, and how can this be avoided?

The root cause here is the optimizer's lack of statistical information. By default, the optimizer will assume that there are N rows in a global temporary table (N is 8,168 in an 8K block size database). Since it would be rare for your global temporary table to have 8,168 rows in real life, you need to give a hand to the optimizer and provide it with realistic statistics. Before you do that, let's see how many rows the temporary table is assumed to have by using autotrace in SQL*Plus:

9iR2> create global temporary table
  2   gtt ( x int );
Table created.

9iR2> set autotrace traceonly explain
9iR2> select /*+ first_rows */ *
  2   from gtt;

Execution Plan
---------------------------------------
SELECT STATEMENT (Cost=11 Card=8168 Bytes=106184)
  TABLE ACCESS (FULL) OF 'GTT' (Cost=11 Card=8168 Bytes=106184)

It is interesting to note that, in Oracle 10g, you will observe something very different by default in the EXPLAIN plan:

10G> select *
  2   from gtt;

Execution Plan
---------------------------------------
SELECT STATEMENT (Cost=2 Card=1 Bytes=13)
  TABLE ACCESS (FULL) OF 'GTT' (Cost=2 Card=1 Bytes=13)

The underlying reason Oracle 10g is getting a much better estimate of the true size of the global temporary table is going to be one of your three solution choices. In Oracle 10g, since the Cost-Based Optimizer (CBO) is the only optimizer, it is much more important to have correct statistics. Therefore, the database employs a technique called dynamic sampling, first introduced with Oracle9i Release 2. Dynamic sampling permits the optimizer to take a quick look at the table when statistics are missing. It will sample the data in the table to come up with better estimates of what it is dealing with.

So the three solutions available to you are:
Using dynamic sampling
Using DBMS_STATS.SET_TABLE_STATS
Using the CARDINALITY hint

I'll demonstrate how to use each in turn. In Oracle 10g, dynamic sampling will work out-of-the-box, because the default setting has been increased from 1 to 2. At level 2, the optimizer will dynamically sample any unanalyzed object referenced in a query processed by the optimizer prior to evaluating the query plan. I can use an ALTER SESSION|SYSTEM command in Oracle9i Release 2 to make it behave the way Oracle 10g does by default, or I can use the dynamic sampling hint as follows:

9iR2> select /*+ first_rows
  2     dynamic_sampling(gtt 2) */ *
  3   from gtt;

Execution Plan
----------------------------------
SELECT STATEMENT (Cost=11 Card=1 Bytes=13)
  TABLE ACCESS (FULL) OF 'GTT' (Cost=11 Card=1 Bytes=13)

There I set the dynamic sampling to level 2 for the table GTT in this query. The optimizer therefore quickly scans the table to come up with more-realistic estimates of the true size of this table. The following example adds rows to the GTT table and runs the dynamic sampling hint to have the optimizer sample the unanalyzed objects referenced in the query. Note the increased Card= values:

9iR2> insert into gtt
  2   select rownum
  3   from all_objects;
32073 rows created.

9iR2> set autotrace traceonly explain
9iR2> select /*+ first_rows
  2     dynamic_sampling(gtt 2) */ *
  3   from gtt;

Execution Plan
------------------------------------
SELECT STATEMENT (Cost=11 Card=32073 Bytes=416949)
  TABLE ACCESS (FULL) OF 'GTT' (Cost=11 Card=32073 Bytes=416949)

Another solution, useful in versions prior to Oracle9i Release 2, is to use DBMS_STATS to set representative statistics on the global temporary table. You can do this after the table is created, or you can do it after filling the global temporary table with data (and hence you'll know how many rows it has). If the temporary table is generally the same size from run to run, you would want to set this value just once and be done with it. If the temporary table sizes vary widely, you might consider setting the statistics after populating the table. Since DBMS_STATS implicitly commits (it is like DDL in that sense), you need to be careful how you use it. The following example demonstrates using an AUTONOMOUS_TRANSACTION to permit the use of DBMS_STATS without committing your current transaction's work:

9iR2> declare
  2     pragma autonomous_transaction;
  3   begin
  4     dbms_stats.set_table_stats
  5       ( user, 'GTT',
  6         numrows => 12345 );
  7     commit;
  8   end;
  9   /

PL/SQL procedure successfully completed.

9iR2> set autotrace traceonly explain
9iR2> select * from gtt;

Execution Plan
------------------------------------
SELECT STATEMENT Optimizer=CHOOSE (Cost=11 Card=12345 Bytes=160485)
  TABLE ACCESS (FULL) OF 'GTT' (Cost=11 Card=12345 Bytes=160485)

Note that a commit in this example could well clear out a global temporary table, undoing all of your work on it! That is why the AUTONOMOUS_TRANSACTION is really important here. The optimizer now believes that the table GTT has 12,345 rows in it and will use that fact whenever it optimizes a query that references that table.

The third solution is the CARDINALITY hint. I include this because it is the only option when using collection variables (not global temporary tables, but rather in-memory tables contained in a PL/SQL collection). The database will not dynamically sample these, nor are they real tables, so no statistics can be stored about them. The only way to communicate to the database the estimated size of this sort of object is to use this hint, as in the following:

9iR2> select
  2     /*+ cardinality( gtt 999 ) */ *
  3   from gtt;

Execution Plan
----------------------------------
SELECT STATEMENT Optimizer=CHOOSE (Cost=11 Card=999 Bytes=12987)
  TABLE ACCESS (FULL) OF 'GTT' (Cost=11 Card=999 Bytes=12987)

Here I explicitly told the optimizer how many rows it could expect to find in my global temporary table. Note that the CARDINALITY hint is available only in Oracle9i Release 1 and later releases.

More on the CARDINALITY Hint

Whenever you try to use an in-memory array in your SQL statement, Oracle's query optimizer seems to get confused in doing its job. You have to help it by giving the CARDINALITY hint; otherwise a full table scan is guaranteed to happen. For example, you have a type:

CREATE TYPE numtable IS TABLE OF NUMBER;

and you populate your internal array like this:

SELECT client_id
BULK COLLECT INTO client_ids
FROM client
WHERE client_type = 'XXX';

When you try to use the array in this query:

SELECT a.client_name, a.address
FROM client_detail a,
     TABLE(CAST(client_ids AS numtable)) t
WHERE a.client_id = t.column_value;

This looks simple, but regardless of how you define the index, Oracle will do a full table scan. Don't really know why. To fix this you need to give it a CARDINALITY hint; it doesn't have to be precise, but it should be at least a rough estimate of how many records you think the array will contain. For example:

SELECT /*+ cardinality(t 8) */ a.client_name, a.address
FROM client_detail a,
     TABLE(CAST(client_ids AS numtable)) t
WHERE a.client_id = t.column_value;

That will make the query perform much better.


QUERY OPTIMIZATION
Query optimization guidelines:

1) Create indexes on tables. Use the INDEX hint to explicitly tell Oracle to use an index.

2) Create an index on the join column(s) in a join.

3) Bypassing the query optimizer using the ORDERED hint: if dept has 100 rows, emp 1,000 and abc 100,000 rows, then the order of the tables in the FROM clause is important. The table with the fewest rows should come first, as in ...dept, emp, abc... Use the ORDERED hint to make sure that the given ordering is used by the optimizer:

SELECT /*+ ordered */ d.NAME, e.NAME
FROM DEPT d, EMP e
WHERE d.MGR = e.SS#

or:

SELECT /*+ ordered */ d.NAME, e.NAME
FROM EMP e, DEPT d
WHERE d.MGR = e.SS#

Suppose that there are 10 departments and 1000 employees, and that the inner table in each query has an index on the join column. In the first query, the first table produces 10 qualifying rows (in this case, the whole table). In the second query, the first table produces 1000 qualifying rows. The first query will access the EMP table 10 times and scan the DEPT table once. The second query will scan the EMP table once but will access the DEPT table 1000 times.

4) Convert an IN subquery to a join when the select list in the subquery is uniquely indexed. Assume that c1 is the primary key of table t2:

SELECT c2 FROM t1 WHERE c2 IN (SELECT c1 FROM t2);

becomes:

SELECT c2 FROM t1, t2 WHERE t1.c2 = t2.c1;

5) GROUP BY and ORDER BY clauses attempt to avoid sorting if a suitable index is available. The sorting step for an ORDER BY clause in a SELECT statement is eliminated if ALL of the following conditions are met:
1. All ORDER BY columns are in ascending order or all are in descending order.
2. Only columns appear in the ORDER BY clause. That is, no expressions are used in the ORDER BY clause.
3. The ORDER BY columns are a prefix of some base table index, i.e. part of the initial columns of the index.
4. The cost of accessing by the index is less than sorting the result set.

6) If the GROUP BY columns are the prefix of a base table index, the sorting step in the grouping operation is also eliminated.

7) Rewrite complex subqueries with global temporary tables:
i) If the amount of data to be processed or utilized from your PL/SQL procedure is too large to fit comfortably in a PL/SQL table, use a GLOBAL TEMPORARY table rather than a normal table. A GLOBAL TEMPORARY table has a persistent definition but its data is not persistent, and the global temporary table generates no redo or rollback information. For example, if you are processing a large number of rows, the results of which are not needed when the current session has ended, you should create the table as a temporary table instead.
ii) Use the CARDINALITY hint, DBMS_STATS.SET_TABLE_STATS or dynamic sampling level 2 to provide statistics for a GTT, as these tables are not analyzed.

8) Re-write NOT IN and NOT EXISTS subqueries as outer joins (NOT doesn't use indexes):

select book_key from book
where book_key NOT IN (select book_key from sales);

Below we combine the outer join with a NULL test in the WHERE clause without using a subquery, giving a faster execution plan:

select b.book_key
from book b, sales s
where b.book_key = s.book_key(+)
and s.book_key IS NULL;

9) Index your NULL values. If you have SQL that frequently tests for NULL, consider creating a function-based index on NULL values:

create index emp_null_ename_idx on emp (nvl(ename,'null'));

Then:
1 - Add a hint to force the index.
2 - Change the WHERE predicate to match the function:

-- test the index access (change predicate to use FBI)
select /*+ index(e emp_null_ename_idx) */ ename
from emp e
where nvl(ename,'null') = 'null';

10) Avoid the LIKE predicate and the not-equal operator <>, as they don't use indexes.

11) Use DECODE or CASE instead of IF-ELSE.

12) Don't fear full-table scans. Not all queries are optimal when they use indexes. If your query will return a large percentage of the table rows, a full-table scan may be faster than an index scan. Use the hint /*+ full(tablename) */ to force a full table scan.

13) When doing any referential validations, i.e. foreign key validations, use the EXISTS operator. The reason for this is that with the EXISTS operator the Oracle kernel knows that once it has found one match it can stop. It doesn't have to continue the full table scan as it does with IN.

14) Use UNION ALL instead of UNION, as UNION does sorting whereas UNION ALL just combines the data and returns it without sorting.

15) Do not query views in complex joins; instead, query the base tables. This is because the view itself needs to be executed first.

16) Use composite indexes: a concatenated index is simply an index comprising more than one column. It is often more selective than a single-key index; the combination of columns will point to a smaller number of rows than indexes composed of the individual columns. A concatenated index that contains all of the columns referred to in an SQL statement's WHERE clause will usually be very effective.
1. If more than one column from a table appears in the WHERE clause and there is no concatenated index on the columns concerned but there are indexes on the individual columns, then Oracle may decide to perform an index merge.
2. In order to perform an index merge, Oracle retrieves all rows from each index with matching values and then merges these two lists, or result sets, and returns only those that appear in both lists.
3. Performing index merges is almost always less efficient than the equivalent concatenated index. If you see an index merge (shown in execution plans with the AND-EQUAL operator), consider creating an appropriate concatenated index.

17) When using a function-based index, the function should be deterministic (a function that returns the same value each time for the same given input).

18) Use bind variables so that queries can be reused, and the queries need not be hard parsed every time.


Each time a query is submitted, Oracle first checks the shared pool to see whether this statement has been submitted before. If it has, the execution plan that this statement previously used is retrieved, and the SQL is executed. If the statement cannot be found in the shared pool, Oracle has to go through the process of parsing the statement, working out the various execution paths and coming up with an optimal access plan before it can be executed. This process is known as a hard parse, and for OLTP applications it can actually take longer to carry out than the DML instruction itself. Hard parsing is very CPU intensive and involves obtaining latches on key shared memory areas, which, while it might not affect a single program running against a small set of data, can bring a multi-user system to its knees if hundreds of copies of the program are trying to hard parse statements at the same time.

19) Use PLS_INTEGER instead of BINARY_INTEGER or NUMBER wherever possible. PLS_INTEGER is faster, as it uses machine arithmetic whereas BINARY_INTEGER uses library arithmetic.
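To illustrate item 18, a minimal sketch (the table and bind names are illustrative):

-- Each new literal value forces a fresh hard parse:
SELECT ename FROM emp WHERE empno = 7369;
SELECT ename FROM emp WHERE empno = 7499;

-- With a bind variable, one shared cursor is parsed once and then reused:
SELECT ename FROM emp WHERE empno = :empno;

Note that in static SQL inside PL/SQL, references to PL/SQL variables are turned into bind variables automatically, so this mainly matters for dynamically built SQL and for SQL issued directly from client code.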

PERFORMANCE TUNING TOPICS


1) Nested Loop vs Hash Join vs Sort Merge Join
2) Explain Plan
3) DBMS_APPLICATION_INFO
4) Access Paths
5) DBMS_PROFILER

1) Nested Loop vs Hash Join vs Sort Merge Join
Joins allow rows from two or more tables to be merged, usually based on common key values. Oracle supports three join techniques:
The NESTED LOOP join
The HASH join
The SORT-MERGE join

Nested loop (loop over loop)
In this algorithm, an outer loop is formed which consists of few entries, and then for each entry an inner loop is processed.

Ex:
Select tab1.*, tab2.* from tab1, tab2 where tab1.col1 = tab2.col2;

It is processed like:


For i in (select * from tab1) loop
  For j in (select * from tab2 where col2 = i.col1) loop
    Display results;
  End loop;
End loop;

The steps involved in doing a nested loop are:
a) Identify the outer (driving) table.
b) Assign the inner (driven) table to the outer table.
c) For every row of the outer table, access the rows of the inner table.

In the execution plan it is seen like this:

NESTED LOOPS
  outer_loop
  inner_loop

When does the optimizer use nested loops?
The optimizer uses a nested loop when we are joining tables containing a small number of rows with an efficient driving condition. It is important to have an index on the column of the inner join table, as this table is probed every time for a new value from the outer table.

The optimizer may not use a nested loop in case:
1. The number of rows in both tables is quite high.
2. The inner query always results in the same set of records.
3. The access path of the inner table is independent of the data coming from the outer table.

Note: You will see more use of nested loops when using the FIRST_ROWS optimizer mode, as it works on the model of showing instantaneous results to the user as they are fetched. There is no need to cache any data before it is returned to the user. In the case of a hash join this is needed, as explained below.
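If you need to request this join method yourself, the USE_NL hint asks the optimizer for a nested loop with the named table as the inner (probed) table. A sketch against the familiar dept/emp tables (ORDERED makes dept the driving table):

SELECT /*+ ORDERED USE_NL(e) */ d.dname, e.ename
FROM dept d, emp e
WHERE e.deptno = d.deptno;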

Hash join
Hash joins are used when joining large tables. The optimizer uses the smaller of the two tables to build a hash table in memory, then scans the larger table and compares the hash value (of rows from the large table) with this hash table to find the joined rows.


The algorithm of a hash join is divided into two parts:
1. Build an in-memory hash table on the smaller of the two tables.
2. Probe this hash table with the hash value for each row of the second table.

In simpler terms it works like this:

Build phase
For each row RW1 in small (left/build) table loop
  Calculate hash value on RW1 join key
  Insert RW1 in appropriate hash bucket.
End loop;

Probe phase
For each row RW2 in big (right/probe) table loop
  Calculate the hash value on RW2 join key
  For each row RW1 in hash table loop
    If RW1 joins with RW2
      Return RW1, RW2
  End loop;
End loop;

When does the optimizer use a hash join?
The optimizer uses a hash join while joining big tables or a big fraction of small tables. Unlike the nested loop, the output of a hash join result is not instantaneous, as hash joining is blocked on building up the hash table.

Note: You may see more hash joins used with the ALL_ROWS optimizer mode, because it works on the model of showing results after all the rows of at least one of the tables are hashed in the hash table.

Sort merge join
A sort merge join is used to join two independent data sources. It generally performs better than a nested loop when the volume of data is big, but not as good as a hash join. It performs better than a hash join when the join condition columns are already sorted or no sorting is required. The full operation is done in two parts:

Sort join operation:
get first row RW1 from input1
get first row RW2 from input2

Merge join operation:
while not at the end of either input loop
  if RW1 joins with RW2 then
    get next row RW2 from input 2
    return (RW1, RW2)
  elsif RW1 < RW2 then
    get next row RW1 from input 1
  else
    get next row RW2 from input 2
  end if
end loop

Note: If the data is already sorted, the first step is avoided. The important point to understand is that, unlike a nested loop where the driven (inner) table is read as many times as there are rows from the outer table, in a sort merge join each of the tables involved is accessed at most once. So it proves to be better than a nested loop when the data set is large.

When does the optimizer use a sort merge join?
a) When the join condition is an inequality condition (like <, <=, >=). This is because a hash join cannot be used for inequality conditions, and if the data set is large, a nested loop is definitely not an option.
b) If sorting is anyway required due to some other attribute (other than the join), like ORDER BY, the optimizer prefers a sort merge join over a hash join as it is cheaper.

Note: A sort merge join can be seen with both the ALL_ROWS and FIRST_ROWS optimizer hints, because it works on the model of first sorting both data sources and then starting to return the results. So if the data set is large and you have FIRST_ROWS as the optimizer goal, the optimizer may prefer a sort merge join over a nested loop because of the large data. And if you have ALL_ROWS as the optimizer goal and an inequality condition is used in the SQL, the optimizer may use a sort merge join over a hash join.

2) Explain Plan
Queries can be analyzed using the EXPLAIN PLAN command. The EXPLAIN PLAN command displays the execution plan chosen by the Oracle optimizer for SELECT, UPDATE, INSERT, and DELETE statements. A statement's execution plan is the sequence of operations that Oracle performs to execute the statement. By examining the execution plan, you can see exactly how Oracle executes your SQL statement.

EXPLAIN PLAN results alone cannot tell you which statements will perform well and which badly. For example, just because EXPLAIN PLAN indicates that a statement will use an index does not mean that the statement will run quickly. The index might be very inefficient! Use EXPLAIN PLAN to determine the access plan and to test modifications to improve the performance.

EXPLAIN PLAN tells you the execution plan the optimizer would choose if it were to produce an execution plan for a SQL statement at the current time, with the current set of initialization and session parameters. But this is not necessarily the same as the plan that was used at the time the given statement was actually executed. The optimizer bases its analysis on many pieces of data--some of which may have changed!

How to get the EXECUTION PLAN for a query?
TOAD has a facility to see the query execution plan. If you do not have TOAD, then in a SQL*Plus session issue the following command and run your query:

SQL> SET AUTOTRACE TRACEONLY EXPLAIN
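If your Oracle version is 9i Release 2 or later, DBMS_XPLAN gives a nicely formatted plan as well. A sketch:

EXPLAIN PLAN FOR
SELECT e.ename, d.dname
FROM emp e, dept d
WHERE e.deptno = d.deptno;

SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);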

How to Read the Plan? It is easier to form a tree structure of the plan where each node is a statement to be executed. Then start reading from bottom to top.

Example execution plan:

0     SELECT STATEMENT Optimizer=CHOOSE (Cost=56132 Card=62639 Bytes=2818755)
1   0   HASH JOIN (Cost=56132 Card=62639 Bytes=2818755)
2   1     TABLE ACCESS (FULL) OF 'PAR_BA_PRODUCTTYPE' (Cost=1 Card=110 Bytes=2090)
3   1     MERGE JOIN (Cost=56097 Card=62639 Bytes=1628614)
4   3       SORT (JOIN) (Cost=568 Card=62639 Bytes=626390)
5   4         TABLE ACCESS (FULL) OF 'PAR_PORTFOLIO_DETAILS' (Cost=55 Card=62639 Bytes=626390)
6   3       SORT (JOIN) (Cost=55529 Card=2260835 Bytes=36173360)
7   6         TABLE ACCESS (FULL) OF 'PAR_LIMIT' (Cost=15481 Card=2260835 Bytes=36173360)


Tree diagram:

SELECT STATEMENT
  HASH JOIN
    TABLE ACCESS (FULL) OF PAR_BA_PRODUCTTYPE
    MERGE JOIN
      SORT (JOIN)
        TABLE ACCESS (FULL) OF PAR_PORTFOLIO_DETAILS
      SORT (JOIN)
        TABLE ACCESS (FULL) OF PAR_LIMIT
How to read the above execution plan: the tables at nodes 5 and 7 are sorted and then merged using a sort-merge join, and the result of this join is then joined with the table at node 2 using a hash join.

3) DBMS_APPLICATION_INFO

Overview:
Application developers can use the DBMS_APPLICATION_INFO package with Oracle Trace and the SQL trace facility to record the names of executing modules or transactions in the database for later use when tracking the performance of various modules. Registering the application allows system administrators and performance tuning specialists to track performance by module. System administrators can also use this information to track resource use by module. When an application registers with the database, its name and actions are recorded in the V$SESSION and V$SQLAREA views.


Subprogram                      Description
------------------------------  -----------------------------------------------------------------------
SET_MODULE procedure            Sets the name of the module that is currently running to a new module.
SET_ACTION procedure            Sets the name of the current action within the current module.
READ_MODULE procedure           Reads the values of the module and action fields of the current session.
SET_CLIENT_INFO procedure       Sets the client_info field of the session.
READ_CLIENT_INFO procedure      Reads the value of the client_info field of the current session.
SET_SESSION_LONGOPS procedure   Sets a row in the V$SESSION_LONGOPS view.

Examples:
Once the program initiates, it registers itself using the SET_MODULE procedure. In doing so it also sets the initial action:

BEGIN
  DBMS_APPLICATION_INFO.set_module(module_name => 'add_order',
                                   action_name => 'insert into orders');
  -- Do insert into ORDERS table.
END;
/

Subsequent processing can use the SET_ACTION procedure to make sure the action description stays relevant:

BEGIN
  DBMS_APPLICATION_INFO.set_action(action_name => 'insert into order_lines');
  -- Do insert into ORDER_LINES table.
END;
/


The SET_CLIENT_INFO procedure can be used if any additional information is needed:

BEGIN
  DBMS_APPLICATION_INFO.set_action(action_name => 'insert into orders');
  DBMS_APPLICATION_INFO.set_client_info(client_info => 'Issued by Web Client');
  -- Do insert into ORDERS table.
END;
/

The information set by these procedures can be read from the V$SESSION view as follows:

SET LINESIZE 500
SELECT sid, serial#, username, osuser, module, action, client_info
FROM v$session;

The SET_SESSION_LONGOPS procedure can be used to show the progress of long operations by inserting rows in the V$SESSION_LONGOPS view:

DECLARE
  v_rindex    PLS_INTEGER;
  v_slno      PLS_INTEGER;
  v_totalwork NUMBER;
  v_sofar     NUMBER;
  v_obj       PLS_INTEGER;
BEGIN
  v_rindex    := DBMS_APPLICATION_INFO.set_session_longops_nohint;
  v_sofar     := 0;
  v_totalwork := 10;

  WHILE v_sofar < 10 LOOP
    -- Do some work
    DBMS_LOCK.sleep(5);

    v_sofar := v_sofar + 1;
    DBMS_APPLICATION_INFO.set_session_longops(rindex      => v_rindex,
                                              slno        => v_slno,
                                              op_name     => 'Batch Load',
                                              target      => v_obj,
                                              context     => 0,
                                              sofar       => v_sofar,
                                              totalwork   => v_totalwork,
                                              target_desc => 'BATCH_LOAD_TABLE',
                                              units       => 'rows processed');
  END LOOP;
END;
/

The information in the V$SESSION_LONGOPS view can be queried using:

SELECT opname, target_desc, sofar, totalwork, units
FROM v$session_longops;

NOTE: RINDEX is a token which represents the v$session_longops row to update. Set it to set_session_longops_nohint to start a new row; use the value returned by the prior call to reuse a row.

EXAMPLE:
create table f(g number);

create or replace procedure long_proc as
  rindex  pls_integer := dbms_application_info.set_session_longops_nohint;
  slno    pls_integer;

  -- Name of task
  op_name varchar2(64) := 'long_proc';

  target  pls_integer := 0;         -- ie. the object being worked on
  context pls_integer;              -- any info
  sofar   number;                   -- how far proceeded
  totalwork number := 1000000;      -- finished when sofar = totalwork

  -- desc of target
  target_desc varchar2(32) := 'A long running procedure';

  units varchar2(32) := 'inserts';  -- unit of sofar and totalwork
begin

  dbms_application_info.set_module('long_proc', null);

  dbms_application_info.set_session_longops(rindex, slno);

  for sofar in 0..totalwork loop

    insert into f values (sofar);

    if mod(sofar, 1000) = 0 then
      dbms_application_info.set_session_longops(
        rindex,
        slno,
        op_name,
        target,
        context,
        sofar,
        totalwork,
        target_desc,
        units);
    end if;

  end loop;
end long_proc;
/


If the procedure long_proc is run, you can issue the following query to get feedback on its progress:

select time_remaining, sofar, elapsed_seconds
from v$session_longops l, v$session s
where l.sid = s.sid
and l.serial# = s.serial#
and s.module = 'long_proc';

4) Access paths for the query optimizer

Overview: Access paths are ways in which data is retrieved from the database. In general, index access paths should be used for statements that retrieve a small subset of table rows, while full scans are more efficient when accessing a large portion of the table. Online transaction processing (OLTP) applications, which consist of short-running SQL statements with high selectivity, often are characterized by the use of index access paths. Decision support systems, on the other hand, tend to use partitioned tables and perform full scans of the relevant partitions. This section describes the data access paths that can be used to locate and retrieve any row in any table.

Full Table Scans
Rowid Scans
Index Scans
Cluster Access
Hash Access
Sample Table Scans
How the Query Optimizer Chooses an Access Path

1) Full table scans


This type of scan reads all rows from a table and filters out those that do not meet the selection criteria. During a full table scan, all blocks in the table that are under the high water mark are scanned. The high water mark indicates the amount of used space, or space that had been formatted to receive data. Each row is examined to determine whether it satisfies the statement's WHERE clause.

When Oracle performs a full table scan, the blocks are read sequentially. Because the blocks are adjacent, I/O calls larger than a single block can be used to speed up the process. The size of the read calls ranges from one block to the number of blocks indicated by the initialization parameter DB_FILE_MULTIBLOCK_READ_COUNT. Using multiblock reads means a full table scan can be performed very efficiently. Each block is read only once.

Why a Full Table Scan Is Faster for Accessing Large Amounts of Data
Full table scans are cheaper than index range scans when accessing a large fraction of the blocks in a table. This is because full table scans can use larger I/O calls, and making fewer large I/O calls is cheaper than making many smaller calls.

When the Optimizer Uses Full Table Scans
The optimizer uses a full table scan in any of the following cases:

Lack of Index
If the query is unable to use any existing indexes, then it uses a full table scan. For example, if there is a function used on the indexed column in the query, the optimizer is unable to use the index and instead uses a full table scan. If you need to use the index for case-independent searches, then either do not permit mixed-case data in the search columns or create a function-based index, such as UPPER(last_name), on the search column.

Large Amount of Data
If the optimizer thinks that the query will access most of the blocks in the table, then it uses a full table scan, even though indexes might be available.

Small Table
If a table contains less than DB_FILE_MULTIBLOCK_READ_COUNT blocks under the high water mark, which can be read in a single I/O call, then a full table scan might be cheaper than an index range scan, regardless of the fraction of the table being accessed or the indexes present.

Full Table Scan Hints
Use the hint FULL(table alias) if you want to force the use of a full table scan.

Example Using EXPLAIN PLAN:

EXPLAIN PLAN FOR
SELECT e.employee_id, j.job_title, e.salary, d.department_name
FROM employees e, jobs j, departments d
WHERE e.employee_id < 103
AND e.job_id = j.job_id
AND e.department_id = d.department_id;

The resulting output shows the execution plan chosen by the optimizer to execute the SQL statement:

-----------------------------------------------------------------------------------
| Id  | Operation                     | Name        | Rows  | Bytes | Cost (%CPU)|
-----------------------------------------------------------------------------------
|   0 | SELECT STATEMENT              |             |     3 |   189 |    10  (10)|
|   1 |  NESTED LOOPS                 |             |     3 |   189 |    10  (10)|
|   2 |   NESTED LOOPS                |             |     3 |   141 |     7  (15)|
|*  3 |    TABLE ACCESS FULL          | EMPLOYEES   |     3 |    60 |     4  (25)|
|   4 |    TABLE ACCESS BY INDEX ROWID| JOBS        |    19 |   513 |     2  (50)|
|*  5 |     INDEX UNIQUE SCAN         | JOB_ID_PK   |     1 |       |            |
|   6 |   TABLE ACCESS BY INDEX ROWID | DEPARTMENTS |    27 |   432 |     2  (50)|
|*  7 |    INDEX UNIQUE SCAN          | DEPT_ID_PK  |     1 |       |            |
-----------------------------------------------------------------------------------

Here there is a full table scan on the employees table and rowid scans on the jobs and departments tables.

2) Rowid Scans
The rowid of a row specifies the datafile and data block containing the row and the location of the row in that block. Locating a row by specifying its rowid is the fastest way to retrieve a single row, because the exact location of the row in the database is specified. To access a table by rowid, Oracle first obtains the rowids of the selected rows, either from the statement's WHERE clause or through an index scan of one or more of the table's indexes. Oracle then locates each selected row in the table based on its rowid. (See the EXPLAIN PLAN example above.)

When the Optimizer Uses Rowids
This is generally the second step after retrieving the rowid from an index. The table access might be required for any columns in the statement not present in the index.
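A minimal sketch of explicit rowid access (the two-step pattern is illustrative; in practice the rowid usually comes from an index scan within the same statement):

-- Step 1: capture the rowid of a row of interest
SELECT ROWID, ename FROM emp WHERE empno = 7369;

-- Step 2: the fastest possible single-row fetch, straight to the block
SELECT ename, sal FROM emp WHERE ROWID = :captured_rowid;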


Access by rowid does not need to follow every index scan. If the index contains all the columns needed for the statement, then table access by rowid might not occur.

3) Index Scans
In this method, a row is retrieved by traversing the index, using the indexed column values specified by the statement. An index scan retrieves data from an index based on the value of one or more columns in the index. To perform an index scan, Oracle searches the index for the indexed column values accessed by the statement. If the statement accesses only columns of the index, then Oracle reads the indexed column values directly from the index, rather than from the table.

The index contains not only the indexed value, but also the rowids of rows in the table having that value. Therefore, if the statement accesses other columns in addition to the indexed columns, then Oracle can find the rows in the table by using either a table access by rowid or a cluster scan. An index scan can be one of the following types:

Index Unique Scans
Index Range Scans
Index Range Scans Descending
Index Skip Scans
Full Scans
Fast Full Index Scans
Index Joins

Index Unique Scans
This scan returns, at most, a single rowid. Oracle performs a unique scan if a statement contains a UNIQUE or a PRIMARY KEY constraint that guarantees that only a single row is accessed. In the explain plan example above, an index unique scan is performed on the jobs and departments tables, using the job_id_pk and dept_id_pk indexes respectively.

When the Optimizer Uses Index Unique Scans
This access path is used when all columns of a unique (B-tree) index, or an index created as a result of a primary key constraint, are specified with equality conditions.

Index Unique Scan Hints
The hint INDEX(alias index_name) specifies the index to use, but not an access path (range scan or unique scan).
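As a sketch (the jobs table and the job_id_pk index come from the plan above; the literal job_id value is hypothetical), the INDEX hint names the index to consider, and the equality predicate on the unique key is what permits the unique scan:

SELECT /*+ INDEX(j job_id_pk) */ j.job_title
FROM   jobs j
WHERE  j.job_id = 'SA_REP';
-- equality on every column of the unique index allows an INDEX UNIQUE SCAN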


Index Range Scans
An index range scan is a common operation for accessing selective data. It can be bounded (bounded on both sides) or unbounded (on one or both sides). Data is returned in the ascending order of index columns. Multiple rows with identical values are sorted in ascending order by rowid.

If data must be sorted by order, then use the ORDER BY clause, and do not rely on an index. If an index can be used to satisfy an ORDER BY clause, then the optimizer uses this option and avoids a sort.

In the example below, the order has been imported from a legacy system, and you are querying the order by the reference used in the legacy system. Assume this reference is the order_date.

Example Index Range Scan

SELECT order_status, order_id
FROM   orders
WHERE  order_date = :b1;

---------------------------------------------------------------------------------------
| Id | Operation                   | Name              | Rows | Bytes | Cost (%CPU)|
---------------------------------------------------------------------------------------
|  0 | SELECT STATEMENT            |                   |    1 |    20 |     3 (34) |
|  1 |  TABLE ACCESS BY INDEX ROWID| ORDERS            |    1 |    20 |     3 (34) |
|* 2 |   INDEX RANGE SCAN          | ORD_ORDER_DATE_IX |    1 |       |     2 (50) |
---------------------------------------------------------------------------------------

This should be a highly selective query, and you should see the query using the index on the column to retrieve the desired rows. The data returned is sorted in ascending order by the rowids for the order_date. Because the index column order_date is identical for the selected rows here, the data is sorted by rowid.

When the Optimizer Uses Index Range Scans
The optimizer uses a range scan when it finds one or more leading columns of an index specified in conditions, such as the following:
col1 = :b1
col1 < :b1
col1 > :b1
An AND combination of the preceding conditions for leading columns in the index
col1 LIKE 'ASD%' (wild-card searches should not be in a leading position; a condition such as col1 LIKE '%ASD' does not result in a range scan)

Range scans can use unique or nonunique indexes. Range scans avoid sorting when the index columns constitute the ORDER BY/GROUP BY clause.


Index Range Scan Hints
A hint might be required if the optimizer chooses some other index or uses a full table scan. Use the hint INDEX(table_alias index_name) to request the range scan.

Index Range Scans Descending
An index range scan descending is identical to an index range scan, except that the data is returned in descending order. Indexes, by default, are stored in ascending order. Usually, this scan is used when ordering data in descending order to return the most recent data first, or when seeking a value less than a specified value.

When the Optimizer Uses Index Range Scans Descending
The optimizer uses an index range scan descending when an ORDER BY ... DESC clause can be satisfied by an index.

Index Range Scan Descending Hints
The hint INDEX_DESC(table_alias index_name) is used for this access path.

Index Skip Scans
Index skip scans improve index scans by nonprefix columns. Often, scanning index blocks is faster than scanning table data blocks. Skip scanning lets a composite index be split logically into smaller subindexes. In skip scanning, the initial column of the composite index is not specified in the query. In other words, it is skipped. The number of logical subindexes is determined by the number of distinct values in the initial column. Skip scanning is advantageous if there are few distinct values in the leading column of the composite index and many distinct values in the nonleading key of the index.

Example Index Skip Scan
Consider, for example, a table employees (sex, employee_id, address) with a composite index on (sex, employee_id). Splitting this composite index would result in two logical subindexes, one for 'M' and one for 'F'. For this example, suppose you have the following index data:

('F',98)
('F',100)
('F',102)
('F',104)
('M',101)
('M',103)
('M',105)



The index is split logically into the following two subindexes:


The first subindex has the keys with the value 'F'.
The second subindex has the keys with the value 'M'.

Figure: Index Skip Scan Illustration (figure not reproduced here)

The column sex is skipped in the following query:

SELECT * FROM employees WHERE employee_id = 101;


A complete scan of the index is not performed, but the subindex with the value 'F' is searched first, followed by a search of the subindex with the value 'M'.

Full Scans
A full scan is available if a predicate references one of the columns in the index. The predicate does not need to be an index driver. A full scan is also available when there is no predicate, if both of the following conditions are met:

All of the columns in the table referenced in the query are included in the index.
At least one of the index columns is not null.

A full scan can be used to eliminate a sort operation, because the data is ordered by the index key. It reads the blocks singly.

Fast Full Index Scans
Fast full index scans are an alternative to a full table scan when the index contains all the columns that are needed for the query, and at least one column in the index key has the NOT NULL constraint. A fast full scan accesses the data in the index itself, without accessing the table. It cannot be used to eliminate a sort operation, because the data is not ordered by the index key. It reads the entire index using multiblock reads, unlike a full index scan, and can be parallelized.
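A minimal sketch (the index name emp_name_ix is hypothetical) of requesting a fast full scan with the INDEX_FFS hint; because the index covers every column the query references, the table itself need not be visited:

SELECT /*+ INDEX_FFS(e emp_name_ix) */ COUNT(last_name)
FROM   employees e;
-- the count is answered entirely from the index blocks, read with multiblock I/O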

Fast full scans can be enabled with the initialization parameter OPTIMIZER_FEATURES_ENABLE or requested with the INDEX_FFS hint. Fast full index scans cannot be performed against bitmap indexes. A fast full scan is faster than a normal full index scan in that it can use multiblock I/O and can be parallelized just like a table scan.

Fast Full Index Scan Hints
The fast full scan has a special index hint, INDEX_FFS, which has the same format and arguments as the regular INDEX hint.

Index Joins
An index join is a hash join of several indexes that together contain all the table columns that are referenced in the query. If an index join is used, then no table access is needed, because all the relevant column values can be retrieved from the indexes. An index join cannot be used to eliminate a sort operation.

Bitmap Indexes
A bitmap join uses a bitmap for key values and a mapping function that converts each bit position to a rowid. Bitmaps can efficiently merge indexes that correspond to several conditions in a WHERE clause, using Boolean operations to resolve AND and OR conditions.

5) DBMS_PROFILER Overview

The DBMS_PROFILER package allows developers to profile the run-time behavior of PL/SQL code, making it easier to identify performance bottlenecks which can then be investigated more closely. Its main subprograms are:

1) DBMS_PROFILER.START_PROFILER(
     run_comment  IN  VARCHAR2 := sysdate,
     run_comment1 IN  VARCHAR2 := '',
     run_number   OUT BINARY_INTEGER);
   Starts profiler data collection in the user's session.

2) DBMS_PROFILER.STOP_PROFILER RETURN BINARY_INTEGER;
   Stops profiler data collection in the user's session.


The profiler logs its data into three tables; we can query these tables to get information about the application's performance:

1) plsql_profiler_runs
2) plsql_profiler_units
3) plsql_profiler_data

The first step is to install the DBMS_PROFILER package:

CONNECT sys/password@service AS SYSDBA
@$ORACLE_HOME/rdbms/admin/profload.sql

CREATE USER profiler IDENTIFIED BY profiler DEFAULT TABLESPACE users QUOTA UNLIMITED ON users;
GRANT connect TO profiler;

CREATE PUBLIC SYNONYM plsql_profiler_runs FOR profiler.plsql_profiler_runs;
CREATE PUBLIC SYNONYM plsql_profiler_units FOR profiler.plsql_profiler_units;
CREATE PUBLIC SYNONYM plsql_profiler_data FOR profiler.plsql_profiler_data;
CREATE PUBLIC SYNONYM plsql_profiler_runnumber FOR profiler.plsql_profiler_runnumber;

CONNECT profiler/profiler@service
@$ORACLE_HOME/rdbms/admin/proftab.sql
GRANT SELECT ON plsql_profiler_runnumber TO PUBLIC;
GRANT SELECT, INSERT, UPDATE, DELETE ON plsql_profiler_data TO PUBLIC;
GRANT SELECT, INSERT, UPDATE, DELETE ON plsql_profiler_units TO PUBLIC;
GRANT SELECT, INSERT, UPDATE, DELETE ON plsql_profiler_runs TO PUBLIC;

Next we create a dummy procedure to profile:

CREATE OR REPLACE PROCEDURE do_something (p_times IN NUMBER) AS
  l_dummy NUMBER;
BEGIN
  FOR i IN 1 .. p_times LOOP
    SELECT l_dummy + 1
    INTO   l_dummy
    FROM   dual;
  END LOOP;
END;
/

Next we start the profiler, run our procedure and stop the profiler:

DECLARE
  l_result BINARY_INTEGER;
BEGIN


  l_result := DBMS_PROFILER.start_profiler(run_comment => 'do_something: ' || SYSDATE);
  do_something(p_times => 100);
  l_result := DBMS_PROFILER.stop_profiler;
END;
/

With the profile complete we can analyze the data to see which bits of the process took the most time, with all times presented in nanoseconds. First we check which runs we have:

SET LINESIZE 200
SET TRIMOUT ON

COLUMN runid FORMAT 99999
COLUMN run_comment FORMAT A50

SELECT runid, run_date, run_comment, run_total_time
FROM   plsql_profiler_runs
ORDER BY runid;

RUNID RUN_DATE  RUN_COMMENT                        RUN_TOTAL_TIME
----- --------- ---------------------------------- --------------
    1 21-AUG-03 do_something: 21-AUG-2003 14:51:54      131072000

We can then use the appropriate RUNID value in the following query:

COLUMN runid FORMAT 99999
COLUMN unit_number FORMAT 99999
COLUMN unit_type FORMAT A20
COLUMN unit_owner FORMAT A20

SELECT u.runid,
       u.unit_number,
       u.unit_type,
       u.unit_owner,
       u.unit_name,
       d.line#,
       d.total_occur,
       d.total_time,
       d.min_time,
       d.max_time
FROM   plsql_profiler_units u
       JOIN plsql_profiler_data d ON u.runid = d.runid AND u.unit_number = d.unit_number
WHERE  u.runid = 1
ORDER BY u.unit_number, d.line#;


RUNID UNIT_NUMBER UNIT_TYPE       UNIT_OWNER  UNIT_NAME    LINE# TOTAL_OCCUR TOTAL_TIME MIN_TIME MAX_TIME
----- ----------- --------------- ----------- ------------ ----- ----------- ---------- -------- --------
    1           1 ANONYMOUS BLOCK <anonymous> <anonymous>      4           1          0        0        0
    1           1 ANONYMOUS BLOCK <anonymous> <anonymous>      5           1          0        0        0
    1           1 ANONYMOUS BLOCK <anonymous> <anonymous>      6           1          0        0        0
    1           2 PROCEDURE       MY_SCHEMA   DO_SOMETHING     4         101          0        0        0
    1           2 PROCEDURE       MY_SCHEMA   DO_SOMETHING     5         100   17408000        0  2048000

5 rows selected.

The results of this query show that line 4 of the DO_SOMETHING procedure ran 101 times but took very little time, while line 5 ran 100 times and took proportionately more time. We can check the line numbers of the source using the following query:

SELECT line || ' : ' || text
FROM   all_source
WHERE  owner = 'MY_SCHEMA'
AND    type = 'PROCEDURE'
AND    name = 'DO_SOMETHING';

LINE||':'||TEXT
--------------------------------------------------
1 : PROCEDURE do_something (p_times IN NUMBER) AS
2 :   l_dummy NUMBER;
3 : BEGIN
4 :   FOR i IN 1 .. p_times LOOP
5 :     SELECT l_dummy + 1
6 :     INTO   l_dummy
7 :     FROM   dual;
8 :   END LOOP;
9 : END;

As expected, the query on line 5 took proportionately more time than the procedural loop on line 4.
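For long profiling runs, DBMS_PROFILER also provides a FLUSH_DATA routine that writes the statistics gathered so far to the profiler tables without stopping collection; a minimal sketch:

DECLARE
  l_result BINARY_INTEGER;
BEGIN
  -- flush accumulated statistics to the plsql_profiler_* tables mid-run
  l_result := DBMS_PROFILER.flush_data;
END;
/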

CURSORS AND REF CURSORS

A cursor is a mechanism by which you can assign a name to a "select statement" and manipulate the information within that SQL statement or the active set returned by that statement. We've categorized cursors into the following topics (not covering the basic ones in detail, as they are very basic):

Declare a Cursor
OPEN Statement


FETCH Statement
CLOSE Statement
Cursor Attributes (%FOUND, %NOTFOUND, %ISOPEN, %ROWCOUNT)
SELECT FOR UPDATE Statement
WHERE CURRENT OF Statement

The SELECT FOR UPDATE statement allows you to lock the records in the cursor result set. You are not required to make changes to the records in order to use this statement. The record locks are released when the next commit or rollback statement is issued. The syntax for SELECT FOR UPDATE is:

CURSOR cursor_name
IS
  select_statement
  FOR UPDATE [OF column_list] [NOWAIT];

For example, you could use the SELECT FOR UPDATE statement as follows:

CURSOR c1
IS
  SELECT course_number, instructor
  FROM   courses_tbl
  FOR UPDATE OF instructor;

If you plan on updating or deleting records that have been referenced by a SELECT FOR UPDATE statement, you can use the WHERE CURRENT OF statement. The syntax for the WHERE CURRENT OF statement is either:

UPDATE table_name
SET    set_clause
WHERE CURRENT OF cursor_name;

OR

DELETE FROM table_name
WHERE CURRENT OF cursor_name;

The WHERE CURRENT OF statement allows you to update or delete the record that was last fetched by the cursor.

Updating using the WHERE CURRENT OF Statement
Here is an example where we are updating records using the WHERE CURRENT OF statement:

CREATE OR REPLACE FUNCTION FindCourse ( name_in IN VARCHAR2 )
RETURN NUMBER


IS
  cnumber NUMBER;

  CURSOR c1
  IS
    SELECT course_number
    FROM   courses_tbl
    WHERE  course_name = name_in
    FOR UPDATE OF instructor;
BEGIN
  OPEN c1;
  FETCH c1 INTO cnumber;

  IF c1%NOTFOUND THEN
    cnumber := 9999;
  ELSE
    UPDATE courses_tbl
    SET    instructor = 'SMITH'
    WHERE CURRENT OF c1;

    COMMIT;
  END IF;

  CLOSE c1;
  RETURN cnumber;
END;

NOTE: You cannot use COMMIT inside a FOR UPDATE cursor loop, as it releases the locks held by the FOR UPDATE cursor. The commit will cause a run-time error on any subsequent fetch.
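A short sketch of that failure mode (using the courses_tbl table from the example above): committing inside the loop releases the FOR UPDATE locks, so the very next fetch raises ORA-01002: fetch out of sequence.

DECLARE
  CURSOR c1 IS
    SELECT course_number FROM courses_tbl FOR UPDATE OF instructor;
BEGIN
  FOR rec IN c1 LOOP
    UPDATE courses_tbl
    SET    instructor = 'SMITH'
    WHERE CURRENT OF c1;
    COMMIT;  -- releases the locks; the next fetch fails with ORA-01002
  END LOOP;
END;
/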
Ref cursor example for returning a result set of a query from a procedure

Strongly Typed
Note: A REF CURSOR that specifies a specific return type.

Package Header

CREATE OR REPLACE PACKAGE strongly_typed IS
  TYPE return_cur IS REF CURSOR RETURN all_tables%ROWTYPE;

  PROCEDURE child(p_return_rec OUT return_cur);
  PROCEDURE parent(p_NumRecs PLS_INTEGER);
END strongly_typed;
/

Package Body

CREATE OR REPLACE PACKAGE BODY strongly_typed IS

PROCEDURE child(p_return_rec OUT return_cur) IS


BEGIN
  OPEN p_return_rec FOR
  SELECT * FROM all_tables;
END child;
--==================================================
PROCEDURE parent (p_NumRecs PLS_INTEGER) IS
  p_retcur return_cur;
  at_rec   all_tables%ROWTYPE;
BEGIN
  child(p_retcur);

  FOR i IN 1 .. p_NumRecs LOOP
    FETCH p_retcur INTO at_rec;
    dbms_output.put_line(at_rec.table_name || ' - ' ||
                         at_rec.tablespace_name || ' - ' ||
                         TO_CHAR(at_rec.initial_extent) || ' - ' ||
                         TO_CHAR(at_rec.next_extent));
  END LOOP;
END parent;
END strongly_typed;
/

set serveroutput on
To Run The Demo

exec strongly_typed.parent(1)
exec strongly_typed.parent(8)

Weakly Typed
Note: A REF CURSOR that does not specify the return type, such as SYS_REFCURSOR.

Child Procedure

CREATE OR REPLACE PROCEDURE child (
  p_NumRecs    IN  PLS_INTEGER,
  p_return_cur OUT SYS_REFCURSOR)
IS
BEGIN
  OPEN p_return_cur FOR
  'SELECT * FROM all_tables WHERE rownum <= ' || p_NumRecs;
END child;
/

Parent Procedure

CREATE OR REPLACE PROCEDURE parent (pNumRecs VARCHAR2) IS


  p_retcur SYS_REFCURSOR;
  at_rec   all_tables%ROWTYPE;
BEGIN
  child(pNumRecs, p_retcur);

  FOR i IN 1 .. pNumRecs LOOP
    FETCH p_retcur INTO at_rec;
    dbms_output.put_line(at_rec.table_name || ' - ' ||
                         at_rec.tablespace_name || ' - ' ||
                         TO_CHAR(at_rec.initial_extent) || ' - ' ||
                         TO_CHAR(at_rec.next_extent));
  END LOOP;
END parent;
/

set serveroutput on
To Run The Demo

exec parent(1)
exec parent(17)

Passing Ref Cursors

Ref Cursor Passing Demo

CREATE TABLE employees (
  empid   NUMBER(5),
  empname VARCHAR2(30));

INSERT INTO employees (empid, empname) VALUES (1, 'Dan Morgan');
INSERT INTO employees (empid, empname) VALUES (2, 'Hans Forbrich');
INSERT INTO employees (empid, empname) VALUES (3, 'Caleb Small');
COMMIT;

CREATE OR REPLACE PROCEDURE pass_ref_cur(p_cursor SYS_REFCURSOR) IS
  TYPE array_t IS TABLE OF VARCHAR2(4000) INDEX BY BINARY_INTEGER;
  rec_array array_t;
BEGIN
  FETCH p_cursor BULK COLLECT INTO rec_array;

  FOR i IN rec_array.FIRST .. rec_array.LAST

  LOOP
    dbms_output.put_line(rec_array(i));
  END LOOP;
END pass_ref_cur;
/

set serveroutput on

DECLARE
  rec_array SYS_REFCURSOR;
BEGIN
  OPEN rec_array FOR
  'SELECT empname FROM employees';

  pass_ref_cur(rec_array);
  CLOSE rec_array;
END;
/

RECORDS
OVERVIEW:

A record is a group of related data items stored in fields, each with its own name and datatype. Suppose you have various data about an employee such as name, salary, and hire date. These items are logically related but dissimilar in type. A record containing a field for each item lets you treat the data as a logical unit. Thus, records make it easier to organize and represent information. The attribute %ROWTYPE lets you declare a record that represents a row in a database table. However, you cannot specify the datatypes of fields in the record or declare fields of your own. The datatype RECORD lifts those restrictions and lets you define your own records.
Defining and Declaring Records

DECLARE
  TYPE DeptRec IS RECORD (
    dept_id   dept.deptno%TYPE,
    dept_name VARCHAR2(14),
    dept_loc  VARCHAR2(13));
  v_deptRec DeptRec;  -- variable of the record type
BEGIN
  ...


END;
Record as function return type

DECLARE
  TYPE EmpRec IS RECORD (
    emp_id    NUMBER(4),
    last_name VARCHAR2(10),
    dept_num  NUMBER(2),
    job_title VARCHAR2(9),
    salary    NUMBER(7,2));
  ...
  FUNCTION nth_highest_salary (n INTEGER) RETURN EmpRec IS
  ...
  BEGIN
  ...
  END;
Passing a record to a procedure

DECLARE
  TYPE EmpRec IS RECORD (
    emp_id    emp.empno%TYPE,
    last_name VARCHAR2(10),
    job_title VARCHAR2(9),
    salary    NUMBER(7,2));
  ...
  PROCEDURE raise_salary (emp_info EmpRec);
BEGIN
  ...
END;
Referencing Records

Unlike elements in a collection, which are accessed using subscripts, fields in a record are accessed by name. To reference an individual field, use dot notation and the following syntax:

record_name.field_name

For example, you reference field hire_date in record emp_info as follows:

emp_info.hire_date ...

When calling a function that returns a user-defined record, use the following syntax to reference fields in the record:


function_name(parameter_list).field_name

For example, the following call to function nth_highest_sal references the field salary in record emp_info:

DECLARE
  TYPE EmpRec IS RECORD (
    emp_id    NUMBER(4),
    job_title VARCHAR2(9),
    salary    NUMBER(7,2));
  middle_sal NUMBER(7,2);

  FUNCTION nth_highest_sal (n INTEGER) RETURN EmpRec IS
    emp_info EmpRec;
  BEGIN
    ...
    RETURN emp_info;  -- return record
  END;
BEGIN
  middle_sal := nth_highest_sal(10).salary;  -- call function
  ...
END;

When calling a parameterless function, use the following syntax:

function_name().field_name  -- note empty parameter list

To reference nested fields in a record returned by a function, use extended dot notation. The syntax follows:

function_name(parameter_list).field_name.nested_field_name

For instance, the following call to function item references the nested field minutes in record item_info:

DECLARE
  TYPE TimeRec IS RECORD (minutes SMALLINT, hours SMALLINT);
  TYPE AgendaItem IS RECORD (
    priority INTEGER,
    subject  VARCHAR2(100),
    duration TimeRec);

  FUNCTION item (n INTEGER) RETURN AgendaItem IS
    item_info AgendaItem;


  BEGIN
    ...
    RETURN item_info;  -- return record
  END;
BEGIN
  ...
  IF item(3).duration.minutes > 30 THEN ...  -- call function
END;
Assigning Null Values to Records

To set all the fields in a record to null, simply assign to it an uninitialized record of the same type, as shown in the following example:

DECLARE
  TYPE EmpRec IS RECORD (
    emp_id    emp.empno%TYPE,
    job_title VARCHAR2(9),
    salary    NUMBER(7,2));
  emp_info EmpRec;
  emp_null EmpRec;
BEGIN
  emp_info.emp_id    := 7788;
  emp_info.job_title := 'ANALYST';
  emp_info.salary    := 3500;
  emp_info := emp_null;  -- nulls all fields in emp_info
  ...
END;
Assigning Records

You can assign the value of an expression to a specific field in a record using the following syntax:

record_name.field_name := expression;

In the following example, you convert an employee name to upper case:

emp_info.ename := UPPER(emp_info.ename);

Instead of assigning values separately to each field in a record, you can assign values to all fields at once. This can be done in two ways. First, you can assign one user-defined record to another if they have the same datatype. Having fields that match exactly is not enough. Consider the following example:

DECLARE
  TYPE DeptRec IS RECORD (
    dept_num  NUMBER(2),
    dept_name VARCHAR2(14));
  TYPE DeptItem IS RECORD (
    dept_num  NUMBER(2),

    dept_name VARCHAR2(14));
  dept1_info DeptRec;
  dept2_info DeptItem;
BEGIN
  ...
  dept1_info := dept2_info;  -- illegal; different datatypes
END;

As the next example shows, you can assign a %ROWTYPE record to a user-defined record if their fields match in number and order, and corresponding fields have compatible datatypes:

DECLARE
  TYPE DeptRec IS RECORD (
    dept_num  NUMBER(2),
    dept_name VARCHAR2(14),
    location  VARCHAR2(13));
  dept1_info DeptRec;
  dept2_info dept%ROWTYPE;
BEGIN
  SELECT * INTO dept2_info
  FROM   dept
  WHERE  deptno = 10;

  dept1_info := dept2_info;
  ...
END;

Second, you can use the SELECT or FETCH statement to fetch column values into a record, as the example below shows. The columns in the select-list must appear in the same order as the fields in your record.

DECLARE
  TYPE DeptRec IS RECORD (
    dept_num  NUMBER(2),
    dept_name VARCHAR2(14),
    location  VARCHAR2(13));
  dept_info DeptRec;
BEGIN
  SELECT * INTO dept_info
  FROM   dept
  WHERE  deptno = 20;
  ...
END;

However, you cannot assign a list of values to a record using an assignment statement. The following syntax is not allowed:

record_name := (value1, value2, value3, ...);  -- not allowed


The example below shows that you can assign one nested record to another if they have the same datatype. Such assignments are allowed even if the enclosing records have different datatypes.

DECLARE
  TYPE TimeRec IS RECORD (mins SMALLINT, hrs SMALLINT);
  TYPE MeetingRec IS RECORD (
    day     DATE,
    time_of TimeRec,  -- nested record
    room_no INTEGER(4));
  TYPE PartyRec IS RECORD (
    day     DATE,
    time_of TimeRec,  -- nested record
    place   VARCHAR2(25));
  seminar MeetingRec;
  party   PartyRec;
BEGIN
  ...
  party.time_of := seminar.time_of;
END;
Using the RETURNING Clause with a Record: Example

The INSERT, UPDATE, and DELETE statements can include a RETURNING clause, which returns column values from the affected row into a PL/SQL record variable. This eliminates the need to SELECT the row after an insert or update, or before a delete. You can use this clause only when operating on exactly one row. In the following example, you update the salary of an employee and, at the same time, retrieve the employee's name, job title, and new salary into a record variable:

DECLARE
  TYPE EmpRec IS RECORD (
    emp_name  VARCHAR2(10),
    job_title VARCHAR2(9),
    salary    NUMBER(7,2));
  emp_info EmpRec;
  emp_id   NUMBER(4);
BEGIN
  emp_id := 7782;
  UPDATE emp
  SET    sal = sal * 1.1
  WHERE  empno = emp_id
  RETURNING ename, job, sal INTO emp_info;
END;
Comparing Records

Records cannot be tested for nullity, equality, or inequality. For instance, the following IF conditions are not allowed:

BEGIN
  ...
  IF emp_info IS NULL THEN ...         -- illegal
  IF dept2_info > dept1_info THEN ...  -- illegal
END;
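As a workaround sketch, compare the individual fields instead (emp_info2 is a hypothetical second variable of the same EmpRec type declared above):

IF  emp_info.emp_id    = emp_info2.emp_id
AND emp_info.job_title = emp_info2.job_title
AND emp_info.salary    = emp_info2.salary THEN
  dbms_output.put_line('records match field by field');
END IF;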
Inserting a PL/SQL Record Using %ROWTYPE: Example

This example declares a record variable using a %ROWTYPE qualifier. You can insert this variable without specifying a column list. The %ROWTYPE declaration ensures that the record attributes have exactly the same names and types as the table columns.

DECLARE
  dept_info dept%ROWTYPE;
BEGIN
  -- deptno, dname, and loc are the table columns.
  -- The record picks up these names from the %ROWTYPE.
  dept_info.deptno := 70;
  dept_info.dname  := 'PERSONNEL';
  dept_info.loc    := 'DALLAS';
  -- Using the %ROWTYPE means we can leave out the column list
  -- (deptno, dname, loc) from the INSERT statement.
  INSERT INTO dept VALUES dept_info;
END;
Updating the Database with PL/SQL Record Values

A PL/SQL-only extension of the UPDATE statement lets you update database rows using a single variable of type RECORD or %ROWTYPE instead of a list of fields. The number of fields in the record must equal the number of columns listed in the SET clause, and corresponding fields and columns must have compatible datatypes.
Updating a Row Using a Record: Example

You can use the keyword ROW to represent an entire row:

DECLARE
  dept_info dept%ROWTYPE;
BEGIN
  dept_info.deptno := 30;
  dept_info.dname  := 'MARKETING';
  dept_info.loc    := 'ATLANTA';


  -- The row will have values for the filled-in columns, and null
  -- for any other columns.
  UPDATE dept
  SET    ROW = dept_info
  WHERE  deptno = 30;
END;

The keyword ROW is allowed only on the left side of a SET clause.
SET ROW Not Allowed with Subquery: Example

You cannot use ROW with a subquery. For example, the following UPDATE statement is not allowed:

UPDATE emp SET ROW = (SELECT * FROM mgrs);  -- not allowed

Restrictions on Record Inserts/Updates

Currently, the following restrictions apply to record inserts/updates:

Record variables are allowed only in the following places:


On the right side of the SET clause in an UPDATE statement
In the VALUES clause of an INSERT statement
In the INTO subclause of a RETURNING clause

Record variables are not allowed in a SELECT list, WHERE clause, GROUP BY clause, or ORDER BY clause.

The keyword ROW is allowed only on the left side of a SET clause. Also, you cannot use ROW with a subquery.
In an UPDATE statement, only one SET clause is allowed if ROW is used.
If the VALUES clause of an INSERT statement contains a record variable, no other variable or value is allowed in the clause.
If the INTO subclause of a RETURNING clause contains a record variable, no other variable or value is allowed in the subclause.

REDO LOGS, ROLLBACK SEGMENT, FLASHBACK


OVERVIEW:


Redo log files (physical unit of DB) hold information used for recovery in the event of a system failure. Redo log files, known as the redo log, store a log of all changes made to the database. This information is used in the event of a system failure to reapply changes that have been made and committed but that might not have been made to the datafiles. The redo log files must perform well and be protected against hardware failures (through software or hardware fault tolerance). If redo log information is lost, you cannot recover the system. For example, if a user UPDATEs a salary-value in a table containing employee-related data, the DBMS generates a redo record containing change-vectors that describe changes to the data segment block for the table. And if the user then COMMITs the update, Oracle generates another redo record and assigns the change a "system change number" (SCN).
Structure

A single transaction may involve multiple changes to data blocks, so it may have more than one redo record. Redo log files (sometimes simply called "log files") contain redo entries for both committed and uncommitted transactions in medium-term storage. Oracle redo log files contain the following information about database changes made by transactions:

indicators specifying when the transaction started
a transaction-identifier or "XID"
the name of the data object updated (for example, an application table)
the "before image" of the transaction, i.e. the data as it existed before the changes
the "after image" of the transaction, i.e. the data as it appeared after the transaction made the changes
commit-indicators that record whether and when the transaction completed

Usage

Before a user receives a "Commit complete" message, the system must first successfully write the new or changed data to a redo log file. If a database crashes, the recovery process has to apply all transactions, both uncommitted as well as committed, to the data-files on disk, using the information in the redo log files. Oracle must redo all redo-log transactions that have both a begin and a commit entry, and it must undo all transactions that have a begin entry but no commit entry. (Re-doing a transaction in this context simply means applying the information in the redo log files to the database; the system does not rerun the transaction itself.) The system thus re-creates committed transactions by applying the after image records in the redo log files to the database, and undoes incomplete transactions by using the before image records in the undo tablespace.


Viewing Redo Log Information

The following views provide information on redo logs:

V$LOG         - Displays the redo log file information from the control file
V$LOGFILE     - Identifies redo log groups and members and member status
V$LOG_HISTORY - Contains log history information

The following query returns the control file information about the redo log for a database.

SELECT * FROM V$LOG;

GROUP# THREAD#   SEQ   BYTES MEMBERS ARC STATUS    FIRST_CHANGE# FIRST_TIM
------ ------- ----- ------- ------- --- --------- ------------- ---------
     1       1 10605 1048576       1 YES ACTIVE         11515628 16-APR-00
     2       1 10606 1048576       1 NO  CURRENT        11517595 16-APR-00
     3       1 10603 1048576       1 YES INACTIVE       11511666 16-APR-00
     4       1 10604 1048576       1 YES INACTIVE       11513647 16-APR-00

To see the names of all of the members of a group, use a query similar to the following:

SELECT * FROM V$LOGFILE;

GROUP# STATUS  MEMBER
------ ------- ----------------------------------
     1         D:\ORANT\ORADATA\IDDB2\REDO04.LOG
     2         D:\ORANT\ORADATA\IDDB2\REDO03.LOG
     3         D:\ORANT\ORADATA\IDDB2\REDO02.LOG
     4         D:\ORANT\ORADATA\IDDB2\REDO01.LOG

Rollback segments (logical unit of the database)

Rollback segments are areas in your database which are used to temporarily save the previous values when some updates (by 'updates' we mean inserts or deletes as well) are going on. They have two main purposes:

If for one reason or another the user wants to cancel his/her update with a ROLLBACK statement, the former values are restored. This is possible only during the life of the transaction. If the user


executes COMMIT instead, the values in the rollback segment are marked as invalidated and the change becomes permanent.

This is where other, concurrent sessions read the data when they access the changed tables before the transactions are committed. Note that if a SELECT starts on a table while a transaction is modifying the same table, the old value of the data will be read from a rollback segment, even if the changed data is actually read by the SELECT after the transaction has been committed (this is what is called read consistency: what you see is, in fact, like a snapshot of the table at the time when the SELECT starts). This explains why it is important for Oracle to keep track of what changed values were originally, for as long as possible: some queries take a pretty long time to run.

In practice, rollback segments are segments like any other segment, i.e. a collection of contiguous Oracle blocks known as extents, which increases as the need arises: when the segment has to grow, a new extent is allocated and given a size computed from the parameters in the storage clause for the segment. They must be created in a tablespace, and the universal practice is to create a tablespace dedicated to rollback segments, where all the rollback segments are created (except for the rollback segment named system, which is created at database creation in the similarly named tablespace and is used by the Oracle kernel).

NOTE: Processing of global temporary tables is not stored in rollback segments.

FLASHBACK

Flashback is a feature introduced in Oracle9i and improved (and hyped) in Oracle10g. Effectively, it's an "oh shit!" protection mechanism for DBAs. Flashback leverages Oracle undo segments (remember the undo tablespace undo01.dbf?). Undo segments used to be called rollback segments, and you might have heard that muttered before.

To be more plain, Flashback is similar in idea to the Undo feature of your word processor or GIMP/Photoshop. You work along happily and then suddenly realize you really don't like where things are going, so rather than having to fix it we can just Undo. Applications in the last several years actually allow you to use an Undo history to undo things you did several changes ago. Flashback is the same concept but for the database. Once Flashback is enabled and you have been granted permission to use it, you could do something as important as "flashbacking" a dropped table, or something as minor as undoing your last SQL statement's changes to a table. You can even flash back an entire database to get back to an earlier point in time across the board!

To understand Flashback you need to be clear on two things: the "recycle bin" and Oracle SCNs. A System Change Number or SCN is an integer value associated with each change to the database. You might think of revision numbers in a source control system. Each time you do something, whether you're adding or removing data, a unique number is associated with the change. Reverting to an earlier state is as easy as telling Flashback which SCN you want to revert to. Obviously the kink is that if you drop a table the SCN isn't going to help you; therefore Oracle puts dropped objects into a recycle bin rather than blowing them into the nether regions immediately. Because of this you won't reclaim space immediately when you drop an object; however, you can forcibly purge objects from the recycle bin using the "PURGE" SQL statement.


The components needed for Flashback have actually been in the database for a while to facilitate OLTP. All OLTP changes need to be atomic (discussed later), so when a transaction is modifying the database and for some reason fails (or in DBA speak "throws an exception"), the transactions that were uncommitted are rolled back. Rollback segments, now called undo segments, provided the necessary historical information to allow for this. All this is leveraged, repackaged and dubbed "Flashback".

Before you get started playing with Flashback, there is one little catch you need to be aware of: it doesn't work on the system tablespaces. This means that if you connect to Oracle as sys (who uses the system tablespace by default) and create a table, drop it, and then try to flash it back, it will fail. Flashback works great on non-system tablespaces, but if you blow away a system table you're going to need more extreme measures, not just a Flashback restore.

The easiest way to enable Flashback is to enable it during database creation with dbca. And, as usual, Enterprise Manager makes everything a snap. We'll discuss its setup here in case you want to enable it on existing databases using the SQL*Plus interface. In order to utilize Flashback you'll need to put your database in ARCHIVELOG mode. Then you can set the DB_FLASHBACK_RETENTION_TARGET parameter, which defines the period of time for which we want to retain flashback logs, and finally turn Flashback on with an ALTER DATABASE statement. Let's look at the setup.

SQL> shutdown immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.

SQL> startup mount;
ORACLE instance started.
Total System Global Area  184549376 bytes
Fixed Size                  1300928 bytes
Variable Size             157820480 bytes
Database Buffers           25165824 bytes
Redo Buffers                 262144 bytes
Database mounted.

SQL> alter database archivelog;
Database altered.

SQL> alter system set DB_FLASHBACK_RETENTION_TARGET=4320;
System altered.

SQL> alter system set DB_RECOVERY_FILE_DEST_SIZE=536870912;
System altered.

SQL> alter system set DB_RECOVERY_FILE_DEST='/u02/fra';
System altered.


SQL> alter database flashback on;
Database altered.

SQL> alter database open;
Database altered.

Okay, Flashback is now enabled for this database. We've defined a flashback retention of 4320 minutes (or 72 hours), a recovery file destination size of 512MB, and defined the location of the flash recovery area (FRA) as /u02/fra. Let's see Flashback in action now. You can look at the contents of the recycle bin by querying the DBA_RECYCLEBIN view.

oracle@nexus6 ~$ sqlplus ben/passwd

SQL*Plus: Release 10.1.0.2.0 - Production on Thu Nov 4 00:41:36 2004
Copyright (c) 1982, 2004, Oracle. All rights reserved.

Connected to:
Oracle Database 10g Enterprise Edition Release 10.1.0.2.0 - Production
With the Partitioning, OLAP and Data Mining options

SQL> create table test_table(
  2    id   number(2),
  3    name varchar2(30)
  4  );
Table created.

SQL> insert into test_table values (1, 'Ben Rockwood');
1 row created.

SQL> insert into test_table values (2, 'Tamarah Rockwood');
1 row created.

SQL> insert into test_table values (3, 'Nova Rockwood');
1 row created.

SQL> insert into test_table values (4, 'Hunter Rockwood');
1 row created.

SQL> select * from test_table;

        ID NAME
---------- ------------------------------
         1 Ben Rockwood
         2 Tamarah Rockwood


         3 Nova Rockwood
         4 Hunter Rockwood

SQL> drop table test_table;
Table dropped.

SQL> select * from test_table;
select * from test_table
              *
ERROR at line 1:
ORA-00942: table or view does not exist

SQL> flashback table "test_table" to before drop;
flashback table "test_table" to before drop
*
ERROR at line 1:
ORA-38305: object not in RECYCLE BIN

SQL> flashback table "TEST_TABLE" to before drop;
Flashback complete.

SQL> select * from test_table;

        ID NAME
---------- ------------------------------
         1 Ben Rockwood
         2 Tamarah Rockwood
         3 Nova Rockwood
         4 Hunter Rockwood

DATA DICTIONARY
OVERVIEW

The data dictionary is full of metadata, information about what is going on inside your database. The data dictionary is presented to us in the form of a number of views. The dictionary views come in two primary forms:

The DBA, ALL or USER views - These views are used to manage database structures.
The V$ dynamic performance views - These views are used to monitor real-time database statistics.


Throughout the rest of this document we will introduce you to data dictionary views that you can use to manage your database. You will find the entire list of Oracle data dictionary views documented in the Oracle documentation online. There are hundreds of views in the data dictionary. To see the depth of the data dictionary views, here are the views that store data about Oracle tables:

* dba_all_tables
* dba_indexes
* dba_ind_partitions
* dba_ind_subpartitions
* dba_object_tables
* dba_part_col_statistics
* dba_subpart_col_statistics
* dba_tables
* dba_tab_cols
* dba_tab_columns
* dba_tab_col_statistics
* dba_tab_partitions
* dba_tab_subpartitions

You can query the DICTIONARY view to see a list of all views and comments about them that exist in the data dictionary. This view is a quick way to find exactly what you're looking for in the data dictionary.

Inside the Oracle Data Dictionary
If you are like me, you are a bit forgetful. The data dictionary is a repository of information about the Oracle database, known as metadata. Metadata is information about information, and the data dictionary is information about the database. In this section we want to show you how to use the data dictionary to get information on tables. Oracle provides several data dictionary views that you can use to collect information on tables in the database. These views include:

* dba_tables, all_tables, user_tables
* dba_tab_columns, all_tab_columns and user_tab_columns

So, we forgot where the BOOKS table is located. From the SYSTEM account, we can query the dba_tables view to find our table:

CONNECT system/your_password

SELECT owner, table_name, tablespace_name
FROM   dba_tables
WHERE  table_name = 'BOOKS';

Other views that show you where your tables are include user_tables and all_tables. Oracle also provides views that allow you to view the attributes of table columns. The dba_tab_columns view (and all_tab_columns and user_tab_columns) gives you a variety of information on table columns.

Managing Oracle requires the use of a number of Oracle-supplied views. These views include the data dictionary and the dynamic performance views. Together these views allow you to:


* Manage the database
* Tune the database
* Monitor the database
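For example, a quick sketch of searching the DICTIONARY view mentioned above for the views that describe table columns:

SELECT table_name, comments
FROM   dictionary
WHERE  table_name LIKE '%TAB_COLUMNS%';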

MISCELLANEOUS TOPICS
1) DBMS_ERRLOG
2) Purity Rules for Functions
3) Table partitioning
4) Function based index
5) UNION vs. UNION ALL
6) How to compare data in two similar tables
7) Database modeling / Database designing and Normalization
8) Multi table inserts in Oracle
9) Oracle tuning hints
10) EXERCISE

DBMS_ERRLOG

The DBMS_ERRLOG package provides a procedure that enables you to create an error logging table so that DML operations can continue after encountering errors rather than abort and roll back. This enables you to save time and system resources.

Security Model
Security on this package can be controlled by granting EXECUTE on this package to selected users or roles. The EXECUTE privilege is granted publicly. However, to create an error logging table, you need SELECT access on the base table or view, the CREATE TABLE privilege, as well as tablespace quota for the target tablespace.

The package has a single subprogram, the CREATE_ERROR_LOG procedure, which creates the error logging table used in DML error logging.

CREATE_ERROR_LOG Procedure
This procedure creates the error logging table needed to use the DML error logging capability. LONG, CLOB, BLOB, BFILE, and ADT datatypes are not supported in the columns.

Syntax

DBMS_ERRLOG.CREATE_ERROR_LOG (
  dml_table_name      IN VARCHAR2,
  err_log_table_name  IN VARCHAR2 := NULL,
  err_log_table_owner IN VARCHAR2 := NULL,
  err_log_table_space IN VARCHAR2 := NULL,
  skip_unsupported    IN BOOLEAN  := FALSE);

Parameters

CREATE_ERROR_LOG Procedure Parameters

dml_table_name - The name of the DML table to base the error logging table on. The name can be fully qualified (for example, emp, scott.emp, "EMP", "SCOTT"."EMP"). If a name component is enclosed in double quotes, it will not be upper cased.

err_log_table_name - The name of the error logging table you will create. The default is the first 25 characters in the name of the DML table prefixed with 'ERR$_'. Examples: dml_table_name 'EMP' gives err_log_table_name 'ERR$_EMP'; dml_table_name '"Emp2"' gives err_log_table_name 'ERR$_Emp2'.

err_log_table_owner - The name of the owner of the error logging table. You can specify the owner in dml_table_name. Otherwise, the schema of the currently connected user is used.

err_log_table_space - The tablespace the error logging table will be created in. If not specified, the default tablespace for the user owning the DML error logging table will be used.

skip_unsupported - When set to TRUE, column types that are not supported by error logging will be skipped over and not added to the error logging table. When set to FALSE, an unsupported column type will cause the procedure to terminate. The default is FALSE.

Examples
First, create an error log table for the channels table in the SH schema, using the default name generation. Then, see all columns of the table channels:

SQL> DESC channels
Name            Null?    Type
--------------- -------- ------------
CHANNEL_ID      NOT NULL CHAR(1)
CHANNEL_DESC    NOT NULL VARCHAR2(20)
CHANNEL_CLASS            VARCHAR2(20)

Finally, see all columns of the generated error log table. Note the mandatory control columns that are created by the package:

SQL> DESC ERR$_CHANNELS
Name            Null?    Type
--------------- -------- --------------
ORA_ERR_NUMBER$          NUMBER
ORA_ERR_MESG$            VARCHAR2(2000)
ORA_ERR_ROWID$           ROWID
ORA_ERR_OPTYP$           VARCHAR2(2)
ORA_ERR_TAG$             VARCHAR2(2000)
CHANNEL_ID               VARCHAR2(4000)
CHANNEL_DESC             VARCHAR2(4000)
CHANNEL_CLASS            VARCHAR2(4000)

Ex:

BEGIN
  DBMS_ERRLOG.create_error_log (dml_table_name => 'EMP');
END;

Add the LOG ERRORS clause to your DML statement:

BEGIN
  UPDATE emp
  SET    sal = sal * 2
  LOG ERRORS REJECT LIMIT UNLIMITED;
END;

Oracle logs the following errors during DML operations:

* Column values that are too large.
* Constraint violations (NOT NULL, unique, referential, and check constraints).
* Errors raised during trigger execution.
* Errors resulting from type conversion between a column in a subquery and the corresponding column of the table.
* Partition mapping errors.

The following conditions cause the statement to fail and roll back without invoking the error logging capability:

* Violated deferred constraints.
* Out of space errors.
* Any direct-path INSERT operation (INSERT or MERGE) that raises a unique constraint or index violation.
* Any UPDATE operation (UPDATE or MERGE) that raises a unique constraint or index violation.
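A short sketch of the same mechanism for an INSERT (emp and the generated ERR$_EMP come from the example above; emp_staging and the tag string are hypothetical):

INSERT INTO emp
SELECT * FROM emp_staging
LOG ERRORS INTO err$_emp ('nightly load') REJECT LIMIT UNLIMITED;

-- inspect the rows that failed, with the Oracle error for each
SELECT ora_err_number$, ora_err_mesg$
FROM   err$_emp
WHERE  ora_err_tag$ = 'nightly load';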

Purity Rules for Functions

Functions can be called in the select list, WHERE clause, GROUP BY and ORDER BY of a SELECT statement. To be called as below, the function must return a collection type defined as a database (schema-level) object:

SELECT * FROM TABLE(function());


To be callable from SQL statements, a stored function (and any subprograms called by that function) must obey certain "purity" rules, which are meant to control side effects:

When called from a SELECT statement or a parallelized INSERT, UPDATE, or DELETE statement, the function cannot modify any database tables.
When called from an INSERT, UPDATE, or DELETE statement, the function cannot query or modify any database tables modified by that statement.
When called from a SELECT, INSERT, UPDATE, or DELETE statement, the function cannot execute SQL transaction control statements (such as COMMIT), session control statements (such as SET ROLE), or system control statements (such as ALTER SYSTEM). Also, it cannot execute DDL statements (such as CREATE) because they are followed by an automatic commit.

If any SQL statement inside the function body violates a rule, you get an error at run time (when the statement is parsed).
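A minimal sketch of a violation (the audit_log table is hypothetical): the function compiles, but calling it from a query raises ORA-14551 because it performs DML.

CREATE OR REPLACE FUNCTION log_and_return (p_val IN NUMBER) RETURN NUMBER IS
BEGIN
  INSERT INTO audit_log VALUES (p_val, SYSDATE);  -- DML inside the function body
  RETURN p_val;
END;
/

SELECT log_and_return(sal) FROM emp;
-- ORA-14551: cannot perform a DML operation inside a query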
Table partitioning

Overview

Partitioning enables tables and indexes or index-organized tables to be subdivided into smaller, more manageable pieces; each small piece is called a "partition". From an application development perspective, there is no difference between a partitioned and a non-partitioned table. The application need not be modified to access a partitioned table if that application was initially written against non-partitioned tables. So once you understand partitioning in Oracle, the only thing you need to learn is a little bit of syntax, and that's it: you are a partitioning guru.

Oracle introduced partitioning with 8.0. With that version, only "range partitioning" was supported (I will come to the details later about what that means). Then with Oracle 8i, "hash" and "composite" partitioning were also introduced, and with 9i, "list partitioning"; each upgrade came with lots of other features as well. Each method of partitioning has its own advantages and disadvantages, and the decision which one to use will depend on the data and the type of application. Also, one can MODIFY, RENAME, MOVE, ADD, DROP, TRUNCATE, and SPLIT partitions. We will go through the details now.

Advantages of using Partitions in a Table
1. Smaller and more manageable pieces of data (partitions)
2. Reduced recovery time
3. Failure impact is less
4. Import/export can be done at the partition level
5. Faster access of data
6. Partitions work independently of the other partitions
7. Very easy to use


Types of Partitioning Methods

1. RANGE Partitioning
This type of partitioning creates partitions based on the range of column values. Each partition is defined by a partition bound (non-inclusive) that basically limits the scope of the partition. The most commonly used value for a range partition is a date field in a table. Let's say we have a table SAMPLE_ORDERS and it has a field ORDER_DATE. Also, let's say we have 5 years of history in this table. Then, we can create partitions by date for, let's say, every quarter, so every quarter's data becomes a partition in the SAMPLE_ORDERS table. The first partition will be the one with the lowest bound and the last one will be the partition with the highest bound. So if we have a query that wants to look at the data of the first quarter of 1999, then instead of going through the complete data it will go directly to the partition for the first quarter of 1999. This is an example of the syntax needed for creating a RANGE partition:

CREATE TABLE SAMPLE_ORDERS
(ORDER_NUMBER   NUMBER,
 ORDER_DATE     DATE,
 CUST_NUM       NUMBER,
 TOTAL_PRICE    NUMBER,
 TOTAL_TAX      NUMBER,
 TOTAL_SHIPPING NUMBER)
PARTITION BY RANGE(ORDER_DATE)
(
PARTITION SO99Q1 VALUES LESS THAN (TO_DATE('01-APR-1999', 'DD-MON-YYYY')),
PARTITION SO99Q2 VALUES LESS THAN (TO_DATE('01-JUL-1999', 'DD-MON-YYYY')),
PARTITION SO99Q3 VALUES LESS THAN (TO_DATE('01-OCT-1999', 'DD-MON-YYYY')),
PARTITION SO99Q4 VALUES LESS THAN (TO_DATE('01-JAN-2000', 'DD-MON-YYYY')),
PARTITION SO00Q1 VALUES LESS THAN (TO_DATE('01-APR-2000', 'DD-MON-YYYY')),
PARTITION SO00Q2 VALUES LESS THAN (TO_DATE('01-JUL-2000', 'DD-MON-YYYY')),
PARTITION SO00Q3 VALUES LESS THAN (TO_DATE('01-OCT-2000', 'DD-MON-YYYY')),
PARTITION SO00Q4 VALUES LESS THAN (TO_DATE('01-JAN-2001', 'DD-MON-YYYY'))
);

The above example creates 8 partitions on the SAMPLE_ORDERS table; each of these partitions corresponds to one quarter. Partition SO99Q1 will contain the orders for only the first quarter of 1999.

2. HASH Partitioning
Under this type of partitioning, the records in a table are partitioned based on a hash value computed from the column that is used for partitioning. Hash partitioning does not have any logical


meaning to the partitions as range partitioning does. Let's take one example:

CREATE TABLE SAMPLE_ORDERS
(ORDER_NUMBER   NUMBER,
 ORDER_DATE     DATE,
 CUST_NUM       NUMBER,
 TOTAL_PRICE    NUMBER,
 TOTAL_TAX      NUMBER,
 TOTAL_SHIPPING NUMBER,
 ORDER_ZIP_CODE NUMBER)
PARTITION BY HASH (ORDER_ZIP_CODE)
(PARTITION P1_ZIP TABLESPACE TS01,
 PARTITION P2_ZIP TABLESPACE TS02,
 PARTITION P3_ZIP TABLESPACE TS03,
 PARTITION P4_ZIP TABLESPACE TS04)
ENABLE ROW MOVEMENT;

The above example creates four hash partitions based on the zip codes from where the orders were placed.

3. LIST Partitioning (only with 9i)
Under this type of partitioning, the records in a table are partitioned based on a list of values. For a table with, say, a communities column as the defining key, partitions can be made based on that. Say a table has communities like 'Government', 'Asian', 'Employees', 'American' and 'European'; a list partition can then be created for an individual community or a group of communities. For instance, the 'American' partition will have all the records whose community is 'American'. Let's take one example; in fact, we will modify the same example:

CREATE TABLE SAMPLE_ORDERS
(ORDER_NUMBER     NUMBER,
 ORDER_DATE       DATE,
 CUST_NUM         NUMBER,
 TOTAL_PRICE      NUMBER,
 TOTAL_TAX        NUMBER,
 TOTAL_SHIPPING   NUMBER,
 SHIP_TO_ZIP_CODE NUMBER,
 SHIP_TO_STATE    VARCHAR2(2))
PARTITION BY LIST (SHIP_TO_STATE)
(PARTITION SHIP_TO_ARIZONA       VALUES ('AZ') TABLESPACE TS01,
 PARTITION SHIP_TO_CALIFORNIA    VALUES ('CA') TABLESPACE TS02,
 PARTITION SHIP_TO_ILLINOIS      VALUES ('IL') TABLESPACE TS03,
 PARTITION SHIP_TO_MASSACHUSETTS VALUES ('MA') TABLESPACE TS04,
 PARTITION SHIP_TO_MICHIGAN      VALUES ('MI') TABLESPACE TS05)
ENABLE ROW MOVEMENT;


The above example creates list partitions based on SHIP_TO_STATE, with each partition allocated to a different tablespace.

Altering partitioned tables

To add a partition
You can add a new partition at the "high" end (the point after the last existing partition). To add a partition at the beginning or in the middle of a table, use the SPLIT PARTITION clause. For example, to add a partition to the sales table, give the following command:

ALTER TABLE sales ADD PARTITION p6 VALUES LESS THAN (1996);

To add a partition to a hash-partitioned table, give the following command:

ALTER TABLE products ADD PARTITION;

Oracle then adds a new partition whose name is system generated, and it is created in the default tablespace. To add a partition with a user-defined name and in a tablespace you specify, give the following command:

ALTER TABLE products ADD PARTITION p5 TABLESPACE u5;

To add a partition to a list-partitioned table, give the following command:

ALTER TABLE customers ADD PARTITION central_india VALUES ('BHOPAL', 'NAGPUR');

Any value in the set of literal values that describe the partition(s) being added must not exist in any of the other partitions of the table.

Coalescing Partitions
Coalescing partitions is a way of reducing the number of partitions in a hash-partitioned table, or the number of subpartitions in a composite-partitioned table. When a hash partition is coalesced, its contents are redistributed into one or more remaining partitions determined by the hash function. The specific partition that is coalesced is selected by Oracle, and is dropped after its contents have been redistributed. To coalesce a hash partition, give the following statement:

ALTER TABLE products COALESCE PARTITION;


This reduces by one the number of partitions in the table products.

Dropping Partitions
To drop a partition from a range-partitioned, list-partitioned or composite-partitioned table, give the following command:

ALTER TABLE sales DROP PARTITION p5;

Partitioned Indexes

a. Local indexes
A local index is created in the same manner as an index on a non-partitioned table, except that each partition of a local index corresponds to exactly one table partition (see the sketch after item c below).

b. Global Partitioned Indexes
These can be created on a partitioned or a non-partitioned table, but for now they can be partitioned using range partitioning only. For example, in the example above, where I divided the table into partitions representing a quarter, a global index can be created using a different partitioning key and can have a different number of partitions.

c. Global Non-Partitioned Indexes
This is no different from an ordinary index created on a non-partitioned table. The index structure is not partitioned.
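A short sketch of the difference (the index names are hypothetical), using the SAMPLE_ORDERS table from above:

-- local index: one index partition is created per table partition automatically
CREATE INDEX sample_orders_cust_lix ON SAMPLE_ORDERS (CUST_NUM) LOCAL;

-- global partitioned index: partitioned by its own range key, independently of the table
CREATE INDEX sample_orders_cust_gix ON SAMPLE_ORDERS (CUST_NUM)
GLOBAL PARTITION BY RANGE (CUST_NUM)
(PARTITION gp1 VALUES LESS THAN (50000),
 PARTITION gp2 VALUES LESS THAN (MAXVALUE));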

Partitioning Existing Tables
The ALTER TABLE ... EXCHANGE PARTITION ... syntax can be used to partition an existing table, as shown by the following example. First we must create a non-partitioned table to act as our starting point:

CREATE TABLE my_table (
  id          NUMBER,
  description VARCHAR2(50)
);

INSERT INTO my_table (id, description) VALUES (1, 'One');
INSERT INTO my_table (id, description) VALUES (2, 'Two');
INSERT INTO my_table (id, description) VALUES (3, 'Three');


INSERT INTO my_table (id, description) VALUES (4, 'Four');
COMMIT;

Next we create a new partitioned table with a single partition to act as our destination table:

CREATE TABLE my_table_2 (
  id          NUMBER,

  description VARCHAR2(50)
)
PARTITION BY RANGE (id)
(PARTITION my_table_part VALUES LESS THAN (MAXVALUE));

Next we switch the original table segment with the partition segment:

ALTER TABLE my_table_2
EXCHANGE PARTITION my_table_part
WITH TABLE my_table
WITHOUT VALIDATION;

We can now drop the original table and rename the partitioned table:

DROP TABLE my_table;
RENAME my_table_2 TO my_table;

Finally we can split the partitioned table into multiple partitions as required and gather new statistics:

ALTER TABLE my_table
SPLIT PARTITION my_table_part AT (3)
INTO (PARTITION my_table_part_1,
      PARTITION my_table_part_2);

EXEC DBMS_STATS.gather_table_stats(USER, 'MY_TABLE', cascade => TRUE);


The following query shows that the partitioning process is complete:

COLUMN high_value FORMAT A20

SELECT table_name, partition_name, high_value, num_rows
FROM   user_tab_partitions
ORDER BY table_name, partition_name;

TABLE_NAME  PARTITION_NAME   HIGH_VALUE  NUM_ROWS
----------- ---------------- ----------- --------
MY_TABLE    MY_TABLE_PART_1  3                  2
MY_TABLE    MY_TABLE_PART_2  MAXVALUE           2

Function based index
A function-based index gives you the ability to index functions and use these indexes in queries. In a nutshell, this capability allows you to have case-insensitive searches or sorts, search on complex equations, and extend the SQL language efficiently by implementing your own functions and operators and then searching on them.

Example

CREATE INDEX emp_upper_idx ON emp(UPPER(ename));

Usage

SELECT ename, empno, sal
FROM   emp
WHERE  UPPER(ename) = 'KING';

This will use the function-based index and would be less costly. Another usage is to enforce a kind of check constraint.

Example 1

CREATE UNIQUE INDEX tidx ON emp (DECODE(dept, 'IT', ename, NULL));


This index enforces a constraint on the emp table: when the dept column has the value 'IT', ename must be unique.

Example 2

Create unique index tidx on emp (decode (dept, 'IT', ename, 'ABC', ename, null));

Create unique index tidx on emp (case when status = 'Y' then 1 else null end);

With this second index, 'Y' can appear only once in the status column.

UNION vs UNION ALL
Union suppresses duplicate values, whereas union all gives all the values from both tables, including the duplicates. Union sorts the data before returning the result, whereas union all just combines the data from both tables and returns it without any sorting. Union all is faster than union because it doesn't sort the data.
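A quick illustration (the tables and values here are illustrative):

-- Assume tables a and b each hold a single column n,
-- where a contains {1, 2} and b contains {2, 3}.
SELECT n FROM a UNION SELECT n FROM b;      -- returns 1, 2, 3 (duplicates removed)
SELECT n FROM a UNION ALL SELECT n FROM b;  -- returns 1, 2, 2, 3 (all rows kept)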

How to compare data in two similar tables
Using set operators in combination we can compare the data of two tables, provided the tables have the same structure.

Ex 1: If we want the distinct records that are in table A or in table B, excluding those that are in both A and B:

(Select * from A union Select * from B)
minus
(Select * from A intersect Select * from B)

Similarly, other combinations of set operators can be used to compare the contents of two tables.
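An equivalent sketch that shows the direction of each difference (assuming, as above, that A and B have identical structures):

(SELECT * FROM A MINUS SELECT * FROM B)   -- rows in A but not in B
UNION ALL
(SELECT * FROM B MINUS SELECT * FROM A);  -- rows in B but not in A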

DATABASE MODELLING / DATABASE DESIGNING Database design is the process of producing a detailed data model of a database. This logical data model contains all the needed logical and physical design choices and physical storage parameters needed to generate a design in a Data Definition Language, which can then be used to create a database. A fully attributed data model contains detailed attributes for each entity.


The term database design can be used to describe many different parts of the design of an overall database system. Principally, and most correctly, it can be thought of as the logical design of the base data structures used to store the data. In the relational model these are the tables and views. The process of doing database design generally consists of a number of steps which will be carried out by the database designer. Usually, the designer must:

Determine the relationships between the different data elements.
Superimpose a logical structure upon the data on the basis of these relationships.

A sample Entity-relationship diagram

Database design also includes an ER (entity-relationship model) diagram. An ER diagram helps in designing the database in an effective and efficient way. [Figure: an example ER diagram used for database design]


Attributes in ER diagrams are usually modeled as an oval with the name of the attribute, linked to the entity or relationship that contains the attribute.

Within the relational model, the final step can generally be broken down into two further steps: determining the grouping of information within the system (generally, determining what the basic objects are about which information is being stored), and then determining the relationships between these groups of information, or objects.

Design Process

Determine the purpose of your database - This helps prepare you for the remaining steps.

Find and organize the information required - Gather all of the types of information you might want to record in the database, such as product name and order number.


Divide the information into tables - Divide your information items into major entities or subjects, such as Products or Orders. Each subject then becomes a table.

Turn information items into columns - Decide what information you want to store in each table. Each item becomes a field, and is displayed as a column in the table. For example, an Employees table might include fields such as Last Name and Hire Date.

Specify primary keys - Choose each table's primary key. The primary key is a column that is used to uniquely identify each row. An example might be Product ID or Order ID.

Set up the table relationships - Look at each table and decide how the data in one table is related to the data in other tables. Add fields to tables or create new tables to clarify the relationships, as necessary.

Refine your design - Analyze your design for errors. Create the tables and add a few records of sample data. See if you can get the results you want from your tables. Make adjustments to the design, as needed.

Apply the normalization rules - Apply the data normalization rules to see if your tables are structured correctly. Make adjustments to the tables, as needed.

Determining data to be stored

In a majority of cases, the person designing a database is a person with expertise in the area of database design, rather than expertise in the domain from which the data to be stored is drawn, e.g. financial information, biological information, etc. Therefore the data to be stored in the database must be determined in cooperation with a person who does have expertise in that domain and who is aware of what data must be stored within the system.

This process is generally considered part of requirements analysis, and requires skill on the part of the database designer to elicit the needed information from those with the domain knowledge. This is because those with the necessary domain knowledge frequently cannot express clearly what their system requirements for the database are, as they are unaccustomed to thinking in terms of the discrete data elements which must be stored. Data to be stored can be determined by a Requirement Specification.

Normalization

Normalization is the process of efficiently organizing data in a database. There are two goals of the normalization process: eliminating redundant data (for example, storing the same data in more than one table) and ensuring data dependencies make sense (only storing related data in a table). Both of these are worthy goals, as they reduce the amount of space a database consumes and ensure that data is logically stored.


The process of applying the rules to your database design is called normalizing the database, or just normalization. Normalization is most useful after you have represented all of the information items and have arrived at a preliminary design. The idea is to help you ensure that you have divided your information items into the appropriate tables. What normalization cannot do is ensure that you have all the correct data items to begin with.

You apply the rules in succession, at each step ensuring that your design arrives at one of what is known as the "normal forms." Five normal forms are widely accepted - the first normal form through the fifth normal form. This section expands on the first three, because they are all that is required for the majority of database designs.

First normal form
First normal form states that at every row and column intersection in the table there exists a single value, and never a list of values. For example, you cannot have a field named Price in which you place more than one price. If you think of each intersection of rows and columns as a cell, each cell can hold only one value.

Second normal form
Second normal form requires that each non-key column be fully dependent on the entire primary key, not on just part of the key. This rule applies when you have a primary key that consists of more than one column. For example, suppose you have a table containing the following columns, where Order ID and Product ID form the primary key:

Order ID (primary key)
Product ID (primary key)
Product Name

This design violates second normal form, because Product Name is dependent on Product ID, but not on Order ID, so it is not dependent on the entire primary key. You must remove Product Name from the table. It belongs in a different table (Products).

Third normal form
Third normal form requires not only that every non-key column be dependent on the entire primary key, but also that non-key columns be independent of each other. Another way of saying this is that each non-key column must be dependent on the primary key and nothing but the primary key. For example, suppose you have a table containing the following columns:

ProductID (primary key)
Name
SRP


Discount

Assume that Discount depends on the suggested retail price (SRP). This table violates third normal form because a non-key column, Discount, depends on another non-key column, SRP. Column independence means that you should be able to change any non-key column without affecting any other column. If you change a value in the SRP field, the Discount would change accordingly, thus violating that rule. In this case Discount should be moved to another table that is keyed on SRP.
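To make the decomposition concrete, here is a minimal DDL sketch of the Orders/Products example above (all table and column names are illustrative, not taken from any particular schema):

-- 2NF: product_name and srp depend only on product_id, so they live
-- in products, not in the order line table.
CREATE TABLE products (
  product_id   NUMBER PRIMARY KEY,
  product_name VARCHAR2(50),
  srp          NUMBER
);

CREATE TABLE order_items (
  order_id   NUMBER,
  product_id NUMBER REFERENCES products,
  PRIMARY KEY (order_id, product_id)
);

-- 3NF: discount depends on srp (a non-key column), so it moves to a
-- table keyed on srp.
CREATE TABLE srp_discounts (
  srp      NUMBER PRIMARY KEY,
  discount NUMBER
);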

MULTI-TABLE INSERTS IN ORACLE
Multi-table insert is an extension to INSERT..SELECT; this feature enables us to define multiple insert targets for a source dataset. Until the introduction of this feature, only SQL*Loader had a similar capability. This section provides an overview of multi-table inserts and how they are used.

SYNTAX OVERVIEW
There are two types of multi-table insert, as follows:

INSERT FIRST; and INSERT ALL.

Multi-table inserts are an extension to INSERT..SELECT. Syntax is of the following form:

INSERT ALL|FIRST
  [WHEN condition THEN] INTO target [VALUES]
  [WHEN condition THEN] INTO target [VALUES]
  ...
  [ELSE] INTO target [VALUES]
SELECT ...
FROM   source_query;
We define multiple INTO targets between the INSERT ALL/FIRST and the SELECT. The inserts can be conditional or unconditional and if the record structure of the datasource matches the target table, the VALUES clause can be omitted. We will describe the various permutations in this article. SETUP For the examples in this article, we shall use the ALL_OBJECTS view as our source data. For simplicity, we will create four tables with the same structure as follows.

SQL> CREATE TABLE t1
  2  ( owner        VARCHAR2(30)
  3  , object_name  VARCHAR2(30)
  4  , object_type  VARCHAR2(30)
  5  , object_id    NUMBER
  6  , created      DATE
  7  );

Table created.

SQL> CREATE TABLE t2 AS SELECT * FROM t1;

Table created.

SQL> CREATE TABLE t3 AS SELECT * FROM t1;

Table created.

SQL> CREATE TABLE t4 AS SELECT * FROM t1;

Table created.
These tables will be our targets for the ALL_OBJECTS view data.

SIMPLE MULTI-TABLE INSERT
To begin, we will unconditionally INSERT ALL the source data into every target table. The source records and target tables are all of the same structure, so we will omit the VALUES clause from each INSERT.

SQL> SELECT COUNT(*) FROM all_objects;

  COUNT(*)
----------
     28981

1 row selected.

SQL> INSERT ALL
  2     INTO t1
  3     INTO t2
  4     INTO t3
  5     INTO t4
  6  SELECT owner
  7       , object_type
  8       , object_name
  9       , object_id
 10       , created
 11    FROM all_objects;

115924 rows created.

SQL> SELECT COUNT(*) FROM t1;

  COUNT(*)
----------
     28981

1 row selected.
Note the feedback from SQL*Plus and compare it to the count of ALL_OBJECTS. We get the total number of records inserted (115,924 = 4 x 28,981), and this is evenly distributed between our target tables (although in practice, the distribution will usually be uneven).

Before we continue with extended syntax, note that multi-table inserts can turn single source records into multiple target records (i.e. re-direct portions of records to different tables). We can see this in the previous example, where we insert four times the number of source records. We can also generate multiple records for a single table (i.e. the same table is repeatedly used as a target), whereby each record picks a different set of attributes from the source record (similar to pivoting), as in the sketch below.
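A sketch of that last point, using hypothetical tables (none of these names come from the examples above): a denormalized customers_flat table with three phone columns is unpivoted into customer_phones, producing up to three rows per source row against a single target table.

-- Hypothetical unpivot: one target table named three times.
INSERT ALL
  INTO customer_phones (cust_id, phone_type, phone) VALUES (cust_id, 'HOME',   home_phone)
  INTO customer_phones (cust_id, phone_type, phone) VALUES (cust_id, 'WORK',   work_phone)
  INTO customer_phones (cust_id, phone_type, phone) VALUES (cust_id, 'MOBILE', mobile_phone)
SELECT cust_id, home_phone, work_phone, mobile_phone
  FROM customers_flat;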

CONDITIONAL MULTI-TABLE INSERT
Multi-table inserts can also be conditional (i.e. we do not need to insert every record into every table in the list). There are some key points to note about conditional multi-table inserts, as follows.

- We cannot mix conditional with unconditional inserts. This means that in situations where we need a conditional insert on a subset of target tables, we will often need to "pad out" the unconditional inserts with a dummy condition such as "WHEN 1=1".
- We can optionally include an ELSE clause in our INSERT ALL|FIRST target list for when none of the explicit conditions are satisfied.
- An INSERT ALL conditional statement will evaluate every insert condition for each record. With INSERT FIRST, each record will stop being evaluated on the first condition it satisfies.
- The conditions in an INSERT FIRST statement will be evaluated in order from top to bottom. Oracle makes no such guarantees with an INSERT ALL statement.

With these restrictions in mind, we can now see an example of a conditional INSERT FIRST statement. Each source record will be directed to one target table at most. Note that for demonstration purposes, the following example includes varying column lists and an ELSE clause.


SQL> INSERT FIRST
  2     --<>--
  3     WHEN owner = 'SYSTEM'
  4     THEN
  5        INTO t1 (owner, object_name)
  6        VALUES (owner, object_name)
  7     --<>--
  8     WHEN object_type = 'TABLE'
  9     THEN
 10        INTO t2 (owner, object_name, object_type)
 11        VALUES (owner, object_name, object_type)
 12     --<>--
 13     WHEN object_name LIKE 'DBMS%'
 14     THEN
 15        INTO t3 (owner, object_name, object_type)
 16        VALUES (owner, object_name, object_type)
 17     --<>--
 18     ELSE
 19        INTO t4 (owner, object_type, object_name, created, object_id)
 20        VALUES (owner, object_type, object_name, created, object_id)
 21     --<>--
 22  SELECT owner
 23       , object_type
 24       , object_name
 25       , object_id
 26       , created
 27    FROM all_objects;

28981 rows created.

SQL> SELECT COUNT(*) FROM t1;

  COUNT(*)
----------
       362

1 row selected.

SQL> SELECT COUNT(*) FROM t2;

  COUNT(*)
----------
       844

1 row selected.

SQL> SELECT COUNT(*) FROM t3;

  COUNT(*)
----------
       266

1 row selected.

SQL> SELECT COUNT(*) FROM t4;

  COUNT(*)
----------
     27509

1 row selected.
We can see that each source record was inserted into one table only. INSERT FIRST is a good choice for performance when each source record is intended for one target only, but in practice, INSERT ALL is much more common. Remember that we cannot mix conditional with unconditional inserts. The following example shows the unintuitive error message we receive if we try.

SQL> INSERT ALL
  2     --<>--
  3     INTO t1 (owner, object_name)     --<-- unconditional
  4     VALUES (owner, object_name)
  5     --<>--
  6     WHEN object_type = 'TABLE'       --<-- conditional
  7     THEN
  8        INTO t2 (owner, object_name, object_type)
  9        VALUES (owner, object_name, object_type)
 10     --<>--
 11  SELECT owner
 12       , object_type
 13       , object_name
 14       , object_id
 15       , created
 16    FROM all_objects;
   INTO t1 (owner, object_name)     --<-- unconditional
   *
ERROR at line 3:
ORA-00905: missing keyword
The workaround to this, as stated earlier, is to include a dummy TRUE condition as follows.

SQL> INSERT ALL
  2     --<>--
  3     WHEN 1 = 1                       --<-- dummy TRUE condition
  4     THEN
  5        INTO t1 (owner, object_name)
  6        VALUES (owner, object_name)
  7     --<>--
  8     WHEN object_type = 'TABLE'       --<-- conditional
  9     THEN
 10        INTO t2 (owner, object_name, object_type)
 11        VALUES (owner, object_name, object_type)
 12     --<>--
 13  SELECT owner
 14       , object_type
 15       , object_name
 16       , object_id
 17       , created
 18    FROM all_objects;

29958 rows created.

Counter-intuitive to this is the fact that in a conditional multi-table insert, each INTO clause inherits the current condition until it changes. We can see this below by loading T1, T2 and T3 from a single condition in an INSERT ALL statement. The T4 table will be loaded from the ELSE clause.

SQL> INSERT ALL
  2     WHEN owner = 'SYSTEM'
  3     THEN
  4        INTO t1 (owner, object_name)
  5        VALUES (owner, object_name)
  6        --<>--
  7        INTO t2 (owner, object_name, object_type)   --<-- owner = 'SYSTEM'
  8        VALUES (owner, object_name, object_type)
  9        --<>--
 10        INTO t3 (owner, object_name, object_type)   --<-- owner = 'SYSTEM'
 11        VALUES (owner, object_name, object_type)
 12     ELSE
 13        INTO t4 (owner, object_type, object_name, created, object_id)
 14        VALUES (owner, object_type, object_name, created, object_id)
 15  SELECT owner
 16       , object_type
 17       , object_name
 18       , object_id
 19       , created
 20    FROM all_objects;

29705 rows created.

SQL> SELECT COUNT(*) FROM all_objects WHERE owner = 'SYSTEM';

  COUNT(*)
----------
       362

1 row selected.

SQL> SELECT COUNT(*) FROM t1;

  COUNT(*)
----------
       362

1 row selected.

SQL> SELECT COUNT(*) FROM t2;

  COUNT(*)
----------
       362

1 row selected.

SQL> SELECT COUNT(*) FROM t3;

  COUNT(*)
----------
       362

1 row selected.


ORACLE TUNING HINTS
Oracle Hint               Meaning
+                         Must be immediately after the comment indicator; tells Oracle this is a list of hints.
ALL_ROWS                  Use the cost-based approach for best throughput.
CHOOSE                    Default; if statistics are available the optimizer will use cost, if not, rule.
FIRST_ROWS                Use the cost-based approach for best response time.
RULE                      Use the rule-based approach; this cancels any other hints specified for this statement.

Access Method Oracle Hints:

CLUSTER(table)            Tells Oracle to do a cluster scan to access the table.
FULL(table)               Tells the optimizer to do a full scan of the specified table.
HASH(table)               Tells Oracle to explicitly choose the hash access method for the table.
HASH_AJ(table)            Transforms a NOT IN subquery to a hash anti-join.
ROWID(table)              Forces a rowid scan of the specified table.
INDEX(table [index])      Forces an index scan of the specified table using the specified index(es). If a list of indexes is specified, the optimizer chooses the one with the lowest cost. If no index is specified, the optimizer chooses the available index for the table with the lowest cost.
INDEX_ASC(table [index])  Same as INDEX, only performs an ascending search of the index chosen; this is functionally identical to the INDEX hint.
INDEX_DESC(table [index]) Same as INDEX, except performs a descending search. If more than one table is accessed, this is ignored.
INDEX_COMBINE(table index) Combines the bitmapped indexes on the table if the cost shows that doing so would give better performance.
INDEX_FFS(table index)    Performs a fast full index scan rather than a table scan.


MERGE_AJ(table)           Transforms a NOT IN subquery into a merge anti-join.
AND_EQUAL(table index index [index index index])
                          Causes a merge on several single-column indexes. Two must be specified; up to five can be.
NL_AJ                     Transforms a NOT IN subquery into a nested loop (NL) anti-join.
HASH_SJ(t1, t2)           Inserted into the EXISTS subquery; converts the subquery into a special type of hash join between t1 and t2 that preserves the semantics of the subquery. That is, even if there is more than one matching row in t2 for a row in t1, the row in t1 is returned only once.
MERGE_SJ(t1, t2)          Inserted into the EXISTS subquery; converts the subquery into a special type of merge join between t1 and t2 that preserves the semantics of the subquery. That is, even if there is more than one matching row in t2 for a row in t1, the row in t1 is returned only once.
NL_SJ                     Inserted into the EXISTS subquery; converts the subquery into a special type of nested loop join between t1 and t2 that preserves the semantics of the subquery. That is, even if there is more than one matching row in t2 for a row in t1, the row in t1 is returned only once.

Oracle Hints for join orders and transformations:

ORDERED                   Forces tables to be joined in the order specified. If you know table X has fewer rows, then ordering it first may speed execution in a join.
STAR                      Forces the largest table to be joined last using a nested loops join on the index.
STAR_TRANSFORMATION       Makes the optimizer use the best plan in which a star transformation is used.
FACT(table)               When performing a star transformation, use the specified table as a fact table.
NO_FACT(table)            When performing a star transformation, do not use the specified table as a fact table.
PUSH_SUBQ                 Causes non-merged subqueries to be evaluated at the earliest possible point in the execution plan.
REWRITE(mview)            If possible, forces the query to use the specified materialized view; if no materialized view is specified, the system chooses what it calculates is the appropriate view.
NOREWRITE                 Turns off query rewrite for the statement; use it when the data returned must be current and can't come from a materialized view.
USE_CONCAT                Forces combined OR conditions and IN processing in the WHERE clause to be transformed into a compound query using the UNION ALL set operator.
NO_MERGE(table)           Causes Oracle to join each specified table with another row source without a sort-merge join.
NO_EXPAND                 Prevents OR and IN processing expansion.

Oracle Hints for Join Operations:

USE_HASH(table)           Causes Oracle to join each specified table with another row source with a hash join.
USE_NL(table)             Forces a nested loop using the specified table as the controlling table.
USE_MERGE(table,[table,-]) Forces a sort-merge-join operation of the specified tables.
DRIVING_SITE              Forces query execution to be done at a different site than that selected by Oracle. This hint can be used with either rule-based or cost-based optimization.
LEADING(table)            Causes Oracle to use the specified table as the first table in the join order.

Oracle Hints for Parallel Operations:

[NO]APPEND                Specifies that data is to be (or not to be) appended to the end of a file rather than into existing free space. Use only with INSERT commands.
NOPARALLEL(table)         Specifies the operation is not to be done in parallel.
PARALLEL(table, instances) Specifies the operation is to be done in parallel.
PARALLEL_INDEX            Allows parallelization of a fast full index scan on any index.

Other Oracle Hints:

CACHE                     Specifies that the blocks retrieved for the table in the hint are placed at the most recently used end of the LRU list when the table is full table scanned.
NOCACHE                   Specifies that the blocks retrieved for the table in the hint are placed at the least recently used end of the LRU list when the table is full table scanned.
[NO]APPEND                For insert operations, will append (or not append) data at the HWM of the table.
UNNEST                    Turns on the UNNEST_SUBQUERY option for the statement if the UNNEST_SUBQUERY parameter is set to FALSE.
NO_UNNEST                 Turns off the UNNEST_SUBQUERY option for the statement if the UNNEST_SUBQUERY parameter is set to TRUE.
PUSH_PRED                 Pushes the join predicate into the view.
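Hints are embedded in a comment immediately after the SQL verb. A minimal sketch of the syntax, reusing the emp table and the emp_upper_idx index from the function-based index section (the alias and parallel degree are illustrative):

-- Force a full scan of emp and request a parallel degree of 4.
SELECT /*+ FULL(e) PARALLEL(e, 4) */ e.empno, e.ename
  FROM emp e;

-- Force use of a specific index.
SELECT /*+ INDEX(e emp_upper_idx) */ e.empno
  FROM emp e
 WHERE UPPER(e.ename) = 'KING';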


EXERCISE
1) Write code (preferably a packaged function) which returns details from the employee_sal_det table for an input employee_id. The user should issue this command to fetch the data in a PL/SQL Developer session:

select * from table(package_name.function_nm());

CREATE OR REPLACE TYPE TEST_OBJ_TYPE AS OBJECT
( name   VARCHAR2(30),
  salary NUMBER(9)
);
/

CREATE OR REPLACE TYPE TEST_TABTYPE AS TABLE OF TEST_OBJ_TYPE;
/

create or replace package test_pkg as
  function abc (p_empid in number) return TEST_TABTYPE;
  --PRAGMA RESTRICT_REFERENCES (abc, WNDS);
end test_pkg;
/

create or replace package body test_pkg as
  function abc (p_empid in number) return TEST_TABTYPE
  as
    v_name_sal TEST_TABTYPE;
  begin
    SELECT TEST_OBJ_TYPE(employee.name, employee.salary)
      bulk collect into v_name_sal
      FROM employee
     where empno = p_empid;
    return v_name_sal;
  end abc;
end test_pkg;
/

select * from table (test_pkg.abc(7839));
2) DELETING FROM A TABLE WITH 6 MIL RECORDS

CREATE OR REPLACE PROCEDURE delete_tab
( tablename IN VARCHAR2
, empno     IN NUMBER
, nrows     IN NUMBER
)
IS
  sSQL1  VARCHAR2(2000);
  sSQL2  VARCHAR2(2000);
  nCount NUMBER;
BEGIN
  nCount := 0;

  sSQL1 := 'delete from ' || tablename ||
           ' where ROWNUM < ' || nrows || ' and empno = ' || empno;

  sSQL2 := 'select count(ROWID) from ' || tablename ||
           ' where empno = ' || empno;

  LOOP
    EXECUTE IMMEDIATE sSQL1;
    EXECUTE IMMEDIATE sSQL2 INTO nCount;
    DBMS_OUTPUT.PUT_LINE('Existing records: ' || TO_CHAR(nCount));
    COMMIT;
    EXIT WHEN nCount = 0;
  END LOOP;
END delete_tab;
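A usage sketch (the table name and values are illustrative). The procedure commits inside the loop, so each pass deletes at most nrows-1 matching rows, keeping undo/rollback usage small:

SET SERVEROUTPUT ON
EXEC delete_tab('BIG_EMP', 7839, 10000);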

3) Write a view that uses SYS_CONTEXT and is dynamic in nature

Create or replace view v_test as
Select col1, col2
From   emp
Where  col3 = sys_context('v_context', 'col3');

This is a dynamic view which takes the value of col3 dynamically from the session. A context is a session-based memory area. The statement below sets the value of col3 to 123 in the v_context area; after this, the view will return rows where col3 = 123.

dbms_session.set_context('v_context', 'col3', 123);

select * from v_test;
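One step the snippet above glosses over: a context namespace must first be created and bound to a trusted package, and SET_CONTEXT can only be called from within that package. A minimal sketch (the package name ctx_pkg is illustrative):

CREATE OR REPLACE CONTEXT v_context USING ctx_pkg;

CREATE OR REPLACE PACKAGE ctx_pkg AS
  PROCEDURE set_col3 (p_value IN NUMBER);
END ctx_pkg;
/
CREATE OR REPLACE PACKAGE BODY ctx_pkg AS
  PROCEDURE set_col3 (p_value IN NUMBER) IS
  BEGIN
    -- Only this package may write to the v_context namespace.
    DBMS_SESSION.SET_CONTEXT('v_context', 'col3', p_value);
  END set_col3;
END ctx_pkg;
/

EXEC ctx_pkg.set_col3(123);
SELECT * FROM v_test;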

4) Bitmap index
It is used to index columns with low cardinality. For example, a Gender column has only two values (F, M), so it is a low-cardinality column and hence a good candidate for a bitmap index. It works as below:

It creates an array structure. For each row in the table, it marks 0 or 1 for each distinct value, reads the combined bits as a binary value (01 = 1 and 10 = 2), and then creates buckets according to the 01 and 10 values (in this case, two buckets). When we fire a query where the column value is F, it fetches the records in the first bucket; if it is M, it fetches the second bucket.

Row   F   M   Binary value
 1    0   1   = 01
 2    0   1   = 01
 3    1   0   = 10
 4    1   0   = 10
 5    1   0   = 10
 6    0   1   = 01
 7    1   0   = 10
 8    0   1   = 01
 9    1   0   = 10
10    1   0   = 10
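For completeness, a sketch of creating such an index (the table, column and index names are illustrative):

CREATE BITMAP INDEX emp_gender_bix ON emp (gender);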

