
Triggers:

Triggers are named PL/SQL blocks that are executed implicitly when a triggering event occurs. Rather than being executed when called (as is the case with procedures and functions), triggers fire automatically when certain events occur in the system. The action of executing a trigger is called firing the trigger; a trigger fires when a triggering event occurs.

Triggering Events:
Triggering events are events, caused by user actions (or by the system itself), that cause a trigger to be fired. Triggering events include insert, delete, and update operations. When any of these events occurs, the triggers defined on that event are executed implicitly.

Types of Triggers:
Although triggers can be typed and classified in many ways, there are basically three types of triggers:

DML Triggers: DML triggers are fired by the execution of a DML statement. They can be defined on insert, update, or delete operations; whenever such an operation occurs on the table, the trigger executes. Triggers can also be created so that they execute either before or after the DML operation occurs.

System Triggers: System triggers fire when a system event, such as database startup or shutdown, occurs. System triggers can also be fired on DDL operations such as CREATE TABLE.

Instead-of Triggers: Instead-of triggers can be defined only on operations performed on views. When you define an INSTEAD OF trigger on an operation on a view, the trigger code is executed instead of the operation that fired it. Triggers of this type can only be row level.

Syntax for trigger creation:


The syntax for trigger creation is as follows:

CREATE [OR REPLACE] TRIGGER trigger_name
{BEFORE | AFTER | INSTEAD OF} triggering_event
[WHEN trigger_condition]
[FOR EACH ROW]
trigger_body;

Below is an example of creating a trigger:

Figure 4. Trigger Creation
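To complement the figure, here is a minimal sketch of a row-level DML trigger. The emp table and its sal column, the audit table, and the trigger name are all assumptions for illustration, not taken from the figure:

```sql
-- Hypothetical audit table to record salary changes.
CREATE TABLE emp_sal_audit (
    empno   NUMBER(5),
    old_sal NUMBER(7,2),
    new_sal NUMBER(7,2),
    changed DATE
);

CREATE OR REPLACE TRIGGER trg_emp_sal_audit
BEFORE UPDATE OF sal ON emp        -- fires before each UPDATE of sal
FOR EACH ROW                       -- row-level trigger
WHEN (NEW.sal <> OLD.sal)          -- only when the salary actually changes
BEGIN
    INSERT INTO emp_sal_audit (empno, old_sal, new_sal, changed)
    VALUES (:OLD.empno, :OLD.sal, :NEW.sal, SYSDATE);
END;
/
```

The trigger fires implicitly on every qualifying UPDATE of emp; no explicit call is ever made.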

Frequently Asked Questions


1. What are the various PL/SQL objects?
2. What is a package?
3. What is a procedure?
4. What is a function?
5. What is a trigger?
6. What are the differences between procedures and functions?
7. Where are the procedures and functions stored?
8. What are triggering events?
9. What are the various types of triggers?
10. What are instead-of triggers?

Oracle Optimal Flexible Architecture (OFA)

The Optimal Flexible Architecture (OFA) standard is a set of recommendations for naming files, mount points, directories, folders, and filenames, and for scripting techniques, when installing Oracle 10g and implementing an Oracle database. Anyone who knows the OFA can navigate an Oracle environment and quickly find the software and files used for the database and the instance. This standardization increases productivity and avoids errors.

Benefits of using the Optimal Flexible Architecture:

Organizes large amounts of complicated software and data on disk to avoid device bottlenecks and poor performance
Facilitates routine administrative tasks, such as software and data backup functions, which are often vulnerable to data corruption
Eases switching among multiple Oracle databases
Adequately manages and administers database growth
Helps to eliminate fragmentation of free space in the data dictionary, isolates other fragmentation, and minimizes resource contention

Specifications
A Pentium III or Pentium 4 based PC with at least an 800 MHz processor, 256 MB of RAM (512 MB is much better), and at least 10 GB of free disk space. If you have only 256 MB of RAM, make sure Windows manages a swap file (virtual memory) of at least 400 MB.

At least 6 GB of free disk space is required:

Space to download or copy source ZIP files: 1.5 GB
Space to unpack source ZIP files: 1.5 GB
Space to install Oracle 10g software: 2.0 GB
Space for Oracle data files (varies): 2.0 to 5.0 GB

About Views
A view is a logical representation of another table or combination of tables. A view derives its data from the tables on which it is based. These tables are called base tables. Base tables might in turn be actual tables or might be views themselves. All operations performed on a view actually affect the base table of the view. You can use views in almost the same way as tables. You can query, update, insert into, and delete from views, just as you can standard tables. Views can provide a different representation (such as subsets or supersets) of the data that resides within other tables and views. Views are very powerful because they allow you to tailor the presentation of data to different types of users.

Creating Views
To create a view, you must meet the following requirements:

To create a view in your own schema, you must have the CREATE VIEW privilege. To create a view in another user's schema, you must have the CREATE ANY VIEW system privilege. You can acquire these privileges explicitly or through a role.

The owner of the view (whether it is you or another user) must have been explicitly granted privileges to access all objects referenced in the view definition; the owner cannot have obtained these privileges through roles. Also, the functionality of the view depends on the privileges of the view owner. For example, if the owner of the view has only the INSERT privilege for Scott's emp table, the view can be used only to insert new rows into the emp table, not to select, update, or delete rows.

If the owner of the view intends to grant access to the view to other users, the owner must have received the object privileges on the base objects with the GRANT OPTION, or the system privileges with the ADMIN OPTION.

You create views using the CREATE VIEW statement. Each view is defined by a query that references tables, materialized views, or other views. As with all subqueries, the query that defines a view cannot contain the FOR UPDATE clause. The following statement creates a view on a subset of data in the emp table:

CREATE VIEW sales_staff AS
    SELECT empno, ename, deptno
    FROM emp
    WHERE deptno = 10
    WITH CHECK OPTION CONSTRAINT sales_staff_cnst;

Figure 13. Creating Views with check option

The query that defines the sales_staff view references only rows in department 10. Furthermore, the CHECK OPTION creates the view with a constraint (named sales_staff_cnst) ensuring that INSERT and UPDATE statements issued against the view cannot result in rows that the view cannot select. For example, the following INSERT statement successfully inserts a row into the emp table by means of the sales_staff view, which contains all rows with department number 10:

INSERT INTO sales_staff VALUES (7584, 'OSTER', 10);

However, the following INSERT statement returns an error because it attempts to insert a row for department number 30, which cannot be selected using the sales_staff view:

INSERT INTO sales_staff VALUES (7591, 'WILLIAMS', 30);

Figure 14. Error because can not select specified row

The view could optionally have been created with the WITH READ ONLY clause, which prevents any updates, inserts, or deletes from being performed on the base table through the view. If no WITH clause is specified, the view, with some restrictions, is inherently updatable.
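As a sketch, reusing the emp table from the earlier examples (the view name is illustrative), a read-only variant could be created like this:

```sql
-- Hypothetical read-only variant of the sales_staff view.
CREATE VIEW sales_staff_ro AS
    SELECT empno, ename, deptno
    FROM emp
    WHERE deptno = 10
    WITH READ ONLY;

-- Queries work as usual:
SELECT * FROM sales_staff_ro;

-- But any INSERT, UPDATE, or DELETE against the view raises an error,
-- because the WITH READ ONLY clause blocks all DML through the view.
```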

Document Summary
This document is a simple guide to understanding PL/SQL objects such as stored procedures, functions, triggers etc. This guide provides insight into creating the above-mentioned objects using SQL Plus.

Identifying PL/SQL Objects


PL/SQL objects are named blocks of PL/SQL with declarative, executable, and exception handling sections. PL/SQL objects are stored in the data dictionary by name and can be reused. PL/SQL objects include packages, stored procedures, functions, and triggers.

Packages:
A package is a PL/SQL construct that allows related objects to be stored together. Apart from grouping related objects, packages provide further advantages, such as improved performance and the ability to reference the PL/SQL objects contained within a package from other PL/SQL blocks. A package consists of two portions:

a) Package Specification

The package specification contains information about the contents of the package. Essentially, it contains forward declarations of the procedures and functions in the package, but does not contain the code for any of these subprograms. The syntax for creating a package specification is as follows:

CREATE [OR REPLACE] PACKAGE package_name {IS | AS}
    type_definition |
    procedure_specification |
    function_specification |
    variable_declaration |
    exception_declaration |
    cursor_declaration |
    pragma_declaration
END [package_name];

b) Package Body

The package body contains the actual code for the subprograms in the package. The package body is a separate data dictionary object from the package specification, and it cannot be compiled unless the package specification has been compiled successfully. The package body must contain definitions for all subprograms declared in the package specification. The syntax for creating a package body is as follows:

CREATE [OR REPLACE] PACKAGE BODY package_name {IS | AS}
    procedure_definitions
    function_definitions
END [package_name];

Below is an example of creating a package:

Figure 1. Package Creation
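To complement the figure, here is a minimal sketch of a specification/body pair; the package name and its single function are hypothetical, not taken from the figure:

```sql
-- Hypothetical package: names are illustrative.
CREATE OR REPLACE PACKAGE emp_pkg IS
    -- Forward declaration only; no code appears in the specification.
    FUNCTION annual_sal (p_monthly_sal NUMBER) RETURN NUMBER;
END emp_pkg;
/

CREATE OR REPLACE PACKAGE BODY emp_pkg IS
    -- The body supplies the code for every subprogram declared above.
    FUNCTION annual_sal (p_monthly_sal NUMBER) RETURN NUMBER IS
    BEGIN
        RETURN p_monthly_sal * 12;
    END annual_sal;
END emp_pkg;
/
```

Other PL/SQL blocks can then reference the function as emp_pkg.annual_sal(...), which is the cross-block visibility a package provides.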

In this tutorial you will learn about discarded and rejected records: the bad file, SQL*Loader rejects, Oracle database rejects, the discard file, and the log file and logging information.

Discarded and Rejected Records


Records read from the input file might not be inserted into the database. Such records are placed in either a bad file or a discard file.

The Bad File


The bad file contains records that were rejected, either by SQL*Loader or by the Oracle database. Some of the possible reasons for rejection are discussed in the next sections.

SQL*Loader Rejects
Datafile records are rejected by SQL*Loader when the input format is invalid. For example, if the second enclosure delimiter is missing, or if a delimited field exceeds its maximum length, SQL*Loader rejects the record. Rejected records are placed in the bad file.

Oracle Database Rejects

After a datafile record is accepted for processing by SQL*Loader, it is sent to the Oracle database for insertion into a table as a row. If the Oracle database determines that the row is valid, then the row is inserted into the table. If the row is determined to be invalid, then the record is rejected and SQL*Loader puts it in the bad file. The row may be invalid, for example, because a key is not unique, because a required field is null, or because the field contains invalid data for the Oracle datatype.

The Discard File


As SQL*Loader executes, it may create a file called the discard file. This file is created only when it is needed, and only if you have specified that a discard file should be enabled. The discard file contains records that were filtered out of the load because they did not match any record-selection criteria specified in the control file. The discard file therefore contains records that were not inserted into any table in the database. You can specify the maximum number of such records that the discard file can accept. Data written to any database table is not written to the discard file.

Log File and Logging Information


When SQL*Loader begins execution, it creates a log file. If it cannot create a log file, execution terminates. The log file contains a detailed summary of the load, including a description of any errors that occurred during the load. The other loading parameters such as the log file, the bad file, the discard file, and maximum number of errors are specified in the screen below.

The below screen is the final screen used to initiate the data load process with SQL*Loader. The data load can be scheduled to run immediately or at a specific later time.

Let us learn about creating index-organized tables: first creating an index-organized table, then creating index-organized tables that contain object types; you will also learn how to view information about tables.

Creating Index-Organized Tables


You use the CREATE TABLE statement to create index-organized tables, but you must provide additional information:

An ORGANIZATION INDEX qualifier, which indicates that this is an index-organized table
A primary key, specified through a column constraint clause (for a single-column primary key) or a table constraint clause (for a multiple-column primary key)

Optionally, you can specify the following:

An OVERFLOW clause, which preserves dense clustering of the B-tree index by storing the row column values exceeding a specified threshold in a separate overflow data segment.
A PCTTHRESHOLD value, which defines the percentage of space reserved in the index block for an index-organized table. Any portion of the row that exceeds the specified threshold is stored in the overflow segment. In other words, the row is broken at a column boundary into two pieces, a head piece and a tail piece. The head piece fits in the specified threshold and is stored along with the key in the index leaf block. The tail piece is stored in the overflow area as one or more row pieces. Thus, the index entry contains the key value, the nonkey column values that fit the specified threshold, and a pointer to the rest of the row.

An INCLUDING clause, which can be used to specify nonkey columns that are to be stored in the overflow data segment.

Creating an Index-Organized Table


The following statement creates an index-organized table:

CREATE TABLE admin_docindex (
    token           CHAR(20),
    doc_id          NUMBER,
    token_frequency NUMBER,
    token_offsets   VARCHAR2(512),
    CONSTRAINT pk_admin_docindex PRIMARY KEY (token, doc_id))
ORGANIZATION INDEX
TABLESPACE admin_tbs
PCTTHRESHOLD 20
OVERFLOW TABLESPACE admin_tbs2;

Figure 12. Creating an index-organized table

Specifying ORGANIZATION INDEX causes the creation of an index-organized table, admin_docindex, where the key columns and nonkey columns reside in an index defined on the columns that designate the primary key for the table. In this case, the primary key columns are token and doc_id. An overflow segment is specified and is discussed in "Using the Overflow Clause".

Creating Index-Organized Tables that Contain Object Types


Index-organized tables can store object types. The following example creates object type admin_typ, then creates an index-organized table containing a column of object type admin_typ:

CREATE OR REPLACE TYPE admin_typ AS OBJECT
    (col1 NUMBER, col2 VARCHAR2(6));

CREATE TABLE admin_iot (
    c1 NUMBER PRIMARY KEY,
    c2 admin_typ)
ORGANIZATION INDEX;

You can also create an index-organized table of object types. For example:

CREATE TABLE admin_iot2 OF admin_typ
    (col1 PRIMARY KEY)
ORGANIZATION INDEX;

Another example, which follows, shows that index-organized tables store nested tables efficiently. For a nested table column, the database internally creates a storage table to hold all the nested table rows.

CREATE TYPE project_t AS OBJECT (pno NUMBER, pname VARCHAR2(80));
/
CREATE TYPE project_set AS TABLE OF project_t;
/
CREATE TABLE proj_tab (
    eno NUMBER,
    projects PROJECT_SET)
    NESTED TABLE projects STORE AS emp_project_tab
        ((PRIMARY KEY (nested_table_id, pno)) ORGANIZATION INDEX)
    RETURN AS LOCATOR;

The rows belonging to a single nested table instance are identified by a nested_table_id column. If an ordinary table is used to store nested table columns, the nested table rows typically get declustered. But when you use an index-organized table, the nested table rows can be clustered based on the nested_table_id column.

Creating Tables
To create a new table in your schema, you must have the CREATE TABLE system privilege. To create a table in another user's schema, you must have the CREATE ANY TABLE system privilege. Additionally, the owner of the table must have a quota for the tablespace that contains the table, or the UNLIMITED TABLESPACE system privilege. Create tables using the SQL statement CREATE TABLE.

Creating a Table
When you issue the following statement, you create a table named employee in your default schema and default tablespace. The code below can be executed through either SQL*Plus or iSQL*Plus.

CREATE TABLE employee (
    empno    NUMBER(5) PRIMARY KEY,
    ename    VARCHAR2(15) NOT NULL,
    job      VARCHAR2(10),
    mgr      NUMBER(5),
    hiredate DATE DEFAULT (sysdate),
    sal      NUMBER(7,2),
    comm     NUMBER(7,2),
    deptno   NUMBER(3) NOT NULL );

Figure 1. Table creation through SQL*PLUS

Parallelizing Table Creation


When you specify the AS SELECT clause to create a table and populate it with data from another table, you can utilize parallel execution. The CREATE TABLE ... AS SELECT statement contains two parts: a CREATE part (DDL) and a SELECT part (query). Oracle Database can parallelize both parts of the statement. The CREATE part is parallelized if one of the following is true:

A PARALLEL clause is included in the CREATE TABLE ... AS SELECT statement
An ALTER SESSION FORCE PARALLEL DDL statement is specified

If you parallelize the creation of a table, that table then has a parallel declaration (the PARALLEL clause) associated with it. Any subsequent DML or queries on the table, for which parallelization is possible, will attempt to use parallel execution. The following statement parallelizes the creation of a table and stores the result in compressed format, using table compression:

CREATE TABLE admin_emp_dept
    PARALLEL COMPRESS
    AS SELECT * FROM employee
    WHERE deptno = 10;
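The second option above, session-level parallel DDL, can be sketched as follows; the table name and the degree of parallelism are arbitrary illustrations:

```sql
-- Force parallel DDL for this session (degree 4 is an assumed value).
ALTER SESSION FORCE PARALLEL DDL PARALLEL 4;

-- The CREATE part of this statement is now parallelized even without
-- an explicit PARALLEL clause in the statement itself.
CREATE TABLE admin_emp_dept2
    AS SELECT * FROM employee
    WHERE deptno = 10;

-- Return the session to serial DDL when done.
ALTER SESSION DISABLE PARALLEL DDL;
```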

In this training you will learn about Dropping Tables, Consequences of Dropping a Table, CASCADE Clause and the PURGE Clause.

Dropping Tables
To drop a table that you no longer need, use the DROP TABLE statement. The table must be contained in your schema, or you must have the DROP ANY TABLE system privilege.

Caution: Before dropping a table, familiarize yourself with the consequences of doing so:

Dropping a table removes the table definition from the data dictionary. All rows of the table are no longer accessible.
All indexes and triggers associated with the table are dropped.
All views and PL/SQL program units dependent on a dropped table remain, yet become invalid (not usable).
All synonyms for a dropped table remain, but return an error when used.
All extents allocated for a table that is dropped are returned to the free space of the tablespace and can be used by any other object requiring new extents.
All rows corresponding to a clustered table are deleted from the blocks of the cluster.

The following statement drops the admin_emp_dept table:

DROP TABLE admin_emp_dept;

Figure 3. Drop Table

If the table to be dropped contains any primary or unique keys referenced by foreign keys of other tables, and you intend to drop the FOREIGN KEY constraints of the child tables, then include the CASCADE clause in the DROP TABLE statement, as shown below:

DROP TABLE admin_emp_dept CASCADE CONSTRAINTS;

Figure 4. Drop Table Cascade Constraints

When you drop a table, the database does not normally release the space associated with the table immediately. Rather, the database renames the table and places it in a recycle bin, where it can later be recovered with the FLASHBACK TABLE statement if you find that you dropped the table in error. If you want to release the space associated with the table immediately at the time you issue the DROP TABLE statement, include the PURGE clause as shown in the following statement:

DROP TABLE admin_emp_dept PURGE;
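Recovering a table from the recycle bin can be sketched as follows, assuming the table was dropped without the PURGE clause:

```sql
-- Drop the table; it is renamed and moved to the recycle bin.
DROP TABLE admin_emp_dept;

-- Recover it from the recycle bin under its original name.
FLASHBACK TABLE admin_emp_dept TO BEFORE DROP;
```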

Figure 5. Drop Table Purge

Perhaps instead of dropping a table, you want to truncate it. The TRUNCATE statement provides a fast, efficient method for deleting all rows from a table, but it does not affect any structures associated with the table being truncated (column definitions, constraints, triggers, and so forth) or authorizations.
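A sketch of the alternative just described, reusing the same table name:

```sql
-- Removes all rows but keeps the table definition, constraints,
-- triggers, and grants intact.
TRUNCATE TABLE admin_emp_dept;
```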

SQL*Loader - Input Data and Datafiles


SQL*Loader reads data from one or more files (or operating system equivalents of files) specified in the control file. From SQL*Loader's perspective, the data in the datafile is organized as records. A particular datafile can be in fixed record format, variable record format, or stream record format. The record format can be specified in the control file with the INFILE parameter. If no record format is specified, the default is stream record format.

Fixed Record Format


A file is in fixed record format when all records in a datafile are the same byte length. Although this format is the least flexible, it results in better performance than variable or stream format. Fixed format is also simple to specify. For example:

INFILE datafile_name "fix n"

Variable Record Format


A file is in variable record format when the length of each record in a character field is included at the beginning of each record in the datafile. This format provides some added flexibility over the fixed record format and a performance advantage over the stream record format. For example, you can specify a datafile that is to be interpreted as being in variable record format as follows:

INFILE "datafile_name" "var n"

Stream Record Format


A file is in stream record format when the records are not specified by size; instead, SQL*Loader forms records by scanning for the record terminator. Stream record format is the most flexible format, but there can be a negative effect on performance. The specification of a datafile to be interpreted as being in stream record format looks similar to the following:

INFILE datafile_name ["str terminator_string"]

The terminator_string is specified as either 'char_string' or X'hex_string', where:

'char_string' is a string of characters enclosed in single or double quotation marks
X'hex_string' is a byte string in hexadecimal format

When the terminator_string contains special (nonprintable) characters, it should be specified as an X'hex_string'. However, some nonprintable characters can be specified as 'char_string' by using a backslash. For example:

\n indicates a line feed
\t indicates a horizontal tab
\f indicates a form feed
\v indicates a vertical tab
\r indicates a carriage return
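As a sketch, the two forms of terminator specification might look like this in a control file; the filename and terminator are illustrative:

```sql
-- Printable terminator written as a character string ('|' then line feed):
INFILE 'example.dat' "str '|\n'"

-- The same terminator written as a hexadecimal byte string
-- (0x7C is '|', 0x0A is the line feed character):
INFILE 'example.dat' "str X'7C0A'"
```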

If the character set specified with the NLS_LANG parameter for your session is different from the character set of the datafile, character strings are converted to the character set of the datafile. This is done before SQL*Loader checks for the default record terminator. Hexadecimal strings are assumed to be in the character set of the datafile, so no conversion is performed. On UNIX-based platforms, if no terminator_string is specified, SQL*Loader defaults to the line feed character, \n. On Windows NT, if no terminator_string is specified, then SQL*Loader uses either \n or \r\n as the record terminator, depending on which one it finds first in the datafile. This means that if you know that one or more records in your datafile have \n embedded in a field, but you want \r\n to be used as the record terminator, you must specify it. The screen below asks for the data file details if they are not already specified in the control file.

The below screen asks for the load method options while loading the data using SQL*Loader.

In this section you will learn how to use views in queries, along with the restrictions on DML operations for views; how to update a join view, along with the rules for updatable join views; how to use DML statements on join views; and how to update views that involve outer joins.

Using Views in Queries


To issue a query or an INSERT, UPDATE, or DELETE statement against a view, you must have the SELECT, INSERT, UPDATE, or DELETE object privilege for the view, respectively, either explicitly or through a role. Views can be queried in the same manner as tables. For example, to query the Division1_staff view, enter a valid SELECT statement that references the view:

SELECT * FROM Division1_staff;

ENAME   EMPNO  JOB        DNAME
CLARK    7782  MANAGER    ACCOUNTING
KING     7839  PRESIDENT  ACCOUNTING
MILLER   7934  CLERK      ACCOUNTING
ALLEN    7499  SALESMAN   SALES
WARD     7521  SALESMAN   SALES
JAMES    7900  CLERK      SALES
TURNER   7844  SALESMAN   SALES
MARTIN   7654  SALESMAN   SALES
BLAKE    7698  MANAGER    SALES

With some restrictions, rows can be inserted into, updated in, or deleted from a base table using a view. The following statement inserts a new row into the emp_tab table using the sales_staff view:

INSERT INTO Sales_staff VALUES (7954, 'OSTER', 30);

Restrictions on DML operations for views use the following criteria, in the order listed:

1. If a view is defined by a query that contains SET or DISTINCT operators, a GROUP BY clause, or a group function, then rows cannot be inserted into, updated in, or deleted from the base tables using the view.
2. If a view is defined with WITH CHECK OPTION, then a row cannot be inserted into, or updated in, the base table (using the view) if the view cannot select the row from the base table.
3. If a NOT NULL column that does not have a DEFAULT clause is omitted from the view, then a row cannot be inserted into the base table using the view.
4. If the view was created by using an expression, such as DECODE(deptno, 10, "SALES", ...), then rows cannot be inserted into or updated in the base table using the view.

The constraint created by WITH CHECK OPTION of the sales_staff view allows only rows that have a department number of 10 to be inserted into, or updated in, the emp_tab table.
Alternatively, assume that the sales_staff view is defined by the following statement (that is, excluding the deptno column):

CREATE VIEW Sales_staff AS
    SELECT Empno, Ename
    FROM Emp_tab
    WHERE Deptno = 10
    WITH CHECK OPTION CONSTRAINT Sales_staff_cnst;

Considering this view definition, you can update the empno or ename fields of existing records, but you cannot insert rows into the emp_tab table through the sales_staff view, because the view does not let you alter the deptno field. If you had defined a DEFAULT value of 10 on the deptno field, then you could perform inserts. When a user attempts to reference an invalid view, the database returns an error message to the user:

ORA-04063: view 'view_name' has errors

This error message is returned when a view exists but is unusable due to errors in its query (whether it had errors when originally created or it was created successfully but became unusable later because underlying objects were altered or dropped).

Updating a Join View


An updatable join view (also referred to as a modifiable join view) is a view that contains more than one table in the top-level FROM clause of the SELECT statement and is not restricted by the WITH READ ONLY clause. The rules for updatable join views are as follows; views that meet these criteria are said to be inherently updatable.

General Rule: Any INSERT, UPDATE, or DELETE operation on a join view can modify only one underlying base table at a time.

UPDATE Rule: All updatable columns of a join view must map to columns of a key-preserved table. If the view is defined with the WITH CHECK OPTION clause, then all join columns and all columns of repeated tables are not updatable.

DELETE Rule: Rows from a join view can be deleted as long as there is exactly one key-preserved table in the join. If the view is defined with the WITH CHECK OPTION clause and the key-preserved table is repeated, then the rows cannot be deleted from the view.

INSERT Rule: An INSERT statement must not explicitly or implicitly refer to the columns of a non-key-preserved table. If the join view is defined with the WITH CHECK OPTION clause, INSERT statements are not permitted.

There are data dictionary views that indicate whether the columns in a join view are inherently updatable.

Note: There are some additional restrictions and conditions that can affect whether a join view is inherently updatable. If a view is not inherently updatable, it can be made updatable by creating an INSTEAD OF trigger on it. Additionally, if a view is a join on other nested views, then the other nested views must be mergeable into the top-level view. For a discussion of mergeable and unmergeable views and, more generally, of how the optimizer optimizes statements that reference views, refer to the Oracle Database documentation.

Examples illustrating the rules for inherently updatable join views, and a discussion of key-preserved tables, are presented in succeeding sections. The examples in these sections work only if you explicitly define the primary and foreign keys in the tables, or define unique indexes. The following statements create the appropriately constrained table definitions for emp and dept.

CREATE TABLE dept (
    deptno NUMBER(4) PRIMARY KEY,
    dname  VARCHAR2(14),
    loc    VARCHAR2(13));

CREATE TABLE emp (
    empno  NUMBER(4) PRIMARY KEY,
    ename  VARCHAR2(10),
    job    VARCHAR2(9),
    mgr    NUMBER(4),
    sal    NUMBER(7,2),
    comm   NUMBER(7,2),
    deptno NUMBER(2),
    FOREIGN KEY (deptno) REFERENCES dept(deptno));

You could also omit the primary and foreign key constraints listed in the preceding example, and create a UNIQUE INDEX on dept (deptno) to make the following examples work.
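Using these two tables, an inherently updatable join view might be defined and modified as follows; the view name and the specific UPDATE statements are illustrative sketches, not taken from the original text:

```sql
-- emp is the key-preserved table in this join: each row of the view
-- maps back to exactly one emp row via the primary key on empno.
CREATE VIEW emp_dept_v AS
    SELECT e.empno, e.ename, e.sal, d.deptno, d.dname
    FROM emp e JOIN dept d ON e.deptno = d.deptno;

-- Allowed: sal maps to a column of the key-preserved table (emp).
UPDATE emp_dept_v SET sal = sal * 1.1 WHERE deptno = 10;

-- Not allowed: dname belongs to dept, which is not key-preserved in
-- this join, so the following statement would raise an error.
-- UPDATE emp_dept_v SET dname = 'RESEARCH' WHERE deptno = 10;
```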

In this training you will learn about Altering Tables - Modifying an Existing Column Definition, Adding Table Columns, Renaming Table Columns, Dropping Table Columns, Removing Columns from Tables, Marking Columns Unused and Removing Unused Columns.



Altering Tables
You alter a table using the ALTER TABLE statement. To alter a table, the table must be contained in your schema, or you must have either the ALTER object privilege for the table or the ALTER ANY TABLE system privilege. Many of the usages of the ALTER TABLE statement are presented in the following sections:

Altering Physical Attributes of a Table
Moving a Table to a New Segment or Tablespace
Manually Allocating Storage for a Table
Modifying an Existing Column Definition
Adding Table Columns
Renaming Table Columns
Dropping Table Columns

Caution: If a view, materialized view, trigger, domain index, function-based index, check constraint, function, procedure, or package depends on a base table, the alteration of the base table or its columns can affect the dependent object.

Modifying an Existing Column Definition


Use the ALTER TABLE ... MODIFY statement to modify an existing column definition. You can modify a column's datatype, default value, or column constraint. You can increase the length of an existing column, or decrease it if all existing data satisfies the new length. You can change a column from byte semantics to CHAR semantics or vice versa. You must set the initialization parameter BLANK_TRIMMING=TRUE to decrease the length of a nonempty CHAR column.

If you are modifying a table to increase the length of a column of datatype CHAR, realize that this can be a time-consuming operation and can require substantial additional storage, especially if the table contains many rows. This is because the CHAR value in each row must be blank-padded to satisfy the new column length.
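A sketch of the statement form, reusing the employee table created earlier (the new length and default value are arbitrary illustrations):

```sql
-- Widen ename from VARCHAR2(15) to VARCHAR2(30).
ALTER TABLE employee MODIFY (ename VARCHAR2(30));

-- Give the job column a default value for future inserts.
ALTER TABLE employee MODIFY (job VARCHAR2(10) DEFAULT 'CLERK');
```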

Adding Table Columns


To add a column to an existing table, use the ALTER TABLE ... ADD statement. The following statement alters the admin_emp_dept table to add a new column named bonus: ALTER TABLE admin_emp_dept ADD (bonus NUMBER (7,2));

Figure 6. Alter Table

If a new column is added to a table, the column is initially NULL unless you specify the DEFAULT clause. When you specify a default value, the database updates each row of the new column with the value specified. Specifying a DEFAULT value is not supported for tables that use table compression. You can add a column with a NOT NULL constraint to a table only if the table does not contain any rows, or if you specify a default value.
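A sketch combining the two points above; the column name and default value are illustrative:

```sql
-- The DEFAULT value populates bonus_pct for every existing row,
-- which is what makes the NOT NULL constraint legal on a non-empty table.
ALTER TABLE admin_emp_dept ADD (bonus_pct NUMBER(3,2) DEFAULT 0 NOT NULL);
```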

Renaming Table Columns


Oracle Database lets you rename existing columns in a table. Use the RENAME COLUMN clause of the ALTER TABLE statement to rename a column. The new name must not conflict with the name of any existing column in the table. No other clauses are allowed in conjunction with the RENAME COLUMN clause. The following statement renames the comm column of the admin_emp_dept table.

ALTER TABLE admin_emp_dept RENAME COLUMN comm TO commission;

Figure 7. Alter Table Change Column Name

As noted earlier, altering a table column can invalidate dependent objects. However, when you rename a column, the database updates associated data dictionary tables to ensure that function-based indexes and check constraints remain valid. Oracle Database also lets you rename column constraints. Note: The RENAME TO clause of ALTER TABLE appears similar in syntax to the RENAME COLUMN clause, but is used for renaming the table itself.
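For contrast with RENAME COLUMN, the RENAME TO clause mentioned in the note renames the table itself; for example (the new table name is illustrative):

```sql
-- Renames the table, not a column within it
ALTER TABLE admin_emp_dept RENAME TO admin_employees;
```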

Overview of Schemas and Common Schema Objects


A schema is a collection of database objects. A schema is owned by a database user and has the same name as that user. Schema objects are the logical structures that directly refer to the database's data. Schema objects include structures like tables, views, and indexes. (There is no relationship between a tablespace and a schema. Objects in the same schema can be in different tablespaces, and a tablespace can hold objects from different schemas.) Some of the most common schema objects are defined in the following section.

Tables
Tables are the basic unit of data storage in an Oracle database. Database tables hold all user-accessible data. Each table has columns and rows. The columns of a table define the different types of information the table contains, and each instance of that data is stored as a row.

Indexes
Indexes are optional structures associated with tables. Indexes can be created to increase the performance of data retrieval. Just as the index in a book helps you quickly locate specific information, an Oracle index provides an access path to table data. When processing a request, Oracle can use some or all of the available indexes to locate the requested rows efficiently. Indexes are useful when applications frequently query a table for a range of rows (for example, all employees with a salary greater than 1000 dollars) or a specific row. Indexes are created on one or more columns of a table. After it is created, an index is automatically maintained and used by Oracle. Changes to table data (such as adding new rows, updating rows, or deleting rows) are automatically incorporated into all relevant indexes with complete transparency to the users.
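As a sketch of the idea (the employees table and salary column are assumed for illustration), an index supporting the salary range query mentioned above could be created as follows:

```sql
-- Index on one column; maintained automatically as rows change
CREATE INDEX emp_salary_idx ON employees (salary);

-- Oracle can now use emp_salary_idx to satisfy range queries such as:
SELECT * FROM employees WHERE salary > 1000;
```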

Views
Views are customized presentations of data in one or more tables or other views. A view can also be considered a stored query. Views do not actually contain data. Rather, they derive their data from the tables on which they are based, referred to as the base tables of the views. Like tables, views can be queried, updated, inserted into, and deleted from, with some restrictions. All operations performed on a view actually affect the base tables of the view. Views provide an additional level of table security by restricting access to a predetermined set of rows and columns of a table. They also hide data complexity and store complex queries.
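A minimal sketch, assuming a hypothetical employees base table with the columns shown:

```sql
-- The view stores no data; each query re-reads the base table
CREATE VIEW high_paid_emps AS
  SELECT employee_id, last_name, salary
  FROM employees
  WHERE salary > 1000;

-- Querying the view restricts access to the predetermined rows and columns
SELECT * FROM high_paid_emps;
```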

Clusters
Clusters are groups of one or more tables physically stored together because they share common columns and are often used together. Because related rows are physically stored together, disk access time improves. Like indexes, clusters do not affect application design. Whether a table is part of a cluster is transparent to users and to applications. Data stored in a clustered table is accessed by SQL in the same way as data stored in a nonclustered table.

Synonyms
A synonym is an alias for any table, view, materialized view, sequence, procedure, function, package, type, Java class schema object, user-defined object type, or another synonym. Because a synonym is simply an alias, it requires no storage other than its definition in the data dictionary.

Tables
Tables are the basic unit of data storage in an Oracle database. A table consists of columns and rows, and data is stored at their intersections. A table definition requires a table name and a description of the columns that the table will contain. Rows are created automatically when data is inserted into the table. You define a table with a table name, such as employees, and a set of columns. You give each column a column name, such as employee_id, last_name, and job_id; a datatype, such as VARCHAR2, DATE, or NUMBER; and a width. The width can be predetermined by the datatype, as with DATE. For columns of the NUMBER datatype, define precision and scale instead of width. Table 1 below lists the built-in Oracle datatypes that can be used to define column types in a table.

Summary of Oracle Built-In Datatypes

Table 1. Summary of Oracle Built-In Datatypes

CHAR [(size [BYTE | CHAR])]
Description: Fixed-length character data of length size bytes or characters.
Length/Default: Fixed for every row in the table (with trailing blanks); maximum size is 2000 bytes per row; default size is 1 byte per row. When neither BYTE nor CHAR is specified, the setting of NLS_LENGTH_SEMANTICS at the time of column creation determines which is used. Consider the character set (single-byte or multibyte) before setting size.

VARCHAR2 (size [BYTE | CHAR])
Description: Variable-length character data, with maximum length size bytes or characters. BYTE or CHAR indicates that the column has byte or character semantics, respectively. A size must be specified.
Length/Default: Variable for each row, up to 4000 bytes per row. When neither BYTE nor CHAR is specified, the setting of NLS_LENGTH_SEMANTICS at the time of column creation determines which is used. Consider the character set (single-byte or multibyte) before setting size.

NCHAR [(size)]
Description: Fixed-length Unicode character data of length size characters. (The number of bytes is twice this number for the AL16UTF16 encoding and 3 times this number for the UTF8 encoding.)
Length/Default: Fixed for every row in the table (with trailing blanks). The upper limit is 2000 bytes per row. Default size is 1 character.

NVARCHAR2 (size)
Description: Variable-length Unicode character data of maximum length size characters. The number of bytes may be up to 2 times size for the AL16UTF16 encoding and 3 times this number for the UTF8 encoding. A size must be specified.
Length/Default: Variable for each row. The upper limit is 4000 bytes per row.

CLOB
Description: Single-byte or multibyte character data. Both fixed-width and variable-width character sets are supported, and both use the CHAR character set.
Length/Default: Up to (2^32 - 1) * (database block size), or 4 gigabytes * block size.

NCLOB
Description: Unicode national character set (NCHAR) data. Both fixed-width and variable-width character sets are supported, and both use the NCHAR character set.
Length/Default: Up to (2^32 - 1) * (database block size), or 4 gigabytes * block size.

LONG
Description: Variable-length character data. Provided for backward compatibility.
Length/Default: Variable for each row in the table, up to 2^31 - 1 bytes, or 2 gigabytes, per row.

BINARY_FLOAT
Description: 32-bit floating-point number.
Length/Default: 4 bytes.

BINARY_DOUBLE
Description: 64-bit floating-point number.
Length/Default: 8 bytes.

NUMBER [(prec [, scale])]
Description: Variable-length numeric data. Precision prec is the total number of digits; scale is the number of digits to the right of the decimal point. Precision can range from 1 to 38. Scale can range from -84 to 127. With precision and scale specified, this is a fixed-point number; with no precision specified, it is a floating-point number.
Length/Default: Variable for each row. The maximum space available for a given column is 21 bytes per row.

DATE
Description: Fixed-length date and time data, ranging from Jan. 1, 4712 B.C.E. to Dec. 31, 9999 C.E.
Length/Default: Fixed at 7 bytes for each row in the table. Default format is a string (such as DD-MON-RR) specified by the NLS_DATE_FORMAT parameter.

INTERVAL YEAR [(yr_prec)] TO MONTH
Description: A period of time, represented as years and months. The yr_prec is the number of digits in the YEAR field of the date. The precision can be from 0 to 9, and defaults to 2 digits.
Length/Default: Fixed at 5 bytes.

INTERVAL DAY [(day_prec)] TO SECOND [(frac_sec_prec)]
Description: A period of time, represented as days, hours, minutes, and seconds. The day_prec and frac_sec_prec are the number of digits in the DAY and the fractional SECOND fields of the date, respectively. These precision values can each be from 0 to 9, and they default to 2 digits for day_prec and 6 digits for frac_sec_prec.
Length/Default: Fixed at 11 bytes.

TIMESTAMP [(frac_sec_prec)]
Description: A value representing a date and time, including fractional seconds. (The exact resolution depends on the operating system clock.) The frac_sec_prec specifies the number of digits in the fractional second part of the SECOND date field; it can be from 0 to 9, and defaults to 6 digits.
Length/Default: Varies from 7 to 11 bytes, depending on the precision. The default is determined by the NLS_TIMESTAMP_FORMAT initialization parameter.

TIMESTAMP [(frac_sec_prec)] WITH TIME ZONE
Description: A value representing a date and time, plus an associated time zone setting. The time zone can be an offset from UTC, such as '-5:0', or a region name, such as 'US/Pacific'. The frac_sec_prec is as for datatype TIMESTAMP.
Length/Default: Fixed at 13 bytes. The default is determined by the NLS_TIMESTAMP_TZ_FORMAT initialization parameter.

TIMESTAMP [(frac_sec_prec)] WITH LOCAL TIME ZONE
Description: Similar to TIMESTAMP WITH TIME ZONE, except that the data is normalized to the database time zone when stored, and adjusted to match the client's time zone when retrieved. The frac_sec_prec is as for datatype TIMESTAMP.
Length/Default: Varies from 7 to 11 bytes, depending on frac_sec_prec. The default is determined by the NLS_TIMESTAMP_FORMAT initialization parameter.

BLOB
Description: Unstructured binary data.
Length/Default: Up to (2^32 - 1) * (database block size), or 4 gigabytes * block size.

BFILE
Description: Address of a binary file stored outside the database. Enables byte-stream I/O access to external LOBs residing on the database server.
Length/Default: The referenced file can be up to (2^32 - 1) * (database block size), or 4 gigabytes * block size.

RAW (size)
Description: Variable-length raw binary data. A size, which is the maximum number of bytes, must be specified. Provided for backward compatibility.
Length/Default: Variable for each row in the table, up to 2000 bytes per row.

LONG RAW
Description: Variable-length raw binary data. Provided for backward compatibility.
Length/Default: Variable for each row in the table, up to 2^31 - 1 bytes, or 2 gigabytes, per row.

ROWID
Description: Base 64 binary data representing a row address. Used primarily for values returned by the ROWID pseudocolumn.
Length/Default: Fixed at 10 bytes (extended ROWID) or 6 bytes (restricted ROWID) for each row in the table.

UROWID [(size)]
Description: Base 64 binary data representing the logical address of a row in an index-organized table. The optional size is the number of bytes in a column of type UROWID.
Length/Default: Maximum size and default are both 4000 bytes.

You can also specify rules for each column of a table. These rules are called integrity constraints. One example is a NOT NULL integrity constraint. This constraint forces the column to contain a value in every row. After you create a table, insert rows of data using SQL statements. Table data can then be queried, deleted, or updated using SQL.
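Putting these pieces together, a hedged sketch of a table definition with integrity constraints, followed by an insert and a query (all names and values are illustrative):

```sql
CREATE TABLE employees (
  employee_id NUMBER(6)    NOT NULL,  -- must contain a value in every row
  last_name   VARCHAR2(25) NOT NULL,
  job_id      VARCHAR2(10),
  hire_date   DATE         DEFAULT SYSDATE
);

INSERT INTO employees (employee_id, last_name, job_id)
VALUES (100, 'King', 'AD_PRES');

SELECT last_name, hire_date FROM employees;
```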

Types of tables
There are four types of tables, as described in table 2 below.

Table 2. Types of tables

Ordinary (heap-organized) table: This is the basic, general-purpose type of table, which is the primary subject of this chapter. Its data is stored as an unordered collection (heap).

Clustered table: A clustered table is a table that is part of a cluster. A cluster is a group of tables that share the same data blocks because they share common columns and are often used together.

Index-organized table: Unlike an ordinary (heap-organized) table, data for an index-organized table is stored in a B-tree index structure in primary key sorted order. Besides storing the primary key column values of an index-organized table row, each index entry in the B-tree stores the non-key column values as well.

Partitioned table: Partitioned tables allow your data to be broken down into smaller, more manageable pieces called partitions, or even sub-partitions. Each partition can be managed individually, and can operate independently of the other partitions, thus providing a structure that can be better tuned for availability and performance.
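As an illustration of the last of these, a range-partitioned table might be sketched as follows (table name, columns, and range boundaries are hypothetical):

```sql
-- Each quarter's rows land in their own partition, which can be
-- managed (backed up, moved, dropped) independently of the others
CREATE TABLE sales (
  sale_id   NUMBER,
  sale_date DATE
)
PARTITION BY RANGE (sale_date) (
  PARTITION sales_q1 VALUES LESS THAN (TO_DATE('01-APR-2005', 'DD-MON-YYYY')),
  PARTITION sales_q2 VALUES LESS THAN (TO_DATE('01-JUL-2005', 'DD-MON-YYYY'))
);
```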

Restrictions to consider when Creating Tables

Here are some restrictions that may affect your table planning and usage:
Tables containing object types cannot be imported into a pre-Oracle8 database.
You cannot merge an exported table into a preexisting table having the same name in a different schema.
You cannot move types and extent tables to a different schema when the original data still exists in the database.
Oracle Database has a limit on the total number of columns that a table (or the attributes that an object type) can have. The table below specifies the database limits for various schema objects.

Logical Database Limits


Item: GROUP BY clause
Type: Maximum length
Limit: The GROUP BY expression and all of the nondistinct aggregate functions (for example, SUM, AVG) must fit within a single database block.

Item: Indexes
Type: Maximum per table / Total size of indexed column
Limit: Unlimited / 75% of the database block size minus some overhead

Item: Columns
Type: Per table / Per index (or clustered index) / Per bitmapped index
Limit: 1000 columns maximum / 32 columns maximum / 30 columns maximum

Item: Constraints
Type: Maximum per column
Limit: Unlimited

Item: Subqueries
Type: Maximum levels of subqueries in a SQL statement
Limit: Unlimited in the FROM clause of the top-level query; 255 subqueries in the WHERE clause

Item: Partitions
Type: Maximum length of linear partitioning key / Maximum number of columns in partition key / Maximum number of partitions allowed per table or index
Limit: 4 KB - overhead / 16 columns / 64 KB - 1 partitions

Item: Rows
Type: Maximum number per table
Limit: Unlimited

Item: Stored Packages
Type: Maximum size
Limit: PL/SQL and Developer/2000 may have limits on the size of stored procedures they can call. The limits typically range from 2000 to 3000 lines of code.

Item: Trigger Cascade Limit
Type: Maximum value
Limit: Operating system-dependent; typically 32

Item: Users and Roles
Type: Maximum
Limit: 2,147,483,638

Item: Tables
Type: Maximum per clustered table / Maximum per database
Limit: 32 tables / Unlimited

Further, when you create a table that contains user-defined type data, the database maps columns of user-defined type to relational columns for storing the user-defined type data. This causes additional relational columns to be created. This results in "hidden" relational columns that are not visible in a DESCRIBE table statement and are not returned by a SELECT * statement. Therefore, when you create an object table, or a relational table with columns of REF, varray, nested table, or object type, be aware that the total number of columns that the database actually creates for the table can be more than those you specify.

Oracle Database Architecture


Oracle is an RDBMS (Relational Database Management System). The Oracle database architecture can be described in terms of logical and physical structures. The advantage of separating the logical and physical structure is that the physical storage structure can be changed without affecting the logical structure.

Logical Structure
The logical structure of the Oracle RDBMS consists of the following elements:
Tablespaces
Schemas

Tablespace
The Oracle database consists of one or more logical portions called tablespaces. A tablespace is a logical grouping of related data. A database administrator can use tablespaces to do the following:
Control disk space allocation for database data.
Assign specific space quotas for database users.
Perform partial database backup or recovery operations.
Allocate data storage across devices to improve performance.
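A hedged sketch of the first two administrative tasks (the tablespace name, datafile name, sizes, and user are illustrative):

```sql
-- Control disk space allocation by creating a dedicated tablespace
CREATE TABLESPACE users_ts
  DATAFILE 'users01.dbf' SIZE 100M;

-- Assign a space quota for a database user on that tablespace
ALTER USER scott QUOTA 50M ON users_ts;
```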

Each database has at least one tablespace, called the SYSTEM tablespace. As part of the process of creating the database, Oracle automatically creates the SYSTEM tablespace. Although a small database can fit within the SYSTEM tablespace, it is recommended to create a separate tablespace for user data. Oracle uses the SYSTEM tablespace to store information such as the data dictionary. The data dictionary stores the metadata (the data about data). This includes information such as table access permissions, information about keys, and so on. Data is stored in the database in the form of files called datafiles. Each tablespace is a collection of one or more datafiles. Within the datafiles, space is organized into data blocks, extents, and segments.

Data Blocks
At the finest level of granularity, an ORACLE database's data is stored in data blocks (also called logical blocks, ORACLE blocks, or pages). An ORACLE database uses and allocates free database space in ORACLE data blocks.

Extents
The next level of logical database space is called an extent. An extent is a specific number of contiguous data blocks that are allocated for storing a specific type of information.

Segments
The level of logical database storage above an extent is called a segment. A segment is a set of extents that have been allocated for a specific type of data structure, and all are stored in the same tablespace. For example, each table's data is stored in its own data segment, while each index's data is stored in its own index segment. ORACLE allocates space for segments in extents. Therefore, when the existing extents of a segment are full, ORACLE allocates another extent for that segment. Because extents are allocated as needed, the extents of a segment may or may not be contiguous on disk, and may or may not span files. An Oracle database can use four types of segments:
Data segment--Stores user data within the database.
Index segment--Stores indexes.
Rollback segment--Stores rollback information. This information is used when data must be rolled back.
Temporary segment--Created when a SQL statement needs a temporary work area; these segments are destroyed when the SQL statement is finished. These segments are used during various database operations, such as sorts.
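The segments owned by the current user can be inspected through the data dictionary; for example:

```sql
-- Lists each of the user's segments with its type (TABLE, INDEX, ...),
-- owning tablespace, and number of allocated extents
SELECT segment_name, segment_type, tablespace_name, extents
FROM   user_segments;
```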

Schema
The database schema is a collection of logical-structure objects, known as schema objects, that define how you see the database's data. A schema also defines a level of access for the users. All the logical objects in Oracle are grouped into a schema. A schema is a logical grouping of objects such as:
Tables
Clusters
Indexes
Views
Stored procedures
Triggers
Sequences

Physical Structure
The physical layer of the database consists of three types of files:
1. One or more datafiles
2. Two or more redo log files
3. One or more control files

Datafiles (.dbf files):


Datafiles store the information contained in the database. One can have as few as one data file or as many as hundreds of datafiles. The information for a single table can span many datafiles or many tables can share a set of datafiles. Spreading tablespaces over many datafiles can have a significant positive effect on performance. The number of datafiles that can be configured is limited by the Oracle parameter MAXDATAFILES.

Redo Log Files (.rdo & .arc):


Oracle maintains logs of all the transactions against the database. These transactions are recorded in files called online redo log files (redo logs). The main purpose of the redo log files is to hold information needed for recovery in the event of a system failure. The redo log stores a log of all changes made to the database. The redo log files must perform well and be protected against hardware failures (through software or hardware fault tolerance). If redo log information is lost, one cannot recover the system. When a transaction occurs in the database, it is entered in the redo log buffers, while the data blocks affected by the transaction are not immediately written to disk. An Oracle database has at least two redo log files. Oracle writes to the redo log files in a cyclical order: after the first log file is filled, it writes to the second log file, until that one is filled. When all the redo log files have been filled, it returns to the first log file and begins overwriting its contents with new transaction data. Note that if the database is running in ARCHIVELOG mode, the database will make a copy of the online redo log files before overwriting them.

Control Files (.ctl):


Control files record control information about all of the files within the database. These files maintain internal consistency and guide recovery operations. Control files contain information used to start an instance, such as the location of datafiles and redo log files; Oracle needs this information to start the database instance. Control files must be protected. Oracle provides a mechanism for storing multiple copies of control files. These multiple copies are stored on separate disks to minimize the potential damage due to disk failure. The names of the database's control files are specified via the CONTROL_FILES initialization parameter.

Synonyms
This section describes aspects of managing synonyms, and contains the following topics: About Synonyms Creating Synonyms

Using Synonyms in DML Statements Dropping Synonyms

About Synonyms
A synonym is an alias for a schema object. Synonyms can provide a level of security by masking the name and owner of an object and by providing location transparency for remote objects of a distributed database. Also, they are convenient to use and reduce the complexity of SQL statements for database users. Synonyms allow underlying objects to be renamed or moved, where only the synonym needs to be redefined and applications based on the synonym continue to function without modification. You can create both public and private synonyms. A public synonym is owned by the special user group named PUBLIC and is accessible to every user in a database. A private synonym is contained in the schema of a specific user and available only to the user and the user's grantees.

Creating Synonyms
To create a private synonym in your own schema, you must have the CREATE SYNONYM privilege. To create a private synonym in another user's schema, you must have the CREATE ANY SYNONYM privilege. To create a public synonym, you must have the CREATE PUBLIC SYNONYM system privilege. Create a synonym using the CREATE SYNONYM statement. The underlying schema object need not exist, nor do you need privileges to access the object. The following statement creates a public synonym named public_emp on the emp table contained in the schema of jward:

CREATE PUBLIC SYNONYM public_emp FOR jward.emp;

Figure 29. Creating a synonym

When you create a synonym for a remote procedure or function, you must qualify the remote object with its schema name. Alternatively, you can create a local public synonym on the database where the remote object resides, in which case the database link must be included in all subsequent calls to the procedure or function.

Using Synonyms in DML Statements


You can successfully use any private synonym contained in your schema or any public synonym, assuming that you have the necessary privileges to access the underlying object, either explicitly, from an enabled role, or from PUBLIC. You can also reference any private synonym contained in another schema if you have been granted the necessary object privileges for the private synonym. You can only reference another user's synonym using the object privileges that you have been granted. For example, if you have the SELECT privilege for the jward.emp_tab synonym, then you can query the jward.emp_tab synonym, but you cannot insert rows using the synonym for jward.emp_tab. A synonym can be referenced in a DML statement the same way that the underlying object of the synonym can be referenced. For example, if a synonym named emp_tab refers to a table or view, then the following statement is valid:
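For example, assuming emp_tab is a synonym for a table with empno and ename columns (illustrative names) and that the necessary object privileges are held, statements such as these are valid:

```sql
-- The synonym is referenced exactly as the underlying table would be
SELECT * FROM emp_tab;

INSERT INTO emp_tab (empno, ename)
VALUES (7000, 'SMITH');
```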

Overview of Relational Databases, SQL and PL/SQL: A little background on the evolution of databases and database theory will help you understand the workings of SQL. Database systems store information in every conceivable business environment. From large tracking databases such as airline reservation systems to a child's baseball card collection, database systems store and distribute the data that we depend on. Until the last few years, large database systems could be run only on large mainframe computers. These machines have traditionally been expensive to design, purchase, and maintain. However, today's generation of powerful, inexpensive workstation computers enables programmers to design software that maintains and distributes data quickly and inexpensively. A relational model organizes data into tables and only tables. A row and column intersection is called a "cell". The columns are placeholders, having data types such as character or integer. The rows themselves are the data. A relational table must meet the following criteria:

1. The data stored in the cells must be atomic. Each cell can only hold one piece of data. When a cell contains more than one piece of information, this is known as information coding.
2. Data stored under columns must be of the same data type.
3. Each row is unique (no duplicate rows).
4. Columns have no order to them.
5. Rows have no order to them.
6. Each column has a unique name.
7. Two fundamental integrity rules apply:
   Entity Integrity rule: states that the primary key cannot be totally or partially empty.
   Referential Integrity rule: states that the foreign key must either be null or match a currently existing value of the primary key that it references.
The major objective of physical design is to eliminate or at least minimize contention. Follow these rules to avoid contention:

1. Separate tables and indexes.
2. Place large tables and indexes on disks of their own.
3. Place frequently joined tables on separate disks, or cluster them.
4. Place infrequently joined tables on the same disks if necessary (if you are short on disks).
5. Separate the RDBMS software from tables and indexes.
6. Separate the data dictionary from tables and indexes.
7. Separate the (undo) rollback logs and redo logs onto their own disks if possible.
8. Use RAID 1 for undo or redo logs.
9. Use RAID 3 or 5 for table data.
10. Use RAID 0 for indexes.

Oracle helps with the problem of object-oriented development against an RDBMS back end with the following built-in object-oriented capabilities:
1. Relationships as datatypes
2. Inheritance
3. Collections as datatypes, including nesting (containers)
4. User-defined (extensible) datatypes
5. Improved large objects (LOBs)

Oracle extended the already complex RDBMS with the following:
1. Object types: records or classes
2. Object views
3. Object language: extensions to Oracle SQL and PL/SQL
4. Object APIs: objects supported through the Oracle precompilers, PL/SQL, and OCI
5. Object portability: through the Object Type Translator (OTT), which can port, for example, an Oracle8 object type to a C++ class
Also, user-defined datatypes can be built on any of the built-in datatypes plus previously user-defined datatypes. User-defined datatypes can be used:
1. As a column of a relational table
2. As an attribute within another object type
3. As part of an object view of relational tables
4. As the basis for an object table
5. As the basis for PL/SQL variables

Describe the use and benefits of PL/SQL
PL/SQL is a Procedural Language extension to Oracle's version of ANSI-standard SQL. SQL is a non-procedural language: the programmer only describes what work to perform, and how to perform the work is left to the Oracle optimizer. In contrast, PL/SQL is like any 3GL procedural language; it requires step-by-step instructions defining what to do next. PL/SQL combines the power and flexibility of SQL (a 4GL) with the procedural constructs of a 3GL. This results in a robust, powerful language suited for designing complex applications. This module introduced the basic concepts of relational databases and the architecture of the Oracle database; the concepts apply to Oracle 8i and 9i alike, as there is not much difference between the two at this level.
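As a small illustration of PL/SQL's procedural constructs layered over SQL, a minimal anonymous block (DBMS_OUTPUT must be enabled, e.g. with SET SERVEROUTPUT ON in SQL*Plus, for the output to appear):

```sql
-- Anonymous PL/SQL block: a variable, a loop, and step-by-step control flow
DECLARE
  v_total NUMBER := 0;
BEGIN
  FOR i IN 1 .. 5 LOOP
    v_total := v_total + i;
  END LOOP;
  DBMS_OUTPUT.PUT_LINE('Total: ' || v_total);  -- prints Total: 15
END;
/
```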
