Triggers are named PL/SQL blocks that are executed implicitly when a triggering event occurs. Rather than being executed when called (as is the case with procedures and functions), triggers are fired automatically when certain events occur in the system. The act of executing a trigger is called firing the trigger.
Triggering Events:
Triggering events are events, caused by user actions or by system events, that cause a trigger to be fired. Triggering events include insertion, deletion, and update. When any of these events occurs, the triggers written on that event are executed implicitly.
Types of Triggers:
Although triggers can be classified in many ways, there are basically three types:

DML Triggers: DML triggers are fired by the execution of a DML statement. They can be defined on insert, update, or delete operations, and fire whenever the corresponding DML operation occurs on the table. A DML trigger can also be created so that it executes either before or after the DML operation occurs.

System Triggers: System triggers fire when a system event, such as a database startup or shutdown, happens. System triggers can also be fired by DDL operations such as CREATE TABLE.

Instead-of Triggers: Instead-of triggers can be defined only on operations performed on views. When you define an INSTEAD OF trigger on an operation on a view, the trigger code is executed instead of the operation that fired it. Triggers of this type can only be row-level.
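As a sketch of the DML case, the following hypothetical row-level trigger fires before each insert. The trigger name, table, and audit columns are illustrative assumptions, not objects defined elsewhere in this guide:

```sql
-- Sketch: a BEFORE INSERT row-level DML trigger that stamps
-- audit columns on each new row. The emp table is assumed to
-- have created_by and created_on columns for this example.
CREATE OR REPLACE TRIGGER emp_before_insert
BEFORE INSERT ON emp
FOR EACH ROW
BEGIN
  :NEW.created_by := USER;     -- who inserted the row
  :NEW.created_on := SYSDATE;  -- when the row was inserted
END;
/
```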
The Optimal Flexible Architecture (OFA) standard is a set of recommendations for naming files, mount points, directory structures, filenames, scripting techniques, and folders when installing 10g and implementing an Oracle database. Anyone who knows the OFA can navigate an Oracle environment and quickly find the software and files used for the database and the instance. This standardization increases productivity and avoids errors. Benefits of using the Optimal Flexible Architecture:

Organizes large amounts of complicated software and data on disk to avoid device bottlenecks and poor performance.
Facilitates routine administrative tasks, such as software and data backup functions, which are often vulnerable to data corruption.
Alleviates switching among multiple Oracle databases.
Adequately manages and administers database growth.
Helps to eliminate fragmentation of free space in the data dictionary, isolates other fragmentation, and minimizes resource contention.
Specifications
A Pentium III or Pentium 4 based PC with at least an 800 MHz processor, 256 MB of RAM (512 MB is better), and at least 10 GB of free disk space. If you have only 256 MB of RAM, make sure Windows manages a swap file (virtual memory) of at least 400 MB. At least 6 GB of the free disk space is needed for the installation itself:

Space to download or copy source ZIP files: 1.5 GB
Space to unpack source ZIP files: 1.5 GB
Space to install Oracle 10g software: 2.0 GB
Space for Oracle data files (varies): 2.0 to 5.0 GB
About Views
A view is a logical representation of another table or combination of tables. A view derives its data from the tables on which it is based. These tables are called base tables. Base tables might in turn be actual tables or might be views themselves. All operations performed on a view actually affect the base table of the view. You can use views in almost the same way as tables. You can query, update, insert into, and delete from views, just as you can standard tables. Views can provide a different representation (such as subsets or supersets) of the data that resides within other tables and views. Views are very powerful because they allow you to tailor the presentation of data to different types of users.
Creating Views
To create a view, you must meet the following requirements:

To create a view in your schema, you must have the CREATE VIEW privilege. To create a view in another user's schema, you must have the CREATE ANY VIEW system privilege. You can acquire these privileges explicitly or through a role.

The owner of the view (whether it is you or another user) must have been explicitly granted privileges to access all objects referenced in the view definition. The owner cannot have obtained these privileges through roles. Also, the functionality of the view depends on the privileges of the view owner. For example, if the owner of the view has only the INSERT privilege for Scott's emp table, the view can be used only to insert new rows into the emp table, not to SELECT, UPDATE, or DELETE rows.

If the owner of the view intends to grant access to the view to other users, the owner must have received the object privileges on the base objects with the GRANT OPTION, or the system privileges with the ADMIN OPTION.

You create views using the CREATE VIEW statement. Each view is defined by a query that references tables, materialized views, or other views. As with all subqueries, the query that defines a view cannot contain the FOR UPDATE clause. The following statement creates a view on a subset of data in the emp table:

CREATE VIEW sales_staff AS
  SELECT empno, ename, deptno
  FROM emp
  WHERE deptno = 10
  WITH CHECK OPTION CONSTRAINT sales_staff_cnst;

Figure 13. Creating Views with check option

The query that defines the sales_staff view references only rows in department 10. Furthermore, the CHECK OPTION creates the view with the constraint (named sales_staff_cnst) that INSERT and UPDATE statements issued against the view cannot result in rows that the view cannot select. For example, the following INSERT statement successfully inserts a row into the emp table by means of the sales_staff view, which contains all rows with department number 10:

INSERT INTO sales_staff VALUES (7584, 'OSTER', 10);

However, the following INSERT statement returns an error because it attempts to insert a row for department number 30, which cannot be selected using the sales_staff view:

INSERT INTO sales_staff VALUES (7591, 'WILLIAMS', 30);
The view could optionally have been created with the WITH READ ONLY clause, which prevents any updates, inserts, or deletes from being done to the base table through the view. If no WITH clause is specified, the view, with some restrictions, is inherently updatable.
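For comparison, a read-only variant of the sales_staff view shown earlier might be declared as follows; this is a sketch, with only the WITH clause differing from the original example:

```sql
-- Sketch: same defining query as sales_staff, but read-only.
-- Any INSERT, UPDATE, or DELETE through this view raises an error.
CREATE VIEW sales_staff_ro AS
  SELECT empno, ename, deptno
  FROM emp
  WHERE deptno = 10
  WITH READ ONLY;
```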
Document Summary
This document is a simple guide to understanding PL/SQL objects such as stored procedures, functions, triggers etc. This guide provides insight into creating the above-mentioned objects using SQL Plus.
Packages:
A package is a PL/SQL construct that allows related objects to be stored together. Apart from grouping related objects, packages provide further advantages, such as improved performance and the ability to reference the PL/SQL objects contained within a package from other PL/SQL blocks. A package consists of two portions:

a) Package Specification

The package specification describes the contents of the package. Essentially, it contains forward declarations of the procedures and functions in the package, but not the code for any of these subprograms. The syntax for creating a package specification is as below:

CREATE [OR REPLACE] PACKAGE package_name {IS | AS}
  type_definition |
  procedure_specification |
  function_specification |
  variable_declaration |
  exception_declaration |
  cursor_declaration |
  pragma_declaration
END [package_name];

b) Package Body

The package body contains the actual code for the subprograms in the package. The package body is a separate data dictionary object from the package specification, and it cannot be compiled unless the package specification has compiled successfully. The package body must contain definitions for all subprograms declared in the package specification. The syntax for creating a package body is as below:

CREATE [OR REPLACE] PACKAGE BODY package_name {IS | AS}
  procedure_definitions
  function_definitions
END [package_name];

Below is an example of creating a package:
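A minimal sketch of a specification with a matching body follows; the package, subprogram, and table names are illustrative assumptions, not objects defined elsewhere in this guide:

```sql
-- Sketch: a package grouping two related subprograms.
-- emp_pkg, hire, headcount, and the emp table layout are
-- illustrative assumptions.
CREATE OR REPLACE PACKAGE emp_pkg IS
  PROCEDURE hire (p_ename IN VARCHAR2, p_deptno IN NUMBER);
  FUNCTION headcount (p_deptno IN NUMBER) RETURN NUMBER;
END emp_pkg;
/
CREATE OR REPLACE PACKAGE BODY emp_pkg IS
  PROCEDURE hire (p_ename IN VARCHAR2, p_deptno IN NUMBER) IS
  BEGIN
    INSERT INTO emp (empno, ename, deptno)
    VALUES (emp_seq.NEXTVAL, p_ename, p_deptno);  -- assumes a sequence emp_seq
  END hire;

  FUNCTION headcount (p_deptno IN NUMBER) RETURN NUMBER IS
    v_count NUMBER;
  BEGIN
    SELECT COUNT(*) INTO v_count FROM emp WHERE deptno = p_deptno;
    RETURN v_count;
  END headcount;
END emp_pkg;
/
```

Other PL/SQL blocks would then reference the contents with dotted notation, for example emp_pkg.hire('SMITH', 10).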
In this tutorial you will learn about Discarded and Rejected Records - The Bad File, SQL*Loader Rejects, Oracle Database Rejects, The Discard File and Log File and Logging Information
SQL*Loader Rejects
Datafile records are rejected by SQL*Loader when the input format is invalid. For example, if the second enclosure delimiter is missing, or if a delimited field exceeds its maximum length, SQL*Loader rejects the record. Rejected records are placed in the bad file.
Oracle Database Rejects
After a datafile record is accepted for processing by SQL*Loader, it is sent to the Oracle database for insertion into a table as a row. If the Oracle database determines that the row is valid, then the row is inserted into the table. If the row is determined to be invalid, then the record is rejected and SQL*Loader puts it in the bad file. The row may be invalid, for example, because a key is not unique, because a required field is null, or because the field contains invalid data for the Oracle datatype.
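A control file naming an explicit bad file might be sketched as follows; the file names, table, and field list are illustrative assumptions:

```sql
-- Sketch of a SQL*Loader control file. Records rejected either
-- by SQL*Loader (bad input format) or by the database (invalid
-- rows) are written to the named bad file.
LOAD DATA
  INFILE  'emp.dat'
  BADFILE 'emp.bad'
  INTO TABLE emp
  FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
  (empno, ename, deptno)
```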
The screen below is the final screen used to initiate the data load process with SQL*Loader. The load can be scheduled to run immediately or at a specific later time.
In this section you will learn about creating index-organized tables, including index-organized tables that contain object types, and how to view information about tables.
An INCLUDING clause, which can be used to specify nonkey columns that are to be stored in the overflow data segment.
Figure 12. Creating an index-organized table

Specifying ORGANIZATION INDEX causes the creation of an index-organized table, admin_docindex, where the key columns and nonkey columns reside in an index defined on the columns that designate the primary key for the table. In this case, the primary key columns are token and doc_id. An overflow segment is specified and is discussed in "Using the Overflow Clause".
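The statement behind Figure 12 is not reproduced in this copy of the document; based on the columns named in the discussion (token and doc_id), it might look like the following sketch, in which the remaining column and the storage clauses are illustrative assumptions:

```sql
-- Sketch of the index-organized table discussed above.
-- token and doc_id come from the text; token_count and the
-- PCTTHRESHOLD/INCLUDING/OVERFLOW settings are assumptions.
CREATE TABLE admin_docindex (
  token       CHAR(20),
  doc_id      NUMBER,
  token_count NUMBER,
  CONSTRAINT pk_admin_docindex PRIMARY KEY (token, doc_id))
ORGANIZATION INDEX
PCTTHRESHOLD 20        -- rows larger than 20% of a block spill over
INCLUDING token_count  -- columns after token_count go to overflow
OVERFLOW;
```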
You can create an index-organized table that has object-type columns. For example:

CREATE OR REPLACE TYPE admin_typ AS OBJECT
  (col1 NUMBER, col2 VARCHAR2(6));

CREATE TABLE admin_iot (c1 NUMBER PRIMARY KEY, c2 admin_typ)
  ORGANIZATION INDEX;

You can also create an index-organized table of object types. For example:

CREATE TABLE admin_iot2 OF admin_typ (col1 PRIMARY KEY)
  ORGANIZATION INDEX;

Another example, which follows, shows that index-organized tables store nested tables efficiently. For a nested table column, the database internally creates a storage table to hold all the nested table rows.

CREATE TYPE project_t AS OBJECT (pno NUMBER, pname VARCHAR2(80));
/
CREATE TYPE project_set AS TABLE OF project_t;
/
CREATE TABLE proj_tab (eno NUMBER, projects PROJECT_SET)
  NESTED TABLE projects STORE AS emp_project_tab
    ((PRIMARY KEY (nested_table_id, pno)) ORGANIZATION INDEX)
  RETURN AS LOCATOR;

The rows belonging to a single nested table instance are identified by a nested_table_id column. If an ordinary table is used to store nested table columns, the nested table rows typically get declustered. But when you use an index-organized table, the nested table rows can be clustered based on the nested_table_id column.
Creating Tables
To create a new table in your schema, you must have the CREATE TABLE system privilege. To create a table in another user's schema, you must have the CREATE ANY TABLE system privilege. Additionally, the owner of the table must have a quota for the tablespace that contains the table, or the UNLIMITED TABLESPACE system privilege. Create tables using the SQL statement CREATE TABLE.
Creating a Table
When you issue the following statement, you create a table named employee in your default schema and default tablespace. The code below can be executed through either SQL*Plus or iSQL*Plus.

CREATE TABLE employee (
  empno    NUMBER(5) PRIMARY KEY,
  ename    VARCHAR2(15) NOT NULL,
  job      VARCHAR2(10),
  mgr      NUMBER(5),
  hiredate DATE DEFAULT (sysdate),
  sal      NUMBER(7,2),
  comm     NUMBER(7,2),
  deptno   NUMBER(3) NOT NULL
);
In this training you will learn about Dropping Tables, Consequences of Dropping a Table, CASCADE Clause and the PURGE Clause.
Dropping Tables
To drop a table that you no longer need, use the DROP TABLE statement. The table must be contained in your schema or you must have the DROP ANY TABLE system privilege.

Caution: Before dropping a table, familiarize yourself with the consequences of doing so:

Dropping a table removes the table definition from the data dictionary. All rows of the table are no longer accessible.
All indexes and triggers associated with the table are dropped.
All views and PL/SQL program units dependent on a dropped table remain, yet become invalid (not usable).
All synonyms for a dropped table remain, but return an error when used.
All extents allocated for a table that is dropped are returned to the free space of the tablespace and can be used by any other object requiring new extents or new objects.
All rows corresponding to a clustered table are deleted from the blocks of the cluster.

The following statement drops the admin_emp_dept table:

DROP TABLE admin_emp_dept;
Figure 3. Drop Table

If the table to be dropped contains any primary or unique keys referenced by foreign keys of other tables, and you intend to drop the FOREIGN KEY constraints of the child tables, then include the CASCADE clause in the DROP TABLE statement, as shown below:

DROP TABLE admin_emp_dept CASCADE CONSTRAINTS;
Figure 4. Drop Table Cascade Constraints

When you drop a table, the database does not normally release the space associated with the table immediately. Rather, the database renames the table and places it in a recycle bin, where it can later be recovered with the FLASHBACK TABLE statement if you find that you dropped the table in error. If you want the space associated with the table to be released immediately, at the time you issue the DROP TABLE statement, include the PURGE clause as shown in the following statement:

DROP TABLE admin_emp_dept PURGE;
Figure 5. Drop Table Purge

Perhaps instead of dropping a table, you want to truncate it. The TRUNCATE statement provides a fast, efficient method for deleting all rows from a table, but it does not affect any structures associated with the table being truncated (column definitions, constraints, triggers, and so forth) or authorizations.
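For the same example table, a truncate would be written as follows:

```sql
-- Removes all rows but keeps the table definition, constraints,
-- triggers, and grants in place (unlike DROP TABLE).
TRUNCATE TABLE admin_emp_dept;
```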
The variable record format provides added flexibility over the fixed record format and a performance advantage over the stream record format. For example, you can specify a datafile that is to be interpreted as being in variable record format as follows:

INFILE "datafile_name" "var n"
If the character set specified with the NLS_LANG parameter for your session is different from the character set of the datafile, character strings are converted to the character set of the datafile. This is done before SQL*Loader checks for the default record terminator. Hexadecimal strings are assumed to be in the character set of the datafile, so no conversion is performed. On UNIX-based platforms, if no terminator_string is specified, SQL*Loader defaults to the line feed character, \n. On Windows NT, if no terminator_string is specified, then SQL*Loader uses either \n or \r\n as the record terminator, depending on which one it finds first in the datafile. This means that if you know that one or more records in your datafile has \n embedded in a field, but you want \r\n to be used as the record terminator, you must specify it. The screen below asks for the data file details if it is not already specified in the control file.
The below screen asks for the load method options while loading the data using SQL*Loader.
In this section you will learn how to use views in queries, the restrictions on DML operations for views, how to update a join view and the rules for updatable join views, how to use DML statements on join views, and how to update views that involve outer joins.
This error message is returned when a view exists but is unusable due to errors in its query (whether it had errors when originally created or it was created successfully but became unusable later because underlying objects were altered or dropped).
Examples illustrating the rules for inherently updatable join views, and a discussion of key-preserved tables, are presented in succeeding sections. The examples in these sections work only if you explicitly define the primary and foreign keys in the tables, or define unique indexes. The following statements create the appropriately constrained table definitions for emp and dept:

CREATE TABLE dept (
  deptno NUMBER(4) PRIMARY KEY,
  dname  VARCHAR2(14),
  loc    VARCHAR2(13));

CREATE TABLE emp (
  empno  NUMBER(4) PRIMARY KEY,
  ename  VARCHAR2(10),
  job    VARCHAR2(9),
  mgr    NUMBER(4),
  sal    NUMBER(7,2),
  comm   NUMBER(7,2),
  deptno NUMBER(2),
  FOREIGN KEY (deptno) REFERENCES dept(deptno));

You could also omit the primary and foreign key constraints listed in the preceding example and instead create a UNIQUE INDEX on dept (deptno) to make the following examples work.
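Given the constrained emp and dept tables above, a join view and an update through it might be sketched as follows; the view name and the particular update are illustrative:

```sql
-- Sketch: a join view over emp and dept. emp is the key-preserved
-- table here (empno stays unique in the join result), so columns
-- belonging to emp can be updated through the view.
CREATE VIEW emp_dept AS
  SELECT e.empno, e.ename, e.sal, e.deptno, d.dname
  FROM emp e, dept d
  WHERE e.deptno = d.deptno;

UPDATE emp_dept
SET    sal = sal * 1.10   -- modifies emp, the key-preserved table
WHERE  empno = 7369;
```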
In this training you will learn about Altering Tables - Modifying an Existing Column Definition, Adding Table Columns, Renaming Table Columns, Dropping Table Columns, Removing Columns from Tables, Marking Columns Unused and Removing Unused Columns.
You must set the initialization parameter BLANK_TRIMMING=TRUE to decrease the length of a nonempty CHAR column. If you are modifying a table to increase the length of a column of datatype CHAR, realize that this can be a time-consuming operation and can require substantial additional storage, especially if the table contains many rows. This is because the CHAR value in each row must be blank-padded to satisfy the new column length.
Figure 6. Alter Table

If a new column is added to a table, the column is initially NULL unless you specify the DEFAULT clause. When you specify a default value, the database updates each row of the new column with the specified value. Specifying a DEFAULT value is not supported for tables using table compression. You can add a column with a NOT NULL constraint to a table only if the table does not contain any rows, or you specify a default value.
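An added column with a default can be sketched as follows; the column name and default value are illustrative assumptions, applied to the employee table created earlier:

```sql
-- Sketch: add a column with a DEFAULT. Existing rows are updated
-- with the default value, which is why a NOT NULL constraint is
-- permitted here even though the table may already contain rows.
ALTER TABLE employee
  ADD (status VARCHAR2(10) DEFAULT 'ACTIVE' NOT NULL);
```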
Figure 7. Alter Table Change Column Name

As noted earlier, altering a table column can invalidate dependent objects. However, when you rename a column, the database updates associated data dictionary tables to ensure that function-based indexes and check constraints remain valid. Oracle Database also lets you rename column constraints.

Note: The RENAME TO clause of ALTER TABLE appears similar in syntax to the RENAME COLUMN clause, but is used for renaming the table itself.
Tables
Tables are the basic unit of data storage in an Oracle database. Database tables hold all user-accessible data. Each table has columns and rows. The columns of a table define the different types of information the table contains, and each instance of that data is stored as a row.
Indexes
Indexes are optional structures associated with tables. Indexes can be created to increase the performance of data retrieval. Just as the index in a book helps you quickly locate specific information, an Oracle index provides an access path to table data. When processing a request, Oracle can use some or all of the available indexes to locate the requested rows efficiently. Indexes are useful when applications frequently query a table for a range of rows (for example, all employees with a salary greater than 1000 dollars) or a specific row. Indexes are created on one or more columns of a table. After it is created, an index is automatically maintained and used by Oracle. Changes to table data (such as adding new rows, updating rows, or deleting rows) are automatically incorporated into all relevant indexes with complete transparency to the users.
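Continuing the salary-range example, an index supporting such queries might be sketched as follows; the index name is an illustrative assumption:

```sql
-- Sketch: an index on the salary column, letting the optimizer
-- locate rows for range predicates such as sal > 1000 without
-- scanning the whole table.
CREATE INDEX emp_sal_idx ON emp (sal);
```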
Views
Views are customized presentations of data in one or more tables or other views. A view can also be considered a stored query. Views do not actually contain data. Rather, they derive their data from the tables on which they are based, referred to as the base tables of the views. Like tables, views can be queried, updated, inserted into, and deleted from, with some restrictions. All operations performed on a view actually affect the base tables of the view. Views provide an additional level of table security by restricting access to a predetermined set of rows and columns of a table. They also hide data complexity and store complex queries.
Clusters
Clusters are groups of one or more tables physically stored together because they share common columns and are often used together. Because related rows are physically stored together, disk access time improves. Like indexes, clusters do not affect application design. Whether a table is part of a cluster is transparent to users and to applications. Data stored in a clustered table is accessed by SQL in the same way as data stored in a nonclustered table.
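A cluster sharing a department-number column might be sketched as follows; all names here are illustrative assumptions:

```sql
-- Sketch: a cluster keyed on department number, a cluster index
-- (required before DML), and two tables stored in the cluster so
-- that rows with the same deptno share data blocks.
CREATE CLUSTER emp_dept_clu (deptno NUMBER(2));

CREATE INDEX emp_dept_clu_idx ON CLUSTER emp_dept_clu;

CREATE TABLE dept_c (
  deptno NUMBER(2) PRIMARY KEY,
  dname  VARCHAR2(14))
  CLUSTER emp_dept_clu (deptno);

CREATE TABLE emp_c (
  empno  NUMBER(4) PRIMARY KEY,
  ename  VARCHAR2(10),
  deptno NUMBER(2))
  CLUSTER emp_dept_clu (deptno);
```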
Synonyms
A synonym is an alias for any table, view, materialized view, sequence, procedure, function, package, type, Java class schema object, user-defined object type, or another synonym. Because a synonym is simply an alias, it requires no storage other than its definition in the data dictionary
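As a sketch, a synonym that hides the schema qualifier might look like this; the schema and table names are illustrative:

```sql
-- Sketch: an alias for scott.emp. Queries against the synonym
-- resolve to the underlying table; only the definition is stored
-- in the data dictionary.
CREATE SYNONYM emp FOR scott.emp;

SELECT COUNT(*) FROM emp;  -- resolves to scott.emp
```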
Tables
Tables are the basic unit of data storage in an Oracle Database. A table consists of columns and rows, and data is stored in those rows and columns. A table definition requires a table name and a description of the columns the table will contain; rows are created automatically as data is inserted into the table. You define a table with a table name, such as employees, and a set of columns. You give each column a column name, such as employee_id, last_name, and job_id; a datatype, such as VARCHAR2, DATE, or NUMBER; and a width. The width can be predetermined by the datatype, as with DATE. For columns of the NUMBER datatype, define precision and scale instead of width. Table 1 below lists the available datatypes in Oracle that can be used to define column types in a table.
Table 1. Summary of Oracle Built-In Datatypes

CHAR [(size [BYTE | CHAR])]
Fixed-length character data of length size bytes or characters. Fixed for every row in the table (with trailing blanks); maximum size is 2000 bytes per row; default size is 1 byte per row. When neither BYTE nor CHAR is specified, the setting of NLS_LENGTH_SEMANTICS at the time of column creation determines which is used. Consider the character set (single-byte or multibyte) before setting size.

VARCHAR2 (size [BYTE | CHAR])
Variable-length character data, with maximum length size bytes or characters. BYTE or CHAR indicates that the column has byte or character semantics, respectively. A size must be specified. Variable for each row, up to 4000 bytes per row. When neither BYTE nor CHAR is specified, the setting of NLS_LENGTH_SEMANTICS at the time of column creation determines which is used. Consider the character set (single-byte or multibyte) before setting size.

NCHAR [(size)]
Fixed-length Unicode character data of length size characters. The number of bytes is twice this number for the AL16UTF16 encoding and 3 times this number for the UTF8 encoding. Fixed for every row in the table (with trailing blanks); the upper limit is 2000 bytes per row; default size is 1 character.

NVARCHAR2 (size)
Variable-length Unicode character data of maximum length size characters. The number of bytes may be up to 2 times size for the AL16UTF16 encoding and 3 times this number for the UTF8 encoding. A size must be specified. Variable for each row; the upper limit is 4000 bytes per row.

CLOB
Single-byte or multibyte character data. Both fixed-width and variable-width character sets are supported, and both use the CHAR character set. Up to (2^32 - 1) * (database block size), or 4 gigabytes * block size.

NCLOB
Unicode national character set (NCHAR) data. Both fixed-width and variable-width character sets are supported, and both use the NCHAR character set. Up to (2^32 - 1) * (database block size), or 4 gigabytes * block size.

LONG
Variable-length character data. Provided for backward compatibility. Variable for each row in the table, up to 2^31 - 1 bytes, or 2 gigabytes, per row.

BINARY_FLOAT
32-bit floating-point number. 4 bytes.

BINARY_DOUBLE
64-bit floating-point number. 8 bytes.

NUMBER [(prec | prec, scale)]
Variable-length numeric data. Precision prec is the total number of digits; scale is the number of digits to the right of the decimal point. Precision can range from 1 to 38; scale can range from -84 to 127. With precision specified, this is a fixed-point number; with no precision specified, it is a floating-point number. Variable for each row; the maximum space available for a given column is 21 bytes per row.

DATE
Fixed-length date and time data, ranging from January 1, 4712 B.C.E. to December 31, 9999 C.E. Fixed at 7 bytes for each row in the table. The default format is a string (such as DD-MON-RR) specified by the NLS_DATE_FORMAT parameter.

INTERVAL YEAR [(yr_prec)] TO MONTH
A period of time, represented as years and months. The yr_prec is the number of digits in the YEAR field of the date. The precision can be from 0 to 9, and defaults to 2 digits. Fixed at 5 bytes.

INTERVAL DAY [(day_prec)] TO SECOND [(frac_sec_prec)]
A period of time, represented as days, hours, minutes, and seconds. The day_prec and frac_sec_prec are the number of digits in the DAY and the fractional SECOND fields of the date, respectively. These precision values can each be from 0 to 9, and they default to 2 digits for day_prec and 6 digits for frac_sec_prec. Fixed at 11 bytes.

TIMESTAMP [(frac_sec_prec)]
A value representing a date and time, including fractional seconds. (The exact resolution depends on the operating system clock.) The frac_sec_prec specifies the number of digits in the fractional second part of the SECOND date field; it can be from 0 to 9 and defaults to 6 digits. Varies from 7 to 11 bytes, depending on the precision. The default is determined by the NLS_TIMESTAMP_FORMAT initialization parameter.

TIMESTAMP [(frac_sec_prec)] WITH TIME ZONE
A value representing a date and time, plus an associated time zone setting. The time zone can be an offset from UTC, such as '-5:0', or a region name, such as 'US/Pacific'. The frac_sec_prec is as for datatype TIMESTAMP. Fixed at 13 bytes. The default is determined by the NLS_TIMESTAMP_TZ_FORMAT initialization parameter.

TIMESTAMP [(frac_sec_prec)] WITH LOCAL TIME ZONE
Similar to TIMESTAMP WITH TIME ZONE, except that the data is normalized to the database time zone when stored, and adjusted to match the client's time zone when retrieved. The frac_sec_prec is as for datatype TIMESTAMP. Varies from 7 to 11 bytes, depending on frac_sec_prec. The default is determined by the NLS_TIMESTAMP_FORMAT initialization parameter.

BLOB
Unstructured binary data. Up to (2^32 - 1) * (database block size), or 4 gigabytes * block size.

BFILE
Address of a binary file stored outside the database. Enables byte-stream I/O access to external LOBs residing on the database server. The referenced file can be up to (2^32 - 1) * (database block size), or 4 gigabytes * block size.

RAW (size)
Variable-length raw binary data. A size, which is the maximum number of bytes, must be specified. Provided for backward compatibility. Variable for each row in the table, up to 2000 bytes per row.

LONG RAW
Variable-length raw binary data. Provided for backward compatibility. Variable for each row in the table, up to 2^31 - 1 bytes, or 2 gigabytes, per row.

ROWID
Base 64 binary data representing a row address. Used primarily for values returned by the ROWID pseudocolumn. Fixed at 10 bytes (extended ROWID) or 6 bytes (restricted ROWID) for each row in the table.

UROWID [(size)]
Base 64 binary data representing the logical address of a row in an index-organized table. The optional size is the number of bytes in a column of type UROWID. Maximum size and default are both 4000 bytes.
You can also specify rules for each column of a table. These rules are called integrity constraints. One example is a NOT NULL integrity constraint. This constraint forces the column to contain a value in every row. After you create a table, insert rows of data using SQL statements. Table data can then be queried, deleted, or updated using SQL.
Types of tables
There are four types of tables, as described in Table 2 below.

Table 2. Types of tables

Ordinary (heap-organized) table
This is the basic, general-purpose type of table, which is the primary subject of this chapter. Its data is stored as an unordered collection (heap).

Clustered table
A clustered table is a table that is part of a cluster. A cluster is a group of tables that share the same data blocks because they share common columns and are often used together.

Index-organized table
Unlike an ordinary (heap-organized) table, data for an index-organized table is stored in a B-tree index structure in primary-key-sorted order. Besides storing the primary key column values of an index-organized table row, each index entry in the B-tree stores the nonkey column values as well.

Partitioned table
Partitioned tables allow your data to be broken down into smaller, more manageable pieces called partitions, or even subpartitions. Each partition can be managed individually, and can operate independently of the other partitions, thus providing a structure that can be better tuned for availability and performance.
Here are some restrictions that may affect your table planning and usage:

Tables containing object types cannot be imported into a pre-Oracle8 database.
You cannot merge an exported table into a preexisting table having the same name in a different schema.
You cannot move types and extent tables to a different schema when the original data still exists in the database.
Oracle Database has a limit on the total number of columns that a table (or attributes that an object type) can have.

The table below specifies the database limits for various schema objects.
Database Limits

Columns
Maximum per table: 1000 columns. Maximum per index (or clustered index): 32 columns. Maximum per bitmapped index: 30 columns.

Constraints
Maximum per column: unlimited.

Subqueries
Maximum levels of subqueries in a SQL statement: unlimited in the FROM clause of the top-level query; 255 in the WHERE clause.

Partitions
Maximum length of linear partitioning key: 4 KB - overhead. Maximum number of columns in partition key: 16 columns. Maximum number of partitions allowed per table or index: 64 KB - 1 partitions.

Trigger Cascade Limit
Maximum value: operating system dependent, typically 32.

Users and Roles
Maximum: 2,147,483,638.

Tables
Maximum per clustered table: 32 tables. Maximum per database: unlimited.

Note: PL/SQL and Developer/2000 may have limits on the size of stored procedures they can call. The limits typically range from 2000 to 3000 lines of code.

Further, when you create a table that contains user-defined type data, the database maps columns of user-defined type to relational columns for storing the user-defined type data. This causes additional relational columns to be created. This results in "hidden" relational columns that are not visible in a DESCRIBE table statement and are not returned by a SELECT * statement. Therefore, when you create an object table, or a relational table with columns of REF, varray, nested table, or object type, be aware that the total number of columns that the database actually creates for the table can be more than those you specify.
Logical Structure
The logical structure for Oracle RDBMS consists of the following elements: Tablespace Schema
Tablespace
The Oracle database consists of one or more logical portions called tablespaces. A tablespace is a logical grouping of related data. A database administrator can use tablespaces to do the following:

Control disk space allocation for database data.
Assign specific space quotas for database users.
Perform partial database backup or recovery operations.
Allocate data storage across devices to improve performance.
Each database has at least one tablespace, called the SYSTEM tablespace. As part of the process of creating the database, Oracle automatically creates the SYSTEM tablespace. Although a small database can fit within the SYSTEM tablespace, it is recommended to create a separate tablespace for user data. Oracle uses the SYSTEM tablespace to store information such as the data dictionary. The data dictionary
stores the metadata (the data about data). This includes information such as table access permissions and information about keys. Data is stored in the database in the form of files called datafiles. Each tablespace is a collection of one or more datafiles. The space in each datafile is organized into data blocks, extents, and segments.
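As a sketch of how a DBA might put this into practice, the following creates a dedicated tablespace for user data and grants a user a quota in it (the tablespace name, datafile path, sizes, and user name are all hypothetical):

```sql
-- Create a tablespace backed by one datafile that can grow as needed
CREATE TABLESPACE users_data
  DATAFILE '/u02/oradata/orcl/users_data01.dbf' SIZE 100M
  AUTOEXTEND ON NEXT 10M MAXSIZE 500M;

-- Assign a space quota to a user in that tablespace
-- (one of the tablespace uses listed above)
ALTER USER scott QUOTA 50M ON users_data;
```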
Data Blocks
At the finest level of granularity, an ORACLE database's data is stored in data blocks (also called logical blocks, ORACLE blocks, or pages). An ORACLE database uses and allocates free database space in ORACLE data blocks.
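The data block size is set when the database is created and can be inspected through the initialization parameters; for example:

```sql
-- Show the database's default block size in bytes
SELECT value FROM v$parameter WHERE name = 'db_block_size';
```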
Extents
The next level of logical database space is called an extent. An extent is a specific number of contiguous data blocks that are allocated for storing a specific type of information.
Segments
The level of logical database storage above an extent is called a segment. A segment is a set of extents that have been allocated for a specific type of data structure, and all are stored in the same tablespace. For example, each table's data is stored in its own data segment, while each index's data is stored in its own index segment. ORACLE allocates space for segments in extents. Therefore, when the existing extents of a segment are full, ORACLE allocates another extent for that segment. Because extents are allocated as needed, the extents of a segment may or may not be contiguous on disk, and may or may not span files. An Oracle database can use four types of segments: Data segment--Stores user data within the database. Index segment--Stores indexes. Rollback segment--Stores rollback information. This information is used when data must be rolled back. Temporary segment--Created when a SQL statement needs a temporary work area; these segments are destroyed when the SQL statement is finished. These segments are used during various database operations, such as sorts.
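The segment-and-extent bookkeeping described above can be observed through the data dictionary. For example, a DBA could list the segments in a given tablespace (the tablespace name USERS here is an assumption):

```sql
-- One row per segment: its type, how many extents it occupies,
-- and its total size in bytes
SELECT segment_name, segment_type, extents, bytes
FROM   dba_segments
WHERE  tablespace_name = 'USERS';
```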
Schema
The database schema is a collection of logical-structure objects, known as schema objects, that define how you see the database's data. A schema also defines a level of access for users. All the logical objects in Oracle are grouped into a schema. A schema is a logical grouping of objects such as: tables, clusters, indexes, views, stored procedures, triggers, and sequences.
Physical Structure
The physical layer of the database consists of three types of files:
1. One or more datafiles
2. Two or more redo log files
3. One or more control files
Synonyms
This section describes aspects of managing synonyms, and contains the following topics: About Synonyms Creating Synonyms
About Synonyms
A synonym is an alias for a schema object. Synonyms can provide a level of security by masking the name and owner of an object and by providing location transparency for remote objects of a distributed database. Also, they are convenient to use and reduce the complexity of SQL statements for database users. Synonyms allow underlying objects to be renamed or moved, where only the synonym needs to be redefined and applications based on the synonym continue to function without modification. You can create both public and private synonyms. A public synonym is owned by the special user group named PUBLIC and is accessible to every user in a database. A private synonym is contained in the schema of a specific user and available only to the user and the user's grantees.
Creating Synonyms
To create a private synonym in your own schema, you must have the CREATE SYNONYM privilege. To create a private synonym in another user's schema, you must have the CREATE ANY SYNONYM privilege. To create a public synonym, you must have the CREATE PUBLIC SYNONYM system privilege. Create a synonym using the CREATE SYNONYM statement. The underlying schema object need not exist, nor do you need privileges to access the object. The following statement creates a public synonym named public_emp on the emp table contained in the schema of jward: CREATE PUBLIC SYNONYM public_emp FOR jward.emp
Figure 29. Creating a synonym. When you create a synonym for a remote procedure or function, you must qualify the remote object with its schema name. Alternatively, you can create a local public synonym on the database
where the remote object resides, in which case the database link must be included in all subsequent calls to the procedure or function.
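To illustrate the remote case, a minimal sketch (the database link name sales_db and the remote procedure jward.fire_emp are hypothetical):

```sql
-- Synonym for a remote procedure: the remote object must be
-- qualified with its schema name, followed by the database link
CREATE PUBLIC SYNONYM fire_emp FOR jward.fire_emp@sales_db;
```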
Overview of Relational Databases, SQL and PL/SQL: A little background on the evolution of databases and database theory will help you understand the workings of SQL. Database systems store information in every conceivable business environment. From large tracking databases such as airline reservation systems to a child's baseball card collection, database systems store and distribute the data that we depend on. Until the last few years, large database systems could be run only on large mainframe computers. These machines have traditionally been expensive to design, purchase, and maintain. However, today's generation of powerful, inexpensive workstation computers enables programmers to design software that maintains and distributes data quickly and inexpensively. A relational model organizes data into tables, and only tables. A row and column intersection is called a "cell". The columns are placeholders, having data types such as character or integer. The rows themselves are the data. A relational table must meet the following criteria:
1. The data stored in the cells must be atomic. Each cell can hold only one piece of data. (When a cell contains more than one piece of information, this is known as information coding.)
2. Data stored under a column must be of the same data type.
3. Each row is unique (no duplicate rows).
4. Columns have no order.
5. Rows have no order.
6. Each column has a unique name.
7. Two fundamental integrity rules apply:
   1. Entity integrity rule: states that the primary key cannot be totally or partially empty.
   2. Referential integrity rule: states that a foreign key must either be null or match a currently existing value of the primary key that it references.
The major objective of physical design is to eliminate, or at least minimize, contention. Follow these rules to avoid contention:
1. Separate tables and indexes.
2. Place large tables and indexes on disks of their own.
3. Place frequently joined tables on separate disks, or cluster them.
4. Place infrequently joined tables on the same disks if necessary (if you're short on disks).
5. Separate the RDBMS software from tables and indexes.
6. Separate the data dictionary from tables and indexes.
7. Separate the (undo) rollback logs and redo logs onto their own disks if possible.
8. Use RAID 1 for undo or redo logs.
9. Use RAID 3 or 5 for table data.
10. Use RAID 0 for indexes.
Oracle helps with the problem of object-oriented development against an RDBMS back end with the following built-in object-oriented capabilities:
1. Relationships as datatypes
2. Inheritance
3. Collections as datatypes, including nesting (containers)
4. User-defined (extensible) datatypes
5. Improved large objects (LOBs)
Oracle extended the already complex RDBMS with the following:
1. Object types: records or classes
2. Object views
3. Object language: extensions to Oracle SQL and PL/SQL
4. Object APIs: objects supported through the Oracle precompilers, PL/SQL, and OCI
5. Object portability: through the Object Type Translator (OTT), which can port, for example, an Oracle8 object type to a C++ class
Also, user-defined datatypes can be built on any of the built-in datatypes as well as on previously user-defined datatypes. Once created, a user-defined datatype can be used:
1. As a column of a relational table
2. As an attribute within another object type
3. As part of an object view of relational tables
4. As the basis for an object table
5. As the basis for PL/SQL variables
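As a minimal sketch of the first of these uses, an object type serving as the column type of an ordinary relational table (the type and table names here are hypothetical):

```sql
-- An object type with two attributes
CREATE TYPE address_t AS OBJECT (
  street VARCHAR2(40),
  city   VARCHAR2(30)
);
/

-- The object type used as a column of a relational table
CREATE TABLE customers (
  id   NUMBER PRIMARY KEY,
  name VARCHAR2(50),
  addr address_t
);
```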
Describe the use and benefits of PL/SQL PL/SQL is a Procedural Language extension to Oracle's version of ANSI-standard SQL. SQL is a non-procedural language: the programmer describes only what work to perform, and how to perform the work is left to the Oracle optimizer. In contrast, PL/SQL is like any 3GL procedural language; it requires step-by-step instructions defining what to do next. PL/SQL combines the power and flexibility of SQL (a 4GL) with the procedural constructs of a 3GL. This results in a robust, powerful language suited to designing complex applications. Please download the related CBT tutorials (PDF format); they cover Oracle 9i, but there is not much difference between 8i and 9i in the database concepts. This module introduces the basic concepts of relational databases and the architecture of the Oracle database.
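The contrast can be seen in a minimal anonymous PL/SQL block, which mixes a SQL query with 3GL-style control flow (the emp table is assumed to exist; run SET SERVEROUTPUT ON in SQL*Plus to see the output):

```sql
DECLARE
  v_count NUMBER;
BEGIN
  SELECT COUNT(*) INTO v_count FROM emp;   -- SQL: declare what to fetch
  IF v_count = 0 THEN                      -- 3GL: step-by-step logic
    DBMS_OUTPUT.PUT_LINE('No employees found');
  ELSE
    DBMS_OUTPUT.PUT_LINE('Employee count: ' || v_count);
  END IF;
END;
/
```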