Course Material
records, or to fields in records; passwords can prevent certain users from
retrieving unauthorized data.
5. Program Development: Programmers use standard names for data items rather
than inventing their own from program to program. This allows the programmer to
focus on the desired function.
6. Program Maintenance: Changes and repairs to a system are relatively easy.
7. Special Information: Special-purpose report generators produce reports with
minimum effort.
Data Models
The structure of a database is based on the concept of a data model: a collection of
conceptual tools for describing data, data relationships, and consistency constraints.
The various data models that have been proposed fall into three different groups:
object-based logical models, record-based logical models, and physical data models.
Object-based logical models are used to describe data at the conceptual and view
levels. There are many different models; some of the more widely known ones are:
The entity relationship model
The object oriented model
Binary model
We examine the entity relationship model only.
The Entity Relationship (E-R) data model is based on a perception of the real world,
which consists of a collection of basic objects called entities and of relationships
among these objects. An entity is an object that is distinguishable from other objects
by a specific set of attributes. For example, a number and a balance describe one
particular account in a bank. A relationship is an association among several entities.
For example, a CustAcct relationship associates a customer with each account that she
or he has. The set of all entities of the same type and the set of all relationships of the
same type are termed an entity set and a relationship set, respectively.
The overall logical structure of a database can be expressed graphically by an E-R
diagram, which consists of the following components.
Rectangles, which represent entity sets.
Ellipses, which represent attributes.
Diamonds, which represent relationship among entity sets.
Lines, which link attributes to entity sets and entity sets to relationships.
[Figure: E-R diagram. Rectangles for the customer entity set (attributes: name,
street, city) and the account entity set (attributes: number, balance), joined by the
CustAcct relationship diamond. Sample data: customer Hayes (677-89-9011, Main,
Harrison) holds accounts A-102 (balance 400) and A-305 (balance 350).]
Disadvantages
1. The use of pointers leads to complexity in the structure. As a result of the
increased complexity, mapping related data becomes very difficult.
Hierarchical Model: The hierarchical model is similar to the network model in the sense
that data and relationships among data are represented by records and links,
respectively. It differs from the network model in that the records are organized as
collections of trees rather than arbitrary graphs. It is also referred to as a tree
structure. In this method data is stored in the form of a parent-child relationship. The
origin of the data tree is the root. Data located at different levels along a particular
branch from the root is called a node. Each node may be subdivided into two or more
additional nodes. The last node in a series is called a leaf. This method supports
one-to-many relationships.
Disadvantages
1. It is not possible to insert a new level into the existing structure; to do so, the
entire structure has to be rearranged.
2. It does not support many-to-many relationships.
The Relational Model represents data and relationships among data by a collection of
tables, each of which has a number of columns with unique names. The following is a
sample relational database showing customers and the accounts they have.
customer:

Name    Street    City    Number

account:

Number    Balance
900       55
556       100000
647       10534
801       355676
Table – Relation
Column – Attribute/Domain
Row – Tuple
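The terminology above can be made concrete with a small example. The following is a minimal sketch using Python's built-in sqlite3 module and the account figures shown above: the table is the relation, its columns are its attributes, and each row returned is a tuple.

```python
import sqlite3

# The account table is a relation; number and balance are its attributes.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE account (number INTEGER PRIMARY KEY, balance INTEGER)")
conn.executemany("INSERT INTO account VALUES (?, ?)",
                 [(900, 55), (556, 100000), (647, 10534), (801, 355676)])

# Each row fetched is one tuple of the relation.
rows = conn.execute("SELECT number, balance FROM account ORDER BY number").fetchall()
print(rows)
```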
Types of Attributes
Simple and composite Attributes
A simple attribute is one that cannot be divided into smaller subparts.
Composite attributes can be divided into smaller subparts. These subparts represent
basic attributes with independent meanings of their own. For example, take Name, we
can divide it into sub-parts like First name, Middle name, and Last name.
Single-valued and multi-valued attributes
Attributes that can have only a single value at a particular instant of time are called
single-valued. A person cannot have more than one age value; therefore, the age of a
person is a single-valued attribute.
A multi-valued attribute can have more than one value at one time. For example, degree
of a person is a multi-valued attribute since a person can have more than one degree.
Derived and Stored Attributes
An attribute whose value is calculated from other (stored) attributes is called a
derived attribute.
There may be a case when two or more attribute values are related. Take the example
of age: the age of a person can be calculated from the person's date of birth and the
present date. The difference between the two gives the value of age. In this case, age
is the derived attribute and date of birth is a stored attribute.
Key Attributes
Primary Key – An attribute (or combination of attributes) that is used by the database
designer for the unique identification of each row in a table is known as a primary key.
Secondary Key – An attribute other than the primary key that is used for the
identification of rows in a table is known as a secondary key or an alternate key.
Super Key – Any set of one or more attributes that, taken together, uniquely identifies
each row in a table is known as a super key.
Candidate Key – A minimal super key, one from which no attribute can be removed
without losing uniqueness, is called a candidate key; candidate keys are used to verify
the functional dependency of non-key attributes (to be discussed in the Normalization
session).
Foreign Key – A foreign key is an attribute or combination of attributes in one base
table that points to a candidate key (generally the primary key) of another table. The
purpose of the foreign key is to ensure referential integrity of the data, i.e., only
values that are supposed to appear in the database are permitted. It is also called a
reference key.
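Referential integrity can be demonstrated with a minimal sketch using Python's sqlite3 module (the customer and account names here are hypothetical, not from the text): the DBMS rejects a foreign key value that has no matching primary key.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite enforces foreign keys only when enabled
conn.execute("CREATE TABLE customer (cust_id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("""CREATE TABLE account (
    number  INTEGER PRIMARY KEY,
    cust_id INTEGER NOT NULL REFERENCES customer(cust_id),  -- the foreign key
    balance INTEGER)""")

conn.execute("INSERT INTO customer VALUES (1, 'Hayes')")
conn.execute("INSERT INTO account VALUES (900, 1, 55)")  # OK: customer 1 exists

rejected = False
try:
    conn.execute("INSERT INTO account VALUES (901, 99, 10)")  # no customer 99
except sqlite3.IntegrityError:
    rejected = True  # referential integrity violation caught by the DBMS
print(rejected)
```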
3) Systematic Treatment of Null Values: Null values (distinct from empty character
string or a string of blank characters and distinct from zero or any other number) are
supported in the fully relational DBMS for representing missing information in a
systematic way, independent of data type.
4) Dynamic On-line Catalog Based on the Relational Model: The database description
is represented at the logical level in the same way as ordinary data, so authorized
users can apply the same relational language to its interrogation as they apply to
regular data.
5) Comprehensive Data Sublanguage Rule: A relational system may support several
languages and various modes of terminal use (for example, the fill-in-blanks mode).
However, there must be at least one language whose statements are expressible,
per some well-defined syntax, as character strings, and whose ability to support all of
the following is comprehensive: data definition, view definition, data manipulation
(interactive and by program), integrity constraints, and transaction boundaries
(begin, commit, and rollback). In practice all commercial relational databases use
forms of the standard SQL (Structured Query Language) as their supported
comprehensive language.
6) View Updating Rule: All views that are theoretically updateable are also updateable
by the system.
7) High-level Insert, Update, and Delete: The capability of handling a base relation or a
derived relation as a single operand applies not only to the retrieval of data but also
to the insertion, update, and deletion of data.
8) Physical Data Independence: Application programs and terminal activities remain
logically unimpaired whenever any changes are made in either storage
representation or access methods.
9) Logical Data Independence: Application programs and terminal activities remain
logically unimpaired when information-preserving changes of any kind that
theoretically permit unimpairment are made to the base tables.
10) Integrity Independence: Integrity constraints specific to a particular relational
database must be definable in the relational data sublanguage and storable in the
catalogue, not in the application programs.
A minimum of the following two integrity constraints must be supported:
a. Entity integrity: No components of a primary key are allowed to have a null
value.
b. Referential integrity: For each distinct non-null foreign key value in a relational
database, there must exist a matching primary key value from the same domain.
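Both constraints can be observed in any SQL DBMS. Here is a minimal sketch of entity integrity using Python's sqlite3 with a hypothetical student table: an insert with a null primary key component is rejected.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# NOT NULL is spelled out because, for historical reasons, SQLite otherwise
# allows NULL in non-INTEGER primary key columns.
conn.execute("CREATE TABLE student (stud_code TEXT PRIMARY KEY NOT NULL, name TEXT)")

rejected = False
try:
    # Entity integrity: no component of a primary key may be null
    conn.execute("INSERT INTO student VALUES (NULL, 'Smith')")
except sqlite3.IntegrityError:
    rejected = True
print(rejected)
```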
11) Distribution Independence: A relational DBMS has distribution independence.
Distribution independence implies that users should not have to be aware of
whether a database is distributed.
12) Non-subversion Rule: If a relational system has a low-level (single-record-at-a-time)
language, that low-level language cannot be used to subvert or bypass the integrity
rules or constraints expressed in the higher-level (multiple-records-at-a-time)
relational language.
Note: There is a rider to these 12 rules known as Rule Zero: "For any system that is
claimed to be a relational database management system, that system must be able to
manage data entirely through its relational capabilities."
Normalization
Normalization is the process of organizing data in a database. This includes creating
tables and establishing relationships between those tables according to rules designed
both to protect the data and to make the database more flexible by eliminating
redundancy and inconsistent dependency.
Redundant data wastes disk space and creates maintenance problems. If data that exists
in more than one place must be changed, the data must be changed in exactly the same
way in all locations. A customer address change is much easier to implement if that data
is stored only in the Customers table and nowhere else in the database.
What is an "inconsistent dependency"? While it is intuitive for a user to look in the
Customers table for the address of a particular customer, it may not make sense to look
there for the salary of the employee who calls on that customer. The employee's salary
is related to, or dependent on, the employee and thus should be moved to the
Employees table. Inconsistent dependencies can make data difficult to access because
the path to find the data may be missing or broken.
There are a few rules for database normalization. Each rule is called a "normal form." If
the first rule is observed, the database is said to be in "first normal form." If the first
three rules are observed, the database is considered to be in "third normal form."
Although other levels of normalization are possible, third normal form is considered the
highest level necessary for most applications.
As with many formal rules and specifications, real world scenarios do not always allow
for perfect compliance. In general, normalization requires additional tables and some
customers find this cumbersome. If you decide to violate one of the first three rules of
normalization, make sure that your application anticipates any problems that could
occur, such as redundant data and inconsistent dependencies.
Normalization is a design technique that is widely used as a guide in designing relational
databases. Normalization is essentially a two-step process that puts data into tabular
form by removing repeating groups and then removes duplicated data from the
relational tables.
Normalization is a body of rules addressing analysis and conversion of data structures
into relations that exhibit more desirable properties of internal consistency, minimal
redundancy, and maximum stability. If a normalized design were implemented directly,
without any change, as the physical model, it would have the following properties:
Data storage requirements would be minimized since the normalization process
systematically removes duplication of data.
Since data items are stored in the minimum number of places, the chances of
data inconsistencies would be minimized.
Normalization structures would be optimal for updates (insert, update and
delete) at the expense of retrieval. Since data items exist at the minimal number
of places, an update operation (insert, update and delete) would need to access
the minimum amount of data.
Edgar F. Codd originally established three normal forms: 1NF, 2NF and 3NF. There are
now others that are generally accepted, but 3NF is sufficient for most practical
applications. Normalizing beyond that point rarely yields enough benefit to warrant the
added complexity.
Functional Dependencies
The concept of functional dependencies is the basis for the first three normal forms. A
column, Y, of the relational table R is said to be functionally dependent upon column X
of R if and only if each value of X in R is associated with precisely one value of Y at any
given time. X and Y may be composite. Saying that column Y is functionally dependent
upon X is the same as saying the values of column X identify the values of column Y. If
column X is a primary key, then all columns in the relational table R must be functionally
dependent upon X.
In the example
Stud_Code   Stud_Name   Address
K105        Smith       501, Down Street, Los Angeles
K106        Martin      811, Berkeley Gardens, Boston
the attribute Stud_Name is functionally dependent on the attribute Stud_Code, because
a value of the latter determines the value of the former. Similarly, Address is also
functionally dependent on Stud_Code.
Full functional dependency applies to tables with composite keys. Column Y in
relational table R is fully functionally dependent on X of R if it is functionally
dependent on X and not functionally dependent upon any proper subset of X. Full
functional dependence means that
when a primary key is composite, made of two or more columns, then the other
columns must be identified by the entire key and not just some of the columns that
make up the key.
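The definition above translates directly into a check over a table's rows. A minimal sketch in plain Python (the student rows mirror the example table; the function name is our own): Y is functionally dependent on X when no X value is associated with two different Y values.

```python
def functionally_dependent(rows, x_cols, y_cols):
    """Return True if the columns in y_cols are functionally dependent on
    the columns in x_cols: each X value maps to exactly one Y value."""
    seen = {}
    for row in rows:
        x = tuple(row[c] for c in x_cols)
        y = tuple(row[c] for c in y_cols)
        if seen.setdefault(x, y) != y:
            return False  # same X associated with two different Y values
    return True

students = [
    {"Stud_Code": "K105", "Stud_Name": "Smith",  "Address": "501, Down Street, Los Angeles"},
    {"Stud_Code": "K106", "Stud_Name": "Martin", "Address": "811, Berkeley Gardens, Boston"},
]
print(functionally_dependent(students, ["Stud_Code"], ["Stud_Name"]))  # True
```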
We now proceed with the rules of normalization.
The following descriptions include examples.
EXCEPTION: Adhering to the third normal form, while theoretically desirable, is not
always practical. If you have a Customers table and you want to eliminate all possible
interfield dependencies, you must create separate tables for cities, ZIP codes, sales
representatives, customer classes, and any other factor that may be duplicated in
multiple records. In theory, normalization is worth pursuing. However, many small
tables may degrade performance or exceed open-file and memory capacities.
It may be more feasible to apply third normal form only to data that changes frequently.
If some dependent fields remain, design your application to require the user to verify all
related fields when any one is changed.
Other Normalization Forms
Boyce-Codd Normal Form (BCNF), fourth normal form, and fifth normal form do exist,
but are rarely considered in practical design. Disregarding these rules may result in a
less than perfect database design, but should not affect functionality.
Normalizing an Example Table
These steps demonstrate the process of normalizing a fictitious student table.
1. Unnormalized table:
Student# Advisor Adv-Room Class1 Class2 Class3
1022 Jones 412 101-07 143-01 159-02
4123 Smith 216 201-01 211-02 214-01
2. First Normal Form: No Repeating Groups
Tables should have only two dimensions. Since one student has several classes, these
classes should be listed in a separate table. Fields Class1, Class2, and Class3 in the above
records are indications of design trouble.
Spreadsheets often use the third dimension, but tables should not. Another way to look
at this problem is with a one-to-many relationship: do not put the one side and the
many side in the same table. Instead, create another table in first normal form by
eliminating the repeating group (Class#), as shown below:
Student# Advisor Adv-Room Class#
1022 Jones 412 101-07
1022 Jones 412 143-01
1022 Jones 412 159-02
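The move to first normal form can be sketched in plain Python using the student table above: each value of the repeating group becomes its own row.

```python
# Unnormalized rows: the repeating group Class1..Class3 violates 1NF.
unnormalized = [
    {"Student#": 1022, "Advisor": "Jones", "Adv-Room": 412,
     "Classes": ["101-07", "143-01", "159-02"]},
    {"Student#": 4123, "Advisor": "Smith", "Adv-Room": 216,
     "Classes": ["201-01", "211-02", "214-01"]},
]

# First normal form: one row per (student, class) pair, no repeating group.
first_nf = [
    {"Student#": r["Student#"], "Advisor": r["Advisor"],
     "Adv-Room": r["Adv-Room"], "Class#": c}
    for r in unnormalized for c in r["Classes"]
]
print(len(first_nf))  # 6 rows: three per student
```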
Those of us who have an ordered mind but aren't quite aware of relational databases
might try to capture the Invoice data in a spreadsheet, such as Microsoft Excel.
Figure A-1: orders spreadsheet
This isn't a bad approach, since it records every purchase made by every customer. But
what if you started to ask complicated questions, such as:
How many 3" Red Freens did Freens R Us order in 2002?
What are total sales of 56" Blue Freens in the state of Texas?
What items were sold on July 14, 2003?
As the spreadsheet grows it becomes increasingly difficult to ask the spreadsheet these
questions. In an attempt to put the data into a state where we can reasonably expect to
answer such questions, we begin the normalization process.
First Normal Form: No Repeating Elements or Groups of Elements
Take a look at rows 2, 3 and 4 on the spreadsheet in Figure A-1. These represent all the
data we have for a single invoice (Invoice #125).
In database lingo, this group of rows is referred to as a single database row. Never mind
the fact that one database row is made up here of three spreadsheet rows: It's an
unfortunate ambiguity of language. Academic database theoreticians have a special
word that helps a bit with the ambiguity: they refer to the "thing" encapsulated by rows
2, 3 and 4 as a tuple (pronounced tu'ple or too'ple). We're not going to use that word
here (and if you're lucky, you'll never hear it again for the rest of your life). Here, we will
refer to this entity as a row.
So, First Normal Form (NF1) wants us to get rid of repeating elements. What are those?
Again we turn our attention to the first invoice (#125) in Figure A-1. Cells H2, H3, and H4
contain a list of Item ID numbers. This is a column within our first database row.
Similarly, I2-I4 constitute a single column; same with J2-J4, K2-K4, L2-L4, and M2-M4.
Database columns are sometimes referred to as attributes (rows/columns are the same
as tuples/attributes).
You will notice that each of these columns contains a list of values. It is precisely these
lists that NF1 objects to: NF1 abhors lists or arrays within a single database column. NF1
craves atomicity: the indivisibility of an attribute into similar parts.
Therefore it is clear that we have to do something about the repeating item information
data within the row for Invoice #125. On Figure A-1, that is the following cells:
H2 through M2
H3 through M3
H4 through M4
Similar (though not necessarily identical) data repeats within Invoice #125's row. We can
satisfy NF1's need for atomicity quite simply: by separating each item in these lists into
its own row.
Figure A-2: flattened orders spreadsheet
I can hear everyone objecting: We were trying to reduce the amount of duplication, and
here we have introduced more! Just look at all that duplicated customer data!
Don't worry. The kind of duplication that we introduce at this stage will be addressed
when we get to the Third Normal Form. Please be patient; this is a necessary step in the
process.
We have actually only told half the story of NF1. Strictly speaking, NF1 addresses two
issues:
1. A row of data cannot contain repeating groups of similar data (atomicity); and
2. Each row of data must have a unique identifier (or Primary Key).
We have already dealt with atomicity. But to make the point about Primary Keys, we
shall bid farewell to the spreadsheet and move our data into a relational database
management system (RDBMS). Here we shall use Microsoft Access to create the orders
table, as in Figure B:
Figure B: orders table
This looks pretty much the same as the spreadsheet, but the difference is that within an
RDBMS we can identify a primary key. A primary key is a column (or group of columns)
that uniquely identifies each row.
As you can see from Figure B, there is no single column that uniquely identifies each
row. However, if we put a number of columns together, we can satisfy this requirement.
The two columns that together uniquely identify each row are order_id and item_id: no
two rows have the same combination of order_id and item_id. Therefore, together they
qualify to be used as the table's primary key. Even though they are in two different table
columns, they are treated as a single entity. We call them concatenated.
A value that uniquely identifies a row is called a primary key.
When this value is made up of two or more columns, it is referred to as a concatenated
primary key.
The underlying structure of the orders table can be represented as Figure C:
We identify the columns that make up the primary key with the PK notation. Figure C is
the beginning of our Entity Relationship Diagram (or ERD).
Our database schema now satisfies the two requirements of First Normal Form:
atomicity and uniqueness. Thus it fulfills the most basic criterion of a relational
database.
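A concatenated primary key can be declared directly in SQL. A minimal sketch using Python's sqlite3, with hypothetical order data: the DBMS rejects a second row with the same (order_id, item_id) combination, which is exactly the uniqueness guarantee NF1 requires.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# order_id and item_id together form a concatenated (composite) primary key.
conn.execute("""CREATE TABLE orders (
    order_id INTEGER,
    item_id  INTEGER,
    item_qty INTEGER,
    PRIMARY KEY (order_id, item_id))""")

conn.execute("INSERT INTO orders VALUES (125, 1, 10)")
conn.execute("INSERT INTO orders VALUES (125, 2, 5)")   # same order, new item: OK

duplicate = False
try:
    conn.execute("INSERT INTO orders VALUES (125, 1, 99)")  # same combination twice
except sqlite3.IntegrityError:
    duplicate = True
print(duplicate)
```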
What's next?
Figure C: orders table structure
Second Normal Form: No Partial Dependencies on the Concatenated Key
Second Normal Form tests every column that is not part of the primary key against each
part of the concatenated primary key: can the column exist independently of one part
of the key? Start with order_date. The short answer is yes: order_date relies on
order_id, not item_id. Some of you might
object, thinking that this means you could have a dated order with no items (an empty
invoice, in effect). But this is not what we are saying at all: All we are trying to establish
here is whether a particular order on a particular date relies on a particular item.
Clearly, it does not. The problem of how to prevent empty orders falls under a
discussion of "business rules" and could be resolved using check constraints or
application logic; it is not an issue for Normalization to solve.
Therefore: order_date fails Second Normal Form.
Our table has already failed Second Normal Form. But let's continue with testing the
other columns. We have to find all the columns that fail the test, and then we do
something special with them.
customer_id is the ID number of the customer who placed the order. Does it rely on
order_id? No: a customer can exist without placing any orders. Does it rely on item_id?
No: for the same reason. This is interesting: customer_id (along with the rest of the
customer_* columns) does not rely on either member of the primary key. What do we
do with these columns?
We don't have to worry about them until we get to Third Normal Form. We mark them
as "unknown" for now.
item_description is the next column that is not itself part of the primary key. This is the
plain-language description of the inventory item. Obviously it relies on item_id. But can
it exist without an order_id?
Yes! An inventory item (together with its "description") could sit on a warehouse shelf
forever, and never be purchased... It can exist independent of an order.
item_description fails the test.
item_qty refers to the number of items purchased on a particular invoice. Can this
quantity exist without an item_id? Impossible: we cannot talk about the "amount of
nothing" (at least not in database design). Can the quantity exist without an order_id?
No: a quantity that is purchased with an invoice is meaningless without an invoice. So
this column does not violate Second Normal Form: item_qty depends on both parts of
our concatenated primary key.
item_price is similar to item_description. It depends on the item_id but not on the
order_id, so it does violate Second Normal Form.
item_total_price is a tricky one. On the one hand, it seems to depend on both order_id
and item_id, in which case it passes Second Normal Form. On the other hand, it is a
derived value: it is merely the product of item_qty and item_price. What to do with this
field?
In fact, this field does not belong in our database at all. It can easily be reconstructed
outside of the database proper; to include it would be redundant (and could quite
possibly introduce corruption). Therefore we will discard it and speak of it no more.
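Discarding item_total_price loses no information, because a query can reconstruct it whenever it is needed. A minimal sketch using Python's sqlite3, with hypothetical prices and quantities:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE order_items (
    order_id INTEGER, item_id INTEGER, item_qty INTEGER, item_price REAL,
    PRIMARY KEY (order_id, item_id))""")
conn.executemany("INSERT INTO order_items VALUES (?, ?, ?, ?)",
                 [(125, 1, 10, 2.5), (125, 2, 4, 8.0)])

# item_total_price is computed in the query, not stored in the table.
totals = conn.execute(
    "SELECT order_id, item_id, item_qty * item_price AS item_total_price "
    "FROM order_items ORDER BY item_id").fetchall()
print(totals)  # [(125, 1, 25.0), (125, 2, 32.0)]
```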
Figure C:
order_total_price, the sum of all the item_total_price fields for a particular order, is
another derived value. We discard this field too.
Here is the markup from our NF2 analysis of the orders table:
Figure C (revised):
What do we do with a table that fails Second Normal Form, as this one has? First we
take out the second half of the concatenated primary key (item_id) and put it in its own
table.
All the columns that depend on item_id - whether in whole or in part - follow it into the
new table. We call this new table order_items (see Figure D).
The other fields - those that rely on just the first half of the primary key (order_id) and
those we aren't sure about - stay where they are.
Figure D: orders and order_items tables
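The decomposition just described can be sketched in plain Python with hypothetical flattened rows: columns that depend on order_id alone stay in orders, while columns that depend on the whole key move to order_items.

```python
# Flattened 1NF rows (hypothetical data in the spirit of the figures).
flat = [
    {"order_id": 125, "order_date": "2003-07-14", "item_id": 1, "item_qty": 10},
    {"order_id": 125, "order_date": "2003-07-14", "item_id": 2, "item_qty": 5},
    {"order_id": 126, "order_date": "2003-07-15", "item_id": 1, "item_qty": 3},
]

# orders keeps the columns that depend on order_id alone...
orders = {r["order_id"]: {"order_date": r["order_date"]} for r in flat}
# ...while order_items keeps the columns that depend on the whole key.
order_items = [{"order_id": r["order_id"], "item_id": r["item_id"],
                "item_qty": r["item_qty"]} for r in flat]

print(len(orders), len(order_items))  # 2 orders, 3 order-item rows
```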
Figure F:
...Take the fields that fail NF2, and create a new table. We call this new table items:
But wait, something's wrong. When we did our first pass through the NF2 test, we took
out all the fields that relied on item_id and put them into the new table. This time, we
are only taking the fields that failed the test: in other words, item_qty stays where it is.
Why? What's different this time?
The difference is that in the first pass, we removed the item_id key from the orders
table altogether, because of the one-to-many relationship between orders and order-
items. Therefore the item_qty field had to follow item_id into the new table.
In the second pass, item_id was not removed from the order_items table because of
the many-to-one relationship between order-items and items. Therefore, since item_qty
does not violate NF2 this time, it is permitted to stay in the table with the two primary
key parts that it relies on.
Figure G: order_items and items table
This should be clearer with a new ERD. Here is how the items table fits into the overall
database schema:
The line that connects the items and order_items tables means the following:
Each item can be associated with any number of lines on any number of invoices,
including zero;
each order-item is associated with one item, and only one.
Figure H:
These two lines are examples of one-to-many relationships. The three-table structure,
considered as a whole, is how we express a many-to-many relationship:
Each order can have many items; each item can belong to many orders.
Notice that this time, we did not bring a copy of the order_id column into the new table.
This is because individual items do not need to have knowledge of the orders they are
part of. The order_items table takes care of remembering this relationship via the
order_id and item_id columns. Taken together these columns comprise the primary key
of order_items, but taken separately they are foreign keys or pointers to rows in other
tables. More about foreign keys when we get to Third Normal Form.
Notice, too, that our new table does not have a concatenated primary key, so it passes
NF2. At this point, we have succeeded in attaining Second Normal Form!
We have to restore the relationship by creating an entity called a foreign key (indicated
in our diagram by (FK)) in the orders table. A foreign key is essentially a column that
points to the primary key in another table. Figure J describes this relationship, and
shows our completed ERD:
The relationship that has been established between the orders and customers table
may be expressed in this way:
each order is made by one, and only one customer;
each customer can make any number of orders, including zero.
Figure J:
And finally, here is what the data in each of the four tables looks like. Notice that NF3
removed columns from a table, rather than rows.
Figure K:
EXERCISE
Normalize the following database and design the logical relations:
1. Employee Code
2. Employee First Name
3. Employee Surname
4. Department Code
5. Job Title
6. Sex
7. Grade Code
8. Grade Description
9. Minimum Basic
10. Maximum Basic
11. Increment Step
12. Current Basic
13. Department Name
14. Department Location
15. Year of Birth
16. Year of Joining
17. Boss’s Employee Code
18. Training Course Code
19. Training Topic
20. Training Duration in Days
21. Training Year
22. Training Cost
Note: Fields 18 to 22 repeated as many times as the number of training
programs attended by an employee.
Introduction to Oracle
Origin of the name ORACLE
IBM is an abbreviation for International Business Machines and SUN is an acronym for
Stanford University Network (or Networking). However, the word ORACLE is neither an
abbreviation nor an acronym (see an English dictionary for the meaning of the word
oracle). It appears in uppercase because this is the branding style Oracle Corp. chose for
itself.
The word Oracle means:
Prophecy or prediction; answer to a question believed to come from the gods; a
statement believed to be infallible and authoritative; a shrine at which these
answers are given
There is, however, more to how the word Oracle came to be the name of the database
engine, and then later of the company itself. Larry Ellison and Bob Miner were working on a
consulting project for the CIA (Central Intelligence Agency in USA) where the CIA wanted
to use this new SQL language that IBM had written a white paper about. The code name
for the project was Oracle (the CIA saw it as the system that would give all answers to
all questions, or some such).
The project eventually died (of sorts) but Larry and Bob saw the opportunity to take
what they had started and market it. So they used that project's codename of Oracle to
name their new RDBMS engine. Funny thing is, one of Oracle's first customers was
the CIA...
Oracle Database, the commercial relational database management system from Oracle
Corporation, is arguably the most powerful and feature-rich database on the market.
Oracle Corporation was founded in 1977 in Redwood, California. They introduced the
first Relational Database Management System based on the IBM System/R model and
the first database management system utilizing IBM's Structured Query Language (SQL)
technology.
Today, the Oracle DBMS is supported on over 80 different operating environments,
ranging from IBM mainframes and DEC VAX minicomputers to UNIX-based
minicomputers, Windows NT, and several proprietary hardware/operating-system
platforms; Oracle is clearly the world's largest RDBMS vendor.
Oracle employs more than 42,000 professionals in 93 countries around the world. Their
expenditure for research and development is approximately 13% of their revenues.
Larry Ellison founded Software Development Laboratories in 1977. In 1979 SDL changed
its company name to Relational Software, Inc. (RSI) and introduced its product Oracle
V2, an early commercially available relational database system. The version did not
support transactions, but implemented the basic SQL functionality of queries and
joins. There was no version 1; instead, the first version was called version 2 as a
marketing strategy.
In 1983, RSI was renamed Oracle Corporation to more closely align itself with its flagship
product. Oracle version 3 was released; it had been re-written in the C programming
language and supported commit and rollback transaction functionality. Platform
support was extended to UNIX with this version; until then, Oracle had run on Digital
VAX/VMS systems.
Lawrence Joseph (Larry) Ellison (Born: 1944, Chicago) is president and CEO of Oracle
Corporation. He's the Oracle world's hero, and he should be. Oracle Corporation, the
company he founded with Robert N. (Bob) Miner and Edward A. (Ed) Oates back in
1977, has emerged as the world's largest vendor of software that helps large
corporations and governments better manage their information.
Bruce Scott was one of the first employees at Oracle (then Software Development
Laboratories). He co-founded Gupta Technology (now known as Centura Software) in
1984 with Umang Gupta, and later became CEO and founder of PointBase, Inc.
Bruce was co-author and co-architect of Oracle V1, V2 and V3. The SCOTT schema (EMP
and DEPT tables), with password TIGER, was created by him. Tiger was the name of his
cat.
On June 16, 1903, Ford Motor Company was incorporated and the Industrial Revolution
started to hit a major acceleration point. On June 16, 1977, a similar event occurred
with the incorporation of Oracle Corporation (then called Software Development
Laboratories, or SDL).
America is all about freedom, resilience and opportunity. Larry Ellison and Bob Miner
are examples of what is possible in a free society. Larry's surname is even based on Ellis
Island. Larry's entrepreneurial success story shows that anything is possible where
people enjoy freedom and an entrepreneurial spirit burns. On the Statue of Liberty it
reads: "Give me your tired, your poor, your huddled masses yearning to breathe free, the
wretched refuse of your teeming shore. Send these, the homeless, tempest-tost to me, I
lift my lamp beside the golden door!" That golden door eventually led Larry to the
Golden Gate Bridge and the establishment of Oracle Corporation in Silicon Valley in
1977.
The early years at Oracle through the eyes of Bruce Scott
Prior to forming Oracle, Bob Miner was Larry Ellison's manager where they worked at
Ampex together on a CIA project code-named "Oracle." Larry chose Bob as his manager
because he liked Bob a lot more than his original manager. Ed Oates, another founder of
Oracle, happened to be walking by Bob Miner's door when Larry Ellison mentioned his
(Larry's) wife's name. It turned out to be Ed Oates' lab partner from high school. Bruce
Scott, who would be hired upon the formation of the company, is the "Scott" in
scott/tiger (Tiger was Bruce's daughter's cat).
When Larry went on to work at Precision Instruments, he discovered Precision
Instruments had a need to do a $400,000 consulting project. For three or four
engineers, that was a lot of money back then, since wages were about one-tenth what
they are now. Larry landed the deal. Larry was not part of the new company when it was
founded; he was still at Precision Instruments. The new company was called Software
Development Labs (SDL). It had three employees when it started in August of 1977.
Bob Miner was the president; Ed Oates and Larry were both software
engineers. They did 90% of the work on this two-year project in the first year, so they
had the next year to work on Oracle. Ed Oates finished the other 10% of the project over
the next year while Bob Miner and Larry started to write the Oracle database.
When they completed the Precision Instruments work, they had about $200,000 in the
bank. They decided they wanted to be a product company and not a consulting
company. Bob wanted to build an ISAM product for the PDP-11. He felt there was a need
for an access layer. Larry wasn't interested in that at all. Larry had been following what
IBM was doing and he found a paper on the System/R based on Codd's 1970 paper on
relational databases. It described the SQL language, which was at the time called
SEQUEL/2.
1970 -- Dr. Edgar Codd publishes his theory of relational data modeling.
1977 -- Software Development Laboratories (SDL) formed by Larry Ellison, Bob Miner, Ed
Oates and Bruce Scott with $2,000 of startup cash. Larry and Bob come from Ampex
where they were working on a CIA project code-named "Oracle." Bob and Bruce begin
work on the database.
1978 -- The CIA is the first customer, yet the product is not released commercially as of
yet. SDL changes its name to Relational Software Inc. (RSI).
1979 -- RSI ships the first commercial version, Version 2 (no V1 is shipped, on fears
that people won't buy a first version of software), of the database written in Assembler
Language. The first commercial version of the software is sold to Wright-Patterson Air
Force Base. It is the first commercial RDBMS on the market.
1981 -- The first tool, Interactive Application Facility (IAF), which is a predecessor to
Oracle's future SQL*Forms tool, is created.
1982 -- RSI changes its name to Oracle Systems Corporation (OSC) and then simplifies
the name to Oracle Corporation.
1983 -- Version 3, written in C (which makes it portable) is shipped. Bob Miner writes
half, while also supporting the Assembler-based V2, and Bruce Scott writes the other
half. It is the first 32-bit RDBMS.
1984 -- Version 4 released. First tools released (IAG-genform, IAG-runform, RPT). First
database with read consistency. Oracle ported to the PC.
1985 -- Version 5 and 5.1 are released. First Parallel Server database on VMS/VAX.
1986 -- Oracle goes public March 12 (the day before Microsoft and eight days after Sun).
The stock opens at $15 and closes at $20.75. Oracle Client/Server is introduced; the first
client/server database. Oracle 5.1 is released.
1987 -- Oracle is the largest DBMS company. Oracle Applications group started. First
SMP (symmetrical multi-processing) database introduced.
1987 -- Oracle's Rich Niemiec along with Brad Brown and Joe Trezzo working at Oracle
(now at TUSC) implement the first production client/server application running Oracle
on a souped-up 286 running 16 concurrent client/server users for NEC Corporation.
1988 -- Oracle V6 released. First row-level locking. First hot database backup. Oracle
moves from Belmont to Redwood Shores. PL/SQL introduced.
1992 -- Oracle V7 is released.
his determination to make this thing work no matter what. It's just the way Larry thinks.
I can give you an example I tell people that exemplifies his thought process: We had
space allocated to us and we needed to get our terminals strung to the computer room
next door. We didn't have anywhere to really string the wiring. Larry picks up a hammer,
crashes a hole in the middle of the wall and says there you go. It's just the way he
thinks, make a hole, make it happen somehow. It was Larry, the right thing and the right
time."
Oracle 9i
Use Oracle Ultra Search for searching databases, file systems, etc. The
UltraSearch crawler fetches data and hands it to Oracle Text to be indexed.
Oracle Nameserver is still available, but deprecated in favor of LDAP Naming
(using the Oracle Internet Directory Server). A nameserver proxy is provided for
backwards compatibility, as pre-8i clients cannot resolve names from an LDAP
server.
Oracle Parallel Server's (OPS) scalability was improved - now called Real
Application Clusters (RAC). Full Cache Fusion implemented. Any application can
scale in a database cluster. Applications don't need to be cluster-aware
anymore.
The Oracle Standby DB feature was renamed to Oracle Data Guard. New Logical
Standby databases replay SQL on the standby site, allowing the database to be
used for normal read/write operations. The Data Guard Broker allows single-step
fail-over when disaster strikes.
Scrolling cursor support. Oracle9i allows fetching backwards in a result set.
Dynamic Memory Management - Buffer pools and the shared pool can be resized
on-the-fly. This eliminates the need to restart the database each time parameter
changes are made.
On-line table and index reorganization.
VI (Virtual Interface) protocol support, an alternative to TCP/IP, available for use
with Oracle Net (SQL*Net). VI provides fast communications between
components in a cluster.
Built-in XML Developer's Kit (XDK). New data types for XML (XMLType), URIs, etc.
XML integrated with AQ.
Cost Based Optimizer now also considers memory and CPU, not only disk access
cost as before.
PL/SQL programs can be natively compiled to binaries.
Deep data protection - fine-grained security and auditing. Puts security at the DB
level. SQL access does not mean unrestricted access.
Resumable backups and statements - suspend statement instead of rolling back
immediately.
List Partitioning - partitioning on a list of values.
ETL (extract, transform, load) operations - with external tables and
pipelining.
OLAP - Express functionality included in the DB.
Data Mining - Oracle Darwin's features included in the DB.
Oracle 8i (8.1.7)
Static HTTP server included (Apache)
Oracle 8i (8.1.5)
Fast Start recovery - Checkpoint rate auto-adjusted to meet roll forward criteria
Reorganize indexes/index-only tables while users are accessing the data - online
index rebuilds
Log Miner introduced - Allows on-line or archived redo logs to be viewed via SQL
OPS Cache Fusion introduced avoiding disk I/O during cross-node
communication
Advanced Queueing improvements (security, performance, OO4O support)
User Security Improvements - more centralisation, single enterprise user,
users/roles across multiple databases.
Virtual private database
JAVA stored procedures (Oracle Java VM)
Oracle iFS
Resource Management using priorities - resource classes
Hash and Composite partitioned table types
SQL*Loader direct load API
Copy optimizer statistics across databases to ensure same access paths across
different environments.
Standby Database - Auto shipping and application of redo logs. Read-only queries
on the standby database allowed.
Enterprise Manager v2 delivered
NLS - Euro Symbol supported
Analyze tables in parallel
Temporary tables supported.
Net8 support for SSL, HTTP, IIOP protocols
Transportable tablespaces between databases
Locally managed tablespaces - automatic sizing of extents, elimination of
tablespace fragmentation, tablespace information managed in the tablespace (i.e.
moved from the data dictionary), improving tablespace reliability
Drop Column on table (Finally !!!!!)
DBMS_DEBUG PL/SQL package; DBMS_SQL replaced by the new EXECUTE
IMMEDIATE statement
Progress Monitor to track long-running DML, DDL
Functional Indexes - NLS, case insensitive, descending
Oracle 8.0 - June 1997
Object Relational database
Object Types (not just date, character, number as in V7)
SQL3 standard
Call external procedures
LOB >1 per table
Partitioned Tables and Indexes
export/import individual partitions
partitions in multiple tablespaces
online/offline, backup/recover individual partitions
merge/balance partitions
Advanced Queuing for message handling
Many performance improvements to SQL/PLSQL/OCI making more efficient use
of CPU/Memory. V7 limits extended (e.g. 1000 columns/table, 4000 bytes
VARCHAR2)
Parallel DML statements
Connection Pooling (uses the physical connection for idle users and
transparently re-establishes the connection when needed) to support more
concurrent users.
Improved "STAR" Query optimizer
Oracle Trace
Advanced Replication Object Groups
PL/SQL - UTL_FILE
Oracle 7.2
Resizable, autoextend data files
Shrink Rollback Segments manually
Create table, index UNRECOVERABLE
Subquery in FROM clause
PL/SQL wrapper
PL/SQL Cursor variables
Checksums - DB_BLOCK_CHECKSUM, LOG_BLOCK_CHECKSUM
Parallel create table
Job Queues - DBMS_JOB
DBMS_SPACE
DBMS Application Info
Sorting Improvements - SORT_DIRECT_WRITES
Oracle 7.1
ANSI/ISO SQL92 Entry Level
Advanced Replication - Symmetric Data replication
Snapshot Refresh Groups
Parallel Recovery
Dynamic SQL - DBMS_SQL
Parallel Query Options - query, index creation, data loading
Server Manager introduced
Read Only tablespaces
Oracle 5.1
Distributed queries
Oracle 4 - 1984
Read consistency
Oracle 3 - 1981
Supports COMMIT and ROLLBACK of transactions
Re-written in the C Programming Language
Oracle 2 - 1979
First public release
basic SQL functionality of queries and joins
Oracle SQLJ lets applications programmers embed static SQL operations in Java code in
a way that is compatible with the Java design philosophy. A SQLJ program is a Java
program containing embedded static SQL statements that comply with the ANSI-
standard SQLJ Language Reference syntax.
Although some Oracle tools and applications simplify or mask SQL use, all database
operations are performed using SQL. Any other data access method circumvents the
security built into Oracle and potentially compromises data security and integrity.
Oracle was the first company to release a product that used English-based Structured
Query Language. This language allowed end users to extract information themselves,
without using a systems group for every little report.
Oracle’s language has structure, just as English or any other language has structure. It
has rules of grammar and syntax, but they are basically the normal rules of careful
English speech and can be readily understood.
SQL was invented and developed by IBM in the early 1970s. IBM was able to
demonstrate how to control relational databases using SQL. Oracle Corporation
implemented this ANSI-standard SQL in full. SQL is the Oracle database language used
for storing and retrieving information in Oracle. A table is the primary database object
of SQL that is used to store data. A table holds data in the form of rows
and columns.
In order to communicate with the database SQL supports the following categories of
commands.
1. Data Definition Language
(create, alter, drop commands)
2. Data Manipulation Language
(insert, select, delete and update commands)
3. Transaction Control Language
(commit, savepoint, and rollback commands)
4. Data Control Language
(grant and revoke commands )
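As a brief, hedged illustration of each category (the table, column and user names here are assumptions chosen for the example, not from the text):

```sql
-- 1. Data Definition Language: define or change structure
CREATE TABLE customer (
  cust_id NUMBER PRIMARY KEY,
  name    VARCHAR2(40)
);
ALTER TABLE customer ADD (city VARCHAR2(30));

-- 2. Data Manipulation Language: work with the rows
INSERT INTO customer (cust_id, name) VALUES (1, 'Smith');
UPDATE customer SET city = 'Hyderabad' WHERE cust_id = 1;
SELECT name, city FROM customer;
DELETE FROM customer WHERE cust_id = 1;

-- 3. Transaction Control Language: control the unit of work
SAVEPOINT before_changes;
ROLLBACK TO before_changes;
COMMIT;

-- 4. Data Control Language: grant and take away access
GRANT SELECT ON customer TO scott;
REVOKE SELECT ON customer FROM scott;
```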
Advantages of SQL:
1. It is a non-procedural language: sets of records can be accessed, rather than
one record at a time.
2. It is the common language for all relational databases. In other words it is
portable and it requires very few modifications so that it can work on other
databases.
3. Very simple commands for querying, inserting, deleting, and modifying data and
objects.
SQL*Plus (User-Friendly Interface): It is a powerful Oracle product that can take your
instructions for Oracle, check them for correctness, submit them to Oracle, and then
modify or reformat the response Oracle gives back, based on the orders or directions
that you have set in place.
It may be a little confusing at first to understand the difference between what SQL*Plus
is doing and what Oracle is doing, especially since the error messages that Oracle
produces are simply passed on to you by SQL*Plus. In SQL*Plus, case is irrelevant.
With Oracle 6, a new tool called SQL*DBA was introduced, which took over many of the
database administration responsibilities, such as backup and recovery, and startup and
shutdown.
Oracle 7, 8, 9 provide a new interface called Enterprise Manager to replace SQL*DBA as
a database management tool.
Through SQL*Plus we can store, retrieve, edit, enter and run SQL commands and
PL/SQL blocks. Using SQL*Plus we can perform calculations, list column definitions for
any table, and format query results in the form of a report.
SQL*Plus is most commonly used for simple queries and printing reports. Getting
SQL*Plus to format information in reports according to your needs requires only a
handful of commands or keywords that instruct SQL*Plus how to behave.
SQL*Plus also has a tiny built-in editor of its own, sometimes called the command-line
editor, which allows you to quickly modify a SQL query without leaving SQL*Plus.
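As a sketch of how this command-line editor is commonly used (the table and column names are illustrative only), the statement currently held in the buffer can be listed, changed, and re-run:

```sql
SQL> SELECT ename FROM emp WHERE deptno = 10;  -- runs and enters the buffer
SQL> l                          -- List the statement in the buffer
SQL> c/deptno = 10/deptno = 20  -- Change text on the current buffer line
SQL> /                          -- re-run the modified statement
```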
SQL*Plus Commands and Their Descriptions
Remark: Tells SQL*Plus that the words to follow are to be treated as comments, not
instructions.
Set headsep: The heading separator identifies the single character that tells SQL*Plus to
split a title onto two or more lines.
Ttitle: Sets the top title for each page of a report.
Btitle: Sets the bottom title for each page of a report.
Column: Gives SQL*Plus a variety of instructions on the heading, format, and treatment
of a column.
Break on: Tells SQL*Plus where to put spaces between sections of a report, or where to
break for subtotals and totals.
Compute sum: Makes SQL*Plus calculate subtotals.
/* */: Marks the beginning and ending of a comment within a SQL entry. This command
is similar to Remark.
--: Marks the beginning of an inline comment within a SQL entry. Everything from the
mark to the end of the line is treated as a comment. This command is similar to Remark.
Set pause: Makes the screen display stop between pages of display.
Save: Saves the SQL query you are creating into a file of your own choice.
Host: Sends any command to the host operating system.
Start: Tells SQL*Plus to follow (Execute) the instructions you have saved in a file.
Edit: Pops you out of SQL*Plus and into an editor of your choice.
Examples
The commands above are typically gathered into a start file such as the following report
script (the headsep character and the compute column are reconstructed here as a
representative example):
rem name: activity.sql  type: start file report
set headsep !
ttitle 'Sales by Product during 2001'
btitle 'by G. B. Talbolt'
column item heading 'What Was Sold'
column item format a18
column item truncated
column rate format 90.99
column ext format 990.99
break on item skip 2
compute sum of ext on item
set linesize 79
set pagesize 24
SQL vs SQL*Plus
SQL is a standard structured query language common to all relational databases. SQL is
the database language used for storing and retrieving data from the database. Most
relational database management systems provide extensions to SQL to make it easy for
application developers. SQL*Plus is an Oracle-specific program which accepts SQL
commands and PL/SQL blocks and executes them. SQL*Plus enables manipulation of
SQL commands and PL/SQL blocks. It also performs many additional tasks, such as:
1. Enter, edit, store, retrieve, and run SQL commands and PL/SQL blocks.
2. Format, perform calculations, store, and print query results in the form of
reports.
3. List column definitions for any table
4. Access and copy data between SQL databases
5. Send messages to and accept responses from an end user
Advantages of choosing Oracle 9i as the RDBMS for effectively managing data:
1. Ability to retrieve data spread across multiple tables.
2. Oracle-specific SQL*Plus functions can be used when required to query the database,
especially to decide a future course of action.
Recursive SQL
When a DDL statement is issued, Oracle implicitly issues recursive SQL statements that
modify data dictionary information. Users need not be concerned with the recursive SQL
internally performed by Oracle.
hundreds of times faster than disks. Don't get burned by tiny pools. See the Google
search "oracle cache disk speed" for details.
Disk storage is non-volatile. This means that the data stored on a disk will remain after
the power is turned off. Disks are always slower than RAM, but disks are hundreds of
times cheaper than RAM.
There is a trade-off between memory and disks: Memory is fast but expensive (about
$1,000 per gigabyte), whereas disks are slower but very cheap. Thus, memory is used for
short-term storage of information that is frequently needed and disks are used for long-
term storage of information.
Oracle has a number of memory areas that it uses to store information. In this section
we will address the main Oracle memory areas. They are called:
* The System Global Area (SGA) – RAM areas for the Oracle programs.
* The Program Global Area (PGA) – Private RAM areas for individual client connections
to Oracle
Oracle segregates “physical components” (the .dbf files on disk) by mapping them into
“logical” containers called tablespaces. In turn, we allocate our tables and indexes
inside these tablespaces, and Oracle takes care of the interface to the physical disk files.
In this section we will look at the following logical database elements.
* Tablespaces (not completely a logical thing actually!)
* Blocks
* Extents
* Segments
We will then give you a big-picture summary of the relationships between these logical
objects. Note that this section just provides you with some basic concepts. We will get
into much more detail in the chapters to come!
* PL/SQL Area - Used to hold parsed and compiled PL/SQL program units, allowing the
execution plans to be shared by many users.
* Control Structures - Common control structure information, for example, lock
information
The dictionary cache stores “metadata” (data about your tables and indexes) and it’s
also known as the row cache. It is used to cache data dictionary related information in
RAM for quick access. The dictionary cache is like the buffer cache, except it’s for Oracle
data dictionary information instead of user information. We will discuss the data
dictionary later in this book.
As with the database buffer cache, the shared pool is critical to performance. Later in
this book we will discuss the concept of Oracle SQL statement reuse. Reusability is a
concept that is very important when it comes to performance relating to the shared
pool!
Thus far we have discussed Oracle’s in-memory storage of data, SQL and control
structures but there is one other very important SGA structure to be mentioned, the
redo log buffer.
The Redo Log Buffer
The redo log buffer is a RAM area (defined by the initialization parameter log_buffer)
that saves changes to data so that, if something fails, Oracle can re-apply those changes
and bring the database back to a consistent state during recovery. When Oracle SQL
updates a table (a process called Data Manipulation Language, or DML), redo images
are created and stored in the redo log buffer. Since RAM is faster than disk, this makes
the storage of redo very fast.
Oracle will eventually flush the redo log buffer to disk. This can happen in a number of
special cases, but what’s really important is that Oracle guarantees that the redo log
buffer will be flushed to disk after a commit operation occurs. When you make changes
in the database you must commit them to make them permanent and visible to other
users.
Since RAM is wiped out if you lose power to the computer, all the redo data in the redo
buffer would be lost in a power outage. To protect against this problem, a commit asks
Oracle to save the redo to disk, which is permanent. The redo log disk files are called
online redo logs.
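As a small, hedged sketch of the ideas above (the emp table is illustrative; v$parameter is a standard Oracle dynamic performance view), you can inspect the redo log buffer size and see that a commit is what makes buffered redo permanent:

```sql
-- Inspect the size (in bytes) of the redo log buffer
SELECT name, value
  FROM v$parameter
 WHERE name = 'log_buffer';

-- DML generates redo images in the redo log buffer;
-- COMMIT guarantees they are flushed to the online redo logs
UPDATE emp SET sal = sal * 1.1 WHERE deptno = 10;
COMMIT;
```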
Oracle PGA Concepts
Not all RAM in Oracle is shared memory. When you start a user process, that process
has a private RAM area, used for sorting SQL results and managing special joins called
“hash” joins. This private RAM is known as the Program Global Area (PGA). Each
individual PGA memory area is allocated each time a new user connects to the
database.
Oracle Database 10g will manage the PGA for you if you set the pga_aggregate_target
parameter (we will discuss parameters and how they are set later in this book), but you
can manually allocate the size of the PGA via parameters such as sort_area_size and
hash_area_size. We recommend that you allow Oracle to configure these areas, and just
configure the pga_aggregate_target parameter.
The PGA can be critical to performance, particularly if your application is doing a large
number of sorts. Sort operations occur when you use ORDER BY and GROUP BY clauses
in your SQL statements.
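A minimal sketch of the recommendation above (the target value is purely illustrative, and DBA privileges are assumed):

```sql
-- Let Oracle size the PGA work areas automatically from one target
ALTER SYSTEM SET pga_aggregate_target = 200M;

-- Queries like this perform their sorting in the PGA
SELECT deptno, COUNT(*)
  FROM emp            -- emp is the familiar sample table
 GROUP BY deptno
 ORDER BY deptno;
```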
You would be amazed at how many times I've seen basic questions asked in forums
where it's clear the person asking the question had not read the installation guide. I've
been told that the DBA guild is considering doubling its rates for any emergency calls to
fix install disasters, so read up, folks! Let's get started.
Checking the Edition and Version of Oracle
When preparing to install the Oracle software for the first time, you need to determine
the release and features that you desire. When considering which version of Oracle to
use, consider the following issues:
1. Features - Different versions of the Oracle database software have different
functionality. Versions in Oracle are numbered, like 9.2.0.6 or 10.1.0.3, and they indicate
the release level of the software. Go to Oracle’s web site, www.oracle.com, to find out
what features are available in the version of Oracle you are planning to use.
2. Edition - Oracle version is one consideration, but in each version there are editions
that come with different functionality. Make sure that the features you will need to use
are available in your edition of Oracle. The names change with each release, but Oracle
Database 10g comes in Enterprise Edition, Standard Edition and Personal Edition.
Advanced functionality is missing from Standard Edition (SE) and Express Edition (XE).
3. Version - Make sure that the version you are installing is still supported by Oracle, and
that it will not soon be de-supported. You should do this because it can be very time
consuming (and sometimes expensive) to migrate to different versions of Oracle.
If you are running on another platform, follow the directions in the platform-specific
install instructions:
Mounting the CD
Changing to the CD mount point (don’t do this if you are installing 9i or you
won’t be able to eject the CD on linux or UNIX platforms!)
Running the installer. In UNIX this is typically called runInstaller.
The Oracle installer makes the install process quite easy and you simply follow the
prompts. Let’s look at a typical install and see just how easy it is!
Tip: If you are trying to run the Oracle installer on UNIX, you need to make sure you can
start an X-windows session on the console you are using! You must be able to run X in
order for the Oracle installer to function correctly.
What we have gone over is the “quick and dirty” installation and setup of Oracle on your
computer. While there are many more options available during the installation phase,
they are things you will be working with in time.
The main points of this chapter include:
The Oracle Universal installer will guide you through the installation process.
There are several steps to installing Oracle, including the loading of the
executables and the creation of databases.
Oracle provides the Database Creation Assistant (DBCA) to help us create
databases.
In the chapters to come, we will use Oracle in many different ways. Through the
information you will gain from these chapters, you will begin to understand more of the
advanced options that can be performed as early as the installation phase of the
software.
address. This is typical if you are running in a Windows environment where there is a
dynamic IP address. To avoid this, create a temporary entry in your hosts file (operating
system dependent) with the current static IP address of your server.
Once the installer has collected external information it will present you a summary
screen of all the products that you may install.
At this point, click the install button and Oracle will begin to install the database
software on your computer. As it works, you will see a thermometer that displays the
progress of the install:
The install may take up to ten minutes, so just be patient as it loads and links the
executables. You can stop the install by clicking on the stop installation button.
Once the install is complete you will see the following screen:
This screen gives you a lot of information, and you will want to note the Ultra Search
URL and the iSQL*Plus URL. When the end of installation screen appears the install is
complete and you can click on the exit button to exit the installer.
Tip: On UNIX platforms you will also need to run a script called root.sh during the install
as the “root” user.
In this case, we are connecting to the SYS account of the database we just created. The
SYS account is an all-powerful account in Oracle, and it should never be used lightly.
Notice the host string setting. Here we put in our database name (booktest), and we
added the syntax as sysdba. Most non-DBAs will never use the as sysdba clause, for it
implies special privileges are granted to the person logging in. In your case, you are a
DBA, so you need the special privileges that as sysdba provides.
At this point, you will be logged into Oracle, and SQL*Plus will be ready to accept your
commands. FYI, there is another version of SQL*Plus out there, called iSQL*Plus, with
an Internet interface.
This screen print shows you what the SQL*Plus client looks like once it comes up:
export ORACLE_HOME=your_install_location
export PATH=$PATH:$ORACLE_HOME/bin
2. Next, we start SQL*Plus with the sqlplus command. When starting SQL*Plus, include
the user name that you wish to connect to. Here is an example of the use of this
command:
C:\>sqlplus john_dba
Enter password:
Connected to:
SQL>
Note in the example above that when you start SQL*Plus, you must provide the name of
the user account. In this case, we logged into the account john_dba. Next, SQL*Plus will
prompt you for the password to this account. If you enter the correct password (you will
not be able to see the password when you enter it in) then Oracle will pass you to the
SQL prompt (SQL>).
You can also include the password when you call SQL*Plus, but this means that your
password will not be hidden, and this can have serious security implications. Here is an
example of using the user account and the password to log into SQL*Plus:
/u01/app> sqlplus john_dba/my_password
If you do this in a UNIX/Linux environment, the command “ps –ef” will display your
password for anyone to see.
To perform specific DBA activities on the database (such as backing up the database or
shutting it down), you will need to log in as a special type of user. This is because these
operations require a special set of administrative privileges called the SYSDBA and
SYSOPER privileges.
To activate these privileges, your user ID must be allowed to access these privileges (we
will discuss setting up such an account later in this book) and you must use a special
connection string to connect to the database to activate these privileges. If you want to
connect using the SYSDBA privileges (these are super DBA privileges) then use the
following syntax:
C:\>sqlplus “sys as sysdba”
You can also include the password if you prefer:
C:\>sqlplus “sys/my_password as sysdba”
The same method of connecting also works with the SYSOPER privilege. You simply
replace sysdba with sysoper in the connection string. One final thing to notice is that the
connection string is enclosed in quotes. This is required because there is white space
(blanks) inside the connection string.
CLOB
BLOB
BFILE
9. XML : Supported in Oracle 9i
Char data type {1 – 2000 bytes}
It is used when a fixed-length string is required. It can store alphanumeric values. The
column length of such a data type can vary between 1 and 2000 bytes. By default it is
one byte.
If the user enters a value shorter than the specified length, then the database blank-
pads it to the fixed length.
If the user enters a value larger than the specified length, then the database returns
an error.
Varchar {1 – 4000 bytes}
It supports variable-length character strings. It also stores alphanumeric values. The size
of this data type ranges from 1 to 4000 bytes. Using varchar saves disk space compared
to the char data type, because a value that does not use the full declared length
occupies only the space it actually needs on disk.
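To make the padding behaviour concrete, here is a hedged sketch (the table name is an assumption; VARCHAR2 is Oracle's name for the variable-length type discussed above):

```sql
CREATE TABLE dtype_demo (
  fixed_col CHAR(10),       -- blank-padded to exactly 10 characters
  var_col   VARCHAR2(10)    -- stores only the characters supplied
);

INSERT INTO dtype_demo VALUES ('abc', 'abc');

-- fixed_col reports length 10 (blank-padded); var_col reports length 3
SELECT LENGTH(fixed_col) AS fixed_len,
       LENGTH(var_col)   AS var_len
  FROM dtype_demo;
```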
Long Raw {2GB}
It is used to store variable-length binary data, which can have a maximum size of 2GB.
This data type cannot be indexed. Further, all limitations of the long data type also hold
good for the long raw data type.
Large Object data types (LOB) {4GB}
It can store unstructured information, such as scanned clips and video files, up to 4
gigabytes in size. LOBs allow efficient, random, piece-wise access to data. The LOB
types store values known as locators, which record the location of the large objects.
The location may be out-of-line (not within the table) or in an external file. The
DBMS_LOB package can be used to manipulate LOBs. LOBs can be either external or
internal depending on their location with regard to the database. Data stored in a LOB
column is known as the LOB value.
Internal LOBs are stored in the database tablespace; this provides efficient access and
optimization of space. External LOBs are also referred to as BFILEs. These are stored in
operating system files outside the database tablespace. The files use reference
semantics. They may be stored on CD-ROMs, photo CDs, or hard disks, etc., but the
storage cannot extend from one device to another. External LOBs do not participate in
transactions.
Changes to internal LOBs can be made using SQL DML or the PL/SQL package
DBMS_LOB. Changes can also be made through a series of API calls from the Oracle
Call Interface (OCI). Changes can be made to an entire internal LOB, or piecewise to
the beginning, middle or end of it. Both internal and external LOBs can be accessed for
read purposes.
PL/SQL provides a set of intrinsic data types for the support of LOBs; although SQL
cannot directly manipulate these data types, they are accessible from SQL through
PL/SQL function calls.
The different LOB types are:
CLOB
BLOB
BFILE
CLOB
A column with its data type as CLOB stores character objects with single-byte
characters. It cannot contain character sets of varying widths. A table can have multiple
columns with CLOB as their data type.
BLOB
A column with its data type as BLOB can store large binary objects such as graphics,
video clips and sound files. A table can have multiple columns with BLOB as its data
type.
BFILE
A BFILE column stores file pointers to LOBs managed by file systems external to the
database. A BFILE column may contain file names for photos stored on a CD-ROM.
For the sake of backward compatibility, Oracle 9i also supports older data types such as RAW and LONG RAW. The LOB data types have several advantages over the data types available in previous versions of Oracle.
The advantages are:
1. A table can have multiple LOB columns.
2. Multiple LOB’s are allowed in a single row.
3. LOBs can be attributes of a user-defined data type.
4. A table stores small locators for the LOBs in a column in place of the actual
objects. In contrast, a table stores a LONG column within the table itself.
5. A LOB column can have storage characteristics different from those of the table.
It is possible to store the primary table data and the LOB columns in different
locations.
6. Accessing a LOB column returns the locator.
7. The maximum size of a LOB column is 4GB.
8. Applications can manipulate and access pieces of a LOB. For the LONG data
type, however, the entire value must be accessed.
A LOB column can be indexed; however, the index on a LOB column cannot be dropped and rebuilt.
The user can set an internal LOB to null or empty. An empty LOB stored in a table is a LOB of zero length that is assigned a locator. This locator can later be used to populate the LOB.
Tablespaces are the logical storage units that make up the database. It is normally the job of the DBA to create, add and drop tablespaces, though a user can also perform the same tasks. Defining a tablespace for a LOB is optional.
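A minimal sketch of a table mixing internal and external LOB columns (the table, column and tablespace names are hypothetical, and the LOB storage clause is optional, as noted above):

```sql
CREATE TABLE employee_media (
  empno  NUMBER(4),
  resume CLOB,   -- internal character LOB
  photo  BLOB,   -- internal binary LOB
  video  BFILE   -- external LOB: a pointer to an operating system file
)
LOB (resume, photo) STORE AS (TABLESPACE lob_ts);  -- optional storage clause

-- An empty (zero-length) internal LOB is still assigned a locator,
-- which can be used to populate the LOB later
INSERT INTO employee_media (empno, resume, photo)
VALUES (7788, EMPTY_CLOB(), EMPTY_BLOB());
```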
List of datatypes available in Oracle.
Character Datatypes
char(size): Maximum size of 2000 bytes (Oracle 9i, 10g and 11g). Fixed-length string, space padded; size is the number of characters to store.
nchar(size): Maximum size of 2000 bytes (Oracle 9i, 10g and 11g). Fixed-length NLS string, space padded; size is the number of characters to store.
nvarchar2(size): Maximum size of 4000 bytes (Oracle 9i, 10g and 11g). Variable-length NLS string; size is the number of characters to store.
varchar2(size): Maximum size of 4000 bytes (Oracle 9i, 10g and 11g). Variable-length string; size is the number of characters to store.
long: Maximum size of 2GB (Oracle 9i, 10g and 11g). Variable-length string (backward compatible).
raw: Maximum size of 2000 bytes (Oracle 9i, 10g and 11g). Variable-length binary string.
long raw: Maximum size of 2GB (Oracle 9i, 10g and 11g). Variable-length binary string (backward compatible).
Numeric Datatypes
number(p,s): Precision p can range from 1 to 38; scale s can range from -84 to 127 (Oracle 9i, 10g and 11g). For example, number(7,2) is a number that has 5 digits before the decimal and 2 digits after the decimal.
numeric(p,s): Precision p can range from 1 to 38 (Oracle 9i, 10g and 11g). For example, numeric(7,2) is a number that has 5 digits before the decimal and 2 digits after the decimal.
dec(p,s): Precision p can range from 1 to 38 (Oracle 9i, 10g and 11g). For example, dec(3,1) is a number that has 2 digits before the decimal and 1 digit after the decimal.
decimal(p,s): Precision p can range from 1 to 38 (Oracle 9i, 10g and 11g). For example, decimal(3,1) is a number that has 2 digits before the decimal and 1 digit after the decimal.
float, integer, int, smallint, real, double precision: ANSI-compatible subtypes of number.
Date/Time Datatypes
date: A date between Jan 1, 4712 BC and Dec 31, 9999 AD (Oracle 9i, 10g and 11g).
timestamp (fractional seconds precision): The fractional seconds precision must be a number between 0 and 9 (default is 6) (Oracle 9i, 10g and 11g). Includes year, month, day, hour, minute and seconds. For example: timestamp(6)
timestamp (fractional seconds precision) with time zone: The fractional seconds precision must be a number between 0 and 9 (default is 6) (Oracle 9i, 10g and 11g). Includes year, month, day, hour, minute and seconds, with a time zone displacement value. For example: timestamp(5) with time zone
timestamp (fractional seconds precision) with local time zone: The fractional seconds precision must be a number between 0 and 9 (default is 6) (Oracle 9i, 10g and 11g). Includes year, month, day, hour, minute and seconds, with the time zone expressed as the session time zone. For example: timestamp(4) with local time zone
interval year (year precision) to month: The year precision is the number of digits in the year (default is 2) (Oracle 9i, 10g and 11g). Time period stored in years and months. For example: interval year(4) to month
interval day (day precision) to second (fractional seconds precision): The day precision must be a number between 0 and 9 (default is 2); the fractional seconds precision must be a number between 0 and 9 (default is 6) (Oracle 9i, 10g and 11g). Time period stored in days, hours, minutes and seconds. For example: interval day(2) to second(6)
Large Object Datatypes
bfile: Maximum file size of 4GB (Oracle 9i); maximum file size of 2^32-1 bytes (Oracle 10g); maximum file size of 2^64-1 bytes (Oracle 11g). File locators that point to a binary file on the server file system (outside the database).
blob: Stores up to 4GB of binary data (Oracle 9i); up to (4 gigabytes - 1) * (the value of the CHUNK parameter of LOB storage) (Oracle 10g and 11g). Stores unstructured binary large objects.
clob: Stores up to 4GB of character data (Oracle 9i); up to (4 gigabytes - 1) * (the value of the CHUNK parameter of LOB storage) of character data (Oracle 10g and 11g). Stores single-byte and multi-byte character data.
nclob: Stores up to 4GB of character text data (Oracle 9i); up to (4 gigabytes - 1) * (the value of the CHUNK parameter of LOB storage) of character text data (Oracle 10g and 11g). Stores Unicode data.
SQL Commands
Data Definition Language
It is used to create an object (e.g., Table), alter the structure of an object and also to
drop the object created.
Table Definition
A table is a unit of storage that holds data in the form of rows and columns. The DDL
commands used for table definition can be classified into the following four:
1. Create Table command
2. Alter Table command
3. Truncate Table command
4. Drop Table command
Syntax for creating table:
Create Table <table name> (Column Definition, Column Definition,…);
In a Table
We should specify a unique column name.
We should specify proper data type along with its width.
Example
Create Table test (no number(3), name varchar2(10), fee number(7,2));
Table names should adhere strictly to the following norms:
1. While naming a table, the first character should be a letter.
2. Oracle reserved words cannot be used as a table name.
3. The maximum length of a table name is 30 characters.
4. Two different tables should not have the same name.
5. Underscores, numerals and letters are allowed, but blank spaces and single quotes are not.
If the user encloses the table name in double quotes, as in “inf”, then upper and lower
case are not equivalent.
Syntax to alter a table:
Alter table < table name > modify (column def);
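For instance, using the test table created earlier, alter can widen or add a column; the remaining table-definition commands follow the standard forms:

```sql
-- Widen an existing column
ALTER TABLE test MODIFY (name VARCHAR2(20));
-- Add a new column
ALTER TABLE test ADD (city VARCHAR2(15));
-- Remove all rows but keep the table structure
TRUNCATE TABLE test;
-- Remove the table itself
DROP TABLE test;
```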
The effect of the distinct clause is clear when a single column is selected. If we select
two columns and apply the distinct clause, it does not act on one column alone; at that
point the distinct clause considers the combination of both fields.
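For example, on the emp table used below, distinct over one column removes duplicate values, while distinct over two columns removes only duplicate combinations:

```sql
-- Each deptno appears once
SELECT DISTINCT deptno FROM emp;
-- Each (deptno, job) pair appears once, so a deptno
-- may still repeat across different jobs
SELECT DISTINCT deptno, job FROM emp;
```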
Select command with where clause
Select * from emp where empno = 7788;
Select * from emp where sal > 5000;
Select * from emp where sal < 5000;
Select * from emp where job = ‘MANAGER’;
Select * from emp where job like ‘MANAGER’;
Select * from emp where ename like ‘S%’;
To select specific rows from a table we include a ‘where’ clause in the select command.
It can appear only after the ‘from’ clause. We can retrieve only the rows which satisfy
the ‘where’ condition.
To arrange the displayed rows according to some pre-defined order we can use the
‘order by’ clause. It is also used to arrange rows in ascending and descending order. The
order clause can also be used to arrange multiple columns.
Note: The ‘order by’ clause should be the last clause in the select command.
The ‘where’ clause in a select command which retrieves specific rows from a table is
also applicable to delete and update row or rows from a table according to the
conditions specified by the where clause.
Select * from emp order by sal;
Select * from emp order by empno;
Select * from emp order by hiredate;
Select * from emp order by sal desc;
Select * from emp order by empno desc;
Select * from emp order by hiredate desc;
Select * from emp where sal > 5000 order by sal desc;
Update command
Sometimes changes to the data in a table become necessary, and the update command
is used to reflect these changes in the existing records. With the update command we
can update rows in a table; a single column or multiple columns can be updated, and
specific rows can be updated based on a specific condition.
The update command consists of a ‘set’ clause and an optional where clause.
Syntax
Update <table> set field = value, ….where <condition>;
Update emp set sal = 3000 where empno = 115;
Update emp set job = ‘Manager’ where sal = 3000;
Update emp set hiredate = ’12-jan-98’ where empno = 111;
Delete command
This command is used to delete existing rows. The delete command consists of a ‘from’
clause followed by an optional ‘where’ clause.
Syntax
Delete from <table name> where <conditions>;
Using the delete command, not only specific records but all the records of a table can
be deleted without affecting the structure of the table.
Example
Delete from emp where empno = 111;
The above query deletes the record where empno is 111.
Delete from emp where sal <3000;
The above query deletes all the records where its sal is lesser than 3000;
Commit
This command makes permanent all the changes done in the current transaction.
Syntax
Commit work; (or)
Commit;
Save point
Savepoints are like markers that divide a very lengthy transaction into smaller ones.
They are used to identify a point in a transaction to which we can later roll back. Thus a
savepoint is used in conjunction with rollback, to roll back portions of the current
transaction.
Syntax
Savepoint savepoint_id;
Rollback
A rollback is used to undo the work done in the current transaction. We can either roll
back the entire transaction, so that all changes made by SQL statements are undone, or
roll back the transaction to a savepoint, so that the SQL statements after the savepoint
are rolled back.
Syntax
Rollback work;
Rollback;
Rollback to savepoint savepoint_id;
Example
update emp set hiredate = ’30-jan-98’ where empno = 1111;
Savepoint S1;
Delete from emp where empno = 1111;
Savepoint S2;
Rollback to savepoint S1;
Rollback;
Select command’s Group By:
This is another optional part of the query. It is used only when the results of the query
have to be grouped based on some criteria, for example, when the average salary of the
employees in each department is to be found.
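For instance, the average salary per department can be written as:

```sql
-- Average salary of employees in each department
SELECT deptno, AVG(sal)
FROM emp
GROUP BY deptno;
```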
Query Processing
Different select statements
Apart from just providing information, this command can be combined with some DDL
and DML statements to perform relational operations.
Select command to create table
Syntax
Create table <table name> as select * from <another table name>;
Create table <new table> as select clmn1, clmn2 from <old table>;
Create table <new table> (clmn1, clmn2) as select clmn1, clmn2 from <old table>;
Create table <new table> as select * from <old table> where <condition>;
Create table <new table> as select * from <old table> where 1=2;
The above syntax creates a table with the structure of old without copying its data.
Examples
Create table newemp as select * from emp;
Create table emp1 as select empno, ename, sal from emp;
Create table emp2 (No, Name, Salary, desig) as select empno, ename, sal, job from emp;
Create table emp3 as select * from emp where sal <5000;
Create table emp4 as select * from emp where hiredate <’01-jan-90’;
Create table emp5 as select * from emp where 1=2;
The condition 1=2 will never satisfy. Hence a new table will be created with only the
structure of the emp table but not its records.
Select command to insert records
Alternatively, records from one table can be inserted into another. In this case the
target table should be created, or must already exist, prior to performing such an insert.
Syntax
Insert into <table1> (select * from <table2>);
Insert into<table1> (select c1, c2 from <table2>);
Note: Care should be taken to ensure that the two tables’ structures are the same.
Examples
Insert into newemp (select * from emp);
Insert into emp1 (select empno, ename, sal from emp);
SQL*Plus Functions
SQL*Plus provides specialized functions to perform operations using the DML
commands. A function takes one or more arguments and returns a value. One can
broadly classify functions into single row functions and group functions.
Numeric functions
These functions accept numeric input and return numeric values as output. The values
that the numeric functions return are accurate up to 38 decimal digits.
Absolute value (Abs)
Absolute value is the measurement of the magnitude of something. For instance, in a
temperature change or a stock index change, the magnitude of the change has meaning
in itself, regardless of the direction of the change. Absolute value is always a positive
number.
ABS (value)
Select abs (-15) from dual; o/p=15
Ceiling (CEIL)
It simply produces the smallest integer (or whole number) that is greater than or equal
to specific value. Pay special attention to its effect on negative numbers.
CEIL (value)
Select ceil (44.778) from dual; =>45
Select ceil (1.3) from dual; =>2
Select ceil (-2) from dual; =>-2
Select ceil (-2.3) from dual; =>-2
Floor
It is the intuitive opposite of ceil.
Select floor (100.2) from dual; =>100
Select floor (100.3) from dual; =>100
Select floor (-100) from dual; =>-100
Select floor (-100.3) from dual; =>-101
Modulus (MOD)
It returns the remainder of the first value divided by the second. The format is MOD (value, divisor).
Select mod (100, 10) from dual; =>0
Select mod (11, 4) from dual; =>3
Character functions
Character functions accept character input and return either character or number
values. The character functions supported by oracle are listed below.
Initcap (char)
Select initcap (‘hello’) from dual;
Select initcap (‘this-is_a.test,of:punctuation;for+initcap’) from dual;
Lower (char)
Select lower (‘FUN’) from dual;
Upper (char):
Select upper (‘fun’) from dual;
Initcap puts the first letter of every word into uppercase. It determines the beginning of
a word based on its being preceded by any character other than a letter.
Ltrim (char, set)
Returns the string with all characters in the specified set removed from its left end.
Select Ltrim (‘xyzadams’, ‘xyz’) from dual;
From the above query the characters xyz from the left of string will be removed and rest
of the characters will be displayed.
Select Ltrim (‘RDBMS’, ‘R’) from dual;
Select Ltrim (‘what’, ‘W’) from dual;
Select Ltrim (‘display’, ‘dis’) from dual;
Rtrim (char, set)
Returns the string with all trailing characters that appear in the specified set removed
from its right end.
Select rtrim (‘character’, ‘acter’) from dual;
Select rtrim (‘given’, ‘n’) from dual;
Select rtrim (‘example’, ‘ple’) from dual;
Select rtrim (‘sunil’, ‘il’) from dual;
Translate (char, from, to)
It converts characters in a string into different characters, based on a substitution plan
you give it, from “from” to “to”.
Select translate (‘Now Vowels are under attack’, ‘taeious’, ‘TAEIOUS’) from dual;
O/P => NOw VOwElS ArE UndEr ATTAck
Select translate (‘jack’, ‘j’, ‘b’) from dual;
Select translate (‘soundex’, ‘x’, ‘d’) from dual;
Select translate (‘consider the following example’, ‘aeiou’, ‘AEIOU’) from dual;
O/P: cOnsIdEr thE fOllOwIng ExAmplE.
Replace (string, search_string, [replace_string])
Returns a string with every occurrence of the search string replaced with the replace
string. If the replace string is omitted, every occurrence of the search string is removed.
Select replace (‘jack and jue’, ‘j’, ‘bl’) from dual;
Select replace (‘george’, ‘ge’) from dual;
Select replace (‘quality’, ‘q’, ‘eq’) from dual;
Substr (char, m, n)
It is used to clip out a piece of a string: m is the position of the first character of the
substring and n is the number of characters. If n is not mentioned, the substring starts
at position m and continues to the end of the string.
Select substr (‘abcdefg’, 3, 2) from dual;
Select substr (‘replace’, 3) from dual;
Select substr(‘programs’, 5, 3) from dual;
Soundex
It compares words that are spelled differently but sound alike. To use the soundex
option, you must precede the search term with an exclamation mark (!), with no space
between the ! and the search term. During the search, Oracle evaluates the soundex
values of the terms in the text index and searches for all words that have the same
soundex value.
Select ename from emp where soundex (ename)=soundex (‘ ‘);
Chr (number)
This returns the character whose numeric (ASCII) value is the given number.
Select chr (67) from dual;
ASCII (char)
This returns the decimal equivalent of a character.
Select ascii (‘A’) from dual; 65
Select ascii (‘a’) from dual; 97
Length (char)
This function returns the length of input string.
Select length (‘ramesh’) from dual;
Lpad (string, length, symbol)
It returns the string padded on the left hand side with the third argument.
Select Lpad (‘function’, 15, ‘=’) from dual;
Rpad (string, length, symbol)
It returns the string padded on the right hand side with the character / string in the third
argument.
Select rpad (‘function’, 15, ‘*’) from dual;
Concatenation (||) operator
This operator is used to merge two or more strings, or a string and a data value
together.
Select (‘the employee ’||ename||‘ bearing empno = ’||empno||‘ draws ’||sal||‘ rupees
per month’) from emp;
Instr (char, search_string)
It returns the position of the first occurrence of the search string within char, or 0 if the
string is not found.
Select * from emp where instr (ename, ‘-’) > 0;
Date functions
Time Zones
AST / ADT Atlantic Standard / Day Light Time
BST / BDT Bering Standard / Day Light Time
CST / CDT Central Standard / Day Light Time
EST / EDT Eastern Standard / Day Light Time
GMT Greenwich Mean Time
HST / HDT Alaska-Hawaii Standard / Day Light Time
MST / MDT Mountain Standard / Day Light Time
NST Newfoundland Standard Time
PST / PDT Pacific Standard / Day Light Time
YST / YDT Yukon Standard / Day Light Time
Add_months
This function returns the date obtained by adding the specified number of months to
the given date. The format is add_months (d, n), where d is the date and n represents
the number of months.
Select ename, hiredate, add_months (hiredate, 2) from emp;
Last_day
This function returns the date corresponding to the last day of the month. Format is
last_day(d);
Select hiredate, last_day (hiredate) from emp where hiredate < ’01-nov-81’;
Select sysdate, last_day (hiredate) from emp;
Next_day
This function returns the date of the first specified weekday that is later than the given
date. Format is next_day (date, day);
Select next_day (sysdate, ‘Tuesday’) from dual;
The above query returns the date of the Tuesday that follows the system date.
Select next_day (hiredate, ‘Sunday’) from emp;
Months_between
This function returns the number of months between two dates. Format is
months_between (d1, d2);
Select months_between (hiredate, sysdate) from emp;
If d1 is later than d2, result is positive, if earlier negative, if d1 and d2 are either the
same day of the month or both last days of the months, the result is always an integer,
otherwise Oracle calculates the fractional portion of the result based on a 31-day month
and considers the difference in time components of d1 and d2.
Round
This function returns the date which is rounded to the unit specified by the format
model. Its format is round (d, (fmt)) where d is the date and fmt is the format model.
Fmt is optional; by default the date is rounded to the nearest day.
Select hiredate, round (hiredate, ‘year’) from emp;
Select hiredate, round (hiredate, ‘month’) from emp;
Select hiredate, round (hiredate, ‘day’) from emp;
The above query causes the date to be rounded to the nearest Sunday.
Select hiredate, round (hiredate) from emp;
The above query does not include fmt, and therefore it is rounded to the nearest day.
Select sysdate, round (sysdate) from dual;
Select sysdate, round (sysdate, ‘year’) from dual;
Select sysdate, round (sysdate, ‘month’) from dual;
Select sysdate, round (sysdate, ‘day’) from dual;
Truncate
This function returns the date with the time portion of the day truncated to the unit
specified by the format model. The syntax is trunc (d, [fmt]). If fmt is omitted, the date
is truncated to the nearest day.
Some of the format models that can be used with round and trunc are:
WW Week of year (1-53) where week 1 starts on the first day of the year and
continues to the seventh day of the year.
W Week of month (1-5) where week 1 starts on the first day of the month and ends
on the seventh.
MI Minute (0-59).
SS Second (0-59).
AD or A.D AD indicator
BC or B.C. BC indicator
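Continuing the round examples above, trunc can be sketched as:

```sql
-- Truncate to the first day of the year / month
SELECT hiredate, TRUNC(hiredate, 'year') FROM emp;
SELECT hiredate, TRUNC(hiredate, 'month') FROM emp;
-- fmt omitted: the time portion is removed (nearest day)
SELECT sysdate, TRUNC(sysdate) FROM dual;
```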
Conversion functions
These functions convert a value from one data type to another. They are broadly
classified as to_char, to_date and to_number.
To_char ( )
It converts a date to a value of varchar2 data type in the form specified by the date
format fmt. If fmt is omitted, it converts the date to varchar2 in the default date format.
To_char (date, [fmt])
Select to_char (sysdate, ‘ddth “of” fm month yyyy’) from dual;
Ans: 27th of January 1999
In the above example, the fill mode (fm) format mask is used to avoid blank padding of
characters and zero padding of numbers.
Select to_char (sysdate, ‘yyyy “years “mm “ months “dd “days” ’) from dual;
To_date ( )
It converts char or varchar2 type to date type. Its format is To_date (char, [fmt]), where
the format model fmt specifies the form of the character string.
Select to_date (‘January 27 1999’, ‘Month dd yyyy’) from dual;
To_number ( )
This function converts a string containing a number into the number data type, on
which arithmetic operations can be performed. This is largely unnecessary, as Oracle
does an implicit conversion of numbers contained in a string.
Select to_number (‘100’) from dual;
Miscellaneous functions
Uid
This function returns the integer value corresponding to the user currently logged in.
Select uid from dual;
User
This function returns the user name of the current login, which is of varchar2 type.
Select user from dual;
Null value (NVL)
This function is used in cases where we want to substitute a value, such as zero, for Null
values. The syntax is
Nvl (expression 1, expression 2)
* if expression 1 is Null, nvl returns expression 2.
* if expression 1 is not Null, nvl returns expression 1.
* if expressions 1 and 2 have different data types, Oracle converts expression 2 to the
data type of expression 1 before comparing them.
Select empno, nvl (deptno, 0) from emp;
Select empno, nvl (comm, 0) from emp;
Null values and zeros are not equivalent: null values are displayed as blanks, and zeros
as (0).
Decode
Unlike the translate function, which performs a character-by-character replacement, the
DECODE function does a value-by-value replacement.
Syntax
Select decode (<value, if1, then1, if2, then2, …. >) from <table _ name>;
Select decode (ename, ‘SCOTT’, ‘scotts’) from emp;
Select decode (rtrim (ename), ‘SCOTT’, ‘SUNIL’,’MILLER’, ‘MURALI’) from emp;
Vsize
This function returns the number of bytes in the internal representation of the expression.
Select vsize (‘hello’) from dual;
Group functions
Group functions return a result based on a group of rows. Some of these are pure
mathematical functions.
Average (avg)
This function will return the average of the values of the column specified in the
argument.
Select avg (sal) from emp;
Minimum (min)
This function will give the least of all the values of the column present in the argument.
Select min (sal) from emp;
Maximum (max)
This function returns the maximum of all the values of the column present in the
argument.
Select max (sal) from emp;
Sum
This returns sum of all the values of the column present in the argument.
Select sum (sal) from emp;
Count
It is used to count the number of rows. It can be used in three forms:
Select count (empno) from emp;
Count(*)
This function counts all the rows inclusive of duplicates and Nulls.
Select count (*) from emp;
Count (column_name)
Select count (sal) from emp;
Count (distinct col_name)
Select count (distinct sal) from emp;
The above query eliminates duplicate and null values in the column and provides the
result.
Group by clause
Beyond the group functions, there are also two group clauses: “having” and “group by”.
These are parallel to the where and order by clauses except that they act on groups, not
on individual rows. These clauses can provide very powerful insights into your data.
If an SQL statement contains both a ‘where’ clause and a ‘group by’ clause, then the
latter should follow the former.
Select deptno, max (sal) from emp group by deptno;
Select deptno, avg (sal) from emp group by deptno;
Having clause
It is used to specify certain conditions on rows, retrieved by using ‘group by’ clause. This
clause should be preceded by a ‘group by’ clause.
Select deptno, max (sal) from emp group by deptno having deptno not in (20);
Stddev (standard deviation)
Select deptno, stddev (sal) from emp group by deptno;
Variance
Select deptno, variance (sal) from emp group by deptno;
Joins
The purpose of a join is to combine the data spread across tables. A join is actually
performed by the ‘where’ clause, which combines the specified rows of the tables.
When data is combined on a common column that provides the join condition,
references must be qualified with the table names to avoid ambiguity. If a join involves
more than two tables, Oracle joins the first two based on the condition, then compares
the result with the next table, and so on.
The syntax for joining tables is as follows:
Select columns from table1, table2 where logical expression;
The logical expression amounts to providing the join condition.
There are basically three different types of joins. They are
Simple join
Self join
Outer join
Simple Join
It is the most common type of join. It retrieves rows from two tables having a common
column and is further classified into
Equi join
A join which is based on equalities is called an equi-join. The equi-join combines rows
that have equivalent values for the specified columns.
Select empno, ename, sal, dname from emp, dept where emp.deptno = dept.deptno;
Non equi-join
It specifies the relationship between columns belonging to different tables by making
use of relational operators other than equality.
Select itemdesc, max_level, qty_ord, qty_deld from itemfile, order_detail where
(itemfile.max_level < order_detail.qty_ord) and (itemfile.itemcode = order_detail.itemcode);
Select empno, ename, sal, dname, loc from emp, dept where emp.deptno > dept.deptno;
Table Aliases
Table aliases are used to make multiple-table queries shorter and more readable. We
give an alias to each table in the ‘from’ clause, and the aliases can be used instead of
the table names throughout the query.
Select e.empno, e.ename, e.sal, d.loc, d.dname from emp e, dept d where
e.deptno = d.deptno;
Self Join
Joining a table to itself is known as a self join, i.e., it joins one row in a table to another.
The join is performed by listing the table twice with different aliases in the ‘from’
clause, so that each row of the table can be compared with itself and with other rows of
the same table.
Select e.empno, e.ename, m.empno, m.ename from emp e, emp m where e.mgr = m.empno;
The above query lists each employee together with his or her manager by joining emp
to itself (mgr holds the empno of the employee’s manager).
Outer Join
This extends the results of a simple join. An outer join returns all the rows returned by
simple join as well as those rows from one table that do not match any row from the
other table. The symbol (+) represents outer join.
Select empno, ename, sal, dname, location from emp, dept where
emp.deptno=dept.deptno(+);
The above query also returns those rows from emp that have no matching row in dept.
Select e.empno, e.ename, e.job, d.dname, d.loc from emp e, dept d where
e.deptno(+) = d.deptno;
The above query also returns those departments that have no employees.
Sub Queries
Nesting of queries, one within the other, is termed a sub query. A statement containing
a sub query is called a parent statement. Sub queries are used to retrieve data from
tables based on values in the tables themselves.
The sub query feature is less well known than SQL’s join feature, but it plays an
important role in SQL for three reasons:
An SQL statement with a sub query is often the most natural way to express a query,
because it most closely parallels the English-Language description of the query.
Sub queries make it easier to write “select” statements, because they let you “break a
query down into pieces” and then “put the pieces together”.
There are some queries that cannot be expressed in SQL without using a sub query.
Example:
Select * from emp where deptno = (select deptno from dept where dname =’SALES’);
Select * from emp where deptno = (select deptno from dept where loc=’DALLAS’);
Sub queries that return several values:
Sub queries can also return more than one value. In such cases we should include an
operator such as any, all, in or not in between the comparison operator and the sub
query.
Select * from emp where sal < any (select sal from myemp where deptno between 10
and 30);
Note:
=any is equivalent to in
!=all is equivalent to not in.
Multiple sub queries:
Oracle places no limit on the number of sub queries included in a where clause. It allows
nesting of a query within a sub query. Doing so narrows down the area of search from
which the result is obtained.
Select empno, ename, sal, job from emp where deptno= (select deptno from dept
where loc=’chicago’) or deptno in (select deptno from emp e, dept d where
e.deptno=d.deptno);
Sub query within sub query:
Select * from emp where deptno=(select deptno from dept where deptno=(select
deptno from myemp where empno =1111 and ename=’AKHIL’));
Correlated sub query:
A sub query is evaluated once for the entire parent statement, whereas a correlated sub
query is evaluated once for every row processed by the parent statement.
Note:
If a sub query is selected from the same table as the main query, then the main query
must define an alias for the table name, and the sub query must use the alias to refer to
the column’s value in the main query.
Select * from emp x where sal > (select avg(sal) from emp y where x.deptno=y.deptno);
Select distinct (sal) from emp x where &n = (select count (distinct sal) from emp y where
y.sal >= x.sal);
To delete only duplicate records
Delete from emp e where rowid not in (select min (rowid) from emp e1 where
e.empno = e1.empno);
Exists /not exists operators:
The exists operator is a Boolean operator that evaluates to either true or false. This
operator takes a sub query as an argument. It evaluates to true if the sub query
produces some rows and false if no rows are fetched.
Select * from emp e where exists (select * from dept where deptno=e.deptno);
Select * from dept where not exists (select * from emp where
dept.deptno=emp.deptno);
Single row sub query
If the inner query returns a single value to the outer query, it is called a single row
sub query.
Select * from emp where sal = (select max (sal) from emp);
Using distinct in sub query
Select * from dept where deptno = (select distinct deptno from emp where
job=’MANAGER’);
Differences between sub query & correlated sub query:
Sub Query:
1. The inner query is executed first.
2. The inner query is executed once; its result is passed to the outer query, and then
the outer query is executed.
3. The inner query does not reference any column of the outer query.
Correlated Sub Query:
1. The outer query is executed first.
2. For each and every row processed by the outer query, the inner query is executed
repeatedly.
3. The inner query references a column of the outer query.
Integrity Constraints
Constraints are declarations of conditions about the database that must remain true.
These include attribute-based, tuple-based, key, and referential integrity constraints.
The system checks for the violation of the constraints on actions that may cause a
violation, and aborts the action accordingly.
A constraint is a rule that the database manager enforces.
There are three types of constraints:
A unique constraint is a rule that forbids duplicate values in one or more
columns within a table. Unique and primary keys are the supported unique
constraints. For example, a unique constraint could be defined on the supplier
identifier in the supplier table to ensure that the same supplier identifier is not
given to two suppliers.
A referential constraint is a logical rule about values in one or more columns in
one or more tables. For example, a set of tables shares information about a
corporation's suppliers. Occasionally, a supplier's name changes. A referential
constraint could be defined stating that the ID of the supplier in a table must
match a supplier id in the supplier information. This constraint prevents inserts,
updates or deletes that would otherwise result in missing supplier information.
A table check constraint sets restrictions on data added to a specific table. For
example, it could define the salary level for an employee to never be less than
$20,000.00 when salary data is added or updated in a table containing personnel
information.
Unique Constraints
A unique constraint is the rule that the values of a key are valid only if they are unique
within the table. Unique constraints are optional and can be defined in the CREATE
TABLE or ALTER TABLE statement using the PRIMARY KEY clause or the UNIQUE clause.
The columns specified in a unique constraint must be defined as NOT NULL. A unique
index is used by the database manager to enforce the uniqueness of the key during
changes to the columns of the unique constraint.
A table can have an arbitrary number of unique constraints, with at most one unique
constraint defined as a primary key. A table cannot have more than one unique
constraint on the same set of columns.
A unique constraint that is referenced by the foreign key of a referential constraint is
called the parent key.
Referential Constraints
Referential integrity is the state of a database in which all values of all foreign keys are
valid. A foreign key is a column or set of columns in a table whose values are required to
match at least one primary key or unique key value of a row of its parent table. A
referential constraint is the rule that the values of the foreign key are valid only if:
they appear as values of a parent key, or
some component of the foreign key is null.
The table containing the parent key is called the parent table of the referential
constraint, and the table containing the foreign key is said to be a dependent of that
table.
Referential constraints are optional and can be defined in CREATE TABLE statements
and ALTER TABLE statements. Referential constraints are enforced by the database
manager during the execution of INSERT, UPDATE, DELETE, ALTER TABLE ADD
CONSTRAINT, and SET INTEGRITY statements. The enforcement is effectively performed
at the completion of the statement.
Table Check Constraints
A table check constraint is a rule that specifies the values allowed in one or more
columns of every row of a table. They are optional and can be defined using the SQL
statements CREATE TABLE and ALTER TABLE. The specification of table check constraints
is a restricted form of a search condition. One of the restrictions is that a column name
in a table check constraint on table T must identify a column of T.
A table can have an arbitrary number of table check constraints. They are enforced
when:
A row is inserted into the table
A row of the table is updated and the update changes a column referenced in the
constraint
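The salary rule mentioned earlier can be sketched as a table check constraint. A minimal example in Oracle syntax (the personnel table and constraint name here are assumed for illustration):

```sql
-- Hypothetical personnel table: the check constraint rejects any insert
-- or update that sets salary below 20000.
CREATE TABLE personnel
( empno  NUMBER(4),
  ename  VARCHAR2(10),
  salary NUMBER(8,2) CONSTRAINT chk_salary CHECK (salary >= 20000) );
```

An insert such as `INSERT INTO personnel VALUES (1, 'RAO', 15000);` would then be rejected with a check constraint violation.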
Triggers
A trigger defines a set of actions that are executed at, or triggered by, a delete, insert, or
update operation on a specified table. When such an SQL operation is executed, the
trigger is said to be activated.
Triggers can be used along with referential constraints and check constraints to enforce
data integrity rules. Triggers can also be used to cause updates to other tables,
automatically generate or transform values for inserted or updated rows, or invoke
functions to perform tasks such as issuing alerts.
Triggers are a useful mechanism to define and enforce transitional business rules, which
are rules that involve different states of the data (for example, a salary cannot be
increased by more than 10 percent). For rules that do not involve more than one state
of the data, check and referential integrity constraints should be considered.
Using triggers places the logic to enforce the business rules in the database and relieves
the applications using the tables from having to enforce it. Centralized logic enforced on
all the tables means easier maintenance, since no application program changes are
required when the logic changes.
Triggers are optional and are defined using the CREATE TRIGGER statement.
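The transitional rule above (no raise of more than 10 percent) can be sketched as a trigger. The trigger name and error number are assumptions; the emp table is taken from the earlier examples:

```sql
-- Rejects any update that raises sal by more than 10 percent.
CREATE OR REPLACE TRIGGER emp_sal_check
BEFORE UPDATE OF sal ON emp
FOR EACH ROW
BEGIN
  IF :NEW.sal > :OLD.sal * 1.10 THEN
    RAISE_APPLICATION_ERROR(-20001, 'Salary raise exceeds 10 percent');
  END IF;
END;
/
```

Because the rule compares the old and new states of the row, it cannot be expressed as a check constraint, which sees only one state of the data.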
Examples
UNIQUE Constraints
The UNIQUE constraint designates a column or combination of columns as a unique key.
To satisfy a UNIQUE constraint, no two rows in the table can have the same value for
the unique key. However, the unique key made up of a single column can contain nulls.
A unique key column cannot be of datatype LONG or LONG RAW. You cannot designate
the same column or combination of columns as both a unique key and a primary key or
as both a unique key and a cluster key. However, you can designate the same column or
combination of columns as both a unique key and a foreign key.
Example
The following statement creates the DEPT table and defines and enables a unique key
on the DNAME column:
CREATE TABLE dept
(deptno NUMBER(2),
dname VARCHAR2(9) CONSTRAINT unq_dname UNIQUE,
loc VARCHAR2(10) );
The constraint UNQ_DNAME identifies the DNAME column as a unique key. This
constraint ensures that no two departments in the table have the same name. However,
the constraint does allow departments without names.
Defining Composite Unique Keys
A composite unique key is a unique key made up of a combination of columns. Since
Oracle creates an index on the columns of a unique key, a composite unique key can
contain a maximum of 16 columns. To define a composite unique key, you must use
table_constraint syntax, rather than column_constraint syntax.
To satisfy a constraint that designates a composite unique key, no two rows in the table
can have the same combination of values in the key columns. Also, any row that
contains nulls in all key columns automatically satisfies the constraint. However, two
rows that contain nulls for one or more key columns and the same combination of
values for the other key columns violate the constraint.
Example
The following statement defines and enables a composite unique key on the
combination of the CITY and STATE columns of the CENSUS table:
ALTER TABLE census
ADD CONSTRAINT unq_city_state
UNIQUE (city, state);
The UNQ_CITY_STATE constraint ensures that the same combination of CITY and STATE
values does not appear in the table more than once.
The PK_DEPT constraint identifies the DEPTNO column as the primary key of the
DEPT table. This constraint ensures that no two departments in the table have the
same department number and that no department number is NULL.
Alternatively, you can define and enable this constraint with table_constraint syntax:
CREATE TABLE dept
(deptno NUMBER(2),
dname VARCHAR2(9),
loc VARCHAR2(10),
CONSTRAINT pk_dept PRIMARY KEY (deptno) );
A referential constraint is also satisfied if the value of one of the columns that makes
up the foreign key is null.
Example
The following statement creates the EMP table and defines and enables a foreign key on
the DEPTNO column that references the primary key on the DEPTNO column of the
DEPT table:
CREATE TABLE emp
(empno NUMBER(4), ename VARCHAR2(10), job VARCHAR2(9),
mgr NUMBER(4), hiredate DATE, sal NUMBER(7,2),
comm NUMBER(7,2), deptno NUMBER(2) CONSTRAINT fk_deptno REFERENCES
dept(deptno) );
The constraint FK_DEPTNO ensures that all departments given for employees in the
EMP table are present in the DEPT table. However, employees can have null department
numbers, meaning they are not assigned to any department. If you wished to prevent
the latter, you could create a NOT NULL constraint on the deptno column in the EMP
table, in addition to the REFERENCES constraint.
Before you define and enable this constraint, you must define and enable a constraint
that designates the DEPTNO column of the DEPT table as a primary or unique key.
Note that the referential integrity constraint definition does not use the FOREIGN KEY
keyword to identify the columns that make up the foreign key. Because the constraint is
defined with a column constraint clause on the DEPTNO column, the foreign key is
automatically on the DEPTNO column.
Note that the constraint definition identifies both the parent table and the columns of
the referenced key. Because the referenced key is the parent table's primary key, the
referenced key column names are optional.
Note that if the DEPTNO column's datatype were omitted from the statement, Oracle
would automatically assign it the datatype of the DEPT.DEPTNO column to which the
foreign key refers, because the column is a foreign key.
Alternatively, you can define a referential integrity constraint with table_constraint
syntax:
CREATE TABLE emp
(empno NUMBER(4), ename VARCHAR2(10), job VARCHAR2(9),
mgr NUMBER(4), hiredate DATE, sal NUMBER(7,2),
comm NUMBER(7,2), deptno NUMBER(2),
CONSTRAINT fk_deptno FOREIGN KEY (deptno) REFERENCES dept(deptno) );
Note that the foreign key definitions in both of the above statements omit the ON
DELETE CASCADE option, causing Oracle to forbid the deletion of a department if any
employee works in that department.
Specifying Referential Actions for Foreign Keys
Oracle allows different types of referential integrity actions to be enforced, as specified
with the definition of a FOREIGN KEY constraint:
The UPDATE/DELETE No Action Restriction. This action prevents the update or deletion
of a parent key if there is a row in the child table that references the key. By default, all
FOREIGN KEY constraints enforce the no action restriction; no option needs to be
specified when defining the constraint. For example:
CREATE TABLE Emp_tab ( FOREIGN KEY (Deptno) REFERENCES Dept_tab);
The ON DELETE CASCADE Action This action allows data that references the parent key
to be deleted (but not updated). If referenced data in the
parent key is deleted, all rows in the child table that depend on the deleted parent key
values are also deleted. To specify this referential action, include the ON
DELETE CASCADE option in the definition of the FOREIGN KEY constraint. For example:
CREATE TABLE Emp_tab ( FOREIGN KEY (Deptno) REFERENCES Dept_tab
ON DELETE CASCADE);
The ON DELETE SET NULL Action This action allows data that references the parent key
to be deleted (but not updated). If referenced data in the
parent key is deleted, all rows in the child table that depend on the deleted parent key
values have their foreign keys set to null. To specify this referential
action, include the ON DELETE SET NULL option in the definition of the FOREIGN KEY
constraint. For example:
CREATE TABLE Emp_tab ( FOREIGN KEY (Deptno) REFERENCES Dept_tab
ON DELETE SET NULL);
Enabling FOREIGN KEY Integrity Constraints
FOREIGN KEY integrity constraints cannot be enabled if the referenced primary or
unique key's constraint is not present or not enabled.
Partitioning
Partitioning addresses the key problem of supporting very large tables and indexes by
allowing you to decompose them into smaller and more manageable pieces called
partitions. Once partitions are defined, SQL statements can access and manipulate the
partitions rather than entire tables or indexes. Partitions are especially useful in data
warehouse applications, which commonly store and analyze large amounts of historical
data.
Partitioning Methods
Two primary methods of partitioning are available: range partitioning, which partitions
the data in a table or index according to a range of values, and hash partitioning, which
partitions the data according to a hash function. Another method, composite
partitioning, partitions the data by range and further subdivides the data into
subpartitions using a hash function.
Range Partitioning
Range partitioning maps rows to partitions based on ranges of column values. Range
partitioning is defined by the partitioning specification for a table or index:
PARTITION BY RANGE ( column_list )
and by the partitioning specifications for each individual partition:
VALUES LESS THAN ( value_list )
where:
column_list is an ordered list of columns that determines the partition to which a
row or an index entry belongs. These columns are called the partitioning
columns. The values in the partitioning columns of a particular row constitute
that row's partitioning key.
value_list is an ordered list of values for the columns in column_list. Each value
in value_list must be either a literal or a TO_DATE() or RPAD() function with
constant arguments. The value_list contained in the partitioning specification for
each partition defines an open (noninclusive) upper bound for the partition,
referred to as the partition bound. The partition bound for each partition must
compare less than the partition bound for the next partition.
For example, in the following table of four partitions (one for each quarter's sales), a
row with SALE_YEAR=1997, SALE_MONTH=7, and SALE_DAY=18 has partitioning key
(1997, 7, 18). Therefore it belongs in the third partition and is stored in tablespace TSC.
A row with SALE_YEAR=1997, SALE_MONTH=7, and SALE_DAY=1 has partitioning key
(1997, 7, 1) and also belongs in the third partition, stored in tablespace TSC.
CREATE TABLE sales
( invoice_no NUMBER, sale_year INT NOT NULL, sale_month INT NOT NULL,
sale_day INT NOT NULL )
PARTITION BY RANGE (sale_year, sale_month, sale_day)
( PARTITION sales_q1 VALUES LESS THAN (1997, 04, 01)
TABLESPACE tsa,
PARTITION sales_q2 VALUES LESS THAN (1997, 07, 01)
TABLESPACE tsb,
PARTITION sales_q3 VALUES LESS THAN (1997, 10, 01)
TABLESPACE tsc,
PARTITION sales_q4 VALUES LESS THAN (1998, 01, 01)
TABLESPACE tsd );
Hash Partitioning
Although partitioning by range is well-suited for historical databases, it may not be the
best choice for other purposes. Another method of partitioning, hash partitioning, uses
a hash function on the partitioning columns to stripe data into partitions. Hash
partitioning allows data that does not lend itself to range partitioning to be easily
partitioned for performance reasons such as parallel DML.
Hash partitioning is a better choice than range partitioning when:
You do not know beforehand how much data will map into a given range
Sizes of range partitions would differ quite substantially
The following example creates a table that names and stores a hash partition in a
specific tablespace:
CREATE TABLE product( ... ) STORAGE (INITIAL 10M) PARTITION BY HASH(column_list)
( PARTITION p1 TABLESPACE h1, PARTITION p2 TABLESPACE h2 );
Composite Partitioning
Composite partitioning partitions data using the range method and, within each
partition, subpartitions it using the hash method. This type of partitioning supports
historical operations data at the partition level and parallelism (parallel DML) and data
placement at the subpartition level.
CREATE TABLE orders(
ordid NUMBER,
orderdate DATE,
productid NUMBER,
quantity NUMBER)
PARTITION BY RANGE(orderdate)
SUBPARTITION BY HASH(productid) SUBPARTITIONS 8
STORE IN(ts1,ts2,ts3,ts4,ts5,ts6,ts7,ts8)
( PARTITION q1 VALUES LESS THAN('01-APR-1998'),
PARTITION q2 VALUES LESS THAN('01-JUL-1998'),
PARTITION q3 VALUES LESS THAN('01-OCT-1998'),
PARTITION q4 VALUES LESS THAN(MAXVALUE));
In this example, the ORDERS table is range partitioned on the ORDERDATE key, in four
separate ranges representing quarters of the year. Each range partition is further
subdivided into eight hash subpartitions on the PRODUCTID column.
Table Partitions
Dividing the rows of a single table into multiple parts is called partitioning the table; the
table that is partitioned is called a partitioned table and the parts are called partitions.
Although the partitions are held and managed independently, they can be queried and
updated by reference to the name of the logical table.
There is a difference between a table that has a single partition and a table that has no
partitions. A non-partitioned table cannot be partitioned later. Each partition can be
stored in a different table space. Oracle 9i also provides partition independence. We can
access and manipulate data in one partition even if some or all of the other partitions
are unavailable. This is a major benefit to administrators and users alike.
Storing the partitions in different table spaces has its advantages:
1. It reduces the possibility of data corruption in multiple partitions.
2. Backup and recovery of each partition can be done independently.
3. Controlling the mapping of partitions to disk drives (important for balancing I/O
load) is possible.
Partitioned tables cannot contain any columns with long or long raw data types, LOB
data types (BLOB, CLOB, NCLOB or BFILE), or object types.
Partitioning is useful for very large tables. By splitting a large table's rows across multiple
smaller partitions, you accomplish several important goals.
1. The performance of queries against the table may improve, since Oracle may
have to search only one partition instead of the entire table to resolve a query.
2. The table may be easier to manage, since the partitioned tables’ data is stored in
multiple parts, it may be easier to load and delete data in the partitions than in
the large table.
3. Backup and recovery operations may perform better; since the partitions are
smaller than the partitioned table, you may have more options for backing up
and recovering the partitions than you would have for a single large table.
Advantages of Table Partitions
There are two primary reasons for partitioning a table (1) Disk Space and (2) Processing
Time.
Partitions can be altered, dropped, rebuilt, merged and truncated. Partitions cannot
have synonyms.
Syntax
Create table <table_name> (column1, column2) partition by range (column_name)
(partition <partition_name> values less than <value>, partition <partition_name> values
less than <value>);
Create table emp_d (no number(4), ename varchar2(10), post varchar2(10), salary
number(8,2), deptno number(2)) partition by range (no) (partition p1 values less than
(20), partition p2 values less than (40));
Select * from emp_d partition (p1);
Select * from emp_d partition (p2);
A table can also be partitioned on more than one column. The partition key specified in
the insert statement is compared with partition bound defined when creating the
partition table.
Create table ordmast (ordno number(3) constraint ok primary key, ordate date, vcode
varchar2(8), deldate date) partition by range (ordno, vcode) (partition p1 values less
than (10,’v10’), partition p2 values less than (20,’v20’), partition p3 values less than
(maxvalue, maxvalue));
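As noted above, the partition key supplied in an insert is compared with the partition bounds. A brief sketch against the ordmast table (date values assumed for illustration):

```sql
-- ordno 5 is below p1's first bound value of 10, so the row is stored in p1.
INSERT INTO ordmast VALUES (5, SYSDATE, 'v05', SYSDATE + 7);
-- ordno 15 is past p1's bound but below p2's bound of 20, so it goes to p2.
INSERT INTO ordmast VALUES (15, SYSDATE, 'v15', SYSDATE + 7);
```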
The keyword ‘maxvalue’ can be specified for any value in the partition bound value list.
This keyword represents a virtual ‘infinite’ value that sorts higher than any other value
for the data type, including null values.
The table partitions can also be placed in table spaces defined by the user at his
discretion through this optional clause.
Syntax
Create table <table_name> (c1, c2) partition by range (c_name) (partition <p_name>
values less than <value> tablespace <tablespace_name>, partition <p_name> values less
than <value> tablespace <tablespace_name>);
Adding Partition
It is used to add a new partition after the existing last partition (referred to as the high
end).
Alter table emp_d add partition p3 values less than (30);
The add partition option shown above is only for tables where the last existing
partition has been defined with a specific key value.
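The other partition maintenance operations mentioned earlier (alter, drop, truncate and so on) follow the same pattern. A sketch against the emp_d table, assuming the partitions created above:

```sql
ALTER TABLE emp_d TRUNCATE PARTITION p1;       -- removes all rows in p1
ALTER TABLE emp_d DROP PARTITION p2;           -- removes p2 and its rows
ALTER TABLE emp_d RENAME PARTITION p3 TO p30;  -- renames a partition
```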
Database Objects
Synonyms
A synonym is a database object, which is used as an alias (alternative name) for a table,
view or sequence.
The main reasons for creating synonyms are:
1. Simplify SQL statements
2. The true name of the owner or table needs to be hidden.
3. The original location of the table needs to be hidden, for remote objects of a
distributed database.
4. Provide public access to an object.
Synonyms can be either public or private. A private synonym can be created by a normal
user and is available only to that user, whereas a public synonym is created by the
DBA and can be used by any database user.
Syntax
Create [public] synonym <syn_name> for <table_name>;
Create synonym emp_sy for emp;
Grant all on emp_sy to <user_name>;
A user who has permission on a synonym can perform all DML manipulations such as
insert, delete and update on the synonym; these operations actually affect the base
table. However, DDL operations cannot be performed on a synonym.
TAB is the public synonym created by DBA.
If we have public & private objects with the same name, the private objects take
precedence.
Create table tab (a number);
Select * from tab;
no rows selected
Here the query runs against the newly created private table tab, not the public
synonym TAB.
Sequences
Sequences are numeric column values that are computer generated. A sequence is a
database object, which can generate unique, sequential integer values. It can be used,
for example, to generate primary key values automatically.
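A minimal sketch of creating and using a sequence (the sequence name is assumed, and the insert assumes the emp table accepts just these two columns):

```sql
-- Generates 1000, 1001, 1002, ... one value per call to NEXTVAL.
CREATE SEQUENCE emp_seq
  START WITH 1000
  INCREMENT BY 1;

-- NEXTVAL fetches the next value; CURRVAL returns the value most
-- recently generated in the current session.
INSERT INTO emp (empno, ename) VALUES (emp_seq.NEXTVAL, 'SMITH');
SELECT emp_seq.CURRVAL FROM dual;
```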
Views
A view is a SQL query that is permanently stored in the database and assigned a name. It
is a tailored presentation of data contained in one or more tables. A view takes the output
of a query and treats it as a table; therefore a view can be thought of as a “stored
query” or a “virtual table”. Using SQL, you can create alternative views of the
information in your tables. Views let you restrict access to data, allowing different users
to see only certain rows or certain columns of a table.
Advantages of Views
Security
Each user can be given permission to access the database only through a small set of
views that contain the specific data the user is authorized to see, thus restricting the
user's access to stored data.
Query Simplicity
A view can draw data from several different tables and present it as a single table,
turning multi-table queries into single-table queries against the view.
Structural Simplicity
Views can give a user a “personalized” view of the database structure, presenting the
database as a set of virtual tables that make sense for that user.
Insulation from Change
A view can present a consistent, unchanged image of the structure of the database, even
if the underlying source tables are split, restructured or renamed.
Data Integrity
If data is accessed and entered through a view, the DBMS can automatically check the
data to ensure that it meets specified integrity constraints.
Syntax
Create [or replace] [[no] force] view <view_name> [column alias names…] as <query>
[with [check option] [read only] [constraint]];
Create view v1 as select * from emp_d;
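The with check option clause from the syntax above can be sketched as follows (the view and constraint names are assumed; deptno values follow the earlier emp_d examples):

```sql
-- The view exposes only department 10; with check option rejects any
-- insert or update through the view that would create a row the view
-- itself could not select.
CREATE VIEW emp_d10 AS
  SELECT * FROM emp_d WHERE deptno = 10
  WITH CHECK OPTION CONSTRAINT emp_d10_ck;
```

An insert through emp_d10 with deptno = 20 would fail with a check-option violation, while the same insert directly into emp_d would succeed.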
Index
Indexes are used to explicitly speed up SQL statement execution on a table; an Oracle
index provides a faster access path to table data. The index points directly to the
location of the rows containing the values. Indexes are a primary means of reducing disk
I/O when properly used.
We can create an index on a column or combination of columns using the “create index”
command. When we create an index, Oracle fetches and sorts the columns to be indexed
and stores the rowid along with the index value of each row. Then Oracle loads the index
from the bottom up.
Indexes are logically and physically independent of the data in the associated table. We
can create or drop index at any time without affecting the base tables or other indexes.
Oracle automatically maintains and uses indexes once they are created. Oracle
automatically reflects changes to data, such as addition of new rows, updating rows, or
deleting rows, in all relevant indexes with no additional action by users.
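A minimal example of the create index command described above (the index name is assumed):

```sql
-- A non-unique index on the ename column of emp; queries filtering on
-- ename can use it instead of scanning the whole table.
CREATE INDEX emp_ename_idx ON emp (ename);

-- Dropping the index removes only the access path, not the table data.
DROP INDEX emp_ename_idx;
```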
Unique Indexes
Indexes can be unique or non-unique. Unique indexes guarantee that no two rows of a
table have duplicate values in the columns that define the index. Non-unique indexes do
not impose this restriction on the column values. Oracle enforces unique integrity
constraints by automatically defining a unique index on the unique key. A unique index
is created by using the create unique index command.
Create unique index ind on emp (empno);
A unique index is automatically created when we create a unique or primary key
constraint.
We cannot create an index on a column that is already indexed, nor a unique index on
columns with duplicate values.
Composite Index (Concatenated Index)
It is created on multiple columns of a table. Columns in a composite index can appear in
any order and need not be adjacent columns of the table.
These indexes enhance the retrieval speed of data for select statements in which the
‘where’ clause references all or the leading portion of the columns in the composite
index. The most commonly accessed or most selective columns go first in the order of
the column list.
Create index ind1 on emp (empno, deptno);
The above index is used to retrieve data when the where clause references both of
these columns or empno alone; it cannot be used for deptno alone.
Reverse Key Index
It reverses each byte of the column being indexed while keeping the column order. Such
an arrangement can help avoid performance degradation in indexes when
modifications to the index are concentrated on a small set of blocks. By reversing the
keys of the index, insertions become distributed all over the index.
Create index rind on emp (empno) reverse;
Alter index rind rebuild noreverse;
Clusters
Clusters are an optional method of storing table data. A cluster is a group of tables that
share the same data blocks because they share common columns and are often used
together. Clusters are special configurations to use when two or more tables are stored
in close physical proximity to improve performance on SQL join statements using those
tables.
Cluster is a method of storing tables that are intimately related and often joined
together into the same area on the disk. For example instead of EMP table being in one
section of the disk and the dept table being somewhere else, their rows could be
interleaved together in a single area, called a cluster. Clusters do not change the data
that is stored in tables, however, they do change the way that data is stored and
accessed. Oracle physically stores all rows for each department from both EMP and
DEPT tables in the same data blocks. To cluster tables, you must own the tables you are
going to cluster together.
The cluster key
The cluster key is the column, or group of columns, that the clustered tables have in
common, for example deptno in the emp and dept tables. You specify the columns of the
cluster key when creating the cluster. You subsequently specify the same columns when
creating every table added to the cluster.
The cluster name follows the table naming conventions, and column datatype is the
name and datatype you will use as the cluster key. The column name may be same as
one of the columns of a table or it may be any other valid name.
Syntax for creating cluster command:
Create cluster clustername (column datatype [, column datatype…]) [other options];
Example
Create Cluster Emp_Dept(Deptno_Key Number(2));
Create Table Dept1(Deptno Number(2)
Primary Key, Dname Varchar2(10)) Cluster Emp_Dept(Deptno);
Create Table Emp1(Empno Number(4) Primary Key,
Ename Varchar2(10), Job Varchar2(10),Sal Number(7,2),
Deptno Number(2) References Dept1(Deptno)) Cluster Emp_Dept(Deptno);
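For an indexed cluster such as Emp_Dept above, an index on the cluster key must be created before rows can be inserted into the clustered tables. A sketch (the index name is assumed):

```sql
-- Indexes the cluster key; required before DML on tables in the cluster.
CREATE INDEX emp_dept_idx ON CLUSTER emp_dept;
```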
Objects in Oracle
Oracle supports different types of objects. The major object types include:
Abstract data types
Varying arrays
Nested Tables
Abstract Data Types
An abstract data type groups related attributes into a single, reusable type. For
example, the address of an employee might consist of the attributes:
Street_no Number(3)
Street_name Varchar2(20)
City Varchar2(20)
State Varchar2(20)
When a table that uses the address information is created, a column that uses the
abstract data type for addresses could be used. The creation of the abstract data type
for the address of an employee can be done at the SQL prompt. The code is as given below:
SQL> create or replace type address_ty as object (street_no number(3), street_name
varchar2(20), city varchar2(20), state varchar2(20));
With abstract data types, inserting or selecting values cannot be done without knowing
the exact data types of the attributes.
Updating Records in abstract data types
Updating is done in a similar manner to that of the select statement. The dot notation
and an alias of the table are used.
SQL> update vend_st a set a.venadd.street_no=10 where a.venname='charu';
Varying Arrays
Varray helps in storing repeating attributes of a record in a single row. Varying arrays
have a fixed lower value i.e., 0 and a flexible upper value i.e., could be any valid number.
Varray cannot be extended beyond the limit that was defined when the Varray was
created.
Collectors such as varying arrays allow repetition of only those column values that
change, potentially saving storage space. Collectors can be used to accurately represent
relationships between data types in the database objects.
Creating a Varying Array
A varying array can be based on an abstract data type or on one of Oracle's standard
data types. Assuming that the order_detail table structure is known, a single orderno
could contain many itemcodes.
SQL> create type itemcode as varray(5) of varchar2(5);
The command given above creates a varying array that could contain a maximum of 5
values, each of data type varchar2. The upper limit has to be specified, otherwise an
error is raised by Oracle.
Let us create two other types called qty_ord and qty_deld so that these could be used in
the creation of a table.
SQL> create type qty_ord as varray(5) of number(5);
The example given above creates a varying array, which could contain a maximum of 5
values and each of these values could be of number data type.
SQL> create type qty_deld as varray(5) of number(5);
This similarly creates a varying array of a maximum of 5 number values. The varying
arrays have been created and a table can now be created using these arrays.
SQL> create table order_detail ( orderno varchar2(5), item_va itemcode, qty_va qty_ord,
qtyd_va qty_deld);
When values are inserted into the varying arrays care should be taken such that the
maximum limit is not exceeded. The maximum number is specified at the time of
creation of the array and can be queried using the user_col_types data dictionary view.
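Inserting into the varying array columns is done with each type's constructor; a sketch with assumed values, staying within the limit of 5 elements:

```sql
INSERT INTO order_detail
VALUES ('O1001',
        itemcode('I1', 'I2', 'I3'),   -- varray constructor for item codes
        qty_ord(10, 20, 30),          -- quantities ordered
        qty_deld(5, 20, 25));         -- quantities delivered
```

Supplying more than 5 elements to any of the constructors raises an error because the maximum VARRAY limit is exceeded.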
Selecting Data from Varying Arrays
SQL> select * from order_detail;
Nested Tables
Varying arrays are used to create collectors within the table. Varying arrays have a
limited number of entries, whereas nested tables have no limit on the number of entries
per row. A nested table is a table within a table. A table is represented as a column
within another table. Multiple rows can be present in the nested table for each row in
the main table.
Creating Nested Tables
Let us assume that the table contains a structure as shown below:
Orderno Varchar2(5)
Odate Date
Vencode Varchar2(5)
Itemcode Varchar2(2)
Qty_ord Number(5)
Qty_deld Number(5)
There are a number of itemcodes for each orderno. Each itemcode could have its own
qty_ord and qty_deld. The exact number of itemcodes is not known, and hence let
us create a type that is based on itemcode, qty_ord and qty_deld.
SQL> create type ord_ty as object (itemcode varchar2(5) ,qty_ord number (5) , qty_deld
number (5) );
The ord_ty contains a record for each order - itemcode, the quantity ordered and the
quantity delivered. To use this data type as the basis for a nested table a new abstract
data type has to be created and is done as shown below:
SQL> create type ord_nt as table of ord_ty;
The "as table of" clause tells Oracle that this type will be used as the basis of a nested
table. This type, i.e., ord_nt, can be used to create a table, i.e., order_master.
SQL> create table order_master (orderno varchar2 (5), odate date, vencode varchar2 (5), dets
ord_nt) nested table dets store as ord_nt_tab;
When a table is created it includes a table data type and the name of the table that will
be used to store the nested table's data is specified. In the example that is given above
the nested table is stored in a table that is named ord_nt_tab. The data for the nested
table is not stored 'in-line' with the rest of the table's data. Instead it is stored apart
from the main table. Thus the dets column will be stored in one table and the data for
orderno, odate and vencode is stored in a separate table. Oracle maintains pointers
between the tables.
Inserting Records into Nested Tables
SQL> insert into order_master values ('O100','18-jul-99','v001', ord_nt(
ord_ty('i100',10,5),ord_ty('i101',50,25), ord_ty('i102',5,5) ));
The command given above inserts values into the order_master table. Dets column values
are stored in a separate nested table called ord_nt_tab. The data stored in the nested
table ord_nt_tab cannot be accessed directly with an ordinary SQL command.
SQL> INSERT INTO TABLE(SELECT p.dets FROM order_master p WHERE p.orderno = 'O100')
VALUES ('i103',30,25);
The values for the dets collection column can be inserted using a TABLE expression. The
TABLE expression lets us select the dets column from the order_master table; the data
entered in the VALUES clause is inserted into the nested table ord_nt_tab.
Querying Nested Tables
Nested tables support a greater variety of querying than varying arrays. The nature of
the table must be taken into consideration while querying. A nested table is a column
inside a table. To support queries of the columns and rows of a nested table, Oracle
provides the TABLE keyword. If the nested table were a normal relational table, it
could be queried using the normal select command as shown below:
SQL> select * from ord_nt_tab;
Make use of the above query enclosed in the TABLE function as if it were a table.
SQL> SELECT * FROM TABLE(SELECT t.dets FROM order_master t WHERE t.orderno = 'O100');
Updates can be performed in the same manner as inserts. An example makes updates
clearer.
SQL> UPDATE TABLE(SELECT e.dets FROM order_master e WHERE e.orderno = 'O100') p SET
VALUE(p) = ord_ty('i103',50,45) WHERE p.itemcode = 'i103';
Varrays
Example:1
create type one_yr as varray(6) of number(3)
/
create type two_yr as varray(6) of number(3)
/
Nested Tables
Example: 1
create type ord_ty as object ( itemcode varchar2(5),
qty_ord number(5), qty_deld number(5) )
/
create type ord_nt as table of ord_ty;
/
create table order_master ( orderno varchar2(5),
odate date, vcode varchar2(5), details ord_nt )
nested table details store as ord_nt_tab
/
insert into order_master values ('o100','18-jun-99','v001',
ord_nt(ord_ty('i100',10,5),
ord_ty('i101',50,25),
ord_ty('i102',5,5)
) )
/
insert into order_master values ('o101','28-jul-99','v002',
ord_nt(ord_ty('i100',20,15),
ord_ty('i101',20,5),
ord_ty('i102',15,15)
) )
/
insert into order_master values ('o102','13-aug-99','v003',
ord_nt(ord_ty('i100',10,5),
ord_ty('i101',25,15),
ord_ty('i102',35,25)
) )
/
select * from order_master
/
update table ( select e.details from order_master e where e.orderno='o100' ) p
set value(p)=ord_ty('i103',50,45) where p.itemcode='i101'
/
update table
(select e.details from order_master e where e.orderno='o100') p
set value(p) = ord_ty('i100',20,15) where p.itemcode = 'i100'
/
select o.itemcode from the (select details from order_master where orderno='o100') o
/
select o.itemcode,o.qty_ord,o.qty_deld from the
( select details from order_master where orderno='o101') o
/
Example: 2
Creating Nested Table Type
Create Type ANIMAL_NT as TABLE of Animal_Ty;
LOBs
A LOB is a Large Object. LOBs are used to store large, unstructured data, such as video,
audio and photo images. With a LOB you can store up to 4 gigabytes of data. They are
similar to a LONG or LONG RAW but differ from them in quite a few ways.
Why use LOBs and not LONG or LONG RAW: -
LOBs offer more features to the developer than a LONG or LONG RAW. The main
differences between the data types also indicate why you would use a LOB instead of a
LONG or LONG RAW. These differences include the following: -
You can have more than one LOB column in a table, whereas you are restricted to
just one LONG or LONG RAW column per table.
When you insert into a LOB, the actual value of the LOB is stored in a separate
segment (except for in-line LOBs) and only the LOB locator is stored in the row, thus
making it more efficient from a storage as well as query perspective. With LONG or
LONG RAW, the entire data is stored in-line with the rest of the table row.
LOBs allow random access to their data, whereas with a LONG you have to read the
data sequentially from beginning to end.
The maximum length of a LOB is 4 GB as compared to the 2 GB limit on LONG.
Querying a LOB column returns the LOB locator and not the entire value of the LOB.
On the other hand, querying LONG returns the entire value contained within the
LONG column.
LOB types:
You can have two categories of LOBs based on their location with respect to the
database. The categories include internal LOBs and external LOBs. As the names
suggest, internal LOBs are stored within the database, as table columns. External LOBs
are stored outside the database as operating system files. Only a reference to the
actual OS file is stored in the database. An internal LOB can also be persistent or
temporary depending on the life of the internal LOB.
An internal LOB can be one of three different data types as follows: -
CLOB – A Character LOB. Used to store character data.
BLOB – A Binary LOB. Used to store binary, raw data
NCLOB – A LOB that stores character data that corresponds to the national character
set defined for the database.
the row. External LOBs on the other hand use reference semantics. That is only the
BFILE location is copied and not the actual operating system file.
5. Each internal LOB column has a distinct LOB locator for each row and a distinct copy
of the LOB value. Each BFILE column has its own BFILE locator for each row.
However you could have two rows in the table that contain BFILE locators pointing
to the same operating system file.
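As a sketch of how an external LOB is set up (the directory path, table name and file name below are hypothetical), only a reference to the OS file is stored in the BFILE column:

```sql
-- A directory object maps a logical name to an OS path; creating one
-- requires the CREATE ANY DIRECTORY privilege.
SQL> create directory resume_dir as '/u01/app/files';

SQL> create table lobtest (doc_id number, myfile bfile);

-- BFILENAME() builds a locator pointing at the OS file; the file's
-- contents are never copied into the database.
SQL> insert into lobtest values (1, bfilename('RESUME_DIR', 'scott_resume.txt'));
```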
Data Definition Language (DDL) commands and LOBs:
This section tells you how to create a table with LOB columns and also how to alter or
modify details of a LOB column in a table.
Creating a Table having LOB columns: -
You can create tables that have one or more LOB columns of the same or different type.
When you create a LOB column you also have the option of specifying if you want the
data within the LOB to be stored in-line or out-of-line. This is done using the ENABLE |
DISABLE STORAGE IN ROW clause. If you enable storage in the row, Oracle stores the
LOB value within the row, provided the length of the row (and the locator) is less than
4K. If it changes to more than 4K, then the LOB value will be moved out of the row into
the LOB segment. Only the locator is stored within the row.
It is important to note that when you create LOB column(s), you are actually creating
separate LOB segments. These LOB segments can be in the same tablespace as your
table. However you also have the option of specifying a different tablespace for the LOB
segment.
In addition, for each LOB column you create, Oracle implicitly creates a LOB index. This
index is maintained by Oracle and you cannot alter or drop it.
The name of the LOB segment defaults to SYS_LOBxxxx, where xxxx is a hexadecimal
number. The name of the LOB index defaults to SYS_ILxxxx where xxxx is a hexadecimal
number. The hexadecimal numbers for both the LOB segment and the LOB index are
the same.
When creating a LOB column you have the option of specifying the name of the LOB
segment as well as the LOB index. In addition you also can specify which tablespace
they go into, the storage details etc.
To create tables with LOB columns, you specify one of the LOB types as the data type of
the column. So, for example, the following statement
CREATE TABLE lobtable
(employee_id NUMBER,
resume CLOB,
comments CLOB);
would create a table lobtable having 3 columns, two of which are LOB columns. For
each of these LOB columns, you would have implicitly created a LOB segment and a LOB
index. To view information about the LOB columns you’ve created, check out the
XXX_LOBS (where XXX could be ALL, DBA or USER) dictionary views. For our lobtable,
we find the following information: -
SQL> SELECT table_name “Table”, column_name “Column”, segment_name “Segment”,
2 index_name “Index”
3 FROM user_lobs;
Table Column Segment Index
---------------- ------------------ ----------------------------------------- ------------------------------------
LOBTABLE COMMENTS SYS_LOB0000016119C00002$$ SYS_IL0000016119C00002$$
LOBTABLE RESUME SYS_LOB0000016119C00003$$ SYS_IL0000016119C00003$$
SQL>SELECT segment_name, segment_type, tablespace_name
2 FROM user_segments
3 WHERE segment_name like 'SYS_LOB%';
SEGMENT_NAME SEGMENT_TYPE TABLESPACE_NAME
---------------------------------------- ------------------------ ------------------------------
SYS_LOB0000016119C00002$$ LOBSEGMENT USERS
SYS_LOB0000016119C00003$$ LOBSEGMENT USERS
SQL> SELECT segment_name, segment_type, tablespace_name
2 FROM user_segments
3 WHERE segment_name like 'SYS_IL%';
SEGMENT_NAME SEGMENT_TYPE TABLESPACE_NAME
---------------------------------------- ------------------------- -----------------------------
SYS_IL0000016119C00002$$ LOBINDEX USERS
SYS_IL0000016119C00003$$ LOBINDEX USERS
When creating tables with LOB columns, you can specify the name of the LOB segment,
the LOB index, storage characteristics and LOB specific details. This is done using the
LOB clause of the CREATE TABLE statement. Although you can specify details for the
LOB index, this clause is deprecated as of Oracle 8i. You can still use it however without
an error, but it is a good idea to leave it out and let Oracle manage the index details. The
general syntax for this is as follows: -
CREATE TABLE <tabname>
(col_list)
[Physical attributes]
[Storage details]
[LOB (<lobcol1> [, <lobcol2>…]) STORE AS [<lob_segment_name>]
([TABLESPACE <name>]
[{ENABLE | DISABLE} STORAGE IN ROW]
[CHUNK <chunk_size>]
[PCTVERSION <version_number>]
[{CACHE | NOCACHE [{LOGGING | NOLOGGING}]
| CACHE READS [{LOGGING | NOLOGGING}]}]
[<storage_clause_for_LOB_segment>]
[INDEX [<lob_ind_name>] [physical attributes] [<storage_for_LOB_index>]]
)
]
[LOB (<lobcol1> [, <lobcol2>…])…]
The LOB clause can be specified for a single LOB or for some/all LOBs in your table. If
you specify more than one LOB column in a single LOB clause, you cannot name the LOB
segments. You can use the LOB clause with several LOB columns for example if you
wanted to specify the same storage or other attributes for all of them.
Using the LOB clause above, we could re-create our lobtable table as follows, specifying
the tablespace and other details for one or more LOB columns.
Example 1: - Specifying names for the LOB segment as well as the LOB index for each of
the LOB columns
SQL> CREATE TABLE lobtable (employee_id NUMBER,
resume CLOB, comments CLOB)
LOB (comments) STORE AS comments_seg
(TABLESPACE lobtbs
CHUNK 4096 CACHE
STORAGE (MINEXTENTS 2)
INDEX comments_ind (TABLESPACE indxtbs
STORAGE (MAXEXTENTS UNLIMITED)
)
)
LOB (resume) STORE AS resume_seg
(TABLESPACE lobtbs
ENABLE STORAGE IN ROW
INDEX resume_ind (TABLESPACE indxtbs)
);
Example 2: - Creating a table with default values for the LOB columns in it.
SQL> CREATE TABLE lobtable
(employee_id NUMBER,
photo BLOB DEFAULT EMPTY_BLOB(),
resume CLOB DEFAULT EMPTY_CLOB(),
comments CLOB DEFAULT EMPTY_CLOB());
The DBMS_LOB.GETLENGTH function returns the length of the specified LOB. Remember
that an empty LOB has a length of 0.
DBMS_LOB.COPY (
dest_lob IN OUT NOCOPY {BLOB | CLOB CHARACTER SET ANY_CS},
src_lob IN {BLOB | CLOB CHARACTER SET dest_lob%CHARSET},
amount IN INTEGER,
dest_offset IN INTEGER := 1,
src_offset IN INTEGER := 1);
This procedure is used to copy all or part of the source internal LOB into the destination
internal LOB. If the offset specified for the destination LOB is beyond the end of data
currently in the LOB, zero-byte fillers or spaces are inserted into the BLOB or CLOB
respectively. If the offset specified is less than current length of the destination LOB,
then data in the destination LOB will be overwritten by the new data.
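A minimal sketch of using COPY, assuming the lobtable created earlier and an existing row for a hypothetical employee 10:

```sql
DECLARE
  l_src  CLOB;
  l_dest CLOB;
BEGIN
  -- lock the row before writing into its LOBs
  SELECT resume, comments INTO l_src, l_dest
    FROM lobtable WHERE employee_id = 10 FOR UPDATE;
  -- copy the first 20 characters of resume into comments
  DBMS_LOB.COPY(dest_lob => l_dest, src_lob => l_src,
                amount => 20, dest_offset => 1, src_offset => 1);
  COMMIT;
END;
/
```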
Methods to work on a BFILE: -
The following are methods that are used with external LOBs or BFILE. Not all
functions/procedures have been included. Please check out the on-line documentation
for a complete listing.
DBMS_LOB.LOADFROMFILE (
Dest_lob IN OUT NOCOPY {BLOB | CLOB CHARACTER SET ANY_CS},
Src_lob IN BFILE,
Amount IN INTEGER,
Dest_offset IN INTEGER := 1,
Src_offset IN INTEGER := 1)
This procedure copies all or part of the source BFILE into the destination internal LOB. It
is quite similar to the way COPY works, except that the source is a BFILE.
DBMS_LOB.OPEN (
lob_loc IN OUT NOCOPY {BLOB | BFILE | CLOB CHARACTER SET ANY_CS},
open_mode IN BINARY_INTEGER);
This procedure opens the given internal or external LOB in one of two modes, either
read-only or read-write. The value for the open_mode parameter could be either
lob_readonly or lob_readwrite. A BFILE can only be opened in the read-only mode, that
is the only valid value for open_mode when working with BFILE is lob_readonly.
DBMS_LOB.FILECLOSE (
file_loc IN OUT NOCOPY BFILE);
DBMS_LOB.FILECLOSEALL;
The above two procedures are used to close an opened BFILE. The first takes in a BFILE
locator and closes the BFILE referenced by it. The second closes all open BFILEs in the
current session.
Reading and Writing into LOBs:
DBMS_LOB.READ (
lob_loc IN {BLOB | BFILE | CLOB CHARACTER SET ANY_CS},
amount IN OUT NOCOPY BINARY_INTEGER,
offset IN INTEGER,
buffer OUT {RAW | VARCHAR2 CHARACTER SET lob_loc%CHARSET});
This procedure reads the specified portion of the LOB and places it in the buffer
specified. The form of the VARCHAR2 buffer must match the form of the CLOB
parameter.
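A short sketch of READ, again assuming the lobtable row for the hypothetical employee 10; note that amount is IN OUT and comes back holding the number of characters actually read:

```sql
DECLARE
  l_clob CLOB;
  l_buf  VARCHAR2(100);
  l_amt  BINARY_INTEGER := 30;
BEGIN
  SELECT comments INTO l_clob FROM lobtable WHERE employee_id = 10;
  DBMS_LOB.READ(l_clob, l_amt, 1, l_buf);  -- read up to 30 chars from offset 1
  dbms_output.put_line('Read ' || l_amt || ' chars: ' || l_buf);
END;
/
```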
DBMS_LOB.APPEND (
Dest_lob IN OUT NOCOPY {BLOB | CLOB CHARACTER SET ANY_CS},
Src_lob IN {BLOB | CLOB CHARACTER SET dest_lob%CHARSET});
Use the append procedure to add contents of the source LOB into the destination LOB.
Obviously you can only append to/from LOBs of the same type, so CLOB to CLOB or
BLOB to BLOB.
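For example (a sketch against the same hypothetical lobtable row), appending one CLOB onto another:

```sql
DECLARE
  l_resume   CLOB;
  l_comments CLOB;
BEGIN
  SELECT resume, comments INTO l_resume, l_comments
    FROM lobtable WHERE employee_id = 10 FOR UPDATE;
  DBMS_LOB.APPEND(l_resume, l_comments);  -- comments is added to the end of resume
  COMMIT;
END;
/
```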
DBMS_LOB.WRITE (
Lob_loc IN OUT NOCOPY {BLOB | CLOB CHARACTER SET ANY_CS},
Amount IN BINARY_INTEGER,
Offset IN INTEGER,
Buffer IN {RAW | VARCHAR2 CHARACTER SET ANY_CS});
The write procedure is used to write a specified amount of data from the buffer into an
internal LOB, starting at the offset position. Any data already in the LOB is overwritten.
The form of the VARCHAR2 buffer must match the form of the CLOB parameter.
There are many more procedures and functions that haven’t been included in this list.
Please check the documentation for a complete list.
Examples for working with persistent LOBs: -
Example 1: -
DECLARE
L_resume CLOB;
L_comments CLOB;
Comm_buf VARCHAR2 (30) := 'Joined company on the 10/10/01';
Resume_buf VARCHAR2 (30) := 'Resume for Scott';
BEGIN
/* Create a row in our lobtable, initialising the lob columns to empty and
returning the locator values into local variables */
INSERT INTO lobtable (employee_id, resume, comments)
VALUES (10, empty_clob (), empty_clob ())
RETURNING resume, comments INTO L_resume, L_comments;
--Write info from the buffers into the LOBs pointed to by the locators
dbms_lob.write(L_resume, length(Resume_buf), 1, Resume_buf);
dbms_lob.write(L_comments, length(Comm_buf), 1, Comm_buf);
END;
/
DECLARE
l_bfile BFILE;
l_dir VARCHAR2(30);
l_file VARCHAR2(100);
BEGIN
SELECT myfile INTO l_bfile
FROM lobtest;
IF (dbms_lob.fileexists (l_bfile) = 1) THEN
IF (dbms_lob.fileisopen (l_bfile) = 1) THEN
dbms_output.put_line('File is open');
ELSE
dbms_lob.open(l_bfile, dbms_lob.lob_readonly);
END IF;
dbms_output.put_line('Length of file is ' || dbms_lob.getlength(l_bfile));
ELSE
dbms_lob.filegetname(l_bfile, l_dir, l_file);
dbms_output.put_line('File with name ' || l_file || ' does not exist in directory '||
l_dir);
END IF;
END;
You can create, access, update and then free your temporary LOBs. There is no logging
or redo information generated for temporary LOBs, thus giving you better performance.
Temporary LOBs are created in your temporary tablespace. You use
DBMS_LOB.CREATETEMPORARY() procedure to create a temporary LOB. When you
create a temporary LOB you are automatically setting it to empty. You cannot use the
EMPTY_BLOB or EMPTY_CLOB functions with temporary LOBs.
The following procedures and functions are provided to you as part of the DBMS_LOB
package to work with temporary LOBs.
Creating a temporary LOB:
DBMS_LOB.CREATETEMPORARY (
lob_loc IN OUT NOCOPY {BLOB | CLOB CHARACTER SET ANY_CS},
cache IN BOOLEAN,
duration IN PLS_INTEGER := DBMS_LOB.SESSION)
This procedure creates a temporary LOB with the locator returned in lob_loc. In
addition, it also creates a temporary LOB index in the default temporary tablespace. The
duration parameter specifies the lifetime of the temporary LOB and defaults to the
session. If you want, you can also set it to the current program call using the integer
DBMS_LOB.CALL.
Freeing the temporary LOB:
DBMS_LOB.FREETEMPORARY (
lob_loc IN OUT NOCOPY {BLOB | CLOB CHARACTER SET ANY_CS})
This procedure frees the created temporary CLOB or BLOB in your temporary
tablespace. Once you call this procedure, the lob locator associated with the temporary
LOB is marked invalid. If you subsequently assign this lob locator to another lob locator,
the latter one is also freed and marked invalid.
Checking if a LOB is temporary:
DBMS_LOB.ISTEMPORARY (
lob_loc IN {BLOB | CLOB CHARACTER SET ANY_CS})
RETURN INTEGER
Use this function to determine if a given lob locator points to a temporary or persistent
LOB. It returns an integer value: 1 for a temporary LOB and 0 for a persistent LOB.
An example:
DECLARE
tempLOB CLOB;
amt NUMBER := 14;
position NUMBER := 1;
buffer VARCHAR2(20) := 'this is a test';
BEGIN
/*Create and open a temporary lob for reading and writing */
DBMS_LOB.CREATETEMPORARY (tempLOB, true);
IF (DBMS_LOB.ISTEMPORARY (tempLOB) = 1) THEN
dbms_output.put_line('A temporary LOB has been created');
ELSE
dbms_output.put_line('Not a temporary LOB');
END IF;
DBMS_LOB.OPEN (tempLOB, DBMS_LOB.LOB_READWRITE);
DBMS_LOB.CLOSE (tempLOB);
DBMS_LOB.FREETEMPORARY (tempLOB);
END;
There are two approaches to using the flashback queries. One is a time based approach
and the other uses the SYSTEM CHANGE NUMBER - SCN - to identify the point we want
to go back to. Each of these approaches employs the AS OF clause as well as an Oracle
supplied package DBMS_FLASHBACK. Both are discussed here.
Using the Flashback Query with AS OF clause:
Suppose we want to recover data we have accidentally deleted for some of the
employees from the EMPLOYEE table and have committed the transaction. The
following query shows how we will use the AS OF clause for recovering data to a certain
point in time, at which we know our data existed.
SQL> INSERT INTO EMPLOYEE_TEMP
(SELECT * FROM EMPLOYEE AS OF TIMESTAMP
TO_TIMESTAMP('13-SEP-04 08:50:58','DD-MON-YY HH24:MI:SS'));
Now if we ran a SELECT statement on EMPLOYEE_TEMP table, we will see all the lost
data in this temporary table, which we can add to the actual employees table.
Using a point in time is one way of going back; another way of telling the system how
far to go back is to use the SCN (System Change Number). The procedure is the same as
before, recovering lost data with the DBMS_FLASHBACK utility, only this time using the
system change number instead of the time to enter the flashback mode. This SCN can
be obtained before the transaction is initiated by
using the GET_SYSTEM_CHANGE_NUMBER function of the DBMS_FLASHBACK utility as
follows.
SQL> select DBMS_FLASHBACK.GET_SYSTEM_CHANGE_NUMBER from dual;
You do not have to be in the flashback mode to run this statement. As you might have
already guessed, the AS OF clause is used with SCN number as shown below.
SQL> INSERT INTO EMPLOYEE_TEMP
(SELECT * FROM EMPLOYEE AS OF SCN 10280403339);
We have recovered the data again but this time using the SCN number.
Using the DBMS_FLASHBACK package:
Prior to Oracle 9i release 2, the only way to use the flashback query feature was through
the use of the utility package DBMS_FLASHBACK. In order to use this method, the user
had to specify the intention to enter the flashback mode by supplying the time to which
the user wished to go back to. This was done by using the ENABLE_AT_TIME procedure of
the DBMS_FLASHBACK package.
It is worth mentioning that even when we use a time instead of an SCN, Oracle still
maps that time to an SCN stored in the SMON_SCN_TIME table, which is updated every
5 minutes by the Oracle background process SMON.
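The ENABLE_AT_TIME approach can be sketched as follows (assuming the EMPLOYEE and EMPLOYEE_TEMP tables used earlier). A cursor is opened while flashback is enabled, flashback is then disabled, and the old rows are copied out, since DML is not permitted while the session is in flashback mode:

```sql
DECLARE
  CURSOR c_old IS SELECT * FROM employee;
  r_old employee%ROWTYPE;
BEGIN
  DBMS_FLASHBACK.ENABLE_AT_TIME(SYSDATE - (1/24));  -- as of one hour ago
  OPEN c_old;                 -- the cursor sees the old version of the data
  DBMS_FLASHBACK.DISABLE;     -- leave flashback mode before doing DML
  LOOP
    FETCH c_old INTO r_old;
    EXIT WHEN c_old%NOTFOUND;
    INSERT INTO employee_temp VALUES r_old;  -- restore the lost rows
  END LOOP;
  CLOSE c_old;
  COMMIT;
END;
/
```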
Oracle 10G enhancements to Flashback:
Oracle 10G has enhanced the flashback feature further and has turned it into a much
more powerful feature by introducing numerous additions. Some of the more common
ones are discussed here.
Flashback Table
Flashback Drop
Flashback Database
Flashback Versions Query
Flashback Transaction Query
the recycle bin, these objects can be retrieved from the recycle bin or deleted
permanently by using the PURGE command. An individual object like a table or an
index can be deleted from the recycle bin:
SQL> PURGE TABLE Employee;
or the whole recycle bin can be 'emptied out' by using the PURGE command:
SQL> PURGE recyclebin;
If you take a look at the contents of the recycle bin using the following query,
SQL> select OBJECT_NAME, ORIGINAL_NAME, TYPE from user_recyclebin;
OBJECT_NAME ORIGINAL_NAME TYPE
---------------------------------------------------------------------------------------------------
BIN$G/gHMigrTRqHQukZSIpSLw==$0 EMPLOYEE TABLE
BIN$1UiHeUR7SymGHo20pTfGXA==$0 EMPLOYEE TABLE
BIN$6d6677f5T+K++npt+5p/jQ==$0 EMP_IDX1 INDEX
you will notice that the tables are not saved under their original names; instead they are
saved under their recycled names, along with the column ORIGINAL_NAME that contains
the actual names of the objects. When a table is recovered from the recycle bin, the
views and procedures using it that were rendered invalid at the time the table was
dropped remain invalid. The actual names of the dependent objects (indexes, views etc.)
have to be retrieved from the recycle bin manually and applied to the table again. So for
our example, the table retrieved from the recycle bin would have an index named
BIN$6d6677f5T+K++npt+5p/jQ==$0 instead of EMP_IDX1, and the actual name has to
be recovered and applied manually like this:
SQL> drop index "BIN$6d6677f5T+K++npt+5p/jQ==$0";
SQL> create index EMP_IDX1 on EMPLOYEE (EMPNO);
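Restoring the dropped table itself is done with the FLASHBACK TABLE ... TO BEFORE DROP command, for example:

```sql
-- Bring the most recently dropped EMPLOYEE back from the recycle bin
SQL> FLASHBACK TABLE employee TO BEFORE DROP;

-- Or restore it under a different name if a new EMPLOYEE already exists
SQL> FLASHBACK TABLE employee TO BEFORE DROP RENAME TO employee_old;
```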
Flashback database
So far we have discussed the recovery of individual rows or individual objects. This
logically leads to the discussion of recovering the whole database to a point in time. Knit
tightly with the Recovery Manager (RMAN), the Flashback Database feature provided in
Oracle 10g provides yet another way of easy and efficient point-in-time recovery in
case of data corruption or data loss. This is much faster than the traditional approach to
point-in-time recovery since no redo logs are required. About the Flashback Database
feature, Oracle documentation says it best: "Flashback Database is like a 'rewind button'
for your database."
As is generally the case with most software, there is a speed/space trade-off here as
well. Flashback Database requires the creation and configuration of an Oracle Flash
Recovery Area before this feature can be used.
The Flash Recovery Area, created by the DBA, is an allocation of disk space to hold all
the recovery-related files in one centralized place. The Flash Recovery Area contains the
flashback logs, archived redo logs, RMAN backup files and copies of control files.
The destination and the size of the recovery area are set up using the
db_recovery_file_dest and db_recovery_file_dest_size initialization parameters. Now
that the setup is complete, let's see how the flashback database is used.
For this test, suppose that a transaction ran that made significant changes to the
database, yet this is not what the user intended. Going back and retrieving individual
objects, then recovering and restoring the original data, can be an extensive,
time-consuming and error-prone exercise. It is time to use FLASHBACK DATABASE.
First, flashback is enabled to make the Oracle database enter the flashback mode. The
database must be mounted EXCLUSIVE and not open, and it has to be in ARCHIVELOG
mode before we can use this feature. This is shown below.
SQL> ALTER DATABASE ARCHIVELOG;
Now startup the database in EXCLUSIVE mode.
SQL> SHUTDOWN IMMEDIATE;
SQL> STARTUP MOUNT EXCLUSIVE
Now enter the flashback mode (the database should not be open at this time)
SQL> ALTER DATABASE FLASHBACK ON;
Issue the flashback command and take the database to the state it was in, one hour ago.
SQL> Flashback database to timestamp sysdate-(1/24);
After the system comes back with FLASHBACK COMPLETE, open the database.
SQL> ALTER DATABASE OPEN RESETLOGS;
Now if you select from any of the tables that were affected, you will see that the
affected tables are in the original state, i.e. an hour ago. And once again, we have the
option of using SCN instead of timestamp.
As is evident from this section, while other flashback features are available to the
users of the database, Flashback Database involves the DBA because of the system-level
activities that have to be performed. Nevertheless, the whole exercise is much
simpler and easier than the traditional point-in-time recovery.
Flashback query is a powerful and useful feature introduced in Oracle 9i, and enhanced
greatly in Oracle 10g, that can help us recover data, lost or corrupted, due to human
error. One big advantage of using flashback over point-in-time recovery is that with the
latter, not only would transactions from the time of the error to the current time be
lost, but the system would also be unavailable for the duration of the recovery. With
flashback query,
on the other hand, there will be no down time needed and repair or recovery is less
labor and time intensive than what it used to be in earlier versions of Oracle. With the
new features like the Recycle Bin, Flashback Database and Flashback Drop in Oracle 10g,
the flashback capability introduced in 9i has been improved tremendously, turning a
small feature into a powerful tool in the new Oracle releases.
Flashback is an insurance feature. Just like having car insurance does not mean that we
can be careless on the road, FLASHBACK too, should be considered another tool in the
belt, rather than a luxury that allows us to be careless about the data simply because we
have the ability to recover it easily.
Much of what makes grid computing possible today is the innovation surrounding
hardware. For example,
Processors: New low-cost, high-volume Intel Itanium 2, Sun SPARC, and IBM
PowerPC 64-bit processors now deliver performance equal to or better than
processors used in high-end SMP servers.
Servers: Blade server technology reduces the cost of hardware and increases the
density of servers, which further reduces expensive data center requirements.
These blade servers also come with remote management capabilities that make
it easy for data center administrators to manage these systems. Data centers
have started to leverage these technologies.
Networked storage: Network Attached Storage (NAS) and Storage Area Networks
(SANs) enable sharing of storage across systems, further reducing costs.
Network Interconnects: Gigabit Ethernet and Infiniband interconnect
technologies are driving down the cost of connecting clusters of servers.
Virtualization
Virtualization is the abstraction into a service of every physical and logical entity in a
grid. Virtualization is important because it enables grid components (such as storage,
processors, databases, application servers, and applications) to integrate tightly without
creating rigidity in the system.
For example, vendors such as Sun, HP, IBM, and Topspin are starting to deliver
hardware virtualization and provisioning technologies. These technologies allow you to
dynamically group and network a set of servers and storage components. You can also
dynamically move servers and storage components from one group to another. Some of
these technologies also allow dynamic loading and starting of the OS and applications
on these servers.
Software Trends
Linux runs very well on small computers (one to four CPUs) and provides the best price
for performance, making it ideal for a grid environment.
Linux continues to grow faster than any other OS. The economic advantage of blades
over SMP will cause blades to dominate in grid environments. Because Linux already
works well for blades, this will accelerate Linux growth. Because Linux has a price
advantage, which becomes even more important as the number of blades grows, Linux
adoption will further accelerate. Clusters of standard, low-cost blades naturally go well
with Linux, the standard inexpensive OS.
Hardware innovations can only be useful when the software running on them can
leverage those innovations. Software has started to leverage these hardware
innovations. One issue with software today is that it is designed to use the resources it is
provisioned, but it is not designed to give up resources it no longer needs. Oracle
provides software today - both Oracle Database and Oracle Application Server - that
leverage these hardware innovations. Oracle Database and Oracle Application Server
can utilize the resources they are provisioned and can easily relinquish resources they
no longer need.
Grid Momentum
In the technology industry, grid momentum is building. Major vendors, such as Oracle,
are already offering grid-enabling technology, and many others are preparing to. The
grid standards body, GGF, is in place and has the support of major technology vendors.
In IT organizations, grid momentum is also building. Grid technologies promise
increased utilization of existing hardware. Grids let you allocate your resources to meet
the needs of your business, instead of having islands of computing that are idle or
overloaded. As existing hardware needs to be replaced, blades offer the best price for
performance. The economics are so compelling that enterprises have already started
leveraging blade servers for grid computing.
Fundamental Attributes of the Grid
All grids exhibit certain fundamental attributes. Enterprises can begin reaping the
benefits of grid computing by enhancing existing IT infrastructure with these attributes.
Virtualization at Every Layer
Provisioning means allocating resources where they are needed. Once resources are
virtualized, they must be dynamically allocated for various enterprise tasks.
Consolidation and pooling of resources are required for grids to achieve better utilization
of resources, a key contributor to lower costs. By pooling individual disks into storage
arrays and individual servers into blade farms, the grid runtime processes that
dynamically couple service consumers to service providers have more flexibility to
optimize the associations.
Automatic Storage Management pools disks into disk groups that Oracle manages as a
single, logical unit. An administrator can define a particular disk group as the default
disk group for a database, and Oracle automatically allocates storage for and creates or
deletes the files associated with the database object.
Automatic Storage Management also offers the benefits of storage technologies such as
RAID or Logical Volume Managers (LVMs). Oracle can balance I/O from multiple
databases across all of the devices in a disk group, and it implements striping and
mirroring to improve I/O performance and data reliability. In addition, Oracle can
reassign disks from node to node and cluster to cluster, automatically reconfiguring the
group. Because Automatic Storage Management is written to work exclusively with
Oracle, it achieves better performance than generalized storage virtualization solutions.
Portable Clusterware
Oracle Database 10g has enhancements to provide you with better performance and
scalability with upcoming high-speed interconnects such as Infiniband. You can use
Infiniband for all network communications. It offers many benefits:
Infiniband offers a tremendous performance improvement over Gigabit Ethernet
networks. The low latency and high bandwidth of Infiniband make it especially
useful as a cluster interconnect.
You use a single network infrastructure for your communication between
different servers and between servers and storage. This simplifies the cabling
requirements of your data center.
With simplified network infrastructure, you use a single network backplane,
which makes network provisioning easier.
With Oracle Database 10g, you can now use Infiniband for your application
server to database server communication, for server-to-server communication in
a clustered database, and for server to storage communication. This provides
you with all around performance improvement and flexibility in your data
center.
Easy Client Install
The easy client install feature simplifies deployment of applications in a grid. Clients of
the database only need to download or copy a very small subset of Oracle client files
and set an environment variable. These applications - OCI or JDBC applications - can
access a database on your grid. You no longer need to go through the install process on
the database client. This feature is especially useful for deployment of ISV applications.
ISVs can include these Oracle client files in their install process and customers need not
install Oracle clients separately. Additionally, in grid environments where client
machines are dynamically identified and configured, this feature simplifies the
installation and configuration of Oracle client software.
Easy Oracle Database Install
Oracle Database 10g has simplified the installation of the Oracle database. You can
install the Oracle database with a single CD. Oracle Universal Installer (OUI) can also
perform multi-node installs of the clustered Oracle database. During the install, you
identify the hostnames where you would like to install the Oracle database. OUI then
installs the Oracle Database software on all of those nodes. You can also decide to have
either a single shared image of the software or a separate image on each host machine.
Compute Resource Provisioning
The central tenet of grid computing is the ability to dynamically align resources with your
changing priorities. Oracle Database 10g has numerous enhancements and new features
that make it easy for you to align your computing resources with your business needs.
Real Application Clusters (RAC)
Oracle Real Application Clusters (RAC) enables high utilization of a cluster of standard
low-cost modular servers such as blades. You can run a single Oracle database on a
cluster of blade servers. Applications running on RAC can dynamically leverage more
blades provisioned to them. Similarly, these applications can easily relinquish these
blades when they no longer need them. In contrast, conventional databases achieve
remarkably low utilization on commodity components: you must provision for peak
loads and allocate spares, and you cannot add or remove blades without bringing down
the entire system.
RAC, based on a shared disk architecture, can grow and shrink on demand. This is not
possible with databases from other vendors, which are based on a shared nothing
architecture that does not offer this flexibility. With shared nothing, data is partitioned
artificially: when blades are added, all the data must be repartitioned across the new
blades, and when blades are removed, the data must be repartitioned again before they
can be taken away.
Oracle Database 10g offers automatic workload management for services within a RAC
database. RAC automatically load balances connections as they are made across
instances hosting a service. In addition, using Resource Manager, you can specify
policies for resource allocation to services running within a RAC database. To meet
these policies, RAC will automatically provision database instances to these services.
Oracle Database 10g also offers a single-button addition and removal of servers to a
cluster. With the push of a button, you can now add a server to your cluster and
provision this server to the database. Oracle automatically installs all the required
software - Portable Clusterware and Oracle Database software - and starts an
Oracle database instance on it. Similarly, with the push of a button, you can remove a
server.
Resonance
A cluster is a user-defined set of servers that are clustered together using Oracle
Portable Clusterware. You run many databases on the same cluster and define your
service policies for these databases. Resonance will dynamically grow and shrink the
number of servers your individual databases are running on in this cluster to meet your
service-level objectives. This is done automatically without requiring any user
intervention.
Imagine you have a large cluster with many databases in it. If you had to manage it
manually, you would have to constantly monitor the load on each of those databases
and then you would have to manually shut down or bring up additional instances of
those databases. Your shutdown command might take a very long time, as there might
be active sessions on the database instance. Oracle Database 10g does this for you. It
constantly monitors the load for you. When it needs to shut down an instance, it
automatically migrates active sessions to other active instances of the database.
Similarly, when it brings up an additional instance of a database, it automatically
balances the workload across all the instances of this database.
Oracle Scheduler
Oracle Database 10g introduces Oracle Scheduler, which provides you with many
advanced capabilities to schedule and perform business and IT tasks in your grid. You
can provision your workload within a database across time to get more efficient
resource utilization. First, you define your jobs, which can be stored procedures or
external jobs such as C or Java programs. Next, you define your schedule. Then you
assign jobs to schedules. You have the ability to define any arbitrarily complex schedule.
You can also group jobs into job classes to simplify management and prioritization of
jobs. Using Oracle Resource Manager, you can define your resource plans and assign
these resource plans to these job classes. You can also change these resource plans
across time. For example, you may consider the jobs to load a data warehouse to be
critical jobs during non-peak hours but not during peak hours.
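As a sketch, such a job might be defined with the DBMS_SCHEDULER package; the job name, the stored procedure load_warehouse, and the schedule below are hypothetical illustrations:

```sql
BEGIN
   DBMS_SCHEDULER.CREATE_JOB (
      job_name        => 'nightly_load',          -- hypothetical job name
      job_type        => 'STORED_PROCEDURE',
      job_action      => 'load_warehouse',        -- hypothetical stored procedure
      start_date      => SYSTIMESTAMP,
      repeat_interval => 'FREQ=DAILY; BYHOUR=2',  -- run daily at 2 a.m.
      enabled         => TRUE);
END;
/
```

The job can then be assigned to a job class whose resource plan limits how much of the machine it may consume during peak hours.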
Database Resource Manager
Oracle Database 10g features such as Oracle Transportable Tablespaces and Oracle
Streams make it easy for you to share data efficiently between databases, so that the
processing of information can be distributed across different resources.
For certain information - for example, a terabyte database that is infrequently accessed -
it is more efficient to leave the data in place. For this information, you can use Oracle
Database 10g's federated features, such as distributed SQL, gateways, and materialized
views, to access that data on demand.
Ultra Large Database Support
Oracle Database 10g now supports a single database with up to 8 exabytes (8 million
terabytes) of data. This virtually removes any limit on how large your consolidated
database can be. You can also store data in much larger files, thus decreasing the
number of files in large databases. Additionally, the Bigfile Tablespace simplifies the
management of datafiles in large databases, minimizes scalability issues related to
having a large number of datafiles, and simplifies management of storage utilizing such
features as Automatic Storage Management and Oracle Managed Files.
Oracle Transportable Tablespaces
It is not always possible to pool your hardware resources together. For example, you
might have geographically distributed hardware resources that you cannot cluster
efficiently. Or your data center limitations may prevent you from pooling the hardware
together. In these situations, Oracle Transportable Tablespaces offers a very efficient
way of sharing large subsets of data and then sharing processing on this data on
different hardware resources.
Oracle Transportable Tablespaces offer grid users an extremely fast mechanism to move
a subset of data from one Oracle database to another. It allows a set of tablespaces to
be unplugged from a database, moved or copied to another location, and then plugged
into another database. Unplugging or plugging a data file involves only reading or
loading a small amount of metadata. Transportable Tablespaces also supports
simultaneous mounting of read-only tablespaces by two or more databases.
Oracle Database 10g now supports heterogeneous transportable tablespaces. This
feature allows tablespaces to be unplugged, converted with RMAN if need be, and
transported across different platforms, for example, from Solaris or HP-UX to Linux.
For example, consider a financial application at a typical enterprise. It receives very light
usage most of the time, with a couple of inserts or updates every hour. But during
quarter end, it needs considerably more resources for reporting. To accommodate these
increased demands, you could use the transportable tablespace feature to move the
data to a more powerful resource during quarter end, and perform your processing
there.
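A minimal sketch of the mechanics, assuming a tablespace named fin_data and a directory object dpump_dir (both hypothetical names):

```sql
-- On the source database, make the tablespace read only:
ALTER TABLESPACE fin_data READ ONLY;
-- From the OS shell, export only the tablespace metadata with Data Pump:
--   expdp system DIRECTORY=dpump_dir DUMPFILE=fin.dmp TRANSPORT_TABLESPACES=fin_data
-- Copy the datafiles and the dump file to the target host, then import:
--   impdp system DIRECTORY=dpump_dir DUMPFILE=fin.dmp
--         TRANSPORT_DATAFILES='/u01/oradata/fin_data01.dbf'
-- Back on the source, return the tablespace to read write if needed:
ALTER TABLESPACE fin_data READ WRITE;
```

Only a small amount of metadata moves through Data Pump; the bulk of the data travels as ordinary file copies, which is why the mechanism is so fast.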
Oracle Streams
Oracle Database 10g offers a new self-propelled database feature. This feature utilizes
Oracle Transportable Tablespaces and Oracle Streams and offers you an easy way to
share processing across distributed hardware resources. In addition, it offers an efficient
way for migrating your applications to the grid.
With a single command, you can snap a set of tablespaces from one database, ship the
tablespace to another database, reformat the tablespace if the second database is on a
different OS, and plug this tablespace into the second database. During this time, there
might be changes happening at the first database. Oracle Streams would have already
started capturing those changes, which can later be synced with the second database.
All of this is done with a single command. If the second database is on a grid, what you
have just done is migrate your application to a grid with a single command. You can
later migrate all the applications running on the first database to the second database
by simply repointing the connection string to the second database.
Oracle Data Pump
With Oracle Database 10g, Oracle introduces a new data movement facility that greatly
improves performance when getting data into and out of a database. Oracle Data Pump
is a high-speed, parallel infrastructure that enables quick movement of data and
metadata from one database to another. This technology is the basis for Oracle's new
data movement utilities, Data Pump Export and Data Pump Import, which have greatly
enhanced performance over Oracle's original Export and Import.
When moving a transportable tablespace, Data Pump Export and Import are used to
handle the extraction and recreation of the metadata for that tablespace. Data Pump is
a flexible and fast way to unload entire databases or subsets of databases, and reload
them on the target platform.
Distributed SQL, Gateways, and Distributed Transactions
The dynamic nature of the grid imposes stringent operational requirements on your grid
infrastructure. You would want the grid infrastructure to be self-reliant - it should be
able to tolerate system failures and adapt to changing business needs. You would like to
be able to control your grid environment in a more holistic manner rather than on a
component-by-component basis. Lastly, security is paramount in grid environments -
you would not like any unwanted exposure of your data and resources.
Self-Reliant Database
A self-reliant database must be able to tolerate the failures of individual components
and to provide high availability in all circumstances.
High Availability
Oracle Database 10g brings the highest levels of reliability and availability to the grid.
You get the same levels of reliability and availability on standard, low-cost modular
hardware - servers and storage. Acknowledging that failures will happen, Oracle
Database 10g provides near instantaneous recovery from system faults, meeting the
most stringent service level agreements. Automatic Storage Management provides the
reliability and availability on low-cost standard storage. RAC provides the reliability and
availability on low-cost standard servers.
Oracle Database 10g provides robust features to protect from data errors and disasters.
The new flashback database feature provides the ability to recover a database to a
specific point in time, to undo human error. The recovery time is proportional to how
far back the database needs to go. With this flashback feature, database
administrators can now use low-cost standard disks for maintaining their backups.
Oracle Database 10g also includes tools to minimize planned downtime, critical for any
interactions in a 24x7 environment. The new rolling upgrade feature enables online
application of patches to the database software. You don't need to bring down the
entire database to apply a patch. You can apply patches to the clustered database, one
instance at a time, thus keeping the database online while applying the patch.
Self-Managing
With the new self-managing features, Oracle Database 10g has taken a giant leap
towards making the Oracle database self-reliant. Oracle Database 10g includes an
intelligent database monitor that records data regarding all aspects of database
performance. Using this information, Oracle Database Automatic Memory Management
dynamically allocates memory to different components of the Oracle database.
Automatic health management automatically generates alerts regarding various aspects
of the database that simplify database monitoring for DBAs. Automatic Storage
Management, covered previously in this paper, provides automatic rebalancing and
provisioning of storage.
Oracle Enterprise Manager Grid Control
Even with the self-managing Oracle Database 10g, there are aspects of the enterprise
grid that administrators will want to manage and control. Oracle Enterprise Manager
Grid Control provides a single tool that can monitor and manage not only every Oracle
software element - Oracle Application Server 10g and Oracle Database 10g - in your grid
but also web applications via APM (Application Performance Management), hosts,
storage devices, and server load balancers. It is also extensible via an SDK so customers
can use it to monitor additional components that are not supported out-of-box.
Grid Control provides a simplified, centralized management framework for managing
enterprise resources and analyzing a grid's performance. With Grid Control, grid
administrators can manage the complete grid environment through a web browser
throughout the whole system's software lifecycle, front-to-back, from any location on
the network.
With Grid Control, administrators can launch and execute any number of integrated
Oracle database features such as Data Pump, Resource Manager, Scheduler,
Transportable Tablespaces, etc. Administrators can also monitor, diagnose, modify, and
tune multiple databases throughout the grid.
Grid Control views the availability and performance of the grid infrastructure as a
unified whole rather than as isolated storage units, databases, and application servers.
IT staff can group hardware nodes, databases, and application servers into single logical
entities and manage a group of targets as one unit. Administrators can also schedule
tasks on multiple systems at varying time intervals, share tasks with other
administrators, and group related services together to facilitate administration.
By executing jobs, enforcing standard policies, monitoring performance and automating
many other tasks across a group of targets instead of on many systems individually, Grid
Control enables IT staff to scale with a growing grid. Because of this feature, the
existence of many small computers in a grid infrastructure does not increase
management complexity.
Managing Security in the Grid
The dynamic nature of the grid makes security extremely important. Enterprises need to
ensure that their data is secure. Exactly the right set of users must have access to the
right set of data. At the same time, they need an easy way to manage security
throughout their enterprise. Oracle Database 10g makes it easy for enterprises to
manage their security needs in the grid.
Enterprise User Security
Enterprise User Security centralizes the management of user credentials and privileges
in a directory. This avoids the need to create the same user in multiple databases across
a grid. A directory-based user can authenticate and access all the databases that are
within an enterprise domain based on the credentials and privileges specified in the
directory.
With Oracle Database 10g, grid users have the ability to store an SSL Certificate in a
smart card, for roaming access to the grid. Oracle Database 10g also comes with Oracle
Certificate Authority that simplifies provisioning of certificates to grid users.
Virtual Private Database (VPD)
Oracle product directions are aligned with the grid. Oracle Database 10g is the first
database designed for the grid. Oracle already supports more grid computing
technology than any of its competitors, as described in the previous sections. Your
investments in Oracle are well leveraged - you can incrementally adopt additional grid
computing technology as Oracle expands its technology stack.
Grid Standards Support
Oracle is committed to supporting industry standards. Oracle is working with the Global
Grid Forum to help define grid standards. Just as Oracle has supported other standards,
such as J2EE, Web Services, XQuery, and SQL, in its products and helped to advance
them, Oracle intends to fully support grid standards.
Overview of PL/SQL
Advantages of PL/SQL
PL/SQL is a completely portable, high-performance transaction processing language that
offers the following advantages:
Support for SQL
Support for object-oriented programming
Better performance
Higher productivity
Full portability
Tight integration with Oracle
Tight security
PL/SQL also supports dynamic SQL, a programming technique that makes your
applications more flexible and versatile. Your programs can build and process SQL data
definition, data control, and session control statements at run time, without knowing
details such as table names and WHERE clauses in advance.
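For example, a block can build and run statements whose target is not known until run time using EXECUTE IMMEDIATE; the table name below is a hypothetical illustration:

```sql
DECLARE
   tab_name  VARCHAR2(30) := 'employees';  -- hypothetical name supplied at run time
   row_count NUMBER;
BEGIN
   -- Build and run a query against a table chosen at run time
   EXECUTE IMMEDIATE 'SELECT COUNT(*) FROM ' || tab_name INTO row_count;
   -- DDL statements can also be issued dynamically
   EXECUTE IMMEDIATE 'TRUNCATE TABLE ' || tab_name || '_staging';
END;
```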
Better Performance
Without PL/SQL, Oracle must process SQL statements one at a time. Programs that issue
many SQL statements require multiple calls to the database, resulting in significant
network and performance overhead.
With PL/SQL, an entire block of statements can be sent to Oracle at one time. This can
drastically reduce network traffic between the database and an application. As Figure 1-
1 shows, you can use PL/SQL blocks and subprograms to group SQL statements before
sending them to the database for execution. PL/SQL even has language features to
further speed up SQL statements that are issued inside a loop.
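One such feature is bulk binding with FORALL, which sends a whole collection of values to the SQL engine in a single operation instead of one row per loop iteration. A minimal sketch, assuming the emp table with empno and sal columns used elsewhere in this chapter:

```sql
DECLARE
   TYPE NumList IS TABLE OF NUMBER;
   ids NumList := NumList(7369, 7499, 7521);  -- employee numbers to update
BEGIN
   -- One context switch covers all three updates, not one per row
   FORALL i IN ids.FIRST .. ids.LAST
      UPDATE emp SET sal = sal * 1.1 WHERE empno = ids(i);
END;
```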
PL/SQL stored procedures are compiled once and stored in executable form, so
procedure calls are efficient. Because stored procedures execute in the database server,
a single call over the network can start a large job. This division of work reduces network
traffic and improves response times. Stored procedures are cached and shared among
users, which lowers memory requirements and invocation overhead.
Higher Productivity
PL/SQL extends tools such as Oracle Forms and Oracle Reports. With PL/SQL in these
tools, you can use familiar language constructs to build applications. For example, you
can use an entire PL/SQL block in an Oracle Forms trigger, instead of multiple trigger
steps, macros, or user exits.
PL/SQL is the same in all environments. Once you learn PL/SQL with one Oracle tool, you
can transfer your knowledge to other tools.
Full Portability
Applications written in PL/SQL can run on any operating system and platform where the
Oracle database runs. With PL/SQL, you can write portable program libraries and reuse
them in different environments.
Tight Security
PL/SQL stored procedures move application code from the client to the server, where
you can protect it from tampering, hide the internal details, and restrict who has access.
For example, you can grant users access to a procedure that updates a table, but not
grant them access to the table itself or to the text of the UPDATE statement.
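That arrangement might be set up as follows; raise_salary and app_user are hypothetical names:

```sql
-- The procedure encapsulates the UPDATE; its text stays on the server
CREATE OR REPLACE PROCEDURE raise_salary (emp_id NUMBER, pct NUMBER) IS
BEGIN
   UPDATE emp SET sal = sal * (1 + pct / 100) WHERE empno = emp_id;
END;
/
-- app_user can run the procedure but holds no direct privileges on emp
GRANT EXECUTE ON raise_salary TO app_user;
```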
Triggers written in PL/SQL can control or record changes to data, making sure that all
changes obey your business rules.
Support for Object-Oriented Programming
Object types are an ideal object-oriented modeling tool, which you can use to reduce
the cost and time required to build complex applications. Besides allowing you to create
software components that are modular, maintainable, and reusable, object types allow
different teams of programmers to develop software components concurrently.
By encapsulating operations with data, object types let you move data-maintenance
code out of SQL scripts and PL/SQL blocks into methods. Also, object types hide
implementation details, so that you can change the details without affecting client
programs.
In addition, object types allow for realistic data modeling. Complex real-world entities
and relationships map directly into object types. This direct mapping helps your
programs better reflect the world they are trying to simulate.
As Figure 1-2 shows, a PL/SQL block has three parts: a declarative part, an executable
part, and an exception-handling part that deals with error conditions. Only the
executable part is required.
First comes the declarative part, where you define types, variables, and similar items.
These items are manipulated in the executable part. Exceptions raised during execution
can be dealt with in the exception-handling part.
Figure 1-2 Block Structure
You can nest blocks in the executable and exception-handling parts of a PL/SQL block or
subprogram but not in the declarative part. You can define local subprograms in the
declarative part of any block. You can call local subprograms only from the block in
which they are defined.
Variables and Constants
PL/SQL lets you declare constants and variables, then use them in SQL and procedural
statements anywhere an expression can be used. You must declare a constant or
variable before referencing it in any other statements.
Declaring Variables
Variables can have any SQL datatype, such as CHAR, DATE, or NUMBER, or a PL/SQL-only
datatype, such as BOOLEAN or PLS_INTEGER. For example, assume that you want to declare
a variable named part_no to hold 4-digit numbers and a variable named in_stock to hold
the Boolean value TRUE or FALSE. You declare these variables as follows:
part_no NUMBER(4);
in_stock BOOLEAN;
You can also declare nested tables, variable-size arrays (varrays for short), and records
using the TABLE, VARRAY, and RECORD composite datatypes.
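As a sketch, such declarations might look like this:

```sql
DECLARE
   TYPE NumTab   IS TABLE OF NUMBER;             -- nested table type
   TYPE NameList IS VARRAY(10) OF VARCHAR2(20);  -- variable-size array type
   TYPE EmpRec   IS RECORD (empno NUMBER(4), ename VARCHAR2(10));  -- record type
   nums    NumTab   := NumTab(10, 20, 30);
   names   NameList := NameList('Smith', 'Jones');
   one_emp EmpRec;
BEGIN
   NULL;  -- declarations only; nothing to execute here
END;
```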
Declaring Constants
Declaring a constant is like declaring a variable except that you must add the keyword
CONSTANT and immediately assign a value to the constant. No further assignments to the
constant are allowed. The following example declares a constant:
credit_limit CONSTANT NUMBER := 5000.00;
Processing Queries with PL/SQL
Processing a SQL query with PL/SQL is like processing files with other languages. For
example, a Perl program opens a file, reads the file contents, processes each line, then
closes the file. In the same way, a PL/SQL program issues a query and processes the
rows from the result set:
FOR someone IN (SELECT * FROM employees)
LOOP
   -- process the current row, for example:
   DBMS_OUTPUT.PUT_LINE('First name = ' || someone.first_name);
END LOOP;
%TYPE
The %TYPE attribute provides the datatype of a variable or database column. This is
particularly useful when declaring variables that will hold database values. For example,
assume there is a column named title in a table named books. To declare a variable
named my_title that has the same datatype as column title, use dot notation and the
%TYPE attribute, as follows:
my_title books.title%TYPE;
Declaring my_title with %TYPE has two advantages. First, you need not know the exact
datatype of title. Second, if you change the database definition of title (make it a longer
character string for example), the datatype of my_title changes accordingly at run time.
%ROWTYPE
In PL/SQL, records are used to group data. A record consists of a number of related
fields in which data values can be stored. The %ROWTYPE attribute provides a record type
that represents a row in a table. The record can store an entire row of data selected
from the table or fetched from a cursor or cursor variable.
Columns in a row and corresponding fields in a record have the same names and
datatypes. In the example below, you declare a record named dept_rec. Its fields have
the same names and datatypes as the columns in the dept table.
DECLARE
dept_rec dept%ROWTYPE; -- declare record variable
You use dot notation to reference fields, as the following example shows:
my_deptno := dept_rec.deptno;
If you declare a cursor that retrieves the last name, salary, hire date, and job title of an
employee, you can use %ROWTYPE to declare a record that stores the same information,
as follows:
DECLARE
CURSOR c1 IS
SELECT ename, sal, hiredate, job FROM emp;
emp_rec c1%ROWTYPE; -- declare record variable that represents
-- a row fetched from the emp table
When you fetch a row from c1 into emp_rec, the value in the ename column of the emp
table is assigned to the ename field of emp_rec, the value in the sal column is assigned
to the sal field, and so on.
Control Structures
Control structures are the most important PL/SQL extension to SQL. Not only does
PL/SQL let you manipulate Oracle data, it lets you process the data using conditional,
iterative, and sequential flow-of-control statements such as IF-THEN-ELSE, CASE, FOR-LOOP,
WHILE-LOOP, EXIT-WHEN, and GOTO.
Conditional Control
Often, it is necessary to take alternative actions depending on circumstances. The IF-
THEN-ELSE statement lets you execute a sequence of statements conditionally. The IF
clause checks a condition; the THEN clause defines what to do if the condition is true; the
ELSE clause defines what to do if the condition is false or null.
Consider the program below, which processes a bank transaction. Before allowing you
to withdraw $500 from account 3, it makes sure the account has sufficient funds to
cover the withdrawal. If the funds are available, the program debits the account.
Otherwise, the program inserts a record into an audit table.
-- available online in file 'examp2'
DECLARE
   acct_balance NUMBER(11,2);
   acct         CONSTANT NUMBER(4) := 3;
   debit_amt    CONSTANT NUMBER(5,2) := 500.00;
BEGIN
   SELECT bal INTO acct_balance FROM accts
      WHERE acctid = acct;
   IF acct_balance >= debit_amt THEN
      UPDATE accts SET bal = bal - debit_amt
         WHERE acctid = acct;
   ELSE
      INSERT INTO temp VALUES
         (acct, acct_balance, 'Insufficient funds');
   END IF;
END;
Iterative Control
LOOP statements let you execute a sequence of statements multiple times. You place the
keyword LOOP before the first statement in the sequence and the keywords END LOOP
after the last statement in the sequence. The following example shows the simplest kind
of loop, which repeats a sequence of statements continually:
LOOP
-- sequence of statements
END LOOP;
The FOR-LOOP statement lets you specify a range of integers, then execute a sequence of
statements once for each integer in the range. For example, the following loop inserts
500 numbers and their square roots into a database table:
FOR num IN 1..500 LOOP
INSERT INTO roots VALUES (num, SQRT(num));
END LOOP;
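The WHILE-LOOP and EXIT-WHEN forms mentioned earlier can be sketched as follows:

```sql
DECLARE
   total NUMBER := 0;
   n     NUMBER := 1;
BEGIN
   -- WHILE-LOOP: the condition is tested before each iteration
   WHILE total < 100 LOOP
      total := total + n;
      n := n + 1;
   END LOOP;
   -- EXIT-WHEN: leave a basic loop when the condition becomes true
   LOOP
      total := total - 25;
      EXIT WHEN total <= 0;
   END LOOP;
END;
```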
Sequential Control
The GOTO statement lets you branch to a label unconditionally. The label, an undeclared
identifier enclosed by double angle brackets, must precede an executable statement or
a PL/SQL block. When executed, the GOTO statement transfers control to the labeled
statement or block, as the following example shows:
IF rating > 90 THEN
GOTO calc_raise; -- branch to label
END IF;
...
<<calc_raise>>
IF job_title = 'SALESMAN' THEN -- control resumes here
amount := commission * 0.25;
ELSE
amount := salary * 0.10;
END IF;
Subprograms
PL/SQL has two types of subprograms called procedures and functions, which can take
parameters and be invoked (called). As the following example shows, a subprogram is
like a miniature program, beginning with a header followed by an optional declarative
part, an executable part, and an optional exception-handling part:
PROCEDURE award_bonus (emp_id NUMBER) IS
bonus REAL;
comm_missing EXCEPTION;
BEGIN -- executable part starts here
SELECT comm * 0.15 INTO bonus FROM emp WHERE empno = emp_id;
IF bonus IS NULL THEN
RAISE comm_missing;
ELSE
UPDATE payroll SET pay = pay + bonus WHERE empno = emp_id;
END IF;
EXCEPTION -- exception-handling part starts here
WHEN comm_missing THEN
...
END award_bonus;
When called, this procedure accepts an employee number. It uses the number to select
the employee's commission from a database table and, at the same time, compute a
15% bonus. Then, it checks the bonus amount. If the bonus is null, an exception is
raised; otherwise, the employee's payroll record is updated.
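Assuming the procedure above has been stored in the database, it might be invoked from an anonymous block; the employee number here is hypothetical:

```sql
BEGIN
   award_bonus(7499);  -- hypothetical employee number
END;
```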
Packages
PL/SQL lets you bundle logically related types, variables, cursors, and subprograms into
a package, a database object that is a step above regular stored procedures. The
package defines a simple, clear interface to a set of related procedures and types.
Packages usually have two parts: a specification and a body. The specification defines
the application programming interface; it declares the types, constants, variables,
exceptions, cursors, and subprograms. The body fills in the SQL queries for cursors and
the code for subprograms.
The following example packages two employment procedures:
CREATE PACKAGE emp_actions AS -- package specification
PROCEDURE hire_employee (empno NUMBER, ename CHAR, ...);
PROCEDURE fire_employee (emp_id NUMBER);
END emp_actions;
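A matching body for this specification might be sketched as follows; the elided parameters mirror the ellipses in the specification above:

```sql
CREATE PACKAGE BODY emp_actions AS  -- package body
   PROCEDURE hire_employee (empno NUMBER, ename CHAR, ...) IS
   BEGIN
      INSERT INTO emp VALUES (empno, ename, ...);
   END hire_employee;

   PROCEDURE fire_employee (emp_id NUMBER) IS
   BEGIN
      DELETE FROM emp WHERE empno = emp_id;
   END fire_employee;
END emp_actions;
```

Callers see only the specification; the body can be changed or recompiled without invalidating programs that call the package.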
Collections
PL/SQL collection types let you declare high-level datatypes similar to arrays, sets, and
hash tables found in other languages. In PL/SQL, array types are known as varrays (short
for variable-size arrays), set types are known as nested tables, and hash table types are
known as associative arrays. Each kind of collection is an ordered group of elements, all
of the same type. Each element has a unique subscript that determines its position in
the collection.
To reference an element, use subscript notation with parentheses. For example, the
following call references the fifth element in the nested table (of type Staff) returned by
function new_hires:
DECLARE
TYPE Staff IS TABLE OF Employee;
staffer Employee;
FUNCTION new_hires (hiredate DATE) RETURN Staff IS
BEGIN ... END;
BEGIN
staffer := new_hires('10-NOV-98')(5);
END;
Collections can be passed as parameters, so that subprograms can process arbitrary
numbers of elements. You can use collections to move data into and out of database
tables using high-performance language features known as bulk SQL.
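As a sketch of bulk SQL, the following block fetches an entire column into a nested table in a single operation with BULK COLLECT (the employees table from Oracle's sample HR schema is assumed):

DECLARE
   TYPE NumList IS TABLE OF employees.employee_id%TYPE;
   ids NumList;
BEGIN
   SELECT employee_id BULK COLLECT INTO ids FROM employees;
   dbms_output.put_line('Fetched ' || ids.COUNT || ' ids');
END;
/

Because the whole result set is moved in one round trip between the SQL and PL/SQL engines, this is much faster than fetching the rows one at a time in a loop.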
Records
Records are composite data structures whose fields can have different datatypes. You
can use records to hold related items and pass them to subprograms with a single
parameter.
You can use the %ROWTYPE attribute to declare a record that represents a row in a table
or a row from a query result set, without specifying the names and types for the fields.
Consider the following example:
DECLARE
TYPE TimeRec IS RECORD (hours SMALLINT, minutes SMALLINT);
TYPE MeetingTyp IS RECORD (
date_held DATE,
duration TimeRec, -- nested record
location VARCHAR2(20),
purpose VARCHAR2(50));
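Continuing the declaration above, the following sketch shows how record fields, including fields of a nested record, are referenced with dot notation (the assigned values are illustrative only):

   meeting MeetingTyp;
BEGIN
   meeting.date_held := SYSDATE;
   meeting.duration.hours := 1; -- nested record field
   meeting.duration.minutes := 30;
   meeting.location := 'Room 101';
   meeting.purpose := 'Project status';
END;
/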
Object Types
PL/SQL supports object-oriented programming through object types. An object type
encapsulates a data structure along with the functions and procedures needed to
manipulate the data. The variables that form the data structure are known as attributes.
The functions and procedures that manipulate the attributes are known as methods.
Object types reduce complexity by breaking down a large system into logical entities.
This lets you create software components that are modular, maintainable, and reusable.
Object-type definitions, and the code for the methods, are stored in the database.
Instances of these object types can be stored in tables or used as variables inside PL/SQL
code.
CREATE TYPE Bank_Account AS OBJECT (
acct_number INTEGER(5),
balance REAL,
status VARCHAR2(10),
MEMBER PROCEDURE open (amount IN REAL),
MEMBER PROCEDURE verify_acct (num IN INTEGER),
MEMBER PROCEDURE close (num IN INTEGER, amount OUT REAL),
MEMBER PROCEDURE deposit (num IN INTEGER, amount IN REAL),
MEMBER PROCEDURE withdraw (num IN INTEGER, amount IN REAL),
MEMBER FUNCTION curr_bal (num IN INTEGER) RETURN REAL
);
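Once the type (and its type body, created separately with CREATE TYPE BODY) exists, you can declare and initialize an instance with the default constructor, which takes one parameter per attribute. A minimal sketch, with sample values chosen for illustration:

DECLARE
   acct Bank_Account;
BEGIN
   acct := Bank_Account(12345, 500.00, 'open'); -- default constructor
   dbms_output.put_line('Balance: ' || acct.balance);
END;
/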
Error Handling
PL/SQL makes it easy to detect and process error conditions known as exceptions. When
an error occurs, an exception is raised: normal execution stops and control transfers to
special exception-handling code, which comes at the end of any PL/SQL block. Each
different exception is processed by a particular exception handler.
Predefined exceptions are raised automatically for certain common error conditions
involving variables or database operations. For example, if you try to divide a number by
zero, PL/SQL raises the predefined exception ZERO_DIVIDE automatically.
You can declare exceptions of your own, for conditions that you decide are errors, or to
correspond to database errors that normally result in ORA- error messages. When you
detect a user-defined error condition, you execute a RAISE statement. The following
example computes the bonus earned by a salesperson. The bonus is based on salary and
commission. If the commission is null, you raise the exception comm_missing.
DECLARE
comm_missing EXCEPTION; -- declare exception
BEGIN
IF commission IS NULL THEN
RAISE comm_missing; -- raise exception
END IF;
bonus := (salary * 0.10) + (commission * 0.15);
EXCEPTION
WHEN comm_missing THEN ... -- process the exception
END;
PL/SQL Architecture
The PL/SQL compilation and run-time system is an engine that compiles and executes
PL/SQL blocks and subprograms. The engine can be installed in an Oracle server or in an
application development tool such as Oracle Forms or Oracle Reports.
In either environment, the PL/SQL engine accepts as input any valid PL/SQL block or
subprogram. Figure 1-3 shows the PL/SQL engine processing an anonymous block. The
PL/SQL engine executes procedural statements but sends SQL statements to the SQL
engine in the Oracle database.
Figure 1-3 PL/SQL Engine
Anonymous Blocks
Anonymous PL/SQL blocks can be submitted to interactive tools such as SQL*Plus and
Enterprise Manager, or embedded in an Oracle Precompiler or OCI program. At run
time, the program sends these blocks to the Oracle database, where they are compiled
and executed.
Stored Subprograms
Subprograms can be compiled and stored in an Oracle database, ready to be executed.
Once compiled, a subprogram becomes a schema object known as a stored procedure or stored function,
which can be referenced by any number of applications connected to that database.
Stored subprograms defined within a package are known as packaged subprograms.
Those defined independently are called standalone subprograms.
Subprograms nested inside other subprograms or within a PL/SQL block are known as
local subprograms, which cannot be referenced by other applications and exist only
inside the enclosing block.
Stored subprograms are the key to modular, reusable PL/SQL code. Wherever you might
use a JAR file in Java, a module in Perl, a shared library in C++, or a DLL in Visual Basic,
you should use PL/SQL stored procedures, stored functions, and packages.
You can call stored subprograms from a database trigger, another stored subprogram,
an Oracle Precompiler or OCI application, or interactively from SQL*Plus or Enterprise
Manager. You can also configure a web server so that the HTML for a web page is
generated by a stored subprogram, making it simple to provide a web interface for data
entry and report generation.
For example, you might call the standalone procedure create_dept from SQL*Plus as
follows:
SQL> CALL create_dept('FINANCE', 'NEW YORK');
Subprograms are stored in a compact compiled form. When called, they are loaded and
processed immediately. Subprograms take advantage of shared memory, so that only
one copy of a subprogram is loaded into memory for execution by multiple users.
Database Triggers
A database trigger is a stored subprogram associated with a database table, view, or
event. The trigger can be called once, when some event occurs, or many times, once for
each row affected by an INSERT, UPDATE, or DELETE statement. The trigger can be
called after the event, to record it or take some followup action. Or, the trigger can be
called before the event to prevent erroneous operations or fix new data so that it
conforms to business rules. For example, the following table-level trigger fires whenever
salaries in the emp table are updated:
CREATE TRIGGER audit_sal
AFTER UPDATE OF sal ON emp
FOR EACH ROW
BEGIN
INSERT INTO emp_audit VALUES ...
END;
The executable part of a trigger can contain procedural statements as well as SQL data
manipulation statements. Besides table-level triggers, there are instead-of triggers for
views and system-event triggers for schemas.
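An instead-of trigger intercepts a DML statement against a view and performs equivalent operations on the underlying tables. The sketch below is illustrative only; the view emp_dept_view and the column names are hypothetical:

CREATE OR REPLACE TRIGGER emp_dept_insert
INSTEAD OF INSERT ON emp_dept_view -- hypothetical join view
FOR EACH ROW
BEGIN
   INSERT INTO emp (empno, ename) VALUES (:NEW.empno, :NEW.ename);
END;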
In Oracle Tools
An application development tool that contains the PL/SQL engine can process PL/SQL
blocks and subprograms. The tool passes the blocks to its local PL/SQL engine. The
engine executes all procedural statements inside the application and sends only SQL
statements to the database. Most of the work is done inside the application, not on the
database server. If the block contains no SQL statements, the application executes the
entire block. This is useful if your application can benefit from conditional and iterative
control.
Frequently, Oracle Forms applications use SQL statements to test the value of field
entries or to do simple computations. By using PL/SQL instead, you can avoid calls to the
database. You can also use PL/SQL functions to manipulate field entries.
Declarations
Your program stores values in variables and constants. As the program executes, the
values of variables can change, but the values of constants cannot.
You can declare variables and constants in the declarative part of any PL/SQL block,
subprogram, or package. Declarations allocate storage space for a value, specify its
datatype, and name the storage location so that you can reference it.
A couple of examples follow:
DECLARE
birthday DATE;
emp_count SMALLINT := 0;
The first declaration names a variable of type DATE. The second declaration names a
variable of type SMALLINT and uses the assignment operator to assign an initial value of
zero to the variable.
The next examples show that the expression following the assignment operator can be
arbitrarily complex and can refer to previously initialized variables:
DECLARE
pi REAL := 3.14159;
radius REAL := 1;
area REAL := pi * radius**2;
BEGIN
NULL;
END;
/
By default, variables are initialized to NULL, so it is redundant to include ":= NULL" in a
variable declaration.
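The following short sketch illustrates the default: counter has no initializer, so it starts out NULL and the IS NULL test succeeds:

DECLARE
   counter INTEGER; -- initialized to NULL by default
BEGIN
   IF counter IS NULL THEN
      dbms_output.put_line('counter starts out NULL');
   END IF;
END;
/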
To declare a constant, put the keyword CONSTANT before the type specifier:
DECLARE
credit_limit CONSTANT REAL := 5000.00;
max_days_in_year CONSTANT INTEGER := 366;
urban_legend CONSTANT BOOLEAN := FALSE;
BEGIN
NULL;
END;
/
The first declaration names a constant of type REAL and assigns an unchangeable value of
5000 to the constant. A constant must be initialized in its declaration. Otherwise, you
get a compilation error.
Using DEFAULT
You can use the keyword DEFAULT instead of the assignment operator to initialize
variables. For example, the declaration
blood_type CHAR := 'O';
can be rewritten as follows:
blood_type CHAR DEFAULT 'O';
Use DEFAULT for variables that have a typical value. Use the assignment operator for
variables (such as counters and accumulators) that have no typical value.
Using the %TYPE Attribute
The %TYPE attribute is particularly useful when declaring variables that refer to database
columns. You can reference a table and column, or you can reference an owner, table,
and column, as in
DECLARE
-- If the length of the column ever changes, this code
-- will use the new length automatically.
the_trigger user_triggers.trigger_name%TYPE;
BEGIN
NULL;
END;
/
When you use table_name.column_name%TYPE to declare a variable, you do not need to
know the actual datatype or attributes such as precision, scale, and length. If the
database definition of the column changes, the datatype of the variable changes
accordingly at run time.
%TYPE variables do not inherit the NOT NULL column constraint. In the next example, even
though the database column employee_id is defined as NOT NULL, you can assign a null to
the variable my_empno:
DECLARE
my_empno employees.employee_id%TYPE;
BEGIN
my_empno := NULL; -- this works
END;
/
Using the %ROWTYPE Attribute
The %ROWTYPE attribute provides a record type that represents a row in a table (or view).
The record can store an entire row of data selected from the table, or fetched from a
cursor or strongly typed cursor variable:
DECLARE
-- %ROWTYPE can include all the columns in a table...
emp_rec employees%ROWTYPE;
-- ...or a subset of the columns, based on a cursor.
CURSOR c1 IS
SELECT department_id, department_name FROM departments;
dept_rec c1%ROWTYPE;
-- Could even make a %ROWTYPE with columns from multiple tables.
CURSOR c2 IS
SELECT employee_id, email, employees.manager_id, location_id
FROM employees, departments
WHERE employees.department_id = departments.department_id;
join_rec c2%ROWTYPE;
BEGIN
-- We know EMP_REC can hold a row from the EMPLOYEES table.
SELECT * INTO emp_rec FROM employees WHERE ROWNUM < 2;
-- We can refer to the fields of EMP_REC using column names
-- from the EMPLOYEES table.
IF emp_rec.department_id = 20 AND emp_rec.last_name = 'JOHNSON' THEN
emp_rec.salary := emp_rec.salary * 1.15;
END IF;
END;
/
Columns in a row and corresponding fields in a record have the same names and
datatypes. However, fields in a %ROWTYPE record do not inherit the NOT NULL column
constraint.
Aggregate Assignment
Although a %ROWTYPE declaration cannot include an initialization clause, there are ways
to assign values to all fields in a record at once. You can assign one record to another if
their declarations refer to the same table or cursor. For example, the following
assignment is allowed:
DECLARE
dept_rec1 departments%ROWTYPE;
dept_rec2 departments%ROWTYPE;
CURSOR c1 IS SELECT department_id, location_id FROM departments;
dept_rec3 c1%ROWTYPE;
BEGIN
   dept_rec1 := dept_rec2; -- allowed; both records are based on the same table
   -- dept_rec2 := dept_rec3; -- not allowed; one record is based on a table,
   --                            the other on a cursor
END;
/
Using Aliases
Select-list items fetched from a cursor associated with %ROWTYPE must have simple
names or, if they are expressions, must have aliases. The following example uses an alias
called complete_name to represent the concatenation of two columns:
BEGIN
-- We assign an alias (COMPLETE_NAME) to the expression value, because
-- it has no column name.
FOR item IN
(
SELECT first_name || ' ' || last_name complete_name
FROM employees WHERE ROWNUM < 11
)
LOOP
-- Now we can refer to the field in the record using this alias.
dbms_output.put_line('Employee name: ' || item.complete_name);
END LOOP;
END;
/
Restrictions on Declarations
PL/SQL does not allow forward references. You must declare a variable or constant
before referencing it in other statements, including other declarative statements.
PL/SQL does allow the forward declaration of subprograms.
Some languages allow you to declare a list of variables that have the same datatype.
PL/SQL does not allow this. You must declare each variable separately:
DECLARE
-- Multiple declarations not allowed.
-- i, j, k, l SMALLINT;
-- Instead, declare each separately.
i SMALLINT;
j SMALLINT;
-- To save space, you can declare more than one on a line.
k SMALLINT; l SMALLINT;
BEGIN
NULL;
END;
/
Using the IF-THEN-ELSIF Statement
The most general form of the IF statement uses ELSIF clauses to choose among several alternatives:
IF condition1 THEN
sequence_of_statements1
ELSIF condition2 THEN
sequence_of_statements2
ELSE
sequence_of_statements3
END IF;
If the first condition is false or null, the ELSIF clause tests another condition. An IF
statement can have any number of ELSIF clauses; the final ELSE clause is optional.
Conditions are evaluated one by one from top to bottom. If any condition is true, its
associated sequence of statements is executed and control passes to the next
statement. If all conditions are false or null, the sequence in the ELSE clause is executed.
Consider the following example:
BEGIN
IF sales > 50000 THEN
bonus := 1500;
ELSIF sales > 35000 THEN
bonus := 500;
ELSE
bonus := 100;
END IF;
INSERT INTO payroll VALUES (emp_id, bonus, ...);
END;
If the value of sales is larger than 50000, the first and second conditions are true.
Nevertheless, bonus is assigned the proper value of 1500 because the second condition is
never tested. When the first condition is true, its associated statement is executed and
control passes to the INSERT statement.
Using the CASE Statement
Like the IF statement, the CASE statement selects one sequence of statements to
execute. However, to select the sequence, the CASE statement uses a selector rather
than multiple Boolean expressions. (Recall from Chapter 2 that a selector is an
expression whose value is used to select one of several alternatives.) To compare the IF
and CASE statements, consider the following code that outputs descriptions of school
grades:
IF grade = 'A' THEN
dbms_output.put_line('Excellent');
ELSIF grade = 'B' THEN
dbms_output.put_line('Very Good');
ELSIF grade = 'C' THEN
dbms_output.put_line('Good');
ELSIF grade = 'D' THEN
dbms_output.put_line('Fair');
ELSIF grade = 'F' THEN
dbms_output.put_line('Poor');
ELSE
dbms_output.put_line('No such grade');
END IF;
Notice the five Boolean expressions. In each instance, we test whether the same
variable, grade, is equal to one of five values: 'A', 'B', 'C', 'D', or 'F'. Let us rewrite the
preceding code using the CASE statement, as follows:
CASE grade
WHEN 'A' THEN dbms_output.put_line('Excellent');
WHEN 'B' THEN dbms_output.put_line('Very Good');
WHEN 'C' THEN dbms_output.put_line('Good');
WHEN 'D' THEN dbms_output.put_line('Fair');
WHEN 'F' THEN dbms_output.put_line('Poor');
ELSE dbms_output.put_line('No such grade');
END CASE;
The CASE statement is more readable and more efficient. When possible, rewrite lengthy
IF-THEN-ELSIF statements as CASE statements.
The CASE statement begins with the keyword CASE. The keyword is followed by a
selector, which is the variable grade in the last example. The selector expression can be
arbitrarily complex. For example, it can contain function calls. Usually, however, it
consists of a single variable. The selector expression is evaluated only once. The value it
yields can have any PL/SQL datatype other than BLOB, BFILE, an object type, a PL/SQL
record, an index-by-table, a varray, or a nested table.
The selector is followed by one or more WHEN clauses, which are checked sequentially.
The value of the selector determines which clause is executed. If the value of the
selector equals the value of a WHEN-clause expression, that WHEN clause is executed. For
instance, in the last example, if grade equals 'C', the program outputs 'Good'. Execution
never falls through; if any WHEN clause is executed, control passes to the next
statement.
The ELSE clause works similarly to the ELSE clause in an IF statement. In the last example,
if the grade is not one of the choices covered by a WHEN clause, the ELSE clause is
selected, and the phrase 'No such grade' is output. The ELSE clause is optional. However, if
you omit the ELSE clause, PL/SQL adds the following implicit ELSE clause:
ELSE RAISE CASE_NOT_FOUND;
There is always a default action, even when you omit the ELSE clause. If the CASE
statement does not match any of the WHEN clauses and you omit the ELSE clause, PL/SQL
raises the predefined exception CASE_NOT_FOUND.
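You can trap CASE_NOT_FOUND like any other exception. A minimal sketch, where the value 'X' deliberately matches no WHEN clause:

DECLARE
   grade CHAR(1) := 'X';
BEGIN
   CASE grade
      WHEN 'A' THEN dbms_output.put_line('Excellent');
      WHEN 'B' THEN dbms_output.put_line('Very Good');
   END CASE; -- no ELSE clause, so CASE_NOT_FOUND is raised
EXCEPTION
   WHEN CASE_NOT_FOUND THEN
      dbms_output.put_line('No matching WHEN clause');
END;
/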
The keywords END CASE terminate the CASE statement. These two keywords must be
separated by a space. The CASE statement has the following form:
[<<label_name>>]
CASE selector
WHEN expression1 THEN sequence_of_statements1;
WHEN expression2 THEN sequence_of_statements2;
...
WHEN expressionN THEN sequence_of_statementsN;
[ELSE sequence_of_statementsN+1;]
END CASE [label_name];
Like PL/SQL blocks, CASE statements can be labeled. The label, an undeclared identifier
enclosed by double angle brackets, must appear at the beginning of the CASE statement.
Optionally, the label name can also appear at the end of the CASE statement.
Exceptions raised during the execution of a CASE statement are handled in the usual
way. That is, normal execution stops and control transfers to the exception-handling
part of your PL/SQL block or subprogram.
An alternative to the CASE statement is the CASE expression, where each WHEN clause is
an expression. For details, see "CASE Expressions".
Using the Searched CASE Statement
PL/SQL also provides a searched CASE statement, which tests a list of Boolean conditions instead of comparing a single expression to various values. It has the following form:
[<<label_name>>]
CASE
WHEN search_condition1 THEN sequence_of_statements1;
WHEN search_condition2 THEN sequence_of_statements2;
...
WHEN search_conditionN THEN sequence_of_statementsN;
[ELSE sequence_of_statementsN+1;]
END CASE [label_name];
The searched CASE statement has no selector. Also, its WHEN clauses contain search
conditions that yield a Boolean value, not expressions that can yield a value of any type.
An example follows:
CASE
WHEN grade = 'A' THEN dbms_output.put_line('Excellent');
WHEN grade = 'B' THEN dbms_output.put_line('Very Good');
WHEN grade = 'C' THEN dbms_output.put_line('Good');
WHEN grade = 'D' THEN dbms_output.put_line('Fair');
WHEN grade = 'F' THEN dbms_output.put_line('Poor');
ELSE dbms_output.put_line('No such grade');
END CASE;
The search conditions are evaluated sequentially. The Boolean value of each search
condition determines which WHEN clause is executed. If a search condition yields TRUE,
its WHEN clause is executed. If any WHEN clause is executed, control passes to the next
statement, so subsequent search conditions are not evaluated.
If none of the search conditions yields TRUE, the ELSE clause is executed. The ELSE clause
is optional. However, if you omit the ELSE clause, PL/SQL adds the following implicit ELSE
clause:
ELSE RAISE CASE_NOT_FOUND;
Exceptions raised during the execution of a searched CASE statement are handled in the
usual way. That is, normal execution stops and control transfers to the exception-
handling part of your PL/SQL block or subprogram.
Guidelines for PL/SQL Conditional Statements
Avoid clumsy IF statements like those in the following example:
IF new_balance < minimum_balance THEN
overdrawn := TRUE;
ELSE
overdrawn := FALSE;
END IF;
...
IF overdrawn = TRUE THEN
RAISE insufficient_funds;
END IF;
The value of a Boolean expression can be assigned directly to a Boolean variable. You
can replace the first IF statement with a simple assignment:
overdrawn := new_balance < minimum_balance;
A Boolean variable is itself either true or false. You can simplify the condition in the
second IF statement:
IF overdrawn THEN ...
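Putting these simplifications together gives the following sketch; the sample balances are assumptions chosen so that the exception is raised:

DECLARE
   new_balance NUMBER := 100; -- sample values, for illustration only
   minimum_balance NUMBER := 500;
   overdrawn BOOLEAN;
   insufficient_funds EXCEPTION;
BEGIN
   overdrawn := new_balance < minimum_balance; -- direct Boolean assignment
   IF overdrawn THEN -- test the Boolean variable itself
      RAISE insufficient_funds;
   END IF;
EXCEPTION
   WHEN insufficient_funds THEN
      dbms_output.put_line('Account is overdrawn');
END;
/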
When possible, use the ELSIF clause instead of nested IF statements. Your code will be
easier to read and understand. Compare the following IF statements:
IF condition1 THEN | IF condition1 THEN
statement1; | statement1;
ELSE | ELSIF condition2 THEN
IF condition2 THEN | statement2;
statement2; | ELSIF condition3 THEN
ELSE | statement3;
IF condition3 THEN | END IF;
statement3; |
END IF; |
END IF; |
END IF; |
These statements are logically equivalent, but the second statement makes the logic
clearer.
To compare a single expression to multiple values, you can simplify the logic by using a
single CASE statement instead of an IF with several ELSIF clauses.
Iterative Control: LOOP and EXIT Statements
LOOP statements execute a sequence of statements multiple times. The simplest form is the basic loop, which encloses a sequence of statements between the keywords LOOP and END LOOP.
With each iteration of the loop, the sequence of statements is executed, then control
resumes at the top of the loop. You use an EXIT statement to stop looping and prevent
an infinite loop. You can place one or more EXIT statements anywhere inside a loop, but
not outside a loop. There are two forms of EXIT statements: EXIT and EXIT-WHEN.
Using the EXIT Statement
The EXIT statement forces a loop to complete unconditionally. When an EXIT statement is
encountered, the loop completes immediately and control passes to the next
statement:
LOOP
   ...
   IF some_condition THEN -- any Boolean condition
      EXIT; -- exit loop immediately
   END IF;
END LOOP;
-- control resumes here
Remember, the EXIT statement must be placed inside a loop. To complete a PL/SQL block
before its normal end is reached, you can use the RETURN statement. For more
information, see "Using the RETURN Statement".
Using the EXIT-WHEN Statement
The EXIT-WHEN statement lets a loop complete conditionally. When the EXIT statement is
encountered, the condition in the WHEN clause is evaluated. If the condition is true, the
loop completes and control passes to the next statement after the loop:
LOOP
FETCH c1 INTO ...
EXIT WHEN c1%NOTFOUND; -- exit loop if condition is true
...
END LOOP;
CLOSE c1;
Until the condition is true, the loop cannot complete. A statement inside the loop must
change the value of the condition. In the previous example, if the FETCH statement
returns a row, the condition is false. When the FETCH statement fails to return a row, the
condition is true, the loop completes, and control passes to the CLOSE statement.
The EXIT-WHEN statement replaces a simple IF statement. For example, compare the
following statements:
IF count > 100 THEN | EXIT WHEN count > 100;
EXIT; |
END IF; |
These statements are logically equivalent, but the EXIT-WHEN statement is easier to read
and understand.
Labeling a PL/SQL Loop
Like PL/SQL blocks, loops can be labeled. The label, an undeclared identifier enclosed by
double angle brackets, must appear at the beginning of the LOOP statement, as follows:
<<label_name>>
LOOP
sequence_of_statements
END LOOP;
Optionally, the label name can also appear at the end of the LOOP statement, as the
following example shows:
<<my_loop>>
LOOP
...
END LOOP my_loop;
When you nest labeled loops, use ending label names to improve readability.
With either form of EXIT statement, you can complete not only the current loop, but any
enclosing loop. Simply label the enclosing loop that you want to complete. Then, use the
label in an EXIT statement, as follows:
<<outer>>
LOOP
...
LOOP
...
EXIT outer WHEN ... -- exit both loops
END LOOP;
...
END LOOP outer;
Before each iteration of the loop, the condition is evaluated. If it is true, the sequence of
statements is executed, then control resumes at the top of the loop. If it is false or null,
the loop is skipped and control passes to the next statement:
WHILE total <= 25000 LOOP
SELECT sal INTO salary FROM emp WHERE ...
total := total + salary;
END LOOP;
The number of iterations depends on the condition and is unknown until the loop
completes. The condition is tested at the top of the loop, so the sequence might execute
zero times. In the last example, if the initial value of total is larger than 25000, the
condition is false and the loop is skipped.
Some languages have a LOOP UNTIL or REPEAT UNTIL structure, which tests the condition at
the bottom of the loop instead of at the top, so that the sequence of statements is
executed at least once. The equivalent in PL/SQL would be:
LOOP
sequence_of_statements
EXIT WHEN boolean_expression;
END LOOP;
To ensure that a WHILE loop executes at least once, use an initialized Boolean variable in
the condition, as follows:
done := FALSE;
WHILE NOT done LOOP
sequence_of_statements
done := boolean_expression;
END LOOP;
A statement inside the loop must assign a new value to the Boolean variable to avoid an
infinite loop.
Using the FOR-LOOP Statement
Simple FOR loops iterate over a specified range of integers. The number of iterations is
known before the loop is entered. A double dot (..) serves as the range operator:
FOR counter IN [REVERSE] lower_bound..higher_bound LOOP
sequence_of_statements
END LOOP;
The range is evaluated when the FOR loop is first entered and is never re-evaluated.
As the next example shows, the sequence of statements is executed once for each
integer in the range. After each iteration, the loop counter is incremented.
FOR i IN 1..3 LOOP -- assign the values 1,2,3 to i
sequence_of_statements -- executes three times
END LOOP;
If the lower bound equals the higher bound, the loop body is executed once:
FOR i IN 3..3 LOOP -- assign the value 3 to i
sequence_of_statements -- executes one time
END LOOP;
By default, iteration proceeds upward from the lower bound to the higher bound. If you
use the keyword REVERSE, iteration proceeds downward from the higher bound to the
lower bound. After each iteration, the loop counter is decremented. You still write the
range bounds in ascending (not descending) order.
FOR i IN REVERSE 1..3 LOOP -- assign the values 3,2,1 to i
sequence_of_statements -- executes three times
END LOOP;
Inside a FOR loop, the counter can be read but cannot be changed:
FOR ctr IN 1..10 LOOP
IF NOT finished THEN
INSERT INTO ... VALUES (ctr, ...); -- OK
factor := ctr * 2; -- OK
ELSE
ctr := 10; -- not allowed
END IF;
END LOOP;
Internally, PL/SQL assigns the values of the bounds to temporary PLS_INTEGER variables,
and, if necessary, rounds the values to the nearest integer. The magnitude range of a
PLS_INTEGER is -2**31 .. 2**31. If a bound evaluates to a number outside that range, you
get a numeric overflow error when PL/SQL attempts the assignment.
Some languages provide a STEP clause, which lets you specify a different increment (5
instead of 1 for example). PL/SQL has no such structure, but you can easily build one.
Inside the FOR loop, simply multiply each reference to the loop counter by the new
increment. In the following example, you assign today's date to elements 5, 10, and 15
of an index-by table:
DECLARE
   TYPE DateList IS TABLE OF DATE INDEX BY PLS_INTEGER;
   dates DateList;
   k CONSTANT INTEGER := 5; -- set new increment
BEGIN
   FOR j IN 1..3 LOOP
      dates(j * k) := SYSDATE; -- multiply loop counter by increment
   END LOOP;
END;
/
If the lower bound of a loop range evaluates to a larger integer than the upper bound,
the loop body is not executed and control passes to the next statement:
-- limit becomes 1
FOR i IN 2..limit LOOP
sequence_of_statements -- executes zero times
END LOOP;
-- control passes here
You do not need to declare the loop counter because it is implicitly declared as a local
variable of type INTEGER. It is safest not to use the name of an existing variable, because
the local declaration hides any global declaration:
DECLARE
ctr INTEGER := 3;
BEGIN
...
FOR ctr IN 1..25 LOOP
...
IF ctr > 10 THEN ... -- Refers to loop counter
END LOOP;
-- After the loop, ctr refers to the original variable with value 3.
END;
To reference the global variable in this example, you must use a label and dot notation,
as follows:
<<main>>
DECLARE
ctr INTEGER;
...
BEGIN
...
FOR ctr IN 1..25 LOOP
...
IF main.ctr > 10 THEN -- refers to global variable
...
END IF;
END LOOP;
END main;
The same scope rules apply to nested FOR loops. Consider the example below. Both loop
counters have the same name. To reference the outer loop counter from the inner loop,
you use a label and dot notation:
<<outer>>
FOR step IN 1..25 LOOP
FOR step IN 1..10 LOOP
...
IF outer.step > 15 THEN ... -- refers to the outer loop counter
END LOOP;
END LOOP outer;
Suppose you must exit early from a nested FOR loop. To complete not only the current
loop, but also any enclosing loop, label the enclosing loop and use the label in an EXIT
statement:
<<outer>>
FOR i IN 1..5 LOOP
...
FOR j IN 1..10 LOOP
FETCH c1 INTO emp_rec;
EXIT outer WHEN c1%NOTFOUND; -- exit both FOR loops
...
END LOOP;
END LOOP outer;
-- control passes here
Sequential Control: GOTO and NULL Statements
Unlike the IF and LOOP statements, the GOTO and NULL statements are not crucial to
PL/SQL programming. The GOTO statement is seldom needed. Occasionally, it can
simplify logic enough to warrant its use. The NULL statement can improve readability by
making the meaning and action of conditional statements clear.
Overuse of GOTO statements can result in code that is hard to understand and maintain.
Use GOTO statements sparingly. For example, to branch from a deeply nested structure
to an error-handling routine, raise an exception rather than use a GOTO statement.
The label end_loop in the following example is not allowed because it does not precede
an executable statement:
DECLARE
done BOOLEAN;
BEGIN
FOR i IN 1..50 LOOP
IF done THEN
GOTO end_loop;
END IF;
<<end_loop>> -- not allowed
END LOOP; -- not an executable statement
END;
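One common fix is to place a NULL statement after the label, so that the label precedes an executable statement. A sketch of the corrected block:

DECLARE
   done BOOLEAN;
BEGIN
   FOR i IN 1..50 LOOP
      IF done THEN
         GOTO end_loop;
      END IF;
      <<end_loop>>
      NULL; -- an executable statement that makes the label legal
   END LOOP;
END;
/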
As the following example shows, a GOTO statement can branch to an enclosing block
from the current block:
DECLARE
my_ename CHAR(10);
BEGIN
<<get_name>>
SELECT ename INTO my_ename FROM emp WHERE ...
BEGIN
GOTO get_name; -- branch to enclosing block
END;
END;
The GOTO statement branches to the first enclosing block in which the referenced label
appears.
A GOTO statement cannot branch from one IF statement clause to another, or from one
CASE statement WHEN clause to another.
A GOTO statement cannot branch from an outer block into a sub-block (that is, an inner
BEGIN-END block).
A GOTO statement cannot branch out of a subprogram. To end a subprogram early, you
can use the RETURN statement or use GOTO to branch to a place right before the end of
the subprogram.
A GOTO statement cannot branch from an exception handler back into the current BEGIN-
END block. However, a GOTO statement can branch from an exception handler into an
enclosing block.
DECLARE
CURSOR c1 IS SELECT last_name, salary FROM employees WHERE ROWNUM < 11;
my_ename employees.last_name%TYPE;
my_salary employees.salary%TYPE;
BEGIN
OPEN c1;
LOOP
FETCH c1 INTO my_ename, my_salary;
IF c1%FOUND THEN -- fetch succeeded
dbms_output.put_line('Name = ' || my_ename || ', salary = ' ||
my_salary);
ELSE -- fetch failed, so exit loop
EXIT;
END IF;
END LOOP;
END;
/
If a cursor or cursor variable is not open, referencing it with %FOUND raises the
predefined exception INVALID_CURSOR.
DECLARE
CURSOR c1 IS SELECT last_name, salary FROM employees WHERE ROWNUM < 11;
my_ename employees.last_name%TYPE;
my_salary employees.salary%TYPE;
BEGIN
OPEN c1;
LOOP
FETCH c1 INTO my_ename, my_salary;
IF c1%NOTFOUND THEN -- fetch failed, so exit loop
-- A shorter form of this test is "EXIT WHEN c1%NOTFOUND;"
EXIT;
ELSE -- fetch succeeded
dbms_output.put_line('Name = ' || my_ename || ', salary = ' ||
my_salary);
END IF;
END LOOP;
END;
/
Before the first fetch, %NOTFOUND returns NULL. If FETCH never executes successfully, the
loop is never exited, because the EXIT WHEN statement executes only if its WHEN
condition is true. To be safe, you might want to use the following EXIT statement instead:
EXIT WHEN c1%NOTFOUND OR c1%NOTFOUND IS NULL;
If a cursor or cursor variable is not open, referencing it with %NOTFOUND raises an
INVALID_CURSOR exception.
Table - shows what each cursor attribute returns before and after you execute an OPEN,
FETCH, or CLOSE statement.
Notes:
1. Referencing %FOUND, %NOTFOUND, or %ROWCOUNT before a cursor is opened or
after it is closed raises the predefined exception INVALID_CURSOR.
A query work area remains accessible as long as any cursor variable points to it, as you
pass the value of a cursor variable from one scope to another. For example, if you pass a
host cursor variable to a PL/SQL block embedded in a Pro*C program, the work area to
which the cursor variable points remains accessible after the block completes.
If you have a PL/SQL engine on the client side, calls from client to server impose no
restrictions. For example, you can declare a cursor variable on the client side, open and
fetch from it on the server side, then continue to fetch from it back on the client side.
You can also reduce network traffic by having a PL/SQL block open or close several host
cursor variables in a single round trip.
Declaring REF CURSOR Types and Cursor Variables
To create cursor variables, you define a REF CURSOR type, then declare cursor variables of
that type. You can define REF CURSOR types in any PL/SQL block, subprogram, or package.
In the following example, you declare a REF CURSOR type that represents a result set from
the DEPARTMENTS table:
DECLARE
TYPE DeptCurTyp IS REF CURSOR RETURN departments%ROWTYPE;
REF CURSOR types can be strong (with a return type) or weak (with no return type).
Strong REF CURSOR types are less error prone because the PL/SQL compiler lets you
associate a strongly typed cursor variable only with queries that return the right set of
columns. Weak REF CURSOR types are more flexible because the compiler lets you
associate a weakly typed cursor variable with any query.
Because there is no type checking with a weak REF CURSOR, all such types are
interchangeable. Instead of creating a new type, you can use the predefined type
SYS_REFCURSOR.
Once you define a REF CURSOR type, you can declare cursor variables of that type in any
PL/SQL block or subprogram.
DECLARE
TYPE EmpCurTyp IS REF CURSOR RETURN emp%ROWTYPE; -- strong
TYPE GenericCurTyp IS REF CURSOR; -- weak
cursor1 EmpCurTyp;
cursor2 GenericCurTyp;
my_cursor SYS_REFCURSOR; -- didn't need to declare a new type above
To avoid declaring the same REF CURSOR type in each subprogram that uses it, you can
put the REF CURSOR declaration in a package spec. You can declare cursor variables of
that type in the corresponding package body, or within your own procedure or function.
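For example, a package spec might centralize the REF CURSOR types (a minimal sketch; the package name cv_types is an assumption):
CREATE PACKAGE cv_types AS
TYPE GenericCurTyp IS REF CURSOR; -- weak
TYPE EmpCurTyp IS REF CURSOR RETURN employees%ROWTYPE; -- strong
END cv_types;
/
Subprograms can then declare cursor variables as cv_types.EmpCurTyp without repeating the type definition.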
Example - Cursor Variable Returning %ROWTYPE
In the RETURN clause of a REF CURSOR type definition, you can use %ROWTYPE to refer to a
strongly typed cursor variable:
DECLARE
TYPE TmpCurTyp IS REF CURSOR RETURN employees%ROWTYPE;
tmp_cv TmpCurTyp; -- declare cursor variable
TYPE EmpCurTyp IS REF CURSOR RETURN tmp_cv%ROWTYPE;
emp_cv EmpCurTyp; -- declare cursor variable
BEGIN
NULL;
END;
/
Example - Cursor Variable Returning %TYPE
You can also use %TYPE to provide the datatype of a record variable:
DECLARE
dept_rec departments%ROWTYPE; -- declare record variable
TYPE DeptCurTyp IS REF CURSOR RETURN dept_rec%TYPE;
dept_cv DeptCurTyp; -- declare cursor variable
BEGIN
NULL;
END;
/
Example - Cursor Variable Returning Record Type
This example specifies a user-defined RECORD type in the RETURN clause:
DECLARE
TYPE EmpRecTyp IS RECORD (
employee_id NUMBER,
last_name VARCHAR2(30),
salary NUMBER(7,2));
TYPE EmpCurTyp IS REF CURSOR RETURN EmpRecTyp;
emp_cv EmpCurTyp; -- declare cursor variable
BEGIN
NULL;
END;
/
DECLARE
TYPE EmpCurTyp IS REF CURSOR RETURN employees%ROWTYPE;
emp EmpCurTyp;
-- Assumes a procedure process_emp_cv (emp_cv IN OUT EmpCurTyp) is defined elsewhere.
BEGIN
-- First find 10 arbitrary employees.
OPEN emp FOR SELECT * FROM employees WHERE ROWNUM < 11;
process_emp_cv(emp);
CLOSE emp;
-- Then find employees matching a condition.
OPEN emp FOR SELECT * FROM employees WHERE last_name LIKE 'R%';
process_emp_cv(emp);
CLOSE emp;
END;
/
Note: Like all pointers, cursor variables increase the possibility of parameter aliasing.
Controlling Cursor Variables: OPEN-FOR, FETCH, and CLOSE
You use three statements to control a cursor variable: OPEN-FOR, FETCH, and CLOSE. First,
you OPEN a cursor variable FOR a multi-row query. Then, you FETCH rows from the result
set. When all the rows are processed, you CLOSE the cursor variable.
The cursor variable can be declared directly in PL/SQL, or in a PL/SQL host environment
such as an OCI program.
The SELECT statement for the query can be coded directly in the statement, or can be a
string variable or string literal. When you use a string as the query, it can include
placeholders for bind variables, and you specify the corresponding values with a USING
clause.
Note: This section discusses the static SQL case, in which a select_statement is coded
directly in the OPEN-FOR statement. In the dynamic SQL case, a dynamic_string is supplied instead.
Unlike cursors, cursor variables take no parameters. Instead, you can pass whole queries
(not just parameters) to a cursor variable. The query can reference host variables and
PL/SQL variables, parameters, and functions.
The example below opens a cursor variable. Notice that you can apply cursor attributes
(%FOUND, %NOTFOUND, %ISOPEN, and %ROWCOUNT) to a cursor variable.
DECLARE
TYPE EmpCurTyp IS REF CURSOR RETURN employees%ROWTYPE;
emp_cv EmpCurTyp;
BEGIN
IF NOT emp_cv%ISOPEN THEN
/* Open cursor variable. */
OPEN emp_cv FOR SELECT * FROM employees;
END IF;
CLOSE emp_cv;
END;
/
Other OPEN-FOR statements can open the same cursor variable for different queries. You
need not close a cursor variable before reopening it. (Recall that consecutive OPENs of a
static cursor raise the predefined exception CURSOR_ALREADY_OPEN.) When you reopen a
cursor variable for a different query, the previous query is lost.
Example - Stored Procedure to Open a Ref Cursor
Typically, you open a cursor variable by passing it to a stored procedure that declares an
IN OUT parameter that is a cursor variable. For example, the following procedure opens
a cursor variable:
CREATE PACKAGE emp_data AS
TYPE EmpCurTyp IS REF CURSOR RETURN employees%ROWTYPE;
PROCEDURE open_emp_cv (emp_cv IN OUT EmpCurTyp);
END emp_data;
/
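A matching package body might open the cursor variable like this (a minimal sketch; the query itself is an assumption):
CREATE PACKAGE BODY emp_data AS
PROCEDURE open_emp_cv (emp_cv IN OUT EmpCurTyp) IS
BEGIN
OPEN emp_cv FOR SELECT * FROM employees;
END open_emp_cv;
END emp_data;
/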
You can also use a standalone stored procedure to open the cursor variable. Define the
REF CURSOR type in a package, then reference that type in the parameter declaration for
the stored procedure.
Example - Stored Procedure to Open Ref Cursors with Different Queries
To centralize data retrieval, you can group type-compatible queries in a stored
procedure. In the example below, the packaged procedure declares a selector as one of
its formal parameters. When called, the procedure opens the cursor variable emp_cv for
the chosen query.
CREATE PACKAGE emp_data AS
TYPE EmpCurTyp IS REF CURSOR RETURN emp%ROWTYPE;
PROCEDURE open_emp_cv (emp_cv IN OUT EmpCurTyp, choice INT);
END emp_data;
A weakly typed cursor variable can even be opened for queries that return different row
types. In the following package, the procedure opens the cursor variable generic_cv for
one of three queries, depending on the selector:
CREATE PACKAGE admin_data AS
TYPE GenCurTyp IS REF CURSOR;
PROCEDURE open_cv (generic_cv IN OUT GenCurTyp, choice INT);
END admin_data;
/
CREATE PACKAGE BODY admin_data AS
PROCEDURE open_cv (generic_cv IN OUT GenCurTyp, choice INT) IS
BEGIN
IF choice = 1 THEN
OPEN generic_cv FOR SELECT * FROM emp;
ELSIF choice = 2 THEN
OPEN generic_cv FOR SELECT * FROM dept;
ELSIF choice = 3 THEN
OPEN generic_cv FOR SELECT * FROM salgrade;
END IF;
END;
END admin_data;
/
Host cursor variables are compatible with any query return type. They behave just like
weakly typed PL/SQL cursor variables.
This technique might be useful in Oracle Forms, for instance, when you want to
populate a multi-block form.
When you pass host cursor variables to a PL/SQL block for opening, the query work
areas to which they point remain accessible after the block completes, so your OCI or
Pro*C program can use these work areas for ordinary cursor operations. In the following
example, you open several such work areas in a single round trip:
BEGIN
OPEN :c1 FOR SELECT 1 FROM dual;
OPEN :c2 FOR SELECT 1 FROM dual;
OPEN :c3 FOR SELECT 1 FROM dual;
END;
The cursors assigned to c1, c2, and c3 behave normally, and you can use them for any
purpose. When finished, release the cursors as follows:
BEGIN
CLOSE :c1;
CLOSE :c2;
CLOSE :c3;
END;
Instead of opening a cursor variable with an OPEN-FOR statement, you can assign to it
the value of an already opened host cursor variable or PL/SQL cursor variable.
If you assign an unopened cursor variable to another cursor variable, the second one
remains invalid even after you open the first one.
Be careful when passing cursor variables as parameters. At run time, PL/SQL raises
ROWTYPE_MISMATCH if the return types of the actual and formal parameters are
incompatible.
Restrictions on Cursor Variables
Currently, cursor variables are subject to the following restrictions:
You cannot declare cursor variables in a package spec. For example, the
following declaration is not allowed:
CREATE PACKAGE emp_stuff AS
TYPE EmpCurTyp IS REF CURSOR RETURN emp%ROWTYPE;
emp_cv EmpCurTyp; -- not allowed
END emp_stuff;
You cannot pass cursor variables to a procedure that is called through a database
link.
If you pass a host cursor variable to PL/SQL, you cannot fetch from it on the
server side unless you also open it there on the same server call.
You cannot use comparison operators to test cursor variables for equality,
inequality, or nullity.
You cannot assign nulls to a cursor variable.
A nested cursor is implicitly opened when the containing row is fetched from the parent
cursor. The nested cursor is closed only when:
The nested cursor is explicitly closed by the user
The parent cursor is reexecuted
The parent cursor is closed
The parent cursor is canceled
An error arises during a fetch on one of its parent cursors. The nested cursor is
closed as part of the clean-up.
Restrictions on Cursor Expressions
You cannot use a cursor expression with an implicit cursor.
Cursor expressions can appear only:
In a SELECT statement that is not nested in any other query expression,
except when it is a subquery of the cursor expression itself.
As arguments to table functions, in the FROM clause of a SELECT statement.
Cursor expressions can appear only in the outermost SELECT list of the query
specification.
Cursor expressions cannot appear in view declarations.
You cannot perform BIND and EXECUTE operations on cursor expressions.
Example of Cursor Expressions
In this example, each row of the outer query contains a department name and a cursor
from which we can fetch that department's employees from another table. As we fetch
each department's name, we also get the cursor that lets us fetch the associated
employee details.
DECLARE
TYPE emp_cur_typ IS REF CURSOR;
emp_cur emp_cur_typ;
dept_name departments.department_name%TYPE;
emp_name employees.last_name%TYPE;
CURSOR c1 IS SELECT
department_name,
-- The 2nd item in the result set is another result set,
-- which is represented as a ref cursor and labelled "employees".
CURSOR
(
SELECT e.last_name FROM employees e
WHERE e.department_id = d.department_id
) employees
FROM departments d
WHERE department_name like 'A%';
BEGIN
OPEN c1;
LOOP
FETCH c1 INTO dept_name, emp_cur;
EXIT WHEN c1%NOTFOUND;
dbms_output.put_line('Department: ' || dept_name);
-- For each row in the result set, we can process the result
-- set from a subquery. We could pass the ref cursor to a procedure
-- instead of processing it here in the loop.
LOOP
FETCH emp_cur INTO emp_name;
EXIT WHEN emp_cur%NOTFOUND;
dbms_output.put_line('-- Employee: ' || emp_name);
END LOOP;
END LOOP;
CLOSE c1;
END;
/
Overview of Transaction Processing
COMMIT makes permanent any changes made to the database during the current
transaction. ROLLBACK ends the current transaction and undoes any changes made since
the transaction began. If an exception is raised or a SQL statement fails, a rollback lets
you take corrective action and perhaps start over.
SAVEPOINT names and marks the current point in the processing of a transaction.
Savepoints let you roll back part of a transaction instead of the whole transaction.
Consider a transaction that transfers money from one bank account to another. It is
important that the money come out of one account, and into the other, at exactly the
same moment. Otherwise, a problem partway through might make the money be lost
from both accounts or be duplicated in both accounts.
BEGIN
UPDATE accts SET bal = my_bal - debit
WHERE acctno = 7715;
UPDATE accts SET bal = my_bal + credit
WHERE acctno = 7720;
COMMIT WORK;
END;
Transactions are not tied to PL/SQL BEGIN-END blocks. A block can contain multiple
transactions, and a transaction can span multiple blocks.
The optional COMMENT clause lets you specify a comment to be associated with a
distributed transaction. If a network or machine fails during the commit, the state of the
distributed transaction might be unknown or in doubt. In that case, Oracle stores the
text specified by COMMENT in the data dictionary along with the transaction ID. The text
must be a quoted literal up to 50 characters long:
COMMIT COMMENT 'In-doubt order transaction; notify Order Entry';
PL/SQL does not support the FORCE clause of SQL, which manually commits an in-doubt
distributed transaction.
The following example inserts information about an employee into three different
database tables. If an INSERT statement tries to store a duplicate employee number, the
predefined exception DUP_VAL_ON_INDEX is raised. To make sure that changes to all three
tables are undone, the exception handler executes a ROLLBACK.
DECLARE
emp_id INTEGER;
BEGIN
SELECT empno, ... INTO emp_id, ... FROM new_emp WHERE ...
INSERT INTO emp VALUES (emp_id, ...);
-- ... similar INSERTs into the other two tables ...
EXCEPTION
WHEN DUP_VAL_ON_INDEX THEN
ROLLBACK;
END;
/
Statement-Level Rollbacks
Before executing a SQL statement, Oracle marks an implicit savepoint. Then, if the
statement fails, Oracle rolls it back automatically. For example, if an INSERT statement
raises an exception by trying to insert a duplicate value in a unique index, the statement
is rolled back. Only work started by the failed SQL statement is lost. Work done before
that statement in the current transaction is kept.
Oracle can also roll back single SQL statements to break deadlocks. Oracle signals an
error to one of the participating transactions and rolls back the current statement in
that transaction.
Before executing a SQL statement, Oracle must parse it, that is, examine it to make sure
it follows syntax rules and refers to valid schema objects. Errors detected while
executing a SQL statement cause a rollback, but errors detected while parsing the
statement do not.
The following example marks a savepoint before doing an insert. If the INSERT statement
tries to store a duplicate value in the empno column, the predefined exception
DUP_VAL_ON_INDEX is raised. In that case, you roll back to the savepoint, undoing just the
insert.
DECLARE
emp_id emp.empno%TYPE;
BEGIN
UPDATE emp SET ... WHERE empno = emp_id;
DELETE FROM emp WHERE ...
SAVEPOINT do_insert;
INSERT INTO emp VALUES (emp_id, ...);
EXCEPTION
WHEN DUP_VAL_ON_INDEX THEN
ROLLBACK TO do_insert;
END;
When you roll back to a savepoint, any savepoints marked after that savepoint are
erased. The savepoint to which you roll back is not erased. A simple rollback or commit
erases all savepoints.
If you mark a savepoint within a recursive subprogram, new instances of the SAVEPOINT
statement are executed at each level in the recursive descent, but you can only roll back
to the most recently marked savepoint.
Savepoint names are undeclared identifiers. Reusing a savepoint name within a
transaction moves the savepoint from its old position to the current point in the
transaction. Thus, a rollback to the savepoint affects only the current part of your
transaction:
BEGIN
SAVEPOINT my_point;
UPDATE emp SET ... WHERE empno = emp_id;
SAVEPOINT my_point; -- move my_point to current point
INSERT INTO emp VALUES (emp_id, ...);
EXCEPTION
WHEN OTHERS THEN
ROLLBACK TO my_point;
END;
For example, in the SQL*Plus environment, if your PL/SQL block does not include a
COMMIT or ROLLBACK statement, the final state of your transaction depends on what you
do after running the block. If you execute a data definition, data control, or COMMIT
statement or if you issue the EXIT, DISCONNECT, or QUIT command, Oracle commits the
transaction. If you execute a ROLLBACK statement or abort the SQL*Plus session, Oracle
rolls back the transaction.
Oracle precompiler programs roll back the transaction unless the program explicitly
commits or rolls back work, and disconnects using the RELEASE parameter:
EXEC SQL COMMIT WORK RELEASE;
Setting Transaction Properties with SET TRANSACTION
You use the SET TRANSACTION statement to begin a read-only or read-write transaction,
establish an isolation level, or assign your current transaction to a specified rollback
segment. Read-only transactions are useful for running multiple queries while other
users update the same tables.
During a read-only transaction, all queries refer to the same snapshot of the database,
providing a multi-table, multi-query, read-consistent view. Other users can continue to
query or update data as usual. A commit or rollback ends the transaction. In the
example below a store manager uses a read-only transaction to gather sales figures for
the day, the past week, and the past month. The figures are unaffected by other users
updating the database during the transaction.
DECLARE
daily_sales REAL;
weekly_sales REAL;
monthly_sales REAL;
BEGIN
COMMIT; -- ends previous transaction
SET TRANSACTION READ ONLY NAME 'Calculate sales figures';
SELECT SUM(amt) INTO daily_sales FROM sales
WHERE dte = SYSDATE;
SELECT SUM(amt) INTO weekly_sales FROM sales
WHERE dte > SYSDATE - 7;
SELECT SUM(amt) INTO monthly_sales FROM sales
WHERE dte > SYSDATE - 30;
COMMIT; -- ends read-only transaction
END;
The SET TRANSACTION statement must be the first SQL statement in a read-only
transaction and can only appear once in a transaction. If you set a transaction to READ
ONLY, subsequent queries see only changes committed before the transaction began.
The use of READ ONLY does not affect other users or transactions.
The SELECT ... FOR UPDATE statement identifies the rows that will be updated or deleted,
then locks each row in the result set. This is useful when you want to base an update on
the existing values in a row. In that case, you must make sure the row is not changed by
another user before the update.
The optional keyword NOWAIT tells Oracle not to wait if requested rows have been
locked by another user. Control is immediately returned to your program so that it can
do other work before trying again to acquire the lock. If you omit the keyword NOWAIT,
Oracle waits until the rows are available.
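For example, a NOWAIT failure raises Oracle error -54, which you can trap with a user-defined exception; a minimal sketch (the cursor and table are assumptions):
DECLARE
row_locked EXCEPTION;
PRAGMA EXCEPTION_INIT(row_locked, -54);
CURSOR c1 IS SELECT sal FROM emp WHERE deptno = 20
FOR UPDATE NOWAIT;
BEGIN
OPEN c1; -- raises row_locked if another session holds a lock on these rows
CLOSE c1;
EXCEPTION
WHEN row_locked THEN
dbms_output.put_line('Rows locked by another session; try again later.');
END;
/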
All rows are locked when you open the cursor, not as they are fetched. The rows are
unlocked when you commit or roll back the transaction. Since the rows are no longer
locked, you cannot fetch from a FOR UPDATE cursor after a commit. (For a workaround,
see "Fetching Across Commits".)
When querying multiple tables, you can use the FOR UPDATE clause to confine row
locking to particular tables. Rows in a table are locked only if the FOR UPDATE OF clause
refers to a column in that table. For example, the following query locks rows in the emp
table but not in the dept table:
DECLARE
CURSOR c1 IS SELECT ename, dname FROM emp, dept
WHERE emp.deptno = dept.deptno AND job = 'MANAGER'
FOR UPDATE OF sal;
As the next example shows, you use the CURRENT OF clause in an UPDATE or DELETE
statement to refer to the latest row fetched from a cursor:
DECLARE
CURSOR c1 IS SELECT empno, job, sal FROM emp FOR UPDATE;
BEGIN
OPEN c1;
LOOP
FETCH c1 INTO ...
EXIT WHEN c1%NOTFOUND;
UPDATE emp SET sal = new_sal WHERE CURRENT OF c1;
END LOOP;
CLOSE c1;
END;
/
The lock mode determines what other locks can be placed on the table. For example,
many users can acquire row share locks on a table at the same time, but only one user
at a time can acquire an exclusive lock. While one user has an exclusive lock on a table,
no other users can insert, delete, or update rows in that table. For more information
about lock modes, see Oracle Database Application Developer's Guide -
Fundamentals.
A table lock never keeps other users from querying a table, and a query never acquires a
table lock. Only if two different transactions try to modify the same row will one
transaction wait for the other to complete.
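A session can also request one of these lock modes explicitly with a LOCK TABLE statement, which PL/SQL supports directly (a sketch; the emp table is an assumption):
BEGIN
LOCK TABLE emp IN ROW SHARE MODE NOWAIT;
-- update rows here ...
COMMIT; -- releases the table lock
END;
/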
Fetching Across Commits
PL/SQL raises an exception if you try to fetch from a FOR UPDATE cursor after doing a
commit. The FOR UPDATE clause locks the rows when you open the cursor, and unlocks
them when you commit.
DECLARE
CURSOR c1 IS SELECT ename FROM emp FOR UPDATE OF sal;
BEGIN
FOR emp_rec IN c1 LOOP -- FETCH fails on the second iteration
INSERT INTO temp VALUES ('still going');
COMMIT; -- releases locks
END LOOP;
END;
If you want to fetch across commits, use the ROWID pseudocolumn to mimic the CURRENT
OF clause. Select the rowid of each row into a UROWID variable, then use the rowid to
identify the current row during subsequent updates and deletes:
DECLARE
CURSOR c1 IS SELECT ename, job, rowid FROM emp;
my_ename emp.ename%TYPE;
my_job emp.job%TYPE;
my_rowid UROWID;
BEGIN
OPEN c1;
LOOP
FETCH c1 INTO my_ename, my_job, my_rowid;
EXIT WHEN c1%NOTFOUND;
UPDATE emp SET sal = sal * 1.05 WHERE rowid = my_rowid;
COMMIT; -- the rowid still identifies the row after the commit
END LOOP;
CLOSE c1;
END;
/
Because the fetched rows are not locked by a FOR UPDATE clause, other users might
unintentionally overwrite your changes. The extra space needed for read consistency is
not released until the cursor is closed, which can slow down processing for large
updates.
The next example shows that you can use the %ROWTYPE attribute with cursors that
reference the ROWID pseudocolumn:
DECLARE
CURSOR c1 IS SELECT ename, sal, rowid FROM emp;
emp_rec c1%ROWTYPE;
BEGIN
OPEN c1;
LOOP
FETCH c1 INTO emp_rec;
EXIT WHEN c1%NOTFOUND;
IF ... THEN
DELETE FROM emp WHERE rowid = emp_rec.rowid;
END IF;
END LOOP;
CLOSE c1;
END;
Using PL/SQL Subprograms
Database Triggers
A database trigger is a stored PL/SQL program unit associated with a specific database
table or view. The code in the trigger defines the action the database performs
whenever some database manipulation (INSERT, UPDATE, DELETE) takes place.
Unlike stored procedures and functions, which must be called explicitly, database
triggers are fired (executed) implicitly whenever the table is affected by any of these
DML operations.
Up to Oracle 7.0, only 12 triggers could be associated with a given table; later versions
of Oracle have no such limitation. A database trigger fires with the privileges of its
owner, not those of the current user.
A trigger consists of three parts:
1. A triggering event
2. A trigger constraint (optional)
3. A trigger action
Types of Triggers
A row trigger fires once for each row affected. It uses the FOR EACH ROW clause. Row
triggers are useful when the trigger action depends on the data in the affected rows or
on the number of rows affected.
A statement trigger fires once, irrespective of the number of rows affected in the table.
Statement triggers are useful when the trigger action does not depend on the data in
the affected rows.
While defining a trigger, we can specify whether to perform the trigger action (that is,
execute the trigger body) before or after the triggering statement. BEFORE and AFTER
triggers fired by DML statements can be defined only on tables.
BEFORE triggers: the trigger action runs before the triggering statement.
AFTER triggers: the trigger action runs after the triggering statement.
INSTEAD OF triggers provide a way of modifying views that cannot be modified directly
through DML statements.
A LOGON trigger fires after a successful logon by a user; a LOGOFF trigger fires at the
start of user logoff.
Points to ponder
A BEFORE CREATE or AFTER CREATE trigger fires when a schema object is created.
A BEFORE ALTER or AFTER ALTER trigger fires when a schema object is altered.
A BEFORE DROP or AFTER DROP trigger fires when a schema object is dropped.
A trigger can be enabled (allowed to run) or disabled (prevented from running). A
trigger is automatically enabled when it is created; a disabled trigger must be
re-enabled before it will fire again. To enable or disable a trigger using the ALTER
TRIGGER command, you must own the trigger or have the ALTER ANY TRIGGER privilege. To
create a trigger you must have the CREATE TRIGGER privilege, which is granted as part of
the RESOURCE role at the time of user creation.
A trigger can be used to handle multiple situations as shown in the following example.
By using conditional predicates UPDATING, INSERTING, or DELETING we can handle each
situation.
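A sketch of such a trigger follows; the table and the audit actions are assumptions for illustration:
CREATE OR REPLACE TRIGGER emp_dml_audit
AFTER INSERT OR UPDATE OR DELETE ON emp
FOR EACH ROW
BEGIN
IF INSERTING THEN
dbms_output.put_line('Row inserted');
ELSIF UPDATING THEN
dbms_output.put_line('Row updated');
ELSIF DELETING THEN
dbms_output.put_line('Row deleted');
END IF;
END;
/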
CORRELATION NAMES
While using row triggers, the trigger action can access the column values of the row
currently being processed. This is done through correlation names. For every column of
the table there are two correlation names, one for the column's old value and one for its
new value. We use the qualifier NEW with a column name to refer to the new value, and
the qualifier OLD to refer to the old value of the column.
Example:
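A minimal sketch of such a row trigger; the emp table, its columns, and the sal_audit table are assumptions:
CREATE OR REPLACE TRIGGER trg_sal_audit
AFTER UPDATE OF sal ON emp
FOR EACH ROW
BEGIN
-- :OLD.sal is the value before the update, :NEW.sal the value after
INSERT INTO sal_audit (empno, old_sal, new_sal, changed_on)
VALUES (:OLD.empno, :OLD.sal, :NEW.sal, SYSDATE);
END;
/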
The REFERENCING option is used to avoid name conflicts between correlation names
and table names. For example, if you are using a table named NEW or OLD, with fields
such as SNO and NAME (a very rare situation), ambiguity arises. To avoid this, we use the
REFERENCING option.
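For instance, if the table itself were named new (the rare case mentioned above), the correlation names could be renamed; a sketch, with the table and its NAME column as assumptions:
CREATE OR REPLACE TRIGGER trg_new_upper
BEFORE UPDATE ON new
REFERENCING OLD AS old_row NEW AS new_row
FOR EACH ROW
BEGIN
-- :new_row replaces the default :NEW qualifier, avoiding the clash with the table name
:new_row.name := UPPER(:new_row.name);
END;
/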
For example, suppose deleting a row fires an AFTER DELETE trigger whose body contains
a SELECT statement that reads the same table. The table is then said to be mutating, and
Oracle does not allow the read: a runtime error is raised and the whole action is rolled
back, so the row is not deleted.
BEGIN
IF TO_CHAR(SYSDATE,'DY') = 'SUN' THEN
DBMS_OUTPUT.PUT_LINE('Today is Holiday ...');
ELSE
DBMS_OUTPUT.PUT_LINE('Welcome back ....');
END IF;
END;
/
DECLARE
N NUMBER(3);
BEGIN
SELECT COUNT(*) INTO N FROM emp;
DBMS_OUTPUT.PUT_LINE('No of Employees='||N);
END;
/
Subprograms
Subprograms are named PL/SQL blocks that can be called with a set of parameters.
PL/SQL has two types of subprograms, procedures and functions. Generally, you use a
procedure to perform an action and a function to compute a value.
Like anonymous blocks, subprograms have:
A declarative part, with declarations of types, cursors, constants, variables,
exceptions, and nested subprograms. These items are local and cease to exist
when the subprogram ends.
An executable part, with statements that assign values, control execution, and
manipulate Oracle data.
An optional exception-handling part, which deals with runtime error conditions.
Example - Simple PL/SQL Procedure
The following example shows a string-manipulation procedure that accepts both input
and output parameters, and handles potential errors:
CREATE OR REPLACE PROCEDURE double
(
original IN VARCHAR2, new_string OUT VARCHAR2
)
AS
BEGIN
new_string := original || original;
EXCEPTION
WHEN VALUE_ERROR THEN
dbms_output.put_line('Output buffer not long enough.');
END;
/
Example - Simple PL/SQL Function
The following example shows a numeric function that declares a local variable to hold
temporary results, and returns a value when finished:
CREATE OR REPLACE FUNCTION square(original NUMBER)
RETURN NUMBER
AS
original_squared NUMBER;
BEGIN
original_squared := original * original;
RETURN original_squared;
END;
/
Advantages of PL/SQL Subprograms
Subprograms let you extend the PL/SQL language. Procedures act like new statements.
Functions act like new expressions and operators.
Subprograms let you break a program down into manageable, well-defined modules.
You can use top-down design and the stepwise refinement approach to problem solving.
A procedure has two parts: the specification (spec for short) and the body. The
procedure spec begins with the keyword PROCEDURE and ends with the procedure name
or a parameter list. Parameter declarations are optional. Procedures that take no
parameters are written without parentheses.
The procedure body begins with the keyword IS (or AS) and ends with the keyword END
followed by an optional procedure name. The procedure body has three parts: a
declarative part, an executable part, and an optional exception-handling part.
The declarative part contains local declarations. The keyword DECLARE is used for
anonymous PL/SQL blocks, but not procedures. The executable part contains
statements, which are placed between the keywords BEGIN and EXCEPTION (or END). At
least one statement must appear in the executable part of a procedure. You can use the
NULL statement to define a placeholder procedure or specify that the procedure does
nothing. The exception-handling part contains exception handlers, which are placed
between the keywords EXCEPTION and END.
A procedure is called as a PL/SQL statement. For example, you might call the procedure
raise_salary as follows:
raise_salary(emp_id, amount);
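The procedure itself might look like this (a minimal sketch; the emp table and its column names are assumptions):
CREATE OR REPLACE PROCEDURE raise_salary (emp_id IN NUMBER, amount IN NUMBER) AS
BEGIN
-- Add the given amount to the salary of the given employee
UPDATE emp SET sal = sal + amount WHERE empno = emp_id;
END raise_salary;
/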
Understanding PL/SQL Functions
A function is a subprogram that computes a value. Functions and procedures are
structured alike, except that functions have a RETURN clause.
Functions have a number of optional keywords, used to declare a special class of
functions known as table functions. They are typically used for transforming large
amounts of data in data warehousing applications.
The CREATE clause lets you create standalone functions, which are stored in an Oracle
database. You can execute the CREATE FUNCTION statement interactively from SQL*Plus or
from a program using native dynamic SQL.
The AUTHID clause determines whether a stored function executes with the privileges of
its owner (the default) or current user and whether its unqualified references to schema
objects are resolved in the schema of the owner or current user. You can override the
default behavior by specifying CURRENT_USER.
The PARALLEL_ENABLE option declares that a stored function can be used safely in the
slave sessions of parallel DML evaluations. The state of a main (logon) session is never
shared with slave sessions. Each slave session has its own state, which is initialized when
the session begins. The function result should not depend on the state of session (static)
variables. Otherwise, results might vary across sessions.
The hint DETERMINISTIC helps the optimizer avoid redundant function calls. If a stored
function was called previously with the same arguments, the optimizer can elect to use
the previous result. The function result should not depend on the state of session
variables or schema objects. Otherwise, results might vary across calls. Only
DETERMINISTIC functions can be called from a function-based index or a materialized view
that has query-rewrite enabled. For more information, see Oracle Database SQL
Reference.
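For example, a function that always maps the same input to the same result can carry the hint (a sketch; the function name is an assumption):
CREATE OR REPLACE FUNCTION half_of (n NUMBER) RETURN NUMBER
DETERMINISTIC IS
BEGIN
-- Depends only on its argument, so repeated calls with the same n can be skipped
RETURN n / 2;
END half_of;
/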
The pragma AUTONOMOUS_TRANSACTION instructs the PL/SQL compiler to mark a function
as autonomous (independent). Autonomous transactions let you suspend the main
transaction, do SQL operations, commit or roll back those operations, then resume the
main transaction.
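A sketch of an autonomous logging procedure (the event_log table is an assumption):
CREATE OR REPLACE PROCEDURE log_event (msg IN VARCHAR2) AS
PRAGMA AUTONOMOUS_TRANSACTION;
BEGIN
INSERT INTO event_log VALUES (SYSDATE, msg);
COMMIT; -- commits only this insert; the caller's transaction is unaffected
END log_event;
/
A call to log_event from inside a larger transaction records the message even if the caller later rolls back.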
You cannot constrain (with NOT NULL for example) the datatype of a parameter or a
function return value. However, you can use a workaround to size-constrain them
indirectly.
Like a procedure, a function has two parts: the spec and the body. The function spec
begins with the keyword FUNCTION and ends with the RETURN clause, which specifies the
datatype of the return value. Parameter declarations are optional. Functions that take
no parameters are written without parentheses.
The function body begins with the keyword IS (or AS) and ends with the keyword END
followed by an optional function name. The function body has three parts: a declarative
part, an executable part, and an optional exception-handling part.
The declarative part contains local declarations, which are placed between the
keywords IS and BEGIN. The keyword DECLARE is not used. The executable part contains
statements, which are placed between the keywords BEGIN and EXCEPTION (or END). One
or more RETURN statements must appear in the executable part of a function. The
exception-handling part contains exception handlers, which are placed between the
keywords EXCEPTION and END.
A function is called as part of an expression:
IF sal_ok(new_sal, new_title) THEN ...
Using the RETURN Statement
The RETURN statement immediately ends the execution of a subprogram and returns
control to the caller. Execution continues with the statement following the subprogram
call. (Do not confuse the RETURN statement with the RETURN clause in a function spec,
which specifies the datatype of the return value.)
A subprogram can contain several RETURN statements. The subprogram does not have to
conclude with a RETURN statement. Executing any RETURN statement completes the
subprogram immediately.
In procedures, a RETURN statement does not return a value and so cannot contain an
expression. The statement returns control to the caller before the end of the procedure.
In functions, a RETURN statement must contain an expression, which is evaluated when
the RETURN statement is executed. The resulting value is assigned to the function
identifier, which acts like a variable of the type specified in the RETURN clause. Observe
how the function balance returns the balance of a specified bank account:
FUNCTION balance (acct_id INTEGER) RETURN REAL IS
   acct_bal REAL;
BEGIN
   SELECT bal INTO acct_bal FROM accts
      WHERE acct_no = acct_id;
   RETURN acct_bal;
END balance;
/
The following example shows that the expression in a function RETURN statement can be
arbitrarily complex:
FUNCTION compound (
   years NUMBER,
   amount NUMBER,
   rate NUMBER) RETURN NUMBER IS
BEGIN
   RETURN amount * POWER((rate / 100) + 1, years);
END compound;
/
In a function, at least one execution path must lead to a RETURN statement. Otherwise, you get a 'function returned without value' error at run time.
Declaring Nested PL/SQL Subprograms
You can declare subprograms in any PL/SQL block, subprogram, or package. The
subprograms must go at the end of the declarative section, after all other items.
You must declare a subprogram before calling it. This requirement can make it difficult
to declare several nested subprograms that call each other.
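A forward declaration, that is, a subprogram spec terminated by a semicolon with no body, works around this. The names below are illustrative:

```sql
DECLARE
   PROCEDURE ping (n INTEGER); -- forward declaration: spec only
   PROCEDURE pong (n INTEGER) IS
   BEGIN
      IF n > 0 THEN
         ping(n - 1); -- legal: ping is already declared
      END IF;
   END pong;
   PROCEDURE ping (n INTEGER) IS -- body supplied later; header must match
   BEGIN
      IF n > 0 THEN
         pong(n - 1);
      END IF;
   END ping;
BEGIN
   ping(3);
END;
/
```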
If necessary, PL/SQL converts the datatype of the actual parameter to the datatype of the formal parameter before assigning the value. For example, if you pass a number when the procedure expects a string, PL/SQL converts the parameter so that the procedure receives a string.
The actual parameter and its corresponding formal parameter must have compatible
datatypes. For instance, PL/SQL cannot convert between the DATE and REAL datatypes, or
convert a string to a number if the string contains extra characters such as dollar signs.
Example - Formal Parameters and Actual Parameters
The following procedure declares two formal parameters named emp_id and amount:
PROCEDURE raise_salary (emp_id INTEGER, amount REAL) IS
BEGIN
UPDATE emp SET sal = sal + amount WHERE empno = emp_id;
END raise_salary;
/
This procedure call specifies the actual parameters emp_num and amount:
raise_salary(emp_num, amount);
Expressions can be used as actual parameters:
raise_salary(emp_num, merit + cola);
Using Positional, Named, or Mixed Notation for Subprogram Parameters
When calling a subprogram, you can write the actual parameters using either:
Positional notation. You specify the same parameters in the same order as they
are declared in the procedure.
This notation is compact, but if you specify the parameters (especially literals) in
the wrong order, the bug can be hard to detect. You must change your code if
the procedure's parameter list changes.
Named notation. You specify the name of each parameter along with its value.
An arrow (=>) serves as the association operator. The order of the parameters is
not significant.
This notation is more verbose, but makes your code easier to read and maintain.
You can sometimes avoid changing your code if the procedure's parameter list
changes, for example if the parameters are reordered or a new optional
parameter is added. Named notation is a good practice to use for any code that
calls someone else's API, or defines an API for someone else to use.
Mixed notation. You specify the first parameters with positional notation, then
switch to named notation for the last parameters.
You can use this notation to call procedures that have some required
parameters, followed by some optional parameters.
Example - Subprogram Calls Using Positional, Named, and Mixed Notation
DECLARE
acct INTEGER := 12345;
amt REAL := 500.00;
PROCEDURE credit_acct (acct_no INTEGER, amount REAL) IS
BEGIN NULL; END;
BEGIN
-- The following calls are all equivalent.
credit_acct(acct, amt); -- positional
credit_acct(amount => amt, acct_no => acct); -- named
credit_acct(acct_no => acct, amount => amt); -- named
credit_acct(acct, amount => amt); -- mixed
END;
/
Specifying Subprogram Parameter Modes
You use parameter modes to define the behavior of formal parameters. The three
parameter modes are IN (the default), OUT, and IN OUT.
Any parameter mode can be used with any subprogram. However, avoid using the OUT and IN OUT modes with functions: having a function return multiple values is poor programming practice, and functions should be free from side effects, which change the values of variables not local to the subprogram.
Before exiting a subprogram, assign values to all OUT formal parameters. Otherwise, the
corresponding actual parameters will be null. If you exit successfully, PL/SQL assigns
values to the actual parameters. If you exit with an unhandled exception, PL/SQL does
not assign values to the actual parameters.
IN: the formal parameter acts like a constant.
OUT: the formal parameter acts like an uninitialized variable.
IN OUT: the formal parameter acts like an initialized variable.
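The three modes can be sketched in a single procedure (all names are illustrative):

```sql
DECLARE
   x INTEGER := 10;
   y INTEGER;
   z INTEGER := 5;
   PROCEDURE describe_modes (
      a IN INTEGER,     -- readable only; acts like a constant
      b OUT INTEGER,    -- starts out null; assign it before exiting
      c IN OUT INTEGER) -- readable and writable; starts with the caller's value
   IS
   BEGIN
      b := a * 2;
      c := c + a;
   END describe_modes;
BEGIN
   describe_modes(x, y, z); -- afterward, y is 20 and z is 15
END;
/
```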
Suppose you write a procedure to initialize a collection of dates:
PROCEDURE initialize (tab OUT DateTabTyp, n INTEGER) IS
BEGIN
   FOR i IN 1..n LOOP
      tab(i) := SYSDATE;
   END LOOP;
END initialize;
/
You might also write a procedure to initialize another kind of collection:
PROCEDURE initialize (tab OUT RealTabTyp, n INTEGER) IS
BEGIN
FOR i IN 1..n LOOP
tab(i) := 0.0;
END LOOP;
END initialize;
/
Because the processing in these two procedures is the same, it is logical to give them
the same name.
You can place the two overloaded initialize procedures in the same block, subprogram,
package, or object type. PL/SQL determines which procedure to call by checking their
formal parameters. In the following example, the version of initialize that PL/SQL uses
depends on whether you call the procedure with a DateTabTyp or RealTabTyp parameter:
DECLARE
TYPE DateTabTyp IS TABLE OF DATE INDEX BY BINARY_INTEGER;
TYPE RealTabTyp IS TABLE OF REAL INDEX BY BINARY_INTEGER;
hiredate_tab DateTabTyp;
comm_tab RealTabTyp;
indx BINARY_INTEGER;
PROCEDURE initialize (tab OUT DateTabTyp, n INTEGER) IS
BEGIN
NULL;
END;
PROCEDURE initialize (tab OUT RealTabTyp, n INTEGER) IS
BEGIN
NULL;
END;
BEGIN
indx := 50;
initialize(hiredate_tab, indx); -- calls first version
initialize(comm_tab, indx); -- calls second version
END;
/
The spec holds public declarations, which are visible to stored procedures and other
code outside the package. You must declare subprograms at the end of the spec after all
other items (except pragmas that name a specific function; such pragmas must follow
the function spec).
The body holds implementation details and private declarations, which are hidden from
code outside the package. Following the declarative part of the package body is the
optional initialization part, which holds statements that initialize package variables and
do any other one-time setup steps.
The AUTHID clause determines whether all the packaged subprograms execute with the
privileges of their definer (the default) or invoker, and whether their unqualified
references to schema objects are resolved in the schema of the definer or invoker.
A call spec lets you map a package subprogram to a Java method or external C function.
The call spec maps the Java or C name, parameter types, and return type to their SQL
counterparts. To learn how to write Java call specs, see Oracle Database Java
Developer's Guide. To learn how to write C call specs, see Oracle Database Application
Developer's Guide - Fundamentals.
What Goes In a PL/SQL Package?
"Get" and "Set" methods for the package variables, if you want to avoid letting
other procedures read and write them directly.
Cursor declarations with the text of SQL queries. Centralizing the query text avoids retyping the same query in multiple places with slight variations, and makes the query easier to maintain if it must change.
Declarations for exceptions. Typically, you need to be able to reference these
from different procedures, so that you can handle exceptions within called
subprograms.
Declarations for procedures and functions that call each other. You do not need
to worry about compilation order for packaged procedures and functions,
making them more convenient than standalone stored procedures and functions
when they call back and forth to each other.
Declarations for overloaded procedures and functions. You can create multiple
variations of a procedure or function, using the same names but different sets of
parameters.
Variables that you want to remain available between procedure calls in the same
session. You can treat variables in a package like global variables.
Type declarations for PL/SQL collection types. To pass a collection as a
parameter between stored procedures or functions, you must declare the type
in a package so that both the calling and called subprogram can refer to it.
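For example (a sketch; the package and type names are illustrative), the shared type can live in its own package:

```sql
CREATE OR REPLACE PACKAGE shared_types AS
   TYPE NumTabTyp IS TABLE OF NUMBER INDEX BY BINARY_INTEGER;
END shared_types;
/
```

Both the calling and the called subprogram can then declare parameters of type shared_types.NumTabTyp.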
Example of a PL/SQL Package
The example below packages a record type, a cursor, and two employment procedures.
The procedure hire_employee uses the sequence empno_seq and the function SYSDATE to
insert a new employee number and hire date.
CREATE OR REPLACE PACKAGE emp_actions AS -- spec
TYPE EmpRecTyp IS RECORD (emp_id INT, salary REAL);
CURSOR desc_salary RETURN EmpRecTyp;
PROCEDURE hire_employee (
ename VARCHAR2,
job VARCHAR2,
mgr NUMBER,
sal NUMBER,
comm NUMBER,
deptno NUMBER);
PROCEDURE fire_employee (emp_id NUMBER);
END emp_actions;
/
Information Hiding
With packages, you can specify which types, items, and subprograms are public (visible
and accessible) or private (hidden and inaccessible). For example, if a package contains
four subprograms, three might be public and one private. The package hides the
implementation of the private subprogram so that only the package (not your
application) is affected if the implementation changes. This simplifies maintenance and
enhancement. Also, by hiding implementation details from users, you protect the
integrity of the package.
Added Functionality
Packaged public variables and cursors persist for the duration of a session. They can be
shared by all subprograms that execute in the environment. They let you maintain data
across transactions without storing it in the database.
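A session-persistent counter can sketch this idea (all names are illustrative):

```sql
CREATE OR REPLACE PACKAGE call_counter AS
   PROCEDURE bump;
   FUNCTION current_count RETURN INTEGER;
END call_counter;
/
CREATE OR REPLACE PACKAGE BODY call_counter AS
   calls INTEGER := 0; -- private; keeps its value for the rest of the session
   PROCEDURE bump IS
   BEGIN
      calls := calls + 1;
   END bump;
   FUNCTION current_count RETURN INTEGER IS
   BEGIN
      RETURN calls;
   END current_count;
END call_counter;
/
```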
Better Performance
When you call a packaged subprogram for the first time, the whole package is loaded
into memory. Later calls to related subprograms in the package require no disk I/O.
Packages stop cascading dependencies and avoid unnecessary recompiling. For
example, if you change the body of a packaged function, Oracle does not recompile
other subprograms that call the function; these subprograms only depend on the
parameters and return value that are declared in the spec, so they are only recompiled
if the spec changes.
Understanding The Package Specification
The package specification contains public declarations. The declared items are
accessible from anywhere in the package and to any other subprograms in the same
schema.
Figure - Package Scope
The spec lists the package resources available to applications. All the information your
application needs to use the resources is in the spec. For example, the following
declaration shows that the function named fac takes one argument of type INTEGER and
returns a value of type INTEGER:
FUNCTION fac (n INTEGER) RETURN INTEGER; -- returns n!
That is all the information you need to call the function. You need not consider its
underlying implementation (whether it is iterative or recursive for example).
If a spec declares only types, constants, variables, exceptions, and call specs, the
package body is unnecessary. Only subprograms and cursors have an underlying
implementation. In the following example, the package needs no body because it
declares types, exceptions, and variables, but no subprograms or cursors. Such packages
let you define global variables—usable by stored procedures and functions and
triggers—that persist throughout a session.
CREATE PACKAGE trans_data AS -- bodiless package
TYPE TimeRec IS RECORD (
minutes SMALLINT,
hours SMALLINT);
TYPE TransRec IS RECORD (
category VARCHAR2(10), -- record fields of VARCHAR2 must specify a length
account INT,
amount REAL,
time_of TimeRec);
minimum_balance CONSTANT REAL := 10.00;
number_processed INT;
insufficient_funds EXCEPTION;
END trans_data;
/
Restrictions
You cannot reference remote packaged variables, either directly or indirectly. For
example, you cannot call a procedure through a database link if the procedure refers
to a packaged variable.
Inside a package, you cannot reference host variables.
Understanding the Package Body
The package body contains the implementation of every cursor and subprogram
declared in the package spec. Subprograms defined in a package body are accessible
outside the package only if their specs also appear in the package spec. If a subprogram
spec is not included in the package spec, that subprogram can only be called by other
subprograms in the same package.
To match subprogram specs and bodies, PL/SQL does a token-by-token comparison of
their headers. Except for white space, the headers must match word for word.
Otherwise, PL/SQL raises an exception, as the following example shows:
CREATE PACKAGE emp_actions AS
...
After writing the package, you can develop applications that reference its types, call its
subprograms, use its cursor, and raise its exception. When you create the package, it is
stored in an Oracle database for use by any application that has execute privilege on the
package.
CREATE PACKAGE emp_actions AS
/* Declare externally visible types, cursor, exception. */
TYPE EmpRecTyp IS RECORD (emp_id INT, salary REAL);
TYPE DeptRecTyp IS RECORD (dept_id INT, location VARCHAR2(14));
CURSOR desc_salary RETURN EmpRecTyp;
invalid_salary EXCEPTION;
   ...
END emp_actions;
/
CREATE PACKAGE BODY emp_actions AS
   number_hired INT := 0; -- visible only in this package
   ...
   FUNCTION hire_employee (
      ename VARCHAR2,
      job VARCHAR2,
      mgr REAL,
      sal REAL,
      comm REAL,
      deptno REAL) RETURN INT IS
new_empno INT;
BEGIN
SELECT empno_seq.NEXTVAL INTO new_empno FROM dual;
INSERT INTO emp VALUES (new_empno, ename, job,
mgr, SYSDATE, sal, comm, deptno);
number_hired := number_hired + 1;
RETURN new_empno;
END hire_employee;
FUNCTION nth_highest_salary (n INTEGER) RETURN EmpRecTyp IS
   emp_rec EmpRecTyp;
BEGIN
OPEN desc_salary;
FOR i IN 1..n LOOP
FETCH desc_salary INTO emp_rec;
END LOOP;
CLOSE desc_salary;
RETURN emp_rec;
END nth_highest_salary;
PROCEDURE enter_transaction (
/* Add a transaction to transactions table. */
acct INT,
kind CHAR,
amount REAL) IS
BEGIN
INSERT INTO transactions
VALUES (acct, kind, amount, 'Pending', SYSDATE);
END enter_transaction;
END emp_actions;
/
Items declared in the spec of emp_actions, such as the exception invalid_salary, are visible
outside the package. Any PL/SQL code can reference the exception invalid_salary. Such
items are called public.
To maintain items throughout a session or across transactions, place them in the
declarative part of the package body. For example, the value of number_hired is kept
between calls to hire_employee within the same session. The value is lost when the
session ends.
To make the items public, place them in the package spec. For example, the constant
minimum_balance declared in the spec of the package bank_transactions is available for
general use.
Overloading Packaged Subprograms
PL/SQL allows two or more packaged subprograms to have the same name. This option
is useful when you want a subprogram to accept similar sets of parameters that have
different datatypes. For example, the following package defines two procedures named
journalize:
CREATE PACKAGE journal_entries AS
...
PROCEDURE journalize (amount REAL, trans_date VARCHAR2);
PROCEDURE journalize (amount REAL, trans_date INT);
END journal_entries;
/
CREATE PACKAGE BODY journal_entries AS
...
PROCEDURE journalize (amount REAL, trans_date VARCHAR2) IS
BEGIN
INSERT INTO journal
VALUES (amount, TO_DATE(trans_date, 'DD-MON-YYYY'));
END journalize;
PROCEDURE journalize (amount REAL, trans_date INT) IS
BEGIN
   INSERT INTO journal
      VALUES (amount, TO_DATE(trans_date, 'J'));
END journalize;
END journal_entries;
/
The first procedure accepts trans_date as a character string, while the second procedure
accepts it as a number (the Julian day). Each procedure handles the data appropriately.
For the rules that apply to overloaded subprograms, see "Overloading Subprogram
Names".
How Package STANDARD Defines the PL/SQL Environment
A package named STANDARD defines the PL/SQL environment. The package spec globally
declares types, exceptions, and subprograms, which are available automatically to
PL/SQL programs. For example, package STANDARD declares function ABS, which returns
the absolute value of its argument, as follows:
FUNCTION ABS (n NUMBER) RETURN NUMBER;
The contents of package STANDARD are directly visible to applications. You do not need to
qualify references to its contents by prefixing the package name. For example, you
might call ABS from a database trigger, stored subprogram, Oracle tool, or 3GL
application, as follows:
abs_diff := ABS(x - y);
If you declare your own version of ABS, your local declaration overrides the global
declaration. You can still call the built-in function by specifying its full name:
abs_diff := STANDARD.ABS(x - y);
Most built-in functions are overloaded. For example, package STANDARD contains the
following declarations:
FUNCTION TO_CHAR (right DATE) RETURN VARCHAR2;
FUNCTION TO_CHAR (left NUMBER) RETURN VARCHAR2;
FUNCTION TO_CHAR (left DATE, right VARCHAR2) RETURN VARCHAR2;
FUNCTION TO_CHAR (left NUMBER, right VARCHAR2) RETURN VARCHAR2;
PL/SQL resolves a call to TO_CHAR by matching the number and datatypes of the formal
and actual parameters.
PL/SQL gives the most common internal exceptions predefined names, such as ZERO_DIVIDE and STORAGE_ERROR. The other internal exceptions can be given names.
You can define exceptions of your own in the declarative part of any PL/SQL block,
subprogram, or package. For example, you might define an exception named
insufficient_funds to flag overdrawn bank accounts. Unlike internal exceptions, user-
defined exceptions must be given names.
When an error occurs, an exception is raised. That is, normal execution stops and
control transfers to the exception-handling part of your PL/SQL block or subprogram.
Internal exceptions are raised implicitly (automatically) by the run-time system. User-
defined exceptions must be raised explicitly by RAISE statements, which can also raise
predefined exceptions.
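For example (a sketch with illustrative names), a user-defined exception is raised explicitly:

```sql
DECLARE
   insufficient_funds EXCEPTION;
   balance NUMBER := 50;
BEGIN
   IF balance < 100 THEN
      RAISE insufficient_funds; -- explicit raise
   END IF;
EXCEPTION
   WHEN insufficient_funds THEN
      NULL; -- handle the error here
END;
/
```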
To handle raised exceptions, you write separate routines called exception handlers.
After an exception handler runs, the current block stops executing and the enclosing
block resumes with the next statement. If there is no enclosing block, control returns to
the host environment.
The following example calculates a price-to-earnings ratio for a company. If the
company has zero earnings, the division operation raises the predefined exception
ZERO_DIVIDE, the execution of the block is interrupted, and control is transferred to the
exception handlers. The optional OTHERS handler catches all exceptions that the block
does not name specifically.
SET SERVEROUTPUT ON;
DECLARE
stock_price NUMBER := 9.73;
net_earnings NUMBER := 0;
pe_ratio NUMBER;
BEGIN
-- Calculation might cause division-by-zero error.
pe_ratio := stock_price / net_earnings;
dbms_output.put_line('Price/earnings ratio = ' || pe_ratio);
EXCEPTION -- exception handlers begin
   WHEN ZERO_DIVIDE THEN -- handles 'division by zero' error
      dbms_output.put_line('Company had zero earnings.');
      pe_ratio := NULL;
   WHEN OTHERS THEN -- handles all other errors
      dbms_output.put_line('Some other kind of error occurred.');
END; -- exception handlers and the block end here
/
declaring individual variables with %TYPE qualifiers, and declaring records to hold
query results with %ROWTYPE qualifiers.
Handle named exceptions whenever possible, instead of using WHEN OTHERS in
exception handlers. Learn the names and causes of the predefined exceptions. If
your database operations might cause particular ORA- errors, associate names
with these errors so you can write handlers for them. (You will learn how to do
that later in this chapter.)
Test your code with different combinations of bad data to see what potential
errors arise.
Write out debugging information in your exception handlers. You might store
such information in a separate table. If so, do it by making a call to a procedure
declared with the PRAGMA AUTONOMOUS_TRANSACTION, so that you can commit
your debugging information, even if you roll back the work that the main
procedure was doing.
Carefully consider whether each exception handler should commit the
transaction, roll it back, or let it continue. Remember, no matter how severe the
error is, you want to leave the database in a consistent state and avoid storing
any bad data.
Advantages of PL/SQL Exceptions
Using exceptions for error handling has several advantages.
With exceptions, you can reliably handle potential errors from many statements with a
single exception handler:
BEGIN
SELECT ...
SELECT ...
procedure_that_performs_select();
...
EXCEPTION
WHEN NO_DATA_FOUND THEN -- catches all 'no data found' errors
Instead of checking for an error at every point it might occur, just add an exception
handler to your PL/SQL block. If the exception is ever raised in that block (or any sub-
block), you can be sure it will be handled.
Sometimes the error is not immediately obvious, and could not be detected until later
when you perform calculations using bad data. Again, a single exception handler can
trap all division-by-zero errors, bad array subscripts, and so on.
If you need to check for errors at a specific spot, you can enclose a single statement or a
group of statements inside its own BEGIN-END block with its own exception handler.
You can make the checking as general or as precise as you like.
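For instance (table and column names are illustrative), a single risky statement can get its own handler:

```sql
DECLARE
   emp_sal NUMBER;
BEGIN
   BEGIN -- sub-block isolates one statement
      SELECT sal INTO emp_sal FROM emp WHERE empno = 7499;
   EXCEPTION
      WHEN NO_DATA_FOUND THEN
         emp_sal := 0; -- handled locally; the outer block continues
   END;
   NULL; -- processing continues here even if the SELECT found no rows
END;
/
```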
Isolating error-handling routines makes the rest of the program easier to read and
understand.
Summary of Predefined PL/SQL Exceptions
An internal exception is raised automatically if your PL/SQL program violates an Oracle
rule or exceeds a system-dependent limit. PL/SQL predefines some common Oracle
errors as exceptions. For example, PL/SQL raises the predefined exception
NO_DATA_FOUND if a SELECT INTO statement returns no rows.
You can use the pragma EXCEPTION_INIT to associate exception names with other Oracle
error codes that you can anticipate. To handle unexpected Oracle errors, you can use
the OTHERS handler. Within this handler, you can call the functions SQLCODE and SQLERRM
to return the Oracle error code and message text. Once you know the error code, you
can use it with pragma EXCEPTION_INIT and write a handler specifically for that error.
PL/SQL declares predefined exceptions globally in package STANDARD. You need not
declare them yourself. You can write handlers for predefined exceptions using the
names in the following list:
DUP_VAL_ON_INDEX: corresponds to Oracle error ORA-00001 (SQLCODE -1).
ROWTYPE_MISMATCH: the host cursor variable and PL/SQL cursor variable involved in
an assignment have incompatible return types. For example,
when an open host cursor variable is passed to a stored
subprogram, the return types of the actual and formal
parameters must be compatible.
DECLARE
past_due EXCEPTION;
Exception and variable declarations are similar. But remember, an exception is an error
condition, not a data item. Unlike variables, exceptions cannot appear in assignment
statements or SQL statements. However, the same scope rules apply to variables and
exceptions.
Scope Rules for PL/SQL Exceptions
You cannot declare an exception twice in the same block. You can, however, declare the
same exception in two different blocks.
Exceptions declared in a block are considered local to that block and global to all its sub-
blocks. Because a block can reference only local or global exceptions, enclosing blocks
cannot reference exceptions declared in a sub-block.
If you redeclare a global exception in a sub-block, the local declaration prevails. The sub-
block cannot reference the global exception, unless the exception is declared in a
labeled block and you qualify its name with the block label:
block_label.exception_name
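For example (a sketch), a labeled outer block lets the sub-block reach the global exception despite the redeclaration:

```sql
<<outer_block>>
DECLARE
   past_due EXCEPTION; -- global to the sub-block
BEGIN
   DECLARE
      past_due EXCEPTION; -- local declaration prevails in the sub-block
   BEGIN
      RAISE outer_block.past_due; -- qualified name selects the global exception
   END;
EXCEPTION
   WHEN past_due THEN -- catches outer_block.past_due
      NULL;
END;
/
```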
Associating a PL/SQL Exception with an Oracle Error: Pragma EXCEPTION_INIT
To name an internal exception that has no predefined name, use the pragma EXCEPTION_INIT:
PRAGMA EXCEPTION_INIT(exception_name, -Oracle_error_number);
where exception_name is the name of a previously declared exception and the number is a
negative value corresponding to an ORA- error number. The pragma must appear
somewhere after the exception declaration in the same declarative section, as shown in
the following example:
DECLARE
deadlock_detected EXCEPTION;
PRAGMA EXCEPTION_INIT(deadlock_detected, -60);
BEGIN
null; -- Some operation that causes an ORA-00060 error
EXCEPTION
WHEN deadlock_detected THEN
null; -- handle the error
END;
/
Defining Your Own Error Messages: Procedure RAISE_APPLICATION_ERROR
The procedure RAISE_APPLICATION_ERROR lets you issue user-defined ORA- error messages
from stored subprograms. That way, you can report errors to your application and avoid
returning unhandled exceptions.
To call RAISE_APPLICATION_ERROR, use the syntax
raise_application_error(error_number, message[, {TRUE | FALSE}]);
where error_number is a negative integer in the range -20000 .. -20999 and message is a
character string up to 2048 bytes long. If the optional third parameter is TRUE, the error
is placed on the stack of previous errors. If the parameter is FALSE (the default), the error
replaces all previous errors. RAISE_APPLICATION_ERROR is part of package DBMS_STANDARD,
and as with package STANDARD, you do not need to qualify references to it.
An application can call raise_application_error only from an executing stored subprogram
(or method). When called, raise_application_error ends the subprogram and returns a user-
defined error number and message to the application. The error number and message
can be trapped like any Oracle error.
In the following example, you call raise_application_error if an error condition of your
choosing happens (in this case, if the current schema owns less than 1000 tables):
DECLARE
num_tables NUMBER;
BEGIN
SELECT COUNT(*) INTO num_tables FROM USER_TABLES;
IF num_tables < 1000 THEN
/* Issue your own error code (ORA-20101) with your own error message. */
raise_application_error(-20101, 'Expecting at least 1000 tables');
ELSE
NULL; -- Do the rest of the processing (for the non-error case).
END IF;
END;
/
The calling application gets a PL/SQL exception, which it can process using the error-
reporting functions SQLCODE and SQLERRM in an OTHERS handler. Also, it can use the
pragma EXCEPTION_INIT to map specific error numbers returned by raise_application_error to
exceptions of its own, as the following Pro*C example shows:
EXEC SQL EXECUTE
/* Execute embedded PL/SQL block using host
variables my_emp_id and my_amount, which were
assigned values in the host environment. */
DECLARE
null_salary EXCEPTION;
/* Map error number returned by raise_application_error
to user-defined exception. */
PRAGMA EXCEPTION_INIT(null_salary, -20101);
BEGIN
raise_salary(:my_emp_id, :my_amount);
EXCEPTION
WHEN null_salary THEN
INSERT INTO emp_audit VALUES (:my_emp_id, ...);
END;
END-EXEC;
This technique allows the calling application to handle error conditions in specific
exception handlers.
Redeclaring Predefined Exceptions
Remember, PL/SQL declares predefined exceptions globally in package STANDARD, so you
need not declare them yourself. Redeclaring predefined exceptions is error prone
because your local declaration overrides the global declaration. For example, if you
declare an exception named invalid_number and then PL/SQL raises the predefined
exception INVALID_NUMBER internally, a handler written for INVALID_NUMBER will not catch
the internal exception. In such cases, you must use dot notation to specify the
predefined exception, as follows:
EXCEPTION
WHEN invalid_number OR STANDARD.INVALID_NUMBER THEN
ROLLBACK;
END;
/
How PL/SQL Exceptions Propagate
When an exception is raised, if PL/SQL cannot find a handler for it in the current block or
subprogram, the exception propagates. That is, the exception reproduces itself in
successive enclosing blocks until a handler is found or there are no more blocks to
search. If no handler is found, PL/SQL returns an unhandled exception error to the host
environment.
Exceptions cannot propagate across remote procedure calls done through database
links. A PL/SQL block cannot catch an exception raised by a remote subprogram. For a
workaround, see "Defining Your Own Error Messages: Procedure
RAISE_APPLICATION_ERROR".
Figure 10-1, Figure 10-2, and Figure 10-3 illustrate the basic propagation rules.
Figure - Propagation Rules: Example 1
An exception can propagate beyond its scope, that is, beyond the block in which it was
declared. Consider the following example:
BEGIN
DECLARE ---------- sub-block begins
past_due EXCEPTION;
To catch raised exceptions, you write exception handlers. Each handler consists of a
WHEN clause, which specifies an exception, followed by a sequence of statements to be
executed when that exception is raised. These statements complete execution of the
block or subprogram; control does not return to where the exception was raised. In
other words, you cannot resume processing where you left off.
The optional OTHERS exception handler, which is always the last handler in a block or
subprogram, acts as the handler for all exceptions not named specifically. Thus, a block
or subprogram can have only one OTHERS handler.
As the following example shows, use of the OTHERS handler guarantees that no exception
will go unhandled:
EXCEPTION
WHEN ... THEN
-- handle the error
WHEN ... THEN
-- handle the error
WHEN OTHERS THEN
-- handle all other errors
END;
If you want two or more exceptions to execute the same sequence of statements, list
the exception names in the WHEN clause, separating them by the keyword OR, as follows:
EXCEPTION
WHEN over_limit OR under_limit OR VALUE_ERROR THEN
-- handle the error
If any of the exceptions in the list is raised, the associated sequence of statements is
executed. The keyword OTHERS cannot appear in the list of exception names; it must
appear by itself. You can have any number of exception handlers, and each handler can
associate a list of exceptions with a sequence of statements. However, an exception
name can appear only once in the exception-handling part of a PL/SQL block or
subprogram.
The usual scoping rules for PL/SQL variables apply, so you can reference local and global
variables in an exception handler. However, when an exception is raised inside a cursor
FOR loop, the cursor is closed implicitly before the handler is invoked. Therefore, the
values of explicit cursor attributes are not available in the handler.
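For example (cursor and table names are illustrative):

```sql
DECLARE
   CURSOR c1 IS SELECT sal FROM emp;
BEGIN
   FOR emp_rec IN c1 LOOP
      IF emp_rec.sal < 0 THEN
         RAISE VALUE_ERROR; -- c1 is closed implicitly before the handler runs
      END IF;
   END LOOP;
EXCEPTION
   WHEN VALUE_ERROR THEN
      NULL; -- c1%ROWCOUNT cannot be used here; the cursor is already closed
END;
/
```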
Handling Exceptions Raised in Declarations
Exceptions can be raised in declarations by faulty initialization expressions. For example,
the following declaration raises an exception because the constant credit_limit cannot
store numbers larger than 999:
DECLARE
credit_limit CONSTANT NUMBER(3) := 5000; -- raises an exception
BEGIN
NULL;
EXCEPTION
WHEN OTHERS THEN
-- Cannot catch the exception. This handler is never called.
dbms_output.put_line('Can''t handle an exception in a declaration.');
END;
/
Handlers in the current block cannot catch the raised exception because an exception
raised in a declaration propagates immediately to the enclosing block.
Handling Exceptions Raised in Handlers
When an exception occurs within an exception handler, that same handler cannot catch
the exception. An exception raised inside a handler propagates immediately to the
enclosing block, which is searched to find a handler for this new exception. From there
on, the exception propagates normally. For example:
EXCEPTION
WHEN INVALID_NUMBER THEN
INSERT INTO ... -- might raise DUP_VAL_ON_INDEX
WHEN DUP_VAL_ON_INDEX THEN ... -- cannot catch the exception
END;
Branching to or from an Exception Handler
A GOTO statement can branch from an exception handler into an enclosing block.
A GOTO statement cannot branch into an exception handler, or from an exception
handler into the current block.
Retrieving the Error Code and Error Message: SQLCODE and SQLERRM
In an exception handler, you can use the built-in functions SQLCODE and SQLERRM to find
out which error occurred and to get the associated error message. For internal
exceptions, SQLCODE returns the number of the Oracle error. The number that SQLCODE
returns is negative unless the Oracle error is no data found, in which case SQLCODE
returns +100. SQLERRM returns the corresponding error message. The message begins
with the Oracle error code.
For user-defined exceptions, SQLCODE returns +1 and SQLERRM returns the message: User-
Defined Exception.
unless you used the pragma EXCEPTION_INIT to associate the exception name with an
Oracle error number, in which case SQLCODE returns that error number and SQLERRM
returns the corresponding error message. The maximum length of an Oracle error
message is 512 characters including the error code, nested messages, and message
inserts such as table and column names.
If no exception has been raised, SQLCODE returns zero and SQLERRM returns the message:
ORA-0000: normal, successful completion.
You can pass an error number to SQLERRM, in which case SQLERRM returns the message
associated with that error number. Make sure you pass negative error numbers to
SQLERRM.
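For example (a sketch):

```sql
DECLARE
   msg VARCHAR2(512);
BEGIN
   msg := SQLERRM(-60); -- pass the negative Oracle error number
   dbms_output.put_line(msg); -- prints the text of ORA-00060
END;
/
```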
For example, in the Oracle Precompilers environment, any database changes made by a
failed SQL statement or PL/SQL block are rolled back.
Unhandled exceptions can also affect subprograms. If you exit a subprogram
successfully, PL/SQL assigns values to OUT parameters. However, if you exit with an
unhandled exception, PL/SQL does not assign values to OUT parameters (unless they are
NOCOPY parameters). Also, if a stored subprogram fails with an unhandled exception,
PL/SQL does not roll back database work done by the subprogram.
You can avoid unhandled exceptions by coding an OTHERS handler at the topmost level of
every PL/SQL program.