business rules.

Objective of ST: to test the application for the correctness of how it has been built and how its interface works.

Reference document for UAT: BRD
Reference document for ST: SRS or FS
Environment for UAT: simulated live environment (production site)
Environment for ST: test environment (developer's site)
Data used for UAT: simulated live data
Data used for ST: dummy data
ROWNUM
Conditions testing for ROWNUM values greater than a positive integer are always false. For example, this query returns no rows:
SELECT * FROM employees WHERE ROWNUM > 1;
The first row fetched is assigned a ROWNUM of 1 and makes the condition false. The second row to be fetched is now the first row and is also assigned a ROWNUM of 1 and makes the condition false. All rows subsequently fail to satisfy the condition, so no rows are returned.
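Because of this behavior, the usual way to use ROWNUM for a top-N query is to apply it outside an ordered subquery. A minimal sketch, assuming the employees table has a salary column:

```sql
-- ROWNUM is assigned to rows as they are fetched from the already
-- ordered subquery, so the <= filter keeps the first five rows.
SELECT *
  FROM (SELECT * FROM employees ORDER BY salary DESC)
 WHERE ROWNUM <= 5;
```

Note that ORDER BY must be in the subquery: applying ROWNUM and ORDER BY in the same query block would number the rows before sorting them.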
CREATE OR REPLACE PROCEDURE skeleton
IS
BEGIN
   NULLL;
END;

Save your file as skeleton.sql. From SQL*Plus, open your skeleton.sql file. SQL*Plus loads the contents of skeleton.sql into its buffer (memory area), lists the buffer lines (1 through 5*), and presents the SQL*Plus command prompt:

SQL>
Execute the contents of the SQL*Plus buffer. Type a forward slash and press <enter>, like this:

SQL> /

Your procedure is compiled and saved on the database. However, SQL*Plus warns us of compilation errors:

Warning: Procedure created with compilation errors.

Let's see the compilation errors. First, we need to run two SET commands to ensure the SQL*Plus buffer does not overflow. At the SQL*Plus command prompt, type:

SQL> SET ARRAYSIZE 1
SQL> SET MAXDATA 60000

Again, SQL*Plus remains secretive about the result. Let's see the errors. At the SQL*Plus command prompt, type:

SQL> SHOW ERRORS PROCEDURE skeleton

You should see the compilation errors:

LINE/COL ERROR
-------- ---------------------------------------------
4/3      PLS-00201: identifier 'NULLL' must be declared
4/3      PL/SQL: Statement ignored

Oracle doesn't recognize the NULLL statement with the three "l"s. But Oracle won't hold it against you. Change your procedure declaration in Notepad by inserting the proper NULL statement, and follow the steps to create your procedure again on the Oracle database.
What if you want to completely remove a procedure from your database? That's what we'll cover next.
Unix commands
1 Log In Session
1.1 Log In
Enter your username at the login: prompt. Be careful - Unix is case sensitive. Enter your password at the password: prompt.
1.2 Change Password
passwd          change your password

1.3 Log Out
logout or exit  end your login session
2 File System
2.1 Create a File
cat > file   enter text, end with ctrl-D
vi file      edit file using the vi editor
chmod -R mode dir   change mode for all files in dir (recursive)

Mode settings:
u   user (owner)
g   group
o   other
+   add permission
-   remove permission
r   read
w   write
x   execute
Example: chmod go+rwx public.html adds read, write, and execute permissions for group and other on public.html.
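The example above can be sketched end to end; the filename is illustrative, and the file starts from owner-only permissions so the change is visible:

```shell
# Create a file, restrict it to the owner, then add read and
# execute for group and other (as in the chmod go+rx example).
touch public.html
chmod 600 public.html        # mode is now -rw-------
chmod go+rx public.html      # mode is now -rw-r-xr-x
ls -l public.html            # long listing shows the new mode
```

The first column of ls -l shows the resulting mode string.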
2.6 List Files and Directories
ls      list contents of directory
ls -a   include files beginning with "." (dot files)
ls -l   list contents in long format (show modes)
2.12 Pathnames
simple:    one filename or directory name, for accessing a local file or directory. Example: foo.c
absolute:  list of directory names from the root directory to the desired file or directory name, each separated by /. Example: /src/shared
relative:  list of directory names from the working directory to the desired file or directory name, each separated by /. Example: Mail/inbox/23
2.13 Directory Abbreviations
~           your home (login) directory
~username   another user's home directory
.           working (current) directory
..          parent of working directory
../..       parent of parent directory
3.0 Commands
3.1 Date
date   display date and time
3.3 Printing
lpr file             print file on default printer
lpr -Pprinter file   print file on printer
lpr -c# file         print # copies of file
lpr -d file          interpret file as a DVI file
lpq                  show print queue (-Pprinter also valid)
lprm #               remove print request # (listed with lpq)
3.4 Redirection
command > file    direct output of command to file instead of to standard output (screen), replacing current contents of file
command >> file   as above, except output is appended to the current contents of file
command < file    command receives input from file instead of from standard input (keyboard)
cmd1 | cmd2       "pipe" output of cmd1 to input of cmd2
script file       log everything displayed on the terminal to file; end with exit
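Each redirection form in the table can be tried on a throwaway file; the filename here is illustrative:

```shell
echo "first"  > notes.txt     # create/replace the file's contents
echo "second" >> notes.txt    # append to the existing contents
wc -l < notes.txt             # read standard input from the file
cat notes.txt | grep second   # pipe one command's output to another
```

After the first two lines, notes.txt holds two lines; wc -l reports 2, and the pipe prints only the line containing "second".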
4 Search Files
grep string filelist      show lines containing string in any file in filelist
grep -v string filelist   show lines not containing string
grep -i string filelist   show lines containing string, ignoring case
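The three grep forms above can be demonstrated against a small sample file (the filename and contents are illustrative):

```shell
printf 'apple\nBanana\ncherry\n' > fruits.txt
grep an fruits.txt         # lines containing "an" (matches Banana)
grep -v an fruits.txt      # lines NOT containing "an"
grep -i banana fruits.txt  # case-insensitive, so Banana matches
```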
5 Information on Users
finger user (or finger user@machine)   get information on a user
finger @machine                        list users on machine
who                                    list current users
6 Timesavers
6.1 Aliases
alias string command   abbreviate command to string
On-line Documentation
man command-name   display on-line manual pages
man -k string      list one-line summaries of manual pages containing string
DATA WAREHOUSE TESTING IS DIFFERENT
Data warehouse population is done mostly through batch runs, so the testing differs from what is done in transaction systems. Unlike a typical transaction system, data warehouse testing differs on the following counts:
User-Triggered vs. System triggered
Most production/source-system testing covers the processing of individual transactions, driven by user input (application forms, servicing requests). Very few test cycles cover system-triggered scenarios (such as billing or valuation). In a data warehouse, most of the testing is system-triggered: the extraction, transformation, and loading (ETL) runs, the view-refresh scripts, and so on, as driven by the ETL scripts.
Therefore data warehouse testing is typically divided into two parts: 'back-end' testing, where the source-system data is compared to the end-result data in the loaded area, and 'front-end' testing, where users check the data by comparing their MIS with the data displayed by end-user tools such as OLAP.
Batch vs. online gratification
This is something that makes it a challenge to retain users' interest. A transaction system provides instant, or at least overnight, gratification: a user enters a transaction, and it is processed either online or at most via an overnight batch. In a data warehouse, most of the action happens in the back-end, and users have to trace individual transactions through to the MIS and the views produced by the OLAP tools. It is the same challenge you face when asking users to test the mammoth month-end reports and financial statements churned out by transaction systems.
Volume of Test Data
The test data in a transaction system is a very small sample of the overall production data. Typically, to keep matters simple, we include just enough test cases to comprehensively cover all possible test scenarios within a limited set of test data. A data warehouse typically needs large test data, as one tries to fill up the maximum possible combinations and permutations of dimensions and facts. For example, if you are testing the location dimension, you would like the location-wise sales revenue report to show revenue figures for most of the 100 cities and the 44 states. This means you need thousands of sales transactions at the sales-office level (assuming that sales office is the lowest level of granularity for the location dimension).
Possible scenarios/ Test Cases
If a transaction system has, say, a hundred different scenarios, the valid and possible combinations of those scenarios are not unlimited. In a data warehouse, however, the permutations and combinations one can possibly test are virtually unlimited, because the core objective of a data warehouse is to allow all possible views of the data. In other words, 'you can never fully test a data warehouse.' One therefore has to be creative in designing the test scenarios to gain a high level of confidence.
Test Data Preparation
This is linked to the points above on possible test scenarios and volume of data. Given that a data warehouse needs a lot of both, the effort required to prepare test data is much greater.
Programming for testing challenge
In transaction systems, users/business analysts typically test the output of the system. In a data warehouse, because most of the action happens at the back-end, most of the data-quality testing and ETL testing is done by running separate stand-alone scripts. These scripts compare pre-transformation to post-transformation data (say, a comparison of aggregates) and throw out the pilferages. Users' roles come into play when their help is needed to analyze the discrepancies (if the designers or business analysts are not able to figure them out).
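The kind of stand-alone reconciliation script described above can be sketched in a few lines. This is a minimal illustration; the field name, sample rows, and choice of aggregates are hypothetical:

```python
# Sketch of a pre- vs post-transformation reconciliation script:
# compare row counts and a summed measure between the source
# extract and the loaded target, and report any pilferage.

def aggregates(rows):
    """Row count and total of the (hypothetical) 'amount' field."""
    return len(rows), sum(r["amount"] for r in rows)

def reconcile(source_rows, target_rows):
    """Return a list of mismatch descriptions (empty list = clean load)."""
    issues = []
    src_count, src_sum = aggregates(source_rows)
    tgt_count, tgt_sum = aggregates(target_rows)
    if src_count != tgt_count:
        issues.append(f"row count: source={src_count} target={tgt_count}")
    if src_sum != tgt_sum:
        issues.append(f"amount total: source={src_sum} target={tgt_sum}")
    return issues

source = [{"amount": 100}, {"amount": 250}, {"amount": 50}]
target = [{"amount": 100}, {"amount": 250}]   # one row lost in the ETL run
print(reconcile(source, target))
```

A real script would read both sides from the source extract and the loaded tables, and would typically compare aggregates per dimension key rather than grand totals.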
SQL*Loader
SQL*Loader is a high-speed data loading utility that loads data from external files into tables in an Oracle database. SQL*Loader accepts input data in a variety of formats, can perform filtering, and can load data into multiple Oracle database tables during the same load session. SQL*Loader provides three methods for loading data: Conventional Path Load, Direct Path Load, and External Table Load.
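A SQL*Loader run is driven by a control file that describes the input format and the target table. A minimal sketch for a conventional path load; the table, column, and file names are illustrative:

```
-- employees.ctl: load comma-separated records into EMPLOYEES
LOAD DATA
INFILE 'employees.dat'
INTO TABLE employees
FIELDS TERMINATED BY ','
(employee_id, first_name, last_name, salary)
```

The load is then invoked from the command line with the sqlldr utility, passing this control file via the control= parameter.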
External Tables
A feature has been added to external tables that allows users to preprocess input data before it is sent to the access driver. The ability to manipulate input data with a preprocessor program results in additional loadable data formats, which greatly enhances the flexibility and processing power of external tables.
The types of preprocessor programs that can be used are versatile, ranging from system commands to user-generated binaries (for example, a C program) to user-supplied shell scripts. Because the user supplies the program that preprocesses the data, it can be tailored to the user's specific needs. This means that the number of loadable formats is restricted only by the ability to manipulate the original data set.
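As a sketch, the preprocessor is named in the access parameters via the PREPROCESSOR clause. The table, column, file names, and the data_dir/exec_dir directory objects below are illustrative and assumed to be created beforehand:

```sql
-- External table whose gzip-compressed input is decompressed on
-- the fly by zcat before the ORACLE_LOADER access driver sees it.
CREATE TABLE sales_ext (
  sale_id  NUMBER,
  amount   NUMBER
)
ORGANIZATION EXTERNAL (
  TYPE ORACLE_LOADER
  DEFAULT DIRECTORY data_dir
  ACCESS PARAMETERS (
    RECORDS DELIMITED BY NEWLINE
    PREPROCESSOR exec_dir:'zcat'
    FIELDS TERMINATED BY ','
  )
  LOCATION ('sales.csv.gz')
);
```

The preprocessor executable must reside in a directory object (here exec_dir) on which the user has EXECUTE privilege.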