TBD:
Director,
Information Management Systems & Services
Information Management Systems & Services
Software Development Testing Guide
Table of Contents
INTRODUCTION
  PHILOSOPHY OF TESTING
  OVERALL GOALS OF THE TESTING EFFORT
  TESTING ASSUMPTIONS
  TESTING ROLES
TESTING STRATEGY (METHODOLOGY)
  INTRODUCTION
  STAGES OF THE TESTING LIFE CYCLE
  TEST PLAN
  SAMPLE TEST STRATEGY (ORACLE UPGRADE)
  SAMPLE TEST STRATEGY (DATA WAREHOUSE)
TESTING PROCESS AND PROCEDURES
  INTRODUCTION
  CREATE TEMPLATES
  CREATE SCENARIOS (TEST CASES)
  CREATE TEST DATA
  CREATE SCRIPTS
  EXECUTE SCRIPTS
  DOCUMENT RESULTS/CAPTURE BUGS
  ENTERING AND TRACKING BUGS IN REMEDY
  LOGGING TARS (ORACLE ONLY)
  CODE MIGRATION AND PATCHING BUGS
GENERAL INFORMATION
  ENVIRONMENT
  COMMUNICATION
  MEETINGS
  TESTING ROOM/TEST BED
  STATUS REPORTING
  STANDARD HEADER/FOOTER
  LOCATIONS OF USEFUL RELATED DOCUMENTATION
APPENDIX
  FILE NAMING STANDARDS AND STORAGE LOCATIONS (EXAMPLE)
  IMSS SYSTEMS AND MODULES (ACRONYMS)
INTRODUCTION
The purpose of this document is to provide guidance to software testers involved in IMSS software
development projects, whether new implementations or upgrades. Because this document is intended to
be general in nature, project-specific information will be provided elsewhere, although some
project-specific examples may be used. The goal of this document is to give those involved an
understanding, and a set of tools, to make the software testing effort more effective and consistent. This
document does not cover every aspect of testing, but it addresses the topics in the sections that follow.
PHILOSOPHY OF TESTING
Why is testing necessary? Testing minimizes the number of bugs that reach production. This applies to
both purchased software and software developed in-house. Because purchased software is not always
delivered meeting customer requirements, customizations, interfaces, and other software modifications
must be tested. Testing is needed to find faults (bugs) in the software, to ensure user requirements are
being met, and to deliver a quality product to the customer.
The success of the testing effort and ultimately the success of the software development project depend on
several factors. Strategy, goals, assumptions, and roles are important in this effort and are described
below.
7. Validate what worked by testing and documenting. As new problems are found in the system, it is
extremely important to determine whether they are in fact new. Documenting both the successes and
failures of testing makes it easier to determine what has changed since a test was last executed.
Documentation requirements are discussed in a later section.
TESTING ASSUMPTIONS
There are assumptions that underlie our overall approach to testing:
1. User requirements are known and the design is stable. Testing should be performed against the user
requirements, and test cases are identified from those requirements. The test case scenarios shall be
written and scripted before testing begins. Changes to the software design once testing has begun can
cause delays and ultimately place the project at risk.
2. Most testing will be scripted. This helps to avoid difficulties in proving what worked and what did not
work, and also provides a record of what was tested. While some ‘ad-hoc’ testing is anticipated, it
should be done only in specific coordination with the Testing Manager.
3. Testing could occur in a variety of settings. Testing may occur in a controlled test environment
(preferable) or at a user’s personal workstation. Because coordination and the timely execution of test
scenarios are more difficult outside a controlled test area, it becomes extremely important to identify
test cases in advance, write the scenarios, enter them onto the test script, execute them, and
communicate hand-offs to others affected in a timely manner.
4. Tests will be executed using a variety of client computer configurations. While no specific effort will
be made to try every scenario or function on every combination of browser, operating system, and
platform, using a variety of machines reflecting typical configurations should help identify any
problems related to client computer configuration. The recommended minimum system configuration
should be tested, as well as any other advertised configuration. Contact the Testing Manager for test
bed access and availability.
5. Development staff shall be available to assist testers with data validation. Because accurate data is a
key success factor, validating test scenarios is extremely important. Some validation may require back-
end querying. Should this be required, IMSS developers will run the queries and be available to assist
with validating scenario results.
6. Functional testers will be available. Experienced functional system users will be available for testing.
Preferably the same set of users will be available throughout the testing effort.
7. Whenever possible, automated regression testing will be used. Tools exist that allow automation of
certain portions of the test execution process. If possible, such tools will be used during the testing
effort.
8. The testing instance shall remain controlled. There will be a high degree of control in changes to the
test instance. Patches and migrations shall be coordinated with the Testing Manager. Design changes
shall be communicated before they are applied.
9. There will be a high degree of control in patches and migrations. A good testing process requires an
understanding of the changes in the testing environment over time. Two key items for our
environment are the application of vendor patches and custom development migrations. We will
coordinate all patch applications and migrations in order to understand their potential impact.
TESTING ROLES
It is extremely important for the success of a project to have participation from several groups of
people. A high level of communication between these groups will increase the success of the test effort.
Below is a list of roles and their descriptions:
Testing Manager/Coordinator –
- Directs the testing effort
- Coordinates the development and execution of the testing methodology
- Coordinates the development of a testing guide document
- Determines testing tasks, coordinates resources, and establishes the testing schedule
Internal Testers –
- Typically IMSS systems analysts and high-level functional testers
- Support testing and issue resolution
- Identify and document test cases, testing scenarios or business processes, and prepare test scripts
- Execute tests and document test results
Functional Testers –
- Super users from organizations outside of IMSS (typically orgs within VP Business & Finance and
Student Affairs)
- Work closely with IMSS systems analysts
- Support testing and issue resolution
- Identify and document test cases, testing scenarios or business processes, and prepare test scripts
- Facilitate acceptance testing
- Execute tests and document test results
- Could identify and document testing scenarios or business processes, and prepare test scripts
- Facilitate acceptance testing
- Document test results
IMSS Developers –
- Support testing, testers, and issue resolution
- Document test results
TESTING STRATEGY (METHODOLOGY)
INTRODUCTION
A test strategy describes the overall approach, objectives, and direction of the testing effort. The purpose
of a testing strategy or methodology is to limit risk and to ultimately deliver the best possible software to
the customer. The testing strategy chosen for a particular application will vary depending on the software,
its amount of use, and its particular goals. For instance, the testing strategy for a transactional system like
Oracle will be very different from the strategy developed for testing an analytical tool like the Data
Warehouse. Likewise, a campus-wide purchasing system versus a limited-user tool for Housing will
entail much different test strategies. Because some of these examples have higher exposure, they also
carry higher risk.
Unit Test Phase - The purpose of this test phase is to verify and validate that independent modules
function properly. Unit testing is completed by the developers and must be finished before subsequent
phases can begin. The Testing Manager is not normally involved in this phase.
CRP Phase (Conference Room Pilot - Optional). The purpose of this phase is to verify proof-of-concept.
A CRP is generally needed for new, large, and unproven projects.
- Assumption – Test instance is ready
- Assumption – Metadata is inserted into test instance
- Assumption – Unit testing and mock-up is complete
- Assumption – Test scenarios have been identified (scripted or ad hoc)
- Task – Identify CRP participants
- Task – Determine and finalize CRP logistics
- Task – Set expectations
- Task – Begin CRP
- Task – Gather and document feedback
- Task – End CRP
- Task – Obtain phase completion approval/sign-off
- Task – Gather/share/integrate lessons learned; incorporate necessary changes
- Task – Tune/revise/re-approve test plan
Integration Test Phase - The purpose of this test phase is to verify and validate that all modules are
interfaced and work together.
- Assumption – Requirements are frozen and design is determined
- Assumption – Application is ready for integration testing
- Assumption – Metadata has been populated into test instance tables
- Assumption – Unit testing is complete
- Task – Test system and document using test scripts
- Task – Test interfaces
- Task – Identify and report bugs
- Task – Retest fixed bugs/regression test
- Task – Test security
- Task – Test browsers/platforms/operating systems
- Task – Obtain phase completion approval/sign-off
- Task – Gather/share/integrate lessons learned
- Task – Tune/revise/re-approve test plan
System Test Phase - The purpose of this test phase is to verify and validate that the system works as if it
were production.
- Assumption – Metadata has been populated into test instance
User Acceptance Phase - The purpose of this test phase is to verify and validate that the system works, by
and for the end users, as if it were production.
- Assumption – Show-stoppers and most high-level bugs have been fixed, or work-arounds have
been identified and approved
- Assumption – All other phases have been signed off
- Assumption – Application is ready for user acceptance testing
- Assumption – Metadata has been populated into test instance tables
- Task – Train end-user testers
- Task – Populate and approve test scripts
- Task – Test system and document using test scripts
- Task – Obtain phase completion approval/sign-off
- Task – Gather/share/integrate lessons learned
TEST PLAN
A test plan should be created for all significant projects. The test plan documents the tasks that will be
performed, the sequence of testing, the schedule, and who is responsible for completing each task. It is
also a reflection of the strategy chosen for the testing effort. This plan should be linked to the overall
project plan. MS Project is often used to create a test plan.
Dry Run
Before formal testing begins, a dry-run upgrade may be performed for the development and infrastructure
teams. During this time several tasks may be performed, including validation of system usability, testing
of third-party systems, testing database compatibilities (links) for Exeter and FAMIS, code remediation,
and identification of new functionality. Button and form testing can be performed during the dry-run
cycle in order to minimize impact to the integration testing phase of cycle one. Testing and fixing issues
early should help expedite the formal testing process.
- Goals
o Confirm the basic application works
o Validate data integrity
o Ensure third-party systems are compatible
o Ensure database compatibilities (links) for Exeter and FAMIS
o Remediate and migrate code, including interfaces
o Identify new functionality
o Test buttons and forms
o Report bugs encountered
Cycle 1
The first major testing cycle will include two phases after the application has been validated: integration
testing and system testing. The goals of integration testing are to test existing functionality, tweaks and
customizations (interfaces), address open help desk tickets that the upgrade may resolve, test custom and
standard reports, and test new functionality.
The second phase will be system testing. The purpose of this phase is to validate and verify that the
system works as if it were production. This phase will also focus on finding and reducing bugs found
during earlier phases, before go-live. It is expected that only IMSS and functional analysts will
participate in cycle one testing. Remember, these tasks can change or move to other phases at any time.
Cycle 2
Cycle two will repeat integration and system test phases, but at a much quicker pace and with additional
goals and testers. Moreover, an abbreviated application validation should be performed to ensure the
application works before users begin testing. It is expected that most bugs will have been resolved in
cycle one.
o Test Tweaks/Customizations
o Regression test fixed bugs
o Test interfaces
o Test Business Processes
o Test Responsibilities
o Test custom/standard reports
o Report bugs encountered
Introduction
Due to the nature of a Data Warehouse, testing the validity of each data mart will require an approach
different from that of testing a transactional system (e.g. Oracle Applications). Because the data
warehouse and the source system are designed to perform different functions, the table structures between
the two systems differ greatly. The main difficulty found when testing is validating query results between
the systems. Not all of the data in the source system is loaded into the warehouse, and the data that is
loaded is often transformed. Therefore, comparisons between the systems are difficult, and
troubleshooting becomes extremely complex when trying to identify points of failure. The testing method
introduced in this plan is designed to streamline the process, making it easier to pinpoint problems,
reduce confusion for the testers, and expedite the testing phases.
Testing Phases
Data mart testing should be divided into three distinct phases: Instance Validation, Data
Validation, and Application Usability. This testing is designed to test the completeness,
correctness, and performance of each data mart.
o Initial and incremental load: The designated development team will compare the row counts
between the staging tables and the DW tables to make sure all pushed records are pulled over to
the DW tables. The DW tables should capture all changes that occur in the source tables through
incremental loads. A script should be used to capture this information.
o Push logic: All conditions, translations, formulas, and calculations that are part of the push load
are tested to make sure that they provide useful and valid data to the DW. These scenarios are
pulled directly from the code and business rules noted in the design document.
For example:
a) Can a PO agent be inactive in the HR table but still active in the PO agent table?
b) Are there any payments that are being pushed over to DW with no invoice associated
with them?
c) Does the PO distribution amount calculate correctly?
d) If the invoice status is CANCELLED, should the sum of all payment amounts equal 0?
e) Are there any translation or mapping errors? For example, ATTRIBUTE9 becomes
TRAVELER_NAME in the DW.
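Checks like examples (b) and (d) reduce to SQL anti-joins and aggregate validations. A minimal sketch, again using SQLite in place of the real databases, with hypothetical table and column names:

```python
import sqlite3

# Hypothetical DW tables standing in for the real warehouse schema.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE dw_invoices (invoice_id INTEGER PRIMARY KEY, status TEXT);
    CREATE TABLE dw_payments (payment_id INTEGER PRIMARY KEY,
                              invoice_id INTEGER, amount REAL);
    INSERT INTO dw_invoices VALUES (101, 'PAID'), (102, 'CANCELLED');
    INSERT INTO dw_payments VALUES (1, 101, 500.0),
                                   (2, 999, 42.0);  -- 999 has no invoice
""")

# Example (b): payments pushed to the DW with no associated invoice.
orphans = conn.execute("""
    SELECT p.payment_id
    FROM dw_payments p
    LEFT JOIN dw_invoices i ON i.invoice_id = p.invoice_id
    WHERE i.invoice_id IS NULL
""").fetchall()
print("orphan payments:", [pid for (pid,) in orphans])

# Example (d): for CANCELLED invoices, the payment total should be 0.
bad = conn.execute("""
    SELECT i.invoice_id, COALESCE(SUM(p.amount), 0) AS total
    FROM dw_invoices i
    LEFT JOIN dw_payments p ON p.invoice_id = i.invoice_id
    WHERE i.status = 'CANCELLED'
    GROUP BY i.invoice_id
    HAVING total <> 0
""").fetchall()
print("cancelled invoices with nonzero payments:", bad)
```

Each scenario from the design document would become one such query, run after the load completes.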
o Pull logic: All logic, conditions, translations, transformations and exceptions that are part of the
pull load are tested to make sure that the program is pulling the right data from staging tables into
the DW. These scenarios are pulled directly from the code and business rules noted in the design
document.
For example:
a) If a record in the Fact tables has no corresponding record in the Dimension tables, is it
still loaded, or is it sent to the error table? For example, can a Vendor exist in the fact
table but not in the dimension table, or can a PTA exist in the fact table but not in the
dimension table, etc.?
b) If there are supporting Fact tables, do they need to have corresponding records in another
Fact table? For example, can a record exist in the Payment table when no corresponding
invoice exists in Invoice Overview and Invoice Detail tables? Or can a record appear in
the Detail table but not in the Overview table, etc.
c) Due to data being displayed differently in the DW than in the source system, at times,
records may appear to be duplicated. Sufficient testing should be done to ensure that
there are no duplicate records in the DW. Any records that appear to be duplicated (but
are not) should be noted in the design and training documents.
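Examples (a) and (c) above amount to a fact-to-dimension anti-join and a duplicate-row check. A sketch with hypothetical fact and dimension tables, using SQLite as a stand-in for the warehouse:

```python
import sqlite3

# Hypothetical dimension/fact pair; names are illustrative only.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE dim_vendor (vendor_id INTEGER PRIMARY KEY, vendor_name TEXT);
    CREATE TABLE fact_invoice (invoice_id INTEGER, vendor_id INTEGER, amount REAL);
    INSERT INTO dim_vendor VALUES (1, 'OFFICE DEPOT'), (2, 'ACME');
    INSERT INTO fact_invoice VALUES (10, 1, 99.0),
                                    (11, 3, 12.5),   -- vendor 3 not in dimension
                                    (10, 1, 99.0);   -- exact duplicate row
""")

# Example (a): fact rows whose vendor is missing from the dimension table.
missing_dim = conn.execute("""
    SELECT f.invoice_id, f.vendor_id
    FROM fact_invoice f
    LEFT JOIN dim_vendor d ON d.vendor_id = f.vendor_id
    WHERE d.vendor_id IS NULL
""").fetchall()

# Example (c): exact duplicate fact rows (grouped on every column).
dupes = conn.execute("""
    SELECT invoice_id, vendor_id, amount, COUNT(*) AS n
    FROM fact_invoice
    GROUP BY invoice_id, vendor_id, amount
    HAVING n > 1
""").fetchall()

print("fact rows with no dimension match:", missing_dim)
print("duplicate fact rows:", dupes)
```

Rows flagged by the duplicate check would then be compared against the design document to decide whether they are true duplicates or legitimate, differently-displayed records.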
The successful completion of this phase indicates that the load programs (push/pull) are
working the way they should and that the data is in sync in both systems.
o Forms – Front-end
By comparing results between the source application front end and the reporting tool
(e.g., Discoverer, Cognos, Webster), using the same selection criteria, we will know that
the DW contains reliable information that can support transactional-level inquiries.
For example:
a) What is the status of invoice number 198273?
b) Who was the buyer that processed purchase order number PO374021?
c) What check number paid invoice number 198272? What was the payment date?
o Queries – Back-end
By using SQL to perform complex test scenarios, results are compared against the tables
in both databases. Since not all of the data from the source system is loaded into the DW,
testing must be done to ensure that the information that is loaded is sufficient to support
the business activities or decision-making processes for the Institute.
For example:
a) Can I see all invoices and payments associated for OFFICE DEPOT for this fiscal year?
b) Can I see all purchase orders for this award number for this month, and any invoice or
payment associated with each?
c) What purchase orders have invoices on hold and for what reasons?
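A back-end comparison such as example (a) can be reduced to running the same aggregate query against both systems and comparing the results. The sketch below uses hypothetical src_/dw_ tables in SQLite to stand in for the source system and the warehouse:

```python
import sqlite3

# Hypothetical mirror tables: 'src_' = source system, 'dw_' = warehouse.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE src_invoices (invoice_id INTEGER, vendor TEXT,
                               fiscal_year INTEGER, amount REAL);
    CREATE TABLE dw_invoices  (invoice_id INTEGER, vendor TEXT,
                               fiscal_year INTEGER, amount REAL);
    INSERT INTO src_invoices VALUES
        (1, 'OFFICE DEPOT', 2004, 100.0),
        (2, 'OFFICE DEPOT', 2004, 50.0),
        (3, 'ACME',         2004, 75.0);
    INSERT INTO dw_invoices SELECT * FROM src_invoices;
""")

QUERY = """
    SELECT COUNT(*), COALESCE(SUM(amount), 0)
    FROM {table}
    WHERE vendor = ? AND fiscal_year = ?
"""

def totals(table, vendor, fy):
    """Invoice count and amount total for one vendor and fiscal year."""
    return conn.execute(QUERY.format(table=table), (vendor, fy)).fetchone()

# Same aggregate, both systems; matching results mean the data is in sync.
src = totals("src_invoices", "OFFICE DEPOT", 2004)
dw = totals("dw_invoices", "OFFICE DEPOT", 2004)
print("source:", src, "dw:", dw, "->",
      "in sync" if src == dw else "out of sync")
```

In practice the two queries would run against different connections (source instance and DW instance), and only the count/total pairs would be compared.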
The successful completion of this phase of testing indicates that the databases are in sync.
This step eliminates any further need of having to use two systems to validate results.
Validation for the testing that occurs in the next phase is contained to the reporting tool
and any custom views used by that tool.
This testing determines the usability of the information in the DW when using the front-
end tool to perform multi-level (transactional and analytical) inquiries. To minimize bugs
found by end user testers, internal and functional super-users should first conduct this
testing.
It should be assumed that any bug found in this phase is directly related to the business
area, a join, or a limitation of the tool. If a query produces unexpected results,
troubleshooting should begin with the tool. Further troubleshooting using back-end tables
should occur only after the tool has been ruled out.
TESTING PROCESS AND PROCEDURES
INTRODUCTION
Most testing will be scripted. Using scripts helps avoid confusion and increases coordination during
testing. While some ‘open-ended’ testing is anticipated, it should be done only with the specific approval
of the Testing Manager. Prior to the execution of tests, test cases should be identified and scripts should
be created. At a minimum, scripts should address user requirements.
CREATE TEMPLATES
The creation of test script template(s) provides a consistent approach to the documentation of the testing
effort. Example templates can be found at Y:\Testing\Test Script Templates\Blank Templates and an
example naming convention is provided in the Appendix. After obtaining a copy of the template users
should perform a “save as” and create a test script file according to the naming and saving conventions.
Use the template appropriate to the testing phase or cycle you are in.
CREATE SCRIPTS
The test scripts themselves should be derived from the test cases. These scenarios should be entered onto
the test script. Columns to be entered in advance include requirement number, action, expected outcome,
navigations, test data used, and tester information.
EXECUTE SCRIPTS
Once the scenarios have been entered onto the scripts, testing may begin. The scripts created are meant to
be both a roadmap for the testing as well as a way of documenting the results of testing. Several important
areas should be documented as a result of testing. The results should include testing date, pass/fail results,
performance results, responsibility used, and pertinent comments (especially a Remedy ticket number for
failures – more on that later).
While testing, keep the appropriate script open so that you may note necessary data in the script as you
go. Save often to avoid losing work. Keeping your script information up to date will ease status
reporting. The Testing Manager will review scripts periodically to obtain a count of the passes and fails
and will report this at status meetings.
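If test scripts are exported to a delimited format, the periodic pass/fail count can be automated. The CSV layout below is hypothetical; actual scripts follow the templates on the 'Y' drive:

```python
import csv
import io
from collections import Counter

# A tiny stand-in for an exported test script; columns are illustrative.
script = io.StringIO("""\
scenario,result,remedy_ticket
Enter requisition,Pass,
Approve requisition,Fail,HD0001234
Receive goods,Pass,
Match invoice,Fail,HD0001235
""")

# Tally the 'result' column for the status report.
counts = Counter(row["result"] for row in csv.DictReader(script))
total = sum(counts.values())
print(f"{counts['Pass']} passed, {counts['Fail']} failed, {total} total")
```

The same tally could be run across every script file for a cycle to produce the numbers reported at status meetings.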
Pass/Fail Entries
On the test script enter the test results for each scenario as follows:
If a scenario Fails
- Identify data tested on your script
Should you need to capture a screen shot, use Snag-it or MS Word. Snag-it is preferred over Word for
saving screen shots due to smaller file size; however, Word can be used if you do not already have Snag-it
installed. Contact the Testing Manager should you need instructions for Snag-it.
2. The Testing Manager, team lead or his or her designee assigns the ticket to an individual or
group. That individual is responsible for updating the ticket. If appropriate, updates can include
changes to description, summary, or categorization.
3. The tester shall create a screen shot of the error and file it. The file name and location of the
screen shot, plus any other backup information, should be entered on the test script. The Bug
Report shall include file names and locations of screen shots, query names, and backup
information as well. Be sure to follow the naming convention and enter the requested information
previously described.
4. Remedy prioritization is defined as follows:
5. Only after a fix has been confirmed should the HD ticket be set to ‘Resolved’ status. The ticket
shall be assigned back to the Testing Manager for coordination with and confirmation of the fix
by the end user.
6. The Testing Manager will review open tickets on a regular basis to confirm progress is being
made. Close attention will be paid to tickets that have been open for a long time, have urgent or
high priority, or have not been addressed or received follow-up.
7. Always log a Remedy Ticket for bugs found and document, document, document! It is highly
recommended that you communicate via email when discussing the problem and resolution. You
can then cut and paste your emails into the Remedy work log. This is an excellent way to
document and a great reference should the problem occur in the future.
LOGGING TARS (ORACLE ONLY)
Only log a TAR after a Remedy ticket has been created. A TAR is used when an Oracle bug is found that
cannot be resolved by IMSS. TARs are logged in the Oracle tracking system called Metalink. Only IMSS
Analysts, Developers, and DBAs will log TARs. All other testers should seek out a member of one of
these groups if a TAR is required.
After requesting a user name and password from IMSS Infrastructure go to www.oracle.com and navigate
as follows:
- Select Support in the Resources menu
- Select Metalink from the Support Services menu
- Select Metalink login (for registered users)
- Enter your user name and password
- Begin entering your TAR as instructed
When logging a TAR with Oracle, be sure to use the correct CSI number. Contact the Infrastructure
manager to obtain this number.
Reminder: Please be sure a Remedy ticket is created before the TAR is logged. This becomes very
important, especially after the TAR is removed from Metalink and you can no longer retrieve it. Be sure
to update the “Vendor Bug Number” field in the Remedy ticket. After logging your TAR please notify
others of the problem and status.
CODE MIGRATION AND PATCHING BUGS
Custom code migrations into the TEST instance will need to be closely coordinated to ensure proper
handoffs are made and that the test instance is not corrupted. For the most current version of the
migration procedure, please contact the Development Manager.
Patching is often necessary to address bugs found during testing. Patches are normally provided by the
vendor. Like migration of code, patching will also be closely coordinated to minimize unnecessary code
changes and to prevent corruption to the test instance. The patching process will be tracked in Remedy via
the change request process. The help desk ticket opened for the bug found should be related to the change
request ticket. A patch document must be created for each patch that is under consideration. A meeting is
normally held to determine if the patch is viable and whether the patch will be applied. The test manager
should be well informed of the application of the patch into the test instance and coordinate instance
availability and regression testing. For the most current version of the patching procedure please contact
the Development Manager.
GENERAL INFORMATION
ENVIRONMENT
Environment means many things. The testing environment includes, but is not limited to, the system
architecture, testing tools, system access, a test instance, and the location where testing will be performed.
Access to the test instance, a computer, the ‘Y’ drive, and a printer will be necessary.
- IMSS Security or the Testing Manager will grant access to the testing instance (application) with
appropriate responsibilities
- If you have access problems, please contact the Testing Manager
Familiarize yourself with the ‘Y’ drive structure. Note which phase or cycle of testing you are in and
where to store your files. Standards are described below. Understand where to find script templates and
where to store actual scripts with results. Also note where to store screen-shot backups. Contact the
Testing Manager if you have questions about files or the ‘Y’ drive structure.
http://atcdba-support.caltech.edu/tnd_apps.htm
COMMUNICATION
Communicating hand-offs and other events affecting other testers/developers is extremely important to
the success and on-time delivery of a project. In order to maintain a continuous flow of effort, a
tester/developer should notify others when an action has been completed so the next action can begin.
Email, personal communication, telephone, and voice mail can all play an integral part in
communicating an event.
An email distribution list will be created for most projects. This list is intended to keep testers informed
of the various testing activities, including status, updates, reminders, instructions, procedures, etc. If you
have a need to use the distribution list, it is suggested you first contact the project or Testing Manager.
MEETINGS
For larger projects that involve many testers, a training/kickoff meeting should be held. Periodic status
meetings are especially important for the testers and management to attend. The focus of the discussion
should be around the following issues: pass/fail status, urgent and high level bugs/issues, schedule, and
other topics as required.
STATUS REPORTING
The Testing Manager will provide the testing team and management with periodic status reports. Because
the information is taken from the test scripts, it is extremely important to keep the scripts current. Several
pieces of data will be tracked:
APPENDIX
ORACLE
AOL  Application Objects Library
AP   Accounts Payable
AR   Accounts Receivable
BB   Benefit Billing
CM   Cash Management
CWS  Workstudy
FA   Fixed Assets
GL   General Ledger
GMS  Grants Management System
HR   Human Resources
IC   Internal Charges
IN   Inventory
LD   Labor Distribution
PAN  EPAN
PAY  Payroll
PO   Purchasing
PRK  Parking Database
WA   Web Apps (see below for breakout)