
Automated Test Script Creation Process

Debugging and troubleshooting test scripts becomes extremely tedious when a test script has hundreds of lines of code,
verification points, branching logic, error handling, parameters, and correlation among various recorded business
processes. A more manageable approach to debugging complex, lengthy test scripts is to record portions of the script
and debug those portions individually before recording the rest of the test script. After testing individual portions, you
can determine how one portion of the test script works with another and how data flows from one recorded process
to the next. Once all sections of a test script have been recorded, you can play back the entire test script and ensure that it
plays back properly from beginning to end with one or more sets of data.

Always create a test plan to guide the creation of the regression test script; a test script is only as good as the
planning that takes place before it is written. Planning up front saves time and keeps the process organized.

Remember that all regression test scripts will be written to run in both the QA environment and the production
environment. The automated regression tests will form part of the production test used to determine whether a build truly
functions as expected in production.

To manage the creation and editing of our automated regression tests, QA will create automated tests in the following manner:

1. Record each individual process such as logging in and logging out. Save those recordings on your desktop.
2. Verify that the script will play back with no errors.
3. Continue recording and playing back throughout the test script creation process, verifying the script will play
back with each step taken.
4. Add multiple sets of data-driven tests for each individual portion of each test where that kind of test is
applicable.
5. Verify the individual test scripts with multiple sets of data will play back with no errors.
6. Add various check points throughout the test.
7. Verify the individual test scripts with various check points will play back with no errors.
8. Now integrate all recorded processes into one large test script.
9. Verify the large test script will play back with no errors.
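The record-verify-integrate cycle above can be sketched in code: small, individually verified portions that are only chained into one large script once each passes on its own. This is a minimal illustration, not QTP API; all function names and data sets here are hypothetical stand-ins for recorded processes.

```python
def login(user):
    # Illustrative stand-in for a recorded login process (step 1).
    return bool(user)

def run_report(user, date_range):
    # Illustrative stand-in for a recorded business process, driven by data.
    return f"report:{user}:{date_range}"

def logout(user):
    # Illustrative stand-in for a recorded logout process.
    return True

# Illustrative data sets for data-driven playback (step 4).
DATA_SETS = [("qa_user1", "2024-01"), ("qa_user2", "2024-02")]

def full_regression():
    """Integrate the individually verified portions into one large script
    (step 8), replaying the whole flow once per data set with checkpoints."""
    results = []
    for user, dates in DATA_SETS:
        assert login(user)                      # checkpoint: login succeeded
        results.append(run_report(user, dates))
        assert logout(user)                     # checkpoint: clean logout
    return results
```

Each stand-in function would be debugged in isolation (steps 2-3 and 5) before `full_regression` is ever assembled.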

The key here is to ensure that each recorded process plays back successfully before proceeding to record the remaining
portions of the test script. Do not string the individual tests together for playback until you have verified that every
process plays back successfully as an individual process.

The lesson to be learned here: never, EVER wait to debug a script until the entire test script has been recorded.

Synchronization

Since QTP can play back recorded test scripts much faster than an end user's manual keystrokes, all tests must be
synchronized. Introduce wait times in the test scripts to make sure the script will run unattended without errors, and take
into account that there will be times when the script needs to run slower than a manual test due to network issues and
similar delays. The goal is to make sure the scripts will run unattended through Test Director. That said, slowing
down a script with fixed wait times is not very scientific and does not, by itself, produce a robust automated test script
that plays back successfully without user intervention. For that reason, sync times will be tuned after the test script has been
written in its entirety and has been tested to ensure it runs with no errors.
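A more robust alternative to a fixed wait is a polling wait that returns as soon as the application is ready, up to a timeout. Below is a minimal, tool-agnostic sketch of that idea (the helper name and signature are illustrative, not a QTP or Test Director API):

```python
import time

def wait_for(condition, timeout=30.0, poll_interval=0.5):
    """Poll `condition` until it returns True or `timeout` seconds elapse.

    Returns True once the condition is met, False on timeout. Unlike a
    fixed sleep, this ends as soon as the application under test is ready,
    yet still tolerates slow runs (e.g. network delays) up to the timeout.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if condition():
            return True
        time.sleep(poll_interval)
    return False
```

In a real script, `condition` would be a check that a window, field, or status message has appeared before the next step executes.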

Signed-off, Peer Reviewed

As part of the test readiness review criteria, test scripts will be formally accepted and approved prior to starting the test
cycle. QA, Business Analysts, and Developers should be involved in approving recorded test scripts. The QA Analyst
writing the automated test script should demonstrate that the test script successfully plays back in the QA environment and,
if possible, with various sets of data.

Recording, Playing Back Against Hidden Objects

Scripts might be recorded to populate or double-click values for a field within a table grid or an array where the location of
the field is not fixed. If the field's location within a table grid or array changes from the time it was recorded, the script
might fail during play back. Test scripts often fail during play back because the locations of objects that are not displayed or
visible on the screen have changed.

In order to play back scripts that are location sensitive, or where the location is subject to change, it might be necessary to
enhance the script with functionality such as "scroll down", "next page", or "find". Including such utilities ensures that
hidden objects required during play back will be identified and/or double-clicked regardless of their location within an array,
table grid, or the displayed screen.
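The "find" approach can be sketched as a search over the whole grid rather than a hard-coded cell position. This is an illustrative helper, assuming the grid's contents can be read as rows of cells; it is not a QTP API:

```python
def find_in_grid(grid_rows, target):
    """Search every cell of a grid for `target` and return its (row, col)
    position, or None if absent.

    Searching by value instead of replaying a recorded, fixed location
    keeps the script working when the field moves between runs.
    """
    for r, row in enumerate(grid_rows):
        for c, cell in enumerate(row):
            if cell == target:
                return (r, c)
    return None
```

Once the position is found, the script can scroll to it and click or double-click that cell, wherever it landed.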

Create Automatic Notification for Critical Scripts

Test scripts should be enhanced with error-handling programming logic that instantly sends error messages to a tester's e-
mail address when problems occur. Since some test scripts are business critical and must run as batch jobs in the middle of
the night, we need to know as soon as possible if something fails. The proper and successful execution of these business-
critical test scripts can serve as a dependency or pre-condition for other automated tasks. Always include logic in business-
critical test scripts that automatically sends notification in the event of a failure.
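The notification pattern amounts to a wrapper that catches any failure, sends a message, and still reports the failure to the scheduler. A minimal sketch, with a hypothetical `notify` callback standing in for an e-mail sender (in practice it might wrap `smtplib`):

```python
def run_with_notification(test_fn, notify, script_name):
    """Run `test_fn`; on any exception, call `notify` with a failure
    message and re-raise so the batch scheduler still sees the failure.

    `notify` is any callable taking a message string -- for e-mail it
    could wrap smtplib's SMTP.sendmail in a real setup.
    """
    try:
        test_fn()
    except Exception as exc:
        notify(f"Test script '{script_name}' failed: {exc!r}")
        raise
```

Because the exception is re-raised, downstream tasks that depend on this script as a pre-condition are still blocked when it fails.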

Documentation

To make test scripts reusable and easier to maintain, document all relevant information for executing the test script,
including a test script header and any special conditions for execution. Example:

1. Adjust dates within the application in QA environment for running reports.
2. Update any fields that require unique data.
3. Display settings for context-sensitive/analog/bitmap recording.
4. List other test scripts that are dependencies.
5. Specify necessary authorization levels or user roles for executing the script.
6. Conditions under which the script can fail, and workarounds for re-launching the script.
7. Applications that need to be either opened or closed during the script execution.
8. Specific data formats, etc.

Scripts should contain a header with a description (example: what the script is used for) and its particular purpose (example:
regression testing). The script header should also include the script author and owner, creation and modification dates, the
requirement identifiers the script traces back to, the product the script supports, and the number of variables and
parameters of the script. Providing this information in the test script header facilitates the execution, modification, and
maintenance of the script for future testing efforts.
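The header fields above can be kept as structured data and rendered as the comment block at the top of a script. A sketch with entirely illustrative field values (the `'` prefix assumes VBScript-style comments, as used by QTP scripts):

```python
# Illustrative header metadata; every value here is a made-up example.
SCRIPT_HEADER = {
    "description": "Regression test for month-end report generation",
    "purpose": "regression testing",
    "author": "QA Analyst (illustrative)",
    "owner": "QA team",
    "created": "2024-01-15",
    "modified": "2024-03-02",
    "requirements": ["REQ-101", "REQ-207"],  # traceability identifiers
    "product": "Reporting module",
    "variables": 4,
    "parameters": 2,
}

def format_header(header):
    """Render the metadata as a VBScript-style comment block for the
    top of the test script."""
    return "\n".join(f"' {key}: {value}" for key, value in header.items())
```

Keeping the header as data also makes it easy to audit that every script carries all required fields before sign-off.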
