June, 2002
Table of Contents
1. Purpose ..... 1
2. Scope ..... 1
3. Audience ..... 1
4. Organization of the Manual ..... 1
5. Abbreviations ..... 1
6. Introduction to Software Testing ..... 2
7. Testing Principles ..... 2
8. Testing Philosophies ..... 4
9. Automated Software Testing ..... 5
10. Code Coverage ..... 7
10.1 General Guidelines for Usage of Coverage ..... 9
11. System Testing ..... 9
11.1 Objective ..... 9
11.2 Processes to be followed in the activities ..... 11
11.3 System Test Team ..... 11
11.4 Hypothetical Estimate of when the errors might be found ..... 11
11.5 Input ..... 11
11.6 Deliverables ..... 11
11.7 Various Methods of System Testing ..... 12
11.7.1 Functional Testing ..... 12
11.7.2 Security Testing ..... 12
11.7.3 Performance Testing ..... 12
11.7.4 Stress Testing ..... 13
11.7.5 Reliability Testing ..... 13
11.7.6 Usability Testing ..... 13
11.7.7 Environment Testing ..... 13
11.7.8 Storage Testing ..... 14
11.7.9 Installation Testing ..... 14
11.7.10 Recovery Testing ..... 14
11.7.11 Volume Testing ..... 14
11.7.12 Error Guessing ..... 14
11.7.13 Data Compatibility Testing ..... 14
11.7.14 User Interface Testing ..... 15
11.7.15 Acceptance Testing ..... 15
11.7.16 Limit Testing ..... 15
11.7.17 Error Exit Testing ..... 15
11.7.18 Consistency Testing ..... 15
11.7.19 Help Information Testing ..... 15
V1.0 Page i of iv
11.7.20 Manual Procedure Testing ..... 16
11.7.21 User Information Testing ..... 16
12. Testing GUI Applications ..... 17
12.1 Introduction ..... 17
12.1.1 GUIs as Universal Client ..... 17
12.2 GUI Test Strategy ..... 17
12.2.1 Test Principles Applied to GUIs ..... 17
12.3 Types of GUI Errors ..... 17
12.4 Four Stages of GUI Testing ..... 18
12.5 Types of GUI Test ..... 18
12.5.1 Checklist Testing ..... 18
12.5.2 Navigation Testing ..... 19
12.5.3 Application Testing ..... 19
12.5.4 Desktop Integration Testing ..... 19
12.5.5 Synchronisation Testing ..... 20
12.6 Non-functional Testing of GUI ..... 20
12.6.1 Soak Testing ..... 20
12.6.2 Compatibility Testing ..... 21
12.6.3 Platform/Environment Testing ..... 21
12.7 Automating GUI Tests ..... 21
12.7.1 Justifying Automation ..... 21
12.7.2 Automating GUI Tests ..... 21
12.7.3 Criteria for the Selection of a GUI Tool ..... 23
12.7.4 Points to be Considered While Designing a GUI Test Suite ..... 23
12.8 Examples of GUI Tests ..... 23
13. Client/Server Testing ..... 24
13.1 Testing Issues ..... 24
13.2 C/S Testing Tactics ..... 24
14. Web Testing ..... 25
14.1 Standards of Web Testing ..... 25
14.1.1 Frames ..... 25
14.1.2 Gratuitous Use of Bleeding-Edge Technology ..... 25
14.1.3 Scrolling Text, Marquees and Constantly Running Animations ..... 26
14.1.4 Long Scrolling Pages ..... 26
14.1.5 Complex URLs ..... 26
14.1.6 Orphan Pages ..... 26
14.1.7 Non-standard Link Colors ..... 26
14.1.8 Outdated Information ..... 26
14.1.9 Lack of Navigation Support ..... 26
14.1.10 Overly Long Download Times ..... 26
14.2 Testing of User-Friendliness ..... 27
14.2.1 Use Familiar, Natural Language ..... 27
14.2.2 Checklist of User-Friendliness ..... 27
14.3 Testing of User Interface ..... 28
14.3.1 Visual Appeal ..... 28
14.3.2 Grammatical and Spelling Errors in the Content ..... 29
14.4 Server Load Testing ..... 30
14.5 Database Testing ..... 30
14.5.1 Relevance of Search Results ..... 30
14.5.2 Query Response Time ..... 30
14.5.3 Data Integrity ..... 30
14.5.4 Data Validity ..... 31
14.5.5 Recovery of Data ..... 31
14.6 Security Testing ..... 31
14.6.1 Network Security ..... 32
14.6.2 Payment Transaction Security ..... 32
14.7 Software Performance Testing ..... 32
14.7.1 Correct Data Capture ..... 32
14.7.2 Completeness of Transaction ..... 32
14.7.3 Gateway Compatibility ..... 32
14.8 Web Testing Methods ..... 33
14.8.1 Stress Testing ..... 33
14.8.2 Regression Testing ..... 33
14.8.3 Acceptance Testing ..... 34
15. Guidelines to Prepare Test Plan ..... 34
15.1 Preparing Test Strategy ..... 34
15.2 Standard Sections of a Test Plan ..... 34
16. Amendment History ..... 39
17. Guideline for Test Specifications ..... 39
References ..... 39
Appendix 1: List of Testing Tools ..... 40
Appendix 2: Sample System Test Plan ..... 44
Appendix 3: Sample Test Plan for Web Testing ..... 50
Sample Test Cases for Login Page ..... 52
Glossary ..... 57
2. Scope:
This document provides the guidelines for testing processes. All testing projects executed in a software development organisation should follow these guidelines for their processes.
3. Audience:
The target audience of this document comprises Project Managers, Project Leaders, Test Personnel, and anyone joining this activity with a basic understanding of Software Engineering.
5. Abbreviations:
GUI – Graphical User Interface
SUT – Software Under Test
DUT – Device Under Test
C/S – Client/Server
6. Introduction to Software Testing
Testing is a process to detect the difference between the observed and stated behaviour of
software. Due to the fallibility of its human designers and its own abstract, complex nature,
software development must be accompanied by quality assurance activities. It is not unusual for
developers to spend 40% of the total project time on testing. For life-critical software (e.g. flight
control, reactor monitoring), testing can cost 3 to 5 times as much as all other activities
combined. The destructive nature of testing requires that the developer discard preconceived
notions of the correctness of his/her developed software.
Testing should systematically uncover different classes of errors in a minimum amount of time
and with a minimum amount of effort. A secondary benefit of testing is that it demonstrates that
the software appears to be working as stated in the specifications. The data collected through
testing can also provide an indication of the software's reliability and quality. But testing cannot
show the absence of defects; it can only show that defects are present. In practice, testing
is used more to confirm quality than to achieve it.
7. Testing Principles
Basic principles of testing:
• A programmer should avoid attempting to test his or her own program.
• A programming organisation should not test its own programs.
• Thoroughly inspect the results of each test.
• Examining a program to check what it is supposed to do is only half of the battle. The other
half is seeing whether the program does what it is NOT supposed to do.
• The probability of the existence of more errors in a section of a program is proportional to the
number of errors already found in that section.
• Testing is an extremely creative and intellectually challenging task.
• Tools should be used for better control over testing and to improve productivity.
A test process that complements object-oriented design and programming can significantly
increase reuse, quality, and productivity. Establishing such a process usually means dealing with
some common misperceptions (myths) about testing software. This section explores these
myths and their assumptions, and then explains why each myth is at odds with reality.
Myth 1: Testing is unnecessary. With iterative and incremental development we obviate the need
for a separate test activity, which was really only necessary in the first place because
conventional programming languages made it so easy to make mistakes. The idea of testing to
find faults is fundamentally wrong: all we need to do is keep "improving" our good ideas, and the
simple act of expression is sufficient to create trustworthy classes. Testing is a destructive, rote
process; it isn't a good use of a developer's creative abilities and technical skills.

Reality: Human error is as likely as ever, and testing can be a complementary, integral part of
development. Testing can also be incremental and iterative: while the iterative and incremental
nature of object-oriented development is inconsistent with a simple, sequential test process (test
each unit, then integration-test all of them, then do system test), that does not mean that testing
is irrelevant. The boundary that defines the scope of unit and integration testing is different for
object-oriented development, and tests can be designed and exercised at many points in the
process. Thus "design a little, code a little" becomes "design a little, code a little, test a little."
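The "design a little, code a little, test a little" rhythm can be sketched with any xUnit-style framework. The small class and tests below are purely illustrative (not taken from this manual): each increment of the design gains a test alongside it.

```python
import unittest

# A small class grown one increment at a time ...
class Stack:
    def __init__(self):
        self._items = []

    def push(self, item):
        self._items.append(item)

    def pop(self):
        if not self._items:
            raise IndexError("pop from empty stack")
        return self._items.pop()

# ... with a test written alongside each increment of the design.
class StackTest(unittest.TestCase):
    def test_push_then_pop_returns_last_item(self):
        s = Stack()
        s.push(1)
        s.push(2)
        self.assertEqual(s.pop(), 2)

    def test_pop_on_empty_stack_raises(self):
        s = Stack()
        with self.assertRaises(IndexError):
            s.pop()
```

Each new method or behaviour earns a test before the next increment of design begins, so unit testing never becomes a separate, deferred activity.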
Myth 4: Testing is trivial. Testing is simply poking around until you run out of time: all we
need to do is start the application, try each use case, and try some garbage input. Testing is
neither serious nor challenging work; hasn't most of it already been automated?

Reality: Hunches about testing completeness are notoriously optimistic. Adequate testing
requires a sophisticated understanding of the system under test. You need to be able to develop
an abstract view of the dynamics of control flow, data flow, and state space from a formal model
or the system requirements, and you need to be able to define the expected results for any input
and state you select as a test case. This is interesting work for which little automation is available.
Reality: Many bugs only surface during integration. There are many interactions among
components that cannot be easily foreseen until all or most components of a system are
integrated and exercised. So, even if we could eliminate all individual sources of error, integration
errors are highly likely. Static methods cannot reveal interaction errors with the target or transient
performance problems in hard real-time systems.
8. Testing Philosophies
The following steps should be followed in a testing project. The Project Manager shall start
preparation of the test plan: identifying the requirements, planning the test set-up, and dividing
the various tasks among the product testing team. A sample System Test Plan is given in
Appendix 2.
1. Identify the requirements for training of the team members and draw up a plan to impart it.
   Get the plan sanctioned by the Group Head. If the training can be arranged with internal
   resources, the Project Co-ordinator can arrange it and forward the training details to Head –
   Training Department for record purposes. If the training has to be arranged with external
   resources, inform Head – Training Department for implementation.
2. Obtain the test cases and checklists. If customer has not given any test cases, then the team
members will go through the user manual and other relevant documents and develop the test
plan and test cases along with checklists to be used. The project co-ordinator will get the test
plan and test cases reviewed and approved by using peer review or any suitable method.
The Team Leader/Group Head should approve the test plan and test cases.
3. The test cases obtained from the client shall be treated as the originals. The testers
   should work from a copy of their allocated test cases and preserve the originals as the
   master copy; the checklists to be used should likewise be copied. Review the checklists and
   test cases for completeness. If required, test cases may be added or modified; such
   modifications must be reviewed and approved.
4. Install the software and verify proper installation. If installation fails, inform the customer and
despatch the product testing report. If required, return other materials like the original CDs of
the product. Wait for next build from customer.
5. If the installation is proper, then prepare the database for storing the defects and apply the
test cases. Execute the testing as planned. First test the basic features and functions of the
software, then go for integration testing, simulation and stress testing.
The testers should maintain the following records in everyday testing:
• Actual test records on test checklist (recording of pass/fail against the test cases)
• Detail Defect Report,
6. The Project Co-ordinators should maintain the Weekly Project Status Report (to be sent to
Client representative for status updating).
7. If any project has a particular record to maintain as per client’s choice, it should be adjusted
with the suggested records.
8. Consolidate the defects found at each stage of testing and prepare a defect summary report
along with the Product Testing report stating the extent to which the product meets its stated
specifications.
9. At the end of the project, close the product testing, get the report signed by the Group Head,
   and send it to the customer.
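Step 5 above calls for a database of defects, and step 8 for a consolidated defect summary. A minimal sketch of such records follows; the field names and severity levels are assumptions for illustration, not prescribed by this manual.

```python
from dataclasses import dataclass, field
from datetime import date

# Hypothetical structure for the daily defect records described above.
@dataclass
class DefectReport:
    defect_id: str
    test_case_id: str
    summary: str
    severity: str          # e.g. "critical", "major", "minor" (illustrative)
    status: str = "open"   # lifecycle: "open" -> "fixed" -> "closed"
    found_on: date = field(default_factory=date.today)

class DefectLog:
    """Collects daily defect reports and consolidates them (step 8)."""
    def __init__(self):
        self._defects = []

    def record(self, defect: DefectReport):
        self._defects.append(defect)

    def summary_by_severity(self):
        # Count defects per severity for the defect summary report.
        counts = {}
        for d in self._defects:
            counts[d.severity] = counts.get(d.severity, 0) + 1
        return counts
```

A log like this, kept up to date during everyday testing, makes the consolidated Product Testing report a matter of querying rather than reconstruction.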
9. Automated Software Testing
Automated Software Testing, as conducted with today's automated test tools, is a development
activity that includes programming responsibilities similar to those of the SUT developer.
Automated test tools generate code, in the form of test scripts, while exercising a user interface.
This code can be modified and reused as automated test scripts for other applications in
less time.
Software testing tools make the job easier. Many standard tools are now available in the
market, ready to serve the purpose. Alternatively, one can develop a customised test tool for a
particular requirement, or use a standard tool and then 'instruct' it to provide the
desired service.
Usage of tools should be decided at the planning stage of the project. And testing activity should
also be started from the Requirement Specification Stage. Though early start of testing is
recommended for any project, irrespective of usage of tools, it seems to be a must for projects
that use tools. The reason lies in preparation of test cases. Test cases for System Testing should
be prepared once the SRS is baselined. Test cases then would be modified with evolution of
design spec, code and of course, actual software.
This process also requires a well-defined test plan and it should be prepared right after project
plan. Most of the standard tools work with ‘capture/playback’ method. The tool has a capture
facility. While using the tool, tester would first run the tool, start its capture facility and then run the
SUT according to a test case. With its capture facility, the tool will record all the steps performed
by users including keystrokes, mouse activity and selected output. Now the user can playback
the recorded steps for a test case automatically driving the application and validating the results
by comparing them to the previously saved baseline.
When the tool records the steps, it automatically generates a piece of code known as a
test script. The tester can then modify the code so that the script performs some desired
activity, or write test scripts from scratch. A number of test cases can be combined in a
test suite, and the tester can schedule test suites to run unattended overnight or at any off-peak time.
The result log is generated automatically and stored for review.
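A hand-edited capture/playback script typically ends up as a linear sequence of UI actions followed by a checkpoint against the saved baseline. The sketch below imitates that shape; the `FakeUI` object and its control names are hypothetical stand-ins for a real tool's playback API, used here only so the example is self-contained.

```python
class FakeUI:
    """Minimal stand-in for a playback API: records actions, returns canned text."""
    def __init__(self, screen_text):
        self.actions = []
        self._screen_text = screen_text

    def click(self, name):
        self.actions.append(("click", name))

    def type_text(self, name, text):
        self.actions.append(("type", name, text))

    def read(self, name):
        return self._screen_text[name]

def login_test(ui, user, password, expected_banner):
    """Recorded steps, parameterised after recording so the same
    script can replay with different test data."""
    ui.type_text("username", user)
    ui.type_text("password", password)
    ui.click("login")
    # Checkpoint: compare the actual output with the saved baseline.
    return ui.read("banner") == expected_banner
```

The value of editing the generated script, as the text notes, is exactly this parameterisation: one recording replays against many test cases.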
There are, however, many pitfalls in automated testing, and its introduction has to be
properly planned.
A few known pitfalls are:
a. Automation is neither cheap nor quick. It usually takes 3 to 10 times as long to
   create, verify and minimally document [1] an automated test as to run the same test
   manually. Many tests are worth automating, but for tests that are run only once or
   twice it is not worthwhile.
b. Automated tests are no more powerful than the checks scripted into them; they find
   only what they are told to look for.
c. In practice, many test groups automate only the easy-to-run tests.
d. Even a slight change in the UI can invalidate a recorded script.
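Pitfall (d) can be softened by keeping all UI locators in one place, so that a renamed control means one table edit rather than a broken script. The control names below are invented for illustration; real tools use window names, object maps or similar mechanisms to the same effect.

```python
# All scripts refer to stable logical names; only this table tracks
# the actual (hypothetical) control names in the UI under test.
LOCATORS = {
    "username_field": "txtUser",
    "password_field": "txtPass",
    "login_button":   "btnLogin",
}

def locator(logical_name):
    """Resolve a logical name to the current UI control name."""
    return LOCATORS[logical_name]
```

When the developers rename `btnLogin`, only the table changes; every recorded or hand-written script that calls `locator("login_button")` keeps working.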
10. Code Coverage
A perfectly effective test suite would find every bug. Since we don’t know how many bugs there
are, we can’t measure how closely test suites approach perfection. Consequently, we use an
index as an approximate measure of test suite quality: since we can’t measure what we want, we
measure something related.
With coverage, we estimate test suite quality by examining how thoroughly the tests exercise the
code:
• Is every if statement taken in both the true and false directions?
• Is every case taken? What about the default case?
• Is every while loop executed more than once? Does some test force the while loop to be
  skipped entirely?
• Is every loop executed exactly once?
• Do the tests probe for off-by-one errors?
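The first question above is exactly the gap between statement and branch coverage. In this small illustrative example, one test executes every statement, yet branch coverage still reports a miss until a second test takes the if in the false direction:

```python
def absolute(x):
    if x < 0:
        x = -x
    return x

# One test executes every statement of absolute() ...
assert absolute(-3) == 3
# ... but the false direction of the `if` (x >= 0 passing through
# unchanged) is never taken. Branch coverage demands a second test:
assert absolute(4) == 4
```

A coverage tool run in statement mode would report 100% after the first assertion alone; only a branch-aware model exposes the untested direction.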
The main technique for demonstrating that the testing has been thorough is called test coverage
analysis. Simply stated, the idea is to create, in some systematic fashion, a large and
comprehensive list of tasks and check that each task is covered in the testing phase. Coverage
can help in monitoring the quality of testing, assist in creating tests for areas that have not been
tested before, and help with forming small yet comprehensive regression suites.
Coverage, in general, can be divided into two types: code-based or functional. Code-based
coverage concentrates on measuring syntactic properties in the execution, for example, that
each statement was executed, or each branch was taken. This makes program-based coverage
a generic method which is usually easy to measure, and for which many tools are available.
Examples include program-based coverage tools for C, C++ and Java. Functional coverage, on
the other hand, focuses on the functionality of the program, and it is used to check that every
aspect of the functionality is tested. Therefore, functional coverage is design and implementation
specific, and is more costly to measure.
A few general code coverage categories are:
• Control-flow Coverage
• Block Coverage
• Data-flow Coverage
Code based coverage, usually just called coverage, is a technique that measures the execution
of tests against the source code of the program. For example, one can measure whether all the
statements of the program have been executed. The main uses of program based coverage are
assessing the quality of the testing, finding missing requirements in the test plan and constructing
regression suites.
A number of standards, as well as internal company policies, require the testing program to
achieve some level of coverage, under some model. For example, one of the requirements of the
ABC standard is 100% statement coverage.
Nowadays, many coverage tools are available. Almost all of them implement the
statement and branch coverage models. Many tools also implement multi-condition coverage, a
model that checks that each part of a condition (e.g. A or B and C) had an impact on the
outcome. Fewer tools implement the more complex models such as define-use, mutation, and
path coverage variants.
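The multi-condition idea can be made concrete by enumerating the condition from the text. The enumeration below is a small illustration, not tied to any particular tool: listing every operand combination makes it visible which combinations a test suite has, and has not, exercised.

```python
from itertools import product

def decision(a, b, c):
    # The example condition from the text: A or B and C.
    # In Python, as in most languages, `and` binds tighter than `or`.
    return a or (b and c)

# Multiple-condition coverage asks that the tests show each operand
# can affect the outcome; enumerating all combinations exposes gaps.
outcomes = {bits: decision(*bits) for bits in product([False, True], repeat=3)}
```

For instance, a suite that never runs the combination (False, True, False) has never shown that C matters when A is false, even if every statement and branch has been covered.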
The main advantage of code-based coverage tools is their simplicity of use. The tools come
ready for the testing environment: no special preparation of the programs is needed, and the
feedback from the tool is straightforward to understand. The main disadvantage of code
coverage tools is that they do not "understand" the application domain, so it is very
hard to tune them to the areas the user considers significant. These shortcomings can often
be worked around with simple scripting languages such as VB, Perl or Tcl/Tk.
Coverage should not be used if the resources used for it can be better spent elsewhere. This is
the case when the budget is very tight and there is not enough time to even finish the test plan. In
such a case, designing new tests is not useful as not all the old tests will be run. Coverage
should be used only if there is a full commitment to make use of the data collected. Measuring
coverage merely in order to report coverage percentages is practically worthless. Coverage points out parts
of the applications that have not been tested and guides test generation to these parts. Moreover,
it is very important to try to reach full coverage or at least set high coverage goals, since many
bugs hide in hard-to-reach places.
Coverage is a very useful criterion for test selection for regression suites. Whenever a small set
of tests is needed, the test suite should be selected so that it will cover as many requirements or
coverage tasks as possible.
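Selecting a small suite that still covers as many tasks as possible is a set-cover problem, and a common practical approach is greedy selection: repeatedly pick the test that covers the most still-uncovered tasks. The sketch below assumes per-test coverage data is already available; the test names and task identifiers are invented for illustration.

```python
def select_regression_suite(coverage_by_test):
    """Greedy set cover: coverage_by_test maps test name -> set of
    coverage tasks (statements, branches, requirements) it covers."""
    remaining = set().union(*coverage_by_test.values())
    suite = []
    while remaining:
        # Pick the test covering the most still-uncovered tasks.
        best = max(coverage_by_test,
                   key=lambda t: len(coverage_by_test[t] & remaining))
        gained = coverage_by_test[best] & remaining
        if not gained:
            break  # remaining tasks are not coverable by these tests
        suite.append(best)
        remaining -= gained
    return suite
```

Greedy selection is not guaranteed to be minimal, but it is simple, fast, and usually produces a suite close to the smallest one that achieves the same coverage.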
When coverage and reviews are used on the same project, reviews can put less emphasis on
things that coverage is likely to find. For example, a review for dead code is unnecessary if
statement coverage is used, and manually checking that certain values of a variable can be
attained is not needed if the appropriate functional coverage model is used.
Coverage should not be used to judge if the “desirable” features are implemented.
11. System Testing
11.1 Objective
The purpose of system testing is to show that the product is inconsistent with its original
objectives. System testing is oriented toward a distinct class of errors and is measured with
respect to a distinct type of documentation in the development process. It may well partially
overlap in time with other testing processes. Care must be taken that no component or class
of error is missed, as this is the last phase of testing.
System Testing of software is divided into four major types:
• Functional System Testing;
• Regression Testing;
• Performance Testing; and
• Sanity Testing.
The system testing will be designed to test each functional group of software modules in a
sequence that is expected in production. In each functional testing area, the following will be
tested at a minimum:
• Initial inputs;
• Program modifications and functionality (as applicable);
• Table and ledger updates; and
• Error conditions.
System Test cases are designed by analyzing the objectives and then formulated by analyzing
the user documentation.
Different categories of test cases are given below:
• Facility Testing
• Volume Testing
• Stress Testing
• Usability Testing
• Security Testing
• Performance Testing
• Storage Testing
• Configuration Testing
• Compatibility/Conversion Testing
• Installability Testing
• Reliability Testing
• Recovery Testing
• Serviceability Testing
• Documentation Testing
• Procedure Testing
Time to Plan and Test: Test planning starts with the preparation of the Software Requirement
Specification. System testing starts after the completion of unit testing and integration testing.
Responsibilities:
Project Manager/ Project Leader : They are responsible for the following activities:
• Preparation of the test plan
• Obtaining existing test cases and checklists
• Getting the test plan and test cases reviewed and approved, using peer review or any other
suitable method
• Communication with the client
• Project tracking and reporting
Test Engineers: Their responsibilities include:
• Preparation of test cases
• Actual testing
• Recording actual test results on the checklist
• Detailed defect reporting
• Developing and reviewing the system test plan and test results
• Providing training for the system testers
• Designating a “final authority” to provide written sign-off and approval of all deliverables in
each implementation area. Once the person designated as the final authority approves the
deliverables in writing, they will be considered final and the Project Team will proceed with
migration to the user acceptance testing environment
• Executing the system tests, including all cycles and tests identified in the plans
• Resolving issues that arise during testing using a formal issue resolution process
• Documenting test results
11.5 Input
11.6 Deliverables:
11.7 Various Methods of System Testing
A functional test exercises a system application with regard to functional requirements with the
intent of discovering non-conformance with end-user requirements. This technique is central to
most software test programs. Its primary objective is to assess whether the application does what
it is supposed to do in accordance with specified requirements.
Test development considerations for functional tests include concentrating on test procedures
that exercise the functionality of the system based upon the project’s requirements. One
significant consideration arises when several test engineers perform test development and
execution simultaneously. When these engineers work independently while sharing the same
data or database, a method must be implemented to ensure that test engineer A does not modify
or affect the data being manipulated by test engineer B, potentially invalidating B’s test results.
Automated test procedures should also be organized so that effort is not duplicated.
Security Testing attempts to verify that the protection mechanisms built into a system will, in fact,
protect it from improper penetration. Security tests involve checks to verify the proper
performance of system access & data access mechanisms. Test procedures are devised that
attempt to subvert the program’s security checks. The test engineer uses security tests to validate
security levels & access limits and thereby verify compliance with specified security requirements
and any applicable security regulations.
a) Check that:
• the system is password protected
• users are granted only the necessary system privileges
b) Deliberately attempt to break the security mechanism by:
• accessing the files of another user
• breaking into the system authorization files
• accessing a resource while it is locked
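As an illustration of the checks above, the sketch below encodes a least-privilege security test in Python. The `AccessController` class and its privilege table are hypothetical stand-ins, not part of any real product:

```python
# A minimal sketch of an automated security check; the AccessController
# class and its privilege table are hypothetical, for illustration only.

class AccessController:
    """Toy access-control layer that grants only explicitly assigned privileges."""

    def __init__(self):
        self._privileges = {"alice": {"read"}, "admin": {"read", "write", "admin"}}

    def is_allowed(self, user, privilege):
        return privilege in self._privileges.get(user, set())


def test_security_checks():
    ac = AccessController()
    assert ac.is_allowed("alice", "read")          # granted privilege works
    assert not ac.is_allowed("alice", "write")     # escalation attempt refused
    assert not ac.is_allowed("mallory", "read")    # unknown user refused
    assert not ac.is_allowed("admin", "shutdown")  # unassigned privilege refused


test_security_checks()
```

The negative assertions are the heart of the test: each one is a deliberate attempt to subvert the security mechanism, as described above.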
Performance Testing is designed to test the run-time performance of software within the context of
an integrated system. It should be done throughout all steps in the testing process. Even at the unit
level, the performance of an individual module may be assessed.
Performance testing verifies that the system application meets specific performance efficiency
objectives. It can measure & report on such data as I/O rates, total number of I/O actions, average
database query response time & CPU utilization rates. The same tools used in stress testing can
generally be used in performance testing to allow for automatic checks of performance efficiency.
To conduct performance testing, the following performance objectives need to be defined:
• How many transactions per second need to be processed?
• How is a transaction defined?
• How many concurrent & total users are possible?
• Which protocols are supported?
• With which external data sources or systems does the application interact?
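The first of these objectives, transactions per second, can be probed with a simple throughput measurement. This is only a sketch; `process_transaction` is a placeholder for whatever unit of work the project defines as one transaction:

```python
import time

# Illustrative sketch only: process_transaction is a stand-in for whatever
# the project defines as one transaction.

def process_transaction():
    sum(range(1000))  # placeholder unit of work


def measure_tps(duration_s=0.2):
    """Count completed transactions in a fixed interval and return the rate."""
    count = 0
    deadline = time.perf_counter() + duration_s
    while time.perf_counter() < deadline:
        process_transaction()
        count += 1
    return count / duration_s


throughput = measure_tps()
```

The measured rate can then be compared against the transactions-per-second objective defined for the project.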
Many automated performance test tools permit virtual user testing, in which the test engineer can
simulate tens, hundreds or even thousands of users executing various test scripts.
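A minimal sketch of virtual user testing using threads is shown below; `run_script` stands in for a recorded test script that a load tool would drive on behalf of each simulated user:

```python
import threading
import time

# Sketch of virtual-user simulation; run_script stands in for a recorded
# test script that a commercial load tool would normally drive.

def run_script(user_id, results, duration_s=0.1):
    """One virtual user repeatedly executes a transaction for a short period."""
    count = 0
    deadline = time.perf_counter() + duration_s
    while time.perf_counter() < deadline:
        sum(range(500))  # placeholder transaction
        count += 1
    results[user_id] = count


def simulate_users(n_users=20):
    results = {}
    threads = [threading.Thread(target=run_script, args=(i, results))
               for i in range(n_users)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return results


per_user_counts = simulate_users()
```

Real load tools scale this idea to thousands of users across multiple driver machines; the thread-per-user sketch only illustrates the principle.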
In stress testing, the system is subjected to extreme and maximum loads to find out whether and
where the system breaks and to identify what breaks first. The system is asked to process a huge
amount of data or perform many function calls within a short period of time. It is important to
identify the weak points of the system. System requirements should define these thresholds and
describe the system’s response to an overload. Stress testing should then verify that the system
works properly when subjected to such an overload.
Examples of stress testing include running a client application continuously for many hours or
simulating a multi-user environment. Typical types of errors uncovered include memory leaks,
performance problems, locking problems, concurrency problems, excess consumption of
system resources and exhaustion of disk space.
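The idea of increasing the load until something breaks can be sketched as follows; the recursive `depth` routine is an invented example whose weak point turns out to be the interpreter's recursion limit:

```python
# Sketch of a stress test: drive a routine with exponentially increasing
# loads until it breaks, to identify what breaks first. The recursive
# depth() routine is a contrived example for illustration.

def depth(node):
    """Recursively measure the nesting depth of a list structure."""
    if isinstance(node, list):
        return 1 + depth(node[0])
    return 0


def find_breaking_point(max_power=16):
    """Return the first load (nesting depth) at which the routine fails."""
    for power in range(max_power):
        load = 2 ** power
        nested = None
        for _ in range(load):
            nested = [nested]
        try:
            depth(nested)
        except RecursionError:
            return load  # the weak point: the recursion limit breaks first
    return None


breaking_load = find_breaking_point()
```

A real stress test applies the same doubling strategy to transaction volumes, open files or concurrent sessions rather than recursion depth.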
Stress tools typically monitor resource usage, including usage of global memory, DOS memory,
free file handles, and disk space, and can identify trends in resource usage so as to detect
problem areas, such as memory leaks and excess consumption of system resources and disk
space.
The goal of all types of testing is the improvement of the eventual reliability of the program, but if
the program’s objectives contain specific statements about reliability, specific reliability tests
might be devised. For the objective of building highly reliable systems, the test effort should be
initiated during the development cycle’s requirements definition phase, when requirements are
developed & refined.
Usability Testing involves having the users work with the product & observing their responses to
it. It should be done as early as possible in the development life cycle. The real customer is
involved as early as possible. The existence of the functional design specification is the
prerequisite for starting.
Usability testing is the process of attempting to identify discrepancies between the user interface
of a product and the human engineering requirements of its potential users. Usability testing
collects information on specific issues from the intended users. It is often an evaluation of a
product’s presentation rather than its functionality.
Usability characteristics that can be tested include the following:
Accessibility, Responsiveness, Efficiency, Comprehensibility.
The testing activities here basically involve testing the environment set-up activities as well as the
calibration of the test tools to match the specific environment.
When checking the set-up activities, we need to test the set-up script (if any) and the integration &
validation of resources: hardware, software, network resources and databases. The objective
should be to ensure the complete functionality of the production application & to enable
performance analysis.
It is also necessary to check for stress testing requirements, where we require the use of multiple
workstations to run multiple test procedures simultaneously.
11.7.8 Storage Testing
Products have storage specifications: for instance, the amounts of main & secondary
storage used by the program & the sizes of required temporary or spill files.
Checks should be made to monitor memory & backing storage occupancy & to take the necessary
measurements.
Installation testing involves the testing of the installation procedures. Its purpose is not to find
software errors, but to find installation errors i.e. to locate any errors made during the installation
process.
Installation tests should be developed by the organization that produced the system, delivered as
part of the system, and run after the system is installed. Among other things, the test cases might
check that a compatible set of options has been created, that the installed files have the necessary
contents, and that the hardware configuration is appropriate.
The system must have recovery objectives stating how the system is to recover from hardware
failures and data errors. Such failures and errors can be injected into the system to analyze the
system’s reaction. A system must be fault tolerant; system failures must be corrected within a
specified period of time.
This involves subjecting the program to heavy volumes of data. For instance, a compiler would
be fed an absurdly large source program to compile. A linkage editor might be fed a program
containing thousands of modules. If a program is supposed to handle files spanning multiple
volumes, enough data are created to cause the program to switch from one volume to another.
Thus, the purpose of volume testing is to show that the program cannot handle the volume of
data specified in its objectives.
Error Guessing is an ad hoc approach, based on intuition & experience, to identifying tests likely to
expose errors. The basic idea is to make a list of possible errors or error-prone situations & then
develop tests based on that list.
For instance, the presence of the value 0 in the program’s input or output is an error prone
situation. Therefore, one might write test cases for which particular input values have a 0 value
and for which particular output values are forced to 0.
Also, where a variable number of inputs or outputs can be present, the cases of “none” and “one”
are error-prone situations. Another idea is to identify test cases associated with assumptions the
programmer might have made when reading the specification, i.e. things that were omitted from
the specification.
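The error-guessing ideas above translate directly into test cases. In this sketch, `average` is a hypothetical function under test, defined here only to keep the example self-contained:

```python
# Sketch of error-guessing test cases; "average" is a hypothetical function
# under test, defined here only to make the example self-contained.

def average(values):
    """Arithmetic mean, defined as 0.0 for an empty input."""
    return sum(values) / len(values) if values else 0.0


# Cases drawn from the error-prone situations above: the value 0,
# and the "none" and "one" cases for a variable number of inputs.
guessed_cases = [
    ([], 0.0),          # "none": empty input
    ([7.0], 7.0),       # "one": a single element
    ([0.0, 0.0], 0.0),  # value 0 appearing in both input and output
]

for data, expected in guessed_cases:
    assert average(data) == expected
```

Note how the empty-input case forces a design decision (here, returning 0.0) that a naive implementation would miss, raising a division-by-zero error instead.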
Many programs developed are often replacements for some deficient system, either a data
processing or manual system. Programs often have specific objectives concerning their
compatibility with, and conversion procedures from, the existing system. Thus the objective of
Compatibility testing is to determine whether the compatibility objectives of the program have
been met & whether conversion procedures work.
11.7.14 User Interface testing
The User Interface is checked against the design or requirement specification. The user interface
is tested as per the User manual, on-line help and SRS.
The test cases should be built for the interface style, help facilities & the error-handling protocol.
Issues such as the number of actions required per task & whether they are easy to remember &
invoke, how self-explanatory & clear the icons are, and how easy it is to learn the basic system
operations also need to be evaluated while conducting User Interface Testing.
The acceptance test phase includes testing performed for or by end users of the software
product. Its purpose is to ensure that end users are satisfied with the functionality & performance
of the software system. The acceptance test phase begins only after the successful conclusion of
system testing.
Commercial software products do not generally undergo customer acceptance testing, but do
often allow a large number of users to retrieve an early copy of the software, so that they can
provide feedback as a part of beta test.
Limit Testing implies testing with values beyond the specified limits, e.g. for memory, number of
users, number of open files etc.
The test cases should focus on testing with out-of-range values, i.e. values that exceed those the
specifications say the system can handle. Such cases should be included within every stage of the
testing life cycle.
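As a sketch of limit testing, the hypothetical `SessionManager` below has a documented limit of 10 concurrent users; the test exercises the system at the limit and one step beyond it:

```python
# Hypothetical sketch: a session manager with a specified limit of 10
# concurrent users; a limit test probes values at and beyond that limit.

MAX_USERS = 10


class SessionManager:
    def __init__(self):
        self.sessions = set()

    def login(self, user):
        if len(self.sessions) >= MAX_USERS:
            raise RuntimeError("user limit exceeded")
        self.sessions.add(user)


mgr = SessionManager()
for i in range(MAX_USERS):       # at the limit: every login must succeed
    mgr.login(f"user{i}")

try:                             # one beyond the limit: must fail cleanly
    mgr.login("user10")
    limit_enforced = False
except RuntimeError:
    limit_enforced = True
assert limit_enforced
```

The important point is the clean failure: the system should refuse the out-of-range value with a defined error, not crash or silently misbehave.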
This testing considers whether the software, in the case of a system error, displays an appropriate
system error message & thereafter provides a clear exit.
All possibilities of a system error cropping up should be tested, except those causing abnormal
termination.
The system developed should have consistency throughout with respect to both data & modules.
Interrelated modules should use the same set of data, retrieving/writing the data to the same
common place, & thus reflecting uniformity over the entire system. Test cases should therefore be
developed involving sample data & methods that provide insight into whether the system ensures
consistency.
Help information should be adequate and provide useful information. The contents should cover
all significant areas of the system on which users might require help. The flow of the help
information should be sequential, and the links embedded in the document must be relevant
& must be tested to verify that each actually works. The contents should also be correct, &
tested for clarity & completeness.
11.7.20 Manual procedure testing
In this method of test, the system is tested for manual device requirements and handling. For
example, this could relate to tape loading, manual switching of a device etc. The system should
recognise all such devices, successfully perform the task of loading/unloading, and work smoothly
with them. Also, any prescribed human procedures, such as those to be performed by the system
operator, database administrator or terminal user, should be tested during the system test.
User Information Testing is concerned with the adequacy & correctness of the user
documentation. It should be determined whether the user manual gives a proper representation
of the system. It should also be tested for clarity & for whether it is easy to look up any
information related to the system.
12 Testing GUI Applications
12.1 Introduction
The most obvious characteristic of GUI applications is the fact that the GUI allows multiple
windows to be displayed at the same time. Displayed windows are ‘owned’ by applications and of
course, there may be more than one application active at the same time. Access to features of
the system is provided through mechanisms such as menu bars, buttons and keyboard shortcuts. GUIs
free the user to access system functionality in their preferred way. They have permanent access
to all features and may use the mouse, the keyboard or a combination of both to have a more
natural dialogue with the system.
12.1.1 GUIs as universal client
GUIs have become the established alternative to traditional forms-based user interfaces. GUIs
are the assumed user interface for virtually all systems development using modern technologies.
We can list some of the multifarious errors that can occur in a client/server-based application that
we might reasonably expect to be able to test for using the GUI. Many of these errors relate to
the GUI, others relate to the underlying functionality or interfaces between the GUI application
and other client/server components.
• Data validation
• Incorrect field defaults
• Mishandling of server process failures
• Mandatory fields, not mandatory
• Wrong fields retrieved by queries
• Incorrect search criteria
• Field order
• Multiple database rows returned, single row expected
• Currency of data on screens
• Window object/DB field correspondence
• Correct window modality?
• Window system commands not available/don’t work
• Control state alignment with state of data in window?
• Focus on objects needing it?
• Menu options align with state of data or application mode?
• Action of menu commands aligns with state of data in window
• Synchronisation of window object content
• State of controls aligns with state of data in window?
By targeting the different categories of errors in this list, we can derive a set of test types, each
focusing on a single category of errors while together providing coverage across all error types.
The four stages are summarised in Table 2 below. We can map the four test stages to traditional
test stages as follows:
• Low level - maps to a unit test stage.
• Application - maps to either a unit test or functional system test stage.
• Integration - maps to a functional system test stage.
• Non-functional - maps to non-functional system test stage.
The mappings described above are approximate. Clearly there are occasions when some ‘GUI
integration testing’ can be performed as part of a unit test. The test types in ‘GUI application
testing’ are equally suitable in unit or system testing. In applying the proposed GUI test types, the
objective of each test stage, the capabilities of developers and testers, the availability of test
environment and tools all need to be taken into consideration before deciding whether and where
each GUI test type is implemented in your test process.
The GUI test types alone do not constitute a complete set of tests to be applied to a system. We
have not included any code-based or structural testing, nor have we considered the need to
conduct other integration tests or non-functional tests of performance, reliability and so on. Your
test strategy should address all these issues.
Stage           Test Types
Low Level       Checklist testing, Navigation
Application     Equivalence Partitioning, Boundary Values, Decision Tables, State Transition Testing
Integration     Desktop Integration, C/S Communications, Synchronization
Non-Functional  Soak testing, Compatibility testing, Platform/environment
Checklists are a straightforward way of documenting simple re-usable tests. The types of checks
that are best documented in this way are:
• Programming/GUI standards covering standard features such as: window size, positioning,
type (modal/non-modal), standard system commands/buttons (close, minimise, maximise
etc.)
• Application standards or conventions such as: standard OK, cancel and continue buttons;
appearance, colour, size and location; consistent use of buttons or controls; object/field
labelling using standard/consistent text.
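A checklist of this kind can be encoded as data and replayed mechanically. In the sketch below, the window attributes and the checks themselves are invented for illustration; in a real tool the dict would be replaced by queries against the GUI toolkit:

```python
# Sketch: a window-standards checklist encoded as data. The window
# attributes and rules here are hypothetical, for illustration only.

window = {"width": 640, "height": 480, "modal": False,
          "buttons": ["OK", "Cancel", "Help"]}

checklist = [
    ("window meets minimum size", lambda w: w["width"] >= 320 and w["height"] >= 240),
    ("non-modal by default",      lambda w: not w["modal"]),
    ("standard buttons present",  lambda w: {"OK", "Cancel"} <= set(w["buttons"])),
]

# Replay the checklist and collect the names of any failed checks.
failures = [name for name, check in checklist if not check(window)]
assert failures == []
```

Because the checks are data rather than code, the same checklist can be run unchanged against every window in the application.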
In the context of a GUI, we can view navigation tests as a form of integration testing. To conduct
meaningful navigation tests the following are required to be in place:
• An application backbone with at least the required menu options and call mechanisms to call
the window under test.
• Windows that can invoke the window under test.
• Windows that are called by the window under test.
Obviously, if any of the above components are not available, stubs and/or drivers will be
necessary to implement navigation tests. If we assume all required components are available,
what tests should we implement? We can split the task into steps:
• For every window, identify all the legitimate calls to the window that the application should
allow and create test cases for each call.
• Identify all the legitimate calls from the window to other features that the application should
allow and create test cases for each call.
• Identify reversible calls, i.e. where closing a called window should return to the ‘calling’
window and create a test case for each.
• Identify irreversible calls i.e. where the calling window closes before the called window
appears.
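The derivation steps above can be sketched as a small script that walks a hypothetical window call graph, emitting one test case per legitimate call plus one per reversible call:

```python
# Sketch: derive navigation test cases from a hypothetical window call
# graph. Keys are windows; values are the windows they may legitimately call.

calls = {
    "Main":     ["Search", "Settings"],
    "Search":   ["Results"],
    "Results":  [],
    "Settings": [],
}

# One test case per legitimate call, plus one per reversible call
# (closing the called window should return to the calling window).
test_cases = []
for caller, callees in calls.items():
    for callee in callees:
        test_cases.append(f"open {callee} from {caller}")
        test_cases.append(f"close {callee}, verify return to {caller}")
```

With three legitimate calls in the graph this yields six test cases; each additional means of navigation (menu, button, keyboard shortcut) would multiply the set further, as noted below.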
There may be multiple ways of executing a call to another window i.e. menus, buttons, keyboard
commands. In this circumstance, consider creating one test case for each valid path by each
available means of navigation. Note that navigation tests reflect only a part of the full integration
testing that should be undertaken. These tests constitute the ‘visible’ integration testing of the
GUI components that a ‘black box’ tester should undertake.
12.5.3 Application Testing
Application testing is the testing that would normally be undertaken on a forms-based application.
This testing focuses very much on the behaviour of the objects within windows. Some guidelines
for their use with GUI windows are presented in the table below:
Equivalence Partitions and Boundary Value Analysis – input validation; simple rule-based processing
Decision Tables – complex logic or rule-based processing
State Transition Testing – applications with modes or states where processing behaviour is affected; windows where there are dependencies between objects in the window
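For example, equivalence partitioning and boundary value analysis applied to a hypothetical input field specified as valid in the range 18 to 65 might yield the following cases (a sketch, not a real application):

```python
# Sketch of equivalence partitioning and boundary value analysis for a
# hypothetical field specified as valid in the range 18..65.

def validate_age(age):
    return 18 <= age <= 65


# One representative per partition, plus values on and around each boundary.
cases = {
    17: False,  # just below the lower boundary
    18: True,   # on the lower boundary
    40: True,   # representative of the valid partition
    65: True,   # on the upper boundary
    66: False,  # just above the upper boundary
}

for value, expected in cases.items():
    assert validate_age(value) is expected
```

Five cases cover all three partitions and both boundaries, which is where off-by-one validation errors in window input fields typically hide.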
12.5.4 Desktop Integration Testing
Client/server systems assume a ‘component based’ architecture, so they often treat other desktop
products (such as a word processor, spreadsheet, electronic mail or Internet-based applications)
as components, making use of their features by calling them directly or through specialist
middleware.
We define desktop integration as the integration and testing of a client application with these
other components.
• The tester needs to know what interfaces exist, what mechanisms are used by these
interfaces and how the interface can be exercised by using the application user interface.
To derive a list of test cases the tester needs to ask a series of questions for each known
interface:
• Is there a dialogue between the application and interfacing product (i.e. a sequence of stages
with different message types to test individually) or is it a direct call made once only?
• Is information passed in both directions across the interface?
• Is the call to the interfacing product context sensitive?
• Are there different message types? If so, how can these be varied?
There may be circumstances in the application under test where there are dependencies
between different features. Examples of synchronisation are when:
• The application has different modes - if a particular window is open, then certain menu
options become available (or unavailable).
• If the data in the database changes and these changes are notified to the application by an
unsolicited event to update displayed windows.
• If data on a visible window is changed and makes data on another displayed window
inconsistent.
In some circumstances, there may be reciprocity between windows. For example, changes on
window A trigger changes in window B and the reverse effect also applies (changes in window B
trigger changes on window A).
In the case of displayed data, there may be other windows that display the same or similar data
which either cannot be displayed simultaneously, or should not change for some reason. These
situations should be considered also. To derive synchronisation test cases:
• Prepare one test case for every window object affected by a change or unsolicited event and
one test case for reciprocal situations.
• Prepare one test case for every window object that must not be affected - but might be.
The tests described in the previous sections are functional tests. These tests are adequate for
demonstrating that the software meets its requirements and does not fail. However, GUI applications
also have non-functional modes of failure. We propose three additional GUI test types (which are
likely to be automated).
12.6.1 Soak Testing
Soak tests exercise system transactions continuously for an extended period in order to flush out
memory leaks.
These tests are normally conducted using an automated tool.
Selected transactions are repeatedly executed and machine resources on the client (or the
server) monitored to identify resources that are being allocated but not returned by the
application code.
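A soak test can be sketched with Python's standard `tracemalloc` module: the transaction is repeated many times and the memory still allocated afterwards is compared for a leaking and a non-leaking variant. `transaction` here is a deliberately contrived stand-in:

```python
import tracemalloc

# Sketch of a soak test: repeat a transaction many times and check that
# allocated memory does not grow without bound. "transaction" is a
# contrived stand-in with a switchable, deliberate leak.

cache = []


def transaction(leak=False):
    data = list(range(100))
    if leak:
        cache.append(data)   # deliberately never released


def soak(iterations, leak):
    """Run the transaction repeatedly; return bytes still allocated afterwards."""
    tracemalloc.start()
    before, _ = tracemalloc.get_traced_memory()
    for _ in range(iterations):
        transaction(leak)
    after, _ = tracemalloc.get_traced_memory()
    tracemalloc.stop()
    return after - before


leaky_growth = soak(2000, leak=True)
clean_growth = soak(2000, leak=False)
assert leaky_growth > clean_growth   # the leaking variant retains memory
```

Commercial soak tools apply the same principle at process level, monitoring handles, heap and disk space while an automated script drives the application for hours.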
12.6.2 Compatibility Testing
Compatibility Tests are (usually) automated tests that aim to demonstrate that resources that are
shared with other desktop products are not locked unnecessarily causing the system under test
or the other products to fail.
These tests normally execute a selected set of transactions in the system under test and then
switch to exercising other desktop products in turn and doing this repeatedly over an extended
period.
In some environments, the platform upon which the developed GUI application is deployed may
not be under the control of the developers. PC end-users may have a variety of hardware types
such as 486 and Pentium machines, various video drivers, Microsoft Windows 3.1, 95 and NT.
Applications may be designed to operate on a variety of platforms; you may have to execute tests
on these various configurations to ensure that, when the software is implemented, it continues to
function as designed. In this circumstance, the testing requirement is for a repeatable regression
test to be executed on a variety of platforms and configurations. Again, the requirement for
automated support is clear so we would normally use a tool to execute these tests on each of the
platforms and configurations as required.
Automating test execution is normally justified based on the need to conduct functional
regression tests. In organisations currently performing regression test manually, this case is easy
to make - the tool will save testers time. However, most organisations do not conduct formal
regression tests, and often compensate for this ‘sub-consciously’ by starting to test late in the
project or by executing tests in which there is a large amount of duplication.
In this situation, buying a tool to perform regression tests will not save time, because no time is
being spent on regression testing in the first place. In organisations where development follows a
RAD approach or where development is chaotic, regression testing is difficult to implement at all -
software products may never be stable enough for a regression test to mature and be of value.
Usually, the cost of developing and maintaining automated tests exceeds the value of finding
regression errors.
We propose that by adopting a systematic approach to testing GUIs and using tools selectively
for specific types of tests, tools can be used to find errors during the early test stages. That is, we
can use tools to find errors pro-actively rather than repeating tests that didn’t find bugs first time
round to search for regression errors late in a project.
12.7.2 Automating GUI Tests
Throughout the discussion of the various test types in the previous chapter, we have assumed
that by designing tests with specific goals in mind, we will be in a better position to make
successful choices on whether we automate tests or continue to execute them manually. Based
on our experience of preparing automated tests and helping client organisations to implement
GUI test running tools we offer some general recommendations concerning GUI test automation
below.
Pareto law
We expect 80% of the benefit to derive from the automation of 20% of the tests.
Don’t waste time scripting low volume complex scripts at the expense of high volume simple
ones.
Hybrid Approach
Consider using the tools to perform navigation and data entry prior to manual test execution.
Consider using the tool for test running, but perform comparisons manually or ‘off-line’.
Coded scripts
These work best for navigation and checklist-type scripts.
Use where loops and case statements in code leverage simple scripts.
Are relatively easy to maintain as regression tests.
Recorded Scripts
Need to be customised to make repeatable.
Sensitive to changes in the user interface.
Test Integration
Automated scripts need to be integrated into some form of test harness.
Proprietary test harnesses are usually crude so custom-built harnesses are required.
Migrating Manual Test Scripts
Manual scripts can serve as documentation for automated scripts.
Delay migration of manual scripts until the software is stable, and then reuse them for regression
tests.
Non-Functional Tests
Any script can be reused for soak tests, but they must exercise the functionality of concern.
Tests of interfaces to desktop products and server processes are high on the list of tests to
automate.
Instrument these scripts to take response time measurements and re-use for performance
testing.
The following sets out a test automation regime that fits the GUI test process; the
manual-versus-automated guidance is a rough guideline providing a broad indication of which
tests to select for automation.
Test Types – Manual or Automated?
• Checklist testing – manual execution of tests of application conventions.
• Navigation – automated execution.
• Equivalence Partitioning, Boundary Values, Decision Tables, State Transition Testing –
automated execution of large numbers of simple tests of the same functionality or process,
e.g. the 256 combinations indicated by a decision table; manual execution of low volume or
complex tests.
• Desktop Integration, C/S Communications – automated execution of repeated tests of simple
transactions.
12.7.3 Criteria for the Selection of GUI tool
• Cross platform availability
• Supporting the underlying test methodology e.g. Bitmap comparison, Record Playback etc.
• Functionality
• Ease of use
• Support for distributed testing
• Style and power of scripting language
• Test script development environment
• Non standard window handling capability
• Availability of technical support
• Low price
• Check to make sure that the command buttons are greyed out when not in use.
13 Client/Server Testing
The distributed nature of client/server systems poses a set of unique problems for software testers,
with the following areas in focus:
• Client GUI considerations
• Target environment and platform diversity considerations
• Distributed database considerations
• Distributed processing considerations
• Non-robust target environment
• Non-linear performance relationships
The strategy and tactics associated with c/s testing must be designed in a manner that allows
each of these issues to be addressed.
13.2 C/S Testing Tactics
Object-oriented testing techniques can be used even if the system is not implemented with c/s
technology. The replicated data and processes can be organised into classes of objects that
share the same set of properties. Once test cases have been derived for a class of objects, those
test cases should be broadly applicable to all instances of the class. The OO point of view is
particularly valuable when the GUI of the c/s system is under test, since a GUI is inherently
object-oriented.
The performance of C/S systems must also be tested, owing to the following issues:
• Large volumes of network traffic caused by ‘intelligent clients’
• An increased number of architectural layers
• Delays between distributed processes communicating across networks
• The increased number of suppliers of architectural components who must be dealt with.
The execution of a performance test must be automated. The main tools for the test process
are:
• Test database creation / maintenance – create the large volume of data on the database
• Load generation – tools can be of two types: either a test running tool which drives the client
application, or a test driver which simulates client workstations
• Application running tool – test running tool which drives the application under test and
records response time measurements
• Resource monitoring – utilities which can monitor and log client and server system resources,
network traffic, database activity.
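The first of these tools, test database creation, can be sketched with the standard `sqlite3` module; the `orders` table and row count below are invented for illustration:

```python
import random
import sqlite3

# Sketch of the "test database creation" tool: populate a throwaway
# SQLite database with a large volume of synthetic rows before a
# performance run. The orders table is hypothetical.

def build_test_db(rows=10_000):
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, amount REAL)")
    conn.executemany(
        "INSERT INTO orders (amount) VALUES (?)",
        ((random.uniform(1, 500),) for _ in range(rows)),
    )
    conn.commit()
    return conn


conn = build_test_db()
count, = conn.execute("SELECT COUNT(*) FROM orders").fetchone()
assert count == 10_000
```

In a real performance test the same generator would target the production database engine, with volumes matching the sizing stated in the performance objectives.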
14 Web Testing
While many of the traditional concepts of software testing still hold true, Web and e-Business
applications have a different risk profile to other, more mature environments. Gone are the days
of measuring release cycles in months or years; instead, Web applications now have release
cycles often measured in days or even hours! A typical Web tester now has to deal with shorter
release cycles, constantly changing technology, fewer mature testing tools, and an anticipated
user base that may run into millions on the first day of a site’s launch.
The most crucial aspect of Web site testing is the test environment. Testing a Web site is
challenging; breaking the testing tasks up according to the tiers of the Windows DNA
architecture helps to reduce the complexity of the task.
This is the first and a very important phase of Web testing, and it should be clearly mentioned in
your “Test Plan”. Whenever you test a Website, make sure that it follows accepted standards.
The following features should be avoided and should not be present in a standard
Website:
14.1.1 Frames
Splitting a page into frames is very confusing for users since frames break the fundamental user
model of the web page. All of a sudden, you cannot bookmark the current page and return to it,
URLs stop working, and printouts become difficult. Even worse, the predictability of user actions
goes out the door: who knows what information will appear where when you click on a link?
14.1.2 Bleeding-Edge Technology
Don't try to attract users to your site by bragging about use of the latest web technology. The site
may attract a few nerds, but mainstream users will care more about useful content and the site’s
ability to offer good customer service. Using the latest and greatest before it is even out of beta is
a sure way to discourage users: if their system crashes while visiting your site, you can bet that
many of them will not be back. Unless you are in the business of selling Internet products or
services, it is better to wait until some experience has been gained with respect to the
appropriate ways of using new techniques.
14.1.3 Scrolling Text, Marquees & Constantly Running Animations
Never include page elements that move incessantly. Moving images have an overpowering effect
on the human peripheral vision. Give your user some peace and quiet to actually read the text!
14.1.4 Long Scrolling Pages
Only 10% of users scroll beyond the information that is visible on the screen when a page comes
up. Test that all critical content and navigation options appear in the top part of the page.
14.1.5 Complex URLs
Users often try to decode the URLs of pages to infer the structure of web sites. Users do this
because of the horrifying lack of support for navigation and sense of location
in current web browsers. Thus, a URL should contain human-readable directory and file names
that reflect the nature of the information space.
Also, users sometimes need to type in a URL, so try to minimize the risk of typing by using short
names with all lower-case characters and no special characters (many people don't know how to
type a ~).
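The URL guidelines above can be partly automated with a small check. This is an illustrative sketch only: `url_problems` is a hypothetical helper, and the character list and length limit below are assumptions, not standards.

```python
import re
from urllib.parse import urlparse

def url_problems(url):
    """Return a list of usability problems with a URL, following the
    guidelines above (lower-case, no hard-to-type characters)."""
    problems = []
    path = urlparse(url).path
    if path != path.lower():
        problems.append("mixed case in path")
    # ~, %, spaces and similar characters are hard to type or dictate
    if re.search(r"[~%!$&' ()*+,;=]", path):
        problems.append("special characters in path")
    if len(url) > 78:  # roughly one line of text; an arbitrary limit
        problems.append("too long to share easily")
    return problems
```

A tester could run such a check over every URL reported by a site crawl and review the flagged entries by hand.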
14.1.6 Orphan Pages
Make sure that all pages include a clear indication of what web site they belong to since users
may access pages directly without coming in through your home page. For the same reason,
every page should have a link up to your home page as well as some indication of where they fit
within the structure of your information space.
14.1.7 Non-standard Link Colors
By standard convention, links to pages that the user has not yet visited are blue; links to
previously visited pages are purple or red. Don't change these colors, since the ability to see
which links have been followed is one of the few navigational aids that is standard in most web
browsers. Consistency is key to teaching users what the link colors mean.
14.1.8 Outdated Information
Many old pages keep their relevance and should be linked into the new pages. Of course, some
pages are better off being removed completely from the server after their expiration date.
14.1.9 Lack of Navigation Support
Don't assume that users know as much about your site as you do. They always have difficulty
finding information, so they need support in the form of a strong sense of structure and place.
The Web site must have a complete site map so that users can see where they are and can
easily move to a different page with its help. The site must also contain a good search feature,
since even the best navigation support will never be enough.
14.1.10 Overly Long Download Times
If the Web site contains a link for download, make sure that the download time does not exceed
10 seconds. Traditional human factors guidelines indicate 10 seconds as the maximum response
time before users lose interest. On the web, users have been trained to endure so much
suffering that it may be acceptable to increase this limit to 15 seconds for a few pages.
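A test harness can encode these thresholds directly. The sketch below is illustrative: the 10- and 15-second limits come from the guideline above, while the function names are assumptions. `timed_fetch` needs live network access and is shown only for completeness.

```python
import time
from urllib.request import urlopen

SLOW = 10.0       # seconds: users lose interest beyond this
TOLERABLE = 15.0  # relaxed limit, acceptable for a few pages

def classify_load_time(seconds):
    """Map a measured download time to a verdict per the guideline."""
    if seconds <= SLOW:
        return "ok"
    if seconds <= TOLERABLE:
        return "borderline"
    return "too slow"

def timed_fetch(url):
    """Measure one download (requires network access)."""
    start = time.monotonic()
    with urlopen(url) as response:
        response.read()
    return time.monotonic() - start
```

In a real run, `classify_load_time(timed_fetch(url))` would be recorded for every page under test.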
14.2 Testing of User-Friendliness
This is the second phase of testing. As the name suggests, this is testing for user-friendliness:
how friendly is the Web site to the end user? Long ago, "user-friendly" software was any
application that had a menu or allowed a user to correct an input error. Today, usability
engineering is a distinct professional discipline in its own right, where researchers and
practitioners strive to develop and implement techniques for making software systems more
user-friendly.
In the meantime, the sustained growth of the World Wide Web has resulted in the creation of
literally millions of Web sites -- only a small percentage of which are user-friendly. Fortunately,
many of the principles from usability engineering can be easily applied (or adapted) to Web
development.
The site should be very user friendly, so that any end user can easily work with it. There should
be a proper guide to using the site. Identify which type of user is going to use the site, with which
type of connection (modem, leased line, etc.), and test accordingly.
Building a User friendly Web site is a worthwhile endeavor in its own right. After all, satisfied
Users are the keys to a truly successful Web site. But there are also certain fringe benefits that
go along with genuine user-friendliness. User-friendly Web sites should also be:
• Browser-friendly
• Bandwidth-friendly
• Server-friendly
Apart from this, the loading speed of the site should be tested properly, because speed is the
single biggest determinant of user satisfaction. Small, fast-loading pages make for more, and
happier, visitors at your Web site. Large, slow-loading pages simply invite your visitors to browse
elsewhere.
A user-friendly Web site understands who its intended users are, and it targets them directly, so
the objective of Web testing should be the user's perspective. Using your users' language,
without apology or pretense, helps them to feel "at home" at your Web site.
Always remember that you are your users' host on their virtual tour of your Web space. Be
conversational. Be polite. Use complete sentences. And don't nag them about their choice of
Web browser.
14.2.2 Checklist of User-friendliness:
A. Accessibility
B. Consistency
C. Navigation
• Does the site make effective use of hyperlinks to tie related items together?
• Are there dead links?
• Is page length appropriate to site content?
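The navigation checks above, particularly the dead-link question, can be partly automated. The sketch below uses only the standard library and assumes a site whose internal pages are known in advance; it is an illustration, not a complete link crawler.

```python
from html.parser import HTMLParser

class LinkCollector(HTMLParser):
    """Collect href targets from anchor tags on one page."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def dead_internal_links(page_html, known_pages):
    """Return internal links that point to no known page."""
    parser = LinkCollector()
    parser.feed(page_html)
    return [link for link in parser.links
            if not link.startswith("http") and link not in known_pages]
```

Running this over every page, with `known_pages` built from the site's file list, flags candidate dead links for manual confirmation.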
E. Visual Presentation
The site should look good and give a good feeling to the user. Every screen should render
properly. The site should have:
The visual appearance of a Web site is important to maintain repeat visits. Although the home
page of a Web site is the "breadwinner," catalog pages cannot be ignored. Regardless of the
developer's choice for color, font, or graphics, the tester needs to test for the appearance of the
site thoroughly and try to bring out problem areas.
Tests required to check the visual appeal of a site are described below.
14.3.1.1 Fonts
There are a number of different fonts available in HTML editors these days. However, many of
these fonts may not display on all browsers, especially on older versions, or they may display as
unreadable characters. Therefore, it is important to test fonts for browser and version
compatibility.
14.3.1.2 Font Size
Test for consistency of font size throughout the Web site. The standard font size for the Web is
18 to 24 for headers and 10 to 14 for body text.
14.3.1.3 Colors
Testing of color is again an important part of Web testing. Test the combinations of foreground
and background colors on all pages of the Web site.
14.3.1.4 Graphics
Fewer graphics on a Web page aid in faster downloads. As far as possible, thumbnails should
replace full-size photographs. The tester must test the download time of graphics-intensive
pages wherever a download link exists.
The home page requires special attention because it is the first page that the site visitor sees.
Use the spelling checker to check the spelling throughout the site. Sometimes there are errors
that may not be checked by the spelling checker, such as "there" and "their."
Finally, make sure to proofread the entire site to check the grammar.
• Test the general look and feel of the entire window.
• Test the complete functionality of the control menu of the window (minimizing, maximizing,
closing, and double-clicking on the control menu should all work properly).
• Test the spellings of all the text displayed in the window, such as the window caption, status
bar options, field prompts, pop-up text, and error messages.
• Test the colors, fonts, and font widths of the entire window. These should be the standard
ones for field prompts and displayed text.
• Test each toolbar and menu item for navigation using the mouse and keyboard.
• Test window navigation using the mouse and keyboard.
• Test to make sure that proper format masks are used. For example, all drop-down boxes
should be properly sorted. The date entry should also be properly formatted.
• Test that the colors of the field prompts and field backgrounds conform to the standard in read-only mode.
• Test the Vertical / Horizontal Scroll Bar. These should appear only if required.
• Test the various controls on the window. The control should be aligned properly.
• Test the resizing of Window.
• Test the alignment of the field. All character or alphanumeric fields should be left aligned and
all the numeric fields should be right aligned.
• Check for the display of defaults if there are any.
• Test all the shortcut keys. These all should be well defined and work properly.
• Test the hotkeys of the entire window. Every menu command should have a properly defined
hotkey.
• Test for the duplication of Hotkeys on the same Window.
• Test that Alt + Tab is working properly.
• Test that Alt + F4 is working properly.
• Test the tab order. It should be from top left to bottom right. Also, read-only/disabled fields
should be skipped in the TAB sequence.
• Test the positioning of the Cursor. The cursor should be positioned on the first input field (if
any) when the window is opened.
• Make sure that if any default button is specified, then it should work properly.
• Test and validate the behaviour of each control, such as push button, radio button, list box
etc.
• Test to make sure that the window is modal where this is required. A modal window prevents
the user from accessing other functions while it is active.
• Test to make sure that multiple windows can be opened at the same time where the design
allows it.
• Make sure that there is a Help menu.
• Check to make sure that the command buttons are greyed out when not in use.
Web sites that rely on a heavy volume of trading on the Internet need to make sure that their Web
servers have a very high uptime. To prevent breakdown and to offload traffic from a server at
peak times, entrepreneurs must invest in additional Web servers. The ability of a Web server to
handle a heavy load at peak hours depends on the network speed and on the server's
processing power, memory, and storage space. The hardware component of the Web server is
most vulnerable at peak hours.
The capacity of the server is measured by the number of simultaneous users it can successfully
handle. Excessive load on the Web server causes its performance to degrade dramatically until
the load is reduced. The objective of this load testing is to determine the optimum number of
simultaneous users.
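A minimal load-measurement harness might look like the following sketch. Here `request_fn` stands in for whatever actually exercises the site (an HTTP fetch, a transaction); the function name and parameters are assumptions for illustration, not a standard API.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def measure_load(request_fn, n_users, requests_per_user=5):
    """Drive request_fn with n_users concurrent workers and return the
    average response time in seconds."""
    def worker(_):
        timings = []
        for _ in range(requests_per_user):
            start = time.monotonic()
            request_fn()
            timings.append(time.monotonic() - start)
        return sum(timings) / len(timings)

    with ThreadPoolExecutor(max_workers=n_users) as pool:
        per_user = list(pool.map(worker, range(n_users)))
    return sum(per_user) / len(per_user)
```

Running this at increasing values of `n_users`, and watching for the point where average response time degrades sharply, approximates the optimum number of simultaneous users.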
Most Web sites typically store user profiles, catalogs, shopping carts, and order information in a
database. Since the database stores a lot of information about the site and its users, it must be
tested thoroughly. The purpose of database testing is to determine how well the database meets
requirements.
The search option is one of the most frequently used functions of online databases. Users
generally use search results to go directly to another page, instead of navigating step by step, in
order to save time and effort.
The search option of many Web sites does not work properly, which annoys users. To keep
them happy, make sure that the search option of your Web site works properly and displays the
proper results.
A team of people that are not a part of the development team should carry out testing for Search
relevance. This team assumes the role of the online customer and tries out random Search
options with different keywords. The Search results are recorded by the percentage of relevance
to the keyword. At the end of the testing process, the team comes up with a series of
recommendations, which can be incorporated into the database search options.
The query response time is essential in online transactions. The turnaround time for responding
to queries in a database must be short. The results from this testing may help to identify
problems, such as bottlenecks in the network, specific queries, the database structure, or the
hardware.
A database stores important data: catalogs, pricing, shipping tables, tax tables, order
information, and customer information. Testing must verify the correctness of the stored data;
because data changes over time, this testing should be performed on a regular basis.
Prepare the following checklist for the proper testing of Data Integrity of a Web Site:
• From the list of functionality provided by the development team, test the creation,
modification, and deletion of data in tables.
• Test to make sure that sets of radio buttons represent a fixed set of values. Check on what
happens when a blank value is retrieved from the database.
• Test to make sure that when a particular set of data is saved to the database, each value
gets saved fully. In other words, truncation of strings and rounding of numeric values must
not occur.
• Test whether default values are saved in the database if the user input is not specified.
• Test the compatibility with old data. In addition, old hardware, versions of the operating
system, and interfaces with other software need to be tested.
Errors caused due to incorrect data entry, called data validation errors, are probably the most
common data related errors. These errors are also the most difficult to detect in the system.
These errors are typically caused when a large volume of data is entered in a short time frame.
For example, $67 can be entered as $76 by mistake. The data entered is therefore invalid.
You can reduce data validity errors by using data validation rules in the data fields.
For example, the date field in a database uses the MM/DD/YYYY format. A developer can
incorporate a data validation rule such that MM does not exceed 12 and DD does not exceed 31.
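Such a field rule can be sketched as a small function. The MM/DD/YYYY format and the 12/31 limits come from the example above; the function name is hypothetical.

```python
def valid_date_mmddyyyy(text):
    """Field-level validation rule for an MM/DD/YYYY date:
    MM must not exceed 12 and DD must not exceed 31."""
    parts = text.split("/")
    if len(parts) != 3:
        return False
    mm, dd, yyyy = parts
    if not (mm.isdigit() and dd.isdigit() and yyyy.isdigit()):
        return False
    if len(mm) != 2 or len(dd) != 2 or len(yyyy) != 4:
        return False
    return 1 <= int(mm) <= 12 and 1 <= int(dd) <= 31
```

Note that a rule this simple still accepts impossible dates such as 02/30/2002, which is why cross-checking queries are also needed.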
In many cases, simple field validation rules are unable to detect data validity errors. Here, queries
can be used to validate data fields. For example, a query can be written to compare the sum of
the numbers in the database data field with the original sum of numbers from the source. A
difference between the figures indicates an error in at least one data element.
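Using SQLite as a stand-in database, such a checksum query might be sketched as follows. The table and column names here are hypothetical examples, not part of any real schema.

```python
import sqlite3

def column_sum_matches(conn, table, column, expected_sum):
    """Compare the sum of a numeric column with the sum computed
    independently from the original source data."""
    (total,) = conn.execute(
        f"SELECT COALESCE(SUM({column}), 0) FROM {table}").fetchone()
    return total == expected_sum
```

A mismatch indicates an error in at least one data element; the query itself does not say which one, so flagged tables still need row-level inspection.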
Another test that is performed on database software is the Recovery of data test. This test
involves forcing the system to fail in a variety of ways to ensure that the system recovers from
faults and resumes processing within a pre-defined period of time. Verify that the system is fault-tolerant,
which means that processing faults do not halt the overall functioning of the system, and
that data recovery and restart are correct in the case of auto-recovery. If recovery requires
human intervention, verify that the mean time to repair the database is within pre-defined
acceptable limits.
Gaining the confidence of online customers is extremely important to Web site success. Building
the confidence of online customers is not an easy task and requires a lot of time and effort.
Therefore, entrepreneurs must plan confidence-building measures. Ensuring the security of
transactions over the Internet builds customer confidence.
The main technique in security testing is to attempt to violate built-in security controls. This
technique ensures that the protection mechanisms in the system secure it from improper
penetration.
The tester overwhelms the system with continuous requests, thereby denying service to others.
The tester may purposely cause system errors in order to penetrate the system during recovery,
or may browse through insecure data to find the key to system entry.
There are two distinct areas of concern in Web site security: network security and transaction security.
Unauthorized users can wreak havoc on a Web site by accessing confidential information or by
damaging the data on the server. This kind of security lapse is due to insufficient network security
measures. The network operating systems, together with the firewall, take care of the security
over the network.
The network operating system must be configured to allow only authentic users to access the
network. Also, firewalls must be installed and configured to ensure that data can be transferred
only through a single controlled point on the network. This effectively prevents hackers from
accessing the network.
For example, a hacker accesses the unsecured FTP port (say port 21) of a Web server. Using
this port as an entry point to the network, the hacker can access data on the server. The hacker
may also be able to access any machine connected to this server. Therefore, security testing will
indicate these vulnerable areas and will also help to configure the network settings for better
security.
Secure transactions create customer confidence. That's because when customers purchase
goods over the Internet, they can be apprehensive about giving Credit Card information.
Therefore, security measures should be communicated to the customer.
Two things need to be tested to ensure that the customer's Credit Card information is safe:
i. Testing should ensure that the credit card information is transmitted and stored securely.
ii. Testing should verify that strong encryption software is used to store the Credit Card
information, and that only limited, authorized access is allowed to this information.
Software performance testing aims to ensure that the software performs in accordance with
operational specifications for response time, processing costs, storage use, and printed output.
All interfaces are fully tested. This includes verifying the facilities and equipment, and checking to
make sure that the communication lines are performing satisfactorily.
Correct data capture refers to the use of CGI scripts or ASP to capture data from the Web client.
This includes forms, credit card numbers, and payment details. Any error in capturing this data
will result in incorrect processing of the customers' orders.
Transaction completeness is the most important aspect of a Web site transaction. Any error in
this phase of operation can invite legal action because the affected party may be at risk of losing
money due to an incomplete transaction.
The payment gateway consists of software installed on Web servers to facilitate payment
transactions. The gateway software captures Credit Card details from the customer and then
verifies the validity of the Credit Card with the transaction clearinghouse.
Gateways are complex because they can create compatibility problems. In turn, these problems
make Web site transactions unreliable. So, the entrepreneur needs to consult experienced
developers before investing in a payment gateway. Therefore, before launching the site, online
pilot testing must be done to test the reliability of the gateway.
Running the system in a high-stress mode creates high demands on resources and stress-tests
the system. Some systems are designed to handle a specified volume of load.
For example, a bank transaction processing system may be designed to process up to 100
transactions per second; an operating system may be designed to handle up to 200 separate
terminals.
Tests must be designed to ensure that the system can process expected load. This usually
involves planning a series of tests where the load is gradually increased to reflect the expected
usage pattern.
Stress tests steadily increase the load on the system beyond the maximum design load until the
system fails. This type of testing has a dual function:
i) It tests the failure behavior of the system. Circumstances may arise through an
unexpected combination of events where the load placed on the system exceeds the
maximum anticipated load. Stress testing determines if overloading the system results
in loss of data or user service.
ii) It stresses the system and may cause certain defects to come to light that would not
normally manifest themselves.
Stress testing is particularly relevant to Web site systems with Web databases. These systems
often exhibit severe degradation when the network is swamped with operating system calls.
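The ramp-up beyond the design load described above can be sketched as a small driver. `run_at_load` is a placeholder for whatever applies n concurrent users to the system; the start, step, and limit values are arbitrary illustrations.

```python
def stress_until_failure(run_at_load, start=50, step=50, limit=1000):
    """Increase load step by step until the system under test fails,
    returning the last load level that succeeded.  run_at_load(n)
    should drive the system with n concurrent users and return True
    on success, False (or raise an exception) on failure."""
    last_ok = 0
    load = start
    while load <= limit:
        try:
            if not run_at_load(load):
                break
            last_ok = load
        except Exception:
            break
        load += step
    return last_ok
```

Besides locating the breaking point, a run like this shows how the system fails: gracefully (requests rejected, data intact) or with data loss.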
As defects are discovered in a component, modifications should be made to correct them. This
may require other components in the testing process to be re-tested.
Component system errors can present themselves later in the testing process. The process is
iterative because information is fed back from later stages to earlier parts of the process.
Repairing program defects may introduce new defects. Therefore, the testing process should be
repeated after the system is modified.
i) Test any modifications to the system to ensure that no new problems are introduced
and that the operational performance is not degraded due to the modifications.
ii) Any changes to the system after the completion of any phase of testing or after the
final testing of the system must be subjected to a thorough Regression test. This is to
ensure that the effects of the changes are transparent to other areas of the system
and other systems that interface with the system.
iii) The project team must create test data based on predefined specifications. The
original test data should come from other levels of testing and should then be
modified along with the test cases.
Acceptance testing often reveals errors and omissions in the system requirements definition. The
requirements may not reflect the actual facilities and performance required by the user.
Acceptance testing may demonstrate that the system does not exhibit the anticipated
performance and functionality. This test confirms that the system is ready for production.
Running a pilot for a select set of customers helps in Acceptance testing for an e-commerce site.
A survey is conducted among these site visitors on different aspects of the Web site, such as
user friendliness, convenience, visual appeal, relevance, and responsiveness.
A sample test plan and test cases for Web testing have been added in Appendix – 3.
15 Test Plan
Since the strategy is aimed at addressing the risks of a client/server development, knowledge of
the risks of these projects is required.
A standard test plan contains different sections. These sections, with their explanations, are
given below:
1. Introduction
Set goals and expectations of the testing effort. Summarise the software items and software
features to be tested. The purpose of each item and its history may be included.
References to the following documents, if they exist, are required in the highest-level test
plan:
• Project authorisation
• Project plan
• Relevant policies
• Relevant standards
In multilevel test plans, each lower level plan must reference the next higher level plan.
1.1 Purpose
Describe the purpose of the test plan. Multiple components can be incorporated into one test
plan.
This section provides an overview of the project and identifies critical and high-risk functions of
the system.
2. Test Environment
The test environment mirrors the production environment. This section describes the
hardware and software configurations that compose the system test environment. The
hardware must be sufficient to ensure complete functionality of the software. Also, it
should support performance analysis aimed at demonstrating field performance.
2.2 Automated Tools
Working in conjunction with the database group, the test team will create the test
database. The test database will be populated with unclassified production data.
3. Test Program
List the features to be tested, such as particular fields and expected values. Identify the
software features to be tested and the Test-Design Specification associated with each
feature.
3.2 Areas beyond the scope
Identify all features and significant combinations of features which will not be tested, and the
reasons for excluding them.
3.3 Test Plan Identifier
ABCXXYYnn
ABC - First 3 letters of the project name
XX - Type of Testing Code
For example:
System Testing - ST
Integration Testing -IT
Unit Testing -UT
Functional Testing - FT
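A helper that assembles identifiers in this scheme might look as follows. Note that the YY and nn fields are not defined above; this sketch assumes they are a two-digit year and a two-digit serial number, and both the assumption and the function name are illustrative only.

```python
# Test type codes taken from the scheme above
TEST_TYPE_CODES = {"system": "ST", "integration": "IT",
                   "unit": "UT", "functional": "FT"}

def plan_identifier(project, test_type, yy, nn):
    """Build an ABCXXYYnn identifier: first three letters of the
    project name, a test-type code, and (assumed) a two-digit year
    plus a two-digit serial number."""
    abc = project[:3].upper()
    xx = TEST_TYPE_CODES[test_type.lower()]
    return f"{abc}{xx}{yy:02d}{nn:02d}"
```

For instance, the first system test plan of a project named "Payroll" in year 02 would be identified as PAYST0201 under these assumptions.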
3.4 Test Items
Identify the test items including their version/revision level. Also specify characteristics of their
transmittal media which impact hardware requirements or indicate the need for logical or physical
transformations before testing can begin. (An example would be that the code must be placed on
a production server or migrated to a special testing environment separate from the development
environment.)
Supply references to the following item documentation, if it exists:
• Requirements specification
• Design specification
• User guide
• Operations guide
• Installation guide
• Reference any incident reports relating to the test items.
3.5 Test Schedule
Include test milestones identified in the software project schedule as well as all item transmittal
events. Define any additional test milestones needed. Estimate the time required for each testing
task. Specify the schedule for each testing task and test milestone. For each testing resource
(that is, facilities, tools, and staff), specify its periods of use.
3.6 Test Approach
Describe the general approach to testing software features and how this approach will ensure
that these features are adequately tested. Specify the major activities, techniques, and tools,
which will be used to test the described features. The description should include such detail as
identification of the major testing tasks and estimation of the time required to do each one.
3.6.1 Test Coverage
(Determines the adequacy of the test plan.) Indicate whether branch or multi-condition coverage
is required, and whether all conditions are covered by the test cases.
3.6.2 Fixes and Regression Testing
This contains the criteria and references used to develop the test specification. The type of
testing (black box or white box) is to be mentioned.
3.6.4 Pass/Fail Criteria
Specify the criteria to be used to determine whether each test item has passed or failed testing. If
no basis exists for passing or failing test items, explain how such a basis could be created and
what steps will be taken to do so.
3.6.5 Suspension Criteria and Resumption Requirements
Specify the criteria used to suspend all or a portion of the testing activity on the test items
associated with this plan. Specify the testing activities that must be repeated, when testing
resumes.
3.6.6 Defect Tracking
3.6.7 Constraints
Identify significant constraints on testing such as test item availability, testing resource availability,
and deadlines.
3.6.8 Test Deliverables
Identify all documents relating to the testing effort. These should include the following documents:
Test Plan, Test-Design Specifications, Test-Case Specifications, Test-Procedure Specifications,
Test-Item Transmittal Reports, Test Logs, Test-Incident Reports, and Test-Summary Reports.
Also identify test input/output data, test drivers, testing tools, etc.
3.6.10 Dependencies and Risk
Identify the high-risk assumptions of the test plan. Specify contingency plans for each.
3.6.11 Approvals
Specify the names and titles of all persons who must approve this plan. Provide space for the
signatures and dates.
16 Amendment History
V1.0 – First Release
References
1. Kaner, Cem (1997). Improving the Maintainability of Automated Test Suites.
www.kaner.com/lawst1.htm
2. Myers, Glenford J. (1979). The Art of Software Testing. John Wiley & Sons.
3. Pressman, Roger S. Software Engineering – A Practitioner's Approach, 5th edition. McGraw-Hill.
4. Beizer, Boris (1995). Black Box Testing – Techniques for Functional Testing of Software and
Systems. John Wiley & Sons.
5. Dustin, Elfriede; Rashka, Jeff; Paul, John (1999). Automated Software Testing –
Introduction, Management and Performance. Addison-Wesley.
Appendix – 1
List of Testing Tools
(Life-Cycle Phase / Type of Tool – Tool Description – Tool Examples)
• … and Design Phase / … Tools – for developing second-generation enterprise client-server
systems – Developer 2000, Erwin, Popkins, Terrain by Cayenne
• Application Design Tools – help define software architecture; allow for object-oriented
analysis, modeling, design, and construction – Rational Rose, Oracle Developer 2000,
Popkins, Platinum, ObjectTeam by Cayenne
• Metrics Tools: Code (Test) Coverage Analyzers or Code Instrumentors – identify untested
code and support dynamic testing – STW/Coverage, Software Research TCAT, Rational
PureCoverage, IntegriSoft's Hindsight and EZCover
• Usability Measurement Tools – provide usability testing as conducted in usability labs –
ErgoLight
• GUI Testing Tools (Capture/Playback) – allow for automated GUI tests; capture/playback
tools record interactions with online systems so that they may be replayed automatically –
Rational Suite TestStudio, Visual Test, Mercury Interactive's WinRunner, Segue's Silk,
STW/Regression from Software Research, Auto Scriptor Inferno, Automated Test Facility
from Softbridge, QARUN from Compuware
• Non-GUI Test Drivers – allow for automated execution of tests for products without a
graphical user interface
• Load/Performance Testing Tools – allow for load/performance and stress testing – Rational
Performance Studio
• Web Testing Tools – allow for testing of Web applications, Java, and so on – Segue's Silk,
ParaSoft's Jtest
Appendix - 2
Sample system test plan
1.0 Introduction
1.1 Purpose
This test plan for ABC version 1.0 should support the following objectives.
1. To detail the activities required to prepare for and conduct the system test.
2. To communicate to all responsible parties the task(s), which they are to perform, and the
schedule to be followed in performing the tests.
3. To define the sources of the information used to prepare the plan.
4. To define the test tools and environment needed to conduct the system test.
1.2 Background
ABC is an integrated set of software tools developed to extract raw information and data flow
information from C programs and then identify objects, patterns, and finite state machines.
Table 2A Test Team Profile
The tests will be conducted on one of the machines licensed to run REFINE/C.
Software
One tool which we will use is DGL, a test case generator. We will build a C grammar to
generate random C code. While the generated code will not be syntactically correct in all
cases, it will give us some good ideas for use in stress testing our code. Thus, the main purpose
of DGL will be to generate arbitrary code that gives the test team ideas for building tests that
might otherwise not be considered.
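In the spirit of DGL, a grammar-driven generator can be sketched in a few lines. The toy grammar below is an invention for illustration; it is not the actual C grammar the plan describes, and the depth cutoff is an arbitrary way of forcing termination.

```python
import random

# A toy grammar: each non-terminal (in angle brackets) maps to a list
# of alternative expansions; anything else is a terminal token.
GRAMMAR = {
    "<stmt>": [["<expr>", ";"],
               ["if", "(", "<expr>", ")", "<stmt>"],
               ["while", "(", "<expr>", ")", "<stmt>"]],
    "<expr>": [["x"], ["42"], ["<expr>", "+", "<expr>"]],
}

def generate(symbol, rng, depth=0):
    """Randomly expand a grammar symbol into a token string."""
    if symbol not in GRAMMAR:
        return symbol
    # Past a depth limit, always pick the first (simplest) alternative
    # so the recursion terminates.
    alternatives = GRAMMAR[symbol]
    rule = alternatives[0] if depth > 6 else rng.choice(alternatives)
    return " ".join(generate(s, rng, depth + 1) for s in rule)
```

Seeding the generator (`generate("<stmt>", random.Random(0))`) makes any interesting random fragment reproducible, which matters when a generated case exposes a defect.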
2.3 Test Data
Discuss with the development team to generate test data and create a test database. The
customer may also be contacted.
This test plan covers a complete "black box" or functional test of the associated program modules
(below). We assume the correctness of the REFINE/C grammar and parser. We also assume that
"white box" or "glass box" testing will be done prior to these tests. We will not explicitly test the
interfaces between modules.
Program Modules
The program modules are detailed below. The design documents and the references mentioned
below will provide the basis for defining correct operation.
Design Documents
These are links to the program module design documents. The indentation shows the
dependencies between modules. Modules at the same level do not depend upon each other. The
inner level indented module depends upon the outer levels. All depend either directly or indirectly
on "Interface ..". Control Dependency Graphs and Reaching Definitions depend upon Control
Flow Graphs, but are independent of each other, and so on.
Test Documents
This set of links¹ points to the root of each individual module's test document tree.
The test personnel will use the design document references in conjunction with the ANSI C
grammar by Jutta Degener to devise a comprehensive set of test cases. The aim will be to have
a representative sample of any possible construct which the module should handle. For example,
in testing the Control Flow Graph module, we would want cases containing various combinations
of iteration-statements, jump-statements, and selection-statements.
The complete set of test cases developed for a particular module will be rerun after program
changes to correct errors found in that module during the course of testing.
3.6.2 Comprehensiveness
¹ The links mentioned above redirect the reader to pre-existing test documents. Since this
is a sample test plan, including the actual test documents is beyond its scope; the links only
illustrate that existing test documents can be referenced by hyperlink.
Using the C grammar as a basis for generating the test cases should result in a comprehensive
set of test cases. We will not necessarily try to exhaustively cover all permutations of the
grammar, but will strive for a representative sample of the permutations.
The initial run of tests on any given module will be verified by one of the test team personnel.
After these tests are verified as correct, they will be archived and used as an oracle for automatic
verification for additional or regression testing. As an example, when testing the Control Flow
Graph module, an output is deemed correct if the module outputs the correct set of nodes and
edges for a particular input C program fragment.
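The archive-and-compare cycle described above might look like the following in Python. The directory layout, file naming, and return values are assumptions for illustration; the line-order-insensitive comparison reflects that a set of CFG nodes and edges may be emitted in varying order:

```python
from pathlib import Path

def check_against_oracle(module, test_name, actual_output, oracle_dir="oracles"):
    """Compare a test's current output with the archived, human-verified
    output (the 'oracle'). On the first verified run, archive the output
    so later regression runs can be checked automatically."""
    oracle = Path(oracle_dir) / module / f"{test_name}.out"
    if not oracle.exists():
        # First verified run: store the output for future comparisons.
        oracle.parent.mkdir(parents=True, exist_ok=True)
        oracle.write_text(actual_output)
        return "archived"
    # Compare line sets rather than raw text, since node/edge ordering
    # may legitimately differ between runs.
    same = sorted(oracle.read_text().splitlines()) == \
           sorted(actual_output.splitlines())
    return "pass" if same else "FAIL"
```

For example, a Control Flow Graph test would pass as long as the same nodes and edges are listed, regardless of the order in which the module prints them.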
N/A
3.6.6 Constraints
We need to plug in our deadline(s) here. Testing deadlines are not firm at this time.
We have decided that because the vast majority of the test cases will be submitted by the test
team personnel, there is no need for a Test Item Transmittal Report. If there are any test cases
submitted by outside parties, we will handle these as if they were a change request. This means
that the test team must approve of the reason for the test and the specific test before it will be
placed into the appropriate module's test suite.
3.6.9 Approvals
Test Manager/Date
Quality Assurance Manager/Date
Appendix - 3
Amendment
Version ID    Date    Amendment    Author
< Name of the Company > has developed a dynamic site that displays the product
ranges based on the database and allows a user to select items and place orders over
the Internet.
The testing is being planned on special request from the Project Manager.
Test Objectives
The objective is to conduct black box testing of the system from user perspective.
Test Scope
1. All the features submitted by the Project Manager are to be tested thoroughly. All
these documents are available in the baseline library, under the SRS folder of ABC Cables.
2. Since the project has been developed for and used with Internet Explorer, the testing
should also be done under Netscape. The test cases should cover:
• Verification of the maximum number of items (150) selected by all the available methods.
• Normal (below 150 pounds of selected-item weight), boundary (exactly 150 pounds) and
extreme (above 150 pounds) shopping conditions should be tested on the live site.
• Testing should be done on the local site for the extreme conditions of a large quantity (9999) of an
item, a large invoice value (9999999999.99), a large number of items (100) in the
shopping cart, and a large number of operations (approx. 50, other than adding items) on
the shopping cart.
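The normal/boundary/extreme distinction above can be captured as a small table of test data. The field names and the accept/reject rule below are assumptions for illustration, since the plan does not state exactly how the site behaves at the 150-pound limit:

```python
# Boundary-value cases around the assumed 150-pound cart weight limit.
CART_WEIGHT_CASES = [
    ("normal",   149.9, "accept"),   # below the limit
    ("boundary", 150.0, "accept"),   # exactly at the limit (assumed allowed)
    ("extreme",  150.1, "reject"),   # above the limit
]

# The extreme values named in the plan, collected for the local-site tests.
EXTREME_CASES = {
    "item_quantity":   9999,
    "invoice_value":   9999999999.99,
    "cart_items":      100,
    "cart_operations": 50,
}

def expected_cart_result(weight, limit=150.0):
    """Assumed rule: orders at or below the weight limit are accepted."""
    return "accept" if weight <= limit else "reject"

for label, weight, expected in CART_WEIGHT_CASES:
    assert expected_cart_result(weight) == expected, label
```

Keeping the three classes in one table makes it easy to see that each equivalence class and its boundary is covered exactly once.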
3. Coverage of the System (Based on Working Model & Specification document):
• Menu Options – 100%.
• Functionality – 100% – based on the specification document submitted by Project
Manager.
• User Interface – 75% (Mainly covering the general Look & Feel, screen appearance &
Popup Menu of each type of the page).
• Navigation – 30% (mainly covering switching from one page to another through search
(15% of items) and links (15% of items), and movement within the page).
• Security – 75% - Covering in detail Login & Logout for registered users (at least one of
each type) and some invalid conditions.
Test Environment
Stop Criteria
• All the test cases are tested at least once.
• All the critical defects are addressed and verified.
• All the unsolved defects are analysed and marked with necessary comments / status.
• In the last iteration of testing, there are no critical defects reported.
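The four stop criteria above can be checked mechanically against the test-case and defect logs. The record fields below are assumptions about how the defect-tracking system might represent its data:

```python
def stop_criteria_met(test_cases, defects, last_iteration_defects):
    """Evaluate the four stop criteria. Field names are illustrative:
    each test case records how many times it has run; each defect
    records criticality, status, and an analyst's comment."""
    # 1. All the test cases are tested at least once.
    all_run = all(tc["runs"] >= 1 for tc in test_cases)
    # 2. All the critical defects are addressed and verified.
    critical_closed = all(d["status"] == "verified"
                          for d in defects if d["critical"])
    # 3. All unsolved defects are analysed and marked with comments.
    unsolved_annotated = all(d.get("comment")
                             for d in defects if d["status"] == "open")
    # 4. No critical defects reported in the last iteration.
    no_new_critical = not any(d["critical"] for d in last_iteration_defects)
    return all_run and critical_closed and unsolved_annotated and no_new_critical
```

In practice the testing head would run such a check at the end of each iteration to decide whether another defect-fix cycle is needed.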
Test Process
• Testing team will prepare Test Case List & Test Cases based on the documents provided by
the development team.
• Testing shall be done based on these test cases and Test Report will be prepared.
• The bugs encountered shall be reported using <Name of the Defect tracking System >
simultaneously.
• The decision for Project acceptance or rejection will be based on the feedback from the
Project Manager.
• The verification of the fixed defects shall be done after the release of fresh software (if
required).
• If a defect prevents a test case from being tested completely, no further
testing will be done on that test case. During verification of the fixed defect, complete testing
of the test case will be repeated.
• Testing team will maintain the status and criticality of each reported defect.
• The process of defect finding and verification shall be iterated until the stop criteria are satisfied.
Human Resources
All the team members of the Testing Group < Name of the Team members > will be involved
in testing. However, depending on other tasks and resource availability, reallocation may be
done.
Reporting
• After the completion of each test cycle, the Testing Head will submit the defect report and state
whether the software is rejected or not.
Training Requirement
Sample Test cases For Login Page
Topic: Login
Functionality: Login
Reference(s): Nil.
Data set should cover the normal, boundary and extreme cases of data for each field in the
screen concerned.
1. The testing should be done for the following valid conditions at least once:
Login as a privileged user (with all 7 types), add 5 randomly selected items,
and verify the cost of an item against the percentage of discount allowed to that
category of user.
Verify that after a successful login, control goes to the screen displaying the
catalogue of ultra spec cables.
Search at least 2 items with no further sub-levels (after successful login).
Search at least 3 items with further sub-levels (after successful login).
Clicking on an item category should display that category in detail, i.e. show the
contents or items available under that category.
3. Testing for the User Interface issues of the screen should be done covering
following points:
Make sure that control(s), caption(s), text etc. are clearly visible and looking
fine.
Make sure that Alt + Tab is working for switching between different opened
applications.
Make sure that pop-up menu is context sensitive.
Make sure that the Heading, Sub-heading & Normal text are identifiable
clearly by the font size, attribute & color.
Make sure that the company’s logo is clearly visible.
Testing Information
Environment: Iteration #:
Start Date: End Date:
Tester Name: Status:
Sample Test cases for First page
Topic: General
Functionality: All
Reference(s): Nil.
Data set should cover the normal, boundary and extreme cases of data for each field in the
screen concerned.
2. The testing should be done for the following valid conditions at least once:
2.1. About Us:
Make sure that there is/are no spelling mistake(s).
2.2. Ordering Information:
Make sure that there is/are no spelling mistake(s).
Make sure that adequate information is given.
2.3. Terms and Conditions:
Make sure that there is/are no spelling mistake(s).
Make sure that adequate information is given.
Make sure that all 16 hyperlinks are functioning properly.
3. User Interface:
Make sure that control / text / Caption is/are clearly visible.
Make sure that Alt + Tab is working fine.
Make sure that the heading, sub-heading and normal text are clearly identifiable by
font size.
Make sure that 'catch word' is clearly identified by attribute and/or color.
Make sure that the hyperlink's font color clearly indicates whether it has been
visited or not.
Make sure that logo is clearly visible and looking fine.
Make sure that pop-up menu is context sensitive.
Testing Information
Environment: Iteration #:
Start Date: End Date:
Tester Name: Status:
Sample Test Case for User Registration Page
Topic: Register Me
Functionality: Register
Reference(s): Nil
Data set should cover the normal, boundary and extreme cases of data for each field in the
screen concerned.
1.1. Prepare the 8 test data sets, fulfilling at least the following criteria:
with only mandatory fields
with optional fields
with maximum length data in all fields
with minimum length data in all fields
with at least 8 states selected from the combo, with their respective zip codes: NJ (08817-20),
NY (11100-01, 11104), AL (12200-10), CT (06030-34), OH (43803-04),
for an address in the US
with ‘ship to’ same as ‘bill to’
with ‘ship to’ different from ‘bill to’
at least 1 entry with all fields
Register me with at least 6 different combinations
moving to Home at least 2 times, making sure that the home page opens
searching at least 2 different part IDs
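The data-set criteria above could be enumerated programmatically rather than by hand. In the sketch below, the state/zip pairs come from the criteria, while the field names, the 50-character maximum, and the optional field are assumptions, not the real registration schema:

```python
# One representative zip per state, drawn from the ranges listed above.
STATE_ZIPS = {"NJ": "08817", "NY": "11104", "AL": "12201",
              "CT": "06031", "OH": "43803"}

def make_data_set(state, mandatory_only=False, max_length=False):
    """Build one registration data set. Field names are illustrative."""
    data = {
        "name": "A" * 50 if max_length else "Jane Doe",  # 50 = assumed max
        "state": state,
        "zip": STATE_ZIPS[state],
        "ship_to_same_as_bill_to": True,
    }
    if not mandatory_only:
        data["phone"] = "555-0100"   # an assumed optional field
    return data

# One set per state, plus mandatory-only and maximum-length variants.
data_sets = [make_data_set(s) for s in STATE_ZIPS]
data_sets.append(make_data_set("NJ", mandatory_only=True))
data_sets.append(make_data_set("NY", max_length=True))
```

Generating the sets this way makes it easy to confirm that every criterion (each state, mandatory-only, maximum length) is represented at least once.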
3. Testing for the User Interface issues of the screen should be done covering
following points:
as soon as the screen is opened, make sure that the cursor is positioned at
the first enterable field
The unrelated options should be disabled.
Alt+down key for listing from combo box.
All the message boxes should have relevant and correct messages.
Make sure that button(s) is/are accordingly enabled/disabled or displayed
according to screen functionality.
Make sure that control(s), caption(s), text etc. are clearly visible and looking
fine.
Make sure that Alt + Tab is working for switching between different opened
applications.
Make sure that pop-up menu is context sensitive.
Cut, copy and paste with shortcut keys (like Ctrl+C, Ctrl+V, etc.) are
working properly in every input screen as per Windows norms.
Pasting of any text to Date field should not be allowed.
The look and feel of all the controls (text boxes, date boxes, etc.)
should be normal.
Check that a scroll bar appears when a long text is entered in an editable
control.
Make sure that the screens are invoked through all their available options.
Testing Information
Environment: Iteration #:
Start Date: End Date:
Tester Name: Status:
Sample Test Case for Search Functionality of the Page
Topic: Search
Functionality: Search
Reference(s): Nil.
Data set should cover the normal, boundary and extreme cases of data for each field in the
screen concerned.
4. The testing should be done for the following valid conditions at least
once:
Make sure that the search option opens the appropriate page (the page which
contains the searched item) and that the cursor is positioned on the quantity field.
Search at least 40 items, covering each group.
Try to search at least 20 items from different pages.
Try to search at least 10 items without login (as a casual user).
Testing Information
Environment: Iteration #:
Start Date: End Date:
Tester Name: Status:
GLOSSARY:
The Tester – the test engineer who tests manually, or the Test Supervisor who starts the
automated test scripts.
The Test Bed – comprises the hardware, software, test scripts, test plan,
test cases, etc.
Black & white box testing: White box testing is a test-case design method that
uses the control structure of the procedural design to derive test cases that:
Exercise all independent paths within a module at least once.
Exercise all logical decisions on their true and false sides.
Execute all loops at their boundaries and within their operational bounds.
Exercise internal data structures to ensure their validity.
Black box testing derives sets of input conditions that will fully exercise all
the functional requirements of a program.
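The white-box criteria read abstractly, so a concrete miniature helps. The toy function below is an assumption made up for illustration; the assertions take each of its decisions on both the true and the false side and run its loop zero, one, and many times:

```python
def clamp_sum(values, limit):
    """Sum the values, treating negatives as zero, capped at limit."""
    total = 0
    for v in values:            # loop: exercised empty, once, and many times
        if v < 0:               # decision 1: taken true and false below
            v = 0
        total += v
    if total > limit:           # decision 2: taken true and false below
        total = limit
    return total

assert clamp_sum([], 10) == 0            # loop body never runs
assert clamp_sum([-5], 10) == 0          # decision 1 true, decision 2 false
assert clamp_sum([4, 3, 8], 10) == 10    # decision 1 false, decision 2 true
```

Together these three inputs achieve full decision coverage and exercise the loop at its lower boundary and within its operational bounds, which is exactly what the white-box criteria ask for.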