
KIET GROUP OF INSTITUTIONS


DEPARTMENT OF COMPUTER APPLICATIONS
SESSION: 2016-17, V SEMESTER
SOFTWARE TESTING, NMCA E43
UNIT 4

Contents

Regression testing, Regression test process


Initial Smoke or Sanity test
Selection of regression tests
Execution Trace, Dynamic Slicing
Test Minimization, Tools for regression testing
Ad hoc Testing: Pair testing
Exploratory testing, Iterative testing
Defect seeding concepts and understanding

_____________________________________________________________________________________
REGRESSION TESTING

Regression testing is a black-box testing technique that consists of re-executing those tests that
are impacted by code changes. These tests should be executed as often as possible
throughout the software development life cycle.
When a bug is fixed by the development team, testing the other features of the application
which might be affected by the bug fix is known as regression testing.
Regression testing is always done to verify that modified code does not break the existing
functionality of the application and works within the requirements of the system.
There are two main strategies for regression testing: 1) run all tests, or 2) run a
subset of tests chosen by a test case prioritization technique.
Whenever developers change or modify their software, even a small tweak can have
unexpected consequences. Regression testing is testing existing software applications to make
sure that a change or addition hasn't broken any existing functionality. Its purpose is to catch
bugs that may have been accidentally introduced into a new build or release candidate, and to
ensure that previously eradicated bugs continue to stay dead. By re-running testing scenarios
that were originally scripted when known problems were first fixed, you can make sure that any
new changes to an application haven't resulted in a regression, or caused components that
formerly worked to fail. Such tests can be performed manually on small projects, but in most
cases repeating a suite of tests each time an update is made is too time-consuming and
complicated to consider, so an automated testing tool is typically required.


Retest All

This is one of the methods for regression testing, in which all the tests in the existing test bucket
or suite are re-executed. This is very expensive, as it requires a great deal of time and resources.

Regression Test Selection

Instead of re-executing the entire test suite, it is better to select a part of the test suite to be run.
Test cases selected can be categorized as 1) Reusable Test Cases 2) Obsolete Test Cases.
Re-usable Test cases can be used in succeeding regression cycles.
Obsolete Test Cases can't be used in succeeding cycles.

Prioritization of Test Cases

Prioritize the test cases depending on business impact, critical & frequently used functionalities
Selection of test cases based on priority will greatly reduce the regression test suite.
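
To make the selection idea concrete, here is a minimal sketch in Python of priority-based selection over a test bucket. The `TestCase` record and `select_regression_suite` helper are illustrative names, not part of any real tool:

```python
# Illustrative sketch: choosing a regression subset by priority.
# Obsolete cases are dropped; only reusable cases at or above the
# cut-off priority survive into the next regression cycle.
from dataclasses import dataclass

@dataclass
class TestCase:
    name: str
    priority: int            # 1 = business-critical ... 3 = low impact
    obsolete: bool = False   # obsolete cases cannot be reused in later cycles

def select_regression_suite(cases, max_priority=2):
    return [c for c in cases if not c.obsolete and c.priority <= max_priority]

suite = [
    TestCase("login_with_valid_credentials", priority=1),
    TestCase("export_report_to_pdf", priority=3),
    TestCase("legacy_import_flow", priority=1, obsolete=True),
]
for case in select_regression_suite(suite):
    print("selected:", case.name)   # only login_with_valid_credentials
```

Raising or lowering `max_priority` trades execution time against coverage.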

REGRESSION TESTING EXAMPLE: The regression test process is illustrated by the example below:
Suppose a project has three modules: an Admin module, a Personal Information (PI) module, and an
Employment module, and suppose a bug occurs in the Admin module, say an existing user
is not able to log in with valid login credentials.
The testing team sends the above-mentioned bug to the development team to fix it. When the
development team fixes the bug and hands the build over to the testing team, the testing team checks that the fixed bug
does not affect the remaining functionality of the other modules (PI, Employment) and also the
functionality of the same module (Admin). This process, carried out by software testers, is known as
regression testing.
TYPES OF REGRESSION TESTS:
1. Final Regression Tests: A "final regression test" is performed to validate a build that hasn't
changed for a period of time. This build is deployed or shipped to customers.
2. Regression Tests: A normal regression test is performed to verify that the build has NOT
broken any other parts of the application through the recent code changes for defect fixing or for
enhancement.


SELECTING TEST CASES FOR REGRESSION TESTING


It was found from industry data that a good number of the defects reported by customers were due to
last-minute bug fixes creating side effects; hence selecting the test cases for regression testing is an
art, and not that easy. Effective regression tests can be done by selecting the following test cases:
1. Test cases which have frequent defects
2. Functionalities which are more visible to the users
3. Test cases which verify core features of the product
4. Test cases of functionalities which have undergone more and recent changes
5. All integration test cases
6. All complex test cases
7. Boundary value test cases
8. A sample of successful test cases
9. A sample of failed test cases

Selecting Regression Tests:

Requires knowledge about the system and how changes affect the existing functionality.
Tests are selected based on the areas of frequent defects.
Tests are selected to include the areas which have undergone code changes many times.
Tests are selected based on the criticality of the features.

REGRESSION TESTING STEPS:


Regression tests are ideal cases for automation, which results in better Return On Investment (ROI). The typical steps are:
Select the Tests for Regression.
Choose the apt tool and automate the Regression Tests
Verify applications with Checkpoints
Manage Regression Tests/update when required
Schedule the tests
Integrate with the builds
Analyze the results
DIFFERENCE BETWEEN RE-TESTING AND REGRESSION TESTING:

Retesting means testing the functionality or bug again to ensure the code is fixed. If it is not
fixed, defect needs to be re-opened. If fixed, defect is closed.
Regression testing means testing your software application when it undergoes a code change
to ensure that the new code has not affected other parts of the software

THE FOLLOWING ARE THE MAJOR PROBLEMS IN DOING REGRESSION TESTING:
1. With successive regression runs, test suites become fairly large. Due to time and budget
constraints, the entire regression test suite cannot be executed.
2. Minimizing the test suite while achieving maximum test coverage remains a challenge.
3. Determining the frequency of regression tests, i.e., after every modification, after every build
update, or after a bunch of bug fixes, is a challenge.

CONCLUSION:
An effective regression strategy saves organizations both time and money. According to one case study
in the banking domain, regression testing saved up to 60% of the time spent on bug fixes (defects which would have
been caught by regression tests) and 40% of the cost.
SMOKE TESTING
Smoke Testing is performed after a software build to ascertain that the critical functionalities of the
program are working fine. It is executed "before" any detailed functional or regression tests are executed
on the software build. The purpose is to reject a badly broken application, so that the QA team does not
waste time installing and testing the software application.
In Smoke Testing, the test cases chosen cover the most important functionality or components of the
system. The objective is not to perform exhaustive testing, but to verify that the critical functionalities of
the system are working fine.
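
In practice a smoke suite is often just a tagged subset of the main suite. The sketch below shows one possible shape using pytest markers; `myapp` and its functions are hypothetical stand-ins for the real application, not a known API:

```python
# Minimal smoke-test sketch using pytest markers.
# Register the marker in pytest.ini ([pytest] markers = smoke: ...) and
# run only this subset with:  pytest -m smoke
import pytest

@pytest.mark.smoke
def test_application_starts():
    import myapp                      # hypothetical application package
    assert myapp.start() is not None  # reject a badly broken build early

@pytest.mark.smoke
def test_existing_user_can_log_in():
    import myapp
    session = myapp.login("demo", "demo")  # assumed API, critical path only
    assert session.is_active
```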
WHAT IS SANITY TESTING?
After receiving a software build with minor changes in code or functionality, Sanity testing is performed
to ascertain that the bugs have been fixed and no further issues have been introduced by these
changes. The goal is to determine that the proposed functionality works roughly as expected. If the sanity
test fails, the build is rejected to save the time and costs involved in more rigorous testing.
The objective is "not" to verify the new functionality thoroughly, but to determine that the developer
has applied some rationality (sanity) while producing the software. For instance, if your scientific
calculator gives the result 2 + 2 = 5, then there is no point testing advanced functionalities like sin
30 + cos 50.
SMOKE TESTING VS SANITY TESTING - KEY DIFFERENCES


POINTS TO NOTE:
1. Both sanity tests and smoke tests are ways to avoid wasting time and effort by quickly
determining whether an application is too flawed to merit any rigorous testing.
2. Sanity Testing is also called tester acceptance testing.
3. Smoke testing performed on a particular build is also known as a build verification test.
4. One of the best industry practices is to conduct a Daily build and smoke test in software projects.
5. Both smoke and sanity tests can be executed manually or using an automation tool. When
automated tools are used, the tests are often initiated by the same process that generates the
build itself.
6. As per the needs of testing, you may have to execute both Sanity and Smoke Tests on the
software build. In such cases you will first execute Smoke tests and then go ahead with Sanity
Testing. In industry, test cases for Sanity Testing are commonly combined with those for smoke
tests to speed up test execution. Hence it is common that the terms are confused and
used interchangeably.
WHAT IS ADHOC TESTING?
When software testing is performed without proper planning and documentation, it is said to be Adhoc
Testing. Such tests are typically executed only once, unless defects are uncovered.
Adhoc tests are done after formal testing is performed on the application. Adhoc methods are the least
formal type of testing as this is NOT a structured approach. Hence, defects found using this method are
hard to replicate as there are no test cases aligned to those scenarios.
Testing is carried out using the tester's knowledge of the application, and the tester tests
randomly without following the specifications/requirements. Hence the success of Adhoc testing
depends upon the capability of the tester who carries out the test. The tester has to find defects
without any proper planning and documentation, solely based on the tester's intuition.
WHEN TO EXECUTE ADHOC TESTING?
Adhoc testing can be performed when there is limited time to do exhaustive testing; it is usually
performed after the formal test execution. Adhoc testing will be effective only if the tester has an in-depth
understanding of the System Under Test.
FORMS OF ADHOC TESTING:
1. Buddy Testing: Two buddies, one from the development team and one from the test team, mutually
work on identifying defects in the same module. Buddy testing helps the testers develop
better test cases, while the development team can also make design changes early. This kind of
testing usually happens after unit testing is complete.
2. Pair Testing: Two testers are assigned the same modules; they share ideas and work on
the same systems to find defects. One tester executes the tests while the other records
notes on their findings.
3. Monkey Testing: Testing is performed randomly, without any test cases, in order to break the
system (a minimal sketch follows below).
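
As a small illustration of the monkey-testing idea, the sketch below throws random printable strings at a single function and reports any unexpected crash. `parse_amount` is a hypothetical unit standing in for the system under test; real monkey-testing tools drive whole UIs in the same spirit:

```python
# Monkey-testing sketch: fire random inputs at a function and log any crash.
import random
import string

def parse_amount(text):            # stand-in for the system under test
    return round(float(text), 2)

random.seed(42)                    # reproducible, so failures can be replayed
for _ in range(1000):
    garbage = "".join(random.choices(string.printable, k=random.randint(0, 12)))
    try:
        parse_amount(garbage)
    except ValueError:
        pass                       # expected rejection of bad input
    except Exception as exc:       # anything else is a potential defect
        print(f"crash on input {garbage!r}: {exc!r}")
```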


AD-HOC TESTING BENEFITS:
Ad-hoc testing grants the tester a lot of freedom to be as creative as necessary.
This increases testing quality and efficiency as below:
1. The biggest advantage that stands out is that a tester can find more defects than
in traditional testing, because of the various innovative methods they can apply to test the
software.
2. This form of testing can be applied anywhere in the SDLC; it is not only restricted to the
testing team. The developers can also conduct this testing, which would help them code
better and also predict what problems might occur.
3. It can be coupled with other testing to get the best results, which can sometimes cut short the
time needed for regular testing. This enables better quality test cases to be
generated and better quality of the product on the whole.
4. It doesn't mandate any documentation to be done, which prevents extra burden on the tester.
The tester can concentrate on actually understanding the underlying architecture.
5. In cases when there is not much time available to test, this can prove to be very valuable in
terms of test coverage and quality.

AD-HOC TESTING DRAWBACKS:
Ad-hoc testing also has a few drawbacks. Let's take a look at some of the drawbacks that are
pronounced:
Since it's not very organized and there is no documentation mandated, the most evident problem is that
the tester has to remember and recollect all the details of the ad-hoc scenarios in memory. This can be
even more challenging in scenarios where there is a lot of interaction between different
components.
1. Following from the first point, this would also result in not being able to recreate defects in
subsequent attempts, if asked for information.
2. Another very important question this brings to light is effort accountability. Since this is
not planned/structured, there is no way to account for the time and effort invested in this
kind of testing.
3. Ad-hoc testing has to be performed only by a very knowledgeable and skilled tester in the
team, as it demands being proactive and intuitive in foreseeing potential defect-ridden
areas.

PAIR TESTING/BUDDY TESTING:


This is a technique where two team members work on the same build (software application) on the
same machine. One of the team members works with the system (keyboard and mouse) while the
other makes notes and records scenarios.
For this concept you may find, on the internet or in some books, that one team member should be a
tester and the other should be from the development team or a business analyst, or that both team members
should be from the testing team, where one is experienced and the other is not, so that they can
share their views, and so on. In some articles pair testing and buddy testing are defined as two different
concepts: pair testing involves two testers, while buddy testing requires one tester and one developer.
But in reality the naming does not matter, as the goal and methodology are the same.
In my view, when a tester and a developer work together to ensure the quality of a product,
efficiency rises a bit, even if the time is short. No documentation such as test
cases, test plans or test scenarios is needed here.
This technique is used under the following scenarios:
1. When the specification and requirements are not clear.
Sometimes the whole requirement and specification of the product asked for by the client is
not clear, and the deadline is less than a month or so: specifications like field
validation, boundary value ranges, navigation, etc. Lack of proper specification confuses the
tester on account of usability and functional issues. So the presence of another team
member, a developer or an experienced tester, may help to a great extent.
2. The deadline is near.
Of course, as QA we all encounter this type of scenario in our professional life, where
development took more time and, when the product comes for testing, there are only one or
two days in hand for complete QA. In this case pair/buddy testing comes in handy.
3. The team is new, and quick knowledge of the product is needed.
When there is a new member in the team (developer or tester), quick knowledge of the
existing or requested product is needed. The domain knowledge, the technology to be
used, the type of end user, etc. also matter. By this technique of testing, a new tester can get a
hold on the functional flow of the product. And a new developer can pinpoint the common
mistake areas, which may help him do better next time.

When NOT to use Pair/Buddy testing:
1. If the product should go under automation testing, another team member is not needed.
2. If the character set, behavior or attitude of the two members doesn't match, then the testing
shouldn't be done in a pair, as it may create conflict.
3. If there are only structured test cases, then one tester will suffice.
In general, wherever one person's effort is sufficient, pair testing is not needed.
ADVANTAGES OF PAIR/BUDDY TESTING:
1. If a developer and a tester are paired together, the developer can learn a better
approach to the build and the tester can learn the functionality from a better angle.
2. Many random bugs occur during testing which are tough to reproduce; if a tester
and a developer are paired, the developer can catch the link to this type of random bug
at once. This may save a lot of time: the tester doesn't have to waste time reproducing the
issue and doesn't need to capture screenshots. Direct and immediate communication leads to
faster fixing.
3. Both developer and tester will work to their best due to the presence of each other.
4. The developer can mark his/her mistakes, so that they can be efficient and optimized in future.
5. Workload will be less with another team member's help, so the tester can think clearly
and use maximum scenarios to his/her benefit.
6. Work will be fun and enjoyable.

GETTING STARTED WITH PAIR TESTING/BUDDY TESTING (HOW TO SET UP):
1. First choose a developer/tester whom you trust, whom you know well, or whose attitude and
character match yours. This is a very vital point of pair testing, as it has a high impact on the
efficiency.
2. Make a plan of approach: have a short meeting beforehand and make a schedule. A good start, without
any holding back, is needed.
3. As both team members have to use one machine, the environment should be chosen
carefully. Both members should feel comfortable and free to talk to each other without
suffering or creating disturbance for others.
4. The project also matters when applying this type of testing. The project should be of an
appropriate size; pair testing can't be done efficiently over a big or whole project. In my
experience, pair testing works well when testing new functionality and when both participants
have been working on the project since inception.
EXPLORATORY TESTING

As its name implies, exploratory testing is about exploring: finding out about the software, what
it does, what it doesn't do, what works and what doesn't work. The tester is constantly making
decisions about what to test next and where to spend the (limited) time. This is an approach
that is most useful when there are no or poor specifications and when time is severely limited.
The test design and test execution activities are performed in parallel, typically without formally
documenting the test conditions, test cases or test scripts. This does not mean that other, more
formal testing techniques will not be used. For example, the tester may decide to use boundary
value analysis but will think through and test the most important boundary values without
necessarily writing them down. Some notes will be written during the exploratory-testing
session, so that a report can be produced afterwards.
Test logging is undertaken as test execution is performed, documenting the key aspects of what
is tested, any defects found and any thoughts about possible further testing.
It can also serve to complement other, more formal testing, helping to establish greater
confidence in the software. In this way, exploratory testing can be used as a check on the formal
test process by helping to ensure that the most serious defects have been found.


Exploratory testing is a hands-on approach in which testers are involved in minimum planning
and maximum test execution.
The planning involves the creation of a test charter, a short declaration of the scope of a short (1
to 2 hour) time-boxed test effort, the objectives and possible approaches to be used.
During a testing phase where there is severe time pressure, the exploratory testing technique is
adopted, combining the experience of testers with a structured approach to testing.
Exploratory testing is often performed as a black-box testing technique: the tester learns things
that, together with experience and creativity, generate new good tests to run.

MAJOR DIFFERENCES BETWEEN SCRIPTED AND EXPLORATORY TESTING:

ADVANTAGES
1. This testing is useful when requirement documents are not available or only partially available
2. It involves an investigation process which helps find more bugs than normal testing
3. Uncovers bugs which are normally ignored by other testing techniques
4. Helps to expand the imagination of testers by executing more and more test cases, which finally
improves productivity as well
5. This testing drills down to the smallest parts of the application and covers all the requirements
6. This testing covers all types of testing and various scenarios and cases
7. Encourages creativity and intuition
8. Generation of new ideas during test execution

DISADVANTAGES
1. This testing purely depends on the tester's skills
2. Limited by the domain knowledge of the tester
3. Not suitable for long execution times


ITERATIVE TESTING
Iterative testing simply means testing that is repeated, or iterated, multiple times. Iterative usability
testing matters because the ultimate goal of all usability work is to improve usability, not to catalog
problems. A single usability test - particularly if no action is taken based on its findings - can only tell you
how successful or unsuccessful you were in creating ease of use. To improve upon what you already
have, recommendations based on the usability test's findings must be incorporated into a revision of the
product. Once this has been done, it's advisable to test the product again to make sure that no
additional usability flaws were incorporated with the fixes to the previously found glitches. In an ideal
world, of course, this cycle of testing would continue as long as meaningful recommendations for
improvement could be drawn from the usability test results. In reality, it's best to define quantifiable
goals for your product's usability before you begin testing, and to continue the cycle of testing and
revising until your usability goals have been met.
DEFECT SEEDING OR BEBUGGING OR ERROR SEEDING

It is a process of consciously adding errors to the source code, which can be used to
evaluate the number of residual errors after the system software test phase. After adding
the errors to the source code, one can estimate the number of "real" errors in the
code based on the number of seeded errors found.

Bebugging is the process of intentionally adding known defects to the application for the
purpose of monitoring the rate of detection and removal. This process is also known as
defect seeding or fault injection.

PURPOSE OF BEBUGGING:
Bebugging is a way to improve the quality of the product by introducing new known defects. It is also
used to determine the reliability of the test set/test suite. This is achieved NOT by developing more tests but
by introducing new defects.
DRAWBACKS OF DEFECT SEEDING
A common problem with this type of technique is forgetting to remove the errors deliberately inserted.
Another common problem is that removing the seeded defects introduces new errors. To prevent these
problems, be sure to remove all seeded defects prior to final system testing and product release. A
useful implementation standard for seeded errors is to require them to be implemented only by adding
one or two lines of code that create the error; this standard assures that you can remove the seeded
errors safely by simply removing the erroneous lines of code.
A SIMPLE QUESTION ON DEFECT SEEDING:
A defect-seeding program inserts 81 defects into an application. Inspections and testing found 5,832
defects. Of these, 72 were seeded defects. How many errors or defects are predicted to remain in the
application?
(a) 523
(b) 640
(c) 648
(d) 729


Correct Answer: (d) 729

To determine the total number of defects, use the following proportion:

72 (discovered seeded defects) / 81 (total seeded defects) = 5832 (total discovered defects) / X (total defects)

Then, solve for X:

72X = 5832 × 81
X = (5832 × 81) / 72
X = 6561

Subtract the total discovered defects from this estimated total to determine the number of defects
predicted to remain in the program:

6561 − 5832 = 729
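
The same arithmetic can be wrapped in a small helper. This sketch simply mirrors the proportion used in the worked example above (it assumes seeded defects are found at the same rate as real ones):

```python
# Seeded-defect (bebugging) estimate, mirroring the worked example above.
def remaining_defects(seeded, seeded_found, total_found):
    # discovered/total ratio is assumed equal for seeded and real defects
    estimated_total = total_found * seeded / seeded_found
    return estimated_total - total_found

print(remaining_defects(seeded=81, seeded_found=72, total_found=5832))  # 729.0
```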


KIET GROUP OF INSTITUTIONS


DEPARTMENT OF COMPUTER APPLICATIONS
SESSION: 2016-17, V SEMESTER
SOFTWARE TESTING, NMCA E43
UNIT 5

Contents

Test Planning, Management


Test Plan Execution and Reporting
Software Test Automation: Scope of automation
Design & Architecture for automation
Test tool selection,
Testing in Object Oriented Systems

_____________________________________________________________________________________
TEST PLANNING
Software testing is a formal process carried out by a committed testing team in which a piece of
software, parts of software or even multiple pieces of software are examined to detect differences
between existing and required conditions.
A Software Test Plan is a document describing the testing scope and activities. It is the basis for formally
testing any software/product in a project.
Test plan: A document describing the scope, approach, resources and schedule of intended test
activities. It identifies, amongst other things, the test items, the features to be tested, the testing tasks, who will do
each task, the degree of tester independence, the test environment, the test design techniques and entry
and exit criteria to be used, and the rationale for their choice, and any risks requiring contingency
planning. It is a record of the test planning process.
Why do we need to plan for it?

Testing is a complex process


Test planning is essential in:
ensuring testing identifies and reveals as many errors in the software as possible
bringing software to an acceptable level of quality
giving efficiency regarding budgetary and scheduling limitations.
The IEEE Standard for Software Test Documentation defines a test plan as "a document
describing the scope, approach, resources and schedule of intended testing activities".


TEST MANAGEMENT
Test management is the process of managing the tests. Test management is also performed using tools to
manage both types of tests, automated and manual, that have been previously specified by a test
procedure.


Test management tools allow automatic generation of the requirements traceability matrix (RTM), which is an
indication of the functional coverage of the system under test (SUT).
A test management tool often has multifunctional capabilities such as testware management, test
scheduling, the logging of results, test tracking, incident management and test reporting.
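
At its core an RTM is just a mapping from requirements to the tests that cover them. A minimal sketch, with purely illustrative requirement and test IDs:

```python
# Minimal requirements traceability matrix (RTM) sketch; IDs are illustrative.
coverage = {
    "REQ-001 user login":     ["TC-01", "TC-02"],
    "REQ-002 password reset": ["TC-03"],
    "REQ-003 audit logging":  [],          # uncovered requirement -> coverage gap
}

for requirement, tests in coverage.items():
    status = ", ".join(tests) if tests else "NOT COVERED"
    print(f"{requirement:28} -> {status}")
```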
Test management is a series of planning, execution, monitoring and control activities that help achieve
project goals.
TEST MANAGEMENT RESPONSIBILITIES:
The Test Manager takes full responsibility for the project's success. The role involves quality & test
advocacy, resource planning & management, and resolution of issues that impede the testing effort.
1. Test Management has a clear set of roles and responsibilities for improving the quality of the
product.
2. Test management helps the development and maintenance of product metrics during the
course of project.
3. Test management enables developers to make sure that there are fewer design or coding faults.


WHAT ARE THE CHALLENGES IN TEST MANAGEMENT?


Being a Test Manager, you must guarantee all the following requirements:


WHAT IS TEST EXECUTION?


Test execution is the process of executing the code and comparing the expected and actual results.
The following factors are to be considered for a test execution process:
1. Based on risk, select a subset of the test suite to be executed for this cycle.
2. Assign the test cases in each test suite to testers for execution.
3. Execute tests, report bugs, and capture test status continuously.
4. Resolve blocking issues as they arise.
5. Report status, adjust assignments, and reconsider plans and priorities daily.
6. Report test cycle findings and status.

When people talk about a "testing tool", it is mostly a test execution tool that they think of, basically a
tool that can run tests. This type of tool is also known as a "test running tool". Most tools of this type get
started by capturing or recording manual tests; hence they are also known as capture/playback tools,
capture/replay tools or record/playback tools. It is similar to recording a television program and
playing it back.
Test execution tools need a scripting language in order to drive the tool. The scripting language is
basically a programming language, so any tester who wants to use a test execution tool directly will
need programming skills to create and modify the scripts. The basic advantage of programmable
scripting is that tests can repeat actions (in loops) for different data values (i.e. test inputs), they can
take different routes depending on the outcome of a test (e.g. if a test fails, go to a different set of tests)
and they can be called from other scripts giving some structure to the set of tests.
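
The sketch below shows what such programmable, data-driven scripting can look like in Python: one replayed action looped over a table of test inputs, with a branch on the outcome. `submit_login` is a hypothetical stand-in for a recorded UI action, not a real tool API:

```python
# Data-driven scripting sketch: one scripted action replayed over many rows.
def submit_login(user, password):          # hypothetical replayed action
    return user == "admin" and password == "s3cret"

test_data = [                              # test inputs kept separate from the script
    ("admin", "s3cret", True),
    ("admin", "wrong",  False),
    ("guest", "s3cret", False),
]

for user, password, expected in test_data:
    outcome = submit_login(user, password)
    if outcome != expected:                # branch on the result, as the text describes
        print(f"FAIL: {user}/{password}: got {outcome}, expected {expected}")
        break                              # e.g. divert to a different set of tests
    print(f"PASS: {user}")
```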
FEATURES OR CHARACTERISTICS OF TEST EXECUTION TOOLS ARE:
1. To capture (record) test inputs while tests are executed manually;
2. To store an expected result in the form of a screen or object to compare to, the next time the
test is run;
3. To execute tests from stored scripts and optionally data files accessed by the script (if data-driven or keyword-driven scripting is used);
4. To do the dynamic comparison (while the test is running) of screens, elements, links, controls,
objects and values;
5. To initiate post-execution comparison;
6. To log results of tests run (pass/fail, differences between expected and actual results);
7. To mask or filter the subsets of actual and expected results, for example excluding the screen-displayed current date and time which is not of interest to a particular test;
8. To measure the timings for tests;
9. To synchronize inputs with the application under test, e.g. wait until the application is ready to
accept the next input, or insert a fixed delay to represent human interaction speed;
10. To send the summary results to a test management tool.


SOFTWARE TEST AUTOMATION


AUTOMATED TESTING: PROCESS, PLANNING, TOOL SELECTION
Manual testing is performed by a human sitting in front of a computer, carefully executing the test steps.
Automation testing means using an automation tool to execute your test case suite. The automation
software can also enter test data into the System Under Test, compare expected and actual results,
and generate detailed test reports.
Test automation demands considerable investments of money and resources. Successive development
cycles will require execution of the same test suite repeatedly. Using a test automation tool, it is possible to
record this test suite and re-play it as required. Once the test suite is automated, no human
intervention is required. This improves the ROI of test automation.
The goal of automation is to reduce the number of test cases to be run manually, not to eliminate manual
testing altogether.
WHY AUTOMATED TESTING?
Automated testing is important for the following reasons:
1. Manual testing of all workflows, all fields, and all negative scenarios is time- and cost-consuming
2. It is difficult to test multilingual sites manually
3. Automation does not require human intervention; you can run automated tests
unattended (overnight)
4. Automation increases the speed of test execution
5. Automation helps increase test coverage
6. Manual testing can become boring and hence error-prone
WHICH TEST CASES TO AUTOMATE?
Test cases to be automated can be selected using the following criteria to increase the automation ROI:
1. High Risk - Business Critical test cases
2. Test cases that are executed repeatedly
3. Test Cases that are very tedious or difficult to perform manually
4. Test Cases which are time consuming
The following categories of test cases are not suitable for automation:
1.
2.
3.

Test Cases that are newly designed and not executed manually at least once
Test Cases for which the requirements are changing frequently
Test cases which are executed on ad-hoc basis.


AUTOMATION PROCESS
The following steps are followed in an automation process; each is described below:

TEST TOOL SELECTION


Test tool selection largely depends on the technology the Application Under Test is built on. For
instance, QTP does not support Informatica, so QTP cannot be used for testing Informatica applications.
It is a good idea to conduct a proof of concept of the tool on the AUT.
DEFINE THE SCOPE OF AUTOMATION
Scope of automation is the area of your Application Under Test which will be automated. Following
points help determine scope:
1. Features that are important for the business
2. Scenarios which have large amount of data
3. Common functionalities across applications
4. Technical feasibility
5. Extent to which business components are reused
6. Complexity of test cases
7. Ability to use the same test cases for cross browser testing
PLANNING, DESIGN AND DEVELOPMENT
During this phase you create the automation strategy & plan, which contains the following details:
1. Automation tools selected
2. Framework design and its features
3. In-Scope and Out-of-scope items of automation
4. Automation test bed preparation
5. Schedule and Timeline of scripting and execution
6. Deliverables of automation testing


TEST EXECUTION
Automation scripts are executed during this phase. The scripts need input test data before they are set
to run. Once executed, they provide detailed test reports.
Execution can be performed using the automation tool directly or through a test management tool,
which will invoke the automation tool.
Example: Quality Center is a test management tool which in turn invokes QTP for execution of
automation scripts. Scripts can be executed on a single machine or a group of machines. The execution
can be done during the night, to save time.
MAINTENANCE
As new functionalities are added to the System Under Test with successive cycles, Automation Scripts
need to be added, reviewed and maintained for each release cycle. Maintenance becomes necessary to
improve effectiveness of Automation Scripts.
SOFTWARE TESTING TOOL SELECTION
While introducing a tool in the organization, it must match a need within the organization and solve
that need in a way that is both effective and efficient. The tool should help in building on the strengths of
the organization and should also address its weaknesses. The organization needs to be ready for the
changes that will come along with the new tool. If the current testing practices are not good enough and
the organization is not mature, then it is always recommended to improve testing practices first rather
than to try to find tools to support poor practices. Automating chaos just gives faster chaos!
Certainly, we can sometimes improve our own processes in parallel with introducing a tool to support
those practices and we can always pick up some good ideas for improvement from the ways that the
tools work. However, do not depend on the tool for everything, but it should provide support to your
organization as expected.
The following factors are important during tool selection:

Assessment of the organization's maturity (e.g. readiness for change);


Identification of the areas within the organization where tool support will help to improve
testing processes;
Evaluation of tools against clear requirements and objective criteria;
Proof-of-concept to see whether the product works as desired and meets the requirements and
objectives defined for it;
Evaluation of the vendor (training, support and other commercial aspects) or open-source
network of support;
Identifying and planning internal implementation (including coaching and mentoring for those
new to the use of the tool).


TESTING OBJECT-ORIENTED SYSTEMS


Testing is a continuous activity during software development. In object-oriented systems, testing
encompasses three levels, namely, unit testing, subsystem testing, and system testing.
UNIT TESTING
In unit testing, the individual classes are tested. It is seen whether the class attributes are implemented
as per design and whether the methods and the interfaces are error-free. Unit testing is the
responsibility of the application engineer who implements the structure.
SUBSYSTEM TESTING
This involves testing a particular module or a subsystem and is the responsibility of the subsystem lead.
It involves testing the associations within the subsystem as well as the interaction of the subsystem with
the outside. Subsystem tests can be used as regression tests for each newly released version of the
subsystem.
SYSTEM TESTING
System testing involves testing the system as a whole and is the responsibility of the quality-assurance
team. The team often uses system tests as regression tests when assembling new releases.
OBJECT-ORIENTED TESTING TECHNIQUES
GREY BOX TESTING
The different types of test cases that can be designed for testing object-oriented programs are called
grey box test cases. Some of the important types of grey box testing are:

State model based testing : This encompasses state coverage, state transition coverage, and
state transition path coverage.
Use case based testing : Each scenario in each use case is tested.
Class diagram based testing : Each class, derived class, associations, and aggregations are tested.
Sequence diagram based testing : The methods in the messages in the sequence diagrams are
tested.

TECHNIQUES FOR SUBSYSTEM TESTING


The two main approaches of subsystem testing are:
1. Thread based testing : All classes that are needed to realize a single use case in a subsystem are
integrated and tested.
2. Use based testing : The interfaces and services of the modules at each level of the hierarchy are
tested. Testing starts from the individual classes, proceeds to the small modules comprising classes,
gradually to larger modules, and finally to all the major subsystems.


OBJECT-ORIENTED METRICS
Metrics can be broadly classified into three categories: project metrics, product metrics, and process
metrics.
PROJECT METRICS
Project Metrics enable a software project manager to assess the status and performance of an ongoing
project. The following metrics are appropriate for object-oriented software projects:
1. Number of scenario scripts
2. Number of key classes
3. Number of support classes
4. Number of subsystems

PRODUCT METRICS
Product metrics measure the characteristics of the software product that has been developed. The
product metrics suitable for object-oriented systems are:
1. Methods per Class : It determines the complexity of a class. If all the methods of a class are
assumed to be equally complex, then a class with more methods is more complex and thus
more susceptible to errors (see the sketch after this list).
2. Inheritance Structure : Systems with several small inheritance lattices are better structured
than systems with a single large inheritance lattice. As a rule of thumb, an inheritance tree should
not have more than 7 ± 2 levels, and the tree should be balanced.
3. Coupling and Cohesion : Modules having low coupling and high cohesion are considered to be
better designed, as they permit greater reusability and maintainability.
4. Response for a Class : It measures the efficiency of the methods that are called by the instances
of the class.
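
For instance, the methods-per-class metric from item 1 can be computed directly with Python introspection; the `Account` class here is a toy stand-in for production code:

```python
# Sketch: computing the "methods per class" product metric via introspection.
import inspect

class Account:                      # toy class standing in for production code
    def deposit(self, amount): ...
    def withdraw(self, amount): ...
    def balance(self): ...

def methods_per_class(cls):
    # Count functions defined on the class: a rough complexity signal.
    return len([m for m, _ in inspect.getmembers(cls, predicate=inspect.isfunction)])

print(methods_per_class(Account))   # 3 -> more methods suggests a more complex class
```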
PROCESS METRICS
Process metrics help in measuring how a process is performing. They are collected over all projects over
long periods of time. They are used as indicators for long-term software process improvements. Some
process metrics are:
1. Number of KLOC (Kilo Lines of Code)
2. Defect removal efficiency (see the sketch after this list)
3. Average number of failures detected during testing
4. Number of latent defects per KLOC
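
Defect removal efficiency (item 2) is commonly computed as the share of all defects that were removed before release; a minimal sketch:

```python
# Defect removal efficiency (DRE) process metric:
# DRE = defects removed before release / total defects (before + after release).
def defect_removal_efficiency(found_in_testing, found_after_release):
    return found_in_testing / (found_in_testing + found_after_release)

# e.g. 95 defects caught in testing, 5 escaped to the field -> DRE = 0.95
print(defect_removal_efficiency(95, 5))
```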
