
Training - Automation Architect

Test Automation Strategy and Design


Agenda – Day I
• Assurance Services
– Automation Strategy, Architecture, Automation Assessment and Consulting
• Test Automation Strategy Overview
– Objectives
– Automation Feasibility Analysis, Automation Prioritization
• Feasibility Analysis
– Feasibility Study Methodology
– Technical Feasibility
– Automation Quotient
• Automation Scope Definition
– Analysis
– Scope Definition
• Design
– Data Driven Framework
– Keyword Driven Framework
– Build Verification Test / Deployment Verification Test
• Pilot
– Activities to be done in Pilot
– Pilot Reporting
Introduction to Automation & Test Automation Strategy Overview
Introduction from a Consulting Firm

Automation Strategy
Automation Objectives
 Faster to market
 Increase in test effectiveness
 Effort/cost savings
 Decrease in tester dependency

Automation Strategy Guidelines
 Test automation is a full-time effort, not a sideline.
 The test design and the test framework are totally separate entities.
 The test framework should be application-independent.
 The test framework must be easy to expand and maintain.
 The test strategy/design should be framework-independent.
 The test strategy/design should shield most testers from the complexities of the test framework.

Test Automation Strategy and Design
Rationale
 Rationale behind the scope (feasibility analysis)

Scope
 Test types/activities: shakedown testing, regression, business process tests, data creation/conditioning
 Other considerations: test case consolidation and prioritization

Framework
 Data driven
 Keyword/table driven
 Hybrid

Design
 Script design
 Data sheet design
 Integration design
 Component-based model in BPT

Others
 Estimation, ROI
 Maintenance, execution

Feasibility Analysis
Feasibility Analysis
 Automation Need Evaluation
 Type of Automation
 Activities/Type of tests to be automated
• Shakedown (tests to determine test-readiness)
• Regression tests
• Data creation/ conditioning activity
• Tests to be run on multiple builds/ releases
 IT strategy (choice of toolsets, application strategy, etc.)
 Future upgrades
• Is there a major upgrade in the offing? If yes, check whether the automation would need a major
overhaul and whether the proposed tool will support the technology of the proposed upgrade.
 Activity expected in future
• Is the application likely to be retired/replaced and therefore to see very little activity in
the near future?

Automation Feasibility Study Methodology
Study Sources
 Questionnaire
 Demo
 Discussions
 Documents

Analysis
 Automation quotient
• Availability of automated scripts
• Test coverage level
• Completeness of documents
• % of data-variant test conditions
• Control over input & output
• Repetitiveness in test conditions
• Effort distribution across test phases
• Frequency of releases
• Potential impact of patches/small fixes
• Effort and time for automation maintenance
• Stability of GUI
• Churn rate of the regression test bed
• SME bottlenecks
• Type of defects found – explorative, functional, GUI
 Technical quotient
• Bounded tests
• Control over input & output
• Manual interventions
• Consistency of object properties
• Object identification
• Validation complexity – business rules, attribute validation, static or predictable results
• Effort required for workarounds
 Other factors
• Application enhancement roadmap
• Business criticality

Recommendations
• Solution & implementation approach
• Timeline & effort
• Estimated benefits

Discussions
• Discussion with customer champions on findings & recommendations

Sample – Feasibility Analysis Document
• Sample Document

Tool Evaluation
Factors to evaluate Test Tool
Technology stream
 Support for the current technology stream
 Support for the new technologies being planned as per IT strategy (if any)
Total cost for the tool
 Cost of acquiring the license
 Cost of AMC (annual maintenance contract)
 Cost of add-ins required for the technology stream
 Cost of any additional integration exercise
 Comparison should be made against similar license types (e.g., a global license if offshore
execution is considered)
Exit cost for leaving existing tools
 Sunk investment in direct assets (code etc.) that cannot be reused
 Cost of releasing resources whose skill set is no longer required
 Cost of changes invested in other tools (e.g., any custom integration with the test
management tool)

Tool Evaluation
Factors to evaluate Test Tool
The entry barrier
 Ease of training new resources
 Ease of finding skilled resources in the market
 Availability of functional libraries
 Ease of Integrating with other tools
Strategic Vendor Rating
 Use "Gartner's Magic Quadrant" to see if the strategic direction of the vendor is in sync
with company needs.
User comfort level
 Technical proficiency required to script
 Any special skill required to work on the tool
Technical Support level
 How much support is required from the vendor?
 What kind of support is required and how often?
Potential to resolve current issues
 If a tool is already in use, what issues are being faced? Can the new tool address them?
Comparison of market-leading test tools' capabilities

Tool Landscape

Automation Scope Definition
Scope Definition
Test Case Analysis
 Type of test case
 Number of data-variant test cases
 Number of unique validations

Effort Analysis
 Test design
 Test data setup
 Test execution
 Test reporting

Defect Analysis
 % of user interface defects
 % of withdrawn defects due to test data issues
 % of regression defects
 % of defect slippage
 % of functional defects

Candidate test types/activities
 Regression testing
 Shakedown testing / build verification test / smoke test
 Business process tests
 Data creation/conditioning

Automatability analysis

Function-level Analysis → Test Case Analysis → Automation Analysis

 Prioritization Analysis identifies high-priority functions to undertake for automation.
 Facilitation Analysis and Completeness Analysis select complete, business-critical test cases for further evaluation; Gap Analysis and completion follow.
 Phasing is based on business context and availability.
 Each test case is then positioned on an automatability vs. business criticality grid: cases high on both are Ready to Automate, mixed cases go to alternate methods/selective automation, and cases low on both are ignored.

Facilitation Analysis: Analysis of test case completeness, test case review and tool availability, accessibility and offshoreability

Business Criticality Analysis: Analysis of Day-in-life (DIL) factor, system criticality, potential legislative impact, financial and consumer impact

Automatability Analysis: Analysis of data-variance, potential degree of automation, control of input and output, complexity of case and setup
requirements, potential reusability across phases, potential savings/ benefits

Design
Framework options

Data-Driven
• Test input/output values are read from data files; the objects and actions are stored in the script
• Easiest and quickest to implement
• Low potential for long-term success
• Application build is a pre-condition

Keyword or Table Driven
• Object, action, input data and expected result are all externalized, typically in one record
• Hardest and most time-consuming data-driven approach to implement
• Greatest potential for long-term success
• Table creation can start before the application is built

Hybrid
• Data-driven with externalization of more than the data (objects/part of the actions)
• Keyword-driven with business keywords only (step tables/test tables)
• A combination of data-driven and keyword-driven, often adopted due to knowledge constraints

HP-Mercury QC BPT (Business Components model)
• Component-based model built on Quality Center Business Process Testing

Data Driven Script Design
• Driver Script
Initializes the setup required for the run and calls the main scripts in the desired order.
• Main Script
Calls the functional scripts in order and executes the test case.
• Functional Script
Common functionality or module used by many business functions.
• Recovery Scenario Script
Run-time exception handling scripts.
• Report Script
Execution report generation scripts.
• Utility Script
Utility function scripts (can also be a .vbs file).
• Data Sheet
Excel sheet or any other data source.

The driver script exercises the application under test through the main script and a script library (functional, recovery scenario, report and utility scripts), reading test data from the data grid/data sheet.
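The sketch below is a minimal Python illustration of how these pieces cooperate; the deck assumes a QTP/VBScript implementation, and the function names, data-sheet columns and file name here are hypothetical.

```python
# Illustrative data-driven driver (hypothetical names; the deck's actual
# implementation is QTP/VBScript, this only shows the control flow).
import csv

def login(row):            # functional script: shared module reused across tests
    print(f"logging in as {row['user']}")

def create_order(row):     # functional script specific to the business flow
    print(f"creating order for item {row['item']}")

def report(test_id, status, message=""):   # report script: execution report generation
    print(f"{test_id}: {status} {message}")

def run_test_case(row):     # main script: calls functional scripts in order
    login(row)
    create_order(row)
    assert row["expected_status"] == "OK"   # comparison against the expected section

def driver(data_sheet="testdata.csv"):      # driver script: initialises setup, runs in order
    with open(data_sheet, newline="") as fh:
        for row in csv.DictReader(fh):      # one row per test case in the data sheet
            if row.get("run", "Y") != "Y":
                continue
            try:
                run_test_case(row)
                report(row["test_id"], "PASS")
            except Exception as exc:         # recovery scenario: handle run-time exceptions
                report(row["test_id"], "FAIL", str(exc))

if __name__ == "__main__":
    driver()
```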

Data Sheet Design
Input
 Row headings identify the screen
 Mandatory and optional fields are marked with different colours
 Possible field values are provided in list boxes
 Data grid fields are directly mapped to the application objects

Expected
 Kept in a separate section/sheet
 Cells are protected from user input
 Formulae calculate the expected values

Comparison
 Can be done in the script or in the data sheet
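A minimal sketch of such a sheet built with openpyxl, assuming hypothetical field names and one possible colour/protection convention:

```python
# Sketch of the data sheet layout described above (field names are invented).
from openpyxl import Workbook
from openpyxl.styles import PatternFill, Protection

wb = Workbook()
ws = wb.active
ws.title = "OrderEntry"

# Input section: row heading for the screen, mandatory vs optional colour-coded
ws.append(["TestID", "Customer", "Item", "Quantity", "UnitPrice", "ExpectedTotal"])
mandatory = PatternFill("solid", start_color="FFF4CCCC")   # one colour for mandatory fields
optional = PatternFill("solid", start_color="FFD9EAD3")    # another for optional fields
for cell in ws["A1":"D1"][0]:
    cell.fill = mandatory
ws["E1"].fill = optional

# Sample test-case row; the tester keys data only into the input columns
ws.append(["TC_001", "ACME", "Widget", 4, 2.5, None])

# Expected section: formula-derived, cell locked against user input
ws["F2"] = "=D2*E2"
ws["F2"].protection = Protection(locked=True)
for col in "ABCDE":
    ws[f"{col}2"].protection = Protection(locked=False)     # input stays editable
ws.protection.sheet = True                                   # enable sheet protection

wb.save("datasheet_sample.xlsx")
```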

Keyword/Table Driven Framework
The framework consists of:
 A suite driver sheet and script listing the tests to run (Run T1 – Y, Run T2 – Y, … Run TN – N)
 A datasheet per test (e.g., T1) holding the keywords
 A keyword interpreter for component, action, utility and result keywords
 A functional library of business functions, utility functions, component functions and result functions
 An object map
 A test driver that drives the test tool against one or more applications under test (AUT 1, AUT 2, AUT 3)
 Results, ported to QC through the QC API
The same component tables can also support manual testing.

* Application Under Test (AUT)  ** HP QC (Quality Center)
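A stripped-down illustration of the interpreter idea in Python follows; the component and action names are invented, and a real implementation would dispatch to the test tool rather than print.

```python
# Minimal keyword/table-driven interpreter (illustrative only).
from typing import Callable, Dict

# Component functions: one per screen/component of the AUT
def login_screen(action: str, data: str) -> str:
    return f"login_screen.{action}({data})"

def order_screen(action: str, data: str) -> str:
    return f"order_screen.{action}({data})"

COMPONENTS: Dict[str, Callable[[str, str], str]] = {
    "LoginScreen": login_screen,
    "OrderScreen": order_screen,
}

# A keyword table: one record = object (component), action, input data, expected result
KEYWORD_TABLE = [
    ("LoginScreen", "enter_user", "alice",  "login_screen.enter_user(alice)"),
    ("OrderScreen", "add_item",   "widget", "order_screen.add_item(widget)"),
]

def interpret(table):
    """Test driver: walk the table, dispatch each keyword, record results."""
    results = []
    for component, action, data, expected in table:
        actual = COMPONENTS[component](action, data)
        results.append((component, action, "PASS" if actual == expected else "FAIL"))
    return results

if __name__ == "__main__":
    for row in interpret(KEYWORD_TABLE):
        print(row)
```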

BVT Automation framework
Key Pointers
• Uses descriptive programming method
• XML and Excel based
• Up to 40%+ faster testing on builds
• Extensible – reports, checks
• Object checks parameterized - easy maintenance on UI changes or test requirement changes
• Catches defects early, saves rework

Flow (tool-independent, XML/Excel based):
 Application under test → object property list repository (tool-independent, XML-based), with an optimized property list per object
 Expected object property result sheet – expected results for UI validation are generated automatically
 Shakedown execution, driven by a data flow sheet plus any additional checks
 Customized reports – the granularity and format of the report are configurable
Deliverables: ready-to-use scripts, customize-and-use components, and guidelines.
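The shakedown comparison can be pictured as below; this Python sketch assumes a hypothetical XML layout and property names, not the framework's actual repository format.

```python
# Sketch of a BVT/shakedown check: expected object properties kept in an XML
# repository are compared against properties captured from the current build.
import xml.etree.ElementTree as ET

EXPECTED_XML = """
<screen name="Login">
  <object name="btnSubmit" visible="True" enabled="True" text="Submit"/>
  <object name="txtUser"   visible="True" enabled="True" text=""/>
</screen>
"""

def load_expected(xml_text):
    root = ET.fromstring(xml_text)
    return {obj.get("name"): {k: v for k, v in obj.attrib.items() if k != "name"}
            for obj in root.findall("object")}

def shakedown(actual_properties, expected):
    """Compare captured properties with the expected repository and report mismatches."""
    report = []
    for name, expected_props in expected.items():
        actual = actual_properties.get(name)
        if actual is None:
            report.append((name, "MISSING"))
            continue
        diffs = {k: (v, actual.get(k)) for k, v in expected_props.items()
                 if actual.get(k) != v}
        report.append((name, "PASS" if not diffs else f"FAIL {diffs}"))
    return report

if __name__ == "__main__":
    # Properties as they might be captured from the current build (hypothetical)
    captured = {"btnSubmit": {"visible": "True", "enabled": "False", "text": "Submit"},
                "txtUser":   {"visible": "True", "enabled": "True",  "text": ""}}
    for line in shakedown(captured, load_expected(EXPECTED_XML)):
        print(line)
```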

Tool Integration
• Integration of Version control tool with Test Management tool
• Integration of the Automation Tool and Test Management Tool
– Execution of Scripts from Test Management Tool
– Reporting Mechanism

Executing cases from QC
 Pros
• Ease of execution by a non-technical professional
• Selective runs can be scheduled through QC
• Integrated view of results (manual and automated) is possible
• Good medium for sharing between onsite and offshore
 Cons
• The QC link can become a bottleneck
• Latency issues
• Uploading scripts into QC takes longer
• Version management concerns
• Potential space availability concerns
• Maintainability of scripts
 Other considerations
• A controller script has to be built which interfaces with the actual script and data grid

Executing cases outside QC and porting results as a batch
 Pros
• Fast turnaround times
• Integrated view of results (manual and automated) is still possible
• Easier to maintain the scripts
 Cons
• Tests cannot be scheduled from QC
 Other considerations
• QTP results and the data grid with updated results are attached as an attachment to the test set

Pilot
Pilot – Objectives and considerations
Aims of the pilot:
• Define the automation framework for the main phase
• Define guidelines for selecting test scenarios amenable to automation
• Identify the scenarios/functions to be automated based on the ROI indicators measured
during the pilot phase of the project
• Work out a detailed plan to automate the rest of the test cases (that can be automated)
• Investigate potential roadblocks to offshoring the automation project and come up with
alternatives
While scoping a Pilot, the following are key considerations:
• Select a representative set of business scenarios using a systematic approach
– Involving multiple human interaction points
– Multiple platform/ online-batch jumps/ interfaces
– Account for frequency of operation of a business case and the complexity of the test
case

Pilot - Activities to be done in Pilot
Functionality Identification
 Factors
 Complexity
 Coverage
 Priority
Test cases Identification
 Coverage – Including Positive and Negative
Test Data Preparation
 Prepare Test Data for each test case
 Preconditioning the data
Data sheet Preparation
 Identify the Fields needed for Functionality under test
 Prepare the Data Sheet with Identified fields as per the Data Sheet Design
 Make an entry for each test case in the data sheet
 Key in the test data for each test case in the corresponding fields

Pilot - Activities to be done in Pilot
Automating the Functionality under Test
 Define every feature of the functionality
 Modularize the Units of Functionality under Test
 Create all related steps of Functionality
 Do the necessary modifications to the script to make it Data Driven
 Unit Testing

Execution
 Run the Script with the data available in the Data sheet
 Report the Execution Results as per the defined format

Metrics to Management
 Benefits out of Automation
 Possible Risks and Mitigation

Estimation
Effort Estimation Criteria
• Categorization of complexity
– no. of objects
– type of objects
– type of action
– number of steps
– number of verifications/syncs/outputs
– degree of reusability

• Effort components
– time for understanding
– time for coding (including visiting existing libraries for reusability)
– time for testing the scripts
– time for unique vs. reused scripts
– time for user documentation
– time for review
– time for fixes

• Framework setup
• Dependencies

Effort Estimation Model
• Sample Test case based model
Effort in person-days (PD); unit test effort per test case = 0.25.
Columns: familiarization | script capture and construction | script data externalization |
script debugging (data setup + execution + analysis + reporting) | test execution |
integration with test repository tool | automation effort (PD)

Simple (50 test cases):  3.125 | 18.75 | 12.5  | 12.5  | 3.125 | 0 | 50
Medium (30 test cases):  2.25  | 15    | 11.25 | 9.375 | 3     | 0 | 40.875
Complex (20 test cases): 2     | 12.5  | 10    | 7.5   | 3     | 0 | 35

Total test automation effort: 125.875 PD
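The per-category and overall totals can be re-derived from the activity columns; a quick cross-check of the table's arithmetic:

```python
# Recomputing the sample model above: per-category effort is the sum of the
# activity columns, all in person-days (PD).
rows = {
    # category: (familiarize, capture, externalize, debug, execute, integrate)
    "Simple":  (3.125, 18.75, 12.5, 12.5, 3.125, 0),
    "Medium":  (2.25, 15.0, 11.25, 9.375, 3.0, 0),
    "Complex": (2.0, 12.5, 10.0, 7.5, 3.0, 0),
}

total = 0.0
for category, efforts in rows.items():
    subtotal = sum(efforts)
    total += subtotal
    print(f"{category:8s}: {subtotal} PD")
print(f"Total test automation effort: {total} PD")   # 50 + 40.875 + 35 = 125.875 PD
```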

ROI Calculation
Payback from Automation
Savings due to increased efficiency + additional productivity benefits
Payback = -------------------------------------------------------------------------------------------
Tool cost (or AMC) + Maintenance cost + Build cost

Note 1: Tool cost can be ignored if the tool has already been procured and is unutilized.
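A worked example of the formula, with entirely hypothetical figures:

```python
# Worked example of the payback formula above; all figures are invented,
# purely to show how the ratio is assembled.
savings_from_efficiency = 120_000   # e.g. manual execution effort saved per year
additional_productivity = 30_000    # e.g. earlier defect detection, tester time freed up
tool_cost_or_amc        = 40_000    # licence/AMC; may be 0 if an unused licence exists
maintenance_cost        = 25_000    # script maintenance across releases
build_cost              = 60_000    # one-time framework and script build effort

payback = (savings_from_efficiency + additional_productivity) / \
          (tool_cost_or_amc + maintenance_cost + build_cost)
print(f"Payback ratio: {payback:.2f}")   # 150000 / 125000 = 1.20, benefits exceed cost
```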

Other popular metrics include:
 % of strategic applications automated
 Test execution savings due to automation (incl. reduction in time-to-market)
 % of errors due to testing

Other benefits include:
 Optimal utilization of tester time
 Increased motivation

Metrics – Design Phase
Phase: Design
Metric: Automation quotient per test case (depth)
 Objective: measures the depth of automation within a test case
 What to collect: number of test steps automated in a test case; total number of test steps in the test case
 Automation quotient = (no. of test steps automated in a test case) / (total number of test steps in the test case)
 When to collect: at the end of the design phase
 Using the metric: a low value indicates shallow automation; the selection of applications/modules for test automation should be re-examined

Phase: Design
Metric: % of reusable scripts for new enhancements or projects
 Objective: measure of the reusability achieved
 What to collect: number of reusable scripts used; total number of components created
 When to collect: at the end of the new design phase
 Using the metric: a high % indicates quicker time-to-build; automation can be used earlier in the Test Development Life Cycle (TDLC)
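Both metrics are simple ratios; a small sketch (with invented test-case numbers):

```python
# Sketch of the two design-phase metrics defined above (sample data is invented).
def automation_quotient(steps_automated: int, total_steps: int) -> float:
    """Depth of automation within one test case."""
    return steps_automated / total_steps

def reusable_script_percentage(reused: int, total_components: int) -> float:
    """Share of components that were reused rather than built afresh."""
    return 100.0 * reused / total_components

print(automation_quotient(steps_automated=12, total_steps=20))      # 0.6 -> reasonable depth
print(reusable_script_percentage(reused=18, total_components=30))   # 60.0 -> quicker time-to-build
```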

Sample Standards and Guidelines
Standards and Guidelines
Need for Automation Standards
Standards are needed to achieve the following
 Readability
 Maintainability
 Clarity
 Uniformity
Various Automation Standards
Hardware Standards
Software Standards
Tool Setting Standards
 General Options and Test Settings
 Object Identification Settings
 Options Settings
 Editor Settings

Standards and Guidelines Contd…
Recording Standards
 Recording standards allow tests to be recorded according to the configured settings. These settings are tool-specific. For example:
 Mercury QTP and WinRunner – recording settings can be set to keep/remove specific properties of the objects before starting the recording
 Rational Functional Tester – once the object is recorded, its properties can be edited
Coding Standards
 Test Script – function name
 Version – ensure that versioning is maintained for the scripts
 Author – ensure that the author's name is mentioned in the header of the script
 Comments – comments about the script and each unit of the script
 Descriptions/Definitions – ensure that all variables are defined and descriptions are provided for functions
 Parameters used – ensure that all parameters indicate their nature (in/out/in-out) and have appropriate comments where needed
 Modularization – ensure that scripts are modularized based on functional units
 Length of the Script – ensure that the script is not too long

Standards and Guidelines Contd…
 Path Hard Coding – eliminate hard-coded paths
 Indentation – code should be indented for easy readability and debugging
 Defining Array Bounds – arrays used in utility functions need to be dynamic in nature; bounds should not be fixed, so that they can process data of any size
 Defining Functions – anything abstracted at unit level needs to be classified as a function and stored accordingly in the utility repository
 Creating Functions Afresh – to increase efficiency and remove redundancy, a traceability matrix should be maintained to check whether the required functions are already available before scripting afresh
 Window Flow Control – wherever possible, window flows should be defined using keywords; this enables maximum reusability (new scenarios can be added on the fly) and encourages reuse
 Nested Loops – keep nested loops to a minimum wherever possible
 Synchronization – ensure that event loops are used for flow between screens
 Reserved Words – ensure that no reserved words are used

Standards and Guidelines Contd…
Execution Standards
 Ensure that the same environment is set up on all machines used for execution

Debugging Checklist

 A debugging checklist should be maintained. Key checkpoints:
 Did the code work before?
• No – follow standard debugging
• Yes – check whether an identical OS is used
– If not, it is an OS-specific problem
– If yes, check whether the installed service packs are of the same version

Data sheet Standards and Guidelines Contd…
Usability

 Ensure that guidelines are provided for each field in the data sheet; this makes the tester self-sufficient
 Ensure that the possible values to select from are provided in the data sheet
 Define each section clearly as input, output and final results
 If possible, allow the tester to edit only the input section

Readability

 Conventions to be followed for each section of the data sheet
 Names/colour codes for the variables

Thank You

Mercury Quick Test Pro – Best practices
Usability:

 Data Grid Design Highlights
 Row headings identify the screen
 Mandatory and optional fields are marked with different colours
 Possible field values are provided in list boxes
 Data grid fields are directly mapped to the application objects

 Document
 Execution manual – describes the activities needed to execute the scripts

 Reporting
 Readable QTP results through report events (Reporter.ReportEvent)
 Report written to a separate results workbook to port into MQC
 Error messages written to the data grid in use

Mercury Quick Test Pro – Best practices
Maintainability:

 Object Handling – Minimize Maintenance effort

 Descriptive objects
The test data and application object properties are used as input to define dynamic objects. Descriptive objects are used to automate dynamic screens.
Example: Account – Networks, Benefit Option – Network selection
Retrofitting is done in the code.

 Static objects
These objects are used to handle static screens/input. They are captured and stored in the test object repository.
Example: Client Screen, Doc Gen, Tie contacts
Retrofitting is done in the object repository.
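The following Python sketch is a generic illustration of the two styles, not QTP's descriptive-programming syntax: descriptive objects are resolved at run time from supplied properties, while static objects are looked up by logical name in a pre-captured repository.

```python
# Generic illustration of the two object-handling styles (names are invented).

# Pretend UI tree captured from the application under test
UI_OBJECTS = [
    {"type": "button", "name": "btnNetworkSelect", "label": "Select network"},
    {"type": "edit",   "name": "txtClientName",    "label": "Client name"},
]

# Descriptive: match on properties supplied by the script/test data at run time
def find_descriptive(**properties):
    for obj in UI_OBJECTS:
        if all(obj.get(k) == v for k, v in properties.items()):
            return obj
    raise LookupError(f"no object matching {properties}")

# Static: objects captured once and stored; scripts refer to them by logical name
OBJECT_REPOSITORY = {"ClientScreen.ClientName": {"type": "edit", "name": "txtClientName"}}

def find_static(logical_name):
    return OBJECT_REPOSITORY[logical_name]

print(find_descriptive(type="button", label="Select network"))  # dynamic screens
print(find_static("ClientScreen.ClientName"))                   # static screens
```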

Mercury Quick Test Pro – Best practices
Maintainability (Contd..):

 Coding Standard
A coding standards document which makes the entire development cogent and comprehensible
– Descriptions (Version, Parameters used & Global variables)
– Variable Definition
– Content & Length of the Script
– Comments

 Data Grid Standard
A document which makes the entire development cogent and comprehensible

 Script Matrix
This links together the various scripts used per type

 Traceability Matrix
Traceability of test cases to their automation scripts

Mercury Quick Test Pro – Best practices
Integration:

 Results porting Tool
 Ports executed results to the regression test set in MQC
 API built using the QC Open Test Architecture (OTA)
 Saves manual effort

 Reporting
 Report written to a separate results workbook to port into MQC
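A sketch of the results-workbook half of this flow (field names hypothetical; the OTA-based upload itself is not shown):

```python
# Execution results are written to a separate results file which the OTA-based
# porting tool later uploads into the regression test set in Quality Center.
import csv
from datetime import datetime

results = [
    {"test_id": "TC_001", "status": "Passed", "message": ""},
    {"test_id": "TC_002", "status": "Failed", "message": "expected total mismatch"},
]

with open("qc_results.csv", "w", newline="") as fh:
    writer = csv.DictWriter(fh, fieldnames=["test_id", "status", "message", "executed_on"])
    writer.writeheader()
    for row in results:
        writer.writerow({**row, "executed_on": datetime.now().isoformat(timespec="seconds")})
```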

Mercury Quick Test Pro – Best practices
Performance:

 Unmonitored runs
 Recovery scenarios

 Execution speed
 Coding standards
 Combination of descriptive and static objects
 Logical synchronization (no hard-coded Wait statements)
 Scripts are stored locally, with a QC integration mechanism built for porting
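Logical synchronization in this sense means polling for an application condition instead of a fixed Wait; a generic sketch:

```python
# "Logical synchronization": poll for the application state instead of a fixed
# Wait/sleep. The condition callable is whatever the test needs (hypothetical here).
import time

def wait_until(condition, timeout=30.0, poll_interval=0.5):
    """Return as soon as condition() is true; fail fast after the timeout."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if condition():
            return True
        time.sleep(poll_interval)
    raise TimeoutError("synchronization condition not met within timeout")

if __name__ == "__main__":
    # Demo: the condition becomes true after a short delay
    start = time.monotonic()
    wait_until(lambda: time.monotonic() - start > 1.0, timeout=5, poll_interval=0.2)
    print("synchronized without a hard-coded Wait")
```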

Mercury Quick Test Pro – Best practices
Data Grid:

Mercury Quick Test Pro – Best practices
Reporting:

Mercury Quick Test Pro – Best practices
Descriptive objects:

Mercury Quick Test Pro – Best practices
Static object repository:

Mercury Quick Test Pro – Best practices
Script matrix:

Mercury Quick Test Pro – Best practices
Traceability matrix:

Mercury Quick Test Pro – Best practices
Integration:
The pros and cons of executing cases from QC versus executing outside QC and porting results as a batch are the same as listed in the Tool Integration section earlier.

Mercury Quick Test Pro – Best practices
Integration (Contd..):

