Goals
Difficulties
Dimensions of Test Case Selection
Stages of Testing
Test Generation Strategies (Techniques)
Prioritization
Automation
Goals
Detect faults.
Establish confidence in the software.
Evaluate properties of the software: reliability, performance, memory usage, security, usability.
Difficulties
Most of the software testing literature equates test case selection with software testing, but that is just one difficult part. Other difficult issues include:
Determining whether or not outputs are correct.
Stages of Testing
Unit Testing
Tests the smallest individually executable code units. Usually done by programmers. Test cases might be selected based on code, specification, intuition, etc.
Tools: test driver/harness, code coverage analyzer, automatic test case generator.
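In its simplest form, a test driver is just a program that feeds selected inputs to the unit under test and checks the outputs. A minimal sketch, with a hypothetical unit (the leap-year function and its test values are illustrative, not from the notes):

```python
# Hypothetical unit under test -- the function and its rule are
# illustrative, not from the notes.
def is_leap_year(year: int) -> bool:
    """Gregorian leap-year rule."""
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

# A minimal test driver: a table of (input, expected output) pairs,
# selected from the specification and intuition, run in a loop.
cases = [(2024, True), (2023, False), (1900, False), (2000, True)]
for year, expected in cases:
    actual = is_leap_year(year)
    assert actual == expected, f"is_leap_year({year}) = {actual}"
print("all unit tests passed")
```

A real harness (e.g. Python's `unittest`) adds test discovery, reporting, and isolation on top of this same input/expected-output pattern.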
Integration Testing
Tests interactions between two or more units or components. Usually done by programmers. Emphasizes interfaces.
Issues: In what order are units combined? How are units integrated? What are the implications of this order?
Top-down => need stubs; top level tested repeatedly.
Bottom-up => need drivers; bottom levels tested repeatedly.
Critical units first => stubs & drivers needed; critical units tested repeatedly.
Potential problems: inadequate unit testing; inadequate planning & organization for integration testing; inadequate documentation and testing of externally supplied components.
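The stub/driver distinction above can be sketched in a few lines. In this hypothetical top-down example (all names invented for illustration), the top-level unit is tested before the real lower-level unit exists, so a stub supplies canned answers in its place:

```python
# Stub standing in for a not-yet-integrated lower-level unit
# (a price database, hypothetically).
def price_lookup_stub(item_id: str) -> float:
    return {"apple": 1.0, "bread": 2.5}.get(item_id, 0.0)

# Top-level unit under test; `lookup` is the interface it calls.
def order_total(item_ids, lookup):
    return sum(lookup(i) for i in item_ids)

# The driver exercises the top-level unit through the stub, so the
# interface between the two units is what is actually being tested.
assert order_total(["apple", "bread"], price_lookup_stub) == 3.5
assert order_total([], price_lookup_stub) == 0.0
```

In bottom-up integration the roles reverse: the real `price_lookup` would exist first, and a throwaway driver would call it directly.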
System Testing
Tests the functionality of the entire system. Usually done by professional testers.
Selection
Every systematic test selection strategy can be viewed as a way of dividing the input domain into subdomains and selecting one or more test cases from each.
The division can be based on such things as code characteristics (white box), specification details (black box), domain structure, risk analysis, etc.
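The subdomain idea can be made concrete with a toy example (the partition and representatives below are hypothetical, chosen for illustration): partition the input domain of `abs()` into negative, zero, and positive subdomains, and select one representative from each:

```python
# One representative test case per subdomain of the input domain of abs().
subdomains = {"negative": -7, "zero": 0, "positive": 12}
for name, x in subdomains.items():
    # Expected behavior per the specification of absolute value.
    assert abs(x) == (x if x >= 0 else -x), f"failed on {name} subdomain"
```

The assumption behind any such strategy is that inputs within a subdomain are treated alike by the program, so one representative stands in for the rest.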
Problems with white-box selection: applied thoroughly, it can be prohibitively expensive. We don't know the relationship between a thoroughly tested component and faults; one can generally argue that coverage criteria are necessary conditions but not sufficient ones.
Problems with black-box selection: unless the specification is formal (and it rarely/never is), it is very difficult to assure that all parts of the specification have been used to select test cases. Specifications are rarely kept up to date as the system is modified. Even if every functionality unit of a specification has been tested, that doesn't assure that there aren't faults.
Look at characteristics of the input domain or subdomains. Consider typical, boundary, and near-boundary cases. This sort of boundary analysis may be meaningless for non-numeric inputs: what are the boundaries of {Rome, Paris, London, ...}? Similar analysis can be applied to output values, producing output-based test cases.
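For a numeric input, boundary analysis falls out mechanically from the specified range. A sketch, assuming a hypothetical input specified as an integer in [1, 100]:

```python
# Hypothetical specification: valid inputs are integers in [1, 100].
LOW, HIGH = 1, 100

def in_range(x: int) -> bool:
    return LOW <= x <= HIGH

# Typical, boundary, and near-boundary cases, inside and outside the range.
boundary_cases = [LOW - 1, LOW, LOW + 1, 50, HIGH - 1, HIGH, HIGH + 1]
expected =       [False,   True, True,   True, True,   True, False]
for x, ok in zip(boundary_cases, expected):
    assert in_range(x) == ok, f"in_range({x}) should be {ok}"
```

The off-by-one values (LOW - 1, HIGH + 1) are the ones most likely to expose a `<` written where `<=` was intended.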
Risk is the expected loss attributable to the failures caused by faults remaining in the software. Risk is based on failure likelihood (likelihood of occurrence) and failure consequence. Risk-based testing therefore selects test cases to minimize risk, making sure that the most likely inputs and the highest-consequence ones are covered.
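The selection rule reduces to ranking candidate tests by likelihood times consequence. A sketch with hypothetical estimates (the test names and numbers are invented):

```python
# Candidate tests with estimated failure likelihood and consequence
# (both hypothetical numbers supplied by the tester).
tests = [
    ("login with expired password", 0.30, 9.0),
    ("render help page",            0.10, 1.0),
    ("transfer exceeding balance",  0.05, 10.0),
]

# Risk = likelihood * consequence; select in decreasing order of risk.
ranked = sorted(tests, key=lambda t: t[1] * t[2], reverse=True)
assert ranked[0][0] == "login with expired password"  # risk 2.7, highest
```

The estimates themselves are the hard part; the ranking is trivial once they exist.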
Acceptance Testing
Usually performed by the customer to evaluate whether the system meets their criteria. The outcome determines whether the customer will accept the system. This is often part of a contractual agreement.
Regression Testing
Test modified versions of a previously validated system. Usually done by testers. The goal is to assure that changes to the system have not introduced new faults. Prioritization matters here: when time is limited and therefore all tests can't be run, at least the most important ones can be.
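Prioritization under a time budget can be sketched as a greedy selection (suite contents, priorities, and costs below are hypothetical):

```python
# (name, priority, cost in minutes); lower priority number = more important.
suite = [
    ("core checkout flow", 1, 5),
    ("rarely used report", 3, 8),
    ("payment gateway",    1, 4),
    ("ui cosmetics",       2, 2),
]
budget = 10  # minutes available for this regression run

# Greedily take tests in priority order until the budget is exhausted.
selected, spent = [], 0
for name, _prio, cost in sorted(suite, key=lambda t: t[1]):
    if spent + cost <= budget:
        selected.append(name)
        spent += cost
assert selected == ["core checkout flow", "payment gateway"]
```

Real prioritization schemes use richer signals (coverage of changed code, historical failure rates), but the budget-bounded selection has this shape.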
White-box methods can be used for test case selection or generation, and for test case adequacy assessment. Open questions: How effective are coverage criteria at fault detection? Are coverage criteria more effective than random test case selection?
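The adequacy-assessment use can be illustrated with a toy statement-coverage tracker. This sketch records which lines of a hypothetical unit execute using Python's `sys.settrace`; production tools such as coverage.py do this far more robustly:

```python
import sys

def triage(x):  # hypothetical unit under test
    if x < 0:
        return "negative"
    return "non-negative"

executed = set()

def tracer(frame, event, arg):
    # Record line numbers executed inside triage() only.
    if event == "line" and frame.f_code is triage.__code__:
        executed.add(frame.f_lineno)
    return tracer

sys.settrace(tracer)
triage(5)            # exercises only the non-negative path
sys.settrace(None)
partial = set(executed)

sys.settrace(tracer)
triage(-3)           # adds the negative path
sys.settrace(None)

# The second test case strictly increased statement coverage.
assert partial < executed
```

An adequacy criterion would then demand, e.g., that every line of `triage` appear in `executed` before the test suite is judged sufficient.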
Automation
Test execution: run test cases/suites without human intervention. Test generation: produce test cases by processing the specification, code, or model. Test management: log test cases & results; map tests to requirements & functionality; track test progress & completeness.
Why automate? More testing can be accomplished in less time. Testing is repetitive, tedious, and error-prone. Test cases are valuable: once they are created, they can be reused.
Why not automate? Is the payoff worth the expense and effort of automation? Learning to use an automation tool can be difficult. Tests have a finite lifetime. Completely automated execution implies putting the system into the proper state, supplying the inputs, running the test case, collecting the results, and verifying the results.
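Those steps of fully automated execution can be made explicit in a tiny harness (the system under test and all names here are hypothetical):

```python
class Counter:                  # stand-in "system under test"
    def __init__(self): self.n = 0
    def add(self, k): self.n += k

def run_automated_test(setup, inputs, verify):
    sut = setup()               # put the system into the proper state
    for i in inputs:            # supply the inputs / run the test case
        sut.add(i)
    return verify(sut)          # collect and verify the results

ok = run_automated_test(
    setup=Counter,
    inputs=[1, 2, 3],
    verify=lambda sut: sut.n == 6,
)
assert ok
```

Each phase that is easy here (state setup, result verification) is exactly what becomes expensive for a real system, which is the substance of the "why not automate" list above.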
Automated tests are more expensive to create and maintain (estimates of 3-30 times). Automated tests can lose relevancy, particularly when the system under test changes. Use of tools requires that testers learn how to use them, cope with their problems, and understand what they can and can't do.
Common uses of automation: Load tests - simulate large numbers of human testers simultaneously accessing a system. Regression test suites - tests maintained from previous releases; run to check that changes haven't caused faults. Sanity tests - run after every new system build to check for obvious problems. Stability tests - run the system for 24 hours to see that it can stay up.
NIST* estimates that billions of dollars could be saved each year if improvements were made to the testing process.
*NIST Report: The Economic Impact of Inadequate Infrastructure for Software Testing, 2002.