
Testing Definition

Goals

Difficulties

Dimensions of Test Case Selection

Stages of Testing

Test Generation Strategies (Techniques)

Prioritization

Automation

Executing software in a simulated or real environment, using inputs selected somehow.

Detect faults.
Establish confidence in the software.
Evaluate properties of the software:
  Reliability
  Performance
  Memory usage
  Security
  Usability

Most of the software testing literature equates test case selection with software testing, but selection is only one difficult part. Other difficult issues include:
Determining whether or not outputs are correct.
Comparing resulting internal states to expected states.
Determining whether adequate testing has been done.
Measuring performance characteristics.

Stages of Development
Source of Information for Test Case Selection

Testing in the Small

Unit Testing
Feature Testing
Integration Testing

Tests the smallest individually executable code units. Usually done by programmers. Test cases might be selected based on code, specification, intuition, etc.

Tools: test driver/harness, code coverage analyzer, automatic test case generator.
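A minimal sketch of what a unit test looks like in Python's built-in unittest framework, playing the test driver/harness role. The function under test, is_leap_year, is a hypothetical example, not taken from the original material.

    import unittest

    def is_leap_year(year: int) -> bool:
        # Hypothetical unit under test: the leap-year rule.
        return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

    class LeapYearTest(unittest.TestCase):
        def test_divisible_by_four(self):
            self.assertTrue(is_leap_year(2024))

        def test_century_is_not_leap(self):
            self.assertFalse(is_leap_year(1900))  # divisible by 100 but not 400

        def test_divisible_by_400(self):
            self.assertTrue(is_leap_year(2000))

    if __name__ == "__main__":
        unittest.main()  # the harness discovers, runs, and reports all cases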

Tests interactions between two or more units or components. Usually done by programmers. Emphasizes interfaces.
Issues: In what order are units combined? How are units integrated? What are the implications of this order?
Top-down => need stubs; top-level units are tested repeatedly.
Bottom-up => need drivers; bottom-level units are tested repeatedly.
Critical units first => stubs & drivers needed; critical units are tested repeatedly.
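A sketch of the stub/driver distinction with hypothetical names: in top-down integration a not-yet-integrated lower unit is replaced by a stub; in bottom-up integration a throwaway driver exercises a lower unit directly.

    # Top-down: the real tax component is not integrated yet, so a stub stands in.
    class TaxServiceStub:
        def tax_for(self, amount: float) -> float:
            return 0.0  # canned answer: just enough for the caller to run

    class Checkout:  # top-level unit under test
        def __init__(self, tax_service):
            self.tax_service = tax_service

        def total(self, amount: float) -> float:
            return amount + self.tax_service.tax_for(amount)

    assert Checkout(TaxServiceStub()).total(100.0) == 100.0

    # Bottom-up: a driver calls the bottom-level unit directly.
    def tax_for(amount: float) -> float:  # bottom-level unit
        return round(amount * 0.07, 2)

    def driver():  # supplies inputs and checks outputs
        assert tax_for(100.0) == 7.0
        assert tax_for(0.0) == 0.0

    driver()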

Potential problems: inadequate unit testing; inadequate planning and organization for integration testing; inadequate documentation and testing of externally supplied components.

Testing in the Large


System Testing
End-to-End Testing
Operations Readiness Testing
Beta Testing
Load Testing
Stress Testing
Performance Testing

Reliability Testing
Regression Testing

Tests the functionality of the entire system. Usually done by professional testers.

Exhaustive testing is not possible: even a function with two 32-bit integer inputs has 2^64 possible input combinations.

Testing is creative and difficult.


A major objective of testing is failure prevention. Testing must be planned.
Testing should be done by people who are independent of the developers.

Every systematic test selection strategy can be viewed as a way of dividing the input domain into subdomains and selecting one or more test cases from each.
The division can be based on such things as code characteristics (white box), specification details (black box), domain structure, risk analysis, etc.
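A sketch of this idea in Python, assuming a hypothetical ticket-pricing function whose age input is partitioned into subdomains; one representative is selected from each.

    # Hypothetical partition of an "age" input into subdomains,
    # with one representative test case drawn from each.
    def ticket_price(age: int) -> float:
        if age < 0:
            raise ValueError("age must be non-negative")
        if age <= 12:
            return 5.0
        if age <= 17:
            return 8.0
        if age <= 64:
            return 12.0
        return 9.0

    subdomains = {
        "negative (invalid)": -1,
        "child (0-12)": 7,
        "teen (13-17)": 15,
        "adult (18-64)": 40,
        "senior (65+)": 70,
    }

    for name, representative in subdomains.items():
        try:
            print(f"{name}: price = {ticket_price(representative)}")
        except ValueError as err:
            print(f"{name}: rejected ({err})")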

Code-based (white-box) selection can only be used at the unit testing level, and even then it can be prohibitively expensive. We don't know the relationship between a thoroughly tested component and the absence of faults; we can generally argue that coverage criteria are necessary conditions, but not sufficient ones.

Unless there is a formal specification (which there rarely, if ever, is), it is very difficult to assure that all parts of the specification have been used to select test cases. Specifications are rarely kept up to date as the system is modified. And even if every functional unit of a specification has been tested, that doesn't assure that there aren't faults.

Look at characteristics of the input domain or subdomains. Consider typical, boundary, and near-boundary cases. This sort of boundary analysis may be meaningless for non-numeric inputs: what are the boundaries of {Rome, Paris, London, …}? Similar analysis can also be applied to output values, producing output-based test cases.
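A sketch of mechanical boundary-value selection for a numeric input, assuming the subdomain boundaries are known (the values here are illustrative): each boundary contributes the values just below, on, and just above it.

    # For subdomain boundaries at 0, 13, 18, and 65, select on-boundary
    # and near-boundary values in addition to typical ones.
    boundaries = [0, 13, 18, 65]

    def boundary_cases(b: int) -> list[int]:
        return [b - 1, b, b + 1]  # just below, on, just above

    test_inputs = sorted({v for b in boundaries for v in boundary_cases(b)})
    print(test_inputs)  # [-1, 0, 1, 12, 13, 14, 17, 18, 19, 64, 65, 66]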

Risk is the expected loss attributable to the failures caused by faults remaining in the software. Risk is based on failure likelihood (likelihood of occurrence) and failure consequence. Risk-based testing therefore selects test cases so as to minimize risk, making sure that the most likely and highest-consequence inputs are covered.
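A sketch of risk-based ordering under that definition: score each candidate by likelihood times consequence and test the highest-risk items first. The feature names and numbers are invented for illustration.

    # risk = failure likelihood * failure consequence (expected loss)
    candidates = [
        ("login",         0.9, 10),  # (feature, likelihood, consequence)
        ("export-report", 0.2,  3),
        ("payment",       0.4, 10),
        ("profile-photo", 0.6,  1),
    ]

    by_risk = sorted(candidates, key=lambda c: c[1] * c[2], reverse=True)
    for feature, likelihood, consequence in by_risk:
        print(f"{feature}: risk = {likelihood * consequence:.1f}")
    # "login" (9.0) and "payment" (4.0) are tested first.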

The end user runs the system in their environment to evaluate whether it meets their criteria. The outcome determines whether the customer will accept the system. This is often part of a contractual agreement.

Tests modified versions of a previously validated system. Usually done by testers. The goal is to assure that changes to the system have not introduced errors.

The primary issue is how to choose an effective regression test suite from existing, previously run test cases.
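A sketch of one common selection heuristic, assuming a mapping from each previously-run test to the modules it exercises is available: rerun only the tests that touch changed modules.

    # Hypothetical mapping from existing tests to the modules they cover.
    test_coverage = {
        "test_checkout": {"cart", "payment"},
        "test_search":   {"search", "index"},
        "test_login":    {"auth"},
        "test_refund":   {"payment", "auth"},
    }

    changed_modules = {"payment"}  # e.g. derived from the latest change set

    regression_suite = [name for name, modules in test_coverage.items()
                        if modules & changed_modules]
    print(regression_suite)  # ['test_checkout', 'test_refund']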

Once a test suite has been selected, it is often desirable to prioritize test cases based on some criterion. That way, since the time available for testing is limited and therefore not all tests can be run, at least the most important ones can be. Common criteria:
Most frequently executed inputs.
Most critical functions.
Most critical individual inputs.
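A sketch combining these criteria into a single priority score; the weights and test records are invented and would be tuned per project.

    # Hypothetical test records: (name, input frequency, criticality 0-10).
    tests = [
        ("test_rare_admin_path", 0.01, 9),
        ("test_common_search",   0.70, 4),
        ("test_checkout",        0.30, 8),
    ]

    FREQ_WEIGHT, CRIT_WEIGHT = 0.5, 0.5  # arbitrary illustrative weights

    def priority(record):
        _, freq, crit = record
        return FREQ_WEIGHT * freq + CRIT_WEIGHT * (crit / 10)

    for name, *_ in sorted(tests, key=priority, reverse=True):
        print(name)  # run in this order if testing time runs out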

White-box methods can be used for test case selection or generation and for test case adequacy assessment.

Is code coverage an effective means of detecting faults?

How much coverage is enough?


Is one coverage criterion better than another? Does increasing coverage necessarily lead to higher

fault detection? Are coverage criteria more effective than random test case selection?
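A sketch of why coverage is necessary but not sufficient: the two assertions below achieve full statement (and branch) coverage of a deliberately buggy function, yet the fault goes undetected because it is masked for the chosen inputs.

    def abs_diff(a: int, b: int) -> int:
        if a >= b:
            return a - b
        else:
            return a + b  # fault: should be b - a

    assert abs_diff(5, 3) == 2  # covers the if-branch
    assert abs_diff(0, 5) == 5  # covers the else-branch; bug masked since a == 0
    # Full coverage, yet abs_diff(3, 5) returns 8 instead of 2: covered, not detected.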

Test execution: run large numbers of test cases/suites without human intervention.
Test generation: produce test cases by processing the specification, code, or model.
Test management: log test cases and results; map tests to requirements and functionality; track test progress and completeness.
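A sketch of the execution side using pytest's parametrize marker (pytest is a real framework; the function and cases are invented): one test definition runs many cases without human intervention.

    import pytest

    def normalize(s: str) -> str:
        # Hypothetical unit under test: collapse whitespace, lowercase.
        return " ".join(s.split()).lower()

    CASES = [
        ("Hello  World", "hello world"),
        ("  spaced ", "spaced"),
        ("ALL CAPS", "all caps"),
    ]

    @pytest.mark.parametrize("raw,expected", CASES)
    def test_normalize(raw, expected):
        assert normalize(raw) == expected

    # Running `pytest -q` executes and reports every case automatically.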

More testing can be accomplished in less time. Testing is repetitive, tedious, and error-prone. Test cases are valuable: once they are created, they can and should be used again, particularly during regression testing.

Does the payoff from test automation justify the expense and effort of automation? Learning to use an automation tool can be difficult, and tests have a finite lifetime. Completely automated execution implies putting the system into the proper state, supplying the inputs, running the test case, collecting the results, and verifying the results.

Automated tests are more expensive to create and maintain (estimates range from 3 to 30 times the cost of manual tests). Automated tests can lose relevancy, particularly when the system under test changes. Tools require that testers learn how to use them, cope with their problems, and understand what they can and can't do.

Load/stress tests: it is very difficult to have very large numbers of human testers simultaneously accessing a system.
Regression test suites: tests maintained from previous releases, run to check that changes haven't caused faults.
Sanity tests: run after every new system build to check for obvious problems.
Stability tests: run the system for 24 hours to see that it can stay up.
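A sketch of why load tests in particular demand automation: a thread pool simulating many simultaneous users, with a stand-in function in place of a real request against the system under test.

    import time
    from concurrent.futures import ThreadPoolExecutor

    def simulated_request(i: int) -> float:
        # Stand-in for a real call against the system under test.
        start = time.perf_counter()
        time.sleep(0.01)  # pretend round-trip latency
        return time.perf_counter() - start

    # 200 "users" at once: impractical with humans, trivial in code.
    with ThreadPoolExecutor(max_workers=200) as pool:
        latencies = list(pool.map(simulated_request, range(200)))

    print(f"max latency: {max(latencies):.3f}s over {len(latencies)} requests")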

NIST estimates that billions of dollars could be saved each year if improvements were made to the testing process.

*NIST Report: The Economic Impact of Inadequate Infrastructure for Software Testing, 2002.
