
Software Project Management and Quality Assurance

Unit 3 Software Testing Strategies

Structure
3.0 Introduction
3.1 Unit Testing
    Unit Test Considerations
3.2 Integration Testing
    Big Bang
    Bottom-up Integration
    Top-down Integration
3.3 System Testing
    Recovery Testing
    Security Testing
    Stress Testing
    Performance Testing
3.4 Software Validation
    Verification and Validation
    Validation Test Criteria
    Software Configuration Review
    Alpha and Beta Testing
3.5 Debugging
    Steps for Debugging
3.6 Testing Life Cycle
3.7 Roles in Software Testing
3.8 Summary
3.9 Terminal Questions

3.0 Introduction
In computer programming, unit testing is a procedure used to validate whether the individual modules or units of source code are working properly. More technically, a unit is the smallest testable part of an application. In a procedural design, a unit may be an individual program, function, procedure, web page, or menu. In an object-oriented design, the smallest unit is always a class, which may be a base/super class, an abstract class, or a derived/child class.

A unit test is a method of testing the correctness of a particular module of source code. The idea is to write test cases for every non-trivial function or method in the module, keeping each test case separate from the others where possible. Unit testing is the first test in the development process: the source code is normally divided into modules, which in turn are divided into smaller parts called units, each with a specific behavior, and the tests done on these units of code are called unit tests.

It is important to realize that unit testing will not catch every error in the program. By definition, it only tests the functionality of the units themselves; it will therefore not catch integration errors, performance problems, or other system-wide issues. In addition, it may not be trivial to anticipate all the special cases of input that the program unit under study may receive in practice. Unit testing is only effective if it is used in conjunction with other software testing activities.

In this unit we discuss unit test considerations, system testing approaches, validation testing approaches, the testing life cycle, and the comparison of testing and quality assurance.

3.1 Unit Testing


Unit testing focuses verification effort on the smallest unit of software design: the software component or module. Using the component-level design description as a guide, important control paths are tested to uncover errors within the boundary of the module.

The relative complexity of tests, and of the errors they uncover, is limited by the constrained scope established for unit testing. The unit test is white-box oriented, and the step can be conducted in parallel for multiple components.

3.1.1 Unit Test Considerations
The tests that occur as part of unit testing are illustrated schematically in Figure 3.1. The module interface is tested to ensure that information properly flows into and out of the program unit under test. The local data structure is examined to ensure that data stored temporarily maintains its integrity during all steps in an algorithm's execution. Boundary conditions are tested to ensure that the module operates properly at the boundaries established to limit or restrict processing. All independent paths (basis paths) through the control structure are exercised to ensure that all statements in the module have been executed at least once.

Selective testing of execution paths is an essential task during the unit test. Test cases should be designed to uncover errors due to erroneous computations, incorrect comparisons, or improper control flow. Basis path and loop testing are effective techniques for uncovering a broad array of path errors. Among the more common errors in computation are (1) misunderstood or incorrect arithmetic precedence, (2) mixed-mode operations, (3) incorrect initialization, (4) precision inaccuracy, and (5) incorrect symbolic representation of an expression.

Comparison and control flow are closely coupled to one another (i.e., a change of flow frequently occurs after a comparison). Test cases should uncover errors such as (1) comparison of different data types, (2) incorrect logical operators or precedence, (3) expectation of equality when precision error makes equality unlikely, (4) incorrect comparison of variables, (5) improper or nonexistent loop termination, (6) failure to exit when divergent iteration is encountered, and (7) improperly modified loop variables. Figure 3.1 shows the unit testing environment.

Figure 3.1: Unit test environment
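As a concrete illustration of these considerations, the following minimal sketch unit-tests a small function. The function apply_discount is invented for this example; the tests exercise a typical value, the boundary rates 0 and 1, and an error path, in line with the boundary-condition and basis-path ideas above.

```python
# Minimal unit-test sketch for a single hypothetical unit.
import unittest

def apply_discount(price: float, rate: float) -> float:
    """Return the price after a discount; rate must be in [0, 1]."""
    if not 0.0 <= rate <= 1.0:
        raise ValueError("rate must be between 0 and 1")
    return round(price * (1.0 - rate), 2)

class ApplyDiscountTest(unittest.TestCase):
    def test_typical_value(self):
        self.assertEqual(apply_discount(100.0, 0.25), 75.0)

    def test_boundary_rates(self):
        # Boundary conditions: the unit must behave at the edges.
        self.assertEqual(apply_discount(100.0, 0.0), 100.0)
        self.assertEqual(apply_discount(100.0, 1.0), 0.0)

    def test_invalid_rate_rejected(self):
        # Error path: an out-of-range rate must raise, not miscompute.
        with self.assertRaises(ValueError):
            apply_discount(100.0, 1.5)

if __name__ == "__main__":
    unittest.main()
```

Run with `python -m unittest`, each test method exercises one independent path through the unit, so a failing test points directly at the offending path.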

SAQ
1. What is unit testing? Explain its strategy.

3.2 Integration Testing


Integration testing is a logical extension of unit testing. In its simplest form, two units that have already been tested are combined into a component and the interface between them is tested. A component, in this sense, refers to an integrated aggregate of more than one unit. In a realistic scenario, many units are combined into components, which are in turn aggregated into even larger parts of the program. The idea is to test combinations of pieces and eventually expand the process to test your modules with those of other groups. Eventually all the modules making up a process are tested together.

Beyond that, if the program is composed of more than one process, the processes should be tested in pairs rather than all at once.

Integration testing identifies problems that occur when units are combined. By using a test plan that requires you to test each unit and to ensure the viability of each before combining units, you know that any errors discovered when combining units are likely related to the interface between units. This method reduces the number of possibilities to a far simpler level of analysis.

The purpose of integration testing is to verify the functional, performance, and reliability requirements placed on major design items. These "design items", i.e. assemblages (or groups of units), are exercised through their interfaces using black-box testing, with success and error cases simulated via appropriate parameter and data inputs. Simulated usage of shared data areas and inter-process communication is tested, and individual subsystems are exercised through their input interfaces. Test cases (inputs, outputs, and conditions for each functionality) are constructed to test that all components within assemblages interact correctly, for example across procedure calls or process activations; this is done after testing individual modules, i.e. unit testing. The overall idea is a "building block" approach, in which verified assemblages are added to a verified base which is then used to support the integration testing of further assemblages.

Common integration types are:
Big Bang
Bottom-up
Top-down

3.2.1 Big Bang: In this approach, all or most of the developed modules are coupled together to form a complete software system, or a major part of the system, which is then used for integration testing. The Big Bang method is very effective for saving time in the integration testing process. However, if the test cases and their results are not recorded properly, the entire integration process will be more complicated and may prevent the testing team from achieving the goal of integration testing.


3.2.2 Bottom-Up: All the bottom-level (low-level) modules, procedures, or functions are integrated and then tested. After the integration testing of the lower-level modules, the next level of modules is formed and can be used for integration testing. This approach is helpful only when all or most of the modules at the same development level are ready. This method also helps to determine the levels of software developed, and makes it easier to report testing progress as a percentage.

Stated another way, in a bottom-up approach the individual base elements of the system are first specified in great detail. These elements are then linked together to form larger subsystems, which in turn are linked, sometimes over many levels, until a complete top-level system is formed. This strategy often resembles a "seed" model, whereby the beginnings are small but eventually grow in complexity and completeness. However, such organic strategies may result in a tangle of elements and subsystems developed in isolation and subject to local optimization, as opposed to meeting a global purpose.


Figure 3.2 shows the bottom-up testing approach.

Figure 3.2: Bottom-up integration

In the figure, components are combined to form clusters 1, 2, and 3. Each of the clusters is tested using a driver (shown as a dashed block). Components in clusters 1 and 2 are subordinate to Ma. Drivers D1 and D2 are removed and the clusters are interfaced directly to Ma. Similarly, driver D3 for cluster 3 is removed prior to integration with module Mb. Both Ma and Mb will ultimately be integrated with component Mc, and so forth. As integration moves upward, the need for separate test drivers lessens. In fact, if the top two levels of the program structure are integrated top down, the number of drivers can be reduced substantially and the integration of clusters is greatly simplified.
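As a minimal sketch of this scaffolding (all module and function names here are invented for illustration), the driver below exercises a cluster of two already-tested units, while the stub stands in for a lower-level module that has not yet been built, as used in the top-down approach described next.

```python
# Minimal sketch of a test driver and a stub; names are hypothetical.

# Two already-unit-tested lower-level units forming a cluster:
def parse_amounts(text: str) -> list[float]:
    return [float(token) for token in text.split()]

def total(amounts: list[float]) -> float:
    return sum(amounts)

# Driver: stands in for the not-yet-integrated higher-level module and
# exercises the cluster's interface (bottom-up integration).
def driver():
    amounts = parse_amounts("10.5 20 30.25")
    assert total(amounts) == 60.75
    print("cluster interface OK")

# Stub: stands in for a not-yet-built lower-level module so that a
# higher-level module can be tested first (top-down integration).
def fetch_exchange_rate_stub(currency: str) -> float:
    return 1.0  # canned answer in place of the real implementation

def report_in_currency(amounts: list[float], currency: str) -> float:
    return total(amounts) * fetch_exchange_rate_stub(currency)

driver()
print(report_in_currency([10.0, 20.0], "EUR"))  # 30.0 using the stub
```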
3.2.3 Top-Down: In a top-down approach, an overview of the system is first formulated, specifying but not detailing any first-level subsystems. Each subsystem is then refined in yet greater detail, sometimes over many additional subsystem levels, until the entire specification is reduced to base elements. A top-down model is often specified with the assistance of "black boxes" that make it easier to manipulate. However, black boxes may fail to elucidate elementary mechanisms, or may not be detailed enough to realistically validate the model.

SAQ
1. What is integration testing? Explain bottom-up and top-down integration testing.
2. What is the purpose of integration testing? Is it possible to release the software without integration testing?

3.3 System Testing


System testing of software is testing conducted on a complete, integrated system to evaluate the system's compliance with its specified requirements. System testing falls within the scope of black-box testing and, as such, should require no knowledge of the inner design of the code or logic. As a rule, system testing takes as its input all of the "integrated" software components that have successfully passed integration testing, together with the software system itself integrated with any applicable hardware system(s). The purpose of integration testing is to detect any inconsistencies between the software units that are integrated together (called assemblages) or between any of the assemblages and the hardware. System testing, by contrast, is a more limited type of testing; it seeks to detect defects both within the "inter-assemblages" and within the system as a whole.

3.3.1 Recovery Testing
In software testing, recovery testing is the activity of testing how well the software is able to recover from crashes, hardware failures, and other similar problems. Recovery testing is the forced failure of the software in a variety of ways to verify that recovery is properly performed.

Examples of recovery testing:
1) While the application is running, suddenly restart the computer, and afterwards check the validity of the application's data (an automated version of this check is sketched after this list).
2) While the application is receiving data from the network, unplug the cable, plug it back in after some time, and analyze the application's ability to continue receiving data from the point at which the network connection disappeared.
3) Restart the system while a browser has a definite number of sessions open and, after rebooting, check that it is able to recover all of them.
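The sketch below automates a check in the spirit of example 1. The append-only journal format (one "key=value" record per line) is a hypothetical example; the simulated crash leaves a torn, partially written final record that recovery must discard.

```python
# Minimal recovery-test sketch with a hypothetical journal format.
import os

JOURNAL = "journal.log"

def append_record(key, value):
    # Append a record and force it to disk so it survives a crash.
    with open(JOURNAL, "a", encoding="utf-8") as f:
        f.write(f"{key}={value}\n")
        f.flush()
        os.fsync(f.fileno())

def recover():
    # Rebuild state from the journal, discarding a torn final record.
    state = {}
    if not os.path.exists(JOURNAL):
        return state
    with open(JOURNAL, encoding="utf-8") as f:
        for line in f:
            if line.endswith("\n") and "=" in line:
                key, _, value = line.rstrip("\n").partition("=")
                state[key] = value
    return state

def test_recovery_after_torn_write():
    if os.path.exists(JOURNAL):
        os.remove(JOURNAL)            # start from a clean journal
    append_record("balance", "100")
    with open(JOURNAL, "a", encoding="utf-8") as f:
        f.write("balance=2")          # simulate a crash mid-write
    assert recover() == {"balance": "100"}  # torn record discarded

test_recovery_after_torn_write()
print("recovery test passed")
```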

3.3.2 Security Testing
Any computer-based system that manages sensitive information, or causes actions that can improperly harm (or benefit) individuals, is a target for improper or illegal penetration. Penetration spans a broad range of activities: hackers who attempt to penetrate systems for sport; disgruntled employees who attempt to penetrate for revenge; dishonest individuals who attempt to penetrate for illicit personal gain. Security testing attempts to verify that protection mechanisms built into a system will, in fact, protect it from improper penetration. To quote Beizer [BEI84]: "The system's security must, of course, be tested for invulnerability from frontal attack but must also be tested for invulnerability from flank or rear attack."

During security testing, the tester plays the role(s) of the individual who desires to penetrate the system. Anything goes! The tester may attempt to acquire passwords through external clerical means; may attack the system with custom software designed to break down any defenses that have been constructed; may overwhelm the system, thereby denying service to others; may purposely cause system errors, hoping to penetrate during recovery; may browse through insecure data, hoping to find the key to system entry. Given enough time and resources, good security testing will ultimately penetrate a system. The role of the system designer is to make penetration cost more than the value of the information that will be obtained.

3.3.3 Stress Testing
Stress testing is a form of testing used to determine the stability of a given system or entity. It involves testing beyond normal operational capacity, often to a breaking point, in order to observe the results. Stress testing may have a more specific meaning in certain industries. In software testing, stress testing often refers to tests that put a greater emphasis on robustness, availability, and error handling under a heavy load than on what would be considered correct behavior under normal circumstances. In particular, the goals of such tests may be to ensure the software does not crash in conditions of insufficient computational resources (such as memory or disk space), unusually high concurrency, or denial-of-service attacks.

Example: a web server may be stress tested using scripts, bots, and various denial-of-service tools to observe the performance of a web site during peak loads.
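A minimal load-generation sketch in this spirit is shown below; the endpoint URL, worker count, and request volume are placeholder assumptions to be adapted to a real system under test.

```python
# Minimal load-generation sketch (hypothetical local endpoint).
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

URL = "http://localhost:8080/health"   # hypothetical endpoint
WORKERS = 50                           # concurrent clients
REQUESTS = 1000                        # total requests to issue

def hit(_):
    # Issue one request; record success and elapsed time.
    start = time.perf_counter()
    try:
        with urllib.request.urlopen(URL, timeout=5) as resp:
            ok = resp.status == 200
    except Exception:
        ok = False
    return ok, time.perf_counter() - start

with ThreadPoolExecutor(max_workers=WORKERS) as pool:
    results = list(pool.map(hit, range(REQUESTS)))

failures = sum(1 for ok, _ in results if not ok)
latencies = sorted(t for _, t in results)
print(f"failures: {failures}/{REQUESTS}")
print(f"p95 latency: {latencies[int(0.95 * len(latencies))]:.3f}s")
```

The interesting outputs are the failure count and the tail latency under load, not correctness of individual responses, which matches the emphasis of stress testing described above.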

3.3.4 Performance Testing
Performance testing is of various types:

1. Load Testing: The testing of an application by applying varying loads. The intention is to find the breaking point of the application, where it crashes. The load is applied in multiple factors.

2. Stress Testing: A form of testing used to determine the stability of a given system or entity. It involves testing beyond normal operational capacity, often to a breaking point, in order to observe the results. For example, a web server may be stress tested using scripts, bots, and various denial-of-service tools to observe the performance of a web site during peak loads. Stress testing is a subset of load testing.

3. Soak Testing: A kind of testing where an application under test is put under load over a period of time, say 48 to 72 hours, to check its stability. Memory overflow problems can generally arise if session connections are not cleaned from memory.

4. Volume Testing: Similar to stress testing, but generally done for stand-alone applications, where we check the system by sending a huge volume of data across it. For example, in a banking application a backup may be created every 5 seconds to avoid loss of data. The data from one system is transferred to another through an intermediate system, which monitors both the performance and the speed of the transfer.

5. Smoke Testing: Smoke testing is done by developers before a build is released, or by testers before accepting a build for further testing. In software engineering, a smoke test generally consists of a collection of tests that can be applied to a newly created or repaired computer program. Sometimes the tests are performed by the automated system that builds the final software. In this sense, a smoke test is the process of validating code changes before the changes are checked into the larger product's official source code collection. After code reviews, smoke testing is the most cost-effective method for identifying and fixing defects in software; some even believe that it is the most effective of all.

In software testing, a smoke test is a collection of written tests that are performed on a system prior to its being accepted for further testing. This is also known as a build verification test.
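The following sketch suggests what such a build verification test might look like; the app module and its functions are hypothetical stand-ins for whatever the build under test actually exposes.

```python
# Minimal smoke-test (build verification) sketch; names are hypothetical.
import unittest
import app  # hypothetical application module from the build under test

class SmokeTest(unittest.TestCase):
    def test_application_starts(self):
        # "Can I launch the test item at all?"
        self.assertIsNotNone(app.create_application())

    def test_main_window_opens(self):
        # "Does it open to a window?"
        application = app.create_application()
        self.assertTrue(application.main_window().is_visible())

if __name__ == "__main__":
    unittest.main()
```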

The smoke test is a "shallow and wide" approach to the application. The tester "touches" all areas of the application without getting too deep, looking for answers to basic questions such as "Can I launch the test item at all?", "Does it open to a window?", and "Do the buttons on the window do things?" There is no need to get down to field validation or business flows. If the answer to basic questions like these is "No", then the application is so badly broken that there is effectively nothing there to allow further testing. These written tests can be performed either manually or using an automated tool. When automated tools are used, the tests are often initiated by the same process that generates the build itself.

SAQ
1. What is system testing? Explain any three types of system testing.

3.4 Software Validation
3.4.1 Verification and Validation
Software testing is one element of a broader topic that is often referred to as verification and validation (V&V). Verification refers to the set of activities that ensure that software correctly implements a specific function. Validation refers to a different set of activities that ensure that the software that has been built is traceable to customer requirements. Boehm [BOE81] states this another way:

Verification: "Are we building the product right?"
Validation: "Are we building the right product?"

The definition of V&V encompasses many of the activities that we have referred to as software quality assurance (SQA). Verification and validation encompass a wide array of SQA activities that include formal technical reviews, quality and configuration audits, performance monitoring, simulation, feasibility study, documentation review, database review, algorithm analysis, development testing, qualification testing, and installation testing [WAL89].


Although testing plays an extremely important role in V&V, many other activities are also necessary. Testing does provide the last bastion from which quality can be assessed and, more pragmatically, errors can be uncovered. But testing should not be viewed as a safety net. As they say, "You can't test in quality. If it's not there before you begin testing, it won't be there when you have finished testing." Quality is incorporated into software throughout the process of software engineering. Proper application of methods and tools, effective formal technical reviews, and solid management and measurement all lead to quality that is confirmed during testing. Miller [MIL77] relates software testing to quality assurance by stating that "the underlying motivation of program testing is to affirm software quality with methods that can be economically and effectively applied to both large-scale and small-scale systems."

At the culmination of integration testing, software is completely assembled as a package, interfacing errors have been uncovered and corrected, and a final series of software tests, validation testing, may begin. Validation can be defined in many ways, but a simple (albeit harsh) definition is that validation succeeds when software functions in a manner that can be reasonably expected by the customer. At this point a battle-hardened software developer might protest: "Who or what is the arbiter of reasonable expectations?" Reasonable expectations are defined in the Software Requirements Specification, a document that describes all user-visible attributes of the software. The specification contains a section called Validation Criteria; the information contained in that section forms the basis for a validation testing approach.
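For illustration, the sketch below expresses a single hypothetical validation criterion as a black-box test; the requirement ID, the atm module, and its API are all invented for the example.

```python
# Hypothetical black-box validation check tied to a requirements item.
import unittest
import atm  # hypothetical system under validation

class ValidationReq7(unittest.TestCase):
    """REQ-7 (invented): a withdrawal may not exceed the account balance."""

    def test_overdraft_rejected(self):
        account = atm.open_account(balance=50)
        with self.assertRaises(atm.InsufficientFunds):
            account.withdraw(100)

if __name__ == "__main__":
    unittest.main()
```

Each such test traces back to one entry in the Validation Criteria section, so a failing test maps directly to a requirement that is not yet met.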


3.4.2 Validation Test Criteria
Software validation is achieved through a series of black-box tests that demonstrate conformity with requirements. A test plan outlines the classes of tests to be conducted, and a test procedure defines the specific test cases that will be used to demonstrate conformity with requirements. Both the plan and the procedure are designed to ensure that all functional requirements are satisfied, all behavioral characteristics are achieved, all performance requirements are attained, documentation is correct, and other requirements are met (e.g., transportability, compatibility, error recovery, maintainability). After each validation test case has been conducted, one of two possible conditions exists: (1) the function or performance characteristics conform to specification and are accepted, or (2) a deviation from specification is uncovered and a deficiency list is created. Deviations or errors discovered at this stage in a project can rarely be corrected prior to scheduled delivery. It is often necessary to negotiate with the customer to establish a method for resolving deficiencies.

3.4.3 Configuration Review
An important element of the validation process is a configuration review. The intent of the review is to ensure that all elements of the software configuration have been properly developed, are cataloged, and have the necessary detail to bolster the support phase of the software life cycle. The configuration review is sometimes called an audit.

3.4.4 Alpha and Beta Testing
It is virtually impossible for a software developer to foresee how the customer will really use a program. Instructions for use may be misinterpreted; strange combinations of data may be regularly used; output that seemed clear to the tester may be unintelligible to a user in the field.

When custom software is built for one customer, a series of acceptance tests is conducted to enable the customer to validate all requirements. Conducted by the end user rather than by software engineers, an acceptance test can range from an informal test drive to a planned and systematically executed series of tests. In fact, acceptance testing can be conducted over a period of weeks or months, thereby uncovering cumulative errors that might degrade the system over time.

If software is developed as a product to be used by many customers, it is impractical to perform formal acceptance tests with each one. Most software product builders use a process called alpha and beta testing to uncover errors that only the end user seems able to find.

The alpha test is conducted at the developer's site by a customer. The software is used in a natural setting with the developer "looking over the shoulder" of the user and recording errors and usage problems. Alpha tests are conducted in a controlled environment.

The beta test is conducted at one or more customer sites by the end user of the software. Unlike alpha testing, the developer is generally not present. The beta test is therefore a "live" application of the software in an environment that cannot be controlled by the developer. The customer records all problems (real or imagined) that are encountered during beta testing and reports these to the developer at regular intervals. As a result of problems reported during beta tests, software engineers make modifications and then prepare for release of the software product to the entire customer base.

SAQ
1. What are verification and validation?
2. What are alpha and beta testing?


3.5 Debugging
Definition: Debugging is the process of locating and fixing errors (known as bugs) in a computer program or hardware device.

To debug a program or hardware device, you start with a known problem, isolate the source of the problem, and then fix it. When someone says they have debugged a program, or "removed the bugs" from a program, they imply that they have fixed the program so that the bugs no longer exist in it.

Debugging is a necessary process in almost any new software or hardware development effort, whether for a commercial product or for an enterprise or personal application program. For complex products, debugging is done periodically throughout development, and again during the customer beta test stages. Because most computer programs and many programmed hardware devices contain thousands of lines of code, almost any new product is likely to contain a few bugs. Invariably, the bugs in the functions that get the most use are found and fixed first. An early version of a program that has lots of bugs is referred to as "buggy."

Debugging tools help identify coding errors at various stages of development. Some programming language packages include a facility for checking the code for errors as it is being written. Although each debugging experience is unique, certain general principles can be applied. This section particularly addresses debugging software, although many of these principles also apply to debugging hardware.

3.5.1 The Basic Steps in Debugging
1. Recognize that a bug exists
2. Isolate the source of the bug
3. Identify the cause of the bug
4. Determine a fix for the bug
5. Apply the fix and test it

Recognize that a bug exists: Detection of bugs can be done proactively or passively. An experienced programmer often knows where errors are more likely to occur, based on the complexity of sections of the program as well as possible data corruption. For example, any data obtained from a user should be treated suspiciously. Great care should be taken to verify that the format and content of the data are correct. Data obtained from transmissions should be checked to make sure the entire message (data) was received. Complex data that must be parsed and/or processed may contain unexpected combinations of values that were not anticipated and hence not handled correctly. By inserting checks for likely error symptoms, the program can detect when data has been corrupted or not handled correctly.
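A minimal sketch of such proactive boundary checks follows; the record format ("name,age" lines) is a hypothetical example. Validating suspect data at a module boundary means corruption is caught near its source rather than far downstream.

```python
# Minimal sketch of proactive bug detection at a module boundary.
def parse_record(line: str) -> tuple[str, int]:
    fields = line.rstrip("\n").split(",")
    # Check for likely error symptoms before using the data.
    if len(fields) != 2:
        raise ValueError(f"expected 2 fields, got {len(fields)}: {line!r}")
    name, age_text = fields
    if not name:
        raise ValueError(f"empty name in record: {line!r}")
    if not age_text.isdigit():
        raise ValueError(f"non-numeric age in record: {line!r}")
    age = int(age_text)
    if not 0 <= age <= 150:
        raise ValueError(f"age out of range in record: {line!r}")
    return name, age

# A corrupted record now fails loudly at the boundary.
print(parse_record("Ada,36"))        # ('Ada', 36)
# parse_record("Ada;36") would raise ValueError immediately.
```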
Isolate the source of the bug: This is often the most difficult (and therefore rewarding) step in debugging. The idea is to identify what portion of the system is causing the error. Unfortunately, the source of the problem is not always the same as the source of the symptoms. For example, if an input record is corrupted, an error may not occur until the program is processing a different record, or performing some action based on the erroneous information, which could happen long after the record was read. This step often involves iterative testing. The programmer might first verify that the input is correct, then whether it was read correctly, processed correctly, and so on. For modular systems, this step can be made a little easier by checking the validity of data passed across the interfaces between different modules. If the input was correct but the output was not, then the source of the error is within the module. By iteratively testing inputs and outputs, the debugger can identify, to within a few lines of code, where the error is occurring.

Identify the cause of the bug and determine a fix: Having identified the source of the problem, the next task is to determine how the problem can be fixed. Care is needed here because the fix will modify the existing behavior of the system, which may produce unexpected results. Furthermore, fixing an existing bug can often either create additional bugs, or expose other bugs that were already present in the program but never exposed because of the original bug. Such problems are often caused by the program executing a previously untested branch of code, or running under previously untested conditions.

Apply the fix and test it: After the fix has been applied, it is important to test the system and determine that the fix handles the former problem correctly. Testing should be done for two purposes: (1) to confirm that the fix handles the original problem correctly, and (2) to make sure the fix has not created any undesirable side effects. For large systems, it is a good idea to have regression tests: a series of test runs that exercise the system. After significant changes and/or bug fixes, these tests can be repeated at any time to verify that the system still executes as expected. As new features are added, additional tests can be included in the test suite.

SAQ
1. What is debugging? Explain the procedure for bug fixing.


3.6 Testing Life Cycle


1. Requirements Analysis: Testing should begin in the requirements phase of the software development life cycle. During the design phase, testers work with developers to determine what aspects of a design are testable and under what parameters those tests will work.
2. Test Planning: Test strategy, test plan(s), and test bed creation.
3. Test Development: Test procedures, test scenarios, test cases, and test scripts to use in testing the software (a hypothetical test script of this kind is sketched after this list).
4. Test Execution: Testers execute the software based on the plans and tests, and report any errors found to the development team.
5. Test Reporting: Once testing is completed, testers generate metrics and make final reports on their test effort and on whether or not the software tested is ready for release.
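For illustration, the sketch below shows a written test case from the test development step turned into an executable test script; the auth module and its API are assumed purely for the example.

```python
# Hypothetical test script produced during test development.
import unittest
import auth  # hypothetical module under test

class LoginTestCase(unittest.TestCase):
    """Test case TC-01 (invented): valid and invalid login attempts."""

    def test_valid_credentials_accepted(self):
        # Input: known-good user; expected output: a session token.
        token = auth.login("alice", "correct-password")
        self.assertTrue(token)

    def test_invalid_credentials_rejected(self):
        # Input: wrong password; expected output: AuthError raised.
        with self.assertRaises(auth.AuthError):
            auth.login("alice", "wrong-password")

if __name__ == "__main__":
    unittest.main()
```

During test execution, scripts like this are run against each build, and their pass/fail results feed the metrics produced in the test reporting step.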

3.7 Roles in Software Testing


Software testing can be done by software testers. Until the 1950s the term "software tester" was used generally; later, testing also came to be seen as a separate profession. Reflecting the different periods and goals of software testing (see D. Gelperin and W. C. Hetzel), several distinct roles have been established:
Test lead/manager
Tester
Test designer
Test automator/automation developer
Test administrator

Participants in a testing team:
1. Tester
2. Developer
3. Business Analyst
4. Customer
5. Information Service Management
6. Senior Organization Management
7. Quality team

SAQ
1. What is the testing life cycle? Write the necessary steps in the testing life cycle.

3.8 Summary
The objective of software testing is to uncover errors. To fulfill this objective, a series of test steps (unit, integration, validation, and system tests) is planned and executed. Unit and integration tests concentrate on functional verification of a component and on the incorporation of components into a program structure. Validation testing demonstrates traceability to software requirements, and system testing validates software once it has been incorporated into a larger system. Each test step is accomplished through a series of systematic test techniques that assist in the design of test cases. With each testing step, the level of abstraction with which software is considered is broadened.

Unlike testing (a systematic, planned activity), debugging must be viewed as an art. Beginning with a symptomatic indication of a problem, the debugging activity must track down the cause of an error. Of the many resources available during debugging, the most valuable is the counsel of other members of the software engineering staff.

The requirement for higher-quality software demands a more systematic approach to testing. To quote Dunn and Ullman [DUN82], what is required is an overall strategy, spanning the strategic test space, quite as deliberate in its methodology as was the systematic development on which analysis, design, and code were based.

3.9 Terminal Questions


1. What is validation? Explain the validation criteria.
2. What is debugging? Explain the basic steps in debugging.
3. What is system testing? Explain the approaches to system testing.
4. Explain the steps in the testing life cycle.
5. What is unit testing? Explain the unit testing considerations.
6. What is the difference between testing and quality assurance?

