
SOFTWARE TESTING ----------------

Testing: -------Testing is defined as the process in which defects are identified, isolated and subjected to rectification, and in which the product is ensured to be defect free, in order to provide quality in the product and hence customer satisfaction.

Identification of defects: One has to identify the areas in which the requirements are not satisfied properly; these are considered defects. In this way several defects are uncovered in the system.

Isolation of defects: All the defects encountered in the previous step are listed in a separate document so that they can be dealt with.

Subjected to rectification: The documented list of defects is sent to the development team for rectification.

Software Testing: ----------------Testing is defined as Verification and Validation

What is verification: QA people use verification to ensure that the system complies with organizational standards and processes, relying on reviews and other non-executable methods (applied to software, hardware, documentation and personnel). It answers the question: are we building the product right?

What is validation: ------------------Validation physically ensures that the system operates according to plan, by executing the system functions through a series of tests that can be observed and evaluated. It answers the question: are we building the right product?

What is quality assurance: -------------------------A planned and systematic pattern of all the actions necessary to provide adequate confidence that the item or product conforms to established technical requirements. In this process, QA people verify the completeness and correctness of the roles and responsibilities of departments and employees, i.e. whether they are working according to the standards or not.

-->The QA process is also called the Defect Prevention process

What is quality control: -----------------------Quality Control is defined as a set of activities or techniques whose purpose is to ensure that all quality requirements are being met. In order to achieve this purpose, processes are monitored and performance problems are solved.

-->The QC process is also called the "Defect Detection process".

What is Quality: --------------Fitness for use; a journey towards excellence

Defect: Non-conformance to the requirements or to the functional/program specification

Bug: A fault in a program which causes the program to perform in an unintended or unanticipated manner.

Sign In: -------It is the process in which the customer and the development team come to an agreement that the project has to be completed within the deadline and within the budget.

-----------------------------------------------------------

SOFTWARE DEVELOPMENT LIFE CYCLE: -------------------------------A phased engineering approach to software development: Requirements Analysis & Specification; Design (Architecture, System or High Level, Detailed or Low Level); Coding/Construction; Testing; Maintenance

-->SDLC contains the following phases

Initial phase
Analysis phase
Design phase
Coding phase
Testing phase
Delivery/Maintenance phase

Each phase has the following tasks

->Responsibility ->Role ->Documents

Initial phase: --------------

Responsibility: Gathering the requirements
Role: Business Analyst (BA)
Documents: BRS, BD

Analysis Phase: ---------------

Responsibility: Analyzing the requirements
Role: System Analyst (SA)
Documents: SRS (System Requirement Specification)

Design Phase: -------------

Responsibility: 1. HLD (High-Level Design) 2. LLD (Low-Level Design)

Role: Designer
Documents: TDD (Technical Design Document)

Contents of TDD: ---------------Flow charts, data flow diagrams, data tables, pseudo code

Coding: ------Role: Programmer/Developer
Documents: SCD

Testing: -------Responsibility: To validate the application and ensure that all the requirements of the customer are met.
Role: Test Engineer
Documents: Test deliverables

Delivery/Maintenance: -------------------User Acceptance Testing:

-->Alpha Testing
-->Beta Testing

-----------------------------------------------------------

SDLC Models: ------------

-->Waterfall model
-->Prototype model
-->V-model

WATERFALL MODEL: ---------------Requirement Analysis --> Design --> Code --> Test --> Maintain

The waterfall model describes a process of stepwise refinement. It is widely used in the defense & aerospace industries.

But software is different: there is no fabrication step, and program code is just another design level. Hence there is no commit step - software can always be changed! There is also no body of experience for design analysis (yet).

The waterfall model takes a static view of requirements - it ignores changing needs. There is a lack of user involvement once the specification is written, and the model doesn't accommodate prototyping, reuse, etc.

PROTOTYPING MODEL: -----------------

Requirements --> Design Prototype --> Build Prototype --> Test Prototype
                                            |
Doc. Requirements --> Design --> Code --> Test --> Deliver

Prototyping is used for:
- understanding the requirements for the user interface
- examining the feasibility of a proposed design approach
- exploring system performance issues

Disadvantages: 1. Users treat the prototype as the solution. 2. A prototype is only a partial specification.

V-model: --------The V-model maps each development stage to a corresponding testing stage, with verification activities on one arm of the 'V' and validation activities on the other.

Testing Methodologies: ---------------------

-->White box Testing
-->Black box Testing
-->Gray box Testing

-----------------------------------------------------------

LEVELS OF TESTING: -->Unit Testing -->Integration Testing -->System Testing -->User Acceptance Testing

1)Unit Testing: --------------The testing done to show whether a unit (the smallest piece of software that can be independently compiled or assembled, loaded, and tested) satisfies its functional specification, or whether its implemented structure matches the intended design structure.
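As a minimal illustration in Python, using the standard unittest module (the add() function is a hypothetical unit under test, not from the original notes):

    import unittest

    def add(a, b):
        # Hypothetical unit under test: the smallest independently testable piece.
        return a + b

    class TestAdd(unittest.TestCase):
        def test_add_returns_sum(self):
            # The unit must satisfy its functional specification.
            self.assertEqual(add(2, 3), 5)

    if __name__ == "__main__":
        unittest.main()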

2)Integration Testing: ---------------------Integration testing refers to the testing in which the software units of an application are combined and tested to evaluate the interaction between them.
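In the same style, a minimal integration test sketch in Python (both units, format_name() and greet(), are hypothetical; the test exercises their interaction rather than each unit alone):

    import unittest

    def format_name(name):
        # Unit 1: normalizes a name.
        return name.strip().title()

    def greet(name):
        # Unit 2: builds a greeting by calling unit 1.
        return "Hello, " + format_name(name) + "!"

    class TestGreetingIntegration(unittest.TestCase):
        def test_units_interact_correctly(self):
            # Exercises both units combined, validating the interaction.
            self.assertEqual(greet("  ada lovelace "), "Hello, Ada Lovelace!")

    if __name__ == "__main__":
        unittest.main()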

-----------------------------------------------------------

Functional & system testing:

---------------------------After the integration of all possible modules into a system is complete, the separate testing team in an organization validates that build through a set of black box testing techniques. These techniques are classified into four divisions:

1.Usability Testing 2.Functional Testing 3.Performance Testing 4.Security Testing

----------------------------------------------------------Usability Testing: In general, system testing starts with usability. This is done in the early days of the job. At this level, the testing team follows two testing techniques.

a)User interface testing: Under this testing technique, we have three categories:

-->Ease of use (i.e. understandability of screens)

-->Look & Feel(Attractiveness of screens)

-->Speed of interface (sharp navigation to complete a task)

b)Manual support testing: In this technique, the context sensitiveness (with respect to the work or task) of the user manuals is checked. This is done in the end days of the job. Eg: Help documents of user manuals.

Receive build from developers
            |
User interface testing
            |
Remaining functional & system tests
            |
Manual support testing

-----------------------------------------------------------

2.Functional Testing: A mandatory part of black box testing is functional testing. During these tests, the testing team concentrates on "meeting customer requirements". Functional testing is classified into the sub-tests below.

a)Functionality testing: It is also known as requirements testing and validates the correctness of every functionality. Functionality testing consists of the six coverages below:

* Behavioral coverage: The properties of objects change with respect to the task. For example, whenever we type the user-id & password, the submit button is enabled automatically - that is, with respect to the task, the next button should be enabled.

* Error-handling coverage: In this testing we have to prevent negative navigations. For example, in a login process, if we give a wrong password it has to give a warning message rather than terminating (see the sketch after this list).

* Input domain coverage: Here we have to check whether the size and type of each input object are correct or not.

* Calculations coverage: Here we have to check the correctness of the outputs.

* Backend coverage: Here we have to see that front-end operations have the intended impact on the back-end database. It means that when operations are performed on the front-end, the corresponding change has to take place in the back-end tables also.

* Service levels coverage: Here we have to see that the functionalities are placed in the right order. For example, consider the functionalities listed below:

Mail
Chat
Forgot password
Change password
Exit

Here there is one wrong placement, i.e. 'Forgot password': if you forgot your password, you could not have logged in to this window in the first place, so this option belongs before login.
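As referenced in the error-handling item above, a minimal error-handling test sketch in Python (the login() function and its warning message are hypothetical assumptions):

    import unittest

    def login(user_id, password):
        # Hypothetical unit under test: returns a warning instead of crashing.
        if password != "secret":
            return "Warning: invalid password"
        return "Welcome"

    class TestLoginErrorHandling(unittest.TestCase):
        def test_wrong_password_gives_warning(self):
            # A negative navigation must be handled gracefully, not terminate.
            self.assertEqual(login("guest", "wrong"), "Warning: invalid password")

    if __name__ == "__main__":
        unittest.main()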

-----------------------------------------------------------

b)Input domain testing: It is a part of functionality testing, but test engineers give special treatment to the input domains of objects through techniques such as Boundary Value Analysis (BVA) and Equivalence Class Partitioning (ECP).

BVA and ECP are as follows:

BVA (size/range)
----------------
Min   = pass
Min-1 = fail
Min+1 = pass
Max   = pass
Max-1 = pass
Max+1 = fail

ECP (type)
----------
Valid | Invalid
------+--------
pass  | fail

BVA defines the range and size of the object. For example, take age with the range 18-60: here the range is taken into consideration, not the size. ECP defines what type of characters is accepted as valid; all remaining types are invalid.

Example 1: A login process allows a user-id and password for authorized users. From the design documents, the user-id allows alphanumeric characters 4-16 characters long, and the password allows lower case 4-8 characters long. Prepare BVA and ECP for the user-id and password.

Example 2: A textbox allows 12-digit numbers. In this number '*' is mandatory and '-' is optional. Give BVA and ECP for this textbox.
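As a sketch of how the BVA values above could be generated and exercised in Python (the is_valid_age() validator and the 18-60 range come from the age example and are assumptions):

    def is_valid_age(age):
        # Hypothetical validator for the age example above (valid range 18-60).
        return 18 <= age <= 60

    def bva_values(minimum, maximum):
        # Boundary Value Analysis: exercise the values at and around each boundary.
        return [minimum - 1, minimum, minimum + 1,
                maximum - 1, maximum, maximum + 1]

    for value in bva_values(18, 60):
        verdict = "pass" if is_valid_age(value) else "fail"
        print(f"age={value:3d} -> {verdict}")

Running this prints fail for 17 and 61 and pass for 18, 19, 59 and 60, matching the BVA table above.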

c)Sanitation Testing: --------------------It is also known as garbage testing. During this test, test engineers find extra functionality in the build with respect to the SRS.

d)Installation Testing: ---------------------The ease of the interface is checked during installation, and the occupied disk space is checked after installation.

e)Parallel testing: -------------------It is also known as comparative testing. During this test, engineers try to find the competitiveness of our product through comparison with other products. This test is done only for software products, not for application software. -----------------------------------------------------------

3.Performance Testing: ---------------------Testing conducted to evaluate the compliance of a system or component with specified performance requirements. Often this is performed using an automated test tool to simulate a large number of users. Also known as "Load Testing".

a)Recovery Testing: ------------------It is also known as reliability testing. During this test, test engineers validate whether our application build changes from an abnormal state back to a normal state or not.

b)Compatibility Testing: -----------------------It is also known as portability testing. During this test, test engineers validate whether our application build runs on the customer's expected platforms or not. Here engineers face two types of compatibility problems, i.e. forward compatibility and backward compatibility.

c)Storage Testing: -----------------The execution of our application build under a huge amount of resources, to estimate the storage limit, is called storage testing.

d)Load Testing: ------------The execution of our application build under the customer's expected configuration and the customer's expected load, to estimate performance, is called load testing.

e)Stress Testing: -------------The execution of our application build under the customer's expected configuration and continuous, beyond-expected loads, to estimate performance, is called stress testing.
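A rough sketch of simulating a large number of users in Python (handle_request() is a hypothetical stand-in for a real request; this illustrates the idea and is not a substitute for a real load-testing tool):

    import time
    from concurrent.futures import ThreadPoolExecutor

    def handle_request(user_id):
        # Hypothetical stand-in for a real request to the application.
        time.sleep(0.01)
        return user_id

    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=50) as pool:
        # Simulate 500 users hitting the system through 50 concurrent workers.
        results = list(pool.map(handle_request, range(500)))
    elapsed = time.perf_counter() - start
    print(f"{len(results)} simulated requests served in {elapsed:.2f}s")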

-----------------------------------------------------------

4.Security Testing: ------------------This testing technique is complex to apply. During this test, the testing team concentrates on "privacy of user operations" in our application build. This technique is classified into the sub-tests below.

a)Authorization: ---------------Whether a user is authorized or not to connect to and use the application.

b)Access Control: ----------------Whether a valid user has permission to use a specific service or not?

c)Encryption/Decryption: -----------------------Here the sender, i.e. the client, performs encryption, and the receiver, i.e. the server, performs decryption.

Note: In small-scale organizations, authorization and access control are covered by the test engineers, while the developers test the encryption and decryption procedures.
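For illustration, a minimal encrypt/decrypt round trip in Python using the third-party cryptography package (the message is an example; a real client and server would exchange the key over a secure channel):

    from cryptography.fernet import Fernet

    # Shared symmetric key; in practice the client and server would
    # exchange this over a secure channel.
    key = Fernet.generate_key()
    cipher = Fernet(key)

    token = cipher.encrypt(b"user operation data")   # client side: encrypt
    plain = cipher.decrypt(token)                    # server side: decrypt
    assert plain == b"user operation data"
    print("round trip ok:", plain.decode())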

-----------------------------------------------------------

User Acceptance Testing: ------------------------After completion of all possible functional and system testing, project management concentrates on user acceptance testing (UAT) to collect feedback from the customer-site people.

-->There are two ways to conduct UAT

Alpha Testing: -------------It is the type of UAT conducted on the application within the software development organization, with the involvement of customer-site people.

Beta Testing: -------------It is the type of UAT conducted at the customer site by the end users, without the involvement of the development team.

-------------------------------------------------------

Testing Terminology: --------------------

1.Monkey Testing or Chimpanzee Testing: --------------------------------------The coverage of only the "main activities" of our application build during testing is called monkey testing. The testing team follows this type of testing when time is short.

For example, if Mail compose, Mail reply and Mail forward are all there, we conduct only Mail open and Mail compose, because Mail reply and Mail forward are similar to Mail compose and time is short.

2.Exploratory Testing: ---------------------The coverage of all activities, level by level, during testing is called exploratory testing. Test engineers follow this style of testing when they lack knowledge of the application.

3.Ad-hoc testing: ----------------A tester conducts a test on the application build depending on predetermined ideas; this is called ad-hoc testing. Based on past experience, the tester tests the build of the project.

4.Sanity testing: ----------------Checks whether a build released by the development team is stable enough for complete testing or not. The build may be rejected without giving a reason - for example, just saying 'the watch is not working'.

5.Smoke Testing: ---------------It is an extra shake-up of sanity testing. At this level, the testing team rejects a build with a reason when that build is not ready for complete testing - for example, saying that the watch is not working due to the key rod, i.e. with a reason.

6.Retesting: -----------The re-execution of a test with multiple test data on the same application build is called retesting.

Test Data
---------
Input1: Min  Min  Max  Max  value  0
Input2: Min  Max  Min  Max  0      value
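A minimal sketch of retesting with multiple test data in Python's unittest (the multiply() function and the MIN/MAX/VALUE constants are illustrative assumptions mirroring the data table above):

    import unittest

    def multiply(a, b):
        # Hypothetical unit that is re-executed with multiple test data.
        return a * b

    class TestMultiplyRetest(unittest.TestCase):
        def test_with_multiple_data(self):
            MIN, MAX, VALUE = 1, 100, 7   # assumed boundary and sample values
            # (input1, input2, expected) combinations mirroring the table above.
            cases = [(MIN, MIN, 1), (MIN, MAX, 100), (MAX, MIN, 100),
                     (MAX, MAX, 10000), (VALUE, 0, 0), (0, VALUE, 0)]
            for a, b, expected in cases:
                with self.subTest(a=a, b=b):
                    self.assertEqual(multiply(a, b), expected)

    if __name__ == "__main__":
        unittest.main()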

7.Regression Testing: ---------------------The re-execution of tests on a modified build, to ensure the bug-fix work and to check for possible side effects, is called regression testing.

Note: From the definitions of retesting and regression testing, test repetition is a mandatory task in a test engineer's job. For this reason, test engineers turn to test automation.

TESTING LIFE CYCLE: ------------------An effective testing life cycle must be imposed over the development life cycle before testing improvements can be made.

The goal is to make the testing flow naturally from the development work.

The testing work should not disrupt or lengthen the development work.

TESTING DOCUMENTS -----------------

Company level:
  Test policy       <- Quality Control (QC)
  Test strategy     <- Quality Analysts (QA)
  Test methodology  <- Quality Analysts (QA)

Project level:
  Test plan         <- Test Lead
  Test cases, test procedures, test scripts,
  test execution, test log, defect report
                    <- Test Engineers

-----------------------------------------------------------

1)Test policy: It is a company-level document, developed by quality control people (QC - almost always management).

The following abbreviations are used:

LOC -> Lines Of Code
FP  -> Functional Points (i.e. number of screens, inputs, outputs, queries, forms, reports)
QAM -> Quality Assessment Measurement
TMM -> Test Management Measurement
PCM -> Process Capability Measurement

-----------------------------------------------------------

-----------------------------------------------------------
xxxxx
Address of Company
-----------------------------------------------------------
Testing definition   : Verification + Validation
Testing process      : Proper planning before starting testing
Testing standard     : 1 error per 250 LOC / 1 defect per 10 FP
Testing measurements : QAM, TMM, PCM

                                          xxxxxxxxxxx (C.E.O)
-----------------------------------------------------------

The test policy above defines the "testing objective". To meet that objective, quality analyst people define the testing approach through a test strategy document.

2)Test Strategy: ---------------It is a company-level document, developed by Quality Analyst (QA) people. The test strategy defines a common testing approach to be followed.

Components in test strategy:

1)Scope and objective: About the organization, the purpose of testing and the testing objective.

2)Business issues: Budget control for testing. Eg: of the total project budget (100%), 64% is allocated to s/w development & maintenance and 36% to testing & quality assurance.

3)Testing Approach: Mapping between the testing issues and the development stages (V-model).

This testing approach is expressed as a matrix, called the Test Responsibility Matrix (TRM) or Test Matrix (TM). It is based on the development stages and the test factors: there are five development stages and fifteen test factors, as shown in the figure below:

-----------------------------------------------------------
Test factors \   | Information  Design  Coding  System   Maintenance
development      | gathering &                  testing
stages           | analysis
-----------------+-----------------------------------------
1) Ease of use   |     X          +       X       +       Depends on the
2) Authorization |     +          +       +       +       change request
...              |
-----------------------------------------------------------

4)Roles and Responsibilities: The names of the jobs in the testing team and their responsibilities during testing.

5)Test Deliverables: Required testing documents to be prepared during testing.

6)Communication and status Reporting: Required negotiation b/w every two consecutive jobs in testing team.

7)Defect Reporting & Tracking: Required negotiation b/w testing team and development team to track defects.

8)Testing Measurements & Metrics: QAM, TMM, PCM.

9)Risks & Mitigations: A list of expected failures and the possible solutions to overcome them during testing.

10)Training Plan: Required training sessions to testing team to understand business requirements.

11)Change & configuration Management: How to handle change requests of customer during testing and maintenance.

12)Test Automation & Tools: Required possibilities to go to automation.

TEST FACTORS: ------------To define quality software, quality analyst people use fifteen test factors.

1)Authorization: Whether a user is valid or not to connect to the application.

2)Access Control: Whether a valid user has permission to use a specific service or not.

3)Audit Trail: Whether our application maintains metadata about user operations or not.

4)Continuity of processing: The integration of internal modules for control and data transmission (integration testing).

5)Correctness: Meeting customer requirements in terms of functionality.

6)Coupling: Co-existence with other existing software (inter-system testing) to share common resources.

7)Ease of use: User-friendliness of the screens.

8)Ease of operate: Installation, uninstallation, dumping (one computer to another), downloading, uploading.

9)File Integrity: Creation of backups during execution of our application (for recovery).

10)Reliability: Recovery from abnormal states.

11)Portability: Running on different platforms.

12)Performance: Speed of processing.

13)Service levels: Order of functionalities.

14)Maintainable: Whether our application build is serviceable over a long time at the customer site or not.

15)Methodology: Whether test engineers follow standards or not during testing.

TEST FACTORS Vs BLACK BOX TESTING TECHNIQUES
--------------------------------------------

1) Authorization            -> Security testing
                            -> Functionality/Requirements testing

2) Access control           -> Security testing
                            -> Functionality/Requirements testing

3) Audit trail              -> Functionality/Requirements testing
                               (error-handling coverage)

4) Continuity of processing -> Integration testing (white box testing)

5) Correctness              -> Functionality/Requirements testing

6) Coupling                 -> Inter-systems testing

7) Ease of use              -> User interface testing
                            -> Manual support testing

8) Ease of operate          -> Installation testing

9) File integrity           -> Functionality/Requirements testing
                            -> Recovery testing

10) Reliability             -> Recovery testing (one-user level)
                            -> Stress testing (peak load level)

11) Portability             -> Compatibility testing
                            -> Configuration testing (H/W)

12) Performance             -> Load testing
                            -> Stress testing
                            -> Data volume testing
                            -> Storage testing

13) Service level           -> Functionality/Requirements testing
                            -> Stress testing (peak load)

14) Maintainable            -> Compliance testing

15) Methodology             -> Compliance testing (whether our testing
                               teams follow standards or not during testing)

-----------------------------------------------------------

Quality            -> defined by quality control people (QC)
      |
Test factors       -> by the quality analyst
      |
Testing techniques -> by the test lead
      |
Test cases         -> by the test engineers

TEST METHODOLOGY: It is a project-level document, developed by the quality analyst or the corresponding project manager (PM). The test methodology is a refined form of the test strategy with respect to the corresponding project. To develop a test methodology from the corresponding test strategy, the QA/PM follows the approach below.

Step 1: Acquire the test strategy. Step 2: Identify the project type.

-----------------------------------------------------------
Project type        | Analysis  Design  Coding  System   Maintenance
                    |                           testing
-----------------------------------------------------------
1) Traditional      |    +        +       +       +          +
   project          |
2) Off-the-shelf    |
   project          |
   (outsourcing)    |
3) Maintenance      |    x
   project (onsite) |
-----------------------------------------------------------

NOTE: Depending on the project type, the quality analyst (QA) or project manager (PM) decreases the number of columns in the TRM (Test Responsibility Matrix), i.e. in the development stages.

Step 3: Determine project requirements.

NOTE: Depending on the current project version's requirements, the quality analyst (QA) or project manager (PM) decreases the number of rows in the TRM, i.e. in the test factors.

Step 4: Determine the scope of the project requirements.

NOTE: Depending on the expected future enhancements, the quality analyst (QA) or project manager (PM) can add back some of the previously removed test factors into the TRM.

Step 5: Identify tactical risks.

NOTE: Depending on the analyzed risks, the quality analyst (QA) or project manager (PM) removes some of the selected rows from the TRM.

Step 6: Finalize the TRM for the current project, depending on the above analysis.

Step 7: Prepare the system test plan.

Step 8: Prepare module test plans if required.

PET process (Process, Experts, Tools and Technology): -----------It is also a refined form of the V-model. This model defines the mapping between the development process and the testing process.

Development side:
  Information gathering (BRS)
    -> Analysis (S/wRS)
    -> Design (HLD & LLDs)
    -> Coding
    -> Unit & integration testing
    -> initial build

Testing side (in parallel):
  Test initiation
    -> Test planning
    -> Study S/wRS and design documents
    -> Test design & reviews
    -> Level-0 (sanity / tester acceptance / build verification testing)
       on the initial build
    -> Test automation (if the facility is available in the company)
    -> Create test batches / test suites
    -> Select a batch and start execution
         - If the test engineers find any mismatch, they raise a defect
           report to the developers and suspend that batch; after bug
           fixing and resolving, execution resumes on the modified build
         - Otherwise, test closure
    -> Level-3 (final regression / pre-acceptance test / post-mortem)
    -> UAT (User Acceptance Test)
    -> Sign off

4)TEST PLANNING: After completing the test methodology and finalizing the required testing process, test lead category people concentrate on test planning, to define "what to test?", "how to test?", "who is to test?" and "when to test?".

TEST PLAN FORMAT:(IEEE) -----------------------

1)Test plan ID: A unique number or name.

2)Introduction: About the project.

3)Test items: Modules or functions or services or features.

4)Features to be tested: The modules responsible for test designing.

5)Features not to be tested: Which ones & why not?

6)Approach: The selected testing techniques to be applied to the above modules (the TRM finalized by the project manager).

7)Testing tasks: Necessary tasks to do before starting each test.

8)Suspension criteria: The technological problems, raised during execution, that suspend testing of the above features.

9)Feature pass or fail criteria: When a feature passes and when a feature fails.

10)Test environment: The hardware and software required to conduct testing on the above modules.

11)Test deliverables: The testing documents to be prepared by the test engineers during the above modules' testing.

12)Staff & training needs: The names of the selected test engineers and the training sessions they require to understand the business logic (i.e. customer requirements).

13)Responsibilities: Work allocation to the above selected testers, in terms of modules.

14)Schedule: Dates and times.

15)Risks & mitigations: Non-technical problems raised during testing and the solutions to overcome them.

16)Approvals: Signatures of the project manager or quality analyst & the test lead.

3,4,5->Defines what to test?

6,7,8,9,10,11->Defines how to test?

12,13->Defines who to test?

14->Defines when to test?

To develop a test plan document like the one above, the test lead follows the work bench (approach) below:

-----------------------------------------------------------
Inputs : development plan, S/wRS, design documents, finalized TRM

   1) Testing team formation
            |
   2) Identify tactical risks
            |
   3) Prepare test plan
            |
   4) Review test plan

Output : test plan
-----------------------------------------------------------

1)Testing team formation: In general, the test plan process starts with testing team formation, which depends on the factors below:

-->Availability of test engineers

-->Possible test duration

-->Availability of test environment resources

Case Study: Test Duration:

Client/server, web applications, ERP (like SAP) -> 3-5 months of system testing.

System software (networking, compilers, hardware-related projects) -> 7-9 months of system testing.

Mission-critical software (like satellite projects) -> 12-15 months of system testing.

Team Size: The team size is based on the number of developers and is expressed as a ratio, i.e. Developers:Testers = 3:1. For example, a project with 12 developers would have about 4 testers.

2)Identify Tactical Risks: After completion of testing team formation, the test lead concentrates on risk analysis, or root-cause analysis.

Examples:

Risk 1: Lack of knowledge of the test engineers on that domain (training sessions are required for the test engineers).

Risk 2: Lack of budget (i.e. time).

Risk 3: Lack of resources (a bad testing environment, in terms of facilities).

Risk 4: Lack of test data (improper documents; the mitigation is ad-hoc testing, i.e. based on past experience).

Risk 5: Delays in delivery (in terms of job completion; the mitigation is working overtime).

Risk 6: Lack of communication

3)Prepare test plan: After completion of the testing team formation and the risk analysis, the test lead prepares the test plan document in the IEEE format.

4)Review test plan: After completion of the test plan document preparation, the test lead conducts reviews on that document for completeness and correctness. In this review, the test lead applies coverage analysis:

-->Requirements based coverage(what to test?)

-->Risks based coverage(who & when to test?)

-->TRM based coverage(how to test?)

Test case format (IEEE): ----------------------1) Test case-id: A unique number or name

2) Test case name :The name of test condition

3) Feature to be tested :Module or function name (to be tested)

4) Test suite-id: The name of the test batch of which this case is a member

5) Priority: The importance of the test case in terms of functionality:

P0 -> basic functionality
P1 -> general functionality (input domain, error handling, compatibility, inter-systems, configuration, installation, ...)
P2 -> cosmetic functionality (Eg: user interface testing)

6) Test environment: The hardware & software required to execute this case

7) Test effort (person-hours): The time to execute this test case (Eg: the average time to execute a test case is 20 minutes)

8) Test Duration :Date and time to execute this test case after receiving build from developers.

9) Test setup: Necessary tasks to do before starting this test case execution

10) Test procedure: A step-by-step process to execute this test case:

-----------------------------------------------------------
Step | Description | I/P      | Expected | Actual  | Comments
No   |             | Required |          | Results |
-----------------------------------------------------------
     |             |          |          |         |
-----------------------------------------------------------

The first four columns are filled during test design; the last two are filled during test execution.

11) Test case pass/fail criteria: When this case passes & when this case fails.

NOTE: In general, test engineers create the test case document with the step-by-step procedure only; they keep the remaining fields in mind for the further test execution. ----------------------------------------------------------Case study 1: Prepare a test case document for "Successful file save" in Notepad.

1) Test case-id   : Tc_save_1
2) Test case name : Successful file save

3) Test procedure :
-----------------------------------------------------------
Step | Description     | I/P Required | Expected
No   |                 |              |
-----------------------------------------------------------
1    | Open Notepad    | ------       | Empty editor opened and
     |                 |              | save option disabled
2    | Fill with text  | Valid text   | Save option enabled
3    | Click save      | ------       | Save window appears with
     |                 |              | default file name
4    | Enter file name | Unique file  | Saved file name appears in
     | & click save    | name         | the title bar of Notepad
-----------------------------------------------------------

Case study 2: Prepare test case document for "Successful Mail Reply".

1) Test case-id : Tc_Mail_Reply_1

2) Test case name : Successful Mail Reply

3) Test procedure :

-----------------------------------------------------------
Step | Description      | I/P Required  | Expected
No   |                  |               |
-----------------------------------------------------------
1    | Login to site    | Valid user-id | Mail page must be opened
     |                  | & pwd         |
2    | Click inbox link | ------        | Mail box page appears
3    | Select received  | ------        | Mail message window appears
     | mail subject     |               | (mail opened)
4    | Click reply      | ------        | Compose window appears with:
     |                  |               |  To : received mail-id
     |                  |               |  Sub: Re:(received mail sub)
     |                  |               |  CC : off
     |                  |               |  BCC: off
     |                  |               |  Message: received mail msg
     |                  |               |           with comment
5    | Enter new msg    | Valid text    | Acknowledgement from server
     | & click send     |               |
-----------------------------------------------------------

Test Execution: --------------After completion of test design and the design reviews, the testing team concentrates on the build release from the development team.

1)Levels of test execution:

Development                        Testing
-----------                        -------
Initial build  ----------------->  Level-0 (Sanity/TAT/BVT)
                                       |
Stable build   ----------------->  Level-1 (Comprehensive testing)
                                       |
Bug fixing     <-----------------  defect reports
      |                                |
Bug resolving  ----------------->  Level-2 (Regression testing)
                                       |
                                   Level-3 (Final regression testing)

2)Test Execution Levels Vs Test Cases:

Level-0 --> all P0 test cases
Level-1 --> all P0, P1 & P2 test cases, as batches
Level-2 --> selected P0, P1 & P2 test cases, with respect to the modifications
Level-3 --> selected P0, P1 & P2 test cases, with respect to high-bug-density modules

3)Test Harness:

It means "ready to test".

Test Harness = Test Environment + Test Bed
(The test environment is the hardware and software required for testing, whereas the test bed is the set of testing documents.)

Test Reporting: --------------During test execution, test engineers report mismatches to the developers through an IEEE-format defect report.

IEEE Format:

1)Defect_Id:unique number or name

2)Description:Summary of that defect.

3)Build Version-Id: The current build version in which the above defect was raised

4)Feature: The module of the build in which you found the defect

5)Test Case Name: The corresponding failed test condition which returned the above defect

6)Reproducible: Yes/No. Yes -> if the defect appears every time during test repetition. No -> if the defect does not appear every time (i.e. appears rarely) during test execution.

7)If Yes,Attach Test Procedure

8)If no, attach a snapshot and strong reasons

9)Severity: The seriousness of the defect with respect to functionality

High -> Not able to continue the remaining testing before solving that defect.

Medium -> Mandatory to solve, but able to continue the remaining testing before solving that defect.

Low -> May or may not be solved.

10)Priority: The importance of solving the defect with respect to the customer (high, medium, low)

11)Status: New/Reopen:

New -> the test engineer is reporting the defect to the developers for the first time.

Reopen -> the defect is being re-reported a second time.

12)Reported by: The name of the test engineer.

13)Reported on: The date of submission.

14)Assigned to: The name of the responsible person on the development side who receives that defect.

15)Suggested fix: Possible reasons to accept and resolve the defect.

Defect Age: The time gap between "resolved on" and "reported on". -----------------------------------------------------------
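For illustration, the defect-report fields above can be modelled as a simple Python structure (the field values shown are made-up examples, not from a real project):

    from dataclasses import dataclass, field
    from datetime import date

    @dataclass
    class DefectReport:
        # Fields mirroring the IEEE-style defect report described above.
        defect_id: str
        description: str
        build_version_id: str
        feature: str
        test_case_name: str
        reproducible: bool
        severity: str            # high / medium / low
        priority: str            # high / medium / low
        status: str = "New"      # New / Reopen
        reported_by: str = ""
        reported_on: date = field(default_factory=date.today)

    # Made-up example values, for illustration only.
    bug = DefectReport(
        defect_id="DR_001",
        description="Save option stays disabled after entering text",
        build_version_id="build_1.2",
        feature="File save",
        test_case_name="Tc_save_1",
        reproducible=True,
        severity="high",
        priority="high",
        reported_by="test engineer",
    )
    print(bug.status, "-", bug.description)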

Use Case: --------In general, test engineers prepare "test cases" depending on the "use cases" in the SRS. Every use case in the SRS describes how to use a functionality. These use cases are known as functional specifications (FS).

Types of defects (bugs):
------------------------
1)User interface bugs (low severity)
  Eg1: Spelling mistake (high priority, based on customer requirements)
  Eg2: Improper right alignment (low priority)

2)Input domain bugs (medium severity)
  Eg1: Does not allow a valid type (high priority)
  Eg2: Allows an invalid type also (low priority)

3)Error handling bugs (medium severity)
  Eg1: Does not return an error message (high priority)
  Eg2: Incomplete meaning of the error message (low priority)

4)Calculation bugs (high severity)
  Eg1: Dependent outputs are wrong (high priority)
  Eg2: Final output is wrong (low priority)

5)Race condition bugs (high severity)
  Eg1: Deadlock (high priority)
  Eg2: Does not run on expected platforms (low priority)

----------------------------------------------------------Hold: ----If the information is not enough to decide whether something is a bug or not, the report is assigned the status "Hold".

As per Design: -------------In case the defect is raised on functionality which was modified as per the requirements, the defect will not be accepted by the developer, who assigns it the status "As per design".

Tester's error: -------------In case the test engineer misunderstands the functionality, he will perform wrong testing and raise wrong defects, which are not accepted by the development team; such defects are assigned the status "Tester's error".

-----------------------------------------------------------

Reporting: ----------How the test engineer sends defect profiles to the development team.

Bug Reporting process: ---------------------It is the process in which the test engineer sends the bug report document to the developer.

Basically, the bug reporting process is the process in which the testing team sends information to the development team for rectification. The three types of bug reporting process are listed below.

-->Classical Bug Reporting

-->Repository Oriented Bug Reporting

-->Bug Tracking Tool Oriented Bug Reporting

Classical Bug Reporting: ------------------------In this process the test engineers prepare their respective defect profile documents, which are in turn sent to the quality lead. The quality lead consolidates these documents into one document, which is sent to the PM through email as an attachment. The PM then assigns these documents to the developers so that they can begin the process of defect rectification.

Drawbacks associated with this process: ---------------------------------------

-->Defect information is not secured as it is sent through Email

-->The consolidation is a tedious process for the quality lead

-->There is no provision for the development team to view the defect information and to know the status of the functionality while testing is going on

Repository-Oriented Bug Reporting process: -----------------------------------------In this process, once the test engineers have developed the individual defect profile documents, these are sent to the quality lead for consolidation. The quality lead keeps this consolidated document in a common repository (like a database); once the quality lead has placed the documents there, he intimates the same to the PM through mail.

Bug Tracking Tool Oriented Bug Reporting: In this process a bug tracking tool is introduced on top of the common repository, so that the defect information placed in the repository is available to all the members of the team. When testing happens, the test engineers do not prepare individual defect profile documents; instead the defects are entered into the defect profile template provided by the tool.
