
What do you mean by Quality?

Quality means:
- Meeting the customer requirements.
- Following/satisfying the standards and guidelines.
- Delivering the product in standard time.
- The product is maintainable.

Why is quality required?
- For customer satisfaction.
- To retain existing customers.
- To develop new customers.
- To compete in the market.
- To reduce the maintenance cost.

Quality Assurance (preventing problems):
- Prepare standards and guidelines.
- Review the reports.
- Prepare the test plans.
- Verification.

Responsibilities of a QA analyst:
- Writing the test plans and test cases.
- Understanding the requirements and discussing the issues with developers.
- Preparing the test guideline documents and process documents.
- Preparing test scenarios and evaluating test tools.

Responsibilities of a Test Engineer:
- Executing test cases.
- Entering defects.
- Verifying resolved defects.
- Preparing the test reports.
- Developing the test scripts.

Qualities of a good test engineer:
- A test-to-break attitude.
- The ability to take the point of view of the customer.
- A strong desire for quality.
- The ability to communicate technically.
- Maintaining a good relationship with the developers.

When to stop testing:
- Deadlines are reached.
- Test cases are completed.
- The test budget is depleted.
- The bug rate falls below a certain level.

Quality Control (identifying problems):
- Implement standards and guidelines.
- Prepare the reports.
- Implement the test plans.
- Validation.

Test strategy for a tight schedule:
- Which functionality is most important?
- Which functionality is most visible to the customer?
- Which functionality has the largest safety impact?
- Which aspects of the application can be tested early in the development cycle?
- Which parts of the code are most complex?

Test Case: a document which describes an input, an action, and the expected result, to determine whether the functionality of the application is working correctly or not.

Test Case No | Test Objective                | Pretest Condition    | Test Steps            | Test Data                      | Expected Result   | Actual Result | Priority
Tc1          | To log into www.yahoomail.com | Yahoo Mail is opened | Enter name and passwd | name: Parimal, passwd: Pari123 | User is logged in |               |

Black-Box Testing:
- Knowledge of the internal logic of the code is not required.
- Test cases are based on the requirements and functionality.
- As you don't know what the code or modules look like, you test based on inputs and outputs.
- Black-box tests cover how the system behaves on valid as well as invalid inputs.

White-Box Testing:
- Based on knowledge of the internal logic of the code.
- Test cases are based on coverage of code statements, branches, and paths.
- White-box tests are based on what is present in the code: they verify that the existing code works and that no additional code is present. They also check security.

Unit Testing: testing of each piece of code independently; involves debugging, etc. While unit testing:

- All branches of the method are under test.
- What inputs should produce what results or exceptions.
- The interplay of affected member variables and arguments.
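The bullets above map directly onto unit tests. A minimal sketch, assuming a made-up `absolute` method under test (not from these notes): one input per branch, plus the exception path.

```python
def absolute(x: int) -> int:
    """Hypothetical method under test: one branch per sign."""
    if x < 0:
        return -x      # negative branch
    return x           # non-negative branch

# Branch coverage: one input driving each branch of the method.
assert absolute(-5) == 5    # exercises the negative branch
assert absolute(7) == 7     # exercises the non-negative branch
assert absolute(0) == 0     # transition value lands on the non-negative branch

# Exception path: invalid input should fail loudly, not silently.
try:
    absolute("oops")        # str < int comparison raises TypeError
except TypeError:
    print("invalid input rejected")
```

The point is that every branch and every expected exception gets at least one dedicated input.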

Unit testing is done to:
- Test for data integrity.
- Check that valid input produces correct output.
- Check that invalid input has its errors properly handled.

Equivalence Partitioning: the definition of a group of inputs, any one of which should be treated exactly the same by the method under test. This helps in eliminating redundant tests: since the method's behaviour should be equivalent across the class, only one test need be run for any equivalence class.

Boundary Value Analysis: the term for choosing the appropriate values to test a particular equivalence subset. It chooses values at the edge of a set; it is also a good idea to include the transition value.

Test Plans:
- What are the testing priorities?
- When to start the testing?
- Do you have adequate resources?
- Who is responsible for the testing environment?
- Who does the testing, and how are they organized?
- How much testing can and should be automated?
- How do you manage simultaneous testing, fixes, and development?
- How can you promote a release or beta?
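Equivalence partitioning and boundary value analysis above can be sketched together; the `classify_age` function and its 0–120 valid range are invented purely for illustration.

```python
def classify_age(age: int) -> str:
    """Hypothetical method under test: valid ages are 0..120."""
    if age < 0 or age > 120:
        raise ValueError("age out of range")
    return "minor" if age < 18 else "adult"

# Equivalence partitioning: one representative per class is enough,
# because every member of a class should be treated the same way.
partitions = {
    "invalid_low": -5,    # any negative age  -> ValueError
    "valid_minor": 10,    # any age in 0..17  -> "minor"
    "valid_adult": 40,    # any age in 18..120 -> "adult"
    "invalid_high": 200,  # any age above 120 -> ValueError
}

# Boundary value analysis: test the edges of each class,
# including the transition values (17 -> 18).
boundaries = [-1, 0, 17, 18, 120, 121]

for value in boundaries:
    try:
        print(value, "->", classify_age(value))
    except ValueError:
        print(value, "-> rejected")
```

Four representatives plus six boundary values replace exhaustive testing of 120-odd inputs, which is exactly the redundancy-elimination argument made above.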

Test Cases: describe how to test a system/module function. The description should also identify the system state before executing the function to be tested, the parameter values for the test, and the expected outcome of the test.

Objective: to uncover errors in a complete manner with a minimum of effort and time.

Bug Report fields: Client, Priority, Project, Status, Module, Phase Detected, Bug Title,

Severity, Steps to Reproduce, etc.

Priority: a priority classification of a software error is based on the importance and urgency of resolving the error. It has the following categories:
1. Blocking development
2. Must fix for milestone release
3. Measurably improves the product
4. Opportunistic

Severity: a severity classification of a software error is based on the degree of the error's impact on the operation of the system. Generally given at the bug level:
1. Crashes the product
2. Affects core functionality
3. Usability

Bug Status: Open, Assigned, Resolved, Closed, Rejected, Reopened, Reassigned, Verified, Waiting Reproduction, etc.

Bug Cycle:
1. Enter a new defect
2. Assign to developer
3. Analyse the defect
4. Fix the bug
5. Resolve the bug with an explanation
6. Assign to tester
7. Verify the bug
8. Closed (or Reopened, returning to the developer, if the fix fails verification)
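The bug cycle above can be sketched as a transition table. The exact set of allowed moves below is an assumption based on the statuses listed, not any real tracker's workflow.

```python
# Allowed bug-status transitions, following the cycle described above.
TRANSITIONS = {
    "New":      {"Assigned"},
    "Assigned": {"Resolved", "Rejected"},
    "Resolved": {"Verified", "Reopened"},
    "Rejected": {"Reopened", "Closed"},
    "Reopened": {"Assigned"},
    "Verified": {"Closed"},
    "Closed":   set(),
}

def move(status: str, new_status: str) -> str:
    """Advance a bug to new_status, refusing illegal jumps."""
    if new_status not in TRANSITIONS.get(status, set()):
        raise ValueError(f"cannot go from {status} to {new_status}")
    return new_status

# Happy path: New -> Assigned -> Resolved -> Verified -> Closed.
status = "New"
for step in ["Assigned", "Resolved", "Verified", "Closed"]:
    status = move(status, step)
print(status)  # Closed
```

Encoding the cycle this way makes the "Reopened goes back to the developer" branch explicit, and makes illegal shortcuts (e.g. New straight to Closed) impossible.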

SDLC: Project Initiation → Requirement Analysis → System Design Document → Coding/Development → Testing → Deployment → Maintenance.

Testing Life Cycle: Planning → Designing → Execution → Result Analysis → Defect Tracking → Summary Report.

SDLC V-Model (verification going down, validation coming up):

  Requirement (walkthroughs, inspections, reviews, need meeting)  <->  Acceptance Testing
  High Level Design                                               <->  System Testing
  Low Level Design                                                <->  Integration Testing
  Coding                                                          <->  Unit Testing

The left, descending side is verification; the right, ascending side is validation. Each test level validates the corresponding design level: on failure the flow returns to coding, on success it moves up to the next test level.

Regression Testing: selective retesting of a system or component to verify that modifications have not caused unintended effects and that the system or component still complies with its specified requirements. It is done when a bug is fixed or a change request is logged: test the old functionality to be sure all older functionality is still present.
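A minimal regression-testing sketch, assuming a hypothetical `discount` function: the old tests are kept and rerun after every fix, so earlier behaviour stays pinned down.

```python
def discount(total: float) -> float:
    """Hypothetical function under test: 10% off orders of 100 or more."""
    return total * 0.9 if total >= 100 else total

# The regression suite: old tests are never deleted, and a new test is
# added whenever a bug is fixed, so the same bug cannot silently return.
def test_no_discount_below_threshold():
    assert discount(99.0) == 99.0

def test_discount_at_threshold():
    # Added when a (made-up) boundary bug treating >=100 as >100 was fixed.
    assert discount(100.0) == 90.0

def test_discount_above_threshold():
    assert abs(discount(200.0) - 180.0) < 1e-9

for test in (test_no_discount_below_threshold,
             test_discount_at_threshold,
             test_discount_above_threshold):
    test()
print("regression suite passed")
```

Rerunning this whole suite after each change is the "selective retesting" the definition describes, applied at the unit level.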

Acceptance Testing: formal testing conducted to determine whether or not a system satisfies its acceptance criteria and to enable the customer to determine whether or not to accept the system. There should not be any open bugs at this point; all bugs must be fixed. The client can accept the application after this.

Load Testing: testing of a system that attempts to cause failures involving how the performance of a system varies under normal conditions of utilization (e.g., as the load increases and becomes heavy). Load testing can identify failures involving scalability requirements as well as distribution and load-balancing mechanisms in distributed, scalable systems. Contrast with stress testing. It means testing an application under heavy loads, such as testing a web site under a range of loads to determine at what point the system's response time degrades or fails, and how many users, at most, the system can withstand.

Performance Testing: testing conducted to evaluate the compliance of a system or component with specified performance requirements: the time to respond to an individual action, and whether everything together performs within the constraints.

Stress Testing: testing conducted to evaluate a system or component at or beyond the limits of its specified requirements, to determine the load under which it fails and how. Often this is performance testing using a very high level of simulated load.

Integration Testing: testing in which software components, hardware components, or both are combined and tested to evaluate the interaction between them; it tests that the major subsystems that make up the project work and play well with each other.
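The load-testing idea above (ramp up concurrent users and watch where response time degrades) can be sketched with threads; `handle_request` is a stand-in for a real request against the system under test.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def handle_request(i: int) -> float:
    """Stand-in for a real request; sleeps to simulate server latency."""
    start = time.perf_counter()
    time.sleep(0.01)  # pretend network/processing time
    return time.perf_counter() - start

def load_test(users: int) -> float:
    """Fire `users` simultaneous requests; return the worst response time."""
    with ThreadPoolExecutor(max_workers=users) as pool:
        times = list(pool.map(handle_request, range(users)))
    return max(times)

# Ramp the load up and watch where the worst-case response time degrades.
for users in (1, 10, 50):
    print(users, "users -> worst response", round(load_test(users), 3), "s")
```

A real load test would replace `handle_request` with actual HTTP calls and record percentiles, not just the maximum, but the ramp-and-measure loop is the core of the technique.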

Functionality Testing: tests the functionality of the application.

End-to-End Testing: testing of the complete application.

Recovery Testing: a system test that forces the system to fail in a variety of ways and verifies that recovery is properly handled.

System Testing: testing an integrated hardware and software system to verify that the system meets its specified requirements.

Alpha Testing: testing of software at the developer's site by the customer.

Beta Testing: testing a pre-release (potentially unreliable) version of a piece of software by making it available to selected users.

Capability Maturity Model for Software (SW-CMM)

The Capability Maturity Model for Software describes the principles and practices underlying software process maturity and is intended to help software organizations improve the maturity of their software processes in terms of an evolutionary path from ad hoc, chaotic processes to mature, disciplined software processes. The CMM is organized into five maturity levels:

1) Initial. The software process is characterized as ad hoc, and occasionally even chaotic. Few processes are defined, and success depends on individual effort and heroics.

2) Repeatable. Basic project management processes are established to track cost, schedule, and functionality. The necessary process discipline is in place to repeat earlier successes on projects with similar applications.

3) Defined. The software process for both management and engineering activities is documented, standardized, and integrated into a standard software process for the organization. All projects use an approved, tailored version of the organization's standard software process for developing and maintaining software.

4) Managed. Detailed measures of the software process and product quality are collected. Both the software process and products are quantitatively understood and controlled.

5) Optimizing. Continuous process improvement is enabled by quantitative feedback from the process and from piloting innovative ideas and technologies.

Predictability, effectiveness, and control of an organization's software processes are believed to improve as the organization moves up these five levels. While not rigorous, the empirical evidence to date supports this belief. Except for Level 1, each maturity level is decomposed into several key process areas that indicate the areas an organization should focus on to improve its software process. The key process areas at Level 2 focus on the software project's concerns related to establishing basic project management controls.
They are Requirements Management, Software Project Planning, Software Project Tracking and Oversight, Software Subcontract Management, Software Quality Assurance, and Software Configuration Management. The key process areas at Level 3 address both project and organizational issues, as the organization establishes an infrastructure that institutionalizes effective software engineering and management processes across all projects. They are Organization Process Focus, Organization Process Definition, Training Program, Integrated Software Management, Software Product Engineering, Intergroup Coordination, and Peer Reviews. The key process areas at Level 4 focus on establishing a quantitative understanding of both the software process and the software work products being built. They are Quantitative Process Management and Software Quality Management.

The key process areas at Level 5 cover the issues that both the organization and the projects must address to implement continual, measurable software process improvement. They are Defect Prevention, Technology Change Management, and Process Change Management. Each key process area is described in terms of the key practices that contribute to satisfying its goals. The key practices describe the infrastructure and activities that contribute most to the effective implementation and institutionalization of the key process area.

Interview questions:
- You have mentioned that you carried out a project requirement study. What do you mean by that? What is the advantage of carrying out this study?
- Have you used WinRunner? When would you prefer using WinRunner? (Refer to those 100 questions.)
- How did you use Script Builder, voice and work, and Pro C in IVS?
- A few technical questions on telecom domains.
- What is a primary key, a secondary key, etc.? What is normalization? And a few database-related questions.
- Tell me the advantages of code review.
- What are SDLC and STLC? What is the bug life cycle?
- What are ISO standards? What is the difference between ISO, CMM, and Six Sigma?
- Explain the call flow of an IVR-related application.
- Which is the most important bug?
- Explain testing concepts like regression testing, etc., and relate them to your projects.

- What do you mean by testing the complete call flow of an IVR-related application, including retrieval of data from the backend database?
