Testing conducted to enable a user or customer to determine whether to accept a software product; normally performed to validate that the software meets a set of agreed acceptance criteria.
- Be sure that customers and management understand the scheduling impacts, inherent risks, and costs of significant requirements changes. Then let
management or the customers (not the developers or testers) decide if the changes are warranted - after all, that's their job.
- Balance the effort put into setting up automated testing with the expected effort required to redo those tests to deal with changes.
- Try to design some flexibility into automated test scripts.
- Focus initial automated testing on application aspects that are most likely to remain unchanged.
- Devote appropriate effort to risk analysis of changes to minimize regression testing needs.
- Design some flexibility into test cases (this is not easily done; the best bet might be to minimize the detail in the test cases, or set up only higher-level generic-type test plans).
- Focus less on detailed test plans and test cases and more on ad hoc testing (with an understanding of the added risk that this entails).
143. What if the project isn't big enough to justify extensive testing?
Consider the impact of project errors, not the size of the project. However, if extensive testing is still not justified, risk analysis is again needed and the same
considerations as described previously in 'What if there isn't enough time for thorough testing?' apply. The tester might then do ad hoc testing, or write up a limited
test plan based on the risk analysis.
144. What if the application has functionality that wasn't in the requirements?
It may take serious effort to determine if an application has significant unexpected or hidden functionality, and it would indicate deeper problems in the software
development process. If the functionality isn't necessary to the purpose of the application, it should be removed, as it may have unknown impacts or dependencies
that were not taken into account by the designer or the customer. If not removed, design information will be needed to determine added testing needs or
regression testing needs. Management should be made aware of any significant added risks as a result of the unexpected functionality. If the functionality only affects areas such as minor improvements in the user interface, it may not be a significant risk.
146. What if an organization is growing so fast that fixed QA processes are impossible?
This is a common problem in the software industry, especially in new technology areas. There is no easy solution in this situation, other than:
- Hire good people
- Management should 'ruthlessly prioritize' quality issues and maintain focus on the customer
- Everyone in the organization should be clear on what 'quality' means to the customer
147. How does a client/server environment affect testing?
Client/server applications can be quite complex due to the multiple dependencies among clients, data communications, hardware, and servers. Thus testing
requirements can be extensive. When time is limited (as it usually is) the focus should be on integration and system testing. Additionally, load/stress/performance
testing may be useful in determining client/server application limitations and capabilities. There are commercial tools to assist with such testing.
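Load, stress, and performance results are related by simple steady-state arithmetic (Little's Law: concurrency = throughput × response time). The sketch below is illustrative only; the function name and all numbers are invented for this example, not taken from any particular tool.

```python
# Little's Law in a steady-state load test: the number of concurrent users,
# the throughput, and the average response time constrain one another.

def throughput(concurrent_users: int, avg_response_time_s: float) -> float:
    """Requests per second sustained at a given concurrency (steady state)."""
    return concurrent_users / avg_response_time_s

# 50 virtual users, each request averaging 0.25 s:
print(throughput(50, 0.25))  # 200.0 requests/second

# If response time degrades to 1 s under load, throughput drops accordingly:
print(throughput(50, 1.0))   # 50.0 requests/second
```

This is why load-test reports that show response time climbing while throughput flattens usually indicate a saturated resource somewhere in the client/server chain.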
151. What's the difference between black box and white box testing?
Black-box and white-box are test design methods. Black-box test design treats the system as a “black-box”, so it doesn't explicitly use knowledge of the internal
structure. Black-box test design is usually described as focusing on testing functional requirements. Synonyms for black-box include: behavioral, functional,
opaque-box, and closed-box. White-box test design allows one to peek inside the “box”, and it focuses specifically on using internal knowledge of the software to
guide the selection of test data. Synonyms for white-box include: structural, glass-box and clear-box.
While black-box and white-box are terms that are still in popular use, many people prefer the terms 'behavioral' and 'structural'. Behavioral test design is slightly
different from black-box test design because the use of internal knowledge isn't strictly forbidden, but it's still discouraged. In practice, it hasn't proven useful to use
a single test design method. One has to use a mixture of different methods so that they aren't hindered by the limitations of a particular one. Some call this 'gray-box' or 'translucent-box' test design, but others wish we'd stop talking about boxes altogether.
It is important to understand that these methods are used during the test design phase, and their influence is hard to see in the tests once they're implemented.
Note that any level of testing (unit testing, system testing, etc.) can use any test design methods. Unit testing is usually associated with structural test design, but
this is because testers usually don't have well-defined requirements at the unit level to validate.
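The distinction above can be sketched on a toy function. The function and its one-line "spec" are hypothetical, invented purely to contrast the two design methods:

```python
# A toy unit under test, with a stated spec to design black-box tests from.

def classify_triangle(a: int, b: int, c: int) -> str:
    """Spec: return 'equilateral', 'isosceles', or 'scalene'; raise
    ValueError for side lengths that cannot form a triangle."""
    if a <= 0 or b <= 0 or c <= 0 or a + b <= c or a + c <= b or b + c <= a:
        raise ValueError("not a triangle")
    if a == b == c:
        return "equilateral"
    if a == b or b == c or a == c:
        return "isosceles"
    return "scalene"

# Black-box (behavioral): cases derived purely from the spec, with no
# knowledge of how the function is written.
assert classify_triangle(3, 3, 3) == "equilateral"
assert classify_triangle(3, 4, 5) == "scalene"

# White-box (structural): cases chosen by reading the code, e.g. to
# exercise the `a + c <= b` branch of the validity check specifically.
try:
    classify_triangle(1, 10, 1)
except ValueError:
    pass  # expected: 1 + 1 <= 10 trips the structural branch we targeted
```

Note how the white-box case only makes sense once you have seen the code: nothing in the spec singles out that particular inequality.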
• What are the benefits of creating multiple actions within any virtual user script?
• How did you use WinRunner in your project?
• Explain the WinRunner testing process.
• What is contained in the GUI map in WinRunner?
• How does WinRunner recognize objects on the application?
• Have you created test scripts and what is contained in the test scripts in WinRunner?
• How does WinRunner evaluate test results?
• Have you performed debugging of the scripts in WinRunner?
• How do you run your test scripts in WinRunner ?
• How do you analyze results and report the defects in WinRunner?
• What is the use of Test Director software in WinRunner?
• Have you integrated your automated scripts from TestDirector in WinRunner?
• What is the purpose of loading WinRunner Add-Ins?
• What is meant by the logical name of the object in WinRunner?
• If the object does not have a name then what will be the logical name in WinRunner?
• What is the difference between the GUI map and GUI map files in WinRunner?
• How do you view the contents of the GUI map in WinRunner?
• When you create a GUI map, do you record all the objects or only specific objects in WinRunner?
• What is load testing?
• What is Performance testing?
• Explain the load testing process. (LoadRunner Version 7.2)
• When do you do load and performance Testing?
• What are the components of LoadRunner?
• What Component of LoadRunner would you use to record a Script?
• What Component of LoadRunner would you use to play Back the script in multi user mode?
• What is a rendezvous point in LoadRunner?
• What is a scenario in LoadRunner?
• Explain the recording mode for a web Vuser script in LoadRunner.
• Why do you create parameters in LoadRunner?
• What is correlation in LoadRunner? Explain the difference between automatic correlation and manual correlation.
• How do you find out where correlation is required in LoadRunner? Give a few examples from your projects.
• Where do you set automatic correlation options in LoadRunner?
• When do you disable logging in the Virtual User Generator, and when do you choose standard versus extended logs in LoadRunner?
• What function captures dynamic values in a web Vuser script in LoadRunner?
• How do you debug a LoadRunner script?
• How do you write user-defined functions in LoadRunner?
• What changes can you make in the run-time settings in LoadRunner?
• Where do you set iterations for Vuser testing in LoadRunner?
• How do you perform functional testing under load in LoadRunner?
• What is ramp-up? How do you set it in LoadRunner?
• What is the advantage of running a Vuser as a thread in LoadRunner?
• If you want to stop the execution of your script on error in LoadRunner, how do you do that?
• What is the relation between response time and throughput in LoadRunner?
• Explain the configuration of your systems for LoadRunner.
• How do you identify performance bottlenecks in LoadRunner?
• How did you find database-related issues in LoadRunner?
• What is the difference between an Overlay graph and a Correlate graph in LoadRunner?
• How did you plan the load? What were the criteria in LoadRunner?
• What does the vuser_init action contain in LoadRunner?
• What does the vuser_end action contain in LoadRunner?
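Several of the questions above concern correlation: capturing a server-generated value from one response and reusing it in later requests. The sketch below imitates that idea in plain Python with a regular expression; the HTML, parameter name, and URL are invented for illustration (in LoadRunner the capture step is done with web_reg_save_param using left/right text boundaries, which the regex stands in for here).

```python
import re

# A server response containing a run-time value (a hypothetical session id)
# that later requests must echo back.
first_response = '<input type="hidden" name="sessionid" value="A1B2C3">'

# Capture the text between the boundaries value=" and " (non-greedy),
# the way boundary-based capture works in correlation.
match = re.search(r'value="(.*?)"', first_response)
session_id = match.group(1)

# Substitute the captured value into the next request, as a {sessionid}
# parameter would be substituted in a parameterized Vuser script.
next_request = f"GET /cart?sessionid={session_id}"
print(next_request)  # GET /cart?sessionid=A1B2C3
```

Replaying a recorded value instead of the freshly captured one is the classic correlation failure: the script passes at record time and fails on every subsequent run.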
• Who should be involved in each level of testing? What should be their responsibilities?
• You have more verifiable QA experience testing:
a.Java APIs
b.End user applications using third-party automated test tools
c.End user applications using manual testing and in-house test tools
• Your professional experience debugging, developing test cases and running system tests for developed subsystems and features is:
a.N/A - I do not have experience in these areas.
b.I have this experience on 1 to 3 commercial product launches or product integrations.
c.I have this experience on 4 to 6 commercial product launches or product integrations.
d.I have this experience on 7 to 10 or more commercial product launches or product integrations.
• You have personally created the following number of test plans or test cases:
a.N/A - I would be new to creating test plans or test cases
b.For 1 to 3 product releases
c.For 4 to 6 product releases
d.For 7 to 10 product releases
• What is an advantage of black box testing over white box testing:
a.Tests give confidence that the program meets its specifications
b.Tests can be done while the program is being written instead of waiting until it is finished
c.It ensures that every piece of code written is tested in some way
d.Tests give confidence that every part of the code is working
• What is an advantage of white box testing over black box testing:
a.Tests can discover that some code is missing
b.Tests can be done while the program is being written instead of waiting until it is finished
c.Tests can be designed before a program is written
d.Tests give confidence that the program meets its specifications
• Your experience with Programming within the context of Quality Assurance is:
a.N/A - I have no programming experience in C, C++ or Java.
b.You have done some programming in your role as a QA Engineer, and are comfortable meeting such requirements in Java, C and C++ or VC++.
c.You have developed applications of moderate complexity that have taken up to three months to complete.
• Your skill in maintaining and debugging an application is best described as:
a.N/A - You have not participated in debugging a product.
b.You have worked under the mentorship of a team lead to learn various debugging techniques and strategies.
c.You have both an interest in getting to the root of a problem and understand the steps You need to take to document it fully for the developer.
d.You are experienced in working with great autonomy on debugging/maintenance efforts and have a track record of successful projects You can discuss.
• Why does testing not prove a program is 100 percent correct (except for extremely simple programs)?
a.Because we can only test a finite number of cases, but the program may have an infinite number of possible combinations of inputs and outputs
b.Because the people who test the program are not the people who write the code
c.Because the program is too long
d.All of the above
e.We CAN prove a program is 100 percent correct by testing
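Answer (a) above rests on simple arithmetic about input-space size, which the back-of-envelope sketch below illustrates; the tests-per-second figure is a deliberately generous assumption, not a measurement:

```python
# Why exhaustive testing is infeasible: even a function of just two
# 32-bit integer arguments has 2**64 input combinations.

inputs_per_arg = 2 ** 32
combinations = inputs_per_arg ** 2      # 2**64 total cases
tests_per_second = 10 ** 9              # assume a billion tests per second

seconds = combinations / tests_per_second
years = seconds / (60 * 60 * 24 * 365)
print(f"{years:.0f} years")  # roughly 585 years at that rate
```

Real programs take strings, files, and sequences of events as input, so the practical input space is far larger still.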
• Which statement regarding Validation is correct:
a.It refers to the set of activities that ensures the software has been built according to the customer's requirements.
b.It refers to the set of activities that ensure the software correctly implements specific functions.
a.A technique to ensure that every unique sequence of commands through a program is executed
b.Testing pieces of a program (usually classes) independently using test drivers and/or stubs
c.Testing using inputs that are derived from a statement of what the program is supposed to do
d.Testing of increasingly more complete versions of a program
• Which of the following is used to determine whether all reported bugs have indeed been fixed, and no new ones introduced:
a.Regression
b.Matrix
c.Performance
d.Functional
e.White Box
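The answer to the question above is regression testing: re-running the existing suite plus a test pinning each fixed bug. A minimal sketch, in which the function, the bug, and the bug number are all hypothetical:

```python
# After a reported bug is fixed, a test pinning the failing input joins the
# suite so the fix cannot silently regress in a later change.

def discount(price: float, percent: float) -> float:
    """Apply a percentage discount; the result never goes below zero."""
    return max(price * (1 - percent / 100), 0.0)

def test_regression_bug_1234():
    # Originally reported failure: percent > 100 produced a negative price.
    assert discount(50.0, 120.0) == 0.0

def test_existing_behavior():
    # Re-run alongside the regression test to confirm nothing new broke.
    assert discount(100.0, 25.0) == 75.0

test_regression_bug_1234()
test_existing_behavior()
print("regression suite passed")
```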
• Which of the following is not correct regarding the purpose of testing:
a.To verify that a program works correctly
b.To verify that a program matches the initial specifications
c.To prove a program is 100 percent correct in all cases
d.To uncover bugs that are then fixed via debugging
• Which of the following testing strategies ignores the internal structure of the software?
a.Interface testing
b.Top down testing
c.White box testing
d.Black box testing
e.Sandwich testing
• Regarding your experience with XML:
a.N/A - You would be new to using XML.
b.You have a basic understanding of its use.
c.You have experience using XML to transfer and transform data.
d.You have significant experience creating XML schema and creating applications for data transfer.
• Your experience writing SQL queries is:
a.You would be new to writing SQL queries.
b.You have written simple SQL queries.
c.You have written medium difficulty SQL queries.
d.You have written complex SQL queries that included joins.
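A join of the kind option (d) refers to can be sketched with Python's built-in sqlite3 module; the schema, names, and data below are invented for illustration:

```python
import sqlite3

# An in-memory database with two related tables.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE testers (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE defects (id INTEGER PRIMARY KEY, tester_id INTEGER,
                          severity TEXT);
    INSERT INTO testers VALUES (1, 'Ana'), (2, 'Raj');
    INSERT INTO defects VALUES (10, 1, 'high'), (11, 1, 'low'),
                               (12, 2, 'high');
""")

# Join defects to the tester who filed them, counting high-severity ones.
rows = conn.execute("""
    SELECT t.name, COUNT(*) AS high_defects
    FROM testers t
    JOIN defects d ON d.tester_id = t.id
    WHERE d.severity = 'high'
    GROUP BY t.name
    ORDER BY t.name
""").fetchall()
print(rows)  # [('Ana', 1), ('Raj', 1)]
```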
• Your experience writing Shell Scripts is:
a.You would be new to writing Shell Scripts.
b.You have a basic understanding of Shell Scripts.
c.You have experience writing Shell Scripts.
d.You have significant experience creating complex Shell Scripts
• Regarding the use of a computer, You are:
a.An expert with computers, the Internet and Windows, and are often asked to help others.
b.New to computers and would need a little help to get started.
c.Comfortable with e-mail and the Internet, but would need help with other applications required for the position.
d.Comfortable with e-mail and a variety of computer software, but not an expert.
• Your knowledge and experience in Linux is:
a.N/A - You have no direct Linux operating system experience and would need help to become functionally proficient.
b.You have a good understanding of Linux and run this OS on Your home PC.
c.You have experience with multiple Linux variants and feel as comfortable with it as most people do working in Windows.
• The highest level of education You have attained is:
a.High school diploma
b.Some college experience, but no BS/BA degree, with relevant QA experience.
c.An undergraduate degree in Computer Science or related field.
d.An undergraduate degree in Computer Science or related field, or comparable working experience.
• Are regression tests required or do you feel there is a better use for resources?
• Our software designers use UML for modeling applications. Based on their use cases, we would like to plan a test strategy. Do you agree with this
approach, or would this mean more effort for the testers?
• Tell me about a difficult time you had at work and how you worked through it.
• Give me an example of something you tried at work but did not work out so you had to go at things another way.
• How can one file-compare future-dated output files from a program that has changed against the baseline run, which used the current date for input? The client does not want to mask dates on the output files to allow compares. Answer: Rerun the baseline with input files future-dated by the same number of days as the future-dated run of the changed program. Then run a file compare of the baseline's future-dated output against the changed program's future-dated output.
• What is the structure of the company?
• Who is going to conduct the interview, and what background information is available on the interviewer?
• What is the employer's environment (platforms, tools, etc.)?
• What are the employer's methods and processes used in software arena?
• What is the employer's philosophy?
• What automated testing tools are you familiar with?
• How did you use automated testing tools in your job?
• Describe a problem you had with an automated testing tool.
• How do you plan test automation?
• Can test automation improve test effectiveness?
• What is Negative testing?
• What was a problem you had in your previous assignment (testing if possible)? How did you resolve it?
• What are two of your strengths that you will bring to our QA/testing team?
• How would you define Quality Assurance?
• What do you like most about Quality Assurance/Testing?
• What do you like least about Quality Assurance/Testing?
• What is the Waterfall Development Method and do you agree with all the steps?
• What is the V-Model Development Method and do you agree with this model?
• What is the Capability Maturity Model (CMM)? At what CMM level were the last few companies you worked?
• What is a "Good Tester"?
• Could you tell me two things you did in your previous assignment (QA/Testing related hopefully) that you are proud of?
• List 5 words that best describe your strengths.
• What are two of your weaknesses?
• What methodologies have you used to develop test cases?
• In an application currently in production, one module of code is being modified. Is it necessary to re-test the whole application, or is it enough to just test functionality associated with that module?
• Define each of the following and explain how each relates to the other: Unit, System, and Integration testing.
• Define Verification and Validation. Explain the differences between the two.
• Explain the differences between White-box, Gray-box, and Black-box testing.
• How do you go about going into a new organization? How do you assimilate?
• Define the following and explain their usefulness: Change Management, Configuration Management, Version Control, and Defect Tracking.
• What is ISO 9000? Have you ever been in an ISO shop?
• When are you done testing?
• What is the difference between a test strategy and a test plan?
• What is ISO 9003? Why is it important?
• What are ISO standards? Why are they important?
• What is IEEE 829? (This standard is important for Software Test Documentation-Why?)
• What is IEEE? Why is it important?
• Do you support automated testing? Why?
• We have a testing assignment that is time-driven. Do you think automated tests are the best solution?
• What is your experience with change control? Our development team has only 10 members. Do you think managing change is such a big deal for us?
• Are reusable test cases a big plus of automated testing? Explain why.
• Can you build a good audit trail using Compuware's QACenter products? Explain why.
• How important is Change Management in today's computing environments?
• Do you think tools are required for managing change? Explain, and please list some tools/practices that can help you manage change.
• We believe in ad-hoc software processes for projects. Do you agree with this? Please explain your answer.
• When is a good time for system testing?
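Several questions above touch on unit testing with test drivers and stubs. A minimal sketch of the idea, in which every name is hypothetical:

```python
# The unit under test is isolated from its real dependency (for example a
# networked pricing service) by substituting a stub stand-in.

def fetch_price(item_id: str, price_service) -> float:
    """Unit under test: looks up a base price and applies 8% tax."""
    base = price_service.get_price(item_id)
    return round(base * 1.08, 2)

class StubPriceService:
    """Stub standing in for the real price service: canned answers only."""
    def get_price(self, item_id: str) -> float:
        return {"widget": 10.00}.get(item_id, 0.0)

# Test driver exercising the unit in isolation from the real service:
assert fetch_price("widget", StubPriceService()) == 10.80
print("unit test with stub passed")
```

Because the stub is deterministic, the test exercises only the unit's own logic; integration testing later replaces the stub with the real dependency.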
Software testing is the process and tools used to test coded software before it is released to the public. Software testing is a critical component of the software
development cycle. Without software testing, consumers would not get stable software releases. Software testing is a process used to help identify the
correctness, completeness and quality of developed computer software. With that in mind, software testing can never completely establish the correctness of
arbitrary computer software.
There are many approaches to software testing, but effective testing of complex products is essentially a process of investigation, not merely a matter of creating
and following rote procedure. One definition of software testing is "the process of questioning a software product in order to evaluate it", where the "questions" are
things the tester tries to do with the product, and the product answers with its behavior in reaction to the probing of the tester. Although most of the intellectual processes of software testing are nearly identical to those of review or inspection, the term software testing usually connotes the dynamic analysis of the software product: putting the software through its paces.