
Software Testing FAQ

1.Software Quality Assurance


(1) A planned and systematic pattern of all actions necessary to provide adequate
confidence that an item or product conforms to established technical
requirements.
(2) A set of activities designed to evaluate the process by which products are
developed or manufactured.

2.What's the difference between client/server and Web applications?


A client/server application is any application architecture in which one server application and one or many client applications are involved, like your mail server and MS Outlook Express; it can be a web application as well. A Web application is a kind of client/server application that is hosted on a web server and accessed over the Internet or an intranet. There are lots of things that differ between testing of the two types above, more than can be covered in one post, but you can look into data flow, communication, and server-side concerns such as session handling and security.

3.Software Quality Assurance Activities

Application of Technical Methods (employing proper methods and tools for developing software)

Conduct of Formal Technical Reviews (FTR)

Testing of Software

Enforcement of Standards (customer-imposed or management-imposed standards)

Control of Change (assess the need for change, document the change)

Measurement (software metrics to measure quality; quantifiable)

Record Keeping and Reporting (documentation, reviews, change control etc., i.e. the benefits of documentation).

4.What's the difference between STATIC TESTING and DYNAMIC TESTING?

Answer1:
Dynamic testing: requires the program to be executed.
Static testing: does not involve program execution.

In dynamic testing, the program is run on some test cases and the results of the program's execution are examined to check whether the program operated as expected.
Static testing includes compiler tasks such as syntax and type checking, as well as symbolic execution, program proving, data flow analysis and control flow analysis.

Answer2:
Static Testing: Verification performed without executing the system code
Dynamic Testing: Verification and validation performed by executing the system
code

5.Software Testing
Software testing is a critical component of the software engineering process. It is
an element of software quality assurance and can be described as a process of
running a program in such a manner as to uncover any errors. This process, while
seen by some as tedious, tiresome and unnecessary, plays a vital role in software
development.

Testing involves operation of a system or application under controlled conditions and evaluating the results (e.g., 'if the user is in interface A of the application
while using hardware B, and does C, then D should happen'). The controlled
conditions should include both normal and abnormal conditions. Testing should
intentionally attempt to make things go wrong to determine if things happen
when they shouldn't or things don't happen when they should. It is oriented to
'detection'.

Organizations vary considerably in how they assign responsibility for QA and testing. Sometimes they're the combined responsibility of one group or individual.
Also common are project teams that include a mix of testers and developers who
work closely together, with overall QA processes monitored by project managers.
It will depend on what best fits an organization's size and business structure.

6.What's the difference between QA and testing?


The quality assurance process is a process for providing adequate assurance that the software products and processes in the product life cycle conform to their specified requirements and adhere to their established plans.
The purpose of Software Quality Assurance is to provide management with
appropriate visibility into the process being used by the software project and of
the products being built

7.What black box testing types can you tell me about?


Black box testing is functional testing, not based on any knowledge of internal software design or code; it is based on requirements and functionality.
Functional testing is a black-box type of testing geared to the functional requirements of an application.
System testing, acceptance testing, closed box testing and integration testing are also black box types of testing.

8.What is software testing methodology?


One software testing methodology is the use of a three-step process of...
1. Creating a test strategy;
2. Creating a test plan/design; and
3. Executing tests. This methodology can be used and molded to your
organization's needs. Rob Davis believes that using this methodology is important
in the development and ongoing maintenance of his clients' applications.

9.What’s the difference between QA and testing?


TESTING means “Quality Control”; and
QUALITY CONTROL measures the quality of a product; while
QUALITY ASSURANCE measures the quality of processes used to create a quality
product.

10.Why Testing CANNOT Ensure Quality


Testing in itself cannot ensure the quality of software. All testing can do is give
you a certain level of assurance (confidence) in the software. On its own, the only
thing that testing proves is that under specific controlled conditions, the software
functioned as expected by the test cases executed.

11.How to find all the Bugs during first round of Testing?


Answer1:
I understand the problems you are facing. I was involved with a web-based HR
system that was encountering the same problems. What I ended up doing was
going back over a few release cycles and analyzing the types of defects found and
when (in the release cycle including the various testing cycles) they were found. I
started to notice a distinct trend in certain areas.
For each defect type, I started looking into whether it could have been
caught in the prior phase (lots of things were being found in the Systems test
phase that should have been caught earlier). If so, why wasn't it caught? Could it
have been caught even earlier (say via a peer review)? If so, why not? This led
me to start examining the various processes and found a definite problem with
peer reviews (not very thorough IF they were even being done) and with the
testing process (not rigorous enough). We worked with the customer and folks
doing the testing to start educating them and improving the processes. The result
was that the number of defects found in the later test stages (System test, for example) was cut by over half! It was getting harder to find problems with the
product as they were discovering them earlier in the process -- saving time &
money!

Answer2:
There could be several reasons for not catching a showstopper in the first or second build/rev. A found defect could either functionally or psychologically mask a second or third defect. Functionally, the thread or path to the second defect could have been broken or rerouted to another path; psychologically, the tester who found the first defect knows the app must go back and be rewritten, so he/she proceeds halfheartedly and misses the second one. I've seen both cases. It is difficult to keep testing a known defective app. The testers seem to lose interest, knowing that the effort they put into testing will have to be redone on the next iteration. This will test your mettle as a lead, getting them to follow through and maintain a professional attitude.

Answer3:
The best way is to prevent bugs in the first place. Also testing doesn't fix or
prevent bugs. It just provides information. Applying this information to your
situation is the important part.
The other thing that you may be encountering is that testing tends to be
exploratory in nature. You have stated that these are existing bugs, but not stated
whether tests already existed for these bugs.
Bugs in early cycles inhibit exploration. Additionally, a tester's understanding of
the application and its relationships and interactions will improve with time and
thus more 'interesting' bugs tend to be found in later iterations as testers expand
their exploration (i.e. think of new tests).
No matter how much time you have to read through the documents and inspect
artifacts, seeing the actual application is going to trigger new thoughts, and thus
introduce previously unthought of tests. Exposure to the application will trigger
new thoughts as well, thus the longer your testing goes, the more new tests (and
potential bugs) are going to be found. Iterative development is a good way to
counter this, as testers get to see something physical earlier, but this issue will
always exist to some degree as the passing of time, and exploration of the
application allow new tests to be thought of at inconvenient moments.

12.Is regression testing performed manually?


The answer to this question depends on the initial testing approach. If the initial
testing approach was manual testing, then the regression testing is usually
performed manually. Conversely, if the initial testing approach was automated
testing, then the regression testing is usually performed by automated testing.
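For instance, a minimal sketch of an automated regression suite (the apply_discount function and its discount rule here are hypothetical, not from any particular product) simply re-runs the same checks against every new build:

import unittest

def apply_discount(price: float, customer_type: str) -> float:
    """Hypothetical function under regression test: members get 10% off."""
    return round(price * 0.9, 2) if customer_type == "member" else price

class DiscountRegressionTests(unittest.TestCase):
    """Re-run unchanged against every build to confirm existing behaviour still holds."""

    def test_member_discount_unchanged(self):
        self.assertEqual(apply_discount(100.0, "member"), 90.0)

    def test_non_member_pays_full_price(self):
        self.assertEqual(apply_discount(100.0, "guest"), 100.0)

if __name__ == "__main__":
    unittest.main()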

13.How do you choose which defects to fix out of 1,000,000 defects? (Because it would take too many resources to remove them all.)

Answer1:
Are you the programmer who has to fix them, the project manager who has to
supervise the programmers, the change control team that decides which areas are
too high risk to impact, the stakeholder-user whose organization pays for the
damage caused by the defects or the tester?
The tester does not choose which defects to fix.
The tester helps ensure that the people who do choose make a well-informed choice.
Testers should provide data to indicate the *severity* of bugs, but the project
manager or the development team do the prioritization.
When I say "indicate the severity", I don't just mean writing S3 on a piece of
paper. Test groups often do follow-up tests to assess how serious a failure is and
how broad the range of failure-triggering conditions is.
Priority depends on a wide range of factors, including code-change risk,
difficulty/time to complete the change, which stakeholders are affected by the
bug, the other commitments being handled by the person most knowledgeable
about fixing a certain bug, etc. Many of these factors are not within the
knowledge of most test groups.

Answer2:
As testers we don't fix the defects, but we surely can prioritize them once they are detected. In our org we assign a severity level to each defect depending upon its influence on other parts of the product. If a defect doesn't allow you to go ahead and test the product, it is a critical one, so it has to be fixed ASAP. We have 5 levels:
1-critical
2-High
3-Medium
4-Low
5-Cosmetic

Dev can group all the critical ones and fix them before any other defects.

Answer3:
Priority/Severity P1 P2 P3
S1
S2
S3

Generally, defects are classified in the grid shown above. Every organization / software product has some target for fixing the bugs.
Example -
P1S1 -> 90% of the bugs reported should be fixed.
P3S3 -> 5% of the bugs reported may be fixed. The rest are taken up in later service packs or versions.

Thus the organization should decide its target and act accordingly.
Basically bug free software is not possible.

Answer4:
Ideally, the customer should assign priorities to their requirements. They tend to
resist this. On a large, multi-year project I just completed, I would often (in the
lack of customer guidelines) rely on my knowledge of the application and the
potential downstream impacts in the modeled business process to prioritize
defects.
If the customer doesn't, then I feel the test organization should, based on risk or other, similar considerations.

14.What is software quality?


The quality of the software varies widely from system to system. Some common
quality attributes are stability, usability, reliability, portability, and maintainability.

15.What are the five dimensions of the Risks?


Schedule: Unrealistic schedules, exclusion of certain activities when chalking out a
schedule etc. could be deterrents to project delivery on time. Unstable
communication link can be considered as a probable risk if testing is carried out
from a remote location.
Client: Ambiguous requirements definition, clarifications on issues not being
readily available, frequent changes to the requirements etc. could cause chaos
during project execution.
Human Resources: Non-availability of sufficient resources with the skill level expected in the project; attrition of resources - appropriate training schedules must be planned to keep the knowledge level on par when resources quit. Underestimating the training effort may have an impact on project delivery.
System Resources: Non-availability of, or delay in procuring, critical computer resources (hardware, software tools, or software licenses) will have an adverse impact.
Quality: Compound factors like lack of resources along with a tight delivery
schedule and frequent changes to requirements will have an impact on the quality
of the product tested.

16.What is good code?


Good code is code that works, is free of bugs, and is readable and maintainable.
Organizations usually have coding standards all developers should adhere to, but
every programmer and software engineer has different ideas about what is best
and what are too many or too few rules. We need to keep in mind that excessive
use of rules can stifle both productivity and creativity. Peer reviews and code
analysis tools can be used to check for problems and enforce standards.

17.How do you perform integration testing?


To perform integration testing, first, all unit testing has to be completed. Upon
completion of unit testing, integration testing begins. Integration testing is black
box testing. The purpose of integration testing is to ensure distinct components of
the application still work in accordance with customer requirements. Test cases are
developed with the express purpose of exercising the interfaces between the
components. This activity is carried out by the test team.
Integration testing is considered complete, when actual results and expected
results are either in line or differences are explainable, or acceptable, based on
client input.

18.Why is back-end testing required if we are going to check the front end? What errors/bugs are we missing by not doing back-end testing?

Why do we need to do unit testing if all the features are being tested in system testing? What extra things are tested in unit testing that cannot be tested in system testing?

Answer1:
Assume that you're thinking client-server or web. If you test the application on the front end only, you can see whether the data was stored and retrieved correctly, but you can't see whether the servers are in an error state or not. Many server processes are monitored by another process; if they crash, they are restarted. You can't see that without looking at the back end.
The data may not be stored correctly either, but the front end may have cached data lying around and will use that instead. The least you should be doing is verifying the data as stored in the database.
It is easier to test data being transferred on the boundaries and see the results of those transactions when you can set the data in a driver.
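As a minimal sketch (assuming a relational back end; the bookings table, its columns and the CONFIRMED status are hypothetical), a back-end check queries the database directly instead of trusting the front end's acknowledgement:

import sqlite3

def verify_booking_stored(db_path: str, booking_id: str, expected_destination: str) -> bool:
    """Check in the back-end database that a booking created through the
    front end was actually persisted with the expected values."""
    conn = sqlite3.connect(db_path)
    try:
        row = conn.execute(
            "SELECT destination, status FROM bookings WHERE booking_id = ?",
            (booking_id,),
        ).fetchone()
    finally:
        conn.close()
    if row is None:
        return False  # the front end acknowledged the booking, but nothing was stored
    destination, status = row
    return destination == expected_destination and status == "CONFIRMED"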

Answer2:
Back-end testing: basically, the need for this testing depends on the project. Say your project is a ticket booking system. On the front end you are provided with an interface where you can book a ticket by giving the appropriate details (like where you want to go and when you want to go). The system will have data storage (a database, an Excel sheet, etc.), which is the back end where the details entered by the user are stored.
After submitting the details, you might receive a correct acknowledgement, but in the back end the details might not be updated correctly in the database because of faulty logic; that would cause a major problem.
Regarding unit-level testing and system testing: unit-level testing covers the basic checks that the application works with the basic requirements, and is done by developers before delivering to QA. In system testing, in addition to the unit checks, you perform all the required integrated checks; this is basically carried out by testers.

Answer3:
Ever heard about the divide-and-conquer tactic? The same method applies to back-end and front-end testing.
A good back-end test will help minimize the burden of front-end testing. Another point is that you can test the back end while the front end is being developed, so true parallelism can be achieved.
Back-end testing has another problem which must be addressed before the front end can use it: concurrency. Building a scenario to test concurrency is a formidable task.
A complex thing is hard to test; creating such scenarios can leave you unsure which tests you have already done and which you haven't. What we need is an effective method to test our application, and the simplest method I know is divide and conquer.

Answer4:
A wide range of errors are hard to see if you don't see the code. For example,
there are many optimizations in programs that treat special cases. If you don't
see the special case, you don't test the optimization. Also, a substantial portion of
most programs is error handling. Most programmers anticipate more errors than
most testers.
Programmers find and fix the vast majority of their own bugs. This is cheaper,
because there is no communication overhead, faster because there is no delay
from tester-reporter to programmer, and more effective because the programmer
is likely to fix what she finds, and she is likely to know the cause of the problems
she sees. Also, the rapid feedback gives the programmer information about the
weaknesses in her programming that can help her write better code.
Many tests -- most boundary tests -- are done at the system level primarily
because we don't trust that they were done at the unit level. They are wasteful
and tedious at the system level. I'd rather see them properly done and properly
automated in a suite of programmer tests.
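To illustrate (a minimal sketch; the validate_age function and its 18-120 range are hypothetical), boundary tests automated at the unit level in a programmer test suite might look like this:

import unittest

def validate_age(age: int) -> bool:
    """Hypothetical unit under test: accepts ages in the inclusive range 18..120."""
    return 18 <= age <= 120

class AgeBoundaryTests(unittest.TestCase):
    """Boundary-value tests kept at the unit level rather than the system level."""

    def test_values_on_and_around_the_boundaries(self):
        self.assertFalse(validate_age(17))   # just below the lower boundary
        self.assertTrue(validate_age(18))    # lower boundary
        self.assertTrue(validate_age(120))   # upper boundary
        self.assertFalse(validate_age(121))  # just above the upper boundary

if __name__ == "__main__":
    unittest.main()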

19.What is Software “Quality”?


Quality software is reasonably bug-free, delivered on time and within budget,
meets requirements and/or expectations, and is maintainable.
However, quality is a subjective term. It will depend on who the ‘customer’ is and
their overall influence in the scheme of things. A wide-angle view of the
‘customers’ of a software development project might include end-users, customer
acceptance testers, customer contract officers, customer management, the
development organization’s management/accountants/testers/salespeople, future
software maintenance engineers, stockholders, magazine reviewers, etc. Each
type of ‘customer’ will have their own view on ‘quality’ - the accounting
department might define quality in terms of profits while an end-user might define
quality as user-friendly and bug-free.

20.What is retesting?

Answer1:
Retesting is usually equated with regression testing (see above), but it is different in that it follows a specific fix (such as a bug fix) and is very narrow in focus (as opposed to testing the entire application again in a regression test). A product should never be released after a change has been applied to the code with only retesting of the bug fix and no regression test.

Answer2:
1. Re-testing is testing for a specific bug after it has been fixed (the definition you gave).
2. Re-testing can also be done for a bug which was raised by QA but could not be found or confirmed by Development and so was rejected. QA then re-tests to make sure the bug still exists and assigns it back to them.
When the entire project has been tested and the client has some doubts about the quality of the testing, re-testing can also be called for. It can also mean testing the same application again for better quality.

Answer3:
Regression Testing is the selective retesting of a system that has been modified
to ensure that any bugs have been fixed and that no other previously working
functions have failed as a result of the reparations and that newly added features
have not created problems with previous versions of the software. Also referred to
as verification testing
It is important to determine whether, in a given set of circumstances, a particular series of tests has been failed. The supplier may want to submit the software for re-testing. The contract should deal with the parameters for retests, including (1) will test programs which are doomed to failure be allowed to finish early, or must they be completed in their entirety? (2) when can, or must, the supplier submit the software for retesting? and (3) how many times can the supplier fail tests and resubmit the software for retesting - is this based on time spent, or on the number of attempts? A well-drawn contract will grant the customer options in the event of failure of acceptance tests, and these options may vary depending on how many attempts the supplier has made to achieve acceptance.
So the conclusion is that retesting is more or less regression testing; more appropriately, retesting is a part of regression testing.

Answer4:
Re-testing is simply executing the test plan another time. The client may request
a re-test for any reason; the most likely reasons are that the testers did not properly execute the scripts, that test results were poorly documented, or that the client is not comfortable with the results.
I've performed re-tests when the developer inserted unauthorized code changes,
or did not document changes.
Regression testing is the execution of test cases "not impacted" by the specific
project. I am currently working on testing of a system with poor system
documentation (and no user documentation) so our regression testing must be
extensive.

Answer5:
* QA gets a bug fix and has to verify that the bug is fixed. You might also want to check a few “gut feel” items and still get away with calling it retesting, but not the entire function / module / product. * Development refuses a bug on the basis of it being “Non Reproducible”; then retesting, preferably in the presence of the developer, is needed.

How to establish a QA process in an organization?


1.CURRENT SITUATION
The first thing you should do is to put what you currently do on a piece of paper in
some sort of a flowchart diagram. This will allow you to analyze what is being
currently done.
2.DEVELOPMENT PROCESS STAGE
Once you have the "big picture", you have to be aware of the current status of
your development project or projects. The processes you select will vary
depending if you are in early stages of developing a new application (i.e.:
developing a version 1.0), or maintaining an existing application (i.e.: working on
release 6.7.1).
3. PRIORITIES
The next thing you need to do is identify the priorities of your project, for example:
- Compliance with industry standards
- Validation of new functionality (new GUIs, etc.)
- Security
- Capacity planning (see "Effective Methods for Software Testing" for more info)
Make a list of the priorities, and then assign them values of (H)igh, (M)edium and (L)ow.
4. TESTING TYPES
Once you are aware of the priorities, focus on the High first, then Medium, and
finally evaluate whether the Low ones need immediate attention.
Based on this, you need to select those Testing Types that will provide coverage
for your priorities. Examples of testing types:
- Functional Testing
- Integration Testing
- System Testing
- System-to-System Testing (for testing interfaces)
- Regression Testing
- Load Testing
- Performance Testing
- Stress Testing
Etc.

5. WRITE A TEST PLAN


Once you have determined your needs, the simplest way to document and
implement your process is to elaborate a "Test Plan" for every effort that you are engaged in (i.e.: for every release).
For this you can use generic Test Plan templates available in the web that will help
you brainstorm and define the scope of your testing:
- Scope of Testing (defects, functionality, and what will be and will not be tested).
- Testing Types (Functional, Regression, etc).
- Responsible people
- Requirements traceability matrix (match test cases with requirements to ensure
coverage)
- Defect tracking
- Test Cases
DURING AND POST-TESTING ACTIVITIES
Make sure you keep track of the completion of your testing activities and the defects found, and that you comply with the exit criteria before moving to the next stage in testing (i.e. User Acceptance Testing, then Production Release).
Make sure you have a mechanism for:
- Reporting
- Test tracking

What is software testing?


1) Software testing is a process that identifies the correctness, completeness, and
quality of software. Actually, testing cannot establish the correctness of software.
It can find defects, but cannot prove there are no defects.
2) It is a systematic analysis of the software to see whether it has performed to specified requirements. What software testing does is uncover errors; however, it does not tell us that no errors remain.

Any recommendation for estimating how many bugs the customer will find before the gold release?

Answer1:
If you take the total number of bugs in the application and subtract the number of
bugs you found, the difference will be the maximum number of bugs the customer
can find.
Seriously, I doubt you will find any sort of calculations or formula that can answer
your question with much accuracy. If you could reference a previous application
release, it might give you a rough idea. The best thing to do is ensure your test coverage is as good as you can make it, then hope you've found the ones the
customer might find.
Remember Software testing is Risk Management!

Answer2:
For doing estimation:
1.) Find out the coverage during testing of your software and then estimate, keeping in mind the 80-20 principle.
2.) You can also look at the depth of your test cases, e.g. how much unit-level testing and how much life-cycle testing you have performed (most of the bugs reported by customers come from real life-cycle use of the software).
3.) You can also refer to the defect density from earlier releases of the same product line.
By doing these evaluations you can arrive at an approximately optimal estimate of the probability of bugs.

Answer3:
You can look at the mapping of customer issues from a previous release (if you have the same product line) to the current release; this is the best way of estimating for the gold release or a migration of any product. Secondly, up to the gold release most issues come from various combinations of installation testing, such as cross-platform, i18n issues, customization, upgrade and migration.
These can be taken as parameters and the estimation can then be completed.

When the build comes to the QA team, what parameters should be considered in order to reject the build upfront, without committing to testing?

Answer1:
Agree with R&D on a set of tests such that if one of them fails you can reject the build. I usually have some build verification tests that just make sure the build is stable and the major functionality is working.
Then if one test fails you can reject the build.
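A minimal sketch of such a build verification (smoke) suite is shown below; the base URL, the /health and /login endpoints and the test account are assumptions for illustration, not part of any particular product:

import requests

BASE_URL = "http://test-env.example.com"  # hypothetical test environment

def test_application_starts_and_responds():
    """Smoke check: the deployed build answers HTTP requests at all."""
    response = requests.get(f"{BASE_URL}/health", timeout=10)
    assert response.status_code == 200

def test_login_with_known_good_account():
    """Core functionality: a known test account can log in."""
    response = requests.post(
        f"{BASE_URL}/login",
        data={"username": "qa_user", "password": "qa_password"},
        timeout=10,
    )
    assert response.status_code == 200

# Agreed convention with R&D: if any test in this file fails, the build is
# rejected and returned without further testing.

These functions can be run with a test runner such as pytest; the point is only that the suite is small, fast, and agreed upon in advance.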

Answer2:
The only way to legitimately reject a build is if the entrance criteria have not been
met. That means that the entrance criteria to the test phase have been defined
and agreed upon up front. This should be standard for all builds for all products.
Entrance criteria could include:
- Turn-over documentation is complete
- All unit testing has been successfully completed and U/T cases are documented
in turn-over
- All expected software components have been turned-over (staged)
- All walkthroughs and inspections are complete
- Change requests have been updated to correct status
- Configuration Management and build information is provided, and correct, in
turn-over
The only way we could really reject a build without any testing, would be a failure
of the turn-over procedure. There may, but shouldn't be, politics involved. The
only way the test phase can proceed is for the test team to have all components
required to perform successful testing. You will have to define entrance (and exit)
criteria for each phase of the SDLC. This is an effort to be undertaken together by the whole development team. Development's entrance criteria would include signed requirements, the HLD doc, etc. Having these criteria pre-established sets everyone up for success.

Answer3:
The primary reason to reject a build is that it is untestable, or that testing it would be considered invalid.
For example, suppose someone gave you a "bad build" in which several of the
wrong files had been loaded. Once you know it contains the wrong versions, most
groups think there is no point continuing testing of that build.
Every reason for rejecting a build beyond this is reached by agreement. For
example, if you set a build verification test and the program fails it, the
agreement in your company might be to reject the program from testing. Some
BVTs are designed to include relatively few tests, and those of core functionality.
Failure of any of these tests might reflect fundamental instability. However,
several test groups include a lot of additional tests, and failure of these might not
be grounds for rejecting a build.
In some companies, there are firm entry criteria to testing. Many companies pay lip service to entry criteria but start testing the code whether the entry criteria are
met or not. Neither of these is right or wrong--it's the culture of the company. Be
sure of your corporate culture before rejecting a build.

Answer4:
Generally a company will have set some sort of minimum goals/criteria that a build needs to satisfy; if it satisfies them it can be accepted, else it has to be rejected.
For example:
- Nil high-priority bugs
- At most 2 medium-priority bugs
- The sanity test (minimum acceptance and basic acceptance) should pass
- The reason for the new build (say a change for a specific case) should pass
- You must be able to proceed (no non-testability), along with anything else relating to the new build or the product
If the above criteria are not met, the build can be rejected.

What is software testing?

Software testing is more than just error detection.

Testing software is operating the software under controlled conditions to (1) verify that it behaves “as specified”, (2) detect errors, and (3) validate that what has been specified is what the user actually wanted.
Verification is the checking or testing of items, including software, for
conformance and consistency by evaluating the results against pre-specified
requirements. [Verification: Are we building the system right?]
Error Detection: Testing should intentionally attempt to make things go wrong to
determine if things happen when they shouldn’t or things don’t happen when they
should.
Validation looks at the system correctness – i.e. is the process of checking that
what has been specified is what the user actually wanted. [Validation: Are we
building the right system?]
In other words, validation checks to see if we are building what the customer
wants/needs, and verification checks to see if we are building that system
correctly. Both verification and validation are necessary, but different components
of any testing activity.

The definition of testing according to the ANSI/IEEE 1059 standard is that testing
is the process of analyzing a software item to detect the differences between
existing and required conditions (that is defects/errors/bugs) and to evaluate the
features of the software item.

Remember: The purpose of testing is verification, validation and error detection in order to find problems – and the purpose of finding those problems is to get them fixed.

What is the testing lifecycle?


There is no standard, but it consists of:
Test Planning (Test Strategy, Test Plan(s), Test Bed Creation)
Test Development (Test Procedures, Test Scenarios, Test Cases)
Test Execution
Result Analysis (compare Expected to Actual results)
Defect Tracking
Reporting

How to validate data?


I assume that you are doing ETL (extract, transform, load) and cleaning. If my assumption is correct, then:
1. you are building a data warehouse / data mining solution;
2. you are asking the right question in the wrong place.
What is quality?
Quality software is software that is reasonably bug-free, delivered on time and
within budget, meets requirements and expectations and is maintainable.
However, quality is a subjective term. Quality depends on who the customer is
and their overall influence in the scheme of things. Customers of a software
development project include end-users, customer acceptance test engineers,
testers, customer contract officers, customer management, the development
organization's management, test engineers, testers, salespeople, software
engineers, stockholders and accountants. Each type of customer will have his or
her own slant on quality. The accounting department might define quality in terms
of profits, while an end-user might define quality as user friendly and bug free.

What is Benchmark?
How is it linked with the SDLC (Software Development Life Cycle), or are SDLC and benchmarks two unrelated things?
What are the components of a benchmark?
Where does a benchmark fit in software testing?
A Benchmark is a standard to measure against. If you benchmark an application,
all future application changes will be tested and compared against the
benchmarked application.
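As a small illustration (a sketch only; run_report and the baseline figure are hypothetical), benchmarking an operation and comparing future builds against the recorded figure might look like this:

import time

def run_report():
    """Hypothetical operation being benchmarked, e.g. generating a report."""
    total = 0
    for i in range(1_000_000):
        total += i * i
    return total

def benchmark(func, runs: int = 5) -> float:
    """Return the average wall-clock time of func over several runs."""
    timings = []
    for _ in range(runs):
        start = time.perf_counter()
        func()
        timings.append(time.perf_counter() - start)
    return sum(timings) / len(timings)

BASELINE_SECONDS = 0.25  # figure recorded from an earlier, accepted (benchmarked) build

if __name__ == "__main__":
    average = benchmark(run_report)
    print(f"average: {average:.3f}s (baseline: {BASELINE_SECONDS}s)")
    # A future build noticeably slower than the benchmark (here, by more than 20%) is flagged.
    assert average <= BASELINE_SECONDS * 1.2, "performance regressed against the benchmark"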

Which of the following Statements about generating test cases is false?


1. Test cases may contain multiple valid conditions
2. Test cases may contain multiple invalid conditions
3. Test cases may contain both valid and invalid conditions
4. Test cases may contain more than 1 step.
5. Test cases should contain expected results.

Answer1:
All the conditions mentioned are valid; not a single one of the statements can be said to be false.
Here, a condition means the input type or situation (some may call it valid or invalid, positive or negative).
A single test case can also contain both input types, with the final result verified at the end (when the test case is executed it obviously should not produce the required result, as one of the input conditions is invalid); this usually happens while writing scenario-based test cases.
For example, consider a web-based registration form in which the input data for some fields is positive and for some fields is negative (in a scenario-based test case). Such a form can be tested by generating various scenarios and combinations. The final result can be verified against the actual result, and the registration should not be carried out successfully (as one or more input types are invalid) when this test case is executed.
Writing test cases also depends on the number of descriptive fields the tester has in the test case template: the more elaborate the template, the easier it is to write test cases and generate scenarios. So writing test cases depends on the in-depth thinking of the tester, and there are no predefined or hard-coded norms for writing a test case.
This is according to my understanding of testing and test case writing (for many applications I have written many positive and negative conditions in a single test case and verified different scenarios by generating such test cases).
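To make this concrete, here is a minimal sketch of a scenario-based, data-driven test; the register function and its field rules are hypothetical, and the expectation is that registration succeeds only when every field is valid:

import unittest

def register(username: str, email: str, age: int) -> bool:
    """Hypothetical registration logic: succeeds only if all fields are valid."""
    return bool(username) and "@" in email and 18 <= age <= 120

class RegistrationScenarioTests(unittest.TestCase):
    def test_mixed_valid_and_invalid_conditions(self):
        # Each scenario mixes valid and invalid field values; the expected
        # result is success only when every field is valid.
        scenarios = [
            ("alice", "alice@example.com", 30, True),   # all fields valid
            ("",      "alice@example.com", 30, False),  # invalid username, rest valid
            ("alice", "not-an-email",      30, False),  # invalid email, rest valid
            ("alice", "alice@example.com", 17, False),  # invalid age, rest valid
        ]
        for username, email, age, expected in scenarios:
            with self.subTest(username=username, email=email, age=age):
                self.assertEqual(register(username, email, age), expected)

if __name__ == "__main__":
    unittest.main()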

Answer2:
The answer to this question is 3: "Test cases may contain both valid and invalid conditions" is the false statement.
There is no restriction on a test case having multiple steps, or more than one valid or more than one invalid condition. But a test case, whether it is a feature, unit-level or end-to-end test case, cannot contain both valid and invalid conditions in a single unit test case, because if that happened, the concept of one test case producing one result would be diluted and would lose its meaning.

What is “Quality Assurance”?


“Quality Assurance” measures the quality of processes used to create a quality
product.
Software Quality Assurance (‘SQA’ or ‘QA’) is the process of monitoring and
improving all activities associated with software development, from requirements
gathering, design and reviews to coding, testing and implementation.
It involves the entire software development process - monitoring and improving
the process, making sure that any agreed-upon standards and procedures are
followed, and ensuring that problems are found and dealt with, at the earliest
possible stage. Unlike testing, which is mainly a ‘detection’ process, QA is
‘preventative’ in that it aims to ensure quality in the methods & processes – and
therefore reduce the prevalence of errors in the software.
Organizations vary considerably in how they assign responsibility for QA and
testing. Sometimes they’re the combined responsibility of one group or individual.
Also common are project teams that include a mix of testers and developers who
work closely together, with overall QA processes monitored by project managers
or quality managers.

Quality Assurance and Software Development


Quality Assurance and development of a product are parallel activities. Complete
QA includes reviews of the development methods and standards, reviews of all
the documentation (not just for standardization but for verification and clarity of
the contents also). Overall Quality Assurance processes also include code
validation.
A note about quality assurance: The role of quality assurance is a superset of
testing. Its mission is to help minimize the risk of project failure. QA people aim to
understand the causes of project failure (which includes software errors as an
aspect) and help the team prevent, detect, and correct the problems. Often test
teams are referred to as QA Teams, perhaps acknowledging that testers should
consider broader QA issues as well as testing.

What should be considered when testing a mobile application with the black box technique?
Answer1:
Not sure how your device/server is to operate, so mold these ideas to fit your
app. Some highlights are:
Range testing: Ensure that you can reconnect when leaving and returning back
into range.
Port/IP/firewall testing - change ports and IPs to ensure that you can connect and disconnect; modify the firewall to shut off the connection.
Multiple devices - make sure that a user receives his messages when other devices are connected to the same IP/port. Your app should have a method to determine which device/user sent a message and return the reply only to that one; this should be in the message string sent and received, unless you have conferencing capabilities within the application.
Cycle the power of the server and watch the mobile unit reconnect automatically.
Have the mobile unit send a message and then power off the unit; when it powers back on and reconnects, ensure that the message is delivered to the mobile unit.

Answer2:
It is not clearly mentioned which area of the mobile application you are testing. Whether it is a simple SMS application or a WAP application, you need to specify more details. If you are working with WAP, you can download simulators from the net and start testing over them.

What is the general testing process?


The general testing process is the creation of a test strategy (which sometimes
includes the creation of test cases), creation of a test plan/design (which usually
includes test cases and test procedures) and the execution of tests. Test data are
inputs that have been devised to test the system.
Test cases are an input and output specification plus a statement of the function under test.
Test data can be generated automatically (simulated) or can be real (live).
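As a tiny illustration (a sketch only; the function name, fields and sample values are arbitrary), a test case can be recorded as an input/output specification plus the function under test, and test data can be either simulated (generated) or real (live):

import random

# A test case: inputs, expected output, and the function under test.
test_case = {
    "function_under_test": "add",
    "inputs": {"a": 2, "b": 3},
    "expected_output": 5,
}

# Simulated (generated) test data versus real (live) test data.
simulated_data = [(random.randint(-100, 100), random.randint(-100, 100)) for _ in range(10)]
live_data = [(13, 29), (250, -7)]  # e.g. values captured from production use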

The stages in the testing process are as follows:


1. Unit testing: (Code Oriented)
Individual components are tested to ensure that they operate correctly. Each
component is tested independently, without other system components.

2. Module testing:
A module is a collection of dependent components such as an object class, an
abstract data type or some looser collection of procedures and functions. A
module encapsulates related components so it can be tested without other system
modules.

3. Sub-system testing: (Integration Testing) (Design Oriented)
This phase involves testing collections of modules which have been integrated into sub-systems. Sub-systems may be independently designed and implemented. The most common problems which arise in large software systems are sub-system interface mismatches. The sub-system test process should therefore concentrate on the detection of interface errors by rigorously exercising these interfaces.

4. System testing:
The sub-systems are integrated to make up the entire system. The testing
process is concerned with finding errors that result from unanticipated interactions
between sub-systems and system components. It is also concerned with
validating that the system meets its functional and non-functional requirements.

5. Acceptance testing:
This is the final stage in the testing process before the system is accepted for
operational use. The system is tested with data supplied by the system client
rather than simulated test data. Acceptance testing may reveal errors and
omissions in the system requirements definition (user-oriented) because real data exercises the system in different ways from the test data. Acceptance testing may also reveal requirements problems where the system facilities do not really meet the users' needs (functional) or the system performance (non-functional) is
unacceptable.

Acceptance testing is sometimes called alpha testing. Bespoke systems are developed for a single client. The alpha testing process continues until the system developer and the client agree that the delivered system is an acceptable implementation of the system requirements.
When a system is to be marketed as a software product, a testing process called
beta testing is often used.

Beta testing involves delivering a system to a number of potential customers who agree to use that system. They report problems to the system developers. This exposes the product to real use and detects errors that may not have been anticipated by the system builders. After this feedback, the system is modified and either released for further beta testing or for general sale.

What are the normal practices of QA specialists with respect to software?

These are the normal practices of QA specialists with respect to software [note: these are all QC activities, not QA activities]:
1-Design review meetings with the system analyst and, if possible, participation in requirements gathering
2-Analysing the requirements and the design, and tracing the design back to the requirements
3-Test planning
4-Test case identification using different techniques (with respect to web-based applications and desktop applications)
5-Test case writing (this part is assigned to the test engineers)
6-Test case execution (this part is assigned to the test engineers)
7-Bug reporting (this part is assigned to the test engineers)
8-Bug review and analysis, so that future bugs can be prevented by defining standards

From low-level to high level (Testing in Stages)


Except for small programs, systems should not be tested as a single unit. Large
systems are built out of sub-systems, which are built out of modules that are
composed of procedures and functions. The testing process should therefore
proceed in stages where testing is carried out incrementally in conjunction with
system implementation.

The most widely used testing process consists of five stages: unit testing, module testing, sub-system testing, system testing, and user (acceptance) testing. Unit and module testing form component testing; sub-system and system testing form integrated testing. The earlier stages rely on white box testing techniques, i.e. verification (process oriented), where tests are derived from knowledge of the program's structure and implementation. The later stages rely on black box testing techniques, i.e. validation (product oriented), where tests are derived from the program specification.

However, as defects are discovered at any one stage, they require program
modifications to correct them and this may require other stages in the testing
process to be repeated.
Errors in program components, say, may come to light at a later stage of the
testing process. The process is therefore an iterative one with information being
fed back from later stages to earlier parts of the process.

How to test and get the difference between two images in the same window?

Answer1:
How are you doing your comparison? If you are doing it manually, then you should be
able to see any major differences. If you are using an automated tool, then there is
usually a comparison facility in the tool to do that.

Answer2:
Jasper Software is an open-source utility which can be compiled with C++ and has an imgcmp function which compares JPEG files in very good detail, as long as they have the same dimensions and number of components.

Answer3:
Rational has a comparison tool that may be used. I'm sure Mercury has the same
tool.

Answer4:
The key question is whether we need a bit-by-bit exact comparison, which the current tools are good at, or an equivalency comparison: which differences between these images should not count as differences? Near-match comparison has been the subject of a lot of research in printer testing, including an M.Sc. thesis at Florida Tech. It's a tough problem.
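For a bit-by-bit exact comparison of two same-sized images, a minimal sketch using the Pillow library (one possible choice; any image library with pixel access would do) could look like the following. As noted above, near-match or equivalency comparison needs considerably more work:

from PIL import Image, ImageChops

def images_identical(path_a: str, path_b: str) -> bool:
    """Return True if the two images have identical pixel content."""
    img_a = Image.open(path_a).convert("RGB")
    img_b = Image.open(path_b).convert("RGB")
    if img_a.size != img_b.size:
        return False
    diff = ImageChops.difference(img_a, img_b)
    # getbbox() returns None when every pixel of the difference image is zero.
    return diff.getbbox() is None

def save_difference(path_a: str, path_b: str, out_path: str) -> None:
    """Save the pixel-wise difference so a tester can see where the images diverge."""
    diff = ImageChops.difference(
        Image.open(path_a).convert("RGB"),
        Image.open(path_b).convert("RGB"),
    )
    diff.save(out_path)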
