Testing of Software
Control of Change (Assess the need for change, document the change)
Answer1:
Dynamic testing: Requires the program to be executed. The program is run on
some test cases, and the results of the program's performance are examined to
check whether the program operated as expected.
Static testing: Does not involve program execution. Examples include compiler
tasks such as syntax and type checking, symbolic execution, program proving,
data flow analysis, and control flow analysis.
Answer2:
Static Testing: Verification performed without executing the system code
Dynamic Testing: Verification and validation performed by executing the system
code
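The distinction above can be sketched in a few lines of Python: the dynamic check actually executes the code under test, while the static check only inspects its syntax tree. The `divide` function and the division-counting heuristic are illustrative assumptions, not part of the original answers.

```python
import ast

def run_dynamic_test(source):
    """Dynamic testing: execute the code and check its observable behaviour."""
    namespace = {}
    exec(source, namespace)  # the program actually runs
    return namespace["divide"](10, 2)

def run_static_check(source):
    """Static testing: analyse the source without executing it, here by
    counting division operators a reviewer should inspect for a possible
    ZeroDivisionError (a toy stand-in for data/control flow analysis)."""
    tree = ast.parse(source)
    return sum(isinstance(node, ast.Div) for node in ast.walk(tree))

SOURCE = "def divide(a, b):\n    return a / b\n"
```

Here `run_dynamic_test(SOURCE)` returns 5.0 by running the program, while `run_static_check(SOURCE)` flags one division without ever executing it.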
5. Software Testing
Software testing is a critical component of the software engineering process. It is
an element of software quality assurance and can be described as a process of
running a program in such a manner as to uncover any errors. This process, while
seen by some as tedious, tiresome and unnecessary, plays a vital role in software
development.
Answer2:
There could be several reasons for not catching a showstopper in the first or
second build/rev. A found defect could mask a second or third defect, either
functionally or psychologically. Functionally, the thread or path to the second
defect could have been broken or rerouted to another path; psychologically, the
tester who found the first defect knows the app must go back and be rewritten,
so he/she proceeds halfheartedly and misses the second one. I've seen both
cases. It is difficult to keep testing a known defective app. The testers seem to
lose interest, knowing that whatever effort they put into testing it will have to be
redone on the next iteration. This will test your mettle as a lead to get them to
follow through and maintain a professional attitude.
Answer3:
The best way is to prevent bugs in the first place. Also, testing doesn't fix or
prevent bugs; it just provides information. Applying this information to your
situation is the important part.
The other thing that you may be encountering is that testing tends to be
exploratory in nature. You have stated that these are existing bugs, but not stated
whether tests already existed for these bugs.
Bugs in early cycles inhibit exploration. Additionally, a tester's understanding of
the application and its relationships and interactions will improve with time and
thus more 'interesting' bugs tend to be found in later iterations as testers expand
their exploration (i.e. think of new tests).
No matter how much time you have to read through the documents and inspect
artifacts, seeing the actual application is going to trigger new thoughts and thus
introduce previously unthought-of tests. Exposure to the application will trigger
new thoughts as well; thus the longer your testing goes, the more new tests (and
potential bugs) are going to be found. Iterative development is a good way to
counter this, as testers get to see something physical earlier, but this issue will
always exist to some degree, as the passing of time and exploration of the
application allow new tests to be thought of at inconvenient moments.
Answer1:
Are you the programmer who has to fix them, the project manager who has to
supervise the programmers, the change control team that decides which areas are
too high risk to impact, the stakeholder-user whose organization pays for the
damage caused by the defects or the tester?
The tester does not choose which defects to fix.
The tester helps ensure that the people who do choose, make a well-informed
choice.
Testers should provide data to indicate the *severity* of bugs, but the project
manager or the development team do the prioritization.
When I say "indicate the severity", I don't just mean writing S3 on a piece of
paper. Test groups often do follow-up tests to assess how serious a failure is and
how broad the range of failure-triggering conditions is.
Priority depends on a wide range of factors, including code-change risk,
difficulty/time to complete the change, which stakeholders are affected by the
bug, the other commitments being handled by the person most knowledgeable
about fixing a certain bug, etc. Many of these factors are not within the
knowledge of most test groups.
Answer2:
As testers we don't fix the defects, but we surely can prioritize them once they
are detected. In our org we assign a severity level to each defect depending on
its influence on other parts of the product. If a defect doesn't allow you to go
ahead and test the product, it is a critical one, so it has to be fixed ASAP. We
have 5 levels:
1-critical
2-High
3-Medium
4-Low
5-Cosmetic
Developers can group all the critical ones and fix them before any other defect.
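The five-level scheme above can be sketched as a small triage helper; the defect records and field names below are hypothetical.

```python
# Severity levels as described above: 1-Critical ... 5-Cosmetic.
SEVERITY_NAMES = {1: "Critical", 2: "High", 3: "Medium", 4: "Low", 5: "Cosmetic"}

# Hypothetical defect records for illustration.
DEFECTS = [
    {"id": "D-101", "summary": "App crashes on login", "severity": 1},
    {"id": "D-102", "summary": "Typo on About page",   "severity": 5},
    {"id": "D-103", "summary": "Report totals wrong",  "severity": 2},
]

def fix_queue(defects):
    """Order defects so blocking (Critical) issues come first."""
    return sorted(defects, key=lambda d: d["severity"])

def critical_only(defects):
    """Group the Critical defects that must be fixed before testing continues."""
    return [d for d in defects if d["severity"] == 1]
```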
Answer3:
Priority/Severity   P1   P2   P3
S1
S2
S3

Generally, defects are classified in the priority/severity grid shown above. Every
organization / software product has some target for fixing the bugs.
Example -
P1S1 -> 90% of the bugs reported should be fixed.
P3S3 -> 5% of the bugs reported may be fixed. The rest are taken up in later
service packs or versions.
Thus the organization should decide its targets and act accordingly.
Basically, bug-free software is not possible.
Answer4:
Ideally, the customer should assign priorities to their requirements. They tend to
resist this. On a large, multi-year project I just completed, I would often (in the
lack of customer guidelines) rely on my knowledge of the application and the
potential downstream impacts in the modeled business process to prioritize
defects.
If the customer doesn't, then I feel the test organization should, based on risk or
other similar considerations.
Why do we need to do unit testing if all the features are being tested in
system testing? What extra things are tested in unit testing that cannot
be tested in system testing?
Answer1:
Assume that you're thinking client-server or web. If you test the application on
the front end only, you can see whether the data appears to be stored and
retrieved correctly, but you can't see whether the servers are in an error state or
not. Many server processes are monitored by another process; if they crash, they
are restarted. You can't see that without looking at the servers directly.
The data may not be stored correctly either, but the front end may have cached
data lying around and will use that instead. The least you should be doing is
verifying the data as stored in the database.
It is easier to test data being transferred on the boundaries and see the results of
those transactions when you can set the data in a driver.
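A minimal sketch of that idea, assuming a hypothetical booking service with an in-memory store standing in for the real database: the test acts as a driver and verifies the stored data directly rather than trusting the front-end cache.

```python
class FakeDatabase:
    """In-memory stand-in for the back-end data store."""
    def __init__(self):
        self.rows = {}

    def insert(self, key, record):
        self.rows[key] = record

class BookingService:
    """Hypothetical front-end facade that also keeps a cache of what it wrote."""
    def __init__(self, db):
        self.db = db
        self.cache = {}

    def book(self, ticket_id, destination):
        record = {"destination": destination}
        self.cache[ticket_id] = record      # what the UI would display
        self.db.insert(ticket_id, record)   # what was actually persisted

def test_booking_persists():
    db = FakeDatabase()
    service = BookingService(db)
    service.book("T1", "Boston")
    # Verify the back end directly, not just the front-end cache.
    assert db.rows["T1"]["destination"] == "Boston"
```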
Answer2:
Back-end testing: Basically, the need for this testing depends on the project.
Say your project is a ticket booking system. On the front end you are provided
with an interface where you can book a ticket by giving the appropriate details
(like where you want to go, and when). It will have a data storage system (a
database, an Excel sheet, etc.), which is the back end for storing the details
entered by the user.
After submitting the details, you might be given a correct acknowledgement, but
in the back end the details might not be updated correctly in the database
because of faulty logic. That would cause a major problem.
Regarding unit-level testing and system testing: unit-level testing covers the
basic checks of whether the application works with the basic requirements. It is
done by developers before delivering to QA. In system testing, in addition to the
unit checks, you perform all the checks (all the integrated checks that are
required). Basically, this is carried out by testers.
Answer3:
Ever heard about the divide-and-conquer tactic? It is the same method applied to
back-end and front-end testing.
A good back-end test will help minimize the burden of front-end testing.
Another point is that you can test the back end while developing the front end,
so true parallelism can be achieved.
Back-end testing has another problem which must be addressed before the front
end can use it: concurrency. Building a scenario to test concurrency is a
formidable task.
A complex thing is hard to test. Creating such scenarios will make you unsure
which tests you have already done and which you haven't. What we need are
effective methods to test our application, and the simplest method I know is
divide and conquer.
Answer4:
A wide range of errors are hard to see if you don't see the code. For example,
there are many optimizations in programs that treat special cases. If you don't
see the special case, you don't test the optimization. Also, a substantial portion of
most programs is error handling. Most programmers anticipate more errors than
most testers.
Programmers find and fix the vast majority of their own bugs. This is cheaper,
because there is no communication overhead, faster because there is no delay
from tester-reporter to programmer, and more effective because the programmer
is likely to fix what she finds, and she is likely to know the cause of the problems
she sees. Also, the rapid feedback gives the programmer information about the
weaknesses in her programming that can help her write better code.
Many tests -- most boundary tests -- are done at the system level primarily
because we don't trust that they were done at the unit level. They are wasteful
and tedious at the system level. I'd rather see them properly done and properly
automated in a suite of programmer tests.
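As a sketch of boundary tests "properly done and properly automated" at the unit level, assume a hypothetical pricing rule with a discount boundary at 10 items; the tests probe just below, exactly at, and just above that boundary.

```python
def unit_price(quantity):
    """Hypothetical rule: orders of 10 or more items get a 10% discount."""
    base = 100
    return base * 0.9 if quantity >= 10 else base

def test_discount_boundary():
    assert unit_price(9) == 100     # just below the boundary
    assert unit_price(10) == 90.0   # exactly at the boundary
    assert unit_price(11) == 90.0   # just above the boundary
```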
20. What is retesting?
Answer1:
Retesting is usually equated with regression testing (see above), but it is
different in that it follows a specific fix (such as a bug fix) and is very narrow in
focus, as opposed to testing the entire application again in a regression test. A
product should never be released after a change has been applied to the code
with only retesting of the bug fix and no regression test.
Answer2:
1. Re-testing is testing for a specific bug after it has been fixed (the one given
by your definition).
2. Re-testing can also be done for a bug which was raised by QA but could not
be found or confirmed by Development and has been rejected. QA does a re-test
to make sure the bug still exists and assigns it back to them.
When the entire project is tested and the client has some doubts about the
quality of testing, re-testing can be called for. It can also be testing the same
application again for better quality.
Answer3:
Regression testing is the selective retesting of a system that has been modified,
to ensure that any bugs have been fixed, that no other previously working
functions have failed as a result of the repairs, and that newly added features
have not created problems with previous versions of the software. It is also
referred to as verification testing.
It is important to determine whether, in a given set of circumstances, a
particular series of tests has been failed. The supplier may want to submit the
software for re-testing. The contract should deal with the parameters for retests,
including (1) will test programs which are doomed to failure be allowed to finish
early, or must they be completed in their entirety? (2) when can, or must, the
supplier submit his software for retesting? and (3) how many times can the
supplier fail tests and submit software for retesting - is this based on time spent,
or the number of attempts? A well-drawn contract will grant the customer
options in the event of failure of acceptance tests, and these options may vary
depending on how many attempts the supplier has made to achieve acceptance.
So the conclusion is that retesting is more or less regression testing. More
appropriately, retesting is a part of regression testing.
Answer4:
Re-testing is simply executing the test plan another time. The client may request
a re-test for any reason; most likely the testers did not properly execute the
scripts, the test results were poorly documented, or the client is not comfortable
with the results.
I've performed re-tests when the developer inserted unauthorized code changes,
or did not document changes.
Regression testing is the execution of test cases "not impacted" by the specific
project. I am currently working on testing of a system with poor system
documentation (and no user documentation) so our regression testing must be
extensive.
Answer5:
* QA gets a bug fix and has to verify that the bug is fixed. You might want to
check a few "gut feel" things and get away with calling it retesting, but not the
entire function / module / product.
* Development refuses a bug on the basis of it being "Non-Reproducible"; then
retesting, preferably in the presence of the developer, is needed.
Any recommendation for estimating how many bugs the customer will
find before the gold release?
Answer1:
If you take the total number of bugs in the application and subtract the number of
bugs you found, the difference will be the maximum number of bugs the customer
can find.
Seriously, I doubt you will find any sort of calculation or formula that can answer
your question with much accuracy. If you could reference a previous application
release, it might give you a rough idea. The best thing to do is ensure your test
coverage is as good as you can make it, then hope you've found the ones the
customer might find.
Remember Software testing is Risk Management!
Answer2:
For doing estimation:
1.) Find out the coverage during testing of your software, then estimate keeping
in mind the 80-20 principle.
2.) You can also look at the depth of your test cases, e.g. how much unit-level
testing and how much life-cycle testing you have performed (most of the bugs
customers find come from real-life use of the software).
3.) You can also refer to the defect density from earlier releases of the same
product line.
By doing these evaluations you can estimate the probability of bugs to an
approximately optimum degree.
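Point 3 above can be turned into a rough back-of-the-envelope calculation, assuming you know the defect density of an earlier release; all numbers and the clamping choice are illustrative.

```python
def estimate_remaining(prev_defects, prev_kloc, new_kloc, found_so_far):
    """Scale the previous release's defect density (defects per KLOC)
    to the new release's size, then subtract what testing has already
    found. Negative estimates are clamped to zero."""
    density = prev_defects / prev_kloc
    expected_total = density * new_kloc
    return max(0, round(expected_total - found_so_far))
```

For example, 120 defects in a 40 KLOC release (density 3/KLOC) scaled to a 50 KLOC release suggests about 150 defects; if testing has already found 130, roughly 20 may remain for the customer to find.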
Answer3:
You can look at the mapping of customer issues from a previous release (if you
have the same product line) to the current release. This is the best way of
estimating for the gold release of a migration of any product. Secondly, until the
gold release most issues come from various combinations of installation testing:
cross-platform, i18n issues, customization, upgrade and migration.
So these can be taken as parameters, and then the estimation can be completed.
When the build comes to the QA team, what are the parameters to be
taken into consideration to reject the build upfront, without committing
to testing?
Answer1:
Agree with R&D on a set of tests such that if one fails you can reject the build. I
usually have some build verification tests that just make sure the build is stable
and the major functionality is working.
Then, if one of those tests fails, you can reject the build.
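A build-verification gate like the one described can be sketched as a short list of smoke checks, with the build rejected if any one fails; the checks themselves are hypothetical placeholders.

```python
def build_installs():
    return True   # e.g. the installer ran to completion

def app_launches():
    return True   # e.g. the main screen appeared

def login_works():
    return True   # e.g. a known test account can sign in

BVT_CHECKS = [build_installs, app_launches, login_works]

def accept_build(checks=BVT_CHECKS):
    """Accept the build only if every smoke check passes;
    a single failure is grounds for rejection."""
    return all(check() for check in checks)
```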
Answer2:
The only way to legitimately reject a build is if the entrance criteria have not been
met. That means that the entrance criteria to the test phase have been defined
and agreed upon up front. This should be standard for all builds for all products.
Entrance criteria could include:
- Turn-over documentation is complete
- All unit testing has been successfully completed and U/T cases are documented
in turn-over
- All expected software components have been turned-over (staged)
- All walkthroughs and inspections are complete
- Change requests have been updated to correct status
- Configuration Management and build information is provided, and correct, in
turn-over
The only way we could really reject a build without any testing, would be a failure
of the turn-over procedure. There may, but shouldn't be, politics involved. The
only way the test phase can proceed is for the test team to have all components
required to perform successful testing. You will have to define entrance (and exit)
criteria for each phase of the SDLC. This is an effort to be undertaken together
by the whole development team. Development's entrance criteria would include
signed requirements, the HLD doc, etc. Having these criteria pre-established sets
everyone up for success.
Answer3:
The primary reason to reject a build is that it is untestable, or that the testing
would be considered invalid.
For example, suppose someone gave you a "bad build" in which several of the
wrong files had been loaded. Once you know it contains the wrong versions, most
groups think there is no point continuing testing of that build.
Every reason for rejecting a build beyond this is reached by agreement. For
example, if you set a build verification test and the program fails it, the
agreement in your company might be to reject the program from testing. Some
BVTs are designed to include relatively few tests, and those of core functionality.
Failure of any of these tests might reflect fundamental instability. However,
several test groups include a lot of additional tests, and failure of these might not
be grounds for rejecting a build.
In some companies, there are firm entry criteria to testing. Many companies pay
lipservice to entry criteria but start testing the code whether the entry criteria are
met or not. Neither of these is right or wrong--it's the culture of the company. Be
sure of your corporate culture before rejecting a build.
Answer4:
Generally a company would have set some sort of minimum goals/criteria that a
build needs to satisfy; if it satisfies them, it can be accepted, else it has to be
rejected.
For example:
- Nil high-priority bugs
- At most 2 medium-priority bugs
- The sanity test (minimum acceptance / basic acceptance) should pass
- The reason for the new build (say, a change to a specific case) should pass
- No blockers to proceeding, such as non-testability or anything else related to
the new build or the product
If the above criteria don't pass, then the build could be rejected.
The definition of testing according to the ANSI/IEEE 1059 standard is that testing
is the process of analyzing a software item to detect the differences between
existing and required conditions (that is defects/errors/bugs) and to evaluate the
features of the software item.
What is quality?
Quality software is software that is reasonably bug-free, delivered on time and
within budget, meets requirements and expectations and is maintainable.
However, quality is a subjective term. Quality depends on who the customer is
and their overall influence in the scheme of things. Customers of a software
development project include end-users, customer acceptance test engineers,
testers, customer contract officers, customer management, the development
organization's management, test engineers, testers, salespeople, software
engineers, stockholders and accountants. Each type of customer will have his or
her own slant on quality. The accounting department might define quality in terms
of profits, while an end-user might define quality as user friendly and bug free.
What is a Benchmark?
How is it linked with the SDLC (Software Development Life Cycle)? Or are SDLC
and benchmarks two unrelated things?
What are the components of a benchmark?
Where does a benchmark fit in software testing?
A Benchmark is a standard to measure against. If you benchmark an application,
all future application changes will be tested and compared against the
benchmarked application.
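That comparison step can be sketched as a simple tolerance check, assuming a recorded baseline measurement (e.g. a response time); the 10% tolerance is an arbitrary illustrative choice.

```python
def within_benchmark(baseline, measured, tolerance=0.10):
    """Return True if the new measurement has not regressed more than
    `tolerance` (as a fraction) beyond the benchmarked baseline."""
    return measured <= baseline * (1 + tolerance)
```

A future build measuring 2.1 s against a 2.0 s benchmark would pass under a 10% tolerance, while 2.5 s would fail.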
Answer1:
All the conditions mentioned are valid, and not a single condition can be stated
as false.
Here, I think, "condition" means the input type or situation (some may call it
valid or invalid, positive or negative).
A single test case can also contain both input types, with the final result verified
at the end (it obviously should not produce the required result, since one of the
input conditions is invalid when the test case is executed); this usually happens
while writing scenario-based test cases.
For example, consider a web-based registration form in which the input data for
some fields is positive and for some fields negative (in a scenario-based test
case). Such a screen can be tested by generating various scenarios and
combinations. The final result can be verified against the actual result, and the
registration should not be carried out successfully (as one or more input types
are invalid) when this test case is executed.
Writing a test case also depends on the number of descriptive fields the tester
has in the test case template: the more elaborate the template, the easier it is
to write test cases and generate scenarios. So writing test cases depends
entirely on the in-depth thinking of the tester, and there are no predefined or
hard-coded norms for it.
This is according to my understanding of testing and test case writing (for many
applications, I have written many positive and negative conditions in a single
test case and verified different scenarios by generating such test cases).
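The registration-form scenario can be sketched as one test case that mixes a valid and an invalid condition and then checks that registration fails; the validator and its rules are hypothetical.

```python
def register(name, age):
    """Hypothetical validator: name must be non-empty, age between 18 and 120."""
    errors = []
    if not name:
        errors.append("name required")
    if not (18 <= age <= 120):
        errors.append("age out of range")
    return {"ok": not errors, "errors": errors}

def test_mixed_conditions():
    # Valid name (positive condition) combined with invalid age (negative
    # condition): the overall registration must not succeed.
    result = register("Alice", 12)
    assert result["ok"] is False
    assert "age out of range" in result["errors"]
```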
Answer2:
The answer to this question will be 3: test cases may contain both valid and
invalid conditions.
There is no restriction on a test case having multiple steps or more than one
valid or invalid condition. However, a test case, whether it is a feature, unit-level,
or end-to-end test case, cannot contain both a valid and an invalid condition at
the unit level, because then the concept of one test case per result would be
diluted and hence have no meaning.
Answer2:
It is not clearly mentioned which area of the mobile application you are testing.
Whether it is a simple SMS application or a WAP application, you need to specify
more details. If you are working with WAP, then you can download simulators
from the net and start testing over them.
2. Module testing:
A module is a collection of dependent components such as an object class, an
abstract data type or some looser collection of procedures and functions. A
module encapsulates related components so it can be tested without other system
modules.
4. System testing:
The sub-systems are integrated to make up the entire system. The testing
process is concerned with finding errors that result from unanticipated interactions
between sub-systems and system components. It is also concerned with
validating that the system meets its functional and non-functional requirements.
5. Acceptance testing:
This is the final stage in the testing process before the system is accepted for
operational use. The system is tested with data supplied by the system client
rather than simulated test data. Acceptance testing may reveal errors and
omissions in the system requirements definition (user-oriented), because real
data exercises the system in different ways from the test data. Acceptance
testing may also reveal requirements problems where the system facilities do
not really meet the users' needs (functional) or the system performance
(non-functional) is unacceptable.
[Diagram: stages of testing (Unit Testing, Component Testing, Module Testing,
Sub-system Testing, Integration Testing, System Testing), classified as White
Box Testing Techniques / Verification (process-oriented): tests derived from
knowledge of the program's structure and implementation.]
However, as defects are discovered at any one stage, they require program
modifications to correct them and this may require other stages in the testing
process to be repeated.
Errors in program components, say, may come to light at a later stage of the
testing process. The process is therefore an iterative one, with information being
fed back from later stages to earlier parts of the process.
How do you test and get the difference between two images in the same
window?
Answer1:
How are you doing your comparison? If you are doing it manually, then you should be
able to see any major differences. If you are using an automated tool, then there is
usually a comparison facility in the tool to do that.
Answer2:
Jasper Software is an open-source utility which can be compiled in C++ and has
an imgcmp function which compares JPEG files in very good detail, as long as
they have the same dimensions and number of components.
Answer3:
Rational has a comparison tool that may be used. I'm sure Mercury has the same
tool.
Answer4:
The key question is whether we need a bit-by-bit exact comparison, which the current
tools are good at, or an equivalency comparison. What differences between these
images are not differences? Near-match comparison has been the subject of a lot of
research in printer testing, including an M.Sc. thesis at Florida Tech. It's a tough
problem.
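For the exact (bit-by-bit) case, the core of such a comparison can be sketched in a few lines, with images represented as nested lists of pixel values; this is a toy stand-in for what tools like imgcmp do on decoded image data, and it assumes equal dimensions.

```python
def count_pixel_diffs(img_a, img_b):
    """Return the number of pixel positions where two images differ.
    Images are nested lists of pixel values; dimensions must match."""
    if len(img_a) != len(img_b) or any(
        len(row_a) != len(row_b) for row_a, row_b in zip(img_a, img_b)
    ):
        raise ValueError("images must have the same dimensions")
    return sum(
        p1 != p2
        for row_a, row_b in zip(img_a, img_b)
        for p1, p2 in zip(row_a, row_b)
    )
```

An equivalency comparison would go further, e.g. tolerating small per-pixel deltas, which is where the hard research problems mentioned above begin.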