
Skills required to become a Software Tester

The following skills are indispensable to becoming a good software tester. Compare your skill set against this checklist to determine whether software testing really is for you.

A good software tester should have sharp analytical skills. Analytical skills help break up a complex software system into smaller units to gain a better understanding and create corresponding test cases. Not sure whether you have good analytical skills? Refer to this link; if you can solve at least ONE problem, you have good analytical skills.

A good software tester must have strong technical skills. These include a high level of proficiency in tools like MS Office and OpenOffice, testing tools like QTP and LoadRunner, and of course a deep understanding of the application under test. These skills can be acquired through relevant training and practice. Some programming skill is an added advantage, but it is NOT a must.

A good software tester must have good verbal and written communication skills. Testing artifacts (like test cases/plans, test strategies, and bug reports) created by the software tester should be easy to read and comprehend. Dealing with developers (in case of bugs or any other issue) will require a shade of discreetness and diplomacy.

Testing at times can be a demanding job, especially during the release of code. A software tester must manage workload efficiently, have high productivity, and exhibit optimal time-management and organizational skills.

To be a good software tester you must have a GREAT attitude: an attitude to "test to break", detail orientation, and a willingness to learn and suggest process improvements. In the software industry, technologies evolve at an overwhelming speed, and a good software tester should upgrade his/her technical skills with the changing technologies. Your attitude must reflect a certain degree of independence, where you take ownership of the task allocated and complete it without much direct supervision.

To excel in any profession or job, one must have a great degree of passion for it. A software tester must have passion for his/her field. BUT how do you determine whether you have a passion for software testing if you have never tested before? Simple: TRY it out, and if software testing does not excite you, switch to something else that holds your interest.

Academic Background:
The academic background of a software tester should be in Computer Science. A B.Tech/B.E., MCA, BCA, or B.Sc. (Computers) will land you a job easily. If you do not hold any of these degrees, then you should complete a software testing certification like ISTQB or CSTE, which will help you learn the Software Development/Test Life Cycle and other testing methodologies.

Compensation of a software tester varies from company to company. The average salary range of a software tester in the US is $45,993 - $74,935; the average salary range of a software tester in India is Rs 247,315 - Rs 449,111. A software tester is also given health insurance, bonuses, gratuity, and other perks.

Typical Workday:
On any typical workday you will be busy understanding requirement documents, creating test cases, executing test cases, reporting and re-testing bugs, attending review meetings, and other team-building activities.

Career Progression:
Your career progression as a software tester (QA Analyst) in a typical CMMI Level 5 company will look like the following, though it will vary from company to company: QA Analyst (fresher) => Sr. QA Analyst (2-3 years' experience) => QA Team Coordinator (5-6 years' experience) => Test Manager (8-11 years' experience) => Senior Test Manager (14+ years' experience)

Alternate Career Tracks as a Software Tester

Once you have got your hands dirty in manual testing, you can pursue the following specializations:

Automation Testing: As an automation test engineer, you will be responsible for automating menial test case execution which could otherwise be time consuming. Tools used: IBM Rational Robot, Silk Performer, and QTP.

Performance Testing: As a performance test engineer, you will be responsible for checking application responsiveness (time taken to load, maximum load the application can handle), etc. Tools used: WebLOAD, LoadRunner.

Business Analyst: A major advantage testers have over developers is that they have end-to-end business knowledge. An obvious career progression for testers is to become a Business Analyst. As a Business Analyst you will be responsible for analyzing and assessing your company's business model and workflows, and especially how they integrate with technology. Based on your observations you will suggest and drive process improvements.

Common Myths
Myth: Software testing as a career pays less, and developers are more respected than testers.
Contrary to popular belief, software testers (better known as QA professionals) are paid and treated on par with software developers in all "aspiring" companies. A career in software testing should never be considered "second rate".

Myth: Software testing is boring.
Software testing could actually "test" your nerves, since you need to make sense of business requirements and draft test cases based on your understanding. Software testing is not boring. What is boring is doing the same set of tasks repeatedly. The key is to try new things. For that matter, have you ever spoken to a software developer with more than 3 years' experience? He will tell you how boring his job has become of late.

Software testing is a process used to identify the correctness, completeness, and quality of developed computer software. It includes a set of activities conducted with the intent of finding errors in software so that they can be corrected before the product is released to the end users. In simple words, software testing is an activity to check whether the actual results match the expected results and to ensure that the software system is defect free.

Why is testing important? On April 26, 1994, a China Airlines Airbus A300 crashed due to a software bug, killing 264 innocent people. Software bugs can potentially cause monetary and human loss, and history is full of such examples:

In 1985, Canada's Therac-25 radiation therapy machine malfunctioned due to a software bug and delivered lethal radiation doses to patients, leaving 3 people dead and critically injuring 3 others.

In April 1999, a software bug caused the failure of a $1.2 billion military satellite launch, the costliest accident in history.

In May 1996, a software bug caused the bank accounts of 823 customers of a major U.S. bank to be credited with 920 million US dollars.

As you can see, testing is important because software bugs can be expensive or even dangerous. As Paul Ehrlich puts it, "To err is human, but to really foul things up you need a computer."

Video Transcript with Key Takeaways Highlighted:

Consider a scenario where you are moving a file from Folder A to Folder B. Think of all the possible ways you can test this. Apart from the usual scenarios, you can also test the following conditions: trying to move the file when it is open;

you do not have the security rights to paste the file in Folder B; Folder B is on a shared drive and its storage capacity is full; Folder B already has a file with the same name. In fact, the list is endless.

Or suppose you have 15 input fields to test, each having 5 possible values; the number of combinations to be tested would be 5^15. If you were to test all the possible combinations, project EXECUTION TIME & COSTS would rise exponentially. Hence, one of the testing principles states that EXHAUSTIVE testing is not possible. Instead, we need an optimal amount of testing based on the risk assessment of the application.

And the million dollar question is: how do you determine this risk? To answer this, let's do an exercise. In your opinion, which operation is most likely to cause your operating system to fail? I am sure most of you would have guessed: opening 10 different applications all at the same time. So if you were testing this operating system, you would realize that defects are likely to be found in multitasking, and it needs to be tested thoroughly. This brings us to our next principle, Defect Clustering, which states that a small number of modules contain most of the defects detected. By experience you can identify such risky modules. But this approach has its own problems: if the same tests are repeated over and over again, eventually the same test cases will no longer find new bugs. This is another principle of testing, called the Pesticide Paradox. To overcome it, the test cases need to be regularly reviewed and revised, adding new and different test cases to help find more defects.

But even after all this sweat and hard work in testing, you can never claim your product is bug free. To drive home this point, let's see this video of the public launch of Windows 98. Do you think a company like MICROSOFT would not have tested their OS thoroughly and would risk their reputation just to see their OS crash during its public launch?

Hence, a testing principle states that Testing shows the presence of defects, i.e. software testing reduces the probability of undiscovered defects remaining in the software, but even if no defects are found, that is not a proof of correctness.

But what if you work extra hard, taking all precautions, and make your software product 99% bug free, yet the software does not meet the needs and requirements of the client? This leads us to our next principle, which states that Absence of Error is a Fallacy, i.e. finding and fixing defects does not help if the system built is unusable and does not fulfill the user's needs and requirements.

To fix this problem, the next principle of testing states Early Testing: testing should start as early as possible in the Software Development Life Cycle, so that any defects in the requirements or design phase are captured as well. More on this principle in a later training tutorial.

And the last principle of testing states that Testing is context dependent, which basically means that the way you test an e-commerce site will be different from the way you test a commercial off-the-shelf application.

Summary of the Seven Testing Principles

Principle 1: Testing shows presence of defects
Principle 2: Exhaustive testing is impossible
Principle 3: Early testing
Principle 4: Defect clustering
Principle 5: Pesticide paradox
Principle 6: Testing is context dependent
Principle 7: Absence of errors is a fallacy
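A quick calculation makes the exhaustive-testing principle concrete. This is an illustrative Python sketch; the field and value counts are taken from the 15-field example above, and the throughput figure is an assumption for illustration only:

```python
# Illustrative sketch: why exhaustive testing is impossible.

def total_combinations(num_fields: int, values_per_field: int) -> int:
    """Number of input combinations for a form."""
    return values_per_field ** num_fields

# 15 input fields, 5 possible values each (the example from the text)
combos = total_combinations(15, 5)
print(combos)  # 30517578125 -- over 30 billion test cases

# Even at an assumed 1,000 test executions per second, this would
# take roughly a year of continuous execution
seconds = combos / 1000
print(round(seconds / 86400))  # 353 (days)
```

The exponential growth is why risk-based selection of test cases, rather than brute force, is the practical approach.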

Suppose you are assigned a task: to develop custom software for a client. Each block below represents a step required to develop the software. Irrespective of your technical background, try to make an educated guess about the sequence of steps you will follow to achieve the task. The correct sequence would be:

Gather as much information as possible about the details and specifications of the desired software from the client. This is the Requirements Gathering stage.

Plan the programming language (like Java, PHP, .NET) and the database (like Oracle, MySQL) that would be suited for the project, along with some high-level functions and architecture. This is the Design stage.

Actually code the software. This is the Build stage.

Next, test the software to verify that it is built as per the specifications given by the client. This is the Test stage.

Once your software product is ready, you may need to make some code changes to accommodate enhancements requested by the client. This is the Maintenance stage.

All these stages constitute the waterfall method of the software development lifecycle. As you may observe, testing in this model starts only after implementation is done. But if you are working on a large project, where the systems are complex, it is easy to miss key details in the requirements phase itself. In such cases, an entirely wrong product will be delivered to the client and you will have to start afresh with the project. Or, if you manage to note the requirements correctly but make serious mistakes in the design and architecture of your software, you will have to redesign the entire software to correct the error.

Assessments of thousands of projects have shown that defects introduced during requirements and design make up close to half of the total number of defects. Also, the cost of fixing a defect increases across the development lifecycle: the earlier in the lifecycle a defect is detected, the cheaper it is to fix. As they say, "A stitch in time saves nine."

To address this concern, the V-model of testing was developed, where for every phase in the development lifecycle there is a corresponding testing phase. The left side of the model is the Software Development Life Cycle (SDLC); the right side is the Software Test Life Cycle (STLC). The entire figure looks like a V, hence the name V-model. You will find a few stages different from the waterfall model. These differences, along with the details of each testing phase, will be discussed in a later tutorial.

Apart from the V-model, there are iterative development models, where development is carried out in phases, with each phase adding functionality to the software. Each phase comprises its own independent set of development and testing activities. Good examples of development lifecycles following the iterative method are Rapid Application Development and Agile Development.

Before we close this software testing training, a few pointers. You must note that there are numerous development lifecycle models. The development model selected for a project depends on the aims and goals of that project. Testing is not a stand-alone activity, and it has to adapt to the development model chosen for the project. In any model, testing should be performed at all levels, i.e. right from requirements until maintenance.

Developers do unit testing. In the practical world, developers are either reluctant to test their code or do not have the time to unit test. Many a time, much of the unit testing is done by testers.

Integration testing

Integration testing is carried out by testers. Data transfer between the modules is tested thoroughly. Consider this integration testing scenario: a customer is currently in the Current Balance module. His balance is 1000. He navigates to the Transfer module and transfers 500 to a third-party account. The customer navigates back to the Current Balance module, and now his latest balance should be 500.

The modules in the project are assigned to 5 different developers to reduce coding time. Coder 2 is ready with the Current Balance module, but Coder 5 is not ready with the Transfer module required to test your integration scenario. What do you do in such a situation?

One approach is to use Big Bang integration testing, where you wait for all modules to be developed before you begin integration testing. The major disadvantage is that it increases project execution time, since testers will be sitting idle until all modules are developed. It also becomes difficult to trace the root cause of defects.

Alternatively, you can use an incremental approach, where modules are checked for integration as and when they are available. Consider that the Transfer module is yet to be developed but the Current Balance module is ready. You will create a Stub which will accept data from and give data back to the Current Balance module. Note that this is not a complete implementation of the Transfer module, which will have lots of checks, like whether the third-party account number entered is correct or that the amount to transfer is not more than the amount available in the account, and so on. The stub just simulates the data transfer that takes place between the two modules, to facilitate testing.

On the contrary, if the Transfer module is ready but the Current Balance module is not developed, you will create a Driver to simulate data transfer between the modules.

To increase the effectiveness of integration testing you may use the top-down approach, where higher-level modules are tested first; this technique requires the creation of stubs. In the bottom-up approach, lower-level modules are tested first; this technique requires the creation of drivers. Other approaches are the functional incremental approach and the sandwich approach, which is a combination of top-down and bottom-up. The choice of approach depends on the system architecture and the location of high-risk modules.
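The stub idea can be sketched in a few lines of Python. This is a hypothetical illustration, not the banking application's real code: the class and method names are made up, and the stub deliberately skips all the validation a real Transfer module would perform.

```python
# Hypothetical sketch of incremental integration testing with a stub.
# The real Transfer module is not yet developed, so the stub only
# simulates the data handoff between the two modules.

class CurrentBalanceModule:
    def __init__(self, balance: int):
        self.balance = balance

    def get_balance(self) -> int:
        return self.balance

class TransferModuleStub:
    """Stub standing in for the unfinished Transfer module."""
    def transfer(self, balance_module: CurrentBalanceModule, amount: int) -> None:
        # A real implementation would validate the third-party account
        # number, check transfer limits, etc. The stub just moves the number.
        balance_module.balance -= amount

# Integration scenario from the text: balance 1000, transfer 500
account = CurrentBalanceModule(1000)
TransferModuleStub().transfer(account, 500)
print(account.get_balance())  # 500
```

A driver would be the mirror image: test code that calls into a finished lower-level module when the module that normally invokes it does not exist yet.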

System and Acceptance Testing

Unlike integration testing, which focuses on data transfer amongst modules, system testing checks complete end-to-end scenarios, the way a customer would use the system. A good example of a test case in this phase would be: log in to the banking application, check the current balance, transfer some money, log out. Apart from functional requirements, NON-FUNCTIONAL requirements are also checked during system testing. Non-functional requirements include performance and reliability; we will discuss non-functional requirements in detail in a later tutorial.

Acceptance testing is usually done at the client location, by the client, once all the defects found in the system testing phase are fixed. The focus of acceptance testing is not to find defects but to check whether the system meets the client's requirements, since this is the first time the client sees their requirements, which were plain text, as an actual working system. Acceptance testing can be done in two ways:

Alpha Testing: a small set of employees of the client (in our case, employees of the bank) will use the system as the end user would. Beta Testing: a small set of customers (in our case, bank account holders) will use the software and recommend changes. That's all about the various testing levels in the V-model.

Sanity and smoke testing

Consider a scenario where, after fixing the defects found in integration testing, the system is made available to the testing team for system testing. You look at the initial screen, the system looks fine, and you delay system test execution until the next day, since you have other critical testing requirements to attend to.

The next day, say you plan to execute the scenario Login > View Balance > Transfer 500 > Logout. The deadline is 4 hours. You begin executing the scenario, enter a valid login ID and password, click the login button, and boom: you are taken to a blank screen with absolutely no links, no buttons, and nowhere for you to proceed with the succeeding steps of your scenario. This is not a figment of imagination but a very practical condition which can arise due to developer negligence, time pressures, or test environment misconfiguration and instability.

To fix this, the developer requires at least 5 hours, and the deadline will be missed. In fact, none of your team members will be able to execute their respective scenarios, since View Balance is the starting point for any other operation, and the entire project will be delayed. Had you checked this the day before, the system would have been fixed by now and you would have been good to go for testing.

To avoid such a situation, sanity testing, also known as SMOKE testing, is done to check the critical functionalities of the system before it is accepted for major testing. Sanity testing is quick and non-exhaustive. The goal is not to find defects but to check system health.
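A smoke check like the one above can be automated as a handful of quick assertions on critical paths. The sketch below is purely illustrative: the `BankingApp` class and its methods are made-up stand-ins, not a real API, and a real smoke suite would hit a deployed build rather than an in-process object.

```python
# Hypothetical smoke-test sketch: a few quick, non-exhaustive checks
# on critical functionality, run before full system testing begins.

class BankingApp:
    """Illustrative stand-in for the deployed application under test."""
    def login(self, user: str, password: str) -> bool:
        return user == "jim" and password == "mercury"

    def view_balance(self, user: str) -> int:
        return 1000

def smoke_test(app: BankingApp) -> bool:
    """True only if every critical entry point is reachable."""
    checks = [
        app.login("jim", "mercury"),          # can we log in at all?
        app.view_balance("jim") is not None,  # is the landing screen alive?
    ]
    return all(checks)

print(smoke_test(BankingApp()))  # True
```

If `smoke_test` returns False, the build is rejected before any tester spends time on detailed scenarios, which is exactly the situation the story above is warning about.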

Maintenance and Regression Testing

Suppose that in the Current Balance module, instead of just showing the current balance, the client now wants customized reports based on the date and amount of transactions. Obviously, any such change needs to be tested. Once the system is deployed, testing any further system changes, enhancements, or corrections forms part of Maintenance Testing.

Suppose that in our banking application your current balance is 2000. Using the new enhancement, you check your balance from a year ago, which comes out to be 500. You enter the Transfer module and try to transfer Rs 1000. In order to proceed, the Transfer module checks the current balance. Instead of receiving the current balance, it receives the old balance of 500, and the transaction fails. As you may observe, the code changes were in the Current Balance module only, but the Transfer module is still affected. Regression testing is carried out to check that modifications in the software have not caused unintended adverse side effects.
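The regression scenario above can be captured as a small automated check. This is a hypothetical sketch: the classes, amounts, and the year-keyed history are illustrative inventions, not the application's real design.

```python
# Sketch of the regression scenario described above: after the
# historical-reports enhancement, Transfer must still read the
# *current* balance, not a stale historical one.

class Account:
    def __init__(self):
        # Hypothetical data matching the story: 500 a year ago, 2000 now
        self.history = {"2023": 500, "2024": 2000}
        self.current_year = "2024"

    def current_balance(self) -> int:
        return self.history[self.current_year]

def transfer(account: Account, amount: int) -> bool:
    # Regression risk: if a code change made this read a stale year,
    # the Rs 1000 transfer would wrongly fail despite a 2000 balance.
    return account.current_balance() >= amount

# Regression test: the enhancement must not break the transfer path
assert transfer(Account(), 1000) is True
print("regression test passed")
```

Keeping a check like this in the suite means the defect described above is caught automatically the next time the Current Balance module changes.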

Non-Functional Testing

Apart from functional testing, non-functional requirements like performance, usability, and load factor are also important. How many times have you seen long load-time messages while accessing an application (pic in video)? I am sure many. To address this issue, performance testing is carried out to check and fine-tune system response times. The goal of performance testing is to reduce response time to an acceptable level. Or you may have seen messages like (pic in video). Hence, load testing is carried out to check the system's performance at different loads, i.e. different numbers of users accessing the system. Depending on the results and expected usage, more system resources may be added.

That's all about types of testing. In general there are three testing types: 1) Functional, 2) Non-Functional, and 3) Maintenance. Under these types you have multiple testing levels, but usually people refer to them as testing types. You may find some differences in this classification in different resources, but the general theme remains the same. This is not the complete list, as there are more than 150 types of testing and counting. No need to worry; you will pick them up as you age in the testing industry. Also, note that not all testing types are applicable to all projects; they depend on the nature and scope of the project. More on this in a later tutorial.
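The core idea of a response-time check can be sketched in a few lines. This is an assumption-laden illustration: `search_flights` is an invented stand-in for the operation under test, and the 2-second threshold is an example target, not a standard.

```python
# Hypothetical sketch of a response-time check, the basic idea behind
# performance testing: time an operation and compare it to a target.
import time

ACCEPTABLE_SECONDS = 2.0  # illustrative service-level target

def search_flights():
    """Stand-in for the real operation being measured."""
    time.sleep(0.01)  # simulate some work
    return ["FL-101", "FL-202"]

start = time.perf_counter()
results = search_flights()
elapsed = time.perf_counter() - start

print(f"responded in {elapsed:.3f}s")
assert elapsed < ACCEPTABLE_SECONDS, "performance requirement not met"
```

Real performance tools like LoadRunner or WebLOAD do this at scale, with many simulated users, but the pass/fail criterion is the same comparison against an acceptable response time.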

Test Scenario

A scenario is any functionality that can be tested. It is also called a Test Condition or Test Possibility. For the Flight Reservation application, a few scenarios would be: 1) Check the login functionality. 2) Check that a new order can be created. 3) Check that an existing order can be opened. 4) Check that a user can FAX an order. 5) Check that the information displayed in the HELP section is correct. 6) Check that the information displayed in the About section, like version, programmer name, and copyright information, is correct.


Consider a scenario where the client changes the requirement, something quite usual in the practical world, and adds a field, Recipient Name, to the functionality. So now you need to enter both the email ID and the name to send a mail. Obviously you will need to change your test cases to meet this new requirement. But by now your test case suite is very large, and it is very difficult to trace the test cases affected by the requirement change. Instead, if the requirements were numbered and were referenced in the test case suite, it would be very easy to track the test cases that are affected. This is nothing but Traceability. The traceability matrix links a business requirement to its corresponding functional requirement, right up to the corresponding test cases. If a test case fails, traceability helps determine the corresponding functionality easily.
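In its simplest form, a traceability matrix is just a mapping from requirement IDs to the test cases that cover them. The sketch below is illustrative; the requirement and test-case IDs are hypothetical inventions:

```python
# Minimal sketch of a traceability matrix: each requirement maps to
# the test cases that cover it. All IDs here are made up.

traceability = {
    "BR-01 Send ticket by email":        ["TC-101", "TC-102"],
    "BR-01a Recipient name field (new)": ["TC-101", "TC-103"],
}

def impacted_tests(requirement_id: str) -> list:
    """Which test cases must be revisited when a requirement changes?"""
    for req, tests in traceability.items():
        if req.startswith(requirement_id):
            return tests
    return []

print(impacted_tests("BR-01a"))  # ['TC-101', 'TC-103']
```

When the client adds the Recipient Name field, one lookup tells you exactly which test cases need updating, instead of a manual hunt through the whole suite.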

Decision Table Testing is a good way to deal with combinations of inputs which produce different results. To understand this with an example, let's consider the behavior of the Flights button for different combinations of Fly From and Fly To.

When both Fly From and Fly To are not set, the Flights button is disabled. In the decision table, we register the value False for both Fly From and Fly To, and the outcome, whether the Flights button is enabled, is also FALSE. Next, when Fly From is set but Fly To is not set, the Flights button is disabled; correspondingly, you register True for Fly From in the decision table, and the rest of the entries are False. When Fly From is not set but Fly To is set, the Flights button is disabled, and you make the corresponding entries in the decision table. Lastly, only when both Fly From and Fly To are set is the Flights button enabled, and you make the corresponding entry in the decision table.

If you observe, the outcomes for Rules 1, 2, and 3 remain the same, so you can select any one of them, plus Rule 4, for your testing. The significance of this technique becomes immediately clear as the number of inputs increases: the number of possible combinations is given by 2^n, where n is the number of inputs. For n = 10, which is very common in web-based testing with big input forms, the number of combinations is 1024. Obviously, you cannot test them all.
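The decision table just described can be expressed directly as data. This is a sketch of the same four rules; the function wrapping it is illustrative:

```python
# The Fly From / Fly To decision table from the text, as data.
# Each rule maps (fly_from_set, fly_to_set) to whether the
# Flights button is enabled.

decision_table = {
    (False, False): False,  # Rule 1: neither set   -> disabled
    (True,  False): False,  # Rule 2: only Fly From -> disabled
    (False, True):  False,  # Rule 3: only Fly To   -> disabled
    (True,  True):  True,   # Rule 4: both set      -> enabled
}

def flights_button_enabled(fly_from_set: bool, fly_to_set: bool) -> bool:
    return decision_table[(fly_from_set, fly_to_set)]

# Rules 1-3 share an outcome, so testing one of them plus Rule 4 suffices
print(flights_button_enabled(False, False))  # False
print(flights_button_enabled(True, True))    # True
```

Writing the rules out like this also makes the 2^n growth visible: two boolean inputs already need four rows, and ten would need 1024.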

In Equivalence Partitioning, you divide a set of test conditions into partitions that can be considered the same. To understand this better, let's consider the behavior of the Tickets field in the Flight Reservation application while booking a new flight. Ticket values 1 to 10 are considered valid, and the ticket is booked. Values 11 to 99 are considered invalid, and an error message, "Only ten tickets may be ordered at one time", is shown. On entering values 100 and above, the ticket number defaults to a two-digit number. On entering values 0 and below, the ticket number defaults to 1.

We cannot test all the possible values, because if we did, the number of test cases would be more than 100. To address this problem we use equivalence partitioning, where we divide the possible values of tickets into groups, or sets, where the system behavior can be considered the same. The divided sets are called Equivalence Partitions or Equivalence Classes. Then we pick only one value from each partition for testing. The hypothesis behind this technique is that if one condition/value in a partition passes, all the others will also pass; likewise, if one condition in a partition fails, all the other conditions in that partition will fail.

In Boundary Value Analysis, you test the boundaries between equivalence partitions. In our earlier example, instead of checking one value from each partition, you check the values at the partition boundaries, like 0, 1, 10, 11, and so on. As you may observe, you test values at both valid and invalid boundaries.
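The ticket-count behavior above can be modeled as a small function, which makes the partitions and their boundaries explicit. This is a sketch of the behavior as described in the text, not the application's actual code:

```python
def classify(tickets: int) -> str:
    """Ticket-field behavior per partition, as described in the text."""
    if tickets <= 0:
        return "defaults to 1"
    if tickets <= 10:
        return "booked"
    if tickets <= 99:
        return "error: only ten tickets may be ordered at one time"
    return "defaults to two-digit number"

# Equivalence partitioning: one representative value per partition
for value in (-5, 5, 50, 150):
    print(value, "->", classify(value))

# Boundary value analysis: test at the partition edges instead
for value in (0, 1, 10, 11, 99, 100):
    print(value, "->", classify(value))
```

Four representative values cover the partitions; six boundary values probe the edges where off-by-one defects tend to hide.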

To understand a review in detail, let's consider the same example: adding email functionality to the Flight Reservation application, for which the Functional Design Document is prepared by the technical lead. The technical lead approaches his manager and requests to initiate a review. The manager will quickly go through the document and check whether it is of acceptable quality to request a review by other people. For example, in this case he finds a few spelling mistakes and asks the technical lead to correct them. Once they are corrected, the manager will send out a meeting request to all stakeholders with the meeting location, the date and time of the meeting, and the agenda, and will also attach the Functional Design Document itself. This is the Planning stage.

The next stage is the Kick-Off Meeting. It is an optional step; the goal is to get everybody on the same wavelength regarding the document under review, and it is beneficial for new or highly complex projects.

The next stage is the Preparation stage, where the review meeting participants individually go through the document to identify defects, comments, and questions to be asked during the review meeting. This phase is necessary to ensure that during the meeting the participants focus on the subject at hand instead of daydreaming. Here is your exercise: for this Functional Design Document, think of the missing details which would help you test this functionality. Pause the training and think!

The next stage is the actual Review Meeting. Here, the meeting participants are assigned different roles to increase the effectiveness of the meeting. The Moderator is a role usually played by the manager, who leads the review meeting and sets the agenda. The creator of the document under review plays the role of the Author, who reads the document and invites comments. The task of the Reviewer is to communicate any defects in the work product. Suppose one of the reviewers says it would be nice to have a Reset button; the author agrees to this. Another review comment is that there is no mention of where in the menu the Email functionality will appear; again the author agrees and accepts to make changes. The meeting participant playing the role of the Scribe (also known as the Recorder) will note down these defects and suggestions. One young reviewer suggests the possibility of sharing a ticket via Facebook, Orkut, and so on. The author strongly disagrees with this, and the reviewer and author enter into a heated argument. At this juncture the moderator intervenes and finds an amicable solution, which is to ask the client whether he needs sharing via social networking.

Finally, all comments are discussed, and the scribe gives a list of defects, comments, and suggestions that the author needs to incorporate into his work product. The moderator then closes the review meeting. That's all about the meeting phase of a review. The important roles here are the Moderator, the Author, the Scribe/Recorder, and the Reviewers. The moderator and scribe can also play the role of reviewer, meaning they can give review comments to the author.

The next phase of the review is the Rework phase, where the author makes changes in the document as per the action items of the meeting. In the Follow-Up phase, the moderator circulates the reworked document to all review participants and ensures that all changes have been incorporated satisfactorily.

This was a generic review. Note that there are three types of reviews: the Walkthrough, which is led by the author; the Technical Review, which is led by a trained moderator with no management participation; and the Inspection, which is led by a trained moderator and uses entry and exit criteria. All three types follow the same review process and the same stages as discussed earlier.

Based on the above, you can make a list of the testing types that are in scope and will be tested, and those that are out of scope and will not be executed for Flight Reservation. A risk is any future event with a negative consequence, and you need to identify the risks associated with your project. Risks are of two types: 1) Project Risks and 2) Product Risks. An example of a project risk is a senior team member leaving the project abruptly. Every risk is assigned a likelihood, i.e. the chance of it occurring, typically on a scale of 1 to 10; the impact of that risk is also identified on a scale of 1 to 10. But just identifying the risk is not enough; you need to identify a mitigation. In this case, the mitigation could be knowledge transfer to other team members and having a buffer tester in place.

The second type of risk is product risk. An example of a product risk would be the Flight Reservation system not installing in the test environment. The mitigation in this case would be conducting smoke or sanity testing; accordingly, you would change your scope items to include sanity testing. This is the risk-based strategy of testing. There are many other testing strategies to help you select testing types for your application under test. Most of the time, your out-of-scope items will not contain out-of-context testing types, but rather in-context testing types excluded due to the test strategy chosen, or budget and timing considerations. So, for example, if timing considerations do not permit performance testing, it will move from the in-scope to the out-of-scope list. Apart from that, a test plan will contain information about the test estimates, the test team, the schedule, and so on. A test plan helps monitor the progress of the various testing activities and helps take controlling action in case of any deviations from the planned activities. That's a brief overview of how to create a test plan.
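Risk-based prioritization is often computed as likelihood times impact. The sketch below uses the two example risks from the text, but the numeric scores are illustrative assumptions:

```python
# Sketch of risk-based prioritization: each risk gets a likelihood
# and an impact on a 1-10 scale; exposure = likelihood * impact.
# The scores below are illustrative, not from the text.

risks = [
    {"risk": "Senior team member leaves the project abruptly",
     "likelihood": 3, "impact": 8,
     "mitigation": "Knowledge transfer + buffer tester"},
    {"risk": "App fails to install in the test environment",
     "likelihood": 5, "impact": 9,
     "mitigation": "Run smoke/sanity testing first"},
]

for r in risks:
    r["exposure"] = r["likelihood"] * r["impact"]

# Address the highest-exposure risks first
for r in sorted(risks, key=lambda r: r["exposure"], reverse=True):
    print(r["exposure"], r["risk"], "->", r["mitigation"])
```

Sorting by exposure gives a defensible order in which to spend limited testing effort, which is the point of the risk-based strategy.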

While executing test cases you may find that actual results vary from the expected results. This is nothing but a defect also called incident , bug , problem or issues. In case you find a defect , What information would you convey to a developer to help him understand the defect ? Pause the training and think.Your Bug Report should contain following information Defect_ID Unique identification number for the defect. Defect Description Detailed description of the defect including information about the module in which defect was found. Version Version of the application in which defect was found. Steps Detailed steps along with screenshots with which the developer can reproduce the defects. Date Raised Date when the defect is raised Reference- where in you Provide reference to the documents like . requirements, design, architecture or may be even screenshots of the error to help understand the defect Detected By Name/ID of the tester who raised the defect Status Status of the defect , more on this later Fixed by Name/ID of the developer who fixed it Date Closed Date when the defect is closed A sample bug report for your reference. This apart , your bug report will also include Severity , which describes the impact of the defect on the application Priority which is related to defect fixing urgency. 
Severity and Priority can each be High, Medium, or Low, based respectively on the impact of the defect and the urgency with which it should be fixed. A defect could have very low severity but high priority. For example, if there is an error in the text of the logo of the Flight Reservation application, its severity is low, since it can be fixed very easily and does not affect any functionality of the system. But it needs to be fixed at high priority, since you do not want to ship your product with an incorrect logo. Likewise, a defect could be high severity but low priority. Suppose there is a problem with the Email functionality of Flight Reservation. This defect has high severity, since it causes the application to crash, but the functionality is scheduled for release in the next cycle, which makes it a low priority.

From discovery to resolution, a defect moves through a definite lifecycle called the defect lifecycle. Let's look into it. Suppose a tester finds a defect. The defect is assigned the status New.

The defect is assigned to the development project manager, who will analyze it and check whether it is a valid defect. Consider that in the Flight Reservation application, the only valid password is mercury, but you test the application with some random password, which causes a logon failure, and you report it as a defect. Such defects, caused by corrupted test data, misconfigurations in the test environment, invalid expected results, etc., are assigned the status Rejected. If the defect is valid, it is next checked for scope. Suppose you find a problem with the email functionality, but it is not part of the current release; such defects are Postponed. Next, the manager checks whether a similar defect was raised earlier. If yes, the defect is assigned the status Duplicate. If no, the defect is assigned to a developer, who starts fixing the code. During this stage, the defect has the status In-Progress. Once the code is fixed, the defect is assigned the status Fixed. Next, the tester will re-test the code. If the test case passes, the defect is Closed. If the test case fails again, the defect is Re-opened and assigned back to the developer. Consider a situation where, during the first release of Flight Reservation, a defect was found in Fax Order, which was fixed and assigned the status Closed. During the second upgrade release, the same defect re-surfaced. In such cases, a closed defect will be re-opened. That's all there is to the bug lifecycle.
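The lifecycle described above can be sketched as a small state machine that validates status changes. This is a minimal illustration of the transitions named in the text, not the workflow of any specific defect tracker; real tools usually allow more states and transitions.

```python
# A minimal sketch of the defect lifecycle as a state machine.
# Status names follow the lifecycle described above; adapt to your tracker.
TRANSITIONS = {
    "New":         {"Rejected", "Postponed", "Duplicate", "In-Progress"},
    "In-Progress": {"Fixed"},
    "Fixed":       {"Closed", "Re-opened"},
    "Re-opened":   {"In-Progress"},
    "Closed":      {"Re-opened"},   # a closed defect can resurface later
}

def move(current, target):
    """Validate a status change against the lifecycle; raise on illegal moves."""
    if target not in TRANSITIONS.get(current, set()):
        raise ValueError(f"Illegal transition: {current} -> {target}")
    return target

# Walk the Fax Order scenario: fixed, closed, re-surfaced, fixed again.
status = "New"
for step in ("In-Progress", "Fixed", "Closed",
             "Re-opened", "In-Progress", "Fixed", "Closed"):
    status = move(status, step)
print(status)  # Closed
```

Terminal states like Rejected and Duplicate have no outgoing transitions here, so any further move from them raises an error.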

What is Web Testing?

Web testing, in simple terms, is checking your web application for potential bugs before it is made live, i.e., before the code is moved into the production environment. During this stage, issues such as web application security, the functioning of the site, its accessibility to disabled as well as regular users, and its ability to handle traffic are checked.

Web Application Testing Checklist:

Some or all of the following testing types may be performed depending on your web testing requirements.

1. Functionality Testing:
This is used to check that your product works as per the specifications you intended for it, as well as the functional requirements you charted out in your development documentation. Testing activities included:

Test that all links in your webpages work correctly and make sure there are no broken links. Links to be checked include:
- Outgoing links
- Internal links
- Anchor links
- MailTo links

Test that forms work as expected. This includes:
- Scripting checks on the form work as expected; for example, if a user does not fill in a mandatory field, an error message is shown.
- Default values are being populated.
- Once submitted, the data in the forms is submitted to a live database or is linked to a working email address.
- Forms are optimally formatted for better readability.

Test that cookies work as expected. Cookies are small files used by websites primarily to remember active user sessions, so that you do not have to log in every time you visit a website. Cookie testing will include:

- Testing that cookies (sessions) are deleted either when the cache is cleared or when they reach their expiry.
- Deleting cookies (sessions) and testing that login credentials are asked for when you next visit the site.

Test HTML and CSS to ensure that search engines can crawl your site easily. This will include:
- Checking for syntax errors
- Readable color schemes
- Standards compliance: ensure standards such as W3C, OASIS, IETF, ISO, ECMA, or WS-I are followed.

Test the business workflow. This will include:
- Testing your end-to-end workflows/business scenarios, which take the user through a series of webpages to complete.
- Testing negative scenarios as well, so that when a user executes an unexpected step, an appropriate error message or help is shown in your web application.

Tools that can be used: QTP, IBM Rational, Selenium
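The broken-link check above can be sketched with the standard library alone: parse the anchors out of a page, then probe each one. This is a minimal illustration, not a replacement for the tools listed; the sample HTML is made up, and `broken_links` is a hypothetical helper you would point at your own application under test.

```python
# A minimal broken-link checker sketch using only the standard library.
from html.parser import HTMLParser
from urllib.error import HTTPError, URLError
from urllib.parse import urljoin
from urllib.request import Request, urlopen

class LinkExtractor(HTMLParser):
    """Collect href targets from <a> tags, skipping anchors and mailto links."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            href = dict(attrs).get("href")
            if href and not href.startswith(("#", "mailto:")):
                self.links.append(href)

def broken_links(page_url, html):
    """Probe every extracted link and return the ones that fail to load."""
    parser = LinkExtractor()
    parser.feed(html)
    broken = []
    for href in parser.links:
        url = urljoin(page_url, href)      # resolve internal/relative links
        try:
            with urlopen(Request(url, method="HEAD"), timeout=10) as resp:
                if resp.status >= 400:
                    broken.append(url)
        except (HTTPError, URLError):
            broken.append(url)
    return broken

# Extraction demo on made-up markup (no network call here).
demo = LinkExtractor()
demo.feed('<a href="/contact">Contact</a><a href="#top">Top</a>'
          '<a href="mailto:a@b.c">Mail</a>')
print(demo.links)   # ['/contact']
```

Anchor (`#top`) and MailTo links are filtered out here because they need different checks than an HTTP status probe.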

2. Usability Testing:
Usability testing has now become a vital part of any web-based project. It can be carried out by testers like you or by a small focus group similar to the target audience of the web application. Test the site navigation: menus, buttons, or links to different pages on your site should be easily visible and consistent on all webpages. Test the content: content should be legible with no spelling or grammatical errors; images, if present, should contain "alt" text. Tools that can be used: Chalkmark, Clicktale, Clixpy and Feedback Army

3. Interface Testing:
The three areas to be tested here are the application, web, and database servers. Application: Test that requests are sent correctly to the database and that output at the client side is displayed correctly. Errors, if any, must be caught by the application and must be shown only to the administrator, not to the end user. Web Server: Test that the web server handles all application requests without any denial of service. Database Server: Make sure queries sent to the database give expected results. Test the system response when the connection between the three layers (application, web, and database) cannot be established, and that an appropriate message is shown to the end user. Tools that can be used: AlertFox, Ranorex

4. Database Testing:
The database is a critical component of your web application, and stress must be laid on testing it thoroughly. Testing activities will include: Test whether any errors are shown while executing queries. Check that data integrity is maintained while creating, updating, or deleting data in the database. Check the response time of queries and fine-tune them if necessary. Test that data retrieved from your database is shown accurately in your web application. Tools that can be used: QTP
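Two of the checks above (queries execute without errors, integrity is maintained) can be sketched against an in-memory SQLite table. This is an illustration only: the `orders` table is made up, standing in for your application's real database.

```python
# A sketch of database checks using an in-memory SQLite stand-in.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer TEXT NOT NULL)")
conn.execute("INSERT INTO orders (customer) VALUES ('Smith')")
conn.commit()

# Check 1: queries execute without errors and return the data just written.
rows = conn.execute("SELECT customer FROM orders").fetchall()
assert rows == [("Smith",)]

# Check 2: integrity holds -- a NOT NULL column rejects missing data.
try:
    conn.execute("INSERT INTO orders (customer) VALUES (NULL)")
    raise AssertionError("integrity constraint was not enforced")
except sqlite3.IntegrityError:
    pass

print("database checks passed")
```

Response-time checks would wrap the query in a timer (e.g. `time.perf_counter()`) and compare against your performance budget.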

5. Compatibility Testing:
Compatibility tests ensure that your web application displays correctly across different devices. This would include a browser compatibility test: the same website can display differently in different browsers. You need to test that your web application displays correctly across browsers, and that JavaScript, AJAX, and authentication work fine. You may also check for mobile browser compatibility. The rendering of web elements like buttons, text fields, etc. changes with a change in operating system. Make sure your website works fine for various combinations of operating systems such as Windows, Linux, and Mac, and browsers such as Firefox, Internet Explorer, and Safari. Tools that can be used: NetMechanic

6. Performance Testing:
This will ensure your site works under all loads. Testing activities will include, but are not limited to: Test website application response times at different connection speeds. Load-test your web application to determine its behavior under normal and peak loads. Stress-test your web site to determine its breaking point when pushed beyond normal loads at peak time. Test how the site recovers if a crash occurs due to peak load. Make sure optimization techniques like gzip compression and browser- and server-side caching are enabled to reduce load times. Tools that can be used: LoadRunner, JMeter
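The load-test idea above (many concurrent users, measure response times) can be sketched with a thread pool. This is a toy illustration, not a substitute for LoadRunner or JMeter: `send_request` is a hypothetical stub that sleeps instead of making a real HTTP call, and the user count is arbitrary.

```python
# A minimal load-test sketch: fire N concurrent "requests" and report timings.
import time
from concurrent.futures import ThreadPoolExecutor

def send_request(i):
    """Stand-in for one HTTP request; replace the sleep with a real call."""
    start = time.perf_counter()
    time.sleep(0.01)          # simulated network + server time
    return time.perf_counter() - start

def load_test(users):
    """Run `users` concurrent requests and return (min, avg, max) timings."""
    with ThreadPoolExecutor(max_workers=users) as pool:
        timings = list(pool.map(send_request, range(users)))
    return min(timings), sum(timings) / len(timings), max(timings)

fastest, average, slowest = load_test(users=20)
print(f"min {fastest:.3f}s  avg {average:.3f}s  max {slowest:.3f}s")
```

Ramping `users` up until `slowest` exceeds your response-time budget is a crude way to find the breaking point the stress test looks for.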

7. Security Testing:
Security testing is vital for e-commerce websites that store sensitive customer information like credit cards. Testing activities will include: Test that unauthorized access to secure pages is not permitted. Restricted files should not be downloadable without appropriate access. Check that sessions are automatically killed after prolonged user inactivity. If SSL certificates are used, the website should redirect to encrypted SSL pages. Tools that can be used: Babel Enterprise, BFBTester and CROSS
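The session-inactivity check can be sketched as follows. This is a minimal model for illustration only: the `Session` class and the 15-minute timeout are assumptions, not the behavior of any particular web framework.

```python
# A sketch of the "sessions killed after prolonged inactivity" check.
SESSION_TIMEOUT = 15 * 60   # seconds of allowed inactivity (assumed policy)

class Session:
    """Hypothetical server-side session tracking its last activity time."""
    def __init__(self, now):
        self.last_activity = now

    def touch(self, now):
        """Record user activity, resetting the inactivity clock."""
        self.last_activity = now

    def is_expired(self, now):
        return now - self.last_activity > SESSION_TIMEOUT

s = Session(now=0)
s.touch(now=600)   # user is active again after 10 minutes
print(s.is_expired(now=600 + SESSION_TIMEOUT + 1))   # True: idle too long
```

A real test would log in, wait past the timeout, and then assert that the next request is redirected to the login page.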

8. Crowd Testing:
You select a large number of people (a crowd) to execute tests which otherwise would have been executed by a select group of people in the company. Crowdsourced testing is an interesting and upcoming concept that helps unravel many otherwise unnoticed defects. Tools that can be used: people like you and me. And yes, loads of them!

This concludes almost all the testing types applicable to your web application. As a web tester, it is important to note that web testing is quite an arduous process and you are bound to come across many obstacles. One of the major problems you will face is, of course, deadline pressure. Everything is always needed yesterday! The number of times the code will need changing is also taxing. Make sure you plan your work and know clearly what is expected of you. It is best to define all the tasks involved in your web testing and then create a work chart for accurate estimates and planning.

Each of these stages has definite entry and exit criteria, activities, and deliverables associated with it.

In an ideal world you will not enter the next stage until the exit criteria for the previous stage are met. But practically this is not always possible. So for this tutorial, we will focus on the activities and deliverables for the different stages in STLC. Let's look into them in detail.

Requirement Analysis
During this phase, the test team studies the requirements from a testing point of view to identify the testable requirements. The QA team may interact with various stakeholders (client, business analyst, technical leads, system architects, etc.) to understand the requirements in detail. Requirements could be either functional (defining what the software must do) or non-functional (defining system performance, security, availability). An automation feasibility analysis for the given testing project is also done in this stage.


Activities:
- Identify the types of tests to be performed.
- Gather details about testing priorities and focus.
- Prepare the Requirement Traceability Matrix (RTM).
- Identify test environment details where testing is supposed to be carried out.
- Automation feasibility analysis (if required).

Deliverables:
- RTM
- Automation feasibility report (if applicable)
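The RTM mentioned above can be sketched as a mapping from requirements to the test cases that cover them, which makes uncovered requirements easy to spot. The requirement and test-case IDs here are illustrative, not from any real project.

```python
# A minimal Requirement Traceability Matrix (RTM) sketch.
# Keys are requirements; values are the test cases covering each one.
rtm = {
    "REQ-001 login":       ["TC-01", "TC-02"],
    "REQ-002 book flight": ["TC-03"],
    "REQ-003 fax order":   [],        # not yet covered by any test case
}

# Traceability check: every requirement should map to at least one test case.
uncovered = [req for req, cases in rtm.items() if not cases]
print(uncovered)   # ['REQ-003 fax order']
```

Keeping the RTM as data like this also makes it easy to update later with execution status during the test execution phase.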


Test Planning
This phase is also called the Test Strategy phase. Typically, in this stage, a senior QA manager will determine the effort and cost estimates for the project and will prepare and finalize the test plan.


Activities:
- Preparation of the test plan/strategy document for various types of testing
- Test tool selection
- Test effort estimation
- Resource planning and determining roles and responsibilities
- Training requirements

Deliverables:
- Test plan/strategy document
- Effort estimation document


Test Case Development

This phase involves the creation, verification, and rework of test cases and test scripts. Test data is identified/created, reviewed, and then reworked as well.


Activities:
- Create test cases and automation scripts (if applicable)
- Review and baseline test cases and scripts
- Create test data (if a test environment is available)

Deliverables:
- Test cases/scripts
- Test data


Test Environment Setup

The test environment decides the software and hardware conditions under which a work product is tested. Test environment set-up is one of the critical aspects of the testing process and can be done in parallel with the test case development stage. The test team may not be involved in this activity if the customer/development team provides the test environment, in which case the test team is required to do a readiness check (smoke testing) of the given environment.


Activities:
- Understand the required architecture and environment set-up, and prepare hardware and software requirement lists for the test environment
- Set up the test environment and test data
- Perform a smoke test on the build

Deliverables:
- Environment ready with test data set up
- Smoke test results

Test Execution
During this phase, the test team will carry out the testing based on the test plans and the test cases prepared. Bugs will be reported back to the development team for correction, and retesting will be performed.


Activities:
- Execute tests as per plan
- Document test results and log defects for failed cases
- Map defects to test cases in the RTM
- Retest the defect fixes
- Track the defects to closure

Deliverables:
- Completed RTM with execution status
- Test cases updated with results
- Defect reports
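Mapping results and defects back into the RTM, as described above, can be sketched as follows. The test-case results and defect IDs are illustrative only.

```python
# A sketch of producing the "completed RTM with execution status" deliverable.
results = {"TC-01": "Pass", "TC-02": "Fail", "TC-03": "Pass"}
defects = {"TC-02": ["FR-101"]}    # failed case mapped to its logged defect

rtm = {
    "REQ-001 login":       ["TC-01", "TC-02"],
    "REQ-002 book flight": ["TC-03"],
}

# For each requirement, attach every test case's result and linked defects.
completed = {
    req: [(tc, results[tc], defects.get(tc, [])) for tc in cases]
    for req, cases in rtm.items()
}
print(completed["REQ-001 login"])
# [('TC-01', 'Pass', []), ('TC-02', 'Fail', ['FR-101'])]
```

From this structure you can read off both execution status per requirement and which defects still block closure.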


Test Cycle Closure

The testing team will meet, discuss, and analyze the testing artifacts to identify strategies that have to be implemented in the future, taking lessons from the current test cycle. The idea is to remove process bottlenecks for future test cycles and share best practices for similar projects in the future.


Activities:
- Evaluate cycle completion criteria based on time, test coverage, cost, software, critical business objectives, and quality
- Prepare test metrics based on the above parameters
- Document the learning from the project
- Prepare the test closure report
- Qualitative and quantitative reporting of the quality of the work product to the customer
- Test result analysis to find the defect distribution by type and severity

Deliverables:
- Test closure report
- Test metrics
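The defect-distribution analysis mentioned for test cycle closure is a simple counting exercise. The defect list here is made up for illustration; in practice you would pull it from your defect tracker.

```python
# Test result analysis: defect distribution by type and severity.
from collections import Counter

defects = [
    {"type": "Functional", "severity": "High"},
    {"type": "UI",         "severity": "Low"},
    {"type": "Functional", "severity": "Medium"},
    {"type": "Functional", "severity": "High"},
]

by_type = Counter(d["type"] for d in defects)          # distribution by type
by_severity = Counter(d["severity"] for d in defects)  # distribution by severity
print(by_type["Functional"], by_severity["High"])      # 3 2
```

Counts like these feed directly into the test metrics deliverable and the quality report to the customer.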

Finally, here is a summary of the STLC stages along with their entry and exit criteria.

Requirement Analysis
Entry Criteria:
- Requirements document available (both functional and non-functional)
- Acceptance criteria defined
- Application architectural document available
Activities:
- Analyse business functionality to know the business modules and module-specific functionalities
- Identify all transactions in the modules
- Identify all the user profiles
- Gather user interface/authentication and geographic spread requirements
- Identify types of tests to be performed
- Gather details about testing priorities and focus
- Prepare the Requirement Traceability Matrix (RTM)
- Identify test environment details where testing is supposed to be carried out
- Automation feasibility analysis (if required)
Exit Criteria:
- Signed-off RTM
- Test automation feasibility report signed off by the client
Deliverables:
- RTM
- Automation feasibility report (if applicable)

Test Planning
Entry Criteria:
- Requirements documents
- Requirement Traceability Matrix
- Test automation feasibility document
Activities:
- Analyze various testing approaches and finalize the best-suited approach
- Preparation of the test plan/strategy document for various types of testing
- Test tool selection
- Test effort estimation
- Resource planning and determining roles and responsibilities
Exit Criteria:
- Approved test plan/strategy document available
- Effort estimation document signed off
Deliverables:
- Test plan/strategy document
- Effort estimation document

Test Case Development
Entry Criteria:
- Requirements documents
- RTM and test plan
- Automation analysis report
Activities:
- Create test cases and automation scripts (where applicable)
- Review and baseline test cases and scripts
- Create test data
Exit Criteria:
- Reviewed and signed test cases/scripts
- Reviewed and signed test data
Deliverables:
- Test cases/scripts
- Test data

Test Environment Setup
Entry Criteria:
- System design and architecture documents are available
- Environment set-up plan is available
Activities:
- Understand the required architecture and environment set-up
- Prepare hardware and software requirement lists
- Finalize connectivity requirements
- Prepare the environment set-up checklist
- Set up the test environment and test data
- Perform a smoke test on the build
- Accept/reject the build depending on the smoke test result
Exit Criteria:
- Environment set-up is working as per the plan and checklist
- Test data set-up is complete
- Smoke test is successful
Deliverables:
- Environment ready with test data set up
- Smoke test results

Test Execution
Entry Criteria:
- Baselined RTM, test plan, and test cases/scripts are available
- Test environment is ready
- Test data set-up is done
- Unit/integration test report for the build to be tested is available
Activities:
- Execute tests as per plan
- Document test results and log defects for failed cases
- Update test plans/test cases, if necessary
- Map defects to test cases in the RTM
- Retest the defect fixes
- Regression testing of the application
- Track the defects to closure
Exit Criteria:
- All planned tests are executed
- Defects logged and tracked to closure
Deliverables:
- Completed RTM with execution status
- Test cases updated with results
- Defect reports

Test Cycle Closure
Entry Criteria:
- Testing has been completed
- Test results are available
- Defect logs are available
Activities:
- Evaluate cycle completion criteria based on time, test coverage, cost, software quality, and critical business objectives
- Prepare test metrics based on the above parameters
- Document the learning from the project
- Prepare the test closure report
- Qualitative and quantitative reporting of the quality of the work product to the customer
- Test result analysis to find the defect distribution by type and severity
Exit Criteria:
- Test closure report signed off by the client
Deliverables:
- Test closure report
- Test metrics