
Manual Testing

Software QA/Testing FAQ (Question & Answers):
http://www.robdavispe.com/free
http://robdavispe.com/free2/
http://software-quality-testing.blogspot.com/2007/01/bug-priority-vs-bug-severity.html

WinRunner Interview FAQ (Question & Answers):
http://quality-assurance-software-testing.blogspot.com/2006/01/winrunner-interviewquestions.html

QTP Interview FAQ (Question & Answers):
http://qtpro.blogspot.com/
http://interviewhelper.blogspot.com/200 /02/qtp-interview-questions.html
http://www.techinterviews.com/?p=110
http://techpreparation.com/qtp-interview-questions-answers1.htm
http://kabinfo.net/kabinterviews/ - a website with a lot of questions and answers; it also has a comparison between WinRunner and QTP.

LoadRunner Interview FAQ (Question & Answers):
http://www.techinterviews.com/?p=99
Mercury Quality Center
http://www.mercury.com/us/products/quality-center/
Mercury Quality Center provides a web-based system for automated software quality testing and management across a wide range of application environments. Dashboard technology gives you the visibility to validate both functionality and automated business processes, and to identify bottlenecks in production that stand in the way of business outcomes. Mercury Quality Center enables the IT team to engage in application testing even before the development process is complete. This shortens release schedules while ensuring the highest level of quality.
With Mercury Quality Center, you can:
- Make go-live decisions with confidence.
- Standardize and manage the entire quality process.
- Make quality decisions based on business risks and priorities.
- Reduce application deployment risk.
- Improve application quality and reliability.
- Manage application change impact through manual and automated functional testing.
- Track quality assets and progress across releases and test cycles.
- Ensure quality in strategic sourcing initiatives.
- Warehouse critical application quality project data.
- Test service-oriented architecture services for both functionality and performance.
- Ensure support for all environments, including J2EE, .NET, Oracle and SAP.

Mercury Quality Center Offerings
Mercury Quality Center includes automated software testing products such as Mercury Quality Center Dashboard, Mercury TestDirector, Mercury QuickTest Professional, Mercury WinRunner, Mercury Business Process Testing, and Mercury Service Test. Mercury Quality Center also provides best-practice-based services for deployments in-house or through our Managed Software Solutions.

W"at is t"e &ifference +etween t"e #riorit an& severit of a +ug, Priority tells U how Important the bug is. Severity tells U how bad the bug is. Severity is constant....whereas priority might change according to schedule. Priority means how urgently bug is needed to fix Severity means how badly it harms the system Priority defines how fast the bug should be resolved , where as the severity defines how severe the bug it is. It can be known by the extent of negative impact on the related and other functionalities

-$ W"at is Software Testing, -oftware testin* is more than 0ust error detection9 Testin* software is operatin* the software under controlled conditions3 to :#; verif& that it behaves <as specified<9 :2; to detect errors3 and :=; to validate that what has been specified is what the user actuall& wanted. #..erification is the checkin* or testin* of items3 includin* software3 for conformance and consistenc& b& evaluatin* the results a*ainst pre)specified requirements. >?erification: 6re we buildin* the s&stem ri*ht!@ 2./rror +etection: Testin* should intentionall& attempt to make thin*s *o wron* to determine if thin*s happen when the& shouldnAt or thin*s donAt happen when the& should. =..ali&ation looks at the s&stem correctness , i.e. is the process of checkin* that what has been specified is what the user actuall& wanted. >?alidation: 6re we buildin* the ri*ht s&stem!@ In other words3 validation checks to see if we are buildin* what the customer wants/needs3 and verification checks to see if we are buildin* that s&stem correctl&. 8oth verification and validation are necessar&3 but different components of an& testin* activit&.
W"at is a strateg , W" &oes testing nee& one, 6 strate*& outlines what to plan3 and how to plan it. 6 successful strate*& is &our *uide throu*h chan*e3 and provides a firm foundation for on*oin* improvement. Bnlike a plan3 which is obsolete from the point of creation3 a strate*& reflects the values of an or*ani.ation ) and remains current and useful. Test strate*ies can cover a wide ran*e of testin* and business issues. While not a checklist3 &ou mi*ht eCpect to see some of the followin* in &our own strate*&: approaches to risk assessment3 costs and qualit& throu*h the or*ani.ation test techniques3 test data3 test scope and test plannin* completion criteria and anal&sis test mana*ement3 metrics and improvement skills3 staffin*3 team structure and trainin*

test environment3 chan*e control and release strate*& defect control3 trackin* and the approach to fiCes re)tests and re*ression tests profilin* and anal&sis for non)functional testin* test automation and test tool assessment

MODELS: Waterfall, Incremental, Prototyping, Spiral and V Models
(i) Waterfall Model: http://www.experiencedynamics.com/popups/popup_waterfall_method.php
Business requirements doc (often incomplete) -> Technical requirements doc (review tech specs) -> Begin code (build s/w architecture) -> Develop use cases (develop interface) -> Finish coding -> QA testing, technical testing, user acceptance testing -> Debug -> Launch
- The waterfall model is a simplistic sequential model (Strategy -> Analysis -> Design -> Build -> Test -> Transition).
- It assumes that development can follow a step-by-step process.
- You never go back to previous steps.
(ii) Incremental Model/Method: http://scitec.uwichill.edu.bb/cmp/online/cs22l/incremental.htm
- There are a number of models typified by an incremental approach.
- Pieces are designed, implemented, and tested individually.
- The system is built up piece by piece.
- Someone has to keep the big picture in mind.
(iii) Prototyping Model: http://searchsmb.techtarget.com/sDefinition/0,,sid44_gci755441,00.html
The Prototyping Model is a systems development method (SDM) in which a prototype (an early approximation of a final system or product) is built, tested, and then reworked as necessary until an acceptable prototype is finally achieved, from which the complete system or product can then be developed. This model works best in scenarios where not all of the project requirements are known in detail ahead of time. It is an iterative, trial-and-error process that takes place between the developers and the users.
(iv) Spiral Model: http://en.wikipedia.org/wiki/Spiral_model
The spiral model is a software development process combining elements of both design and prototyping-in-stages, in an effort to combine the advantages of top-down and bottom-up concepts. This model of development combines the features of the prototyping model and the waterfall model. The spiral model is intended for large, expensive, and complicated projects.
T"e .(Mo&el: http://en.wikipedia.or*/wiki/?)%odelD:softwareDdevelopment; The ?)model is a software development model which can be presumed to be the eCtension of the waterfall model. Instead of movin* down in a linear wa&3 the process steps are bent upwards after the codin* phase3 to form the t&pical ? shape. The ?)%odel demonstrates the relationships between each phase of the development life c&cle and its associated phase of testin*. The development process proceeds from the upper left point of the ? toward the ri*ht3 endin* at the upper ri*ht point. In the left)hand3 downward)slopin* branch of the ?3 development personnel define business requirements3 application desi*n parameters and desi*n processes. 6t the base point of the ?3 the code is written. In the ri*ht)hand3 upward)slopin* branch of the ?3 testin* and debu**in* is done. The unit testin* is carried out first3 followed b& bottom)up inte*ration testin*. The eCtreme upper ri*ht point of the ? represents product release and on*oin* support.

1-, 2-, 3- and 4-tier architecture


-(Tier Arc"itecture ) 6 simple form of standalone application architecture where ever&thin* resides in a sin*le pro*ram. 'ontrast this to 2)tier and =)tier architectures. 6 #)tier architecture is the most basic setup because it involves a sin*le tier on a sin*le machine. Think of an application that runs on &our P': /ver&thin* &ou need to run the application :data stora*e3 business lo*ic3 user interface3 and so forth; is wrapped up to*ether. 6n eCample of a #) tiered application is a basic word processor or a desktop file utilit& pro*ram. 2( Tier Arc"itecture ) The two tiers are: M. 'lient application: the application on the client computer consumes the data and presents it in a readable format to the student. M. +ata server: the database serves up data based on -Q queries submitted b& the application. the client handles the displa&3 the server handles the database 6lthou*h the 2)tier approach increases scalabilit& and separates the displa& and database la&ers 'lient/Presentation tier)E data tier 5R 'lient)E+atabase:5racle/-Q server; , ' I/4T/-/R?/R 3( Tier Arc"itecture ) We create a =)tier architecture b& insertin* another pro*ram at the server level. We call this the server application. 4ow the client application no lon*er directl& queries the database9 it queries the server application3 which in turn queries the data server. M. 'lient application: the application on the client computer consumes the data and presents it in a readable format to the student. M. -erver 6pplication: M. +ata server: the database serves up data based on -Q queries submitted b& the application. 6 =)tier architecture is the most common approach used for web applications toda&. In the t&pical eCample of this model3 the web browser acts as the client3 an application server :such as %acromedia 'oldFusion; handles the business lo*ic3 and a separate tier :such as 5racle or %&-Q database servers; handles database functions. 
Client -> Server (IIS/Apache) -> Database (Oracle/SQL Server) - WEB-BASED
Workflow:
1. The student asks the client application.
2. The client application asks the server application.
3. The server application queries the data server.
4. The data server serves up a recordset with all the student's grades.
5. The server application does all the calculations to determine the grade.
6. The server application serves up the final grade to the client application.
7. The client application displays the final grade for the student.
Client/Presentation tier -> Business tier -> Data tier, OR Client -> Server (IIS/Apache) -> Database (Oracle/SQL Server)
4-Tier Architecture: Client -> Server (IIS/Apache) -> Business login (VeriSign etc.) -> Database (Oracle/SQL Server)
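The seven-step, 3-tier workflow above can be sketched in a few lines of Python. This is an illustrative model only: the three functions stand in for the client application, server application, and data server, and the student ID and grade data are made-up examples, not part of any real system.

```python
# Hypothetical 3-tier sketch: client tier -> middle (server application)
# tier -> data tier. Each tier only talks to the tier directly below it.
GRADE_DB = {"student-42": [88, 92, 77]}  # the data tier's storage

def data_server(student_id):
    """Data tier: serve a recordset for a SQL-like query (step 4)."""
    return GRADE_DB[student_id]

def server_application(student_id):
    """Middle tier: query the data server (step 3) and run the
    business logic -- computing the final grade (step 5)."""
    grades = data_server(student_id)
    return sum(grades) / len(grades)

def client_application(student_id):
    """Client tier: ask the server application (step 2) and
    format the result for display (step 7)."""
    final = server_application(student_id)
    return f"Final grade for {student_id}: {final:.1f}"

print(client_application("student-42"))  # Final grade for student-42: 85.7
```

Note that the client never touches `GRADE_DB` directly; that separation is exactly what distinguishes the 3-tier design from the 2-tier one described above.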

Waterfall Met"o&: http://www.oreill&.com/catalo*/oracledes/eCcerpt/odeI#I#.*if http://www.eCperienced&namics.com/popups/popupDwaterfallDmethod.php http://www.horde.or*/papers/oscon2II#)caseDstud&/JDwaterfall.Cml.html

The waterfall model is a simplistic sequential model :-trate*&)E6nal&sis) E+esi*n)E8uild)ETest)ETransition; It assumes that development can follow a step)b&)step process. Hou never *o back to previous steps.

The waterfall method is the oldest system development method. Its starting point is that things should be done in nice phases, that a phase ends with new, approved documentation, and that the next phase then starts. This is part of the reason there's a lot of documentation. Another effect is that the design can't be changed after the design phase, so you assume that the goal is very clear from the beginning. The comparison with the waterfall comes from this: once the water is over the edge, you can't get it to move up - you can't go backwards. It's an advantage that things look very good on paper, with easy-to-calculate deadlines. It's a disadvantage that reality isn't this nice, so you end up doing something not so waterfall-like anyway.
Incremental Method: There are a number of models typified by an incremental approach. Pieces are designed, implemented, and tested individually. The system is built up piece by piece. Someone has to keep the big picture in mind.
Prototyping Model: http://searchsmb.techtarget.com/sDefinition/0,,sid44_gci755441,00.html - The Prototyping Model is a systems development method (SDM) in which a prototype (an early approximation of a final system or product) is built, tested, and then reworked as necessary until an acceptable prototype is finally achieved, from which the complete system or product can then be developed. This model works best in scenarios where not all of the project requirements are known in detail ahead of time. It is an iterative, trial-and-error process that takes place between the developers and the users. There are several steps in the Prototyping Model:
- The new system requirements are defined in as much detail as possible. This usually involves interviewing a number of users representing all the departments or aspects of the existing system.
- A preliminary design is created for the new system.
- A first prototype of the new system is constructed from the preliminary design. This is usually a scaled-down system, and represents an approximation of the characteristics of the final product.
- The users thoroughly evaluate the first prototype, noting its strengths and weaknesses, what needs to be added, and what should be removed.
- The developer collects and analyzes the remarks from the users.
- The first prototype is modified, based on the comments supplied by the users, and a second prototype of the new system is constructed.
- The second prototype is evaluated in the same manner as was the first prototype.

The preceding steps are iterated as many times as necessary, until the users are satisfied that the prototype represents the final product desired. The final system is constructed based on the final prototype, then thoroughly evaluated and tested. Routine maintenance is carried out on a continuing basis to prevent large-scale failures and to minimize downtime.

The Spiral Model: http://en.wikipedia.org/wiki/Spiral_model
DEFINITION - The spiral model, also known as the spiral lifecycle model, is a systems development method (SDM) used in information technology (IT). This model of development combines the features of the prototyping model and the waterfall model. The spiral model is intended for large, expensive, and complicated projects. The steps in the spiral model can be generalized as follows:
- The new system requirements are defined in as much detail as possible. This usually involves interviewing a number of users representing all the external or internal users and other aspects of the existing system.
- A preliminary design is created for the new system.
- A first prototype of the new system is constructed from the preliminary design. This is usually a scaled-down system, and represents an approximation of the characteristics of the final product.
- A second prototype is evolved by a fourfold procedure: (1) evaluating the first prototype in terms of its strengths, weaknesses, and risks; (2) defining the requirements of the second prototype; (3) planning and designing the second prototype; (4) constructing and testing the second prototype.
- At the customer's option, the entire project can be aborted if the risk is deemed too great. Risk factors might involve development cost overruns, operating-cost miscalculation, or any other factor that could, in the customer's judgment, result in a less-than-satisfactory final product.
- The existing prototype is evaluated in the same manner as was the previous prototype, and, if necessary, another prototype is developed from it according to the fourfold procedure outlined above.
- The preceding steps are iterated until the customer is satisfied that the refined prototype represents the final product desired.
- The final system is constructed based on the refined prototype, then thoroughly evaluated and tested. Routine maintenance is carried out on a continuing basis to prevent large-scale failures and to minimize downtime.

Difference between Alpha & Beta testing:
Alpha testing is the final testing before the software is released to the general public. First (and this is called the first phase of alpha testing), the software is tested by in-house developers. They use either debugger software or hardware-assisted debuggers. The goal is to catch bugs quickly. Then (and this is called the second stage of alpha testing), the software is handed over to us, the software QA staff, for additional testing in an environment that is similar to the intended use.
Following alpha testing, "beta versions" of the software are released to a group of people, and limited public tests are performed, so that further testing can ensure the product has few bugs. Other times, beta versions are made available to the general public in order to receive as much feedback as possible. The goal is to benefit the maximum number of future users.
Difference between alpha and beta testing: in-house developers and software QA personnel perform alpha testing; the public - a few select prospective customers, or the general public - performs beta testing.

Process Flow Testing: process testing needs to cover the following:
- Alignment to strategy, customer requirements and value
- Compliance to design principles
- Compliance to legal requirements
- Measurability of the process
- Existing capability to perform the process
- Flexibility of the process to address exceptions and special non-standard requirements
- Does the process have self-regulatory feedback loops?
- Is the process under sufficient control to secure reliability?
http://www.sasqag.org/pastmeetings/I18N_SufficiencyTesting.ppt#263,3,Globalization

Globalization -> Localization (L10N) and Internationalization (I18N)
When the dev team is planning to do localization, they initially have to follow the I18N standard before they develop the code, and satisfy the I18N standards. Later, when we release the English version to customers, the dev team starts work on localization and gives a separate build (not the gold master) to QA. QA should then install the localized build on a different server and test the product by following localization standards.
See below for how to test, and for the localization standards.
Localization (L10N) means taking an internationalized product and customizing it for a specific market. This includes translating the software strings, rearranging the UI components to preserve the original look and feel after translation, and customizing the formats (such as date/time, paper size, etc.), the defaults, and even the logic, depending on the targeted market. Such a customization is possible only if the application is properly internationalized; otherwise, the L10N team faces a challenge whose significance depends on the application and the language version.
"Localization (L10n) is the adaptation of products or services to the cultural, legal, linguistic, and technical requirements of a specific locale. Localization is a complex process involving many steps:
- Reviewing target markets to identify local linguistic and environmental requirements
- Analyzing products to determine areas to be adapted
- Extracting text and other linguistic or culturally sensitive material
- Translating and modifying elements
- Re-engineering the core product to accept new foreign-market content (e.g., resizing buttons to accommodate new text)
- Testing new foreign-market editions to ensure they meet the performance standards of the domestic product"
Web developers and testers have long known about the subject of "localization" or "globalization", which basically involves the translation of web pages into various languages, thereby allowing companies to reach a "global public". More recently, companies have begun applying this technology to their GUI and legacy applications in greater numbers as they enter the global marketplace.
Internationalization (I18N): internationalization is the process of developing a software product whose core design does not make assumptions based on a locale. It potentially handles all targeted linguistic and cultural variations (such as text orientation, date/time format, currency, accented and double-byte characters, sorting, etc.) within a single code base. Isolating all message strings in text files is another necessary step to prepare a product for localization.
http://developers.sun.com/dev/gadc/i18ntesting/checklists/textual/textual.html
Localization link (L10N): http://www.cs.bsu.edu/homepages/metrics/cs639d/CS639WWW/chapter9-11/
A localization testing checklist for the user interface typically includes:
- Validation of all localized application resources against the base product
- Verifying that all localizable text has been translated and that the translated text is properly displayed
- User interface usability
- Assessment of cultural appropriateness
- Verifying/eliminating politically sensitive content
- Date format (long and short)
- Time format (Europe uses a 24-hour clock)
- Money and anything relating to it: taxes, etc.
- Number formats (using a "." instead of a "," to denote thousands)
- Address formats (having a zip field hard-coded)
- Font names, size, and decoration
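Several of the checklist items above (24-hour clock, date order, thousands separators) can be made concrete with a small Python sketch. This is not a real localization engine: a real product would draw these rules from locale data (ICU, CLDR, or the OS), whereas here the rules for two illustrative locales, "en_US" and "de_DE", are hard-coded purely to show what a tester would check.

```python
from datetime import datetime

def format_for_locale(dt: datetime, amount: float, locale_id: str) -> dict:
    """Illustrative formatter with hard-coded rules for two locales only."""
    if locale_id == "en_US":
        return {
            "time": dt.strftime("%I:%M %p"),   # 12-hour clock with AM/PM
            "date": dt.strftime("%m/%d/%Y"),   # month-first date order
            "number": f"{amount:,.2f}",        # 1,234.56
        }
    if locale_id == "de_DE":
        n = f"{amount:,.2f}"                   # start from 1,234.56 ...
        # ... then swap separators: German uses "." for thousands, "," for decimals
        n = n.replace(",", "X").replace(".", ",").replace("X", ".")
        return {
            "time": dt.strftime("%H:%M"),      # 24-hour clock
            "date": dt.strftime("%d.%m.%Y"),   # day-first date order
            "number": n,                       # 1.234,56
        }
    raise ValueError(f"unsupported locale {locale_id}")

dt = datetime(2007, 1, 31, 14, 5)
print(format_for_locale(dt, 1234.56, "en_US")["number"])  # 1,234.56
print(format_for_locale(dt, 1234.56, "de_DE")["number"])  # 1.234,56
print(format_for_locale(dt, 1234.56, "de_DE")["time"])    # 14:05
```

A localization test for the "number formats" checklist item would assert exactly this kind of expectation for each target locale build.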


I18N and L10N development process
http://www.tgpconsulting.com/developers.htm
I18N is the industry acronym for Internationalization: there are 18 letters between the first and last letter of the word "Internationalization". Software I18N is defined as the set of processes, tools, coding techniques and procedures used to write a software program that supports all of the language requirements and country conventions of all of the countries where the SW program will be used. For instance, writing an I18N-ready application that supports the writing systems for Japanese and English, including the special sorting for the different alphabets. The user interface of this I18N-ready application is still in English, but the base code supports the language requirements for both English and Japanese.
L10N is the industry acronym for Localization: there are 10 letters between the first and last letter of the word "Localization". Software L10N is the implementation of an I18N program for a specific country/locale. In the above example, the Japanese version of the I18N-ready application is the specific Japanese localization, where the user's interface is in Japanese.
Goal of I18N: The main advantage of writing software that is I18N-ready from the start is that there is no need to re-architect the base code every time we need to support another country/locale's requirements. This translates into faster localization development/testing cycles.
What's wrong with our current development process? Usually there are no spare cycles to include support for other languages and country requirements when developing the base code. I18N and L10N requirements are unknown at the time when developers start writing code, or information on such requirements arrives too late in the development process. For I18N development guidelines, see the Courseware page.
What's wrong with our coding techniques? The main problem is that not many people are familiar with the subject of internationalization engineering and design. Academic institutions in general haven't contributed much to the development of the skills required to design, code and test for an international audience.
Use the following link to test in languages other than English - it will convert English letters to other languages, so that you can do L10N without knowing the other languages: http://babelfish.altavista.com/tr
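The I18N step the passage emphasizes - isolating all message strings so a locale catalog can supply translations without touching the base code - can be sketched with Python's standard gettext module. The Japanese "catalog" below is a hypothetical in-memory stand-in for a real compiled .mo file, and the sample message is invented for illustration.

```python
import gettext

class JapaneseDemo(gettext.NullTranslations):
    """Hypothetical stand-in for a compiled ja_JP .mo catalog."""
    _msgs = {"File not found": "ファイルが見つかりません"}

    def gettext(self, message):
        # Return the translation if we have one, else fall back to English
        return self._msgs.get(message, message)

# I18N-ready build: NullTranslations returns the msgid itself, i.e. the
# English UI described above ("the base code supports both, UI still English").
english = gettext.NullTranslations()
# L10N build: the same base code, only the catalog changes.
japanese = JapaneseDemo()

print(english.gettext("File not found"))   # File not found
print(japanese.gettext("File not found"))  # ファイルが見つかりません
```

The point of the sketch is that producing the Japanese localization required no change to the calling code - exactly the "no re-architecting per locale" benefit the text claims for I18N-ready software.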

Verification - It has to do with the software developers asking themselves, "Are we building the product right?" It is the process of determining, correctly, whether a system meets the conditions set forth at the beginning of, or during previous activities of, the software development life cycle. These conditions are set forth in software requirements, which are usually formally documented. The standard for software requirements documentation is ANSI/IEEE Standard 830. Verification typically involves reviews and meetings to evaluate documents, plans, code, requirements and specifications. This can be done with checklists, issues lists, walkthroughs, and inspection meetings.
Validation - Here the question "Are we building the right product?" is addressed. It is the process of evaluating a system to determine whether it satisfies the specified requirements and meets customer needs. Validation typically involves actual testing and takes place after verifications are completed.

For more info visit: http://en.wikipedia.org/wiki/Verification_and_validation
Proxy Server: http://www.webopedia.com/TERM/P/proxy_server.html

Quality Control vs Quality Assurance:
DEFINITION OF QA/QC
Quality Control (QC) is a system of routine technical activities to measure and control the quality of the inventory as it is being developed. The QC system is designed to: (i) provide routine and consistent checks to ensure data integrity, correctness, and completeness; (ii) identify and address errors and omissions; (iii) document and archive inventory material and record all QC activities. QC activities include general methods such as accuracy checks on data acquisition and calculations, and the use of approved standardised procedures for emission calculations, measurements, estimating uncertainties, archiving information and reporting. Higher-tier QC activities include technical reviews of source categories, activity and emission factor data, and methods.
Quality Assurance (QA) activities include a planned system of review procedures conducted by personnel not directly involved in the inventory compilation/development process. Reviews, preferably by independent third parties, should be performed on a finalised inventory following the implementation of QC procedures. Reviews verify that data quality objectives were met, ensure that the inventory represents the best possible estimates of emissions and sinks given the current state of scientific knowledge and data available, and support the effectiveness of the QC programme.
Quality control refers to quality-related activities associated with the creation of project deliverables. Quality control is used to verify that deliverables are of acceptable quality and that they are complete and correct. Examples of quality control activities include deliverable peer reviews and the testing process.
Quality assurance refers to the process used to create the deliverables, and can be performed by a manager, client, or even a third-party reviewer. Examples of quality assurance include process checklists and project audits.
If your project gets audited, for instance, an auditor might not be able to tell whether the content of a specific deliverable is acceptable (quality control). However, the auditor should be able to tell whether the deliverable seems acceptable based on the process used to create it (quality assurance). That's why project auditors can perform a quality assurance review on your project even if they do not know the specifics of what you are delivering.
For more info visit: http://www.ipcc-nggip.iges.or.jp/public/gp/english/8_QA-QC.pdf

EFI Req: We are looking for an independent, detail-oriented and experienced Sr. QA engineer who can work side by side with the development engineering team to test and verify new SW features. The ideal candidate will have extensive experience and knowledge in using operating systems, Windows and Mac, and in testing client applications and drivers. Duties include reviewing engineering specifications and providing feedback, creating and executing test plans and test matrices, reporting defects to engineers, and tracking defect status. Experience from the printing industry is fundamental, and knowledge of printing workflow is a plus.
Technical requirements:
- Strong background in a client-server testing environment, including UI testing
- Experience with all operating systems, both Windows and Mac
- Good understanding of network protocols and services
- Good knowledge of Office, graphic design and web applications
- Basic user-level knowledge of Linux/UNIX is recommended
- Programming and scripting experience is a plus
Other requirements:
- Good writing and verbal communication skills
- Self-motivated, detail-oriented and able to work with minimal or no supervision
- Strong cross-functional communication and interpersonal skills
- The ability to resolve conflicts and make sound decisions
- Ability to get things done by working proactively; a good team player
Educational requirements: BS degree in Computer Science or a related field, combined with 2-3+ years of related experience in the field of software testing.
EFI understands that, from top management to more entry-level personnel, employees drive the success of a company. To this end, EFI strives to provide a work environment that offers challenging and rewarding opportunities, while encouraging open communication, teamwork, and diversity. Send your resume, cover letter and salary requirements (including req # 3685 in the subject line) to careers@efi.com.

Kaiser Requirement: We would advise you to focus on the following, based on our understanding of our client's requirement and the nature of the interview conducted on an earlier occasion:
- High-level idea and understanding of Java/J2EE technology - JSP, EJBs, Servlets. Understanding of Java code is essential from the white-box testing perspective.
- Security-related questions (buffer overflow, authentication, authorization, GET and POST - when do you use GET vs POST, secured encryption 128/256-bit, SQL injection - what it is and when it is used, cross-site scripting, SSL, HTTP and HTTPS, etc.)
- QA-related questions - WinRunner, LoadRunner, TestDirector etc.
- Other: scripting in Perl, some database-related questions.

We have an urgent requirement for a Security Test Engineer for one of our Fortune medical insurance clients at Walnut Creek, CA. Title: Security Test Engineer. Duration: 6+ months. Location: Walnut Creek, CA. Interview: immediate. Job description: extensive security testing and development experience; web development experience; white-box testing experience; create security hacking test cases; assure that installation of SSL certificates was done correctly; ability to evaluate source-code changes and developer JUnit tests. Previous experience in the following would be highly desirable: threat-level (vulnerability) assessment and modeling, web security threats and countermeasures, knowledge of cryptography standards (including government-issued), ability to evaluate application- and enterprise-level security models including a secure development framework, cookie management. The candidates must have deep knowledge of the following: TCP/IP, intrusion detection systems and associated data output management, firewalls, routers and load balancers, some programming skills, and a multitude of operating systems, web servers, and databases. Proficient knowledge of: WebSphere, Apache, CVS, Java, Struts, Oblix, LDAP, SSL or X.509 certificates. If you are interested in pursuing this opportunity, please send your updated resume with contact numbers, availability and your expected pay rate so that we can take it forward to the next level.
---------------------------------------------------------------------------------------------------
Single Sign-On
When a user requests a resource from a server, the server collects the access-control lists (ACLs) associated with that resource and evaluates them. If the server's evaluation of the ACLs requires identification of the user, the server requests client authentication, in the form of either a name and password or a digital certificate presented according to the Secure Sockets Layer (SSL) protocol.
After the server has established the user's identity, optionally including user/group information stored in a Lightweight Directory Access Protocol (LDAP) directory, it continues its evaluation of the ACLs and authorizes or denies access to the requested information according to the user's access privileges. Figure 1 illustrates the basic elements of the ACL evaluation process. Netscape's approach to single sign-on replaces client authentication based on passwords sent over the network with client authentication based on the Secure Sockets Layer (SSL) and certificates. This approach has several benefits for users and administrators:

Ease of use: Users can log in once and get authenticated access to all servers for which they are authorized, without being interrupted by repeated requests for passwords.
Password limited to local machine: To log in, the user types a single password that protects the private-key database on the local machine. Passwords are not sent over the network.
Simplified management: Administrators can control who is allowed access to which servers by controlling the lists of certificate authorities maintained by client and server software. These lists are shorter than lists of user names and passwords and don't change as often.
Access control not affected: Single sign-on involves replacing client authentication mechanisms, not access-control mechanisms. Administrators don't need to change existing ACLs that may have been originally set up to work with basic password authentication.

Security Testing: Security testing is a system test performed to verify that the protection mechanisms built into a system will in fact protect it from improper entry/connections.
Security Hacking test cases:
1. Verify the elementary step of web security: set up directories and URLs pointed to home.xyz or index.xyz
2. Verify SSL is installed and configured properly (HTTPS)
3. Verify SSL performs the secure transaction
4. Verify that the system doesn't allow invalid user names and passwords, and allows valid user logins
5. Verify that after a set number of failed login attempts, the user account is locked out
6. Verify that the server locks are working properly and that every transaction is tracked in the log file
7. Verify the log is tracking unsuccessful login attempts
8. Verify ActiveX can be disabled to avoid security hacking
9. Verify JavaScript can be disabled to avoid script hacking
10. Verify illegal connections are not made and that one user cannot see any other person's info
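Parts of the checklist above (cases 4 and 5) can be automated. Below is a minimal sketch: the LoginService class is a hypothetical stand-in for the system under test, not a real product API, and the five-attempt threshold is an assumed policy.

```python
# Minimal sketch of automating test cases 4 and 5 above (reject bad
# credentials; lock the account after repeated failures). LoginService is a
# hypothetical stand-in for the system under test.
MAX_ATTEMPTS = 5

class LoginService:
    def __init__(self, valid_users):
        self.valid_users = valid_users   # {username: password}
        self.failures = {}               # username -> consecutive failed attempts
        self.locked = set()

    def login(self, user, password):
        if user in self.locked:
            return "LOCKED"
        if self.valid_users.get(user) == password:
            self.failures[user] = 0
            return "OK"
        self.failures[user] = self.failures.get(user, 0) + 1
        if self.failures[user] >= MAX_ATTEMPTS:
            self.locked.add(user)
        return "DENIED"

# Test: invalid credentials are rejected, and after MAX_ATTEMPTS consecutive
# failures the account stays locked even for the correct password.
svc = LoginService({"alice": "s3cret"})
assert svc.login("alice", "wrong") == "DENIED"
for _ in range(MAX_ATTEMPTS - 1):
    svc.login("alice", "wrong")
assert svc.login("alice", "s3cret") == "LOCKED"
```

In a real engagement the same assertions would be driven against the live login form over HTTP rather than an in-process object.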

Buffer Overflow: A buffer overflow occurs when a program or process tries to store more data in a buffer (temporary data storage area) than it was intended to hold. Since buffers are created to contain a finite amount of data, the extra information - which has to go somewhere - can overflow into adjacent buffers, corrupting or overwriting the valid data held in them. Although it may occur accidentally through programming error, buffer overflow is an increasingly common type of security attack on data integrity. In buffer overflow attacks, the extra data may contain code designed to trigger specific actions, in effect sending new instructions to the attacked computer that could, for example, damage the user's files, change data, or disclose confidential information. Buffer overflow attacks are said to have arisen because the C programming language supplied the framework, and poor programming practices supplied the vulnerability.
Authentication, Authorization, and Accounting (AAA): Authentication, authorization, and accounting (AAA) is a term for a framework for intelligently controlling access to computer resources, enforcing policies, auditing usage, and providing the information necessary to bill for services. These combined processes are considered important for effective network management and security.

As the first process, authentication provides a way of identifying a user, typically by having the user enter a valid user name and valid password before access is granted. The process of authentication is based on each user having a unique set of criteria for gaining access. The AAA server compares a user's authentication credentials with the user credentials stored in a database. If the credentials match, the user is granted access to the network. If the credentials are at variance, authentication fails and network access is denied.
Following authentication, a user must gain authorization for doing certain tasks. After logging into a system, for instance, the user may try to issue commands. The authorization process determines whether the user has the authority to issue such commands. Simply put, authorization is the process of enforcing policies: determining what types or qualities of activities, resources, or services a user is permitted. Usually, authorization occurs within the context of authentication. Once you have authenticated a user, they may be authorized for different types of access or activity.
The final plank in the AAA framework is accounting, which measures the resources a user consumes during access. This can include the amount of system time or the amount of data a user has sent and/or received during a session. Accounting is carried out by logging session statistics and usage information and is used for authorization control, billing, trend analysis, resource utilization, and capacity planning activities.
Authentication, authorization, and accounting services are often provided by a dedicated AAA server, a program that performs these functions. A current standard by which network access servers interface with the AAA server is the Remote Authentication Dial-In User Service (RADIUS).
Aut"entication: When a particular resource has been protected usin* basic authentication3 6pache sends a JI# 6uthentication Required header with the response to the request3 in order to notif& the client that user credentials must be supplied in order for the resource to be returned as requested. Bpon receivin* a JI# response header3 the clientAs browser3 if it supports basic authentication3 will ask the user to suppl& a username and password to be sent to the server. If &ou are usin* a *raphical browser3 such as 4etscape or Internet /Cplorer3 what &ou will see is a boC which pops up and *ives &ou a place to t&pe in &our username and password3 to be sent back to the server. If the username is in the approved list3 and if the password supplied is correct3 the resource will be returned to the client. 8ecause the RTTP protocol is stateless3 each request will be treated in the same wa&3 even thou*h the& are from the same client. That is3 ever& resource which is requested from the server will have to suppl& authentication credentials over a*ain in order to receive the resource. Fortunatel&3 the browser takes care of the details here3 so that &ou onl& have to t&pe in &our username and password one time per browser session ) that is3 &ou mi*ht have to t&pe it in a*ain the neCt time &ou open up &our browser and visit the same web site. Aut"ori6ation: 6uthori.ation is the process of *ivin* someone permission to do or have somethin*. In multi)user computer s&stems3 a s&stem administrator defines for the s&stem which users are allowed access to the s&stem and what privile*es of use :such as access to which file directories3 hours of access3 amount of allocated stora*e space3 and so forth;. 6ssumin* that someone has lo**ed in to a computer operatin* s&stem or application3 the s&stem or application ma& want to identif& what resources the user can be *iven durin* this session. 
Thus, authorization is sometimes seen as both the preliminary setting up of permissions by a system administrator and the actual checking of those permission values when a user is getting access.
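The 401/Authorization exchange described above can be illustrated in a few lines. This is a sketch of the header a browser constructs after receiving 401; the username and password are made up for illustration.

```python
import base64

# After a "401 Authentication Required" response, the browser joins the
# credentials as "username:password", base64-encodes them, and resends the
# request with an Authorization header. Illustrative credentials only.
def basic_auth_header(username, password):
    token = base64.b64encode(f"{username}:{password}".encode()).decode()
    return {"Authorization": f"Basic {token}"}

hdr = basic_auth_header("alice", "s3cret")
```

Note that base64 is an encoding, not encryption: the credentials are trivially reversible, which is why basic authentication is normally paired with HTTPS.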

GET and POST: The HTML specifications technically define the difference between "GET" and "POST" so that the former means that form data is to be encoded (by a browser) into a URL, while the latter means that the form data is to appear within a message body. But the specifications also give the usage recommendation that the "GET" method should be used when the form processing is "idempotent", and in those cases only. As a simplification, we might say that "GET" is basically for just getting (retrieving) data, whereas "POST" may involve anything, like storing or updating data, or ordering a product, or sending e-mail.
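The encoding difference described above can be sketched with the standard library; the URL and form fields below are made up for illustration.

```python
from urllib.parse import urlencode

# Sketch of where form data travels in each method. Example fields only.
form = {"q": "security testing", "page": "2"}

# GET: the form data is encoded into the URL's query string (idempotent reads).
get_url = "http://example.com/search?" + urlencode(form)

# POST: the same encoding travels in the request body instead of the URL,
# which suits actions with side effects (orders, updates, e-mail).
post_body = urlencode(form).encode()

assert get_url == "http://example.com/search?q=security+testing&page=2"
```

A practical consequence: GET parameters end up in browser history and server logs, so sensitive data should go in a POST body (and over HTTPS).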

HTTP (port 80): HyperText Transfer Protocol (HTTP), the actual communications protocol that enables Web browsing. The HTTP protocol is used to transmit and receive web pages, as well as to carry some server workings and scripting technologies.
HTTPS (port 443): Browsers can connect to web servers over http and over https. Connecting over https involves entering https:// before the domain name or URL and, provided the web server has an SSL certificate, the connection will be secured and encrypted.

SQL Injection: SQL injection is a technique for exploiting web applications that use client-supplied data in SQL queries without stripping potentially harmful characters first. Despite being remarkably simple to protect against, there is an astonishing number of production systems connected to the Internet that are vulnerable to this type of attack. The objective of this paper is to educate the professional security community on the techniques that can be used to take advantage of a web application that is vulnerable to SQL injection, and to make clear the correct mechanisms that should be put in place to protect against SQL injection and input validation problems in general. "SQL injection" is a subset of the unverified/unsanitized user input vulnerability ("buffer overflows" are a different subset), and the idea is to convince the application to run SQL code that was not intended. If the application is creating SQL strings naively on the fly and then running them, it's straightforward to create some real surprises.
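The attack, and the standard defense (parameterized queries), can be demonstrated end to end with an in-memory SQLite table. The schema and the classic ' OR '1'='1 payload are illustrative, not taken from any real system.

```python
import sqlite3

# Demonstration of the injection described above, using an in-memory table.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE users (name TEXT, password TEXT)")
db.execute("INSERT INTO users VALUES ('alice', 's3cret')")

payload = "anything' OR '1'='1"

# Naive string concatenation: the payload rewrites the query's logic
# (AND binds tighter than OR), so it "authenticates" without a password.
naive = f"SELECT * FROM users WHERE name='alice' AND password='{payload}'"
assert db.execute(naive).fetchall() != []   # attack succeeds

# Parameterized query: the driver treats the payload as plain data.
safe = "SELECT * FROM users WHERE name=? AND password=?"
assert db.execute(safe, ("alice", payload)).fetchall() == []   # attack fails
```

The same principle applies to every database driver: never splice user input into SQL text; always bind it as a parameter.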

Secured Encryption: Encryption, or information scrambling, technology is an important security tool. Properly applied, it can provide a secure communication channel even when the underlying system and network infrastructure is not secure. This is particularly important when data passes through shared systems or network segments where multiple people may have access to the information. In these situations, sensitive data - and especially passwords - should be encrypted in order to protect it from unintended disclosure or modification. Encryption is a procedure that involves a mathematical transformation of information into scrambled gobbledygook, called "cipher text." The computational process (an algorithm) uses a key - actually just a big number associated with a password or pass phrase - to convert plain text into cipher text made up of numbers or strings of characters. The resulting encrypted text is decipherable only by the holder of the corresponding key. This deciphering process is also called decryption.
Intrusion Detection Systems (ID): ID stands for Intrusion Detection, which is the art of detecting inappropriate, incorrect, or anomalous activity. ID systems that operate on a host to detect malicious activity on that host are called host-based ID systems, and ID systems that operate on network data flows are called network-based ID systems. Sometimes, a distinction is made between misuse and intrusion detection. The term intrusion is used to describe attacks from the outside, whereas misuse is used to describe an attack that originates from the internal network. However, most people don't draw such distinctions. The most common approaches to ID are statistical anomaly detection and pattern-matching detection.
Security Hacking: http://www.devshed.com/c/a/Security/Hacking-Your-Own-Site/
The purpose of this article is not to teach you how to hack sites, but to show you some scenarios that may reveal how vulnerable your existing site may be, or will hopefully help you prevent any future sites from having these vulnerabilities. Unfortunately, hacking today is a fact of life. But not all hackers are bad hackers; in fact the term hacker can describe anyone who is enthusiastically interested in computers or programming. The original hackers, the first ever known, are reported to be a group of model railroad enthusiasts who, sometime in the 1950's, were given some old telephony equipment as a donation. Not wanting to waste this equipment, they 'hacked' or modified it for use in their railroad system and were able to 'dial in' track-switching commands using recycled dialers and other parts of the phone equipment. So the original term hacking also meant to modify or exploit a previously unknown use of something.
Punch-card computer systems were soon the subject of hacking, and programmers delighted in finding ways of doing the same things with fewer punch cards. It was shortly after this, sometime in the early seventies, that malicious hacking began to come about in the form of phreaking: hacking into telephone networks and having telephone usage charged to other people, or not at all. Today the terms hacking and hackers have many connotations, the best known being of course people who exploit software and/or the Internet for personal gain or fun. These hackers are sometimes referred to as black-hat hackers, or crackers, and those that simply use software to hack, with no real programming knowledge, are called script-kiddies. There is also an increasing number of so-called white-hat or ethical hackers who, among other things, use their skills to test web applications for weaknesses and to help develop security in web applications and software. Often, people who look at open source software and attempt to refine and add to its existing features are referred to as hackers. The purpose of this article is not to teach you how to hack sites successfully; I won't be teaching you how to steal credit card numbers, bring down Hotmail or reverse-engineer the latest release of Windows. I'm simply going to show you a couple of scenarios that may reveal how vulnerable your existing site may be, or will hopefully help you prevent any future sites from having these vulnerabilities. Don't be fooled, however; the iron-clad security needed by some sites such as online banks requires the highest degree of professional assistance. Countless books have been written on the subject of hacking, so there is no possible way for me to discuss all known types of attack. There are some techniques you can try out to assess the vulnerability of your own site and applications - techniques that, once learned, you should employ as part of the creative process in every site you construct.
W"at is CIPAA,

White Paper by Tom Stevens, President and CEO, ESQ, Inc. The Health Insurance Portability and Accountability Act of 1996 (HIPAA) was the result of efforts by the Clinton Administration and congressional healthcare reform proponents to reform healthcare. The goals and objectives of this legislation are to streamline industry inefficiencies, reduce paperwork, make it easier to detect and prosecute fraud and abuse, and enable workers of all professions to change jobs, even if they (or family members) had pre-existing medical conditions.
The HIPAA legislation had four primary objectives:
Assure health insurance portability by eliminating job-lock due to pre-existing medical conditions
Reduce healthcare fraud and abuse
Enforce standards for health information
Guarantee security and privacy of health information
The HIPAA legislation is organized as follows:
Title I: Guarantees health insurance access, portability and renewal - guarantees coverage and renewal, eliminates some pre-existing condition exclusions, prohibits discrimination based on health status
Title II: Preventing healthcare fraud and abuse - fraud and abuse controls, Administrative Simplification (AS) provisions (Subtitle), Medical Liability Reform
Title III: Medical Savings Accounts - health insurance tax deduction for the self-employed
Title IV: Enforcement of group health plan provisions
Title V: Revenue offset provisions
However, when looking at HIPAA it is important to remember that the actual HIPAA rules and detailed requirements that the healthcare industry has to follow stem from the Administrative Simplification (AS) provisions of HIPAA, which fall under Title II (Fraud and Abuse) of the HIPAA act itself. These provisions are intended to reduce the costs and administrative burdens of healthcare by making possible the standardized, electronic transmission of administrative and financial transactions that are currently executed manually and on paper.
"tt#://www$ra#i&ssl$co%/ssl(certificate(su##ort/ssl(ter%s$"t% SS1: SS1 is short for Secure Soc:ets 1a er . The -- protocol was developed b& 4etscape and is supported b& all popular web browsers such as Internet /Cplorer3 4etscape3 65 and 5pera $ For SS1 to wor: a SS1 certificate issue& + a !ertification Aut"orit (.erisign) %ust +e installe& on t"e we+ server) SS1 can t"en +e use& to encr #t t"e &ata trans%itte& :secure -- transactions; +etween a +rowser an& we+ server :and vice versa;. 5rowsers in&icate a

SS1 secure& session + c"anging t"e "tt# to "tt#s an& &is#la ing a s%all #a&loc:$ We+site visitors can clic: on t"e #a&loc: to view t"e SS1 certificate$ #2O / 2LN bit -- : -28 / 2GH +it SS1 is also referre& to as strong SS1 securit $ The #2O / 2LN bit tells users that the si.e of the encr&ption ke& used to encr&pt the data bein* passed between a web browser and web server is #2O / 2LN bits in si.e :mathematicall& this would be 2 to the power of #2O / 2LN ;. 8ecause the si.e of the #2O / 2LN bit ke& is lar*e it is computationall& unfeasible to crack and hence is known as stron* -- securit&. !SR '-R is short for !ertificate Signing Re'uest. W"en a##l ing for a SS1 certificate t"e first stage is to create a !SR on our we+ server$ T"is involves telling our we+ server so%e &etails a+out our site an& our organi6ation) it will t"en out#ut a !SR file$ T"is file will +e nee&e& w"en ou a##l for our SS1 certificate$ SS1 Be The -- Ze&3 also known as a Private Ze&3 is the secret ke& associated with &our -- certificate and should reside securel& on &our web server. When &ou create a '-R &our web server will also create a -- Ze&. When &our -- certificate has been issued3 &ou will need to install the -certificate onto &our web server ) which effectivel& marries the -- certificate to the -- ke&. 6s the -- ke& is onl& ever used b& the web server it is a means of provin* that the web server can le*itimatel& use the -- certificate. SS1 Port / "tt#s Port 6 port is the <lo*ical connection place< where a browser will connect to a web server. The -port or the https port is the port that &ou would assi*n on &our web server for -- traffic. The industr& standard port to use is port DD3 ) most networks and firewalls eCpect port DD3 to +e use& for SS1$ Rowever it is possible to name other -- ports / https ports to be used if necessar&. The standard port used for non)secure http traffic is 87. 
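The well-known port assignments mentioned above can be looked up programmatically. A minimal sketch, assuming the operating system ships a standard services database (e.g. /etc/services), which the standard library consults:

```python
import socket

# Look up well-known ports from the local services database. Results assume
# a standard /etc/services-style mapping is present on the machine.
assert socket.getservbyname("http", "tcp") == 80
assert socket.getservbyname("https", "tcp") == 443
```

The reverse lookup, socket.getservbyport(443, "tcp"), returns the service name for a port number.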
;W"at is !ross Site Scri#ting,; !ross site scri#ting :also known as X--; occurs w"en a we+ a##lication gat"ers %alicious &ata fro% a user$ T"e &ata is usuall gat"ere& in t"e for% of a " #erlin: w"ic" contains %alicious content wit"in it$ T"e user will %ost li:el clic: on t"is lin: fro% anot"er we+site) instant %essage) or si%#l @ust rea&ing a we+ +oar& or e%ail %essage$ Bsuall& the attacker will encode the malicious portion of the link to the site in R/X :or other encodin* methods; so the request is less suspicious lookin* to the user when clicked on. 6fter the data is collected b& the web application3 it creates an output pa*e for the user containin* the malicious data that was ori*inall& sent to it3 but in a manner to make it appear as valid content from the website. %an& popular *uestbook and forum pro*rams allow users to submit posts with html and 0avascript embedded in them$ If for e=a%#le I was logge& in as ;@o"n; an& rea& a %essage + ;@oe; t"at containe& %alicious @avascri#t in it) t"en it %a +e #ossi+le for ;@oe; to "i@ac: % session @ust + rea&ing "is +ulletin +oar& #ost$ Further details on how attacks like this are accomplished via <coo:ie t"eft< are eCplained in detail below.

T"e /ifferences +etween >ava an& >avaScri#t: http://www.0sr.communitech.net/difference.htm %an& people use the words 1ava and 1ava-cript interchan*eabl&3 or confuse the two. This is how 4etscape eCplain the differences on their Web site: The 1ava-cript lan*ua*e resembles 1ava3 but without 1avaAs static t&pin* and stron* t&pe checkin*. 1ava-cript supports most of 1avaAs eCpression s&ntaC and basic control flow constructs.

In contrast to Java's compile-time system of classes built by declarations, JavaScript supports a run-time system based on a small number of data types representing numeric, Boolean, and string values. JavaScript has a simple instance-based object model that still provides significant capabilities. JavaScript also supports functions, again without any special declarative requirements. Functions can be properties of objects, executing as loosely typed methods. The following comparison contrasts JavaScript and Java.
JavaScript vs. Java:
JavaScript: Interpreted (not compiled) by the client. Java: Compiled on the server before execution on the client.
JavaScript: Object-based; code uses built-in, extensible objects, but no classes or inheritance. Java: Object-oriented; applets consist of object classes with inheritance.
JavaScript: Code integrated with, and embedded in, HTML. Java: Applets distinct from HTML (accessed from HTML pages).
JavaScript: Variable data types not declared (loose typing). Java: Variable data types must be declared (strong typing).
JavaScript: Dynamic binding; object references checked at run-time. Java: Static binding; object references must exist at compile-time.
Both: Cannot automatically write to the hard disk.
What is SSL? http://www.ccwebhost.com/support/faqs/secure-server-faq.htm#WhatIs
Secure Sockets Layer, SSL, is the standard security technology for creating an encrypted link between a web server and a browser. This link ensures that all data passed between the web server and browser remains private and integral. SSL is an industry standard and is used by millions of websites in the protection of their online transactions with their customers. In order to be able to generate an SSL link, a web server requires an SSL Certificate.
By convention, Web pages that require an SSL connection start with https: instead of http:.
What Are the Benefits of Active Directory?
Totally integrated with Windows 2000 Server, Active Directory gives network administrators, developers, and users access to a directory service that simplifies management tasks, strengthens network security, and makes use of existing systems through interoperability.
Strengthens Security:
It improves password security and management, by providing single sign-on to network resources with integrated, high-powered security services that are transparent to end users.
It ensures desktop functionality, by locking down desktop configurations and preventing access to specific client machine operations, such as software installation or registry editing, based on the role of the end user.
It speeds e-business deployment, by providing built-in support for secure Internet-standard protocols and authentication mechanisms such as Kerberos, public key infrastructure (PKI) and Lightweight Directory Access Protocol (LDAP) over Secure Sockets Layer (SSL).
It tightly controls security, by setting access control privileges on directory objects and the individual data elements that make them up.
In addition, Active Directory natively supports a fully integrated public key infrastructure and Internet secure protocols, such as LDAP over SSL, to let organizations securely extend selected directory information beyond their firewall to extranet users and e-commerce customers. In this way, Active Directory strengthens security and speeds deployment of e-business by letting

administrators use the same tools and processes to manage access control and user privileges across internal desktop users, remote dial-up users, and external e-commerce customers.
Conclusion: Active Directory services within Windows 2000 provide a focal point for managing and securing Windows user accounts, clients, servers, and applications. In addition, Active Directory is designed to integrate with the non-Windows directories within existing systems, applications, and devices to provide a single place and a consistent way of managing an entire network infrastructure. In this way, Active Directory increases the value of an organization's existing investments and lowers the overall costs of computing by reducing the number of places where administrators need to manage directory information.
LDAP: LDAP is the Lightweight Directory Access Protocol. LDAP is designed to be a standard way of providing access to directory services. A directory service is just a database that has been designed to be read from more than it is designed to be written to. LDAP provides access to directory information like company phone/email directories. It is also being used to act as a gateway to other electronic information systems, as a meta-directory, by companies like Ford and Home Depot to deploy their intranet/extranet systems.
Kerberos: A protocol that defines how clients interact with a network authentication service. Clients obtain tickets from the Kerberos Key Distribution Center (KDC), and they present these tickets to servers when connections are established. Kerberos tickets represent the client's network credentials. When the LAN/WAN is enabled with the Kerberos feature, others cannot hijack, misuse or track passwords and other important/secret info.
NTLM: Windows NT LAN Manager (NTLM). NTLM is the authentication protocol used on networks that include systems running the Windows NT operating system, and on stand-alone systems.
WebDAV:
Web-based Distributed Authoring and Versioning (WebDAV) is a file access protocol described in Extensible Markup Language (XML). It uses HTTP and runs over existing Internet infrastructure - for example, firewalls and routers.
Virtual Private Network (VPN): A VPN provides users with a way to securely access private information on their corporate network over a shared public network infrastructure such as the Internet. There are 3 types of VPNs: 1. Intranet VPN 2. Remote Access VPN 3. Extranet VPN
1. Intranet VPN: An intranet is a business network that is internal to a company. An Intranet VPN uses VPN technology to link different corporate network sites together through the shared Internet infrastructure.
2. Remote Access VPN: It enables remote business users to securely access the company's intranet through the shared Internet infrastructure.

3. Extranet VPN: It is a network that allows controlled access from external networks, such as from customers, suppliers, and partners. It leverages the shared Internet infrastructure to create the extranet.

W"at is /C!P, +R'P stands for </ na%ic Cost !onfiguration Protocol<. What is +R'PAs purpose! /C!P?s #ur#ose is to ena+le in&ivi&ual co%#uters on an IP networ: to e=tract t"eir configurations fro% a server :the A+R'P serverA; or servers3 in particular3 servers that have no eCact information about the individual computers until the& request the information. The overall purpose of this is to reduce the work necessar& to administer a lar*e IP network. The most si*nificant piece of information distributed in this manner is the IP address. W"at is t"e /4S, +4- stands for the /o%ain 4a%e Service. It is a set of software an& #rotocols t"at translate a &o%ain na%e li:e www$co%#an $co% into an IP a&&ress suc" as -J2$-H8$7$- . 6 request for such a translation is called a +4- quer&. Web browsers like 4etscape and Internet /Cplorer *enerate queries whenever the& browse addresses like http://www.compan&.com. W"at is IMAP) e=actl , K Is one of t"e services for 1/AP an& use& to sen&/receive e%ail I%6P :Internet Message Access Protocol; is a mature and popular Internet standard for email. It forms the blueprint for our new mail s&stem. 8ecause I%6P is a standard rather than a sin*le pro*ram3 an&one can create software that will work with it. and man& have. WeYve chosen a few of these pro*rams to recommend and support. /Cpert users ma& want to use their own favorite client pro*rams and are free to do so. I%6P stands for Internet %essa*e 6ccess Protocol. It is a %et"o& of accessing electronic %ail t"at are :e#t on a %ail server . In other words3 it permits a <client< email pro*ram to access remote messa*e stores as if the& were local. 
For example, email stored on an IMAP server can be manipulated from a desktop computer at home, a workstation at the office, and a notebook computer while traveling, without the need to transfer messages or files back and forth between these computers. IMAP's ability to access messages (both new and saved) from more than one computer has become extremely important as reliance on electronic messaging and use of multiple computers increase, but this functionality cannot be taken for granted: the widely used Post Office Protocol (POP) works best when one has only a single computer, since it was designed to support "offline" message access, wherein messages are downloaded and then deleted from the mail server. This mode of access is not compatible with access from multiple computers, since it tends to sprinkle messages across all of the computers used for mail access. Thus, unless all of those machines share a common file system, the offline mode of access that POP was designed to support effectively ties the user to one computer for message storage and manipulation.
What is TCP/IP? TCP/IP stands for Transmission Control Protocol / Internet Protocol. TCP/IP is a network protocol used on LANs, WANs and the Internet. TCP/IP is the name given to the collection of networking protocols that have been used to construct the global Internet.
FTP: FTP (File Transfer Protocol) allows a person to transfer files between two computers, generally connected via the Internet. If your system has FTP and is connected to the Internet, you can access very large numbers of files available on a great number of computer systems.
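The name-to-address translation that DNS performs (described above) can be exercised directly from the standard library. The sketch below resolves the loopback name so it works without network access; resolving a real domain uses the same call.

```python
import socket

# DNS-style name resolution: translate a host name into an IPv4 address.
# "localhost" is used so no external network or resolver is required.
ip = socket.gethostbyname("localhost")
assert ip.startswith("127.")   # loopback addresses live in 127.0.0.0/8
```

socket.getaddrinfo() is the more general form, covering IPv6 and service/port resolution in one call.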

SMB: SMB (Server Message Block) is a protocol for sharing files, printers, serial ports, and communications abstractions such as named pipes and mail slots between computers. SMB is a client-server, request-response protocol. One exception: when the client has requested opportunistic locks (oplocks) and the server subsequently has to break an already granted oplock because another client has requested a file open with a mode that is incompatible with the granted oplock, the server sends an unsolicited message to the client signalling the oplock break.
SMTP: SMTP (Simple Mail Transfer Protocol) is a TCP/IP protocol used in sending and receiving mail. It is the Internet's standard host-to-host mail transport protocol, defined by RFC 821.
POP3: Post Office Protocol version 3. Basically, POP3 is intended to permit a workstation (client) to dynamically access a mailbox on a server and download mail messages. It's among the most simplistic Internet protocols around.
SNMP: Simple Network Management Protocol is a protocol for Internet network management services. It is formally specified in a series of related RFC documents.
MPLS (Multi Protocol Label Switching), NNTP (Network News Transfer Protocol), NTP (Network Time Protocol)
Well-known ports: 80: HTTP, 443: HTTPS, 21: FTP, 25: SMTP, 389: LDAP
----------------------------------------------------------------------------------------------------
Java 2 Platform, Enterprise Edition (J2EE) defines the standard for developing component-based multitier enterprise applications. J2EE simplifies building enterprise applications that are portable, scalable, and that integrate easily with legacy applications and data. J2EE is also a platform for building and using web services. It incorporates web services standards such as those in the WS-I Basic Profile.
This means that web services in a J2EE-compliant environment can interoperate with web services in non-J2EE environments.
----------------------------------------------------------------------------------------------
Enterprise JavaBeans (EJB) technology is the server-side component architecture for the Java 2 Platform, Enterprise Edition (J2EE) platform. EJB technology enables rapid and simplified development of distributed, transactional, secure and portable applications based on Java technology.
-----------------------------------------------------------------------------------------------
Swing is a GUI toolkit for Java. Swing is one part of the Java Foundation Classes (JFC). Swing includes graphical user interface (GUI) widgets such as text boxes, buttons, split-panes, and tables. Swing gives much fancier screen displays than the raw AWT. Since Swing components are written in pure Java, they run the same on all platforms, unlike the AWT. They are part of the JFC. They support a pluggable look and feel - not by using the native platform's facilities but by roughly emulating them. This means you can get any supported look and feel on any platform. The disadvantage of lightweight components is slower execution; the advantage is uniform behaviour on all platforms.
-----------------------------------------------------------------------------------------------------
An applet is a program written in the Java(TM) programming language that can be included in an HTML page, much in the same way an image is included. When you use a Java technology-

enabled browser to view a pa*e that contains an applet3 the appletAs code is transferred to &our s&stem and eCecuted b& the browserAs 1ava ?irtual %achine :1?%;. ))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))) 6pache -truts :formerl& under the 6pache 1akarta pro0ect3 -truts is now a top level pro0ect; is an open)source framework for developin* 12// web applications. It uses and eCtends the 1ava -ervlet 6PI to encoura*e developers to adopt an %?' architecture. The -truts 'onsole is a FR// standalone 1ava -win* application for developin* and mana*in* -truts)based applications. Wit" t"e Struts !onsole ou can visuall e&it >SP Tag 1i+rar ) Struts) Tiles an& .ali&ator configuration files$ The -truts 'onsole also plu*s into multiple3 popular 1ava I+/s for seamless mana*ement of -truts applications from one central development tool ))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))) !.S ( !oncurrent .ersions S ste%: '?- is a version control s&stem3 an important component of -ource 'onfi*uration %ana*ement :-'%;. Bsin* it3 &ou can record the histor& of sources files3 and documents. It fills a similar role to the free software R'-3 PR'-3 and 6e*is packa*es. '?- is a production qualit& s&stem in wide use around the world3 includin* man& free software pro0ects )))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))) W"ite 5o= Testing: W"ite +o= testing is testing fro% t"e insi&e((tests t"at go in an& test t"e actual #rogra% structure$ 8asis path testin*: ?er& simpl&3 test ever& statement in the pro*ram at least once. HouAll note that the testin* department at F'' chose test cases that did this9 the entire eCecution tree was covered. 8asis path testin* is %64+6T5RH))so much so that there are software products written especiall& to assist in it. 
Profiling: There are a lot of tools — often included with compilers — which show where the CPU is spending most of its time in a program. Naturally, the busiest parts of the program are the ones you want to test most.
Loop tests: Exercise each DO, WHILE, FOR, and other repeating statements several times.
Input tests: As the old saying goes — garbage in, garbage out. If a procedure receives the wrong data, it's not going to work. Each procedure should be tested to make certain that the procedure actually received the data you sent to it. This will spot type mismatches, bad pointers, and other such bugs (these are common!).
White Box Testing - Also known as glass box, structural, clear box and open box testing. A software testing technique whereby explicit knowledge of the internal workings of the item being tested is used to select the test data. Unlike black box testing, white box testing uses specific knowledge of the programming code to examine outputs. The test is accurate only if the tester knows what the program is supposed to do. He or she can then see if the program diverges from its intended goal. White box testing does not account for errors caused by omission, and all visible code must also be readable.
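The basis-path and input-test ideas above can be made concrete with a small example. The function below is invented for illustration; the assertions exercise every branch once (basis path testing) and also feed it bad data (input testing):

```python
def classify_triangle(a, b, c):
    """Classify a triangle by its side lengths; reject bad input."""
    if min(a, b, c) <= 0:
        raise ValueError("sides must be positive")  # input-test target
    if a == b == c:
        return "equilateral"
    if a == b or b == c or a == c:
        return "isosceles"
    return "scalene"

# Basis path testing: one case per branch, so every statement runs at least once.
assert classify_triangle(2, 2, 2) == "equilateral"
assert classify_triangle(2, 2, 3) == "isosceles"
assert classify_triangle(2, 3, 4) == "scalene"

# Input test: garbage in should be rejected, not silently processed.
try:
    classify_triangle(0, 1, 1)
    raise AssertionError("bad input was accepted")
except ValueError:
    pass
```

Four test cases here cover every branch; a coverage tool would report which statements a smaller suite had missed.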

10/24/05
---------------------------------------------------------------------------------------------------------------------------------

http://www.drivershq.com/?content
How to test DRIVERS: Drivers are programs that control a device. Every device, whether it is a printer, disk drive, or keyboard, must have a driver program. Devices such as the video or sound card need to have their drivers installed when the product is installed. Driver Detective can help you identify problem drivers on your system. Our free scan takes only minutes to do. Try it today for free and solve your driver problems in minutes!
Drivers are of different types:
USB Drivers: USB drivers allow your system to talk to a whole range of devices. USB devices include cameras, scanners, printers, USB pocket drives and camcorders as well as many others. USB currently comes in USB 1.1 and 2.0 high speed. Driver Detective can help to identify problem USB drivers. Our extensive database contains detailed information on USB manufacturers such as Adaptec, Epson, Toshiba America, Sony, Motorola and thousands of others. Let Driver Detective help you solve your USB driver dilemma.
Modem Drivers: Modem drivers allow your operating system to talk to your modem and thus allow you to connect to the Internet. Modem drivers need to be kept up to date to avoid security issues. Driver Detective can help to identify problem modem drivers. Our extensive database contains detailed information on modem manufacturers such as Rockwell, Motorola, Lucent, Conexant, US Robotics, PC-Tel, 3Com, HSP, Hayes, Zoom, Cirrus Logic, Atech, Acer and Creative Labs.
Audio Drivers: Audio drivers control how your system interfaces with your sound card. This in turn controls how sound gets to your speakers. Improper or corrupt audio drivers can often be the cause of audio problems. Driver Detective helps to identify problem audio drivers. Our extensive database contains detailed information on audio manufacturers such as Aztech Labs, Creative Labs, Crystal, Diamond Multimedia, Ensoniq, SoundPro and Yamaha. Try the Driver Detective scan today.
It's Free!
Video Drivers: Video drivers allow your operating system to communicate with your graphics card. Video drivers need updating more than any other driver; Driver Detective's top use is for video drivers. Our database contains information on video manufacturers such as 3Dfx, ASUS, ATI, Cirrus Logic, Creative Labs, Diamond Multimedia, Hercules, Matrox, Nvidia, S3, SiS and Trident. Try the Driver Detective scan today. It's Free!
Drivers Workflow: Run Driver Detective scan - Retrieve your scan results - Find your updates - Download the good drivers - Keep your system updated!

Printing Workflow (Xerox FreeFlow)
Digital book printing brings many challenges to printers moving into this rapidly growing market. Jobs need to be entered easily and quickly moved into prepress. Pages and book blocks need to be accurately aligned and proofed. Production has to be fast and affordable. Xerox provides an extensive portfolio of book printing solutions for books, booklets and other bound documents. Use our FreeFlow workflow and partner components, together with a complete portfolio of full color, highlight color and monochrome cut sheet and continuous feed printers, for an end-to-end, automated book production process, from order entry to finishing.
NETWORK protocols: http://www.javvin.com/model.html (At the end of the page you will find a PDF file) - Table of Contents

Overview of Network Communication Architecture and Protocols
TCP/IP Protocols
Security and VPN Protocols
Voice over IP (VOIP) Protocols
Wide Area Network (WAN) Protocols
Local Area Network (LAN) Protocols
Metropolitan Area Network (MAN) Protocols
Storage Area Network (SAN) Protocols
ISO Protocols in the OSI 7 Layers Reference Model
Cisco Protocols
Novell NetWare and Protocols
IBM Systems Network Architecture (SNA) and Protocols
AppleTalk: Apple Computer Protocols Suite
DECnet, Microsoft and Xerox Protocols
SS7/C7 Protocol Suite: Signaling System #7 for Telephony Signaling
Network Protocols Dictionary: From A to Z and 0 to 9
Major Networking and Telecom Standard Organizations
Network Communication Protocols Map (eBook only)
Web Access/Single Sign-on: Here
Identity Management: Here

UDP: The User Datagram Protocol offers only a minimal transport service — non-guaranteed datagram delivery — and gives applications direct access to the datagram service of the IP layer. UDP is used by applications that do not require the level of service of TCP or that wish to use communications services (e.g., multicast or broadcast delivery) not available from TCP. UDP is almost a null protocol; the only services it provides over IP are checksumming of data and multiplexing by port number. Therefore, an application program running over UDP must deal directly with end-to-end communication problems that a connection-oriented protocol would have handled — e.g., retransmission for reliable delivery, packetization and reassembly, flow control, congestion avoidance, etc., when these are required. The fairly complex coupling between IP and TCP will be mirrored in the coupling between UDP and many applications using UDP. Abbreviated UDP, it is a connectionless protocol that, like TCP, runs on top of IP networks. Unlike TCP/IP, UDP/IP provides very few error recovery services, offering instead a direct way to send and receive datagrams over an IP network. It's used primarily for broadcasting messages over a network.
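UDP's connectionless model can be sketched with Python's standard socket module: two sockets on the loopback interface exchange one datagram with no connection setup. (Loopback delivery is assumed reliable here, which real UDP across a network does not guarantee.)

```python
import socket

# Receiver: bind a UDP socket; the OS picks a free port (multiplexing
# by port number is one of the only services UDP adds over IP).
receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", 0))
receiver.settimeout(5)
addr = receiver.getsockname()

# Sender: no connect() handshake — each datagram is addressed individually
# and sent fire-and-forget, with no delivery guarantee.
sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"hello", addr)

data, peer = receiver.recvfrom(1024)
print(data)
receiver.close()
sender.close()
```

Retransmission, ordering, and flow control — everything TCP would do — would have to be added by the application on top of this.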

HTTP: Hyper Text Transfer Protocol (HTTP) is the actual communications protocol that enables Web browsing. The HTTP protocol is used to transmit and receive web pages, and is also used by some server workings and scripting technologies.
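The request-response text format underneath web browsing can be sketched without any network at all. The request below is what a browser sends; the response string is a canned example of what a server might return:

```python
# A minimal HTTP/1.1 exchange, built and parsed by hand. Header lines are
# separated by CRLF and a blank line ends the header block.
request = (
    "GET /index.html HTTP/1.1\r\n"
    "Host: www.example.com\r\n"
    "Connection: close\r\n"
    "\r\n"
)

# Canned server reply (illustrative, not fetched from anywhere).
response = "HTTP/1.1 200 OK\r\nContent-Type: text/html\r\n\r\n<html></html>"

# The status line carries protocol version, numeric code, and reason phrase.
status_line, _, body_and_headers = response.partition("\r\n")
version, code, reason = status_line.split(" ", 2)
print(version, code, reason)
```

Testers reading LoadRunner or proxy logs are looking at exactly these status lines (200, 404, 500, ...).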

---------------------------------------------------------------------------
The Differences between Java and JavaScript: http://www.jsr.communitech.net/difference.htm
Many people use the words Java and JavaScript interchangeably, or confuse the two. This is how Netscape explains the differences on their Web site: The JavaScript language resembles Java, but without Java's static typing and strong type checking. JavaScript supports most of Java's expression syntax and basic control flow constructs. In contrast to Java's compile-time system of classes built by declarations, JavaScript supports a run-time system based on a small number of data types representing numeric, Boolean, and string values. JavaScript has a simple instance-based object model that still provides significant capabilities. JavaScript also supports functions, again without any special declarative requirements. Functions can be properties of objects, executing as loosely typed methods. The following table compares and contrasts JavaScript and Java.
JavaScript vs. Java:
- Interpreted (not compiled) by the client. / Compiled on the server before execution on the client.
- Object-based: code uses built-in, extensible objects, but no classes or inheritance. / Object-oriented: applets consist of object classes with inheritance.
- Code integrated with, and embedded in, HTML. / Applets distinct from HTML (accessed from HTML pages).
- Variable data types not declared (loose typing). / Variable data types must be declared (strong typing).
- Dynamic binding: object references checked at run-time. / Static binding: object references must exist at compile-time.
- Cannot automatically write to hard disk. / Cannot automatically write to hard disk.
-----------------------------------------
NEED TO LEARN THE BELOW:
HTML, UNIX, SQL, SAP, ASP, XML
6/16/05
1. Which testing activities may you want to automate?
Ans: Test cases which are repeatedly run in every build.
1a. What should not be tested?
Ans: (i) We may have installation test cases as well, but if we do the installation only once for the complete release, there is no need to automate the installation test cases. (ii) If a particular test case is easy to test manually and difficult to automate (if it takes more time, etc.), better not to automate it.
2. Who attends the bug review meetings and what is discussed during them?
Ans: Bugs that are ready to be declined or postponed will be discussed in the bug review meeting.
Who will attend — 2 cases:

Case 1: Only the QA manager, developer(s), and development manager will attend.
Case 2: QA testers (whoever's bugs are ready to be declined or postponed), QA manager, developer(s), development manager.
3. When a tester finds a bug, should he/she directly file the bug or discuss it with the developers or QA manager before filing it?
If the tester is confident enough, he/she can directly file a bug — e.g., if a requirement tag fails. The tester can discuss it with the developer or QA manager, etc., if there is no specific requirement covering that particular bug.
4. If a tester thinks it's a bug and the developer thinks it's not, how can the tester convince the developer?
If a requirement tag already exists for that particular bug then there is no problem; otherwise you have to give a real-world example, arguing that if a customer ran a similar kind of test in a production environment it would not look good, etc. (Ex: the UI gets messed up when I create a user with a very long name — 100 characters.)
5. What tools are available to support testing during the software development life cycle?
UML - for design
JUnit - for unit testing
VSS - for version control

1. How do you document a bug?
2. What is the difference between the priority and severity of a bug?
3. What are the contents of a test plan?
4. What is risk management?
5. What are the different states of a bug?
6. Explain the process of QA.
7. What is a smoke test?
8. What is regression testing?
9. What is the purpose of bug review meetings?
10. What are the exit criteria for a product?
11. When should we start the automation?
12. Produce test cases for: a. Calculator b. Vending machine c. Printer d. Phone

1. How can you make manual testing faster?
2. Define any of the following: HTML, URL.
3. What is HTTP?

4. How do you test a web site?
5. What is the difference between client-server and web testing?
6. Write the test cases for testing the telephone (the phone was in front of me).
7. What was your last project? Tell me about it.
8. What is a test plan?
9. What methods did you use while testing?
10. Explain the following: <table> </table> <ul> </ul> <li> </li>
11. Find a bug (given some web site link) and document it.

FAQs (QA):
1. Tell me about your latest project.
2. Can you handle the QA team and handle the project alone?
3. Bug review meetings.
4. Test cases for a calculator, a wireless cell phone, a vending machine.
5. About protocols like DHCP, HTTP, SMTP, POP3, SSL, etc.
6. Backend/database/server-side testing.
7. Test plan table of contents.
8. Test case format.
9. Database queries like DDL, DML, DCL, etc.
10. Unix commands

Basic UNIX commands: http://www.emba.uvm.edu/CF/basic.html
Tool Command Language (TCL): http://www.tcl.tk/scripting/primer.html
Below is a Tcl snippet that prints the current time. It uses three Tcl commands: set, clock, and puts. The set command assigns the variable, the clock command manipulates time values, and the puts command prints the values.
set seconds [clock seconds]
puts "The time is [clock format $seconds]"
C++ tutorial: http://www.cplusplus.com/doc/tutorial/
What should a test harness include?
Test harnesses should include the following capabilities:
A standard way to specify setup (i.e., creating an artificial runtime environment) and cleanup.
A method for selecting individual tests to run, or all tests.

A means of analyzing output for expected (or unexpected) results.
A standardized form of failure reporting.
PERL tutorial: http://www.comp.leeds.ac.uk/Perl/start.html http://archive.ncsa.uiuc.edu/General/Training/PerlIntro/ http://www.pageresource.com/cgirec/index2.htm
LoadRunner Memory Leak/DLL errors: Memory leak or DLL errors are seen mostly in stand-alone applications like C++/VB/PB, caused by pointers or objects in the application. In web-based applications, you will instead see Session Timeout / 500 Internal Server Error / 404 Page Not Found. You can see that error info in the summary and HTTP request/response views in the LoadRunner Analyzer. An admin or developer will fix these errors: if it is a server-side issue like a memory leak, the admin will address it (for example, by clearing the cache); if a link or button is broken, the developer will fix it.
TCL/TK
http://hegel.ittc.ku.edu/topics/tcltk/tutorial-noplugin/index.html - Tutorial - very good link
http://www.tcl.tk/software/tcltk/
Tcl (Tool Command Language) is used by over half a million developers worldwide and has become a critical component in thousands of corporations. It has a simple and programmable syntax and can be either used as a standalone application or embedded in application programs. Best of all, Tcl is open source, so it's completely free.
Tk is a graphical user interface toolkit that makes it possible to create powerful GUIs incredibly quickly. It proved so popular that it now ships with all distributions of Tcl. Tcl and Tk were created and developed by John Ousterhout. Developers all over the world followed his example and built their own Tcl extensions; today there are hundreds of Tcl extensions for all manner of applications. We have cataloged hundreds of these contributions, which you can find listed under our Applications, Extensions, and Tools. Tcl and Tk are highly portable, running on essentially all flavors of Unix (Linux, Solaris, IRIX, AIX, *BSD — the list goes on and on), Windows, Macintosh, and more.
There are several sites that provide precompiled binaries for various platforms; these are listed under our Tcl/Tk Ports area.
Ex: Open a file, write two names into it (like rama, niktesh), read them back, and then write those two names into a new file:
set f [open "/tcltest/data" "w"]
puts $f "I'm - rama"
puts $f "I'm - niktesh"
close $f
set f [open "/tcltest/data" "r"]
set line1 [gets $f]
set len_line2 [gets $f line2]
close $f
puts "line 1: $line1"
puts "line 2: $line2"
set f [open "/tcltest/data1" "w"]
puts $f "line 1: $line1"
puts $f "line 2: $line2"
close $f
SOAP: http://www.soapuser.com/basics1.html
Simple Object Access Protocol is an XML-based protocol for accessing remote objects over the network. SOAP is an XML-based messaging protocol. It defines a set of rules for structuring messages that can be used for simple one-way messaging but is particularly useful for performing RPC-style (Remote Procedure Call) request-response dialogues. It is not tied to any particular transport protocol, though HTTP is popular. Nor is it tied to any particular operating system or programming language, so theoretically the clients and servers in these dialogues can be running on any platform and written in any language as long as they can formulate and understand SOAP messages. As such it is an important building block for developing distributed applications that exploit functionality published as services over an intranet or the internet. Let's look at an example. Imagine you have a very simple corporate database that holds a table specifying employee reference number, name and telephone number. You want to offer a service that enables other systems in your company to do a lookup on this data. The service should return a name and telephone number (a two-element array of strings) for a given employee reference number (an integer). Here is a Java-style prototype for the service:
PHP: http://us3.php.net/manual/en/faq.general.php
PHP stands for PHP: Hypertext Preprocessor. PHP is an HTML-embedded scripting language. Much of its syntax is borrowed from C, Java and Perl, with a couple of unique PHP-specific features thrown in. The goal of the language is to allow web developers to write dynamically generated pages quickly.
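The employee-lookup dialogue described in the SOAP section above might be carried in an envelope like the following sketch. The operation name, namespace, and employee number are invented for illustration; only the soap:Envelope/soap:Body structure is fixed by the SOAP specification:

```python
import xml.etree.ElementTree as ET

SOAP_NS = "http://schemas.xmlsoap.org/soap/envelope/"

# A hypothetical request: look up the employee with reference number 187.
request = f"""<?xml version="1.0"?>
<soap:Envelope xmlns:soap="{SOAP_NS}">
  <soap:Body>
    <getEmployeeDetails xmlns="urn:example:employees">
      <employeeNumber>187</employeeNumber>
    </getEmployeeDetails>
  </soap:Body>
</soap:Envelope>"""

# The receiving service parses the Body to find the operation and argument.
root = ET.fromstring(request)
body = root.find(f"{{{SOAP_NS}}}Body")
ref = body.find("{urn:example:employees}getEmployeeDetails/"
                "{urn:example:employees}employeeNumber")
print(ref.text)
```

A real service would post this envelope over HTTP and return a similar envelope carrying the name and telephone number.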
JUNIT: http://www.octopull.demon.co.uk/java/Introducing_JUnit.html http://junit.sourceforge.net/doc/cookstour/cookstour.htm http://junit.sourceforge.net/doc/cookbook/cookbook.htm http://junit.sourceforge.net/doc/faq/faq.htm
JUnit supports writing unit tests in Java for Java code. The running of tests, recording of results and reporting of errors is common to all tests, and JUnit provides precisely this functionality. All that the test developer needs to write is the test code itself — and there is no avoiding that!
Example:

A programmer will develop a Java program (cal.java) for a calculator (ex: c = a + b); then the tester will write another Java program (ex: testcal.java) to make sure the calculation is correct by passing values like (2, 3). No user interface is available for JUnit.
API: http://www.applecore99.com/api/api001.asp
Application Programming Interface: API stands for Application Programming Interface. Essentially, it is a way of interfacing with the Windows environment.
Example: Generally a developer will build the framework/server-side work for a mail host application (without a UI), and then develop the APIs to make sure functionality like sending/receiving email works properly. The tester will test the API from the command line, by passing values for TO, FROM, Attachments etc., and certify it accordingly. Once the APIs are certified, developers will design the user interface (UI) and link the APIs with the framework.
Developer code: Sub Proc Var (TO, FROM, Attachment)
Tester values: Sub Proc Var (ramausa1@hotmail.com, lalitha_glp@yahoo.com, rama.doc)
Why use APIs?
There are several reasons why you might wish to use APIs instead of or in addition to the built-in VBA functions:
Speed - although there might be only a fraction of a millisecond's difference between a VBA function and an API call, if you are using it repeatedly, this difference mounts up. A good example of this is recursively searching for a file through the directories and sub-directories;
Reliability - you wish to ensure a more reliable application, either to avoid 'DLL Hell', caused by setting a reference to a particular version of a common DLL, such as ComCtl32.dll, or a setting that can be 'confused', such as Environ;
Extensibility - you wish to perform something that cannot be achieved using VBA functions.
However, there is a steep learning curve with APIs - you are more likely to crash Access or even the system when testing APIs. Therefore saving your code before you test it is vital.
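The cal.java/testcal.java example above uses JUnit; the same pattern in Python's built-in unittest module (function and class names invented) looks like this:

```python
import unittest

def add(a, b):
    """The code under test — the calculator's c = a + b."""
    return a + b

class CalculatorTest(unittest.TestCase):
    # The tester's program: pass known values and check the result,
    # just as testcal.java would exercise cal.java under JUnit.
    def test_add(self):
        self.assertEqual(add(2, 3), 5)

    def test_add_negative(self):
        self.assertEqual(add(-2, 3), 1)

# Run the suite programmatically and collect the result.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(CalculatorTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

As with JUnit, the framework supplies running, recording, and reporting; the tester writes only the assertions.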
Fortunately, once you have the code working, there shouldn't be any problems.
How to use APIs?
In general, an API is declared as below:
[Private|Public] Declare [Function|Sub] APIName Lib [DLLName] (Alias APIName) (Arguments) (Return Type)
For example:
Private Declare Function apiGetDC Lib "user32" Alias "GetDC" (ByVal hwnd As Long) As Long
[Private|Public]: This determines the scope of the function or subprocedure. This is mostly a matter of preference. I prefer to declare my API calls private within a module, and then use a function to call them. This allows me to have a module that is stand-alone and can be copied to another database without reliance on other modules.
[Function|Sub]: Whether it is a subprocedure or a function. Nearly all APIs are functions, and they nearly all return a value directly.
[DLLName]: The name of the DLL that the procedure is in. For the standard DLLs — user32.dll, kernel32.dll or gdi32.dll — you can omit the file extension, but for all other DLLs you must include the file extension.
(Alias APIName): If you have declared the API with a name different from the one it is known by within the DLL, you must specify the correct name here. There are several reasons why you may wish to do this: the name of the API is not a valid VBA function name, such as '_lwrite'; you are declaring it twice, for example to accept different argument types to get around the 'As Any' variable type; you wish to have a common naming policy for API calls, such as prefixing them all with 'api'. Note that the API name must be in the correct case — 'findfile' is not equal to 'FINDFILE'.
(Arguments): As with VBA procedures, APIs may accept various arguments. However, this is one area where care needs to be taken to ensure that you pass ByRef or ByVal as needed. You will often also need to predeclare string arguments to be a certain length. You may also find that you pass a Type structure as an argument, and the values that you want are in that Type structure.
(Return Value): The datatype that the API returns. Normally this will be a Long Integer, with 0 often indicating an error.
How to find out more about them: there are several very good resources for APIs, both written and on the Web.
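The Declare syntax above binds a DLL entry point to a callable name, with declared argument and return types. The same idea can be sketched with Python's ctypes; here the C runtime's abs stands in for a Windows API so the sketch runs wherever a C library is available:

```python
import ctypes
import ctypes.util

# Roughly equivalent in spirit to:
#   Declare Function abs Lib "libc" (ByVal n As Long) As Long
# find_library locates the platform's C runtime (assumed present).
libc = ctypes.CDLL(ctypes.util.find_library("c") or None)

libc.abs.argtypes = [ctypes.c_int]   # declare the argument type ("ByVal n As Long")
libc.abs.restype = ctypes.c_int      # declare the return type ("As Long")

print(libc.abs(-42))
```

As with VBA Declare statements, getting the argument and return types wrong is the classic way to crash the process, which is why declaring them explicitly matters.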

Windows XP Service Pack 2 features list: http://www.microsoft.com/windowsxp/sp2/features.mspx
Safer Browsing and Communication
Internet Explorer Pop-up Blocker: Makes browsing the Internet more enjoyable by enabling you to reduce unwanted ads and content.
Internet Explorer download monitoring: Warns you about potentially harmful downloads and gives you the option to block files that could be malicious.
Internet Explorer Information Bar: Provides better information about events that are happening as you browse the Web, so it's easier to know what's going on and address potential security issues.
Internet Explorer Add-on Manager: Enhances security and reduces the potential for crashes by allowing you to easily manage Internet Explorer add-ons (programs which have been added to the Web browser).
Outlook Express privacy update:

Helps reduce unwanted e-mail by limiting the possibility of your e-mail address being validated by potential spammers.
Attachment Manager: Monitors and disables potentially unsafe attachments, which could contain viruses that might spread through Internet Explorer, Outlook Express, and Windows Messenger.
Powerful Security Tools
Windows Security Center: Allows you to easily view your security status and manage key security settings in one convenient place.
Windows Firewall update: Automatically turned on by default, this improved firewall helps protect Windows XP from viruses, worms, and other security threats that can spread over the Internet.
Windows Firewall simple compatibility setup: Lets you set up Windows Firewall to co-exist with your favorite Internet applications and home network.
Windows Firewall startup and shutdown support: Extends Windows Firewall protection to Windows startup and shutdown time, ensuring enhanced protection from the moment you turn your PC on to the moment you turn it off.
Automatic Updates enhancements: Helps you automatically stay up to date with the latest updates for Windows XP. Also includes new technology to help dial-up customers download updates more efficiently.
Improved Experiences
Improved wireless support: Dramatically improves and simplifies the process of discovering and connecting to wireless networks.
Bluetooth technologies: Enables you to easily connect to the latest Bluetooth-enabled hardware devices such as keyboards, cell phones, and PDAs.
Windows Media Player 9 Series: Makes it easy to enjoy music, video, and broadband content with enhanced security.
DirectX update: Helps you enjoy advanced graphics and gaming with the latest DirectX technology from Microsoft.
Open Source: Any software whose code is available for users to look at and modify freely. Linux is the best-known example; others include Apache, the dominant software for servers that dish out corporate web pages.
www.fortune.com/fortune/techatwork/articles/0,15114,368947,00.html
Software development by making source code freely available so that outside programmers can submit improvements, or use it themselves. This includes fixing bugs, improving performance, and adding features. Examples of open source projects include BSD, Linux, and Mozilla. For a more precise and detailed definition, visit OpenSource.org.
www.gerbilbox.com/newzilla/glossary.php

Testbeds: Testbeds fill an important role in the Globus project. They provide an environment in which we can evaluate the performance and functionality of the tools that have been developed. They allow us to understand how to construct applications that can exploit the distributed resources available on a computational grid. If large enough, they allow us to conduct new and interesting science.
Public Key Infrastructure (PKI): Microsoft Public Key Infrastructure (PKI) for Windows Server 2003 provides an integrated public key infrastructure that enables you to secure and exchange information with strong security and easy administration across the Internet, extranets, intranets, and applications. Examples of PKI-enabled applications include: EFS, Microsoft Internet Explorer, Microsoft Money, Internet Information Server, remote access services, Microsoft Outlook®, and Microsoft Outlook Express.
What is SSL? http://www.ccwebhost.com/support/faqs/secure-server-faq.htm#WhatIs
Secure Sockets Layer, SSL, is the standard security technology for creating an encrypted link between a web server and a browser. This link ensures that all data passed between the web server and browser remains private and integral. SSL is an industry standard and is used by millions of websites in the protection of their online transactions with their customers. In order to be able to generate an SSL link, a web server requires an SSL Certificate. By convention, Web pages that require an SSL connection start with https: instead of http:.
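On the client side, the encrypted link described above starts from an SSL context that verifies the server's certificate and hostname before any application data is exchanged. A minimal sketch with Python's ssl module (no connection is actually opened here):

```python
import ssl

# A default client context: certificate verification against the system
# trust store is required, and the server's hostname must match its cert.
context = ssl.create_default_context()
print("check_hostname:", context.check_hostname)
print("verify_mode is CERT_REQUIRED:",
      context.verify_mode == ssl.CERT_REQUIRED)

# Wrapping a TCP socket would then look like this (commented out because
# it needs network access; www.example.com is just a placeholder host):
# import socket
# with socket.create_connection(("www.example.com", 443)) as sock:
#     with context.wrap_socket(sock, server_hostname="www.example.com") as tls:
#         print(tls.version())
```

The handshake performed inside wrap_socket is what turns the plain http: exchange into the https: link the section describes.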
Active Directory: http://www.microsoft.com/windows2000/server/evaluation/features/dirlist.asp#heading3
Active Directory is a core operating system service that provides a single place to find network resources, and serves as both the PKI certificate repository and the management directory.
What Is Active Directory? Active Directory is an essential and inseparable part of the Windows 2000 network architecture that improves on the domain architecture of the Windows NT® 4.0 operating system to provide a directory service designed for distributed networking environments. Active Directory lets organizations efficiently share and manage information about network resources and users. In addition, Active Directory acts as the central authority for network security, letting the operating system readily verify a user's identity and control access to network resources. Equally important, Active Directory acts as an integration point for bringing systems together and consolidating management tasks.
How Does Active Directory Work? Active Directory lets organizations store information in a hierarchical, object-oriented fashion, and provides multi-master replication to support distributed network environments.

Hierarchical Organization: Active Directory uses objects to represent network resources such as users, groups, machines, devices, and applications. It uses containers to represent organizations, such as the marketing department, or collections of related objects, such as printers. It organizes information in a tree structure made up of these objects and containers, similar to the way the Windows operating system uses folders and files to organize information on a computer.
What Are the Benefits of Active Directory? Totally integrated with Windows 2000 Server, Active Directory gives network administrators, developers, and users access to a directory service that: simplifies management tasks; strengthens network security; makes use of existing systems through interoperability.
Strengthens Security: It improves password security and management by providing single sign-on to network resources with integrated, high-powered security services that are transparent to end users. It ensures desktop functionality by locking down desktop configurations and preventing access to specific client machine operations, such as software installation or registry editing, based on the role of the end user. It speeds e-business deployment by providing built-in support for secure Internet-standard protocols and authentication mechanisms such as Kerberos, public key infrastructure (PKI) and Lightweight Directory Access Protocol (LDAP) over Secure Sockets Layer (SSL). It tightly controls security by setting access control privileges on directory objects and the individual data elements that make them up. In addition, Active Directory natively supports a fully integrated public key infrastructure and Internet secure protocols, such as LDAP over SSL, to let organizations securely extend selected directory information beyond their firewall to extranet users and e-commerce customers.
In this wa&3 6ctive +irector& stren*thens securit& and speeds deplo&ment of e)business b& lettin* administrators use the same tools and processes to mana*e access control and user privile*es across internal desktop users3 remote dial)up users3 and eCternal e)commerce customers. !onclusion: 6ctive +irector& services within Windows 2III provide a focal point for mana*in* and securin* Windows user accounts3 clients3 servers3 and applications. In addition3 6ctive +irector& is desi*ned to inte*rate with the non)Windows directories within eCistin* s&stems3 applications3 and devices to provide a sin*le place and a consistent wa& of mana*in* an entire network infrastructure. In this wa&3 6ctive +irector& increases the value of an or*ani.ationAs eCistin* investments and lowers the overall costs of computin* b& reducin* the number of places where administrators need to mana*e director& information 1/AP: +6P is the 1ig"weig"t /irector Access Protocol. +6P is desi*ned to be a standard wa& of providin* access to director& services. 6 director& service is 0ust a database that has been desi*ned to be read from more than it is desi*ned to written to. 1/AP #rovi&es access to &irector infor%ation li:e co%#an #"one/e%ail &irectories$ It is also +eing use& to act as a gatewa to ot"er electronic infor%ation s ste%s as a %eta(&irector + co%#anies li:e For& an& Co%e /e#ot to &e#lo t"eir intranet/e=tranet s ste%s$ Ber+oros: A #rotocol t"at &efines "ow clients interact wit" a networ: aut"entication service$ 'lients obtain tickets from the Zerberos Ze& +istribution 'enter :Z+';3 and the& present these tickets to gives networ:

servers when connections are established. Kerberos tickets represent the client's network credentials. When the LAN/WAN is enabled with the Kerberos feature, others cannot hijack, misuse, or track passwords and other important or secret information.

NTLM: Windows NT LAN Manager (NTLM). NTLM is the authentication protocol used on networks that include systems running the Windows NT operating system and on stand-alone systems.

WebDAV: Web-based Distributed Authoring and Versioning (WebDAV) is a file access protocol described in Extensible Markup Language (XML). It uses HTTP and runs over existing Internet infrastructure, for example firewalls and routers.

Virtual Private Network (VPN): A VPN provides users with a way to securely access private information on their corporate network over a shared public network infrastructure such as the Internet. There are 3 types of VPNs:
1. Intranet VPN
2. Remote Access VPN
3. Extranet VPN

1. Intranet VPN: An intranet is a business network that is internal to a company. An intranet VPN uses VPN technology to link different corporate network sites together through the shared Internet infrastructure.
2. Remote Access VPN: It enables remote business users to securely access the company's intranet through the shared Internet infrastructure.
3. Extranet VPN: It is a network that allows controlled access from external networks, such as from customers, suppliers, and partners. It leverages the shared Internet infrastructure to create the extranet.

What is DHCP? DHCP stands for "Dynamic Host Configuration Protocol". What is DHCP's purpose? DHCP's purpose is to enable individual computers on an IP network to extract their configurations from a server (the "DHCP server") or servers, in particular, servers that have no exact information about the individual computers until they request the information. The overall purpose of this is to reduce the work necessary to administer a large IP network. The most significant piece of information distributed in this manner is the IP address.

What is the DNS? DNS stands for the Domain Name Service. It is a set of software and protocols that translate a domain name like www.company.com into an IP address such as 192.168.0.1. A request for such a translation is called a DNS query. Web browsers like Netscape and Internet Explorer generate queries whenever they browse addresses like http://www.company.com.

What is IMAP, exactly? IMAP is a protocol used to access email. IMAP (Internet Message Access Protocol) is a mature and popular Internet standard for email. It forms the blueprint for our new mail system. Because IMAP is a standard rather than a single program, anyone can create software that will work with it, and many have. We've chosen a few

of these programs to recommend and support. Expert users may want to use their own favorite client programs and are free to do so. IMAP stands for Internet Message Access Protocol. It is a method of accessing electronic mail that is kept on a mail server. In other words, it permits a "client" email program to access remote message stores as if they were local. For example, email stored on an IMAP server can be manipulated from a desktop computer at home, a workstation at the office, and a notebook computer while traveling, without the need to transfer messages or files back and forth between these computers. IMAP's ability to access messages (both new and saved) from more than one computer has become extremely important as reliance on electronic messaging and use of multiple computers increases, but this functionality cannot be taken for granted: the widely used Post Office Protocol (POP) works best when one has only a single computer, since it was designed to support "offline" message access, wherein messages are downloaded and then deleted from the mail server. This mode of access is not compatible with access from multiple computers, since it tends to sprinkle messages across all of the computers used for mail access. Thus, unless all of those machines share a common file system, the offline mode of access that POP was designed to support effectively ties the user to one computer for message storage and manipulation.

What is TCP/IP? TCP/IP stands for Transmission Control Protocol/Internet Protocol. TCP/IP is a network protocol used on LANs, WANs and the Internet. TCP/IP is the name given to the collection of networking protocols that have been used to construct the global Internet.

FTP: FTP (File Transfer Protocol) allows a person to transfer files between two computers, generally connected via the Internet.
If your system has FTP and is connected to the Internet, you can access very large amounts of files available on a great number of computer systems.

SMB: SMB (Server Message Block) is a protocol for sharing files, printers, serial ports, and communications abstractions such as named pipes and mail slots between computers. SMB is a client-server, request-response protocol. When the client has requested opportunistic locks (oplocks) and the server subsequently has to break an already granted oplock because another client has requested a file open with a mode that is incompatible with the granted oplock, the server sends an unsolicited message to the client signalling the oplock break.

SMTP: SMTP (Simple Mail Transfer Protocol) is a TCP/IP protocol used in sending and receiving mail. It is the Internet's standard host-to-host mail transport protocol. It is defined by RFC 821.

POP3: Post Office Protocol version 3. Basically, POP3 is intended to permit a workstation (client) to dynamically access a mailbox on a server and download mail messages. It's among the most simplistic Internet protocols around.

SNMP: Simple Network Management Protocol is a protocol for Internet network management services. It is formally specified in a series of related RFC documents.
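A quick way to verify that a host is actually offering one of the services above (SMTP, POP3, FTP, etc.) is a plain TCP connect check against the service's port. A minimal sketch in Python; the function name and usage are illustrative, not from the source, and a successful connect only proves something is listening, not which protocol it speaks:

```python
import socket

def is_port_open(host, port, timeout=1.0):
    """Return True if a TCP connection to host:port succeeds.

    Note: this only shows that *something* is listening on the port;
    it does not verify that the listener speaks SMTP, POP3, etc.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

For example, `is_port_open("mail.example.com", 25)` would tell you whether a host appears to be running an SMTP listener.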

MPLS (Multi Protocol Label Switching)
NNTP (Network News Transfer Protocol)
NTP (Network Time Protocol)

Well-known ports: 8080: HTTP (proxy/alternate; standard HTTP is 80), 443: HTTPS, 21: FTP, 25: SMTP, 389: LDAP

CorporateTime: CT is group calendar software that allows computerised diary scheduling. With CorporateTime, users can arrange meetings and view other users' agendas, subject to access permissions. CorporateTime can act as a personal time management tool, with task lists and daily notes. Personal Digital Assistants (PDAs) like the Palm Pilot and the Psion can synchronize with CorporateTime, with the aid of additional conduit software.

Introduction
All members of staff and postgraduate research students are registered to use CorporateTime via the web diary interface. The CorporateTime (Web Diary) accounts for staff and postgrads are stored in different databases. This separation results in postgraduate users of Web Diary being unable to view the diaries of staff, and vice versa. CorporateTime can also be accessed by staff via a CorporateTime client application for MS Windows and Apple Macintosh.

Oracle Calendar's (CorporateTime's) primary purpose is to facilitate the scheduling of people into meetings, appointments and other activities. If required, a user's agenda can be managed by others with the assigning of "designates". It can book recurring meetings and group meetings, and remind you of upcoming events. Resource agendas (meeting rooms, laptops etc.) can also be created and maintained. Oracle Calendar's (CorporateTime) service has been established for use by staff and faculty. We are unable to offer the service to students, but will make exceptions for students who are members of committees etc., provided they are sponsored by a university department.

ETL: ETL refers to the Data Warehouse acquisition processes of Extracting, Transforming and Loading (ETL) data from source systems into the data warehouse. Oracle supports the ETL process with their "Oracle Warehouse Builder" product.
Many new features in the Oracle9i database will also make ETL processing easier. For example: the new MERGE command (also called UPSERT; insert and update information in one step); External Tables, which allow users to run SELECT statements on external data files (with pipelining support).

Content Management Systems: LIVELINK
Product Name: Livelink
Supplier: Open Text UK Ltd
Web: www.opentext.com
Description: Livelink is the leading solution for knowledge management and collaboration in global organisations, with core functions that include virtual teamwork, document management, process automation, workflows and information retrieval. Livelink serves as an enterprise content management platform for effective collaboration independent of location and time. It is a web-based solution with an open architecture that can be quickly implemented at low overhead cost. Open Text has implemented its solutions across 10,000 organisations in a variety of industries including financial services, pharmaceuticals, government, and telco.

System: operating systems (Windows NT/2000, Sun Solaris, HP-UX); relational databases (Oracle, Microsoft SQL Server, Sybase); web servers (Netscape, iPlanet and Microsoft Internet Information Server); and web browsers (Netscape Navigator and Microsoft Internet Explorer).

SITESCAPE: SiteScape Solution Sets provide many of the specific features needed to automate key horizontal business processes. Each Solution Set combines the proven scalability and robust functionality of SiteScape Forum and Forum eMeeting (including web-based meeting services, group calendaring, action item management, document libraries, threaded discussions, instant messaging, and portal integration services) with purpose-built workflows, commands, data types and easy-to-use interfaces.

A proven set of collaboration tools that allow virtual teams in business and government to save time and money:
- Secure
- Browser-based
- Platform-independent
- Workflow-enabled

A comprehensive real-time collaboration environment that makes it easy to find participants and instantly share information:
- Instant messaging
- Audio conferencing
- Web conferencing
- Unified address book
- Calendar & Meeting Management
- Knowledge Networks
- Program Management
- Conferencing Service Providers

Apple Remote Desktop: Delivering over 50 new features and countless enhancements, Apple Remote Desktop 2 is a complete desktop management solution for Mac OS X. You can distribute software, configure systems, offer real-time online help and create detailed hardware and software reports for all of the Mac systems in your organization, all from your own Mac. You've never had it so easy. With its state-of-the-art software distribution features, Apple Remote Desktop 2 makes installing software on the Macs you manage a virtual walk in the park. It lets you remotely install new software on any number of your networked Macs without interrupting the client systems or requiring any interaction.
Apple Remote Desktop 2 also lets you create custom install packages (containing company-specific software or files, for example). Or if you want to copy files and folders to targeted locations on your clients' hard drives, you can do that, too. Apple Remote Desktop 2 allows you to specify multiple software packages for consecutive installation. Once you get the process started, you're done, even if the package requires a restart. Apple Remote Desktop will detect and offer to restart the computer for you upon completion of the installation. It also enables you to schedule your distributions, so you can set it to perform your installations during those times when network traffic is at its lowest.

Extend a Hand. Without Leaving Your Desk. Apple Remote Desktop 2 gives you the necessary tools to provide the best technical assistance possible to the computer users on your network. With its real-time screen sharing, which works not just with Mac systems but with any Virtual Network Computing (VNC)-enabled computer, including Windows, Linux or UNIX systems, Apple Remote Desktop 2 allows you to observe up to 50 clients' screens simultaneously, as well as control individual screens.

Usability Testing
http://www.usabilitysciences.com/services/faq1.html

Q. "What is Usability?"
A. Usability is a measure of a product's ability to facilitate completion of users' intended goals, whether their goal is to complete a transaction or process, find product information, or access customer service. Key components of creating a usable product are ensuring consistency of terminology and navigation elements, the presence of a clearly defined process, and designing the product around customer goals. A usable product helps to provide a positive customer experience.

Q. "What is Usability Testing?"
A. Usability Testing is a process that measures how well a web site or software application allows its users to navigate, find valuable information quickly, and complete business transactions efficiently. Usability Testing is a critical component in the development process that ensures an overall good user experience. At Usability Sciences, we develop typical tasks for users to perform on a design or interface and observe how they are able to complete the desired tasks. Throughout a testing session the team of Usability Analysts carefully watches the user's actions and listens for feedback related to the design or interface. We look for design flaws that present obstacles to users' ability to complete simple tasks. After gathering information related to how users are able to complete the tasks and reach their goals, we write recommendations to our clients on how to improve the users' experience.
Q. "Why do I need to consider Usability Testing for our company?"
A. Testing reveals obstacles that are not often apparent during the development of the web site or software package. Many times the web site designers and developers are so familiar with the site that they are not aware of possible problems that can cause users difficulty. Additionally, Usability Testing is a great method for settling internal debates over issues that have been difficult to solve; by getting "real users'" input, development decisions can be made by watching how the users try to complete tasks. After spending valuable company time and resources creating your web site, it is important to ensure that visitors will want to come back after their first experience. Usability testing can help guarantee that users are able to reach their intended goal for coming to your site, thus creating a pleasant experience and a desire to return to the site.

Q. "When should I do Usability Testing?"
A. Usability Testing is appropriate at all stages of development. Your web site or software application does not need to be functionally complete before testing. It is recommended to test your web site as it is being developed, once it is launched, and after it has been in the market for some period of time. Our tests range from Design Walkthroughs for sites or software applications that are not fully developed to Feature Focus tests when clients just want to hone in on a specific feature. Usability Testing is best performed throughout the life of your site or software application to keep pace with industry changes and user preferences.

Effective and Efficient
http://www.qalabs.com/expertise/planning/

At QA Labs, Test Planning is the foundation of successful software testing, but we know how challenging it is for project teams to produce comprehensive test cases when the schedule is tight. We help by putting project requirements through an analysis process that ensures the highest priorities get tested first in the time available.

Test Strategy
The QA Labs Test Strategy is the cornerstone of our methodology. Simply put, it is a comprehensive document that establishes the strategic direction of our work for you. It includes:
- Quality Requirements and Goals
- Scope of Testing
- Tasks to be performed and how they will be undertaken
- Resources and Tools
- Schedules and Milestones
- Project and Testing Risks
- Dependencies of the Testing Effort
providing a well-defined testing roadmap.

Test Analysis
Our Test Analysis pinpoints precisely what tests will be performed. It provides feedback that the requirements have been covered for each test type. By reviewing the requirements, QA Labs' Test Analysis raises critical issues early in the project lifecycle to prevent errors from reaching the code. In the case of incomplete or absent requirements, QA Labs can rapidly "reverse engineer" those artifacts for Test Planning purposes. This makes enumeration and confirmation of the expected behaviors easy, typically with little impact on the overall testing schedule.

Documenting the Tests
Different kinds of software may require radically different testing techniques, timeframes and cost considerations. QA Labs testing documentation includes:
- Build Verification and Acceptance
- Functionality
- Error Handling
- Compatibility
- Scalability
- User Interface (UI)
- Usability
The above take the form of individual Test Cases and/or task-oriented Test Scenarios or Test Scripts. Other types of testing are included as described in the Test Strategy. Read more about Test Strategies and Test Planning on our Resources page.
Test Planning Process
Our Test Planning process can include:

Project Familiarization
- Identifying and reviewing the existing project documentation (use cases and/or requirements)
- Functionality walk-through (demo, mock-ups, user training)
- Requirements Review (for completeness, consistency, correctness, unambiguity, testability, etc.)
- Defining the overall scope of testing for the project
- Defining the Test Strategy and scope of each type of testing to be performed
- Defining the Test Environment(s)

- Determining which tools and testing utilities will be needed
- Determining applicability of manual vs. automated testing
- Identifying and evaluating skills/resources available

Configuring the Test Environment(s)
- Defining the test environment requirements
- Performing installation and configuration of test tool(s)

Test Planning and Test Case Design
- Test Analysis of the Requirements to enumerate the tests to be performed for each requirement, for each type of testing
- Expanding the Test Analysis into Test Cases and/or Test Scenarios and prioritizing the same.

Linked Lists: http://cslibrary.stanford.edu/103/LinkedListBasics.pdf
A linked list is a chain of structures, in which each structure consists of data as well as a pointer that stores the address (link) of the next logical structure in the list.

Advantages of linked lists over arrays:
1. It is not necessary to know the number of elements in advance and allocate memory for them; memory can be allocated as and when necessary.
2. Inserting into and deleting from the list can be handled efficiently without having to restructure the list.

Example (C-style node declaration):
struct Node {
    int data;
    struct Node *next;
};

Pointers: bit manipulation and memory allocation.
Structures: FIFO, LIFO, etc.
Methods: stacks, queues, binary trees.
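The same idea can be sketched in Python, where each node object holds data and a reference to the next node. The class and method names below are illustrative, not from the source:

```python
class Node:
    """One element of a singly linked list: data plus a link to the next node."""
    def __init__(self, data):
        self.data = data
        self.next = None

class LinkedList:
    """A chain of Nodes; no contiguous memory block is needed up front."""
    def __init__(self):
        self.head = None

    def push(self, data):
        """Insert at the front in O(1), with no restructuring of the list."""
        node = Node(data)
        node.next = self.head
        self.head = node

    def delete(self, data):
        """Unlink the first node holding `data` by repointing a single link."""
        prev, cur = None, self.head
        while cur:
            if cur.data == data:
                if prev is None:
                    self.head = cur.next
                else:
                    prev.next = cur.next
                return True
            prev, cur = cur, cur.next
        return False

    def to_list(self):
        """Walk the chain and collect the data values, front to back."""
        out, cur = [], self.head
        while cur:
            out.append(cur.data)
            cur = cur.next
        return out
```

Pushing 1, 2, 3 yields the list [3, 2, 1], since each push prepends at the head; deletion only rewires one pointer, illustrating advantage 2 above.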

Are you still available? YES. Would you please review and answer this questionnaire regarding this position at Akamai Technologies.

1. How familiar would you say you are with Perl on a scale of 1-10? - 8

2. Have you built/enhanced a test harness or been part of a major project using Perl in the past? If yes, please elaborate.
YES. (i) I've written Perl scripts to simulate some of the scanning features at Xerox Corp. (ii) Generally, test harnesses should include the following capabilities:
- A standard way to specify setup and cleanup.
- A method for selecting individual tests to run, or all tests, etc.

3. Would you say you have a solid working knowledge of the HTTP protocol? That means: different HTTP methods, HTTP status code categories, major differences between version 1.0 and 1.1.
YES. 1. Safe and idempotent methods. 2. Success 2xx (OK 200, CREATED 201), errors 4xx/5xx (e.g. Bad Request 400), etc. 3. Major differences between HTTP 1.0 and HTTP 1.1: hostname identification, content negotiation, persistent connections, chunked transfers, byte ranges, and support for proxies and caches.

4. Have you worked on testing proxy servers/web servers? If yes, please

elaborate. - Apache, IIS, Tomcat, IBM WebSphere, WebLogic

5. What would be the split between manual testing versus automation?
Manual testing is time-consuming and requires a heavy investment in human resources. Automated testing with WinRunner addresses these problems by dramatically speeding up the testing process. You can create test scripts that check all aspects of your application, and then run these tests on each new build. Benefits of automated testing: 1. Fast 2. Reliable 3. Repeatable 4. Programmable 5. Reusable

6. What are the different categories of test that you usually perform in a QA cycle?
Manual testing (black box), automation, and white box testing; that means: user acceptance/sanity, integration, regression, security, usability, reliability, localization/internationalization testing, DB migration, upgradation, partitions, system, alpha testing, etc.

7. What are the primary platforms that you have done a significant amount of testing on?
Solaris, Windows, Linux

8. Have you been exposed to web caching technology? If yes, at which company? YES

9. Have you been exposed to J2EE/.NET? If yes, please elaborate.
YES. I have testing and development experience in J2EE and .NET technologies. J2EE provides a component-based approach to the design, development, assembly, and deployment of enterprise applications. The J2EE platform offers a multitiered distributed application model, reusable components, a unified security model, flexible transaction control, and web services support through integrated data interchange on Extensible Markup Language (XML)-based open standards and protocols. In .NET technology, we can develop a program for the same functionality (e.g., sum of two numbers) in C, C++, COBOL, VB, ASP, etc.

10. Are you familiar with how DNS works?
DNS stands for the Domain Name Service. It is a set of software and protocols that translate a domain name like www.company.com into an IP address such as 192.168.0.1.
A request for such a translation is called a DNS query. Web browsers like Netscape and Internet Explorer generate queries whenever they browse addresses like http://www.company.com.

11. Have you ever done server performance testing? If yes, please elaborate.
YES. Here we check how fast the server is, and how fast the application responds to user actions. In this type of testing we gather timings for both read and update operations to determine whether these timings lie in an acceptable time frame. First this should be done stand-alone, then it should be done in a multi-user environment to determine the transaction throughput.

12. How familiar would you say you are with each of: C, C++, Java on a scale of 1-10? 9
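The read/update timing measurement described in answer 11 can be sketched as follows; this is a minimal single-user version in Python, where the operation being timed is a stand-in for a real server request (function names are mine, not from the source):

```python
import time

def time_operation(operation, repetitions=5):
    """Run `operation` several times and return the timings in seconds.

    In a real performance test, `operation` would issue a read or
    update request against the server under test.
    """
    timings = []
    for _ in range(repetitions):
        start = time.perf_counter()
        operation()
        timings.append(time.perf_counter() - start)
    return timings

def within_acceptable(timings, threshold_seconds):
    """Check that every observed timing lies in the acceptable time frame."""
    return all(t <= threshold_seconds for t in timings)
```

For the multi-user case, the same measurement would be driven from several threads or processes simultaneously to estimate transaction throughput.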

Automation Testing: WinRunner


1. What is a Batch test?

2. What is a Data-driven test?
3. What is a Regular Expression and how is it useful?
4. What is the difference between the Global GUI Map file and GUI Map File per Test mode? Which mode should we use?
5. What are the two ways of error handling in WinRunner?
6. Which method of error handling should we use, and why?
7. Name the different types of recording methods. What is the difference between them?
8. What are the different running modes in WinRunner? What is the difference between them?

How do you work with dynamic objects (moving objects) in WinRunner?
Try to find a property that does not change when the object is moved from one location to another, and add that property to the GUI map.

What are the limitations of WinRunner?
If there isn't any property that is consistent across different locations, then it becomes tough.

What could go wrong with test automation?
If you find one property that is not changing, then no problems.

What is the WinRunner framework?
A group of function files (e.g., config files etc.).

How is scheduling done in WinRunner? Never done it.

How do you validate the date format in WinRunner?
There is a time function in TSL. From the time value we have to pick the required data (e.g., date/time etc.) and validate it accordingly. There is no DATE type in WinRunner. Go to the WinRunner help and find more info.
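Since WinRunner's TSL has no DATE type, date-format validation usually comes down to parsing the string and rejecting anything that does not fit. The same check, sketched here in Python rather than TSL (the dd/mm/yyyy format string is an assumed example):

```python
from datetime import datetime

def is_valid_date(text, fmt="%d/%m/%Y"):
    """Return True if `text` matches the expected date format.

    strptime both checks the layout and rejects impossible dates
    such as 31/02/2006.
    """
    try:
        datetime.strptime(text, fmt)
        return True
    except ValueError:
        return False
```

For example, "31/01/2006" is accepted while "31/02/2006" (an impossible date) and "2006-01-31" (the wrong layout) are both rejected.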
Learn Perl, UNIX, HTML, SQL and PL/SQL and more:
http://www.geocities.com/CDscn//books//Books.html
See also, in the old stuff: http://www.geocities.com/CDscn//books/Lang.html
Visit this site; you can learn WinRunner and LoadRunner:
http://www.wilsonmar.com/1loadrun.htm

This is a good website you might want to go through sometime. It's a lot of help.
http://testingsoftware.blogspot.com/2005/07/all-posts.html

Other Technical Topics:


UML (Unified Modeling Language) Tutorial:
http://pigseye.kennesaw.edu/ddbraun/csis4650/AWD/UML_tutorial/
http://homepages.uel.ac.uk/D.Bowden/ (this is video-based training)

C# Tutorial: http://www.softsteel.co.uk/tutorials/csharp/contents.html
XML Tutorial: http://www.w3schools.com/xml/
Translate characters from any language to any language: http://babelfish.altavista.com/tr
Networking QA: http://www.mozilla.org/quality/networking/about.html

Manual Testing Questions and Answers


What makes a good test engineer?
A good test engineer has a "test to break" attitude, an ability to take the point of view of the customer, a strong desire for quality, and an attention to detail. Tact and diplomacy are useful in maintaining a cooperative relationship with developers, and an ability to communicate with both technical (developers) and non-technical (customers, management) people is useful. Previous software development experience can be helpful, as it provides a deeper understanding of the software development process, gives the tester an appreciation for the developers' point of view, and reduces the learning curve in automated test tool programming. Judgement skills are needed to assess high-risk areas of an application on which to focus testing efforts when time is limited.

What makes a good Software QA engineer?
The same qualities a good tester has are useful for a QA engineer. Additionally, they must be able to understand the entire software development process and how it can fit into the business approach and goals of the organization. Communication skills and the ability to understand various sides of issues are important. In organizations in the early stages of implementing QA processes, patience and diplomacy are especially needed. An ability to find problems as well as to see "what's missing" is important for inspections and reviews.

What makes a good QA or Test manager?
A good QA, test, or QA/Test (combined) manager should:
- be familiar with the software development process
- be able to maintain enthusiasm of their team and promote a positive atmosphere, despite

what is a somewhat negative process (e.g., looking for or preventing problems)
- be able to promote teamwork to increase productivity
- be able to promote cooperation between software, test, and QA engineers
- have the diplomatic skills needed to promote improvements in QA processes
- have the ability to withstand pressures and say "no" to other managers when quality is insufficient or QA processes are not being adhered to
- have people judgement skills for hiring and keeping skilled personnel
- be able to communicate with technical and non-technical people, engineers, managers, and customers
- be able to run meetings and keep them focused

What's the role of documentation in QA?
Critical. (Note that documentation can be electronic, not necessarily paper.) QA practices should be documented such that they are repeatable. Specifications, designs, business rules, inspection reports, configurations, code changes, test plans, test cases, bug reports, user manuals, etc. should all be documented. There should ideally be a system for easily finding and obtaining documents and determining what documentation will have a particular piece of information. Change management for documentation should be used if possible.

What's the big deal about 'requirements'?
One of the most reliable methods of ensuring problems, or failure, in a complex software project is to have poorly documented requirements specifications. Requirements are the details describing an application's externally-perceived functionality and properties. Requirements should be clear, complete, reasonably detailed, cohesive, attainable, and testable. A non-testable requirement would be, for example, "user-friendly" (too subjective). A testable requirement would be something like "the user must enter their previously-assigned password to access the application".
Determining and organizing requirements details in a useful and efficient way can be a difficult effort; different methods are available depending on the particular project. Many books are available that describe various approaches to this task. (See the Bookstore section's "Software Requirements Engineering" category for books on Software Requirements.) Care should be taken to involve ALL of a project's significant customers in the requirements process. Customers could be in-house personnel or out, and could include end-users, customer acceptance testers, customer contract officers, customer management,

future software maintenance engineers, salespeople, etc. Anyone who could later derail the project if their expectations aren't met should be included if possible. Organizations vary considerably in their handling of requirements specifications. Ideally, the requirements are spelled out in a document with statements such as "The product shall.....". Design specifications should not be confused with requirements; design specifications should be traceable back to the requirements. In some organizations requirements may end up in high-level project plans, functional specification documents, in design documents, or in other documents at various levels of detail. No matter what they are called, some type of documentation with detailed requirements will be needed by testers in order to properly plan and execute tests. Without such documentation, there will be no clear-cut way to determine if a software application is performing correctly. Agile methods such as XP use methods requiring close interaction and cooperation between programmers and customers/end-users to iteratively develop requirements. The programmer uses "test first" development to first create automated unit testing code, which essentially embodies the requirements.

What steps are needed to develop and run software tests?
The following are some of the steps to consider:
- Obtain requirements, functional design, and internal design specifications and other necessary documents
- Obtain budget and schedule requirements
- Determine project-related personnel and their responsibilities, reporting requirements, required standards and processes (such as release processes, change processes, etc.)
- Identify application's higher-risk aspects, set priorities, and determine scope and limitations of tests
- Determine test approaches and methods - unit, integration, functional, system, load, usability tests, etc.
- Determine test environment requirements (hardware, software, communications, etc.)
- Determine testware requirements (record/playback tools, coverage analyzers, test tracking, problem/bug tracking, etc.)
- Determine test input data requirements
- Identify tasks, those responsible for tasks, and labor requirements
- Set schedule estimates, timelines, milestones
- Determine input equivalence classes, boundary value analyses, error classes
- Prepare test plan document and have needed reviews/approvals
- Write test cases

- Have needed reviews/inspections/approvals of test cases
- Prepare test environment and testware, obtain needed user manuals/reference documents/configuration guides/installation guides, set up test tracking processes, set up logging and archiving processes, set up or obtain test input data
- Obtain and install software releases
- Perform tests
- Evaluate and report results
- Track problems/bugs and fixes
- Retest as needed
- Maintain and update test plans, test cases, test environment, and testware through the life cycle

What's a 'test plan'?
A software project test plan is a document that describes the objectives, scope, approach, and focus of a software testing effort. The process of preparing a test plan is a useful way to think through the efforts needed to validate the acceptability of a software product. The completed document will help people outside the test group understand the "why" and "how" of product validation. It should be thorough enough to be useful but not so thorough that no one outside the test group will read it. The following are some of the items that might be included in a test plan, depending on the particular project:
- Title
- Identification of software including version/release numbers
- Revision history of document including authors, dates, approvals
- Table of Contents
- Purpose of document, intended audience
- Objective of testing effort
- Software product overview
- Relevant related document list, such as requirements, design documents, other test plans, etc.
- Relevant standards or legal requirements
- Traceability requirements
- Relevant naming conventions and identifier conventions
- Overall software project organization and personnel/contact info/responsibilities
- Test organization and personnel/contact info/responsibilities
- Assumptions and dependencies
- Project risk analysis
- Testing priorities and focus
- Scope and limitations of testing
- Test outline - a decomposition of the test approach by test type, feature, functionality, process, system, module, etc. as applicable
- Outline of data input equivalence classes, boundary value analysis, error classes
- Test environment - hardware, operating systems, other required software, data configurations, interfaces to other systems
- Test environment validity analysis - differences between the test and production systems and their impact on test validity
- Test environment setup and configuration issues
- Software migration processes
- Software CM processes
- Test data setup requirements
- Database setup requirements
- Outline of system-logging/error-logging/other capabilities, and tools such as screen capture software, that will be used to help describe and report bugs
- Discussion of any specialized software or hardware tools that will be used by testers to help track the cause or source of bugs
- Test automation - justification and overview
- Test tools to be used, including versions, patches, etc.
- Test script/test code maintenance processes and version control
- Problem tracking and resolution - tools and processes
- Project test metrics to be used
- Reporting requirements and testing deliverables
- Software entrance and exit criteria
- Initial sanity testing period and criteria
- Test suspension and restart criteria
- Personnel allocation
- Personnel pre-training needs
- Test site/location
- Outside test organizations to be utilized and their purpose, responsibilities, deliverables, contact persons, and coordination issues
- Relevant proprietary, classified, security, and licensing issues
- Open issues
- Appendix - glossary, acronyms, etc.
(See the Bookstore section's 'Software Testing' and 'Software QA' categories for useful books with more information.)

What's a 'test case'?
- A test case is a document that describes an input, action, or event and an expected response, to determine if a feature of an application is working correctly.
A test case should contain particulars such as test case identifier, test case name, objective, test conditions/setup, input data requirements, steps, and expected results.
- Note that the process of developing test cases can help find problems in the requirements or design of an application, since it requires completely thinking through the operation of the application. For this reason, it's useful to prepare test cases early in the development cycle if possible.

What should be done after a bug is found?
The bug needs to be communicated and assigned to developers that can fix it. After the problem is resolved, fixes should be re-tested, and determinations made regarding requirements for regression testing to check that fixes didn't create problems elsewhere. If a problem-tracking system is in place, it should encapsulate these processes. A variety of commercial problem-tracking/management software tools are available (see the 'Tools' section for web resources with listings of such tools). The following are items to consider in the tracking process:
- Complete information such that developers can understand the bug, get an idea of its severity, and reproduce it if necessary
- Bug identifier (number, ID, etc.)
- Current bug status (e.g., 'Released for Retest', 'New', etc.)
- The application name or identifier and version
- The function, module, feature, object, screen, etc. where the bug occurred
- Environment specifics, system, platform, relevant hardware specifics
- Test case name/number/identifier
- One-line bug description
- Full bug description
- Description of steps needed to reproduce the bug if not covered by a test case or if the developer doesn't have easy access to the test case/test script/test tool
- Names and/or descriptions of files/data/messages/etc.
used in test
- File excerpts/error messages/log file excerpts/screen shots/test tool logs that would be helpful in finding the cause of the problem
- Severity estimate (a 5-level range such as 1-5 or 'critical'-to-'low' is common)
- Was the bug reproducible?
- Tester name
- Test date
- Bug reporting date
- Name of developer/group/organization the problem is assigned to
- Description of problem cause
- Description of fix
- Code section/file/module/class/method that was fixed
- Date of fix
- Application version that contains the fix
- Tester responsible for retest
- Retest date
- Retest results
- Regression testing requirements
- Tester responsible for regression tests
- Regression testing results
A reporting or tracking process should enable notification of appropriate personnel at various stages. For instance, testers need to know when retesting is needed, developers need to know when bugs are found and how to get the needed information, and reporting/summary capabilities are needed for managers.

What is 'configuration management'?
Configuration management covers the processes used to control, coordinate, and track: code, requirements, documentation, problems, change requests, designs, tools/compilers/libraries/patches, changes made to them, and who makes the changes. (See the 'Tools' section for web resources with listings of configuration management tools. Also see the Bookstore section's 'Configuration Management' category for useful books with more information.)

What if the software is so buggy it can't really be tested at all?
The best bet in this situation is for the testers to go through the process of reporting whatever bugs or blocking-type problems initially show up, with the focus being on critical bugs. Since this type of problem can severely affect schedules, and indicates deeper problems in the software development process (such as insufficient unit testing or insufficient integration testing, poor design, improper build or release procedures, etc.), managers should be notified and provided with some documentation as evidence of the problem.

How can it be known when to stop testing?
This can be difficult to determine. Many modern software applications are so complex, and run in such an interdependent environment, that complete testing can never be done.
Common factors in deciding when to stop are:
- Deadlines (release deadlines, testing deadlines, etc.)
- Test cases completed with certain percentage passed
- Test budget depleted
- Coverage of code/functionality/requirements reaches a specified point
- Bug rate falls below a certain level
- Beta or alpha testing period ends

What if there isn't enough time for thorough testing?

Use risk analysis to determine where testing should be focused. Since it's rarely possible to test every possible aspect of an application, every possible combination of events, every dependency, or everything that could go wrong, risk analysis is appropriate to most software development projects. This requires judgement skills, common sense, and experience. (If warranted, formal methods are also available.) Considerations can include:
- Which functionality is most important to the project's intended purpose?
- Which functionality is most visible to the user?
- Which functionality has the largest safety impact?
- Which functionality has the largest financial impact on users?
- Which aspects of the application are most important to the customer?
- Which aspects of the application can be tested early in the development cycle?
- Which parts of the code are most complex, and thus most subject to errors?
- Which parts of the application were developed in rush or panic mode?
- Which aspects of similar/related previous projects caused problems?
- Which aspects of similar/related previous projects had large maintenance expenses?
- Which parts of the requirements and design are unclear or poorly thought out?
- What do the developers think are the highest-risk aspects of the application?
- What kinds of problems would cause the worst publicity?
- What kinds of problems would cause the most customer service complaints?
- What kinds of tests could easily cover multiple functionalities?
- Which tests will have the best high-risk-coverage to time-required ratio?

What if the project isn't big enough to justify extensive testing?
Consider the impact of project errors, not the size of the project. However, if extensive testing is still not justified, risk analysis is again needed and the same considerations as described previously in 'What if there isn't enough time for thorough testing?' apply. The tester might then do ad hoc testing, or write up a limited test plan based on the risk analysis.

What can be done if requirements are changing continuously?

A common problem and a major headache.
- Work with the project's stakeholders early on to understand how requirements might change so that alternate test plans and strategies can be worked out in advance, if possible.
- It's helpful if the application's initial design allows for some adaptability so that later changes do not require redoing the application from scratch.
- If the code is well-commented and well-documented this makes changes easier for the developers.
- Use rapid prototyping whenever possible to help customers feel sure of their requirements and minimize changes.
- The project's initial schedule should allow for some extra time commensurate with the possibility of changes.
- Try to move new requirements to a 'Phase 2' version of an application, while using the original requirements for the 'Phase 1' version.
- Negotiate to allow only easily-implemented new requirements into the project, while moving more difficult new requirements into future versions of the application.
- Be sure that customers and management understand the scheduling impacts, inherent risks, and costs of significant requirements changes. Then let management or the customers (not the developers or testers) decide if the changes are warranted - after all, that's their job.
- Balance the effort put into setting up automated testing with the expected effort required to re-do them to deal with changes.
- Try to design some flexibility into automated test scripts.
- Focus initial automated testing on application aspects that are most likely to remain unchanged.
- Devote appropriate effort to risk analysis of changes to minimize regression testing needs.
- Design some flexibility into test cases (this is not easily done; the best bet might be to minimize the detail in the test cases, or set up only higher-level generic-type test plans).
- Focus less on detailed test plans and test cases and more on ad hoc testing (with an understanding of the added risk that this entails).
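The suggestions above about minimizing detail in test cases and keeping automated scripts flexible are often implemented as data-driven tests: the expected behavior lives in a data table, so when a requirement changes only the rows change, not the test logic. Below is a minimal Python sketch of the idea; the discount function and its 100-unit threshold are invented for illustration and are not part of this FAQ.

```python
# Data-driven test sketch. The business rule under test is hypothetical:
# 10% off orders of 100 or more.

def discount(order_total):
    """Hypothetical rule: apply a 10% discount at or above 100."""
    return round(order_total * 0.9, 2) if order_total >= 100 else order_total

# Expected behavior lives in data, not in test code. If the requirement
# changes (say, the threshold moves to 150), only these rows are edited.
cases = [
    (99.99, 99.99),    # just below the threshold: no discount
    (100.00, 90.00),   # at the threshold: discount applies
    (250.00, 225.00),  # well above the threshold
]

failures = [(amount, discount(amount), expected)
            for amount, expected in cases
            if discount(amount) != expected]
```

An empty `failures` list means every data row passed; any surviving tuple records the input, actual, and expected values for the bug report. Note the rows also include the boundary values around the threshold, tying this back to the boundary value analysis mentioned in the planning steps earlier.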
What if the application has functionality that wasn't in the requirements?
It may take serious effort to determine if an application has significant unexpected or hidden functionality, and it would indicate deeper problems in the software development process. If the functionality isn't necessary to the purpose of the application, it should be removed, as it may have unknown impacts or dependencies that were not taken into account by the designer or the customer. If not removed, design information will be needed to determine added testing needs or regression testing needs. Management should be made aware of any significant added risks as a result of the unexpected functionality. If the functionality only affects areas such as minor improvements in the user interface, for example, it may not be a significant risk.

How can Software QA processes be implemented without stifling productivity?
By implementing QA processes slowly over time, using consensus to reach agreement on processes, and adjusting and experimenting as an organization grows and matures, productivity will be improved instead of stifled. Problem prevention will lessen the need for problem detection, panics and burn-out will decrease, and there will be improved focus and less wasted effort. At the same time, attempts should be made to keep processes simple and efficient, minimize paperwork, promote computer-based processes and automated tracking and reporting, minimize time required in meetings, and promote training as part of the QA process. However, no one - especially talented technical types - likes rules or bureaucracy, and in the short run things may slow down a bit. A typical scenario would be that more days of planning and development will be needed, but less time will be required for late-night bug-fixing and calming of irate customers.

What if an organization is growing so fast that fixed QA processes are impossible?
This is a common problem in the software industry, especially in new technology areas. There is no easy solution in this situation, other than:
- Hire good people
- Management should 'ruthlessly prioritize' quality issues and maintain focus on the customer
- Everyone in the organization should be clear on what 'quality' means to the customer

How does a client/server environment affect testing?
Client/server applications can be quite complex due to the multiple dependencies among clients, data communications, hardware, and servers. Thus testing requirements can be extensive. When time is limited (as it usually is) the focus should be on integration and system testing. Additionally, load/stress/performance testing may be useful in determining client/server application limitations and capabilities. There are commercial tools to assist with such testing. (See the 'Tools' section for web resources with listings that include these kinds of test tools.)

How can World Wide Web sites be tested?
Web sites are essentially client/server applications - with web servers and 'browser' clients. Consideration should be given to the interactions between html pages, TCP/IP communications, Internet connections, firewalls, applications that run in web pages (such as applets, javascript, plug-in applications), and applications that run on the server side (such as cgi scripts, database interfaces, logging applications, dynamic page generators, asp, etc.). Additionally, there are a wide variety of servers and browsers, various versions of each, small but sometimes significant differences between them, variations in connection speeds, rapidly changing technologies, and multiple standards and protocols. The end result is that testing for web sites can become a major ongoing effort. Other considerations might include:
- What are the expected loads on the server (e.g., number of hits per unit time?), and what kind of performance is required under such loads (such as web server response time, database query response times)? What kinds of tools will be needed for performance testing (such as web load testing tools, other tools already in house that can be adapted, web robot downloading tools, etc.)?
- Who is the target audience? What kind of browsers will they be using? What kind of connection speeds will they be using? Are they intra-organization (thus with likely high connection speeds and similar browsers) or Internet-wide (thus with a wide variety of connection speeds and browser types)?
- What kind of performance is expected on the client side (e.g., how fast should pages appear, how fast should animations, applets, etc.
load and run)?
- Will down time for server and content maintenance/upgrades be allowed? How much?
- What kinds of security (firewalls, encryptions, passwords, etc.) will be required and what is it expected to do? How can it be tested?
- How reliable are the site's Internet connections required to be? And how does that affect backup system or redundant connection requirements and testing?
- What processes will be required to manage updates to the web site's content, and what are the requirements for maintaining, tracking, and controlling page content, graphics, links, etc.?
- Which HTML specification will be adhered to? How strictly? What variations will be allowed for targeted browsers?
- Will there be any standards or requirements for page appearance and/or graphics throughout a site or parts of a site?
- How will internal and external links be validated and updated? How often?
- Can testing be done on the production system, or will a separate test system be required? How are browser caching, variations in browser option settings, dial-up connection variabilities, and real-world internet 'traffic congestion' problems to be accounted for in testing?
- How extensive or customized are the server logging and reporting requirements; are they considered an integral part of the system and do they require testing?
- How are cgi programs, applets, javascripts, ActiveX components, etc. to be maintained, tracked, controlled, and tested?
Some sources of site security information include the Usenet newsgroup 'comp.security.announce' and links concerning web site security in the 'Other Resources' section.
Some usability guidelines to consider - these are subjective and may or may not apply to a given situation (Note: more information on usability testing issues can be found in articles about web site usability in the 'Other Resources' section):
- Pages should be 3-5 screens max unless content is tightly focused on a single topic. If larger, provide internal links within the page.
- The page layouts and design elements should be consistent throughout a site, so that it's clear to the user that they're still within a site.
- Pages should be as browser-independent as possible, or pages should be provided or generated based on the browser-type.
- All pages should have links external to the page; there should be no dead-end pages.
- The page owner, revision date, and a link to a contact person or organization should be included on each page.
Many new web site test tools have appeared in recent years and more than 280 of them are listed in the 'Web Test Tools' section.

How is testing affected by object-oriented designs?
Well-engineered object-oriented design can make it easier to trace from code to internal design to functional design to requirements.
While there will be little effect on black box testing (where an understanding of the internal design of the application is unnecessary), white-box testing can be oriented to the application's objects. If the application was well-designed this can simplify test design.

What is Extreme Programming and what's it got to do with testing?
Extreme Programming (XP) is a software development approach for small teams on risk-prone projects with unstable requirements. It was created by Kent Beck who described the approach in his book 'Extreme Programming Explained' (see the Softwareqatest.com Books page). Testing ('extreme testing') is a core aspect of Extreme Programming. Programmers are expected to write unit and functional test code first - before the application is developed. Test code is under source control along with the rest of the code. Customers are expected to be an integral part of the project team and to help develop scenarios for acceptance/black box testing. Acceptance tests are preferably automated, and are modified and rerun for each of the frequent development iterations. QA and test personnel are also required to be an integral part of the project team. Detailed requirements documentation is not used, and frequent re-scheduling, re-estimating, and re-prioritizing is expected. For more info see the XP-related listings in the Softwareqatest.com 'Other Resources' section.

What is 'Software Quality Assurance'?
Software QA involves the entire software development PROCESS - monitoring and improving the process, making sure that any agreed-upon standards and procedures are followed, and ensuring that problems are found and dealt with. It is oriented to 'prevention'. (See the Bookstore section's 'Software QA' category for a list of useful books on Software Quality Assurance.)

What is 'Software Testing'?
Testing involves operation of a system or application under controlled conditions and evaluating the results (e.g., 'if the user is in interface A of the application while using hardware B, and does C, then D should happen'). The controlled conditions should include both normal and abnormal conditions. Testing should intentionally attempt to make things go wrong to determine if things happen when they shouldn't or things don't happen when they should. It is oriented to 'detection'.
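The 'if the user does C, then D should happen' idea, together with deliberately testing abnormal conditions, can be shown in a few lines of Python. The safe_divide function below is hypothetical, chosen only to illustrate checking a unit under both a normal and an abnormal controlled condition:

```python
# Sketch of testing under controlled conditions: one normal case and
# one abnormal case where we intentionally try to make things go wrong.

def safe_divide(a, b):
    """Hypothetical unit under test: divide, rejecting the b == 0 case."""
    if b == 0:
        raise ValueError("division by zero is not allowed")
    return a / b

# Normal condition: doing C (dividing 10 by 4) should make D happen (2.5).
normal_ok = (safe_divide(10, 4) == 2.5)

# Abnormal condition: the failure should happen, and in the expected way.
try:
    safe_divide(1, 0)
    abnormal_ok = False   # no error was raised - that would be a bug
except ValueError:
    abnormal_ok = True
```

A test that only exercised the normal path would miss half of the definition above; the abnormal case verifies that the wrong thing does not silently happen.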
(See the Bookstore section's 'Software Testing' category for a list of useful books on Software Testing.)
- Organizations vary considerably in how they assign responsibility for QA and testing. Sometimes they're the combined responsibility of one group or individual. Also common are project teams that include a mix of testers and developers who work closely together, with overall QA processes monitored by project managers. It will depend on what best fits an organization's size and business structure.

What are some recent major computer system failures caused by software bugs?
- A major U.S. retailer was reportedly hit with a large government fine in October of 2003 due to web site errors that enabled customers to view one another's online orders.
- News stories in the fall of 2003 stated that a manufacturing company recalled all their transportation products in order to fix a software problem causing instability in certain circumstances. The company found and reported the bug itself and initiated the recall procedure in which a software upgrade fixed the problems.
- In August of 2003 a U.S. court ruled that a lawsuit against a large online brokerage company could proceed; the lawsuit reportedly involved claims that the company was not fixing system problems that sometimes resulted in failed stock trades, based on the experiences of 4 plaintiffs during an 8-month period. A previous lower court's ruling that "...six miscues out of more than 400 trades does not indicate negligence." was invalidated.
- In April of 2003 it was announced that the largest student loan company in the U.S. made a software error in calculating the monthly payments on 800,000 loans. Although borrowers were to be notified of an increase in their required payments, the company will still reportedly lose $8 million in interest. The error was uncovered when borrowers began reporting inconsistencies in their bills.
- News reports in February of 2003 revealed that the U.S. Treasury Department mailed 50,000 Social Security checks without any beneficiary names. A spokesperson indicated that the missing names were due to an error in a software change. Replacement checks were subsequently mailed out with the problem corrected, and recipients were then able to cash their Social Security checks.
- In March of 2002 it was reported that software bugs in Britain's national tax system resulted in more than 100,000 erroneous tax overcharges. The problem was partly attributed to the difficulty of testing the integration of multiple systems.
- A newspaper columnist reported in July 2001 that a serious flaw was found in off-the-shelf software that had long been used in systems for tracking certain U.S. nuclear materials. The same software had been recently donated to another country to be used in tracking their own nuclear materials, and it was not until scientists in that country discovered the problem, and shared the information, that U.S. officials became aware of the problems.
- According to newspaper stories in mid-2001, a major systems development contractor was fired and sued over problems with a large retirement plan management system. According to the reports, the client claimed that system deliveries were late, the software had excessive defects, and it caused other systems to crash.
- In January of 2001 newspapers reported that a major European railroad was hit by the aftereffects of the Y2K bug. The company found that many of their newer trains would not run due to their inability to recognize the date '31/12/2000'; the trains were started by altering the control system's date settings.
- News reports in September of 2000 told of a software vendor settling a lawsuit with a large mortgage lender; the vendor had reportedly delivered an online mortgage processing system that did not meet specifications, was delivered late, and didn't work.
- In early 2000, major problems were reported with a new computer system in a large suburban U.S. public school district with 100,000+ students; problems included 10,000 erroneous report cards and students left stranded by failed class registration systems; the district's CIO was fired. The school district decided to reinstate its original 25-year old system for at least a year until the bugs were worked out of the new system by the software vendors.
- In October of 1999 the $125 million NASA Mars Climate Orbiter spacecraft was believed to be lost in space due to a simple data conversion error. It was determined that spacecraft software used certain data in English units that should have been in metric units. Among other tasks, the orbiter was to serve as a communications relay for the Mars Polar Lander mission, which failed for unknown reasons in December 1999. Several investigating panels were convened to determine the process failures that allowed the error to go undetected.
- Bugs in software supporting a large commercial high-speed data network affected 70,000 business customers over a period of 8 days in August of 1999. Among those affected was the electronic trading system of the largest U.S. futures exchange, which was shut down for most of a week as a result of the outages.
- In April of 1999 a software bug caused the failure of a $1.2 billion U.S. military satellite launch, the costliest unmanned accident in the history of Cape Canaveral launches. The failure was the latest in a string of launch failures, triggering a complete military and industry review of U.S.
space launch programs, including software integration and testing processes. Congressional oversight hearings were requested.
- A small town in Illinois in the U.S. received an unusually large monthly electric bill of $7 million in March of 1999. This was about 700 times larger than its normal bill. It turned out to be due to bugs in new software that had been purchased by the local power company to deal with Y2K software issues.
- In early 1999 a major computer game company recalled all copies of a popular new product due to software problems. The company made a public apology for releasing a product before it was ready.

Why is it often hard for management to get serious about quality assurance?
Solving problems is a high-visibility process; preventing problems is low-visibility. This is illustrated by an old parable: In ancient China there was a family of healers, one of whom was known throughout the land and employed as a physician to a great lord. The physician was asked which of his family was the most skillful healer. He replied, "I tend to the sick and dying with drastic and dramatic treatments, and on occasion someone is cured and my name gets out among the lords." "My elder brother cures sickness when it just begins to take root, and his skills are known among the local peasants and neighbors." "My eldest brother is able to sense the spirit of sickness and eradicate it before it takes form. His name is unknown outside our home."

Why does software have bugs?
- miscommunication or no communication - as to specifics of what an application should or shouldn't do (the application's requirements).
- software complexity - the complexity of current software applications can be difficult to comprehend for anyone without experience in modern-day software development. Windows-type interfaces, client-server and distributed applications, data communications, enormous relational databases, and the sheer size of applications have all contributed to the exponential growth in software/system complexity. And the use of object-oriented techniques can complicate instead of simplify a project unless it is well-engineered.
- programming errors - programmers, like anyone else, can make mistakes.
- changing requirements (whether documented or undocumented) - the customer may not understand the effects of changes, or may understand and request them anyway - redesign, rescheduling of engineers, effects on other projects, work already completed that may have to be redone or thrown out, hardware requirements that may be affected, etc.
If there are many minor changes or any major changes, known and unknown dependencies among parts of the project are likely to interact and cause problems, and the complexity of coordinating changes may result in errors. Enthusiasm of engineering staff may be affected. In some fast-changing business environments, continuously modified requirements may be a fact of life. In this case, management must understand the resulting risks, and QA and test engineers must adapt and plan for continuous extensive testing to keep the inevitable bugs from running out of control - see 'What can be done if requirements are changing continuously?' in Part 2 of the FAQ.
- time pressures - scheduling of software projects is difficult at best, often requiring a lot of guesswork. When deadlines loom and the crunch comes, mistakes will be made.
- egos - people prefer to say things like 'no problem', 'piece of cake', 'I can whip that out in a few hours', 'it should be easy to update that old code', instead of 'that adds a lot of complexity and we could end up making a lot of mistakes', 'we have no idea if we can do that; we'll wing it', 'I can't estimate how long it will take, until I take a close look at it', 'we can't figure out what that old spaghetti code did in the first place'. If there are too many unrealistic 'no problem's', the result is bugs.
- poorly documented code - it's tough to maintain and modify code that is badly written or poorly documented; the result is bugs. In many organizations management provides no incentive for programmers to document their code or write clear, understandable, maintainable code. In fact, it's usually the opposite: they get points mostly for quickly turning out code, and there's job security if nobody else can understand it ('if it was hard to write, it should be hard to read').
- software development tools - visual tools, class libraries, compilers, scripting tools, etc. often introduce their own bugs or are poorly documented, resulting in added bugs.

How can new Software QA processes be introduced in an existing organization?
- A lot depends on the size of the organization and the risks involved. For large organizations with high-risk (in terms of lives or property) projects, serious management buy-in is required and a formalized QA process is necessary.
- Where the risk is lower, management and organizational buy-in and QA implementation may be a slower, step-at-a-time process. QA processes should be balanced with productivity so as to keep bureaucracy from getting out of hand.
- For small groups or projects, a more ad-hoc process may be appropriate, depending on the type of customers and projects.
A lot will depend on team leads or managers, feedback to developers, and ensuring adequate communications among customers, managers, developers, and testers.
- The most value for effort will be in (a) requirements management processes, with a goal of clear, complete, testable requirement specifications embodied in requirements or design documentation, and (b) design inspections and code inspections.

What is verification? validation?
Verification typically involves reviews and meetings to evaluate documents, plans, code, requirements, and specifications. This can be done with checklists, issues lists, walkthroughs, and inspection meetings. Validation typically involves actual testing and takes place after verifications are completed. The term 'IV & V' refers to Independent Verification and Validation.

What is a 'walkthrough'?
A walkthrough is an informal meeting for evaluation or informational purposes. Little or no preparation is usually required.

What's an 'inspection'?
An inspection is more formalized than a 'walkthrough', typically with 3-8 people including a moderator, reader, and a recorder to take notes. The subject of the inspection is typically a document such as a requirements spec or a test plan, and the purpose is to find problems and see what's missing, not to fix anything. Attendees should prepare for this type of meeting by reading through the document; most problems will be found during this preparation. The result of the inspection meeting should be a written report. Thorough preparation for inspections is difficult, painstaking work, but is one of the most cost-effective methods of ensuring quality. Employees who are most skilled at inspections are like the 'eldest brother' in the parable in 'Why is it often hard for management to get serious about quality assurance?'. Their skill may have low visibility but they are extremely valuable to any software development organization, since bug prevention is far more cost-effective than bug detection.

What kinds of testing should be considered?
- Black box testing - not based on any knowledge of internal design or code. Tests are based on requirements and functionality.
- White box testing - based on knowledge of the internal logic of an application's code. Tests are based on coverage of code statements, branches, paths, conditions.
- Unit testing - the most 'micro' scale of testing; to test particular functions or code modules. Typically done by the programmer and not by testers, as it requires detailed knowledge of the internal program design and code. Not always easily done unless the application has a well-designed architecture with tight code; may require developing test driver modules or test harnesses.
- Incremental integration testing - continuous testing of an application as new functionality is added; requires that various aspects of an application's functionality be independent enough to work separately before all parts of the program are completed, or that test drivers be developed as needed; done by programmers or by testers.
- Integration testing - testing of combined parts of an application to determine if they function together correctly. The 'parts' can be code modules, individual applications, client and server applications on a network, etc. This type of testing is especially relevant to client/server and distributed systems.
- Functional testing - black-box type testing geared to functional requirements of an application; this type of testing should be done by testers. This doesn't mean that the programmers shouldn't check that their code works before releasing it (which of course applies to any stage of testing).
- System testing - black-box type testing that is based on overall requirements specifications; covers all combined parts of a system.
- End-to-end testing - similar to system testing; the 'macro' end of the test scale; involves testing of a complete application environment in a situation that mimics real-world use, such as interacting with a database, using network communications, or interacting with other hardware, applications, or systems if appropriate.
- Sanity testing or smoke testing - typically an initial testing effort to determine if a new software version is performing well enough to accept it for a major testing effort.
For example, if the new software is crashing systems every 5 minutes, bogging down systems to a crawl, or corrupting databases, the software may not be in a 'sane' enough condition to warrant further testing in its current state.
- Regression testing - re-testing after fixes or modifications of the software or its environment. It can be difficult to determine how much re-testing is needed, especially near the end of the development cycle. Automated testing tools can be especially useful for this type of testing.
- Acceptance testing - final testing based on specifications of the end-user or customer, or based on use by end-users/customers over some limited period of time.
- Load testing - testing an application under heavy loads, such as testing of a web site under a range of loads to determine at what point the system's response time degrades or fails.
- Stress testing - term often used interchangeably with 'load' and 'performance' testing. Also used to describe such tests as system functional testing while under unusually heavy loads, heavy repetition of certain actions or inputs, input of large numerical values, large complex queries to a database system, etc.
- Performance testing - term often used interchangeably with 'stress' and 'load' testing. Ideally, 'performance' testing (and any other 'type' of testing) is defined in requirements documentation or QA or Test Plans.
- Usability testing - testing for 'user-friendliness'. Clearly this is subjective, and will depend on the targeted end-user or customer. User interviews, surveys, video recording of user sessions, and other techniques can be used. Programmers and testers are usually not appropriate as usability testers.
- Install/uninstall testing - testing of full, partial, or upgrade install/uninstall processes.
- Recovery testing - testing how well a system recovers from crashes, hardware failures, or other catastrophic problems.
- Security testing - testing how well the system protects against unauthorized internal or external access, willful damage, etc.; may require sophisticated testing techniques.
- Compatibility testing - testing how well software performs in a particular hardware/software/operating system/network/etc. environment.
- Exploratory testing - often taken to mean a creative, informal software test that is not based on formal test plans or test cases; testers may be learning the software as they test it.
- Ad-hoc testing - similar to exploratory testing, but often taken to mean that the testers have significant understanding of the software before testing it.
- User acceptance testing - determining if software is satisfactory to an end-user or customer.
- Comparison testing - comparing software weaknesses and strengths to competing products.
- Alpha testing - testing of an application when development is nearing completion; minor design changes may still be made as a result of such testing.
"ypically done by end%users or others, not by programmers or testers. , beta testing % testing when development and testing are essentially completed and final bugs and problems need to be found before final release. "ypically done by end%users or others, not by programmers or testers. , mutation testing % a method for determining if a set of test data or test cases is useful, by deliberately introducing various code changes # bugs $ and retesting with the original test data*cases to determine if

the bugs are detected. Proper implementation re!uires large computational resources. What are ) common problems in the software development process? , poor re!uirements % if re!uirements are unclear, incomplete, too general, or not testable, there will be problems. , unrealistic schedule % if too much work is crammed in too little time, problems are inevitable. , inade!uate testing % no one will know whether or not the program is any good until the customer complains or systems crash. , featuritis % re!uests to pile on new features after development is underway2 extremely common. , miscommunication % if developers don t know what s needed or customer s have erroneous expectations, problems are guaranteed. What are ) common solutions to software development problems? , solid re!uirements % clear, complete, detailed, cohesive, attainable, testable re!uirements that are agreed to by all players. Use prototypes to help nail down re!uirements. , realistic schedules % allow ade!uate time for planning, design, testing, bug fixing, re%testing, changes, and documentation2 personnel should be able to complete the pro-ect without burning out. , ade!uate testing % start testing early on, re%test after fixes or changes, plan for ade!uate time for testing and bug%fixing. , stick to initial re!uirements as much as possible % be prepared to defend against changes and additions once development has begun, and be prepared to explain conse!uences. If changes are necessary, they should be ade!uately reflected in related schedule changes. If possible, use rapid prototyping during the design phase so that customers can see what to expect. "his will provide them a higher comfort level with their re!uirements decisions and minimi(e changes later on. 
- Communication - require walkthroughs and inspections when appropriate; make extensive use of group communication tools - e-mail, groupware, networked bug-tracking tools and change management tools, intranet capabilities, etc.; ensure that documentation is available and up-to-date - preferably electronic, not paper; promote teamwork and cooperation; use prototypes early on so that customers' expectations are clarified.

What is software 'quality'?
Quality software is reasonably bug-free, delivered on time and within budget, meets requirements and/or expectations, and is maintainable. However, quality is obviously a subjective term. It will depend on who the 'customer' is and their overall influence in the scheme of things. A wide-angle view of the 'customers' of a software development project might include end-users, customer acceptance testers, customer contract officers, customer management, the development organization's management/accountants/testers/salespeople, future software maintenance engineers, stockholders, magazine columnists, etc. Each type of 'customer' will have their own slant on 'quality' - the accounting department might define quality in terms of profits while an end-user might define quality as user-friendly and bug-free.

What is 'good code'?
'Good code' is code that works, is bug free, and is readable and maintainable. Some organizations have coding 'standards' that all developers are supposed to adhere to, but everyone has different ideas about what's best, or what is too many or too few rules. There are also various theories and metrics, such as McCabe Complexity metrics. It should be kept in mind that excessive use of standards and rules can stifle productivity and creativity. 'Peer reviews', 'buddy checks', code analysis tools, etc. can be used to check for problems and enforce standards. For C and C++ coding, here are some typical ideas to consider in setting rules/standards; these may or may not apply to a particular situation:
- Minimize or eliminate use of global variables.
- Use descriptive function and method names - use both upper and lower case, avoid abbreviations, use as many characters as necessary to be adequately descriptive (use of more than 20 characters is not out of line); be consistent in naming conventions.
- Use descriptive variable names - use both upper and lower case, avoid abbreviations, use as many characters as necessary to be adequately descriptive (use of more than 20 characters is not out of line); be consistent in naming conventions.
- Function and method sizes should be minimized; less than 100 lines of code is good, less than 50 lines is preferable.
- Function descriptions should be clearly spelled out in comments preceding a function's code.
- Organize code for readability.
- Use whitespace generously - vertically and horizontally.
- Each line of code should contain 70 characters max.
- One code statement per line.
- Coding style should be consistent throughout a program (e.g., use of brackets, indentations, naming conventions, etc.).
- In adding comments, err on the side of too many rather than too few comments; a common rule of thumb is that there should be at least as many lines of comments (including header blocks) as lines of code.
- No matter how small, an application should include documentation of the overall program function and flow (even a few paragraphs is better than nothing); or if possible a separate flow chart and detailed program documentation.
- Make extensive use of error handling procedures and status and error logging.
- For C++, to minimize complexity and increase maintainability, avoid too many levels of inheritance in class hierarchies (relative to the size and complexity of the application). Minimize use of multiple inheritance, and minimize use of operator overloading (note that the Java programming language eliminates multiple inheritance and operator overloading).
- For C++, keep class methods small; less than 50 lines of code per method is preferable.
- For C++, make liberal use of exception handlers.

What is 'good design'?
'Design' could refer to many things, but often refers to 'functional design' or 'internal design'. Good internal design is indicated by software code whose overall structure is clear, understandable, easily modifiable, and maintainable; is robust with sufficient error-handling and status logging capability; and works correctly when implemented. Good functional design is indicated by an application whose functionality can be traced back to customer and end-user requirements.
(See further discussion of functional and internal design in 'What's the big deal about requirements?' in FAQ #2.) For programs that have a user interface, it's often a good idea to assume that the end user will have little computer knowledge and may not read a user manual or even the on-line help; some common rules-of-thumb include:
- The program should act in a way that least surprises the user.
- It should always be evident to the user what can be done next and how to exit.
- The program shouldn't let the users do something stupid without warning them.

What is SEI? CMM? ISO? IEEE? ANSI? Will it help?
- SEI = 'Software Engineering Institute' at Carnegie-Mellon University; initiated by the U.S. Defense Department to help improve software development processes.
- CMM = 'Capability Maturity Model', developed by the SEI. It's a model of 5 levels of organizational 'maturity' that determine effectiveness in delivering quality software. It is geared to large organizations such as large U.S. Defense Department contractors. However, many of the QA processes involved are appropriate to any organization, and if reasonably applied can be helpful. Organizations can receive CMM ratings by undergoing assessments by qualified auditors.
  Level 1 - characterized by chaos, periodic panics, and heroic efforts required by individuals to successfully complete projects. Few if any processes in place; successes may not be repeatable.
  Level 2 - software project tracking, requirements management, realistic planning, and configuration management processes are in place; successful practices can be repeated.
  Level 3 - standard software development and maintenance processes are integrated throughout an organization; a Software Engineering Process Group is in place to oversee software processes, and training programs are used to ensure understanding and compliance.
  Level 4 - metrics are used to track productivity, processes, and products. Project performance is predictable, and quality is consistently high.
  Level 5 - the focus is on continuous process improvement. The impact of new processes and technologies can be predicted and effectively implemented when required.
  Perspective on CMM ratings: During 1997-2001, 1018 organizations were assessed. Of those, 27% were rated at Level 1, 39% at 2, 23% at 3, 6% at 4, and 5% at 5. (For ratings during the period 1992-96, 62% were at Level 1, 23% at 2, 13% at 3, 2% at 4, and 0.4% at 5.) The median size of organizations was 100 software engineering/maintenance personnel; 32% of organizations were U.S. federal contractors or agencies. For those rated at Level 1, the most problematical key process area was in Software Quality Assurance.
- ISO = 'International Organization for Standardization' - The ISO 9001:2000 standard (which replaces the previous standard of 1994) concerns quality systems that are assessed by outside auditors, and it applies to many kinds of production and manufacturing organizations, not just software. It covers documentation, design, development, production, testing, installation, servicing, and other processes. The full set of standards consists of: (a) Q9001-2000 - Quality Management Systems: Requirements; (b) Q9000-2000 - Quality Management Systems: Fundamentals and Vocabulary; (c) Q9004-2000 - Quality Management Systems: Guidelines for Performance Improvements. To be ISO 9001 certified, a third-party auditor assesses an organization, and certification is typically good for about 3 years, after which a complete reassessment is required. Note that ISO certification does not necessarily indicate quality products - it indicates only that documented processes are followed. Also see http://www.iso.ch/ for the latest information. In the U.S. the standards can be purchased via the ASQ web site at http://e-standards.asq.org/
- IEEE = 'Institute of Electrical and Electronics Engineers' - among other things, creates standards such as 'IEEE Standard for Software Test Documentation' (IEEE/ANSI Standard 829), 'IEEE Standard of Software Unit Testing' (IEEE/ANSI Standard 1008), 'IEEE Standard for Software Quality Assurance Plans' (IEEE/ANSI Standard 730), and others.
- ANSI = 'American National Standards Institute', the primary industrial standards body in the U.S.; publishes some software-related standards in conjunction with the IEEE and ASQ (American Society for Quality).
- Other software development process assessment methods besides CMM and ISO 9000 include SPICE, Trillium, TickIT, and Bootstrap.

What is the 'software life cycle'?
The life cycle begins when an application is first conceived and ends when it is no longer in use. It includes aspects such as initial concept, requirements analysis, functional design, internal design, documentation planning, test planning, coding, document preparation, integration, testing, maintenance, updates, retesting, phase-out, and other aspects.

Will automated testing tools make testing easier?
- Possibly.
For small projects, the time needed to learn and implement them may not be worth it. For larger projects, or on-going long-term projects, they can be valuable.
- A common type of automated tool is the 'record/playback' type. For example, a tester could click through all combinations of menu choices, dialog box choices, buttons, etc. in an application GUI and have them 'recorded' and the results logged by a tool. The 'recording' is typically in the form of text based on a scripting language that is interpretable by the testing tool. If new buttons are added, or some underlying code in the application is changed, etc., the application might then be retested by just 'playing back' the 'recorded' actions, and comparing the logging results to check effects of the changes. The problem with such tools is that if there are continual changes to the system being tested, the 'recordings' may have to be changed so much that it becomes very time-consuming to continuously update the scripts. Additionally, interpretation and analysis of results (screens, data, logs, etc.) can be a difficult task. Note that there are record/playback tools for text-based interfaces also, and for all types of platforms.
- Other automated tools can include:
  code analyzers - monitor code complexity, adherence to standards, etc.
  coverage analyzers - these tools check which parts of the code have been exercised by a test, and may be oriented to code statement coverage, condition coverage, path coverage, etc.
  memory analyzers - such as bounds-checkers and leak detectors.
  load/performance test tools - for testing client/server and web applications under various load levels.
  web test tools - to check that links are valid, HTML code usage is correct, client-side and server-side programs work, and a web site's interactions are secure.
  other tools - for test case management, documentation management, bug reporting, and configuration management.
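Whether scripts come from record/playback or are written by hand, the unit and regression testing ideas above boil down to re-runnable checks against expected results. A minimal sketch using Python's built-in unittest module; the apply_discount function and its expected values are hypothetical, invented here purely for illustration:

```python
import unittest

def apply_discount(price, percent):
    """Hypothetical function under test: discount a price by a percentage."""
    if not (0 <= percent <= 100):
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100.0), 2)

class ApplyDiscountRegressionTest(unittest.TestCase):
    """Re-run these checks after every fix or change (regression testing)."""

    def test_typical_discount(self):
        # A checkpoint: compare current behavior to expected data.
        self.assertEqual(apply_discount(200.0, 10), 180.0)

    def test_boundary_values(self):
        self.assertEqual(apply_discount(99.99, 0), 99.99)   # no discount
        self.assertEqual(apply_discount(99.99, 100), 0.0)   # full discount

    def test_invalid_input_is_rejected(self):
        # Negative test: bad input should be refused, not silently accepted.
        with self.assertRaises(ValueError):
            apply_discount(50.0, 150)
```

Saved as a file, the suite runs with `python -m unittest <filename>`; running it after each code change is exactly the automated regression pass described above.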

QA interview questions:

Tell me about yourself
Do you have experience writing test plans/test cases
Will you not be bored doing manual testing
Where do you see yourself three years from now
What are your strengths
Tell me about a bug that you helped resolve while your coworkers could not
What are your weaknesses
Tell me about your recent project

What do you like about QA
How strong are you in writing SQL queries. Rate yourself between 1 to 10
What was the duration of release cycles in your previous projects
How much time did you have for testing before each release
What was the size of QA teams
Do you have experience working with small companies
How do you handle requirement changes
Have you handled multiple modules/projects at a time
Have you written detailed test cases or high level test cases
How do you go about writing test cases
Can you write test cases without requirement docs
How do you write test cases without requirement docs
How do you analyze a bug
Do you know Unix
How do you set permissions in Unix
What do you write in a test plan and test case
Tell me about a bug that you found in your previous project that you are very proud of
Was asked to write a SQL query
Why did you apply to this company
Why should we hire you
Why we should not hire you
How do you feel about having huge responsibilities
What are the qualities that a QA tester should have
Why do you think that you need those qualities
You filed an important bug and the dev team rejects it. What do you do
What is automation testing
You are handling multiple projects and you have a deadline that you cannot meet. You are the only one to do the job. What do you do
Was asked to write test cases for the company application and also file bug report if I find any bugs. Was given one hour to do this
What kind of environment do you like to work in
What do you like about our office environment
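Several of the questions above ask candidates to write a SQL query on the spot, so it is worth rehearsing one end to end. A minimal sketch using Python's built-in sqlite3 module; the employees table, its columns, and its rows are hypothetical, invented here purely for practice:

```python
import sqlite3

# Build a small in-memory database with a hypothetical employees table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE employees (name TEXT, dept TEXT, salary INTEGER)")
conn.executemany(
    "INSERT INTO employees VALUES (?, ?, ?)",
    [("Ann", "QA", 70), ("Bob", "QA", 90), ("Cid", "Dev", 80), ("Dee", "Dev", 60)],
)

# A classic interview query: the highest salary in each department.
# Note: returning the bare 'name' column alongside MAX() is a documented
# SQLite convenience (the row holding the max); standard SQL would need a
# subquery or window function instead.
query = """
    SELECT dept, name, MAX(salary) AS top_salary
    FROM employees
    GROUP BY dept
    ORDER BY dept
"""
for dept, name, top_salary in conn.execute(query):
    print(dept, name, top_salary)
```

Being able to explain the GROUP BY, the aggregate, and the portability caveat is usually worth as much in an interview as producing the query itself.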

WinRunner FAQ:
1) Have you used WinRunner in your project?
a) Yes, I have been using WinRunner for creating automated scripts for GUI, functional and regression testing of the AUT.
2) Explain the WinRunner testing process.
a) The WinRunner testing process involves six main stages:
i. Create the GUI Map file so that WinRunner can recognize the GUI objects in the application being tested.
ii. Create test scripts by recording, programming, or a combination of both. While recording tests, insert checkpoints where you want to check the response of the application being tested.
iii. Debug Tests: run tests in Debug mode to make sure they run smoothly.
iv. Run Tests: run tests in Verify mode to test your application.
v. View Results: determine the success or failure of the tests.
vi. Report Defects: if a test run fails due to a defect in the application being tested, you can report information about the defect directly from the Test Results window.
3) What is contained in the GUI map?
a) WinRunner stores information it learns about a window or object in a GUI Map. When WinRunner runs a test, it uses the GUI map to locate objects. It reads an object's description in the GUI map and then looks for an object with the same properties in the application being tested. Each of these objects in the GUI Map file will have a logical name and a physical description.
b) There are 2 types of GUI Map files:
i. Global GUI Map file: a single GUI Map file for the entire application.
ii. GUI Map File per Test: WinRunner automatically creates a GUI Map file for each test created.
4) How does WinRunner recognize objects on the application?
a) WinRunner uses the GUI Map file to recognize objects on the application. When WinRunner runs a test, it uses the GUI map to locate objects. It reads an object's description in the GUI map and then looks for an object with the same properties in the application being tested.
5) Have you created test scripts and what is contained in the test scripts?
a) Yes, I have created test scripts. A test script contains statements in Mercury Interactive's Test Script Language (TSL). These statements appear as a test script in a test window. You can then enhance your recorded test script, either by typing in additional TSL functions and programming elements or by using WinRunner's visual programming tool, the Function Generator.
6) How does WinRunner evaluate test results?
a) Following each test run, WinRunner displays the results in a report. The report details all the major events that occurred during the run, such as checkpoints, error messages, system messages, or user messages. If mismatches are detected at checkpoints during the test run, you can view the expected results and the actual results from the Test Results window.
7) Have you performed debugging of the scripts?
a) Yes, I have performed debugging of scripts. We can debug a script by executing it in Debug mode. We can also debug a script using the Step, Step Into, and Step Out functionalities provided by WinRunner.
8) How do you run your test scripts?
a) We run tests in Verify mode to test the application. Each time WinRunner encounters a checkpoint in the test script, it compares the current data of the application being tested to the expected data captured earlier. If any mismatches are found, WinRunner captures them as actual results.
9) How do you analyze results and report the defects?
a) Following each test run, WinRunner displays the results in a report. The report details all the major events that occurred during the run, such as checkpoints, error messages, system messages, or user messages. If mismatches are detected at checkpoints during the test run, you can view the expected results and the actual results from the Test Results window. If a test run fails due to a defect in the application being tested, you can report information about the defect directly from the Test Results window. This information is sent via e-mail to the quality assurance manager, who tracks the defect until it is fixed.

10) What is the use of the TestDirector software?
a) TestDirector is Mercury Interactive's software test management tool. It helps quality assurance personnel plan and organize the testing process. With TestDirector you can create a database of manual and automated tests, build test cycles, run tests, and report and track defects. You can also create reports and graphs to help review the progress of planning tests, running tests, and tracking defects before a software release.
11) How have you integrated your automated scripts with TestDirector?
a) When you work with WinRunner, you can choose to save your tests directly to your TestDirector database, or while creating a test case in TestDirector we can specify whether the script is automated or manual. If it is an automated script, then TestDirector will build a skeleton for the script that can later be modified into one which could be used to test the AUT.
12) What are the different modes of recording?
a) There are two types of recording in WinRunner:
i. Context Sensitive recording records the operations you perform on your application by identifying Graphical User Interface (GUI) objects.
ii. Analog recording records keyboard input, mouse clicks, and the precise x- and y-coordinates traveled by the mouse pointer across the screen.
13) What is the purpose of loading WinRunner Add-Ins?
a) Add-Ins are used in WinRunner to load functions specific to the particular add-in into memory. While creating a script, only those functions in the selected add-in will be listed in the Function Generator, and while executing the script only those functions in the loaded add-in will be executed; otherwise WinRunner will give an error message saying it does not recognize the function.
14) What are the reasons that WinRunner fails to identify an object on the GUI?
a) WinRunner fails to identify an object in a GUI due to various reasons:
i. The object is not a standard windows object.
ii. If the browser used is not compatible with the WinRunner version, the GUI Map Editor will not be able to learn any of the objects displayed in the browser window.
15) What do you mean by the logical name of the object?
a) An object's logical name is determined by its class. In most cases, the logical name is the label that appears on an object.
16) If the object does not have a name, then what will be the logical name?
a) If the object does not have a name, then the logical name could be the attached text.
17) What is the difference between the GUI map and GUI map files?
a) The GUI map is actually the sum of one or more GUI map files. There are two modes for organizing GUI map files:
i. Global GUI Map file: a single GUI Map file for the entire application.
ii. GUI Map File per Test: WinRunner automatically creates a GUI Map file for each test created.
b) A GUI Map file is a file which contains the windows and the objects learned by WinRunner, with their logical names and physical descriptions.
18) How do you view the contents of the GUI map?
a) The GUI Map Editor displays the content of a GUI Map. We can invoke the GUI Map Editor from the Tools menu in WinRunner. The GUI Map Editor displays the various GUI Map files created and the windows and objects learned into them, with their logical names and physical descriptions.
19) When you create a GUI map, do you record all the objects or specific objects?
a) If we are learning a window, then WinRunner automatically learns all the objects in the window; otherwise we will identify those objects which are to be learned in a window, since we will be working with only those objects while creating scripts.
20) What is the purpose of the set_window command?
a) The set_window command sets the focus to the specified window. We use this command to set the focus to the required window before executing tests on a particular window.
Syntax: set_window(<logical name>, time);
The logical name is the logical name of the window, and time is the time the execution will wait until it gets the given window into focus.
21) How do you load a GUI map?
a) We can load a GUI Map by using the GUI_load command.
Syntax: GUI_load(<file_name>);
22) What is the disadvantage of loading GUI maps through startup scripts?
a) If we are using a single GUI Map file for the entire AUT, then the memory used by the GUI Map may be very high.
b) If there is any change in the object being learned, then WinRunner will not be able to recognize the object, as it is not in the GUI Map file loaded in the memory. So we will have to learn the object again, update the GUI file, and reload it.
23) How do you unload the GUI map?
a) We can use GUI_close to unload a specific GUI Map file, or else we can use the GUI_close_all command to unload all the GUI Map files loaded in the memory.
Syntax: GUI_close(<file_name>); or GUI_close_all;
24) What actually happens when you load a GUI map?
a) When we load a GUI Map file, the information about the windows and the objects with their logical names and physical descriptions is loaded into memory. So when WinRunner executes a script on a particular window, it can identify the objects using this information loaded in the memory.
25) What is the purpose of the temp GUI map file?
a) While recording a script, WinRunner learns objects and windows by itself. This is actually stored in the temporary GUI Map file. We can specify in the General Options whether this temporary GUI Map file should be loaded each time.
26) What is the extension of the GUI map file?
a) The extension for a GUI Map file is ".gui".
27) How do you find an object in a GUI map?
a) The GUI Map Editor provides Find and Show buttons.
i. To find a particular object from the GUI Map file in the application, select the object and click the Show button. This blinks the selected object.
ii. To find a particular object in a GUI Map file, click the Find button, which gives the option to select the object. When the object is selected, if the object has been learned into the GUI Map file it will be focused in the GUI Map file.
28) What different actions are performed by the Find and Show buttons?
a) To find a particular object from the GUI Map file in the application, select the object and click the Show button. This blinks the selected object.
b) To find a particular object in a GUI Map file, click the Find button, which gives the option to select the object. When the object is selected, if the object has been learned into the GUI Map file it will be focused in the GUI Map file.

2$; Row do &ou identif& which files are loaded in the QBI map! a; The QBI %ap /ditor has a drop down bQBI Filec displa&in* all the QBI %ap files loaded into the memor&. =I; Row do &ou modif& the lo*ical name or the ph&sical description of the ob0ects in QBI map! a; Hou can modif& the lo*ical name or the ph&sical description of an ob0ect in a QBI map file usin* the QBI %ap /ditor. =#; When do &ou feel &ou need to modif& the lo*ical name! a; 'han*in* the lo*ical name of an ob0ect is useful when the assi*ned lo*ical name is not sufficientl& descriptive or is too lon*. =2; When it is appropriate to chan*e ph&sical description! a; 'han*in* the ph&sical description is necessar& when the propert& value of an ob0ect chan*es. ==; Row WinRunner handles var&in* window labels! a; We can handle var&in* window labels usin* re*ular eCpressions. WinRunner uses two bhiddenc properties in order to use re*ular eCpression in an ob0ectYs ph&sical description. These properties are re*eCpDlabel and re*eCpD%-WDclass. i. The re*eCpDlabel propert& is used for windows onl&. It operates bbehind the scenesc to insert a re*ular eCpression into a windowYs label description. ii. The re*eCpD%-WDclass propert& inserts a re*ular eCpression into an ob0ectYs %-WDclass. It is obli*ator& for all t&pes of windows and for the ob0ect class ob0ect. > =J; What is the purpose of re*eCpDlabel propert& and re*eCpD%-WDclass propert&! a; The re*eCpDlabel propert& is used for windows onl&. It operates bbehind the scenesc to insert a re*ular eCpression into a windowYs label description. b; The re*eCpD%-WDclass propert& inserts a re*ular eCpression into an ob0ectYs %-WDclass. It is obli*ator& for all t&pes of windows and for the ob0ect class ob0ect. =L; Row do &ou suppress a re*ular eCpression! a; We can suppress the re*ular eCpression of a window b& replacin* the re*eCpDlabel propert& with label propert&. =N; Row do &ou cop& and move ob0ects between different QBI map files! 
a) We can copy and move objects between different GUI Map files using the GUI Map Editor. The steps to be followed are:
i. Choose Tools > GUI Map Editor to open the GUI Map Editor.
ii. Choose View > GUI Files.
iii. Click Expand in the GUI Map Editor. The dialog box expands to display two GUI map files simultaneously.
iv. View a different GUI map file on each side of the dialog box by clicking the file names in the GUI File lists.
v. In one file, select the objects you want to copy or move. Use the Shift key and/or Control key to select multiple objects. To select all objects in a GUI map file, choose Edit > Select All.
vi. Click Copy or Move.
vii. To restore the GUI Map Editor to its original size, click Collapse.
37) How do you select multiple objects during merging the files?
a) Use the Shift key and/or Control key to select multiple objects. To select all objects in a GUI map file, choose Edit > Select All.
38) How do you clear a GUI map file?
a) We can clear a GUI Map file using the "Clear All" option in the GUI Map Editor.
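To make the varying-window-label handling of question 33 concrete, here is a sketch of what a GUI map entry using regexp_label might look like. This is an illustrative assumption, not an excerpt from a real map file: the logical name, the title pattern, and the exact entry layout are invented for the example.

```tsl
# Hypothetical GUI map entry for a window whose title varies,
# e.g. "Fax Order No. 1", "Fax Order No. 2", ...
Fax_Order:
{
    class: window,
    regexp_label: "Fax Order No\\. [0-9]*"
}
```

With such an entry, any window whose label matches the pattern is identified by the single logical name Fax_Order.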

39) How do you filter the objects in the GUI map?
a) The GUI Map Editor has a Filter option. This provides filtering with three different types of options:
i. Logical name displays only objects with the specified logical name.
ii. Physical description displays only objects matching the specified physical description. Use any substring belonging to the physical description.
iii. Class displays only objects of the specified class, such as all the push buttons.
40) How do you configure the GUI map?
a) When WinRunner learns the description of a GUI object, it does not learn all its properties. Instead, it learns the minimum number of properties needed to provide a unique identification of the object.
b) Many applications also contain custom GUI objects. A custom object is any object not belonging to one of the standard classes used by WinRunner. These objects are therefore assigned to the generic "object" class. When WinRunner records an operation on a custom object, it generates obj_mouse_ statements in the test script.
c) If a custom object is similar to a standard object, you can map it to one of the standard classes. You can also configure the properties WinRunner uses to identify a custom object during Context Sensitive testing. The mapping and the configuration you set are valid only for the current WinRunner session. To make the mapping and the configuration permanent, you must add configuration statements to your startup test script.
41) What is the purpose of GUI map configuration?
a) GUI Map configuration is used to map a custom object to a standard object class.
42) How do you make the configuration and mappings permanent?
a) The mapping and the configuration you set are valid only for the current WinRunner session. To make the mapping and the configuration permanent, you must add configuration statements to your startup test script.
43) What is the purpose of the GUI Spy?
a) Using the GUI Spy, you can view the properties of any GUI object on your desktop. You use the Spy pointer to point to an object, and the GUI Spy displays the properties and their values in the GUI Spy dialog box. You can choose to view all the properties of an object, or only the selected set of properties that WinRunner learns.
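The permanent class mapping described in questions 40-42 is done with a configuration statement placed in the startup test. A minimal sketch, assuming a hypothetical custom class named "AcmeButton" (the class name is an assumption; set_class_map is the TSL function for mapping a custom class to a standard one):

```tsl
# In the startup test script: map the custom class "AcmeButton"
# (a hypothetical example class) to the standard push_button class,
# so Context Sensitive recording generates button_* statements for it.
set_class_map("AcmeButton", "push_button");
```

Because the startup test runs every time WinRunner is launched, the mapping then survives across sessions.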

44) What is the purpose of the obligatory and optional properties of objects?
a) For each class, WinRunner learns a set of default properties. Each default property is classified "obligatory" or "optional".
i. An obligatory property is always learned (if it exists).
ii. An optional property is used only if the obligatory properties do not provide unique identification of an object. These optional properties are stored in a list. WinRunner selects the minimum number of properties from this list that are necessary to identify the object. It begins with the first property in the list, and continues, if necessary, to add properties to the description until it obtains unique identification for the object.
45) When are the optional properties learned?
a) An optional property is used only if the obligatory properties do not provide unique identification of an object.
46) What is the purpose of the location indicator and index indicator in GUI map configuration?
a) In cases where the obligatory and optional properties do not uniquely identify an object, WinRunner uses a selector to differentiate between them. Two types of selectors are available:
i. A location selector uses the spatial position of objects.
1. The location selector uses the spatial order of objects within the window, from the top left to the bottom right corners, to differentiate among objects with the same description.
ii. An index selector uses a unique number to identify the object in a window.
1. The index selector uses numbers assigned at the time of creation of objects to identify the object in a window. Use this selector if the location of objects with the same description may change within a window.
47) How do you handle custom objects?
a) A custom object is any GUI object not belonging to one of the standard classes used by WinRunner. WinRunner learns such objects under the generic "object" class. WinRunner records operations on custom objects using obj_mouse_ statements.
b) If a custom object is similar to a standard object, you can map it to one of the standard classes. You can also configure the properties WinRunner uses to identify a custom object during Context Sensitive testing.
48) What is the name of the custom class in WinRunner and what methods does it apply to custom objects?
a) WinRunner learns custom class objects under the generic "object" class. WinRunner records operations on custom objects using obj_ statements.
49) In a situation when both the obligatory and optional properties cannot uniquely identify an object, what method does WinRunner apply?
a) In cases where the obligatory and optional properties do not uniquely identify an object, WinRunner uses a selector to differentiate between them. Two types of selectors are available:
i. A location selector uses the spatial position of objects.
ii. An index selector uses a unique number to identify the object in a window.
50) What is the purpose of the different record methods: 1) Record, 2) Pass Up, 3) As Object, 4) Ignore?
a) Record instructs WinRunner to record all operations performed on a GUI object. This is the default record method for all classes. (The only exception is the static class (static text), for which the default is Pass Up.)
b) Pass Up instructs WinRunner to record an operation performed on this class as an operation performed on the element containing the object. Usually this element is a window, and the operation is recorded as win_mouse_click.
c) As Object instructs WinRunner to record all operations performed on a GUI object as though its class were the generic "object" class.
d) Ignore instructs WinRunner to disregard all operations performed on the class.
51) How do you find out which is the startup file in WinRunner?
a) The test script named in the Startup Test box in the Environment tab of the General Options dialog box is the startup file in WinRunner.
52) What are virtual objects and how do you learn them?
a) Applications may contain bitmaps that look and behave like GUI objects. WinRunner records operations on these bitmaps using win_mouse_click statements. By defining a bitmap as a virtual object, you can instruct WinRunner to treat it like a GUI object, such as a push button, when you record and run tests.
b) Using the Virtual Object wizard, you can assign a bitmap to a standard object class, define the coordinates of that object, and assign it a logical name. To define a virtual object using the Virtual Object wizard:
i. Choose Tools > Virtual Object Wizard. The Virtual Object wizard opens. Click Next.
ii. In the Class list, select a class for the new virtual object. For a list class, select the number of visible rows that are displayed in the window. For a table class, select the number of visible rows and columns. Click Next.
iii. Click Mark Object. Use the crosshairs pointer to select the area of the virtual object. You can use the arrow keys to make precise adjustments to the area you define with the crosshairs. Press Enter or click the right mouse button to display the virtual object's coordinates in the wizard. If the object marked is visible on the screen, you can click the Highlight button to view it. Click Next.

iv. Assign a logical name to the virtual object. This is the name that appears in the test script when you record on the virtual object. If the object contains text that WinRunner can read, the wizard suggests using this text for the logical name. Otherwise, WinRunner suggests virtual_object, virtual_push_button, virtual_list, etc.
v. You can accept the wizard's suggestion or type in a different name. WinRunner checks that there are no other objects in the GUI map with the same name before confirming your choice. Click Next.
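Once a bitmap area has been defined as a virtual object of, say, the push_button class, subsequent recording generates ordinary Context Sensitive statements against its logical name rather than win_mouse_click calls. A brief sketch (the window name and the logical name are assumptions for illustration):

```tsl
# After defining the bitmap as a virtual push button named
# "virtual_push_button", a recorded click looks like a normal
# Context Sensitive button operation.
set_window("Main Window", 5);
button_press("virtual_push_button");
```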

53) How did you create your test scripts: 1) by recording or 2) by programming?
a) By programming. I have done complete programming only, absolutely no recording.
54) What are the two modes of recording?
a) There are two modes of recording in WinRunner:
i. Context Sensitive recording records the operations you perform on your application by identifying Graphical User Interface (GUI) objects.
ii. Analog recording records keyboard input, mouse clicks, and the precise x- and y-coordinates traveled by the mouse pointer across the screen.
55) What is a checkpoint and what are the different types of checkpoints?
a) Checkpoints allow you to compare the current behavior of the application being tested to its behavior in an earlier version. You can add four types of checkpoints to your test scripts:
i. GUI checkpoints verify information about GUI objects. For example, you can check that a button is enabled or see which item is selected in a list.
ii. Bitmap checkpoints take a "snapshot" of a window or area of your application and compare this to an image captured in an earlier version.
iii. Text checkpoints read text in GUI objects and in bitmaps and enable you to verify their contents.
iv. Database checkpoints check the contents and the number of rows and columns of a result set, which is based on a query you create on your database.
56) What are data-driven tests?
a) When you test your application, you may want to check how it performs the same operations with multiple sets of data. You can create a data-driven test with a loop that runs ten times: each time the loop runs, it is driven by a different set of data. In order for WinRunner to use data to drive the test, you must link the data to the test script which it drives. This is called parameterizing your test. The data is stored in a data table. You can perform these operations manually, or you can use the DataDriver Wizard to parameterize your test and store the data in a data table.
57) What are synchronization points?
a) Synchronization points enable you to solve anticipated timing problems between the test and your application. For example, if you create a test that opens a database application, you can add a synchronization point that causes the test to wait until the database records are loaded on the screen.
b) For Analog testing, you can also use a synchronization point to ensure that WinRunner repositions a window at a specific location. When you run a test, the mouse cursor travels along exact coordinates. Repositioning the window enables the mouse pointer to make contact with the correct elements in the window.
58) What is parameterizing?
a) In order for WinRunner to use data to drive the test, you must link the data to the test script which it drives. This is called parameterizing your test. The data is stored in a data table.
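The data-table parameterization described in questions 56 and 58 uses the TSL ddt_* functions. A minimal loop sketch; the table path, the column name "FlightNo", and the window and edit-field names are assumptions for illustration:

```tsl
# Data-driven loop sketch using the ddt_* data table functions.
table = "default.xls";
ddt_open(table, DDT_MODE_READ);
ddt_get_row_count(table, row_count);
for (i = 1; i <= row_count; i++)
{
    ddt_set_row(table, i);               # move to the i-th data row
    set_window("Flight Reservation", 10);
    edit_set("Flight No:", ddt_val(table, "FlightNo"));
    # ... perform the operations and checkpoints for this data set ...
}
ddt_close(table);
```

Each iteration reads the current row's value for the parameterized field, which is exactly what the DataDriver Wizard generates for you.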

59) How do you maintain the documentation information of the test scripts?
a) Before creating a test, you can document information about the test in the General and Description tabs of the Test Properties dialog box. You can enter the name of the test author, the type of functionality tested, a detailed description of the test, and a reference to the relevant functional specifications document.
60) What do you verify with the GUI checkpoint for a single property, and what command does it generate? Explain the syntax.
a) You can check a single property of a GUI object. For example, you can check whether a button is enabled or disabled, or whether an item in a list is selected. To create a GUI checkpoint for a property value, use the Check Property dialog box to add one of the following functions to the test script:
i. button_check_info
ii. scroll_check_info
iii. edit_check_info
iv. static_check_info
v. list_check_info
vi. win_check_info
vii. obj_check_info
Syntax: button_check_info ( button, property, property_value );
edit_check_info ( edit, property, property_value );
61) What do you verify with the GUI checkpoint for object/window, and what command does it generate? Explain the syntax.
a) You can create a GUI checkpoint to check a single object in the application being tested. You can either check the object with its default properties or you can specify which properties to check.
b) Creating a GUI Checkpoint using the Default Checks:
i. You can create a GUI checkpoint that performs a default check on the property recommended by WinRunner. For example, if you create a GUI checkpoint that checks a push button, the default check verifies that the push button is enabled.
ii. To create a GUI checkpoint using default checks:
1. Choose Create > GUI Checkpoint > For Object/Window, or click the GUI Checkpoint for Object/Window button on the User toolbar. If you are recording in Analog mode, press the CHECK GUI FOR OBJECT/WINDOW softkey in order to avoid extraneous mouse movements.
Note that you can press the CHECK GUI FOR OBJECT/WINDOW softkey in Context Sensitive mode as well. The WinRunner window is minimized, the mouse pointer becomes a pointing hand, and a help window opens on the screen.
2. Click an object.
3. WinRunner captures the current value of the property of the GUI object being checked and stores it in the test's expected results folder. The WinRunner window is restored and a GUI checkpoint is inserted in the test script as an obj_check_gui statement.
Syntax: win_check_gui ( window, checklist, expected_results_file, time );
c) Creating a GUI Checkpoint by Specifying which Properties to Check:
d) You can specify which properties to check for an object. For example, if you create a checkpoint that checks a push button, you can choose to verify that it is in focus, instead of enabled.
e) To create a GUI checkpoint by specifying which properties to check:
i. Choose Create > GUI Checkpoint > For Object/Window, or click the GUI Checkpoint for Object/Window button on the User toolbar. If you are recording in Analog mode, press the CHECK GUI FOR OBJECT/WINDOW softkey in order to avoid extraneous mouse movements. Note that you can press the CHECK GUI FOR OBJECT/WINDOW softkey in Context Sensitive mode as well. The WinRunner window is minimized, the mouse pointer becomes a pointing hand, and a help window opens on the screen.
ii. Double-click the object or window. The Check GUI dialog box opens.
iii. Click an object name in the Objects pane. The Properties pane lists all the properties for the selected object.
iv. Select the properties you want to check.
1. To edit the expected value of a property, first select it. Next, either click the Edit Expected Value button, or double-click the value in the Expected Value column to edit it.
2. To add a check in which you specify arguments, first select the property for which you want to specify arguments. Next, either click the Specify Arguments button, or double-click in the Arguments column. Note that if an ellipsis (three dots) appears in the Arguments column, then you must specify arguments for a check on this property. (You do not need to specify arguments if a default argument is specified.) When checking standard objects, you only specify arguments for certain properties of edit and static text objects. You also specify arguments for checks on certain properties of nonstandard objects.
3. To change the viewing options for the properties of an object, use the Show Properties buttons.
4. Click OK to close the Check GUI dialog box. WinRunner captures the GUI information and stores it in the test's expected results folder. The WinRunner window is restored and a GUI checkpoint is inserted in the test script as an obj_check_gui or a win_check_gui statement.
Syntax: win_check_gui ( window, checklist, expected_results_file, time );
obj_check_gui ( object, checklist, expected_results_file, time );
62) What do you verify with the GUI checkpoint for multiple objects, and what command does it generate? Explain the syntax.
a) To create a GUI checkpoint for two or more objects:
i. Choose Create > GUI Checkpoint > For Multiple Objects, or click the GUI Checkpoint for Multiple Objects button on the User toolbar. If you are recording in Analog mode, press the CHECK GUI FOR MULTIPLE OBJECTS softkey in order to avoid extraneous mouse movements. The Create GUI Checkpoint dialog box opens.
ii. Click the Add button. The mouse pointer becomes a pointing hand and a help window opens.
iii. To add an object, click it once. If you click a window title bar or menu bar, a help window prompts you to check all the objects in the window.
iv. The pointing hand remains active. You can continue to choose objects by repeating step iii above for each object you want to check.
v. Click the right mouse button to stop the selection process and to restore the mouse pointer to its original shape. The Create GUI Checkpoint dialog box reopens.
vi. The Objects pane contains the name of the window and objects included in the GUI checkpoint. To specify which objects to check, click an object name in the Objects pane. The Properties pane lists all the properties of the object. The default properties are selected.
1. To edit the expected value of a property, first select it. Next, either click the Edit Expected Value button, or double-click the value in the Expected Value column to edit it.
2. To add a check in which you specify arguments, first select the property for which you want to specify arguments. Next, either click the Specify Arguments button, or double-click in the Arguments column. Note that if an ellipsis appears in the Arguments column, then you must specify arguments for a check on this property. (You do not need to specify arguments if a default argument is specified.) When checking standard objects, you only specify arguments for certain properties of edit and static text objects. You also specify arguments for checks on certain properties of nonstandard objects.
3. To change the viewing options for the properties of an object, use the Show Properties buttons.
vii. To save the checklist and close the Create GUI Checkpoint dialog box, click OK. WinRunner captures the current property values of the selected GUI objects and stores them in the expected results folder. A win_check_gui statement is inserted in the test script.
Syntax: win_check_gui ( window, checklist, expected_results_file, time );
obj_check_gui ( object, checklist, expected_results_file, time );
63) What information is contained in the checklist file, and in which file are expected results stored?
a) The checklist file contains information about the objects and the properties of the objects we are verifying.
b) The gui*.chk file, stored in the exp folder, contains the expected results.
64) What do you verify with the bitmap checkpoint for object/window, and what command does it generate? Explain the syntax.
a) You can check an object, a window, or an area of a screen in your application as a bitmap. While creating a test, you indicate what you want to check. WinRunner captures the specified bitmap, stores it in the expected results folder (exp) of the test, and inserts a checkpoint in the test script. When you run the test, WinRunner compares the bitmap currently displayed in the application being tested with the expected bitmap stored earlier. In the event of a mismatch, WinRunner captures the current actual bitmap and generates a difference bitmap. By comparing the three bitmaps (expected, actual, and difference), you can identify the nature of the discrepancy.
b) When working in Context Sensitive mode, you can capture a bitmap of a window, an object, or a specified area of a screen. WinRunner inserts a checkpoint in the test script in the form of either a win_check_bitmap or obj_check_bitmap statement.
c) Note that when you record a test in Analog mode, you should press the CHECK BITMAP OF WINDOW softkey or the CHECK BITMAP OF SCREEN AREA softkey to create a bitmap checkpoint. This prevents WinRunner from recording extraneous mouse movements. If you are programming a test, you can also use the Analog function check_window to check a bitmap.
d) To capture a window or object as a bitmap:
i. Choose Create > Bitmap Checkpoint > For Object/Window, or click the Bitmap Checkpoint for Object/Window button on the User toolbar. Alternatively, if you are recording in Analog mode, press the CHECK BITMAP OF OBJECT/WINDOW softkey. The WinRunner window is minimized, the mouse pointer becomes a pointing hand, and a help window opens.
ii. Point to the object or window and click it. WinRunner captures the bitmap and generates a win_check_bitmap or obj_check_bitmap statement in the script. The TSL statement generated for a window bitmap has the following syntax: win_check_bitmap ( object, bitmap, time );
iii. For an object bitmap, the syntax is: obj_check_bitmap ( object, bitmap, time );
iv. For example, when you click the title bar of the main window of the Flight Reservation application, the resulting statement might be: win_check_bitmap ("Flight Reservation", "Img2", 1);
v. However, if you click the Date of Flight box in the same window, the statement might be: obj_check_bitmap ("Date of Flight:", "Img1", 1);
Syntax: obj_check_bitmap ( object, bitmap, time [, x, y, width, height ] );
65) What do you verify with the bitmap checkpoint for screen area, and what command does it generate? Explain the syntax.
a) You can define any rectangular area of the screen and capture it as a bitmap for comparison. The area can be any size: it can be part of a single window, or it can intersect several windows. The rectangle is identified by the coordinates of its upper left and lower right corners, relative to the upper left corner of the window in which the area is located. If the area intersects several windows or is part of a window with no title (for example, a popup window), its coordinates are relative to the entire screen (the root window).
b) To capture an area of the screen as a bitmap:
i. Choose Create > Bitmap Checkpoint > For Screen Area, or click the Bitmap Checkpoint for Screen Area button. Alternatively, if you are recording in Analog mode, press the CHECK BITMAP OF SCREEN AREA softkey. The WinRunner window is minimized, the mouse pointer becomes a crosshairs pointer, and a help window opens.
ii. Mark the area to be captured: press the left mouse button and drag the mouse pointer until a rectangle encloses the area; then release the mouse button.
iii. Press the right mouse button to complete the operation. WinRunner captures the area and generates a win_check_bitmap statement in your script.
iv. The win_check_bitmap statement for an area of the screen has the following syntax: win_check_bitmap ( window, bitmap, time, x, y, width, height );
66) What do you verify with the default database checkpoint, and what command does it generate? Explain the syntax.
a) By adding runtime database record checkpoints you can compare the information in your application during a test run with the corresponding record in your database. By adding standard database checkpoints to your test scripts, you can check the contents of databases in different versions of your application.
b) When you create database checkpoints, you define a query on your database, and your database checkpoint checks the values contained in the result set. The result set is the set of values retrieved from the results of the query.
c) You can create runtime database record checkpoints in order to compare the values displayed in your application during the test run with the corresponding values in the database. If the comparison does not meet the success criteria you specify for the checkpoint, the checkpoint fails.
d) You can define a successful runtime database record checkpoint as one where one or more matching records were found, exactly one matching record was found, or no matching records are found.
e) You can create standard database checkpoints to compare the current values of the properties of the result set during the test run to the expected values captured during recording or otherwise set before the test run. If the expected results and the current results do not match, the database checkpoint fails. Standard database checkpoints are useful when the expected results can be established before the test run.
Syntax: db_check ( checklist_file, expected_results_file );
f) You can add a runtime database record checkpoint to your test in order to compare information that appears in your application during a test run with the current value(s) in the corresponding record(s) in your database. You add runtime database record checkpoints by running the Runtime Record Checkpoint wizard. When you are finished, the wizard inserts the appropriate db_record_check statement into your script.
Syntax: db_record_check ( ChecklistFileName, SuccessConditions, RecordNumber );
ChecklistFileName: a file created by WinRunner and saved in the test's checklist folder. The file contains information about the data to be captured during the test run and its corresponding field in the database. The file is created based on the information entered in the Runtime Record Verification wizard.
SuccessConditions: contains one of the following values:
1. DVR_ONE_OR_MORE_MATCH - The checkpoint passes if one or more matching database records are found.
2. DVR_ONE_MATCH - The checkpoint passes if exactly one matching database record is found.
3. DVR_NO_MATCH - The checkpoint passes if no matching database records are found.
RecordNumber: an out parameter returning the number of records in the database.
67) How do you handle a dynamically changing area of the window in bitmap checkpoints?
a) The "difference between bitmaps" option in the Run tab of the General Options dialog box defines the minimum number of pixels that constitute a bitmap mismatch.
68) What do you verify with the custom database checkpoint, and what command does it generate? Explain the syntax.
a) When you create a custom check on a database, you create a standard database checkpoint in which you can specify which properties to check on a result set.
b) You can create a custom check on a database in order to:
i. check the contents of part or all of the result set
ii. edit the expected results of the contents of the result set
iii. count the rows in the result set
iv. count the columns in the result set
c) You can create a custom check on a database using ODBC, Microsoft Query or Data Junction.
69) What do you verify with the sync point for object/window property, and what command does it generate? Explain the syntax.
a) Synchronization compensates for inconsistencies in the performance of your application during a test run. By inserting a synchronization point in your test script, you can instruct WinRunner to suspend the test run and wait for a cue before continuing the test.
b) You can add a synchronization point that instructs WinRunner to wait for a specified object or window to appear. For example, you can tell WinRunner to wait for a window to open before performing an operation within that window, or you may want WinRunner to wait for an object to appear in order to perform an operation on that object.
c) You use the obj_exists function to create an object synchronization point, and you use the win_exists function to create a window synchronization point.
These functions have the following syntax:
Syntax: obj_exists ( object [, time ] );
win_exists ( window [, time ] );
70) What do you verify with the sync point for object/window bitmap, and what command does it generate? Explain the syntax.
a) You can create a bitmap synchronization point that waits for the bitmap of an object or a window to appear in the application being tested.
b) During a test run, WinRunner suspends test execution until the specified bitmap is redrawn, and then compares the current bitmap with the expected one captured earlier. If the bitmaps match, then WinRunner continues the test.
Syntax: obj_wait_bitmap ( object, image, time );
win_wait_bitmap ( window, image, time );
71) What do you verify with the sync point for screen area, and what command does it generate? Explain the syntax.
a) For screen area verification we actually capture the screen area into a bitmap and verify the application screen area against the bitmap file during execution.
Syntax: obj_wait_bitmap ( object, image, time, x, y, width, height );
72) How do you edit a checklist file, and when do you need to edit it?
a) WinRunner has an edit checklist file option under the Create menu. Select "Edit GUI Checklist" to modify a GUI checklist file and "Edit Database Checklist" to edit a database checklist file. This brings up a dialog box that gives you the option to select the checklist file to modify. There is also an option to select the scope of the checklist file, whether it is test-specific or shared. Select the checklist file and click OK, which opens the window to edit the properties of the objects.
73) How do you edit the expected value of an object?
a) We can modify the expected value of the object by executing the script in Update mode. We can also manually edit the gui*.chk file under the exp folder, which contains the expected values, to change the values.
74) How do you modify the expected results of a GUI checkpoint?
a) We can modify the expected results of a GUI checkpoint by running the script containing the checkpoint in Update mode.
75) How do you handle ActiveX and Visual Basic objects?
a) WinRunner provides add-ins for ActiveX and Visual Basic objects. When loading WinRunner, select those add-ins; they provide a set of functions to work on ActiveX and VB objects.
76) How do you create an ODBC query?
a) We can create an ODBC query using the database checkpoint wizard. It provides an option to create an *.sql file that uses an ODBC DSN to connect to the database. The *.sql file will contain the connection string and the SQL statement.
77) How do you record a data-driven test?
a) We can create a data-driven test using data from a flat file, a data table, or a database.
i. Using a flat file: we store the data to be used, in a required format, in the file. We access the file using the file manipulation commands, read data from the file, and assign the data to variables.
ii. Data table: this is an Excel file. We can store test data in these files and manipulate them. We use the ddt_* functions to manipulate data in the data table.
iii. Database: we store test data in the database and access these data using the db_* functions.
78) How do you convert a database file to a text file?
a) You can use Data Junction to create a conversion file which converts a database to a target text file.
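The flat-file variant in question 77 reads records with the TSL file manipulation functions. A sketch under assumed names (the data file path, its one-value-per-line format, and the window and field names are all illustrative assumptions):

```tsl
# Drive the test from a plain text file, one order number per line.
datafile = "c:\\testdata\\orders.txt";
if (file_open(datafile, FO_MODE_READ) != E_OK)
    pause("Cannot open " & datafile);
while (file_getline(datafile, line) == E_OK)
{
    set_window("Flight Reservation", 10);
    edit_set("Order No:", line);    # use the record read from the file
    # ... perform the operations and checks for this record ...
}
file_close(datafile);
```

This is the manual equivalent of the data-table approach: the loop plays the role of the DataDriver iteration, and file_getline supplies each data set.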
79) How do you parameterize database checkpoints?
a) When you create a standard database checkpoint using ODBC (Microsoft Query), you can add parameters to an SQL statement to parameterize the checkpoint. This is useful if you want to create a database checkpoint with a query in which the SQL statement defining your query changes.
80) How do you create parameterized SQL commands?
a) A parameterized query is a query in which at least one of the fields of the WHERE clause is parameterized, i.e., the value of the field is specified by a question mark symbol ( ? ). For example, the following SQL statement is based on a query on the database in the sample Flight Reservation application:
i. SELECT Flights.Departure, Flights.Flight_Number, Flights.Day_Of_Week FROM Flights Flights WHERE (Flights.Departure=?) AND (Flights.Day_Of_Week=?)
SELECT defines the columns to include in the query. FROM specifies the path of the database. WHERE (optional) specifies the conditions, or filters, to use in the query. Departure is the parameter that represents the departure point of a flight. Day_Of_Week is the parameter that represents the day of the week of a flight.

b) When creating a database checkpoint, you insert a db_check statement into your test script. When you parameterize the SQL statement in your checkpoint, the db_check function takes a fourth, optional argument: the parameter_array argument. A statement similar to the following is inserted into your test script: db_check ("list1.cdl", "dbvf1", NO_LIMIT, dbvf1_params); The parameter_array argument contains the values to substitute for the parameters in the parameterized checkpoint.
81) Explain the following commands:
a) db_connect
i. Connects to a database. db_connect ( session_name, connection_string );
b) db_execute_query
i. Executes a query. db_execute_query ( session_name, SQL, record_number ); record_number is the out value.
c) db_get_field_value
i. Returns the value of a single field in the specified row_index and column in the session_name database session. db_get_field_value ( session_name, row_index, column );
d) db_get_headers
i. Returns the number of column headers in a query and the content of the column headers, concatenated and delimited by tabs. db_get_headers ( session_name, header_count, header_content );
e) db_get_row
i. Returns the content of the row, concatenated and delimited by tabs. db_get_row ( session_name, row_index, row_content );
f) db_write_records
i. Writes the record set into a text file delimited by tabs. db_write_records ( session_name, output_file [ , headers [ , record_limit ] ] );
g) db_get_last_error
i. Returns the last error message of the last ODBC or Data Junction operation in the session_name database session. db_get_last_error ( session_name, error );
h) db_disconnect
i. Disconnects from the database and ends the database session. db_disconnect ( session_name );
i) db_dj_convert
i. Runs the djs_file Data Junction export file. When you run this file, the Data Junction Engine converts data from one spoke (source) to another (target). The optional parameters enable you to

override the settings in the Data Junction export file. db_dj_convert ( djs_file [ , output_file [ , headers [ , record_limit ] ] ] );
82) What checkpoints will you use to read and check text on the GUI, and explain their syntax?
a) You can use text checkpoints in your test scripts to read and check text in GUI objects and in areas of the screen. While creating a test you point to an object or a window containing text. WinRunner reads the text and writes a TSL statement to the test script. You may then add simple programming elements to your test scripts to verify the contents of the text.
b) You can use a text checkpoint to:
i. Read text from a GUI object or window in your application, using obj_get_text and win_get_text
ii. Search for text in an object or window, using win_find_text and obj_find_text
iii. Move the mouse pointer to text in an object or window, using obj_move_locator_text and win_move_locator_text
iv. Click on text in an object or window, using obj_click_on_text and win_click_on_text
83) Explain the "Get Text: from object/window" checkpoint with syntax.
a) We use the obj_get_text ( object, out_text ) function to get the text from an object.
b) We use the win_get_text ( window, out_text [ , x1, y1, x2, y2 ] ) function to get the text from a window.
84) Explain the "Get Text: from screen area" checkpoint with syntax.
a) We use the win_get_text ( window, out_text [ , x1, y1, x2, y2 ] ) function to get the text from a screen area of a window.
85) Explain the "Get Text: from selection (web only)" checkpoint with syntax.
a) Returns a text string from an object. web_obj_get_text ( object, table_row, table_column, out_text [ , text_before, text_after, index ] );
i. object - The logical name of the object.
ii. table_row - If the object is a table, it specifies the location of the row within the table. The string is preceded by the # character.
iii. table_column - If the object is a table, it specifies the location of the column within the table. The string is preceded by the # character.
iv.
out_text - The output variable that stores the text string.
v. text_before - Defines the start of the search area for a particular text string.
vi. text_after - Defines the end of the search area for a particular text string.
vii. index - The occurrence number to locate. (The default is occurrence number 1.)
86) Explain the web text checkpoint with syntax.
a) We use the web_obj_text_exists function for web text checkpoints. web_obj_text_exists ( object, table_row, table_column, text_to_find [ , text_before, text_after ] );
a. object - The logical name of the object to search.
b. table_row - If the object is a table, it specifies the location of the row within the table. The string is preceded by the # character.
c. table_column - If the object is a table, it specifies the location of the column within the table. The string is preceded by the # character.
d. text_to_find - The string that is searched for.
e. text_before - Defines the start of the search area for a particular text string.
f. text_after - Defines the end of the search area for a particular text string.
87) Which TSL functions will you use for:
a) Searching text on the window

i. find_text ( string, out_coord_array, search_area [ , string_def ] );
string - The string that is searched for. The string must be complete, contain no spaces, and it must be preceded and followed by a space outside the quotation marks. To specify a literal, case-sensitive string, enclose the string in quotation marks. Alternatively, you can specify the name of a string variable. In this case, the string variable can include a regular expression.
out_coord_array - The name of the array that stores the screen coordinates of the text (see explanation below).
search_area - The area to search, specified as coordinates x1, y1, x2, y2. These define any two diagonal corners of a rectangle. The interpreter searches for the text in the area defined by the rectangle.
string_def - Defines the type of search to perform. If no value is specified (0 or FALSE, the default), the search is for a single complete word only. When 1, or TRUE, is specified, the search is not restricted to a single, complete word.
b) Getting the location of the text string
i. win_find_text ( window, string, result_array [ , search_area [ , string_def ] ] );
window - The logical name of the window to search.
string - The text to locate. To specify a literal, case-sensitive string, enclose the string in quotation marks. Alternatively, you can specify the name of a string variable. The value of the string variable can include a regular expression. The regular expression should not include an exclamation mark (!), however, which is treated as a literal character. For more information regarding regular expressions, refer to the "Using Regular Expressions" chapter in your User's Guide.
result_array - The name of the output variable that stores the location of the string as a four-element array.
search_area - The region of the object to search, relative to the window. This area is defined as a pair of coordinates, with x1, y1, x2, y2 specifying any two diagonally opposite corners of the rectangular search region.
If this parameter is not defined, then the entire window is considered the search area.
string_def - Defines how the text search is performed. If no string_def is specified (0 or FALSE, the default), the interpreter searches for a complete word only. If 1, or TRUE, is specified, the search is not restricted to a single, complete word.
c) Moving the pointer to that text string
i. win_move_locator_text ( window, string [ , search_area [ , string_def ] ] );
window - The logical name of the window.
string - The text to locate. To specify a literal, case-sensitive string, enclose the string in quotation marks. Alternatively, you can specify the name of a string variable. The value of the string variable can include a regular expression (the regular expression need not begin with an exclamation mark).
search_area - The region of the object to search, relative to the window. This area is defined as a pair of coordinates, with x1, y1, x2, y2 specifying any two diagonally opposite corners of the rectangular search region. If this parameter is not defined, then the entire window specified is considered the search area.

string_def - Defines how the text search is performed. If no string_def is specified (0 or FALSE, the default), the interpreter searches for a complete word only. If 1, or TRUE, is specified, the search is not restricted to a single, complete word.
d) Comparing the text
i. compare_text ( str1, str2 [ , chars1, chars2 ] );
str1, str2 - The two strings to be compared.
chars1 - One or more characters in the first string.
chars2 - One or more characters in the second string. These characters are substituted for those in chars1.
88) What are the steps of creating a data-driven test?
a) The steps involved in data-driven testing are:
i. Creating a test
ii. Converting it to a data-driven test and preparing a data source
iii. Running the test
iv. Analyzing the test results.
89) Record a data-driven test script using the DataDriver Wizard.
a) You can use the DataDriver Wizard to convert your entire script or a part of your script into a data-driven test. For example, your test script may include recorded operations, checkpoints, and other statements that do not need to be repeated for multiple sets of data. You need to parameterize only the portion of your test script that you want to run in a loop with multiple sets of data. To create a data-driven test:
i. If you want to turn only part of your test script into a data-driven test, first select those lines in the test script.
ii. Choose Tools > DataDriver Wizard.
iii. If you want to turn only part of the test into a data-driven test, click Cancel, select those lines in the test script, and reopen the DataDriver Wizard. If you want to turn the entire test into a data-driven test, click Next.
iv. The "Use a new or existing Excel table" box displays the name of the Excel file that WinRunner creates, which stores the data for the data-driven test. Accept the default data table for this test, enter a different name for the data table, or use
v. the Browse button to locate the path of an existing data table.
By default, the data table is stored in the test folder.
vi. In the "Assign a name to the variable" box, enter a variable name with which to refer to the data table, or accept the default name, "table".
vii. At the beginning of a data-driven test, the Excel data table you selected is assigned as the value of the table variable. Throughout the script, only the table variable name is used. This makes it easy for you to assign a different data table
viii. to the script at a later time without making changes throughout the script.
ix. Choose from among the following options:
1. Add statements to create a data-driven test: Automatically adds statements to run your test in a loop: sets a variable name by which to refer to the data table; adds braces ({ and }), a for statement, and a ddt_get_row_count statement to your test script selection to run it in a loop while it reads from the data table; adds ddt_open and ddt_close statements
2. to your test script to open and close the data table, which are necessary in order to iterate over rows in the table. Note that you can also add these statements to your test script manually.
3. If you do not choose this option, you will receive a warning that your data-driven test must contain a loop and statements to open and close your data table.
4. Import data from a database: Imports data from a database. This option adds

ddt_update_from_db and ddt_save statements to your test script after the ddt_open statement.
5. Note that in order to import data from a database, either Microsoft Query or Data Junction must be installed on your machine. You can install Microsoft Query from the custom installation of Microsoft Office. Note that Data Junction is not automatically included in your WinRunner package. To purchase Data Junction, contact your Mercury Interactive representative. For detailed information on working with Data Junction, refer to the documentation in the Data Junction package.
6. Parameterize the test: Replaces fixed values in selected checkpoints and in recorded statements with parameters, using the ddt_val function, and, in the data table, adds columns with variable values for the parameters. Line by line: Opens a wizard screen for each line of the selected test script, which enables you to decide whether to parameterize a particular line, and if so, whether to add a new column to the data table or use an existing column when parameterizing data.
7. Automatically: Replaces all data with ddt_val statements and adds new columns to the data table. The first argument of the function is the name of the column in the data table. The replaced data is inserted into the table.
x. The "Test script line to parameterize" box displays the line of the test script to parameterize. The highlighted value can be replaced by a parameter. The "Argument to be replaced" box displays the argument (value) that you can replace with a parameter. You can use the arrows to select a different argument to replace. Choose whether and how to replace the selected data:
1. Do not replace this data: Does not parameterize this data.
2. An existing column: If parameters already exist in the data table for this test, select an existing parameter from the list.
3. A new column: Creates a new column for this parameter in the data table for this test, and adds the selected data to this column of the data table.
The default name for the new parameter is the logical name of the object in the selected TSL statement above. Accept this name or assign a new name.
xi. The final screen of the wizard opens.
1. If you want the data table to open after you close the wizard, select "Show data table now".
2. To perform the tasks specified in the previous screens and close the wizard, click Finish.
3. To close the wizard without making any changes to the test script, click Cancel.
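A script produced by the steps above typically takes the shape sketched below. The loop skeleton (ddt_open, ddt_get_row_count, ddt_set_row, ddt_val, ddt_close) follows the statements the wizard is described as adding; the table file name, window name, and the "Name" column are assumptions for illustration.

```tsl
# Sketch of a data-driven loop as the DataDriver Wizard would build it.
# "default.xls" and the "Name" column are hypothetical.
table = "default.xls";                     # data table assigned to a variable
rc = ddt_open (table, DDT_MODE_READ);
if (rc != E_OK && rc != E_FILE_OPEN)
    pause ("Cannot open the data table.");

ddt_get_row_count (table, row_count);      # number of data rows in the table
for (i = 1; i <= row_count; i++)
{
    ddt_set_row (table, i);                # make row i the active row
    set_window ("Flight Reservation", 10);
    edit_set ("Name:", ddt_val (table, "Name"));   # parameterized value
}
ddt_close (table);
```

Each iteration reads its values from the active row, so adding test cases is a matter of adding rows to the Excel table, not editing the script.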

90) What are the three modes of running the scripts?
a) WinRunner provides three modes in which to run tests: Verify, Debug, and Update. You use each mode during a different phase of the testing process.
i. Verify
1. Use the Verify mode to check your application.
ii. Debug
1. Use the Debug mode to help you identify bugs in a test script.
iii. Update
1. Use the Update mode to update the expected results of a test or to create a new expected results folder.
91) Explain the following TSL functions:
a) ddt_open
i. Creates or opens a data table file so that WinRunner can access it. Syntax: ddt_open ( data_table_name, mode ); data_table_name - The name of the data table. The name may be the table variable name, the Microsoft Excel file or tabbed text file name, or the full path and file name of the table. The first

row in the file contains the names of the parameters. This row is labeled row 0. mode - The mode for opening the data table: DDT_MODE_READ (read-only) or DDT_MODE_READWRITE (read or write).
b) ddt_save
i. Saves the information into a data file. Syntax: ddt_save ( data_table_name ); data_table_name - The name of the data table. The name may be the table variable name, the Microsoft Excel file or tabbed text file name, or the full path and file name of the table.
c) ddt_close
i. Closes a data table file. Syntax: ddt_close ( data_table_name ); data_table_name - The name of the data table. The data table is a Microsoft Excel file or a tabbed text file. The first row in the file contains the names of the parameters.
d) ddt_export
i. Exports the information of one data table file into a different data table file. Syntax: ddt_export ( data_table_name1, data_table_name2 ); data_table_name1 - The source data table filename. data_table_name2 - The destination data table filename.
e) ddt_show
i. Shows or hides the table editor of a specified data table. Syntax: ddt_show ( data_table_name [ , show_flag ] ); data_table_name - The name of the data table. show_flag - The value indicating whether the editor should be shown (default=1) or hidden (0).
f) ddt_get_row_count
i. Retrieves the number of rows in a data table. Syntax: ddt_get_row_count ( data_table_name, out_rows_count ); data_table_name - The name of the data table. The first row in the file contains the names of the parameters. out_rows_count - The output variable that stores the total number of rows in the data table.
g) ddt_next_row
i.
Changes the active row in a data table to the next row. Syntax: ddt_next_row ( data_table_name ); data_table_name - The name of the data table. The first row in the file contains the names of the parameters.
h) ddt_set_row
i. Sets the active row in a data table. Syntax: ddt_set_row ( data_table_name, row );

data_table_name - The name of the data table. The name may be the table variable name, the Microsoft Excel file or tabbed text file name, or the full path and file name of the table. The first row in the file contains the names of the parameters. This row is labeled row 0. row - The new active row in the data table.
i) ddt_set_val
i. Sets a value in the current row of the data table. Syntax: ddt_set_val ( data_table_name, parameter, value ); data_table_name - The name of the data table. parameter - The name of the column into which the value will be inserted. value - The value to be written into the table.
j) ddt_set_val_by_row
i. Sets a value in a specified row of the data table. Syntax: ddt_set_val_by_row ( data_table_name, row, parameter, value ); data_table_name - The name of the data table. row - The row number in the table. It can be any existing row or the current row number plus 1, which will add a new row to the data table. parameter - The name of the column into which the value will be inserted. value - The value to be written into the table.
k) ddt_get_current_row
i. Retrieves the active row of a data table. Syntax: ddt_get_current_row ( data_table_name, out_row ); data_table_name - The name of the data table. out_row - The output variable that stores the active row in the data table.
l) ddt_is_parameter
i. Returns whether a parameter in a data table is valid. Syntax: ddt_is_parameter ( data_table_name, parameter ); data_table_name - The name of the data table. The first row in the file contains the names of the parameters. parameter - The parameter name to check in the data table.
m) ddt_get_parameters
i. Returns a list of all parameters in a data table. Syntax: ddt_get_parameters ( table, params_list, params_num );

table - The pathname of the data table. params_list - This out parameter returns the list of all parameters in the data table, separated by tabs. params_num - This out parameter returns the number of parameters in params_list.
n) ddt_val
i. Returns the value of a parameter in the active row of a data table. Syntax: ddt_val ( data_table_name, parameter ); data_table_name - The name of the data table. parameter - The name of the parameter in the data table.
o) ddt_val_by_row
i. Returns the value of a parameter in the specified row of a data table. Syntax: ddt_val_by_row ( data_table_name, row_number, parameter ); data_table_name - The name of the data table. row_number - The number of the row in the data table. parameter - The name of the parameter in the data table.
p) ddt_report_row
i. Reports the active row in a data table to the test results. Syntax: ddt_report_row ( data_table_name ); data_table_name - The name of the data table. The first row in the file contains the names of the parameters. This row is labeled row 0.
q) ddt_update_from_db
i. Imports data from a database into a data table. It is inserted into your test script when you select the "Import data from a database" option in the DataDriver Wizard. When you run your test, this function updates the data table with data from the database.
92) How do you handle unexpected events and errors?
a) WinRunner uses exception handling to detect an unexpected event when it occurs and act to recover the test run.

WinRunner enables you to handle the following types of exceptions: Pop-up exceptions: Instruct WinRunner to detect and handle the appearance of a specific

window. TSL exceptions: Instruct WinRunner to detect and handle TSL functions that return a specific error code. Object exceptions: Instruct WinRunner to detect and handle a change in a property of a specific GUI object. Web exceptions: When the WebTest add-in is loaded, you can instruct WinRunner to handle unexpected events and errors that occur in your Web site during a test run.
93) How do you handle pop-up exceptions?
a) A pop-up exception handler handles the pop-up messages that come up during the execution of the script in the AUT. To handle this type of exception we make WinRunner learn the window and also specify a handler for the exception. The handler can be:
i. Default actions: WinRunner clicks the OK or Cancel button in the pop-up window, or presses Enter on the keyboard. To select a default handler, click the appropriate button in the dialog box.
ii. User-defined handler: If you prefer, specify the name of your own handler. Click "User Defined Function Name" and type in a name in the "User Defined Function Name" box.
94) How do you handle TSL exceptions?
a) A TSL exception enables you to detect and respond to a specific error code returned during test execution.
b) Suppose you are running a batch test on an unstable version of your application. If your application crashes, you want WinRunner to recover test execution. A TSL exception can instruct WinRunner to recover test execution by exiting the current test, restarting the application, and continuing with the next test in the batch.
c) The handler function is responsible for recovering test execution. When WinRunner detects a specific error code, it calls the handler function. You implement this function to respond to the unexpected error in the way that meets your specific testing needs.
d) Once you have defined the exception, WinRunner activates handling and adds the exception to the list of default TSL exceptions in the Exceptions dialog box.
Default TSL exceptions are defined by the XR_EXCP_TSL configuration parameter in the wrun.ini configuration file.
95) How do you handle object exceptions?
a) During testing, unexpected changes can occur to GUI objects in the application you are testing. These changes are often subtle, but they can disrupt the test run and distort results.
b) You can use exception handling to detect a change in a property of a GUI object during the test run, and recover test execution by calling a handler function and continuing with the test execution.
96) How do you comment your script?
a) We comment a script or a line of script by inserting a "#" at the beginning of the line.
97) What is a compiled module?
a) A compiled module is a script containing a library of user-defined functions that you want to call frequently from other tests. When you load a compiled module, its functions are automatically compiled and remain in memory. You can call them directly from within any test.
b) Compiled modules can improve the organization and performance of your tests. Since you debug compiled modules before using them, your tests will require less error-checking. In addition, calling a function that is already compiled is significantly faster than interpreting a function in a test script.
98) What is the difference between a script and a compiled module?
a) A test script is the executable unit in WinRunner, while a compiled module is used to store reusable functions. Compiled modules are not executable.

b) WinRunner performs a pre-compilation automatically when it saves a module assigned a property value of "Compiled Module".
c) By default, modules containing TSL code have a property value of "main". Main modules are called for execution from within other modules. Main modules are dynamically compiled into machine code only when WinRunner recognizes a "call" statement. Example of a call for the "app_init" script: call cso_init(); call ( "C:\\MyAppFolder\\" & "app_init" );
d) Compiled modules are loaded into memory to be referenced from TSL code in any module. Example of a load statement: reload ( "C:\\MyAppFolder\\" & "flt_lib" ); or load ( "C:\\MyAppFolder\\" & "flt_lib" );
99) Write and explain the various loop commands.
a) A for loop instructs WinRunner to execute one or more statements a specified number of times. It has the following syntax: for ( [ expression1 ] ; [ expression2 ] ; [ expression3 ] ) statement
i. First, expression1 is executed. Next, expression2 is evaluated. If expression2 is true, statement is executed and expression3 is executed. The cycle is repeated as long as expression2 remains true. If expression2 is false, the for statement terminates and execution passes to the first statement immediately following.
ii. For example, the for loop below selects the file UI_TEST from the File Name list
iii. in the Open window. It selects this file five times and then stops. set_window ("Open"); for (i=0; i<5; i++) list_select_item ("File Name:_1", "UI_TEST"); # Item Number 2
b) A while loop executes a block of statements for as long as a specified condition is true. It has the following syntax: while ( expression ) statement;
i. While expression is true, the statement is executed. The loop ends when the expression is false. For example, the while statement below performs the same function as the for loop above.
set_window ("Open"); i=0; while (i<5) { i++; list_select_item ("File Name:_1", "UI_TEST"); # Item Number 2 }
c) A do/while loop executes a block of statements for as long as a specified condition is true. Unlike the for loop and the while loop, a do/while loop tests the condition at the end of the loop, not at the beginning. A do/while loop has the following syntax: do statement

while ( expression );
i. The statement is executed and then the expression is evaluated. If the expression is true, the cycle is repeated. If the expression is false, the cycle is not repeated.
ii. For example, the do/while statement below opens and closes the Order dialog box of Flight Reservation five times. set_window ("Flight Reservation"); i=0; do { menu_select_item ("File;Open Order..."); set_window ("Open Order"); button_press ("Cancel"); i++; } while (i<5);
100) Write and explain the decision-making commands.
a) You can incorporate decision-making into your test scripts using if/else or switch statements.
i. An if/else statement executes a statement if a condition is true; otherwise, it executes another statement. It has the following syntax: if ( expression ) statement1; [ else statement2; ] expression is evaluated. If expression is true, statement1 is executed. If expression is false, statement2 is executed.
b) A switch statement enables WinRunner to make a decision based on an expression that can have more than two values. It has the following syntax: switch ( expression ) { case case_1: statements case case_2: statements case case_n: statements default: statement(s) } The switch statement consecutively evaluates each case expression until one is found that equals the initial expression. If no case is equal to the expression, then the default statements are executed. The default statements are optional.
101) Write and explain the switch command.
a) A switch statement enables WinRunner to make a decision based on an expression that can have more than two values. It has the following syntax: switch ( expression ) { case case_1: statements case case_2: statements case case_n: statements default: statement(s) }
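Decision-making is most often combined with TSL return codes. A minimal hedged sketch of the if/else form (the window and button names are invented for the example):

```tsl
# Hypothetical: branch on the return code of a button press.
set_window ("Flight Reservation", 10);
rc = button_press ("OK");
if (rc == E_OK)
    report_msg ("OK button pressed");
else
    report_msg ("could not press OK, return code: " & rc);
```

Checking rc against E_OK rather than assuming success is what lets the script react sensibly instead of failing on the next statement.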

b) The switch statement consecutively evaluates each case expression until one is found that equals the initial expression. If no case is equal to the expression, then the default statements are executed. The default statements are optional.
102) How do you write messages to the report?
a) To write a message to a report we use the report_msg statement. Syntax: report_msg ( message );
103) What is the command to invoke an application?
a) invoke_application is the function used to invoke an application. Syntax: invoke_application ( file, command_option, working_dir, SHOW );
104) What is the purpose of the tl_step command?
a) It is used to determine whether sections of a test pass or fail. Syntax: tl_step ( step_name, status, description );
105) Which TSL function will you use to compare two files?
a) We can compare two files in WinRunner using the file_compare function. Syntax: file_compare ( file1, file2 [ , save_file ] );
106) What is the use of the Function Generator?
a) The Function Generator provides a quick, error-free way to program scripts. You can:
i. Add Context Sensitive functions that perform operations on a GUI object or get information from the application being tested.
ii. Add Standard and Analog functions that perform non-Context Sensitive tasks such as synchronizing test execution or sending user-defined messages to a report.
iii. Add Customization functions that enable you to modify WinRunner to suit your testing environment.
107) What is the use of putting call and call_close statements in the test script?
a) You can use two types of call statements to invoke one test from another:
i. A call statement invokes a test from within another test.
ii. A call_close statement invokes a test from within a script and closes the test when the test is completed.
iii. The call statement has the following syntax:
1. call test_name ( [ parameter1, parameter2, ... parametern ] );
iv. The call_close statement has the following syntax:
1. call_close test_name ( [ parameter1, parameter2, ...
parametern ] );
v. test_name is the name of the test to invoke. The parameters are the parameters defined for the called test.
vi. The parameters are optional. However, when one test calls another, the call statement should designate a value for each parameter defined for the called test. If no parameters are defined for the called test, the call statement must contain an empty set of parentheses.
108) What is the use of treturn and texit statements in the test script?
a) The treturn and texit statements are used to stop execution of called tests.
i. The treturn statement stops the current test and returns control to the calling test.
ii. The texit statement stops test execution entirely, unless tests are being called from a batch test. In this case, control is returned to the main batch test.
b) Both functions provide a return value for the called test. If treturn or texit is not used, or if no value is specified, then the return value of the call statement is 0.
treturn

test. The syntax is: treturn [( expression )];
d) The optional expression is the value returned to the call statement used to invoke the test.
texit
e) When tests are run interactively, the texit statement discontinues test execution. However, when tests are called from a batch test, texit ends execution of the current test only; control is then returned to the calling batch test. The syntax is: texit [( expression )];

109) Where do you set up the search path for a called test?
a) The search path determines the directories that WinRunner will search for a called test.
b) To set the search path, choose Settings > General Options. The General Options dialog box opens. Click the Folders tab and choose a search path in the Search Path for Called Tests box. WinRunner searches the directories in the order in which they are listed in the box. Note that the search paths you define remain active in future testing sessions.

110) How do you create user-defined functions, and what is the syntax?
a) A user-defined function has the following structure:
[class] function name ([mode] parameter...)
{
declarations;
statements;
}
b) The class of a function can be either static or public. A static function is available only to the test or module within which the function was defined.
c) Parameters need not be explicitly declared. They can be of mode in, out, or inout. For all non-array parameters, the default mode is in. For array parameters, the default is inout. The significance of each of these parameter types is as follows:
in: A parameter that is assigned a value from outside the function.
out: A parameter that is assigned a value from inside the function.
inout: A parameter that can be assigned a value from outside or inside the function.

111) What do the static and public classes of a function mean?
a) The class of a function can be either static or public.
b) A static function is available only to the test or module within which the function was defined.
c) Once you execute a public function, it is available to all tests, for as long as the test containing the function remains open. This is convenient when you want the function to be accessible from called tests. However, if you want to create a function that will be available to many tests, you should place it in a compiled module. The functions in a compiled module are available for the duration of the testing session.
d) If no class is explicitly declared, the function is assigned the default class, public.

112) What do in, out and inout parameters mean?
a) in: A parameter that is assigned a value from outside the function.
b) out: A parameter that is assigned a value from inside the function.
c) inout: A parameter that can be assigned a value from outside or inside the function.

113) What is the purpose of the return statement?
a) This statement passes control back to the calling function or test. It also returns the value of the evaluated expression to the calling function or test. If no expression is assigned to the return

statement, an empty string is returned. Syntax: return [( expression )];

114) What do auto, static, public and extern variables mean?
a) auto: An auto variable can be declared only within a function and is local to that function. It exists only for as long as the function is running. A new copy of the variable is created each time the function is called.
b) static: A static variable is local to the function, test, or compiled module in which it is declared. The variable retains its value until the test is terminated by an Abort command. This variable is initialized each time the definition of the function is executed.
c) public: A public variable can be declared only within a test or module, and is available for all functions, tests, and compiled modules.
d) extern: An extern declaration indicates a reference to a public variable declared outside of the current test or module.

115) How do you declare constants?
a) The const specifier indicates that the declared value cannot be modified. The class of a constant may be either public or static. If no class is explicitly declared, the constant is assigned the default class public. Once a constant is defined, it remains in existence until you exit WinRunner.
b) The syntax of this declaration is: [class] const name [= expression];

116) How do you declare arrays?
a) The following syntax is used to define the class and the initial expression of an array. Array size need not be defined in TSL.
b) class array_name [ ] [= init_expression]
c) The array class may be any of the classes used for variable declarations (auto, static, public, extern).

117) How do you load and unload a compiled module?
a) In order to access the functions in a compiled module you need to load the module. You can load it from within any test script using the load command; all tests will then be able to access the functions until you quit WinRunner or unload the compiled module.
b) You can load a module either as a system module or as a user module.
A system module is generally a closed module that is "invisible" to the tester. It is not displayed when it is loaded, cannot be stepped into, and is not stopped by a pause command. A system module is not unloaded when you execute an unload statement with no parameters (global unload).
load (module_name [,1|0] [,1|0]);
The module_name is the name of an existing compiled module. Two additional, optional parameters indicate the type of module. The first parameter indicates whether the module is a system module or a user module: 1 indicates a system module; 0 indicates a user module. (Default = 0)
The second optional parameter indicates whether a user module will remain open in the WinRunner window or will close automatically after it is loaded: 1 indicates that the module will close automatically; 0 indicates that the module will remain open. (Default = 0)
c) The unload function removes a loaded module or selected functions from memory.
d) It has the following syntax: unload ( [ module_name | test_name [ , "function_name" ] ] );
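The load/unload behavior described above (a module name plus two optional 1|0 flags, both defaulting to 0, and a "global unload" that spares system modules) can be mimicked in Python. This is only an illustrative sketch; `ModuleRegistry` and its methods are invented names, not part of WinRunner.

```python
# Hypothetical analogue of WinRunner's load()/unload() semantics.
# ModuleRegistry and its method names are illustrative only.

class ModuleRegistry:
    def __init__(self):
        self.loaded = {}  # name -> {"system": bool, "stay_open": bool}

    def load(self, module_name, system=0, close_after_load=0):
        # First flag: 1 = system module, 0 = user module (default 0).
        # Second flag: 1 = close the module window after load,
        #              0 = leave it open (default 0).
        self.loaded[module_name] = {
            "system": bool(system),
            "stay_open": not close_after_load,
        }

    def unload(self, module_name=None):
        # unload() with no parameters is a "global unload": it removes
        # user modules but leaves system modules loaded.
        if module_name is not None:
            self.loaded.pop(module_name, None)
        else:
            self.loaded = {n: m for n, m in self.loaded.items()
                           if m["system"]}

reg = ModuleRegistry()
reg.load("string_utils")       # user module, stays open
reg.load("core_lib", 1)        # system module
reg.unload()                   # global unload removes user modules
print(sorted(reg.loaded))      # the system module survives
```

The point of the sketch is the asymmetry: a named unload removes any module, while a parameterless unload touches only user modules.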

118) Why do you use the reload function?
a) If you make changes in a module, you should reload it. The reload function removes a loaded module from memory and reloads it (combining the functions of unload and load). The syntax of the reload function is:
reload ( module_name [ ,1|0 ] [ ,1|0 ] );
The module_name is the name of an existing compiled module. Two additional, optional parameters indicate the type of module. The first parameter indicates whether the module is a system module or a user module: 1 indicates a system module; 0 indicates a user module. (Default = 0)
The second optional parameter indicates whether a user module will remain open in the WinRunner window or will close automatically after it is loaded: 1 indicates that the module will close automatically; 0 indicates that the module will remain open. (Default = 0)

119) Why does the minus sign not appear when using obj_type(), win_type(), type()?
When using any of the type() functions, a minus sign actually means "hold down the button for the previous character". The solution is to put a backslash character ("\") before the minus sign. This also applies to + < >.
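The reporting pattern that runs through these answers (report_msg for free-form messages, tl_step for per-section pass/fail) can be approximated in Python to show how step results roll up into a report. The function names mirror TSL, but the implementation is an invented sketch, not WinRunner's.

```python
# Illustrative Python analogue of TSL's report_msg()/tl_step().
PASS, FAIL = 0, 1
report = []

def report_msg(message):
    # report_msg(message): write a free-form message to the report.
    report.append(("msg", message))

def tl_step(step_name, status, description):
    # tl_step(step_name, status, description): record whether a
    # section of the test passed (0) or failed (non-zero).
    result = "pass" if status == PASS else "fail"
    report.append(("step", step_name, result, description))

tl_step("login", PASS, "login window accepted valid credentials")
report_msg("starting file comparison")
tl_step("compare", FAIL, "file1 and file2 differ")

failed = [r for r in report if r[0] == "step" and r[2] == "fail"]
print(len(failed))  # 1
```

Collecting step results this way is what lets a harness decide an overall verdict: the test passes only if no recorded step failed.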

Security Testing
You should look for "Web Auditing" material, and some application security in that field (like configuring your servers, including your web server, file server, etc.). By the way, a tester working on a specific aspect like web security should have some knowledge of other aspects of security engineering too. These topics are all going to be discussed in penetration testing (PenTesting) materials, which you can find a couple of on your own. The most recommended reading at this point is web application security material (according to your interests):
Developer's Guide to Web Application Security
The Web Application Hacker's Handbook: Discovering and Exploiting Security Flaws
Hacking the Code: ASP.NET Web Application Security
Professional Pen Testing for Web Applications
Testing Web Security: Assessing the Security of Web Sites and Applications
And at the end of it all, I think the website below is going to help you in your field of interest (i.e. web testing): http://www.owasp.org

If you're facing any technical problem or have questions, just feel free to ask here.

QTP

Difference between QTP 9.0 and 8.x. Well, the differences usually appear in the form of new features in the newer version(s) of QTP (QuickTest Professional). There is a bunch of new features in QTP 9 in comparison to QTP 8.x; I'm going to note some of them:
1. The first, obvious one: its new UI (User Interface)
2. Missing Resources panel (it can be considered a branch of the new UI too)
3. Enhanced IntelliSense support
4. The ability to pass parameters between Actions
5. Copy/move objects between Object Repositories more easily than ever before
6. The ability to convert an Object Repository to/from XML easily
7. Multiple Object Repositories per test asset
8. Manage functions and keywords from a central identity
9. Step into function definition code while debugging the test (a neat feature)
10. The new style of coding plug-ins
11. The ability to (un)comment multiple lines in Expert View
12. The ability to (un)indent multiple lines in Expert View
13. Multiple Document Interface for function libraries
14. ...and a lot more...

1. What are the features and benefits of QuickTest Pro (QTP)?
1. Keyword-driven testing
2. Suitable for both client/server and web-based applications
3. VBScript as the script language
4. Better error-handling mechanism
5. Excellent data-driven testing features

2. How to handle exceptions using the Recovery Scenario Manager in QTP?
You can instruct QTP to recover from unexpected events or errors that occur in your testing environment during a test run. The Recovery Scenario Manager provides a wizard that guides you through defining a recovery scenario. A recovery scenario has three steps:
1. Triggered Events
2. Recovery Steps
3. Post-Recovery Test Run

3. What is the use of a Text output value in QTP?
Output values enable you to view the values that the application takes during run time. When parameterized, the values change for each iteration.
Thus by creating output values, we can capture the values that the application takes for each run and output them to the data table.
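The output-value loop just described (read parameters from a data-table row, capture a runtime value, write it back to the same row for each iteration) can be sketched in Python. Everything here is hypothetical: `data_table` stands in for QTP's run-time Data Table, and `place_order` stands in for the application under test.

```python
# Sketch of data-driven iterations capturing an output value per row.
# The "OrderConf" column plays the role of a Text output value.

data_table = [
    {"User": "alice", "Item": "book"},
    {"User": "bob", "Item": "pen"},
]

def place_order(user, item):
    # Stand-in for the application under test: returns a runtime
    # value that differs on each iteration.
    return f"CONF-{user[:2].upper()}-{len(item)}"

for row in data_table:
    # Each iteration reads its parameters from the row and writes
    # the captured runtime value back into the same row.
    row["OrderConf"] = place_order(row["User"], row["Item"])

print([row["OrderConf"] for row in data_table])
# ['CONF-AL-4', 'CONF-BO-3']
```

The captured column can then be inspected after the run, which is exactly what makes output values useful for verifying per-iteration application behavior.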

4. How to use the Object Spy in QTP 8.0?
There are two ways to spy objects in QTP:
1) Through the File toolbar: in the File toolbar, click on the last toolbar button (an icon showing a person with a hat).
2) Through the Object Repository dialog: in the Object Repository dialog, click on the "Object Spy..." button. In the Object Spy dialog, click on the button showing the hand symbol. The pointer now changes into a hand symbol and we have to point at the object to spy its state. If the object is not visible or the window is minimized, hold the Ctrl button, activate the required window, and then release the Ctrl button.

5. What are the file extensions of the code file and object repository file in QTP?
File extension of the per-test object repository: filename.mtr
Shared object repository: filename.tsr
Code file extension: script.mts

6. Explain the concept of the object repository and how QTP recognizes objects.
Object Repository: displays a tree of all objects in the current component, or in the current action, or in the entire test (depending on the object repository mode you selected). We can view or modify the test object description of any test object in the repository, or add new objects to the repository.
QuickTest learns the default property values and determines in which test object class the object fits. If that is not enough, it adds assistive properties, one by one, to the description until it has compiled a unique description. If no assistive properties are available, then it adds a special ordinal identifier, such as the object's location on the page or in the source code.

7. What are the properties you would use for identifying a browser and page when using descriptive programming?
"name" would be another property apart from "title" that we can use. OR we can also use the property "micClass".
ex: Browser("micClass:=browser").Page("micClass:=page")

8. What are the different scripting languages you could use when working with QTP?
You can write scripts using the following languages: Visual Basic (VB), XML, JavaScript, Java, HTML.

9. Tell some commonly used Excel VBA functions.
Common functions are: coloring the cell, auto-fitting the cell, setting navigation from a link in one cell to another, saving.

10. Explain the keyword CreateObject with an example.
Creates and returns a reference to an Automation object.
Syntax: CreateObject(servername.typename [, location])
Arguments:
servername: Required. The name of the application providing the object.
typename: Required. The type or class of the object to create.
location: Optional. The name of the network server where the object is to be created.
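Descriptive programming, as in question 7, identifies an object by a set of property:=value pairs instead of a repository entry. A rough Python model of that matching follows; the candidate objects and the `find_object` helper are made up for illustration.

```python
# Toy model of descriptive programming: find the object whose
# properties contain every pair in the given description.

objects_on_page = [
    {"micClass": "browser", "title": "Orders"},
    {"micClass": "page", "title": "Orders", "name": "orders_page"},
    {"micClass": "link", "name": "checkout"},
]

def find_object(description):
    # Returns the first object matching all described properties,
    # loosely mirroring Browser("micClass:=browser").Page("micClass:=page").
    for obj in objects_on_page:
        if all(obj.get(k) == v for k, v in description.items()):
            return obj
    return None

page = find_object({"micClass": "page", "title": "Orders"})
print(page["name"])  # orders_page
```

The design point is that the description only has to be unique, not complete: any property subset that matches exactly one object is enough to identify it.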

11. Explain in brief the QTP Automation Object Model.
Essentially all configuration and run functionality provided via the QuickTest interface is in some way represented in the QuickTest automation object model via objects, methods, and properties. Although a one-to-one comparison cannot always be made, most dialog boxes in QuickTest have a corresponding automation object, most options in dialog boxes can be set and/or retrieved using the corresponding object property, and most menu commands and other operations have corresponding automation methods. You can use the objects, methods, and properties exposed by the QuickTest automation object model, along with standard programming elements such as loops and conditional statements, to design your program.

12. How to handle dynamic objects in QTP?
QTP has a unique feature called Smart Object Identification/recognition. QTP generally identifies an object by matching its test object and run-time object properties. QTP may fail to recognize dynamic objects whose properties change during run time. Hence it has an option of enabling Smart Identification, wherein it can identify objects even if their properties change during run time.
Check this out: if QuickTest is unable to find any object that matches the recorded object description, or if it finds more than one object that fits the description, then QuickTest ignores the recorded description and uses the Smart Identification mechanism to try to identify the object. While the Smart Identification mechanism is more complex, it is more flexible, and thus, if configured logically, a Smart Identification definition can probably help QuickTest identify an object, if it is present, even when the recorded description fails.
The Smart Identification mechanism uses two types of properties:
Base filter properties - the most fundamental properties of a particular test object class; those whose values cannot be changed without changing the essence of the original object. For example, if a Web link's tag was changed to any other value, you could no longer call it the same object.
Optional filter properties - other properties that can help identify objects of a particular class, as they are unlikely to change on a regular basis, but which can be ignored if they are no longer applicable.

13. What is a Run-Time Data Table? Where can I find and view this table?
In QTP, there is a data table used at run time.
- In QTP, select the option View > Data Table.
- This is basically an Excel file, which is stored in the folder of the test created; its name is Default.xls by default.

14. How do parameterization and data-driving relate to each other in QTP?
To data-drive we have to parameterize, i.e. we have to make the constant value a parameter, so that in each iteration (cycle) it takes a value that is supplied in the run-time data table. Only through parameterization can we drive a transaction (action) with different sets of data. You know running the script with the same set of data several times is not suggested, and it's also of no use.

15. What is the difference between Call to Action and Copy Action?

Call to Action: the changes made in a Call to Action will be reflected in the original action (from where the script is called). But in Copy Action, the changes made in the script will not affect the original script (action).

16. Explain the concept of how QTP identifies objects.
During recording, QTP looks at the object and stores it as a test object. For each test object QTP learns a set of default properties called mandatory properties, and looks at the rest of the objects to check whether these properties are enough to uniquely identify the object. During a test run, QTP searches for the run-time objects that match the test object it learned while recording.

17. Differentiate the two object repository types of QTP.
The object repository is used to store all the objects in the application being tested. Types of object repository: per-action and shared repository. In a shared repository there is only one centralized repository for all the tests, whereas in per-action, a separate repository is created for each test.

18. What are the differences, and what is the best practical application of each object repository type?
Per Action: for each Action, one Object Repository is created.
Shared: one Object Repository is used by the entire application.

19. Explain the difference between a Shared Repository and a Per-Action Repository.
Shared Repository: the entire application uses one Object Repository, similar to the Global GUI Map file in WinRunner.
Per Action: for each Action, one Object Repository is created, like the GUI Map file per test in WinRunner.

20. Have you ever written a compiled module? If yes, tell me about some of the functions that you wrote.
Sample answer (you can tell about modules you worked on; if your answer is Yes then you should expect more questions and should be able to explain those modules later): I used functions for capturing dynamic data during runtime - functions for capturing the desktop, browser and pages.

21.
Can you do more than just capture and playback?
Sample answer (say Yes only if you worked on it): I have dynamically captured objects during runtime, in which no recording, no playback and no use of a repository is done AT ALL.
- It was done by Windows scripting using the DOM (Document Object Model) of the windows.

22. How to do the scripting? Are there any inbuilt functions in QTP? What is the difference between them? How to handle script issues?
Yes, there's an in-built functionality called "Step Generator" in Insert > Step > Step Generator (F7), which will generate the scripts as you enter the appropriate steps.

23. What is the difference between a checkpoint and an output value?

An output value is a value captured during the test run and entered in the run-time data table at a specified location. Ex: location in the Data Table [Global sheet / local sheet].

24. How many types of Actions are there in QTP?
There are three kinds of actions:
Non-reusable action - an action that can be called only in the test with which it is stored, and can be called only once.
Reusable action - an action that can be called multiple times by the test with which it is stored (the local test) as well as by other tests.
External action - a reusable action stored with another test. External actions are read-only in the calling test, but you can choose to use a local, editable copy of the Data Table information for the external action.

25. I want to open a Notepad window without recording a test, and I do not want to use the System utility Run command either. How do I do this?
You can still make Notepad open without using the record or System utility script, just by mentioning the path of Notepad (i.e. where notepad.exe is stored in the system) in the "Windows Applications" tab of the "Record and Run Settings" window.
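Question 12's Smart Identification mechanism (base filter properties that must always match, plus optional filter properties that can be dropped when no longer applicable) can be sketched as a two-phase match. The candidate data and the `smart_identify` helper below are invented for illustration and simplify the real mechanism considerably.

```python
# Sketch of Smart-Identification-style matching: base filter
# properties must always match; optional filter properties are
# discarded one at a time if they would eliminate every candidate.

candidates = [
    {"tag": "A", "text": "Login", "color": "blue"},
    {"tag": "A", "text": "Login", "color": "red"},
    {"tag": "IMG", "text": "Login"},
]

def smart_identify(base, optional):
    # Phase 1: keep only objects matching every base property.
    pool = [c for c in candidates
            if all(c.get(k) == v for k, v in base.items())]
    # Phase 2: apply optional properties, dropping the last one
    # whenever applying them all would leave no candidate.
    opts = list(optional.items())
    while opts:
        narrowed = [c for c in pool
                    if all(c.get(k) == v for k, v in opts)]
        if narrowed:
            pool = narrowed
            break
        opts.pop()  # ignore a no-longer-applicable optional property
    # Succeed only on a unique match, as the text describes.
    return pool[0] if len(pool) == 1 else None

obj = smart_identify({"tag": "A"}, {"text": "Login", "color": "blue"})
print(obj["color"])  # blue
```

Note the asymmetry the text insists on: base properties are never relaxed, because changing them would change "the essence of the original object".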

White-Box methods and comparisons

Once white-box testing is started, there are a number of techniques to ensure the internal parts of the system are being adequately tested and that there is sufficient logic coverage. The execution of a given test case against program P will exercise (cover) certain parts of P's internal logic. A measure of testedness for P is the degree of logic coverage produced by the collective set of test cases for P. White-box testing methods are used to increase logic coverage. There are four basic forms of logic coverage:
(1) Statement coverage
(2) Decision (branch) coverage
(3) Condition coverage
(4) Path coverage
The figure below defines and compares five white-box methods: Statement Coverage, Decision Coverage, Condition Coverage, Decision/Condition Coverage, and Multiple Condition Coverage.

Test characteristic                             Stmt.   Dec.       Cond.   Dec./Cond.  Mult. Cond.
(1) Each statement is executed at least once     Y      Implicit    Y      Implicit     Y
(2) Each decision takes on all possible
    outcomes at least once                       N      Y           N      Y            Implicit
(3) Each condition in a decision takes on
    all possible outcomes at least once          N      N           Y      Y            Implicit
(4) All possible combinations of condition
    outcomes in each decision occur at
    least once                                   N      N           N      N            Y

The figure illustrates the white-box methods. For example, to perform condition coverage, tests covering characteristics 1 and 3 are required. Tests covering 2 and 4 are not required. To perform multiple condition coverage, tests covering characteristics 1 and 4 are required. Such tests will automatically cover characteristics 2 and 3.

Figure: The white-box methods defined and compared. Each column in this figure represents a distinct method of white-box testing, and each row (1-4) defines a different test characteristic. For a given method (column), "Y" in a given row means that the test characteristic is required for the method. "N" signifies no requirement. "Implicit" means the test characteristic is achieved implicitly by other requirements of the method.

Exhaustive path coverage is generally impractical. However, there are practical methods, based on the other three basic forms, which provide increasing degrees of logic coverage.

Example of white-box coverage

To clarify the difference between these coverage methods, consider the following Pascal procedure. The goal of the example is to list one possible set of tests (sets of input data) which satisfies the criteria for each of the white-box coverage methods.

The liability procedure:

procedure liability (age, sex, married, premium);
begin
  premium := 500;
  if ((age < 25) and (sex = male) and (not married)) then
    premium := premium + 1500
  else
  begin
    if (married or (sex = female)) then
      premium := premium - 200;
    if ((age > 45) and (age < 65)) then
      premium := premium - 100;
  end;
end;

The three input parameters are age (integer), sex (male or female), and married (true or false). Keep in mind the following:

- Statement coverage: Each statement is executed at least once.

- Decision coverage: Each statement is executed at least once; each decision takes on all possible outcomes at least once.
- Condition coverage: Each statement is executed at least once; each condition in a decision takes on all possible outcomes at least once.
- Decision/condition coverage: Each statement is executed at least once; each decision takes on all possible outcomes at least once; each condition in a decision takes on all possible outcomes at least once.
- Multiple condition coverage: Each statement is executed at least once; all possible combinations of condition outcomes in each decision occur at least once.

A logic coverage methods solution for the liability (insurance) procedure follows. The following notation is used in each table shown below. The first column of each row denotes the specific "IF" statement from the exercise program. For example, "IF-2" means the second IF statement in the sample program. The last column indicates a test-case number in parentheses. For example, "(3)" indicates test-case number 3. Any information following the test-case number is the test data itself in abbreviated form. For example, "23 FT" means age = 23, sex = Female, and married = True. An asterisk (*) in any box means "wild card" or "any valid input."
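To make the example concrete, here is a direct Python transcription of the Pascal liability procedure. The constant amounts (500, 1500, 200, 100) follow the listing above as best it can be read; treat the exact figures as illustrative.

```python
# Python transcription of the Pascal liability procedure above.

def liability(age, sex, married):
    premium = 500
    if age < 25 and sex == "male" and not married:
        premium = premium + 1500
    else:
        if married or sex == "female":
            premium = premium - 200
        if age > 45 and age < 65:
            premium = premium - 100
    return premium

# The abbreviated test cases used in the solution tables:
print(liability(23, "male", False))    # (1) 23 M F -> 2000
print(liability(23, "female", False))  # (2) 23 F F -> 300
print(liability(50, "male", False))    # (3) 50 M F -> 400
```

Tracing each case by hand against the coverage tables is a good way to see which IF statements, and which outcomes of each, a given input actually exercises.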

Statement Coverage

IF      Age     Sex     Married     Test Case

There are only two statements in this program, and any combination of inputs will provide coverage for both statements.

Decision Coverage

IF      Age        Sex           Married     Test Case
IF-1    <25        Male          False       (1) 23 M F
IF-1    <25        Female        False       (2) 23 F F
IF-2    *          Female        *           (2)
IF-2    >=25       Male          False       (3) 50 M F
IF-3    <=45       Female (n1)   *           (2)
IF-3    >45, <65   *             *           (3)

Note (n1): This input is not necessary for IF-3, but it is necessary to ensure that IF-1 is false [if age < 25 and married is false] so that the else clause of IF-1 (and hence IF-3) will be executed.

Condition Coverage

IF      Age     Sex       Married     Test Case
IF-1    <25     Female    False       (2) 23 F F
IF-1    <25     Male      False       (1) 23 M F
IF-2    *       Male      *           (1)
IF-2    *       Female    False       (2)
IF-3    <=45    *         *           (1)
IF-3    >45     *         *           (3) 70 F F
IF-3    >=65    *         *           (3)

Note: These test cases fail to execute the then clause of IF-1 as well as the (empty) else clause of IF-2.
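The bookkeeping behind these coverage tables can be automated. The sketch below is an invented helper (not from the text) that runs a set of abbreviated test cases through the liability logic and records the outcome of each IF, so you can check mechanically that every decision takes on both True and False.

```python
# Record the outcome of each decision (IF-1, IF-2, IF-3) for a set
# of test cases, then verify both outcomes occur for each decision.

def liability_traced(age, sex, married, outcomes):
    if1 = age < 25 and sex == "male" and not married
    outcomes["IF-1"].add(if1)
    if not if1:
        # IF-2 and IF-3 sit in the else branch, so they are only
        # evaluated (and recorded) when IF-1 is false.
        outcomes["IF-2"].add(married or sex == "female")
        outcomes["IF-3"].add(45 < age < 65)

cases = [(23, "male", False),    # (1) 23 M F
         (23, "female", False),  # (2) 23 F F
         (50, "male", False)]    # (3) 50 M F
outcomes = {"IF-1": set(), "IF-2": set(), "IF-3": set()}
for age, sex, married in cases:
    liability_traced(age, sex, married, outcomes)

print(all(o == {True, False} for o in outcomes.values()))  # True
```

With these three cases every decision takes both outcomes, which is exactly the decision-coverage criterion; extending the tracer to individual conditions would let you check condition coverage the same way.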

Client/Server Software Testing

Introduction

The first part of this page is an introduction to Client/Server architecture, which includes three sections: What is Client/Server Computing, Architectures for Client/Server Systems, and Critical Issues Involved in Client/Server System Management. Client/Server computing is a current reality for professional system developers and for sophisticated departmental computing users. The section What is Client/Server Computing points out the definition and major characteristics of Client/Server computing. Netcentric (or Internet) computing, as an evolution of the Client/Server model, has brought new technology to the forefront. Hence, the major characteristics of and differences between netcentric and traditional Client/Server computing are also presented in this section. Both traditional and netcentric computing are tiered architectures. A brief introduction to three popular architectures, namely 2-tiered architecture, modified 2-tiered architecture, and 3-tiered architecture, is found in the section The Architecture for Client/Server Computing.

The second part of this page is about Client/Server software testing. There are four sections in this part: Introduction to Client/Server Software Testing, Testing Plan for Client/Server Computing, Client/Server Testing in Different Layers, and Special Concerns for Internet Computing: Security Testing. In the section Introduction to Client/Server Software Testing, we present some basic characteristics of Client/Server software testing from different points of view. Because of the difference between traditional and Client/Server software testing, a practical testing plan based on application functionality is attached in section 2, Testing Plan for Client/Server Software Testing.
We also give some detailed explanation of different test plans, such as the system test plan, operational plan, acceptance test plan, and regression test plan, which are parts of a Client/Server testing plan. As mentioned in Part I, a Client/Server system has several layers, which can be viewed conceptually and physically. Viewed physically, the layers are client, server,

middleware, and network. In section 3, Client/Server Testing in Different Layers, specific concerns related to client, server and network problems, testing techniques, testing tools and some activities are addressed separately in Testing on the Client Side, Testing on the Server Side, and Network Testing. For Internet-based Client/Server systems, security is one of the major concerns. Hence, this essay also includes some security risks that need to be tested, in Part II, section 4, Special Concerns for Internet Computing: Security Testing.

Client/Server Software Testing

I: Introduction to Client/Server architecture:

Client/Server system development is the preferred method of constructing cost-effective department- and enterprise-level strategic corporate information systems. It allows the rapid deployment of information systems in end-user environments.

1: What is Client/Server Computing?

Client/Server computing is a style of computing involving multiple processors, one of which is typically a workstation, and across which a single business transaction is completed [1]. Client/Server computing recognizes that business users, and not a mainframe, are the center of a business. Therefore, Client/Server is also called "client-centric" computing. Today, Client/Server computing is extended to the Internet (netcentric computing, i.e. network-centric computing), and the concept of business users has expanded greatly. A Forrester Report describes netcentric computing as "Remote servers and clients cooperating over the Internet to do work" and says that Internet computing extends and improves the Client/Server model [2]. The characteristics of Client/Server computing include:
1. There are multiple processors.
2.
A complete business transaction is processed across multiple servers.

Netcentric computing, as an evolution of the Client/Server model, has brought new technology to the forefront, especially in the areas of external presence and access, ease of distribution, and media capabilities. Some of the new technologies are [3]:
a. The browser, which provides a "universal client". In the traditional Client/Server environment, distributing an application internally or externally for an enterprise requires that the application be recompiled and tested for all specific workstation platforms (operating systems). It also usually requires loading the application on each client machine. The browser-centric application style offers an alternative to this traditional problem. The web browser provides a universal client that offers users a consistent and familiar user interface. Using a browser, a user can launch many types of applications and view many types of documents. This can be accomplished on different operating systems and is independent of where the applications or documents reside.
b. Direct supplier-to-customer relationships. The external presence and access enabled by connecting a business node to the Internet has opened up a series of opportunities to reach an audience outside a company's traditional internal users.
c. Richer documents. Netcentric technologies (such as HTML documents, plug-ins, and Java) and standardization of media information formats enable support for

complex documents, applications and even nondiscrete data types such as audio and video.
d. Application version checking and dynamic update. The configuration management of traditional Client/Server applications, which tend to be stored on both the client and server sides, is a major issue for many corporations. Netcentric computing can check and update application versions dynamically.

2: Architectures for Client/Server Systems.

Both traditional Client/Server and netcentric computing are tiered architectures. In both cases, there is a distribution of presentation services, application code, and data across clients and servers. In both cases, there is a networking protocol that is used for communication between clients and servers. In both cases, they support a style of computing where processes on different machines communicate using messages. In this style, the "client" delegates business functions or other tasks (such as data manipulation logic) to one or more server processes. Server processes respond to messages from clients.

A Client/Server system has several layers, which can be visualized in either a conceptual or a physical manner. Viewed conceptually, the layers are presentation, process, and database. Viewed physically, the layers are server, client, middleware, and network.

2.1. Client/Server 2-tiered architecture:

The 2-tiered architecture is also known as the client-centric model, which implements a "fat" client. Nearly all of the processing happens on the client, and the client accesses the database directly rather than through any middleware. In this model, all of the presentation logic and the business logic are implemented as processes on the client. The 2-tiered architecture is the simplest one to implement. Hence, it is the simplest one to test. Also, it is the most stable form of Client/Server implementation, making most of the errors that testers find independent of the implementation.
Direct access to the database makes it simpler to verify the test results. The disadvantages of this model are its limited scalability and its maintenance difficulties. Because it does not partition the application logic very well, changes require reinstallation of the software on all of the client desktops.

2.2. Modified 2-tiered architecture:

Because of the maintenance nightmare of the 2-tiered Client/Server architecture, the business logic is moved to the database side and implemented using triggers and stored procedures. This kind of model is known as the modified 2-tiered architecture. In terms of software testing, the modified 2-tiered architecture is more complex than the 2-tiered architecture for the following reasons:

a. It is difficult to create a direct test of the business logic. Special tools are required to implement and verify the tests.

b. It is possible to test the business logic from the GUI, but there is no way to determine the number of procedures and/or triggers that fire and create intermediate results before the end product is achieved.

c. Another complication is dynamic database queries. They are constructed by the application and exist only when the program needs them. It is very difficult to be sure that the test generates a query "correctly", or as expected. Special utilities that show what is running in memory must be used during the tests.

2.3. 3-tiered architecture:

In a 3-tiered architecture, the application is divided into a presentation tier, a middle tier, and a data tier. The middle tier is composed of one or more application servers distributed across one or more physical machines. This architecture is also termed the "thin client/fat server" approach. This model is very complicated to test because the business and/or data objects can be invoked from many clients, and the objects can be partitioned across many servers. The characteristics that make the 3-tiered architecture desirable as a development and implementation framework at the same time make testing more complicated and tricky.

3: Critical Issues Involved in Client/Server System Management:

Hurwitz Consulting Group, Inc. has provided a framework for managing Client/Server systems that identifies eight primary management issues [4]:
a. Performance
b. Problem
c. Software distribution
d. Configuration and administration
e. Data and storage
f. Operations
g. Security
h. License

II. Client/Server Software Testing:

Software testing for Client/Server systems (desktop or webtop) presents a new set of testing problems, but it also includes the more traditional problems testers have always faced in the mainframe world. Atre describes the special requirements of Client/Server testing [5]:
a. The client's user interface
b. The client's interface with the server
c. The server's functionality
d. The network (the reliability and performance of the network)

1. Introduction to Client/Server Software Testing:

We can view Client/Server software testing from different perspectives:

a. From a "distributed processing" perspective: Since Client/Server is a form of distributed processing, it is necessary to consider its testing implications from that point of view.
The term "distributed" implies that data and processes are dispersed across various and miscellaneous platforms. Binder states several issues that need to be considered in Client/Server environments [6]:
• Client GUI considerations
• Target environment and platform diversity considerations
• Distributed database considerations (including replicated data)
• Distributed processing considerations (including replicated processes)
• Nonrobust target environment
• Nonlinear performance relationships
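One practical consequence of the database considerations above: in the 2-tiered model described earlier, the tester's direct access to the database makes it simple to verify test results. A minimal sketch, using Python's sqlite3 in place of the real database; the schema and the application function are hypothetical stand-ins:

```python
import sqlite3

# Stand-in for the application under test: a "fat client" function that
# writes an order straight to the database (hypothetical schema).
def place_order(conn, customer, amount):
    conn.execute("INSERT INTO orders (customer, amount) VALUES (?, ?)",
                 (customer, amount))
    conn.commit()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (customer TEXT, amount REAL)")

place_order(conn, "acme", 125.0)    # drive the application

# Verification step: bypass the application and query the database directly,
# exactly as a tester can in a 2-tiered architecture.
row = conn.execute("SELECT customer, amount FROM orders").fetchone()
assert row == ("acme", 125.0)
print("database state verified:", row)
```

In the modified 2-tiered and 3-tiered models this direct check becomes harder, which is exactly the testing complication the text describes.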

b. From a cross-platform perspective: The networked, cross-platform nature of Client/Server systems requires that we pay much more attention to configuration testing and compatibility testing. The purpose of configuration testing is to uncover the weaknesses of the system when operated in the different known hardware and software environments. The purpose of compatibility testing is to find any functional inconsistency of the interface across hardware and software.

c. From a cross-window perspective: The current proliferation of Microsoft Windows environments has created a number of problems for Client/Server developers. For example, Windows 3.1 is a 16-bit environment, while Windows 95 and Windows NT are 32-bit environments. Mixing and matching 16-bit and 32-bit code, systems, and products causes major problems. There now exist automated tools that can generate both 16-bit and 32-bit test scripts.

2. Testing Plan for Client/Server Computing:

In many instances, testing Client/Server software cannot be planned from the perspective of traditional integrated testing activities, because this view either is not applicable at all or is too narrow, and other dimensions must be considered. The following are some specific considerations that need to be addressed in a Client/Server testing plan:
• Must include consideration of the different hardware and software platforms on which the system will be used.
• Must take into account network and database server performance issues with which mainframe systems did not have to deal.
• Has to consider the replication of data and processes across networked servers.

See the attached "Client/Server test plan based on application functionality" [7]. In the test plan, we may address or construct several different kinds of testing:

a. The system test plan: System test scenarios are a set of test scripts which reflect user behaviors in a typical business situation.
It is very important to identify the business scenarios before constructing the system test plan. See the attached CASE STUDY: The business scenarios for the MFS imaging system.

b. The user acceptance test plan: The user acceptance test plan is very similar to the system test plan. The major difference is direction. The user acceptance test is designed to demonstrate the major system features to the user, as opposed to finding new errors. See the attached CASE STUDY: Acceptance test specification for the MFS imaging system.

c. The operational test plan: It guides the single-user testing of the graphical user interface and of the system functions.

d. The regression test plan: Regression testing occurs at two levels. In Client/Server development, regression testing happens between builds. Between system releases, regression testing also occurs postproduction. Each new build/release must be tested for three aspects:
• To uncover errors introduced by a fix into previously correct functions.
• To uncover previously reported errors that remain.
• To uncover errors in the new functionality.
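The three regression aspects above can be sketched as a small classification harness that compares each case's result against its status in the previous build. All names here are hypothetical stand-ins for the real build-over-build test runner:

```python
# A minimal build-over-build regression sketch (hypothetical names; the real
# system under test would replace run_case).

def run_case(case):
    # Stand-in for driving the application; here, a trivial computation.
    return sum(case)

def regression_report(cases, previous_results):
    """Classify each case against the previous build's pass/fail record."""
    report = {"regressed": [], "still_failing": [], "new_failures": [], "passing": []}
    for name, (case, expected) in cases.items():
        passed = run_case(case) == expected
        passed_before = previous_results.get(name)   # None = case is new
        if passed:
            report["passing"].append(name)
        elif passed_before is True:
            report["regressed"].append(name)      # a fix broke a correct function
        elif passed_before is False:
            report["still_failing"].append(name)  # previously reported error remains
        else:
            report["new_failures"].append(name)   # error in the new functionality
    return report

cases = {
    "add_small": ([1, 2], 3),
    "add_empty": ([], 0),
    "add_big": ([10, 20, 30], 99),   # deliberately wrong expectation -> fails
}
previous = {"add_small": True, "add_empty": True}  # "add_big" is new in this build
report = regression_report(cases, previous)
print(report)
```

Each of the three failure buckets corresponds to one of the bulleted regression goals above.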

e. The multiuser performance test plan: It needs to be performed in order to uncover any unexpected system performance problems under load.

3. Client/Server Testing in Different Layers:

3.1. Testing on the Client Side: Graphic User Interface Testing:

3.1.1 The complexity of graphic user interface testing is due to:

a. Cross-platform nature: The same GUI objects may be required to run transparently (providing a consistent interface across platforms, with the cross-platform nature unknown to the user) on different hardware and software platforms.

b. Event-driven nature: GUI-based applications have increased testing requirements because they run in an event-driven environment where user actions are events that determine the application's behavior. Because the number of available user actions is very high, the number of logical paths in the supporting program code is also very high.

c. The mouse, as an alternate method of input, also raises some problems. It is necessary to ensure that the application handles both mouse input and keyboard input correctly.

d. GUI testing also requires testing for the existence of a file that provides supporting data/information for text objects. The application must be sensitive to its existence or nonexistence.

e. In many cases, GUI testing also involves testing the functions that allow end users to customize GUI objects. Many GUI development tools give users the ability to define their own GUI objects. This requires the underlying application to be able to recognize and process events related to these custom objects.

3.1.2 GUI testing techniques:

Many traditional software testing techniques can be used in GUI testing.

a. Review techniques such as walkthroughs and inspections [8]. These human testing procedures have been found to be very effective in the prevention and early correction of errors.
It has been documented that two-thirds of all of the errors in finished information systems are the result of logic flaws rather than poor coding [9]. Preventive testing approaches, such as walkthroughs and inspections, can eliminate the majority of these analysis and design errors before they go through to the production system.

b. Data validation techniques: Some of the most serious errors in software systems have been the result of inadequate or missing input validation procedures. Software testing has powerful data validation procedures in the form of the black-box techniques of equivalence partitioning, boundary analysis, and error guessing. These techniques are also very useful in GUI testing.

c. Scenario testing: It is a system-level black-box approach that also assures good white-box logic-level coverage for Client/Server systems.

d. The decision logic table (DLT): A DLT represents an external view of the functional specification that can be used to supplement scenario testing from a logic-coverage perspective. In a DLT, each logical condition in the specification becomes a control path in the finished system. Each rule in the table describes a specific instance of a pathway that must be implemented. Hence, test cases based on the rules in a DLT provide adequate coverage of the module's logic independent of its coded implementation.

In addition to these traditional testing techniques, a number of companies have begun producing structured capture/playback testing tools that address the unique properties of GUIs. The difference between the traditional and structured capture/playback paradigms is that traditional capture/playback occurs at an external level: it records input as keystrokes or mouse actions, and output as screen images that are saved and compared against the inputs and output images of subsequent runs. Structured capture/playback is based on an internal view of external activities. The application program's interactions with the GUI are recorded as internal "events" that can be saved as "scripts" written in some scripting language.

3.2 Testing on the Server Side: Application Testing:

There are several kinds of tests that scripts can be designed to invoke: load tests, volume tests, stress tests, performance tests, and data-recovery tests.

3.2.1 Client/Server loading tests:

Client/Server systems must undergo two types of testing: single-user function-based testing and multiuser load testing. Multiuser load testing is the best method to gauge Client/Server performance. It is necessary in order to determine the suitability of application server, database server, and web server performance. Because a multiuser load test requires emulating a situation in which multiple clients access a single server application, it is almost impossible to do without automation.
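The automation the text calls for amounts to spawning concurrent "virtual users" programmatically and collecting their response times. A minimal sketch, with a local stand-in server replacing the real application server (user and iteration counts are illustrative):

```python
# Minimal multiuser load-test sketch: N threads ("virtual users") hit one
# server concurrently and response times are collected for analysis.
import threading
import time
import urllib.request
from http.server import ThreadingHTTPServer, BaseHTTPRequestHandler

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"ok")
    def log_message(self, *args):   # keep the run quiet
        pass

server = ThreadingHTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()
url = "http://127.0.0.1:%d/" % server.server_port

times = []                 # response times across all virtual users
lock = threading.Lock()

def virtual_user(iterations):
    for _ in range(iterations):
        start = time.perf_counter()
        with urllib.request.urlopen(url) as resp:
            resp.read()
        with lock:
            times.append(time.perf_counter() - start)

users = [threading.Thread(target=virtual_user, args=(5,)) for _ in range(4)]
for u in users:
    u.start()
for u in users:
    u.join()
server.shutdown()

print("%d requests; avg %.4f s; max %.4f s"
      % (len(times), sum(times) / len(times), max(times)))
```

Commercial tools such as the ones discussed later do essentially this at much larger scale, adding scripting, scheduling and reporting on top.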
For Client/Server load testing, some common objectives include:
• Measuring the length of time to complete an entire task
• Discovering which hardware/software configuration provides optimal performance
• Tuning database queries for optimal response
• Capturing Mean-Time-To-Failure as a measure of reliability
• Measuring system capacity to handle loads without performance degradation
• Identifying performance bottlenecks

Based on the test objectives, a set of performance measurements should be described. Typical measurements include:
• End-to-end response time
• Network response time
• GUI response time
• Server response time
• Middleware response time

3.2.2 Volume testing: The purpose of volume testing is to find weaknesses in the system with respect to its handling of large amounts of data during extended time periods.

3.2.3 Stress testing: The purpose of stress testing is to find defects in the system's capacity for handling large numbers of transactions during peak periods. For example, a script might require users to log in and proceed with their daily activities while, at the same time, a series of workstations emulating a large number of other systems are running recorded scripts that add, update, or delete from the database.

3.2.4 Performance testing: System performance is generally assessed in terms of response time and throughput rates under differing processing and configuration conditions. To attack performance problems, several questions should be asked first:
• How much application logic should be remotely executed?
• How much updating should be done to the database server over the network from the client workstation?
• How much data should be sent to each client in each transaction?

According to Hamilton [10], performance problems are most often the result of the client or server being configured inappropriately. The best strategy for improving client-server performance is a three-step process [11]. First, execute controlled performance tests that collect data about volume, stress, and loading. Second, analyze the collected data. Third, examine and tune the database queries and, if necessary, provide temporary data storage on the client while the application is executing.

3.2.5 Other server-side testing related to data storage:
• Data recovery testing
• Data backup and restore testing
• Data security testing
• Replicated data integrity testing

3.2.6 Examples of automated server testing tools:

LoadRunner/XL, offered by Mercury Interactive, is a Unix-based automated server testing tool that tests the server side of multiuser Client/Server applications. LoadRunner/PC is a similar product for Windows environments. SQL Inspector and ODBC Inspector are tools for testing the link between the client and the server. These products monitor the database interface pipeline and collect information about all database calls or a selected subset of them. SQL Profiler is used for tuning database calls.
It stores and displays statistics about SQL commands embedded in Client/Server applications. SQL EYE is an NT-based tool offered by Microsoft. It can track the information passed between SQL Server and its clients. Client applications connect indirectly to SQL Server through SQL EYE, which allows users to view the queries sent to SQL Server, the returned results, row counts, messages, and errors.

3.3 Networked Application Testing:

Testing the network is beyond the scope of an individual Client/Server project, as the network may serve more than a single Client/Server project. Thus, network testing falls into the domain of the network management group. As Robert Buchanan [12] said: "If you haven't tested a network solution, it's hard to say if it works. It may 'work'. It may execute all commands, but it may be too slow for your needs."

Nemzow blames the majority of network performance problems on insufficient network capacity [13]. He views bandwidth and latency as the critical determinants of network speed and capacity. He also sees interactions among intermediate network nodes (switches, bridges, routers, and gateways) as adding to the problem. Elements of network testing include:
• Application response time measures
• Application functionality
• Throughput and performance measurement
• Configuration and sizing
• Stress testing and performance testing
• Reliability

It is necessary to measure application response time while the application is completing a series of tasks. This kind of measure reflects the user's perception of the network, and is applicable throughout the entire network life cycle.

Testing application functionality involves testing shared functionality across workstations, shared data, and shared processes. This type of testing is applicable during development and evolution.

Configuration and sizing measure the response of specific system configurations. This is done for different network configurations until the desired performance level is reached. The point of stress testing is to overload network resources such as routers or hubs. Performance testing can be used to determine how many network devices will be required to meet the network's performance requirements. Reliability testing involves running the network for 24-72 hours under a medium-to-heavy load. From a reliability point of view, it is important that the network remain functional in the event of a node failure.

4. Special Concerns for Internet Computing: Security Testing:

For Internet-based Client/Server systems, security testing for the web server is important. The web server is your LAN's window to the world and, conversely, the world's window to your LAN. It is a maxim in system security circles that buggy software opens up security holes.
It is a maxim in software development circles that large, complex programs contain bugs. Unfortunately, web servers are large, complex programs that can contain security holes. Furthermore, the open architecture of web servers allows arbitrary CGI scripts to be executed on the server's side of the connection in response to remote requests. Any CGI script installed at your site may contain bugs, and every such bug is a potential security hole. Three types of security risks have been identified [15]:

1. The primary risk is errors or misconfiguration on the web server side that would allow remote users to:
• Steal confidential information
• Execute commands on the server host, thus allowing the users to modify the system
• Gain information about the server host that would allow them to break into the system
• Launch attacks that will bring the system down

2. The secondary risk occurs on the browser side:
• Active content that crashes the browser, damages your system, breaches your company's privacy, or creates an annoyance.
• The misuse of personal information provided by the end user.

3. The tertiary risk is data interception during data transfer.

The above risks are also the focus of web server security testing. As a tester, it is your responsibility to test whether the security provided by the server meets the users' expectations for network security.

Web Application Testing

When you test a web site or a web-based application, you have a certain goal of testing: something you intend to find out about your site or web-based application as a result of testing it with WAPT. It can be the maximum number of page hits per second your web server can serve under the load of multiple users, the performance characteristics of your site or web-based application, the breaking point of your site or web-based application under the maximum user load, the optimal hardware/software configuration, the level of reliability of your web server over an extended period of high user load, or something else.

Select the objective of your test on the Testing Objectives page of the New Test Scenario Wizard. When you select an objective, its description is shown below the list. There are three main terms for testing with multiple simultaneously acting users (real or simulated): load, performance, and stress testing.

All performance and stress testing requires a workload definition as part of the test. There may be no load, minimal load, normal load, above-normal load, or extreme load. So we will use the term "load testing" as a general category for all types of testing with multiple simultaneously acting users. The definition of load testing is: "Any type of testing where realistic (or hyper-realistic) workloads are characterized, simulated, and submitted to the system under test." Extreme loads are used in stress testing, to find the breaking point and bottlenecks of the tested system.

Normal loads are used in performance testing, to ensure an acceptable level of performance characteristics, such as response time or request processing time, under the estimated load. Minimal loads are often used in benchmark testing, to estimate the user's experience. Either way, load, not performance, is the shared characteristic across all these types of testing.

In stress testing you should try to break the application with an extreme load and expose the bugs that are likely to appear under stress, such as data corruption, buffer overflows, poor handling of resource depletion, deadlocks, race conditions, etc. The fact that performance metrics such as response time can be measured during stress testing is practically irrelevant to the purpose of stress testing. So stress testing and performance testing have totally different goals, which makes it clear that stress testing is not a kind of performance testing.

Another reason for putting performance and stress testing under load testing is another type of testing known as background testing. In background testing you use workloads (usually normal) to exercise the system under test while you run functional and/or regression tests against the system. The goal of background testing is to test the functionality under more realistic conditions, i.e. with a realistic background workload, like the application will have in real use.

However, some sources treat performance testing as the general category instead of load testing. This approach is oriented towards test results (measured timings), while using load testing as the category concentrates on the nature of the test. We do not follow this approach, but it is rather widespread, so it should be mentioned.
For example, Rational Unified Process defines these terms as follows: Performance testing is a class of tests implemented and executed to characterize and evaluate the performance-related characteristics of the target-of-test, such as timing profiles, execution flow, response times, and operational reliability and limits. Included within this class are:

Load testing: Verifies the acceptability of the target-of-test's performance behavior under varying operational conditions (such as the number of users, number of transactions, etc.) while the configuration remains constant.

Stress testing: Verifies the acceptability of the target-of-test's performance behavior when abnormal or extreme conditions are encountered, such as diminished resources or an extremely high number of users.

Briefly:
• Performance testing is the overall process,
• Load testing checks whether the system will support the expected conditions,
• Stress testing tries to break the system.

The New Test Scenario Wizard is designed to help you create a test scenario depending on your testing objectives. The Wizard allows you to adjust WAPT parameters more easily than using the WAPT Runner and Editor. The New Test Scenario Wizard also includes a short guide on test results. Click the New button on the toolbar (or click New Scenario on the File menu) to initiate the New Test Scenario Wizard. The procedure for creating a new test scenario is rather easy; just follow the Wizard instructions.

Note: The New Test Scenario Wizard is launched every time you click the New button on the toolbar (or click New Scenario on the File menu). The first page of the Wizard is the Welcome page. You can click the Cancel button to cancel new test scenario creation, or click the Create Default button to exit the Wizard and create the default test scenario without the Wizard's help.

Click the Runner button on the Test tab of the left bar to open the Runner. Here you should specify:

Test volume

Single run: Click Single run to adjust a single-run test. The concept "single run" means that only one test run will be performed. Adjust the number of virtual users that will take part in the test (the users edit box). Adjust the number of iterations (the iterations edit box). The number of iterations determines how many times the test sequence will be executed during the test run.

Instead of setting the number of iterations you can set the test duration (in minutes). Mark the Run for checkbox and enter a duration. The iterations edit box becomes disabled. WAPT automatically sets the number of iterations that should be enough to run the test for the defined duration.

Instead of setting the number of iterations you can set the overall number of page requests that should be performed during the test. Mark the Perform checkbox and enter the number of page requests. The iterations edit box becomes disabled. WAPT automatically sets the number of iterations that should be enough to perform the defined number of page requests.

If you have defined both the test duration and the number of page requests performed during the test run, then the test will be finished when either of these conditions is fulfilled (the other condition is ignored).

Batch run: Click Batch run to adjust a batch-run test. The concept "batch run" means that several test runs will be performed one after another.

Batch run by Users: Adjust the number of users participating in each run: enter the number of users participating in the first run (the from edit box); enter the number of users participating in the last run (the to edit box); enter the step between the defined limits (the step edit box). The number of users participating in each run is equal to the number of users in the previous run + the step value. Adjust the number of iterations (the iterations edit box).

Note: You can adjust a negative step. In this case the value in the to edit box cannot be more than the value in the from edit box.
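The from/to/step arithmetic above (each run's user count is the previous run's count plus the step, with a negative step counting down) can be sketched as:

```python
# Generate the user count for each run in a batch, as described in the text:
# start at from_users, add step after every run, stop at the to_users limit.
def batch_runs(from_users, to_users, step):
    runs, users = [], from_users
    while (step > 0 and users <= to_users) or (step < 0 and users >= to_users):
        runs.append(users)
        users += step
    return runs

print(batch_runs(5, 20, 5))    # -> [5, 10, 15, 20]
print(batch_runs(20, 5, -5))   # -> [20, 15, 10, 5]
```

The second call shows the negative-step case, where the to value must not exceed the from value, exactly as the note above states.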
Batch run by Iterations: Adjust the number of iterations of the test sequence in each run:
• enter the number of iterations in the first run (the from edit box);
• enter the number of iterations in the last run (the to edit box);
• enter the step between the defined limits (the step edit box).

The number of iterations of the test sequence in each run is equal to the number of iterations in the previous run + the step value. Adjust the number of virtual users (the users edit box).

Note: You can adjust a negative step. In this case the value in the to edit box cannot be more than the value in the from edit box.

Interval between runs checkbox: Mark this checkbox if you need to set an interval between runs in the batch. Adjust the interval in seconds.

Instead of the iterations number you can specify the test duration or the number of performed page requests. In the case of a batch run these characteristics (the number of iterations, duration, and the number of page requests) correspond to each run in the batch.

Run for checkbox: Defines the duration (in minutes) of a test run. If you mark this checkbox, the iterations edit box is disabled. WAPT automatically sets the number of iterations that should be enough to run the test for the defined duration.
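When a duration is given instead of an iteration count, the tool must derive one from the other. The formula below is only a plausible illustration of such an estimate (duration divided by the average time of one pass of the test sequence), not WAPT's actual algorithm:

```python
import math

# Estimate how many iterations fit into a requested run duration, given the
# average time one pass of the test sequence takes (both values illustrative).
def iterations_for(duration_minutes, avg_iteration_seconds):
    return math.ceil(duration_minutes * 60 / avg_iteration_seconds)

print(iterations_for(5, 12))   # 5-minute run, ~12 s per pass -> 25
```

Rounding up ensures the run covers at least the requested duration.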

Perform checkbox: Defines the number of page requests performed during the test run. If you mark this checkbox, the iterations edit box is disabled. WAPT automatically sets the number of iterations that should be enough to perform the defined number of page requests. If you have defined both the test duration and the number of page requests performed during the test run, then the test will be finished when either of these conditions is fulfilled (the other condition is ignored).

Load level

The load level defines the load on the server during a test run. Click either Maximum load, or Load level up to the specified number of page requests per second.

Maximum load: Gives the load as it is specified in the test scenario options.

Load level up to: You can restrict the load on the server to the adjusted number of page requests per second.

Scheduled test run

This group box serves for adjusting a scheduled test run. Mark the Start test at checkbox to adjust a scheduled run. Select the date and time and click the Run Test button on the toolbar to activate the scheduled test run. At the specified date and time the test run will be started. If the specified time has already passed, then the test will start immediately. Note that the scheduled run will be activated only if you click the Run Test button after selecting the date and time.

Reports

The full log is a comprehensive log of a WAPT test run that includes information on all page requests, responses, redirects, and the returned HTML code. Full logs do not contain any timing information. The error log contains the page requests and responses for which an error occurred during a test (i.e. the response code is not equal to 200 (OK)).

Save error log / full log checkbox: If you mark it, then a log file will be created for every virtual user. In the case of a batch run, log files will be created for each virtual user on each run in the batch. Note that creating logs can require a lot of disk space. Usually logs are used for functional testing or for debugging a test scenario with dynamic parameter values.
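The error-log criterion above (keep any request whose response code is not 200 (OK)) amounts to a simple filter over the full log; the log format used here is a hypothetical simplification:

```python
# Reduce a full log to an error log: keep only entries whose HTTP response
# code differs from 200 (OK). The (request, status) format is illustrative.
log = [
    ("GET /index.html", 200),
    ("GET /missing.gif", 404),
    ("POST /login", 200),
    ("GET /report", 500),
]
error_log = [(req, code) for req, code in log if code != 200]
print(error_log)   # -> [('GET /missing.gif', 404), ('GET /report', 500)]
```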
Specify a folder for storing log files in the corresponding edit box. You can click the Browse Log Directory button to browse for a folder. Select the folder in the displayed dialog and click OK.

Save reports to checkbox: If you mark it, then a text report will be created as a file with the .xls extension. You can work with this file in Microsoft Excel (for example, to make graphs). The text report contains information on page request execution times and summary information about the test run. Specify the report file location in the reports file location edit box, or click the Browse Report Directory button to browse for a folder. Select the folder in the displayed dialog and click OK. The Reports names pattern edit box displays the pattern for report names. Click the Edit Pattern button to edit the pattern.

Test run comment edit box: Here you can enter your comment for the current test run. The entered comment will be displayed in the text report file.

Save pages timings checkbox: If you mark it, then the timings of all page requests performed during the test will be displayed at the end of the text report. Select either Duration or Duration and timings in the combo box. (Here duration means the page request duration; timings mean the iteration start and end times.) The selected item defines whether page request durations, or page request durations together with iteration start and end times, will be displayed for each user on each iteration.

If you do not mark the Save pages timings checkbox, then this information will not be displayed in the text report.

Test sequences

The Test sequences group box serves for adjusting the page request sequences. Each iteration is a step-by-step pass of the main sequence. The initial sequence is executed once for each virtual user at the beginning of each run. The final sequence is executed once for each virtual user at the end of each run.

Initial: Mark the Initial checkbox and click the Edit button to edit the initial sequence.
Main: Mark the Main checkbox and click the Edit button to edit the main sequence.
Final: Mark the Final checkbox and click the Edit button to edit the final sequence.

Virtual Users start

This group box defines the start delay between virtual users. All users can start simultaneously; in this case a number of flows equal to the number of virtual users will be started at the same time. Or users can start one by one with a certain delay between two contiguous users. Click either Start all users simultaneously, or Delay between users. If you click the Delay between users button, then you should adjust the delay in milliseconds.

Timing mode

The timing mode defines what operation should be measured during the test run. There are three possible modes:

Web transaction time: The time from the first byte of the page request sent to the last byte of the response received. In other words, it is the time from clicking the link till the page is loaded. Select whether the download will be made without images or including images.

Response time (TTFB): The time from the first byte of the page request sent till the first received byte of the server response. TTFB is the time to first byte.

Response download time: The time of getting the content of the server response: from the first received byte of the server response till the last received one.

Web transaction time = Response time (TTFB) + Response download time.

Timing rounding combo box: Defines the rounding of the results.
You can set the rounding to 0.01 ms, 0.1 ms, or 1 ms. All timings in the text report will be displayed with 2 digits after the decimal point. If the timing rounding is 1 ms, these 2 digits are zeros. If the timing rounding is 0.1 ms, the last digit is zero.
Timeout check checkbox: defines whether a timeout is enforced during page request processing. If the timeout is checked, the time of each page request processing will be no more than the maximum time of waiting for the server response (specified in Settings). When this maximum value is exceeded, the next page request will be sent. If the timeout is not checked, there is a probability that the current page request processing becomes an endless process, yet it decreases CPU usage. This parameter can help you in stress testing when you need to create an excessive load on the server. By default the Timeout check checkbox is marked.

After you have recorded a test Scenario, you can edit page options so that the final test Scenario completely satisfies your testing goals. Click the Editor button on the Test tab of the left bar to go to the Editor. In the Editor you can set options for the Scenario as a whole and for individual pages. The sequence of pages specified by you in the Recorder is transferred to the Editor. Note: when you click the Stop Recording button on the toolbar, the Editor opens.
Click the Save Scenario button on the toolbar to save the current Scenario. If you are saving an untitled Scenario, clicking this button opens the Save As dialog. Type an appropriate name for the test Scenario in the File name box and click Save. The file will be saved with the wts extension, the WAPT extension for files with test Scenarios. After saving, the Save Scenario button becomes disabled.
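The three timing modes and the report rounding described above can be sketched as follows. The timestamps and helper names are illustrative, not WAPT's own code:

```python
def timing_modes(t_sent, t_first_byte, t_last_byte):
    """Compute the three timing modes (in ms) from three timestamps:
    request sent, first response byte received, last response byte received."""
    ttfb = t_first_byte - t_sent            # Response time (TTFB)
    download = t_last_byte - t_first_byte   # Response download time
    transaction = t_last_byte - t_sent      # Web transaction time
    return ttfb, download, transaction      # transaction == ttfb + download

def round_timing(value_ms, step_ms):
    """Round a timing to the chosen step (0.01, 0.1 or 1 ms) and format it
    with two digits after the decimal point, as in the text report."""
    return f"{round(value_ms / step_ms) * step_ms:.2f}"

ttfb, download, transaction = timing_modes(0.0, 41.237, 123.456)
print(round_timing(transaction, 0.01))  # 123.46
print(round_timing(transaction, 0.1))   # 123.50  (last digit is zero)
print(round_timing(transaction, 1))     # 123.00  (both digits are zeros)
```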

If you are saving changes made to an existing Scenario, clicking the Save Scenario button automatically saves all changes, and the Save Scenario button becomes disabled. WAPT displays the test Scenario name in the title bar.
The sequence of pages is displayed in the upper pane of the Editor. Each page has a default Name (for example, page_1, page_2), URL, Data (parameters specified to the right of the "?" sign in the URL), and Delay in seconds (the delay between two contiguous pages). Click the necessary topic for details:
Common operations with the sequence of pages
You can perform common operations with the sequence of URLs:
• Click Add to append a new URL to the end of the sequence. A newly created page request is empty. You should define its options (server, URI, method, delay) in the Page options group box.
• Click Delete to delete the selected URL. You can also press the Delete key on your keyboard to delete the selected URL.
• Click Clear to delete all URLs from the Scenario. The system will ask you whether you are sure you want to delete the sequence of pages. Click Yes to confirm deleting, or No otherwise.
• Click Up to move the selected URL up in the test sequence.
• Click Down to move the selected URL down in the test sequence.
Adjust page options
When you click a page in the sequence, its details are displayed in the Page options group box.
• Name: Here you can change the page name. For example, instead of the default names page_1, page_2 you can type names with a certain meaning (for example, "Login", "The list of films", etc.).
• Server: The server name is displayed in this edit box. You can change it.
• Secure checkbox: Mark this checkbox if it is necessary to make an HTTPS page request.
• URI combo box: The URI can be dynamically calculated based on the HTML code of the response. This is often used when the URI contains a sessionID parameter. Select one of the following: recorded URI, $href(), or $action.
  o $href(text) is the parameter value of the href(text) function.
This function searches for the link with the specified text in the HTML code of the response to the previous request and takes the URI from this link.
  o $action is the parameter value of the action function. This function searches for the first form in the response to the previous request and extracts the URI from the form action.
• Method combo box: Select the GET or POST request method. When the GET method is used, parameters are transmitted in the page request string. When the POST method is used, parameters are transmitted implicitly (they are not specified in the page request string).
• Delay from and to edit boxes: The delay in seconds between the current page and the next page is displayed here. To randomize delays, set different values in the "from" and "to" edit boxes. In this case, each time WAPT requests a page it calculates a random value from the specified range and uses it as the delay between the current and the next page request. This helps to correctly emulate the different "think times" of real web users.
Note: Changes you make in the Page options group box immediately appear in the upper pane of the Editor.
Dynamic calculation of request parameter values
The Editor provides you with 10 functions for calculating page request parameter values at run time. Select a page from the sequence, then select a parameter from the list of parameters:

Click the Edit parameter button to define or edit the function for calculating the selected parameter value. See Parameter Value Dynamic Calculation for a detailed description of the functions and the functions editor.
Adjust list of images
Select a page in the test sequence and click the "Images..." button to view and edit the list of URLs of images corresponding to this page. If the URL of an image is checked, this image will be requested during the test run. If the URL is unchecked, this image will not be requested during the test run. Double-click a URL to check or uncheck it.
• Add: Enter the URL of an image in the "Image" edit box and click "Add" to add a new URL to the list of image URLs.
• Remove: Select the URL of an image in the list and click "Remove" to remove this URL from the list.
• Change: Enables you to edit the selected URL: select the URL of an image in the list, edit it in the "Image" edit box and click "Change" to save the changes.
• Check all: Click this button to check all URLs of images in the list.
• Uncheck all: Click this button to uncheck all URLs of images in the list.
Click OK to save the changes.
WAPT supports local cache emulation. This means that if an image URL was requested on some page and the same URL appears on any of the following pages, it will not be requested from the server again.
Adjust HTTP header custom strings
Select a page in the test sequence and click the "HTTP Header..." button to view and edit custom strings in the HTTP header corresponding to this page.
Name edit box: HTTP header field name.
Value edit box: HTTP header field value.
• Add: Specify the field name and field value in the "Name" and "Value" edit boxes and click "Add" to add a new custom string to the HTTP header.
• Remove: Select a custom string in the list and click "Remove" to remove it.
• Change: Enables you to edit the selected custom string: select the custom string in the list, edit it in the "Name" and "Value" edit boxes and click "Change" to save the changes.
Click OK to save the changes.
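The $href(text) and $action lookups described in the page options above can be approximated with Python's standard html.parser module. This is an illustrative re-implementation, not WAPT's actual code, and the sample HTML and session parameter are made up:

```python
from html.parser import HTMLParser

class LinkAndFormFinder(HTMLParser):
    """Collects (link text -> href) pairs and the first form's action."""
    def __init__(self):
        super().__init__()
        self.links = {}          # maps link text to its href (like $href(text))
        self.form_action = None  # first form's action attribute (like $action)
        self._href = None
        self._text = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "a":
            self._href, self._text = attrs.get("href"), []
        elif tag == "form" and self.form_action is None:
            self.form_action = attrs.get("action")

    def handle_data(self, data):
        if self._href is not None:
            self._text.append(data)

    def handle_endtag(self, tag):
        if tag == "a" and self._href is not None:
            self.links["".join(self._text).strip()] = self._href
            self._href = None

html = '<a href="/films?sid=42">The list of films</a><form action="/login?sid=42"></form>'
finder = LinkAndFormFinder()
finder.feed(html)
print(finder.links["The list of films"])  # /films?sid=42  (like $href(text))
print(finder.form_action)                 # /login?sid=42  (like $action)
```

Extracting the URI from the previous response this way is what lets a scenario follow links whose session ID changes on every run.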
Adjust Scenario options
There are the following Scenario options:
• Use delays checkbox: This checkbox defines how to process delays between pages. If you mark this checkbox, the delays specified in the Page options group box will be used during the test run. If you unmark it, the test will be run without delays between pages.
• Load images checkbox: This checkbox is identical to the "Show pictures" checkbox in the advanced Internet Explorer options. If you clear the "Load images" checkbox, pages will be loaded without images, so they will load faster. You can see the loading time in the generated text report. However, the load on the tested server will be inadequate in this case: it will correspond to the situation when all users switch off the display of pictures.
• Simulate user connection speed checkbox: You can limit the virtual user connection bandwidth for better emulation of real web users. Select a typical web user connection speed from the combo box.
• Ignore errors checkbox: This checkbox defines how to process errors. If this checkbox is unmarked and an error occurs, the test sequence will be aborted. (Note that not the whole test is aborted, but only the current iteration; the next iteration will be started.) If this checkbox is marked, the test sequence will be continued regardless of errors. If an error occurs, the next page in the sequence will be loaded.
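The Ignore errors behavior can be modeled with a small sketch; run_page is a hypothetical stand-in for issuing an actual page request:

```python
def run_iteration(pages, run_page, ignore_errors):
    """Run one iteration of the page sequence.

    If ignore_errors is False, the first failing page aborts only the
    current iteration (the next iteration would still start). If True,
    the sequence continues with the next page regardless of errors.
    """
    executed = []
    for page in pages:
        ok = run_page(page)  # stand-in: returns False on a request error
        executed.append(page)
        if not ok and not ignore_errors:
            break  # abort this iteration only
    return executed

fails_on_b = lambda page: page != "b"
print(run_iteration(["a", "b", "c"], fails_on_b, ignore_errors=False))  # ['a', 'b']
print(run_iteration(["a", "b", "c"], fails_on_b, ignore_errors=True))   # ['a', 'b', 'c']
```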

• Add "X-Forwarded-For" HTTP header checkbox: Toggles the use of additional fields of the HTTP header for imitating different users working through a proxy server. This is useful to emulate page requests coming from multiple computers. The IP address mask variables used in proxy emulation are: $C1 - low byte of the virtual user's number; $C2 - high byte of the virtual user's number; $R1 - low byte of the iteration number; $R2 - high byte of the iteration number. All values span from 1 to 254. To emulate page requests coming from unique IPs, set the mask to: $C2.$C1.$R2.$R1. Obviously, if the number of page requests exceeds 64262, $Rx values will be repeated. You can use a static value instead of $C2 if the number of virtual users is lower than 253. The WAPT default proxy mask is 192.168.$C2.$C1. For example, for the first virtual user the directive X-Forwarded-For: 192.168.1.1 will appear in the HTTP header.
• Keep alive checkbox: If this checkbox is marked, the directive "Connection: Keep-Alive" will be included in the header of each HTTP request.
• Basic authorization checkbox: Some servers require basic authorization for access. If you mark this checkbox, the Name and Password edit boxes will be activated. Enter there the name and password for basic authorization.
• User agent checkbox: This checkbox defines the type of browser to be used for the test. When you mark this checkbox, the browser combo box becomes activated. Select the necessary browser.
Save scenario
Click the Save Scenario button on the toolbar when you finish editing your test Scenario.

Click Settings on the Edit menu to adjust WAPT global settings.
Playback proxy settings
By default, WAPT connects to the tested server via a direct connection (without using a proxy server). Mark the Use proxy server while performing the test checkbox if it is necessary to use a proxy server during test runs. The defined proxy settings are used only while WAPT is performing a test run (playing back a test scenario).
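The $C and $R byte variables above can be sketched like this. The exact way WAPT wraps numbers into the 1-254 range is not stated here, so the wrap-around formula below is an assumption chosen to reproduce the documented 192.168.1.1 result for the first user:

```python
def xff_bytes(n):
    """Split a 1-based number into (high, low) bytes, each kept in 1..254.
    The wrap-around formula is an assumption, not WAPT's documented one."""
    low = (n - 1) % 254 + 1          # $C1 / $R1
    high = (n - 1) // 254 % 254 + 1  # $C2 / $R2
    return high, low

def forwarded_for(user_no, iteration_no, mask="192.168.$C2.$C1"):
    """Expand an IP mask into an X-Forwarded-For value for one request."""
    c2, c1 = xff_bytes(user_no)
    r2, r1 = xff_bytes(iteration_no)
    for var, val in (("$C1", c1), ("$C2", c2), ("$R1", r1), ("$R2", r2)):
        mask = mask.replace(var, str(val))
    return mask

print(forwarded_for(1, 1))                       # 192.168.1.1 (matches the example)
print(forwarded_for(300, 7, "$C2.$C1.$R2.$R1"))  # 2.46.1.7
```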
For recording, WAPT uses the actual proxy settings specified for Microsoft® Internet Explorer.
• Use Internet Explorer proxy settings: If you click this button, the current Internet Explorer proxy settings will be used for test runs. Each test run may use different proxy settings: when a new test is started (by clicking the Run Test button on the toolbar), the current proxy settings of Internet Explorer are determined and the new test runs with these settings.
• Use custom proxy settings: If you click this button, you should define fixed proxy settings for all test runs.
  o Type: select the proxy server type (HTTP, HTTPS, SOCKS4 or SOCKS5).
  o Server: enter the proxy server name.
  o Port: enter the proxy server port number.
  o Advanced...: define advanced proxy settings:
    Use authentication: Mark this checkbox if the proxy server requires authentication. Enter the username and password for proxy authentication.
    Bypass proxy server for local addresses: Mark this checkbox to bypass the proxy server for local addresses during test runs.
    Do not use proxy server for addresses beginning with: Enter addresses for which the proxy server will be bypassed during test runs. Addresses should be separated by semicolons.
Click OK to save the changes.
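The "Do not use proxy server for addresses beginning with" list amounts to a simple prefix check, which can be sketched as follows (illustrative only; the sample prefixes are made up):

```python
def should_bypass(address, exceptions):
    """True if the address starts with any semicolon-separated prefix."""
    prefixes = [p.strip() for p in exceptions.split(";") if p.strip()]
    return any(address.startswith(p) for p in prefixes)

print(should_bypass("192.168.0.10", "localhost; 192.168."))  # True: bypass the proxy
print(should_bypass("example.com", "localhost; 192.168."))   # False: go through the proxy
```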

Note that testing through a proxy server could distort the test results. We recommend using a proxy only when it is really necessary: if the tested server is outside the firewall and there is no direct channel to it, or if you wish to test the proxy server itself.
Default version of HTTP
HTTP version combo box: Select the version of HTTP that WAPT should use by default (1.1 or 1.0).
Default version of SSL
SSL version combo box: Select the version of SSL that WAPT should use by default (3.0 or 2.0).
Timeout setting
Maximum time of waiting for server response (sec) edit box: By default the maximum time of waiting for the server response is 120 seconds. You can change this value with the up and down spin buttons.
Interface settings
Show prompt for Recorder checkbox: If you mark this checkbox, a prompt dialog will be displayed when switching to the Recorder. There you should specify whether WAPT will always start recording when switching to the Recorder or not.
Always start recording when switching to Recorder checkbox: If you mark this checkbox, WAPT will automatically start recording URLs when switching to the Recorder. In this case, the Start recording button on the toolbar becomes disabled automatically (meaning that recording has started). If you do not mark this checkbox, then when switching to the Recorder the Start recording button on the toolbar will be activated; click it to start recording URLs.
Show HTML report on the test completion checkbox: If you mark this checkbox, you will see an Internet Explorer window with the generated HTML Report on test completion. This report enables viewing test run results right from Internet Explorer.
Click OK to save the settings or Cancel to close the Settings dialog without saving changes.

Manual Testing

Software Testing Introduction
Software testing is a critical element of software quality assurance and represents the ultimate process to ensure the correctness of the product. A quality product always enhances customer confidence in using the product, thereby improving the business economics. In other words, a good quality product means zero defects, which is derived from a better quality process in testing.
Software is an integrated set of program code, designed logically to implement a particular function or to automate a particular process. To develop a software product or project, user needs and constraints must be determined and explicitly stated. The development process is broadly classified into two:
1. Product development
2. Project development
Product development is done assuming a wide range of customers and their needs. This type of development involves customers from all domains and collecting requirements from many different environments.

Project development is done by focusing on a particular customer's needs, gathering data from his environment and bringing out a valid set of information that serves as a pillar of the development process.
Testing is a necessary stage in the software life cycle: it gives the programmer and user some sense of correctness, though never "proof of correctness". With effective testing techniques, software is more easily debugged, less likely to "break", more "correct", and, in summary, better.
Most development processes in the IT industry always seem to follow a tight schedule. Often, these schedules adversely affect the testing process, resulting in step-motherly treatment meted out to testing. As a result, defects accumulate in the application and are overlooked so as to meet deadlines. The developers convince themselves that the overlooked errors can be rectified in subsequent releases.
The definition of testing is not well understood. People use a totally incorrect definition of the word testing, and this is the primary cause of poor program testing. Testing a product means adding value to it by raising its quality or reliability. Raising the reliability of the product means finding and removing errors. Hence one should not test a product to show that it works; rather, one should start with the assumption that the program contains errors and then test the program to find as many of the errors as possible.
Definitions of Testing:
"Testing is the process of executing a program with the intent of finding errors."
Or
"Testing is the process of evaluating a system by manual or automatic means and verifying that it satisfies specified requirements."
Or
"... the process of exercising or evaluating a system or system component by manual or automated means to verify that it satisfies specified requirements or to identify differences between expected and actual results..."
Why software Testing?
Software testing helps to deliver quality software products that satisfy users' requirements, needs and expectations. If it is done poorly:
• defects are found during operation;
• it results in high maintenance cost and user dissatisfaction;
• it may cause mission failure;
• it impacts operational performance and reliability.
Some case studies
Disney's Lion King, 1994-1995
In the fall of 1994, the Disney company released its first multimedia CD-ROM game for children, The Lion King Animated Storybook. This was Disney's first venture into the market and it was highly promoted and advertised. Sales were huge. It was "the game to buy" for children that holiday season. What happened, however, was a huge debacle. On December 26, the day after Christmas, Disney's customer support phones began to ring, and ring, and ring. Soon the phone support technicians were

swamped with calls from angry parents with crying children who couldn't get the software to work. Numerous stories appeared in newspapers and on TV news. This problem was later found to be due to the software not having been tested for all conditions.
Software Bug: A Formal Definition
Calling any and all software problems bugs may sound simple enough, but doing so hasn't really addressed the issue. To keep from running in circular definitions, there needs to be a definitive description of what a bug is. A software bug occurs when one or more of the following five rules is true:
1) The software doesn't do something that the product specification says it should do.
2) The software does something that the product specification says it shouldn't do.
3) The software does something that the product specification doesn't mention.
4) The software doesn't do something that the product specification doesn't mention but should.
5) The software is difficult to understand, hard to use, slow, or (in the software tester's eyes) will be viewed by the end user as just plain not right.
What exactly does a Software Tester do? (Or: the Role of a Tester)
From the above examples you have seen how nasty bugs can be, you know what the definition of a bug is, and you can imagine how costly bugs can be. So the main goal of a tester is: "The goal of a Software Tester is to find bugs."
As a software tester you shouldn't be content with just finding bugs; you should think about how to find them sooner in the development process, thus making them cheaper to fix. "The goal of a Software Tester is to find bugs, and find them as early as possible."
But finding bugs early isn't enough. "The goal of a Software Tester is to find bugs, find them as early as possible, and make sure they get fixed."
Principle of Testing
The main objective of testing is to find defects in requirements, design, documentation, and code as early as possible.
The test process should be such that the software product delivered to the customer is free of defects. All tests should be traceable to customer requirements. Test cases must be written for invalid and unexpected, as well as for valid and expected, input conditions. A necessary part of a test case is a definition of the expected output or result. A good test case is one that has a high probability of detecting an as-yet undiscovered error.
Eight Basic Principles of Testing
• Define the expected output or result.
• Don't test your own programs.

• Inspect the results of each test completely.
• Include test cases for invalid or unexpected conditions.
• Test the program to see if it does what it is not supposed to do as well as what it is supposed to do.
• Avoid disposable test cases unless the program itself is disposable.
• Do not plan tests assuming that no errors will be found.
• The probability of locating more errors in any one module is directly proportional to the number of errors already found in that module.
Best Testing Practices to be followed during testing
• Testing and evaluation responsibility is given to every member, so as to generate team responsibility among all.
• Develop a Master Test Plan so that resources and responsibilities are understood and assigned as early in the project as possible.
• Systematic evaluation and preliminary test design are established as a part of all system engineering and specification work.
• Testing is used to verify that all project deliverables and components are complete, and to demonstrate and track true project progress.
• A risk-prioritized list of test requirements and objectives (such as requirements-based, design-based, etc.) is developed and maintained.
• Conduct reviews as early and as often as possible to provide developer feedback and get problems found and fixed as they occur.
Software Development Life Cycle (SDLC)
Let us look at the traditional software development life cycle versus the most commonly used life cycle today.

Fig A (Traditional)  Fig B (Most commonly used)
In Fig A above, the Testing phase comes after Development or coding is complete, and before the product is launched and goes into the Maintenance phase. This model has a disadvantage: the cost of fixing errors will be high because we are not able to find errors until coding is completed. If there is an error at

the Requirements phase, then all subsequent phases have to be changed, so the total cost becomes very high.

Fig B shows the recommended test process, which involves testing in every phase of the life cycle. During the Requirements phase, the emphasis is upon validation, to determine that the defined requirements meet the needs of the organization. During the Design and Development phases, the emphasis is on verification, to ensure that the design and program accomplish the defined requirements. During the Test and Installation phases, the emphasis is on inspection, to determine that the implemented system meets the system specification. During the Maintenance phase, the system will be re-tested to determine that the changes work and that the unchanged portion continues to work.
Requirements and Analysis Specification
The main objective of the requirement analysis is to prepare a document which includes all the client requirements. That is, the Software Requirement Specification (SRS) document is the primary output of this phase. Proper requirements and specifications are critical for having a successful project. Removing errors at this phase can reduce the cost as much as errors found in the Design phase. You should also verify the following activities:
• Determine the verification approach.
• Determine the adequacy of the requirements.
• Generate functional test data.
• Determine the consistency of the design with the requirements.
Design phase
In this phase the entire project design is divided into two:
• High-Level Design or System Design.
• Low-Level Design or Detailed Design.
High-Level Design or System Design (HLD)
High-level design gives the overall system design in terms of functional architecture and database design. This is very useful for the developers to understand the flow of the system. In this phase the design team, the review team (testers) and the customers play a major role. The entry criterion for this phase is the requirements document, that is, the SRS. The exit criteria will be the HLD, project standards, the functional design documents, and the database design document.
Low-Level Design (LLD)

During the detailed design phase, the view of the application developed during the high-level design is broken down into modules and programs. Logic design is done for every program and then documented as program specifications. For every program, a unit test plan is created. The entry criterion for this phase is the HLD document. The exit criteria will be the program specifications and unit test plans (LLD).

Development Phase
This is the phase where coding actually starts. After the preparation of the HLD and LLD, the developers know what their role is, and they develop the project according to the specifications. This stage produces the source code, executables, and database. The output of this phase is the subject of subsequent testing and validation. We should also verify these activities:
• Determine the adequacy of the implementation.
• Generate structural and functional test data for the programs.
The inputs for this phase are the physical database design document, project standards, program specifications, unit test plans, program skeletons, and utility tools. The outputs will be test data, source data, executables, and code reviews.
Testing phase
This phase is intended to find defects that can be exposed only by testing the entire system. This can be done by static testing or dynamic testing. Static testing means testing the product without executing it; we do it by examining it and conducting reviews. Dynamic testing is what you would normally think of as testing: we test the executing part of the project. A series of different tests are done to verify that all system elements have been properly integrated and the system performs all its functions.

Note that the system test planning can occur before coding is completed. Indeed, it is often done in parallel with coding. The input for this phase is the requirements specification document, and the outputs are the system test plan and the test results.
Implementation phase or the Acceptance phase
This phase includes two basic tasks:
• Getting the software accepted
• Installing the software at the customer site.
Acceptance consists of formal testing conducted by the customer according to the acceptance test plan prepared earlier, and analysis of the test results to determine whether the system satisfies its acceptance criteria. When the result of the analysis satisfies the acceptance criteria, the user accepts the software.
Maintenance phase
This phase is for all modifications addressing anything that does not meet the customer requirements, or anything to be appended to the present system. All types of corrections to the project or product take place in this phase. The cost of risk is very high in this phase. This is the last phase of the software development life cycle. The input will be the project to be corrected, and the output will be the modified version of the project.

Software Development Lifecycle Models

The process used to create a software product from its initial conception to its public release is known as the software development lifecycle model. There are many different methods that can be used for developing software, and no model is necessarily the best for a particular project. There are four frequently used models:
• Big-Bang Model
• Waterfall Model
• Prototype Model
• Spiral Model
Big-Bang Model
The Big-Bang Model is the one in which a huge amount of matter (people or money) is put together, a lot of energy is expended, often violently, and out comes the perfect software product, or it doesn't. The beauty of this model is that it's simple. There is little planning, scheduling, or formal development process. All the effort is spent developing the software and writing the code. It's an ideal process if the product requirements aren't well understood and the final release date is flexible. It's also important to have flexible customers, too, because they won't know what they're getting until the very end.

Waterfall Model
A project using the waterfall model moves down a series of steps starting from an initial idea to a final product. At the end of each step, the project team holds a review to determine if they're ready to move to the next step. If the project isn't ready to progress, it stays at that level until it's ready.
Each phase requires well-defined information, utilizes a well-defined process, and results in well-defined outputs. Resources are required to complete the process in each phase, and each phase is accomplished through the application of explicit methods, tools and techniques.
The Waterfall model is also called the Phased model because of the sequential move from one phase to another, the implication being that systems cascade from one level to the next in smooth progression. It has the following seven phases of development:
The figure represents the Waterfall Model. Notice three important points about this model:
• There's a large emphasis on specifying what the product will be.

• The steps are discrete; there's no overlap.
• There's no way to back up. As soon as you're on a step, you need to complete the tasks for that step and then move on.

Prototype model
The Prototyping model, also known as the Evolutionary model, came into SDLC because of certain failures in first versions of application software. A failure in the first version of an application inevitably leads to the need for redoing it. To avoid such failures, the concept of Prototyping is used. The basic idea of Prototyping is that instead of fixing requirements before design and coding can begin, a prototype is built to understand the requirements. The prototype is built using known requirements. By viewing or using the prototype, the user can actually feel how the system will work.
The prototyping model has been defined as: "A model whose stages consist of expanding increments of an operational software, with the direction of evolution being determined by operational experience."
Prototyping Process
The following activities are carried out in the prototyping process:
• The developer and the user work together to define the specifications of the critical parts of the system.
• The developer constructs a working model of the system.
• The resulting prototype is a partial representation of the system.
• The prototype is demonstrated to the user.
• The user identifies problems and redefines the requirements.
• The designer uses the validated requirements as a basis for designing the actual or production software.
Prototyping is used in the following situations:
• When an earlier version of the system does not exist.
• When the user's needs are not clearly definable/identifiable.
• When the user is unable to state his/her requirements.
• When user interfaces are an important part of the system being developed.
Spiral model
The traditional software process models don't deal with the risks that may be faced during project development. One of the major causes of project failure in the past has been negligence of project risks. Due to this, nobody was prepared when something unforeseen happened.
Barry Boehm recognized this and tried to incorporate this factor, project risk, into a life cycle model. The result is the Spiral model, which was first presented in 1986. The new model aims at incorporating the strengths and avoiding the difficulties of the other models by shifting the management emphasis to risk evaluation and resolution. Each phase in the spiral model is split into four sectors of major activities. These activities are as follows:

Objective setting: This activity involves specifying the project and process objectives in terms of their functionality and performance.
Risk analysis: This involves identifying and analyzing alternative solutions. It also involves identifying the risks that may be faced during project development.
Engineering: This activity involves the actual construction of the system.
Customer evaluation: During this phase, the customer evaluates the product for any errors and modifications.

Software Testing Terms and Definitions

• Verification and validation
• Project Management
• Quality Management
• Risk Management
• Configuration Management
• Cost Management
• Compatibility Management

Verification & Validation
Verification and validation are often used interchangeably but have different definitions. These differences are important to software testing. Verification is the process of confirming that software meets its specifications. Validation is the process of confirming that it meets the user's requirements.
Verification can be conducted through reviews. Quality reviews provide visibility into the development process throughout the software development life cycle, and help teams determine whether to continue development activity at various

checkpoints or milestones in the process. The& are conducted to identif& defects in a product earl& in the life c&cle. T&pes of Reviews G In)process Reviews :) The& look at the product durin* a specific time period of life c&cle3 such as durin* the desi*n activit&. The& are usuall& limited to a se*ment of a pro0ect3 with the *oal of identif&in* defects as work pro*resses3 rather than at the close of a phase or even later3 when the& are more costl& to correct. G +ecision)point or phase)end Reviews: ) This t&pe of review is helpful in determinin* whether to continue with planed activities or not. The& are held at the end of each phase. G Post implementation Reviews: ) These reviews are held after implementation is complete to audit the process based on actual results. Post)implementation reviews are also know as b Postmortemsc3 and are held to assess the success of the overall process after release and identif& an& opportunities for process improvements. 'lasses of Reviews G Informal or Peer Review: ) In this t&pe of review *enerall& a one)to one meetin* between the author of a work product and a peer3 initiated as a request for input re*ardin* a particular artifact or problem. There is no a*enda3 and results are not formall& reported. These reviews occur as need)based throu*h each phase of a pro0ect. G -emiformal or Walkthrou*h Review: ) The author of the material bein* reviewed facilitates this. The participants are led throu*h the material in one of the two formats: the presentation is made without interruptions and comments are made at the end3 or comments are made throu*hout. Possible solutions for uncovered defects are not discussed durin* the review. G Formal or Inspection Review: ) 6n inspection is more formali.ed than a Awalkthrou*hA3 t&picall& with =)O people includin* a moderator3 reader3 and a recorder to take notes. 
The subject of the inspection is typically a document such as a requirements spec or a test plan, and the purpose is to find problems and see what's missing, not to fix anything. Attendees should prepare for this type of meeting by reading through the document; most problems will be found during this preparation. The result of the inspection meeting should be a written report. Thorough preparation for inspections is difficult, painstaking work, but it is one of the most cost-effective methods of ensuring quality.

Three rules should be followed for all reviews:
1. The product is reviewed, not the producer.
2. Defects and issues are identified, not corrected.
3. All members of the reviewing team are responsible for the results of the review.

Project Management
Project management is organizing, planning and scheduling software projects. It is concerned with the activities involved in ensuring that software is delivered on schedule and in accordance with the requirements of the organization developing and procuring the software. Project management is needed because software development is always subject to budget and schedule constraints that are set by the organization developing the software.

Project management activities include:
• Project planning
• Project scheduling
• Iterative Code/Test/Release Phases
• Production Phase
• Post Mortem

Project planning
This is the most time-consuming project management activity. It is a continuous activity from initial concept through to system delivery. The project plan must be regularly updated as new information becomes available. Without a proper plan, the development of the project will produce errors, or it may drive the cost above what was scheduled.

Project scheduling
This activity involves splitting the project into tasks and estimating the time and resources required to complete each task. Organize tasks concurrently to make optimal use of the workforce. Minimize task dependencies to avoid delays caused by one task waiting for another to complete. The Project Manager has to take into consideration various aspects like scheduling and estimating manpower resources, so that the cost of developing a solution is within limits. The Project Manager also has to allow for contingency in planning.

Iterative Code/Test/Release Phases

After the planning and design phases, the client and development team have to agree on the feature set and the timeframe in which the product will be delivered. This includes iterative releases of the product, so as to let the client see fully implemented functionality early and to allow the developers to discover performance and architectural issues early in the development. Each iterative release is treated as if the product were going to production. Full testing and user acceptance is performed for each iterative release. Experience shows that one should space iterations at least 2-3 months apart. If iterations are closer than that, more time will be spent on convergence and the project timeframe expands. During this phase, code reviews must be done weekly to ensure that the developers are delivering to specification, and all source code is put under source control. Also, full installation routines are to be used for each iterative release, as would be done in production.

Deliverables
• Triage
• Weekly Status with Project Plan and Budget Analysis
• Risk Assessment
• System Documentation
• User Documentation (if needed)
• Test Signoff for each iteration
• Customer Signoff for each iteration

Production Phase
Once all iterations are complete, the final product is presented to the client for a final signoff. Since the client has been involved in all iterations, this phase should go very smoothly.

Deliverables
• Final Test Signoff
• Final Customer Signoff

Post Mortem Phase
The post mortem phase allows the team to step back and review the things that went well and the things that need improvement. Post mortem reviews cover processes that need adjustment, highlight the most effective processes, and provide action items that will improve future projects. To conduct a post mortem review, announce the meeting at least a week in advance so that everyone has time to reflect on the project issues they faced. Everyone has to be asked to come to the meeting with the following:
1. Items that were done well during the project
2. Items that were done poorly during the project
3. Suggestions for future improvements

During the meeting, the information listed above is collected. As each person offers their input, categorize the input so that all comments are collected. This will show how many people had the same observations during the project. At the end of the observation review, a list of the items mentioned most often will be available. The team then goes through this list and prioritizes the importance of each item, drawing a distinction around the most important ones. Finally, a list of action items has to be made that will be used to improve the process, and the results published. When the next project begins, everyone

on the team should review the Post Mortem Report from the prior release, so as to improve the next release.

Quality Management
The project quality management knowledge area comprises the set of processes that ensure the result of a project meets the needs for which the project was executed. Processes such as quality planning, assurance, and control are included in this area. Each process has a set of inputs and a set of outputs. Each process also has a set of tools and techniques that are used to turn inputs into outputs.

Definition of Quality:
• Quality is the totality of features and characteristics of a product or service that bear on its ability to satisfy stated or implied needs. Or
• Quality is defined as meeting the customer's requirements the first time and every time. This is much more than the absence of defects, which merely allows us to meet the requirements.

Some goals of quality programs include:
• Fitness for use. (Is the product or service capable of being used?)
• Fitness for purpose. (Does the product or service meet its intended purpose?)
• Customer satisfaction. (Does the product or service meet the customer's expectations?)

Quality Management Processes
Quality Planning: The process of identifying which quality standards are relevant to the project and determining how to satisfy them.
• Input includes: quality policy, scope statement, product description, standards and regulations, and output from other processes.
• Methods used: benefit/cost analysis, benchmarking, flowcharting, and design of experiments.
• Output includes: Quality Management Plan, operational definitions, checklists, and input to other processes.

Quality Assurance
The process of evaluating overall project performance on a regular basis to provide confidence that the project will satisfy the relevant quality standards.
• Input includes: Quality Management Plan, results of quality control measurements, and operational definitions.
• Methods used: quality planning tools and techniques, and quality audits.
• Output includes: quality improvement.

Quality Control

The process of monitoring specific project results to determine if they comply with relevant quality standards, and identifying ways to eliminate causes of unsatisfactory performance.
• Input includes: work results, Quality Management Plan, operational definitions, and checklists.
• Methods used include: inspection, control charts, Pareto charts, statistical sampling, flowcharting, and trend analysis.
• Output includes: quality improvements, acceptance decisions, rework, completed checklists, and process adjustments.

Quality Policy
The overall quality intentions and direction of an organization with regard to quality, as formally expressed by top management.

Total Quality Management (TQM)
A common approach to implementing a quality improvement program within an organization.

Quality Concepts
• Zero Defects
• The Customer is the Next Person in the Process
• Do the Right Thing Right the First Time (DTRTRTFT)
• Continuous Improvement Process (CIP) (from the Japanese word, Kaizen)

Tools of Quality Management
Problem Identification Tools:
• Pareto Chart
1. Ranks defects in order of frequency of occurrence to depict 100% of the defects. (Displayed as a histogram.)
2. Defects with the most frequent occurrence should be targeted for corrective action.
3. 80-20 rule: 80% of problems are found in 20% of the work.
4. Does not account for the severity of the defects.
• Cause and Effect Diagrams (fishbone diagrams or Ishikawa diagrams)
1. Analyze the inputs to a process to identify the causes of errors.
2. Generally consist of 8 major inputs to a quality process, to permit the characterization of each input.
• Histograms
1. Show frequency of occurrence of items within a range of activity.
2. Can be used to organize data collected for measurements done on a product or process.
• Scatter diagrams
1. Used to determine the relationship between two or more pieces of corresponding data.
2. The data are plotted on an "X-Y" chart to determine correlation (highly positive, positive, no correlation, negative, and highly negative).
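The Pareto ranking described above (defects ordered by frequency of occurrence, with the 80-20 rule in mind) can be sketched in a few lines of Python. The defect categories and counts below are invented for illustration:

```python
from collections import Counter

# Hypothetical defect log: one entry per reported defect, labeled by category.
defects = (["UI"] * 42 + ["validation"] * 27 + ["database"] * 9
           + ["performance"] * 5 + ["documentation"] * 2)

counts = Counter(defects)
total = sum(counts.values())

# Rank categories by frequency of occurrence (most frequent first) and show
# the cumulative share of all defects, Pareto-chart style.
cumulative = 0
for category, count in counts.most_common():
    cumulative += count
    print(f"{category:14s} {count:3d}  cumulative {100 * cumulative / total:5.1f}%")
```

The categories at the top of the listing are the ones to target for corrective action; note, as the text warns, that this ranking says nothing about defect severity.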

Problem Analysis Tools
1. Graphs
2. Check sheets (tick sheets) and checklists
3. Flowcharts

Risk Management
Risk management must be an integral part of any project. Everything does not always happen as planned. Project risk management contains the processes for identifying, analyzing, and responding to project risk. Each process has a set of inputs and a set of outputs. Each process also has a set of tools and techniques that are used to turn the inputs into outputs.

Risk Management Processes
Risk Management Planning
Used to decide how to approach and plan the risk management activities for a project.
• Input includes: the project charter, risk management policies, and the WBS all serve as input to this process.
• Methods used: many planning meetings will be held in order to generate the risk management plan.
• Output includes: the major output is the risk management plan, which does not include the responses to specific risks. However, it does include the methodology to be used, budgeting, timing, and other information.

Risk Identification
Determining which risks might affect the project and documenting their characteristics.
• Input includes: the risk management plan is used as input to this process.
• Methods used: documentation reviews should be performed in this process. Diagramming techniques can also be used.
• Output includes: risks and risk symptoms are identified as part of this process. There are generally two types of risks: business risks, which are risks of gain or loss, and pure risks, which represent only a risk of loss. Pure risks are also known as insurable risks.

Risk Analysis
A qualitative analysis of risks and conditions is done to prioritize their effects on project objectives.
• Input includes: there are many items used as input to this process, including the risk management plan. The risks should already be identified as well. Use of low-precision data may lead to an analysis that is not usable. Risks are rated against how they impact the project's objectives for cost, schedule, scope, and quality.

• Methods used: several tools and techniques can be used for this process. Probability and impact will have to be evaluated.
• Output includes: an overall project risk ranking is produced as a result of this process. The risks are also prioritized. Trends should be observed. Risks calculated as high or moderate are prime candidates for further analysis.

Risk Monitoring and Control
Used to monitor risks, identify new risks, execute risk reduction plans, and evaluate their effectiveness throughout the project life cycle.
• Input includes: input to this process includes the risk management plan, risk identification and analysis, and scope changes.
• Methods used: audits should be used in this process to ensure that risks are still risks, as well as to discover other conditions that may arise.
• Output includes: output includes work-around plans, corrective action, and project change requests, as well as other items.

Risk Management Concepts
Expected Monetary Value (EMV)
• A risk quantification tool.
• EMV is the product of the risk event probability and the risk event value.
• Risk Event Probability: an estimate of the probability that a given risk event will occur.

Decision Trees
A diagram that depicts key interactions among decisions and associated chance events as understood by the decision maker. Can be used in conjunction with EMV, since risk events can occur individually or in groups and in parallel or in sequence.
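As defined above, EMV is the product of the risk event probability and the risk event value. A minimal worked example follows; the risks and dollar figures are invented:

```python
# Expected Monetary Value (EMV): risk event probability times risk event
# value, exactly as defined above. The risks and figures are illustrative.
risks = [
    {"name": "key developer leaves",  "probability": 0.25,  "impact": 40_000},
    {"name": "requirements change",   "probability": 0.50,  "impact": 8_000},
    {"name": "test environment late", "probability": 0.125, "impact": 8_000},
]

for risk in risks:
    emv = risk["probability"] * risk["impact"]
    print(f"{risk['name']:22s} EMV = ${emv:,.0f}")

# Summing each risk's EMV gives the project's total expected risk exposure.
total_emv = sum(r["probability"] * r["impact"] for r in risks)
print(f"total expected exposure  ${total_emv:,.0f}")
```

The individual EMV figures are what feed a decision tree: each branch carries the probability and value of its chance event, and branches can be combined when risks occur in groups or in sequence.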

Configuration Management
Configuration management (CM) is the process of controlling, coordinating, and tracking the standards and procedures for managing changes in an evolving software product. Configuration testing is the process of checking the operation of the software being tested on various types of hardware. Configuration management involves the development and application of procedures and standards to manage an evolving software product. This can be seen as part of a more general quality management process. When released to CM, software systems are sometimes called baselines, as they are a starting point for further development.

The best bet in this situation is for the testers to go through the process of reporting whatever bugs or blocking-type problems initially show up, with the focus being on critical bugs. Since this type of problem can severely affect schedules and indicates deeper problems in the software development process (such as insufficient unit testing or insufficient integration testing, poor design, improper build or release procedures, etc.), managers should be notified and provided with some documentation as evidence of the problem.

Configuration management can be managed through:
• Version control.
• Changes made in the project.

Version Control and Release Management
A version is an instance of a system which is functionally distinct in some way from other system instances. It is nothing but the updated or added features of the previous versions of the software. It has to be planned when the new system version is to be produced, and it has to be ensured that version management procedures and tools are properly applied.

A release is the means of distributing the software outside the development team. Releases must incorporate changes forced on the system by errors discovered by users and by hardware changes. They must also incorporate new system functionality.

Changes made in the project
This is one of the most useful ways of configuring the system. All changes that were made to the previous versions of the software have to be maintained. This is more important when the system fails or does not meet the requirements: by making note of the changes, one can get back the original functionality. This can include documents, data, or simulation.

Configuration Management Planning
This starts at the early phases of the project and must define the documents or document classes which are to be managed. Documents which might be required for future system maintenance should be identified and included as managed documents. It defines:
• the types of documents to be managed
• the document-naming scheme
• who takes responsibility for the CM procedures and creation of baselines
• policies for change control and version management.

This contains three important documents:
• Change management items.
• Change request documents.
• Change control board (CCB).

Change management
Software systems are subject to continual change requests from users, from developers, and from market forces. Change management is concerned with keeping track of and managing changes and ensuring that they are implemented in the most cost-effective way.

Change request form
Definition of the change request form is part of the CM planning process.
It records the changes required, the reason why the change was suggested, and the urgency of the change (from the requestor of the change). It also records the change evaluation, impact analysis, change cost and recommendations (from system maintenance staff). A major problem in change management is tracking change status. Change-tracking tools keep track of the status of each change request and automatically ensure that change requests are sent to the right people at the right time. Integration with e-mail systems allows electronic change request distribution.

Change control board

The changes should be reviewed by a group who decide whether or not they are cost-effective from a strategic, organizational and technical viewpoint. This group is sometimes called a change control board and includes members from the project team.
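The change-tracking tools mentioned above keep the status of each change request as it moves from submission through CCB review. A minimal sketch follows; the field names and the particular status flow are assumptions, not taken from the text:

```python
# Minimal change-request status tracking, in the spirit of the change-tracking
# tools described above. The statuses and fields are assumed for illustration.
STATUSES = ["submitted", "evaluated", "approved", "rejected", "implemented"]

class ChangeRequest:
    def __init__(self, cr_id, description, requestor):
        self.cr_id = cr_id
        self.description = description
        self.requestor = requestor
        self.status = "submitted"
        self.history = ["submitted"]   # audit trail of every status change

    def move_to(self, status):
        if status not in STATUSES:
            raise ValueError(f"unknown status: {status}")
        self.status = status
        self.history.append(status)

cr = ChangeRequest("CR-17", "Relax date validation in login form", "user group")
cr.move_to("evaluated")   # impact analysis and cost estimate recorded
cr.move_to("approved")    # CCB judged the change cost-effective
```

Keeping the full history, not just the current status, is what lets the tool route each request to the right people at the right time and answer "where is this change now?" at any point.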

Types of Software Testing

Static Testing
Static testing refers to testing something that's not running: examining and reviewing it. The specification is a document and not an executing program, so it's considered static. It's also something that was created using written or graphical documents, or a combination of both.

High-level reviews of the specification:
• Pretend to be the customer.
• Research existing standards and guidelines.
• Review and test similar software.

Low-level reviews of the specification:
• Specification attributes checklist.
• Specification terminology checklist.

Dynamic Testing
Techniques used are determined by the type of testing that must be conducted.
• Structural (usually called "white box") testing.
• Functional ("black box") testing.

Structural testing or White box testing
Structural tests verify the structure of the software itself and require complete access to the source code. This is known as 'white box' testing because you see into the internal workings of the code. White-box tests make sure that the software structure itself contributes to proper and efficient program execution. Complicated loop structures, common data areas, 100,000 lines of spaghetti code and nests of ifs are evil. Well-designed control structures, subroutines and reusable modular programs are good.

White-box testing's strength is also its weakness: the code needs to be examined by highly skilled technicians. That means that the tools and skills are highly specialized to the particular language and environment. Also, large or distributed system execution goes beyond one program, so a correct procedure might call another program that provides bad data. In large systems, it is the execution path as defined by the program calls, their input and output, and the structure of common files that is important. This gets into a hybrid kind of testing that is often employed in intermediate or integration stages of testing.

Functional or Black Box Testing
Functional tests examine the behavior of software as evidenced by its outputs, without reference to internal functions. Hence it is also called 'black box' testing. If the program consistently provides the desired features with acceptable performance, then specific source code features are irrelevant. It's a pragmatic and down-to-earth assessment of software.

Functional or black box tests better address the modern programming paradigm. As object-oriented programming, automatic code generation and code re-use become more prevalent, analysis of the source code itself becomes less important and functional tests become more important. Black box tests also better attack the quality target. Since only the people paying for an application can determine if it meets their needs, it is an advantage to create the quality criteria from this point of view from the beginning.

Black box tests have a basis in the scientific method. Like the process of science, black box tests must have a hypothesis (specifications), a defined method or procedure (test plan), reproducible components (test data), and a standard notation to record the results. One can re-run black box tests after a change to make sure the change only produced intended results, with no inadvertent effects.

Testing levels
There are several types of testing in a comprehensive software test process, many of which occur simultaneously.
• Unit Testing
• Integration Testing
• System Testing
• Performance / Stress Testing
• Regression Testing
• Quality Assurance Testing
• User Acceptance Testing and Installation Testing

Unit Testing
Testing each module individually is called unit testing. This follows white-box testing. In some organizations, a peer review panel performs the design and/or code inspections. Unit or component tests usually involve some combination of structural and functional tests by programmers in their own systems. Component tests often require building some kind of supporting framework that allows components to execute.

Integration testing
The individual components are combined with other components to make sure that necessary communications, links and data sharing occur properly. It is not truly system testing because the components are not implemented in the operating environment. The integration phase requires more planning and some reasonable

subset of production-type data. Larger systems often require several integration steps. There are three basic integration test methods:
• all-at-once
• bottom-up
• top-down

The all-at-once method provides a useful solution for simple integration problems, involving a small program possibly using a few previously tested modules. Bottom-up testing involves individual testing of each module using a driver routine that calls the module and provides it with needed resources. Bottom-up testing often works well in less structured shops because there is less dependency on the availability of other resources to accomplish the test. It is a more intuitive approach to testing that also usually finds errors in critical routines earlier than the top-down method. However, in a new system many modules must be integrated to produce system-level behavior, thus interface errors surface late in the process.

Top-down testing fits a prototyping environment that establishes an initial skeleton into which individual modules are filled in as they are completed. The method lends itself to more structured organizations that plan out the entire test process. Although interface errors are found earlier, errors in critical low-level modules can be found later than you would like.

System Testing
The system test phase begins once modules are integrated enough to perform tests in a whole system environment. System testing can occur in parallel with integration testing, especially with the top-down method.

Performance / Stress Testing
An important phase of system testing, often called load, volume or performance testing, stress testing tries to determine the failure point of a system under extreme pressure. Stress tests are most useful when systems are being scaled up to larger environments or being implemented for the first time. Web sites, like any other large-scale system that requires multiple accesses and processing, contain vulnerable nodes that should be tested before deployment.
Unfortunately, most stress testing can only simulate loads on various points of the system and cannot truly stress the entire network as the users would experience it. Fortunately, once stress and load factors have been successfully overcome, it is only necessary to stress test again if major changes take place. A drawback of performance testing is that it confirms the system can handle heavy loads, but cannot so easily determine if the system is producing the correct information.

Regression Testing
Regression tests confirm that implementation of changes has not adversely affected other functions. Regression testing is a type of test, as opposed to a phase in testing. Regression tests apply at all phases, whenever a change is made.

Quality Assurance Testing
Some organizations maintain a quality group that provides a different point of view, uses a different set of tests, and applies the tests in a different, more complete test environment. The group might look to see that organization standards have been followed in the specification, coding and documentation of the software. They might check to see that the original requirement is documented, verify that the software properly implements the required functions, and see that everything is ready for the users to take a crack at it.

User Acceptance Testing and Installation Testing
Traditionally, this is where the users 'get their first crack' at the software. Unfortunately, by this time, it's usually too late. If the users have not seen prototypes, been involved with the design, and understood the evolution of the system, they are inevitably going to be unhappy with the result. If one can perform every test as a user acceptance test, there is a much better chance of a successful project.
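The unit-testing level described above can be made concrete with Python's standard unittest module. The is_leap_year function below is an invented stand-in for a module under test, not something from the original text:

```python
import unittest

def is_leap_year(year):
    """Invented unit under test: the Gregorian leap-year rule."""
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

class LeapYearTest(unittest.TestCase):
    # Each test exercises the module individually, one branch of the
    # logic at a time, combining structural and functional checks as
    # described in the unit-testing section above.
    def test_ordinary_leap_year(self):
        self.assertTrue(is_leap_year(2024))

    def test_century_is_not_leap(self):
        self.assertFalse(is_leap_year(1900))

    def test_four_hundred_year_exception(self):
        self.assertTrue(is_leap_year(2000))

    def test_common_year(self):
        self.assertFalse(is_leap_year(2023))

if __name__ == "__main__":
    unittest.main(exit=False)
```

Because each test method passes or fails independently, a failure points directly at one unit and one branch, which is what makes this level of testing cheap to diagnose compared to system-level failures.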

Types of Testing Techniques

White Box Testing Technique
White box testing examines the basic program structure and derives the test data from the program logic, ensuring that all statements and conditions have been executed at least once. White box tests verify that the software design is valid and also whether it was built according to the specified design. Different methods used are:
Statement coverage - executes all statements at least once (each and every line).
Decision coverage - executes each decision direction at least once.
Condition coverage - executes each and every condition in the program with all possible outcomes at least once.

Black Box Testing Technique
The black-box test technique treats the system as a "black box", so it doesn't explicitly use knowledge of the internal structure. Black-box test design is usually described as focusing on testing functional requirements. Synonyms for black box include: behavioral, functional, opaque-box, and closed-box. Black box testing is conducted on integrated, functional components whose design integrity has been verified through completion of traceable white box tests. Black box testing traces the requirements, focusing on system externals. It validates that the software meets the requirements, irrespective of the paths of execution taken to meet each requirement. Three successful techniques for managing the amount of input data required include:
• Equivalence Partitioning
• Boundary Analysis
• Error Guessing

Equivalence Partitioning:
Equivalence partitioning is the process of methodically reducing the huge (infinite) set of possible test cases into a much smaller, but still equally effective, set. An equivalence class is a subset of data that is representative of a larger class.
Equivalence partitioning is a technique for testing equivalence classes rather than undertaking exhaustive testing of each value of the larger class. When looking for equivalence partitions, think about ways to group similar inputs, similar outputs, and similar operations of the software. These groups are the equivalence partitions.

For example, a program that edits credit limits within a given range ($20,000-$50,000) would have three equivalence classes:
Less than $20,000 (invalid)
Between $20,000 and $50,000 (valid)
Greater than $50,000 (invalid)
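The three credit-limit classes above can be exercised with one representative value each, instead of exhaustively testing every possible amount. The classify_credit_limit function is hypothetical, standing in for the program that edits credit limits:

```python
# Hypothetical function under test: classifies a requested credit limit
# against the $20,000-$50,000 range from the example above.
def classify_credit_limit(amount):
    if amount < 20_000:
        return "invalid-low"
    if amount > 50_000:
        return "invalid-high"
    return "valid"

# One representative value per equivalence class stands in for the whole
# (effectively infinite) class; any other member should behave the same way.
representatives = {
    10_000: "invalid-low",    # class: less than $20,000
    35_000: "valid",          # class: between $20,000 and $50,000
    75_000: "invalid-high",   # class: greater than $50,000
}

for amount, expected in representatives.items():
    assert classify_credit_limit(amount) == expected
```

Three test cases replace millions of possible dollar amounts; if the partitioning is sound, any other value in a class would give the same verdict as its representative.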

Boundary value analysis:
If one can safely and confidently walk along the edge of a cliff without falling off, he can almost certainly walk in the middle of a field. If software can operate on the edge of its capabilities, it will almost certainly operate well under normal conditions. This technique consists of developing test cases and data that focus on the input and output boundaries of a given function. In the same credit limit example, boundary analysis would test:
Low boundary plus or minus one ($19,999 and $20,001)
On the boundary ($20,000 and $50,000)
Upper boundary plus or minus one ($49,999 and $50,001)

Error Guessing
This is based on the theory that test cases can be developed based upon the intuition and experience of the test engineer. For example, where one of the inputs is a date, a tester may try February 29, 2000 or 9.9.99.

Incremental testing
Incremental testing is a disciplined method of testing the interfaces between unit-tested programs as well as between system components. It involves adding unit-tested programs to a given module or component one by one, and testing each result and combination. There are two types of incremental testing:
Top-down: This begins testing from the top of the module hierarchy and works down to the bottom, using interim stubs to simulate lower interfacing modules or programs. Modules are added in descending hierarchical order.
Bottom-up: This begins testing from the bottom of the hierarchy and works up to the top. Modules are added in ascending hierarchical order. Bottom-up testing requires the development of driver modules, which provide the test input, call the module or program being tested, and display the test output.

There are procedures and constraints associated with each of these methods, although bottom-up testing is often thought to be easier to use. Drivers are often easier to create than stubs, and can serve multiple purposes.
Output is also often easier to examine in bottom-up testing, as the output always comes from the module directly above the module under test.

Thread testing
This test technique, which is often used during early integration testing, demonstrates key functional capabilities by testing a string of units that accomplish a specific function in the application. Thread testing and incremental testing are usually utilized together. For example, units can undergo incremental testing until enough units are integrated and a single business function can be performed, threading through the integrated components.
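The driver and stub roles described above for bottom-up and top-down incremental testing can be sketched as follows; all module names and the discount rule are invented for illustration:

```python
# Bottom-up incremental testing: the driver below stands in for a higher-level
# module that has not been written yet. It supplies the test input, calls the
# module under test, and displays the test output, exactly the three jobs
# the text assigns to a driver module. All names are invented.

def compute_discount(order_total):
    """Low-level module under test: 10% off orders of $100 or more."""
    return round(order_total * 0.10, 2) if order_total >= 100 else 0.0

def driver():
    """Driver routine: provide test input, call the module, display output."""
    for order_total in (50.0, 100.0, 250.0):
        print(f"order {order_total:7.2f} -> discount "
              f"{compute_discount(order_total):6.2f}")

def pricing_service_stub(order_total):
    """Interim stub, as used in top-down testing: a canned response that
    simulates a lower interfacing module that does not exist yet."""
    return 0.0

driver()
```

The asymmetry the text notes is visible here: the driver is just a loop over inputs, while a realistic stub has to fake plausible behavior for every way its callers might use it.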

Testing Life Cycle

Test Plan Preparation
The software test plan is the primary means by which software testers communicate to the product development team what they intend to do. The purpose of the software test plan is to prescribe the scope, approach, resources, and schedule of the testing activities: to identify the items being tested, the features to be tested, the testing tasks to be performed, the personnel responsible for each task, and the risks associated with the plan. The test plan is simply a by-product of the detailed planning process that's undertaken to create it. It's the planning that matters, not the resulting documents. The ultimate goal of the test planning process is communicating the software test team's intent, its expectations, and its understanding of the testing that's to be performed. The following are the important topics which help in the preparation of the test plan.

• High-Level Expectations
The first topics to address in the planning process are the ones that define the test team's high-level expectations. They are fundamental topics that must be agreed to by everyone on the project team, but they are often overlooked. They might be considered "too obvious" and assumed to be understood by everyone, but a good tester knows never to assume anything.

• People, Places and Things
The test plan needs to identify the people working on the project, what they do, and how to contact them. The test team will likely work with all of them, and knowing who they are and how to contact them is very important. Similarly, where documents are stored, where the software can be downloaded from, where the test tools are located, and so on need to be identified.

• Inter-Group Responsibilities
Inter-group responsibilities identify tasks and deliverables that potentially affect the test effort. The test team's work is driven by many other functional groups: programmers, project managers, technical writers, and so on.
If these responsibilities aren't planned out, the project, and specifically the testing, can descend into chaos, with important tasks being forgotten.
- Test Phases: To plan the test phases, the test team will look at the proposed development model and decide whether unique phases, or stages, of testing should be performed over the course of the project. The test planning process should identify each proposed test phase and make each phase known to the project team. This process often helps the entire team form and understand the overall development model.
- Test Strategy: The test strategy describes the approach that the test team will use to test the software, both overall and in each phase. Deciding on the strategy is a complex task, one that needs to be made by very experienced testers, because it can determine the success or failure of the test effort.

- Bug Reporting: Exactly what process will be used to manage the bugs needs to be planned, so that each and every bug is tracked from when it's found to when it's fixed, and never, ever forgotten.
- Metrics and Statistics: Metrics and statistics are the means by which the progress and the success of the project, and of the testing, are tracked. The test planning process should identify exactly what information will be gathered, what decisions will be made with it, and who will be responsible for collecting it.
- Risks and Issues: A common and very useful part of test planning is to identify potential problem or risky areas of the project, ones that could have an impact on the test effort.

Test Case Design
The test case design specification refines the test approach and identifies the features to be covered by the design and its associated tests. It also identifies the test cases and test procedures, if any, required to accomplish the testing, and specifies the feature pass or fail criteria. The purpose of the test design specification is to organize and describe the testing that needs to be performed on a specific feature. The following topics address this purpose and should be part of the test design specification:
- Test Case ID or Identification: A unique identifier that can be used to reference and locate the test design specification. The specification should also reference the overall test plan and contain pointers to any other plans or specifications that it references.
- Test Case Description: A description of the software feature covered by the test design specification, for example "the addition function of calculator," "font size selection and display in WordPad," or "video card configuration testing of QuickTime."
- Test Case Procedure: A description of the general approach that will be used to test the features.
It should expand on the approach, if any, listed in the test plan, describe the technique to be used, and explain how the results will be verified.
- Test Case Input or Test Data: The input data to be used by the test case. The input may be in any form. Different inputs can be tried for the same test case to verify that each is handled correctly.
- Expected Result: Describes exactly what constitutes a pass and a fail of the tested feature: what is expected to result from the given input.

Test Execution and Test Log Preparation

After test case design, each test case is executed and the actual result obtained. The actual result is then compared with the expected result recorded at the design stage; if the actual and expected results match, the test is passed, otherwise it is treated as failed. The test log is then prepared, recording the outcome of each and every test case so that it can be referred to at revision time. Example:

Test Case ID   Test Case Description       Test Status/Result
Sys_xyz_01     Checking the login window   Fail
Sys_xyz_02     Checking the main window    Pass
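The execute-compare-log cycle described above can be sketched as follows. The feature under test (a toy login check), the case IDs, and the field names are all hypothetical, chosen only to mirror the example table:

```python
def execute_and_log(test_cases, run_feature):
    """Run each test case, compare actual vs expected, and record a log row.

    test_cases:  iterable of (case_id, description, test_input, expected)
    run_feature: callable that produces the actual result for test_input
    """
    log = []
    for case_id, description, test_input, expected in test_cases:
        actual = run_feature(test_input)
        status = "Pass" if actual == expected else "Fail"
        log.append({"id": case_id, "description": description,
                    "expected": expected, "actual": actual, "status": status})
    return log

# Hypothetical feature under test: a login screen that accepts only "admin".
cases = [
    ("Sys_xyz_01", "Checking the login window", "guest", "accepted"),
    ("Sys_xyz_02", "Checking the main window", "admin", "accepted"),
]
log = execute_and_log(
    cases, lambda user: "accepted" if user == "admin" else "rejected")
for row in log:
    print(row["id"], row["status"])
```

The first case fails (a guest login is rejected when acceptance was expected) and the second passes, matching the Fail/Pass rows in the example log above.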

Defect Tracking

A defect can be defined in one of two ways. From the producer's viewpoint, a defect is a deviation from specifications, whether something is missing, wrong, etc. From the customer's viewpoint, a defect is anything that causes customer dissatisfaction, whether in the requirements or not; this is known as "fit for use". It is critical that defects identified at each stage of the project life cycle be tracked to resolution. Defects are recorded for the following major purposes:
- To correct the defect
- To report the status of the application
- To gather statistics used to develop defect expectations in future applications
- To improve the software development process
Most project teams utilize some type of tool to support the defect tracking process. This tool could be as simple as a whiteboard, or a table created and maintained in a word processor, or one of the more robust tools available on the market today, such as Mercury's TestDirector. Tools marketed for this purpose usually come with a number of customizable fields for tracking project-specific data in addition to the basics. They also provide advanced features such as standard and ad hoc reporting, e-mail notification to developers and/or testers when a problem is assigned to them, and graphing capabilities.

At a minimum, the tool selected should support the recording and communication of significant information about a defect. For example, a defect log could include:

- Defect ID number
- Descriptive defect name and type
- Source of defect: test case or other source
- Defect severity
- Defect priority
- Defect status (e.g. open, fixed, closed, user error, design, and so on); more robust tools provide a status history for the defect
- Date and time tracking, for either the most recent status change or for each change in the status history
- Detailed description, including the steps necessary to reproduce the defect
- Component or program where the defect was found
- Screen prints, logs, etc. that will aid the developer in the resolution process
- Stage of origination
- Person assigned to research and/or correct the defect

Severity versus Priority
The severity of a defect should be assigned objectively by the test team based on predefined severity descriptions. For example, a "severity one" defect may be defined as one that causes data corruption, a system crash, security violations, etc. In a large project, it may also be necessary to assign a priority to the defect, which determines the order in which defects should be fixed. The priority assigned to a defect is usually more subjective, based upon input from users regarding which defects are most important to them and therefore should be fixed first. It is recommended that severity levels be defined at the start of the project so that they are consistently assigned and understood by the team. This foresight can help test teams avoid the common disagreements with development teams about the criticality of a defect.

Some general principles:
- The primary goal is to prevent defects. Wherever this is not possible or practical, the goals are to find the defect as quickly as possible and to minimize its impact.
- The defect management process, like the entire software development process, should be risk driven, i.e., strategies, priorities and resources should be based on an assessment of the risk and the degree to which the expected impact of the risk can be reduced.
- Defect measurement should be integrated into the development process and used by the project team to improve the development process. In other words, information on defects should be captured at the source, as a natural by-product of doing the job; it should not be done by people unrelated to the project or system.
- As much as possible, the capture and analysis of the information should be automated. A document should list the tools that have defect management capabilities and can be used to automate some of the defect management processes.
- Defect information should be used to improve the process. This, in fact, is the primary reason for gathering defect information.

- Imperfect or flawed processes cause most defects. Thus, to prevent defects, the process must be altered.

The Defect Management Process
The key elements of a defect management process are as follows:
- Defect prevention
- Deliverable base-lining
- Defect discovery/defect naming
- Defect resolution
- Process improvement
- Management reporting
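The defect-log fields listed earlier can be sketched as a record that keeps a status history, in the spirit of the more robust tools mentioned above. The field names and sample values are illustrative, not any particular tool's schema:

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class Defect:
    """A minimal defect-log entry; fields mirror the list above."""
    defect_id: int
    name: str
    severity: int             # assigned objectively from predefined descriptions
    priority: int             # more subjective: the order in which fixes happen
    component: str
    steps_to_reproduce: str
    assigned_to: str = ""
    status: str = "open"
    history: list = field(default_factory=list)  # (timestamp, previous status)

    def set_status(self, new_status: str) -> None:
        """Record each status change so a full history is kept."""
        self.history.append((datetime.now(), self.status))
        self.status = new_status

bug = Defect(1, "Crash on save", severity=1, priority=1,
             component="Editor", steps_to_reproduce="Open a file, press Save")
bug.set_status("fixed")
bug.set_status("closed")
print(bug.status, len(bug.history))  # closed 2
```

Keeping the history as (timestamp, previous status) pairs supports the date-and-time tracking item above: either the latest change or every change can be reported from it.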

Test Reports
A final test report should be prepared at the conclusion of each test activity. This might include:
- Individual Project Test Report (e.g., for a single software system)
- Integration Test Report
- System Test Report
- Acceptance Test Report
The test reports are designed to document the results of testing as defined in the test plan. Without a well-developed test plan that has been executed in accordance with its criteria, it is difficult to develop a meaningful test report. A test report is designed to accomplish three objectives:
- Define the scope of testing, normally a brief recap of the test plan;
- Present the results of testing; and
- Draw conclusions and make recommendations based on those results.
The test report may be a combination of electronic data and hard copy. For example, if the function test matrix is maintained electronically, there is no reason to print it, as the paper report will summarize that data, draw the appropriate conclusions, and present recommendations. The test report has one immediate and three long-term purposes. The immediate purpose is to provide information to the customers of the software system so that they can determine whether the system is ready for production and, if so, to assess the potential consequences and initiate appropriate actions to minimize those consequences. The first of the three long-term uses is for the project to trace problems in the event the application malfunctions in production. Knowing which functions have been correctly tested and which ones still contain defects can assist in taking corrective action.

The second long-term purpose is to use the data to analyze the rework process for making changes that prevent defects from occurring in the future. This is done by accumulating the results of many test reports to identify which components of the rework process are defect-prone. These defect-prone components identify tasks/steps that, if improved, could eliminate or minimize the occurrence of high-frequency defects. The third long-term purpose is to show what was accomplished.

Individual Project Test Report
These reports focus on individual projects (e.g., a single software system). When different testers test individual projects, each should prepare a report on their results.

Integration Test Report
Integration testing tests the interfaces between individual projects. A good test plan will identify the interfaces and institute test conditions that will validate them. Given this, the integration report follows the same format as the Individual Project Test Report, except that the conditions tested are the interfaces.

System Test Report
A system test plan standard identifies the objectives of testing, what is to be tested, how it is to be tested, and when tests should occur. The System Test Report should present the results of executing that test plan. If this is maintained electronically, it need only be referenced, not included in the report.

Acceptance Test Report
There are two primary objectives for this testing. The first is to ensure that the system as implemented meets the real operating needs of the user or customer. If the defined requirements are those true needs, the testing should have accomplished this objective. The second objective is to ensure that the software system can operate in the real-world user environment, which includes people skills and attitudes, time pressures, changing business conditions, and so forth.

Eight Interim Reports:
1. Functional Testing Status
2. Functions Working Timeline
3. Expected versus Actual Defects Detected Timeline
4. Defects Detected versus Corrected Gap Timeline
5. Average Age of Detected Defects by Type
6. Defect Distribution
7. Relative Defect Distribution
8. Testing Action

Functional Testing Status Report
This report shows the percentages of the functions which have been:
- Fully tested
- Tested with open defects
- Not tested

Functions Working Timeline Report
This report shows the plan for having all functions working versus the current status of functions working. An ideal format is a line graph.
Expected versus Actual Defects Detected Report
This report provides an analysis of the number of defects being generated against the number of defects expected from the planning stage.
Defects Detected versus Corrected Gap Report
This report, ideally in a line graph format, shows the number of defects uncovered versus the number of defects corrected and accepted by the testing group. If the gap grows too large, the project may not be ready when originally planned.

Average Age of Detected Defects by Type Report
This report shows the average age of outstanding defects by type (severity 1, severity 2, etc.). In the planning stage, it is beneficial to determine the acceptable number of open days by defect type.
Defect Distribution Report
This report shows the defect distribution by function or module. It can also include items such as the number of tests completed.
Relative Defect Distribution Report
This report takes the previous report (Defect Distribution) and normalizes the level of defects. For example, one application might be more in-depth than another, and would probably have a higher level of defects; however, when normalized over the number of functions or lines of code, it would show a more accurate level of defects.
Testing Action Report
This report can show many different things, including possible shortfalls in testing. Examples of data to show might be the number of high-severity defects, tests that are behind schedule, and other information that presents an accurate testing picture.

Software Metrics
Effective management of any process requires quantification, measurement, and modeling. Software metrics provide a quantitative basis for the development and validation of models of the software development process. Metrics can be used to improve software productivity and quality. This module introduces the most commonly used software metrics and reviews their use in constructing models of the software development process.

Definition of Software Metrics
A metric is a mathematical number that shows a relationship between two variables. It is a quantitative measure of the degree to which a system, component or process possesses a given attribute. Software metrics are measures used to quantify the software, the software development resources, and the software development process. Metrics are generally classified into two types:
- Process metric: a metric used to measure the characteristics of the methods, techniques and tools employed in developing, implementing and maintaining the software system.
- Product metric: a metric used to measure the characteristics of the documentation and code.
The metrics for the test process would include the status of test activities against the plan and the test coverage achieved so far, among others. An important metric is the number of defects found in internal testing compared to the defects found in customer tests, which indicates the effectiveness of the test process itself.

Test Metrics
The following metrics are collected in the testing process:
- User Participation = User participation test time / Total test time
- Paths Tested = Number of paths tested / Total number of paths
- Acceptance Criteria Tested = Acceptance criteria verified / Total acceptance criteria
- Cost to Locate Defect = Total test cost / Number of defects located in testing (this metric shows the cost to locate a defect)
- Detected Production Defects = Number of defects detected in production / Application system size
- Test Automation = Cost of manual test effort / Total test cost

Other Testing Terms

Usability Testing
Determines how well the user will be able to understand and interact with the system. It identifies areas of poor human-factors design that may make the system difficult to use. Ideally, this test is conducted on a system prototype, before development actually begins. If a navigational or operational prototype is not available, screen prints of all of the application's screens or windows can be used to walk the user through various business scenarios.
Conversion Testing
Specifically designed to validate the effectiveness of the conversion process. This test may be conducted jointly by developers and testers during integration testing, or at the start of system testing, since system testing must be conducted with the converted data. Field-to-field mapping and data translation are validated, ideally using a full copy of production data in the test.
Vendor Validation Testing
Verifies that the functionality of contracted or third-party software meets the organization's requirements, prior to accepting it and installing it into a production environment. This test can be conducted jointly by the software vendor and the test team, and focuses on ensuring that all requested functionality has been delivered.
Stress/Load Testing
Conducted to validate that the application, database, and network can handle projected volumes of users and data effectively. The test is conducted jointly by developers, testers, DBAs and network associates after system testing. During the test, the complete system is subjected to environmental conditions beyond normal expectations, to answer questions such as:
- How large can the database grow before performance degrades?
- At what point will more storage space be required?
- How many users can use the system simultaneously before it slows down or fails?
Performance Testing
Usually conducted in parallel with stress and load testing in order to measure performance against specified service-level objectives under various conditions. For instance, one may need to ensure that batch processing will complete within the allocated amount of time, or that on-line response times meet performance requirements.

Recovery Testing
Evaluates the contingency features built into the application for handling interruptions and for returning to specific points in the application processing cycle. Any restoration and restart capabilities are also tested here. This test may be conducted by the test team during system test, or by another team gathered specifically for this purpose.
Configuration Testing
In the IT industry, a large percentage of new applications are either client/server or web-based, so it must be validated that they will run on the various combinations of hardware and software. For instance, configuration testing for a web-based application would incorporate versions and releases of operating systems, internet browsers, modem speeds, and various off-the-shelf applications that might be integrated (e.g. an e-mail application).

Benefits Realization Test
With the increased focus on the value of the business returns obtained from investments in information technology, this type of test or analysis is becoming more critical. The Benefits Realization Test is a test or analysis conducted after an application is moved into production, in order to determine whether the application is likely to deliver the originally projected benefits. The analysis is usually conducted by the business user or client group who requested the project, and the results are reported back to executive management.

Test Standards

External Standards: familiarity with and adoption of industry test standards from external organizations.
Internal Standards: development and enforcement of the test standards that testers must meet.
IEEE (Institute of Electrical and Electronics Engineers)
- Founded in 1884
- Has an entire set of standards devoted to software
- Testers should be familiar with the relevant IEEE standards.
IEEE standards that a tester should be aware of:
1. 610.12-1990 IEEE Standard Glossary of Software Engineering Terminology

2. 730-1998 IEEE Standard for Software Quality Assurance Plans
3. 828-1998 IEEE Standard for Software Configuration Management Plans
4. 829-1998 IEEE Standard for Software Test Documentation
5. 830-1998 IEEE Recommended Practice for Software Requirements Specifications
6. 1008-1987 (R1993) IEEE Standard for Software Unit Testing (ANSI)

7. 1012-1998 IEEE Standard for Software Verification and Validation
8. 1012a-1998 IEEE Standard for Software Verification and Validation: Supplement to 1012-1998, Content Map to IEEE 12207.1
9. 1016-1998 IEEE Recommended Practice for Software Design Descriptions
10. 1028-1997 IEEE Standard for Software Reviews
11. 1044-1993 IEEE Standard Classification for Software Anomalies
12. 1045-1992 IEEE Standard for Software Productivity Metrics (ANSI)
13. 1058-1998 IEEE Standard for Software Project Management Plans
14. 1058.1-1987 IEEE Standard for Software Project Management Plans
15. 1061-1998 IEEE Standard for a Software Quality Metrics Methodology

Other Standards:
- ISO: International Organization for Standardization
- SPICE: Software Process Improvement and Capability Determination
- NIST: National Institute of Standards and Technology
- DoD: Department of Defense

The use of standards:
- Simplifies communication
- Promotes consistency and uniformity
- Eliminates the need to invent yet another solution to the same problem
- Provides continuity
- Presents a way of preserving proven practices
- Supplies benchmarks and frameworks

Web Testing

Introduction
Web testing is mainly concerned with six areas:
- Usability
- Functionality
- Server-side interface
- Client-side compatibility
- Performance
- Security

Usability
One of the reasons the web browser is used as the front end to applications is ease of use. Users who have been on the web before will probably know how to navigate a well-built web site. While concentrating on this portion of testing, it is important to verify that the application is easy to use. Many believe that this is the least important area to test, but the site should always be easy to use. Even if the web site is simple, there will always be someone who needs clarification. Additionally, the documentation needs to be verified, so that the instructions are correct. The following are some of the things to be checked for easy navigation through a website:
- Site map or navigational bar: Does the site have a map? Sometimes power users know exactly where they want to go and don't want to go through lengthy introductions; or new users get lost easily. Either way, a site map and/or an ever-present navigational bar can guide the user. The site map needs to be verified for its correctness. Does each link on the map actually exist? Are there links on the site that are not represented on the map? Is the navigational bar present on every screen? Is it consistent? Does each link work on each page? Is it organized in an intuitive manner?
- Content: To a developer, functionality comes before wording. Anyone can slap together some fancy mission statement later, but while they are developing, they just need some filler to verify alignment and layout. Unfortunately, text produced like this may sneak through the cracks. It is important to check with the public relations department on the exact wording of the content; otherwise, the company can get into a lot of trouble legally. One also has to make sure the site looks professional.
Overuse of bold text, big fonts and blinking can turn away a customer quickly. It might be a good idea to have a graphic designer look over the site during User Acceptance Testing. Finally, one has to make sure that any time a web reference is given, it is hyperlinked. Plenty of sites ask users to email them at a specific address or to download a browser from an address, but if the user can't click on it, they are going to be annoyed.
- Colors/backgrounds: Ever since the web became popular, everyone thinks they are a graphic designer. Unfortunately, some developers are more interested in their new backgrounds than in ease of use. Sites will have yellow text on a purple picture of a fractal pattern. This may seem "pretty neat", but it's not easy to use. Usually, the best idea is to use little or no background. If there is a background, it might be a single color on the left side of the page, containing the navigational bar. Patterns and pictures distract the user.

- Images: Whether it's a screen grab or a little icon that points the way, a picture is worth a thousand words. Sometimes, the best way to tell the user something is to simply show them. However, bandwidth is precious to the client and the server, so memory usage needs to be conserved. Do all the images add value to each page, or do they simply waste bandwidth? Can a different file type (.GIF, .JPG) be used for 30k less? In general, one doesn't want large pictures on the front page, since most users who abandon a page load will do it on the front page. If the front page loads quickly, it increases the chance they will stay.

- Tables: It has to be verified that tables are set up properly. Does the user constantly have to scroll right to see the price of the item? Would it be more efficient to put the price closer to the left and put minuscule details to the right? Are the columns wide enough, or does every row have to wrap around? Are certain rows excessively high because of one entry? These are some of the points to be taken care of.
- Wrap-around: Finally, it has to be verified that wrap-around occurs properly. If the text refers to a picture on the right, make sure the picture is on the right. Make sure that widow and orphan sentences and paragraphs don't lay out in an awkward manner because of pictures.

Functionality
The functionality of the web site is why the company hired a developer and not just an artist. This is the part that interfaces with the server and actually "does stuff".
- Links: A link is the vehicle that gets the user from page to page. Two things have to be verified for each link: that it brings the user to the page it said it would, and that the page it links to exists. It may sound a little silly, but many web sites exist with internal broken links.
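An internal broken-link check like the one just described can be sketched with the standard-library HTML parser. Here `existing_pages` is a stand-in set of known paths; in a real site this would be an HTTP existence check, which is an assumption of the sketch:

```python
from html.parser import HTMLParser

class LinkCollector(HTMLParser):
    """Collect every href so each link can be checked for existence."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def find_broken_links(page_html, existing_pages):
    """Return internal links that point at pages not in `existing_pages`."""
    collector = LinkCollector()
    collector.feed(page_html)
    internal = [l for l in collector.links if not l.startswith("http")]
    return [l for l in internal if l not in existing_pages]

page = '<a href="/home.html">Home</a> <a href="/oops.html">Oops</a>'
print(find_broken_links(page, {"/home.html", "/index.html"}))  # ['/oops.html']
```

Run against every page of the site, this style of check catches the internal broken links mentioned above; external (`http...`) links would need a separate reachability check.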

- Forms: When a user submits information through a form, it needs to work properly. The submit button needs to work. If the form is for an online registration, the user should be given login information (that works) after successful completion. If the form gathers shipping information, it should be handled properly and the customer should receive their package. In order to test this, you need to verify that the server stores the information properly and that systems down the line can interpret and use that information.
- Data verification: If the system verifies user input according to business rules, then that needs to work properly. For example, a State field may be checked against a list of valid values. If this is the case, you need to verify that the list is complete and that the program actually calls the list properly (add a bogus value to the list and make sure the system accepts it).
- Cookies: Most users only like the kind with sugar, but developers love web cookies. If the system uses them, you need to check them. If they store login information, make sure the cookies work and make sure the information is encrypted in the cookie file. If the cookie is used for statistics, verify that totals are being counted properly. And you'll probably want to make sure those cookies are encrypted too; otherwise people can edit their cookies and skew your statistics.
- Application-specific functional requirements: Most importantly, one has to verify the application-specific functional requirements. Try to perform all functions a user would: place an order, change an order, cancel an order, check the status of the order, change shipping information before an order is shipped, pay online, ad nauseam. This is why users will show up on the developer's doorstep, so one needs to make sure that they can do what is advertised.
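The State-field business rule described above can be sketched as follows. The list of valid values is an illustrative subset, and a tester would probe both sides of the rule:

```python
VALID_STATES = {"CA", "NY", "TX"}  # illustrative subset, not a complete list

def validate_state(value):
    """Business-rule check: the State field must match a known valid value."""
    return value.strip().upper() in VALID_STATES

# A tester exercises both an accepting and a rejecting case:
assert validate_state(" ny ")      # valid entry accepted (whitespace/case tolerated)
assert not validate_state("ZZ")    # bogus entry rejected
```

The same pattern (a lookup set plus a normalizing check) applies to any field validated against a list of values; verifying that the program actually consults the list is the point of the test.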

Server-side Interface
Many times, a web site is not an island. The site will call external servers for additional data, verification of data or fulfillment of orders.
- Server interface: The first interface to test is the interface between the browser and the server. Transactions should be attempted, then the server logs viewed, verifying that what is seen in the browser is actually happening on the server. It's also a good idea to run queries on the database to make sure the transaction data is being stored properly.

- External interfaces: Some web systems have external interfaces. For example, a merchant might verify credit card transactions in real time in order to reduce fraud. Several test transactions may have to be sent using the web interface. Try credit cards that are valid, invalid, and stolen. If the merchant only takes Visa and MasterCard, try using a Discover card. (A simple client-side script can check for a leading 3 for American Express, 4 for Visa, 5 for MasterCard, or 6 for Discover, before the transaction is sent.) Basically, it has to be ensured that the software can handle every possible message returned by the external server.
- Error handling: One of the areas most often left untested is interface error handling. Usually we try to make sure our own system can handle all our errors, but we never plan for the other systems' errors or for the unexpected. Try leaving the site mid-transaction: what happens? Does the order complete anyway? Try losing the Internet connection from the user to the server. Try losing the connection from the server to the credit card verification server. Is there proper error handling for all these situations? Are charges still made to credit cards? If the interruption is not user initiated, does the order get stored so customer service reps can call back if the user doesn't come back to the site?
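The client-side card-prefix check mentioned above (3 for American Express, 4 for Visa, 5 for MasterCard, 6 for Discover) can be sketched like this. The card numbers shown are well-known test numbers, not real accounts, and a prefix check is only a pre-screen, not a substitute for server-side verification:

```python
def card_type(card_number: str) -> str:
    """Map the leading digit to a card network, per the rule described above."""
    prefixes = {"3": "American Express", "4": "Visa",
                "5": "MasterCard", "6": "Discover"}
    return prefixes.get(card_number.strip()[:1], "Unknown")

def accepted(card_number, merchant_accepts=("Visa", "MasterCard")):
    """Client-side pre-check before the transaction is sent to the server."""
    return card_type(card_number) in merchant_accepts

print(card_type("4111111111111111"))   # Visa
print(accepted("6011000000000004"))    # False -- Discover is not accepted here
```

A tester would feed each network's test number through this check, plus malformed input, to confirm that only the merchant's accepted card types reach the external verification server.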

Client-side Compatibility
It has to be verified that the application can work on the machines the customers will be using. If the product is going to the web for the world to use, every operating system, browser, video setting and modem speed has to be tried in various combinations.
- Operating systems: Does the site work for both Mac and IBM compatibles? Some fonts are not available on both systems, so make sure that secondary fonts are selected. Make sure that the site doesn't use plug-ins available for only one OS, if users are on both.
- Browsers: Does the site work with Netscape? Internet Explorer? Lynx? Some HTML commands or scripts only work for certain browsers. Make sure there are alternate tags for images, in case someone is using a text browser. If SSL security is used, it has to be checked that it works in browsers version 3.0 and higher, and it has to be verified that there is a message for those using older browsers.
- Video settings: Does the layout still look good at 640x400 or 600x800? Are fonts too small to read? Are they too big? Does all the text and graphic alignment still work?

- Modem/connection speeds: Does it take 10 minutes to load a page with a 28.8 modem, when it has only been tested over high-speed connections? Users will expect long download times when they are grabbing documents or demos, but not on the front page. It has to be ensured that the images aren't too large. Make sure that marketing doesn't put 50k of size -6 font keywords for search engines.
- Printers: Users like to print. The concept behind the web should save paper and reduce printing, but most people would rather read on paper than on the screen. So, it needs to be verified that the pages print properly. Sometimes images and text align on the screen differently than on the printed page. It has to be verified that order confirmation screens can be printed properly.
- Combinations: Different combinations have to be tried. Maybe 600x800 looks good on the Mac but not on the IBM. Maybe IBM with Netscape works, but not with Lynx. If the web site will be used internally, it might make testing a little easier. If the company has an official web browser choice, then it has to be verified that the site works for that browser. If everyone has a high-speed connection, load times need not be checked. (But it has to be kept in mind that some people may dial in from home.) With internal applications, the development team can make disclaimers about system requirements and only support those system setups. But, ideally, the site should work on all machines, so as not to limit growth and changes in the future.

Performance Testing
It needs to be verified that the system can handle a large number of users at the same time, a large amount of data from each user, and a long period of continuous use. Accessibility is extremely important to users: if they get a "busy signal", they hang up and call the competition.
Not only must the system be checked so the customers can gain access, but many times hackers will attempt to gain access to a system by overloading it. For the sake of security, the system needs to know what to do when it is overloaded, not simply blow up.
• Concurrent users at the same time: If the site has just put up the results of a national lottery, it had better be able to handle millions of users right after the winning numbers are posted. A load test tool would be able to simulate concurrent users accessing the site at the same time.
• Large amount of data from each user: Most customers may only order 1-5 books from your new online bookstore, but what if a university bookstore decides to order 5000 copies of Intro to Psychology? Or what if one user wants to send a gift to a large number of his/her friends for Christmas (separate mailing addresses for each, of course)? Can the system handle large amounts of data from a single user?

• Long period of continuous use

If the site is intended to take orders for a specific occasion, then it had better stay up well before and through that occasion. If the site offers web-based email, it had better be able to run for months or even years without downtime. It will probably be necessary to use an automated test tool to implement these types of tests, since they are difficult to do manually. Imagine coordinating 100 people to hit the site at the same time. Now try 100,000 people. Generally, the tool will pay for itself the second time you use it. Once the tool is set up, running another test is just a click away.

Security
Even if credit card payments are not accepted, security is very important. The web site will be the only exposure some customers have to a company. And, if that exposure is a hacked page, the customers won't feel safe doing business with the company over the internet.
• Directory setup: The most elementary step of web security is proper setup of directories. Each directory should have an index.html or main.html page so a directory listing doesn't appear.
• SSL (Secured Socket Layer): Many sites use SSL for secure transactions. On entering an SSL site, there will be a browser warning, and the HTTP in the location field on the browser will change to HTTPS. If the development group uses SSL, it has to be ensured that there is an alternate page for browsers with versions less than 3.0, since SSL is not compatible with those browsers. Sufficient warnings on entering and leaving the secured site have to be provided. It also needs to be checked whether there is a time-out limit, and what happens if the user tries a transaction after the timeout.
• Logins: In order to validate users, several sites require customers to log in. This makes it easier for the customer, since they don't have to re-enter personal information every time. You need to verify that the system does not allow invalid usernames/passwords and that it does allow valid logins.
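The login checks above can be expressed as a small test sketch. Everything here (the credential store, the lockout threshold, the `login` function) is a hypothetical stand-in for the site's real login mechanism:

```python
# Hypothetical credential store, for illustration only; a real test
# would exercise the site's actual login form or API.
VALID_USERS = {"alice": "s3cret"}
MAX_ATTEMPTS = 3

failed_attempts = {}

def login(username, password):
    """Return True on success; lock the account after
    MAX_ATTEMPTS consecutive failures."""
    if failed_attempts.get(username, 0) >= MAX_ATTEMPTS:
        return False  # locked out
    if VALID_USERS.get(username) == password:
        failed_attempts[username] = 0  # reset the counter on success
        return True
    failed_attempts[username] = failed_attempts.get(username, 0) + 1
    return False

# The checks the text calls for, written as assertions:
assert login("alice", "s3cret")      # valid login accepted
assert not login("alice", "wrong")   # invalid password rejected
assert not login("nobody", "x")      # unknown user rejected
```

The lockout questions in the next paragraph (maximum failed attempts, lockout by IP) are tested the same way: repeat the failing call until the threshold is reached, then confirm that even correct credentials are refused.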
Is there a maximum number of failed logins allowed before the server locks out the current user? Is the lockout based on IP? What happens after the maximum number of failed login attempts? What are the rules for password selection? These need to be checked.
• Log files: Behind the scenes, it needs to be verified that server logs are working properly. Does the log track every transaction? Does it track unsuccessful login attempts? Does it only track stolen credit card usage? What does it store for each transaction? IP address? User name?
• Scripting languages: Scripting languages are a constant source of security holes. The details are different for each language. Some allow access to the root directory. Others only allow access to the mail server, but a resourceful hacker could mail the server's username and password files to himself. Find out what scripting languages are being used and research their loopholes. It might also be a good idea to subscribe to a security newsgroup that discusses the language being tested.
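The log-file questions above (does the log record failed login attempts, with an IP and user name?) can be answered mechanically once the log format is known. A minimal sketch, assuming an invented whitespace-separated log format:

```python
# Made-up server log format: date, time, client IP, user, event.
sample_log = """\
2007-01-12 09:14:02 192.168.1.10 alice LOGIN_OK
2007-01-12 09:15:40 192.168.1.22 bob LOGIN_FAIL
2007-01-12 09:15:55 192.168.1.22 bob LOGIN_FAIL
2007-01-12 09:16:10 192.168.1.10 alice ORDER #4711
"""

def failed_logins(log_text):
    """Return an (ip, user) pair for each unsuccessful login entry."""
    failures = []
    for line in log_text.splitlines():
        fields = line.split()
        if len(fields) >= 5 and fields[4] == "LOGIN_FAIL":
            failures.append((fields[2], fields[3]))
    return failures

print(failed_logins(sample_log))
# → [('192.168.1.22', 'bob'), ('192.168.1.22', 'bob')]
```

Both failures come from the same IP, which is exactly the kind of pattern a tester would want the log to make visible.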

Conclusion
Whether an Internet, intranet or extranet application is being tested, testing for the web can be more challenging than testing non-web applications. Users have high expectations for web page quality. In many cases, the page is up for public relations just as much as for functionality, so the impression must be perfect.

Testing Terms
Application: A single software product that may or may not fully support a business function.
Audit: An inspection/assessment activity that verifies compliance with plans, policies, and procedures, and ensures that resources are conserved. Audit is a staff function; it serves as the "eyes and ears" of management.
Baseline: A quantitative measure of the current level of performance.
Benchmarking: Comparing your company's products, services, or processes against best practices, or competitive practices, to help define superior performance of a product, service, or support process.
Benefits Realization Test: A test or analysis conducted after an application is moved into production to determine whether it is likely to meet the originating business case.
Black-box Testing: A test technique that focuses on testing the functionality of the program, component, or application against its specifications without knowledge of how the system is constructed; usually data or business process driven.
Boundary Value Analysis: A data selection technique in which test data is chosen from the "boundaries" of the input or output domain classes, data structures, and procedure parameters. Choices often include the actual minimum and maximum boundary values, the maximum value plus or minus one, and the minimum value plus or minus one.
Bug: A catchall term for all software defects or errors.
Certification: Acceptance of software by an authorized agent after the software has been validated by the agent, or after its validity has been demonstrated to the agent.
Check sheet: A form used to record data as it is gathered.
Checkpoint: A formal review of key project deliverables. One checkpoint is defined for each key project deliverable, and verification and validation must be done for each of these deliverables that is produced.
Condition Coverage: A white-box testing technique that measures the number or percentage of decision outcomes covered by the test cases designed.
100% condition coverage would indicate that every possible outcome of each decision had been executed at least once during testing.
Configuration Testing: Testing of an application on all supported hardware and software platforms. This may include various combinations of hardware types, configuration settings, and software versions.
Cost of Quality (COQ): Money spent above and beyond expected production costs (labor, materials, equipment) to ensure that the product the customer receives is a quality (defect-free) product. The Cost of Quality includes prevention, appraisal, and correction or repair costs.
Conversion Testing: Validates the effectiveness of data conversion processes, including field-to-field mapping and data translation.
Decision Coverage: A white-box testing technique that measures the number or percentage of decision directions executed by the test cases designed. 100%

decision coverage would indicate that all decision directions had been executed at least once during testing. Alternatively, each logical path through the program can be tested. Often, paths through the program are grouped into a finite set of classes, and one path from each class is tested.
Decision/Condition Coverage: A white-box testing technique that executes possible combinations of condition outcomes in each decision.
Defect: Operationally, it is useful to work with two definitions of a defect: (1) from the producer's viewpoint, a product requirement that has not been met, or a product attribute possessed by a product or a function performed by a product that is not in the statement of requirements that defines the product; or (2) from the customer's viewpoint, anything that causes customer dissatisfaction, whether in the statement of requirements or not.
Driver: Code that sets up an environment and calls a module for test.
Defect Tracking Tools: Tools for documenting defects as they are found during testing and for tracking their status through to resolution.
Desk Checking: The most traditional means of analyzing a system or a program. The developer of a system or program conducts desk checking. The process involves reviewing the complete product to ensure that it is structurally sound and that the standards and requirements have been met. This tool can also be used on artifacts created during analysis and design.
Entrance Criteria: Required conditions and standards for work product quality that must be present or met for entry into the next stage of the software development process.
Equivalence Partitioning: A test technique that utilizes a subset of data that is representative of a larger class. This is done in place of undertaking exhaustive testing of each value of the larger class of data.
For example, a business rule that indicates that a program should edit salaries within a given range ($10,000 - $15,000) might have 3 equivalence classes to test:
Less than $10,000 (invalid)
Between $10,000 and $15,000 (valid)
Greater than $15,000 (invalid)
Error or Defect: 1. A discrepancy between a computed, observed, or measured value or condition and the true, specified, or theoretically correct value or condition. 2. Human action that results in software containing a fault (e.g., omission or misinterpretation of user requirements in a software specification, incorrect translation, or omission of a requirement in the design specification).
Error Guessing: A data selection technique for picking values that seem likely to cause defects. This technique is based on the theory that test cases and test data can be developed based on the intuition and experience of the tester.
Exhaustive Testing: Executing the program through all possible combinations of values for program variables.
Exit Criteria: Standards for work product quality which block the promotion of incomplete or defective work products to subsequent stages of the software development process.
Functional Testing: Application of test data derived from the specified functional requirements without regard to the final program structure.
Inspection: A formal assessment of a work product conducted by one or more qualified independent reviewers to detect defects, violations of development standards, and other problems. Inspections involve authors only when specific questions concerning deliverables exist. An inspection identifies defects, but does not attempt to correct them. Authors take corrective actions and arrange follow-up reviews as needed.
Integration Testing: This test begins after two or more programs or application components have been successfully unit tested. The development team conducts it to validate the technical quality or design of the application.
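The salary-range example above translates directly into a small test: one representative value per equivalence class, plus the boundary values that Boundary Value Analysis adds. The validation function is a stand-in for the program under test:

```python
def salary_is_valid(salary):
    """Stand-in for the business rule under test: salaries must
    fall within the $10,000 - $15,000 range."""
    return 10_000 <= salary <= 15_000

# One representative value per equivalence class:
assert not salary_is_valid(9_000)    # less than $10,000 (invalid)
assert salary_is_valid(12_500)       # within the range (valid)
assert not salary_is_valid(16_000)   # greater than $15,000 (invalid)

# Boundary value analysis adds the edges and their neighbours:
for value, expected in [(9_999, False), (10_000, True),
                        (15_000, True), (15_001, False)]:
    assert salary_is_valid(value) is expected
```

Three test values cover the whole input domain at the equivalence-class level; four more catch the off-by-one mistakes that cluster at the boundaries.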
It is the first level of testing which formally integrates a set of programs that communicate among themselves via messages or files (a client and its server(s), a string of batch programs, or a set of on-line modules within a dialog or conversation).

Life Cycle Testing: The process of verifying the consistency, completeness, and correctness of software at each stage of the development life cycle.
Performance Test: Validates that both the on-line response times and batch run times meet the defined performance requirements.
Quality: A product is a quality product if it is defect free. To the producer, a product is a quality product if it meets or conforms to the statement of requirements that defines the product. This statement is usually shortened to: quality means meeting requirements. From a customer's perspective, quality means "fit for use".
Quality Assurance (QA): The set of support activities (including facilitation, training, measurement, and analysis) needed to provide adequate confidence that processes are established and continuously improved to produce products that meet specifications and are fit for use.
Quality Control (QC): The process by which product quality is compared with applicable standards, and action is taken when nonconformance is detected. Its focus is defect detection and removal. This is a line function; that is, the performance of these tasks is the responsibility of the people working within the process.
Recovery Test: Evaluates the contingency features built into the application for handling interruptions and for returning to specific points in the application processing cycle, including checkpoints, backups, restores, and restarts. This test also assures that disaster recovery is possible.
Regression Testing: The process of retesting software to detect errors that may have been caused by program changes. The technique requires the use of a set of test cases that have been developed to test all of the software's functional capabilities.
Stress Testing: This test subjects a system, or components of a system, to varying environmental conditions that defy normal expectations.
For example: high transaction volume, large database size, or restart/recovery circumstances. The intention of stress testing is to identify constraints and to ensure that there are no performance problems.
Structural Testing: A testing method in which the test data are derived solely from the program structure.
Stub: Special code segments that, when invoked by a code segment under testing, simulate the behavior of designed and specified modules not yet constructed.
System Test: During this event, the entire system is tested to verify that all functional, information, structural and quality requirements have been met. A predetermined combination of tests is designed that, when executed successfully, satisfies management that the system meets specifications. System testing verifies the functional quality of the system in addition to all external interfaces, manual procedures, restart and recovery, and human-computer interfaces. It also verifies that interfaces between the application and the open environment work correctly, that JCL functions correctly, and that the application functions appropriately with the Database Management System, operations environment, and any communications systems.
Test Case: A document that describes an input, action, or event and an expected response, to determine if a feature of an application is working correctly. A test case should contain particulars such as test case identifier, test case name, objective, test conditions/setup, input data requirements, steps, and expected results.
Test Case Specification: An individual test condition, executed as part of a larger test, that contributes to the test's objectives. Test cases document the input, expected results, and execution conditions of a given test item. Test cases are broken down into one or more detailed test scripts and test data conditions for execution.
Test Data Set: A set of input elements used in the testing process.

Test Design Specification: A document that specifies the details of the test approach for a software feature or a combination of features and identifies the associated tests.
Test Item: A software item that is an object of testing.
Test Log: A chronological record of relevant details about the execution of tests.
Test Plan: A document describing the intended scope, approach, resources, and schedule of testing activities. It identifies test items, the features to be tested, the testing tasks, the personnel performing each task, and any risks requiring contingency planning.
Test Procedure Specification: A document specifying a sequence of actions for the execution of a test.
Test Summary Report: A document that describes testing activities and results and evaluates the corresponding test items.
Testing: Examination by manual or automated means of the behaviour of a program by executing the program on sample data sets, to verify that it satisfies specified requirements or to identify differences between expected and actual results.
Test Scripts: A tool that specifies an order of actions that should be performed during a test session. The script also contains expected results. Test scripts may be manually prepared using paper forms, or may be automated using capture/playback tools or other kinds of automated scripting tools.
Usability Test: The purpose of this event is to review the application user interface and other human factors of the application with the people who will be using the application. This is to ensure that the design (layout and sequence, etc.) enables the business functions to be executed as easily and intuitively as possible. This review includes assuring that the user interface adheres to documented User Interface standards, and should be conducted early in the design stage of development.
Ideally, an application prototype is used to walk the client group through various business scenarios, although paper copies of screens, windows, menus, and reports can be used.
User Acceptance Test: User Acceptance Testing (UAT) is conducted to ensure that the system meets the needs of the organization and the end user/customer. It validates that the system will work as intended by the user in the real world, and is based on real-world business scenarios, not system requirements. Essentially, this test validates that the RIGHT system was built.
Validation: Determination of the correctness of the final program or software produced from a development project with respect to the user needs and requirements. Validation is usually accomplished by verifying each stage of the software development life cycle.
Verification: (i) The process of determining whether the products of a given phase of the software development cycle fulfill the requirements established during the previous phase. (ii) The act of reviewing, inspecting, testing, checking, auditing, or otherwise establishing and documenting whether items, processes, services, or documents conform to specified requirements.
Walkthrough: A manual analysis technique in which the module author describes the module's structure and logic to an audience of colleagues. The technique focuses on error detection, not correction, and will usually use a formal set of standards or criteria as the basis of the review.
White-box Testing: A testing technique that assumes that the path of the logic in a program unit or component is known. White-box testing usually consists of testing paths, branch by branch, to produce predictable results. This technique is usually used during tests executed by the development team, such as Unit or Component testing.
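The branch-by-branch idea of white-box testing can be illustrated with a toy unit (the function and its fee rules are invented for this sketch): one test case is chosen for each decision direction, giving full decision coverage of the unit.

```python
def shipping_fee(order_total, is_member):
    """Toy unit under test with two decisions (four branch directions)."""
    if is_member:              # decision 1
        return 0.0
    if order_total >= 100:     # decision 2
        return 0.0
    return 7.5

# White-box test cases chosen so every decision direction executes:
assert shipping_fee(20, True) == 0.0    # decision 1 true
assert shipping_fee(150, False) == 0.0  # decision 1 false, decision 2 true
assert shipping_fee(20, False) == 7.5   # decision 1 false, decision 2 false
```

Note that these cases are derived from the code's structure, not from a specification, which is exactly what distinguishes white-box testing from black-box testing in the glossary above.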

Technical Questions
1. What is Software Testing?
The process of exercising or evaluating a system or system component by manual or automated means to verify that it satisfies specified requirements, or to identify differences between expected and actual results.
2. What is the Purpose of Testing?
• To uncover hidden errors
• To achieve the maximum usability of the system
• To demonstrate expected performance of the system
3. What types of testing do testers perform?
Black box testing and white box testing are the basic types of testing testers perform. Apart from that they also perform a lot of tests like Ad-hoc testing, Cookie Testing, CET (Customer Experience Test), Client-Server Test, Configuration Tests, Compatibility testing, Conformance Testing.
4. What is the Outcome of Testing?
A stable application, performing its task as expected.
5. What is the need for testing?
The primary need is to check that the requirements are satisfied by the functionality, and also to answer two questions:
• Is the system doing what it is supposed to do?
• Is the system not performing what it is not supposed to do?
6. What are the entry criteria for Functionality and Performance testing?
Functional testing: Functional Specification / BRS (CRS) / User Manual, and an integrated application that is stable for testing.
7. Why do you go for White box testing, when Black box testing is available?
A benchmark that certifies commercial (business) aspects and also functional (technical) aspects is the objective of black box testing. Loops, structures, arrays, conditions, files, etc. are at a very micro level, but they are the basement for any application, so white box testing takes these things at a macro level and tests them.
8. What are the entry criteria for Automation testing?
The application should be stable. A clear design and flow of the application is needed.
9. What is a Baseline document? Can you name any two?

A baseline document is one from which the understanding of the application starts, before the tester begins actual testing; for example, the Functional Specification and the Business Requirement Document.
10. What are the qualities of a Tester?
• Should be a perfectionist
• Should be tactful and diplomatic
• Should be innovative and creative
• Should be relentless
• Should possess negative thinking with good judgment skills
• Should possess the attitude to break the system
11. Name some testing types which you have learnt or experienced.
Any 5 or 6 types which are related to the company's profile are good to mention in the interview:
• Ad-hoc testing
• Cookie Testing
• CET (Customer Experience Test)
• Depth Test
• Event-Driven
• Performance Testing
• Recovery testing
• Sanity Test
• Security Testing
• Smoke testing
• Web Testing
12. What exactly is the Heuristic checklist approach for unit testing?
It is a method in which the most appropriate of several solutions found by alternative methods is selected at successive stages of testing. The checklist prepared to proceed is called a Heuristic checklist.
13. After completing testing, what would you deliver to the client?
Test deliverables, namely:
• Test plan
• Test Data
• Test design documents (Conditions/Cases)
• Defect Reports
• Test Closure Documents
• Test Metrics
14. What is a Test Bed?
The elements which support the testing activity before the actual testing starts, such as test data and data guidelines, are collectively called the test bed.
15. What is a Data Guideline?
Data Guidelines are used to specify the data required to populate the test bed and prepare test scripts. They include all data parameters that are required to test the conditions derived from the requirement/specification. The documents which support preparing test data are called Data Guidelines.
16. Why do you go for a Test Bed?

When a test condition is executed, its result should be compared to the expected test result; as test data is needed for this, here comes the role of the test bed, where the test data is made ready.
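The comparison described here (execute the condition, then compare the actual result against the expected result held in the test bed) can be sketched as follows; the data rows and the unit under test are illustrative only:

```python
# Prepared test data (the "test bed"): each row pairs an input
# with the expected result for one test condition.
test_bed = [
    {"input": (2, 3), "expected": 5},
    {"input": (0, 0), "expected": 0},
    {"input": (-1, 1), "expected": 0},
]

def unit_under_test(a, b):
    """Stand-in for the functionality being exercised."""
    return a + b

def run_conditions(rows):
    """Execute every test condition and compare actual vs expected."""
    report = []
    for row in rows:
        actual = unit_under_test(*row["input"])
        report.append("PASS" if actual == row["expected"] else "FAIL")
    return report

print(run_conditions(test_bed))  # → ['PASS', 'PASS', 'PASS']
```

The same shape underlies the data-driven automation discussed later: one script, many rows of prepared data, one pass/fail verdict per row.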

17. Can automation testing replace manual testing? If so, how?
Automated testing can never replace manual testing, as these tools follow the GIGO principle of computer tools, with an absence of creativity and innovative thinking. But it speeds up the process, follows a clear process which can be reviewed easily, and is better suited for regression testing of a manually tested application and for performance testing.
18. What is the difference between quality and testing?
"Quality is giving more cushion for the user to use the system with all its expected characteristics." It is usually described as a journey towards excellence. "Testing is an activity done to achieve quality."
19. Why do we prepare test conditions, test cases and test scripts (before starting testing)?
These are the test design documents used to execute the actual testing, without which execution of testing is impossible; finally, this execution is going to find the bugs to be fixed, so we have to prepare these documents.
20. Is it not a waste of time to prepare the test condition, test case & test script?
No document prepared in any process is a waste of time, least of all the test design documents, which play a vital role in test execution and can never be called a waste of time, as without them proper testing cannot be done.
21. How do you go about testing a Web Application?
In approaching web application testing, the first attack on the application should be on its performance behavior, as that is very important for a web application, and then on the transfer of data between the web server and front end server, security server and back end server.
22. What kind of document do you need for Functional testing?
The Functional Specification is the ultimate document, which expresses all the functionalities of the application; other documents like the user manual and BRS are also needed for functional testing. A gap analysis document will add value in understanding the expected and existing system.
23. Can System testing be done at any stage?
No. The system as a whole can be tested only if all modules are integrated and all modules work correctly. System testing should be done before UAT (User Acceptance Testing) and after Unit Testing.

24. What is Mutation testing & when can it be done?
Mutation testing is a powerful fault-based testing technique for unit level testing. Since it is a fault-based testing technique, it is aimed at testing for, and uncovering, some specific kinds of faults, namely simple syntactic changes to a program. Mutation testing is based on two assumptions: the competent programmer hypothesis and the coupling effect. The competent programmer hypothesis assumes that competent programmers tend to write nearly "correct" programs. The coupling effect states that a set of test data that can uncover all simple faults in a program is also capable of detecting more complex faults. Mutation testing injects faults into code to determine optimal test inputs.
25. Why is it impossible to test a program completely?
With any software other than the smallest and simplest program, there are too many inputs, too many outputs, and too many path combinations to fully test. Also, software specifications can be subjective and be interpreted in different ways.

Test Automation:
26. What automation testing tools are you familiar with?
WinRunner and LoadRunner.
27. What is the use of automation testing tools in any job?
The automation testing tools are used for Regression and Performance testing.
28. Describe some problems with automation testing tools.
Several problems are encountered while working with test automation tools, like:

a. Tool limitations in object detection.
b. Tool configuration/deployment in various environments.
c. Tool precision / default skeleton script issues, like window synchronization issues etc.
d. Tool bugs with respect to exception handling.
e. The tool's abnormal polymorphism in behavior: sometimes it works but sometimes not, for the same application / same script / same environment etc.
29. How is test automation planned?
Planning is the most important task in test automation. A test automation plan should cover the following task items:
a. Tool selection: the type of test automation expected (Regression / Performance etc.).
b. Tool evaluation: tool availability / tool license availability / tool license limitations.
c. Tool cost estimation vs project cost estimation statistics for testing.
d. Resource requirements vs availability study.
e. Time availability vs time estimation calculations and definitions.
f. Consideration of production requirements analysis results with respect to factors like expected load-performance / functionality / scalability etc.

g. Test automation process definitions, including standards to be followed while performing test automation.
h. Test automation scope definition.
i. Automation risk analysis, and planning to overcome the defined risks if they emerge in the automation process.
j. Reference document requirements as prerequisites for test automation.
30. Can test automation improve test effectiveness?
Yes, definitely. Test automation plays a vital role in improving test effectiveness in various ways, like:
a. Reduction in slippage caused by human errors.
b. Object / object-property level UI verifications.
c. Virtual load/user usage in load/performance testing, where it is not possible to get such accurate results with so many resources physically performing the test.
d. Precise time calculations.
e. And many more…
31. What is data-driven automation?
Data-driven automation is the most important part of test automation, where the requirement is to execute the same test cases for different sets of test input data, so that the test can be executed for pre-defined iterations with a different set of test input data in each iteration.
32. What are the main attributes of test automation?
Here are some of the attributes of test automation that can be measured:
Maintainability
• Definition: The effort needed to update the test automation suites for each new release.
• Possible measurements: e.g. the average work effort in hours to update a test suite.
Reliability
• Definition: The accuracy and repeatability of your test automation.
• Possible measurements: The number of times a test failed due to defects in the tests or in the test scripts.
Flexibility
• Definition: The ease of working with all the different kinds of automation test ware.
• Possible measurements: The time and effort needed to identify, locate, restore, combine and execute the different test automation test ware.
Efficiency
• Definition: The total cost related to the effort needed for the automation.
• Possible measurements: Monitoring over time the total cost of automated testing, i.e. resources, material, etc.
Portability
• Definition: The ability of the automated test to run on different environments.
• Possible measurements: The effort and time needed to set up and run test automation in a new environment.

Robustness
• Definition: The effectiveness of automation on an unstable or rapidly changing system.
• Possible measurements: The number of tests failed due to unexpected events.

Usability
• Definition: The extent to which automation can be used by different types of users (developers, non-technical people, other users etc.).
• Possible measurements: The time needed to train users to become confident and productive with test automation.
33. Does automation replace manual testing?
We cannot actually replace manual testing 100% using automation, but it can definitely replace almost 90% of the manual test effort if the automation is done efficiently.
34. How is a tool for test automation chosen?
Below are the factors to be considered while choosing a test automation tool:
a. Test type expected (e.g. Regression Testing / Functional Testing / Performance-Load Testing).
b. Tool cost vs project testing budget estimation.
c. Protocol support by the tool vs the application's designed protocol.
d. Tool limitations vs application test requirements.
e. H/W, S/W & platform support of the tool vs the application test scope for these attributes.
f. Tool license limitations/availability vs test requirements (tool scalability).
35. How will one evaluate a tool for test automation?
Whenever a tool has to be evaluated, one needs to go through a few important verifications/validations of the tool, like:
a. Platform support of the tool.
b. Protocol/technology support.
c. Tool cost.
d. Tool type, with its features vs our requirements analysis.
e. Tool usage comparisons with other similar tools available in the market.
f. The tool's compatibility with our application architecture and development technologies.
g. Tool configuration & deployment requirements.
h. Tool limitations analysis.
36. What are the main benefits of test automation?
The main benefits of test automation are:
a. Test automation saves major testing time.
b. Saves resources (human / H/W / S/W resources).
c. Reduction in verification slippages caused by human errors.
d. Object-property level verifications can be done, which is difficult manually.
e.
Virtual load/user generation for load testing, which is not worth doing manually, as it needs lots of resources and might not give the precise results that can be achieved using an automation tool.
f. Regression testing purposes.
g. Data-driven testing.

37. What could go wrong with test automation?
While using test automation, there are various factors that can affect the testing process, like:
a. The tool's limitations might be reported as application defects.
b. The automation tool's abnormal behavior, like scalability variations due to memory violations, might be taken for the application's own memory violations in heavy load tests.
c. Environment settings required for the tool (e.g. Java-CORBA requires the JDK to be present in the system) cause the application to show up bugs which are due only to the JDK installation in the system (something I have experienced myself: on un-installation of the JDK and Java add-ins my application worked fine).
38. How are the testing activities described?
The basic testing activities are as follows:
a. Test Planning (prerequisite: get adequate documents of the project to test).
b. Test Cases (prerequisite: get adequate documents of the project to test).
c. Cursor Test (a very basic test to make sure that all screens come up and the application is ready to test or to automate).
d. Manual Testing.
e. Test Automation (provided the product has reached enough stability to be automated).
f. Bug Tracking & Bug Reporting.
g. Analysis of the test and test report creation.
h. If the bug-fixing cycle repeats, then steps c-h repeat.
39. What testing activities may one want to automate?
Anything which is repeated should be automated if possible. Thus the following testing activities can be automated:
a. Test case preparation.
b. Tests like Cursor, Regression, Functional & Load/Performance testing.
c. Test report generation.
d. Test status/results notifications.
e. Bug tracking system, etc.
40. Describe common problems of test automation.
In test automation we come across several problems, out of which I would like to highlight a few, given below:
a. Automation script maintenance, which becomes tough if the product goes through frequent changes.
b. The automation tool's limitations in recognizing objects.
c.
The automation tool's third-party integration limitations.
d. The automation tool's abnormal behavior due to its scalability issues.
e. Due to tool defects, we might take an issue for an application defect and report it as an application bug.
f. Environmental settings and APIs/add-ins required by the tool to make it work with specialized environments, like Java-CORBA, create Java environment issues for the application. (E.g. WinRunner 7.05's Java-support environment variables cause the application under test to malfunction.)

g. There are many issues which we come across during actual automation.

41. What are the types of scripting techniques for test automation? Scripting technique means how to structure automated test scripts for maximum benefit and minimum impact from software changes: scripting issues; scripting approaches (linear, shared, data-driven and programmed); script pre-processing; minimizing the impact of software changes on test scripts. The major techniques used are:
a. Data-driven scripting
b. Centralized application-specific / generic compiled modules / library development
c. Parent-child scripting
d. Techniques to generalize the scripts
e. Increasing the reusability of the scripts

42. What are the principles of good testing scripts for automation? The major principles are:
a. Automation scripts should be reusable.
b. Coding standards should be followed for scripting, which makes updating, understanding and debugging scripts easier.
c. Scripts should be environment- and data-independent as far as possible, which can be achieved using parameterization.
d. Scripts should be generalized.
e. Scripts should be modular.
f. Repeated tasks should be kept in functions while scripting, to avoid code repetition and complexity and to make the script easy to debug.
g. Scripts should be readable, and appropriate comments should be written for each line or section of the script.
h. The script header should contain the script developer's name, last-updated date, environmental requirements, scripted environment details, pre-requisites from the application side, a brief description, contents, scope, etc.

43. What tools are available to support testing during the software development life cycle? Test Director for test management, Bugzilla for bug tracking and notification, etc.

44. Can the activities of test case design be automated?
Yes. Test Director is one such tool, with features for test case design and execution.

45. What are the limitations of automating software testing? To mention a few:
a. Automation needs a lot of time in the initial stage.
b. Every tool has its own limitations with respect to protocol support, technologies supported, object recognition, platforms supported, etc., due to which not 100% of the application can be automated; there is always something the tool cannot handle that we have to overcome with R&D.

c. The tool's memory utilization is also an important factor: it ties up the application's memory resources and creates problems for the application in some cases, such as Java applications.
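The data-driven scripting and parameterization techniques mentioned in the answers above can be sketched as a script that keeps test data separate from test logic. This is only a minimal illustration: the `login` function and the inline CSV rows are hypothetical stand-ins for a real application action and a real data file.

```python
import csv
import io

def login(username, password):
    """Hypothetical application action under test."""
    return username == "admin" and password == "secret"

# Test data kept separate from the script (would normally live in a CSV file).
TEST_DATA = io.StringIO(
    "username,password,expected\n"
    "admin,secret,pass\n"
    "admin,wrong,fail\n"
    ",secret,fail\n"
)

def run_data_driven_tests(data_file):
    """One generic, reusable script body; each data row is a test case."""
    results = []
    for row in csv.DictReader(data_file):
        actual = "pass" if login(row["username"], row["password"]) else "fail"
        results.append((row, actual == row["expected"]))
    return results

results = run_data_driven_tests(TEST_DATA)
print(all(ok for _, ok in results))
```

Adding a new test case is then a data change, not a script change, which is the maintainability point behind principle (c) above.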

Regression Testing Overview
• The regression test paradigm
• Problems with the paradigm
• Typical practices that result in failure
• Short-term gains are possible
• Automation is software development
• Architectural approaches to GUI automation
• LAWST discussions
• Breaking away from the regression paradigm

The Regression Test Paradigm
• Create a test case.
• Run it and inspect the output.
• If the program fails, report a bug and retry later.
• If the program passes, save the resulting outputs.
• In future tests, run the program and compare the output to the saved results. Report an exception when the current output and the saved output don't match.

Problems with the Paradigm
• Test case creation is expensive.
• Your most technically skilled staff is tied up in automation.
• Automation can delay testing, adding even more cost (albeit hidden cost).
• You are executing weak tests.
  - How many bugs do you find with a test that the program has already passed? (If you do extensive automation, 6-25%.)
  - How much work does it take to do this extensive a level of automation? (50%?)
• But the test case development finds lots of bugs, doesn't it?
  - Cadence's data supports this, but indirectly illustrates the power of manual testing.
• Many groups automate only the easy tests.
  - An often-suggested strategy; the result is expensive investment in weak testing of superficial issues.
• Management misunderstands the depth of testing.
  - Is a 5,000-test suite big or small?
• Maintenance can be hugely expensive.

  - Recode hundreds of tests to catch up with one coding change.
  - Yet another half-baked programming language with mediocre development tools.
• What do you have for your next release?
  - What is your coverage? Do you know what tests you aren't running, or what areas of the program aren't covered?
  - Can you read the code well enough to maintain it?
  - What weaknesses are there in the scripts?
  - How do you detect a failure?
  - Will management allow you enough time to test, given that the automation "should" do the testing?

Typical Practices that Often Result in Failure
• Capture replay
• Scripts of individual tests
• Part-time automation
• "100% automation"
• Automation of easy tests
• No documentation

Short-Term Gains are Possible
• Printer compatibility testing
  - Modem compatibility seems equally obvious.
  - How do we do video compatibility testing, or tests of other devices that require human appraisal?
• Stress testing
  - Some tests can only be done by machines (simulate 100,000 users).
• Performance benchmarking
• Smoke testing

Interesting Approaches: Data-Driven Architecture
Table example:
• Picture
• Caption
  - typeface
  - size
  - style
  - placement
Note with this example:
• We never run tests twice.
• We automate execution, not evaluation.
• We save SOME time.
• We focus the tester on design and results, not execution.

Interesting Approaches: Frameworks
Frameworks are code libraries that separate routine calls from designed tests.
• Modularity
• Reuse of components
• Partial salvation from the custom control problem
• Independence of the application (the test case) from user interface details (execute using keyboard? mouse? API?)

LAWST
Rather than talking about how we were having problems, could we build a process that captures experience across labs?

• Facilitated meeting
• War stories
• Discussion
• Principles and facts
• Votes and arguments

Pattern of evolution
• Capture-replay (disaster)
• Individual cases (disaster)
• Complex frameworks (disaster)
• Data-driven or simpler framework (may be stable)

Reset management expectations
• It takes time.
• You need people.
• Benefits are for the next release.

=========================

Should you automate?
• A one-release product?
• A first-release product with a rapidly changing UI?
• A multi-platform product?

Localization was far less successful than we expected.
• I think that there are some successes out there, though.
• Think about functionality vs. content. What functionality risks are there? Are they worth adding the localizability complexity needed to make the test cases themselves localizable?

Automation is Software Development
• Big code base, with features (each test is a feature)
• We would never tolerate designing other software as 50,000 standalone features
• Requirements, architecture, standards, documentation, discipline

Data-Driven Architecture
• The program's variables are data.
• The program's commands are data.
• The program's UI is data.
• The program's state is data.

Frameworks
• Define every feature of the application under test
  - custom controls and kludges
• Commands / features of the tool
• Small, often-reused tasks
• Large, complex chunks
• Utility functions
  - e.g. a standardized logger; you may not need it.

Framework risks
• Over-ambition
• Poor communication means non-use
• Some products don't call for this type of investment

OTHER THOUGHTS: other types of automation and other goals of automation

WHY FOCUS ON REGRESSION?

What Do We Want From Automation?
• Save time and money while you find bugs.
• Automate the test the first time you run it.
• Run scrillions of tests.
• Make it easy to figure out what has and what has not been tested.
• Handle multi-dimensional (multivariable) issues.
• What else?

Examples of Non-Regression Uses of GUI Regression Tools
Test execution:
• Exploratory testing
  - partial automation
  - full automation with an oracle
  - (clean room)
• Function equivalence testing
• State transition testing
• Event-log driven testing (I think this is a.k.a. monkeys)
The biggest challenge is the oracle.

Other Things to Automate
Automated testing tools provide special capabilities:
• Analyzing the code for bugs
• Designing test cases
• Automatically creating test cases
• Relatively easy manual creation of test cases
• Executing the tests
• Validating the test results
Many tools offer extra doodads such as integration with bug tracking or source control, project management, etc. No tool offers all these capabilities.

Non-Regression
At LAWST, each of us found that we had our most successful automation experiences in joint projects with the testing and programming staff. Don't get locked into the paradigm, but use the tool if it is useful.
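The regression paradigm described earlier (run the program, save its output, and compare future runs against the saved result) can be sketched as a golden-file check. This is a minimal illustration: `generate_report` is a hypothetical stand-in for any program under test.

```python
import os
import tempfile

def generate_report(data):
    """Stand-in for the program under test."""
    return "total=%d\ncount=%d\n" % (sum(data), len(data))

def regression_check(golden_path, current_output):
    """First run saves the output; later runs compare against it."""
    if not os.path.exists(golden_path):
        with open(golden_path, "w") as f:
            f.write(current_output)   # a human should inspect and bless this
        return "saved"
    with open(golden_path) as f:
        saved = f.read()
    return "pass" if saved == current_output else "FAIL: output changed"

golden = os.path.join(tempfile.mkdtemp(), "report.golden")
print(regression_check(golden, generate_report([1, 2, 3])))  # saved
print(regression_check(golden, generate_report([1, 2, 3])))  # pass
print(regression_check(golden, generate_report([1, 2, 4])))  # FAIL: output changed
```

Note how the sketch exhibits the weaknesses listed above: it only re-runs a test the program already passed, and any intentional output change forces the golden file to be re-blessed, which is the maintenance cost the talk warns about.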

Mistakes and challenges

The seven most common test case mistakes
In each writer's work, test case defects will cluster around certain writing mistakes. If you are writing cases or managing writers, don't wait until the cases are all done before finding these clusters. Review the cases every day or two, looking for the faults that will make the cases harder to test and maintain. Chances are you will discover that the opportunities to improve are clustered in one of the seven most common test case mistakes:
1. Making cases too long
2. Incomplete, incorrect, or incoherent setup
3. Leaving out a step
4. Naming fields that changed or no longer exist
5. Unclear whether the tester or the system does an action

6. Unclear what is a pass or fail result
7. Failure to clean up

Handling challenges to good test cases
Even when you employ the best techniques and standards, you still have to overcome the same challenges that face every test writing effort. Let's look at common challenges to test writing, and see how they can be managed by responses that salvage as much quality as possible. The challenges are typically imposed at the project level, and must be responded to at the testing management level. If they are imposed on a writer from the testing management level, the writer should make the response.

Challenge: Requirements changes
Response: The best defense here is being informed. Before writing cases, and at every status meeting, find out where the greatest risks of requirement changes are. Strategize which cases will and won't be affected by the change. Write the ones that won't first. Build in variables or "to be decided" markers that you will come back and fill in later. Make sure the budget owner knows the cost of revising test cases that are already written. Quantify what it costs per case. Let project management set priorities for which cases should be written or revised. Let them see you can't do it all, and ask them to decide where they have the greatest risk. Release the not-quite-right test cases unrevised, and ask the testers to mark up what has to be changed; schedule more time to test each case, plus time for maintaining the tests.

Challenge: Schedule changes
Response: If a testing date is moved up, get management to participate in deciding how test cases will be affected. As with the changing-requirements challenge, let them choose what they want to risk. Add staff only if time permits one to two weeks of training before they have to be productive, and only if you have someone to mentor and review their work. Shift the order of writing cases so you write first those that will be tested first. Try to stay one case ahead of the testers.
You can skinny down the test cases to just a purpose, the requirement being tested, and a setup. This is not as bad as ad hoc testing, but management should know the results are not as reliable as if the cases were complete. Schedule more time to test this kind of test case, and time to finish the case after testing. Offer to have writers do the testing and write as they go; schedule more time for testing and for finishing the writing after testing.

Challenge: Staff turnover
Response: New staff need an understanding of the goals, schedule, and organization of the current testing project, in writing if possible. Verbal orientations get lost in the shuffle.

New staff should concentrate on knowing the business use of the software, and then on the requirements and prototypes. They may write fewer cases, but those cases will be right. New staff should have hands-on training in the standards, with many practical examples of how to apply them. Their work should be closely checked at first. Try to place new staff in an area of good technical fit for the cases they will be writing.

Test case assets

Protecting test case assets

The most important activity for protecting the value of test cases is to maintain them so they stay testable. They should be maintained after each testing cycle, since testers will find defects in the cases as well as in the software. When testing schedules are created, time should be allotted for the test analyst or writer to fix the cases while programmers fix bugs in the application. If they aren't fixed, the testers and writers will waste time in the next cycle figuring out whether the test case or the software has the error.

Test cases lost or corrupted by poor versioning and storage defeat the whole purpose of making them reusable. Configuration management (CM) of cases should be handled by the organization or project, rather than by test management. If the organization does not have this level of process maturity, the test manager or test writer needs to supply it. Either the project or the test manager should protect valuable test case assets with the following configuration management standards:
• Naming and numbering conventions
• Formats and file types
• Versioning
• Test objects needed by the case, such as databases
• Read-only storage
• Controlled access
• Off-site backup

Test management needs an index of all test cases. If one is not supplied by CM, create your own. A database should be searchable on keys of project, software, test name, number, and requirement. A full-text search capability would be even better.

Leveraging test cases

Test cases as development assets have a life beyond testing. They represent a complete picture of how the software works, written in plain English. Even if their focus is destructive, they must also prove that all business scenarios work as required. Often the cases are written for testers who are the business users, so they use real-world language and terms. A set of use cases has tremendous value to others who are working to learn or sell the software:

• Business users
• Technical writers
• Help desk technicians
• Trainers
• Sales and marketing staff
• Web administrators

All of these people have a stake in seeing the software succeed, and are also potential testers. Depending on the organization, goodwill and open communication between test writers and these groups can greatly speed up the time to production or release.

Summary
The process of teaching good writing techniques and setting test case standards is an asset in itself. It is never static, but must be dynamically taught, applied, audited, measured, and improved.

Common QA Terms

Acceptance Testing: Formal testing conducted to determine whether or not a system satisfies its acceptance criteria; enables an end user to determine whether or not to accept the system.

Affinity Diagram: A group process that takes large amounts of language data, such as a list developed by brainstorming, and divides it into categories.

Alpha Testing: Testing of a software product or system conducted at the developer's site by the end user.

Audit: An inspection/assessment activity that verifies compliance with plans, policies, and procedures, and ensures that resources are conserved. Audit is a staff function; it serves as the "eyes and ears" of management.

Automated Testing: That part of software testing that is assisted by software tool(s) and does not require operator input, analysis, or evaluation.

Beta Testing: Testing conducted at one or more end user sites by the end user of a delivered software product or system.

Black-box Testing: Functional testing based on requirements, with no knowledge of the internal program structure or data. Also known as closed-box testing. Black-box testing indicates whether or not a program meets required specifications by spotting faults of omission: places where the specification is not fulfilled.

Bottom-up Testing: An integration testing technique that tests the low-level components first, using test drivers to stand in for those components that have not yet been developed to call the low-level components for test.

Boundary Value Analysis: A test data selection technique in which values are chosen to lie along data extremes. Boundary values include maximum, minimum, just inside/outside boundaries, typical values, and error values.

Brainstorming: A group process for generating creative and diverse ideas.

Branch Coverage Testing: A test method satisfying coverage criteria that require each decision point, at each possible branch, to be executed at least once.

Bug: A design flaw that will result in symptoms exhibited by some object (the object under test or some other object) when the object is subjected to an appropriate test.

Cause-and-Effect (Fishbone) Diagram: A tool used to identify possible causes of a problem by representing the relationship between some effect and its possible cause.

Cause-effect Graphing: A testing technique that aids in selecting, in a systematic way, a high-yield set of test cases that logically relates causes to effects. It has a beneficial side effect in pointing out incompleteness and ambiguities in specifications.

Check Sheet: A form used to record data as it is gathered.

Clear-box Testing: Another term for white-box testing. Structural testing is sometimes referred to as clear-box testing, since "white boxes" are considered opaque and do not really permit visibility into the code. This is also known as glass-box or open-box testing.

Client: The end user that pays for the product received, and receives the benefit from its use.

Control Chart: A statistical method for distinguishing between common and special cause variation exhibited by processes.

Customer (end user): The individual or organization, internal or external to the producing organization, that receives the product.
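Boundary value analysis, as defined above, can be illustrated with a field that accepts integers from 1 to 100. The validation rule below is a hypothetical example; the chosen values follow the "minimum, maximum, just inside/outside, typical, error" pattern from the definition.

```python
def is_valid_quantity(n):
    """Hypothetical validation rule under test: accept 1..100 inclusive."""
    return 1 <= n <= 100

# Boundary values: just outside/at/just inside each boundary,
# plus a typical value and an error value.
cases = {
    0: False,    # just outside the lower boundary
    1: True,     # minimum
    2: True,     # just inside the lower boundary
    50: True,    # typical value
    99: True,    # just inside the upper boundary
    100: True,   # maximum
    101: False,  # just outside the upper boundary
    -1: False,   # error value
}
print(all(is_valid_quantity(n) == expected for n, expected in cases.items()))  # True
```

Off-by-one mistakes (writing `<` instead of `<=`) are exactly the defects these values are chosen to catch.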
Data Flow Analysis: Consists of the graphical analysis of collections of (sequential) data definitions and reference patterns to determine constraints that can be placed on data values at various points of executing the source program.

Debugging: The act of attempting to determine the cause of the symptoms of malfunctions detected by testing or by frenzied user complaints.

Defect: NOTE: Operationally, it is useful to work with two definitions of a defect: 1) From the producer's viewpoint: a product requirement that has not been met, or a product attribute possessed by a product or a function performed by a product that is not in the statement of requirements that defines the product. 2) From the end user's viewpoint: anything that causes end user dissatisfaction, whether in the statement of requirements or not.

Defect Analysis: Using defects as data for continuous quality improvement. Defect analysis generally seeks to classify defects into categories and identify possible causes in order to direct process improvement efforts.

Defect Density: Ratio of the number of defects to program length (a relative number).

Desk Checking: A form of manual static analysis, usually performed by the originator. Source code, documentation, etc., are visually checked against requirements and standards.

Dynamic Analysis: The process of evaluating a program based on execution of that program. Dynamic analysis approaches rely on executing a piece of software with selected test data.

Dynamic Testing: Verification or validation performed by executing the system's code.

Error: 1) A discrepancy between a computed, observed, or measured value or condition and the true, specified, or theoretically correct value or condition; and 2) a mental mistake made by a programmer that may result in a program fault.

Error-based Testing: Testing where information about programming style, error-prone language constructs, and other programming knowledge is applied to select test data capable of detecting faults, either a specified class of faults or all possible faults.

Evaluation: The process of examining a system or system component to determine the extent to which specified properties are present.

Execution: The process of a computer carrying out an instruction or instructions of a computer program.

Exhaustive Testing: Executing the program with all possible combinations of values for program variables.

Failure: The inability of a system or system component to perform a required function within specified limits. A failure may be produced when a fault is encountered.

Failure-directed Testing: Testing based on knowledge of the types of errors made in the past that are likely for the system under test.

Fault: A manifestation of an error in software. A fault, if encountered, may cause a failure.

Fault Tree Analysis: A form of safety analysis that assesses hardware safety to provide failure statistics and sensitivity analyses that indicate the possible effect of critical failures.

Fault-based Testing: Testing that employs a test data selection strategy designed to generate test data capable of demonstrating the absence of a set of pre-specified faults; typically, frequently occurring faults.

Flowchart: A diagram showing the sequential steps of a process or of a workflow around a product or service.

Formal Review: A technical review conducted with the end user, including the types of reviews called for in the standards.

Function Points: A consistent measure of software size based on user requirements. Data components include inputs, outputs, etc. Environment characteristics include data communications, performance, reusability, operational ease, etc. Weight scale: 0 = not present; 1 = minor influence; 5 = strong influence.

Functional Testing: Application of test data derived from the specified functional requirements, without regard to the final program structure. Also known as black-box testing.

Heuristics Testing: Another term for failure-directed testing.

Histogram: A graphical description of individual measured values in a data set, organized according to the frequency or relative frequency of occurrence. A histogram illustrates the shape of the distribution of individual values in a data set, along with information regarding the average and variation.
Hybrid Testing: A combination of top-down testing with bottom-up testing of prioritized or available components.

Incremental Analysis: Incremental analysis occurs when (partial) analysis may be performed on an incomplete product to allow early feedback on the development of that product.

Infeasible Path: A program statement sequence that can never be executed.

Inputs: Products, services, or information needed from suppliers to make a process work.

Inspection: 1) A formal evaluation technique in which software requirements, design, or code are examined in detail by a person or group other than the author to detect faults, violations of development standards, and other problems. 2) A quality improvement process for written material that consists of two dominant components: product (document) improvement and process improvement (document production and inspection).

Instrument: To install or insert devices or instructions into hardware or software to monitor the operation of a system or component.

Integration: The process of combining software components or hardware components, or both, into an overall system.

Integration Testing: An orderly progression of testing in which software components or hardware components, or both, are combined and tested until the entire system has been integrated.

Interface: A shared boundary. An interface might be a hardware component to link two devices, or it might be a portion of storage or registers accessed by two or more computer programs.

Interface Analysis: Checks the interfaces between program elements for consistency and adherence to predefined rules or axioms.

Intrusive Testing: Testing that collects timing and processing information during program execution that may change the behavior of the software from its behavior in a real environment. Usually involves additional code embedded in the software being tested, or additional processes running concurrently with the software being tested on the same platform.
IV&V: Independent verification and validation is the verification and validation of a software product by an organization that is both technically and managerially separate from the organization responsible for developing the product.

Life Cycle: The period that starts when a software product is conceived and ends when the product is no longer available for use. The software life cycle typically includes a requirements phase, design phase, implementation (code) phase, test phase, installation and checkout phase, operation and maintenance phase, and a retirement phase.

Manual Testing: That part of software testing that requires operator input, analysis, or evaluation.

Mean: A value derived by adding several quantities and dividing the sum by the number of these quantities.

Measurement: 1) The act or process of measuring. 2) A figure, extent, or amount obtained by measuring.

Metric: A measure of the extent or degree to which a product possesses and exhibits a certain quality, property, or attribute.

Mutation Testing: A method to determine test set thoroughness by measuring the extent to which a test set can discriminate the program from slight variants of the program.

Non-intrusive Testing: Testing that is transparent to the software under test; i.e., testing that does not change the timing or processing characteristics of the software under test from its behavior in a real environment. Usually involves additional hardware that collects timing or processing information and processes that information on another platform.

Operational Requirements: Qualitative and quantitative parameters that specify the desired operational capabilities of a system and serve as a basis for determining the operational effectiveness and suitability of a system prior to deployment.

Operational Testing: Testing performed by the end user on software in its normal operating environment.

Outputs: Products, services, or information supplied to meet end user needs.

Path Analysis: Program analysis performed to identify all possible paths through a program, to detect incomplete paths, or to discover portions of the program that are not on any path.
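Mutation testing, defined above, can be sketched by running a test set against the original program and a slight variant ("mutant"); a thorough test set "kills" the mutant by telling the two apart. Both functions below are illustrative examples, not taken from any particular tool.

```python
def original_discount(total):
    """Original program: 10% discount on orders of 100 or more."""
    return 0.1 if total >= 100 else 0.0

def mutant_discount(total):
    """Slight variant: the >= operator mutated to >."""
    return 0.1 if total > 100 else 0.0

def kills_mutant(inputs):
    """A test set kills the mutant if any input distinguishes the two programs."""
    return any(original_discount(t) != mutant_discount(t) for t in inputs)

print(kills_mutant([50, 150]))        # False: the mutant survives this weak test set
print(kills_mutant([50, 100, 150]))   # True: the boundary input 100 kills the mutant
```

A surviving mutant signals a gap in the test set; here the gap is exactly the boundary value that boundary value analysis would have prescribed.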
Path Coverage Testing: A test method satisfying coverage criteria that each logical path through the program be tested. Paths through the program are often grouped into a finite set of classes; one path from each class is tested.

Peer Reviews: A methodical examination of software work products by the producer's peers to identify defects and areas where changes are needed.

Policy: Managerial desires and intents concerning either process (intended objectives) or products (desired attributes).

Problem: Any deviation from defined standards; same as defect.

Procedure: The step-by-step method followed to ensure that standards are met.

Process: The work effort that produces a product. This includes efforts of people and equipment guided by policies, standards, and procedures.

Process Improvement: To change a process to make it produce a given product faster, more economically, or of higher quality. Such changes may require the product to be changed. The defect rate must be maintained or reduced.

Product: The output of a process; the work product. There are three useful classes of products: manufactured products (standard and custom), administrative/information products (invoices, letters, etc.), and service products (physical, intellectual, physiological, and psychological). Products are defined by a statement of requirements; they are produced by one or more people working in a process.

Product Improvement: To change the statement of requirements that defines a product to make the product more satisfying and attractive to the end user (more competitive). Such changes may add to or delete from the list of attributes and/or the list of functions defining a product. Such changes frequently require the process to be changed. NOTE: This process could result in a totally new product.

Productivity: The ratio of the output of a process to the input, usually measured in the same units. It is frequently useful to compare the value added to a product by a process to the value of the input resources required (using fair market values for both input and output).

Proof Checker: A program that checks formal proofs of program properties for logical correctness.
Prototyping: Evaluating requirements or designs at the conceptualization phase, the requirements analysis phase, or the design phase by quickly building scaled-down components of the intended system to obtain rapid feedback on analysis and design decisions.

Qualification Testing: Formal testing, usually conducted by the developer for the end user, to demonstrate that the software meets its specified requirements.

Quality: A product is a quality product if it is defect free. To the producer, a product is a quality product if it meets or conforms to the statement of requirements that defines

the product. This statement is usually shortened to "quality means meeting requirements." NOTE: Operationally, the word quality refers to products.

Quality Assurance (QA): The set of support activities (including facilitation, training, measurement, and analysis) needed to provide adequate confidence that processes are established and continuously improved in order to produce products that meet specifications and are fit for use.

Quality Control (QC): The process by which product quality is compared with applicable standards, and the action taken when nonconformance is detected. Its focus is defect detection and removal. This is a line function; that is, the performance of these tasks is the responsibility of the people working within the process.

Quality Improvement: To change a production process so that the rate at which defective products (defects) are produced is reduced. Some process changes may require the product to be changed.

Random Testing: An essentially black-box testing approach in which a program is tested by randomly choosing a subset of all possible input values. The distribution may be arbitrary or may attempt to accurately reflect the distribution of inputs in the application environment.

Regression Testing: Selective retesting to detect faults introduced during modification of a system or system component, to verify that modifications have not caused unintended adverse effects, or to verify that a modified system or system component still meets its specified requirements.

Reliability: The probability of failure-free operation for a specified period.

Requirement: A formal statement of: 1) an attribute to be possessed by the product or a function to be performed by the product; 2) the performance standard for the attribute or function; or 3) the measuring process to be used in verifying that the standard has been met.
Review: A way to use the diversity and power of a group of people to point out needed improvements in a product, or to confirm those parts of a product in which improvement is either not desired or not needed. A review is a general work product evaluation technique that includes desk checking, walkthroughs, technical reviews, peer reviews, formal reviews, and inspections.

Run Chart: A graph of data points in chronological order used to illustrate trends or cycles of the characteristic being measured, for the purpose of suggesting an assignable cause rather than random variation.

Scatter Plot (correlation diagram): A graph designed to show whether there is a relationship between two changing factors.

Semantics: 1) The relationship of characters or a group of characters to their meanings, independent of the manner of their interpretation and use. 2) The relationships between symbols and their meanings.

Software Characteristic: An inherent, possibly accidental, trait, quality, or property of software (for example, functionality, performance, attributes, design constraints, number of states, lines of branches).

Software Feature: A software characteristic specified or implied by requirements documentation (for example, functionality, performance, attributes, or design constraints).

Software Tool: A computer program used to help develop, test, analyze, or maintain another computer program or its documentation; e.g., automated design tools, compilers, test tools, and maintenance tools.

Standards: The measure used to evaluate products and identify nonconformance. The basis upon which adherence to policies is measured.

Standardize: Procedures are implemented to ensure that the output of a process is maintained at a desired level.

Statement Coverage Testing: A test method satisfying coverage criteria that require each statement to be executed at least once.
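The difference between statement coverage (each statement executed at least once) and branch coverage (each decision outcome taken at least once), both defined in this glossary, can be seen on a single `if` with no `else`. The function below is an illustrative example.

```python
def apply_discount(price, is_member):
    if is_member:
        price = price * 0.9   # the only statement inside the branch
    return price

# One test executes every statement, achieving statement coverage...
print(apply_discount(100, True))    # 90.0

# ...but branch coverage also requires a test that takes the False
# outcome of the decision, skipping the discount statement:
print(apply_discount(100, False))   # 100
```

This is why branch coverage is the stronger criterion: a defect in the untaken outcome (for example, a missing default) is invisible to a suite that only achieves statement coverage.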

Statement of Requirements: The exhaustive list of requirements that define a product. NOTE: The statement of requirements should document requirements proposed and rejected (including the reason for the rejection) during the requirements determination process.

Static Testing: Verification performed without executing the system's code. Also called static analysis.

Statistical Process Control: The use of statistical techniques and tools to measure an ongoing process for change or stability.

Structural Coverage: Requires that each pair of module invocations be executed at least once.

Structural Testing: A testing method where the test data is derived solely from the program structure.

Stub: A software component that usually minimally simulates the actions of called components that have not yet been integrated during top-down testing.

Supplier: An individual or organization that supplies inputs needed to generate a product, service, or information to an end user.

Syntax: 1) The relationship among characters or groups of characters, independent of their meanings or the manner of their interpretation and use; 2) the structure of expressions in a language; and 3) the rules governing the structure of the language.

System: A collection of people, machines, and methods organized to accomplish a set of specified functions.

System Simulation: Another name for prototyping.

System Testing: The process of testing an integrated hardware and software system to verify that the system meets its specified requirements.

Technical Review: A review of the content of the technical material being reviewed.

Test Bed: 1) An environment that contains the integral hardware, instrumentation, simulators, software tools, and other support elements needed to conduct a test of a logically or physically separate component. 2) A suite of test programs used in conducting the test of a component or system.

Test Case: The definition of test case differs from company to company, engineer to engineer, and even project to project. A test case usually includes an identified set of information about observable states, conditions, events, and data, including inputs and expected outputs.

Test Development: The development of anything required to conduct testing. This may include test requirements (objectives), strategies, processes, plans, software, procedures, cases, documentation, etc.

Test Executive: Another term for test harness.

Test Harness: A software tool that enables the testing of software components; it links test capabilities to perform specific tests, accepts program inputs, simulates missing components, compares actual outputs with expected outputs to determine correctness, and reports discrepancies.

Test Objective: An identified set of software features to be measured under specified conditions by comparing actual behavior with the required behavior described in the software documentation.

Test Plan: A formal or informal plan to be followed to assure the controlled testing of the product under test.

Test Procedure: The formal or informal procedure that will be followed to execute a test. This is usually a written document that allows others to execute the test with a minimum of training.

Testing: Any activity aimed at evaluating an attribute or capability of a program or system to determine whether it meets its required results. The process of exercising or evaluating a system or system component by manual or automated means to verify that it satisfies specified requirements or to identify differences between expected and actual results.

Top-down Testing: An integration testing technique that tests the high-level components first, using stubs for lower-level called components that have not yet been integrated and that simulate the required actions of those components.
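The stub and top-down testing entries above can be sketched together. This is a minimal, hypothetical example (the component names are invented): the high-level checkout logic is tested before the real tax service exists, with a stub minimally simulating the missing lower-level component.

```python
# Sketch of top-down integration testing with a stub (hypothetical names).

def tax_service_stub(amount):
    # Stub: returns a canned, minimal simulation instead of calling
    # the real (not yet integrated) tax service.
    return round(amount * 0.10, 2)

def checkout_total(amount, tax_lookup):
    # High-level component under test; the lower-level collaborator
    # is injected so a stub can stand in for it.
    return round(amount + tax_lookup(amount), 2)

# The high-level component can be exercised before the real service exists.
assert checkout_total(100.0, tax_service_stub) == 110.0
```

When the real tax service is integrated later, it replaces the stub without changing the component under test.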
Unit Testing: The testing done to show whether a unit (the smallest piece of software that can be independently compiled or assembled, loaded, and tested) satisfies its functional specification, or whether its implemented structure matches the intended design structure.

User: The end user who actually uses the product received.

V-Diagram (model): A diagram that visualizes the order of testing activities and their corresponding phases of development.

Validation: The process of evaluating software to determine compliance with specified requirements.

Verification: The process of evaluating the products of a given software development activity to determine correctness and consistency with respect to the products and standards provided as input to that activity.

Walkthrough: Usually, a step-by-step simulation of the execution of a procedure, as when walking through code line by line with an imagined set of inputs. The term has been extended to the review of material that is not procedural, such as data descriptions, reference manuals, specifications, etc.

White-box Testing: Testing approaches that examine the program structure and derive test data from the program logic. Also known as clear-box, glass-box, or open-box testing. White-box testing determines whether the program-code structure and logic are faulty. The test is accurate only if the tester knows what the program is supposed to do; he or she can then see whether the program diverges from its intended goal. White-box testing does not account for errors caused by omission, and all visible code must also be readable.

Different Kinds of Software Testing

What's Ad Hoc Testing? Testing in which the tester tries to break the software by randomly trying the software's functionality.

What's Accessibility Testing? Testing that determines whether software will be usable by people with disabilities.

What's Alpha Testing? Alpha testing is conducted at the developer's site, in a controlled environment, by the end user of the software.

What is Beta Testing? Testing the application after installation at the client's site.

What is Component Testing? Testing of individual software components (unit testing).

What's Compatibility Testing? In compatibility testing we test that the software is compatible with other elements of the system. Compatibility of a software product is a measure of its ability to work with different applications (e.g., browsers, different word editors) and in different operating systems, and to interact with other applications. For example, if you can use MS Access 2007 to open a file created in MS Access '97, that is backward compatibility; if you can use MS Access '97 to open a file created in MS Access 2007, that is forward compatibility. Mostly, though, compatibility testing is used to test how a web application is displayed in different browsers (also called cross-browser testing) and on different operating systems. Compatibility is an enhancement and not necessarily a key functionality of the application. Just keep in mind that software is created only according to the requirements of the client, so the client's business team (who come up with the requirements) reserves the right to limit the audience and the application's uses. Some examples: Check whether your web application is compatible with different browsers, and how it is displayed in different browsers on Windows and Mac. If you are working with software that creates a document, like Notepad or XML Spy Editor, it is important to check whether the documents created by your software can be opened in other, similar software. It is also important to check whether documents created by Notepad v3.0 can be edited by Notepad v2.0 (forward compatibility) and whether documents created by Notepad v2.0 can be edited by Notepad v4.0 (backward compatibility). If your application supports shortcut keys, it is great if it uses the same shortcut keys as other major software (CTRL+V for pasting text). Also, you can check whether you can copy data from your application and paste it into another application.

What is Concurrency Testing? Multi-user testing geared towards determining the effects of accessing the same application code, module, or database records. It identifies and measures the level of locking, deadlocking, and use of single-threaded code and locking semaphores.

What is Conformance Testing?
The process of testing that an implementation conforms to the specification on which it is based. Usually applied to testing conformance to a formal standard.

What is Context-Driven Testing? The context-driven school of software testing is a flavor of agile testing that advocates continuous and creative evaluation of testing opportunities in light of the potential information revealed and the value of that information to the organization right now.

What is Data-Driven Testing? Testing in which the action of a test case is parameterized by externally defined data values, maintained as a file or spreadsheet. A common technique in automated testing.

What is Conversion Testing? Testing of programs or procedures used to convert data from existing systems for use in replacement systems.

What is Dependency Testing? Examines an application's requirements for pre-existing software, initial states, and configuration in order to maintain proper functionality.
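The data-driven testing definition above can be sketched minimally. The function under test and the data table are hypothetical; the inline string stands in for the external file or spreadsheet the definition mentions.

```python
# Data-driven testing sketch: the test action is parameterized by
# externally defined data values. Here an inline CSV string stands in
# for a file or spreadsheet of cases (all names are illustrative).
import csv
import io

CASES = io.StringIO(
    "a,b,expected\n"
    "2,3,5\n"
    "-1,1,0\n"
    "10,0,10\n"
)

def add(a, b):
    # Trivial function under test.
    return a + b

failures = []
for row in csv.DictReader(CASES):
    got = add(int(row["a"]), int(row["b"]))
    if got != int(row["expected"]):
        failures.append(row)

assert failures == []  # every data row passed
```

Adding a new test case is then a matter of adding a row of data, not writing new test code, which is the main appeal of the technique.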

What is Depth Testing? A test that exercises a feature of a product in full detail.

What is Dynamic Testing? Testing software by executing it. See also Static Testing.

What is Endurance Testing? Checks for memory leaks or other problems that may occur with prolonged execution.

What is End-to-End Testing? Testing a complete application environment in a situation that mimics real-world use, such as interacting with a database, using network communications, or interacting with other hardware, applications, or systems if appropriate.

What is Exhaustive Testing? Testing which covers all combinations of input values and preconditions for an element of the software under test.

What is Gorilla Testing? Testing one particular module or piece of functionality heavily.

What is Installation Testing? Confirms that the application under test installs (and uninstalls or upgrades) correctly in its target environments and works as expected after installation.

What is Localization Testing? This term refers to making software specifically designed for a specific locality.

What is Loop Testing? A white-box testing technique that exercises program loops.

What is Mutation Testing? Mutation testing is a method for determining whether a set of test data or test cases is useful, by deliberately introducing various code changes ('bugs') and retesting with the original test data/cases to determine whether the 'bugs' are detected. Proper implementation requires large computational resources.

What is Monkey Testing? Testing a system or an application on the fly, i.e., just a few tests here and there to ensure the system or application does not crash.

What is Positive Testing? Testing aimed at showing software works. Also known as "test to pass". See also Negative Testing.

What is Negative Testing? Testing aimed at showing software does not work. Also known as "test to fail". See also Positive Testing.

What is Path Testing? Testing in which all paths in the program source code are tested at least once.
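The mutation-testing idea described above, deliberately introducing a code change and checking whether the existing test data detects it, can be sketched in a few lines. The function, its mutant, and the inputs are all hypothetical.

```python
# Mutation-testing sketch: introduce a deliberate change (a "mutant")
# and check that the existing test data distinguishes it from the
# original. All names are illustrative.

def is_adult(age):
    return age >= 18          # original code

def is_adult_mutant(age):
    return age > 18           # mutant: >= changed to >

test_inputs = [17, 18, 19]    # boundary-focused test data

# The mutant is "killed" if some input makes it behave differently
# from the original; that shows the test data is useful.
killed = any(is_adult(a) != is_adult_mutant(a) for a in test_inputs)
assert killed  # the input 18 exposes the mutant
```

Real mutation tools generate many such mutants automatically, which is why the definition notes the large computational cost.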

What is Performance Testing? Testing conducted to evaluate the compliance of a system or component with specified performance requirements. Often this is performed using an automated test tool to simulate a large number of users. Also known as "Load Testing".

What is Ramp Testing? Continuously raising an input signal until the system breaks down.

What is Recovery Testing? Confirms that the program recovers from expected or unexpected events without loss of data or functionality. Events can include shortage of disk space, unexpected loss of communication, or power-out conditions.

What is Re-testing? Re-testing is testing the functionality of the application again after a fix.

What is Regression Testing? Regression testing checks that changes in the code have not affected the existing working functionality.

What is Sanity Testing? A brief test of the major functional elements of a piece of software to determine whether it is basically operational.

What is Scalability Testing? Performance testing focused on ensuring the application under test gracefully handles increases in workload.

What is Security Testing? Testing which confirms that the program can restrict access to authorized personnel and that the authorized personnel can access the functions available to their security level.

What is Stress Testing? Stress testing is a form of testing used to determine the stability of a given system or entity. It involves testing beyond normal operational capacity, often to a breaking point, in order to observe the results.

What is Smoke Testing? A quick-and-dirty test that the major functions of a piece of software work. Originated in the hardware testing practice of turning on a new piece of hardware for the first time and considering it a success if it does not catch on fire.

What is Soak Testing? Running a system at high load for a prolonged period of time. For example, running several times more transactions in an entire day (or night) than would be expected in a busy day, to identify any performance problems that appear after a large number of transactions have been executed.

What's Usability Testing? Usability testing is testing for user-friendliness.

What's User Acceptance Testing? User acceptance testing determines whether the software is satisfactory to an end user or customer.

What's Volume Testing?

In volume testing, the system is subjected to a large volume of data.

Categorizing Defects by Severity

Is testing an art or a science? We have different answers to this question, but testing is certainly an art when it comes to determining the severity of a defect found in a system. The classification of a defect's impact is important for the following reasons: It helps to determine the efficiency of the test process. It helps to decide the priority of the defect, and hence improves the overall development process by fixing higher-priority defects first. The bug-tracking process can be made more effective if the severity of the defect is clearly defined. The focus of this paper is to provide some ideas on defect severity and its classifications.

What is Defect Severity? A defect is a product anomaly or flaw, a variance from the desired product specification. The classification of a defect based on its impact on the operation of the product is called defect severity.

Defect Severity or Bug Severity? A bug is the matured term for a defect. A defect is usually referred to as a bug only if it affects the operation of the system and negatively impacts the user of the system, while a defect by itself may not have any impact on the operation of the system. In other terms, all bugs are defects but not all defects are bugs. Since severity classification also includes those anomalies which don't have any impact on the operation of the system (like cosmetic errors), it is more appropriate to speak of defect severity rather than bug severity.

Ask yourself the following before determining the severity. These questions allow you to decide the measure of severity yourself:

Does the system allow me to work even after the defect occurs?
Does the system recover from the defect by any means?
If the defect is recoverable, can the system recover on its own, or is external effort needed to recover from the defect?
Did I check whether the same defect is reflected in all other related sections (or the entire system)?
Can I repeat the defect on another system having the same configuration (O/S, browsers, etc.) as the system where I found the defect?
Can I repeat the defect in other configurations as well?
Does the defect affect only a particular category of users, or all users?
How frequently does the defect occur?
Which inputs trigger the defect?

The severity level increases if the answer to some of the above questions is 'Yes' and to some others 'No'. For example, if the answer to question 1 is 'No', then further testing of the system is not possible and hence the severity is high. Also, the defect should be generalized as far as possible; i.e., after you find the defect, try to find out whether the defect is repeated across browsers, operating systems, etc.

Some tips on determining the severity:

Decide the impact. For some defects it is obvious how to decide the severity, for example "HTTP error occurs when navigating to a particular screen." Sometimes a minor defect repeats in all sections, or the frequency of such a defect is high. In such cases the impact of the defect is greater from the user's perspective, even though it is a minor defect; hence such defects get higher severity.

Isolate the defect. Isolating the defect helps to find its depth of impact. Analyze which class of inputs triggers the defect. Make sure whether the defect occurs only with a particular sequence of operations, or list out the other sequences which cause the defect.

Defect Classification by Severity

The impact of defects can be classified into four severity categories: Fatal, Major, Minor, and Cosmetic.

Fatal defects are defects which result in the failure of the complete software system, of a subsystem, or of a software unit, so that no work or testing can be carried out after the occurrence of the defect.

Major defects also cause failure of the entire system or part of it, but there are processing alternatives which allow further operation of the system.

Minor defects do not result in failure but cause the system to produce incorrect, incomplete, or inconsistent results, or impair the system's usability.

Cosmetic defects are small errors that do not prevent or hinder functionality.

Following are examples of the types of defects which fall under each category.

Fatal Defects

Functionality does not permit further testing.
Runtime errors, such as JavaScript errors.
Functionality missed out / incorrect implementation (major deviation from requirements).
Performance issues (if specified by the client).
Browser incompatibility and operating system incompatibility issues, depending on the impact of the error.
Dead links.
Recursive loops.

Major Defects

Functionality incorrectly implemented (minor deviation from requirements).
Performance issues (if not specified by the client).
Mandatory validations for mandatory fields.
Images or graphics missing in a way that hinders functionality.
Front-end / home page alignment issues.

Minor Defects

Screen layout issues.
Spelling mistakes / grammatical mistakes.
Documentation errors.
Page titles missing.
Alt text for images.
Background color for pages other than the home page.
Default value missing for required fields.
Cursor focus and tab flow on the page.
Images or graphics missing in a way that does not hinder functionality.

Cosmetic Defects

Suggestions.
GUI image color, etc.
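The four-level classification above can be captured in code, for example to sort a defect list into fix order. The severity categories and sample defects come from the text; the numeric ranks and function name are assumptions of this sketch.

```python
# Sketch of the Fatal/Major/Minor/Cosmetic classification, used to
# order defects for fixing. Ranks and example defects are illustrative.

SEVERITY = {"Fatal": 4, "Major": 3, "Minor": 2, "Cosmetic": 1}

EXAMPLES = {
    "Dead link": "Fatal",
    "Performance issue not specified by client": "Major",
    "Spelling mistake": "Minor",
    "GUI image color suggestion": "Cosmetic",
}

def fix_order(defects):
    # Higher-severity defects should be fixed first.
    return sorted(defects, key=lambda d: SEVERITY[EXAMPLES[d]], reverse=True)

order = fix_order(list(EXAMPLES))
assert order[0] == "Dead link"
assert order[-1] == "GUI image color suggestion"
```

This mirrors the paper's point that severity classification feeds directly into defect priority.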

What is the difference between bug tracking and bug reporting?

Bug tracking is simply tracking bugs! How? Tracking is the process of addressing programming errors that have already been found. It involves recording and reviewing a bug, and recording the required fix; you can even decide on its budget and schedule. You're gonna find that bug tracking is good for managing bugs and dealing with 'em. With bug tracking you can control similar bugs and find 'em more easily. It also lets you know what has already been done with a given bug (you could call it a bug history of sorts). So every aspect of a bug gets discussed, then recorded.

Thus it can act as a neat reference for addressing bugs, help you find similar disasters a lot more easily, or provide better solutions for the current bug. By the way, a general sheet for a tracked bug could look something like this:

Affected resource + version
Kind of bug
Things that should be changed (and how) to fix the bug
A budget
A generic schedule/time for the fix to be released
A small description of the bug
A small description of the finding process
....

Of course, there is a lot more you could add to that sheet (for example, even the name of the finder of the bug). Besides that, some vendors or communities have a Bug Tracking System (BTS) or Issue Tracking System (ITS) which works more or less on the basis I mentioned above.

And bug reporting? It's somewhat a branch of the bug tracking process. When you find bugs you are going to report them, and the responsible team is going to gather more information around each one (by contacting you as the finder, calling the vendor to share more details on the system, or just analyzing the situation within the team itself, and/or maybe through many other possible ways), and keep at it until a sheet is created for that bug; then you'll have a great resource about that bug, describing it in all the aspects I mentioned above. That was the difference between bug reporting and bug tracking!
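The "general sheet for a tracked bug" described above can be represented as a simple record. The field names paraphrase the list in the text; the values are made up for illustration.

```python
# Sketch of a tracked-bug sheet as a plain record; field names follow
# the list in the text, and every value here is invented.
tracked_bug = {
    "affected_resource_and_version": "login module v1.2",
    "kind_of_bug": "crash",
    "required_fix": "validate empty password before submit",
    "budget": "4 hours",
    "schedule": "next patch release",
    "description": "application crashes on empty password",
    "finding_process": "found during ad hoc testing",
    "finder": "tester name",
}

# A tracking system would store one such record per bug and keep
# its history as the fields change.
assert "kind_of_bug" in tracked_bug
```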

BUG REPORTS THAT MAKE SENSE

Introduction

After a defect has been found, it must be reported to development so that it can be fixed. Much has been written about identifying defects and reproducing them, but very little about the reporting process and what developers really need. The purpose of this paper is to provide a guideline for what information should be included in a report, and how those requirements will vary based on the type of bug and the type of function.

Overview of Bugs

No matter what a system does, what language it's written in, what platform it's run on, or whether it's client/server based or not, its basic functions are the same. They are broken down into the following categories: Entry, Storage, Output, Process.

As the interaction between data and the system increases, so usually does the severity of the bug, and the detail needed in a report. Bug severity can be categorized as follows: Cosmetic; Inconvenience; Loss of Function; System crash or hang; Loss of Data.

Cosmetic bugs are the simplest bugs to report, and affect the system the least. They are simply instances where things look wrong: spelling errors, screen anomalies, and the like.

Bugs classified as an inconvenience are just that: something that makes the system harder to use. These are slightly more nebulous, since part of their effect is subjective. This also makes it harder to describe what the actual problem is.

When a bug results in a loss of function, reporting becomes a bit more complicated and the urgency to fix the bug is greater. These bugs do not affect the data, but they mean that a process is useless until it is fixed. Because of this, the report again becomes more complicated.

Bugs that cause the system to crash or hang can be the hardest to reproduce, and therefore the hardest to adequately describe. If you experience a crash or hang in testing, it is imperative to see whether you can reproduce the problem, documenting all the steps taken along the way. On these occasions it is also important to include the data used in causing the system to crash or hang.

The final classification is the worst: bugs that result in the loss of data. Data is the heart of almost every system, and anything that threatens the integrity of that data must be fixed as quickly as possible. Therefore, more than any other bug type, it must be documented as thoroughly as possible.

Reporting Guidelines

The key to making a good report is providing the development staff with as much information as necessary to reproduce the bug. This can be broken down into 5 points:

1) Give a brief description of the problem.
2) List the steps that are needed to reproduce the bug or problem.
3) Supply all relevant information such as version, project, and data used.
4) Supply a copy of all relevant reports and data, including copies of the expected results.
5) Summarize what you think the problem is.

When you are reporting a defect, the more information you supply, the easier it will be for the developers to determine the problem and fix it. Simple problems can have a simple report, but the more complex the problem, the more information the developer is going to need. For example, cosmetic errors may only require a brief description of the screen, how to get to it, and what needs to be changed.

However, an error in processing will require a more detailed description, such as:

1) The name of the process and how to get to it.
2) Documentation on what was expected (expected results).
3) The source of the expected results, if available. This includes spreadsheets, an earlier version of the software, and any formulas used.
4) Documentation on what actually happened (perceived results).
5) An explanation of how the results differed.
6) Identification of the individual items that are wrong.
7) If specific data is involved, a copy of the data both before and after the process should be included.
8) Copies of any output should be included.

As a rule, the detail of your report will increase based on a) the severity of the bug, b) the level of the processing, and c) the complexity of reproducing the bug.

Anatomy of a Bug Report

Bug reports need to do more than just describe the bug. They have to give developers something to work with so that they can successfully reproduce the problem. In most cases, the more (correct) information given, the better. The report should explain exactly how to reproduce the problem and exactly what the problem is. The basic items in a report are as follows:

Version: This is very important. In most cases the product is not static; developers will have been working on it, and if they've found a bug it may already have been reported or even fixed. In either case, they need to know which version to use when testing out the bug.

Product: If you are developing more than one product, identify the product in question.

Data: Unless you are reporting something very simple, such as a cosmetic error on a screen, you should include a dataset that exhibits the error. If you're reporting a processing error, you should include two versions of the dataset: one from before the process and one from after. If the dataset from before the process is not included, developers will be forced to try to find the bug based on forensic evidence. With the data, developers can trace what is happening.

Steps: List the steps taken to recreate the bug. Include all proper menu names; don't abbreviate and don't assume anything. After you've finished writing down the steps, follow them: make sure you've included everything you type and do to get to the problem. If there are parameters, list them. If you have to enter any data, supply the exact data entered. Go through the process again and see whether any steps can be removed. When you report the steps, they should be the clearest possible steps to recreating the bug.

Description: Explain what is wrong. Try to weed out any extraneous information, but detail what is wrong. Include a list of what was expected. Remember to report one problem at a time; don't combine bugs in one report.

Supporting documentation: If available, supply documentation. If the process is a report, include a copy of the report with the problem areas highlighted. Include what you expected. If you have a report to compare against, include it and its source information (if it's a printout from a previous version, include the version number and the dataset used). This information should be stored in a centralized location so that developers and testers have access to it. The developers need it to reproduce the bug, identify it, and fix it. Testers will need this information for later regression testing and verification.

Organization

Organization is one of the most important tools available. If your reporting process is organized and standardized, it will serve you well. Take the time to develop a standardized method of reporting, and train testers, QA, and beta-testers in its use. If at all possible, use a tracking system for your defect/development tracking, and make sure that everyone using it understands the fields and their importance. Document your data samples to match up with the bugs/defects reported. These will be useful both to development when fixing the bug and to Testing/QA when it comes time for regression testing.

Summary

A bug report is a case against a product. In order to work, it must supply all the information necessary not only to identify the problem but also to fix it. It is not enough to say that something is wrong; the report must also say what the system should be doing. The report should be written in clear, concise steps, so that someone who has never seen the system can follow them and reproduce the problem. It should include information about the product, including the version number and what data was used. The more organized the information provided, the better the report will be.
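The basic report items described above (version, product, data, steps, description, supporting documentation) can be captured as a simple record. This is a sketch, not a real bug-tracker schema; the field names follow the text and all values are invented.

```python
# Sketch of the "anatomy of a bug report" as a record; every value
# below is made up for illustration.
from dataclasses import dataclass, field

@dataclass
class BugReport:
    version: str            # which build exhibits the bug
    product: str            # which product, if there is more than one
    steps: list             # exact steps to recreate the bug
    description: str        # what is wrong, and what was expected
    data: str = ""          # dataset before/after, if applicable
    attachments: list = field(default_factory=list)  # supporting docs

report = BugReport(
    version="2.0.1",
    product="OrderEntry",
    steps=["Open File > Import", "Select orders.csv", "Click OK"],
    description="Import fails; expected 120 rows loaded, got 0.",
)

# Version and steps are the items the paper treats as indispensable.
assert report.version and report.steps
```

Storing reports in a structure like this is one way to get the standardized, centralized record the Organization section calls for.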

Software Testing Interview Questions

1. What are your roles and responsibilities as a tester?
A tester tries to break the application, and needs the ability to take the point of view of the customer, a strong desire for quality, and attention to detail.

1a. Explain the pre-testing phase, acceptance testing, and the testing phase.
Pre-testing phase: review the requirements document, set up the MR tool, write the Test Plan, collect the test data, install the test automation tool, and set up the database, web browser, and web server.
Acceptance testing: tasks in the acceptance testing phase are checking the product against the test entrance criteria and conducting basic feature tests for the product.
Testing phase: tasks in the testing phase include running the tests from the Test Plan, entering bugs in the MR tool, working with the developers to resolve the bugs, running regression tests, collecting test metrics, and estimating whether the test effort is following the schedule in the Master Test Plan.

2. Explain the software development life cycle.
Marketing requirements; business requirements; system requirements; analysis; high-level and detailed design; coding; unit testing; integration testing; system testing; user acceptance; production problem solutions.

3. What is the master test plan? What does it contain? Who is responsible for writing it?
A master test plan is a test strategy document based on the quality method of operations. It contains details of the test environment: hardware/software and operating system; MR tools; timelines; HR resources, testers, skill sets, assignment of priorities, cycles of testing; entrance and exit criteria for each phase of testing; code migration from the development environment to the test environment; budget; and automation testing, i.e., which tool to use for regression tests and performance tests. The test lead or test manager is responsible for writing it.

4. What is a test plan? Who is responsible for writing it? What does it contain?
It is a document that describes the objectives, scope, approach, and focus of a software testing effort; testing priorities, scope of testing, objective of testing; the test environment (hardware, software, tools, personnel); test cases that test different requirements and define what to test; precondition data that defines the data required to test each case; test procedures that define how to test each case; and data input (scenario-specific test data). The tester is responsible for writing the test plan. It contains: title, software version, database requirements, test tools, requirement #, test case, test precondition, test procedures, expected result, actual result, defect ID, and remarks (pass/fail).

5. What different types of test cases did you write in the test plan?
A test case is a document that describes an input, action, or event and an expected response. Test cases are written in the test plan to test the application's features: functionality, HTML link testing, XML testing, CGI component testing, and the usability of the application. Functional cases test the limits of input, output, tables, and files; other cases test the storage capacity of the system, the performance of the application under load conditions, and scenarios for stress testing, volume testing, security testing, recovery testing, installation testing, error testing, and configuration testing.

6. What are test drivers?
A test driver is a very simple program which accepts data, executes the software under test, stores the results, and compares the results with the expected results. Test drivers can be created with tools such as JTest or WinRunner.

7. Why is the test plan a controlled document?
After preparing a test plan, a walkthrough is conducted with the development team and the system analyst; only then can the test plan go further. Hence it is a controlled document.
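The test-driver answer in question 6 can be sketched in a few lines: accept data, execute the software under test, store the results, and compare them with expected results. All names here are illustrative, not from any real tool.

```python
# Minimal test driver in the sense of question 6 (hypothetical names).

def software_under_test(x):
    # Stand-in for the program being tested.
    return x * x

def run_driver(cases):
    results = []
    for data, expected in cases:
        actual = software_under_test(data)                  # execute
        results.append((data, actual, actual == expected))  # store + compare
    return results

# Two passing cases and one deliberately wrong expectation.
outcome = run_driver([(2, 4), (3, 9), (4, 15)])
assert [ok for _, _, ok in outcome] == [True, True, False]
```

Commercial tools automate the same loop at a much larger scale, adding reporting, setup/teardown, and data management.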

8. What information do you need to formulate a test? Business procedure documents. 9. What template did you use to write the test plan? Excel, Lotus Notes, Word documents, etc. 10. What is an MR (Modification Request)? An MR is a bug-tracking tool through which the tester communicates with others (the dev team) in the system and keeps a record of the history of defects; it is used to keep track of bugs; the system management team can monitor the progress of bug fixing; it helps everyone analyze the software quality, because through it we can see how many severe bugs are coming out of the test phase. 11. Why do you write an MR? To keep track of defects until they are resolved. 12. What information does it contain? Severity / due date / developer / assigned to / status / release version / platform / module / person responsible for fixing the bug / description of the bug resolution. 13. Give me a few examples of the MRs you wrote. You entered data in a field & it crashed the application; you did not enter data in a required field & the system let you go further. A severity 1 MR is to be resolved within 24 hours. 14. What is white-box/unit testing? It is based on knowledge of the internal logic of the application's code; tests are based on coverage of code statements. Unit testing is the most micro scale of testing; it may require developer test driver modules or test harnesses. White-box and unit testing are almost the same. White-box testing requires thorough knowledge of the code: you examine the internal design of the program, and the tester needs detailed knowledge of its structure; extreme testing (9/99999); exceptional cases (a book returned at a bookstore – a 20% discount when purchased; when you return it, did they deduct the 20%?). When unit tests are done on a white box they are essentially path tests: the idea is to focus on a relatively small segment of code & aim to exercise a high percentage of the internal paths. A path is an instruction sequence that threads through the program from initial entry to final exit. 15. What is integration testing?

Testing combined parts of the application to determine whether they function together correctly. 16. What is black-box testing? Testing the functionality of the application from a business point of view – the user's point of view. 17. What knowledge do you require to do white-box, integration, and black-box testing? White box – understand the programming language and the application procedures. Integration – requires detailed knowledge of the product internals and an understanding of the programming language as well, because the tester needs to combine all the units of the application to determine whether they function together correctly. Black box – just observing the output of the system. 18. How many testers were on the test team? It depends on the project; on the last project there were 10 testers at the beginning & the team later dwindled to 6 at the end of the project. There were 2 automation and performance testers on the project. 19. What was the test team hierarchy? Project manager, QA lead, design team, business analysts, and QA testers. 20. Which MR tool did you use to write MRs? We used an MR tool to write MRs; an MR includes the type of error, the subsystem the error is related to, the severity of the error, and a short description of the error; Lotus Notes. 21. What is regression testing? Retesting of scenarios after an MR is fixed, from the point of view of enhancements of the software; retesting after fixes or modifications of the software or its environment. 22. Why do we do regression testing? Because it is important to verify that bug fixes did not break some other part of the system. 23. How do we do regression testing? We use an automation tool. One complete set of regression tests must be run for the entire product before it leaves system testing.
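The replay-and-compare idea behind automated regression – re-run a fixed suite and check each output against results recorded from the last good build – can be sketched as follows (a Python illustration only; the application stub and the "golden" values are invented):

```python
# Regression-check sketch: replay recorded inputs against the current build
# and flag any case whose output no longer matches the previously captured
# ("golden") result -- i.e., a fix elsewhere broke this part of the system.

def current_build(x):
    # Stand-in for invoking the application under test.
    return x + 1

# Outputs captured from the last known-good build.
golden = {0: 1, 10: 11, -3: -2}

def regression_suite(golden_results, run):
    """Return the inputs whose output changed since the golden run."""
    return [inp for inp, expected in golden_results.items()
            if run(inp) != expected]

print("regression failures:", regression_suite(golden, current_build))
```

Capture/playback tools such as WinRunner or QTP automate exactly this loop at the GUI level: the recorded script is the input, and checkpoints hold the golden results.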

Capture your tests and play them back whenever required. 24. What are the different automation tools you know? QTP, WinRunner, Silk, Rational Robot. 25. What is the difference between a regression automation tool and a performance automation tool? A regression automation tool can play back scripts and adjust the playback speed; it tests the functionality of the application under a single-user load. A performance test tool's main purpose is to create virtual users, put load on the system, review time under load conditions, analyze response times, generate reports & graphs, and monitor server resources under load conditions. 26. What is client-server architecture? All shared resources are present on a server machine; client machines use the resources on the server: files, printers. C/S architecture is good for scalability with an increasing # of users; scaling is done horizontally (clients) & vertically (disk, memory). Client: a computer that requests a service. Server: a computer that provides the service. In a network, the client/server model provides a convenient way to interconnect programs that are distributed efficiently across different locations. 27. What are three-tier and multi-tier architectures? Three-tier: presentation server (GUI), DB server (data), application server (logic). Multi-tier – many servers: web server, application server, database server, gateway, legacy server. 28. What is the Internet? The Internet, sometimes called simply "the Net," is a worldwide system of computer networks – a network of networks in which users at any one computer can, if they have permission, get information from any other computer (and sometimes talk directly to users at other computers). 29. How is an intranet different from client-server? Intranet – the GUI resides on the server; we use browsers, HTML, Java applets (executable code written for the web in Java); it is shareable. C/S – the GUI is client specific, not shareable. 30. What is different about web testing compared to client-server testing?
Web – navigation is not well defined: it needs a lot of testing; the # of users is not predictable, therefore performance testing is an issue; the environment is unknown; use of URLs & hyperlinks.

Client/server – navigation is well defined; the # of users is known; the environment is known; navigation is through menus or push buttons. 31. What is a bytecode file? A bytecode file is the compiled, platform-independent output of the Java compiler (a .class file); it contains the instructions that the Java Virtual Machine executes.
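Java's .class files are the classic bytecode example, but the idea – source text compiled into a compact intermediate form that a virtual machine executes – is easy to poke at interactively, since Python compiles to bytecode too (a sketch, not Java-specific):

```python
# Bytecode sketch: source text is compiled into a compact intermediate form
# that a virtual machine executes. Java stores it in .class files; Python
# exposes its equivalent via the compile() builtin and the dis module.
import dis

code = compile("x = 1 + 2", "<demo>", "exec")
print(type(code.co_code))   # the raw bytecode is just a bytes object
dis.dis(code)               # human-readable listing of the instructions
```

Either way, the virtual machine (JVM or the Python interpreter) reads these instructions instead of native machine code, which is what makes the compiled file portable across platforms.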

32. What is an applet? An applet is a program written in the Java programming language that can be included in an HTML page, much in the same way an image is included. 33. How is an applet different from an application? No HTML file is needed for an application to run; an application contains a main class. Stand-alone programs are called Java applications (console applications, windowed Java applications). A Java application contains a main class and is an independent application, not embedded in HTML code. 34. What is the Java Virtual Machine? It gives you additional tools for solving programming problems in Java. It is the software implementation of a "CPU" designed to run compiled Java code. 35. What is ISO 9000? Here's how it works: you decide that you need to develop a quality system that meets the ISO 9000 standards. You choose to follow this path because you feel the need to control the quality of your products and services, to reduce the costs associated with poor quality, or to become more competitive; or you choose this path simply because your customers expect you to do so or because a regulatory body has made it mandatory. 36. What is GMO? GMO is a set of processes and guidelines that software systems projects must follow to comply with ISO 9001. It contains seven phases: initiate the project, design the system, build the system, test the system, deploy the system, and support the system. 37. What are the different phases of the software development cycle?

Project proposal/initiation, requirements analysis, functional specification & design, development, document preparation, integration testing, test, user acceptance testing, production rollout, support & maintenance, updates, retesting. 38. How do you help developers track faults in the software? By running different test cases for the same features using different combinations of data. Use your imagination and ask, "How can I break the system?", and use some positive and negative scenarios. 39. What are positive scenarios? Testing the program the way it is designed to work. 40. What are negative scenarios? Invalid data, sequences, objects, syntax, limits, and invalid configuration parameters. 41. What are individual test cases? An individual test is for a single feature or requirement. 42. What are workflow test cases? Scenarios which verify a related sequence of features in the application corresponding to a unit of work: creating a quotation, creating an order, shipping the order, billing the order, posting the payment. Workflow tests involve a larger number of areas than individual tests. Testing a workflow too early may be inefficient if many problems are found, since that requires the entire workflow to be re-tested. Therefore, mix workflow scenarios with individual tests. 43. If we have executed individual test cases, why do we do workflow scenarios? Individual test cases focus on individual features; we should also test related sequences of features to find the errors that appear when cases are mixed. 44. What is the object-oriented model? It allows reusability of code; a blueprint is defined using classes & then you create objects of those classes. You can create many subclasses, which can inherit the methods & properties of the superclasses. In OOM, data is hidden/encapsulated within the object; therefore only calling its methods can change its data – the object changes its own data. 45. What is the procedural model?

Flow is top-down; data is in one section, commands in another section; waterfall technology. In the object-oriented model it is multi-directional. 46. What is an object? In object-oriented programming, objects are the things you think about first in designing a program, and they are also the units of code that are eventually derived from the process. In between, each object is made into a generic class of object, and even more generic classes are defined so that objects can share models and reuse the class definitions in their code. Each object is an instance of a particular class or subclass, with the class's own methods or procedures and data variables. An object is what actually runs in the computer. 47. What is a class? An object is defined via its class, which determines everything about the object. In object-oriented programming, a class is a template definition of the methods and variables in a particular kind of object. Thus, an object is a specific instance of a class; it contains real values instead of variables. The class is one of the defining ideas of object-oriented programming. Among the important ideas about classes are: a class can have subclasses that can inherit all or some of the characteristics of the class; in relation to each subclass, the class becomes the superclass; subclasses can also define their own methods and variables that are not part of their superclass. The structure of a class and its subclasses is called the class hierarchy. 48. What is encapsulation? Give one example. In general, encapsulation is the inclusion of one thing within another thing so that the included thing is not apparent. Decapsulation is the removal, or making apparent, of a thing previously encapsulated. In object-oriented programming, encapsulation is the inclusion within a program object of all the resources needed for the object to function – basically, the methods and the data.
The object is said to "publish its interfaces." Other objects adhere to these interfaces to use the object without having to be concerned with how the object accomplishes it. The idea is "don't tell me how you do it; just do it." An object can be thought of as a self-contained atom. The object interface consists of public methods and instantiated data. 2) In telecommunication, encapsulation is the inclusion of one data structure within another structure so that the first data structure is hidden for the time being. For example, a TCP/IP-formatted data packet can be encapsulated within an asynchronous transfer mode (ATM) frame (another kind of transmitted data unit). Within the context of transmitting and receiving the ATM frame, the encapsulated packet is simply a stream of bits between the ATM data that describes the transfer. 49. What is inheritance? Give an example. In object-oriented programming, inheritance is the concept that when a class of object is defined, any subclass that is defined can inherit the definitions of one or more general classes. This means for the programmer that an object in a subclass need not carry its own definition of the data and methods that are generic to the class (or classes) of which it is a part. This not only speeds up program development; it also ensures an inherent validity to the defined subclass object (what works and is consistent about the class will also work for the subclass). 50. What is polymorphism? Give an example. In object-oriented programming, polymorphism (from the Greek meaning "having multiple forms") is the characteristic of being able to assign a different meaning to a particular symbol or "operator" in different contexts. For example, the plus sign (+) can operate on two objects such that it adds them together (perhaps the most common form of the + operation) or, as in Boolean searching, a + can indicate a logical "and" (meaning that both words separated by the + operator must be present in order for a citation to be returned). In another context, the + sign could mean an operation to concatenate the two objects or strings of letters on either side of it. A given operator can also be given yet another meaning when combined with another operator. For example, in the C++ language, "++" following a variable can mean "increment this value by 1". The meaning of a particular operator is defined as part of a class definition. Since the programmer can create classes, the programmer can also define how operators work for this class of objects; in effect, the programmer can redefine the computing language. 51. What are the different types of MRs? There are several types of MRs. The most common ones are: a) SOFTWARE – when you find a bug in the software; b) DOCUMENTATION – when the installation guide or the learning support material is wrong; c) CONFIGURATION – when the system fails due to bad or missing configuration parameters; d) ENHANCEMENTS – when a tester has a suggestion for improving a specific feature. 52. What are test metrics?
A measuring tool (a template prepared in Excel) used during the execution phase: total # of test cases; # of test cases executed so far; # of test cases passed; # of test cases failed; # of test cases deferred to the next release. It measures the quality of the product. 53. What is the use of metrics? The test metrics required by GMO are: 1. total tests; 2. tests run; 3. tests passed; 4. tests failed; 5. tests deferred; 6. tests passed the first time. 54. How do we decide which automation tool we are going to use for regression testing?

Plan a test strategy for how to automate the testing. Decide which test cases will be executed for regression testing – not all test cases will be executed during regression testing. Decide which test cases are worth automating – some test cases require more time to automate than to execute manually, due to the type of objects on the window, e.g. custom objects, drawing objects, etc. 55. What is the impact of the environment on the actual results of performance testing? The environment plays a role in the results of the tests, particularly in the area of performance testing. Some of the areas you cannot control: other traffic on the network; other processes running on the server; other processes running on the DBMS. 56. What are stress testing, performance testing, security testing, recovery testing, and volume testing? Stress testing – its goal is to demonstrate that the program cannot handle huge amounts of data, even though it was developed to do so (this is especially necessary for real-time systems). Performance – timings for both read and update transactions should be gathered to determine whether system functions are being performed in an acceptable timeframe. This should be done stand-alone and in a multi-user environment. We test under load conditions that the server doesn't crash and that functionality doesn't break, we test the response time of the system under load conditions, and we do performance tuning & testing of ABAP/4 programs & function modules. Security – the system should be secure from unauthorized use and unauthorized data access; it should ensure confidentiality, integrity, and availability; outsiders cannot view, edit, or delete data; the system is secure from hackers; test at 3 levels; check whether the internet protocols are secure; encryption & decryption, SSL, digital certificates.
Recovery – a system should be tested to see how it responds to errors and abnormal conditions such as a system crash or loss of a device, communication, or power. After the system crashes we should recover gracefully: we don't lose data, and there are no duplicate records, no broken records, no gaps, and no bad data. Volume – large volumes of data should be fed to the system to make sure it can correctly process that amount. Systems can often respond unpredictably when large volume causes files to overflow. 57. What criteria do you follow to assign severity and a due date to an MR? If the bug you find in the application halts your testing so that you cannot move on, or breaks the system, it is severity 1 & you assign the highest priority & a due date of 24 hours. If the bug doesn't halt the system but is critical to the business, assign severity 2; it should be fixed within this release, but you assign the due date by negotiating with the developer. After the bug is fixed & you have tested the fix, you have to do regression testing. 58. What is user acceptance testing?

When the product enters system test, it has completed integration test and must meet the integration test exit criteria. Check the integration exit criteria and the product test entrance criteria in the master test plan or test strategy document. Make sure all defects are found in functional / integration / regression testing. 59. What are build, version, and release? A build is when different modules of the application are linked together (e.g. 5 modules of the application) – Build 1; when all the bug fixes for Build 1 are in – Build 2. A version reflects any minor change to the software. 60. What are the entrance and exit criteria in the system test? The entrance/exit criteria into and out of each testing phase are written in the master test plan. System test entrance criteria: integration test exit criteria have been successfully met; all installation documentation is completed; all shippable software has been successfully built; the system test plan is baselined by completing the walkthrough of the test plan; the test environment is set up; all severity 1 MRs of the integration test phase are closed. System test exit criteria: all the test cases in the test plan have been tested; all the test cycles have been executed; all MRs or problems are closed, rolled, or deferred; a regression testing cycle has been executed after closing the MRs; all documents are reviewed, finalized, and signed off; any problem areas that require a fix or are under investigation are included in the current release. 61. What are the roles of the test team leader?
The team leader is responsible for formulating the master test plan; deciding on the test environment; collecting master data; keeping track of testing schedules; making sure all actions are completed within the timelines; ensuring testers have conducted walkthroughs of the test plans; managing the effort of transporting software from development to the test environment; attending MR meetings; making sure MRs are fixed within the timelines; sharing test metrics with other team members; resource identification & allocation; the budget; and the defect-tracking repository. 62. What are the roles of a Sr. Test Engineer? A Sr. Test Engineer should know the entire process of the application and the procedure for testing the application; besides that, he or she must know system analysis & design. 63. How do you decide which functionality of the application is to be tested? Go by the requirements document. Always test from the beginning of the application.

64. If there are no requirements, how will you write your test plan? Review the design document; schedule meetings with the developers and system design people & construct test cases. Consider using test cases as a means of documenting the system's behavior: users can review the test cases ("is this what we want to happen?"), and developers can use the test cases as a checklist for what they must build and what they should test before release. 65. What is smoke testing? Going through different parts of the application and checking them in a quick format. 66. What is soak testing? It requires automated scripts. During infrastructure testing, automated tests of the server CGI scripts can be left running to check the system's response to failures. 67. What is precondition data? Data required to be set up in the system before test execution; for example, master data which should be present in the application in order to test its functionality – e.g. products in a GMO application; you cannot drive the application without this. For example, to delete an order you need an existing order; you need pricing information before booking a flight. 68. What are the different documents in QA? Test plan, master test plan, URD/BRD/FRD/DDD/TSD or SSD (user requirements, business requirements, functional requirements, detailed design, tech spec/system spec), RTS – review tracking sheet. 69. How do you rate yourself in software testing? I think I have the skills and the necessary experience to do an outstanding job. 70. With all your skills, do you prefer to be a developer or a tester? And why? My strengths are in QA and management skills & I would like to be in a test team lead position; testing gives more exposure to systems and development and detailed knowledge of a particular area, and as a tester you are also a developer, because you develop automated scripts. 71. What are the best websites that you frequently visit to upgrade your QA skills? mercuryinteractive.com, sqe.com, sunguru.com.
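The precondition-data idea in question 67 – set up the master data first, verify it exists, and only then exercise the feature – can be sketched as follows (a Python illustration; the in-memory order store and the names are invented):

```python
# Precondition-data sketch: the test cannot run ("to delete an order you
# need an existing order") until the master data exists, so set it up
# first and verify the precondition before exercising the feature.

orders = {}   # stand-in for the application's data store

def setup_precondition():
    # Master data the test depends on (hypothetical order record).
    orders["ORD-1"] = {"product": "book", "qty": 1}

def delete_order(order_id):
    if order_id not in orders:
        raise KeyError("precondition missing: order does not exist")
    del orders[order_id]

setup_precondition()
assert "ORD-1" in orders          # precondition satisfied
delete_order("ORD-1")
print("order deleted:", "ORD-1" not in orders)
```

Modern test frameworks formalize this as setup code or fixtures, but the principle is exactly what the answer describes: load the master data before the test procedure runs.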

72. Do the words "Prevention" and "Detection" sound familiar? Explain. Example of an error detection rate: 100 tests are run on the first day and 400 the next day; 10 errors are found the first day and 20 the next day; the error detection rate is 10% for the first day and 5% for the next day. Quality assurance deals with monitoring & software quality assurance deals with prevention. Detection – the tester detects the bugs; prevention – the developer. QA deals with guidelines, standards, methodology, and configuration management. 73. Is defect resolution a technical skill or an interpersonal skill from the QA viewpoint? An interpersonal skill. 74. Can you automate all the test scripts? Explain. No, it is not possible to automate all the test scripts. Routine & repetitive tasks can be automated; you can automate the scripts which are eligible for regression testing, where there is repetition involved. 75. What is end-to-end business logic testing? End-to-end verification of a business process, point to point: business logic from start to end; running a business process from end to end (e.g. QTC – quote, order, ship, invoice/billing, post to ledger). 76. Tell me about the most critical defect you found in your last project. PNC Bank online application form – when a mandatory field was not filled in, the form was still processed. Interview Questions – Part II 1. What is e-commerce? E-commerce (electronic commerce or EC) is the buying and selling of goods and services on the Internet, especially the World Wide Web. In practice, this term and a newer term, "e-business," are often used interchangeably. For online retail selling, the term e-tailing is sometimes used. 2. Give examples of e-commerce applications.

E-commerce can be divided into: e-tailing, or "virtual storefronts" on websites with online catalogs, sometimes gathered into a "virtual mall";

the gathering and use of demographic data through web contacts; Electronic Data Interchange (EDI), the business-to-business exchange of data; e-mail and fax and their use as media for reaching prospects and established customers (for example, with newsletters); business-to-business buying and selling; and the security of business transactions. 3. Explain the business process of the Great Deals application. It provides online access to trade stocks on the London Stock Exchange – a request goes through a gateway server; you can view the price of a stock, sell stock, and set quantity limits. Technical description/design: browser, web server, gateway server, customer account, and price. 4. What is CGI (Common Gateway Interface)? The Common Gateway Interface (CGI) is a standard way for a web server to pass a web user's request to an application program and to receive data back to forward to the user. When the user requests a web page (for example, by clicking on a highlighted word or entering a website address), the server sends back the requested page. However, when a user fills out a form on a web page and sends it in, it usually needs to be processed by an application program. The web server typically passes the form information to a small application program that processes the data and may send back a confirmation message. This method or convention for passing data back and forth between the server and the application is called the Common Gateway Interface (CGI). It is part of the Web's Hypertext Transfer Protocol. 5. What is an API (application program interface)? An API (application program interface) is the specific method prescribed by a computer operating system or by another application program by which a programmer writing an application program can make requests of the operating system or of another application.
An API can be contrasted with a graphical user interface or a command interface (both of which are direct user interfaces) as an interface to an operating system or a program. 6. What is the Internet? The Internet, sometimes called simply "the Net," is a worldwide system of computer networks – a network of networks in which users at any one computer can, if they have permission, get information from any other computer (and sometimes talk directly to users at other computers). 7. What is an extranet? An extranet is a private network that uses the Internet protocol and the public telecommunication system to securely share part of a business's information or operations with suppliers, vendors, partners, customers, or other businesses. An extranet can be viewed as the part of a company's intranet that is extended to users outside the company. It has also been described as a "state of mind" in which the Internet is perceived as a way to do business with other companies as well as to sell products to customers. The same benefits that HTML, Hypertext Transfer Protocol, Simple Mail Transfer Protocol, and other Internet technologies have brought to the Internet and to corporate intranets now seem designed to accelerate business between businesses. 8. What are firewalls? A firewall is a set of related programs, located at a network gateway server, that protects the resources of a private network from users on other networks. (The term also implies the security policy that is used with the programs.) An enterprise with an intranet that allows its workers access to the wider Internet installs a firewall to prevent outsiders from accessing its own private data resources and to control what outside resources its own users have access to. 9. What is HTML? HTML (Hypertext Markup Language) is the set of "markup" symbols or codes inserted in a file intended for display on a World Wide Web browser. The markup tells the web browser how to display a web page's words and images for the user. The individual markup codes are referred to as elements (but many people also refer to them as tags). 10. What is XML? XML (Extensible Markup Language) is a flexible way to create common information formats and share both the format and the data on the World Wide Web, intranets, and elsewhere. For example, computer makers might agree on a standard or common way to describe the information about a computer product (processor speed, memory size, and so forth) and then describe the product information format with XML. Such a standard way of describing data would enable a user to send an intelligent agent (a program) to each computer maker's website, gather data, and then make a valid comparison. XML can be used by any individual or group of individuals or companies that wants to share information in a consistent way. 11. What is DHTML?
DHTML refers to web pages that appear to behave dynamically after the page is downloaded by the browser. 12. What is HTTP (Hypertext Transfer Protocol)? The HTTP protocol was originally developed to reduce the inefficiencies of the FTP protocol [www], [ftp]. The goal was fast request-response interaction without requiring state at the server. To see the performance advantage of HTTP over FTP, we can compare the process of file-retrieval transactions in each protocol. Both protocols use a reliable, connection-oriented transport protocol, TCP [tcp]. 13. What is SSL (Secure Sockets Layer)?

SSL is a security protocol that provides privacy over the Internet. The protocol allows client/server applications to communicate in a way that cannot be eavesdropped upon. Servers are always authenticated and clients are optionally authenticated. 14. Explain the process of electronic payment (SET). Electronic payments refer to financial transactions that are made without the use of paper documents, such as checks or share drafts. Direct deposit of payroll is the most familiar electronic payment. Automated payments, such as pre-authorized direct payments, telephone bill payments, PC banking, and point-of-sale or debit card transactions, are being used by more consumers every day. 15. What do you verify in navigational testing? See whether you can move between windows by initiating any of the functions from any other appropriate window in the system. This should be done without necessarily performing any detailed processing while there. 16. What is an absolute address? A fixed address in memory. The term absolute distinguishes it from a relative address, which indicates a location by specifying a distance from another location. Absolute addresses are also called real addresses and machine addresses. 17. What is a relative address? An address specified by indicating its distance from another address, called the base address. For example, a relative address might be B+15, B being the base address and 15 the distance (called the offset). 18. Why do web servers give ERROR 404 (File Not Found)? They are the annoying little reminders that either someone has miswritten the URL, someone has deleted the file you are looking for, or the net spirits are against you and will see to it that you never find the information you are looking for. 19. What are cookies? Cookies are bits of information that your browser picks up and carries around with it internally. These bits of information can be read and changed by a site and make it possible to identify people (or, more accurately, browsers) who have been to your site before. 20.
What is a plug-in? For developers, that's tingling food for thought, and to think that it's not so unaffordable after all is more great news. You simply have to let your browser know how to do it. When we say 'browser' we mean the popular Netscape. Letting the browser know means we will be using Microsoft Visual C++ to befriend the browser. The program that introduces Netscape to the various file extensions and how to handle those files is called a plug-in. 21. What are broken links? Broken Links reviews websites from the end user's perspective. We give you an unbiased analysis of your site with suggestions to increase site optimization: increasing search engine rankings, decreasing load times, removing outdated time-sensitive information, verifying email response time, etc. 22. What are the reasons for having broken links? Broken Links has developed a systemic approach to reviewing your site. It can answer the question: do end users experience what you expect? 23. What is JPEG? JPEG is short for the 'Joint Photographic Experts Group'. This was (and is) a group of experts nominated by national standards bodies and major companies to work on producing standards for continuous-tone image coding. The 'joint' refers to its status as a committee working on both ISO and ITU-T standards. The 'official' title of the committee is ISO/IEC JTC1 SC29 Working Group 1, and it is responsible for both the JPEG and JBIG standards. The best-known standard from JPEG is IS 10918-1 (ITU-T T.81), which is the first of a multi-part set of standards for still-image compression. A basic version of the many features of this standard, in association with a file format placed into the public domain by C-Cube Microsystems (JFIF), is what most people think of as JPEG! Hopefully this site will improve your knowledge of the real work of the JPEG committee. 24. What is GIF? GIF is the Graphics Interchange Format, used for images and animation on the Internet. 25. What is a Java applet? An applet is a program written in the Java programming language that can be included in an HTML page, much in the same way an image is included. When you use a Java technology-enabled browser to view a page that contains an applet, the applet's code is transferred to your system and executed by the browser's Java Virtual Machine (JVM). 26.
26. How will you do functional testing of a web application?
Ensure the e-commerce application works as expected.

27. What internationalization testing is required for an e-commerce application?
Functional Testing Products: Ensure e-commerce applications work as expected.

Load Testing Products: Stress-test e-commerce applications under real-world conditions to predict system behavior and performance, and to identify and isolate problems.
Test Process Management Products: Organize and manage the testing process to determine application readiness.
Web Performance Monitoring Products: Monitor web applications in real time and alert operations groups to performance problems before users experience them.
Hosted Web Performance Monitoring Service: Proactively monitor sites in real time.
Hosted Load Testing Service: Identify bottlenecks and capacity constraints before your site goes live.

28. If you have to test under a squeezed timeline, what strategy will you follow to test the system?
Use risk analysis to determine where testing should be focused:
• Which functionality is most important to the project's intended purpose?
• Which functionality is most visible to the user?
• Which functionality has the largest safety impact?
• Which aspects of the application are most important to the customer?

29. If there are no requirements, how will you write your test plan?
Review the design documents, schedule meetings with developers and system design people, and construct the test cases from there.

30. What is manual testing?
Manual testing is time-consuming, depends on human resources, and its reliability and consistency are limited.

31. What is automated testing?
Automated testing offers speed, high reliability, coverage, repeatability, reusability and programming capabilities. Testing tools automate the process of testing and can save a large amount of time; in contrast, manual testing requires a large amount of time and is prone to inaccuracy.

32. How will you test server-side programs?
ASP (front end/back end), JSP (Java Server Pages), servlets, and CGI programs.

33. What are test drivers?

A test driver is a very simple program which accepts data, executes the software under test, stores the results, and compares the results with the expected results. Test drivers can be built with tools such as JTest or WinRunner.

34. What are the different risk areas?
Risks are either under our control or not under our control:
• Functionality: configuration of browsers, hardware, software, operating systems, plug-ins, cookies.
• Reliability/availability: the application crashes or does not work.
• Usability: the application is not user friendly, users are not trained, documentation is lacking.
• Performance: a potentially unlimited number of users accessing the site concurrently.
• Security: the site is open to all, so there is exposure to risk.

35. What if there are no requirements?
Schedule meetings and talks with developers, designers, and users to better understand how the system is used and what functionalities need to be tested. I will try to understand the business process as much as possible by creating process flow diagrams and data flow diagrams that show how the system interacts and works. I will walk through the entire application, write the test plan, and conduct a walkthrough of the test plan with the development teams. With my prior experience, I will analyze what kind of testing is needed, and design scenarios and execute test cases.

36. Conflict with developers: how do you resolve it?
Developers and testers have a cordial team relationship and both work towards a common goal. We have contradictory roles: developers build software, testers break software. Before issuing an MR (modification request) to the developer, I will first try to troubleshoot the problem on my end. The problem might not be a bug; it might be a misunderstanding of the requirements on my part. It could also be a data or environment problem. Only then will I record the bug in the defect-tracking tool and create log files, reports and screen prints, as well as write a resolution for the bug, so that I keep a trailing history of the bug.

37. Work ethics?
I am a detail-oriented person and a strong team player.
As a QA engineer, I shoulder a major responsibility: because I validate the software that users will use, I am accountable for any bugs cropping up after the application has been put into production. I also share information with other team members and assist others.

38. What is functional testing?
Based on the requirements document, it is testing the functionality of all the objects on a page or form: testing of individual components/modules, of linked/related modules, and then of end-to-end functionality.

39. What is regression testing?

It is retesting of scenarios after any changes to the code are made, to make sure that the changes have not affected parts of the application that worked in previous builds.

40. What is unit testing?
Testing each unit and its paths, by focusing on a relatively small section of code and trying to exercise a high percentage of the internal paths, the internals of the code.

41. What is web page / GUI testing?
First test the basic functionality of the web page: the elements and objects on the page, and the time it takes for the page to load. Verify that hyperlinks work and that buttons execute transactions. Perform browser page testing, including testing the properties of the objects on the web page, input forms, field validations, data inputs, and required and optional fields.

42. How do you perform testing of web screens?
When testing web pages/screens, the tester should first test the basic functionality of the web page: starting with the look and feel of the page, centering and scaling of objects in the window, the cosmetic part of the page, the spelling, the alignment of the fonts against the background or images, and how long it takes for the page to display completely. Traverse the links and verify that all the objects on a page load, and that they load in acceptable time, for images, audio files, video files, streamed audio and video, Java applets and ActiveX controls. Verify internal and external links. Verify hyperlinks and buttons that execute transactions. Perform browser page testing that includes testing the properties of the objects on the web page, input forms, field validations, data inputs, required and optional fields. When the basic functionality of the page has been accepted, thorough testing of the web page should be done, including link tests, data flow tests, security and, most of all, the usability of the web page. Usability requires that the needs of the end user be satisfied and prioritized.

43. What is backend testing?
Backend testing is basically testing the data in the database, using a utility or database language that can mine/retrieve data from the database being tested. SQL is one of the most common utilities used for backend testing. End-to-end and data flow testing require backend testing to prove that the correct data flows from the start of the application process to the end.
• Execute SQL queries to retrieve and manipulate data in the database (here, a DB2 database): select, insert, delete, update, inner joins, outer joins, group by, order by, cursors, etc.

43. What is functional testing?
Functional testing is testing the application against the functionality of the system being tested. Functionality testing is more accurate if it is based on the requirements documentation. It can be broken down into stages: start with testing the functionality of individual components and modules, then linked modules, and then end-to-end functionality.
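The backend-testing approach described above can be sketched with a small, self-contained example. This is only an illustrative sketch: it uses Python's built-in sqlite3 module in place of a real DB2 database, and the `orders` table and its data are invented for the example.

```python
import sqlite3

# Build a tiny in-memory database standing in for the application's backend.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO orders (customer, amount) VALUES (?, ?)",
    [("alice", 120.0), ("bob", 80.0), ("alice", 40.0)],
)

# Backend check 1: the data written by the front end actually reached the table.
row_count = conn.execute("SELECT COUNT(*) FROM orders").fetchone()[0]
assert row_count == 3

# Backend check 2: an aggregate (GROUP BY / ORDER BY) matches the expected totals.
totals = conn.execute(
    "SELECT customer, SUM(amount) FROM orders GROUP BY customer ORDER BY customer"
).fetchall()
assert totals == [("alice", 160.0), ("bob", 80.0)]
```

The same pattern scales to real backend tests: seed known data, drive the application, then query the database directly and compare against expected rows.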

44. What is system testing?
System testing is testing the entire integrated system from the business requirements point of view; it includes testing the functionality, usability and all the facilities of the entire system.

45. What is integration testing?
Integration testing is testing one unit of the application integrated with another unit of the application. Data and the corresponding pages/units should flow properly in integration testing. In integration testing we check the interfaces between different modules of the application.

46. Tell me something about data validation.
Data validation testing is testing of data being driven or passed from one unit to another. Backend testing is one form of data validation testing, using SQL to test the data.

47. What is user acceptance testing?
User acceptance testing is executing the UAT test cases that constitute the units of work performed by the user and the various business scenarios. UAT is performed first in the test environment, then in the production environment.

48. What is end-to-end testing?
End-to-end testing means testing the application from the start of a transaction to the end, so as to constitute a unit of work that will be performed by the user.

49. What is test coverage?
Test coverage means the scope of the testing effort: how wide and how deep the tests to be performed on the application/system go.

50. What is defect tracking?
Defect tracking means that during execution, when the actual results of test cases do not match the expected results, we log a defect in a defect-tracking tool. We assign it a severity depending upon the seriousness of the bug. The objective of defect tracking is to provide information to stakeholders about the defects found in the product, and to establish a timeline for who will resolve each defect and when. After a bug is fixed by the developer, it comes back to testing for retesting, and the results are recorded in the trailing-history section of the MR.

51.
At which stage in the development process should QA get involved? Should they be involved at the start of the project, once a requirements document has been rendered? Should they only become aware of the details of the project when the coding is completed? Should QA be the facilitators of processes throughout the development process?
Right from the beginning. In fact, QA should have a hand in the requirements phase. Testing will be based on requirements, so it makes sense to have QA members present. This is, again in my opinion, doubly so if you are talking about automated testing. Ideally, in my little world, I like to be involved because I also want to provide some reality checks on the requirements. For example, if something is required but it would be a bear to implement in automation (and automation is desired), I like to make that known. If it will require a significant amount of performance testing for which there is no time or dedicated resources, I like to make that known. In the world of browsers, if a change will require vast amounts of cross-browser testing and resources are short, I want to make that known. You ask if QA should be "facilitators of processes throughout the development process" and I would say "Yes" or, at the very least, they should have a dual hand in it with product/project managers. This puts QA in the role of not just product improvement but process improvement as well, which, in the end, helps testability and productivity. There is also the issue of getting the development team used to QA being around. If you are there from the start, I think it is much easier to influence the development process, rather than appearing on the scene once the project has been under way for a while and trying to incorporate your QA needs into the development team's already established procedures. I completely agree with both replies to your message so far. QA should either be involved from the very start of the project OR at least be cc'd on every email or memo regarding the project, whether these concern meetings related to the project or the business manager on the project sneaking some new requirements to the developer. QA MUST ALWAYS be in the loop!
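Question 33 above describes a test driver as a very simple program that accepts data, executes the software under test, stores the results and compares them with the expected results. A minimal sketch of that idea follows; the `add` function and the test data are hypothetical stand-ins for the real software under test.

```python
def add(a, b):
    # Hypothetical "software under test".
    return a + b

def run_driver(func, cases):
    """Accept data, execute the software under test, store and compare results."""
    results = []
    for args, expected in cases:
        actual = func(*args)                                          # execute
        results.append((args, expected, actual, actual == expected))  # store + compare
    return results

# Test data: (input arguments, expected result).
cases = [((1, 2), 3), ((0, 0), 0), ((-1, 1), 0)]
report = run_driver(add, cases)
assert all(passed for _, _, _, passed in report)
```

In practice the driver would read its cases from a data file and write the stored results to a report, which is exactly what tools like WinRunner automate.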

UML Testing Framework: Testing UML Diagrams

Introduction
Since the evolution of computers, there have been many approaches, standards and strategies for designing and developing software applications. The last two decades have witnessed a major revolution in software, yet something was missing that went unnoticed. Researchers therefore worked on developing a model that can handle the complete System Development Life Cycle (SDLC). Finally we could see the fruits of success: the Unified Modeling Language, in short the UML. The industry thanked the eminent scientists who made this possible: Grady Booch, James Rumbaugh and Ivar Jacobson.
Building software is not a simple task. We need to admit that software becomes outdated in no time, but while building software we need to understand, and be clear, that it has to live a lifetime. These days, building secure web applications, financial transaction systems, and mission-critical systems is not simple. However secure and stable the application is, hackers find a way in. If we take measures at the beginning of building software, we will definitely get the results in the long run.

Target Audience

The objective of this paper is to give an overview of UML and to design a framework for testing projects/applications that follow this standard. This paper is aimed at testing professionals who want to learn about UML, and also covers white-box testing to a certain extent. I am currently working mostly in the area of object-oriented testing and want to bring out effective and efficient methods for testing object-oriented systems.

The Unified Modeling Language Testing Framework

Diagrams in UML
A diagram is the graphical presentation of a set of elements, most often rendered as a connected graph of vertices (things) and arcs (relationships). There are nine diagrams included in UML:
1. Class Diagram
2. Object Diagram
3. Use Case Diagram
4. Sequence Diagram
5. Collaboration Diagram
6. Statechart Diagram
7. Activity Diagram
8. Component Diagram
9. Deployment Diagram

[Figure: Model View of UML]

How is this framework structured?

This framework addresses testing of the views. The diagram above depicts the Model View of UML. As you go along the framework you will see that we test the five views individually and finally integrate all of them. The five views are:
1. Use Case View
2. Structural View
3. Behavioral View
4. Implementation View
5. Environmental View
The Use Case View of a system encompasses the use cases that describe the behavior of the system as seen by its end users, analysts and testers. With the UML, the static aspects of this view are captured in use case diagrams.
The Structural View of the system encompasses class and object diagrams. These diagrams depict all the classes and objects that will be used in the development of the application.
The Behavioral View of the system encompasses the dynamism of the classes and objects, which is captured in sequence, collaboration, statechart and activity diagrams.
The Implementation View of a system encompasses the components and files that are used to assemble and release the physical system. This view primarily addresses the configuration management of the system's releases, made up of somewhat independent components and files that can be assembled in various ways to produce a running system.
The Environmental View of a system encompasses the nodes that form the system's hardware topology, on which the system executes. This view primarily addresses the distribution, delivery and installation of the parts that make up the physical system.

[Figure: Tester's View of UML]

The above diagram illustrates the role of the UML diagrams from a tester's perspective.

Testing Object-Oriented Software: A Brief Overview
We need to understand the testing of object-oriented software at this point, as the concept of UML is completely based on object-oriented methodology. The strategies and techniques for testing object-oriented software differ from traditional function-based testing. Object-oriented software testing follows a specific pattern, mirroring how the software is structured. We should cover the following four types of testing when testing object-oriented software:
1. Method Testing
2. Class Testing
3. Subsystem Testing
4. System Testing
Method Testing
• A method in a class should be tested using a black-box approach.
• The testing should begin from the specification.
• Consider each parameter of the method and identify its equivalence classes.
• Build a test case specification for each interesting combination of equivalence classes.
Class Testing
• A class contains a set of methods.
• Identify all the classes which interact with the class you are testing.
• Identify the other classes whose instances you need for testing a class.
Subsystem Testing

• Choose a set of use cases relevant to the subsystem.
• Identify the classes used in each use case.
• Derive the flow of communication and design the test case.
System Testing
• Start with the lowest-level use case.
• Test each system increment by testing the new use cases produced in that increment.
• Keep re-testing the low-level use cases periodically, especially when there is a change in the functionality.
• Writing a detailed test plan for each use case helps find errors in the analysis model.
• Changes in the requirements can be matched by changes to the use case test plans.
Testing the Use Case View
Use case diagrams describe the functionality of the system. A use case is a description of a set of sequences of actions that a system performs that yields an observable result of value to a particular actor. A use case is used to structure the behavioral things in a model.
Use Case Diagrams
Use case diagrams describe the functionality of a system and the users of the system. These diagrams contain the following elements:
• Actors, which represent users of a system, including human users and other systems.
• Use cases, which represent functionality or services provided by a system to users.
A typical use case contains the following information (Use Case Section: Description):
Name: An appropriate name for the use case.
Brief Description: A brief description of the use case's role and purpose.
Flow of Events: A textual representation of what the system does with regard to the use case. The usual notation for the flow of events distinguishes the Main Flow, Alternative Flow, and Exceptional Flow.
Special Requirements: A textual description that collects all requirements on the use case, such as non-functional requirements, that are not considered in the use-case model, but that need to be taken care of during design or implementation.
Pre-Conditions: A textual description that defines any constraints on the system at the time the use case may start.
Post-Conditions: A textual description that defines any constraints on the system at the time the use case will terminate.
«Uses» Relationship: Defines a relation between use cases. If the use case is related to, or uses the functionality of, another use case, it is mentioned here.

«Extends» Relationship: Defines a relation between use cases. If one use case's functionality is extended by another, that functionality is mentioned here.
TestMap
A TestMap is a collection of test cases related by a common test target, shared test steps and the sequence of execution; equivalently, a TestMap is a program that defines multiple test cases by its execution paths.
Implementing Use Cases
For implementing use cases, the following makes a checklist:
1. Form the function name from the use case name.
2. Identify the actors.
3. Check pre-conditions.
4. Invoke «actor action» functions.
5. Verify meaningful results.
6. Verify post-conditions.
Implementing Actor Actions
For implementing actor actions, the following makes a checklist:
1. Form the function name from the action description.
2. Define parameters: object class record types, and an optional override for the actor action mode.
3. As the interface definition evolves, add window declaration include files, and add window method calls to the function to perform input, verify output, and collect response times.
Testing the Use Case View
• For each actor involved in the use case, identify the possible sequences of interactions between the system and the actor, and select those that are likely to produce different system behaviors.
• For each input datum coming from an actor to the use case, or output generated from the use case to an actor, identify the equivalence classes: sets of values which are likely to produce equivalent behavior.
• Identify test cases based on range value analysis and error guessing.
• Each test case represents one combination of values from each of: objects, actor interactions, input/output data.
• Based on the above analysis, produce a use case test table (scenario) for each use case.
• Select suitable combinations of the table entries to generate test case specifications.
• For each test table, identify the test cases (success or extension) tested.
• Ensure all extensions are tested at least once.
• Maintain a use case prioritization table for the use cases for better coverage, as follows:

Use Case No | Use Case | Risk | Frequency | Criticality | Priority

The Risk column describes the risk involved in the use case. The Frequency column describes how frequently the use case occurs in the system. The Criticality column describes the importance of the use case in the system. The Priority column describes the priority for testing, taking the priority of the use case from the developer.
• Some use cases might have to be tested more thoroughly based on frequency of use, criticality and risk factors.
• Test the most-used parts of the program over a wider range of inputs than lesser-used portions, to ensure better coverage.
• Test more heavily those parts of the system that pose the highest risk to the project, to ensure that the most harmful faults are identified as soon as possible.
• The main risk factors, such as changes in functionality, performance shortfalls or changes in technology, should be borne in mind.
• Test more thoroughly the use cases which have an impact on the operation of the system.
• The pre-conditions have to be taken into consideration before beginning the testing of the use case. Make test cases for the failure of the pre-conditions, and test the functionality of the use case.
• The post-conditions refer to other use cases reached from the use case you are testing. Make test cases to check that the flow from the current use case into the use case that the functionality should flow to works properly.
• The business rules should be incorporated and tested at the places in the use case where they act.
• Maintain a test coverage matrix for each use case. The following format can be used:
UC No. | UC Name | Flow | TC No's | No. of TC's | Tested | Status

In the above table:
• The UC No. column gives the use case number.
• The UC Name column gives the use case name.
• The Flow column describes the flow applicable: Typical Flow, Alternate Flow 1, Alternate Flow 2, etc.
• The TC No's column gives the start and end test case numbers for the flow.
• The No. of TC's column gives the total number of test cases written.
• The Tested column records whether the flow has been tested or not.
• The Status column records the status of the set of test cases: whether they have passed or failed.
Strategy for Designing Test Cases and Test Scenarios
The following steps describe a typical strategy for designing the test case(s) and test scenario(s) from the use case document.

Step 1: Identify the module / class / function to which the use case belongs.
Step 2: Identify the functionality of the use case with respect to the overall functionality of the system.
Step 3: Identify the functional points in the use case and make a functional point document.
Step 4: Identify the actors involved in the use case.
Step 5: Identify whether the use case «extends» or «uses» any other use case.
Step 6: Identify the pre-conditions.
Step 7: Understand the typical flow of the use case.
Step 8: Understand the alternate flow of the use case.
Step 9: Identify the business rules.
Step 10: Check for any post-conditions and special requirements.
Step 11: Document test cases for the typical flow (including any actions made in the alternate flow, if applicable).
Step 12: Document the test cases for the business rules.
Step 13: Document test cases for the alternate flow.
Step 14: Finally, identify the main functionality of the use case and document a complete positive end-to-end scenario.
Make a cross-reference matrix between each of:
1. the use case document and the functional point document;
2. the functional point document and the test case document;
3. the test case document and the defect report.
These cross-reference matrix documents are helpful for easily identifying and tracking the defects and the functionality.
Version Control
For every test case document, maintain a version control record for tracking changes to the test case document. The template can be made in the following way:
Version Number | Date | Comments
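The use-case implementation checklist and the step-by-step strategy above can be illustrated with a small sketch. The "Withdraw Cash" use case, the `Account` class and the amounts are all invented for the example; the point is the shape of a use-case test: form the function name from the use case name, check pre-conditions, invoke the actor action, and verify results and post-conditions for both the typical and an exceptional flow.

```python
# Hypothetical "Withdraw Cash" use case for an ATM-like system.
class Account:
    def __init__(self, balance):
        self.balance = balance

    def withdraw(self, amount):          # the «actor action» function
        if amount > self.balance:
            raise ValueError("insufficient funds")
        self.balance -= amount

def test_withdraw_cash_typical_flow():
    # Function name formed from the use case name ("Withdraw Cash").
    account = Account(balance=100)       # actor: the account holder
    assert account.balance == 100        # check pre-condition
    account.withdraw(30)                 # invoke actor action
    assert account.balance == 70         # verify result / post-condition

def test_withdraw_cash_exceptional_flow():
    account = Account(balance=10)
    try:
        account.withdraw(50)             # exceptional flow: over-withdrawal
        assert False, "expected the withdrawal to be rejected"
    except ValueError:
        assert account.balance == 10     # post-condition: balance unchanged

test_withdraw_cash_typical_flow()
test_withdraw_cash_exceptional_flow()
```

Each flow from the use case document (typical, alternate, exceptional) maps to its own test function, which is what the test coverage matrix above tracks.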

Testing the Structural View
The Structural View comprises class and object diagrams. A class is a description of a set of objects that share the same attributes, operations, relationships and semantics.
Class Diagrams

Class diagrams describe the static structure of a system: how it is structured rather than how it behaves. These diagrams contain the following elements:
• Classes, which represent entities with common characteristics or features. These features include attributes, operations and associations.
• Associations, which represent relationships that relate two or more classes, where the relationships have common characteristics or features: attributes and operations.
Writing Healthy Classes
Here is a sample checklist, which I made for checking the health of a class:
1. Does the class have an appropriate class name?
2. When defining member functions outside the class, is the scope of each function restricted to the specified class name?
3. Can your member functions defined outside the class access all the private data of the class to which they belong?
4. Is the constructor of the class in the public section?
Object Diagrams
Object diagrams describe the static structure of a system at a particular time. Whereas a class model describes all possible situations, an object model describes a particular situation. Object diagrams contain the following elements:
• Objects, which represent particular entities. These are instances of classes.
• Links, which represent particular relationships between objects. These are instances of associations.
Testing the Structural View
Testing Classes
A 'class' basically is a collection of methods. The following can help gather the required information for testing a class:
1. Identify all the classes.
2. Identify the communication between the classes.
3. Identify the dependencies of the various classes.
4. Identify associations. This helps in grouping classes which have common characteristics.
5. Make a list of all classes, along with what each class is intended to do and which other classes it communicates with. This makes investigation easier when a particular functionality fails.
The template for the above-mentioned recordings can be as follows:
Class Name | Associated Class Name | Description
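The method-testing guidance earlier (black-box testing from the specification, with equivalence classes identified for each parameter) can be sketched as follows. The `PriceCalculator` class and its discount rule are hypothetical, invented purely to show one test per equivalence class.

```python
# Hypothetical class under test; the pricing rule is invented for illustration.
class PriceCalculator:
    def discounted(self, price, quantity):
        if price < 0 or quantity < 1:
            raise ValueError("invalid input")
        rate = 0.10 if quantity >= 10 else 0.0   # bulk-discount boundary at 10
        return round(price * quantity * (1 - rate), 2)

# Equivalence classes per parameter:
#   price:    negative (invalid) | zero/positive (valid)
#   quantity: < 1 (invalid) | 1..9 (no discount) | >= 10 (discount)
calc = PriceCalculator()
assert calc.discounted(10.0, 5) == 50.0      # valid, no-discount class
assert calc.discounted(10.0, 10) == 90.0     # valid, discount class (boundary value)
try:
    calc.discounted(-1.0, 5)                 # invalid price class
    assert False, "expected rejection of negative price"
except ValueError:
    pass
```

One test per equivalence class (plus boundary values such as quantity = 10) keeps the class test small while still exercising each distinct behavior of the method.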

Testing Objects
When testing OO systems, you can combine the testing of classes and objects. Objects are instances of classes, and hence you can reuse the test cases used for testing classes for testing objects of the same class.
Testing the Behavioral View
The Behavioral View comprises the following:
1. Sequence Diagrams
2. Collaboration Diagrams

3. Statechart Diagrams
4. Activity Diagrams
Sequence Diagrams
Sequence diagrams describe interactions among classes. These interactions are modeled as exchanges of messages. These diagrams focus on classes and the messages they exchange to accomplish some desired behavior. Sequence diagrams are a type of interaction diagram. They contain the following elements:
• Class roles, which represent roles that objects may play within the interaction.
• Lifelines, which represent the existence of an object over a period of time.
• Activations, which represent the time during which an object is performing an operation.
• Messages, which represent communication between objects.
Collaboration Diagrams
Collaboration diagrams describe interactions among classes and associations. These interactions are modeled as exchanges of messages between classes through their associations. Collaboration diagrams are a type of interaction diagram. They contain the following elements:
• Class roles, which represent roles that objects may play within the interaction.
• Association roles, which represent roles that links may play within the interaction.
• Message flows, which represent messages sent between objects via links. Links transport, or implement the delivery of, the messages.
Statechart Diagrams
Statechart (or state) diagrams describe the states and responses of a class: the behavior of a class in response to external stimuli. These diagrams contain the following elements:
• States, which represent the situations during the life of an object in which it satisfies some condition, performs some activity, or waits for some occurrence.
• Transitions, which represent relationships between the different states of an object.
Activity Diagrams
Activity diagrams describe the activities of a class.
These diagrams are similar to statechart diagrams and use similar conventions, but activity diagrams describe the behavior of a class in response to internal processing rather than external events as in a statechart diagram. They contain the following elements:
• Swimlanes, which represent responsibilities of one or more objects for actions within an overall activity; that is, they divide the activity states into groups and assign these groups to the objects that must perform the activities.
• Action states, which represent atomic, or non-interruptible, actions of entities, or steps in the execution of an algorithm.
• Action flows, which represent relationships between the different action states of an entity.
• Object flows, which represent the utilization of objects by action states and the influence of action states on objects.
Testing the Behavioral View
In the Structural View of the UML, we looked at class and object diagrams and how to test them. In the Behavioral View, we look deeper into the technicalities of the classes and objects. The Behavioral View concentrates on the following aspects of classes and objects:

• Interactions among classes.
• The various states and responses of a class.
• The activities of a class.
One test suite can be developed for testing the Behavioral View, since this view encompasses the behavior of the classes and objects.
Testing Framework
1. Identify the set of message exchange sequences among a set of classes (interactions among class roles).
2. Identify the types of interaction diagrams that focus on one message exchange sequence, or a set of such sequences, involved in specifying the behavior.
3. Identify the sequence of messages exchanged by class roles within a time sequence.
4. Identify the dimension representing the time over which an interaction occurs.
5. Identify the dimension specifying the different class roles participating in an interaction.
6. Identify expressions (activations, messages, guard conditions, iterations, and message signatures) that may be expressed using pseudo-code or another language.
7. Identify the lifelines, which represent the life of each object.
8. Identify the functions, denoted in activations, for the job the object is performing during its operation.
9. Identify the communication of the objects through messages.
For the class roles:
1. Identify the types of objects that may participate within interactions and collaborations.
2. Identify what is required of a class for its participation in the interaction or collaboration.
3. Identify the roles that bind to actual objects when interactions or collaborations are used.
Also identify the other properties (business rules, responsibilities, variations, events, exceptions, etc.).
Testing the Implementation View
Component Diagrams
Component diagrams describe the organization of, and dependencies among, software implementation components. These diagrams contain components, which represent distributable physical units, including source code, object code, and executable code.
Component diagrams lie within the Implementation View of a system and render the specification of behavior. They describe the organization of and dependencies among software implementation components; the components they contain represent distributable physical units, including source code, object code, and executable code. Component Diagrams are mainly used while doing Integration Testing.

Testing Framework
Component diagrams depict the physical view of the code. The most aggressive testing of the code will be performed while testing the Structural and Behavioral views; Component diagrams are useful for understanding how the software is built. I suggest you do the following with Component Diagrams:

1. Identify the components of the system.
2. Make a Cross Reference Matrix for the components and the class/object in which each is present. This helps in tracing back to the buggy components faster, and also in understanding the structure of the software.

Component | Class Name | Functionality
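The cross reference matrix above can be kept as simple structured data and queried during defect triage. The sketch below is a minimal illustration; the component and class names are hypothetical, not taken from the text.

```python
# Hypothetical component-to-class cross reference matrix.
# Component and class names below are invented for illustration.
components = {
    "billing.dll": ["Invoice", "PaymentGateway"],
    "reports.dll": ["Invoice", "ReportWriter"],
}

def components_containing(class_name, matrix):
    """Trace a buggy class back to every component that bundles it."""
    return sorted(c for c, classes in matrix.items() if class_name in classes)

print(components_containing("Invoice", components))
# Both components bundle the Invoice class
```

A lookup like this makes it easy to see every component affected when a defect is isolated to a single class.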

Testing the Environmental View
Deployment Diagrams
Deployment diagrams describe the configuration of processing resource elements and the mapping of software implementation components onto them. These diagrams contain components and nodes, which represent processing or computational resources, including computers, printers, etc. Deployment diagrams lie within the Environmental View of a system and render the specification of behavior. Deployment Diagrams are mainly used while going about Installation Testing.

Testing Framework
Testing the Deployment diagram amounts to doing installation testing. The following list of activities will help build up the system while testing (integrating the various components of the software):
1. Identify all the components of the System.
2. Make a Reference Matrix for the Components of the system.
3. Identify the relationships existing between the components.

Class Name | Component | Relation to other Components

A Complete Framework for Testing UML Based Applications
UML, the Unified Modeling Language designed and developed by the Object Management Group (OMG), is based on the Object Oriented Software Development Methodology. Given the importance of UML, its standards have been followed across software development life cycles. In the discussion above we have looked at UML overall, at the stages of the software development lifecycle where we use the different diagrams and components of UML, and at a brief testing framework for each diagram. Now let us take an overall view of the testing activities to be performed while testing an application being developed using the UML standard. Refer to the Role of UML and Behavioral View charts.
Step 1: Study the System Requirement Document to understand the functionality of the system.
Step 2: Study the Use Case Documents, which have been derived from the SRS.

Step 3: Execute the Framework for Testing Use Cases.
Step 4: Make a Cross Reference Matrix between the System Requirement Document and the Use Case Documents.
Step 5: Study the Sequence Diagrams to understand the sequence of information flow and the structure of the Classes.
Step 6: Make a document mapping the Use Cases to the different classes in each Use Case, to test whether all the classes mentioned in the Sequence diagram are present in the respective module/Use Case.

Use Case Name | Classes / Objects Present
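The check in Step 6 is a set comparison: the classes named in a sequence diagram should be a subset of the classes present in the corresponding module. A minimal sketch, with invented class names:

```python
# Hypothetical traceability check for Step 6: every class in a use case's
# sequence diagram must exist in the implementing module. Names are invented.
sequence_diagram_classes = {"Account", "Ledger", "AuditLog"}
module_classes = {"Account", "Ledger"}

missing = sequence_diagram_classes - module_classes
if missing:
    print("Classes in the sequence diagram but missing from the module:",
          sorted(missing))
```

Running this over every use case turns the cross reference document into an automated consistency check.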

Step 7: Execute the Framework for Testing Component Diagrams.
Step 8: Execute the Framework for Testing Deployment Diagrams.

Bibliography
1. Object Oriented Software Engineering - Stephen R. Schach
2. UML Distilled - Martin Fowler
3. UML Reference Manual - Grady Booch, James Rumbaugh, Ivar Jacobson
4. Object Oriented Testing - a presentation by F. Civello

What is Software Testing and Why is it Important?
A brief history of Software Engineering and the SDLC
The software industry has evolved through four eras: the '50s-'60s, the mid '60s-late '70s, the mid '70s-mid '80s, and the mid '80s-present. Each era has its own distinctive characteristics, but over the years software has increased in size and complexity. Several problems are common to almost all of the eras and are discussed below.
The Software Crisis dates back to the 1960s, when the primary reason for the situation was less-than-acceptable software engineering practice. In the early stages of software there was a lot of interest in computers and a lot of code written, but no established standards. Then in the early '70s a lot of computer programs started failing, people lost confidence, and an industry crisis was declared. The various reasons leading to the crisis included:
• Hardware advances outpacing the ability to build software for that hardware.
• The inability to build software in pace with the demands.
• Increasing dependency on software.
• The struggle to build reliable and high quality software.
• Poor design and inadequate resources.
This crisis, though identified in those early years, persists to date, and we have examples of software failures around the world. Software is basically considered a failure if the project is terminated because of costs or overrun schedules, if the project has

experienced overruns in excess of 50% of the original estimate, or if the software results in client lawsuits. Some examples include failures of air traffic control systems, medical software, and telecommunication software. The primary reason for these failures, other than those mentioned above, is the bad software engineering practices adopted. Some of the worst practices include:
• No historical software-measurement data.
• Rejection of accurate cost estimates.
• Failure to use automated estimating and planning tools.
• Excessive, irrational schedule pressure and creep in user requirements.
• Failure to monitor progress and to perform risk management.
• Failure to use design reviews and code inspections.
To avoid these failures, and thus improve the record, what is needed is a better understanding of the process and better estimation techniques for cost, time and quality measures. But the question is: what is a process? A process transforms inputs into outputs, i.e. a product. A software process is the set of activities, methods and practices involving transformation that people use to develop and maintain software.
At present a large number of problems exist due to chaotic software processes, and occasional success depends on individual effort. Therefore, to be able to deliver successful software projects, a focus on the process is essential, since a focus on the product alone is likely to miss scalability issues and improvements to the existing system. This focus helps with the predictability of outcomes, project trends, and project characteristics.
The process that has been defined and adopted needs to be managed well, and thus process management comes into play. Process management is concerned with the knowledge and management of the software process and its technical aspects, and it also ensures that the processes are being followed as expected and that improvements are shown.
From this we conclude that a set of defined processes can possibly save us from software project failures. It is nonetheless important to note that process alone cannot help us avoid all the problems, because with varying circumstances the needs vary, and the process has to adapt to these varying needs. Importance must be given to the human aspect of software development, since that alone can have a large impact on the results, and effective cost and time estimates may go totally to waste if the human resources are not planned and managed effectively. Secondly, the problems related to software engineering principles may be resolved when the needs are correctly identified. Correct identification then makes it easier to pick the best practices to apply, because a process that is suitable for one organization may not be the most suitable for another. Therefore, to make a successful product, a combination of process and technicalities is required under the umbrella of a well-defined process.
Having talked about the software process overall, it is important to identify and relate the role software testing plays, not only in producing quality software but also in steering the overall process. The computer society defines testing as follows:
"Testing -- A verification method that applies a controlled set of conditions and stimuli for the purpose of finding errors. This is the most desirable method of verifying the functional and performance requirements. Test results are documented proof that requirements were met and can be repeated. The resulting data can be reviewed by all concerned for confirmation of capabilities."
There may be many definitions of software testing, and many that appeal to us from time to time, but it is best to start by defining testing and then move on depending on the requirements or needs.

3. Types of Development Systems
The type of development project refers to the environment/methodology in which the software will be developed. Different testing approaches need to be used for different types of projects, just as different development approaches are.

3.1 Traditional Development Systems
The Traditional Development System has the following characteristics:
• It uses a system development methodology.
• The user knows what the customer requires (requirements are clear from the customer).
• The development system determines the structure of the application.
What do you do while testing:
• Testing happens at the end of each phase of development.
• Testing should concentrate on whether the development matches the requirements.
• Functional testing is required.

3.2 Iterative Development
During Iterative Development:
• The requirements are not clear from the user (customer).
• The structure of the software is pre-determined.
Testing of Iterative Development projects should concentrate on whether the CASE (Computer Aided Software Engineering) tools are properly utilized, and the functionality should be thoroughly tested.

3.3 Maintenance System
The Maintenance System is one where the structure of the program undergoes changes. The system is developed and in use, but it demands changes to the functional aspects of the system for various reasons. Testing Maintenance Systems requires structural testing. Top priority should be given to Regression Testing.

3.4 Purchased/Contracted Software
At times you may need to purchase software to integrate with your product, or outsource the development of certain components of your product. This is Purchased or Contracted Software. When you need to integrate third party software into your existing software, this demands testing of the purchased software against your requirements. Since the two systems are designed and developed differently, the integration takes top priority during testing.
Also, Regression Testing of the integrated software is a must, to cross-check that the two pieces of software are working as per the requirements.

4. Types of Software Systems
The type of software system refers to the processing that will be performed by that system. It covers the following software system types.

4.1 Batch Systems
Batch Systems are sets of programs that perform certain activities which do not require any input from the user. A practical example: when you are typing something in a word document, you press the key you require and the same is printed on the monitor. But processing (converting) your keystroke into machine-understandable language, making the system understand what is to be displayed, and in return having the word document display what you have typed, is performed by batch systems. These batch systems contain one or more Application Programming Interfaces (APIs) which perform various tasks.

4.2 Event Control Systems
Event Control Systems process real time data to provide the user with results for the command(s) he has given. For example, when you type in a word document and press Ctrl + S, this tells the computer to save the document. How is this performed instantaneously? These real

time command communications to the computer are provided by the Event Controls that are pre-defined in the system.

4.3 Process Control Systems
Here, two or more different systems communicate to provide the end user with a specific utility. When two systems communicate, the co-ordination or data transfer becomes vital. Process Control Systems are the ones which receive data from a different system and instruct the system which sent the data to perform specific tasks, based on the reply sent by the system which received the data.

4.4 Procedure Control Systems
Procedure Control Systems are the ones which control the functions of another system.

4.5 Advanced Mathematical Models
Systems which make heavy use of mathematics fall into the category of Mathematical Models. Usually all computer software makes use of mathematics in some way or the other, but Advanced Mathematical Models are those in which mathematics is used heavily to perform certain actions. Examples of Advanced Mathematical Models are simulation systems, which use graphics and control the positioning of software on the monitor, and decision- and strategy-making software.

4.6 Message Processing Systems
A simple example is the SMS management software used by mobile operators, which handles incoming and outgoing messages. Another noteworthy example is the kind of system used by paging companies.

4.7 Diagnostic Software Systems
A Diagnostic Software System is one that helps in diagnosing the computer hardware components. When you plug a new device into your computer and start it, you can see the diagnostic software system doing some work; the "New Hardware Found" dialogue is a result of this system. Today, almost all Operating Systems come packed with Diagnostic Software Systems.

4.8 Sensor and Signal Processing Systems
Message processing systems help in sending and receiving messages.
Sensor and Signal Processing Systems are more complex, because these systems make use of mathematics for signal processing. In a signal processing system the computer receives input in the form of signals and then transforms the signals into a user-understandable output.

4.9 Simulation Systems
A simulation system is a software application, sometimes used in combination with specialized hardware, which re-creates or simulates the complex behavior of a system in its real environment. It can be defined in many ways:
"The process of designing a model of a real system and conducting experiments with this model for the purpose of understanding the behavior of the system and/or evaluating various strategies for the operation of the system." -- Introduction to Simulation Using SIMAN, by C. D. Pegden, R. E. Shannon and R. P. Sadowski, McGraw-Hill, 1990.
"A simulation is a software package (sometimes bundled with special hardware input devices) that re-creates or simulates, albeit in a simplified manner, a complex phenomenon, environment, or experience, providing the user with the opportunity for some new level of understanding. It is interactive, and usually grounded in some objective reality. A simulation is based on some underlying computational model of the phenomena, environment, or experience that it is simulating. (In fact, some authors use model and modeling as synonyms of simulation.)" -- Kurt Schumaker, "A Taxonomy of Simulation Software," Learning Technology Review.

In simple words, simulation is nothing but a representation of a real system. In a programmable environment, simulations are used to study system behavior or to test the system in an artificial environment that provides a limited representation of the real environment.

Why Simulation Systems?
Simulation systems are easier, cheaper, and safer to use than real systems, and are often the only way to build the real systems. For example, learning to fly a fighter plane using a simulator is much safer and less expensive than learning on a real fighter plane. System simulation mimics the operation of a real system, such as the operations in a bank or the running of an assembly line in a factory.
Simulation early in the design cycle is important because the cost of mistakes increases dramatically later in the product life cycle. Also, simulation software can analyze the operation of a real system without the involvement of an expert, i.e. it can also be analyzed by a non-expert such as a manager.

How to Build Simulation Systems
In order to create a simulation system we need a realistic model of the system behavior. One way of simulating is to create smaller versions of the real system. The simulation system may use only software, or a combination of software and hardware, to model the real system. The simulation software often involves the integration of artificial intelligence and other modeling techniques.

What applications fall under this category?
Simulation is widely used in many fields. Some of the applications are:
• Models of planes and cars that are tested in wind tunnels to determine their aerodynamic properties.
• Computer games (e.g. SimCity, car games, etc.), which simulate the workings of a city: the roads, people talking, playing games, etc.
• War tactics that are rehearsed using simulated battlefields.
• Most embedded systems, which are developed with simulation software before they ever make it to the chip fabrication labs.
• Stochastic simulation models, often used to model applications such as weather forecasting systems.
• Social simulation, used to model socio-economic situations.
• The field of operations research, where simulation is used extensively.

What are the Characteristics of Simulation Systems?
Simulation systems can be characterized in numerous ways depending on the characterization criteria applied. Some of them are listed below.

Deterministic Simulation Systems
Deterministic simulation systems have completely predictable outcomes. That is, given a certain input we can predict the exact outcome. Another feature of these systems is idempotency, which means that the results for any given input are always the same. Examples include population prediction models, atmospheric science, etc.

Stochastic Simulation Systems
Stochastic simulation systems have models with random variables. This means that the exact outcome is not predictable for any given input, resulting in potentially very different outcomes for the same input.

Static Simulation Systems
Static simulation systems use statistical models in which time does not play any role. These models include various probabilistic scenarios which are used to calculate the results of any given input. Examples of such systems include financial portfolio valuation models. The most common simulation technique used in these models is the Monte Carlo Simulation.
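As a small illustration of the Monte Carlo technique just mentioned, the sketch below estimates pi by random sampling; time plays no role, so it is a static, stochastic simulation in the terms above. The function and seed are illustrative choices, not from the text.

```python
import random

# Minimal Monte Carlo sketch: sample random points in the unit square and
# count those falling inside the quarter circle of radius 1.
def estimate_pi(samples, seed=42):
    rng = random.Random(seed)  # fixed seed so the run is reproducible
    inside = sum(
        1 for _ in range(samples)
        if rng.random() ** 2 + rng.random() ** 2 <= 1.0
    )
    return 4.0 * inside / samples

print(estimate_pi(100_000))  # close to 3.14159
```

Note that a fixed seed makes this particular stochastic run deterministic, which is exactly what makes such simulations regression-testable.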

Dynamic Simulation Systems
A dynamic simulation system has a model that accommodates changes in data over time. This means that the input data affecting the results is entered into the simulation during its entire lifetime rather than just at the beginning. A simulation system used to predict the growth of the economy, which may need to incorporate changes in economic data, is a good example of a dynamic simulation system.

Discrete Simulation Systems
Discrete simulation systems use models that have discrete entities with multiple attributes. Each of these entities can be in any state, at any given time, represented by the values of its attributes. The state of the system is the set of the states of all its entities. This state changes one discrete step at a time as events happen in the system. Therefore, the actual design of the simulation involves making choices about which entities to model, what attributes represent the entity state, what events to model, how these events impact the entity attributes, and the sequence of the events. Examples of these systems are simulated battlefield scenarios, highway traffic control systems, multi-teller systems, computer networks, etc.

Continuous Simulation Systems
If instead of using a model with discrete entities we use data with continuous values, we end up with continuous simulation. For example, instead of trying to simulate battlefield scenarios by using discrete entities such as soldiers and tanks, we can try to model the behavior and movements of troops by using differential equations.

Social Simulation Systems
Social simulation is not a technique by itself but uses the various types of simulation described above. However, because of the specialized application of those techniques to social simulation, it deserves a special mention of its own.
The field of social simulation involves using simulation to learn about and predict various social phenomena such as voting patterns, migration patterns, economic decisions made by the general population, etc. One interesting application of social simulation is in a field called artificial life, which is used to obtain useful insights into the formation and evolution of life.

What can be the possible test approach?
A simulation system's primary responsibility is to replicate the behavior of the real system as accurately as possible. Therefore, a good place to start creating a test plan is to understand the behavior of the real system.

Subjective Testing
Subjective testing mainly depends on an expert's opinion. An expert is a person who is proficient in and experienced with the system under test. Conducting the test involves test runs of the simulation by the expert, who then evaluates and validates the results based on some criteria. One advantage of this approach over objective testing is that it can test those conditions which cannot be tested objectively. For example, an expert can determine whether the joystick handling of the flight feels "right". One disadvantage is that the evaluation of the system is based on the expert's opinion, which may differ from expert to expert. Also, if the system is very large then it is bound to have many experts, each of whom may view it differently and give conflicting opinions. This makes it difficult to determine the validity of the system. Despite all these disadvantages, subjective testing is necessary for testing systems with human interaction.

Objective Testing
Objective testing is mainly used in systems where the data can be recorded while the simulation is running. This testing technique relies on the application of statistical and automated methods to the data collected.
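The automated side of objective testing boils down to comparing recorded run data against expert-supplied expected outcomes. A minimal sketch, assuming invented metric names, expert values, and a 5% tolerance:

```python
# Hypothetical automated validation against a knowledge base of valid
# outcomes. The metrics, values, and tolerance are illustrative only.
knowledge_base = {"mean_wait": 4.2, "throughput": 118.0}

def validate(run_results, kb, tolerance=0.05):
    """Return the metrics whose recorded values stray beyond tolerance."""
    failures = []
    for metric, expected in kb.items():
        actual = run_results[metric]
        if abs(actual - expected) > tolerance * abs(expected):
            failures.append(metric)
    return failures

print(validate({"mean_wait": 4.3, "throughput": 90.0}, knowledge_base))
# only "throughput" is outside its tolerance band
```

Because the check is mechanical, it can run after every build, giving the continual regression testing described below.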

Statistical Methods
Statistical methods are used to provide an insight into the accuracy of the simulation. These methods include hypothesis testing, data plots, principal component analysis and cluster analysis.

Automated Testing
Automated testing requires a knowledge base of valid outcomes for various runs of the simulation. This knowledge base is created by domain experts of the simulation system being tested. The data collected in various test runs is compared against this knowledge base to automatically validate the system under test. An advantage of this kind of testing is that the system can continually be regression tested as it is being developed.

4.10 Database Management Systems
As the name denotes, Database Management Systems (DBMS) handle the management of databases. A DBMS is basically a collection of programs that enable the storage, modification and extraction of information from a database. DBMS come in many different types, ranging from small systems that run on PCs to mainframe systems. The following can be categorized as examples of DBMS:
• Computerized Library Systems.
• Automated Teller Machines.
• Passenger Reservation Systems.
• Inventory Systems.

4.11 Data Acquisition
Data Acquisition systems take in real time data and store it for future use.
A simple example of a Data Acquisition system is ATC (Air Traffic Control) software, which takes in real time data on the position and speed of a flight and stores it in compressed form for later use.

4.12 Data Presentation
Data Presentation software stores data and displays it to the user when required. An example is a Content Management System. You have a web site in English, and you also have the web site in other languages. You develop the web site in the various languages and store them on the system; the user selects the language he wishes to see, and the system displays the site in the chosen language.

4.13 Decision and Planning Systems
These systems use Artificial Intelligence techniques to provide decision-making solutions to the user.

4.14 Pattern and Image Processing Systems
These systems are used for scanning, storing, modifying and displaying graphic images. Their use is increasing as research is conducted in visual modeling, and they appear more and more in our daily lives. These systems are used for security purposes, such as analyzing a photograph or the thumb impression of a visitor.

4.15 Computer System Software Systems
These are the normal computer software programs that can be used for various purposes.

4.16 Software Development Tools
These systems ease the process of software development.

5. Heuristics of Software Testing
Testability
Software testability is how easily, completely and conveniently a computer program can be tested. Software engineers design a computer product, system or program keeping the product's testability in mind. Good programmers are willing to do things that will help the testing process, and a checklist of possible design points, features and so on can be useful in negotiating with them. Here are the two main heuristics of software testing:
1. Visibility
2. Control

Visibility
Visibility is our ability to observe the states and outputs of the software under test. Features that improve visibility are:
• Access to Code: Developers must provide full access (source code, infrastructure, etc.) to testers. The code, change records and design documents should be provided to the testing team, and the testing team should read and understand the code.
• Event logging: The events to log include user events, system milestones, error handling and completed transactions. The logs may be stored in files, in ring buffers in memory, and/or sent to serial ports. Things to log include a description of the event, a timestamp, the subsystem, resource usage and the severity of the event. Logging should be adjustable by subsystem and by type. Log files report internal errors, help in isolating defects, and give useful information about context, tests, customer usage and test coverage. The more readable the log reports are, the easier it becomes to identify the cause of a defect and work towards corrective measures.
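The event logging bullet above can be sketched with Python's standard logging module: each record carries a description, timestamp, subsystem name and severity, and verbosity is adjustable per subsystem. The subsystem name and messages are invented for illustration.

```python
import logging

# Sketch of the event log described above: timestamp, subsystem,
# severity and description per record, adjustable by subsystem.
logging.basicConfig(
    format="%(asctime)s %(name)s %(levelname)s %(message)s",
    level=logging.INFO,
)
payments_log = logging.getLogger("payments")  # hypothetical subsystem logger
payments_log.setLevel(logging.DEBUG)          # adjust verbosity per subsystem

payments_log.info("transaction completed: id=1042")
payments_log.error("gateway timeout while charging card")
```

Keeping one named logger per subsystem is what lets a tester turn a single component up to full verbosity while leaving the rest quiet.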

• Error detection mechanisms: Data integrity checking and system level error detection (e.g. Microsoft Appviewer) are useful here. In addition, assertions and probes with the following features are really helpful:
  - Code is added to detect internal errors.
  - Assertions abort on error.
  - Probes log errors.
  - Design by Contract theory: this technique requires that assertions be defined for functions. Preconditions apply to inputs, and violations implicate calling functions; post-conditions apply to outputs, and violations implicate called functions. This effectively solves the oracle problem for testing.
• Resource Monitoring: Memory usage should be monitored to find memory leaks. The states of running methods, threads or processes should be watched (profiling interfaces may be used for this). In addition, the configuration values should be dumped. Resource monitoring is of particular concern in applications where the real-time load on the application is estimated to be considerable.

Control
Control refers to our ability to provide inputs and reach states in the software under test. The features that improve controllability are:
• Test Points

Allow data to be inspected, inserted or modified at points in the software. This is especially useful for dataflow applications. In addition, a pipe-and-filter architecture provides many opportunities for test points.
• Custom User Interface controls: Custom UI controls often raise serious testability problems with GUI test drivers. Ensuring testability usually requires:
  - Adding methods to report necessary information.
  - Customizing test tools to make use of these methods.
  - Getting a tool expert to advise developers on testability and to build the required support.
  - Asking third party control vendors about support by test tools.
• Test Interfaces: Interfaces may be provided specifically for testing, e.g. Excel and Xconq. Existing interfaces may be able to support significant testing, e.g. InstallShield, AutoCAD, Tivoli, etc.
• Fault injection: Error seeding, i.e. instrumenting low level I/O code to simulate errors, makes it much easier to test error handling. It can be applied at both the system and the application level.
• Installation and setup: Testers should be notified when installation has completed successfully. They should be able to verify the installation, programmatically create sample records, and run multiple clients, daemons or servers on a single machine.

A BROADER VIEW
Below is a broader set of characteristics (usually known as James Bach's heuristics) that lead to testable software.
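Before the broader characteristics, the fault injection bullet above can be sketched at the application level: wrap the low level call so a chosen operation fails on demand, then confirm the error handling path runs. The file name and fallback behavior below are invented for illustration.

```python
import builtins

# Hypothetical fault-injection sketch: a wrapper around open() that
# simulates an I/O error for one target file, exercising error handling.
real_open = builtins.open

def faulty_open(path, *args, **kwargs):
    if path == "config.ini":  # assumed target of the injected fault
        raise OSError("injected fault: disk read error")
    return real_open(path, *args, **kwargs)

def load_config(open_fn=faulty_open):
    """Code under test: must fall back to defaults on read failure."""
    try:
        with open_fn("config.ini") as f:
            return f.read()
    except OSError:
        return "defaults"  # the error-handling path we want exercised

print(load_config())  # the injected fault forces the fallback: "defaults"
```

Passing the open function as a parameter keeps the injection local to the test, so no global state is patched.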

Categories of heuristics of software testing:
• Operability: The better it works, the more efficiently it can be tested. The system should have few bugs, no bugs should block the execution of tests, and the product should evolve in functional stages (simultaneous development and testing).
• Observability: What we see is what we test.
  - Distinct output should be generated for each input.
  - Current and past system states and variables should be visible during testing.
  - All factors affecting the output should be visible.
  - Incorrect output should be easily identified.
  - Source code should be easily accessible.
  - Internal errors should be automatically detected (through self-testing mechanisms) and reported.
• Controllability: The better we control the software, the more the testing process can be automated and optimized. Check that:
  - All outputs can be generated, and all code executed, through some combination of inputs.
  - Software and hardware states can be controlled directly by the test engineer.
  - Input and output formats are consistent and structured.
  - Tests can be conveniently specified, automated and reproduced.

• Decomposability: By controlling the scope of testing, we can quickly isolate problems and perform effective and efficient testing. The software system should be built from independent modules which can be tested independently.
• Simplicity: The less there is to test, the more quickly we can test it. The points to consider here are functional simplicity (e.g. a minimum set of features), structural simplicity (e.g. a modularized architecture) and code simplicity (e.g. an adopted coding standard).
• Stability: The fewer the changes, the fewer the disruptions to testing. Changes to the software should be infrequent and controlled, and should not invalidate existing tests. The software should recover well from failures.
• Understandability: The more information we have, the smarter we will test. The testers should understand well the design, changes to the design, and the dependencies between internal, external and shared components. Technical documentation should be instantly accessible, accurate, well organized, specific and detailed.
• Suitability: The more we know about the intended use of the software, the better we can organize our testing to find important bugs.
The above heuristics can be used by a software engineer to develop a software configuration (i.e. program, data and documentation) that is convenient to test and verify.

6. When should Testing occur?
Wrong Assumption
Testing is sometimes incorrectly thought of as an after-the-fact activity, performed after programming is done for a product. Instead, testing should be performed at every development stage of the product. Test data sets must be derived, and their correctness and consistency should be monitored, throughout the development process. If we divide the lifecycle of software development into "Requirements Analysis", "Design", "Programming/Construction" and "Operation and Maintenance", then testing should accompany each of these phases.
If testing is isolated as a single phase late in the cycle, errors in the problem statement or design may incur exorbitant costs. Not only must the original error be corrected, but the entire structure built upon it must also be changed. Therefore, testing should not be isolated as an inspection activity; rather, testing should be involved throughout the SDLC in order to bring out a quality product.

Testing Activities in Each Phase

The following testing activities should be performed during the phases:
• Requirements Analysis: (1) determine correctness; (2) generate functional test data.
• Design: (1) determine correctness and consistency; (2) generate structural and functional test data.
• Programming/Construction: (1) determine correctness and consistency; (2) generate structural and functional test data; (3) apply test data; (4) refine test data.
• Operation and Maintenance: (1) retest.

Now we consider these in detail.

Requirements Analysis

The following test activities should be performed during this stage.

• Invest in analysis at the beginning of the project. Having a clear, concise and formal statement of the requirements facilitates programming, communication, error analysis and test data generation. The requirements statement should record the following information and decisions:
1. Program function: what must the program do?
2. The form, format, data types and units for input.
3. The form, format, data types and units for output.
4. How exceptions, errors and deviations are to be handled.
5. For scientific computations, the numerical method or at least the required accuracy of the solution.
6. The hardware/software environment required or assumed (e.g. the machine, the operating system, and the implementation language).
Deciding the above issues is one of the activities related to testing that should be performed during this stage.

• Start developing the test set at the requirements analysis phase. Data should be generated that can be used to determine whether the requirements have been met. To do this, the input domain should be partitioned into classes of values that the program will treat in a similar manner, and for each class a representative element should be included in the test data. In addition, the following should also be included in the data set: (1) boundary values; (2) any non-extreme input values that would require special handling. The output domain should be treated similarly. Invalid input requires the same analysis as valid input.

• The correctness, consistency and completeness of the requirements should also be analyzed. Consider whether the correct problem is being solved, check for conflicts and inconsistencies among the requirements, and consider the possibility of missing cases.

Design

The design document aids in programming, communication, error analysis and test data generation.
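As a hedged illustration of the test-set derivation described above under requirements analysis, the sketch below partitions an input domain into valid and invalid classes and adds boundary values. The 18-60 age range and all names are invented for the example, not taken from the text.

```python
# Hypothetical example: deriving test data for an input field that
# accepts integers in a valid range (here, ages 18-60). The range and
# names are illustrative assumptions, not from the original document.

def derive_test_data(lo, hi):
    """Partition the input domain into valid/invalid equivalence classes
    and include boundary values, as described above."""
    return {
        # one representative element per equivalence class
        "valid_representative": (lo + hi) // 2,
        "invalid_below": lo - 10,
        "invalid_above": hi + 10,
        # boundary values: the edges of the valid class and their neighbours
        "boundaries": [lo - 1, lo, lo + 1, hi - 1, hi, hi + 1],
    }

data = derive_test_data(18, 60)
print(data["boundaries"])  # -> [17, 18, 19, 59, 60, 61]
```

Note that the invalid classes get the same treatment as the valid one, matching the point above that invalid input requires the same analysis as valid input.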
The requirements statement and the design document should together give the problem and the organization of the solution, i.e. what the program will do and how it will be done. The design document should contain:
• Principal data structures.
• Functions, algorithms, heuristics or special techniques used for processing.
• The program organization, how it will be modularized and categorized into external and internal interfaces.
• Any additional information.

Here the testing activities should consist of:
• Analysis of the design to check its completeness and consistency: the total process should be analyzed to determine that no steps or special cases have been overlooked. Internal interfaces, I/O handling and data structures should especially be checked for inconsistencies.

• Analysis of the design to check whether it satisfies the requirements: check whether both the requirements and design documents contain the same form, format and units used for input and output, and also that all functions listed in the requirements document have been included in the design document. Selected test data generated during the requirements analysis phase should be manually simulated to determine whether the design will yield the expected values.
• Generation of test data based on the design: the tests generated should cover the structure as well as the internal functions of the design, such as the data structures, algorithms, functions, heuristics and general program structure. Standard, extreme and special values should be included, and the expected output should be recorded in the test data.
• Re-examination and refinement of the test data set generated at the requirements analysis phase.

The first two steps should also be performed by a colleague, not only by the designer/developer.

Programming/Construction

Here the main testing points are:
• Check the code for consistency with the design: the areas to check include modular structure, module interfaces, data structures, functions, algorithms and I/O handling.
• Perform the testing process in an organized and systematic manner, with test runs dated, annotated and saved. A plan or schedule can be used as a checklist to help the programmer organize testing efforts. If errors are found and changes made to the program, all tests involving the erroneous segment (including those which previously succeeded) must be rerun and recorded.
• Ask a colleague for assistance: some independent party, other than the programmer of the specific part of the code, should analyze the development product at each phase. The programmer should explain the product to that party, who will then question the logic and search for errors with a checklist to guide the search.
This is needed to locate errors the programmer has overlooked.
• Use available tools: the programmer should be familiar with the various compilers and interpreters available on the system for the implementation language being used, because they differ in their error analysis and code generation capabilities.
• Apply stress to the program: testing should exercise and stress the program structure, the data structures, the internal functions and the externally visible functions or functionality. Both valid and invalid data should be included in the test set.
• Test one at a time: pieces of code, individual modules and small collections of modules should be exercised separately before they are integrated into the total program, one by one. Errors are easier to isolate when the number of potential interactions is kept small. Instrumentation (insertion of some code into the program solely to measure various program characteristics) can be useful here. A tester should perform array bound checks, check loop control variables, determine whether key data values are within permissible ranges, trace program execution, and count the number of times a group of statements is executed.
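The instrumentation idea above, code inserted solely to measure program characteristics, can be sketched as follows. The counter name, the `process` function and its value range are invented purely for illustration:

```python
# Minimal sketch of instrumentation: a probe counts how often a block of
# statements executes, and a range check guards a key data value.
# All names and the 0-100 range are illustrative assumptions.

execution_counts = {"loop_body": 0}

def process(values, lo=0, hi=100):
    out = []
    for v in values:
        execution_counts["loop_body"] += 1               # instrumentation probe
        assert lo <= v <= hi, f"value {v} out of range"  # permissible-range check
        out.append(v * 2)
    return out

process([1, 50, 100])
print(execution_counts["loop_body"])  # -> 3 (loop body ran three times)
```

In a real setting the probes are removed or disabled once the characteristics have been measured, so they do not affect production behaviour.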

• Measure testing coverage / When should testing stop? If errors are still found every time the program is executed, testing should continue. Because errors tend to cluster, modules appearing particularly error-prone require special scrutiny. The metrics used to measure testing thoroughness include statement testing (whether each statement in the program has been executed at least once), branch testing (whether each exit from each branch has been executed at least once) and path testing (whether all logical paths, which may involve repeated execution of various segments, have been executed at least once). Statement testing is the coverage metric most frequently used, as it is relatively simple to implement. The amount of testing depends on the cost of an error: critical programs or functions require more thorough testing than less significant functions.

Operation and Maintenance

Corrections, modifications and extensions are bound to occur even for small programs, and testing is required every time there is a change. Testing during maintenance is termed regression testing. The test set, the test plan, and the test results for the original program should exist. Modifications must be made to accommodate the program changes, and then all portions of the program affected by the modifications must be re-tested. After regression testing is complete, the program and test documentation must be updated to reflect the changes.

7. The Test Development Life Cycle (TDLC)

Usually, testing is considered a part of the System Development Life Cycle. From our practical experience, we framed this Test Development Life Cycle. The diagram does not depict where and when you write your Test Plan and Test Strategy documents, but it is understood that these documents should be ready before you begin your testing activities. Ideally, the Test Plan and Test Strategy documents are made at the same time as the Project Plan and Project Strategy.

Test Development Life Cycle (TDLC)

8. When Should Testing Stop?

"When to stop testing" is one of the most difficult questions for a test engineer. The following are a few common test stop criteria:
1. All the high priority bugs are fixed.
2. The rate at which bugs are found is too small.
3. The testing budget is exhausted.
4. The project duration is completed.
5. The risk in the project is under the acceptable limit.

Practically, we feel that the decision to stop testing is based on the level of risk acceptable to management. As testing is a never-ending process, we can never assume that 100% testing has been done; we can only minimize the risk of shipping the product to the client with X% of testing done. The risk can be measured by risk analysis, but for a small-duration / low-budget / low-resource project, risk can be deduced by simply:
• Measuring test coverage.
• Counting the number of test cycles.
• Counting the number of high priority bugs.
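The first of these proxies, measuring test coverage, can be sketched with the statement-testing metric described earlier: executed statements divided by total statements. The statement line sets below are invented for the example.

```python
# Illustrative computation of the statement-coverage metric:
# coverage = statements executed at least once / total statements.
# The statement identifiers here are made up for the sketch.

def statement_coverage(all_stmts, executed):
    """Fraction of statements exercised at least once."""
    all_set = set(all_stmts)
    return len(set(executed) & all_set) / len(all_set)

total = range(1, 11)            # a program with 10 statements
hit = [1, 2, 3, 4, 5, 6, 7, 8]  # statements a test run touched
print(f"{statement_coverage(total, hit):.0%}")  # -> 80%
```

The same shape of calculation applies to branch or path coverage, with branches or paths in place of statements.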

9. Verification Strategies

What is 'Verification'? Verification is the process of evaluating a system or component to determine whether the products of a given development phase satisfy the conditions imposed at the start of that phase.

What is the importance of the verification phase? The verification process helps in detecting defects early and preventing their leakage downstream. Thus, the higher cost of later detection and rework is eliminated.

9.1 Review

A review is a process or meeting during which a work product, or set of work products, is presented to project personnel, managers, users, customers, or other interested parties for comment or approval. The main goal of reviews is to find defects. Reviews are a good complement to testing to help assure quality. A few purposes of SQA reviews are as follows:
• Assure the quality of deliverables before the project moves to the next stage.
• Once a deliverable has been reviewed, revised as required, and approved, it can be used as a basis for the next stage in the life cycle.

What are the various types of reviews? Types of reviews include Management Reviews, Technical Reviews, Inspections, Walkthroughs and Audits.

Management Reviews

Management reviews are performed by those directly responsible for the system in order to monitor progress, determine the status of plans and schedules, and confirm requirements and their system allocation. The main objectives of management reviews can be categorized as follows:
• Validate, from a management perspective, that the project is making progress according to the project plan.
• Ensure that deliverables are ready for management approvals.
• Resolve issues that require management's attention.
• Identify any project bottlenecks.
• Keep the project in control.
Decisions made during such reviews include corrective actions, changes in the allocation of resources, or changes to the scope of the project.

In management reviews, the following software products are reviewed:
• Audit reports
• Contingency plans
• Installation plans
• Risk management plans
• Software Q/A

The participants of the review play the roles of Decision-Maker, Review Leader, Recorder, Management Staff, and Technical Staff.

Technical Reviews

Technical reviews confirm that the product conforms to specifications; adheres to regulations, standards, guidelines and plans; that changes are properly implemented; and that changes affect only those system areas identified by the change specification. The main objectives of technical reviews can be categorized as follows:
• Ensure that the software conforms to the organization's standards.
• Ensure that any changes in the development procedures (design, coding, testing) are implemented per the organization's pre-defined standards.

In technical reviews, the following software products are reviewed:

• Software requirements specification
• Software design description
• Software test documentation
• Software user documentation
• Installation procedure
• Release notes

The participants of the review play the roles of Decision-Maker, Review Leader, Recorder, and Technical Staff.

What is a Requirements Review? A process or meeting during which the requirements for a system, hardware item, or software item are presented to project personnel, managers, users, customers, or other interested parties for comment or approval. Types include the system requirements review and the software requirements review.

Who is involved in a Requirements Review?
• Product management leads the Requirements Review. Members from every affected department participate in the review.

Input criteria: The software requirements specification is the essential document for the review. A checklist can be used for the review.

Exit criteria: Exit criteria include the filled and completed checklist with the reviewers' comments and suggestions, and re-verification of whether they are incorporated in the documents.

What is a Design Review? A process or meeting during which a system, hardware, or software design is presented to project personnel, managers, users, customers, or other interested parties for comment or approval. Types include the critical design review, preliminary design review, and system design review.

Who is involved in a Design Review?
• A QA team member leads the design review. Members from the development team and QA team participate in the review.

Input criteria: The design document is the essential document for the review. A checklist can be used for the review.

Exit criteria: Exit criteria include the filled and completed checklist with the reviewers' comments and suggestions, and re-verification of whether they are incorporated in the documents.

What is a Code Review? A meeting at which software code is presented to project personnel, managers, users, customers, or other interested parties for comment or approval.

Who is involved in a Code Review?
• A QA team member leads the code review (in case the QA team is only involved in black box testing, the development team lead chairs the review team). Members from the development team and QA team participate in the review.

Input criteria: The coding standards document and the source file are the essential documents for the review. A checklist can be used for the review.

Exit criteria: Exit criteria include the filled and completed checklist with the reviewers' comments and suggestions, and re-verification of whether they are incorporated in the documents.

9.2 Walkthrough

A walkthrough is a static analysis technique in which a designer or programmer leads members of the development team and other interested parties through a segment of documentation or code, and the participants ask questions and make comments about possible errors, violations of development standards, and other problems. The objectives of a walkthrough can be summarized as follows:
• Detect errors early.
• Ensure that (re)established standards are followed.
• Train and exchange technical information among the project teams which participate in the walkthrough.
• Increase the quality of the project, thereby improving the morale of the team members.

The participants in walkthroughs assume one or more of the following roles:
a) Walkthrough leader
b) Recorder
c) Author
d) Team member

To consider a review a systematic walkthrough, a team of at least two members shall be assembled. Roles may be shared among the team members. The walkthrough leader or the author may serve as the recorder, and the walkthrough leader may be the author. Individuals holding management positions over any member of the walkthrough team shall not participate in the walkthrough.
Input to the walkthrough shall include the following:
a) A statement of objectives for the walkthrough
b) The software product being examined
c) Standards that are in effect for the acquisition, supply, development, operation, and/or maintenance of the software product

Input to the walkthrough may also include the following:
d) Any regulations, standards, guidelines, plans, and procedures against which the software product is to be inspected
e) Anomaly categories

The walkthrough shall be considered complete when:
a) The entire software product has been examined
b) Recommendations and required actions have been recorded
c) The walkthrough output has been completed

9.3 Inspection

An inspection is a static analysis technique that relies on visual examination of development products to detect errors, violations of development standards, and other problems. Types include code inspections, design inspections, architectural inspections, testware inspections, etc. The participants in inspections assume one or more of the following roles:
a) Inspection leader
b) Recorder
c) Reader
d) Author
e) Inspector

All participants in the review are inspectors. The author shall not act as inspection leader and should not act as reader or recorder. Other roles may be shared among the team members, and individual participants may act in more than one role. Individuals holding management positions over any member of the inspection team shall not participate in the inspection.

Input to the inspection shall include the following:
a) A statement of objectives for the inspection
b) The software product to be inspected
c) Documented inspection procedure
d) Inspection reporting forms
e) Current anomalies or issues list

Input to the inspection may also include the following:
f) Inspection checklists
g) Any regulations, standards, guidelines, plans, and procedures against which the software product is to be inspected
h) Hardware product specifications
i) Hardware performance data
j) Anomaly categories

The individuals responsible for the software product may make additional reference material available when requested by the inspection leader.

The purpose of the exit criteria is to bring an unambiguous closure to the inspection meeting. The exit decision shall determine whether the software product meets the inspection exit criteria and shall prescribe any appropriate rework and verification. Specifically, the inspection team shall identify the software product disposition as one of the following:
a) Accept with no or minor rework. The software product is accepted as is, or with only minor rework (for example, rework that would require no further verification).
b) Accept with rework verification. The software product is to be accepted after the inspection leader or a designated member of the inspection team (other than the author) verifies the rework.
c) Re-inspect. Schedule a re-inspection to verify the rework. At a minimum, a re-inspection shall examine the software product areas changed to resolve anomalies identified in the last inspection, as well as side effects of those changes.

10. Testing Types and Techniques

Testing Types

Testing types refer to different approaches towards testing a computer program, system or product. The two types of testing are black box testing and white box testing, both of which are discussed in detail in this chapter. Another type, termed gray box testing or hybrid testing, is evolving presently; it combines the features of the two types.

Testing Techniques

Testing techniques refer to different methods of testing particular features of a computer program, system or product. Each testing type has its own testing techniques, while some techniques combine the features of both types. Some techniques are:
• Error and anomaly detection technique
• Interface checking
• Physical units checking
• Loop testing (discussed in detail in this chapter)
• Basis path testing / McCabe's cyclomatic number (discussed in detail in this chapter)
• Control structure testing (discussed in detail in this chapter)
• Error guessing (discussed in detail in this chapter)

• Boundary value analysis (discussed in detail in this chapter)
• Graph based testing (discussed in detail in this chapter)
• Equivalence partitioning (discussed in detail in this chapter)
• Instrumentation based testing
• Random testing
• Domain testing
• Halstead's software science
• And many more

Some of these, and many others, are discussed in the later sections of this chapter.

Difference between Testing Types and Testing Techniques

Testing types deal with what aspect of the computer software is tested, while testing techniques deal with how a specific part of the software is tested. That is, testing types indicate whether we are testing the function or the structure of the software: we may test each function of the software to see if it is operational, or we may test the internal components of the software to check whether its internal workings are according to specification. A 'testing technique', on the other hand, means what methods or ways are applied, or what calculations are done, to test a particular feature of the software (sometimes we test the interfaces, sometimes the segments, sometimes loops, etc.).

How to Choose a Black Box or White Box Test?

White box testing is concerned only with testing the software product; it cannot guarantee that the complete specification has been implemented. Black box testing is concerned only with testing the specification; it cannot guarantee that all parts of the implementation have been tested. Thus black box testing is testing against the specification and will discover faults of omission, indicating that part of the specification has not been fulfilled. White box testing is testing against the implementation and will discover faults of commission, indicating that part of the implementation is faulty. In order to completely test a software product, both black and white box testing are required.
White box testing is much more expensive (in terms of resources and time) than black box testing. It requires the source code to be produced before the tests can be planned, and it is much more laborious in the determination of suitable input data and in determining whether the software is or is not correct. It is advised to start test planning with a black box testing approach as soon as the specification is available. White box tests are to be planned as soon as the Low Level Design (LLD) is complete; the Low Level Design will address all the algorithms and coding style. The paths should then be checked against the black box test plan, and any additional required test cases should be determined and applied.

The consequences of test failure at the initiative/requirements stage are very expensive. A failure of a test case may result in a change, which requires all black box testing to be repeated and the white box paths to be re-determined. The cheaper option is to regard the process of testing as one of quality assurance rather than quality control: the intention is that sufficient quality is put into all previous design and production stages, so that testing can be expected to reveal the presence of very few faults, rather than testing being relied upon to discover any faults in the software, as in the case of quality control. A combination of black box and white box test considerations is still not a completely adequate test rationale.

10.1 White Box Testing

What is WBT? White box testing involves looking at the structure of the code. When you know the internal structure of a product, tests can be conducted to ensure that the internal operations are performed according to the specification and that all internal components have been adequately exercised. In other words, WBT tends to involve covering the specification in the code. Code coverage is defined in six types, as listed below:
• Segment coverage: each segment of code between control structures is executed at least once.
• Branch coverage or node testing: each branch in the code is taken in each possible direction at least once.
• Compound condition coverage: when there are multiple conditions, you must test not only each direction but also each possible combination of conditions, which is usually done by using a 'truth table'.
• Basis path testing: each independent path through the code is taken in a predetermined order. This point is discussed further in another section.
• Data flow testing (DFT): in this approach you track specific variables through each possible calculation, thus defining the set of intermediate paths through the code, i.e. those based on each piece of code chosen to be tracked. Even though the paths are considered independent, dependencies across multiple paths are not really tested for by this approach. DFT tends to reflect dependencies, but mainly through sequences of data manipulation. This approach tends to uncover bugs such as variables used but not initialized, or declared but not used, and so on.
• Path testing: path testing is where all possible paths through the code are defined and covered. This testing is extremely laborious and time consuming.
• Loop testing: in addition to the above measures, there are testing strategies based on loop testing.
These strategies relate to testing single loops, concatenated loops, and nested loops. Loops are fairly simple to test, unless dependencies exist among the loops or between a loop and the code it contains.

What do we do in WBT? In WBT, we use the control structure of the procedural design to derive test cases. Using WBT methods, a tester can derive test cases that:
• Guarantee that all independent paths within a module have been exercised at least once.
• Exercise all logical decisions on their true and false values.
• Execute all loops at their boundaries and within their operational bounds.
• Exercise internal data structures to ensure their validity.

White box testing (WBT) is also called structural or glass box testing.

Why WBT? We do WBT because black box testing is unlikely to uncover numerous sorts of defects in the program. These defects can be of the following nature:

• Logic errors and incorrect assumptions are inversely proportional to the probability that a program path will be executed. Errors tend to creep into our work when we design and implement functions, conditions or controls that are out of the program's mainstream.
• The logical flow of the program is sometimes counterintuitive, meaning that our unconscious assumptions about the flow of control and data may lead to design errors that are uncovered only when path testing starts.
• Typographical errors are random; some will be uncovered by syntax checking mechanisms, but others will go undetected until testing begins.

Skills Required

Theoretically, all we need to do in WBT is to define all logical paths, develop test cases to exercise them and evaluate the results, i.e. generate test cases to exercise the program logic exhaustively. For this we need to know the program well: we should know the specification and the code to be tested, and the related documents should be available to us. We must be able to tell the expected status of the program versus its actual status at any point during the testing process.

Limitations

Unfortunately, in WBT, exhaustive testing of a code presents certain logistical problems. Even for small programs, the number of possible logical paths can be very large. For instance, consider a 100-line C language program, with some basic data declarations, that contains two nested loops executing 1 to 20 times depending upon some initial input, with four if-then-else constructs inside the interior loop. There are then approximately 10^14 logical paths that would have to be exercised to test the program exhaustively. A magic test processor that could develop a single test case, execute it and evaluate the results in one millisecond would require 3170 years of continuous work for this exhaustive testing, which is certainly impractical. Exhaustive WBT is impossible for large software systems. But that doesn't mean WBT should be considered impractical.
Limited WBT, in which a limited number of important logical paths are selected and exercised and important data structures are probed for validity, is both practical and effective. It is suggested that white box and black box testing techniques can be coupled to provide an approach that validates the software interface while selectively ensuring that the internal workings of the software are correct.

Tools used for white box testing: a few test automation tool vendors offer white box testing tools which:
1) Provide run-time error and memory leak detection;
2) Record the exact amount of time the application spends in any given block of code, for the purpose of finding inefficient code bottlenecks; and
3) Pinpoint areas of the application that have and have not been executed.

10.1.1 Basis Path Testing

Basis path testing is a white box testing technique first proposed by Tom McCabe. The basis path method enables the derivation of a logical complexity measure of a procedural design, and the use of this measure as a guide for defining a basis set of execution paths. Test cases derived to exercise the basis set are guaranteed to execute every statement in the program at least once during testing.

10.1.2 Flow Graph Notation

The flow graph depicts logical control flow using a diagrammatic notation. Each structured construct has a corresponding flow graph symbol.

10.1.3 Cyclomatic Complexity

Cyclomatic complexity is a software metric that provides a quantitative measure of the logical complexity of a program. When used in the context of the basis path testing method, the value computed for cyclomatic complexity defines the number of independent paths in the basis set of a program, and provides us an upper bound for the number of tests that must be conducted to ensure that all statements have been executed at least once. An independent path is any path through the program that introduces at least one new set of processing statements or a new condition.

Computing Cyclomatic Complexity

Cyclomatic complexity has a foundation in graph theory and provides us with an extremely useful software metric. Complexity is computed in one of three ways:
1. The number of regions of the flow graph corresponds to the cyclomatic complexity.
2. Cyclomatic complexity, V(G), for a flow graph G is defined as V(G) = E - N + 2, where E is the number of flow graph edges and N is the number of flow graph nodes.
3. Cyclomatic complexity, V(G), for a flow graph G is also defined as V(G) = P + 1, where P is the number of predicate nodes contained in the flow graph G.

10.1.4 Graph Matrices

The procedure for deriving the flow graph, and even determining a set of basis paths, is amenable to mechanization. To develop a software tool that assists in basis path testing, a data structure called a graph matrix can be quite useful. A graph matrix is a square matrix whose size is equal to the number of nodes in the flow graph. Each row and column corresponds to an identified node, and matrix entries correspond to connections between nodes.

10.1.5 Control Structure Testing

Described below are some of the variations of control structure testing.

Condition Testing

Condition testing is a test case design method that exercises the logical conditions contained in a program module.
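The two formulas for V(G) given above can be cross-checked on a small flow graph. The graph below is a made-up example (an if/else followed by a loop, giving two predicate nodes), used only to illustrate the arithmetic:

```python
# Sketch: computing cyclomatic complexity two ways on an invented flow graph.
# V(G) = E - N + 2 (edges and nodes), and V(G) = P + 1 (predicate nodes).

def cyclomatic_from_graph(edges, nodes):
    return len(edges) - len(nodes) + 2

def cyclomatic_from_predicates(predicate_count):
    return predicate_count + 1

# Flow graph for: if/else (node 2 decides), then a while loop (node 6 decides).
nodes = [1, 2, 3, 4, 5, 6, 7]
edges = [(1, 2), (2, 3), (2, 4), (3, 5), (4, 5), (5, 6), (6, 5), (6, 7)]

v1 = cyclomatic_from_graph(edges, nodes)   # 8 edges - 7 nodes + 2 = 3
v2 = cyclomatic_from_predicates(2)         # nodes 2 and 6 are predicates
print(v1, v2)  # -> 3 3, so at most 3 basis-path test cases are needed
```

Both formulas agree, as they must for a well-formed flow graph, and the result bounds the number of basis-path test cases.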
Data Flow Testing

The data flow testing method selects test paths of a program according to the locations of definitions and uses of variables in the program.

10.1.6 Loop Testing

Loop testing is a white box testing technique that focuses exclusively on the validity of loop constructs. Four classes of loops can be defined: simple loops, concatenated loops, nested loops, and unstructured loops.

Simple Loops

The following set of tests can be applied to simple loops, where 'n' is the maximum number of allowable passes through the loop:
1. Skip the loop entirely.
2. Only one pass through the loop.
3. Two passes through the loop.
4. 'm' passes through the loop, where m < n.
5. n-1, n, n+1 passes through the loop.

Nested Loops

If we extend the test approach from simple loops to nested loops, the number of possible tests grows geometrically as the level of nesting increases.
1. Start at the innermost loop. Set all other loops to minimum values.
2. Conduct simple loop tests for the innermost loop while holding the outer loops at their minimum iteration parameter values. Add other tests for out-of-range or excluded values.

3. Work outward, conducting tests for the next loop, but keep all other outer loops at minimum values and other nested loops at "typical" values.
4. Continue until all loops have been tested.

Concatenated Loops
Concatenated loops can be tested using the approach defined for simple loops, provided each of the loops is independent of the other. However, if two loops are concatenated and the loop counter for loop 1 is used as the initial value for loop 2, then the loops are not independent.

Unstructured Loops
Whenever possible, this class of loops should be redesigned to reflect the use of structured programming constructs.

10.2 Black Box Testing
Black box testing is a test design method. It treats the system as a "black box", so it does not explicitly use knowledge of the internal structure; in other words, the test engineer need not know the internal workings of the "black box". It focuses on the functionality of the module. Some people like to call black box testing behavioral, functional, opaque-box, or closed-box testing. While the term black box is most popularly used, many people prefer the terms "behavioral" and "structural" for black box and white box respectively. Behavioral test design is slightly different from black-box test design because the use of internal knowledge isn't strictly forbidden, but it is still discouraged.

We feel that there is a trade-off between the white box and black box approaches used to test a product: there are some bugs that cannot be found using only black box or only white box testing. If the test cases are extensive and the test inputs are drawn from a large sample space, it is usually possible to find the majority of the bugs through black box testing.

Tools used for black box testing:
Many tool vendors have been producing tools for automated black box and automated white box testing for several years. The basic functional or regression testing tools capture the results of black box tests in a script format.
Once captured, these scripts can be executed against future builds of an application to verify that new functionality hasn't disabled previous functionality.

Advantages of Black Box Testing
- The tester can be non-technical.
- This testing is most likely to find the bugs that a user would find.
- Testing helps to identify vagueness and contradictions in the functional specifications.
- Test cases can be designed as soon as the functional specifications are complete.

Disadvantages of Black Box Testing
- Chances of repeating tests that have already been done by the programmer.
- The test inputs need to come from a large sample space.
- It is difficult to identify all possible inputs in limited testing time, so writing test cases is slow and difficult.
- Chances of unidentified paths remaining untested.

10.2.1 Graph Based Testing Methods
Software testing begins by creating a graph of important objects and their relationships, and then devising a series of tests that will cover the graph so that each object and relationship is exercised and errors are uncovered.

10.2.2 Error Guessing

Error Guessing comes with experience with the technology and the project. Error Guessing is the art of guessing where errors may be hidden. There are no specific tools and techniques for this, but you can write test cases depending on the situation: either when reading the functional documents, or when you are testing and find an error that you have not documented.

10.2.3 Boundary Value Analysis
Boundary Value Analysis (BVA) is a test data selection technique (a functional testing technique) in which the extreme values are chosen. Boundary values include maximum, minimum, just inside/outside boundaries, typical values, and error values. The hope is that, if a system works correctly for these special values, then it will work correctly for all values in between.
- Extends equivalence partitioning.
- Test both sides of each boundary.
- Look at output boundaries for test cases too.
- Test min, min-1, max, max+1, and typical values.
- BVA focuses on the boundary of the input space to identify test cases.
- The rationale is that errors tend to occur near the extreme values of an input variable.

There are two ways to generalize the BVA technique:
1. By the number of variables
   - For n variables, BVA yields 4n + 1 test cases.
2. By the kinds of ranges
   - Generalizing ranges depends on the nature or type of the variables:
     - NextDate has a variable Month, and the range could be defined as {Jan, Feb, ..., Dec}; Min = Jan, Min+1 = Feb, etc.
     - Triangle had a declared range of {1, 20,000}.
     - Boolean variables have extreme values True and False, but there is no clear choice for the remaining three values.

Advantages of Boundary Value Analysis
1. Robustness testing: Boundary Value Analysis plus values that go beyond the limits.
2. Min-1, Min, Min+1, Nom, Max-1, Max, Max+1.
3. Forces attention to exception handling.
4. For strongly typed languages, robust testing results in run-time errors that abort normal execution.

Limitations of Boundary Value Analysis
BVA works best when the program is a function of several independent variables that represent bounded physical quantities.
1. Independent variables
   - NextDate test cases derived from BVA would be inadequate: focusing on the boundary would not place emphasis on February or leap years.
   - Dependencies exist among NextDate's Day, Month, and Year.
   - Test cases are derived without consideration of the function.
2. Physical quantities
   - An example of physical variables being tested: telephone numbers. What faults might be revealed by numbers such as 000-0000, 000-0001, 555-5555, 999-9998, 999-9999?

10.2.4 Equivalence Partitioning
Equivalence partitioning is a black box testing method that divides the input domain of a program into classes of data from which test cases can be derived. EP can be defined according to the following guidelines:

1. If an input condition specifies a range, one valid and two invalid classes are defined.
2. If an input condition requires a specific value, one valid and two invalid equivalence classes are defined.
3. If an input condition specifies a member of a set, one valid and one invalid equivalence class are defined.
4. If an input condition is Boolean, one valid and one invalid class are defined.

10.2.5 Comparison Testing
There are situations where independent versions of software are developed for critical applications, even when only a single version will be used in the delivered computer-based system. These independent versions form the basis of a black box testing technique called comparison testing, or back-to-back testing.

10.2.6 Orthogonal Array Testing
The Orthogonal Array Testing Strategy (OATS) is a systematic, statistical way of testing pair-wise interactions by deriving a suitably small set of test cases (from a large number of possibilities).

11. Designing Test Cases
There are various techniques with which you can design test cases. For example, the following gives you an overview of how to derive test cases using the basis path method. The basis path testing method can be applied to a procedural design or to source code. The following steps can be applied to derive the basis set:
1. Using the design or code as a foundation, draw the corresponding flow graph.
2. Determine the cyclomatic complexity of the resultant flow graph.
3. Determine a basis set of linearly independent paths.
4. Prepare test cases that will force execution of each path in the basis set.

Let us now see how to design test cases in a generic manner:
1. Understand the requirements document.
2. Break the requirements into smaller requirements (if it improves your testability).
3. For each requirement, decide what technique you should use to derive the test cases.
For example, if you are testing a Login page, you need to write test cases based on error guessing, and also negative cases for handling failures.
4. Have a Traceability Matrix as follows:

Requirement No (in RD) | Requirement | Test Case No

What this Traceability Matrix provides you is the coverage of testing. Keep filling in the Traceability Matrix as you complete writing test cases for each requirement.

12. Validation Phase
The Validation Phase comes into the picture after the software is ready, or while the code is being written. There are various techniques and testing types that can be appropriately used while performing the testing activities. Let us examine a few of them.

12.1 Unit Testing
This is a typical scenario of a manual unit testing activity: a unit is allocated to a programmer for programming. The programmer has to use the 'Functional Specifications' document as the input for his work. The programmer prepares 'Program Specifications' for his unit from the Functional Specifications. Program Specifications describe the programming approach and coding tips for the unit's coding. Using these 'Program Specifications' as input, the programmer prepares a 'Unit Test Cases' document for that particular unit. A 'Unit Test Cases Checklist' may be used to check the completeness of the Unit Test Cases document.
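The boundary value analysis scheme described earlier (4n + 1 test cases for n variables) can be sketched as a small generator. The ranges below are illustrative, not from the text.

```python
# Sketch: generating boundary-value-analysis test inputs for n independent
# variables. For each variable we take min, min+1, max-1, and max while all
# other variables stay at their nominal values, plus one all-nominal case,
# which yields 4n + 1 test cases in total.

def bva_test_cases(ranges):
    """ranges: list of (min, nominal, max) tuples, one per variable."""
    nominal = [nom for _, nom, _ in ranges]
    cases = [tuple(nominal)]                 # the single all-nominal case
    for i, (lo, _, hi) in enumerate(ranges):
        for value in (lo, lo + 1, hi - 1, hi):
            case = list(nominal)
            case[i] = value
            cases.append(tuple(case))
    return cases

# Two variables, e.g. a day (1..31) and a month (1..12):
cases = bva_test_cases([(1, 15, 31), (1, 6, 12)])
print(len(cases))  # 4*2 + 1 = 9
```

Robustness testing extends the same generator by also including the out-of-range values min-1 and max+1 for each variable.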

'Program Specifications' and 'Unit Test Cases' are reviewed and approved by a Quality Assurance Analyst or by a peer programmer. The programmer implements some functionality for the system to be developed. The same is tested by referring to the unit test cases. If any defects are found while testing that functionality, they are recorded using whichever defect logging tool is applicable. The programmer fixes the bugs found and retests for any errors.

Stubs and Drivers
A software application is made up of a number of 'Units', where the output of one 'Unit' goes in as the 'Input' of another unit. E.g. a 'Sales Order Printing' program takes a 'Sales Order' as input, which is actually the output of the 'Sales Order Creation' program. Due to such interfaces, independent testing of a unit becomes impossible. But that is what we want to do: we want to test a unit in isolation! So here we use a 'Stub' and a 'Driver'.

A 'Driver' is a piece of software that drives (invokes) the unit being tested. A driver creates the necessary 'Inputs' required for the unit and then invokes the unit. A unit may reference another unit in its logic. A 'Stub' takes the place of such a subordinate unit during unit testing. A 'Stub' is a piece of software that works similarly to a unit which is referenced by the unit being tested, but it is much simpler than the actual unit. A stub works as a 'stand-in' for the subordinate unit and provides the minimum required behavior for that unit. The programmer needs to create such 'Drivers' and 'Stubs' to carry out unit testing. Both the driver and the stub are kept at a minimum level of complexity, so that they do not induce any errors while testing the unit in question.

Example: for unit testing of the 'Sales Order Printing' program, a 'Driver' program will have code which creates Sales Order records using hard-coded data and then calls the 'Sales Order Printing' program.
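The driver and stub ideas above can be sketched in a few lines. All of the names here (print_sales_order, discount_stub, and the data format) are hypothetical stand-ins for the Sales Order example, not real APIs.

```python
# Sketch of a driver and a stub for the 'Sales Order Printing' example.
# All names and data are hypothetical.

def print_sales_order(order, discount_unit):
    """Unit under test: formats a sales order using a discount sub-unit."""
    discount = discount_unit(order["amount"])
    return f"Order {order['id']}: {order['amount'] - discount:.2f}"

# Stub: stands in for the complex discount-calculation unit and simply
# returns fixed discount data.
def discount_stub(amount):
    return 10.0

# Driver: creates the necessary inputs from hard-coded data and invokes
# the unit being tested.
def driver():
    order = {"id": "SO-001", "amount": 100.0}   # hard-coded test data
    result = print_sales_order(order, discount_stub)
    assert result == "Order SO-001: 90.00"
    return result

print(driver())  # Order SO-001: 90.00
```

Because the stub returns a fixed discount, any failure of the assertion points at the printing unit itself, not at the discount calculation.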
Suppose this printing program uses another unit which calculates sales discounts by some complex calculations. Then the call to this unit will be replaced by a 'Stub', which will simply return fixed discount data.

Unit Test Cases
It must be clear by now that preparing the Unit Test Cases document (referred to as UTC hereafter) is an important task in the unit testing activity. Having a UTC which is complete with every possible test case leads to complete unit testing, and thus gives an assurance of a defect-free unit at the end of the unit testing stage. So let us discuss how to prepare a UTC. Think of the following aspects while preparing unit test cases:
- Expected functionality: write test cases against each functionality that is expected to be provided by the unit being developed. E.g. if an SQL script contains commands for creating one table and altering another table, then test cases should be written for testing the creation of one table and the alteration of another. It is important that User Requirements should be traceable to Functional Specifications, Functional Specifications to Program Specifications, and Program Specifications to Unit Test Cases. Maintaining such traceability ensures that the application fulfills the User Requirements.
- Input values:
  o Every input value: write test cases for each of the inputs accepted by the unit. E.g. if a Data Entry Form has 10 fields on it, write test cases for all 10 fields.
  o Validation of input: every input has a certain validation rule associated with it. Write test cases to validate this rule. Also, there can be cross-field validations, in which one field is enabled depending upon the input of another field. Test cases for these should not be missed. E.g. a combo box or list box has a valid set of values associated with it; a numeric field may accept only positive values.

An email address field must have an at sign (@) and a period (.) in it. A 'Sales Tax Code' entered by the user must belong to the 'State' specified by the user.
  o Boundary conditions: inputs often have minimum and maximum possible values; do not forget to write test cases for them. E.g. a field that accepts a 'percentage' on a Data Entry Form should accept inputs only from 1 to 100.
  o Limitations of data types: the variables that hold the data have value limits depending upon their data types. In the case of computed fields, it is very important to write cases to arrive at the upper limit value of the variables.
  o Computations: if any calculations are involved in the processing, write test cases to check the arithmetic expressions with all possible combinations of values.
- Output values: write test cases to generate scenarios which will produce all types of output values that are expected from the unit. E.g. a report can display one set of data if the user chooses a particular option, and another set of data for a different option. Write test cases to check each of these outputs. When the output is the result of some calculations being performed or some formulae being used, approximations play a major role and must be checked.
- Screen/report layout: screen layout or web page layout and report layout must be tested against the requirements. It should not happen that the screen or report looks beautiful and perfect, but the user wanted something entirely different! It should be ensured that pages and screens are consistent.
- Path coverage: a unit may have conditional processing which results in various paths the control can traverse through. A test case must be written for each of these paths.
- Assumptions: a unit may assume certain things in order to function. For example, a unit may need a database to be open. A test case must then be written to check that the unit reports an error if such assumptions are not met.
- Transactions: in the case of database applications, it is important to make sure that transactions are properly designed and that in no way does inconsistent data get saved in the database.
- Abnormal terminations: the behavior of the unit in case of abnormal termination should be tested.
- Error messages: error messages should be short, precise, and self-explanatory. They should be properly phrased and free of grammatical mistakes.

UTC Document
Given below is a simple format for a UTC document:

Test Case No. | Test Case Purpose | Procedure | Expected Result | Actual Result
An ID which can be referred to in other documents such as the 'Traceability Matrix', Root Cause Analysis of Defects, etc. | What to test | How to test | What should happen | What actually happened (this column can be omitted when a defect recording tool is used)

Note that as this is a sample, we have not provided columns for Pass/Fail and Remarks.

Example: let us say we want to write a UTC for the Data Entry Form below:
[Data Entry Form screenshot: fields include Item No. and Item Price]

Given below are some of the unit test cases for the above form:

Test Case No. 1
Purpose: Item no. to start with 'A' or 'B'.
Procedure: 1. Create a new record. 2. Type an item no. starting with 'A'. 3. Type an item no. starting with 'B'. 4. Type an item no. starting with any character other than 'A' and 'B'.
Expected Result: 2, 3. Should be accepted, and control should move to the next field. 4. Should not be accepted; an error message should be displayed and control should remain in the Item no. field.

Test Case No. 2
Purpose: Item Price to be between 1000 and 2000 if Item no. starts with 'A'.
Procedure: 1. Create a new record with an item no. starting with 'A'. 2. Specify price < 1000. 3. Specify price > 2000. 4. Specify price = 1000. 5. Specify price = 2000. 6. Specify a price between 1000 and 2000.
Expected Result: 2, 3. An error should be displayed, and control should remain in the Price field. 4, 5, 6. Should be accepted, and control should move to the next field.

UTC Checklist
A UTC checklist may be used while reviewing the UTC prepared by the programmer. Like any other checklist, it contains a list of questions which can be answered as either 'Yes' or 'No'. The 'Aspects' list given in Section 4.3 above can be referred to while preparing the UTC checklist. E.g. given below are some of the checkpoints in a UTC checklist:
1. Are test cases present for all form field validations?
2. Are boundary conditions considered?
3. Are error messages properly phrased?

Defect Recording
Defect recording can be done in the same UTC document, in the 'Actual Results' column. This column can be duplicated for subsequent iterations of unit testing. Defect recording can also be done using tools like Bugzilla, in which defects are stored in a database. Defect recording needs to be done with care. It should indicate the problem in a clear, unambiguous manner, and reproducing the defect should be easily possible from the defect information.
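The sample unit test cases above translate directly into automated checks. The sketch below assumes a hypothetical validate_item function implementing the form's two validation rules; it is an illustration, not the actual form's code.

```python
# Sketch: automating the sample UTC above. validate_item is a hypothetical
# implementation of the Data Entry Form's validation rules.

def validate_item(item_no, price):
    if item_no[:1] not in ("A", "B"):
        return False                  # Test case 1: item no. must start with A or B
    if item_no.startswith("A") and not (1000 <= price <= 2000):
        return False                  # Test case 2: price range for 'A' items
    return True

# Test case 2, procedure steps 2-6:
assert not validate_item("A100", 999)    # price < 1000 -> rejected
assert not validate_item("A100", 2001)   # price > 2000 -> rejected
assert validate_item("A100", 1000)       # price = 1000 -> accepted
assert validate_item("A100", 2000)       # price = 2000 -> accepted
assert validate_item("A100", 1500)       # price in range -> accepted
print("all checks for test case 2 passed")
```

Note how steps 4 and 5 are exactly the boundary values from boundary value analysis, and steps 2 and 3 are the just-outside values.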
Conclusion
Exhaustive unit testing filters out defects at an early stage in the development life cycle. It proves to be cost-effective and improves the quality of the software before the smaller pieces are put together to form an application as a whole. Unit testing should be done sincerely and meticulously; the efforts pay off well in the long run.

12.2 Integration Testing

Integration testing is a systematic technique for constructing the program structure while at the same time conducting tests to uncover errors associated with interfacing. The objective is to take unit-tested components and build a program structure that has been dictated by design. Usually, the following methods of integration testing are followed:
1. Top-down integration approach.
2. Bottom-up integration approach.

12.2.1 Top-Down Integration
Top-down integration testing is an incremental approach to the construction of the program structure. Modules are integrated by moving downward through the control hierarchy, beginning with the main control module. Modules subordinate to the main control module are incorporated into the structure in either a depth-first or breadth-first manner. The integration process is performed in a series of five steps:
1. The main control module is used as a test driver, and stubs are substituted for all components directly subordinate to the main control module.
2. Depending on the integration approach selected, subordinate stubs are replaced one at a time with actual components.
3. Tests are conducted as each component is integrated.
4. On completion of each set of tests, another stub is replaced with the real component.
5. Regression testing may be conducted to ensure that new errors have not been introduced.

12.2.2 Bottom-Up Integration
Bottom-up integration testing begins construction and testing with atomic modules (i.e. components at the lowest levels in the program structure). Because components are integrated from the bottom up, the processing required for components subordinate to a given level is always available, and the need for stubs is eliminated. A bottom-up integration strategy may be implemented with the following steps:
1. Low-level components are combined into clusters that perform a specific software sub-function.
2. A driver is written to coordinate test case input and output.
3. The cluster is tested.
4. Drivers are removed and clusters are combined, moving upward in the program structure.

12.3 System Testing
System testing concentrates on testing the complete system with a variety of techniques and methods. System testing comes into the picture after the unit and integration tests.

12.3.1 Compatibility Testing
Compatibility testing concentrates on testing whether a given application works well with third-party tools, software, or hardware platforms. For example, suppose you have developed a web application. The major compatibility issue is that the web site should work well in various browsers. Similarly, when you develop applications on one platform, you need to check whether the application works on other operating systems as well. This is the main goal of compatibility testing. Before you begin compatibility tests, our sincere suggestion is that you should have a cross-reference matrix between the various software and hardware, based on the application requirements. For example, suppose you are testing a web application. A sample list can be as follows:

Hardware                    Software                    Operating System
Pentium II, 128 MB RAM      IE 4.x, Opera, Netscape     Windows 95
Pentium III, 256 MB RAM     IE 5.x, Netscape            Windows XP
Pentium IV, 512 MB RAM      Mozilla                     Linux
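A cross-reference matrix like the sample above can be expanded into concrete test configurations. The sketch below takes the full cross product of browsers and operating systems; in practice a pair-wise strategy such as the orthogonal array testing described earlier would trim this list.

```python
# Sketch: expanding a compatibility matrix into concrete test configurations
# (full cross product of browsers and operating systems from the sample above).

from itertools import product

browsers = ["IE 5.x", "Netscape", "Opera", "Mozilla"]
systems = ["Windows 95", "Windows XP", "Linux"]

configurations = list(product(browsers, systems))
print(len(configurations))  # 4 browsers x 3 operating systems = 12 configs
for browser, os_name in configurations[:3]:
    print(f"test run: {browser} on {os_name}")
```

Each tuple becomes one environment in which the same functional test suite is executed.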

Compatibility tests are also performed for various client/server-based applications where the hardware changes from client to client. Compatibility testing is very crucial for organizations developing their own products. The products have to be checked for compatibility with competing and third-party tools, hardware, and software platforms. E.g. a call center product has been built for a solution with product X, but a client is interested in using it with product Y; then the issue of compatibility arises. It is important that the product is compatible with varying platforms. Within the same platform, the organization has to be watchful that with each new release the product is tested for compatibility. A good way to keep up with this would be to have a few resources assigned, alongside their routine tasks, to stay updated about such compatibility issues and plan for testing when and if the need arises.

The above example does not imply that companies which are not developing products can ignore this type of testing. Their case equally exists: if an application uses standard software, will it be able to run successfully with the newer versions too? Or, if a website runs on IE or Netscape, what will happen when it is opened in Opera or Mozilla? Here again it is best to keep these issues in mind and plan for compatibility testing in parallel, to avoid catastrophic failures and delays.

12.3.2 Recovery Testing
Recovery testing is a system test that forces the software to fail in a variety of ways and verifies that recovery is properly performed. If recovery is automatic, then re-initialization, checkpointing mechanisms, data recovery, and restart should be evaluated for correctness. If recovery requires human intervention, the mean-time-to-repair (MTTR) is evaluated to determine whether it is within acceptable limits.
12.3.3 Usability Testing
Usability is the degree to which a user can easily learn and use a product to achieve a goal. Usability testing is system testing which attempts to find any human-factor problems. A simpler description is testing the software from a user's point of view. Essentially it means testing software to prove or ensure that it is user-friendly, as distinct from testing the functionality of the software. In practical terms it includes ergonomic considerations, screen design, standardization, etc.

The idea behind usability testing is to have actual users perform the tasks for which the product was designed. If they can't do the tasks, or if they have difficulty performing them, the UI is not adequate and should be redesigned. It should be remembered that usability testing is just one of the many techniques that serve as a basis for evaluating the UI in a user-centered approach. Other techniques for evaluating a UI include inspection methods such as heuristic evaluations, expert reviews, card-sorting, matching tests or icon intuitiveness evaluations, and cognitive walkthroughs. Confusion regarding usage of the term can be avoided if we use 'usability evaluation' as the generic term and reserve 'usability testing' for the specific evaluation method based on user performance. Heuristic evaluation and usability inspection or cognitive walkthrough do not involve real users.

Usability testing often involves building prototypes of parts of the user interface, having representative users perform representative tasks, and seeing whether the appropriate users can perform the tasks. In other techniques, such as the inspection methods, it is not performance but someone's opinion of how users might perform that is offered as evidence that the UI is acceptable or not. This distinction between performance and opinion about performance is crucial. Opinions are subjective; whether a sample of users can accomplish what they want or not is objective.
Under many circumstances it is more useful to find out whether users can do what they want to do, rather than asking for someone's opinion.

PERFORMING THE TEST

1. Get a person who fits the user profile. Make sure that you are not getting someone who has worked on the product.
2. Sit them down in front of a computer, give them the application, and tell them a small scenario, like: "Thank you for volunteering to help make it easier for users to find what they are looking for. We would like you to answer several questions. There are no right or wrong answers. What we want to learn is why you make the choices you do, what is confusing, why you choose one thing and not another, etc. Just talk us through your search and let us know what you are thinking. We have a recorder which is going to capture what you say, so you will have to tell us what you are clicking on as you also tell us what you are thinking. Also, think aloud when you are stuck somewhere."
3. Now don't say anything. Sounds easy, but see if you can actually keep quiet.
4. Watch them use the application. If they ask you something, tell them you're not there. Then keep quiet again.
5. Start noting all the things you will have to change.
6. Afterwards, ask them what they thought and note it down.
7. Once the whole thing is done, thank the volunteer.

TOOLS AVAILABLE FOR USABILITY TESTING
• ErgoLight Usability Software offers comprehensive GUI quality solutions for the professional Windows application developer, including solutions for testing and evaluating the usability of Windows applications.
• The WebMetrics Tool Suite from the National Institute of Standards and Technology contains rapid, remote, and automated tools to help in producing usable web sites. The Web Static Analyzer Tool (WebSAT) checks the HTML of a web page against numerous usability guidelines. The output from WebSAT consists of the identification of potential usability problems, which should be investigated further through user testing. The Web Category Analysis Tool (WebCAT) lets the usability engineer quickly construct and conduct a simple category analysis across the web.
• Bobby, from the Center for Applied Special Technology, is a web-based public service offered by CAST that analyzes web pages for their accessibility to people with disabilities, as well as their compatibility with various browsers.
• DRUM, from Serco Usability Services, is a tool developed in close cooperation between human factors professionals and software engineers to provide a broad range of support for video-assisted observational studies.
• The Form Testing Suite from Corporate Research and Advanced Development, Digital Equipment Corporation, provides a test suite developed to test various web browsers. The test results section provides a description of the tests.

USABILITY LABS
• The Usability Center (ULAB) is a full-service organization which provides a "street-wise" approach to usability risk management and product usability excellence. It has custom-designed ULAB facilities.
• Usability Sciences Corporation has a usability lab in Dallas consisting of two large offices separated by a one-way mirror. The test room in each lab is equipped with multiple video cameras and audio equipment, as well as everything a user needs to operate the program. The video control and observation room features five monitors, a video recorder with special-effects switching, a two-way audio system, remote camera controls, a PC for test-log purposes, and a telephone for use as a help desk.
• UserWorks, Inc. (formerly Man-Made Systems) is a consulting firm in the Washington, DC area specializing in the design of user-product interfaces. UserWorks does analyses, market research, user interface design, rapid prototyping, product usability evaluations, competitive testing and analyses, ergonomic analyses, and human factors contract research. UserWorks offers several portable usability labs (audio-video data collection systems) for sale or rent, and an observational data-logging software product for sale.

• Lodestone Research has a usability testing laboratory with state-of-the-art audio and visual recording and testing equipment. All equipment has been designed to be portable so that it can be taken on the road. The lab consists of a test room and an observation/control room that can seat as many as ten observers. A/V equipment includes two (soon to be three) fully controllable S-VHS cameras, capture/feed capabilities for the test participant's PC via a scan converter and direct split signal (to VGA "slave" monitors in the observation room), up to eight video monitors and four VGA monitors for observer viewing, mixing/editing equipment, and "wiretap" capabilities to monitor and record both sides of a telephone conversation (e.g., if the participant calls customer support).
• Online Computer Library Center, Inc. provides insight into its usability test laboratory. It gives an overview of the infrastructure as well as the process being used in the laboratory.

END GOALS OF USABILITY TESTING
To summarize the goals: usability testing makes the software more user-friendly. The end result will be:
• Better quality software.
• Software that is easier to use.
• Software that is more readily accepted by users.
• A shorter learning curve for new users.

12.3.4 Security Testing
Security testing attempts to verify that protection mechanisms built into a system will, in fact, protect it from improper penetration. During security testing, password cracking, unauthorized entry into the software, and network security are all taken into consideration.

12.3.5 Stress Testing
Stress testing executes a system in a manner that demands resources in abnormal quantity, frequency, or volume. The following types of tests may be conducted during stress testing:
• Special tests may be designed that generate ten interrupts per second, when one or two is the average rate.
• Input data rates may be increased by an order of magnitude to determine how input functions will respond.
• Test cases that require maximum memory or other resources.
• Test cases that may cause excessive hunting for disk-resident data.
• Test cases that may cause thrashing in a virtual operating system.

12.3.6 Performance Testing
Performance testing of a web site is basically the process of understanding how the web application and its operating environment respond at various user load levels. In general, we want to measure the response time, throughput, and utilization of the web site while simulating attempts by virtual users to simultaneously access the site. One of the main objectives of performance testing is to maintain a web site with low response time, high throughput, and low utilization.

Response Time
Response time is the delay experienced between when a request is made to the server and when the server's response is received by the client. It is usually measured in units of time, such as seconds or milliseconds. Generally speaking, response time increases as the inverse of unutilized capacity: it increases slowly at low levels of user load, but increases rapidly as capacity is used up. Figure 1 demonstrates these typical characteristics of response time versus user load.

[Figure 1: Typical characteristics of latency versus user load]
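The load-versus-latency curve described above can be reproduced with a tiny simulation. The sketch below is illustrative: a fixed pool of "server threads" services requests that each take about 10 ms of work, and once concurrent requests exceed the pool size, queueing time inflates the measured response time. All numbers and names are made up.

```python
# Sketch: simulating response time versus user load. Requests beyond the
# fixed server capacity wait in a queue, so mean response time climbs as
# load grows, as in the curve described above.

import time
from concurrent.futures import ThreadPoolExecutor

SERVER_THREADS = 4      # fixed server capacity
SERVICE_TIME = 0.01     # seconds of simulated work per request

def handle_request():
    time.sleep(SERVICE_TIME)          # simulated application processing
    return time.perf_counter()        # completion timestamp

def mean_response_time(concurrent_users):
    with ThreadPoolExecutor(max_workers=SERVER_THREADS) as pool:
        start = time.perf_counter()
        futures = [pool.submit(handle_request) for _ in range(concurrent_users)]
        # response time per request = queue wait + service time
        response_times = [f.result() - start for f in futures]
    return sum(response_times) / len(response_times)

for users in (2, 8, 32):
    print(f"{users:2d} concurrent users -> mean response "
          f"{mean_response_time(users) * 1000:.0f} ms")
```

With 2 users the mean stays near the bare service time; with 32 users most of the measured time is spent queueing, which is exactly the knee of the curve.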

The sudden increase in response time is often caused by the maximum utilization of one or more system resources. For example, most Web servers can be configured to start up a fixed number of threads to handle concurrent user requests. If the number of concurrent requests is greater than the number of threads available, incoming requests are placed in a queue and wait their turn to be processed. Any time spent in a queue naturally adds extra wait time to the overall Response Time.

To better understand what Response Time means in a typical Web farm, we can divide response time into several segments and categorize these segments into two major types: network response time and application response time. Network response time refers to the time it takes for data to travel from one server to another. Application response time is the time required for data to be processed within a server. Figure 2 shows the different response times in the entire process of a typical Web request.

Total Response Time = (N1 + N2 + N3 + N4) + (A1 + A2 + A3)

where Nx represents the network Response Times and Ax the application Response Times. In general, the total Response Time is mainly constrained by N1 and N4, which represent the way your clients access the Internet. In the most common scenario, e-commerce clients access the Internet using relatively slow dial-up connections. Once Internet access is achieved, a client's request will spend an indeterminate amount of time in the Internet cloud shown in Figure 2 as requests and responses are funneled from router to router across the Internet. To reduce these network Response Times (N1 and N4), one common solution is to move the servers and/or Web contents closer to the clients.
This can be achieved by hosting your farm of servers with, or replicating your Web contents to, major Internet hosting providers who have redundant high-speed connections to major public and private Internet exchange points, thus reducing the number of network routing hops between the clients and the servers. Network Response Times N2 and N3 usually depend on the performance of the switching equipment in the server farm. When traffic to the back-end database grows, consider upgrading the switches and network adapters to boost performance.

Reducing application Response Times (A1, A2, and A3) is an art form unto itself, because the complexity of server applications can make analyzing performance data and performance tuning quite challenging. Typically, multiple software components interact on the server to service a given request, and response time can be introduced by any of them. That said, there are ways you can approach the problem:
• First, your application design should minimize round trips wherever possible. Multiple round trips (client to server, or application to database) multiply transmission and resource-acquisition response time. Use a single round trip wherever possible.
• You can optimize many server components to improve performance for your configuration. Database tuning is one of the most important areas on which to focus: optimize stored procedures and indexes.
• Look for contention among threads or components competing for common resources. There are several methods you can use to identify contention bottlenecks. Depending on the specific problem, eliminating a resource contention bottleneck may involve restructuring your code, applying service packs, or upgrading components on your server. Not all resource contention problems can be completely eliminated, but you should strive to reduce them wherever possible, because they can become bottlenecks for the entire system.

• Finally, to increase capacity, you may want to upgrade the server hardware (scaling up) if system resources such as CPU or memory are stretched and have become the bottleneck. Using multiple servers as a cluster (scaling out) may help to lessen the load on an individual server, thus improving system performance and reducing application latencies.

Throughput
Throughput refers to the number of client requests processed within a certain unit of time. Typically, the unit of measurement is requests per second or pages per second. From a marketing perspective, throughput may also be measured in terms of visitors per day or page views per day, although smaller time units are more useful for performance testing because applications typically see peak loads several times the average load in a day.

As one of the most useful metrics, the throughput of a Web site is often measured and analyzed at different stages of the design, develop, and deploy cycle. For example, in the process of capacity planning, throughput is one of the key parameters for determining the hardware and system requirements of a Web site. Throughput also plays an important role in identifying performance bottlenecks and improving application and system performance. Whether a Web farm uses a single server or multiple servers, throughput statistics show similar characteristics in reaction to various user load levels. Figure 3 demonstrates the typical characteristics of throughput versus user load.

Figure 3. Typical characteristics of throughput versus user load

As Figure 3 illustrates, the throughput of a typical Web site increases proportionally at the initial stages of increasing load. However, due to limited system resources, throughput cannot be increased indefinitely. It will eventually reach a peak, and the overall performance of the site will start degrading with increased load.
Maximum throughput, illustrated by the peak of the graph in Figure 3, is the maximum number of user requests that can be supported concurrently by the site in the given unit of time.

Note that it is sometimes misleading to compare the throughput metrics for your Web site with the published metrics of other sites. The value of maximum throughput varies from site to site and depends mainly on the complexity of the application. For example, a Web site consisting largely of static HTML pages may be able to serve many more requests per second than a site serving dynamic pages. As with any statistic, throughput metrics can be manipulated by selectively ignoring some of the data. For example, your measurements may include separate data for all the supporting files on a page, such as graphics files, while another site's published measurements might count the overall page as one unit. As a result, throughput values are most useful for comparisons within the same site, using a common measuring methodology and set of metrics.

In many ways, throughput and Response Time are related, as different approaches to thinking about the same problem. In general, sites with high latency will have low throughput, and if you want to improve your throughput, you should analyze the same criteria as you would to reduce latency. Also, measuring throughput without considering latency is misleading, because latency often rises under load before throughput peaks. This means that peak throughput may occur at a latency that is unacceptable from an application usability standpoint. Performance reports should therefore include a cut-off value for Response Time, such as:

250 requests/second @ 5 seconds maximum Response Time
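Applying such a cut-off when reporting peak throughput can be sketched mechanically. A minimal sketch; the load-curve numbers below are invented for illustration, not real measurements.

```python
# Sketch: report peak throughput subject to a maximum-response-time cutoff,
# as in "250 requests/second @ 5 seconds maximum Response Time".

def peak_throughput(samples, max_response_s):
    """samples: list of (requests_per_second, response_time_s) pairs
    measured at rising load levels. Returns the highest throughput whose
    latency still stayed within the cutoff, or 0 if none did."""
    acceptable = [rps for rps, rt in samples if rt <= max_response_s]
    return max(acceptable) if acceptable else 0

# Illustrative load curve: raw peak is 300 req/s, but latency there
# (9.5 s) is unusable; with a 5-second cutoff the reportable peak is 250.
load_curve = [(50, 0.4), (150, 1.1), (250, 4.8), (300, 9.5)]
reportable = peak_throughput(load_curve, max_response_s=5.0)
```

This captures the point made above: the raw throughput peak and the usable throughput peak are different numbers once latency is taken into account.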

Utilization

Utilization refers to the usage level of different system resources, such as the server's CPU(s), memory, network bandwidth, and so forth. It is usually measured as a percentage of the maximum available level of the specific resource. Utilization versus user load for a Web server typically produces a curve as shown in Figure 4.

Figure 4. Typical characteristics of utilization versus user load

As Figure 4 illustrates, utilization usually increases in proportion to increasing user load. However, it tops off and remains at a constant level as the load continues to build up. If the specific system resource tops off at 100-percent utilization, it is very likely that this resource has become the performance bottleneck of the site; upgrading the resource with higher capacity would allow greater throughput and lower latency, and thus better performance. If the measured resource does not top off close to 100-percent utilization, it is probably because one or more of the other system resources have already reached their maximum usage levels: they have become the performance bottleneck of the site. To locate the bottleneck, you may need to go through a long and painstaking process of running performance tests against each of the suspected resources, and then verifying whether performance improves when the capacity of that resource is increased. In many cases, the performance of the site will start deteriorating to an unacceptable level well before the major system resources, such as CPU and memory, are maximized. For example, Figure 5 illustrates a case where response time rises sharply to 45 seconds when CPU utilization has reached only 60 percent.

Figure 5. An example of Response Time versus utilization

As Figure 5 demonstrates, monitoring CPU or memory utilization alone may not always indicate the true capacity level of the server farm with acceptable performance.
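The first pass of that bottleneck hunt, checking which measured resource is pinned near its maximum, can be sketched as follows. The saturation threshold and the sample figures are illustrative assumptions.

```python
# Sketch: flag the likely bottleneck from peak resource-utilization
# percentages gathered during a load test. A resource pinned near 100%
# is the prime suspect; if none is, the text warns the bottleneck may
# lie elsewhere (e.g. thread contention), as in the Figure 5 example.

def likely_bottleneck(utilization, threshold=95.0):
    """utilization: dict mapping resource name -> peak percent used.
    Returns the most saturated resource at or above the threshold,
    or None if nothing is saturated."""
    pinned = {res: pct for res, pct in utilization.items() if pct >= threshold}
    if pinned:
        return max(pinned, key=pinned.get)
    return None  # no resource saturated; look beyond CPU/memory counters

# Illustrative measurement: CPU at only 60% but disk saturated.
measured = {"cpu": 60.0, "memory": 45.0, "disk": 97.0, "network": 30.0}
suspect = likely_bottleneck(measured)
```

A `None` result corresponds to the situation described above, where response time degrades even though no major counter is maximized, and a longer investigation is needed.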
Applications
While most traditional applications are designed to respond to a single user at a time, most Web applications are expected to support a wide range of concurrent users, from a dozen to a couple of thousand or more. As a result, performance testing has become a critical component in the process of deploying a Web application. It has proven to be most useful in (but not limited to) the following areas:
• Capacity planning
• Bug fixing

Capacity Planning
How do you know if your server configuration is sufficient to support two million visitors per day with an average response time of less than five seconds? If your company is projecting business growth of 200 percent over the next two months, how do you know whether you need to upgrade your server or add more servers to the Web farm? Can your server and application support a six-fold traffic increase during the Christmas shopping season?

Capacity planning is about being prepared. You need to set the hardware and software requirements of your application so that you will have sufficient capacity to meet anticipated and unanticipated user load. One approach to capacity planning is to load-test your application in a testing (staging) server farm. By simulating different load levels on the farm using a Web application performance testing tool such as WAS, you can collect and analyze the test results to better understand the performance characteristics of the application. Performance charts such as those shown in Figures 1, 3, and 4 can then be generated to show the expected Response Time, throughput, and utilization at these load levels.

In addition, you may also want to test the scalability of your application with different hardware configurations. For example, load testing your application on servers with one, two, and four CPUs respectively would help to determine how well the

application scales with symmetric multiprocessor (SMP) servers. Likewise, you should load test your application with different numbers of clustered servers to confirm that your application scales well in a cluster environment.

Although performance testing is as important as functional testing, it is often overlooked. Since the requirements for ensuring the performance of the system are not as straightforward as the functionalities of the system, getting it right is more difficult. The effort of performance testing is addressed in two ways:
• Load testing
• Stress testing

Load Testing
Load testing is a widely used industry term for the effort of performance testing. Here, load means the number of users or the traffic for the system. Load testing is defined as testing to determine whether the system is capable of handling the anticipated number of users. In load testing, virtual users are simulated to exhibit real user behavior as closely as possible; even user think time, such as the time users take to think before inputting data, is emulated. Load testing is carried out to verify whether the system performs well for the specified limit of load.

For example, suppose an online shopping application anticipates 1000 concurrent user hits at peak period, and the peak period is expected to last 12 hours. The system is then load tested with 1000 virtual users for 12 hours. These kinds of tests are carried out in levels: first 1 user, then 50 users, 100 users, 250 users, 500 users and so on until the anticipated limit is reached. The testing effort is closed exactly at 1000 concurrent users.

The objective of load testing is to check whether the system performs well for the specified load. The system may be capable of accommodating more than 1000 concurrent users, but validating that is not within the scope of load testing; no attempt is made to determine how many more concurrent users the system is capable of servicing.
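The stepped schedule just described (1, 50, 100, 250, 500, and so on, stopping exactly at the anticipated limit) can be sketched as a small helper. The level values are the ones named in the text; the function itself is an illustrative sketch, not a feature of any particular load tool.

```python
# Sketch of the stepped load-test schedule described above: ramp virtual
# users through fixed levels up to the anticipated limit and no further,
# since going beyond the limit belongs to stress testing, not load testing.

def load_test_levels(anticipated_users):
    """Return the sequence of concurrent-user levels for a load test
    that closes exactly at the anticipated limit."""
    levels = [1, 50, 100, 250, 500]
    return [u for u in levels if u < anticipated_users] + [anticipated_users]

schedule = load_test_levels(1000)   # the 1000-user example from the text
```

For the 1000-user shopping-site example this yields the levels 1, 50, 100, 250, 500, 1000, each of which would then be run against the system for the planned duration.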
Table 1 illustrates this example.

Stress Testing
Stress testing is another industry term for performance testing. Though load testing and stress testing are used synonymously for performance-related efforts, their goals differ. Unlike load testing, where testing is conducted for a specified number of users, stress testing is conducted with numbers of concurrent users beyond the specified limit. The objective is to identify the maximum number of users the system can handle before breaking down or degrading drastically. Since the aim is to put more stress on the system, the user's think time is ignored and the system is exposed to excess load. The goals of load and stress testing are listed in Table 2; refer to Table 3 for the inferences drawn from the two performance testing efforts.

Let us take the same example of the online shopping application to illustrate the objective of stress testing. Stress testing determines the maximum number of concurrent users the online system can service, which may be beyond 1000 users (the specified limit). However, it is possible that the maximum load the system can handle will turn out to be the same as the anticipated limit. Table 1 illustrates this example.

Stress testing also determines the behavior of the system as the user base increases: it checks whether the system will degrade gracefully or crash at a shot when the load goes beyond the specified limit.

Table 1: Load and stress testing of the illustrative example

Type of Testing    Number of Concurrent Users                                   Duration
Load Testing       1 user -> 50 -> 100 -> 250 -> 500 -> ... -> 1000 users       12 hours
Stress Testing     1 user -> 50 -> 100 -> 250 -> 500 -> ... -> 1000 users
                   -> beyond 1000 users -> ... -> maximum users                 12 hours
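The stress-testing row of Table 1 differs from the load-testing row in that load keeps rising past the anticipated limit until the system breaks down or degrades drastically. A minimal sketch of that loop; the capacity model, step size, and response-time cutoff are invented purely for illustration.

```python
# Sketch of the stress-testing loop: keep raising the user count beyond
# the anticipated limit until the system "breaks" (modeled here as the
# response time exceeding a cutoff). The toy capacity model below stands
# in for real measurements against a real system.

def simulated_response_s(users, capacity=1400):
    """Toy model: response time explodes as load approaches capacity."""
    if users >= capacity:
        return float("inf")     # the "crash at a shot" case
    return 1.0 / (1.0 - users / capacity)

def max_supported_users(anticipated, cutoff_s=10.0, step=100):
    """Step past the anticipated limit; report the last level at which
    the system still met the response-time cutoff."""
    users, last_ok = anticipated, None
    while simulated_response_s(users) <= cutoff_s:
        last_ok = users
        users += step
    return last_ok

limit = max_supported_users(1000)   # the 1000-user example from Table 1
```

In this toy model the system comfortably handles the anticipated 1000 users and degrades past an acceptable level somewhere above 1200, which is exactly the kind of answer stress testing is meant to produce.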

Table 2: Goals of load and stress testing

Type of Testing    Goals
Load testing       • Tests for the anticipated user base
                   • Validates whether the system is capable of handling load under the specified limit
Stress testing     • Tests beyond the anticipated user base
                   • Identifies the maximum load the system can handle
                   • Checks whether the system degrades gracefully or crashes at a shot

Table 3: Inferences drawn from load and stress testing

Type of Testing    Inference
Load Testing       Is the system available? If yes, is the available system stable?
Stress Testing     Is the system available? If yes, is the available system stable? If yes, is it moving towards an unstable state? When is the system going to break down or degrade drastically?

Conducting performance testing manually is almost impossible: load and stress tests are carried out with the help of automated tools. Some of the popular tools used to automate performance testing are listed in Table 4.

Table 4: Load and stress testing tools

Tool                           Vendor
LoadRunner                     Mercury Interactive Inc
Astra LoadTest                 Mercury Interactive Inc
Silk Performer                 Segue
WebLoad                        Radview Software
QALoad                         Compuware
e-Load                         Empirix Software
eValid                         Software Research Inc
WebSpray                       CAI Networks
TestManager                    Rational
Web Application Center Test    Microsoft Technologies
OpenLoad                       OpenDemand
ANTS                           Red Gate Software
OpenSTA                        Open source
WAPT                           Novasoft Inc
SiteStress                     Webmaster Solutions
QuatiumPro                     Quatium Technologies

EasyWebLoad                    PrimeMail Inc

Bug Fixing
Some errors may not occur until the application is under high user load; for example, memory leaks can exacerbate server or application problems in sustaining high load. Performance testing helps to detect and fix such problems before the application is launched. It is therefore recommended that developers take an active role in performance testing their applications, especially at the major milestones of the development cycle.

12.3.7 Content Management Testing
'Content Management' has gained predominant importance since Web applications became a major part of our lives. What is content management? As the name denotes, it is managing the content. How does it work? Let us take a common example. You are in China and you want to open the Yahoo! Chinese version. When you choose the Chinese version on the main page of Yahoo!, you see the entire content in Chinese. Yahoo! strategically plans and maintains various servers for various languages; when you choose a particular version of the page, the request is redirected to the server which manages the Chinese content pages. Content management systems help in placing content for various purposes and in displaying it when a request comes in.

Content management testing involves:
1. Testing the distribution of the content.
2. Request and response times.
3. Content display on various browsers and operating systems.
4. Load distribution on the servers.
In fact, all of the performance-related testing should be performed for each version of the web application that uses the content management servers.

12.3.8 Regression Testing
Regression testing, as the name suggests, is used to check the effect of changes made in the code. Most of the time the testing team is asked to check last-minute changes in the code just before a release to the client; in this situation the testing team needs to check only the affected areas.
So, in short, for regression testing the testing team should get input from the development team about the nature and amount of change in the fix, so that the testing team can first check the fix and then the side effects of the fix. In my present organization we too faced the same problem, so we made a regression bucket (a simple Excel sheet containing the test cases that we think assure us of bare-minimum functionality); this bucket is run every time before a release.

In fact, regression testing is the kind of testing in which the most automation can be done, the reason being that the same set of test cases will be run on different builds multiple times. But again, the extent of automation depends on whether the test cases will remain applicable over time; if the automated test cases do not remain applicable, test engineers will end up wasting time on automation without getting enough out of it.

• What is Regression testing?
Regression testing is retesting unchanged segments of the application. It involves rerunning tests that have been previously executed, to ensure that the same results can be achieved currently as were achieved when the segment was last tested. It is the selective retesting of a software system that has been modified, to ensure that any bugs have been fixed and that no other previously working functions have failed

as a result of the repairs, and that newly added features have not created problems with previous versions of the software. Also referred to as verification testing, regression testing is initiated after a programmer has attempted to fix a recognized problem or has added source code to a program that may have inadvertently introduced errors. It is a quality control measure to ensure that the newly modified code still complies with its specified requirements and that unmodified code has not been affected by the maintenance activity.

• What do you do during Regression testing?
o Rerun previously conducted tests
o Review previously prepared manual procedures
o Compare the current test results with the previously executed test results

• What are the tools available for Regression testing?
Although the process is simple (the test cases that have been prepared can be reused and the expected results are known), if it is not automated it can be a very time-consuming and tedious operation. Among the tools available for regression testing are record-and-playback tools, with which previously executed scripts can be rerun to verify whether the same set of results is obtained (e.g. Rational Robot).

• What are the end goals of Regression testing?
o To ensure that the unchanged system segments function properly
o To ensure that the previously prepared manual procedures remain correct after the changes have been made to the application system
o To verify that the data dictionary of data elements that have been changed is correct
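The "regression bucket" idea mentioned earlier can be expressed directly in code: a fixed table of bare-minimum functionality checks rerun before every release. This is a minimal sketch; the tiny application under test and the case list are hypothetical stand-ins, not from the original text.

```python
# Minimal sketch of a "regression bucket": a fixed set of bare-minimum
# functionality checks rerun on every build before release. Any failure
# means the last-minute change had a side effect somewhere.

def add(a, b):
    """Hypothetical stand-in for the application under test."""
    return a + b

REGRESSION_BUCKET = [
    # (description, callable, expected result) -- one row per bucket entry
    ("add two positives", lambda: add(2, 3), 5),
    ("add with zero",     lambda: add(7, 0), 7),
    ("add negatives",     lambda: add(-2, -3), -5),
]

def run_bucket(cases):
    """Run every case; return the descriptions of the cases that failed."""
    return [desc for desc, fn, expected in cases if fn() != expected]

failures = run_bucket(REGRESSION_BUCKET)   # empty when the build passes
```

Because the same bucket is run unchanged against every build, this is exactly the part of the testing effort where automation pays off fastest, as the text notes.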
12.4 Alpha Testing
Alpha testing is done at the software prototype stage, when the software is first available to run. The software has its core functionalities in it, but complete functionality is not aimed at; it is able to accept inputs and give outputs. Usually the most used functionalities (parts of the code) are the more fully developed. The test is conducted at the developer's site only.

In a software development cycle, the number of alpha phases required, depending on the functionalities, is laid down in the project plan itself. During this stage the testing is not a thorough one, since only the prototype of the software is available: basic installation and uninstallation tests and the completed core functionalities are tested. The functionality-complete areas of the alpha stage are taken from the project plan document.

Aim
• To identify any serious errors
• To judge whether the intended functionalities are implemented
• To give the customer a feel of the software

A thorough understanding of the product is obtained at this point. During this phase, the test plan and test cases for the beta phase (the next stage) are created. The errors reported are documented internally for the testers' and developers' reference; issues are usually not reported and recorded in any of the defect management/bug trackers.

Role of test lead
• Understand the system requirements completely.
• Initiate the preparation of the test plan for the beta phase.

Role of the tester
• Provide input while there is still time to make significant changes as the design evolves.
• Report errors to developers.

12.5 User Acceptance Testing
User acceptance testing occurs just before the software is released to the customer. The end-users, along with the developers, perform the user acceptance testing with a certain set of test cases and typical scenarios.

12.6 Installation Testing
Installation testing is often the most under-tested area in testing. This type of testing is performed to ensure that all installed features and options function properly, and to verify that all necessary components of the application are indeed installed. Installation testing should take care of the following points:
1. Check that, while installing, the product checks for dependent software/patches, say Service Pack 3.
2.
The product should check the version of the same product on the target machine; say, the previous version should not be installed over a newer version.
3. The installer should give a default installation path, say "C:\programs\".
4. The installer should allow the user to install at a location other than the default installation path.
5. Check whether the product can be installed "over the network".
6. Installation should start automatically when the CD is inserted.
7. The installer should give Remove/Repair options.
8. When uninstalling, check that all registry keys, files, DLLs, shortcuts, and ActiveX components are removed from the system.
9. Try to install the software without administrative privileges (log in as guest).
10. Try installing on different operating systems. Try installing on a system having a non-compliant configuration, such as insufficient memory/RAM/HDD.

12.7 Beta Testing

Beta testing is conducted at one or more customer sites by the end-user of the software. The beta test is a live application of the software in an environment that cannot be controlled by the developer. The software reaches the beta stage when most of its functionalities are operating. The software is tested in the customer's environment, giving users the opportunity to exercise the software and find the errors, so that they can be fixed before the product release.

Beta testing is detailed testing, and needs to cover all the functionalities of the product as well as dependent-functionality testing. It also involves UI testing and documentation testing; hence it is essential that it is planned well and the task accomplished. The test plan document has to be prepared before the testing phase is started; it clearly lays down the objectives, the scope of the test, the tasks to be performed, and the test matrix, which depicts the schedule of testing.

Beta Testing Objectives
• Evaluate software technical content
• Evaluate software ease of use
• Evaluate the user documentation draft
• Identify errors
• Report errors/findings

Role of a Test Lead
• Provide a Test Instruction Sheet that describes items such as testing objectives, steps to follow, data to enter, and functions to invoke.
• Provide feedback forms and collect comments.

Role of a Tester
• Understand the software requirements and the testing objectives.
• Carry out the test cases.
• Report defects.

13. Understanding Exploratory Testing
"Exploratory testing involves simultaneously learning, planning, running tests, and reporting/troubleshooting results." - Dr. Cem Kaner.
"Exploratory testing is an interactive process of concurrent product exploration, test design and test execution. To the extent that the next test we do is influenced by the result of the last test we did, we are doing exploratory testing." - James Bach.
Exploratory testing is defined as simultaneous test design, test execution, and bug reporting.
In this approach the tester explores the system (finding out what it is, and then testing it) without having any prior test cases or test scripts. For this reason it is also called ad hoc testing, guerrilla testing, or intuitive testing, although there are some differences between these terms. In operational terms, exploratory testing is an interactive process of concurrent product exploration, test design, and test execution. The outcome of an exploratory testing session is a set of notes about the product, the failures found, and a concise record of how the product was tested. When practiced by trained testers, it yields consistently valuable and auditable results. Every tester performs this type of testing at one point or another. This testing depends entirely on the skill and creativity of the tester: different testers can explore the system in different ways, depending on their skills. Thus the tester has a very vital role to play in exploratory testing.

This approach to testing has also been advised by SWEBOK, since it might uncover bugs which normal testing might not discover. A systematic approach to exploratory testing can also be used, in which there is a plan for attacking the system under test; this systematic way of exploring the system is termed formalized exploratory testing.

Exploratory testing is a powerful approach in the field of testing, yet it has not received the recognition it deserves and is often misunderstood. In many situations it can be more productive than scripted testing, and the fact is that all testers practice this methodology at some time or other, most often unknowingly!

Exploratory testing relies on concurrent phases of product exploration, test design and test execution. It is categorized under black-box testing. It is basically a free-style testing approach where you do not begin with the usual procedures of elaborate test plans and test steps; the test plan and strategy are very much in the tester's mind. The tester asks the right questions of the product or application and judges the outcome, and during this phase he is actually learning the product as he tests it. It is interactive and creative, and a conscious plan by the tester gives good results. Human beings are unique and think differently, with new sets of ideas emerging. A tester has the basic skills to listen, read, think and report; exploratory testing simply tries to exploit this and give it structure. The richness of this process is limited only by the breadth and depth of our imagination and our insight into the product under test.

How does it differ from normal test procedures? The definition of exploratory testing conveys the difference: in the normal testing style, the test process is planned well in advance before the actual testing begins, and the test design is separated from the test execution phase.
Often the test design and test execution are even entrusted to different persons.

Exploratory testing should not be confused with the dictionary meaning of "ad hoc". Ad hoc testing normally refers to a process of improvised, impromptu bug searching; by definition, anyone can do ad hoc testing. The term "exploratory testing" (coined by Dr. Cem Kaner in Testing Computer Software) refers to a sophisticated, systematic, thoughtful approach to ad hoc testing.

What is formalized exploratory testing?
A structured and reasoned approach to exploratory testing is termed formalized exploratory testing. This approach consists of specific tasks, objectives, and deliverables that make it a systematic process. Using the systematic (i.e. formalized) approach, an outline is produced of what to attack first, its scope, the time to be spent on it, and so on. The approach might use anything from simple notes, to more descriptive charters, to some vague scripts. With a systematic approach the testing can be more organized and focused on the goal to be reached, which solves the problem whereby pure exploratory testing might drift away from the goal. When exploration is applied to test planning in this way, the result is exploratory planning.

The formalized approach used for exploratory testing can vary depending on criteria such as the resources and time available and the knowledge of the application at hand. Depending on these criteria, the approach used to attack the system will also vary: it may involve anything from creating outlines on a notepad to more sophisticated ways using charters. Some of the formal approaches used for exploratory testing can be summarized as follows.

• Identify the application domain. Exploratory testing can be performed by identifying the application domain. If the tester has good knowledge of the domain, it is easier to test the system without having any test cases: being well aware of the domain helps him analyze the system faster and better. His knowledge helps in identifying the various workflows that usually exist in that domain. He is also able to decide what the different scenarios are and which are most critical for that system, and hence can focus his testing on the scenarios that require it. If a QA lead is trying to assign a tester to a task, it is advisable that he identifies a person who has domain knowledge of that system for ET. For example, consider software built to generate invoices for customers depending on the number of units of power consumed. In such a case exploratory testing can be done by identifying the domain of the application. A tester who has experience of billing systems in the energy domain fits better than one who does not have any such knowledge. The tester who knows the application domain knows the terminology used, the scenarios that are critical to the system, and the ways in which the various computations are done. Such a tester is familiar with terms like line item, billing rate and billing cycle, and with the way the invoice computation is done; he explores the system thoroughly and takes less time. A tester without the required domain knowledge needs time to understand the various workflows and the terminology used, and might focus on other areas rather than the critical ones.

• Identify the purpose. Another approach to exploratory testing is to identify the purpose of the system, i.e. what the system is used for.
Having identified the purpose, try to analyze to what extent the system is used; the effort can then be more focused. For example, consider software developed for use in medical operations. In such a case, care should be taken that the software build is 100% defect free, so the effort needs to be strongly focused and all the workflows involved must be covered. On the other hand, if the software build is meant to provide some entertainment, the criticality is lower. Thus the effort that needs to be focused varies, and identifying the purpose of the system or application under test helps to a great extent.

• Identify the primary and secondary functions. Primary function: any function so important that, in the estimation of a normal user, its inoperability or impairment would render the product unfit for its purpose. A function is primary if you can associate it with the purpose of the product and it is essential to that purpose. Primary functions define the product. For example, the function of adding text to a document in Microsoft Word is certainly so important that the product would be useless without it. Groups of functions, taken together, may constitute a primary function too. For example, while perhaps no single function on the drawing toolbar of Word would be considered primary, the entire toolbar might be primary. If so, then most of the functions on that toolbar should be operable in order for the product to pass certification. Secondary function (or contributing function): any function that contributes to the utility of the product but is not a primary function. Even though contributing functions are not primary, their inoperability may be grounds for refusing to grant certification. For example, users may be technically able to do useful things with a product even if it has an "Undo" function that never works, but most users will find that intolerable.

Such a failure would violate fundamental expectations about how Windows products should work. Thus, by identifying the primary and secondary functions of the system, testing can be done with more focus and effort given to the primary functions than to the secondary ones. Example: consider a web-based application developed for online shopping. For such an application we can identify the primary and secondary functions and go ahead with exploratory testing. The main functionality of the application is that the items selected by the user are properly added to the shopping cart and the price to be paid is correctly calculated; if there is online payment, then security is also an aspect. These can be considered the primary functions, whereas the bulletin board or the mail facility provided are considered secondary functions. The testing performed is thus aimed more at the primary functions than at the secondary functions: if the primary functions do not work as required, the main intention of having the application is lost.

• Identify the workflows. Identifying the workflows for testing a system without any scripted test cases can be considered one of the best approaches. The workflows are nothing but a visual representation of the scenarios, i.e. of how the system behaves for any given input. They can be simple flow charts, Data Flow Diagrams (DFDs), or something like state diagrams, use cases or models. The workflows also help to identify the scope of a scenario, and they help the tester keep track of the scenarios for testing. It is suggested that the tester navigates through the application before he starts exploring: it helps him identify the various possible workflows, and any issues found that he is not comfortable with can be discussed with the concerned team. Example: consider a web application used for online shopping.
The application has various links on the web page. If the tester is trying to test whether the items he adds to the cart are properly added, he should know the flow for that scenario. He should first identify its workflow: he needs to log in, then select a category, identify the items, and add the item he requires. Not knowing the workflow for such a scenario does not help the tester, and he loses time in the process. If he is not aware of the system, he should navigate through the application once and get comfortable; once the application is duly understood, it is easier to test and to find more bugs.

• Identify the break points. Break points are the situations where the system starts behaving abnormally and does not give the output it is supposed to give, so testing can also be done by identifying such situations. Use boundary values or invariants to find the break points of the application. In most cases the system works for normal inputs and outputs, so try to give input that represents the ideal situation or the worst situation. Example: consider an application built to generate reports for the accounts department of a company, depending on the criteria given. In such a case, try to select a worst case of report generation, for all the employees over their entire service; the system might not behave normally in that situation. Try to input a large file to an application that lets the user upload and save data.
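Boundary probes of this kind are easy to sketch as quick automated checks. The following is a minimal illustration, assuming a hypothetical input validator with an invented 100-character limit (neither the function nor the limit comes from the text above):

```python
# Hypothetical validator standing in for an application's input field.
# The 100-character limit is an invented example, not from the text.
def accept_name(text: str) -> bool:
    return 0 < len(text) <= 100

# Probe around the boundary: empty, just inside, on the limit, just beyond,
# and far beyond - the places where break points typically hide.
probes = {
    "empty": "",
    "one_char": "a",
    "at_limit": "a" * 100,
    "over_limit": "a" * 101,
    "huge": "a" * 100_000,
}
results = {name: accept_name(p) for name, p in probes.items()}
print(results)
```

If the application rejects `at_limit` or accepts `over_limit`, a break point has been found at the boundary.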

Try to input 500 characters into a text box of the web application. Trying to identify the extreme conditions, or break points, helps the tester uncover hidden bugs. Such cases might not be covered in normal scripted testing, so this helps in finding bugs that normal testing misses.

• Check the UI against Windows interface standards and the like. Exploratory testing can be performed by checking against user interface standards. There are set standards laid down for the user interfaces to be developed; these standards are nothing but the look-and-feel aspects of the interfaces the user interacts with. The user should be comfortable with every screen he or she works on, and these aspects help the end user accept the system faster. Example: for a web application,
o Is the background as per the standards? If a bright background is used, the user might not feel comfortable.
o What is the size of the font used?
o Are the buttons of the required size, and are they placed in a comfortable location?
o Sometimes applications are developed to avoid use of the scroll bar, so that the content can be seen without the need to scroll.
By identifying the user standards, define an approach to test, because the application developed should be user friendly. The user should feel comfortable while using the system; the more familiar and easier the application is to use, the faster the user becomes comfortable with it.

• Identify expected results. The tester should know what he is testing for and the expected output for a given input. Unless the aim of his testing is known, the testing done is of no use, because the tester may not succeed in distinguishing a real error from the normal workflow. First he needs to analyze what the expected output is for the scenario he is testing.
Example: consider software that provides the user with an interface to search for an employee name in the organization, given inputs such as the first name, last name or id. For such a scenario, the tester should identify the expected output for any combination of input values. If the input provided does not return any data and the message "Error: no data found" is shown, the tester should not misinterpret this as an error, because this might be as per the requirement when no data is found. If instead, for a given input, the message shown is "404 - File not found", the tester should identify it as an error, not a requirement. He should thus be able to distinguish between an error and the normal workflow.

• Identify the interfaces with other components and external applications. In this age of component development and maximum reusability, developers try to pick up already developed components and integrate them, achieving the desired result in a short time. It helps if the tester explores the areas where the components are coupled: the output of one component should be correctly passed to the other. Such scenarios or workflows need to be identified and explored more, with extra focus on the areas that are more error prone. Example: consider the online shopping application. The user adds items to his cart and proceeds to the payment details page. Here the items added, their quantity, etc. should be properly sent to the next module. If there is any error in the data transfer, the payment details will not be correct and the user will be billed wrongly, leading to a major error. In such a scenario, more focus is required on the interfaces. There may also be external interfaces, for example when the application is integrated with another application for data. In such cases, focus should be on the interface between the two applications: how data is passed, whether correct data is passed and, when the data is large, whether the entire transfer completes or the system behaves abnormally, are a few points which should be addressed.

• Record failures. In exploratory testing, we test without any documented test cases. If a bug has been found, it is very difficult to re-test it after a fix, because there are no documented steps to navigate to that particular scenario. Hence we need to keep track of the flow required to reach the place where a bug was found, so while testing it is important that at least the bugs discovered are documented. By recording failures we keep track of the work that has been done. This helps even if the tester who was actually doing ET is unavailable, since the document can be referred to, and all the bugs reported, as well as the flows for reproducing them, can be identified. Example: consider the online shopping site again. A bug has been found while trying to add items of a given category to the cart. If the tester just documents the flow as well as the error that occurred, it helps him or any other tester: it can be referred to when testing the application after a fix.

• Document issues and questions. The tester trying to test an application using the exploratory testing methodology should feel comfortable testing it.
Hence it is advisable that the tester navigates through the application once and notes any ambiguities or queries he might have; he can then get clarification on the workflows he is not comfortable with. Documenting all the issues and questions found while scanning or navigating the application helps the tester test without any loss of time.

• Decompose the main task into smaller tasks, and the smaller ones into still smaller activities. It is always easier to work with smaller tasks than with large ones. This is very useful in exploratory testing, because the lack of test cases might lead us down different routes. With a smaller task, the scope and the boundary are confined, which helps the tester focus his testing and plan accordingly. If a big task is taken up for testing, we might, as we explore the system, get deviated from our main goal or task, and it might be hard to define boundaries if the application is a new one. With smaller tasks, the goal is known and hence the focus and effort required can be properly planned. Example: an application that provides an email facility, where new users can register and then use the application for email. In such a scenario, the main task itself can be divided into smaller tasks: one task to check whether the UI standards are met and the application is user friendly, and another to test whether new users are able to register and use the email facility. The two tasks are thus smaller, which helps the corresponding groups focus their testing.

• Charter: states the goal and the tactics to be used.

Charter summary:
o "Architecting the charters", i.e. test planning
o Brief information / guidelines on:
o Mission: why do we test this?
o What should be tested?
o How to test (approach)?
o What problems to look for?
o Might include guidelines on:
o Tools to use
o Specific test techniques or tactics to use
o What risks are involved
o Documents to examine
o Desired output from the testing
A charter can range from a simple one to a more descriptive one giving the strategies and outlines for the testing process. Example: "Test the application for report generation." Or: "Test whether the application generates the report for dates before 01/01/2000. Use the use case models for identifying the workflows."

• Session Based Test Management (SBTM): Session Based Test Management is a formalized approach that uses the concepts of charters and sessions for performing ET. A session is not a test case or bug report; it is the reviewable product produced by a chartered and uninterrupted test effort. A session can last from 60 to 90 minutes, but there is no hard and fast rule on the time spent testing: if a session lasts closer to 45 minutes, we call it a short session; if it lasts closer to two hours, we call it a long session. Each session is designed depending on the tester and the charter. After a session is completed, it is debriefed. The primary objective of the debriefing is to understand and accept the session report; another objective is to provide feedback and coaching to the tester. The debriefings help the manager plan future sessions and estimate the time required for testing similar functionality. The debriefing is based on an agenda called PROOF:
Past: what happened during the session?
Results: what was achieved during the session?
Outlook: what still needs to be done?
Obstacles: what got in the way of good testing?
Feelings: how does the tester feel about all this?
The time spent "on charter" and "on opportunity" is also noted.
Opportunity testing is any testing that doesn't fit the charter of the session. The tester is not restricted to his charter, and is allowed to deviate from the goal specified if there is any scope for finding an error.
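A charter of the kind summarized above can be written down as a small structured record so it stays reviewable. A minimal sketch follows; every field value is an invented example, not taken from the text:

```python
# A minimal, hypothetical representation of an exploratory-testing
# charter, following the charter summary above (mission, areas,
# approach, problems to look for, tools, risks).
charter = {
    "mission": "Check report generation for dates before 01/01/2000",
    "areas": ["report generation", "date handling"],
    "approach": "use the use case models to identify workflows",
    "problems_to_look_for": ["wrong totals", "crashes on old dates"],
    "tools": ["screen recorder"],
    "risks": ["date arithmetic around the century boundary"],
}

def describe(c: dict) -> str:
    """Render the charter as a one-line session heading."""
    return f"CHARTER: {c['mission']} (areas: {', '.join(c['areas'])})"

print(describe(charter))
```

Keeping the charter this small is deliberate: it states the goal and tactics without turning into a scripted test plan.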

A session can be broadly classified into three tasks (the TBS metrics):
Session setup: time spent setting up the application under test.
Test design and execution: time spent scanning the product and testing.
Bug investigation and reporting: time spent finding bugs and reporting them to the people concerned.
The entire session report consists of these sections:
• Session charter (includes a mission statement and areas to be tested)
• Tester name(s)
• Date and time started
• Task breakdown (the TBS metrics)
• Data files
• Test notes
• Issues
• Bugs
For each session, a session sheet is made. The session sheet consists of the mission of the testing, the tester details, the duration of the testing, and the TBS metrics, along with the data related to the testing such as bugs, notes and issues; any data files used in the testing are also enclosed. The data collected during the different testing sessions is gathered and exported to Excel or some database. All the sessions, the bugs reported and so on can be tracked using the unique id associated with each, which makes it easy for the client to keep track as well. This concept of testers testing in sessions and producing required, trackable output is called Session Based Test Management.

• Defect Driven Exploratory Testing: Defect driven exploratory testing is another formalized approach used for ET. Defect Driven Exploratory Testing (DDET) is a goal-oriented approach focused on the critical areas identified by a defect analysis study based on procedural testing results. In procedural testing, the tester executes readily available test cases, which are written based on the requirement specifications. Although the test cases were executed completely, defects were still found in the software while doing exploratory testing by just wandering through the product blindly.
Just exploring the product without sight was akin to groping in the dark, and did not help the testers unearth all the hidden bugs in the software, as they were not very sure about the areas that needed to be explored. A reliable basis was needed for exploring the software. Defect driven exploratory testing is thus the idea of exploring a part of the product based on the results obtained during procedural testing. On analyzing the defects found during the DDET process, it was found that these were the most critical bugs, camouflaged in the software, which if present could have made the software 'not fit for use'.
There are some prerequisites for DDET:
o In-depth knowledge of the product.
o Procedural testing has to have been carried out.
o Defect analysis based on the scripted tests.
Advantages of DDET:
o The tester has clear clues about the areas to be explored.
o A goal-oriented approach, hence better results.
o No wastage of time.
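The SBTM session sheet and its TBS breakdown described earlier can be modeled as a small record for later aggregation (e.g. export to Excel or a database). This is a hedged sketch only; the field names and numbers are invented for illustration:

```python
from dataclasses import dataclass, field

@dataclass
class SessionSheet:
    """One SBTM session report: the charter, the tester, and the TBS
    metrics (setup / test design & execution / bug investigation),
    recorded in minutes, plus bugs and issues found."""
    charter: str
    tester: str
    setup_min: int
    test_min: int
    bug_min: int
    bugs: list = field(default_factory=list)
    issues: list = field(default_factory=list)

    def tbs_percent(self):
        """Return the TBS split as percentages of total session time."""
        total = self.setup_min + self.test_min + self.bug_min
        return tuple(round(100 * m / total) for m in
                     (self.setup_min, self.test_min, self.bug_min))

s = SessionSheet("Explore cart totals", "tester-1",
                 setup_min=10, test_min=70, bug_min=20,
                 bugs=["wrong total with zero-priced item"])
print(s.tbs_percent())  # (10, 70, 20)
```

Records like this, one per session, are what make the sessions and their bugs trackable across the test cycle.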

Where does exploratory testing fit? In general, ET is called for in any situation where it's not obvious what the next test should be, or when you want to go beyond the obvious tests. More specifically, freestyle exploratory testing fits in any of the following situations:
• You need to provide rapid feedback on a new product or feature.
• You need to learn the product quickly.
• You have already tested using scripts, and seek to diversify the testing.
• You want to find the single most important bug in the shortest time.
• You want to check the work of another tester by doing a brief independent investigation.
• You want to investigate and isolate a particular defect.
• You want to investigate the status of a particular risk, in order to evaluate the need for scripted tests in that area.
Pros and cons:
Pros
• Does not require extensive documentation.
• Responsive to changing scenarios.
• Under tight schedules, testing can be more focused depending on the bug rate or risks.
• Improved coverage.
Cons
• Dependent on the tester's skills.
• Test tracking is not concrete.
• More prone to human error.
• No contingency plan if the tester is unavailable.
What specifics affect exploratory testing? Here is a list of factors that affect exploratory testing:
• The mission of the particular test session
• The tester's skills, talents and preferences
• Available time and other resources
• The status of the other testing cycles for the product
• How much the tester knows about the product
Mission: The goal of the testing needs to be understood before the work begins. This could be the overall mission of the test project, or a particular functionality or scenario. The mission is achieved by asking the right questions about the product, designing tests to answer these questions, and executing tests to get the answers. Often the tests do not answer them completely; in such cases we need to explore.
The test procedure is recorded (it could later form part of the scripted testing), and the result status too.
Tester: The tester needs to have a general plan in mind, though it may not be very constrained. He needs the ability to design a good test strategy, execute good tests, find important problems and report them. He simply has to think outside the box.
Time: The time available for testing is a critical factor. Time falls short for the following reasons:
o Many a time in project life cycles, the time and resources required for creating the test strategy, plan and design, and for execution and reporting, are overlooked. Exploratory testing becomes useful here since test planning, design and execution happen together.
o Testing is needed at short notice.
o A new feature has been implemented.

o Change requests come in at a much later stage of the cycle, when much of the testing is already done.
In such situations exploratory testing comes in handy.
Practicing exploratory testing: A basic strategy of exploratory testing is to have a general plan of attack, but to allow yourself to deviate from it for short periods of time. In a session of exploratory testing, a set of test ideas, written notes (in simple English, or scripts) and bug reports are the results. These can be reviewed by the test lead or test manager.
• Test strategy: It is important to identify the scope of the tests to be carried out. This depends on the project's approach to testing. The test manager or test lead can decide the scope and convey it to the test team.
• Test design and execution: The tester crafts the tests by systematically exploring the product. He defines his approach, analyzes the product, and evaluates the risk.
• Documentation: The written notes and scripts of the tester are reviewed by the test lead or manager. These later turn into new test cases or updated test materials.
Where does exploratory testing fit? Exploratory testing fits into almost any kind of testing project, whether a project with rigorous test plans and procedures or one where testing is not dictated completely in advance. The situations where exploratory testing fits are:
• The need to provide rapid feedback on a new feature implementation or product
• Little product knowledge, and the need to learn the product quickly
• Product analysis and test planning
• Scripted testing is done, and there is a need to diversify more
• Improving the quality of existing test scripts
• Writing new scripts
The basic rule is this: exploratory testing is called for any time the next test you should perform is not obvious, or when you want to go beyond the obvious.
A good exploratory tester: The exploratory testing approach relies a lot on the tester himself.
The tester actively controls the design of the tests as they are performed, and uses the information gained to design new and better ones. A good exploratory tester should:
• Have the ability to design good tests, execute them and find important problems.
• Document his ideas and use them in later cycles.
• Be able to explain his work.
• Be a careful observer: exploratory testers are more careful observers than novices and than experienced scripted testers. Scripted testers need only observe what the script tells them to; the exploratory tester must watch for anything unusual or mysterious.
• Be a critical thinker: able to review and explain his logic, looking out for errors in his own thinking.
• Have diverse ideas, so as to make new test cases and improve existing ones.
A good exploratory tester always asks himself: what is the best test I can perform now? He remains alert for new opportunities.
Advantages: Exploratory testing is advantageous when

• Rapid testing is essential
• Test case development time is not available
• High-risk areas need to be covered with more inputs
• Software needs to be tested with little knowledge of the specifications
• New test cases need to be developed, or existing ones improved
• The monotony of normal step-by-step test execution needs to be broken
Drawbacks:
• A skilled tester is required
• It is difficult to quantify
Balancing exploratory testing with scripted testing: Exploratory testing relies on the tester and the approach he proceeds with. Pure scripted testing doesn't undergo much change with time, and hence its power fades away. In test scenarios where repeatability of tests is required, automated scripts have an edge over the exploratory approach. It is therefore important to achieve a balance between the two approaches and to combine them to get the best of both.

14. Understanding Scenario Based Testing

Scenario Based Tests (SBT) are best suited when the tests need to concentrate on the functionality of the application more than on anything else. Let us take an example where you are testing an application which is quite old (a legacy application), and it is a banking application. This application has been built based on the requirements of the organization for various banking purposes. Now, this application will have continuous upgrades (technology-wise and business-wise). What do you do to test the application? Let us assume that the application is undergoing only functional changes, not UI changes. The test cases should be updated for every release, and over a period of time maintaining the testware becomes a major setback. Scenario Based Tests would help you here. As per the requirements, the base functionality is stable and there are no UI changes; there are only changes to the business functionality. Given these requirements and this situation, we clearly understand that only regression tests need to be run continuously as part of the testing phase.
Over a period of time, the individual test cases become difficult to manage. This is the situation where we use scenarios for testing. How do you derive scenarios? We can use the following as the basis:
1. From the requirements, list out all the functionalities of the application.
2. Using a graph notation, draw depictions of the various transactions which pass through the various functionalities of the application.
3. Convert these depictions into scenarios.
4. Run the scenarios when performing the testing.
Will you use Scenario Based Tests only for legacy application testing? No. Scenario Based Tests are not only for legacy application testing, but for any application which requires you to concentrate more on the functional requirements. If you can plan out a sound test strategy, Scenario Based Tests can be used for any application and any requirements. Scenario Based Tests are a good choice, in combination with various test types and techniques, when you are testing projects which adopt UML (Unified Modeling Language) based development strategies: you can derive scenarios from the use cases, and use cases provide good coverage of the requirements and functionality.

15. Understanding Agile Testing

The concept of Agile testing rests on the values of the Agile Alliance, which state that:

"We have come to value:
Individuals and interactions over processes and tools
Working software over comprehensive documentation
Customer collaboration over contract negotiation
Responding to change over following a plan
That is, while there is value in the items on the right, we value the items on the left more." - http://www.agilemanifesto.org/
What is Agile testing?
1) Agile testers treat the developers as their customer and follow the agile manifesto. The context-driven testing principles (explained below) act as a set of principles for the agile tester.
2) Or it can be treated as the testing methodology followed by the testing team when the entire project follows agile methodologies. (If so, what is the role of a tester in such a fast-paced methodology?)
Traditional QA seems to be totally at loggerheads with the Agile manifesto in the following regards:
• Process and tools are a key part of QA and testing.
• QA people seem to love documentation.
• QA people want to see the written specification.
• And where is testing without a PLAN?
So the question arises: is there a role for QA in Agile projects? The answer is maybe, but the roles and tasks are different.
In the first definition of Agile testing we described it as following the context-driven principles. The context-driven principles, which act as guidelines for the agile tester, are:
1. The value of any practice depends on its context.
2. There are good practices in context, but there are no best practices.
3. People, working together, are the most important part of any project's context.
4. Projects unfold over time in ways that are often not predictable.
5. The product is a solution. If the problem isn't solved, the product doesn't work.
6. Good software testing is a challenging intellectual process.
7. Only through judgment and skill, exercised cooperatively throughout the entire project, are we able to do the right things at the right times to effectively test our products.
http://www.context-driven-testing.com/
In the second definition we described Agile testing as the testing methodology adopted when an entire project follows an Agile (development) methodology. Let us have a look at the Agile development methodologies currently being practiced:
Agile Development Methodologies

• Extreme Programming (XP)
• Crystal
• Adaptive Software Development (ASD)
• Scrum
• Feature Driven Development (FDD)
• Dynamic Systems Development Method (DSDM)
• Xbreed
In a fast-paced environment such as Agile development, the question then arises: what is the "role" of testing? Testing is as relevant in an Agile scenario as in a traditional software development scenario, if not more so. Testing is the headlight of the agile project, showing where the project stands now and the direction in which it is headed. Testing provides the required and relevant information for the teams to take informed and precise decisions. Testers in agile frameworks get involved in much more than finding "software bugs": anything that can "bug" the potential user is an issue for them. But testers don't make the final call; it is the entire team that discusses a potential issue and takes a decision on it. A firm belief of Agile practitioners is that no testing approach by itself assures quality: it is the team that does (or doesn't) deliver it, so there is a heavy emphasis on the skill and attitude of the people involved. Agile testing is not a game of "gotcha"; it is about finding ways to set goals rather than focusing on mistakes.
Among the Agile methodologies mentioned, we shall look at XP (Extreme Programming) in detail, as it is the most commonly used and popular one. The basic components of the XP practices are:
• Test-First Programming
• Pair Programming
• Short Iterations & Releases
• Refactoring
• User Stories
• Acceptance Testing
We shall discuss these factors in detail.

Test-First Programming
• Developers write unit tests before coding. It has been noted that this approach motivates and speeds the coding, and results in better designs (with less coupling and more cohesion).
• It supports a practice called Refactoring (discussed later on).
• Agile practitioners prefer Tests (code) to Text (written documents) for describing system behavior. Tests are more precise than human language, and they are also far more likely to be updated when the design changes. How many times have you seen design documents that no longer accurately described the current workings of the software? Out-of-date design documents look pretty much like up-to-date documents. Out-of-date tests fail.

• Many open source tools like xUnit have been developed to support this methodology.

Refactoring
• Refactoring is the practice of changing a software system in such a way that it does not alter the external behavior of the code yet improves its internal structure.
• Traditional development tries to understand how all the code will work together in advance. This is the design. With agile methods, this difficult process of imagining what code might look like before it is written is avoided. Instead, the code is restructured as needed to maintain a coherent design. Frequent refactoring allows less up-front planning of design.
• Agile methods replace high-level design with frequent redesign (refactoring). But successful refactoring also requires a way of checking that the behavior wasn't inadvertently changed. That's where the tests come in.
• Make the simplest design that will work, add complexity only when needed, and refactor as necessary.
• Refactoring requires unit tests to ensure that design changes (refactorings) don't break existing code.

Acceptance Testing
• Make up user experiences or User Stories, which are short descriptions of the features to be coded.
• Acceptance tests verify the completion of user stories.
• Ideally they are written before coding.

With all these features and processes included, we can define a practice for Agile testing encompassing the following:
• Conversational Test Creation
• Coaching Tests
• Providing Test Interfaces
• Exploratory Learning

Looking deeper into each of these practices, we can describe them as follows.

Conversational Test Creation
• Test case writing should be a collaborative activity including the majority of the team. As the customers will be busy, we should have someone representing the customer.
• Defining tests is a key activity that should include programmers and customer representatives.
• Don't do it alone.

Coaching Tests
• A way of thinking about Acceptance Tests.
• Turn user stories into tests.
• Tests should provide goals and guidance, instant feedback and progress measurement.
• Tests should be specified in a format that is clear enough for users/customers to understand and specific enough to be executable.
• Specification should be done by example.

Providing Test Interfaces
• Developers are responsible for providing the fixtures that automate coaching tests.

• In most cases XP teams add test interfaces to their products, rather than using external test tools.

(Figure: Test Interaction Model)

Exploratory Learning
• Plan to explore, learn and understand the product with each iteration.
• Look for bugs, missing features and opportunities for improvement.
• We don't understand software until we have used it.

We believe that Agile Testing is a major step forward. You may disagree. But regardless, Agile programming is the wave of the future. These practices will develop and some of the extreme edges may be worn off, but it is only growing in influence and attraction. Some testers may not like it, but those who don't figure out how to live with it are simply going to be left behind. Some testers are still upset that they don't have the authority to block the release. Do they think that they now have the authority to block the adoption of these new development methods? They'll need to get on this ship if they want to try to keep it from the shoals. Stay on the dock if you wish. Bon Voyage!

16. API Testing

Application Programming Interfaces (APIs) are collections of software functions or procedures that can be used by other applications to fulfill their functionality. APIs provide an interface to the software component. They form critical elements for developing applications and are used in varied applications, from graph-drawing packages to speech engines, web-based airline reservation systems, and computer security components. Each API is supposed to behave the way it is coded, i.e. it is functionality-specific. An API may offer different results for different types of input provided, and the errors or exceptions returned may also vary. However, once integrated within a product, the common functionality covers only a very minimal code path of the API, and functionality/integration testing may cover only those paths. By considering each API as a black box, a generalized approach to testing can be applied. But there may be some paths which are not tested, and these lead to bugs in the application.
Applications can be viewed and treated as APIs from a testing perspective. Some distinctive attributes make testing of APIs slightly different from testing other common software interfaces, such as GUI testing:
• Testing APIs requires a thorough knowledge of their inner workings - Some APIs may interact with the OS kernel, other APIs, or other software to offer their functionality. An understanding of the inner workings of the interface helps in analyzing the call sequences and detecting the failures they cause.
• Adequate programming skills - API tests are generally in the form of sequences of calls, namely, programs. Each tester must possess expertise in the programming language(s) targeted by the API. This helps the tester review and scrutinize the interface under test when the source code is available.

• Lack of domain knowledge - Since the testers may not be well trained in using the API, a lot of time might be spent exploring the interfaces and their usage. This problem can be solved to an extent by involving the testers from the initial stage of development, which helps them gain some understanding of the interface and avoid exploring while testing.
• No documentation - Experience has shown that it is hard to create precise and readable documentation, and the APIs developed will hardly have any proper documentation available. Without documentation, it is difficult for the test designer to understand the purpose of calls, the parameter types and their possible valid/invalid values, their return values, the calls they make to other functions, and usage scenarios. Proper documentation helps the test designer design tests faster.
• Access to source code - The availability of the source code helps the tester understand and analyze the implementation mechanism used, and identify the loops or vulnerabilities that may cause errors. If the source code is not available, the tester does not have a chance to find anomalies that may exist in the code.
• Time constraints - Thorough testing of APIs is time consuming, and requires a learning overhead and resources to develop tools and design tests. Keeping up with deadlines and ship dates may become a nightmare.

Testing of API calls can be done in isolation or in sequence, to vary the order in which the functionality is exercised and to make the API produce useful results from these tests. Designing tests is essentially designing sequences of API calls that have the potential of satisfying the test objectives. This in turn boils down to designing each call with specific parameters and building a mechanism for handling and evaluating return values. Thus the design of the test cases can depend on some general questions like:
• Which value should a parameter take?
• What values together make sense?
• What combination of parameters will make the API work in a desired manner?
• What combination will cause a failure, a bad return value, or an anomaly in the operating environment?
• Which sequences are the best candidates for selection? etc.

Some interesting problems for testers are:
1. Ensuring that the test harness varies parameters of the API calls in ways that verify functionality and expose failures. This includes assigning common parameter values as well as exploring boundary conditions.
2. Generating interesting parameter value combinations for calls with two or more parameters.
3. Determining the context under which an API call is made. This might include setting external environment conditions (files, peripheral devices, and so forth) and also internal stored data that affect the API.
4. Sequencing API calls to vary the order in which the functionality is exercised and to make the API produce useful results from successive calls.

By analyzing the problems listed above, a strategy can be formulated for testing the API. The API to be tested requires some environment for it to work, hence all the conditions and prerequisites must be understood by the tester. The next step is to identify and study its points of entry. GUIs have items like menus, buttons, check boxes, and combo lists that trigger the event or action

to be taken. Similarly, for APIs, the input parameters and the events that trigger the API act as the points of entry. Subsequently, a chief task is to analyze the points of entry as well as the significant output items. The input parameters should be tested with valid and invalid values using strategies like boundary value analysis and equivalence partitioning. The fourth step is to understand the purpose of the routines and the contexts in which they are to be used. Once all the parameter selections and combinations are designed, different call sequences need to be explored.

The steps can be summarized as follows:
1. Identify the initial conditions required for testing.
2. Identify the parameters - choosing the values of individual parameters.
3. Identify the combinations of parameters - pick out the possible and applicable parameter combinations with multiple parameters.
4. Identify the order in which to make the calls - deciding the order in which to make the calls to force the API to exhibit its functionality.
5. Observe the output.

1. Identify the initial conditions:
The testing of an API depends largely on the environment in which it is to be tested. Hence initial conditions play a very vital role in understanding and verifying the behavior of the API under test. The initial conditions for testing APIs can be classified as:
• Mandatory pre-setters.
• Behavioral pre-setters.

Mandatory pre-setters
The execution of an API requires some minimal state or environment. These types of initial conditions are classified as the mandatory initialization (mandatory pre-setters) for the API. For example, a non-static member function API requires an object to be created before it can be called. This is an essential activity required for invoking the API.

Behavioral pre-setters
To test the specific behavior of the API, some additional environmental state is required. These types of initial conditions form the behavioral pre-setters category of initial conditions.
These are optional conditions required by the API and need to be set before invoking the API under test, thus influencing its behavior. Since they influence the behavior of the API under test, they are considered additional inputs over and above the parameters. Thus, to test any API, the required environment should also be clearly understood and set up. Without these criteria, the API under test might not function as required, leaving the tester's job undone.

2. Input/Parameter Selection:
The list of valid input parameters needs to be identified to verify that the interface actually performs the tasks that it was designed for. While there is no method that ensures this behavior will be tested completely, using inputs that return quantifiable and verifiable results is the next best thing. The different possible input values (valid and invalid) need to be identified and selected for testing. Techniques like boundary value analysis and equivalence partitioning should be used when choosing the input parameter values. The boundary values

or limits that would lead to errors or exceptions need to be identified. It also helps if the data structures, and the components other than the API that use them, are analyzed. The data structures can be loaded by using the other components, and the API can be tested while the other component is accessing them; verify that the functionality of all dependent components is unaffected while the API accesses and manipulates the data structures. The availability of the source code helps the testers analyze the various input values that could be possible for testing the API, and helps in understanding the various paths which could be tested. Therefore, testers are required to understand not only the calls, but also all the constants and data types used by the interface.

3. Identify the combinations of parameters:
Parameter combinations are extremely important for exercising stored data and computation. In API calls, two independently valid values might cause a fault when used together which would not have occurred with other combinations of values. Therefore, a routine called with two parameters requires selection of values for one based on the value chosen for the other. Often the response of a routine to certain data combinations is incorrectly programmed due to the underlying complex logic. The API needs to be tested taking into consideration the combinations of different parameters. The number of possible combinations of parameters for each call is typically large. For a given set of parameters, even if only the boundary values have been selected, the number of combinations, while relatively diminished, may still be prohibitively large. For example, consider an API which takes three parameters as input: the various combinations of different values for the inputs need to be identified.
Parameter combination is further complicated by the function overloading capabilities of many modern programming languages. It is important to isolate the differences between such functions and take into account that their use is context driven. The APIs can also be tested to check that there are no memory leaks after they are called. This can be verified by continuously calling the API and observing the memory utilization.

4. Call Sequencing:
When the combinations of possible arguments to each individual call are unmanageable, the number of possible call sequences is infinite. Parameter selection and combination issues further complicate the call-sequencing problem. Faults caused by improper call sequences tend to give rise to some of the most dangerous problems in software; most security vulnerabilities are caused by the execution of such seemingly improbable sequences.

5. Observe the output:
The outcome of an execution of an API depends upon the behavior of that API, the test condition and the environment. The outcome of an API can take different forms: some calls return data or a status, while others might return nothing, wait for a period of time, trigger another event, modify a certain resource, and so on. The tester should know what output to expect from the API under test. The outputs returned for the various input values (valid/invalid, boundary values, etc.) need to be observed and analyzed to validate that they are as per the functionality. All the error codes and exceptions returned for all the input combinations should be evaluated.

API Testing Tools:
There are many testing tools available; depending on the level of testing required, different tools could be used. Some of the API testing tools available are mentioned here.

JVerify: This is from Man Machine Systems. JVerify is a Java class/API testing tool that supports a unique invasive testing model. The invasive model allows access to the internals (private elements) of any Java object from within a test script. The ability to invade class internals facilitates more effective testing at class level, since controllability and observability are enhanced. This can be very valuable when a class has not been designed for testability.

JavaSpec: JavaSpec is a SunTest API testing tool. It can be used to test Java applications and libraries through their API. JavaSpec guides users through the entire test creation process and lets them focus on the most critical aspects of testing. Once the user has entered the test data and assertions, JavaSpec automatically generates self-checking tests, HTML test documentation, and detailed test reports.

Here is an example of how to automate API testing.

Assumptions:
1. The test engineer is supposed to test some API.
2. The APIs are available in the form of a library (.lib).
3. The test engineer has the API document.

There are mainly two things to test in API testing:
1. Black box testing of the APIs.
2. Interaction / integration testing of the APIs.

Black box testing of an API means that we have to test the API for its outputs. In simple words, when we give a known input (parameters to the API), we also know the ideal output, so we have to check the actual output against the ideal output. For this we can write a simple C program that does the following:
A) Take the parameters from a text file (this file will contain many such input parameters).
B) Call the API with these parameters.
C) Match the actual and ideal output, and also check the parameters that are passed by reference (pointers) for good values.
D) Log the result.

Secondly, we have to test the integration of the APIs. For example, consider two APIs, say:
handle h = createcontext(void);
When the handle to the device is to be closed, the corresponding function is:
bool bishandledeleted = deletecontext(handle &h);
Here we have to call the two APIs and check that the handles created by createcontext() are deleted by deletecontext(). This ensures that these two APIs are working fine. For this we can write a simple C program that does the following:
A) Call the two APIs in the same order.
B) Pass the output parameter of the first as the input of the second.
C) Check the output parameter of the second API.
D) Log the result.

The example is oversimplified, but this works, because we use this kind of test tool for extensive regression testing of our API library.

17. Understanding Rapid Testing

Rapid testing is testing software faster than usual, without compromising the standards of quality. It is the technique of testing as thoroughly as is reasonable within the given constraints. This technique looks at testing as a process of heuristic inquiry, and logically speaking it should be based on exploratory testing techniques. Although most projects undergo continuous testing, this does not usually produce the information required in situations where it is necessary to make an instantaneous assessment of the product's quality at a particular moment. In most cases testing is scheduled for just prior to launch, and conventional testing techniques often cannot be applied to software that is incomplete or subject to constant change. At times like these, rapid testing can be used.

It can be said that rapid testing has a structure built on a foundation of four components, namely:
• People
• Integrated test process
• Static testing and
• Dynamic testing

There is a need for people who can handle the pressure of tight schedules. They need to be productive contributors even through the early phases of the development life cycle. According to James Bach, a core skill is the ability to think critically. It should also be noted that dynamic testing lies at the heart of the software testing process, and the planning, design, development, and execution of dynamic tests should be performed well for any testing process to be efficient.

THE RAPID TESTING PRACTICE
It helps to scrutinize each phase of the development process to see how the efficiency, speed and quality of testing can be improved, bearing in mind the following factors:
• Actions that the test team can take to prevent defects from escaping. For example, practices like extreme programming and exploratory testing.
• Actions that the test team can take to manage risk to the development schedule.
• The information that can be obtained from each phase so that the test team can speed up its activities.

If a test process is designed around the answers to these questions, both the speed of testing and the quality of the final product should be enhanced. Some of the checks that can be used while rapid testing are given below:
1. Test for link integrity
2. Test for disabled accessibility
3. Test the default settings
4. Check the navigation
5. Check for input constraints by injecting special characters at the sources of data
6. Run multiple instances
7. Check for interdependencies and stress them
8. Test for consistency of design
9. Test for compatibility
10. Test for usability
11. Check for the possible variabilities and attack them
12. Go for possible stress and load tests
13. And our favorite - banging the keyboard

18. Test Ware Development

Test ware development is the key role of the testing team. What comprises test ware, and some guidelines for building it, are discussed below.

18.1 Test Strategy
Before starting any testing activities, the team lead has to think a lot and arrive at a strategy. This describes the approach to be adopted for carrying out test activities, including the planning activities. It is a formal document, the very first document regarding the testing area, and is prepared at a very early stage in the SDLC. This document must provide a generic test approach as well as specific details regarding the project. The following areas are addressed in the test strategy document.

18.1.1 Test Levels
The test strategy must state which test levels will be carried out for that particular project. Unit, integration & system testing will be carried out in all projects, but many times integration & system testing may be combined. Details like this may be addressed in this section.

18.1.2 Roles and Responsibilities
The roles and responsibilities of the test leader, individual testers and project manager are to be clearly defined at the project level in this section. This may not have names associated, but each role has to be very clearly defined. The review and approval mechanism for test plans and other test documents must be stated here. Also, we have to state who reviews the test cases and test records, and who approves them. The documents may go through a series of reviews or multiple approvals, and these have to be mentioned here.

18.1.3 Testing Tools
Any testing tools which are to be used in the different test levels must be clearly identified. This includes justifications for the tools being used at that particular level.

18.1.4 Risks and Mitigation
Any risks that will affect the testing process must be listed along with their mitigation.
By documenting the risks in this document, we can anticipate their occurrence well ahead of time and proactively prevent them from occurring. Sample risks are dependency on completion of coding done by sub-contractors, capability of testing tools, etc.

18.1.5 Regression Test Approach
When a particular problem is identified, the program will be debugged and the fix will be applied. To make sure that the fix works, the program will be tested again against those criteria. Regression testing makes sure that one fix does not create other problems in that program or in any other interface. So a set of related test cases may have to be repeated to make sure that nothing else is affected by a particular fix. How this is to be carried out must be elaborated in this section. In some companies, whenever there is a fix in one unit, all unit test cases for that unit are repeated, to achieve a higher level of quality.

18.1.6 Test Groups
From the list of requirements, we can identify related areas whose functionality is similar. These areas are the test groups. For example, in a railway reservation system, anything related to ticket booking is a functional group; anything related to

report generation is a functional group. In the same way, we have to identify the test groups based on the functionality aspect.

18.1.7 Test Priorities
Among test cases, we need to establish priorities. While testing software projects, certain test cases will be treated as the most important ones, and if they fail, the product cannot be released. Some other test cases may be treated as cosmetic, and if they fail, we can release the product without much compromise on the functionality. These priority levels must be clearly stated, and may be mapped to the test groups as well.

18.1.8 Test Status Collection and Reporting
When test cases are executed, the test leader and the project manager must know where exactly the project stands in terms of testing activities. For this, the inputs from the individual testers must reach the test leader: which test cases were executed, how long they took, how many test cases passed, how many failed, etc. How often the status is collected must also be clearly mentioned; some companies have a practice of collecting the status on a daily or weekly basis.

18.1.9 Test Records Maintenance
When the test cases are executed, we need to keep track of the execution details: when each was executed, who did it, how long it took, what the result was, etc. This data must be available to the test leader and the project manager, along with all the team members, in a central location. It may be stored in a specific directory on a central server, and the document must clearly state the locations and the directories. The naming convention for the documents and files must also be mentioned.

18.1.10 Requirements Traceability Matrix
Ideally, each software product developed must satisfy its set of requirements completely. So, right from design, each requirement must be addressed in every single document in the software process. The documents include the HLD, LLD, source code, unit test cases, integration test cases and the system test cases. Refer to the following sample table, which describes the Requirements Traceability Matrix process. In this matrix, the rows hold the requirements, and for every document {HLD, LLD etc.} there is a separate column. In every cell, we state which section of that document addresses the particular requirement. Ideally, if every requirement is addressed in every single document, all the individual cells have valid section ids or names filled in; then we know that every requirement is addressed. If any requirement is missed, we need to go back to the document and correct it, so that it addresses the requirement. For testing at each level, we may have to address the requirements; one integration or system test case may address multiple requirements.

Requirement      | DTP Scenario No | DTC Id  | Code      | LLD Section
Requirement 1    | +ve/-ve         | 1,2,3,4 | ...       | ...
Requirement 2    | +ve/-ve         | 1,2,3,4 | ...       | ...
Requirement 3    | +ve/-ve         | 1,2,3,4 | ...       | ...
Requirement 4    | +ve/-ve         | 1,2,3,4 | ...       | ...
...              | ...             | ...     | ...       | ...
Requirement N    | +ve/-ve         | 1,2,3,4 | ...       | ...
(Filled in by)   | TESTER          | TESTER  | DEVELOPER | TEST LEAD

18.1.11 Test Summary
Senior management may like to have a test summary on a weekly or monthly basis; if the project is very critical, they may need it on a daily basis. This section must state what kind of test summary reports will be produced for senior management, along with their frequency.

The test strategy must give a clear vision of what the testing team will do for the whole project for the entire duration. This document will/may be presented to the client also, if needed. The person who prepares this document must be functionally strong in the product domain, with very good experience, as this is the document that is going to drive the entire team through the testing activities. The test strategy must be clearly explained to the testing team members right at the beginning of the project.

18.2 Test Plan
The test strategy identifies the multiple test levels to be performed for the project. Activities at each level must be planned well in advance and formally documented, and the individual test levels are carried out based on these individual plans only. The plans are to be prepared by experienced people only.

In all test plans, the ETVX {Entry-Task-Validation-Exit} criteria are to be mentioned. Entry means the entry point to that phase. For example, for unit testing, the coding must be complete; only then can unit testing start. Task is the activity that is performed. Validation is the way in which progress, correctness and compliance are verified for that phase. Exit tells the completion criteria of that phase, after the validation is done. For example, the exit criterion for unit testing is that all unit test cases must pass.

ETVX is a modeling technique for developing worldly and atomic level models. It stands for Entry, Task, Verification and Exit.
It is a task-based model where the details of each task are explicitly defined in a specification table against each phase, i.e. Entry, Exit, Task, Feedback In, Feedback Out, and measures. There are two types of cells: unit cells and implementation cells. The implementation cells are basically unit cells containing further tasks. For example, if there is a task of size estimation, there will be a unit cell for size estimation. Since this task has further tasks, namely define measures and estimate size, the unit cell containing these further tasks is referred to as the implementation cell, and a separate table is constructed for it. A purpose is also stated, and the viewer of the model may also be defined, e.g. top management or the customer.

18.2.1 Unit Test Plan {UTP}
The unit test plan is the overall plan for carrying out the unit test activities. The lead tester prepares it, and it is distributed to the individual testers. It contains the following sections.

18.2.1.1 What is to be tested?

The unit test plan must clearly specify the scope of unit testing. Normally, the basic input/output of the units along with their basic functionality is tested; the input units are mostly tested for format, alignment, accuracy and totals. The UTP will clearly give the rules for what data types are present in the system, their formats and their boundary conditions. This list may not be exhaustive, but it is better to have a complete list of these details.

18.2.1.2 Sequence of Testing
The sequence of test activities to be carried out in this phase is listed in this section. This includes whether to execute positive or negative test cases first, whether to execute test cases based on priority or based on test groups, etc. Positive test cases prove that the system performs what it is supposed to do; negative test cases prove that the system does not perform what it is not supposed to do. Testing of the screens, files, database etc. is to be given in proper sequence.

18.2.1.4 Basic Functionality of Units
How the independent functionalities of the units are tested, excluding any communication between the unit and other units. The interface part is out of the scope of this test level. Apart from the above sections, the following sections are addressed, very specific to unit testing:
• Unit testing tools
• Priority of program units
• Naming convention for test cases
• Status reporting mechanism
• Regression test approach
• ETVX criteria

18.2.2 Integration Test Plan
The integration test plan is the overall plan for carrying out the activities in the integration test level. It contains the following sections.

18.2.2.1 What is to be tested?
This section clearly specifies the kinds of interfaces that fall under the scope of testing: internal and external interfaces, with request and response, are to be explained.
This need not go deep into technical details, but the general approach to how the interfaces are triggered is explained.

18.2.2.2 Sequence of Integration
When there are multiple modules present in an application, the sequence in which they are to be integrated is specified in this section. Here, the dependencies between the modules play a vital role. If a unit B has to be executed, it may need the data fed by unit A and unit X. In this case, units A and X have to be integrated first, and then, using that data, unit B is tested. This has to be stated for the whole set of units in the program. Done correctly, the testing activities slowly build up the product, unit by unit, and then integrate them.

18.2.2.3 List of Modules and Interface Functions
There may be any number of units in the application, but only the units that are going to communicate with each other are tested in this phase. If the units are designed in such a way that they are mutually independent, then the interfaces do not come into the picture. This is almost impossible in any system, as the units have to communicate with other units in order to get different types of functionality executed. In this section, we need to list the units, and the purposes for which they talk to the others

need to be mentioned. This does not go into technical aspects; at a higher level, it has to be explained in plain English. Apart from the above sections, the following sections are addressed, very specific to integration testing:
• Integration testing tools
• Priority of program interfaces
• Naming convention for test cases
• Status reporting mechanism
• Regression test approach
• ETVX criteria
• Build/Refresh criteria (when multiple programs or objects are to be linked to arrive at a single product, and one unit has some modifications, it may be necessary to rebuild the entire product and then load it into the integration test environment; when and how often the product is rebuilt and refreshed is to be mentioned)

18.2.3 System Test Plan (STP)
The system test plan is the overall plan for carrying out the system test level activities. In the system test, apart from testing the functional aspects of the system, some special testing activities are carried out, such as stress testing. The following sections are normally present in a system test plan.

18.2.3.1 What is to be tested?
This section defines the scope of system testing, very specific to the project. Normally, system testing is based on the requirements, and all requirements are to be verified in the scope of system testing. This covers the functionality of the product. Apart from this, any special testing performed is also stated here.

18.2.3.2 Functional Groups and the Sequence
The requirements can be grouped in terms of functionality, and based on this there may also be priorities among the functional groups. For example, in a banking application, anything related to customer accounts can be grouped into one area, and anything related to inter-branch transactions into another. In the same way, for the product being tested, these areas are to be mentioned here, and the suggested sequence of testing these areas, based on the priorities, is to be described.
18.2.3.3 Special Testing Methods
This covers the different special tests like load/volume testing, stress testing, interoperability testing etc. These tests are done based on the nature of the product, and it is not mandatory that every one of these special tests be performed for every product. Apart from the above sections, the following sections are addressed, very specific to system testing:
• System testing tools
• Priority of functional groups
• Naming convention for test cases
• Status reporting mechanism
• Regression test approach
• ETVX criteria
• Build/Refresh criteria

18.2.4 Acceptance Test Plan (ATP)
The client performs the acceptance testing at their site. It is very similar to the system test performed by the software development unit. Since the client is the one

who decides the format and testing methods as part of acceptance testing, there is no specific clue about the way they will carry out the testing, but it will not differ much from the system testing. Assume that all the rules applicable to the system test can be applied to acceptance testing as well. Since this is just one level of testing done by the client for the overall product, it may include test cases covering the unit and integration test level details.

A sample test plan outline, along with descriptions, is shown below:

Test Plan Outline
1. BACKGROUND - Summarizes the functions of the application system and the tests to be performed.
2. INTRODUCTION
3. ASSUMPTIONS - Indicates any anticipated assumptions made while testing the application.
4. TEST ITEMS - Lists each of the items (programs) to be tested.
5. FEATURES TO BE TESTED - Lists each of the features (functions or requirements) which will be tested or demonstrated by the test.
6. FEATURES NOT TO BE TESTED - Explicitly lists each feature, function, or requirement which won't be tested, and why not.
7. APPROACH - Describes the data flows and test philosophy (simulation or live execution, etc.). This section also mentions all the approaches to be followed at the various stages of test execution.
8. ITEM PASS/FAIL CRITERIA - Blanket statement; itemized list of expected output and tolerances.
9. SUSPENSION/RESUMPTION CRITERIA - Must the test run from start to completion? Under what circumstances may it be resumed in the middle? Establish checkpoints in long tests.
10. TEST DELIVERABLES - What, besides software, will be delivered? Test report, test software.
11. TESTING TASKS - Functional tasks (e.g., equipment set-up), administrative tasks.
12. ENVIRONMENTAL NEEDS - Security clearance, office space and equipment, hardware/software requirements.
13. RESPONSIBILITIES - Who does the tasks in Section 10? What does the user do?
14. STAFFING & TRAINING
15. SCHEDULE
16. RESOURCES
17.
RISKS & CONTINGENCIES
18. APPROVAL

The schedule details of the various test passes, such as unit tests, integration tests and system tests, should be clearly mentioned, along with the estimated efforts.

18.3 Test Case Documents
Designing good test cases is a complex art. The complexity comes from three sources:
• Test cases help us discover information. Different types of tests are more effective for different classes of information.
• Test cases can be "good" in a variety of ways. No test case will be good in all of them.

• People tend to create test cases according to certain testing styles, such as domain testing or risk-based testing. Good domain tests are different from good risk-based tests.

What's a test case?
"A test case specifies the pretest state of the IUT and its environment, the test inputs or conditions, and the expected result. The expected result specifies what the IUT should produce from the test inputs. This specification includes messages generated by the IUT, exceptions, returned values, and the resultant state of the IUT and its environment. Test cases may also specify initial and resulting conditions for other objects that constitute the IUT and its environment."

What's a scenario?
A scenario is a hypothetical story, used to help a person think through a complex problem or system.

Characteristics of Good Scenarios
A scenario test has five key characteristics. It is (a) a story that is (b) motivating, (c) credible, (d) complex, and (e) easy to evaluate.

The primary objective of test case design is to derive a set of tests that have the highest likelihood of discovering defects in the software. Test cases are designed based on the analysis of requirements, use cases, and technical specifications, and they should be developed in parallel with the software development effort. A test case describes a set of actions to be performed and the results that are expected. A test case should target specific functionality or aim to exercise a valid path through a use case. This should include invalid user actions and illegal inputs that are not necessarily listed in the use case. How a test case is described depends on several factors, e.g. the number of test cases, the frequency with which they change, the level of automation employed, the skill of the testers, the selected testing methodology, staff turnover, and risk. Test cases have the generic format below.
Test case ID - The test case ID must be unique across the application.
Test case description - The test case description must be very brief.
Test prerequisite - The test prerequisite clearly describes what should be present in the system before the test can be executed.
Test inputs - The test inputs are nothing but the test data prepared to be fed to the system.
Test steps - The test steps are the step-by-step instructions on how to carry out the test.
Expected results - The expected results state what the system must give as output, or how the system must react, based on the test steps.
Actual results - The actual results state the outputs of the action for the given inputs, or how the system actually reacted for the given inputs.
Pass/Fail - If the expected and actual results are the same, the test is a Pass; otherwise it is a Fail.

The test cases are classified into positive and negative test cases. Positive test cases are designed to prove that the system accepts valid inputs and then processes them correctly. Suitable techniques for designing positive test cases are specification-derived tests, equivalence partitioning and state-transition testing. Negative test cases are designed to prove that the system rejects invalid inputs and does not process them. Suitable techniques for designing negative test cases are error guessing, boundary value analysis, internal boundary value testing and state-transition testing. The test case details must be very clearly specified, so that a new

person can go through the test case steps and execute them. The test cases are explained with specific examples in the following section.

For example, consider an online shopping application. At the user interface level, the client requests the web server to display the product details by giving an Email ID and Username. The web server processes the request and gives the response. For this application we will design the unit, integration and system test cases.

Figure 6: Web-based application

Unit Test Cases (UTC)
These are very specific to a particular unit. The basic functionality of the unit is to be understood based on the requirements and the design documents. Generally, the design document provides a lot of information about the functionality of a unit. The design document has to be consulted before a UTC is written, because it specifies how the system must behave for given inputs. For example, in the online shopping application, if the user enters a valid Email ID and Username, let us assume the design document says that the system must display the product details and insert the Email ID and Username into a database table. If the user enters invalid values, the system displays an appropriate error message and does not store them in the database.

Figure 7: Snapshot of Login Screen

Test conditions for the fields in the Login screen:
Email - It should be in a valid email format (e.g. clickme@yahoo.com).
Username - It should accept only alphabets, no more than 6 characters. Numerics and special characters are not allowed.

Test prerequisite: The user should have access to the Customer Login screen.

Negative Test Cases
Project Name: Online shopping; Version: 1.1; Module: Catalog
(Columns: Test #, Description, Test Inputs, Expected Results, Actual Results, Pass/Fail. Actual Results and Pass/Fail are filled in during execution.)

Test #1
Description: Check for inputting values in the Email field
Test Inputs: Email = keerthi@rediffmail; Username = Xavier
Expected Results: Inputs should not be accepted.
It should display the message "Enter valid Email".

Test #2
Description: Check for inputting values in the Email field
Test Inputs: Email = john26#rediffmail.com; Username = John
Expected Results: Inputs should not be accepted. It should display the message "Enter valid Email".

Test #3
Description: Check for inputting values in the Username field
Test Inputs: Email = shilpa@yahoo.com; Username = Mark24
Expected Results: Inputs should not be accepted. It should display the message "Enter correct Username".

Positive Test Cases
(Columns: Test #, Description, Test Inputs, Expected Results, Actual Results, Pass/Fail.)

Test #1
Description: Check for inputting values in the Email field
Test Inputs: Email = shan@yahoo.com; Username = dave
Expected Results: Inputs should be accepted.

Test #2
Description: Check for inputting values in the Email field
Test Inputs: Email = knki@rediffmail.com; Username = john
Expected Results: Inputs should be accepted.

Test #3
Description: Check for inputting values in the Username field
Test Inputs: Email = xav@yahoo.com; Username = mark
Expected Results: Inputs should be accepted.
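The Login-screen test conditions above can be sketched as a small validation routine exercised by both negative and positive test cases. This is a hedged illustration: the regular expression and the `validate_login` helper are assumptions, not the application's real code.

```python
import re

# Illustrative email pattern: name@domain.tld (not the application's real rule).
EMAIL_RE = re.compile(r"^[A-Za-z0-9._%-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}$")

def validate_login(email: str, username: str) -> bool:
    """Email must match the pattern; Username must be alphabetic, at most 6 chars."""
    return bool(EMAIL_RE.match(email)) and username.isalpha() and len(username) <= 6

# Negative test cases: invalid inputs must be rejected.
assert not validate_login("keerthi@rediffmail", "Xavier")     # no top-level domain
assert not validate_login("john26#rediffmail.com", "John")    # malformed address
assert not validate_login("shilpa@yahoo.com", "Mark24")       # digits in username

# Positive test cases: valid inputs must be accepted.
assert validate_login("shan@yahoo.com", "dave")
assert validate_login("xav@yahoo.com", "mark")
```

Each assert mirrors one row of the tables above: the expected result is encoded once, and a failing assertion is the "Fail" entry in the Pass/Fail column.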

Integration Test Cases
Before designing the integration test cases, the testers should go through the integration test plan; it gives a complete idea of how to write integration test cases. The main aim of integration test cases is to test multiple modules together. By executing these test cases, the user can find the errors in the interfaces between the modules. For example, in online shopping there are Catalog and Administration modules. In the Catalog section, the customer can track the list of products and buy products online. In the Administration module, the admin can enter the product name and the information related to it.
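The "backend verification" step used in such integration test cases can be sketched as follows. This is a minimal, hedged example: the `login()` function is a stand-in for the real module, and only the table and column names (`Cus`, `email`, `username`) are taken from the example; an in-memory SQLite database replaces the real backend.

```python
import sqlite3

def login(conn, email, username):
    """Stand-in for the Catalog module's login processing (hypothetical)."""
    conn.execute("INSERT INTO Cus (email, username) VALUES (?, ?)", (email, username))

conn = sqlite3.connect(":memory:")  # throwaway in-memory database for the sketch
conn.execute("CREATE TABLE Cus (email TEXT, username TEXT)")

# Drive the front-end module...
login(conn, "shilpa@yahoo.com", "shilpa")

# ...then do the backend verification: select email, username from Cus;
row = conn.execute("SELECT email, username FROM Cus").fetchone()
assert row == ("shilpa@yahoo.com", "shilpa")  # interface between UI and DB works
```

The point of the integration level shows up in the two halves: one module writes, the verification reads, and the test fails only if the interface between them is broken.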

Table 3: Integration Test Cases
(Columns: Test #, Description, Test Inputs, Expected Results, Actual Results, Pass/Fail.)

Test #1
Description: Check for Login screen
Test Inputs: Enter values in Email and Username. E.g.: Email = shilpa@yahoo.com; Username = shilpa
Expected Results: Inputs should be accepted. Backend verification: select email, username from Cus; the entered Email and Username should be displayed at the SQL prompt.

Test #2
Description: Check for product information
Test Inputs: Click the product information link
Expected Results: It should display the complete details of the product.

Test #3
Description: Check for admin screen
Test Inputs: Enter values in the Product ID and Product Name fields. E.g.: Product ID = 245; Product Name = Norton Antivirus
Expected Results: Inputs should be accepted. Backend verification: select pid, pname from Product; the entered Product ID and Product Name should be displayed at the SQL prompt.

NOTE: The tester has to execute the above unit and integration test cases after coding, and he/she has to fill in the Actual Results and Pass/Fail columns. If a test case fails, a defect report should be prepared.

System Test Cases
The system test cases are meant to test the system as per the requirements, end-to-end. This is basically to make sure that the application works as per the SRS. In system test cases (generally in system testing itself), the testers are supposed to act as end users. So, system test cases normally concentrate on the functionality of the system: inputs are fed through the system, and each and every check is performed using the system itself. Normally, verifications done by checking database tables directly or by running programs manually are not encouraged in the system test. The system test must focus on functional groups, rather than identifying the program units. When it comes to system testing, it is assumed that the interfaces between the modules are working fine (integration passed). Ideally, the test cases are nothing but a union of the functionalities tested in the unit testing and the integration testing.
Instead of testing the system's inputs and outputs through the database or external programs, everything is tested through the system itself. For example, in an online shopping application, the catalog and administration screens (program units) would have been independently unit tested, and the test results would be verified through the database. In system testing, the tester mimics an end user and hence checks the application through its output. There are occasions where some or many of the integration and unit test cases are repeated in system testing, especially when the units were earlier tested with test stubs rather than with the other real modules; during system testing, those cases are performed again with real modules and data in place.

19. Defect Management
Defects determine the effectiveness of the testing we do. If there are no defects, it directly implies that we don't have a job. There are two points worth considering here: either the developer is so strong that no defects arise, or the test engineer is weak. In many situations, the second proves correct, which implies that we lack the knack. In this section, let us understand defects.

19.1 What is a Defect?
For a test engineer, a defect is the following:
• Any deviation from specification
• Anything that causes user dissatisfaction
• Incorrect output
• Software that does not do what it is intended to do

Bug / Defect / Error:
• Software is said to have a bug if its features deviate from specifications.
• Software is said to have a defect if it has unwanted side effects.
• Software is said to have an error if it gives incorrect output.
But for a test engineer all of these are the same; the above distinction is only for the purpose of documentation, or indicative.

19.2 Defect Taxonomies
Categories of defects: all software defects can be broadly categorized into the types mentioned below:
• Errors of commission: something wrong is done
• Errors of omission: something is left out by accident
• Errors of clarity and ambiguity: different interpretations
• Errors of speed and capacity
However, the above is a broad categorization; below is a host of varied types of defects that can be identified in different software applications:
1. Conceptual bugs / Design bugs
2. Coding bugs
3. Integration bugs
4. User interface errors
5. Functionality
6. Communication
7. Command structure
8. Missing commands
9. Performance
10. Output
11. Error handling errors
12. Boundary-related errors
13. Calculation errors
14. Initial and later states
15. Control flow errors
16. Errors in handling data
17. Race condition errors
18. Load condition errors
19. Hardware errors
20. Source and version control errors
21. Documentation errors
22. Testing errors

19.3 Life Cycle of a Defect

The following self-explanatory figure explains the life cycle of a defect:
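Since the figure itself is not reproduced in this text, the life cycle can be sketched as a small state machine. The state names and transitions below are a commonly used set, assumed for illustration; they are not taken from the original figure.

```python
# Hypothetical defect states and the transitions allowed between them.
TRANSITIONS = {
    "New":      {"Open", "Rejected", "Deferred"},
    "Open":     {"Fixed"},
    "Fixed":    {"Retest"},
    "Retest":   {"Closed", "Reopened"},
    "Reopened": {"Fixed"},
}

def can_move(current: str, target: str) -> bool:
    """Check whether a defect may move from one state to another."""
    return target in TRANSITIONS.get(current, set())

assert can_move("New", "Open")
assert can_move("Retest", "Reopened")
assert not can_move("Closed", "Open")   # no transitions out of Closed in this sketch
```

A defect tracker enforces exactly this kind of table: a fix cannot be closed without a retest, and a reopened defect goes back through the fix/retest loop.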

20. Metrics for Testing
What is a metric? A 'metric' is a measure to quantify software, software development resources, and/or the software development process. A metric can quantify any of the following factors:
• Schedule
• Work effort
• Product size
• Project status
• Quality performance

Measuring enables… Metrics enable estimation of future work. Considering the case of testing: deciding whether the product is fit for shipment or delivery depends on the rate at which defects are found and fixed. Defects collected and fixed is one kind of metric. (www.processimpact.com)

As defined in the MISRA Report, it is beneficial to classify metrics according to their usage. IEEE 982.1 [4] identifies two classes:
i) Process - activities performed in the production of the software;
ii) Product - an output of the process, for example the software or its documentation.

Defects are analyzed to identify the major causes of defects and the phase that introduces the most defects. This can be achieved by performing Pareto analysis of defect causes and defect-introduction phases. The main requirement for any of these analyses is software defect metrics. A few of the defect metrics are:

Defect Density: (No. of defects reported by SQA + No. of defects reported by peer review) / Actual size. The size can be in KLOC, SLOC, or Function Points, depending on the method used in the organization to measure the size of the software product. The SQA is considered part of the software testing team.

Test Effectiveness: t / (t + UAT), where t = total no. of defects reported during testing and UAT = total no. of defects reported during user acceptance testing. User acceptance testing is generally carried out using the acceptance test criteria according to the acceptance test plan.

Defect Removal Efficiency: (Total no. of defects removed / Total no. of defects injected) × 100, at various stages of the SDLC.

Description

This metric indicates the effectiveness of defect identification and removal in stages for a given project.
Formula:
• Requirements: DRE = [(Requirement defects corrected during the Requirements phase) / (Requirement defects injected during the Requirements phase)] × 100
• Design: DRE = [(Design defects corrected during the Design phase) / (Defects identified during the Requirements phase + Defects injected during the Design phase)] × 100
• Code: DRE = [(Code defects corrected during the Coding phase) / (Defects identified during the Requirements phase + Defects identified during the Design phase + Defects injected during the Coding phase)] × 100
• Overall: DRE = [(Total defects corrected at all phases before delivery) / (Total defects detected at all phases before and after delivery)] × 100
Metric Representation: Percentage
Calculated at: Stage completion or project completion
Calculated from: Bug reports and peer review reports

Defect Distribution: Percentage of total defects distributed across Requirements Analysis, Design Reviews, Code Reviews, Unit Tests, Integration Tests, System Tests, User Acceptance Tests, and Review by Project Leads and Project Managers.

Software process metrics are measures which provide information about the performance of the development process itself. Purpose:
1. Provide an indicator of the ultimate quality of the software being produced.
2. Assist the organization in improving its development process by highlighting areas of inefficiency or error-prone areas of the process.

Software product metrics are measures of some attribute of the software product (for example, source code). Purpose:
1. Used to assess the quality of the output.

What are the most general metrics?

Requirements Management
Metrics Collected:
1. Requirements by state - accepted, rejected, postponed
2. No. of baselined requirements
3. Number of requirements modified after baselining
Derived Metrics:
1. Requirements Stability Index (RSI)
2.
Requirements to Design Traceability

Project Management
Metrics Collected → Derived Metrics:
1. Planned no. of days; actual no. of days → Schedule Variance
2. Estimated effort; actual effort → Effort Variance
3. Estimated cost; actual cost → Cost Variance
4. Estimated size; actual size → Size Variance

Testing & Review Metrics
Metrics Collected:
1. No. of defects found by reviews
2. No. of defects found by testing

3. No. of defects found by the client
4. Total no. of defects found by reviews
Derived Metrics:
1. Overall Review Effectiveness (ORE)
2. Overall Test Effectiveness (OTE)

Peer Reviews
Metrics Collected:
1. KLOC / FP per person-hour (per language) for preparation
2. KLOC / FP per person-hour (per language) for the review meeting
3. No. of pages per hour reviewed during preparation
4. Average number of defects found by a reviewer during preparation
5. No. of pages per hour reviewed during the review meeting
6. Average number of defects found by a reviewer during the review meeting
7. Review team size vs. defects
8. Review speed vs. defects
9. Major defects found during the review meeting
10. Defects vs. review effort
Derived Metrics:
1. Review Effectiveness (Major)
2. Total number of defects found by reviews for a project

Other Metrics
Metrics Collected:
1. No. of requirements designed
2. No. of requirements not designed
3. No. of design elements matching requirements
4. No. of design elements not matching requirements
5. No. of requirements tested
6. No. of requirements not tested
7. No. of test cases with matching requirements
8. No. of test cases without matching requirements
9. No. of defects by severity
10. No. of defects by stage of origin, detection, removal
Derived Metrics:
1. Defect Density
2. No. of requirements designed vs. not designed
3. No. of requirements tested vs. not tested
4. Defect Removal Efficiency (DRE)

Some Metrics Explained

Schedule Variance (SV)
Description: This metric gives the variation of the actual schedule vs. the planned schedule. It is calculated for each project, stage-wise.
Formula: SV = [(Actual no. of days - Planned no. of days) / Planned no.
of days] × 100
Metric Representation: Percentage
Calculated at: Stage completion
Calculated from: The software project plan, for the planned number of days to complete each stage and the actual number of days taken to complete each stage

Defect Removal Efficiency (DRE)
Description: This metric indicates the effectiveness of defect identification and removal in stages for a given project.

Formula:
• Requirements: DRE = [(Requirement defects corrected during the Requirements phase) / (Requirement defects injected during the Requirements phase)] × 100
• Design: DRE = [(Design defects corrected during the Design phase) / (Defects identified during the Requirements phase + Defects injected during the Design phase)] × 100
• Code: DRE = [(Code defects corrected during the Coding phase) / (Defects identified during the Requirements phase + Defects identified during the Design phase + Defects injected during the Coding phase)] × 100
• Overall: DRE = [(Total defects corrected at all phases before delivery) / (Total defects detected at all phases before and after delivery)] × 100
Metric Representation: Percentage
Calculated at: Stage completion or project completion
Calculated from: Bug reports and peer review reports

Overall Review Effectiveness (ORE)
Description: This metric indicates the effectiveness of the review process in identifying defects for a given project.
Formula: ORE = [(Number of defects found by reviews) / (Total number of defects found by reviews + Number of defects found during testing + Number of defects found post-delivery)] × 100
Metric Representation: Percentage
Calculated at: Monthly; stage completion or project completion
Calculated from: Peer reviews and formal reviews; test reports; customer-identified defects

Overall Test Effectiveness (OTE)
Description: This metric indicates the effectiveness of the testing process in identifying defects for a given project during the testing stage.
Formula: OTE = [(Number of defects found during testing) / (Total number of defects found during testing + Number of defects found post-delivery)] × 100
Metric Representation: Percentage
Calculated at: Monthly; build completion or project completion
Calculated from: Test reports; customer-identified defects

Effort Variance (EV)
Description: This metric gives the variation of actual effort vs.
the estimated effort. It is calculated for each project, stage-wise.
Formula:

EV = [(Actual person-hours - Estimated person-hours) / Estimated person-hours] × 100
Metric Representation: Percentage
Calculated at: Stage completion as identified in the SPP
Calculated from: Estimation sheets for estimated values in person-hours for each activity within a given stage, and actual worked-hours values in person-hours

Cost Variance (CV)
Description: This metric gives the variation of actual cost vs. the estimated cost. It is calculated for each project, stage-wise.
Formula: CV = [(Actual cost - Estimated cost) / Estimated cost] × 100
Metric Representation: Percentage
Calculated at: Stage completion
Calculated from: Estimation sheets for estimated values in dollars or rupees for each activity within a given stage; actual cost incurred

Size Variance
Description: This metric gives the variation of actual size vs. the estimated size. It is calculated for each project, stage-wise.
Formula: Size Variance = [(Actual size - Estimated size) / Estimated size] × 100
Metric Representation: Percentage
Calculated at: Stage completion; project completion
Calculated from: Estimation sheets for estimated values in Function Points or KLOC; actual size

Productivity on Review Preparation - Technical
Description: This metric indicates the effort spent on preparation for review. Use this to calculate for each language used in the project.
Formula: For every language used (such as C, C++, Java, XSL, etc.), calculate (KLOC or FP) per hour, per language (C, C++, Java, XML, etc.)
Metric Representation: KLOC or FP per hour
Calculated at: Monthly; build completion
Calculated from: Peer review report

Number of Defects Found per Review Meeting

Description: This metric indicates the number of defects found during the review meeting across the various stages of the project.
Formula: Number of defects per review meeting
Metric Representation: Defects / review meeting
Calculated at: Monthly; completion of review
Calculated from: Peer review report; peer review defect list

Review Team Efficiency (Review Team Size vs. Defects Trend)
Description: This metric indicates the review team size and the defects trend. It helps to determine the efficiency of the review team.
Formula: Review team size to the defects trend
Metric Representation: Ratio
Calculated at: Monthly; completion of review
Calculated from: Peer review report; peer review defect list

Review Effectiveness
Description: This metric indicates the effectiveness of the review process.
Formula: Review Effectiveness = [(Number of defects found by reviews) / (Total number of defects found by reviews + defects found by testing)] × 100
Metric Representation: Percentage
Calculated at: Completion of review or completion of the testing stage
Calculated from: Peer review report; peer review defect list; bugs reported by testing

Total Number of Defects Found by Reviews
Description: This metric indicates the total number of defects identified by the review process. The defects are further categorized as high, medium or low.
Formula: Total number of defects identified in the project
Metric Representation: Defects per stage
Calculated at: Completion of reviews
Calculated from: Peer review report; peer review defect list

Defects vs. Review Effort - Review Yield

Description: This metric indicates the effort expended on reviews in each stage relative to the defects found.
Formula: Defects / review effort
Metric Representation: Defects / review effort
Calculated at: Completion of reviews
Calculated from: Peer review report; peer review defect list

Requirements Stability Index (RSI)
Description: This metric gives the stability factor of the requirements over a period of time, after the requirements have been mutually agreed and baselined between Ivesia Solutions and the client.
Formula: RSI = 100 × [(Number of baselined requirements) - (Number of changes in requirements after the requirements are baselined)] / (Number of baselined requirements)
Metric Representation: Percentage
Calculated at: Stage completion and project completion
Calculated from: Change requests; software requirements specification

Change Requests by State
Description: This metric provides analysis of the state of the requirements.
Formula: Number of accepted requirements; number of rejected requirements; number of postponed requirements
Metric Representation: Number
Calculated at: Stage completion
Calculated from: Change requests; software requirements specification

Requirements to Design Traceability
Description: This metric provides analysis of the number of requirements designed vs. the number of requirements not designed.
Formula: Total number of requirements; number of requirements designed; number of requirements not designed
Metric Representation: Number
Calculated at: Stage completion

Calculated from: SRS; detail design

Design to Requirements Traceability
Description: This metric provides analysis of the number of design elements matching requirements vs. the number of design elements not matching requirements.
Formula: Number of design elements; number of design elements matching requirements; number of design elements not matching requirements
Metric Representation: Number
Calculated at: Stage completion
Calculated from: SRS; detail design

Requirements to Test Case Traceability
Description: This metric provides analysis of the number of requirements tested vs. the number of requirements not tested.
Formula: Number of requirements; number of requirements tested; number of requirements not tested
Metric Representation: Number
Calculated at: Stage completion
Calculated from: SRS; detail design; test case specification

Test Cases to Requirements Traceability
Description: This metric provides analysis of the number of test cases matching requirements vs. the number of test cases not matching requirements.
Formula: Number of requirements; number of test cases with matching requirements; number of test cases not matching requirements
Metric Representation: Number
Calculated at: Stage completion
Calculated from: SRS; test case specification

Number of Defects in Coding Found During Testing, by Severity
Description: This metric provides analysis of the number of defects by severity.
Formula: Number of defects; number of defects of low priority; number of defects of medium priority;

G 4umber of defects of hi*h priorit& %etric Representation G 4umber 'alculated at G -ta*e completion 'alculated from G 8u* Report +efects , -ta*e of ori*in3 detection3 removal +escription This metric provides the anal&sis on the number of defects b& the sta*e of ori*in3 detection and removal. Formula G 4umber of +efects G -ta*e of ori*in G -ta*e of detection G -ta*e of removal %etric Representation G 4umber 'alculated at G -ta*e completion 'alculated from G 8u* Report +efect +ensit& +escription This metric provides the anal&sis on the number of defects to the si.e of the work product Formula +efect +ensit& " >Total no. of +efects / -i.e :FP / Z 5';@ M #II %etric Representation G Percenta*e 'alculated at G -ta*e completion 'alculated from G +efects ist G 8u* Report Row do &ou determine metrics for &our application! 5b0ectives of %etrics are not onl& to measure but also understand the pro*ress to the 5r*ani.ational Qoal. The Parameters for determinin* the %etrics for an application: G +uration G 'ompleCit& G Technolo*& 'onstraints G Previous /Cperience in -ame Technolo*& G 8usiness +omain G 'larit& of the scope of the pro0ect 5ne interestin* and useful approach to arrive at the suitable metrics is usin* the Qoal)Question)%etric Technique. 6s evident from the name3 the QQ% model consists of three la&ers9 a Qoal3 a -et of Questions3 and lastl& a -et of 'orrespondin* %etrics. It is thus a hierarchical structure startin* with a *oal :specif&in* purpose of measurement3 ob0ect to be measured3 issue to be measured3 and viewpoint from which the measure is taken;. The *oal is refined into several questions that usuall& break down the issue into its ma0or components. /ach question is then refined into metrics3 some of them

ob0ective3 some of them sub0ective. The same metric can be used in order to answer different questions under the same *oal. -everal QQ% models can also have questions and metrics in common3 makin* sure that3 when the measure is actuall& taken3 the different viewpoints are taken into account correctl& :i.e.3 the metric mi*ht have different values when taken from different viewpoints;. In order to *ive an eCample of application of the model: Qoal Purpose Issue 5b0ect ?iew Point Improve the timeliness of 'han*e Request Processin* from the Pro0ect %ana*erYs viewpoint Question What is the current 'han*e Request Processin* -peed! %etric 6vera*e '&cle Time -tandard +eviation S cases outside of the upper limit Question Is the performance of the process improvin*! %etric 'urrent avera*e c&cle time 8aseline avera*e c&cle time #II -ub0ective ratin* of mana*erAs satisfaction When do &ou determine %etrics! When the requirements are understood in a hi*h)level3 at this sta*e3 the team si.e3 pro0ect si.e must be known to an eCtent3 in which the pro0ect is at a <defined< sta*e.
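Several of the formulas above are simple closed-form calculations. The sketch below shows the RSI and defect density formulas and the two objective GQM cycle-time metrics from the example; the function names and sample numbers are illustrative assumptions, not taken from any tool mentioned in this manual.

```python
import statistics

def requirements_stability_index(baselined, changed_after_baseline):
    """RSI = 100 x [(baselined reqs) - (changes after baseline)] / (baselined reqs)."""
    if baselined <= 0:
        raise ValueError("RSI needs at least one baselined requirement")
    return 100 * (baselined - changed_after_baseline) / baselined

def defect_density(total_defects, size):
    """Defect Density = [total defects / size] x 100, size in FP or KLOC."""
    return 100 * total_defects / size

def cycle_time_metrics(cycle_times, upper_limit):
    """Metrics for 'What is the current Change Request processing speed?':
    average cycle time, standard deviation, and % of cases outside the upper limit."""
    average = statistics.mean(cycle_times)
    deviation = statistics.stdev(cycle_times)
    outside = 100 * sum(t > upper_limit for t in cycle_times) / len(cycle_times)
    return average, deviation, outside

def improvement_ratio(current_average, baseline_average):
    """Metric for 'Is the performance of the process improving?':
    (current average cycle time / baseline average cycle time) x 100.
    Values under 100 mean change requests are now processed faster."""
    return 100 * current_average / baseline_average

# Hypothetical project data:
print(requirements_stability_index(40, 6))   # 85.0 -> fairly stable requirements
print(defect_density(12, 30))                # 40.0 defects per 100 FP
avg, dev, outside = cycle_time_metrics([3.0, 5.0, 4.0, 9.0, 4.0], upper_limit=7.0)
print(avg, outside)                          # 5.0 20.0
print(improvement_ratio(avg, baseline_average=6.0))  # ~83.3 -> improving
```

Note how each function answers one GQM question: the goal ("improve the timeliness of Change Request processing") drives the questions, and each question is answered by one or two of these objective metrics, alongside the subjective manager-satisfaction rating that no formula can capture.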

References
- Effective Methods of Software Testing, William E. Perry.
- Software Engineering: A Practitioner's Approach, Roger Pressman.
- An API Testing Method, by Alan A. Jorgensen and James A. Whittaker.
- API Testing Methodology, by Anoop Kumar P, Novell Software Development (I) Pvt. Ltd., Bangalore.
- "Why is API Testing Different?", by Nikhil Nilakantan, Hewlett-Packard, and Ibrahim K. El-Far, Florida Institute of Technology.
- Test Strategy & Test Plan Preparation, training course attended at SoftSmith.
- Designing Test Cases, Cem Kaner, J.D., Ph.D.
- Scenario Testing, Cem Kaner, J.D., Ph.D.
- Exploratory Testing Explained, v.1.3, 4/16/03, by James Bach.
- Exploring Exploratory Testing, by Andy Tinkham and Cem Kaner.
- Session-Based Test Management, by Jonathan Bach (first published in Software Testing and Quality Engineering magazine, 11/00).
- Defect Driven Exploratory Testing (DDET), by Ananthalakshmi.
- Software Engineering Body of Knowledge v1.0 (http://www.sei.cmu.edu/publications)
- Unit Testing guidelines by Scott Highet (http://www.stickyminds.com)
- http://www.sasystems.com
- http://www.softwareqatest.com
- http://www.eng.mu.edu/corlissg/198.2001/KFN_ch11-tools.html
- http://www.ics.uci.edu/~djrobbins/ics125w04/nonav/howto-reviews.html
- IEEE Standard for Software Reviews, IEEE Std 1028-1997.
- http://www.agilemanifesto.org
- http://www.processimpact.com
- The Goal Question Metric Approach, Victor R. Basili, Gianluigi Caldiera, H. Dieter Rombach.
- http://www.webopedia.com

GNU Free Documentation License
Version 1.2, November 2002

Copyright (C) 2000, 2001, 2002 Free Software Foundation, Inc. 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA. Everyone is permitted to copy and distribute verbatim copies of this license document, but changing it is not allowed.

0. PREAMBLE

The purpose of this License is to make a manual, textbook, or other functional and useful document "free" in the sense of freedom: to assure everyone the effective freedom to copy and redistribute it, with or without modifying it, either commercially or noncommercially. Secondarily, this License preserves for the author and publisher a way to get credit for their work, while not being considered responsible for modifications made by others.

This License is a kind of "copyleft", which means that derivative works of the document must themselves be free in the same sense. It complements the GNU General Public License, which is a copyleft license designed for free software.

We have designed this License in order to use it for manuals for free software, because free software needs free documentation: a free program should come with manuals providing the same freedoms that the software does. But this License is not limited to software manuals; it can be used for any textual work, regardless of subject matter or whether it is published as a printed book. We recommend this License principally for works whose purpose is instruction or reference.

1. APPLICABILITY AND DEFINITIONS

This License applies to any manual or other work, in any medium, that contains a notice placed by the copyright holder saying it can be distributed under the terms of this License. Such a notice grants a world-wide, royalty-free license, unlimited in duration, to use that work under the conditions stated herein. The "Document", below, refers to any such manual or work. Any member of the public is a licensee, and is addressed as "you". You accept the license if you copy, modify or distribute the work in a way requiring permission under copyright law.

A "Modified Version" of the Document means any work containing the Document or a portion of it, either copied verbatim, or with modifications and/or translated into another language.

A "Secondary Section" is a named appendix or a front-matter section of the Document that deals exclusively with the relationship of the publishers or authors of the Document to the Document's overall subject (or to related matters) and contains nothing that could fall directly within that overall subject. (Thus, if the Document is in part a textbook of mathematics, a Secondary Section may not explain any mathematics.) The relationship could be a matter of historical connection with the subject or with related matters, or of legal, commercial, philosophical, ethical or political position regarding them.

The "Invariant Sections" are certain Secondary Sections whose titles are designated, as being those of Invariant Sections, in the notice that says that the Document is released under this License. If a section does not fit the above definition of Secondary then it is not allowed to be designated as Invariant. The Document may contain zero Invariant Sections. If the Document does not identify any Invariant Sections then there are none.

The "Cover Texts" are certain short passages of text that are listed, as Front-Cover Texts or Back-Cover Texts, in the notice that says that the Document is released under this License. A Front-Cover Text may be at most 5 words, and a Back-Cover Text may be at most 25 words.

A "Transparent" copy of the Document means a machine-readable copy, represented in a format whose specification is available to the general public, that is suitable for revising the document straightforwardly with generic text editors or (for images composed of pixels) generic paint programs or (for drawings) some widely available drawing editor, and that is suitable for input to text formatters or for automatic translation to a variety of formats suitable for input to text formatters. A copy made in an otherwise Transparent file format whose markup, or absence of markup, has been arranged to thwart or discourage subsequent modification by readers is not Transparent. An image format is not Transparent if used for any substantial amount of text. A copy that is not "Transparent" is called "Opaque".

Examples of suitable formats for Transparent copies include plain ASCII without markup, Texinfo input format, LaTeX input format, SGML or XML using a publicly available DTD, and standard-conforming simple HTML, PostScript or PDF designed for human modification. Examples of transparent image formats include PNG, XCF and JPG. Opaque formats include proprietary formats that can be read and edited only by proprietary word processors, SGML or XML for which the DTD and/or processing tools are not generally available, and the machine-generated HTML, PostScript or PDF produced by some word processors for output purposes only.

The "Title Page" means, for a printed book, the title page itself, plus such following pages as are needed to hold, legibly, the material this License requires to appear in the title page. For works in formats which do not have any title page as such, "Title Page" means the text near the most prominent appearance of the work's title, preceding the beginning of the body of the text.

A section "Entitled XYZ" means a named subunit of the Document whose title either is precisely XYZ or contains XYZ in parentheses following text that translates XYZ in another language. (Here XYZ stands for a specific section name mentioned below, such as "Acknowledgements", "Dedications", "Endorsements", or "History".) To "Preserve the Title" of such a section when you modify the Document means that it remains a section "Entitled XYZ" according to this definition.

The Document may include Warranty Disclaimers next to the notice which states that this License applies to the Document. These Warranty Disclaimers are considered to be included by reference in this License, but only as regards disclaiming warranties: any other implication that these Warranty Disclaimers may have is void and has no effect on the meaning of this License.

2. VERBATIM COPYING

You may copy and distribute the Document in any medium, either commercially or noncommercially, provided that this License, the copyright notices, and the license notice saying this License applies to the Document are reproduced in all copies, and that you add no other conditions whatsoever to those of this License. You may not use technical measures to obstruct or control the reading or further copying of the copies you make or distribute. However, you may accept compensation in exchange for copies. If you distribute a large enough number of copies you must also follow the conditions in section 3.

You may also lend copies, under the same conditions stated above, and you may publicly display copies.

3. COPYING IN QUANTITY

If you publish printed copies (or copies in media that commonly have printed covers) of the Document, numbering more than 100, and the Document's license notice requires Cover Texts, you must enclose the copies in covers that carry, clearly and legibly, all these Cover Texts: Front-Cover Texts on the front cover, and Back-Cover Texts on the back cover. Both covers must also clearly and legibly identify you as the publisher of these copies. The front cover must present the full title with all words of the title equally prominent and visible. You may add other material on the covers in addition. Copying with changes limited to the covers, as long as they preserve the title of the Document and satisfy these conditions, can be treated as verbatim copying in other respects.

If the required texts for either cover are too voluminous to fit legibly, you should put the first ones listed (as many as fit reasonably) on the actual cover, and continue the rest onto adjacent pages.

If you publish or distribute Opaque copies of the Document numbering more than 100, you must either include a machine-readable Transparent copy along with each Opaque copy, or state in or with each Opaque copy a computer-network location from which the general network-using public has access to download using public-standard network protocols a complete Transparent copy of the Document, free of added material. If you use the latter option, you must take reasonably prudent steps, when you begin distribution of Opaque copies in quantity, to ensure that this Transparent copy will remain thus accessible at the stated location until at least one year after the last time you distribute an Opaque copy (directly or through your agents or retailers) of that edition to the public.

It is requested, but not required, that you contact the authors of the Document well before redistributing any large number of copies, to give them a chance to provide you with an updated version of the Document.

4. MODIFICATIONS

You may copy and distribute a Modified Version of the Document under the conditions of sections 2 and 3 above, provided that you release the Modified Version under precisely this License, with the Modified Version filling the role of the Document, thus licensing distribution and modification of the Modified Version to whoever possesses a copy of it. In addition, you must do these things in the Modified Version:

A. Use in the Title Page (and on the covers, if any) a title distinct from that of the Document, and from those of previous versions (which should, if there were any, be listed in the History section of the Document). You may use the same title as a previous version if the original publisher of that version gives permission.
B. List on the Title Page, as authors, one or more persons or entities responsible for authorship of the modifications in the Modified Version, together with at least five of the principal authors of the Document (all of its principal authors, if it has fewer than five), unless they release you from this requirement.
C. State on the Title page the name of the publisher of the Modified Version, as the publisher.
D. Preserve all the copyright notices of the Document.
E. Add an appropriate copyright notice for your modifications adjacent to the other copyright notices.
F. Include, immediately after the copyright notices, a license notice giving the public permission to use the Modified Version under the terms of this License, in the form shown in the Addendum below.
G. Preserve in that license notice the full lists of Invariant Sections and required Cover Texts given in the Document's license notice.
H. Include an unaltered copy of this License.
I. Preserve the section Entitled "History", Preserve its Title, and add to it an item stating at least the title, year, new authors, and publisher of the Modified Version as given on the Title Page. If there is no section Entitled "History" in the Document, create one stating the title, year, authors, and publisher of the Document as given on its Title Page, then add an item describing the Modified Version as stated in the previous sentence.
J. Preserve the network location, if any, given in the Document for public access to a Transparent copy of the Document, and likewise the network locations given in the Document for previous versions it was based on. These may be placed in the "History" section. You may omit a network location for a work that was published at least four years before the Document itself, or if the original publisher of the version it refers to gives permission.
K. For any section Entitled "Acknowledgements" or "Dedications", Preserve the Title of the section, and preserve in the section all the substance and tone of each of the contributor acknowledgements and/or dedications given therein.
L. Preserve all the Invariant Sections of the Document, unaltered in their text and in their titles. Section numbers or the equivalent are not considered part of the section titles.
M. Delete any section Entitled "Endorsements". Such a section may not be included in the Modified Version.
N. Do not retitle any existing section to be Entitled "Endorsements" or to conflict in title with any Invariant Section.
O. Preserve any Warranty Disclaimers.

If the Modified Version includes new front-matter sections or appendices that qualify as Secondary Sections and contain no material copied from the Document, you may at your option designate some or all of these sections as invariant. To do this, add their titles to the list of Invariant Sections in the Modified Version's license notice. These titles must be distinct from any other section titles.

You may add a section Entitled "Endorsements", provided it contains nothing but endorsements of your Modified Version by various parties--for example, statements of peer review or that the text has been approved by an organization as the authoritative definition of a standard.

You may add a passage of up to five words as a Front-Cover Text, and a passage of up to 25 words as a Back-Cover Text, to the end of the list of Cover Texts in the Modified Version. Only one passage of Front-Cover Text and one of Back-Cover Text may be added by (or through arrangements made by) any one entity. If the Document already includes a cover text for the same cover, previously added by you or by arrangement made by the same entity you are acting on behalf of, you may not add another; but you may replace the old one, on explicit permission from the previous publisher that added the old one.

The author(s) and publisher(s) of the Document do not by this License give permission to use their names for publicity for or to assert or imply endorsement of any Modified Version.

5. COMBINING DOCUMENTS

You may combine the Document with other documents released under this License, under the terms defined in section 4 above for modified versions, provided that you include in the combination all of the Invariant Sections of all of the original documents, unmodified, and list them all as Invariant Sections of your combined work in its license notice, and that you preserve all their Warranty Disclaimers.

The combined work need only contain one copy of this License, and multiple identical Invariant Sections may be replaced with a single copy. If there are multiple Invariant Sections with the same name but different contents, make the title of each such section unique by adding at the end of it, in parentheses, the name of the original author or publisher of that section if known, or else a unique number. Make the same adjustment to the section titles in the list of Invariant Sections in the license notice of the combined work.

In the combination, you must combine any sections Entitled "History" in the various original documents, forming one section Entitled "History"; likewise combine any sections Entitled "Acknowledgements", and any sections Entitled "Dedications". You must delete all sections Entitled "Endorsements."

6. COLLECTIONS OF DOCUMENTS

You may make a collection consisting of the Document and other documents released under this License, and replace the individual copies of this License in the various documents with a single copy that is included in the collection, provided that you follow the rules of this License for verbatim copying of each of the documents in all other respects.

You may extract a single document from such a collection, and distribute it individually under this License, provided you insert a copy of this License into the extracted document, and follow this License in all other respects regarding verbatim copying of that document.

7. AGGREGATION WITH INDEPENDENT WORKS

A compilation of the Document or its derivatives with other separate and independent documents or works, in or on a volume of a storage or distribution medium, is called an "aggregate" if the copyright resulting from the compilation is not used to limit the legal rights of the compilation's users beyond what the individual works permit. When the Document is included in an aggregate, this License does not apply to the other works in the aggregate which are not themselves derivative works of the Document.

If the Cover Text requirement of section 3 is applicable to these copies of the Document, then if the Document is less than one half of the entire aggregate, the Document's Cover Texts may be placed on covers that bracket the Document within the aggregate, or the electronic equivalent of covers if the Document is in electronic form. Otherwise they must appear on printed covers that bracket the whole aggregate.

8. TRANSLATION

Translation is considered a kind of modification, so you may distribute translations of the Document under the terms of section 4. Replacing Invariant Sections with translations requires special permission from their copyright holders, but you may include translations of some or all Invariant Sections in addition to the original versions of these Invariant Sections. You may include a translation of this License, and all the license notices in the Document, and any Warranty Disclaimers, provided that you also include the original English version of this License and the original versions of those notices and disclaimers. In case of a disagreement between the translation and the original version of this License or a notice or disclaimer, the original version will prevail.

If a section in the Document is Entitled "Acknowledgements", "Dedications", or "History", the requirement (section 4) to Preserve its Title (section 1) will typically require changing the actual title.

9. TERMINATION

You may not copy, modify, sublicense, or distribute the Document except as expressly provided for under this License. Any other attempt to copy, modify, sublicense or distribute the Document is void, and will automatically terminate your rights under this License. However, parties who have received copies, or rights, from you under this License will not have their licenses terminated so long as such parties remain in full compliance.
