
1) What does SQA ensure? What are the goals of SQA activities?

(Here, the answer is given in terms of the 'SQA group'.)
-> #According to the IEEE Standard Glossary of Software Engineering Terminology:
1. Quality relates to the degree to which a system, system component, or process meets specified requirements.
2. Quality relates to the degree to which a system, system component, or process meets customer or user needs or expectations.
#The software quality assurance (SQA) group is a team of people with the necessary training and skills to ensure that all necessary actions are taken during the development process, so that the resulting software conforms to established technical requirements.
#Goals of SQA activities:
1. As the SQA group serves as the customer's representative and advocate, it has a responsibility to look after the customer's interests.
2. The group works with project managers and testers to develop quality-related policies and quality assurance plans for each project.
3. The group is also involved in measurement collection and analysis, record keeping, and reporting.
4. The SQA group keeps track of problem and fix reports to ensure that all problems are resolved.
5. The definition of defect categories and the maintenance of a defect database are the responsibilities of the SQA group.
6. SQA team members participate in reviews and audits (special types of reviews that focus on adherence to standards, guidelines, and procedures), record and track problems, and verify that corrections have been made.

2) What is meant by software quality control? Explain the method of measuring software reliability in SQA.
-> #Quality control has its origins in modern manufacturing, where random sampling techniques, testing, measurements, inspections, defect causal analysis, and acceptance sampling were among the techniques used to ensure the manufacture of quality products.
#Quality control consists of the procedures and practices employed to ensure that a work product or deliverable conforms to standards or requirements.
#It is the set of activities designed to evaluate the quality of developed or manufactured products.
#Software reliability is the ability of a system or component to perform its required functions under stated conditions for a specified period of time.
#Measurements for software reliability:
1. As we test the software and move through the levels of testing, we observe failures and try to remove defects. Our attempts to remove defects are sometimes successful; sometimes, when making changes, we introduce new defects or create conditions that allow other defects to cause failures. As this process progresses, we collect the failure data, especially during system test.
2. It is essential that during system test we use test data that reflects actual usage patterns. As testing continues, we monitor the system as it executes.
3. Ideally, we hope to observe that the incidence of failures is decreasing, that is, we have longer and longer time periods where the software is executing and no failures are observed. Sometimes this is true and sometimes it is not, at least over the short term.
4. In any case, let us assume that we have observed i-1 failures. We can record the interfailure times, or the times elapsed between each of the i-1 failures: t1, t2, t3, ..., t(i-1). The average of those observed values is what we call the mean time to failure, MTTF.
5. The computation of the MTTF depends on accurately recording the elapsed time between the observed failures. Time units need to be precise; CPU execution time is in most cases more appropriate than wall clock time.
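The MTTF computation in points 4 and 5 above can be sketched in a few lines of code. The interfailure times below are hypothetical CPU-execution-time measurements, used only for illustration:

```python
# Sketch: computing mean time to failure (MTTF) from observed
# interfailure times t1, t2, ..., t(i-1), as described above.
# The data here is hypothetical, not from a real system.

def mean_time_to_failure(interfailure_times):
    """Average of the elapsed times between successive observed failures."""
    if not interfailure_times:
        raise ValueError("need at least one interfailure time")
    return sum(interfailure_times) / len(interfailure_times)

# Hypothetical elapsed CPU hours between failures during system test.
observed = [12.5, 30.0, 45.5, 80.0]

mttf = mean_time_to_failure(observed)
print(f"MTTF = {mttf:.1f} CPU hours")  # (12.5 + 30.0 + 45.5 + 80.0) / 4 = 42.0
```

Note that the growing gaps in the sample data (12.5, 30.0, 45.5, 80.0) reflect the ideal situation in point 3, where the time between failures increases as defects are removed.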

6. After a failure has been observed, software engineers must repair the code. We can calculate the average time it takes to repair a defect in the code; this is the mean time to repair, MTTR.
7. We use both of these measures to calculate a value called the mean time between failures, MTBF:
MTBF = MTTF + MTTR
8. Some researchers use the mean time between failures as an estimate of a system's reliability. Another measure for reliability (R) has been suggested by Shooman:
R = MTBF / (1 + MTBF)

3) How is software usability measured? Describe usability testing methods.
-> #Usability is a quality factor that is related to the effort needed to learn, operate, prepare input for, and interpret the output of a computer program.
#Types of usability testing: Rubin describes four types of tests: (i) exploratory, (ii) assessment, (iii) validation, and (iv) comparison.
1. Exploratory usability testing
Exploratory tests are carried out early in the life cycle, between requirements and detailed design. A user profile and usage model should be developed in parallel with this activity. The objective of exploratory usability testing is to examine a high-level representation of the user interface to see if it characterizes the user's mental model of the software. The results of these tests are of particular importance to designers, who get early feedback on the appropriateness of the preliminary user interface design. More than one design approach can be presented via paper screens, early versions of the user manual, and/or prototypes with limited functionality. Users may be asked to attempt some simple tasks, or, if it is too early in the prototyping or development process, the users can walk through or review the product and answer questions about it in the presence of a tester. The users and testers interact; they may explore the product together, and the user may be asked to think aloud about the product.
Users are usually asked for their input on how weak, unclear, and confusing areas can be improved. The data collected in this phase is more qualitative than quantitative.
2. Assessment usability testing
Assessment tests are usually conducted after a high-level design for the software has been developed. Findings from the exploratory tests are expanded upon; details are filled in. For these types of tests a functioning prototype should be available, and testers should be able to evaluate how well a user is able to actually perform realistic tasks. More quantitative data is collected in this phase of testing than in the previous phase. Examples of useful quantitative data that can be collected are:
(i) number of tasks correctly completed per unit time;
(ii) number of help references per unit time;
(iii) number of errors (and error type);
(iv) error recovery time.
Using this type of data, as well as questionnaire responses from the users, testers and designers gain insight into how well the usability goals specified in the requirements have been addressed. The data can be used to identify weak areas in the design and help designers correct problems before major portions of the system are implemented.
3. Validation usability testing
This type of usability testing usually occurs later in the software life cycle, close to release time, and is intended to certify the software's usability. A principal objective of validation usability testing is to evaluate how the product compares to some predetermined usability standard or benchmark. Testers want to determine whether the software meets the standards prior to release; if it does not, the reasons for this need to be established.
4. Comparison usability testing

Comparison tests may be conducted in any phase of the software life cycle, in conjunction with other types of usability tests. This type of test uses a side-by-side approach, and can be used to compare two or more alternative interface designs, interface styles, and user manual designs. Early in the software life cycle, a comparison test is very useful for comparing various user interface design approaches to determine which will work best for the user population. Later it can be used at a more fine-grained level, for example, to determine which color combinations work best for interface components. Finally, a comparison test can be used to evaluate how the organization's software product compares to that of a competing system on the market.

4) What are the different components of cost? Explain in detail.
-> #The costs of quality can be decomposed into three major areas: (i) prevention, (ii) appraisal, and (iii) failure.
#The costs to find and fix a defect increase rapidly as we proceed through the prevention, detection, and failure phases.
#Prevention costs are associated with activities that identify the causes of defects and the actions taken to prevent them. They include: quality planning; test and laboratory equipment; training; formal technical reviews.
#Appraisal costs involve the costs of evaluating the software product to determine its level of quality. According to Pressman these include: testing; equipment calibration and maintenance; in-process and interprocess inspections.
#Failure costs are those costs that would not exist if the software had no defects, that is, if it met all of the user's requirements. Pressman partitions failure costs into internal and external categories. Internal failure costs occur when a defect is detected in the software prior to shipment. These costs include: diagnosis, or failure mode analysis; repair and regression testing; rework.
External failure costs occur when defects are found in the software after it has been shipped to the client. These include: complaint resolution; return of the software and replacement; help line support; warranty work.

5) Explain Ishikawa's 7 basic tools for quality control.
-> #Quality professionals have many names for these seven basic tools of quality (such as "The Old Seven", "The First Seven", and "The Basic Seven"). They were first emphasized by Kaoru Ishikawa, a professor of engineering at Tokyo University and the father of quality circles.
#Ishikawa's tools:
1. Cause-and-effect diagram (also called Ishikawa or fishbone chart): Identifies many possible causes for an effect or problem and sorts ideas into useful categories.
2. Check sheet: A structured, prepared form for collecting and analyzing data; a generic tool that can be adapted for a wide variety of purposes.
3. Control chart: A graph used to study how a process changes over time.
4. Histogram: The most commonly used graph for showing frequency distributions, or how often each different value in a set of data occurs.
5. Pareto chart: Shows on a bar graph which factors are more significant.
6. Scatter diagram: Graphs pairs of numerical data, one variable on each axis, to look for a relationship.
7. Stratification: A technique that separates data gathered from a variety of sources so that patterns can be seen (some lists replace "stratification" with "flowchart" or "run chart").

6) Explain in detail the different software quality attributes.
-> Software quality attributes:
#Correctness: The degree to which the software performs its required functions. A common measure is defect density (number of defects/KLOC).
#Efficiency: An attribute used to evaluate the ability of a software system to perform its specified functions, under stated or implied conditions, within appropriate time frames. One useful measure is response time: the time it takes for the system to respond to a user request.
#Testability: This attribute is related to the effort needed to test a software system to ensure it performs its intended functions. A quantification of testability could be the number of test cases required to adequately test a system, or the cyclomatic complexity of an individual module.
#Maintainability: The effort required to make a change. Sometimes defined in terms of the mean time to repair (MTTR), which reflects the time it takes to analyze a change request, design a modification, implement the change, test it, and distribute it.
#Portability: The effort (time) required to transfer a software system from one hardware/software environment to another.
#Reusability: The potential for newly developed code to be reused in future products. One measurement that reflects reusability is the number of lines of new code inserted into a reuse library as a result of the current development effort.
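The defect density measure named under Correctness above can be sketched directly. The defect count and system size below are hypothetical, for illustration only:

```python
# Sketch: defect density as a measure of correctness.
# Defect density = number of defects / KLOC (thousands of lines of code).
# All numbers here are hypothetical.

def defect_density(defects_found, lines_of_code):
    """Defects per thousand lines of code (KLOC)."""
    if lines_of_code <= 0:
        raise ValueError("lines_of_code must be positive")
    kloc = lines_of_code / 1000
    return defects_found / kloc

# A hypothetical 25,000-line system in which 50 defects were recorded.
print(defect_density(50, 25_000))  # 2.0 defects/KLOC
```

Comparing this value across releases (or against a target threshold) is one way the SQA group's measurement collection and analysis responsibility, described in question 1, can be put into practice.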