1. Requirements
2. Analysis
3. Design
4. Code
5. Unit Test
6. Integration Test
7. System Test
8. UAT (User Acceptance Test)
9. Production / Release / Maintenance
In the software life cycle, testing starts at the Unit Testing phase, which is done by the developers themselves.
2. Can you explain PDCA cycle and where does testing fit?
Ans : PDCA means PLAN, DO, CHECK, ACT.
Plan: - Define your goal and the plan for how you will achieve that goal.
Do / Execute: - Depending on the strategy decided during the plan stage, we execute accordingly in this phase.
Check: - Check / test to make sure that we are moving according to plan and are getting the desired results.
Act: - If any issues surface during the check cycle, take appropriate action and revise the plan accordingly.
So developers and other stakeholders of the project do the "plan and build" parts, while testers do the "check" part of the cycle.
3. What is the difference between white box, black box and gray box testing?
Ans : White box: Testing based on an analysis of the internal structure of the component or system.
Black box: Functional or non-functional testing performed without reference to the internal structure of the component or system, based only on its specifications.
Gray box: A combination of both, where the tester designs tests with partial knowledge of the internal structure.
4. Define Defect?
Ans: A flaw in a component or system that can cause the component or system to fail to perform its required function.
Extra: - A requirement incorporated into the product that was not asked for by the end user is also considered a defect.
10. As a manager what process did you adopt to define testing policy?
Ans : Below are the important steps to define a testing policy in general, though they can change according to how the policy is implemented in your organization. Let's understand the steps in detail.
Definition: - The first thing any organization needs to do is define one unique definition of testing within the organization, so that everyone has the same mindset.
How to achieve: - How are we going to achieve our objective? Will there be a testing committee? Will there be compulsory test plans which need to be executed, and so on?
Evaluate: - After testing is implemented in a project, how do we evaluate it? Are we going to derive metrics such as defects per phase or defects per programmer? Finally, it's important to let everyone know how testing has added value to the project.
Standards: - Finally, what are the standards we want to achieve by testing? For instance, we can define that more than 20 defects per KLOC will be considered below standard and that a code review should be done for such code.
Figure: - Establishing a testing policy
11. Should testing be only after build and execution?
Ans : No, it is not necessary that testing be done only after the build and execution; in most life cycles testing begins from the design phase.
Ans : The design phase is more error prone than the execution phase. One of the most frequent defects which occurs during design is that the design does not cover the complete requirements of the customer. Second, wrong or bad architecture and technical decisions make the next phase, execution, more prone to defects. Because the design phase drives the execution phase, it's the most critical phase to test. The testing of the design phase can be done by good reviews. On average, 60% of defects occur during the design phase and 40% during the execution phase.
13. What kind of inputs do we need from the end user to start proper testing?
Ans: Test data is required from the end user so that proper, valid records can be entered into the database. Test data should be the exact records the end users will enter when they use the application.
The problems caused by latent bugs will not cause damage right now; they are just waiting to reveal themselves later.
One good example of a latent bug is the cause of the Y2K problem. At the beginning, the year was given only 2 numeric digits when it actually needed 4. The problem prevailed in the system for a long time, was identified later, and was then fixed. The problem did not cause damage all of a sudden; it was triggered only by the year 2000, which needs 4 numeric digits.
A masked defect is a defect which is hidden by another defect and therefore not detected. That is, an existing defect which has not been found prevents another defect from being reproduced; the second defect is masked by the first.
E.g.: Suppose you are testing the Help of an application and there is a defect in one link, say Add Customer, which QA did not find before the application went live. Inside the Add Customer help there is a further defect, say in the help for adding a cell number. This "add cell number" defect is a masked defect: it is masked by the Add Customer defect, because the broken link prevents anyone from reaching it.
15. A defect which could have been removed during initial stage is removed in later
stage how does it affect cost?
Ans : Cost will increase if defects are found in later stages, as fixing them will require rework from the beginning. This increases the amount of work to be done, the resources needed to complete the additional tasks, the time spent, the meetings held and, of course, the money spent.
Input : - Every task needs some defined input and entrance criteria, so for every work bench we need defined inputs. Input forms the first step of the work bench.
Execute : - This is the main task of the work bench, which transforms the input into the expected output.
Check: - Check steps assure that the output after execution meets the desired
result.
Production output : - If the check is right, the production output forms the exit criteria of the workbench.
Rework : - During the check step if the output is not as desired then we need to
again start from the execute step.
Execution phase work bench: - This is the actual execution of the project. Input
is the technical document; execution is nothing but implementation / coding
according to the technical document and output of this phase is the
implementation / source code.
Testing phase work bench: - This is the testing phase of the project. Input is the
source code which needs to be tested; execution is executing the test case and
output is the test results.
Deployment phase work bench: - This is the deployment phase. There are two inputs for this phase: the source code which needs to be deployed, and the test results on which the deployment depends. The output of this phase is that the customer gets the product, which he can now start using.
Maintenance phase work bench: - Input to this phase is the deployment
results, execution is implementing change request from the end customer, check
part is nothing but running regression testing after every change request
implementation and output is a new release after every change request
execution.
Figure: - Workbench and software life cycles
Ans : Alpha Testing: It is a form of UAT performed by end users at the developer's site in a controlled environment.
Beta Testing: It is also a form of UAT, performed by end users at one or more customer sites in an uncontrolled environment.
21. What are the different strategies of rollout to the end users?
Ans : Rollout is the last phase of software development, a final check before a successful deployment.
Ans : Pilot testing - It is a real-world test done by a group of users before the final deployment to find as many defects as possible. The main purpose of pilot testing is to catch potential problems before they become costly mistakes.
Beta testing – It is the testing done by end users before the final release when the
development and testing are essentially completed. The purpose is to find final
problems and defects.
Figure: - Pilot and Beta testing
Ans : The objective of performing risk analysis as part of test planning is to help allocate limited test resources to those software components that pose the greatest risk to the organization. Testing minimizes software risks. To make software testing most effective, it is important to ensure that all the high risks associated with the software are tested first.
25. How do you conclude which section is most risky in your application?
Ans : As per the requirement analysis, the critical sections are identified for testing. They are the screens or business scenarios used most by the customers in real time.
Exit Criteria - It ensures that the project is complete before exiting the test stage. E.g., planned deliverables are ready, high-severity defects are fixed, documentation is complete and updated.
Project plan: - The project plan prepared by the project manager also serves as a good input to finalize your acceptance test.
28. What's the relation between environment reality and test phases?
Ans : Environment reality becomes more important as the test phases move ahead. For instance, during unit testing the environment needs to be least real, but at the acceptance phase you should have a 100% real environment, or we can say it should be the real environment.
29. What are different types of verifications?
Ans : Re-testing: After fixing the bug or modifying the build, we verify the same functionality again, including with different inputs.
Regression testing: After the bug is fixed, testing the application to check whether the fix has affected the remaining functionality of the application or not.
32. What do you mean by coverage and what are the different types of coverage
techniques?
Ans : Coverage is a form of white box testing activity. It describes the extent to which the code has been exercised by tests. The following are common coverage techniques:
• Condition Coverage - Execute each decision with all possible outcomes at least once.
Coverage tools access the source code of a program; the suitable coverage criteria first need to be defined. Other criteria include:
• Path Coverage - Ensures that all possible paths from a given starting point in the code have been executed.
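To make the difference between these criteria concrete, here is a minimal Python sketch (not from the original text) that exercises a tiny hypothetical function to statement, decision and path coverage; the function and its inputs are assumptions for illustration only.

```python
# A minimal sketch illustrating coverage criteria on a tiny function
# with two independent decisions (all names and rules are hypothetical).

def classify(age, income):
    category = "standard"
    if age > 60:                 # decision 1
        category = "senior"
    if income > 100_000:         # decision 2
        category += "-premium"
    return category

# Statement coverage: one test that executes every line.
assert classify(70, 200_000) == "senior-premium"

# Decision coverage: each decision must evaluate to both True and False.
assert classify(30, 50_000) == "standard"         # both decisions False
assert classify(70, 200_000) == "senior-premium"  # both decisions True

# Path coverage: all 4 combinations of the two decisions.
assert classify(70, 50_000) == "senior"             # True, False
assert classify(30, 200_000) == "standard-premium"  # False, True
print("all coverage example tests passed")
```

Note how path coverage needs more tests than decision coverage, which in turn needs more than statement coverage.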
34. What is configuration management?
Ans : The dynamic nature of most business activities causes software or system changes. Configuration management, also known as change control, is a process to keep track of all changes done throughout the software life cycle. It involves coordination and control of requirements, code, libraries, design, test efforts, documentation, etc. The primary objective of CM is to get the right change installed at the right time.
Ans : A baseline is the point at which some deliverable produced during the software engineering process is put under formal change control. Every baseline should have a date of when it was baselined, a description of what it represents, and a version control number.
37. How do test documents in a project span across software development life
cycle?
Ans : The test documents start with the Test Plan, Test Strategy, etc. at the beginning of the life cycle. Last, we create a closure document and a lessons-learnt document.
Ans : They are a list of items to be tested which have a meaning or purpose.
Ans : Analysis and design for testing projects depend on project planning and scope and can be done as follows:
• Conducting Reviews
• Tracking number
• Location
• Calibration Intervals
• Calibration procedure
• Calibration history
• Calibration Due
41. Which test cases are written first: white box or black box?
Ans : Black box test cases are written first, at an initial stage, based on the requirement documents and the project plan.
Ans : Applications are often installed on a machine (PC, mainframe, client-server) that also serves as a host for other applications, sharing common files and resources on the machine. These other applications are termed cohabiting software. Cohabiting software resides in the test environment but doesn't interact with the application being tested.
43. What different impact rating's you have used in your project?
Minor : Very low impact; it does not affect operations on a large scale.
Ans : Test log is a document which contains information about the passed and
failed test cases.
Integration and testing - Code is integrated and tested for defects
• System Design
• System Coding
• Implementation
Ans : The waterfall model is divided into the big bang waterfall model and the phased waterfall model. In the big bang model all stages are frozen one at a time as the project flows down. The following are the stages of the big bang waterfall model:
• Requirement Analysis
• Design
• Implementation
• Testing
• Integration
• Maintenance
Ans : It is a type of waterfall model where the project is divided into small phases and delivered at intervals by different teams. Different teams work in parallel on each small phase and integrate their work at the end of the project.
49. Explain Iterative model, Incremental model, Spiral model, Evolutionary model
and V-Model?
Ans : Incremental model - It is a non-integrated development model. This model divides the work into chunks, and one team can work on many chunks. It is more flexible.
50. Explain Unit testing, Integration tests, System testing and Acceptance testing?
Unit testing - Testing of individual software components or modules, typically done by the developers themselves.
Integration testing - Testing performed to expose defects in the interfaces and interactions between integrated components or modules.
System testing - Testing the integrated system as a whole to verify that it meets its specified requirements.
Acceptance testing - Testing to ensure that the system meets the needs of the organization and the end user or customer (i.e., validates that the right system was built).
51. What’s the difference between system and acceptance testing?
System testing - Testing the integrated system against its specified requirements (i.e., verifies that the system was built right).
Acceptance testing - Testing to ensure that the system meets the needs of the organization and the end user or customer (i.e., validates that the right system was built).
Figure: - V model cycle flow
Ans : The best model for testing depends on your company's 1) projects, 2) resources, 3) budget, and 4) time allotted for testing.
Example: the Agile model may be the best fit for one company, while another sticks to the V-Model because it suits its projects best.
Ans : When it comes to testing, everyone in the world can be involved, right from the developer to the project manager to the customer. But below are the different types of team groups which can be present in a project.
Isolated test team: - There is a special team of testers which does only testing. The testing team is not tied to any project; it is like having a pool of testers in an organization who are picked up on demand by a project and pushed back into the pool after completion. This approach is costly, but the benefit is that we get a different angle of thinking from a group which is isolated from development. But yes, because it's a completely isolated team, it definitely comes at a cost.
Inside test team: - In this approach we have a separate team which belongs to the project. The project allocates a separate budget for testing, and this testing team works on this project only. The good side: you have a dedicated team, and because they are involved in the project they have good knowledge about it. The bad side: you need to budget for them; in short, it increases the project cost.
Testing techniques
Ans : This technique helps create test cases around the boundaries of the valid data. Usually the values passed are the exact boundary values, plus or minus 1 at the lower boundary and plus or minus 1 at the higher boundary. This is a technique to prove that the software behaves correctly at its boundaries.
For example, if you are writing a test case for the condition "age should be greater than 18 and less than 35", then we have to write test cases for 17, 18, 19 and 34, 35, 36, because errors are most likely when input values change from valid to invalid, i.e., at the boundaries.
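As an illustration, here is a minimal Python sketch of boundary value analysis for the age rule above; the is_valid_age function is a hypothetical implementation of the "greater than 18 and less than 35" condition.

```python
# A minimal sketch, assuming the age rule from the text (valid if 18 < age < 35).

def is_valid_age(age):
    return 18 < age < 35

# Boundary value analysis: exact boundaries and +/- 1 on each side.
boundary_cases = {17: False, 18: False, 19: True,
                  34: True, 35: False, 36: False}

for age, expected in boundary_cases.items():
    assert is_valid_age(age) == expected, f"boundary failure at age {age}"
print("all boundary tests passed")
```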
Ans : This technique helps narrow down the possible test cases using equivalence classes. An equivalence class is one which accepts the same type of input data. A few test cases for every equivalence class help avoid exhaustive testing.
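Continuing the age example, here is a minimal Python sketch of equivalence partitioning; the classes and representative values are assumptions chosen for illustration.

```python
# A minimal sketch: partition the age input into equivalence classes and
# pick one representative value per class instead of testing every age.

def is_valid_age(age):
    return 18 < age < 35

equivalence_classes = {
    "below valid range": (10, False),   # any age <= 18 behaves the same
    "inside valid range": (25, True),   # any age in 19..34 behaves the same
    "above valid range": (50, False),   # any age >= 35 behaves the same
}

for name, (representative, expected) in equivalence_classes.items():
    assert is_valid_age(representative) == expected, name
print("one representative per class is enough")
```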
57. Can you explain how state transition diagram can be helpful during testing?
Ans : It lists all possible state transition combinations, not just the valid ones, and so unveils combinations that were not identified, documented or dealt with in the requirements. It is beneficial to discover these defects before coding begins.
Ans : A black box test design technique where test cases are selected, possibly using a pseudo-random generation algorithm, to match an operational profile. This technique can be used for testing non-functional attributes such as reliability and performance.
Ans : As the name specifies, semi-random testing is nothing but controlled random testing with redundant test cases removed. We generate random test cases and apply equivalence partitioning to them, which removes the redundant test cases, giving us semi-random test cases.
Ans : Orthogonal arrays are two-dimensional arrays of numbers in which choosing any two columns gives an even distribution of all pairwise combinations of values. They are useful for detecting pairwise defects and can greatly reduce redundancy.
Ans : An orthogonal array is a two-dimensional array in which, if we choose any two columns, all the combinations of numbers appear in those columns. The figure below shows a simple L9(3^4) orthogonal array. Here 9 indicates that it has 9 rows, 4 indicates that it has 4 columns, and 3 indicates that each cell contains a 1, 2 or 3. Choose any two columns, say columns 1 and 2: they contain the combinations (1,1), (1,2), (1,3), (2,1), (2,2), (2,3), (3,1), (3,2), (3,3). If you look closely, these cover all the pairwise values. Compare the values of columns 3 and 4, and they will likewise cover every pair. This is applied in software testing to help us eliminate duplicate test cases.
Figure: - Sample orthogonal array
Now let's try to apply an orthogonal array to an actual testing scenario. Say we need to test a mobile handset with different plan types, terms and sizes. The requirements are:
· Each handset should be tested with every plan type, term and size.
· Each plan type should be tested with every handset, term and size.
· Each size should be tested with every handset, plan type and term.
You might think that means 81 combinations, but we can cover all the pairwise conditions with only 9 test cases. Below is the orthogonal array for the same.
Figure: - Orthogonal array in actual testing
Orthogonal array is very useful because most defects are pair wise defects and
with orthogonal array we can reduce redundancy to a huge extent.
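The pairwise property described above can be checked mechanically. Below is a minimal Python sketch that verifies a standard L9(3^4) array and then maps its columns onto the handset scenario; the factor names and values are hypothetical stand-ins, since the original figure is not reproduced here.

```python
# A minimal sketch verifying the pairwise property of a standard L9(3^4)
# orthogonal array, then mapping it to the handset example from the text.
from itertools import combinations, product

L9 = [
    (1, 1, 1, 1), (1, 2, 2, 2), (1, 3, 3, 3),
    (2, 1, 2, 3), (2, 2, 3, 1), (2, 3, 1, 2),
    (3, 1, 3, 2), (3, 2, 1, 3), (3, 3, 2, 1),
]

# Any two columns together must contain all 9 value pairs (1..3 x 1..3).
for c1, c2 in combinations(range(4), 2):
    pairs = {(row[c1], row[c2]) for row in L9}
    assert pairs == set(product([1, 2, 3], repeat=2)), (c1, c2)

# Map the 4 columns to hypothetical factors: 9 tests instead of 3^4 = 81.
handsets = ["H1", "H2", "H3"]
plans    = ["prepaid", "postpaid", "corporate"]
terms    = ["12m", "24m", "36m"]
sizes    = ["small", "medium", "large"]

for h, p, t, s in L9:
    print(handsets[h-1], plans[p-1], terms[t-1], sizes[s-1])
```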
Ans : A decision table lists causes and effects in a matrix, where each column represents a unique combination. It has four parts:
1. Conditions
2. Condition alternatives/combinations
3. Actions
4. Action entries
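As an illustration, here is a minimal Python sketch of a decision table for a hypothetical login rule; the conditions and actions are assumptions, not taken from the text.

```python
# A minimal sketch (hypothetical login rules) showing the four parts of a
# decision table: conditions, condition alternatives, actions, action entries.

# Conditions: (valid_user, valid_password); each key is one combination.
decision_table = {
    (True,  True):  "grant access",         # rule 1
    (True,  False): "show password error",  # rule 2
    (False, True):  "show user error",      # rule 3
    (False, False): "show user error",      # rule 4
}

def login_action(valid_user, valid_password):
    return decision_table[(valid_user, valid_password)]

assert login_action(True, True) == "grant access"
assert login_action(True, False) == "show password error"
print("decision table covers every condition combination")
```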
66. How did you define severity ratings in your project?
Ans : Severity defines how severe the impact of the defect is. It can be rated as Critical, Major or Minor.
Software process
· Identify the risks of the project through discussion, proper requirement gathering and forecasting.
· Once you have identified the risks, prioritize them by impact and mitigate the highest-impact risks first.
68. What are the different cost elements involved in implementing process in an
organization?
Ans : Salary: This forms the major component of implementing any process: the salary of the employees. Normally, while implementing a process in a company, the organization either recruits full-time staff or shares a resource part-time for implementing the process.
Consultants: If the process is new, it can also involve hiring consultants, which is again an added cost.
Training cost: Employees of the company also have to undergo training in order to implement the new process.
Tools: In order to implement the process, the organization will also need to buy tools, which again need to be budgeted for.
Figure: - Cost of implementing process
Ans : A model is nothing but best practices followed in an industry to solve issues and problems. Models are not made in a day but are finalized and realized through years of experience and continuous improvement.
Figure: - Model
E.g., CMMI levels.
71. Can you explain the concept of process area in CMMI?
Implementation - It is the task performed according to a process. This is the initial stage, when the organization implements any new process.
Staged Model - It uses predefined sets of process areas to define an improvement path. Each level of maturity is further decomposed into a number of key process areas.
77. Can you explain the different maturity levels in staged representation?
Maturity level of a process defines the nature and maturity present in the organization.
These levels help to understand and set a benchmark for the organization.
• Level 1 Initial – Processes are characterized as chaotic and ad-hoc, heroic
efforts required by individuals to successfully complete projects. A few Processes
are in place; successes may not be repeatable.
System Engineering - This model can be used for the development of total systems.
Software Engineering - This model can be used for the development of software products.
80. How many process areas are present in CMMI and in what classification do they fall
in?
There are total 22 Key Process areas in CMMI. Ratings are awarded for level 2 through
level 5.
Maturity Level 2 - Managed
• Requirements Management
• Project Planning
• Process & Product Quality Assurance
• Project Monitoring & control
• Measurement & Analysis
• Configuration management
• Supplier Agreement Management
Maturity Level 3 - Defined
• Requirements Development
• Technical Solution
• Product Integration
• Verification
• Validation
• Organization process Focus
• Organizational process Definition
• Organizational Training
• Risk Management
• Decision Analysis & resolution
• Integrated Project management
Maturity Level 4 - Quantitatively managed
• Organization process performance
• Quantitative project Management
Maturity Level 5 - Optimization
• Causal Analysis & Resolution
• Organizational Innovation & Deployment
82. What different sources are needed to verify authenticity for CMMI implementation?
Ans : There are three different sources from which an appraiser can verify whether the organization followed the process or not.
Instruments: - Surveys or questionnaires given to individuals before starting the assessment, so that beforehand the appraiser knows some basic information about the organization.
Interview: - It's a formal meeting between one or more members of the organization in which they are asked questions, and the appraiser makes judgments based on those interviews. During the interview the member represents some process area or the role which he performs in the context of those process areas. For instance, the appraiser may interview a tester or programmer, asking indirectly what metrics he has submitted to his project manager. From this the appraiser gets a fair idea of the CMMI implementation in that organization.
Documents: - It’s a written work or product which serves as an evidence that a process
is followed. It can be hard copy, word document, email or any type of written official
proof.
Below is the pictorial view of sources to verify how much compliant the organization is
with CMMI.
SCAMPI is an acronym for Standard CMMI Appraisal Method for Process Improvement.
A SCAMPI assessment must be led by an SEI Authorized SCAMPI Lead Appraiser.
SCAMPI is supported by the SCAMPI Product Suite, which includes the SCAMPI
Method Description, maturity questionnaire, work aids, and templates. Currently,
SCAMPI is the only method that can provide a rating, the only method recognized by
the SEI, and the method of most interest to organizations.
There are 3 SCAMPI methods
• SCAMPI class A Appraisal
• SCAMPI class B Appraisal
• SCAMPI class C Appraisal
Appraisal Classes
For benchmarking against other organizations, appraisals must result in consistent
ratings. The SEI has developed a document to assist in identifying or developing
appraisal methods that are compatible with the CMMI Product Suite. This document is
the Appraisal Requirements for CMMI (ARC).
SEI Appraisal Classes
The ARC describes a full benchmarking class of appraisal as Class A. Other CMMI-based appraisal methods might be more appropriate for a given set of sponsor needs, including self-assessments, initial appraisals, quick-look or mini-appraisals, incremental appraisals, and external appraisals.
Thus, a particular appraisal method is declared an ARC Class A, B, or C appraisal
method. This designation implies the sets of ARC requirements that the method
developer has addressed when designing the method.
The SCAMPI family of appraisals includes Class A, B, and C appraisal methods.
SCAMPI A is the most rigorous method and the only method that can result in a rating.
SCAMPI B provides options in model scope, but the characterization of practices is
fixed to one scale and is performed on implemented practices.
SCAMPI C provides a wide range of options, including characterization of planned
approaches to process implementation according to a scale defined by the user.
Using SCAMPI B, every practice in the appraisal scope is characterized on a three
point scale indicating the risk of CMMI goal satisfaction if the observed practices were
deployed across the organizational unit. Model scope is not limited to the Process Areas
but could include sets of related practices.
SCAMPI C can be scoped at any level of granularity and the scale can be tailored to the
appraisal objectives, which might include the fidelity of observed practices to model/goal
achievement or the return on investment to the organization from implementing
practices.
Reliability, rigor, and cost might go down from A to B to C, but risk might go up.
Characteristics of Appraisal Classes
(Table comparing each characteristic across Class A, Class B and Class C appraisals; the table body is not reproduced here.)
87. Can you explain implementation of CMMI in one of the Key process areas?
Motorola developed a concept called Six Sigma. Six Sigma focuses on defect rates, as
opposed to percent performed correctly.
“Sigma” is a statistical term meaning one standard deviation. Six Sigma means six
standard deviations. At the Six Sigma statistical level, only 3.4 items per million are
outside of the acceptable level. Thus, the Six Sigma quality level means that out of
every one million items/opportunities 999,996.6 will be correct, and not more than 3.4
will be defective.
Sigma level    Defects per million opportunities
Level 1        690,000
Level 2        308,537
Level 3        66,807
Level 4        6,210
Level 5        233
Level 6        3.4
90. What is Six Sigma and what does it mean?
Six Sigma at many organizations simply means a measure of quality that strives for
near perfection. Six Sigma is a disciplined, data-driven approach and methodology for
eliminating defects (driving toward six standard deviations between the mean and the
nearest specification limit) in any process -- from manufacturing to transactional and
from product to service.
The statistical representation of Six Sigma describes quantitatively how a process is
performing. To achieve Six Sigma, a process must not produce more than 3.4 defects
per million opportunities. A Six Sigma defect is defined as anything outside of customer
specifications. A Six Sigma opportunity is then the total quantity of chances for a defect.
Process sigma can easily be calculated using a Six Sigma calculator.
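As an illustration of such a calculator, here is a minimal Python sketch (requires Python 3.8+) that converts defect counts to DPMO and then to a sigma level using the conventional 1.5-sigma long-term shift; the defect and opportunity counts are hypothetical.

```python
# A minimal sketch of the "Six Sigma calculator" idea: convert defect counts
# to DPMO and then to a sigma level, using the conventional 1.5-sigma shift.
from statistics import NormalDist

def dpmo(defects, units, opportunities_per_unit):
    return defects / (units * opportunities_per_unit) * 1_000_000

def sigma_level(dpmo_value):
    # Yield fraction that is defect-free, invert the normal CDF, and
    # add the conventional 1.5-sigma long-term shift.
    return NormalDist().inv_cdf(1 - dpmo_value / 1_000_000) + 1.5

d = dpmo(defects=25, units=1_000, opportunities_per_unit=10)  # hypothetical
print(f"DPMO = {d:.0f}, sigma level = {sigma_level(d):.2f}")
print(f"Six Sigma check: 3.4 DPMO -> {sigma_level(3.4):.2f} sigma")
```

The last line confirms the figure from the text: 3.4 defects per million opportunities corresponds to the six-sigma level under the shifted convention.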
The fundamental objective of the Six Sigma methodology is the implementation of a
measurement-based strategy that focuses on process improvement and variation
reduction through the application of Six Sigma improvement projects. This is
accomplished through the use of two Six Sigma sub-methodologies: DMAIC and
DMADV. The Six Sigma DMAIC process (define, measure, analyze, improve, control) is
an improvement system for existing processes falling below specification and looking for
incremental improvement. The Six Sigma DMADV process (define, measure, analyze,
design, verify) is an improvement system used to develop new processes or products at
Six Sigma quality levels. It can also be employed if a current process requires more
than just incremental improvement. Both Six Sigma processes are executed by Six
Sigma Green Belts and Six Sigma Black Belts, and are overseen by Six Sigma Master
Black Belts.
According to the Six Sigma Academy, Black Belts save companies approximately $230,000 per project and can complete four to six projects per year. General Electric, one
of the most successful companies implementing Six Sigma, has estimated benefits on
the order of $10 billion during the first five years of implementation. GE first began Six
Sigma in 1995 after Motorola and Allied Signal blazed the Six Sigma trail. Since then,
thousands of companies around the world have discovered the far reaching benefits of
Six Sigma.
Many frameworks exist for implementing the Six Sigma methodology. Six Sigma
Consultants all over the world have developed proprietary methodologies for
implementing Six Sigma quality, based on the similar change management philosophies
and applications of tools.
91. Can you explain the different methodology for execution and design process in SIX
sigma?
The main focus of Six Sigma is on reducing defects and variations in processes. DMAIC and DMADV are the models used in most Six Sigma initiatives: DMADV is the model for designing processes, while DMAIC is for improving existing processes.
92. What does executive leaders, champions, Master Black belt, green belts and black
belts mean?
Six Sigma is not only about techniques, tools and statistics; it mainly depends on people and the roles they play.
Figure: - Six Sigma roles hierarchy: Executive leaders → Champions → Master black belts → Black belts → Green belts
Executive leaders: - They are the people who actually decide that the organization needs to do Six Sigma. They promote it throughout the organization and ensure the organization's commitment to Six Sigma. Executive leaders are mainly the CEO or members of the board of directors; in short, they are the people who fund the Six Sigma initiative. They should believe that Six Sigma will improve the organization's processes and that it will succeed. They should ensure that resources get proper training on Six Sigma, understand how it will benefit the organization, and track the metrics.
Champion: - The champion drives Six Sigma mainly among the business users. He understands Six Sigma thoroughly, serves as a coach and mentor, selects projects, decides objectives, dedicates resources to black belts, and removes the obstacles which come across the black belts. Historically, champions always fight for a cause; in Six Sigma they fight to remove black belt hurdles.
Master Black-Belt: - This role requires the highest level of technical capability in Six Sigma. Organizations that are just starting with Six Sigma normally will not have this capability in-house, so outsiders are usually recruited for it. The main role of the Master Black belt is to train, mentor and guide: he helps the executive leaders select candidates and the right projects, teaches the basics and trains resources. Master Black belts regularly meet with black belts to review their progress.
Black-Belt: - A black belt leads a team on a selected project which has to be showcased for Six Sigma. Black belts are mainly responsible for finding variations and seeing how those variations can be minimized. The master black belt selects a project and trains resources, but black belts are the ones who actually implement it. Black belts normally work in projects as team leads or project managers, and they are central to Six Sigma because they carry out the actual implementation.
Green Belt: - Green belts assist black belts in their functional areas. They mainly work in projects and spend part of their time on Six Sigma implementation. They apply Six Sigma methodologies to solve problems and improve processes at the bottom level. They have just enough knowledge of Six Sigma to help define the base of the Six Sigma implementation in the organization, and they assist the black belts in that implementation.
93. What are the different kinds of variations used in six sigma?
Variation is the basis of Six Sigma. It defines how much change is happening in the output of a process. In Six Sigma we identify variations in the process, control them, and thereby reduce or eliminate defects.
There are four basic ways of measuring variations: Mean, Median, Mode and Range. Let's understand each of them in detail.
Mean: - With the mean, variations are measured and compared using averaging. For instance, the below figure shows weekly measures of how many computers were manufactured; we have tracked two weeks, named Week 1 and Week 2. To calculate variation using the mean, we calculate the mean of Week 1 and of Week 2. From the calculations below we get 5.083 for Week 1 and 2.85 for Week 2, so we have a variation of 2.23.
Figure: - Measuring variations by using Mean
Median: - The median value here is the mid point of our range of data. The mid point can be found by taking the difference between the highest and lowest values, dividing it by two, and adding the lowest value. For instance, in the below figure, Week 1 has 4 as the lowest value and 7 as the highest value: we subtract the lowest from the highest (7 - 4), divide by two and add the lowest value, giving a median of 5.5 for Week 1. For Week 2 the median is 2.9, so the variation is 5.5 - 2.9.
Range: - Range is nothing but the difference between the highest and lowest values in a particular data range. For instance, for the recorded computer data of the two weeks, we find the range values by subtracting the lowest value from the highest in each week.
Mode: - Mode is nothing but the most frequently occurring value in a data range. For instance, in our computer manufacturing data, 4 is the most frequent value in Week 1 and 3 is the most frequent value in Week 2, so the variation between these data ranges is 1.
Standard deviation: - Standard deviation shows the average spread of data around the mean; it is a bit more involved than the measures above. Below is the formula for standard deviation, where "s" stands for the standard deviation, X is each observed value, X̄ (X with a bar on top) is the arithmetic mean, and n is the number of observations:

s = sqrt( Σ(X - X̄)² / n )

The first step is to calculate the mean, by adding all the observed values and dividing by the number of observations. The second step is to subtract the mean from each observation, square the results and sum them; because we square them, we will not get negative values. In the third step we divide that sum by the number of observations, and in the final step we take the square root, which gives the standard deviation.
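The four measures plus the standard deviation can be computed as below: a minimal Python sketch with hypothetical weekly counts, since the original figure with the real data is not reproduced here. Note that the "median" defined above is really the midpoint of the range, so the sketch names it that way.

```python
# A minimal sketch computing the variation measures described in the text;
# the daily counts of computers manufactured are assumed values.
from statistics import mean, mode, pstdev

week1 = [4, 5, 7, 6, 4, 5, 4]
week2 = [3, 2, 4, 3, 3, 2, 3]

def midpoint(data):
    # "Median" as defined in the text: lowest + (highest - lowest) / 2.
    return min(data) + (max(data) - min(data)) / 2

for label, data in (("week1", week1), ("week2", week2)):
    print(label,
          "mean=", round(mean(data), 3),
          "midpoint=", midpoint(data),
          "mode=", mode(data),
          "range=", max(data) - min(data),
          "stdev=", round(pstdev(data), 3))
```

pstdev divides by n, matching the formula above (a sample standard deviation would divide by n - 1 instead).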
In industrial statistics, the X-bar chart is a type of control chart that is used to monitor
the arithmetic means of successive samples of constant size, n. This type of control
chart is used for characteristics that can be measured on a continuous scale, such as
weight, temperature, thickness etc.
For the purposes of control limit calculation, the sample means are assumed to be
normally distributed, an assumption justified by the Central Limit Theorem.
The X-bar chart is often used in conjunction with a variation chart such as the R-chart or
s-chart. The average sample range, R, or the average sample standard deviation, s,
can be used to derive the X-bar chart's control limits.
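As an illustration, here is a minimal Python sketch of X-bar control limits using the common R-bar method with the tabulated A2 factor; the sample measurements are hypothetical, and A2 = 0.577 is the standard constant for subgroups of size 5.

```python
# A minimal sketch of X-bar chart control limits derived from the average
# range R-bar; the three subgroups of five measurements are assumed values.
samples = [
    [5.1, 5.0, 4.9, 5.2, 5.0],
    [5.0, 4.8, 5.1, 5.0, 4.9],
    [5.2, 5.1, 5.0, 5.3, 5.1],
]
A2 = 0.577  # tabulated control-chart constant for subgroup size n = 5

xbars = [sum(s) / len(s) for s in samples]       # subgroup means
ranges = [max(s) - min(s) for s in samples]      # subgroup ranges
grand_mean = sum(xbars) / len(xbars)
r_bar = sum(ranges) / len(ranges)

ucl = grand_mean + A2 * r_bar   # upper control limit
lcl = grand_mean - A2 * r_bar   # lower control limit
print(f"center line = {grand_mean:.3f}, UCL = {ucl:.3f}, LCL = {lcl:.3f}")
```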
99. Can you explain the concept of fish bone/ Ishikawa diagram?
Ans : When a problem occurs in a project, it is important to find its root cause. The fish bone or Ishikawa diagram is one of the important concepts that can help you list down the root causes of a problem. It was conceptualized by Ishikawa, so in honor of its inventor this concept was named the Ishikawa diagram. Inputs for a fish bone diagram come from discussion and brainstorming with the people who were involved in the project. The below figure shows the structure of the Ishikawa diagram.
Below is a sample fish bone diagram. The main bone is the problem we need to address, in this case to learn what caused the project failure. As inputs we have taken four main bones: Finance, Process, People and Tools. For instance, on the People front there were many resignations; this was caused by a lack of job satisfaction, which in turn was caused by the project being a maintenance project. In the same way causes are analyzed on the Tools front: no tools were used in the project because no resource had enough knowledge of them, which happened because of a lack of planning. On the Process front, the process was ad hoc because of tight deadlines, which were caused by marketing people over-promising.
Once the diagram is drawn, the end bones of the fish bone signify the root causes of the project failure. From the below diagram, here's the list:
· No training was provided for the resources regarding the tool.
· Marketing people over-promised to the customer, which led to tight deadlines.
· Resources resigned because it's a maintenance project.
Measures are quantitatively unit-defined elements, for instance hours, km, etc. Metrics basically comprise more than one measure; for instance, from the measures "defects" and "KLOC" we can derive a metric like defects per KLOC.
Number of defects is one of the measures used to measure test effectiveness. One side effect of the number of defects is that all bugs are not equal, so it becomes necessary to weight bugs according to their criticality level. If we are using number of defects as the measure, the number of bugs that originally existed significantly impacts the number of bugs discovered; and because all defects are not equal, each defect should be weighted by its criticality level to get a meaningful measure. Below are three simple tables which show the number of defects SDLC-phase-wise and module-wise.
Number of production defects is one of the most effective measures: the number of defects found in production or by the customer is recorded. The only issue with this measure is that it can be distorted by latent and masked defects.
Defect seeding is a technique that was developed to estimate the number of defects resident in a piece of software. It's a bit of an offline technique and should not be used by everyone. The process is as follows: we inject the application with defects and then see whether they are found. For instance, if we have injected 100 defects, we try to get three values: how many seeded defects were discovered, how many were not discovered, and how many new (unseeded) defects were discovered. From these values we can predict the number of defects remaining in the system.
Let's understand the concept of defect seeding with a detailed calculation and see how we can predict the number of defects remaining in a system. Below is the calculation:
First, calculate the seed ratio with the formula: seed ratio = number of seeded defects discovered / total number of defects seeded.
After that, calculate the total number of defects with the formula: total number of defects = number of unseeded defects discovered / seed ratio.
Finally, estimate the remaining defects with the formula: estimated defects remaining = total number of defects - number of unseeded defects discovered.
The figure below shows a sample with a step-by-step calculation: first we calculate the seed ratio, then the total number of defects, and finally the estimated defects.
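Here is a minimal Python sketch of that calculation with hypothetical counts (100 seeded defects, 60 of them found, 45 new defects found):

```python
# A minimal sketch of the defect seeding estimate with assumed counts.

seeded_total = 100      # defects we injected
seeded_found = 60       # injected defects the testing actually discovered
real_found = 45         # new (unseeded) defects discovered

seed_ratio = seeded_found / seeded_total            # 0.6
estimated_total = real_found / seed_ratio           # 75.0
estimated_remaining = estimated_total - real_found  # 30.0

print(f"seed ratio          = {seed_ratio:.2f}")
print(f"estimated total     = {estimated_total:.0f} defects")
print(f"estimated remaining = {estimated_remaining:.0f} defects")
```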
Ans : With the DRE (Defect Removal Efficiency) metric we come to know how many bugs we found out of the set of bugs we could have found. We need two inputs for calculating this metric: the number of bugs found during development and the number of bugs found by the customer after release.
DRE = bugs found during development / (bugs found during development + bugs found by the customer)
Figure: - DRE formula
But the success of DRE depends on a lot of factors, for instance:
· Severity and distribution of bugs must be taken into account.
· It is hard to confirm when the customer has found all the bugs.
Test effectiveness is a measure of the bug-finding ability of our tests; in short, it measures how good the tests were. Effectiveness is the ratio of bugs found during testing to total bugs found, where total bugs are the sum of new defects found by the user and bugs found in test. The below figure explains the calculation in a more pictorial format.
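Since both DRE and test effectiveness reduce, in this document's formulation, to the ratio of bugs caught before release to all bugs eventually found, one small Python sketch (with hypothetical counts) covers both:

```python
# A minimal sketch of the DRE / test effectiveness calculation with
# assumed bug counts.

def removal_efficiency(bugs_found_in_test, bugs_found_by_user):
    total = bugs_found_in_test + bugs_found_by_user
    return bugs_found_in_test / total

print(f"DRE / effectiveness = {removal_efficiency(90, 10):.0%}")  # 90%
```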
QUESTIONS 106-120
Defect age is the difference in time between the date a defect is detected and the current date (if the defect is still open) or the date the defect was fixed. It is a useful measure of defect-removal effectiveness.
Defect spoilage is a metric built on defect age:
Spoilage = sum of (number of defects × defect age) / total number of defects
The defect age is often calculated as a phase age (i.e., how many phases the defect survived). Because defects are more expensive the later they are found, it is a good idea to also calculate the defect spoilage, which weights each defect by its age.
Defect age calculated in phases = defect-fixed phase - defect-injection phase.
Let’s say the software life cycle has the following phases:
1. Requirements Development
2. High-Level Design
3. Detail Design
4. Coding
5. Unit Testing
6. Integration Testing
7. System Testing
8. Acceptance Testing
If a defect is identified in ‘System Testing’ and the defect was introduced in
‘Requirements Development’, the Defect Age is 6.
Defect age is used in another metric called defect spoilage to measure the effectiveness
of defect removal activities.
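Here is a minimal Python sketch of the spoilage calculation over the eight phases listed above; the defect counts per phase pair are hypothetical.

```python
# A minimal sketch of defect age and spoilage; the phase list matches the
# text above, the defect counts are assumed values.

PHASES = ["Requirements", "High-Level Design", "Detail Design", "Coding",
          "Unit Testing", "Integration Testing", "System Testing",
          "Acceptance Testing"]

# (injected phase index, detected phase index, number of defects)
defects = [(0, 6, 4),   # 4 requirement defects found in System Testing
           (3, 4, 10),  # 10 coding defects found in Unit Testing
           (2, 5, 3)]   # 3 design defects found in Integration Testing

for injected, found, n in defects:
    print(f"{n} defects from {PHASES[injected]} found in {PHASES[found]}: "
          f"age {found - injected}")

total = sum(n for _, _, n in defects)
spoilage = sum((found - injected) * n
               for injected, found, n in defects) / total
print(f"defect spoilage = {spoilage:.2f} phases (lower is better)")
```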
Most, but not all, types of tests can be automated. Certain types of tests, like user comprehension tests, tests that run only once, and tests that require constant human intervention, are usually not worth the investment incurred to automate them. The following are examples of criteria that can be used to identify tests that are prime candidates for automation.
High path frequency - Automated testing can be used to verify the performance of application paths that are used with a high degree of frequency when the software is running in full production. Examples include creating customer records.
108. Which automation tool have you worked and can you explain them in brief?
109. Can you explain how does load testing conceptually work for websites?
Web Load Testing involves checking how a website or service performs as the load on
it (the number and volume of requests) is increased.
There are many reasons for load-testing a Web application. The most basic type of load
testing is used to determine the Web application’s behavior under both normal and
anticipated peak load conditions. As you begin load testing, it is recommended that you
start with a small number of virtual users and then incrementally increase the load from
normal to peak. You can then observe how your application performs during this
gradually increasing load condition. Eventually, you will cross a threshold limit for your
performance objectives. For example, you might continue to increase the load until the
server processor utilization reaches 75 percent, or when end-user response times
exceed 8 seconds.
A website needs to stay up under normal and expected load conditions, but in addition
to that, the load will not remain constant, since there are several reasons why the
number of requests might suddenly spike:
• Increased popularity of the website - This could happen very quickly, for example when
a popular site links to yours (Digg and Slashdot effects). If your site is not able to cope
with the sudden onrush of requests, it can render your site unusable.
• Denial of Service Attacks - This is what happens when somebody tries to bring down
your site by making more requests than it can handle. The most common form is a
distributed denial of service attack, in which case numerous computers are used to
make requests on the site at the same time.
Here are some things to keep in mind when creating Web load tests:
• Test the most common behaviour that you expect from the users first
• Test behaviours that you expect will cause high loads (e.g. Operations that involve a lot
of database access might fall into this category)
• Spend at least some of the time doing exploratory testing.
The basic approach to performing load testing on a Web application is:
1. Identify the performance-critical scenarios.
2. Identify the workload profile for distributing the entire load among the key scenarios.
3. Identify the metrics that you want to collect in order to verify them against your
performance objectives.
4. Design tests to simulate the load.
5. Use tools to implement the load according to the designed tests, and capture the
metrics.
6. Analyze the metric data captured during the tests.
By using an iterative testing process, these steps should help you achieve your
performance objectives.
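To make the ramp-up idea concrete, here is a minimal Python sketch using only the standard library; the URL, user counts and 8-second threshold are assumptions, and a real load test would use a dedicated tool (JMeter, LoadRunner, k6, etc.).

```python
# A minimal sketch of incrementally increasing load against a hypothetical
# system under test and watching a response-time objective.
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

URL = "http://localhost:8080/"  # hypothetical system under test

def one_request(_):
    start = time.perf_counter()
    with urllib.request.urlopen(URL, timeout=10) as resp:
        resp.read()
    return time.perf_counter() - start

for virtual_users in (5, 10, 20, 40):  # incrementally increase the load
    with ThreadPoolExecutor(max_workers=virtual_users) as pool:
        times = list(pool.map(one_request, range(virtual_users)))
    worst = max(times)
    print(f"{virtual_users:>3} users: worst response {worst:.2f}s")
    if worst > 8.0:  # example threshold from the text
        print("response-time objective exceeded; stopping ramp-up")
        break
```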
110. Can you explain how did you perform load testing using tool?
Keyword-driven testing separates the test creation process into two distinct stages: a
Planning Stage, and an Implementation Stage.
Although keyword testing can be used for manual testing, it is a technique particularly
well suited to automated testing. The advantages for automated tests are the reusability
and therefore ease of maintenance of tests that have been created at a high level of
abstraction.
The keyword-driven testing methodology divides test creation into two stages:-
• Planning Stage
• Implementation Stage
Planning Stage
A simple keyword describes one action on one object, e.g. entering a username into a text field or clicking a button:

Object          Action    Data
Button (login)  Click     One left click
Implementation Stage
Pros
1. Reusability and ease of maintenance, since tests are created at a high level of abstraction.
Cons
1. Longer time to market (as compared to manual testing or the record-and-replay technique)
2. Moderately high learning curve initially
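To make the two stages concrete, here is a minimal Python sketch of a keyword interpreter: the planning-stage table is plain data, and a small implementation-stage driver maps each keyword to a function. The objects, keywords and actions are hypothetical stand-ins for a real UI driver.

```python
# A minimal sketch of keyword-driven testing: test steps are data
# (object, action, data), and a driver maps keywords to functions.

def click(obj, data):
    print(f"clicking {obj} ({data})")

def enter_text(obj, data):
    print(f"typing '{data}' into {obj}")

KEYWORDS = {"Click": click, "EnterText": enter_text}

# The planning-stage artifact: a table of Object / Action / Data rows.
test_steps = [
    ("username field", "EnterText", "alice"),
    ("password field", "EnterText", "secret"),
    ("login button",   "Click",     "one left click"),
]

# The implementation-stage driver executes the table row by row.
for obj, action, data in test_steps:
    KEYWORDS[action](obj, data)
```

Because the steps are pure data, non-programmers can author new tests in the table, and only the driver needs code maintenance.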
113. How can you perform data-driven testing using Automated QA?
114. What are the different ways of doing black box testing?
TPA is a technique to estimate the test effort for black box testing. The inputs for TPA are counts derived from function points (function points are discussed in more detail in the following questions).
User-importance
Rating:
3 Low: the importance of the function relative to the other functions is low.
6 Normal: the importance of the function relative to the other functions is normal.
12 High: the importance of the function relative to the other functions is high.
Usage-intensity
The usage intensity is defined as the frequency with which a certain function is processed by the users and the size of the user group that uses the function. As with user-importance, a rating is assigned:
Rating:
2 Low: the function is only used a few times per day or per week.
4 Normal: the function is being used a great many times per day.
Interfacing
Interfacing is an expression of the extent to which a modification in a given function affects other parts of the system. The degree of interfacing is determined by ascertaining first the logical data sets (LDSs) which the function in question can modify, then the other functions which access these LDSs.
Complexity
The complexity of a function is determined on the basis of its algorithm. The general structure of the algorithm may be described using pseudo code, Nassi-Shneiderman diagrams or ordinary text. The complexity rating of the function depends on the number of conditions in the function's algorithm.
Rating:
3 The function contains no more than five conditions.
6 The function contains between six and eleven conditions.
12 The function contains more than eleven conditions.
Uniformity (U):
This factor defines how reusable a system is; clones and dummies come under this heading. A uniformity factor of 0.6 is assigned where clone functions, dummy functions or virtually identical functions reoccur; otherwise a uniformity factor of 1 is assigned.
Df = ((Ue + Uy + I + C)/16) * U
Df = weighting factor for the function-dependent factors
Ue = user-importance
Uy = usage-intensity
I = interfacing
C = complexity
U = uniformity
Dynamic quality characteristics (Qd)
The third step is to calculate Qd. Qd, i.e., the dynamic quality characteristics, has two parts: explicit characteristics (Qde) and implicit characteristics (Qdi). Qde has five important characteristics: Functionality, Security, Suitability, Performance, and Portability.
Qdi defines the implicit part of Qd. These are not standard and vary from project to project. For instance, for this accounting application we have identified four characteristics: user-friendliness, efficiency, performance, and maintainability.
Qd = Qde + Qdi
TPf = FPf * Df * Qd
TPf = number of test points assigned to the function
FPf = number of function points assigned to the function
Df = weighting factor for the function-dependent factors
Qd = weighting factor for the dynamic quality characteristics
Calculate static test points Qs
In this step we take into account the static quality characteristic of the project. This is
done by defining a checklist of properties and then assigning a value of 16 to those
properties. For this project we have only considered easy-to-use as a criteria and
hence assigned 16 to it.
Total number of test points
The total number of test points assigned to the system as a whole is calculated by
entering the data so far obtained into the following formula:
TP = ΣTPf + (FP * Qi) / 500
TP = total number of test points assigned to the system as a whole
ΣTPf = sum of the test points assigned to the individual functions (dynamic test points)
FP = total number of function points assigned to the system as a whole (minimum
value 500)
Qi = weighting factor for the indirectly measurable quality characteristics
Calculate Productivity/Skill factors
Productivity/skill factors show the number of test hours needed per test points. It’s a
measure of experience, knowledge, and expertise and a team’s ability to perform.
Productivity factors vary from project to project and also organization to organization.
For instance, if we have a project team with many seniors then productivity increases.
But if we have a new testing team productivity decreases. The higher the productivity
factor the higher the number of test hours required.
Calculate environmental Factor (E)
The number of test hours for each test point is influenced not only by skills but also by
the environment in which those resources work.
Calculate primary test hours (PT)
Primary test hours are the product of test points, skill factors, and
environmental factors. The following formula shows the concept in more detail:
Primary test hours = TP * Skill factor * E
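Putting the formulas above together, here is a minimal Python sketch of the TPA arithmetic for a single function; all ratings, factors and counts are hypothetical, chosen only to exercise the formulas.

```python
# A minimal sketch of the TPA calculation chain for one function, using
# assumed ratings taken from the scales described above.

Ue, Uy, I, C = 6, 4, 4, 6   # user-importance, usage, interfacing, complexity
U = 1.0                     # uniformity factor (no clones/dummies)
FPf = 20                    # function points assigned to this function
Qd = 1.1                    # weighting for dynamic quality characteristics
FP_total, Qi = 500, 16      # system function points (min 500) and static Qi

Df = ((Ue + Uy + I + C) / 16) * U    # function-dependent weighting factor
TPf = FPf * Df * Qd                  # dynamic test points for the function
TP = TPf + (FP_total * Qi) / 500     # total test points (single function)
PT = TP * 1.5 * 1.2                  # primary hours: skill 1.5, E 1.2

print(f"Df={Df:.2f}  TPf={TPf:.1f}  TP={TP:.1f}  PT={PT:.1f} hours")
```

In a real estimate the TPf of every function would be summed before adding the static component, as the TP formula above states.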
Function points are a unit of measurement for software, much as an hour is a unit for measuring time. The functionality of the software is quantified in function points based on the requirements provided by the customer, primarily from the logical design. Function points measure software development and maintenance consistently across all projects and enterprises.
Function points are used as a metric in software testing. They measure the size and functionality of the software by measuring the requirements, and they are consistent and independent of design, so FP counts can be used in estimation.
E.g., counting the number of screens and menus in the application.
Application boundary considers the user's perspective. It indicates the border between the software being measured and the end user, and helps identify what is available to the end user externally, through the interface, to interact with the internals of the system. This helps identify the scope of the system.
The first step in FPA is defining the boundary. There are two types of major boundaries:
• Internal Application Boundary
• External Application Boundary
To decide where a piece of functionality lies, ask: does it have, or will it have, any other interface to maintain its data which is not in your scope? For example, suppose you are building an "Accounting Application" and at the end of the accounting year you have to report to the tax department. The tax department has its own website where companies can connect and report their tax transactions, and its application has its own maintenance screens, used internally by the tax department. So the tax online interface has another interface to maintain its data which is not in your scope: it lies outside your application boundary. Does your program have to go through a third-party API or layer? Then that layer is outside your boundary too. The best litmus test is to ask yourself whether you have full access over the system: if you have full rights or command to change it, it is inside your internal application boundary; otherwise it is outside.
An Elementary process is the smallest unit of any business activity. It has to have a
meaning or a purpose. An elementary process is complete, when the user comes to
closure on the process and all the business information are in a static and complete
condition.
When it is complete, the elementary process leaves the business area in a self-consistent state. That is, the business person has come to closure on the process and all the business information is in a static and complete condition. These elementary processes can occur at any level, at or below level 1, within the business model.
Conversely, a complex high level process may require analysis through a number of
levels of diagram in order to break it down into functional components that are
sufficiently low level to be termed elementary processes. If there is sufficient interest in
any given process to analyze it further, then clearly it is not an elementary process.
Elementary processes can be described in elementary process descriptions (or EPD's).
These are typically about half a page of narrative and can add useful detail to a
business process model.
When elementary processes are combined, they interact to form what we call a software system or application; the elementary processes are woven together and become interdependent. There are two basic types of elementary processes: dynamic and static. A dynamic elementary process has the characteristic of moving data from inside the application boundary to outside, or from outside to inside. As said in the introduction, FPA breaks huge systems into these smaller pieces and analyzes them.
119. Can you explain the concept of static and dynamic elementary process?
Dynamic elementary process: a process in which data moves from the internal application boundary to the external application boundary, or vice versa.
Example: an input data screen where the user enters data into the application; data moves from the input screen into the application.
Static elementary process: a process in which data of the application is maintained, either inside the application boundary or in the external application boundary.
Example: in a customer maintenance screen, maintaining customer data is a static elementary process.
120. Can you explain the concepts of FTR, ILF, EIF, EI, EO, EQ and GSC?
ILF (Internal Logical Files): - ILFs are logically related groups of data maintained inside the application boundary. Do not confuse an ILF with the technical database design; for instance, Supplier, Supplier Address and Supplier Phone tables are, from the user's point of view, only Supplier, as logically they are all supplier details.
EIF (External Interface Files): - EIFs are used only for reference purposes and are not maintained by the internal application.
RET (Record Element Type): - If there is no sub-group of an ILF, count the ILF itself as one RET. A group of RETs within an ILF is logically related, most probably with a parent-child relationship. Example: a supplier has multiple addresses and every address can have multiple phone numbers (see the database diagram in the figure below), so Supplier, Supplier Address and Supplier Phone Numbers are RETs.
DET (Data Element Type): - A DET should be a non-recursive field in an ILF and should not repeat within the same ILF. For software design convenience we may keep an auto-increment field (e.g. Supplierid) as the primary key; from the user's point of view this field never exists at all, so "Supplierid" does not qualify as a DET. Count foreign keys as one DET.
FTR (File Type Referenced): - An FTR should be an ILF or EIF, so count each ILF or EIF read during the process. If the elementary process is maintaining an ILF, count that as an FTR as well, so by default you will have at least one FTR.
EI (External Input): - It's a dynamic elementary process (for the definition, see "Dynamic and static elementary processes" above) in which data comes from outside the application boundary to inside.
Example: - User interaction screens, where data comes from the user interface into the internal application.
An EI may maintain an ILF of the application, but it's not a compulsory rule. Example: - A calculator application does not maintain any data, but the user's inputs are still EIs. Likewise, an import batch process running from the command line does not have a screen, but it is still an EI.
EQ (External Inquiry): - It's a dynamic elementary process in which result data is retrieved from one or more ILFs or EIFs. In this elementary process some input request has to enter the application boundary, and the output results exit the boundary. An EQ does not contain any derived data. Derived data means complex calculated data: it is not mere retrieval but data combined with additional formulae to generate results; derived data is not part of an ILF or EIF, it is generated on the fly.
EO (External Output): - It's a dynamic elementary process in which derived data crosses from inside the internal application boundary to outside. The process should be the smallest unit of activity that is meaningful to the end user in business, and its DETs should be different from other EOs, which ensures we do not count EOs twice. The major difference between an EO and an EQ is that in an EO derived data passes across the application boundary.
Example: - Exporting account transactions to some external file format like XML, which external accounting software can later import.
This section is the most important one. All the sections discussed above are counting sections; they relate only to the application. But there are other things to be considered while making software, such as: are you going to make it an N-tier application, what performance level is the user expecting, etc. These other factors are called GSCs (General System Characteristics). They are external factors which strongly affect the software, and also its cost. When you submit a function point estimate to a client, he will normally skip everything else and look at the GSCs first.
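For illustration, here is a minimal Python sketch of how GSC ratings adjust a raw function point count, using the standard IFPUG value adjustment factor; the 14 ratings and the unadjusted count below are hypothetical.

```python
# A minimal sketch of the IFPUG value adjustment: the 14 GSCs are each
# rated 0-5, and together they scale the unadjusted function point count.

gsc_ratings = [3, 2, 4, 3, 3, 1, 0, 2, 3, 2, 1, 2, 0, 3]  # 14 GSCs, assumed
assert len(gsc_ratings) == 14

unadjusted_fp = 120                           # counted from EI/EO/EQ/ILF/EIF
vaf = 0.65 + 0.01 * sum(gsc_ratings)          # value adjustment factor
adjusted_fp = unadjusted_fp * vaf

print(f"VAF = {vaf:.2f}, adjusted FP = {adjusted_fp:.1f}")
```

Because each rating ranges from 0 to 5, the VAF can swing the final count by plus or minus 35 percent, which is why clients look at the GSCs first.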
121-130 MISSING
131. You have people in your team who do not meet their deadlines or do not perform. What are the actions you will take?
Ans: With this kind of question they want to see your delegation skills. The best answer is that the job of a project manager is managing projects, not the personal problems of people, so I will delegate this work to HR or a higher authority.
Ans: Risk is high at the start of a project, but with a proper POC (proof of concept) the risk is brought under control. Good project managers always have a proper risk mitigation plan at the start of the project. As the project continues, risks are eliminated one by one, bringing the overall risk down.
Analysis: Here the company-level people and the client or customer-side people participate in a meeting called the kickoff meeting. The client provides the information, and on the company side the Business Analyst participates to gather that information from the client. The Business Analyst is a person well versed in domain skills, technical skills and functionality skills.
From the gathered information the Business Analyst prepares the BRS document, which is also called the Business Requirement Specification. Later the same document is also called the FRD document, that is, the Functional Requirement Document. The Project Manager prepares the SRS document, i.e. the System Requirement Specification document.
Later all these documents are verified by the Quality Analyst. Here the Quality Analyst checks for gaps or loopholes between the documents, mapping the client specification document against the Business Requirement Specification document.
The Business Analyst is again involved to prepare the use case document, and later all these documents are maintained as baseline documents; a baseline document is also called a stable document.
Output: The outputs of the Analysis phase are the BRS, SRS, FRS, use case and test plan documents.
Ans: In "The Waterfall" approach, the whole process of software development is divided
into separate process phases. The phases in Waterfall model are: Requirement
Specifications phase, Software Design, Implementation and Testing & Maintenance. All
these phases are cascaded to each other so that second phase is started as and when
defined set of goals are achieved for first phase and it is signed off, so the name
"Waterfall Model". All the methods and processes undertaken in Waterfall Model are
more visible.
Implementation & Unit Testing: On receiving system design documents, the work is
divided in modules/units and actual coding is started. The system is first developed in
small programs called units, which are integrated in the next phase. Each unit is
developed and tested for its functionality; this is referred to as Unit Testing. Unit testing
mainly verifies if the modules/units meet their specifications.
Integration & System Testing: As specified above, the system is first divided in units
which are developed and tested for their functionalities. These units are integrated into
a complete system during Integration phase and tested to check if all modules/units
coordinate between each other and the system as a whole behaves as per the
specifications. After successfully testing the software, it is delivered to the customer.
Operations & Maintenance: This phase of "The Waterfall Model" is virtually never
ending phase (Very long). Generally, problems with the system developed (which are
not found during the development life cycle) come up after its practical use starts, so the
issues related to the system are solved after deployment of the system. Not all the
problems come in picture directly but they arise time to time and needs to be solved;
hence this process is referred as Maintenance.
Big bang waterfall model delivers the complete solution once and in one go. That’s why
it’s termed Big Bang.
Following is the approach:
• Customer provides complete overall requirements.
• Design follows
• The design is built / developed.
• The development work is tested.
• The system is implemented.
One should go with Big bang waterfall model if:
• Contract of work is completely defined and accurate.
• Requirement and acceptance criteria are completely defined and accurate.
• It is feasible to finish the work within given constraints.
• No change in requirements is expected.
• Problem and proposed solution both are clearly understood by all stakeholders.
• No mistakes can occur in requirements, design phases.
136. Can you explain phased waterfall model?
Ans: Unlike the big bang waterfall model, the phased model is suitable if the work can be grouped into separate units and delivered in steps by different teams, rather than everything at once. Consider a system that consists of 4 subsystems, each
being developed by a separate team. In the end all the 4 subsystems make up one
complete system, giving the flexibility of breaking the system down into 4 parts and allowing each to be developed separately. It is more like a collection of mini projects run by different teams.
137. Explain Iterative model, Incremental model, Spiral model, Evolutionary model and
V-Model?
Spiral model: The spiral model is a software development process combining elements
of both design and prototyping-in-stages, in an effort to combine advantages of top-
down and bottom-up concepts.
V model: A framework to describe the software development life cycle activities from
requirements specification to maintenance. The V-model illustrates how testing activities
can be integrated into each phase of the software development life cycle.
138. Explain Unit testing, Integration tests, System testing and Acceptance testing?
Ans: unit testing: In computer programming, unit testing is a software verification and
validation method in which a programmer tests if individual units of source code are
fit for use. A unit is the smallest testable part of an application. In procedural
programming a unit may be an individual function or procedure.
System testing: testing performed to verify and validate the behavior of the entire system against the original system objectives. Software testing is a process that identifies the correctness, completeness, and quality of software.
Integration testing: testing in which individual software modules are combined and tested as a group, to expose defects in the interfaces between them.
Acceptance testing: In engineering and its various subdisciplines, acceptance testing
is black-box testing performed on a system (e.g. software, lots of manufactured
mechanical parts, or batches of chemical products) prior to its delivery. ...
The Causal Analysis and Resolution process area involves the following:
Identifying and analyzing causes of defects and other problems
Taking specific actions to remove the causes and prevent the occurrence of
those types of defects and problems in the future
The advantage of CAR is that root causes are scientifically identified and their corrective and preventive actions are carried out. CAR needs to be performed at project initiation, at all phase ends and at project end, and on a monthly basis. A fishbone diagram is one of the ways you can do CAR.
The Decision Analysis and Resolution (DAR) process is a quick and effective
method of evaluating key decisions and proposed solutions. DAR can apply to all
levels of decisions made within a program or project. Typically it is applied to
management or technical decisions that are high-risk or that have a significant
consequence later in the project. DAR ensures a controlled decision process, rather
than a reactionary one for critical choices.
Communications Plan
The Project Management Office (PMO) is the department or group that defines and
maintains the standards and processes related to project management within an
organisation.
For years, IT departments have struggled to deliver projects on time and within budget.
But with today’s emphasis on getting more bang for the buck, IT has to rein in projects
more closely than ever. That challenge has led many to turn to project management
offices (PMOs) as a way to boost IT efficiency, cut costs, and improve on project
delivery in terms of time and budget.
While not a new solution, the trend toward implementing PMOs to instill much-needed
project management discipline in IT departments is spreading fast. "More people lately
have been talking to me about PMOs than they have in the last 10 years," says Don
Christian, a partner at PricewaterhouseCoopers. PMOs can help CIOs by providing the
structure needed to both standardize project management practices and facilitate IT
project portfolio management, as well as determine methodologies for repeatable
processes. The Sarbanes-Oxley Act—which requires companies to disclose
investments, such as large projects, that may affect a company’s operating performance
—is also a driver, since it forces companies to keep closer watch on project expenses
and progress. W.W. Grainger, an industrial products distributor, has a PMO that
"enables us to complete more projects on time and on budget with fewer resources,"
says Tim Ferrarell, senior vice president of enterprise systems.
But PMOs are no panacea for project challenges, including battling today’s tepid
business climate. For one thing, there is no uniform recipe for success—it’s important
that the PMO structure closely hews to a company’s corporate culture. PMOs also won’t
give organizations a quick fix or deliver immediate, quantifiable savings. And companies
with PMOs report that they don’t necessarily yield easy to use cost-saving benchmarks
and performance metrics. In a survey conducted by CIO and the Project Management
Institute (PMI), 74 percent of respondents said that lower cost was not a benefit of their
PMOs.
However, survey respondents still reported positive benefits from the formation of a
PMO, even if quantifiable ROI is elusive. Out of 450 people surveyed, 303, or 67
percent, said their companies have a PMO. Of those with a PMO, half said the PMO
has improved project success rates, while 22 percent didn’t know or don’t track that
metric, and 16 percent said success rates stayed the same. There is also a strong link
between the length of time a PMO has been operating and project success rates: The
longer the better. While 37 percent of those who have had a PMO for less than one year
reported increased success rates, those with a PMO operating for more than four years
reported a 65 percent success rate increase. The top two reasons for establishing a
PMO, according to the survey: improving project success rates and implementing
standard practices. In a finding that indicates PMOs’ importance, a survey-leading 39
percent of respondents said the PMO is a strategic entity employed at the corporate
level, meaning it sets project standards across the enterprise and is supported by upper
managers.
NO
A Gantt chart is a chart that depicts progress in relation to time, often used in planning and tracking a project.
26. Two resources are having issues how do you handle the same?
(CR) A formally submitted artifact that is used to track all stakeholder requests (including
new features, enhancement requests, defects, changed requirements, etc.) along with
related status information throughout the project lifecycle. ...
A requirement traceability matrix is a table matching the functional requirements with the prepared test cases. Its importance lies in ensuring that all requirements are covered and that all changes made to the requirements are tracked and covered in the test cases.
OR
The concept of a Traceability Matrix is very important from the testing perspective. It is a document which maps requirements with test cases. By preparing a traceability matrix, we can ensure that we have covered all the required functionalities of the application in our test cases. Some of the features of the traceability matrix:
It is a method for tracing each requirement from its point of origin, through each
development phase and work product, to the delivered product
Will indicate for each work product the requirement(s) this work product satisfies
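As a rough sketch of the idea (the requirement and test case IDs below are invented), a traceability matrix is essentially a mapping from each requirement to the test cases that cover it, and coverage gaps are requirements whose mapping is empty:

    rtm = {
        "REQ-001": ["TC-101", "TC-102"],   # covered by two test cases
        "REQ-002": ["TC-103"],
        "REQ-003": [],                     # not covered yet - a coverage gap
    }

    uncovered = [req for req, test_cases in rtm.items() if not test_cases]
    print("Requirements with no test coverage:", uncovered)   # ['REQ-003']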
The dynamic nature of most business activities causes software or system changes.
Configuration management also known as change control is process to keep track of all
changes done throughout the software life cycle. It involves coordination and control of
requirements, code, libraries, design, test efforts, documentation etc. The primary
objective of CM is to get the right change installed at the right time.
In order to provide support for multiple users creating and updating large amounts of
geographic information in an enterprise geodatabase, ArcSDE provides an editing
environment that supports concurrent multiuser editing without creating multiple copies
of the data. This editing environment is called versioning.
Let's think again about what a project is: You might remember that we defined that a
project requires a start date in order that it can be called a project. Similarly, you could
say that a project needs a kick-off meeting or a start workshop in order to work, or in
other words, to really get off the ground. The main reason behind this is that even
though you might know what you want to achieve within your project, your team
members may not (yet): You have to explicitly communicate these facts to your team.
The people you are going to work with need at least to know what you are up to (the
goals of the project), who will be doing what (responsibilities and roles) and how
project administration and controlling will work. In addition, if the goals are not clear
enough you might also want to specify a number of "not-goals", i.e., things you
specifically do not want to be done within the scope of the project.
When deciding on overall responsibilities you should not forget that this is often more
about leadership, communication, organizational skills and trust rather than about
technical competency. Regarding project administration and controlling methods, it is
important that you choose the "right" method for your organization/project size and type:
This is often called "adaptive project management".
It is especially important that the project management methods you choose are not
oversized for your project: The people working with you need to understand why they
have to do the things they are required to do. Otherwise, they will work against you and
the methods you chose, because they think that the methods do not make sense and
that they are not really needed. On the other hand, if everyone on the team
understands the methods and why they are needed, your team members will probably
even make suggestions during the project on how the process you have chosen can be
improved.
Finally, an explicit kick-off meeting or start workshop makes sense in order to really
"rally the troops", i.e., it can be used to motivate people and to formally give the "Go"
for the project.
42. Did you have project audits if yes how was it handled?
A report raised during a Quality System Audit on a process whenever it deviates from the Quality System standards, policies and procedures, or whenever it fails to deliver consistency in customer satisfaction and continual improvement.
Examples:
The non-conformance report includes who, what, where, when. The report generally
initiates an investigation into Root Cause (why). It generally escalates to CAPA
(Corrective and Preventative Action)
To remedy these shortcomings, below are 12 ideas for boosting the accuracy of your
estimates:
Maintain an ongoing "actual hours" database of the recorded time spent on each
aspect of your projects. Use the data to help estimate future projects and identify
the historically accurate buffer time needed to realistically perform the work.
Create and use planning documents, such as specifications and project plans.
Consider simpler or more efficient ways to organise and perform the work.
Plan and estimate the project rollout from the very beginning so that the rollout won't
become a chaotic scramble at the end. For instance, you could propose using a
minimally disruptive approach, such as a pilot programme or a phased
implementation.
Develop contingency plans by prioritising the deliverables right from the start into
"must-have" and "nice-to-have" categories.
Refer to your lessons-learned database for "20:20 foresight" on new projects, and
incorporate your best practices into future estimates.
The success of your business or organization depends largely on the people that make
up your team. Whether they are salespeople, customer service representatives,
executive managers or service providers, your team can make or break the success of
your organization or business.
Therefore, motivating your team to continually meet and exceed goals and expectations
is essential to the overall success of your organization. How to motivate your team?
Here are seven useful tips that will help you keep your team motivated and working
hard to achieve your organization's goals.
1. Set clear and realistic goals. The first essential step towards meeting your team's
goals and inspiring your team to participate and achieve, is to set clear and realistic
goals. Set both short-term and long-term goals and build on each success as a goal is
achieved.
2. Clearly communicate goals and expectations. In order for your team to meet and
exceed goals and expectations, they must have a solid understanding of what they are
working to obtain. Clearly communicate the goals and expectations that have been
established in order to set your team up for success.
3. Provide all necessary tools. Morale and motivation can only get your team so far.
To maintain a happy team that is motivated to work hard to achieve their goals, ensure
that they have the proper tools for the job. This may include equipment, training,
supplies, support or coaching.
4. Use work plans. Work plans are incredibly effective tools that act both as to do lists
complete with goals and deadlines, and well-organized agendas for follow up meetings
and check-ins.
5. Stay connected and follow up. Schedule regular check-ins with each team
member, as well as the whole team. This is a great way to help team members maintain
accountability to the team, as well as an easy way to immediately discover if anyone is
falling behind on their responsibilities or facing challenges that need to be addressed.
6. Involve your team in setting goals and creating work plans. There is no better way
to establish ownership of goals and expectations than to involve your team in setting the
goals and establishing the expectations that they will then work together to meet.
7. Celebrate successes and use incentives. Celebrate both individual and team
successes whenever goals are met or exceeded. Incentives can range from small gifts
or public praise, to workplace perks and recognition. Incentives don't have to be
expensive to show your team that you appreciate them and that their efforts are noticed.
47. How did you confirm that your modules are resource independent?
48. Was your project show cased for CMMI or any other project process
standardization?
49. What are the functions of the Quality Assurance Group (QAG)?
is complete.
Example:
Project Approach
This section should outline the way you will roll out the technology, including the
highest level milestones.
For example:
Entry criteria – ensure that the proper environment is in place to start the test process of a project, e.g. all hardware/software platforms are successfully installed and functional, and the test plan and test cases are reviewed and signed off.
Exit criteria – ensure that the project is complete before exiting the test stage, e.g. planned deliverables are ready, high-severity defects are fixed, and documentation is complete and updated.
53. How much are you as leader and how much are you as PM?
54. How can he handle the conflicts between peers and subordinates?
55. In your team you have highly talented people how did you handle their motivation?
Highly talented people have very different values and motivation from the majority of
people. More is expected of them and they expect more in return. They are often high-
impact but high-maintenance too. They think differently (and faster). They get bored
more readily. They need different kinds of challenges. They can deal with more
complexity but are more complex in themselves. They get frustrated more readily and
express themselves readily.
They are a different kind of person - and they need a different kind of management. The
manager of a talented team needs to learn quickly how to spot and respond to talent,
how to encourage it to grow, whilst gently directing its course. The manager of talent
needs to be able to cope with the fact that certain members of the team may be in some
respects brighter and more able than they are - and they need to be comfortable about
that. The manager of a talented team needs to completely understand what role they
play in the team's success and communicate that subtly but effectively. The manager
must be respected and be the person that the talented individual is happy to be led by.
Additional information:
What do we mean by talented? Proposition: there is something that makes non-
conformity or independent-mindedness an essential ingredient in our contemporary
definition of talent. Proposition: defining talent simply as above average performance
doesn’t get us very far. By whose definition? Different people may have different
definitions? In particular, the managers and the managed may have different definitions.
Is the label ‘talent’ purely subjective? Or is there some pattern in how people define it?
Can there be an objective basis for calling someone ‘talented’? What synonyms are
used for ‘talent’? Is the definition of talent essentially contextual, a function of time and
place? Different domains are likely to define talent differently Are there some domains
where the word talented tends not to be used? When did you ever hear ‘talented’
applied to a clerk or administrator, secretary, labourer, taxi driver, tax officer…? Are
definitions of talent changing? If so, how? How has talent been defined through history?
(Any off-beat definitions?) Are there various discernible types of talent? If so, what are
the types and what do they consist of? What are the common ingredients? Is versatility
an essential ingredient in talent? Or speed of learning?
3. How do you manage these people? Why are they difficult to manage?
What allowances get made for talented people? With what effects? What allowances
should be made? Is Belbin’s idea of ‘allowable weakness’ more trouble than it’s worth?
What can you do about it? Manage expectations Continue to develop their talent
Manage them on the move Trust them Talk to them Get clued up How do managers
tend to tackle these problems? (What is current practice?) With what effects? (as
respectively assessed by the managers, the talent, others…) How should managers
tackle these problems? Why? What’s the rationale? What’s the evidence that other
approaches work? Manage Expectations One of the ways talent gets wasted is through
the unrealistic expectations which are held of it. (Something about the time dimension in
here.) We all think we’re talented. Or do we? Is it part of a manager’s job to let people
know that they are talented? Or that they are not? Proposition: there are higher
expectations of talent; more is expected of them and they expect more in return, that’s
an aspect of what makes them different, and perhaps the essence of what makes them
difficult to manage. So what do managers these days expect of talented people? Part of
the price of being reckoned to have talent is that you are given bigger, more difficult,
more demanding jobs AND expected to do them better and faster than others. Is one of
the ways in which talent is squandered by managers…not managing expectations about
the rate at which talent is going to develop. And what do talented people expect of their
managers?
What do managers perceive that they get? And how do they feel about that? What do
talented people perceive that they get? And how do they feel about that? Example
cases of ‘good’ and ‘bad’ manager-talent relationships? How do the parties respectively
define good and bad? Continue to develop their talent Talent is not an entity but an
incremental/emergent property…needs investment and maintenance. (Some interesting
connections with Carol Dweck’s work here.) Proposition: a key part of the role in
managing talent is to give it time and space (‘fighter cover’), and scaffold its
development. Talent is seldom a given, it has to be developed. Managers have a role in
developing (realising) rather than stifling talent. How? What are the key things to do to
develop and nurture talent, to get the most out of it? This still leaves prime, but not sole
responsibility for developing talent with the talented person themselves. Manage them
on the move
Proposition: there is something fluid, mobile, dynamic, quick about talent which requires
that it is managed in a fluid, mobile, dynamic (clued up) way. Trust them Where does
trust come into all this? Proposition: trust is an essential ingredient in the greater give
and take required to manage – and get the best out of – talented people. (Link to
Herriot’s stuff on how you can’t get innovation or change from people if they don’t trust
you…and you look to your talented people to lead innovation and change.) Talk to them
Proposition: dialogue is the key to developing the relationship and the trust. (So you’d
better be skilful at managing dialogue.) Get clued up How do the clued up angles play
into this issue of managing talent?
Proposition: one of the characteristics of talented people is that they like to put ideas
into action. Proposition: talented people tend to think differently to most people. So you
need good thinking to understand them. Proposition: talented people can handle more
complexity (and may be more complex themselves) but may consequently be harder for
others to understand Proposition: there is a lot of politics around talented people!!!!!
Proposition: talented people often express themselves differently. You need to talk their
language (and certainly not talk down to them.) Proposition: they are individuals (by definition?) so you need to develop a tailored management approach – pay attention to the clues.
4. How does having talented people change a manager's role? Managing
talent is not a bolt-on activity; it’s part of the day job as a manager. Proposition: having
talented people makes a manager’s role more ambiguous, more fluid, more dynamic.
Because talent will push the boundaries, the manager often experiences a lack of role
clarity, a more confused relationship than with other staff. Proposition: therefore
managers have to work harder and more continuously at the relationship with their
talented people. Paradoxically, the talented ones may take up the most time! Or may
need periodic bouts of intense attention (a different pattern of managing than other
staff.)
Proposition: managers have to negotiate and contract with talented people in a way/to
an extent that they don’t with other staff. Talented people are more demanding, though
you get more in return. Proposition: talent, people who are particularly good at
something which is of value for the organisation, present two fundamental challenges
for any manager:- balancing the value of the talented person’s individualism with the
need for control balancing the value of the talented person’s individualism with the need
for teamwork How does having talented people affect the balancing act that any
manager has to maintain between the three perennial requirements for control, co-
operation and autonomy? When do these dilemmas occur? How does this relate to the
situations where managers report finding talented people problematic? How is balance
achieved in practice? What are the options and choices? What are the consequences associated with each? (Diagram: co-operation – control – autonomy.)
Proposition: it's not all down to the manager... Talented
people have a responsibility, if they want to realise their talents fully, to recognise and
engage with (though certainly not just give in to) the demands of the context they are
working in. These demands include some requirement for control and co-operation. You
can’t have pure autonomy. It’s only on offer if you go and work for yourself (and actually
not even then!) But the manager does have a particular responsibility, which is to make
sure that is understood and not to shirk the difficult discussions (too often done as
managers accommodate difficult talents to placate and keep them on-side) How do you
manage your whole team? Can you have too much talent in your team? (reflections on
what Belbin found out about team workers and specialists) How do you deal with those
without talent? How do/should managers differentiate their role from that of their
talented people? How can a manager make a distinct and additive contribution to what
talent does? As a ‘talented’ manager what do you want from work? Do you have all the
same issues as we have been describing for talent? What are the consequences of that?
5. What does this all mean for you? Proposition: Being effective as a manager of talent will be a combination of acting appropriately with your team and dealing with your own shit. (Nothing new there then!) How much do you know about your team? Who has
talent and who doesn’t? More specifically what are their capabilities, motivations etc.
What about you? How well do you manage you team now? What evidence have you
got? What do you do well/badly? Under what circumstances? Have you got talent?
What are the consequences? How are you being managed? How could you improve
your manager's performance? So what are you going to do differently then? By when
etc. What will help and hinder etc. etc.
56. How can you balance between underperforming and outperforming people?
57. You need to make choice between delivery and quality what’s your take?
58. Define risk?
There may be external circumstances or events that must not occur for the project to be successful. If you believe such an event is likely to happen, then it would be a risk.
Identifying something as a risk increases its visibility, and allows a proactive risk
management plan to be put into place.
If an event is within control of the project team, such as having testing complete by a
certain date, then it is not a risk. If an event has 100 percent chance of occurring, then it
is not a risk, since there is no "likelihood" or risk involved (it is just a fact).
Risk Analysis:
Figure: the risk analysis cycle – risk identification (via testing, inspection etc.), risk assessment (a cost-and-probability risk matrix), risk mitigation, risk reporting and risk prediction – with the test plan and test metrics as inputs and outputs.
RiskyProject also has a number of risk templates. Risk templates are standard risk breakdown structures that allow you to quickly and simply add risks to tasks and projects. A number of templates are included in the RiskyProject package. In addition, it is easy to create your own templates, which you can use for your own projects.
Budgetary Risks
Environmental Risks
Legal Risks
Resource Risks
Staff turnover
Requirement/Client relationship
Other Risks
Risk Management
In rushing to take advantage of SMS features, organizations might overlook the risks
involved in running a technically complex implementation project that touches nearly
every component of your infrastructure.
You must actively manage any risk. To manage risks effectively, identify the risks, and
then design contingency plans for dealing with those risks.
Risk Analysis
The best way to avoid risks is to plan your SMS implementation carefully. For example,
using the default settings provided by Express Setup to install SMS presents
considerable risks to your computing environment. The default settings cannot
guarantee a successful deployment for every organization. Properly planning
configuration settings before deploying SMS in your production environment is the
preferred method of performing an SMS installation.
Table 7.2 outlines some potential risks that you should be aware of before completing
your project plan.
Risk: Deploying SMS without planning
Potential consequences: Hindered network infrastructure stability; reduction in available bandwidth; reduced performance due to improper server sizing; and the potential for SMS to collect data that is not valid.
Mitigation: Create a project plan and follow the planning and installation guidelines in this book or in the Microsoft Solutions Framework documentation.

Risk: Not testing in a lab environment before deployment
Potential consequences: Interoperability problems, and reduced ability to provide support staff with needed skills and experience or to eliminate the costs associated with incorrect design, which could lead to a costly redeployment.
Mitigation: Thoroughly test your SMS deployment, run a pilot project, and document your results before deploying any SMS component on your production network.

Risk: No use of change control or change management system
Potential consequences: Inability to troubleshoot system failure if changes are not tracked.
Mitigation: Develop a formal change management process and tracking system to ensure that changes are made only where necessary to fulfill objectives, and that all implications and risks are understood in advance.

Risk: Not understanding and planning for SMS security policies
Potential consequences: Security breaches – unauthorized access of client computers or malicious destruction of client computers.
Mitigation: Plan for security early, so that you can ensure the security of your computing environment.
Most of your significant project design changes are likely to occur as the result of
testing. In the pre-planning phase, begin thinking about how you want to control and
manage change throughout the planning and deployment phases of the project.
Change control requires tracking and reviewing changes to your implementation plan
made during testing cycles and after deployment. Change management requires testing
potential system changes in a lab environment before implementing them in your
production environment. By identifying all affected systems and processes before a
change is implemented, you can mitigate or eliminate potential adverse effects.
Appropriate plans vary from one enterprise to another, depending on variables such as
the type of business, the processes involved, and the level of security needed. Disaster
recovery planning may be developed within an organization or purchased as a software
application or a service. It is not unusual for an enterprise to spend 25% of its
information technology budget on disaster recovery.
Nevertheless, the consensus within the DR industry is that most enterprises are still ill-
prepared for a disaster. According to the Disaster Recovery site, "Despite the number of
very public disasters since 9/11, still only about 50 percent of companies report having a
disaster recovery plan. Of those that do, nearly half have never tested their plan, which
is tantamount to not having one at all."
Company owners and project managers use the Work Breakdown Structure (WBS) to
make complex projects more manageable. The WBS is designed to help break down a
project into manageable chunks that can be effectively estimated and supervised.
A work breakdown structure is just one of many project management forms and
templates.
To start out, the project manager and subject matter experts determine the main
deliverables for the project. Once this is completed, they start decomposing the
deliverables they have identified, breaking them down to successively smaller chunks of
work.
"How small?" you may ask. That varies with project type and management style, but
some sort of predetermined “rule” should govern the size and scope of the smallest
chunks of work. There could be a two weeks rule, where nothing is broken down any
smaller than it would take two weeks to complete. You can also use the 8/80 rule, where
no chunk would take less than 8 hours or longer than 80 hours to complete.
Determining the chunk size “rules” can take a little practice, but in the end these rules
make the WBS easier to use.
Regarding the format for WBS design, some people create tables or lists for their work
breakdown structures, but most use graphics to display the project components as a
hierarchical tree structure or diagram. In the article Five Phases of Project
Management, author Deanna Reynolds describes one of many methods for developing
a standard WBS.
A WBS diagram expresses the project scope in simple graphic terms. The diagram
starts with a single box or other graphic at the top to represent the entire project. The
project is then divided into main, or disparate, components, with related activities (or
elements) listed under them. Generally, the upper components are the deliverables and
the lower level elements are the activities that create the deliverables.
Information technology projects translate well into WBS diagrams, whether the project is
hardware or software based. That is, the project could involve designing and building
desktop computers or creating an animated computer game. Both of these examples
have tasks that can be completed independently of other project tasks. When tasks in a
project don’t need to be completed in a linear fashion, separating the project into
individual hierarchical components that can be allotted to different people usually gets
the job done quicker.
One common view is a Gantt chart. In a recent article, Joe Taylor, Jr. discusses the Top
Ten Benefits of a Gantt Chart.
Building a Desktop Computer - Say your company plans to start building desktop
computers. To make the work go faster, you could assign teams to the different aspects
of computer building, as shown in the diagram E-1 shown below. This way, one team
could work on the chassis configuration while another team secured the components.
The first number in a WBS denotes the project. For instance, in figure 'WBS numbering' we have shown the number '1' as the project number, which is further extended according to level. Numbering can be numeric, alphanumeric, or a combination of both. Figure 'Different Project Number' shows a project numbered '528'.
Now the tasks at the lowest level are assigned to the resources. Table 'Assign task to resource' shows how the tasks are allocated to resources.
One of the main uses of the WBS is scheduling: the WBS forms an input to network diagrams from the scheduling aspect.
A network diagram shows the logical relationships between project activities. A network diagram helps us in the following ways:
It helps us understand which activities are independent of one another. For instance, you can start coding/execution of the transactional screens without the master screens being completed; put another way, you can execute both activities in parallel.
A network diagram also gives the list of activities which cannot be delayed. For example, we can delay the master screens of a project, but not the transactional ones.
67. What are the different types of network diagram?
We have two types of network diagrams: one is AON (Activity Networks) and the other is
AOA (Arrow Networks). Below figure ‘Types of Network Diagrams’ shows the
classification in a more visual format. CPM / CPA (Critical Path Method / Critical Path
Analysis) and PERT (Program Evaluation and Review Technique) come under Arrow
networks. PDM (Precedence Diagrams) comes under activity diagram.
Helps us find our critical / non-critical activities. So if we know our critical activities we
would like to allocate our critical people on the critical task and medium performing
people on the non-critical activities.
This also helps us to identify which activities we can run in parallel, thus reducing the
total project time.
The Precedence Diagram Method is a tool for scheduling activities in a project plan. It is
a method of constructing a project schedule network diagram that uses boxes, referred
to as nodes, to represent activities and connects them with arrows that show the
dependencies.
71. Can you explain Critical path?
CPA / CPM (Critical Path Analysis / Method) is an effective way to analyze complex projects. A project consists of a set of activities; the CPA identifies the critical set of activities needed to complete the project. The critical path helps us to focus on the essential activities which are
critical to run the project. Once we identify the critical activities we can devote good
resources and prioritize the same accordingly.
CPM (Critical Path Method) uses the following times for an activity.
(EST)Early start Time is the earliest time the activity can begin.
(LST)Late start Time is the latest time the activity can begin and still allow the project to
be completed on time.
(EFT) Early finish Time is the earliest time the activity can end.
(LFT) Late finish Time is the latest time the activity can end and still allow the project to
be completed on time.
figure: start and end
According to the CPM calculation the minimum start date is 1-Jan-2009 and the maximum end date is 30-Jan-2009. Our EST, EFT, LST and LFT should all fall between these two dates.
First we need to calculate the EST and EFT. EST and EFT are calculated using the forward pass methodology. Figure 'EST and EFT' shows how the forward calculation works. We add 0 days to the start date, i.e. 1-Jan-2009, which becomes the EST of 'Get Faculties'. The 'Get Faculties' task takes 6 days; adding this to its EST gives us 7-Jan-2009, which is the EFT of 'Get Faculties'. This EFT becomes the EST of the next task, i.e. 'Buy Computers'. Again we add the number of days of the 'Buy Computers' task to get its EFT, and so on. In short, EFT is calculated by adding the task's number of days to its EST, and the EFT of one task becomes the EST of the next.
Figure: -EST and EFT
Float (also known as slack, total float and path float) is computed for each task by
subtracting the EFT from the LFT (or the early start from the late start). Float is the
amount of time the task can slip without delaying the project finish date. Free float is the
amount of time a task can slip without delaying the early start of any task that
immediately follows it.
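A minimal Python sketch of these calculations (the task names and durations are invented, and tasks must be listed in dependency order): the forward pass computes EST/EFT, the backward pass computes LST/LFT, and float = LST - EST.

    tasks = {                                 # duration in days, predecessors
        "Get Faculties":    (6, []),
        "Buy Computers":    (4, ["Get Faculties"]),
        "Install Software": (3, ["Buy Computers"]),
    }

    est, eft = {}, {}
    for name, (dur, preds) in tasks.items():             # forward pass
        est[name] = max((eft[p] for p in preds), default=0)
        eft[name] = est[name] + dur

    project_end = max(eft.values())
    lst, lft = {}, {}
    for name in reversed(list(tasks)):                    # backward pass
        succs = [s for s, (_, preds) in tasks.items() if name in preds]
        lft[name] = min((lst[s] for s in succs), default=project_end)
        lst[name] = lft[name] - tasks[name][0]

    for name in tasks:                  # tasks with zero float are critical
        print(name, "float =", lst[name] - est[name])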
A PERT chart is a project management tool used to schedule, organize, and coordinate
tasks within a project. PERT stands for Program Evaluation and Review Technique.
75. Can you explain Gantt chart?
A GANTT chart is a time-and-activity bar chart. Gantt charts are easy-to-read charts that display the project schedule in task sequence and by the task start and finish dates. Let's consider the simple four-activity network figure given below.
Figure: -Simple Activity Network
The top bar shows the total activity period. Dependencies are shown by one arrow connecting to the other; we have circled how the dependencies are shown. Task B can only start once task A is completed. A GANTT chart is a helpful way to communicate schedule information to top management, since it provides an easy-to-read visual picture of the project activities.
It does not show clear dependencies/relationships between tasks – for instance, which task comes first, which second, and so on. It also fails to show the critical and non-critical tasks. A GANTT chart is best used to show a summary of the whole project to top management, as it does not show detailed information for every activity.
How It Works
In a Monte Carlo simulation, a random value is selected for each of the tasks, based on the range of estimates. The model is calculated based on these random values. The result of the model is recorded, and the process is repeated. A typical Monte Carlo simulation calculates the model hundreds or thousands of times, each time using different randomly selected values.
In the Monte Carlo simulation below, we will randomly generate values for each of the tasks, then calculate the total time to completion. The simulation will be run 500 times. Based on the results of the simulation, we will be able to describe some of the characteristics of the risk in the model.
To test the likelihood of a particular result, we count how many times the model returned that result in the simulation. In this case, we want to know how many times the result was less than or equal to a particular total time, for example:

Result        Trials    Cumulative probability
12 Months     1         0%
13 Months     31        6%

In the Monte Carlo simulation, however, we can see that out of 500 trials using random values, the total time was 14 months or less in only about a third of the trials. Put another way, in the simulation there is only a 34% chance – about 1 out of 3 – that any individual trial will result in a total time of 14 months or less. On the other hand, there is a 79% chance that the project will be completed within 15 months. Further, the model demonstrates that it is extremely unlikely, in the simulation, that we will ever fall at the absolute minimum or maximum total values.
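A minimal sketch of such a simulation in Python, using a triangular distribution for each task (the three tasks and their optimistic / most likely / pessimistic estimates are invented, not the ones behind the figures above):

    import random

    estimates = [(2, 3, 5), (3, 4, 6), (4, 5, 8)]  # (opt, likely, pess) months
    TRIALS = 500

    totals = []
    for _ in range(TRIALS):
        total = sum(random.triangular(opt, pess, likely)
                    for opt, likely, pess in estimates)
        totals.append(total)

    # Likelihood that the project finishes within 14 months:
    chance = sum(1 for t in totals if t <= 14) / TRIALS
    print(f"P(total <= 14 months) ~ {chance:.0%}")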
Schedule variance is the difference between earned value and planned value:
SV = EV – PV
SV greater than 0 means the project is ahead of schedule; SV less than 0 means it is behind schedule.
Cost variance is the difference between earned value and actual cost:
CV = EV – AC
CV greater than 0 means the project is under budget; CV less than 0 means it is over budget.
The cost performance index is:
CPI = EV / AC

CPI              Description
1                You are right on budget.
Less than 1      You are over budget.
Greater than 1   You are under budget.
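A tiny sketch tying the three formulas together (the monetary figures are invented):

    def evm(ev, pv, ac):
        sv = ev - pv    # schedule variance: > 0 ahead, < 0 behind schedule
        cv = ev - ac    # cost variance:     > 0 under, < 0 over budget
        cpi = ev / ac   # cost performance index: 1 on budget, < 1 over budget
        return sv, cv, cpi

    # Example: 40,000 of value earned, 50,000 planned, 45,000 actually spent.
    sv, cv, cpi = evm(ev=40_000, pv=50_000, ac=45_000)
    print(sv, cv, round(cpi, 2))   # -10000 -5000 0.89 (behind and over budget)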
Let’s take a small sample project. We need to make 400 breads and following is the
estimation of the project:-
- Size
- Effort
- Schedule.
Size - The size of the project needs to be determined in order to estimate the testing effort. Size can be measured in three ways: 1. LOC (Lines of Code); 2. Function Points (the functional features of the application are taken as inputs); 3. Number of screens/forms. While estimating the size we should also take into account the time required for automation and the number of configurations on which the application is to be tested.
Effort - Once the size estimation is done, the effort required is estimated. This can be done using the size estimate plus productivity (the time that can be taken for test authoring/execution per organization standards) plus the amount of time it takes to test the product on multiple combinations.
Schedule - The testing is categorized into different phases, and based on the effort estimated for the project the schedule is derived.
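As a rough sketch of the effort step (all figures below are invented): a size in test cases is combined with authoring/execution productivity figures and the execution is repeated per configuration.

    test_cases     = 120     # size estimate
    author_hrs     = 1.5     # hours to author one test case (productivity)
    exec_hrs       = 0.5     # hours to execute one test case once
    configurations = 3       # combinations the product must be tested on

    authoring = test_cases * author_hrs
    execution = test_cases * exec_hrs * configurations  # re-run per config
    print("Estimated test effort:", authoring + execution, "hours")  # 360.0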
86. Can you explain the LOC method of estimation?
Lines of Code (LOC) method measures software and the process by which it is being
developed. Before an estimate for software is made, it is important and necessary to
understand software scope and estimate its size.
Lines of Code (LOC) is a direct-approach method and requires a higher level of detail, obtained by means of decomposition and partitioning. In contrast, Function Points (FP) is an indirect-approach method: instead of counting the code directly, it focuses on the domain characteristics of the functions.
The expected size EV of each module is computed with the three-point (beta/PERT) formula

EV = (Sopt + 4 x Sm + Spess) / 6

where Sopt is the optimistic size estimate (in LOC), Sm the most likely estimate, and Spess the pessimistic estimate.
Example:
Problem Statement: Take the Library management system case. Software developed
for library will accept data from operator for issuing and returning books. Issuing and
returning will require some validity checks. For issue it is required to check if the
member has already issued the maximum books allowed. In case for return, if the
member is returning the book after the due date then fine has to be calculated. All the
interactions will be through user interface. Other operations include maintaining
database and generating reports at regular intervals.
The system decomposes into three modules:
1. User interface
2. Database management
3. Report generation

User interface: Sopt = 1800, Sm = 2000, Spess = 4000
EV = (1800 + 4 x 2000 + 4000) / 6 = 2300

Database management: Sopt = 4600, Sm = 6900, Spess = 8600
EV = (4600 + 4 x 6900 + 8600) / 6 = 6800

Report generation: Sopt = 1200, Sm = 1600, Spess = 3200
EV = (1200 + 4 x 1600 + 3200) / 6 = 1800
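The same arithmetic in a short Python sketch, reproducing the three module estimates above:

    def expected_size(s_opt, s_m, s_pess):
        return (s_opt + 4 * s_m + s_pess) / 6   # beta/PERT expected value

    modules = {
        "User interface":      (1800, 2000, 4000),
        "Database management": (4600, 6900, 8600),
        "Report generation":   (1200, 1600, 3200),
    }
    for name, (opt, m, pess) in modules.items():
        print(name, expected_size(opt, m, pess))  # 2300.0, 6800.0, 1800.0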
Short for Constructive Cost Model, a method for evaluating and/or estimating the cost of
software development. There are three levels in the COCOMO hierarchy:
o Organic Mode: Development projects typically are uncomplicated and involve small
experienced teams. The planned software is not considered innovative and requires a
relatively small amount of DSIs (typically under 50,000).
o Semidetached Mode: Development projects typically are more complicated than in
Organic Mode and involve teams of people with mixed levels of experience. The
software requires no more than 300,000 DSIs. The project has characteristics of both
projects for Organic Mode and projects for Embedded Mode.
o Embedded Mode: Development projects must fit into a rigid set of requirements
because the software is to be embedded in a strongly joined complex of hardware,
software, regulations and operating procedures.
Detailed COCOMO: an extension of the Intermediate model that adds effort multipliers for each phase of the project to determine the cost drivers' impact on each step.
COCOMO (Constructive Cost Model) is a model that allows software project managers
to estimate project cost and duration. It was developed initially (COCOMO 81) by Barry
Boehm in the early eighties. The COCOMO II model is an update of COCOMO 81 to address software development practices of the 1990s and 2000s. The model is by now a well-established software engineering artifact that has, from the customer's perspective, the following features:
There are similar COCOMO formulas for project duration (expressed in months) and
average size of project team. Interestingly, project duration in COCOMO is
approximately cube root of effort (in person-months).
In practice, COCOMO parameters can differ greatly from their typical values.
COCOMO II provides classification of factors that can have an influence on project cost,
and lets you make better approximation of coefficients and scaling factors for your
particular project.
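A minimal sketch of the basic COCOMO 81 formulas (the coefficients per mode are the published basic COCOMO 81 values; the 32 KLOC input is invented): effort = a x KLOC^b person-months, and duration = c x effort^d months, which is roughly the cube root of effort as noted above.

    MODES = {                    # (a, b, c, d) from the basic COCOMO 81 tables
        "organic":      (2.4, 1.05, 2.5, 0.38),
        "semidetached": (3.0, 1.12, 2.5, 0.35),
        "embedded":     (3.6, 1.20, 2.5, 0.32),
    }

    def basic_cocomo(kloc, mode):
        a, b, c, d = MODES[mode]
        effort = a * kloc ** b           # person-months
        duration = c * effort ** d       # calendar months
        return effort, duration

    effort, months = basic_cocomo(32, "organic")
    print(round(effort, 1), "person-months over", round(months, 1), "months")
    # ~91.3 person-months over ~13.9 months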
Ans. An application boundary defines the scope of an application. A process can contain
multiple application boundaries. An application running inside one application boundary
cannot directly access the code running inside another application boundary. However,
it can use a proxy to access the code running in other application boundaries.
Ans. An Elementary process is the smallest unit of any business activity. It has to have a
meaning or a purpose. An elementary process is complete when the user comes to closure on the process and all the business information is in a consistent and complete condition.
94. Can you explain the concept of static and dynamic elementary process?
Ans. A dynamic elementary process is one in which data moves from the internal application boundary to the external application boundary, or vice versa. Example: an input data screen where the user enters data into the application; data moves from the input screen into the application. A static elementary process is one in which data of the application is maintained either inside the application boundary or in the external application boundary. Example: a customer information screen maintaining customer data.
95. Can you explain concept of FTR, ILF, EIF, EI, EO , EQ and GSC ?
Ans. External Inputs (EI) - is an elementary process in which data crosses the boundary
from outside to inside. This data may come from a data input screen or another
application. The data may be used to maintain one or more internal logical files. The
data can be either control information or business information. If the data is control
information it does not have to update an internal logical file. The graphic represents a
simple EI that updates 2 ILF's (FTR's).
External Outputs (EO) - an elementary process in which derived data passes across the
boundary from inside to outside. Additionally, an EO may update an ILF. The data
creates reports or output files sent to other applications. These reports and files are
created from one or more internal logical files and external interface files. The following graphic represents an EO with 2 FTRs; it contains derived information that has been derived from the ILFs.
External Inquiry (EQ) - an elementary process with both input and output components
that result in data retrieval from one or more internal logical files and external interface
files. The input process does not update any Internal Logical Files, and the output side
does not contain derived data. The graphic below represents an EQ with two ILF's and
no derived data.
Internal Logical Files (ILF’s) - a user identifiable group of logically related data that
resides entirely within the applications boundary and is maintained through external
inputs.
External Interface Files (EIF’s) - a user identifiable group of logically related data that is
used for reference purposes only. The data resides entirely outside the application and
is maintained by another application. The external interface file is an internal logical file
for another application.
96. How can you estimate number of acceptance test cases in a project?
Ans. A use case describes what the system must do to provide value to the
stakeholders.
A use case describes the interactions between one or more Actors and the system in
order to provide an observable result of value for the initiating actor.
The concept of a (use case) transaction helps to deal with the variation in length and conciseness typical of use case descriptions. Use case specifications can be tersely written, or be rather verbose/detailed, depending on the use case template used, the
approach adopted, the business context involved, or the personal taste of the
Requirements Specifier. The number of steps in a use case flow, which describes the
interaction between an actor and the system, can also vary widely both across and
within scenarios. You can test for "sameness of size" by detecting and counting the use
case transactions that are involved in your use case specifications. If two use case
specifications have the same number of unique transactions, they have the same size.
Ans. The number of use case points in a project is a function of the following:
• the environment in which the project will be developed (such as the language,
the team’s motivation, and so on)
101.Can you explain on what basis does TPA actually work?
Ans. Size, test strategy and productivity are the three elements which determine the test effort for black box testing. Based on these, the TPA estimate is calculated.
Ans: Measure: The verb means "to ascertain the measurements of"
Measurement: The figure, extent, or amount obtained by measuring"
Metric: "A standard of measurement"
Benchmark: "A standard by which others may be measured"
So we collect data (measurements), determine how those will be expressed as a
standard (metric), and compare the measurement to the benchmark to evaluate
progress. For example, we measure number of lines of code written by each
programmer during a week. We measure (count) the number of bugs in that code. We
establish "bugs per thousand lines of code" as the metric. We compare each
programmer's metric against the benchmark of "fewer than 1 defect (bug) per thousand
lines of code".
What To Measure
Measure those activities or results that are important to successfully achieving your
organization's goals. Key Performance Indicators, also known as KPI or Key Success
Indicators (KSI), help an organization define and measure progress toward its goals.
They differ depending on the organization. A business may have as one of its Key
Performance Indicators the percentage of its income that comes from return customers.
A Customer Service department may have as one of its KPIs the percentage of
customer calls answered in the first minute. A Key Performance Indicator for a
development organization might be the number of defects in their code.
You may need to measure several things to be able to calculate the metrics in your
KPIs. To measure progress toward its customer calls KPI, the Customer Service (CS)
department will need to measure (count) how many calls it receives. It must also
measure how long it takes to answer each call. Then the CS Manager can calculate the
percentage of customer calls answered in the first minute and manage toward
improving that KPI.
How To Measure
How you measure is as important as what you measure. In the previous example, we
can measure the number of calls by having each CS representative (CSR) count their
own calls and tell their supervisor at the end of the day. We could have an operator
counting the number of calls transferred to the CS department. The best option,
although the most expensive, would be to purchase a software program that counts the
number of incoming calls, measures how long it takes to answer each, records who
answered the call, and measures how long the call took to complete. These
measurements are current, accurate, complete, and unbiased.
Collecting the measurements in this way enables the manager to calculate the
percentage of customer calls answered in the first minute. In addition, it provides
additional measurements that help him or her manage toward improving the percentage
of calls answered quickly. Knowing the call durations lets the manager calculate if there
is enough staff to reach the goal. Knowing which CSRs answer the most calls identifies
for the manager expertise that can be shared with other CSRs.
Example: 50 defects were seeded in the application and the test team found 30 of them, along with 650 real defects. The estimated total number of defects in the application is therefore 650 * 50 / 30, i.e. roughly 1083 defects.
Or
Defect seeding is the process of intentionally inserting some code (defects) into the program to make the software misbehave. This practice is carried out to measure the test team's performance and how much they know about the product. The developers know where they have kept that code, so in this way we can check the testing team's performance and capabilities.
Ans
Ans. In recent times, independent testing teams have become increasingly popular.
Many organizations have either created an independent test department within the
organization or outsourced testing work to other organizations which specialize in
providing test services. In this paper, a model is proposed to measure the
effectiveness of either kind of independent testing team.
Though there are a number of metrics available for tracking the test life cycle and
product quality, most of them do not provide much insight into how well the test team is
doing and improving.
1. Related to test execution & tracking – Schedule variance, effort variance, etc
2. Related to test coverage – Requirements to test case mapping, Test cases per
KLOC, etc
3. Defects related measurements – Defect arrival rate, Defect closure rate, Defect
leakage, etc
Most of the above mentioned metrics are focused more towards measuring product
quality and whether we are on track for meeting test timelines. A good performance on
these metrics may not mean that the test team is doing well, as the good performance
could be due simply to a good quality product being available for testing. Similarly, a
poor performance on something like schedule could be due simply to the poor quality of
the product under test, even though the test team might be doing a great job under the
circumstances.
The focus of this paper is solely on those metrics which can measure how effectively the
test team itself is performing.
The other consideration was the number of metrics to be tracked. With each additional
metric that we choose to track, we increase the overhead of maintaining them. Besides,
the metrics used should not be too complex to measure for the test team, nor too
complex for management to interpret.
There are a variety of metrics that are used during test phase of project life cycle. Some
of the most common ones have been described and categorized below:
1. Related to test execution & tracking: These metrics measure how well the test
team is executing and tracking against its plan.
a. Schedule variance: This measures how the planned test schedule varies from
the actual one. Variance can arise, for example, from the quality of the product
under test resulting in the test team logging many more/fewer bugs than anticipated.
b. Effort variance: This measures how the planned effort for testing a system
varies from the actual effort required. This could happen due to poor
estimation or the quality of the product under test.
c. Regression test timeline: This measures how much time the test team
takes to do one pass of regression testing on the system under test. This
number can be used for planning future releases as well as budgeting for the
test team on the basis of the product roadmap. It also measures how the
team's execution efficiency is improving over time.
2. Related to test coverage: These metrics measure what kind of test coverage the test
team is achieving.
a. Requirements to test case mapping: This tracks how well the documented
requirements are covered by test cases.
b. Test cases per KLOC: This metric measures how the test suite is growing with
the increasing size of the system under test. Though code and test cases may not
have a 1-1 mapping, by tracking this metric we can identify when there
is a discrepancy in the test cases identified for any new functionality
that has been added. If we add a lot of code for a new functionality but the
number of test cases added is low, this should be investigated. This metric
can also track the number of new test cases being written for new/existing
requirements.
d. Test case efficiency: This metric measures the efficiency of test cases in
identifying bugs in the system. It is equal to the bugs found through
documented test cases, as a proportion of all bugs found.
3. Defects related measurements: There are a number of metrics falling under this
category.
a. Defect arrival rate: For a test phase, we can measure the rate at which
defects arrived. Ideally, the defect arrival rate should reduce over the test
phase.
b. Defect closure rate: The rate at which defects are getting closed. A combined
chart of defect arrival and closure gives a good picture of how the product is
stabilizing.
c. Rejected defects ratio: The proportion of logged defects that are rejected as
invalid. This measures how well the test team understands the product as well as
its communication skills.
d. Build defect leakage: Ideally a bug should be found in the build in which it is
introduced, but that may not happen for a variety of reasons: the affected
functionality may not have been planned to be tested in that build, the impact of
the changes made was not understood, or the test team just missed it. This metric
compares when a bug was identified to when it was introduced.
There will always be different opinions about when a bug got introduced,
and in some cases it may not be possible to go back to older builds and check
this fact, so everyone may have to take the word of the development team in
this regard. This should be used in places where the test process has matured a
lot and there is a good deal of trust between the development and test teams. This
metric measures the efficiency of the test team in identifying all issues within
the build in which they were introduced.
3. Proposed Model
The following two metrics are must-haves for evaluating independent test teams.
1. Regression Test Timeline
This measures how much time the test team takes to do one pass of regression testing on
the system under test. This number can be used for planning future releases as well as
budgeting for the test team on the basis of the product roadmap. It also measures the
team's execution efficiency.
Since regression is something the test team would be running very frequently for most
systems, this number can be measured with very good accuracy. Also, since test
schedules and budgets would be calculated using this, it is in the best interest of
both the test team and management to calculate it correctly. The test team would not
like to fudge any improvement in this metric, as they know they would end up burning
their fingers if their budget were based on an incorrect number. Moreover, this
metric is mostly influenced by how efficiently the test team is managing itself, and is
free of external factors. Management can read any improvement in this metric as a
genuine gain in the team's efficiency. Among other factors, improvement comes from:
b. Faster execution: As the test team learns the product better, its speed in
executing test cases picks up, resulting in a better test timeline.
c. Optimal number of test cases: After multiple test executions, a test team
gets better at understanding the product and its issues. This helps the team
optimize the test suite and remove useless test cases.
(Table: feature-wise breakup showing, for each feature (Feature 1, Feature 2, and so on),
the number of test cases and the time taken to execute them.)
Depending upon the complexity of the product we may have sub-features also, and
test cases might be divided into different complexities, with a different time taken for
the execution of each.
Finally, depending on circumstances, it might be a good idea to add some buffer for
defect logging and dependencies, based on experience gained while executing test
cases for the system. This would be applicable if schedule and budget decisions are to
be made on the basis of this metric. If this metric is being used only to track the
team's efficiency trend, the raw number can be used as it is.
Please note that it is not possible to predict any test timeline with 100% accuracy. The
buffer element takes care of some of the unknowns and should be adjusted based on
experience, but there will still be times when the test schedule goes awry.
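A minimal sketch of this estimation, assuming a simple per-feature breakup and a flat
buffer percentage (all names and figures are illustrative):

    # Illustrative feature-wise breakup: feature -> (test cases, minutes per case)
    features = {
        "feature_1": (40, 10),
        "feature_2": (25, 15),
    }
    BUFFER = 0.20  # 20% buffer for defect logging and dependencies (assumed)

    raw_minutes = sum(cases * mins for cases, mins in features.values())
    estimate = raw_minutes * (1 + BUFFER)
    print(f"One regression pass: ~{estimate / 60:.1f} hours including buffer")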
2. Defect Leakage
Defect leakage is the number of defects found in production, expressed as a
percentage of the total defects logged by the test team. This metric measures the
efficiency of the test team in finding all issues which can be reproduced in-house and
were part of the test team's execution plan. Some products are deployed under diverse
conditions in production, and some of the issues found are impossible to replicate
in-house. For such products this metric can give skewed results, and one must consider
these factors before calling any production defect a leakage.
An Excel sheet can be maintained with all production issues in one tab and all in-house
defects in another tab for calculating this metric. One can select which production
issues to count as leakage, and, using the production bug logging date, one can create a
graph showing the trend of defect leakage over time.
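A minimal sketch of the calculation (the field names and figures are assumptions for
illustration):

    # Hypothetical defect records; 'leakage' marks production bugs that the
    # test team could reasonably have caught in-house.
    in_house_defects = 180
    production_defects = [
        {"id": "P-1", "leakage": True},
        {"id": "P-2", "leakage": False},  # not reproducible in-house
        {"id": "P-3", "leakage": True},
    ]

    leaked = sum(1 for d in production_defects if d["leakage"])
    leakage_pct = 100 * leaked / (in_house_defects + leaked)
    print(f"Defect leakage: {leakage_pct:.1f}%")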
The two metrics mentioned above complement each other very well. The first one
shows how efficient the test team is becoming in test execution, and the second shows
that the improvement in time is not coming at the cost of quality as seen by outside
people/users.
For most test teams, these two metrics are sufficient to track their progress and how
they have improved over a period of time. In some cases, where test teams are internal
to the organization or the outsourced vendors are long-term and have become comfortable
with the system under test, the following additional metrics can also be tracked:
1. Schedule variance: We can measure schedule variance for different releases and
see how the test team is improving across releases. For a long-standing test team,
this variance should keep shrinking.
2. Test case efficiency: Ideally all defects should be found through documented test
cases. This is especially true for long-standing teams, as they are expected to
understand the system under test very well.
Ans. Defect Age is the difference in time between the date a defect is detected and
the current date (if the defect is still open) or the date the defect was fixed. It is a
useful measure of the effectiveness of defect removal. Defect Spoilage is a related metric:
Spoilage = Sum of (number of defects * defect age) / total number of defects
where the defect age is usually measured in phases.
Or
The defect age is often calculated as a phase age (i.e. for how many phases the defect
survived). Because defects are more expensive the later in the life cycle they are found,
it is a good idea to also calculate defect spoilage, which is a weighted average of defect
age. You can find many more such numbers in "Systematic Software Testing". Both numbers
can be hard to calculate, and even gathering all the facts can be difficult.
Or
Defect age calculated in phases = defect detection (or fix) phase - defect injection phase.
Let’s say the software life cycle has the following phases:
1. Requirements Development
2. High-Level Design
3. Detail Design
4. Coding
5. Unit Testing
6. Integration Testing
7. System Testing
8. Acceptance Testing
If a defect is identified in ‘System Testing’ and the defect was introduced in
‘Requirements Development’, the Defect Age is 6.
Defect age is used in another metric called defect spoilage to measure the
effectiveness of defect removal activities.
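A minimal sketch computing phase-based defect age and spoilage for a hypothetical set
of defects:

    PHASES = ["Requirements Development", "High-Level Design", "Detail Design",
              "Coding", "Unit Testing", "Integration Testing",
              "System Testing", "Acceptance Testing"]

    # Hypothetical defects: (phase injected, phase detected)
    defects = [("Requirements Development", "System Testing"),
               ("Coding", "Unit Testing"),
               ("Detail Design", "Integration Testing")]

    ages = [PHASES.index(found) - PHASES.index(injected)
            for injected, found in defects]
    spoilage = sum(ages) / len(defects)  # Sum of (1 defect * age) / total defects
    print(ages)               # [6, 1, 3]
    print(f"{spoilage:.2f}")  # average defect age in phases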
Planning: The scope of the project is established and requirements are gathered before
development begins.
Implementation is the part of the process where software engineers actually program
the code for the project.
Software testing is an integral and important part of the software development process.
This part of the process ensures that defects are recognized as early as possible.
Documenting the internal design of software for the purpose of future maintenance and
enhancement is done throughout development. This may also include the writing of an
API, be it external or internal. It is very important to document everything in the project.
Deployment starts after the code is appropriately tested, is approved for release and
sold or otherwise distributed into a production environment.
Software Training and Support is important and a lot of developers fail to realize that. It
would not matter how much time and planning a development team puts into creating
software if nobody in an organization ends up using it. People are often resistant to
change and avoid venturing into an unfamiliar area, so as a part of the deployment
phase, it is very important to have training classes for new clients of your software.
Maintaining and enhancing software to cope with newly discovered problems or new
requirements can take far more time than the initial development of the software. It may
be necessary to add code that does not fit the original design to correct an unforeseen
problem or it may be that a customer is requesting more functionality and code can be
added to accommodate their requests. If the labor cost of the maintenance phase
exceeds 25% of the prior phases' labor cost, then it is likely that the overall quality of at
least one prior phase is poor. In that case, management should consider the option of
rebuilding the system (or portions of it) before the maintenance cost spins out of control.
Bug Tracking System tools are often deployed at this stage of the process to allow
development teams to interface with customer/field teams testing the software to
identify any real or perceived issues. These software tools, both open source and
commercially licensed, provide a customizable process to acquire, review,
acknowledge, and respond to reported issues. (software maintenance).
Ans. A model can come in many shapes, sizes, and styles. It is important to
emphasize that a model is not the real world but merely a human construct to help us
better understand real-world systems. In general, all models have an information input,
an information processor, and an output of expected results. Modeling Methodology for
Physics Teachers (1998) provides an outline of generic model structure. In "Modeling
the Environment" Andrew Ford gives a philosophical discussion of what models are and
why they are useful; the first few paragraphs of Chapter 1 of Ford's book are worth a look.
Conceptual Models are qualitative models that help highlight important connections in
real world systems and processes. They are used as a first step in the development of
more complex models.
Visualization models: By this we mean anything that can help one visualize how
a system works. A visualization model can be a direct link between data and some
graphic or image output, or can be linked in series with some other type of model to
convert its output into a visually useful format. Examples include 1-, 2-, and 3-D
graphics packages, map overlays, animations, and image manipulation and image
analysis tools.
Ans The Capability Maturity Model (CMM) is a service mark owned by Carnegie
Mellon University (CMU) and refers to a development model elicited from actual data.
The data was collected from organizations that contracted with the U.S. Department of
Defense, who funded the research, and became the foundation from which CMU
created the Software Engineering Institute (SEI). Like any model, it is an abstraction of
an existing system.
When it is applied to an existing organization's software development processes, it
allows an effective approach toward improving them. Eventually it became clear that the
model could be applied to other processes. This gave rise to a more general concept
that is applied to business processes and to developing people.
The CMM was originally intended as a tool to evaluate the ability of government
contractors to perform a contracted software project. It has been used for and may be
suited to that purpose, but critics pointed out that process maturity according to the
CMM was not necessarily mandatory for successful software development. There
were/are real-life examples where the CMM was arguably irrelevant to successful
software development, and these examples include many shrinkwrap companies (also
called commercial-off-the-shelf or "COTS" firms or software package firms). Such firms
would have included, for example, Claris, Apple, Symantec, Microsoft, and Lotus.
Though these companies may have successfully developed their software, they would
not necessarily have considered or defined or managed their processes as the CMM
described as level 3 or above, and so would have fitted level 1 or 2 of the model. This
did not - on the face of it - frustrate the successful development of their software.
Level 1 - Initial (Chaotic): Processes are ad hoc, undocumented and in a state of
dynamic change, driven by individual effort rather than by the organization.
Level 2 - Repeatable: Some processes are repeatable, possibly with consistent
results; basic process discipline makes it possible to repeat earlier successes
on similar projects.
Level 3 - Defined
It is characteristic of processes at this level that there are sets of defined and
documented standard processes established and subject to some degree of
improvement over time. These standard processes are in place (i.e., they are the
AS-IS processes) and are used to establish consistency of process performance
across the organization.
Level 4 - Managed: Processes are quantitatively managed; metrics are used to
measure and control process performance.
Level 5 - Optimizing: The focus is on continually improving process performance
through both incremental and innovative changes.
127. What different sources are needed to verify authenticity for CMMI implementation?
Ans. An appraiser can evaluate and verify the authenticity of a CMMI implementation
using the following:
• Conducting formal interviews with the leads and the team members
• Reviewing documents prepared by the team while following the model
• Conducting surveys and questionnaires
128. Can you explain SCAMPI process ?
Ans. SCAMPI is an acronym for Standard CMMI Appraisal Method for Process
Improvement.
A SCAMPI assessment must be led by an SEI Authorized SCAMPI Lead Appraiser.
SCAMPI is supported by the SCAMPI Product Suite, which includes the SCAMPI
Method Description, maturity questionnaire, work aids, and templates. Currently,
SCAMPI is the only method that can provide a rating, the only method recognized by
the SEI, and the method of most interest to organizations.
There are 3 SCAMPI methods
• SCAMPI class A Appraisal
• SCAMPI class B Appraisal
• SCAMPI class C Appraisal
129. How is appraisal done in CMMI ?
Ans. The CMMI Appraisal is an examination of one or more processes by a trained
team of professionals using an appraisal reference model as the basis for determining
strengths and weaknesses of an organization.
130. Which appraisal method class is the best?
Ans. SCAMPI Class A is the most rigorous method and the only one that can result in an
official maturity level rating. Class B and Class C appraisals are lighter-weight and
are typically used for initial gap analysis and interim progress checks, so the "best"
class depends on whether an official rating or a quicker health check is needed.
136. Can you explain the different methodologies for the execution and design process in
Six Sigma?
Ans. DMAIC and DMADV are the two methodologies for the execution and design process
in Six Sigma.
DMAIC is used to improve an existing business process and has five phases:
Define - Define opportunity
Measure - Measure performance
Analyze - Analyze opportunity
Improve - Improve performance
Control - Control performance
DMADV is used for new product or process design development and has five phases:
Define - Define Opportunity
Measure - Measure CTQ (Critical to Quality)
Analyze - Analyze Relationship
Design - Design solution
Verify - Verify functionality
137. What do executive leaders, champions, Master Black Belts, Green Belts and
Black Belts mean?
Ans. Executive leaders - The people who take the leadership of Six Sigma: the CEO,
owner, or promoter of Six Sigma throughout the organization.
Champions - They have day-to-day responsibility for the business process being improved.
They make sure the Six Sigma project team has the resources required to execute its
tasks.
Master Black Belts - They address the most complex process improvement projects and
provide guidelines and training to Black Belts and Green Belts.
Green Belts - Green Belts assist Black Belts. They have enough knowledge of Six
Sigma, and they apply Six Sigma methodologies at the ground level to solve problems and
improve processes.
Black Belts - They work as team leads or project managers of projects chosen for Six
Sigma. Black Belts select projects, train resources, and implement the projects. They
find the variations and see how to minimize them.
138. What are the different kinds of variations used in Six Sigma?
Ans. Mean - Variations are measured and compared using a mathematical averaging
technique.
Range - Variations are measured as the difference between the highest and lowest
values in a particular data range.
140. Can you explain the concept of the fish bone / Ishikawa diagram?
Ans. The fish bone / Ishikawa diagram is named after Kaoru Ishikawa, a quality expert
from Japan. It is a tool used to visualize, identify and classify possible causes of
problems in a process, product or service; it is also known as a cause-and-effect
diagram. Using this tool the root cause of problems can be identified.
Following are the steps:
1. Identify a problem (effect) with a list of potential causes, and write down the effect.
2. Draw a horizontal line (the spine) pointing to the effect.
3. Identify major causes of the problem, which become the "big branches".
4. Fill in the “small branches” with subcauses of each major cause until the lowest-level
subcause is identified.
5. Review the completed diagram with the work process to verify that these causes
(factors) do affect the problem being resolved.
6. Work on the most important causes first.
7. Verify the root causes by collecting appropriate data (sampling) to validate a
relationship to the problem.
8. Continue this process to identify all causes, and, ultimately the root cause.
The Pareto principle (also known as the 80-20 rule, the law of the vital few, and the
principle of factor sparsity) states that, for many events, roughly 80% of the effects
come from 20% of the causes.
Business management thinker Joseph M. Juran suggested the principle and named it
after Italian economist Vilfredo Pareto, who observed in 1906 that 80% of the land in
Italy was owned by 20% of the population; he developed the principle by observing that
20% of the pea pods in his garden contained 80% of the peas.
It is a common rule of thumb in business; e.g., "80% of your sales come from 20% of
your clients". Mathematically, where something is shared among a sufficiently large set
of participants, there must be a number k between 50 and 100 such that "k% is taken by
(100 - k)% of the participants". The number k may vary from 50 (in the case of equal
distribution, i.e. 100% of the population have equal shares) to nearly 100 (when a tiny
number of participants account for almost all of the resource). There is nothing special
about the number 80% mathematically, but many real systems have k somewhere
around this region of intermediate imbalance in distribution.
The Pareto principle is only tangentially related to Pareto efficiency, which was also
introduced by the same economist. Pareto developed both concepts in the context of
the distribution of income and wealth among the population.
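As a quick illustration of the rule of thumb, the following sketch (with made-up sales
figures) finds the share of revenue contributed by the top 20% of clients:

    # Made-up revenue per client, one entry per client.
    sales = [500, 320, 90, 70, 40, 30, 25, 15, 10, 5]

    sales.sort(reverse=True)
    top_20_pct = sales[: max(1, len(sales) // 5)]   # top 20% of clients
    share = 100 * sum(top_20_pct) / sum(sales)
    print(f"Top 20% of clients contribute {share:.0f}% of sales")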
Quality Function Deployment (QFD) is a quality tool that helps build and deliver a
quality product by focusing the various business functions on achieving a common goal.
8. Based on these process steps, determine set-up requirements, process controls and
quality controls to assure the achievement of these critical assembly or part
characteristics.
FMEA (Failure Modes and Effects Analysis) is a technique to identify potential
problems in a system design or process by examining the effects of lower-level failures.
Based on the results, actions are taken to reduce the recurrence of the
problems/failures and to reduce the risk if they occur again.
Failure modes are any potential or actual problems/defects in a design or process.
Effects analysis is the study of the effects of those failures/problems.
Steps for FMEA:
1. Identify the effect of each failure by failure mode analysis, and identify single
failure points that are critical.
2. Rank each failure and its probability of occurrence.
3. Take action.
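One common way to do the ranking in step 2 (not spelled out in the answer above) is a
Risk Priority Number (RPN). A minimal sketch, with invented failure modes and scores:

    # Common FMEA ranking sketch: Risk Priority Number (RPN).
    # Each factor is scored 1-10; RPN = severity * occurrence * detection.
    failure_modes = [
        ("power loss mid-write", 9, 3, 4),
        ("config file corrupt",  6, 5, 2),
        ("UI label truncated",   2, 7, 1),
    ]

    ranked = sorted(failure_modes, key=lambda f: f[1] * f[2] * f[3], reverse=True)
    for name, sev, occ, det in ranked:
        print(f"{name}: RPN = {sev * occ * det}")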
144. Can you explain X bar charts?
Ans. An X bar chart is a control chart used to monitor the mean (average) of a process
over time. Samples (subgroups) are taken at regular intervals, the mean of each
subgroup is plotted, and control limits around the overall mean show whether the
process is in statistical control.
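A minimal sketch of the X bar arithmetic, assuming subgroups of size 5 and the standard
A2 = 0.577 control chart constant for that subgroup size (data is made up):

    # Hypothetical subgroup measurements (5 readings per subgroup).
    subgroups = [[10.1, 9.8, 10.0, 10.2, 9.9],
                 [10.3, 10.1, 9.7, 10.0, 10.4],
                 [9.9, 10.0, 10.1, 9.8, 10.2]]
    A2 = 0.577  # standard constant for subgroup size n = 5

    xbars = [sum(s) / len(s) for s in subgroups]
    rbar = sum(max(s) - min(s) for s in subgroups) / len(subgroups)
    grand_mean = sum(xbars) / len(xbars)

    ucl, lcl = grand_mean + A2 * rbar, grand_mean - A2 * rbar
    print(f"X-bar points: {xbars}")
    print(f"Center {grand_mean:.2f}, UCL {ucl:.2f}, LCL {lcl:.2f}")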
Agile means quick or lively. This quickness and liveliness can be physical or even
mental: a dancer could have very agile leaps and jumps, and a student might be
commended for having a very agile mind.
Agile Modeling (AM) defines a collection of core and supplementary principles that
when applied on a software development project set the stage for a collection of
modeling practices. Some of the principles have been adopted from eXtreme
Programming (XP) and are well documented in Extreme Programming Explained,
which in turn adopted them from common software engineering techniques. For the
most part the principles are presented with a focus on their implications to modeling
efforts and as a result material adopted from XP may be presented in a different light.
Core Principles:
• Assume Simplicity
• Embrace Change
• Enabling the Next Effort is Your Secondary Goal
• Incremental Change
• Maximize Stakeholder ROI
• Model With a Purpose
• Multiple Models
• Quality Work
• Rapid Feedback
• Working Software Is Your Primary Goal
• Travel Light
Supplementary Principles:
• Content is More Important Than Representation
• Open and Honest Communication
Before creating any document, ask whether we need it, and if we do, who the
stakeholder is. A document should exist only if it is needed, not for the sake of
existence.
The most important thing is that we create documentation to provide just enough data
and no more. It should be simple and should communicate to stakeholders what it needs
to communicate. For instance, the figure 'Agile Documentation' below shows two views
of a simple class diagram. In the first view we have shown all the properties of the
"Customer" and "Address" classes. Now have a look at the second view, where we have
only shown a broader-level view of the classes and the relationships between them. The
second view is enough, and no more. If the developer wants to get into details, we can
do that during development.
Figure: - Agile documentation
Document only for the present, not for the future. In short, we should produce whatever
documentation we require now, not something we might need in the future.
Documentation changes its form as it travels through each cycle: in the requirements
phase it is the requirements document, in design it is the technical documentation,
and so on.
150. What are the different methodologies to implement Agile?
Ans. Some of the well-known methodologies under the Agile umbrella are Extreme
Programming (XP), Scrum, Dynamic Systems Development Method (DSDM), Lean Software
Development (LSD), Adaptive Software Development (ASD), Feature Driven Development
(FDD) and Crystal.
152. What are User Stories in XP and how do they differ from requirements?
Ans. A user story is nothing but an end user's requirement. What differentiates a user
story from a requirement is that user stories are short and sweet: in one sentence, they
are just enough and nothing more. A user story should ideally be written on an index
card; the figure 'User Story Index Card' below shows such a card. It is a 3 x 5 inch
(8 x 13 cm) card, which keeps your stories as small as possible. Requirement documents
run to pages; because we keep stories short, they are simple to read and understand.
Traditional requirement documents are verbose and tend to lose the main requirement of
the project.
A user story is written and owned by the end customer and no one else.
The XP development cycle consists of two phases: 'Release Planning' and 'Iteration
Planning'. In release planning we decide what should be delivered and with which
priority. In iteration planning we break the requirements into tasks and plan how to
deliver the activities decided in release planning. The figure 'Actual Essence' below
shows what these two phases actually deliver.
Figure: - Actual Essence
If you still have the old SDLC in mind, the figure 'Mapping to Traditional Cycle' below
shows how the two phases map to the SDLC.
So let’s explore both these phases in a more detailed manner. Both phases “Release
Planning” and “Iteration Planning” have three common phases “Exploration”,
“Commitment” and “Steering”.
1. Exploration Phase
2. Planning Phase
3. Iterations to Release Phase
4. Productionizing Phase
5. Maintenance Phase
The first phase that an XP project experiences is the Exploration phase (Beck, 2000),
encompassing the initial requirements modeling and initial architectural
modeling aspects of the agile software development lifecycle. This phase includes
development of the architectural spike and the development of the initial user
stories. From a requirements point of view he suggests that you require enough
material in the user stories to make a first good release and the developers should be
sufficiently confident that they can’t estimate any better without actually implementing
the system. Every project has a scope, something that is typically based on a
collection of initial requirements for your system. Although the XP lifecycle presented
in Figure 1 does not explicitly include a specific scope definition task it implies one with
user stories being an input into release planning. User stories are a primary driver of
the XP methodology – they provide high-level requirements for your system and are the
critical input into your planning process. The implication is that you need a collection of
user stories, anywhere from a handful to several dozen, to get your XP project started.
157. Can you explain how planning game works in Extreme Programming?
If you read the Agile cycle carefully (explained in the previous section) you will see Agile
estimation happens at two places.
User Story Level Estimation: - In this level a User story is estimated using Iteration
Team velocity and the output is Ideal Man days or Story points.
Task Level Estimation: - This is a second level of estimation. This estimation is at the
developer level according to the task assigned. This estimation ensures that the User
story estimation is verified.
Estimation happens at two levels: once when we take up the requirement, and once when
we are very near to execution, at the task level. This is logical because the nearer we
are to executing a task, the clearer the estimate becomes. Task-level estimation
therefore acts as a cross-verification of the user story level estimation.
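A minimal sketch of this two-level arithmetic, assuming a measured team velocity in
story points per iteration (all names and numbers are illustrative):

    # User-story-level estimate: story points and a measured team velocity.
    story_points = {"login": 5, "search": 8, "checkout": 13}
    velocity = 10  # story points the team completes per iteration (assumed)

    iterations = sum(story_points.values()) / velocity
    print(f"~{iterations:.1f} iterations at the user story level")

    # Task-level cross-check: developers estimate each story's tasks in ideal hours.
    task_hours = {"login": [6, 4], "search": [10, 8], "checkout": [16, 12, 6]}
    total_hours = sum(sum(t) for t in task_hours.values())
    print(f"{total_hours} ideal hours to verify the story-level estimate")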
A user story should normally be prioritized from the business importance point of view,
but in real scenarios this is not the only criterion. Below are some of the factors to
be considered when prioritizing user stories:
Prioritize by business value: The business user assigns a value according to the
business needs. There are three levels of rating for business value:
o Most important features: Without these features the software has no meaning.
o Important features: It is important to have these features, but if they do not exist
there are alternatives by which the user can manage.
o Nice-to-have features: These features are not essential, but are the cream on top for
the end user.
Prioritize by risk: This factor helps us prioritize by risk from the development angle.
A risk index from 0 to 2 is assigned in each of three main categories:
o Completeness
o Volatility
o Complexity
The figure "Risk Index" below shows the values and the corresponding classification.
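A minimal sketch of risk-based prioritization; summing the three category scores into
one index is an assumption for illustration, since the original figure is not reproduced
here:

    # Sketch: prioritizing stories by a summed risk index (0-2 per category).
    # Summing the three categories is an assumed combination rule.
    stories = {
        "checkout": {"completeness": 2, "volatility": 1, "complexity": 2},
        "login":    {"completeness": 0, "volatility": 0, "complexity": 1},
    }

    for name, risk in sorted(stories.items(),
                             key=lambda kv: sum(kv[1].values()), reverse=True):
        print(f"{name}: total risk index = {sum(risk.values())}")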
160. Can you point out simple differences between Agile and the traditional SDLC?
Ans. Lengthy requirement documents become simple, short user stories.
The estimation units man-days and man-hours become ideal days and ideal hours
respectively.
In the traditional approach we freeze the requirements, complete the full design and
then start coding. In Agile we design task by task, so just before a developer starts a
task he does the design for it.
In the traditional SDLC we always used to hear 'after sign-off nothing can be changed';
in Agile we work for the customer, so we accept changes.
Unit test plans are written after or during coding in the traditional SDLC. In Agile we
write unit test plans before writing the code.
Figure: - Agile and Traditional SDLC
Refactoring is "the process of changing a software system in such a way that it does not
alter the external behavior of the code yet improves its internal structure," according to
Martin Fowler, the "father" of refactoring. The concept of refactoring covers practically
any revision or cleaning up of source code, but Fowler consolidated many best
practices from across the software development industry into a specific list of
"refactorings" and described methods to implement them in his book, Refactoring:
Improving the Design of Existing Code. While refactoring can be applied to any
programming language, the majority of current refactoring tools have been developed
for the Java language.
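A tiny illustrative example of one classic refactoring ("extract method"); the code and
names are ours, not taken from Fowler's catalogue text:

    # Before: one function mixes price calculation with report formatting.
    def print_invoice_before(items):
        total = 0
        for qty, price in items:
            total += qty * price
        print(f"Invoice total: {total:.2f}")

    # After extract-method refactoring: same external behavior,
    # but the calculation now lives in its own well-named function.
    def invoice_total(items):
        return sum(qty * price for qty, price in items)

    def print_invoice(items):
        print(f"Invoice total: {invoice_total(items):.2f}")

    print_invoice([(2, 9.99), (1, 4.50)])  # Invoice total: 24.48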
Ans. There are in all five phases in the DSDM project life cycle:
1. Feasibility Study: During this stage we examine whether the project is suitable for
DSDM. For that we need to answer questions like "Can this project fulfil the
business needs?", "Is the project a fit for DSDM?" and "What are the prime risks
involved in the project?".
2. Business Study: - Once we have concluded that the project has passed the
feasibility study in this phase we do a business study. Business study involves
meeting with the end customer/user to discuss the proposed system. In one
sentence it’s a requirement gathering phase. Requirements are then prioritized
and time boxed. So the output of this phase is a prioritized requirement list with
time frames.
3. Functional Model Iteration: - In this phase we develop prototype which is
reviewed by the end user.
4. Design and Build Iteration: - The prototype which was agreed by the user in
the previous stage is designed and built in this stage and given to the end user
for testing.
5. Implementation: Once the end user has confirmed that everything is all right, it is
time to deploy the system to the end user.
Figure: - DSDM Project Life Cycle
173. Can you explain LSD?
Ans. Lean Software Development (LSD) has derived its principles from lean
manufacturing. The figure 'Principles of LSD' shows all the principles: eliminate
waste, amplify learning, decide as late as possible, deliver as fast as possible,
empower the team, build integrity in, and see the whole.
ASD (Adaptive Software Development) accepts that change is a certainty. It also
accepts, as a principle, that mistakes can happen and that it is important to learn from
those mistakes for the future. The figure 'ASD Cycle' shows the three important phases
in ASD:
Speculate (planning): Plan the iteration based on the current understanding of the
requirements, accepting that the plan may change.
Collaborate (coding / execution): Execute the tasks as planned and test them.
Learn (review and feedback into planning): At the end of the iteration, see what
lessons we have learnt and apply them to the next iteration.