Manufacturing
ISEN 645
FA2016
There’s no crying in the Gemba – 6σ is performed at the point of production, so get your PPE ready…
15: 7DEC* (WED) Project briefings; schedule and timing TBD
16: Final 9DEC 0730-0930a
***KD scheduled review after class at 7p; team leads – quad-chart status briefing (3-5 min)
Lean system engineering is the systematic reduction of “waste” in the production system
Factory Physics provides principles and practices to operate the system effectively despite the variability
6σ is a systematic, continuous, and rigorous effort to reduce and/or eliminate variability
ISEN 645, Lean Production System Design, is a fusion of all three
Project news
Project summary & approach — technical objectives by team:
1. Inventory planning and management — SOP for management of incoming and FGI
2. Routing — SOP for load, time, distance balancing algorithm***
3. Inventory planning and management — Demand planning tool (Q,r) model; FGI policy
4. Bottling line — Layout; equipment specification
5. Routing — Load planning and management
6. Bottling line — Layout and QC
7. Routing — SOP characterizing the process of route planning and management
8. Routing — ?
9. Bottling line — Liquid mix equipment and processing
10. Routing — Route process WM&M; route balancing plan; SOP
12. Inventory planning and management — Tool to assist in inventory management
13. Bottling line — Equipment; C/O
14. Bottling line — Heijunka tool; demand load leveling; EPEI
15. Bottling line — Bottling equipment and C/O
The production of goods or services is
lean if it is accomplished with minimal
buffering costs.
-Wallace J. Hopp (Supply Chain Science)
Week 12
Prelude to Class 24…
• DMAIC fuses the value and perfection principles of Lean into our design.
• D is for Define – this is the activity that the other four hang on; if we do not define the production system, the information to be managed, and
make careful distinctions of terms, then the measurement, analysis, improvement, and control phases are at risk of the dreaded type III problem
(solving the wrong problem). This is our chance to capture the VOC thread throughout the PS definition. Ideally this phase would have been
accomplished during the PS design, but we can produce the IDEF0 of the PS at any time.
• M – rigor and precision of the VOC translated into the Ys
• A – analyze the data (surrogates for the effects) and vector our resources to the “cause”
• I – make the improvements that eliminate root cause and alleviate the effects
• C – operate and monitor the health of the system as compared to our expectation from the PS as designed
• If we were on our engineering toes the DMAI phases would be dealt with during the LPS design [hence the phrase, Lean-6σ] and the C phase would
be put in place to monitor the PS during operation in the Gemba. The care and feeding of the PS (during its entire life-cycle) is the job of the ISEN.
Belts help, but the PS is under the care of the chief physician, namely the ISEN. The better schooled we are in SE – the better the LPS will operate
IAW its design.
Harmony through SE: LPS design and 6σ… SEeing the light!
• We obviously can perform 6σ in isolation of LPS design, but ISEN 645 is about LPS design and the 1st
and 5th principles are directed at a design that is VOC centric and variation resilient
• As ISENs we are charged with the definition, design, implementation, operation, monitoring,
remediation, etc. of the entire production system. It is critical to understand issues within the greater
context – the PS
• We are ISENs first and “belts” second – for LPS design we instrument the production system with the
VOC, to be measurable, to be controllable.
• Performed in isolation from the LPS design the D within the DMAIC is often “flowcharted” and
forgotten. But we should be careful to avoid this form of definition – we risk local optimization if we
do not locate this D within the larger context of the PS (aka the CONOPS of the PS)
• Critical work takes place during definition – the PS processes are engineered and enabled for
monitoring and control
• As ISENs we need to take stock in the SE part of our degree and build the blueprints for the PS using
bona fide SE methods; integrate the threads of 6σ into the artifacts that represent the LPS design.
• Note that the major thread of 6σ is the VOC and its translation from Need to CTQ (requirements)
to Y (what we measure in the PS) to X (what we control to produce the Y effectively, efficiently,
repeatedly, reliably, consistently). Design and define; define at any time – we can and should locate
what we are doing, the changes being made to the transformative processes, within the CONOPS of
the PS.
LPS design is a fusion of SE and Lean P&P and instrumented for continuous variability reduction via 6σ P&P
Lean 6σ – an overview
Point of origin…
• The roots of Six Sigma as a measurement standard can be traced back to
Carl Friedrich Gauss (1777-1855) who introduced the concept of the normal
curve.
• 1920’s Walter Shewhart showed that three sigma from the mean is the point
where a process requires correction.
• Many measurement standards (Cpk, Zero Defects, etc.) since.
• The term “Six Sigma” is credited to a Motorola engineer named Bill Smith.
• “Six Sigma” is a federally registered trademark of Motorola.
• It has evolved into more of a method (perhaps a system) from its humble
beginnings as a measurement standard.
The use of 6σ as a
metric is profound
• If my process is 3σ capable then any shift in the mean will
produce > 1350 ppm defects
• But if the process is 6σ capable then the process can support a
shift in the process mean of up to 1.5σ and still produce only
3.4 ppm defects
• Motorola determined that, in the long run, processes tend to
drift – that is the process mean drifts. To ensure that their
processes were still “capable” an adherence to 6σ processes
became the standard at Motorola.
• Now the production process had 6σ on either side of the
center of the process.
• It all looks good on paper and in graphs – but the reality is that
the engineers must build the processes that adhere to the
standard!
• At which point you have likely and astutely identified that 3.4
ppm defects is actually associated with 4.5σ from the mean –
yes, when we discuss 6σ we are making the tacit assumption
that it can handle the worst case of a shift in the process mean
of up to 1.5σ
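The ppm figures quoted above are just the normal CDF evaluated at the spec limits. A minimal sketch (the function name and structure are my own, not from the course material):

```python
from math import erf, sqrt

def defect_ppm(sigma_level, mean_shift=0.0):
    """Two-sided defect rate (ppm) for a process whose spec limits sit
    sigma_level standard deviations from the target, with the process
    mean shifted by mean_shift standard deviations toward one limit."""
    phi = lambda x: 0.5 * (1.0 + erf(x / sqrt(2.0)))  # standard normal CDF
    upper_tail = 1.0 - phi(sigma_level - mean_shift)  # the tail the mean drifts toward
    lower_tail = phi(-(sigma_level + mean_shift))     # the far tail shrinks
    return (upper_tail + lower_tail) * 1e6
```

With `sigma_level=3` and no shift this returns roughly 2700 ppm (about 1350 per tail); with `sigma_level=6` and a 1.5σ shift it returns the familiar 3.4 ppm, which is why 6σ quality is quoted at 4.5σ of effective tail distance.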
• During the analyze phase of DMAIC we move into the drivers [X’s] or causes for
the performance of the Y’s
• We then improve the X’s to impact the Y’s
SIPOC
SIPOC’s value includes
identifying the production
causal chain
• One process’ outputs may be another’s inputs – the X’s and the Y’s can exchange roles; C/E chains
• Example: the VOC complains about delays in order receipt: Y = % on time delivery; X1 = order lead time;
X2=scrap rate; X3=%uptime. For the production manager the order lead time is Y and the production and
assembly cell cycle times might be the X’s. For the cell supervisors the cell CT is the Y and the workstation CT
is X1. At the workstation the CT is the Y and the control settings on the equipment is X1.
X’s, Y’s, Causes, Effects, Symptoms, Problems, Treatments, Cures… if we are going to be successful we need to make clear distinctions.
Making Distinctions is the backbone behind Systems Engineering and Knowledge Engineering
Are we victims of inadvertently cooking the data?
Analysis of Variance for Gage R&R
• The Analysis of Variance (ANOVA) can also be used to analyze Gage R&R studies.
• In ANOVA terminology, most Gage R&R studies have an ANOVA-type data structure:
Y_ijk = μ + Part_i + Operator_j + repetition_(ij)k
• Part has variance σp²; Operator has variance σo²
• A variance component analysis can easily be done in most software packages.
FMEA – preventive
Divide and conquer:
[Figure: internal and external set-up elements – measure, separate, convert, reduce/eliminate (repeat)]
The Tools and Techniques
Affinity Diagram, Benchmarking, Brainstorming, Cause & Effect Diagram, Confidence Intervals, Control Charts, Control Plan, DFSS [Design For 6σ], DOE, F-test, Fault Tree Analysis, FMEA, Histogram, Historical Data Analysis, IPO Diagram, Kanban, Kano’s Model, Knowledge Based Management, Measurement System Analysis, Mistake Proofing, Nominal Group Technique, Pairwise Ranking, Pareto Chart, PF/CE/CNX/SOP, Physical Process Flow, Process Capability Analysis, Process Flow Diagram, Process Observation, Project Charter, Quality Function Deployment, Reaction Plan, Reality Tree, Regression Analysis, Run Charts, Scatter Diagram, SIPOC Model, Standard Operating Procedures, Standard Work, Takt Time, Task Appraisal / Task Summary, t-test, Thematic Content Analysis, Theory of Constraints, Time Value Map, Total Productive Maintenance, Tukey End Count Test, Value Stream Mapping, Visual Management, Voice of Customer, Waste Analysis, Work Cell Design, 5S Workplace Organization, 5 Whys
PF/CE/CNX/SOP: PF – Process Flow; CE – Cause & Effect; CNX – Constant, Noise, Experimental; SOP – Standard Operating Procedure
For the record there are as many variants of this type of chart as there are tools. Many tools overlap and are used in many phases of the DMAIC. The bucketing of the tools into the phases is not critical and certainly not mutually exclusive. Some listed are amalgams of others.
What is critical is that we define a project that makes a difference and one where we can measure that difference reliably even if it is performed with chalk, a protractor, and an abacus.
Before we continue - a word on other 6σ methods in use
in industry and potential sources of confusion
the method
Without a Lean Transformation effort the DMAIC method is a structured continuous improvement program.
[IDEF0 figure: input I1, the Lean production system [LPS]; A1 Define → project definition; A2 Measure → verified, vetted measurement data]
IQC
• As we know – Lean is a fusion of:
• Lean P&P applied to the Process
• Integrated PC
• Integrated IC, and
• Integrated QC
• Lean makes QC everyone’s job – it
must since we have a tenet that
demands zero-defects
• But IQC is data driven – this means
that the veracity of the data is also
everyone’s job – so the process and
procedures for acquiring this data are
also part of the integration
IQC Principles
• Design and operate the process to prevent defects
• Self-correction of production errors – one-piece flow
• Display QC characteristics prominently
• Conformance ahead of output rates
• Technicians can stop the process; better through Poka-Yokes
• Check 100% of critical part attributes
• Continuous quality improvement at the cell, workstation – Quality circles
• Eliminate incoming inspection
• Eliminate setup time to enable unit flow
• Lot size 1
• 5s in the work place
• Data veracity***
If you are operating the PS as if the resources are scarce – you are running Lean
M1: Takeaways
• Our interest in 6σ is to improve our LPS designs – perfection is planned
for, the VOC is manifold and must be integrated throughout
• Lean P&P structure and facilitate our design
• Once in operation – striving towards the design we need a continuous
program of measurement, comparison, refinement
• The backbone of Lean 6σ is quantitative analysis; data driven improvement
• The program is better served with a cadre aboard
• Can management keep the Lean ship from dwarfing the profit goal?
• Design it, implement it, monitor it, fix it … perfect product at the takt
• We turn next to the tools and what data driven analysis means
M2:
Define
Is the 6σ engineer a role, a formal position, or both?
Definition means:
• Support from corporate [buy-in, backing, and participation]
• Identification of candidates that make a difference
• Candidates need definition [aka process definition]
• Quantifiable and Measurable [“data driven”]
• Risk & Return estimated [ROI is often estimated]
• Selection [Critical to… Cost, Delivery, Quality]
• Charter [document w/ signatures and authorization]
Often a Business case is required.
DMAIC
ROI is almost always a point of contention since it is a projection. Credible data helps – but the due diligence
exercise in obtaining that data can start to equate to “working the project before working the project”.
There must be a level of rational risk assumed on the part of the decision makers regarding the ROI.
Much of the literature describing the execution of the DMAIC method often [explicitly] assumes 6σ experts [Master, Black, Green] are leading the project…
As a good first step we should see what our own data is telling us about the high drivers
Vilfredo Pareto to the rescue
Define
Phase
• Pareto Analysis
• Part of the responsibility of the champion is to perform a
cost of poor quality (COPQ) analysis.
• This is based on the PAF categorization of costs.
• P-A-F stands for Prevention Appraisal Failure a method for
classifying quality costs
• Performing a study of internal and external failure costs
will help to determine where the most benefit can be
found.
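A COPQ Pareto is simple enough to sketch in a few lines; the category names and dollar figures below are hypothetical, purely to show the cumulative-percentage mechanics behind finding the vital few:

```python
def pareto(costs):
    """Rank categories by cost (descending) and report each one's
    cumulative percentage of the total cost of poor quality."""
    total = sum(costs.values())
    running = 0.0
    rows = []
    for name, cost in sorted(costs.items(), key=lambda kv: -kv[1]):
        running += cost
        rows.append((name, cost, 100.0 * running / total))
    return rows

# Hypothetical monthly COPQ figures (illustrative only)
copq = {"Scrap": 42000, "Rework": 28000, "Warranty returns": 15000,
        "Incoming inspection": 9000, "Downtime": 6000}
```

Here the top two categories (scrap and rework) carry 70% of the cost, which is where the champion would point the first project.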
• Problem Definition
• Project definition consists of a problem statement, project
goals/objectives, primary metrics, secondary metrics, and
team member identification.
Project Charter
Define
In this phase, the leaders of the project create a Project
Charter, a high-level view of the process, and begin to
understand the needs of the customers.
This is an accounting exercise
finding out what is important and its relation to how we produce
• VOC – largely handled through interviews and survey
• Kano is a tool to help prioritize the VOC through a lens that distinguishes
between performance, nice to have, and expected but not articulated
• Translation to CTQ [what is measurable; libraries exist]
• SIPOC – is a mapping between our core process, the core artifacts [inputs
and outputs], and the core players [suppliers and customers]
• Measureable means…
M3:
Measure
Quo Vadimus? [Where are we going?]
• 6σ is a vehicle to help us realize our Lean design both during
and after the transformation to Lean
• 6σ has moved from design standard to standard method:
DMAIC
• DMAIC as a method
• Define – identify the VOC and transform it into the Y’s
• Measure the Y’s – make their measurement repeatable and
reproducible
• Analyze the system – what gives rise to the Y’s or prevents us from
achieving the customer’s level of Y? ANS: the X’s, aka the causes
• Improve the system Y’s – put engineering and management controls on
the X’s
• Control the system, quantitatively, to ensure that we sustain the gains
From VOC to Y’s [aka CTQs]
• One reasonable way to quantify the VOC is to:
• Capture what is critical to the customer in a survey, interviews
• At this point we are translating Needs into Requirements
• Many tools can be used to assist in the classification and prioritization of these needs: Kano,
QFD, XY matrix – but what we are doing is classifying and aligning and prioritizing
• This has been done so many times that patterns exist and the resulting classifications end up
something we can catalog IAW what is Critical to Quality [CTQ]…
• The translation from VOC to CTQ is the journey from a statement that may be vague or
qualitative in nature to something we can measure in our production process
• During the analyze phase of DMAIC we move into the drivers [X’s] or causes for the
performance of the Y’s
• We then improve the X’s to impact the Y’s
Know the process.
Know what’s critical to that process.
What quantifiable measurement encapsulates that criticality?
Can we make that measurement while minimizing other sources of error?
We need to ensure that the apparatus used for measurement and the operator using the apparatus are not sources of error.
DMAIC
Measure phase
Measure Phase
• Two Major Steps in the Measure Phase
• Selecting process outcomes
• Verifying measurements
• Measure Phase Tools
• Process Map
• XY Matrix
• Gage R&R
• Capability Assessment
We’ve discussed this before
Process maps
• There are many maps but few models
• Maps lack a standard
• Models are developed using a method – there’s enough error in what we
do and how we communicate; there is no sense in adding to it.
• Methods make the work and the products of that work repeatable and
reliable.
• VSMs help
• If we want to measure we need to characterize the process in detail – we
need to know how the measurement values arise
• Bad processes produce bad values
• Bad processes can lead the operator into errors
What factors drive the quantifiable analog to the VOC? http://asq.org/service/body-of-knowledge/tools-xy-matrix
Step 1. Start with voice of customer input. This may come from surveys, focus groups, or other efforts to collect customer priorities. List the customer requirements at the top of a column – these are the Y
data for the XY matrix. In the example, a computer support group is trying to determine how to improve its services. Customers indicated that “fast and easy to get help” is their number one priority.
Step 2. List the priorities identified through voice of customer above each column. In this case, five is the highest priority; one is lowest. This prioritization can be used for weighting the results.
Step 3. List the possible inputs, or service improvements, in each row – these become the X data for the XY matrix.
Step 4. Assess the relationship between the customer priorities and each of the inputs or ideas for the X rows. In the example, the team chose to use a scale of one to 10 (10 being highest correlation).
Step 5. Correlation assessment: use the calculated values in the far right column to determine ranking.
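Steps 1–5 reduce to a weighted sum per input row. A sketch (the requirements, inputs, and ratings below are invented for illustration, not taken from the ASQ example):

```python
def xy_matrix_scores(priorities, correlations):
    """priorities: weight per customer requirement (the Y columns).
    correlations: {input_name: [1-10 correlation with each Y]}.
    Returns inputs ranked by the weighted sum (the far-right column)."""
    scores = {x: sum(p * c for p, c in zip(priorities, corr))
              for x, corr in correlations.items()}
    return sorted(scores.items(), key=lambda kv: -kv[1])

# Hypothetical computer-support example: two Ys weighted 5 and 3
ranked = xy_matrix_scores(
    [5, 3],
    {"Add phone hotline": [9, 4],       # 5*9 + 3*4 = 57
     "Self-service portal": [7, 8]})    # 5*7 + 3*8 = 59
```

The ranking, not the absolute score, is what drives which X to pursue first.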
Measure Phase
• Selecting Process Outcomes
• To define process outcomes, you first need to understand the process. This involves
process mapping. A process map is a flowchart with responsibility. The goal with a
process map is to identify non-value added activities.
• Two important measures that are monitored are defects per unit (DPU) and defects
per million opportunities (DPMO).
• The XY matrix is used to identify inputs (X’s) and outputs (Y’s) from a project you
have mapped and are desiring to pursue.
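DPU and DPMO are one-line calculations; a sketch with made-up counts (the numbers are illustrative only):

```python
def dpu(defects, units):
    """Defects per unit: average number of defects observed per unit."""
    return defects / units

def dpmo(defects, units, opportunities_per_unit):
    """Defects per million opportunities: normalizes the defect count by
    the number of distinct ways each unit could be defective."""
    return 1e6 * defects / (units * opportunities_per_unit)

# e.g. 15 defects found in 500 units, each with 20 defect opportunities
```

DPMO is the quantity that maps onto sigma levels (3.4 DPMO for a 6σ process with the 1.5σ shift).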
Measure Phase
• Verifying Measurements
• It is necessary to use gauges, calipers and other tools when measuring critical
characteristics of processes.
• Measurement system analysis (MSA) is used to determine if measurements are
consistent.
• Product and process capability analysis is another approach for verifying
measurements. [NB: we will review this in the Analyze phase]
• Gage Repeatability and Reproducibility Analysis (Gage R&R) is the most commonly
used MSA.
Measure Phase
• Verifying Measurements (continued)
• Reasons for problems in measurement
• The measurement gauges are faulty.
• Operators are using gauges improperly.
• Training in measurement procedures is lacking.
• The gauge is calibrated incorrectly.
• Statistical experiments using analysis of variance (ANOVA) are useful in performing
Gage R&R.
• Two-way ANOVA is used to determine whether variation comes from the part being
measured, differences in operator measurements, or from the measurement
instrument.
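The two-way ANOVA decomposition can be sketched directly from the expected mean squares of the crossed study. This is my own minimal implementation of the standard formulas, not Minitab's code:

```python
import numpy as np

def gage_rr_anova(y):
    """Crossed Gage R&R variance components via two-way ANOVA.
    y: array of shape (parts, operators, replicates)."""
    p, o, r = y.shape
    grand = y.mean()
    part_means = y.mean(axis=(1, 2))
    oper_means = y.mean(axis=(0, 2))
    cell_means = y.mean(axis=2)

    ss_part = o * r * ((part_means - grand) ** 2).sum()
    ss_oper = p * r * ((oper_means - grand) ** 2).sum()
    ss_cell = r * ((cell_means - grand) ** 2).sum()
    ss_inter = ss_cell - ss_part - ss_oper
    ss_rep = ((y - grand) ** 2).sum() - ss_cell

    ms_part = ss_part / (p - 1)
    ms_oper = ss_oper / (o - 1)
    ms_inter = ss_inter / ((p - 1) * (o - 1))
    ms_rep = ss_rep / (p * o * (r - 1))

    # Variance components from expected mean squares; negative estimates
    # are truncated to zero
    var_rep = ms_rep                                    # repeatability
    var_inter = max((ms_inter - ms_rep) / r, 0.0)
    var_oper = max((ms_oper - ms_inter) / (p * r), 0.0)
    var_part = max((ms_part - ms_inter) / (o * r), 0.0)
    reproducibility = var_oper + var_inter
    return {"repeatability": var_rep, "reproducibility": reproducibility,
            "part": var_part, "gage_rr": var_rep + reproducibility}
```

A healthy measurement system puts nearly all of the variance in the part component, leaving the gage R&R share small.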
Gage R&R
Estimating measurement components
Gage capability and acceptability measures
http://www.stat.purdue.edu/~kuczek/
Tom Kuczek’s course in STAT has very good discussion of Gage R&R and ANOVA
The material in this section is based on a discussion in the Minitab manual plus notes from Kuczek’s
discussion of Gage R&R
Gage R&R
Process Variability is composed of variation from many sources: how much is due to the measurement system itself?
From Minitab…
How Might Measurement Error Appear?
[Figure: two histograms of the same process against its LSL and USL – the top panel with no measurement error, the bottom panel with measurement error; the observed distribution is wider than the true process distribution]
Measurement System Terminology
• Discrimination - Smallest detectable increment between two measured values
• Accuracy related terms
• True value - Theoretically correct value
• Bias - Difference between the average value of all measurements of a sample and the true
value for that sample
• Precision related terms
• Repeatability - Variability inherent in the measurement system under constant conditions
• Reproducibility - Variability among measurements made under different conditions (e.g.
different operators, measuring devices, etc.)
• Stability - distribution of measurements that remains constant and predictable over time for both the mean and standard
deviation
• Linearity - A measure of any change in accuracy or precision over the range of instrument capability
Terms and Definitions
•Repeatability refers to the measurement
variation obtained when one person
repeatedly measures the same item
with the same gage.
•Reproducibility refers to the variation due
to different operators using the same
gage measuring the same item.
http://www.stat.purdue.edu/~kuczek/
Isolate and get rid of the equipment and operator as sources of error, if possible
Layout of Typical Gage R&R Study
Analysis of Gage R&R Study Data
Ranges are [were] used as a quick way to estimate variability
Under 10 [% Gage R&R]: Generally considered to be an acceptable measurement system. Recommended, especially useful when trying to sort or classify parts or when tightened process control is required.
http://asq.org/conferences/six-sigma/2010/pdf/proceedings/c6.pdf
M3: Takeaways
• Measurement ensures that we quantify the process or critical aspects of the
process – developing a detailed description of the process is critical
• The process of measurement itself is open to various sources of error
• The “gage” used to perform the measurement, the operator performing the
measurement – all sources of variability that can bias the results and lead us
to think there is significance when in fact there is none
• Therefore we must account for their contribution to the overall variability
term and identify how significant their contribution is
• Gage R&R
M4:
Analyze
Analyze
C/E analysis
5-Whys moving away from
symptoms and towards cause
Why did the fuel control fail test? The cpin diagnostic failed.
Why couldn’t the instruction sets be run? The mix chamber could not be pressurized.
Why was the seal compromised? The O-ring was bad…
The Toyota Way; Liker
The widely used Ishikawa “Fishbone” diagram
This is a recording device not a method
Is there a way to get ahead? Anticipate problems and design them out of the system?
Working from the system towards modes of failure - anticipatory
Analyze
• Allows us to identify areas of our process that most impact our customers
• Helps us identify how our process is most likely to fail
• Points to process failures that are most difficult to detect
History of FMEA
• First used in the 1960’s in the Aerospace industry
during the Apollo missions
• In 1974, the Navy developed MIL-STD-1629 regarding
the use of FMEA
• In the late 1970’s, the automotive industry was driven
by liability costs to use FMEA to reduce risks related to
poor quality
FMEA inputs: C&E Matrix, Process Map, Process History, Procedures, Knowledge, Experience
FMEA outputs: List of actions to prevent causes or detect failure modes; History of actions taken
Severity, Occurrence, and Detection
• Severity
• Importance of the effect on customer requirements
• Occurrence
• Frequency with which a given cause occurs and
creates failure modes (obtain from past data if possible)
• Detection
• The ability of the current control scheme to detect
(then prevent) a given cause (may be difficult to estimate early in process
operations).
“The Risk Priority Number, or RPN, is a numeric assessment of risk assigned to a process, or steps in a process, as part of Failure Modes and Effects Analysis (FMEA), in which a team assigns each failure mode numeric values that quantify likelihood of occurrence, likelihood of detection, and severity of impact.”
Rating Scale
• Severity: 1 = Not Severe, 10 = Very Severe
• Occurrence: 1 = Not Likely, 10 = Very Likely
• Detection: 1 = Easy to Detect, 10 = Not Easy to Detect
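The RPN is the product of the three 1-10 ratings, and failure modes are worked in descending RPN order. A sketch with hypothetical failure modes and ratings:

```python
def rpn(severity, occurrence, detection):
    """Risk Priority Number: the product of the three 1-10 ratings."""
    for r in (severity, occurrence, detection):
        if not 1 <= r <= 10:
            raise ValueError("ratings must be on a 1-10 scale")
    return severity * occurrence * detection

# Hypothetical failure modes: (severity, occurrence, detection)
modes = {"Seal leak": (8, 3, 6),    # RPN 144
         "Mislabel": (4, 5, 2),     # RPN 40
         "Overheat": (9, 2, 7)}     # RPN 126
ranked = sorted(modes, key=lambda m: -rpn(*modes[m]))
```

Note that a high-severity mode can still rank below a lower-severity one if it is rare and easy to detect, which is why teams often review severity separately as well.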
Improve is dedicated to problem elimination first then mitigating its effects second.
[Figure: internal and external set-up elements – measure, separate, convert, reduce/eliminate (repeat)]
Set-up Time
Set-up time is the total elapsed time from the last good piece to the first good piece.
Set-up Time
There are two types of setup time:
Internal Set-up:
Those activities that must be performed while the
machine is shut down. (Work content done in
addition to Machine Time.)
Example: Removing dies and tooling
External Set-up:
Those activities that are performed while the
machine is operating.
Example: Preparing tooling for the next set-up
Set-up Reduction Process
Example rows from the chart:
• Step 9 – Preset tooling (Tool Change), 1'48: standardize procedures to externalize this step.
• Step 12 – Qualify 1st Piece (Gage), 2'26: standardize procedures to externalize this step.
Tool Change
- Replacing existing tooling
Programming
- Making adjustments or changes to a CNC program in order to accommodate the new set-up
Walk time
- The time an operator must walk to retrieve a fixture, tools etc.
1st Piece
- The time required to produce a good unit after the initial set-up
Gage
- The time required to qualify the 1st piece
Pareto Analysis on each Time Category
[Figure: Pareto chart of set-up time in minutes by category – Fixture Chg, Search, Tool Chg, 1st Piece check, Calibrate Gauge]
Separate the Elements
Internal:
Those activities that must be performed while the machine is shut down.
Example: Removing dies and tooling
External:
Those activities that are performed while the machine is operating.
Example: Acquire fixture/tools for next set-up
Both types of actions must be separated. Once the machine is stopped, the
technician should never depart to perform any part of the external set up.
Make Coffee…
Clearly the change-over time depends on the model of the coffee maker, but suppose the coffee, once brewed, operates in a warmer-plate mode that disengages the brew mode…
The real point is that with a little thinking and some decent equipment decisions most of the change-over can be made external, where external means in parallel with the operation of the target process or equipment.
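The payoff of separating elements is easy to quantify: only internal elements cost machine-down time. A sketch with invented element names, times, and classifications:

```python
def machine_down_time(elements):
    """Sum the minutes spent while the machine is stopped. Only internal
    elements count; external work runs in parallel with production."""
    return sum(minutes for minutes, kind in elements.values()
               if kind == "internal")

# Hypothetical set-up elements: (minutes, classification)
before = {"Remove dies": (14, "internal"),
          "Fetch tooling": (9, "internal"),
          "Preset tooling": (6, "internal"),
          "Qualify 1st piece": (12, "internal")}
# After kaizen, two elements are externalized (done while running)
after = dict(before, **{"Fetch tooling": (9, "external"),
                        "Preset tooling": (6, "external")})
```

In this toy case externalizing two elements cuts machine-down time from 41 to 26 minutes with no change to the work content itself.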
Separate the Elements
[Set-up Operations Analysis Chart – Mach # 3456, Before Kaizen Analysis, Area/Department: Machining Cell, Part # Model A → Model B, Date 3/7/99, times in minutes]
Internal or External?
Develop Improvement Plan
5S +1 “safety” (6S) discipline is the key to quick changeover!
• Over-Production (inventory)
• Transportation
• Inspection (mass)
• Motion
• Processing, itself
• Storage area for Dies
• Tool Room
• + “Unused Creativity”
Develop Improvement Plan
Specifically:
[Figure: internal and external set-up elements – measure, separate, convert, reduce/eliminate (repeat)]
Employ Improvement Initiatives
Once as much of the internal set-up as possible has been converted to external set-up and waste has been eliminated:
1. Assure that all jigs, gauges, dies, tools, etc. function before the set-up begins.
2. Change the internal set-up into external, then improve the remaining internal set-up time.
4. If you have to use your hands, make sure your feet stay put.
A. List each element on the Set-up Operations Standard Chart (after Kaizen).
[Chart example: 300-ton press; Zunker 2003]
The written instruction embodying the improvement is termed the Standard Operating Procedure [SOP]
SPC is our health monitoring system to ensure that the X’s don’t get out of line
DMAIC
Control phase
SQC ≈ SPC
Backgrounder
• 1920s – Bell Labs [formalized SQC] –
Walter Shewhart [also of Western
Electric]
• Post WW-II Japan educates the shop
floor in the 7 tools of quality
• Deming, Juran, Feigenbaum lead
• TQC is in the hands of the producer
• SQC based on inferential statistics [small
sample sets used to draw conclusions on
“parent population”]:
• Acceptance sampling
• Defect rate observed after the fact
• AQL – acceptable quality level
• Lean is in tension with the AQL concept
• Control charts
• Tracks accuracy and variance of a process
• Assignable cause v inherent [natural] cause
SPC- Control Charts
“the concept”
• Multiple samples of size n [5 in this
example] are taken at 25 uniform
points in time with respect to some
measureable characteristic of the
product or process
• x̿ (x double-bar) represents the grand average
• UCL and LCL are 3σ away from the
grand average
• Points that fall outside the control
limits prompt a search for an
assignable cause; likewise
“unusual” runs must be
investigated – known as the
sensitizing rules
• x̄ ~ N(∙) by the CLT regardless of
what population governs the r.v. X
Natural variation – completely random – is expected; but what if we have systematic variation? That is assignable, and it can be investigated and remediated.
Western Electric Rules [1956]:
1. One point plots outside the three-sigma limits;
2. Eight consecutive points plot on one side of the center line;
3. Two out of three consecutive points plot beyond the two-sigma warning limits on the same side of the center line; or
4. Four out of five consecutive points plot beyond the one-sigma warning limits on the same side of the center line.
These rules [Western Electric and Kodak] are based on the Bernoulli and Binomial distributions.
Control Chart Sensitizing Rules – what constitutes an issue for investigation? If the chart shows lack of control, investigate for special cause.
Additional Sensitizing Rules [Kodak]:
5. One or more points very near a control limit.
6. Six points in a row steadily increasing or decreasing.
7. Eight points in a row on both sides of the center line, but none in-between the one-sigma warning limits on both sides of the center line.
8. Fourteen points in a row alternating above and below the center line.
9. Fifteen points in a row anywhere between the one-sigma warning limits (including either side of the center line).
10. Any unusual or non-random pattern to the plotted points.
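The first two Western Electric rules are straightforward to automate; a sketch (rule numbering follows the 1956 list, but the function and its output format are my own):

```python
def we_rule_violations(points, mean, sigma):
    """Check Western Electric rules 1 and 2 on a sequence of plotted
    points: (1) any point beyond the 3-sigma limits, and (2) eight
    consecutive points on one side of the center line."""
    hits = []
    for i, x in enumerate(points):
        if abs(x - mean) > 3 * sigma:
            hits.append((i, "rule 1: beyond 3-sigma limit"))
    # Track the length of the current run on one side of the center line
    run, prev = 0, 0
    for i, x in enumerate(points):
        side = 1 if x > mean else -1 if x < mean else 0
        run = run + 1 if side != 0 and side == prev else (1 if side else 0)
        prev = side
        if run >= 8:
            hits.append((i, "rule 2: eight in a row on one side"))
    return hits
```

The remaining rules are variants of the same sliding-window counting and would extend this in the same style.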
SPC means data driven
• We are taking samples of the process performance
• Charting the samples
• Comparing the sample performance to what we expect the process is
capable of
• Making decisions on the result – in control?
• Assignable cause?
• Process performance is drifting?
Samples
To measure the process, we take samples and analyze the sample statistics following these steps.
Samples of the product, say five steering arms, are removed from the machining line to check how they vary from each other in weight.
[Figure: frequency histogram of weight; each blue box represents one sample of five steering arms]
Principles of Operations Management; [Heizer/Render] 7th edition – primary source for the following SPC discussion
After enough samples are taken from a stable process, they form a pattern called a distribution. [Figure: the solid line represents the distribution fitted over the frequency histogram of weight]
If only natural causes of variation are present, the output of a process forms a distribution that is stable over time and is predictable.
If assignable causes are present, the process output is not stable over time and is not predictable.
Control Charts
Constructed from historical data, the
purpose of control charts is to help
distinguish between natural variations and
variations due to assignable causes
Process Control
[Figure (a): a process in statistical control and capable of producing within control limits – frequency vs. size (weight, length, speed, etc.)]
Types of Data
Variables:
• Characteristics that can take any real value
• May be in whole or in fractional numbers
• Continuous random variables
Attributes:
• Defect-related characteristics
• Classify products as either good or bad, or count defects
• Categorical or discrete random variables
SPC makes a strong appeal to the Central Limit Theorem: using convolution, we can easily see that as we add more uniform random variables, the distribution of their sum approaches a normal distribution.
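That convergence can be sketched with a short simulation; the trial counts and seed below are illustrative, not from the slides:

```python
import random
import statistics

# Means of n Uniform(0, 1) draws cluster into a bell shape as n grows
# (Central Limit Theorem, seen here by simulation rather than convolution).
random.seed(42)

def sample_means(n, trials=10_000):
    """Return `trials` means, each computed from n Uniform(0, 1) draws."""
    return [statistics.mean(random.random() for _ in range(n))
            for _ in range(trials)]

for n in (1, 2, 5):
    means = sample_means(n)
    # The spread of the sample mean shrinks roughly as 1/sqrt(n)
    print(n, round(statistics.stdev(means), 3))
```

The shrinking standard deviation is the practical payoff for SPC: plotting sample means rather than individual measurements tightens the distribution the control limits are built around.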
[Figure: control chart with LCL = 15, mean = 16, UCL = 17; samples 1-12 plotted. Points within the limits reflect variation due to natural causes; a point beyond the limits is out of control, reflecting variation due to assignable causes.]
Setting Chart Limits
For R-charts, when we don't know the process standard deviation:
UCL_R = D4 × R̄
LCL_R = D3 × R̄
where
R̄ = average range of the samples
D3 and D4 = control chart factors from the table
Setting Control Limits
Average range R̄ = 5.3 pounds, sample size n = 5
From the table: D4 = 2.115, D3 = 0
UCL_R = D4 × R̄ = (2.115)(5.3) = 11.2 pounds
LCL_R = D3 × R̄ = (0)(5.3) = 0 pounds
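The slide's numbers plug in directly; a minimal sketch:

```python
# R-chart limits from the slide's numbers (n = 5 gives D4 = 2.115, D3 = 0)
R_bar = 5.3          # average range of the samples, in pounds
D4, D3 = 2.115, 0    # control chart factors for subgroups of five

UCL_R = D4 * R_bar   # upper control limit for the range
LCL_R = D3 * R_bar   # lower control limit for the range
print(round(UCL_R, 2), LCL_R)  # -> 11.21 0.0
```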
Mean and Range Charts
[Figure: when the process mean shifts, the x̄-chart detects the shift in central tendency, but the R-chart does not detect the change in mean.]
[Figure: when the sampling mean is constant but the dispersion is increasing, the x̄-chart does not detect the increase in dispersion, but the R-chart does.]
Steps In Creating Control Charts
1. Take samples from the population and compute the
appropriate sample statistic
2. Use the sample statistic to calculate control limits and
draw the control chart
3. Plot sample results on the control chart and determine the
state of the process (in or out of control)
4. Investigate possible assignable causes and take any
indicated actions
5. Continue sampling from the process and reset the control
limits when necessary
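Steps 1-3 can be sketched for an x̄-chart; the sample data below are invented, and A2 = 0.577 is the published control-chart factor for subgroups of five:

```python
# Steps 1-3 above, sketched for an x-bar chart with illustrative samples.
# A2 = 0.577 is the standard control-chart factor for subgroups of n = 5.
samples = [
    [16.1, 15.8, 16.0, 16.2, 15.9],
    [15.9, 16.0, 16.1, 15.8, 16.0],
    [16.3, 16.1, 15.9, 16.0, 16.2],
]
A2 = 0.577

xbars = [sum(s) / len(s) for s in samples]    # step 1: sample means
ranges = [max(s) - min(s) for s in samples]   # step 1: sample ranges
xbarbar = sum(xbars) / len(xbars)             # grand mean
rbar = sum(ranges) / len(ranges)              # average range

UCL = xbarbar + A2 * rbar                     # step 2: control limits
LCL = xbarbar - A2 * rbar

# Step 3: classify each sample mean as in or out of control
out_of_control = [x for x in xbars if not LCL <= x <= UCL]
print(f"UCL={UCL:.3f}, LCL={LCL:.3f}, out of control: {out_of_control}")
```

Steps 4 and 5 are human judgment, not arithmetic: any point in `out_of_control` triggers an investigation for assignable causes, and the limits are recomputed as more samples accumulate.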
Manual and Automated Control Charts – Minitab SPC tools
Control Charts also exist for Attributes
• For variables that are categorical
• Good/bad, yes/no, acceptable/unacceptable
• Measurement is typically counting defectives
• Charts may measure
• Percent defective (p-chart)
• Number of defects (c-chart)
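A sketch of p-chart limits using the usual 3-sigma formula; the defective counts below are invented for illustration:

```python
import math

# p-chart limits: p_bar +/- 3 * sqrt(p_bar * (1 - p_bar) / n),
# with the LCL floored at zero (a proportion cannot be negative).
# Illustrative data: 20 samples of n = 100 units each.
defectives = [4, 2, 5, 3, 6, 1, 4, 3, 2, 5, 4, 3, 2, 4, 6, 3, 2, 4, 5, 3]
n = 100

p_bar = sum(defectives) / (len(defectives) * n)   # average fraction defective
sigma_p = math.sqrt(p_bar * (1 - p_bar) / n)

UCL_p = p_bar + 3 * sigma_p
LCL_p = max(0.0, p_bar - 3 * sigma_p)
print(round(p_bar, 4), round(UCL_p, 4), round(LCL_p, 4))
```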
Process Capability
NIST: “A process capability index uses both the process variability and the process specifications to determine whether the process is ‘capable’. We are often required to compare the output of a stable process with the process specifications and make a statement about how well the process meets specification.”
Cp = (USL − LSL) / 6σ = (213 − 207) / (6 × 0.516) = 1.938
The process is centered on the design spec and is capable.
Cp is for centered processes…
• To use Cp, the production process must be centered on the design target value
• In the examples at the right, all of the cases reflect Cp = 1, but clearly cases B and C are producing bad product; use of Cp as a measure of process capability is misleading in these cases
• We need a way to deal with processes that are not centered
• Cpk provides a capability index for when the production process has shifted away from the design target
Again, charting the histogram would be valuable here – rather than pinning our decision on an index. Lean is visual – let's see the data!
Process Capability Index
Cpk = minimum of [ (Upper Specification Limit − x̄) / 3σ , (x̄ − Lower Specification Limit) / 3σ ]
where x̄ = process mean and σ = standard deviation of the process population
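A small sketch using the slide's spec limits and σ; the process means of 210 and 212 are illustrative assumptions, not from the slides:

```python
# Cp and Cpk with the slide's numbers: spec limits 207-213, sigma = 0.516.
USL, LSL = 213, 207
sigma = 0.516

def cp(usl, lsl, s):
    """Capability of a centered process: spec width over 6 sigma."""
    return (usl - lsl) / (6 * s)

def cpk(usl, lsl, mean, s):
    """Capability accounting for an off-center mean."""
    return min((usl - mean) / (3 * s), (mean - lsl) / (3 * s))

print(round(cp(USL, LSL, sigma), 3))         # matches the slide: ~1.938
print(round(cpk(USL, LSL, 210, sigma), 3))   # centered mean -> Cpk equals Cp
print(round(cpk(USL, LSL, 212, sigma), 3))   # shifted mean -> Cpk drops sharply
```

The third line is the slide's warning in miniature: the spread has not changed at all, yet the shifted process is no longer capable.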
In the cell or at the workstation – we train the technicians to keep the cell Lean
7-Tools of Lean QC – a brief synopsis
LE [Black & Phillips Ch.14]
What about Quality that is designed in from the start?
• What were controllable operating parameters become part of the noise that our product and process must effectively deal with
• This approach was pioneered by Genichi Taguchi [Engineer and Statistician] in the 1950s
The Taguchi loss function is a graphical depiction of loss describing a phenomenon affecting the value of products produced by a company. Quality does not suddenly plummet when the spec limit is not met; instead "loss" in value progressively increases as variation increases from the nominal target. This thinking helped fuel the continuous improvement movement known as Lean Manufacturing.
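The progressive loss is usually modeled as Taguchi's quadratic loss, L(y) = k(y − T)²; the target T and cost constant k below are hypothetical:

```python
# Taguchi quadratic loss: L(y) = k * (y - T)^2.
# Loss grows continuously with distance from the nominal target T;
# it does not jump suddenly at the spec limit.
def taguchi_loss(y, target, k):
    return k * (y - target) ** 2

T, k = 10.0, 2.5   # hypothetical target value and cost constant
for y in (10.0, 10.2, 10.5, 11.0):
    print(y, taguchi_loss(y, T, k))
```

Contrast this with accept/reject thinking, where loss is zero everywhere inside the spec limits and total at the boundary.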
What do you mean designed in?
http://support.minitab.com/en-us/minitab/17/topic-library/modeling-statistics/doe/taguchi-designs/taguchi-designs/
• A well-known example of Taguchi designs is from the Ina Tile Company of Japan in the 1950s.
• The company was manufacturing too many tiles outside specified dimensions.
• A quality team discovered that the temperature in the kiln used to bake the tiles varied, causing nonuniform tile dimensions. They could not eliminate the temperature variation because building a new kiln was too costly. Thus, temperature was taken to be a noise factor.
• Using Taguchi designed experiments, the team found that by increasing the clay's lime content, a
control factor, the tiles became more resistant, or robust, to the temperature variation in the kiln,
letting them manufacture more uniform tiles.
• Taguchi designs use orthogonal arrays, which estimate the effects of factors on the response
mean and variation.
• An orthogonal array means the design is balanced so that factor levels are weighted equally.
Because of this, each factor can be assessed independently of all the other factors, so the effect of
one factor does not affect the estimation of a different factor.
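The balance property described above can be verified directly on the standard L4 array (a minimal sketch; the 1/2 level coding is conventional):

```python
from itertools import combinations
from collections import Counter

# Taguchi L4 orthogonal array: 4 runs, three 2-level factors.
L4 = [(1, 1, 1),
      (1, 2, 2),
      (2, 1, 2),
      (2, 2, 1)]

# Balance: each level appears equally often in every column...
for col in range(3):
    counts = Counter(row[col] for row in L4)
    assert counts[1] == counts[2] == 2

# ...and every pair of columns contains each level combination exactly once,
# which is why each factor's effect can be estimated independently.
for c1, c2 in combinations(range(3), 2):
    pairs = Counter((row[c1], row[c2]) for row in L4)
    assert len(pairs) == 4 and all(v == 1 for v in pairs.values())
print("L4 is balanced")
```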
Defining the Taguchi Approach –
• The point, then, is to produce processes or products that are ROBUST AGAINST NOISE
• Don't spend the money to eliminate all noise; build designs (product and process) that can perform as desired – low variability – in the presence of noise
Interactions
• Brainstorming
• Shainin's techniques, where interactions are determined by looking at the products: Components Search, Multi-vari Charts, Paired Comparisons
[Figure: example factors – Moisture Content, Catalyzer, Transportation]
Robust Design and Taguchi Methods
• Accept/Reject is expensive
• Ina Tile example: the classic way of handling this is to categorize temperature as a factor in the design, thereby requiring a redesign of the entire kiln system
[Figure: distributions of inside and outside tiles relative to LSL, TARGET, and USL, before and after the robust design]
Taguchi DOE leverages ANOVA
The key with Taguchi is that he placed more of the burden on the product and process design and moved what would traditionally have been controllable factors to the noise [natural variability] term.
(i) Determine which input variables have the most influence on the output;
(ii) Determine what value of the xi's will lead us closest to our desired value of y;
(iii) Determine where to set the most influential xi's so as to reduce the variability of y;
(iv) Determine where to set the most influential xi's such that the effects of the uncontrollable variables (zi's) are minimized.
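Objectives (iii) and (iv) are usually pursued by maximizing a signal-to-noise ratio; a sketch of two standard Taguchi S/N formulas with illustrative data:

```python
import math

# Standard Taguchi signal-to-noise ratios: a larger S/N means a more
# robust (less variable) response at the chosen factor settings.
def sn_smaller_the_better(ys):
    return -10 * math.log10(sum(y * y for y in ys) / len(ys))

def sn_nominal_the_best(ys):
    n = len(ys)
    mean = sum(ys) / n
    var = sum((y - mean) ** 2 for y in ys) / (n - 1)
    return 10 * math.log10(mean * mean / var)

# Illustrative: same mean, tighter spread -> higher nominal-the-best S/N
loose = [9.0, 10.0, 11.0]
tight = [9.8, 10.0, 10.2]
print(sn_nominal_the_best(loose) < sn_nominal_the_best(tight))  # True
```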
[Figure: process block with controllable input parameters x1, x2, …, xn and uncontrollable factors (noise) z1, z2, …, zm]
The Voice of the Process: once we have insight, we need to make it manifest and permanent – the SOP
• Standard Operating Procedures are the mechanism engineers use to provide a blueprint and direction to the technicians on the floor to keep the process viable
• These are living documents that reflect the best insights from engineering into the best operation of the process
• An SOP must be configuration managed
Where? Standard Operating Procedure characteristics:
Contact Method
A contact method functions by detecting whether a sensing
device makes contact with a part or object within the process.
Toggle Switches
Limit Switches
http://www.landp.com.au/special/presentation_demos/mproof_smpl_1.ppt
Energy Contact Devices
Photoelectric switches can be used with objects that are translucent or transparent, depending upon the need.
• Transmission method: two units, one to transmit light, the other to receive.
• Reflecting method: the PE sensor responds to light reflected from the object to detect presence.
[Figure: transmitter/receiver light beam interrupted by an object]
Contact Device
An example of a contact device using a limit switch. In this case the switch makes contact with a metal barb, sensing its presence. If no contact is made, the process will shut down.
Counting Method
Used when a fixed number of operations are required within a process, or
when a product has a fixed number of parts that are attached to it.
A sensor counts the number of times a part is used or a process is completed
and releases the part only when the right count is reached.
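The counting method's logic can be sketched as a tiny controller; the class and its names are hypothetical, not from any real poka-yoke product:

```python
# Sketch of a counting-method poka-yoke: the fixture releases the part
# only when exactly the required number of operations has been recorded.
class CountingPokaYoke:
    def __init__(self, required_count):
        self.required = required_count
        self.count = 0

    def record_operation(self):
        """Sensor callback: one operation (e.g. one weld) completed."""
        self.count += 1

    def release_part(self):
        """Release the part, or refuse if the count is wrong."""
        if self.count != self.required:
            raise RuntimeError(
                f"expected {self.required} operations, saw {self.count}")
        self.count = 0   # reset the counter for the next part
        return True

fixture = CountingPokaYoke(required_count=4)
for _ in range(4):
    fixture.record_operation()
print(fixture.release_part())  # True: all four operations completed
```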
Motion-Sequence Method
The third poka-yoke method uses sensors to determine if a motion or a step
in a process has occurred. If the step has not occurred or has occurred out of
sequence, the sensor signals a timer or other device to stop the machine and
signal the operator.
Types of Sensing Devices
1. Physical contact devices
2. Energy sensing devices
3. Warning Sensors
Physical Contact Sensors
These devices work by physically
touching something. This can be a
machine part or an actual piece being
manufactured.
In most cases these devices send an
electronic signal when they are
touched. Depending on the process,
this signal can shut down the operation
or give an operator a warning signal.
Touch Switch
• Used to physically detect the presence or absence of an object or item – prevents missing parts.
• Used to physically detect the height of a part or dimension.
Energy Sensors
These devices work by using energy to detect whether or not a defect has occurred.
Fiber optic
Vibration
Photoelectric
Warning Sensors
Warning sensors signal the operator that there is a problem. These sensors use colors, alarms, and lights to get the worker's attention.
[Figure: color-coded lights connected to micro switches & timers]
M6: Takeaways
• Quality is everyone’s task
• Read Deming, Juran
• SPC applies rigor and precision to continued success
• Processes have weaknesses – we must strengthen those points in the
process and work to eliminate or [at least] control the uncertainty
• Error-proofing is the way to zero defects
• We can also get smart in design and make the design robust in the
face of “noise” [Taguchi]
• SOPs put our engineering knowledge on the front line – it is the VOC
translated through the DMAIC onto the shop floor
Next time…
• Quiz 12 (DMAIC)
• HW10 OEE, Gauge R&R