
Probability Distributions: Learning Objective

At the end of this Measure topic, all learners will be able to describe and interpret normal, binomial, Poisson, chi-square, Student's t, and F distributions.

Types of Distribution
There are several commonly used probability distributions; a typical list appears below. Fitting data to distributions is crucial for prediction and decision-making. See below to learn more.

Normal Distribution

Produces a bell-shaped curve
Depends on two factors: the mean and the standard deviation
o Mean determines the data's center
o Standard deviation determines the graph's height and width
When the standard deviation is high, the curve is lower and wider, showing data with a large spread (dispersion)
When the standard deviation is small, the curve is tall and narrow, showing a tighter dispersion
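As a quick illustration of these points, Python's standard library can show how much of a normally distributed process falls within one, two, and three standard deviations of the mean (the mean and sigma below are purely illustrative):

```python
from statistics import NormalDist

# Hypothetical process: mean 12.00, standard deviation 0.01 (illustrative values).
dist = NormalDist(mu=12.00, sigma=0.01)

# Fraction of output expected within +/- 1, 2, and 3 standard deviations of the mean.
for k in (1, 2, 3):
    coverage = dist.cdf(12.00 + k * 0.01) - dist.cdf(12.00 - k * 0.01)
    print(f"within +/-{k} sigma: {coverage:.4f}")
```

The printed fractions (roughly 68%, 95%, and 99.7%) hold for any normal distribution regardless of the mean and sigma chosen; only the width of the curve changes.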

Student's t-Distribution

A symmetrical continuous distribution
Similar to the normal distribution, but the extreme tail probabilities are larger for sample sizes less than 31
Requires use of degrees of freedom (df): 1 less than the sample size

Binomial Distribution
Use binomial distributions for:
modeling discrete (attribute) data having only two possible outcomes (i.e., pass or fail, yes or no).
finding a process that is producing some nonconforming units in a sample with two mutually exclusive possible outcomes (i.e., pass or fail).
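A minimal sketch of the binomial model, using only the standard library (the sample size and nonconforming rate below are hypothetical):

```python
from math import comb

def binomial_pmf(k, n, p):
    """Probability of exactly k nonconforming units in a sample of n,
    when each unit independently fails with probability p."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

# Hypothetical example: sample of 20 units, 5% nonconforming rate.
p_zero = binomial_pmf(0, 20, 0.05)                              # chance of zero failures
p_two_or_fewer = sum(binomial_pmf(k, 20, 0.05) for k in range(3))  # chance of 0, 1, or 2 failures
```

Each unit here is a pass/fail outcome, which is exactly the two-outcome attribute data the distribution is meant for.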

Poisson Distribution

For estimating the probability of a discrete event where values are X = 0, 1, 2, 3, etc.
Use as a model for the number of events (such as the number of customer complaints at a business, arrivals per hour, defects per unit, etc.) in a specific time period.
Use when the average number of defects per unit (DPU) is the information sought, not just the defective information.
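A minimal sketch of the Poisson model for a defects-per-unit count, using hypothetical numbers:

```python
from math import exp, factorial

def poisson_pmf(k, dpu):
    """Probability of observing exactly k events when the average
    count per period (e.g., defects per unit) is dpu."""
    return exp(-dpu) * dpu**k / factorial(k)

# Hypothetical example: an average of 2 customer complaints per quarter.
p_zero = poisson_pmf(0, 2.0)                                  # chance of a complaint-free quarter
p_more_than_5 = 1 - sum(poisson_pmf(k, 2.0) for k in range(6))  # chance of more than 5 complaints
```

Note that the Poisson model needs only the average rate (DPU), not a sample size, which is why it fits "defects per unit" questions rather than "fraction defective" questions.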

Weibull Distribution

A statistical approach using graphical plots of the time-to-failure
A family of distributions used to evaluate a design
Used to deduce the percentage of failures of a design's population for a given time through a design's life expectancy
One of the most widely used tools in reliability engineering

Other Distributions

The following are additional types of distributions, each having a specific purpose:
Chi-square
F-distribution
Uniform
Exponential
Lognormal
Bivariate normal

The use of t-distributions, t-tables, and normal tables, all essential for interpreting results in hypothesis testing, is covered in another section.


Chi-Square Distributions
Chi-square, Student's t, and F distributions are used in Six Sigma to test hypotheses, construct confidence intervals, and compute control limits. When using the chi-square, Student's t, or F distribution, it is necessary to determine whether a one-tailed or two-tailed hypothesis test should be performed.

A one-tailed test is used to determine if a sample is significantly different from a historic mean in one direction. For example, we are interested in whether customer complaints are significantly above five per quarter. The null is rejected if the customer complaints are significantly over five per quarter.

A two-tailed test is used to determine if samples are significantly different from one another in either direction. For example, we are interested in whether the customer complaint counts for the days of the week differ significantly from the service level agreement of five per quarter. We would compare the average customer complaints per day of the week to the five complaints per quarter stated in the service level agreement. The null is rejected if the number of issues for any given day differs significantly from the test mean of five.

The chi-square test is the most popular hypothesis testing method for discrete data and is used when comparing proportions of two or more samples. This involves comparing the observed proportion to the expected proportion using a chi-square table or contingency table. An example using the chi-square statistic will be given in the Analysis section of this course.

The contingency table is used to analyze data via a two-way classification involving two factors. In other words, it tests relationships between two sources of variation. This data is attribute in type, such as counts or frequencies. The relationship can be statistically described by a hypothesis test:
Null (No relationship/No difference): Ho: Factor A is independent of Factor B
Alternate (Relationship/Difference): Ha: Factor A is NOT independent of Factor B
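As a hedged sketch of the contingency-table test just described, the chi-square statistic can be computed by hand from observed and expected counts (the counts below are hypothetical, and the critical value 3.841 is the standard table value for df = 1 at alpha = 0.05):

```python
# Hypothetical 2x2 contingency table: defectives vs. good units on two lines.
observed = [[12, 188],   # line A: 12 defective, 188 good
            [25, 175]]   # line B: 25 defective, 175 good

row_totals = [sum(row) for row in observed]
col_totals = [sum(col) for col in zip(*observed)]
grand_total = sum(row_totals)

# Expected count for each cell under Ho (Factor A independent of Factor B),
# then accumulate (observed - expected)^2 / expected over all cells.
chi_square = 0.0
for i, row in enumerate(observed):
    for j, obs in enumerate(row):
        expected = row_totals[i] * col_totals[j] / grand_total
        chi_square += (obs - expected) ** 2 / expected

df = (len(observed) - 1) * (len(observed[0]) - 1)  # (rows-1)*(cols-1) = 1 here
# Critical value for df = 1 at alpha = 0.05, from a chi-square table.
reject_ho = chi_square > 3.841
```

With these illustrative counts the statistic exceeds the critical value, so Ho (independence) would be rejected: the defect proportion appears to depend on the line.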

Student's t Distribution
A Student's t distribution (t statistic) is commonly used to test hypotheses about means when the sample size is small and the true population standard deviation is unknown. It is also used to construct confidence intervals for means. The t distribution can be used whenever samples are drawn from populations possessing a normal, bell-shaped distribution. W. S. Gosset (1908) discovered the distribution through his work at the Guinness brewery. At that time, Guinness did not allow its staff to publish, so Gosset used the pseudonym Student.

The Student's t statistic (t score) is calculated as:

t = (x̄ - μ) / (s / √n)

where x̄ is the sample mean, μ is the hypothesized population mean, s is the standard deviation of the sample, and n is the sample size. The particular form of the t distribution is determined by its degrees of freedom. Degrees of freedom (n - 1) refers to the number of independent observations: the sample size minus one. For example, the distribution of the t statistic from a sample of size 10 would be described by a t distribution having 10 - 1, or nine, degrees of freedom. Similarly, a t distribution having four degrees of freedom would be used with a sample of size five.
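The calculation above can be sketched directly with the standard library (the ten sample values and the target mean of 12.00 are hypothetical):

```python
from math import sqrt
from statistics import mean, stdev

# Hypothetical sample of 10 measurements; testing against a target mean of 12.00.
sample = [12.02, 11.98, 12.05, 11.97, 12.01, 12.03, 11.99, 12.04, 12.00, 12.02]
mu_0 = 12.00

x_bar = mean(sample)   # sample mean (x-bar in the formula)
s = stdev(sample)      # sample standard deviation
n = len(sample)

t = (x_bar - mu_0) / (s / sqrt(n))  # the t score from the formula above
df = n - 1                          # degrees of freedom: 10 - 1 = 9
```

The resulting t value would then be compared against a t-table entry for 9 degrees of freedom at the chosen alpha risk.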


Measurement System Analysis: Learning Objective

At the end of this Measure topic, all learners will be able to calculate, analyze, and interpret measurement system capability using repeatability and reproducibility (Gauge R&R), measurement correlation, bias, linearity, percent agreement, and precision/tolerance (P/T).

Measurement System Analysis: Introduction

Decisions are made throughout each step within the processes of our industries. All decisions have rules or guidelines associated with them to steer the decision maker through the process. If decisions are made correctly, the required process is completed. On the other hand, if decisions are made incorrectly, a spiral effect within the process or product is created that potentially initiates negative consequences for the organization, such as:
Increased cycle time
Increased cost
Rework/scrap
Legal ramifications

Measurement System Analysis (MSA)

According to John L. Hradesky in Productivity and Quality Improvement, inspection methods are subject to variation, no matter how simple the measuring device. Therefore, taking steps to ensure the accuracy and consistency of measurements through analysis and review of the measurement system is necessary to obtain ongoing statistical process control. To ensure that a measurement method is accurate and producing quality results, a method must be defined to test the measurement process as well as ensure that the process yields statistically stable data. Measurement Systems Analysis (MSA) refers to the analysis of the precision and accuracy of measurement methods. MSA is an experimental and mathematical method of determining how much the variation within the measurement process contributes to overall process variability. Three characteristics contribute to the effectiveness of a measurement method:
Accuracy
Reproducibility
Repeatability

Repeatability and reproducibility often come under the heading of precision. Precision requires that the same measurement results are achieved for the condition of interest with the selected measurement method. Accuracy, once again, is related to the difference between the average and the reference value (bias). Stability and linearity are contributors to nonaccuracy, or bias.
Linearity: How does the size of the part affect the accuracy of the measurement method?
Stability: How accurately does the measurement method perform over time?

A measurement method must first be repeatable. Repeatability assumes that all elements of the test can be repeated, including gauge check (if required), set-up, positioning the gauge, and taking a reading. It is not appropriate to just take two measurements, one after another, without performing the rest of the functions. A user of the method must be able to repeat the same results given multiple opportunities with the same conditions. The method must then be reproducible. Several different users must be able to use it and achieve the same measurement results. Finally, the measurement method must be accurate. The results the method produces must meet an external standard or a true value given the condition of interest.

Repeatability and Reproducibility

Assuming that a gauge is determined to be accurate (that is, the measurements generated by the gauge are the same as those of a recognized standard), the measurements produced must be repeatable and reproducible. Recall from the Key Definitions section that repeatability and reproducibility represent two aspects of precision and help describe the variability of a measurement method:
Repeatability describes the minimum variability in results and implies that the variability of the measuring instrument itself is consistent.
Reproducibility describes the variability in results and implies that variability across operators is consistent.

Gauge R&R Studies

Repeatability and reproducibility (R&R) studies are a method for determining the variation of a measurement system. Both are expressed in terms of standard deviations. This section looks more closely at the conditions required to measure these two forms of variation (error), as well as how to conduct the studies and interpret the results. A study must be conducted to understand how much of the variance observed in the process (if any) is due to variation in the measurement system. Three methods are typically used for this purpose:
The range method quantifies both repeatability and reproducibility together.
The average and range method determines the total variability and allows repeatability and reproducibility to be separated.
The analysis of variance (ANOVA) method is the most accurate of the three. In addition to determining repeatability and reproducibility, ANOVA also looks at the interaction between those involved in the measurement method and the attributes/parts themselves.

The repeatability standard deviation is a measure of dispersion of the distribution of test results under repeatability conditions. The reproducibility standard deviation is a measure of dispersion of the distribution of test results under reproducibility conditions.
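A hedged sketch of the simplest of the three approaches, the range method, for estimating the repeatability standard deviation: the same operator measures each part twice, and the average range of the repeat readings is divided by the d2 constant (1.128 for subgroups of two, from a standard table of control chart constants). All readings below are illustrative.

```python
from statistics import mean

# Illustrative data: one operator measures five parts twice each.
trial_1 = [10.02, 10.05, 9.98, 10.01, 10.04]
trial_2 = [10.04, 10.03, 9.99, 10.03, 10.02]

# Range of the repeat readings for each part, then the average range (R-bar).
ranges = [abs(a - b) for a, b in zip(trial_1, trial_2)]
r_bar = mean(ranges)

D2 = 1.128  # control chart constant d2 for subgroup size 2
repeatability_sigma = r_bar / D2  # estimated repeatability standard deviation
```

Note this quick estimate lumps any reproducibility effect into the noise; separating the two requires the average and range method or ANOVA, as described above.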

Factors in R&R Studies

The table below identifies four factors important to the measurement conditions within a laboratory; each factor may be held constant (same) or changed (different). These factors are considered the main contributors to variability of measurements.

Measurement Conditions Within a Laboratory
Factor: Time
Same: Measurements made at the same time
Different: Measurements made at different times
Factor: Calibration
Same: No calibration between measurements
Different: Calibration conducted between measurements
Factor: Operator
Same: Same operator
Different: Different operators
Factor: Equipment
Same: Same equipment without recalibration
Different: Different equipment

R&R Factors
Same time measurements are those conducted in as short a time as possible to minimize changes in conditions that cannot always be guaranteed constant (for example, environmental conditions). Different time measurements are those carried out at longer intervals of time and may include effects due to changes in environmental conditions.
Calibration refers here to the calibration process that occurs at regular intervals between groups of measurements within a laboratory.
Although the operator is often an individual, in some situations the operator is a team of individuals, each person performing a specific part of the measurement procedure. Any change in the composition of the team or the assignment of duties within the team would be considered a different operator.
When sets of equipment are involved, any change in any significant component (for example, a thermometer or a batch of reagent) is considered different equipment.
Under repeatability conditions, all four factors remain the same. Since under reproducibility conditions results are obtained by different laboratories, all four factors are varied. Additional between-laboratory factors can also affect these test results, including differences in management and maintenance of the laboratories and training levels of operators.

Preparing for an R&R Study

Step 1
Determine the purpose of the study and the kind of information needed to satisfy the purpose. The following questions should be answered:
What approach should be used?
How many operators are to be involved?
How many sample parts are to be tested?
What number of repeat readings will be needed?

If dimensions are especially critical, the study will require more parts and/or trials to gain the desired level of confidence. Bulky or especially heavy parts might force fewer samples and more trials.

Step 2
Select the parts to be used for the study. They must represent the full range of product variation that exists in production. Since they must reflect the entire operating range of the measuring process, the parts should have different values. This might be accomplished, for example, by taking one sample per day for several days. Since each part will probably be measured several times, each must be identified by a number.

Step 3
Select the operators to be used in the study. When possible, operators should represent individuals who normally operate the measurement equipment. Otherwise, study personnel should be trained properly. The following additional steps help reduce the likelihood of inaccurate results:
Ensure that measurements are taken in random order so any drift or fluctuations that might occur will be spread randomly within the study.
Prevent bias by ensuring that operators are unaware of which numbered part they are measuring.
Estimate readings to the nearest number that can be obtained. According to the MSA Reference Manual, "For electronic readouts, the measurement plan must establish a common policy for recording the right-most significant digit of display. Analog devices should be recorded to one-half the smallest graduation or limit of sensitivity as resolution." For analog devices, if the smallest scale graduation is 0.0001, then the measurement should be recorded to 0.00005.
Be sure that someone who understands the nuances involved in conducting a reliable study is available to oversee it.

Using the Results

R&R studies that show a relative lack of repeatability or reproducibility can point to possible sources of error. The table below summarizes some of the possibilities.

Possible Indications from R&R Study Results
If lack of repeatability is large in comparison to reproducibility:
the measuring equipment may need maintenance.
the measuring equipment could be redesigned for rigidity.
the process for clamping or locating the part in the measuring equipment may need to be improved.
within-part variation may be excessive.
If lack of reproducibility is large in comparison to repeatability:
operator training on how to use and read the measurement equipment may be needed.
calibrations may need to be more clearly defined.

Whether a measuring system is acceptable for its intended application depends largely on the percentage of part tolerance that is reflected by system error: a combination of gauge accuracy, repeatability, reproducibility, stability, and linearity.

Acceptance of Results
The MSA Reference Manual suggests general criteria for the acceptance of repeatability and reproducibility as outlined in the table below (a ratio, or percent, of measurement error to total tolerance).

Guidelines for Acceptability of Measuring Equipment
Total measurement error < 10% of total tolerance: Acceptable measuring equipment.
Total measurement error of 10-30% of total tolerance*: Possibly acceptable based on the importance of the application, cost of the measuring equipment, cost of repairs, etc.
Total measurement error > 30% of total tolerance*: Generally unacceptable; every effort should be made to identify and correct the problem. Customers should be involved in determining how the problem will be resolved.
*Other authors and books may vary slightly from these guidelines.
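The guideline percentages above translate directly into a small decision helper; this is a sketch of the table only, with the 10% and 30% break points taken from the manual as quoted (the gauge numbers in the example are hypothetical):

```python
def gauge_acceptability(measurement_error, total_tolerance):
    """Classify measuring equipment using the MSA guideline percentages
    quoted above (<10% acceptable, 10-30% conditional, >30% unacceptable)."""
    pct = 100 * measurement_error / total_tolerance
    if pct < 10:
        return "acceptable"
    elif pct <= 30:
        return "possibly acceptable"
    else:
        return "generally unacceptable"

# Hypothetical gauge: total error 0.012 against a total tolerance of 0.060 (20%).
verdict = gauge_acceptability(0.012, 0.060)
```

For a 20% ratio the verdict is "possibly acceptable", meaning the decision also weighs the importance of the application and the cost of the equipment and repairs.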

Measurement Correlation
Measurement correlation is typically the comparison of the measurement values between two or more different measurement systems. However, it can also be defined as the comparison of values obtained through the measurement of different attributes using different measurement methods. An example of the latter might be looking at the correlation of hardness and strength of a material. Measurement correlation may be made against a known standard. Both may have variation, but comparing the variation of a measurement instrument to a known standard may also identify correctable issues with the measuring device. Other components besides repeatability and reproducibility whose combined effect explains measurement correlation are:
Bias
Linearity
Precision/Tolerance (P/T)

Bias is often due to human error. Whether intentional or not, bias can generate inaccurate or misleading results. In other words, bias causes a difference between the output of the measurement method and the true value. Types of bias include:
Participants in a study tend to remember their assessments from prior trials. To avoid this type of bias, before administering the second trial:
o collect assessment sheets immediately after each trial.
o change the order of the inputs/transactions or questions.
o allow enough time after the initial trial to make it difficult for the participants to remember.
Participants spend extra time when they know they are being evaluated. To counter this bias, give specific timeframes.
Another common example of bias is improperly set equipment; e.g., someone sets a bathroom scale 15 lbs high, so the scale displays 165 lbs even though the user actually weighs 150 lbs.
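The bathroom-scale example above amounts to a simple calculation: bias is the difference between the average measurement and the reference (true) value. The readings below are illustrative:

```python
from statistics import mean

reference_value = 150.0                           # known true weight, lbs
readings = [165.2, 164.8, 165.1, 164.9, 165.0]    # scale set roughly 15 lbs high

# Bias = average of the measurements minus the reference value.
bias = mean(readings) - reference_value
```

A bias near +15 lbs confirms the scale is systematically high; note that the readings themselves are tightly clustered, so this gauge is precise but not accurate.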

Hawthorne Effect
The Hawthorne Effect states that if people know they are being observed, they behave differently. This effect was first observed and documented between 1928 and 1932 at the Western Electric Hawthorne factory in Cicero, Illinois. Industrial engineers wanted to study the effects of the amount of light available in the workplace on worker productivity. Participants were selected and separated from their work groups. Their productivity was tracked as the amount of light in their work area gradually increased. Each time the amount of light increased, so did productivity. The conclusion of the study was that increasing light improves productivity.

An additional study was conducted to validate the original assessment, but in this case, the amount of light was decreased. Researchers found that productivity kept increasing as the amount of light decreased. The level of light contributed little, if anything, to productivity. What impacted productivity was that the participants knew they were being monitored; this had an emotional impact that drove them to perform better. Keep in mind that the Hawthorne Effect is present whenever participants know they are being evaluated. It is a strong example of bias.

Process Capability and Performance: Learning Objectives

At the end of this Measure topic, all learners will be able to:
identify, describe, and apply the elements of designing and conducting process capability studies, including identifying characteristics, identifying specifications and tolerances, developing sampling plans, and verifying stability and normality.
distinguish between natural process limits and specification limits, and calculate process performance metrics such as percent defective.
define, select, and calculate Cp and Cpk, and assess process capability.
define, select, and calculate Pp, Ppk, and Cpm, and assess process performance.
describe the assumptions and conventions that are appropriate when only short-term data are collected and when only attributes data are available.
describe the changes in relationships that occur when long-term data are used, and interpret the relationship between long- and short-term capability as it relates to a 1.5 sigma shift.
compute the sigma level for a process and describe its relationship to Ppk.

Process Capability Studies

By examining patterns of statistically-based process behavior, process capability studies aim to establish the ability of a process to meet product requirements. Comparing the natural process limits (the voice of the process) against the specifications (the voice of the customer) demonstrates whether the process is centered within the specification limits and the degree of variation produced by the process.

Basic Goals of Process Capability Studies
To define a process's capability in terms of its average and variability
To use the knowledge of the process capability to predict how well the process can meet product design requirements

Use When
Communicating with design personnel to ensure requirements are being met
Evaluating new equipment
Reviewing tolerances
Assigning production to machines
Auditing a process
Measuring the effects of adjustments


Identify the process/product
Ensure the process is in statistical control
Identify the most critical characteristics
Establish capability goals to be compared with the results
Plan the data collection

Critical Characteristics

A company's product may have many important characteristics. Identify the most critical characteristics and develop the process capability study to analyze them. The criteria for choosing critical characteristics may depend on many factors, such as their influence on product reliability, customer satisfaction, cost, etc.

1. Verify process stability and normality of process data
2. Estimate process parameters
3. Measure process capability
4. Compare actual capability to desired capability
5. Make a decision concerning process changes
6. Report the results of the study with recommendations

The points outlined here are found in much greater detail in Measuring Process Capability by Davis R. Bothe.

Other Useful Tools

Control charts
Designed experiments (DOE)
Histograms
Probability plots

Possible Actions

Do nothing (all is well)
Change the specs (any change must be approved by product/service design personnel and the customer)
Center the process to the target (alignment)
Reduce the variation (spread)
Review the design to see if changes can be made and still meet customer expectations
Accept the loss (related to the cost of scrap and rework)

Developing Sampling Plans

A sampling plan is a detailed outline about measurements: which measurements, at what times, on which material, in what manner, and by whom. Design sampling plans so the resulting data contains a representative sample within the identified parameters. Questions about sampling and sampling size are critical to the success of all statistical analysis. Read below to learn more about sampling plans.

Sampling Questions to Consider

How many samples should be taken?
How should the sampling be conducted?
Is the sample large enough?
Should extra samples be taken?

There are no universal standards for sampling plans. Therefore, the decision is up to the Six Sigma Green Belt, based on the project at hand and its specific needs.

Calculations to Consider

Standard deviation for the population to be tested
Alpha risk
Beta risk
Meaningful difference (the difference in means the test expects to detect)

Steps for Developing a Sampling Plan

1. Identify:
o parameters to be measured.
o range of possible values.
o required resolution.
2. Design a sampling scheme detailing how and when sampling occurs.
3. Select sample sizes.
4. Design data storage formats.
5. Assign roles and responsibilities to team members.

Determining Sampling Size

To determine sampling size, establish:
an alpha risk (typically 0.05).
a beta risk (typically 0.10).
the standard deviation (known or estimated).
a meaningful difference for a two-sample t-test.
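A hedged sketch of how those four inputs combine into a sample size, using the common normal-approximation formula for a two-sample comparison (the sigma and meaningful difference below are illustrative; for small samples a t-based iteration would refine this slightly):

```python
from math import ceil
from statistics import NormalDist

def two_sample_size(sigma, delta, alpha=0.05, beta=0.10):
    """Approximate per-group sample size to detect a meaningful difference
    `delta` between two means, given standard deviation `sigma`,
    two-sided alpha risk, and beta risk (power = 1 - beta)."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # e.g., 1.96 for alpha = 0.05
    z_beta = NormalDist().inv_cdf(1 - beta)        # e.g., 1.28 for beta = 0.10
    n = 2 * ((z_alpha + z_beta) * sigma / delta) ** 2
    return ceil(n)

# Hypothetical example: sigma = 2.0, meaningful difference = 1.5.
n_per_group = two_sample_size(sigma=2.0, delta=1.5)
```

Note how the formula reflects the trade-offs listed above: a smaller meaningful difference or a larger standard deviation drives the required sample size up quickly.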

Verifying Stability and Normality

Although the process may be in a stable state, variations still exist. Stability is the difference between individual measurements, taken over time, of the same part using the same technique. Determined by using control charts, stability statistically examines the data's overall consistency. Data are grouped into ranges, and the points in each range are compared to the predicted number from the distribution.

Linking the voice of the process (results) to the voice of the customer (specifications) is an ongoing theme throughout Six Sigma. Two groups of factors interact to determine if the operation successfully achieves quality: natural process limits and specification limits. Processes can be predictable or unpredictable, and products are either conforming or nonconforming. Therefore, four possible combinations exist:
Producing conformances in an unpredictable state
Producing nonconformances in an unpredictable state
Producing nonconformances in a predictable state
Producing conformances in a predictable state

Specification Limits
Specification limits are the voice of the customer, expressed through either customer requirements or industry standards. The amount of variance (process spread) the customer is willing to accept sets the specification limits. For example, a customer wants a supplier to produce 12-inch rulers. Specifications call for an acceptable variation of +/- 0.03 inches on each side of the target (12.00 inches). So the customer is saying acceptable rulers will be from 11.97 to 12.03 inches. If the process is not meeting the customer's specification limits, two choices exist to correct the situation: change the process's behavior, or change the customer's specification, which of course requires customer approval.

Examples of Specification Limits
Specification limits are commonly found in:
Blueprints
Engineering drawings and specifications
Cost per transaction unit
Competitors' benchmarks

Process Capability Indices

The goal of performance metric indices is to establish a controlled process, and then maintain that process over time. The numeric values are a shortcut method of indicating the quality level of a process in parts per million (ppm). Once the status of the process is determined, the causes of variation (based on statistical significance) may be identified. Courses of action might be to:
change the specifications (not very often).
center the process.
reduce the variation in the process spread.
accept the losses (not very often).

Introducing Process Capability Indices

Process capability indices (Cp and Cpk) and process performance indices (Pp, Ppk, and Cpm) identify the current state of the process and provide statistical evidence for comparing after-adjustment results to the starting point. Although these indices have a common purpose, they differ in their approach. According to Douglas C. Montgomery in Introduction to Statistical Quality Control, an underlying assumption of the process capability ratios is that their usual interpretation is based on a normal distribution of process output. See below to view more information about each index.


Cp measures the ratio between the specification tolerance (USL - LSL) and the process spread (estimated from control charts or other process information).
A process that is normally distributed and exactly midway between the specification limits would yield a Cp of 1 if the spread is +/- 3 standard deviations.
A generally accepted minimum value for Cp is 1.33; this differs by industry, but the larger the number, the better.
Limitations of this index include that it requires both an upper and a lower specification and assumes the process is centered.


Cpk measures the absolute distance of the mean to the nearest specification limit.
Generally speaking, a Cpk of at least 1 is required, and over 1.33 is desired, but this differs by industry.
Cpk takes into account process centering, unlike Cp.
Together with Cp, Cpk provides a common measurement for assigning an initial process capability centered on specification limits.


Pp measures the ratio between the specification tolerance and the process spread.
Pp helps to measure improvement over time (as do Cp and Cpk).
Pp signals where the process is in comparison to the customer's specifications (as do Cp and Cpk).


Ppk measures the absolute distance of the mean to the nearest specification limit.
Ppk provides an initial measurement to center on specification limits.
Ppk examines variation within and between subgroups.


Cpm is also referred to as the Taguchi index. This index is touted by some to be more accurate and reliable than the other indices. Cpm is based on the notion of reducing the variation from a target value (T). Because T represents the target in this index, T receives more focus than the specification limits. Variation from the target T is expressed as process variability σ² and process centering (μ - T), where μ = the process average. Cpm provides a common measurement assigning an initial process capability to a process for aligning the mean of the sample to the target.
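The standard form of the Taguchi index combines the two terms above under one square root; a minimal sketch, using the same hypothetical 10 +/- 4 specifications that appear in the worked examples later in this section:

```python
from math import sqrt

def cpm(usl, lsl, mu, sigma, target):
    """Taguchi capability index: like Cp, but the denominator grows with
    both process variability (sigma) and distance from the target T."""
    return (usl - lsl) / (6 * sqrt(sigma**2 + (mu - target)**2))

# Hypothetical example: specs 10 +/- 4, process mean 10.5, sigma 1, target 10.
value = cpm(usl=14, lsl=6, mu=10.5, sigma=1.0, target=10.0)
```

When the process mean sits exactly on the target, the (μ - T) term vanishes and Cpm reduces to Cp; any drift from the target pulls Cpm down even if the spread is unchanged.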

Note: The use of these indices and equations varies by industry and author.

Using Cp
Use When
identifying the process's current state.
measuring the actual capability of a process to operate within customer-defined specification limits.
the data set is from a controlled, continuous process.

Information Needed
Standard deviation/sigma (estimated from control charts or other process information)
USL and LSL (specifications)
Knowledge of the normal probability distribution

Cp Index User Tips

Cp does not tell about:
the process's ability to align with the target (centered on the customer requirement; this is Cpk's function).
whether the process variation is centered on, or located within, the customer's specifications.
Cpk values.
Cp requires upper and lower specification limits. Cp measures "can it fit," while Cpk measures "does it fit."

If Cp > 1, then (USL - LSL) > 6σ: The process spread is less than the tolerance width. Note: This does not mean the process variation is located inside the tolerance limits. Cp does not provide any information on where the process variation is located in relation to the specification; Cp must be compared to Cpk to determine location. Assuming the process variation is located within the specification limits, the higher the value of Cp above 1.00, the greater potential the process has for meeting specification requirements.
If Cp < 1, then (USL - LSL) < 6σ: The process variation is wider than the tolerance limits. No matter where the process is centered, nonconforming products will be produced.
If Cp = 1, then (USL - LSL) = 6σ: The process spread has the same width as the specification.
Calculate sigma using the appropriate value from the table of control chart constants, R-bar/d2 (or another value, depending on the type of chart being used). Data must be continuous and from a process that is under control. The estimated sigma can be artificially low depending on the subgroup size, sample interval, or sampling plan. An artificially low sigma can lead to an over-inflated (higher) Cp for a process that drifts, which makes a process look better than it is.

Cpk Index
Cpk, the most widely used capability index, is a centering statistic for determining the location of the data points relative to the customer's target within the specifications (LSL and USL). Expressed as an index, Cpk penalizes the process for being targeted too close to either of the specifications. Unlike Cp, Cpk can be used to determine minimum process capability. A Cpk of 1.00 or greater indicates that the process spread is located entirely within the specification limits.

Cpk User Tips

- If Cpk = Cp, the process is centered. The closer the Cpk value is to the Cp value, the more centered the process is; compare the two values to judge centering.
- Cpk will always be equal to or less than Cp. Therefore, when Cpk = 1.00 or greater (regardless of the Cp value), the entire process spread (+/- 3 standard deviations) is located within the specification limits.
- A Cpk value of 0.00 indicates the process is centered at one of the specification limits; approximately 50% of the process spread will be outside the specification limits.
- A negative Cpk value (for example, Cpk = -0.25) means the process is centered outside one of the specification limits; more than 50% of the process spread is located outside the specification limit.
- Data must be continuous and from a controlled process. Calculate sigma from data collected from control charts.
- Decide which specification the process center is nearest (USL or LSL), then calculate the distance to the nearest specification (DNS): DNS = mean - LSL or USL - mean.
- The estimated sigma can be artificially low depending on the subgroup size, sample interval, or sampling plan. An artificially low sigma can lead to an inflated (higher) Cpk for a process that drifts, which makes the process look better than it is.

Calculating Cp and Cpk

To calculate Cp, divide the tolerance band (USL - LSL) by the process spread (6 sigma).


Example: X-bar = 10, Sigma = 1, and Specifications = 10 +/- 4

Cp = (14 - 6) / 6(1) = 8 / 6 = 1.33

To calculate Cpk:

subtract the mean (X-bar) from the nearest specification limit, then divide by 3 standard deviations. In general, Cpk = min(USL - X-bar, X-bar - LSL) / 3 sigma.

Example: X-bar = 10, Sigma = 1, and Specifications = 10 +/- 4

For USL: Cpk = (14 - 10) / 3(1) = 4 / 3 = 1.33
For LSL: Cpk = (10 - 6) / 3(1) = 4 / 3 = 1.33

When Cp = Cpk, the process is centered. Using these examples as a model, calculate Cp and Cpk for each of the two previously created process capability diagrams.
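The worked example can be checked in a few lines of Python (a sketch; the `cpk` helper is ours, implementing the standard min-of-two-distances form described above):

```python
def cpk(mean, sigma, usl, lsl):
    # Distance from the mean to the NEAREST spec limit, divided by 3 sigma
    return min(usl - mean, mean - lsl) / (3 * sigma)

# X-bar = 10, sigma = 1, specifications = 10 +/- 4: both distances equal 4
result = cpk(mean=10, sigma=1.0, usl=14, lsl=6)   # min(4, 4) / 3
```

Because the mean sits exactly between the limits, Cpk equals Cp (1.33), confirming the process is centered.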

Calculation Answer #1
To calculate Cp, the tolerance band (USL - LSL) is divided by the process spread (6 sigma).

Given: X-bar = 7, Sigma = 1, and Specifications = 10 +/- 4
Cp = (14 - 6) / 6(1) = 8 / 6 = 1.33

To calculate Cpk, subtract the mean from the nearest spec limit (here the LSL), then divide by 3 sigma.

Given: X-bar = 7, Sigma = 1, and Specifications = 10 +/- 4
Cpk = (7 - 6) / 3(1) = 1 / 3 = 0.33

Calculation Answer #2
To calculate Cp, the tolerance band (USL - LSL) is divided by the process spread (6 sigma).

Given: X-bar = 10, Sigma = 0.5, and Specifications = 10 +/- 1
Cp = (11 - 9) / 6(0.5) = 2 / 3 = 0.67

To calculate Cpk, subtract the mean from the nearest spec limit, then divide by 3 sigma.

Given: X-bar = 10, Sigma = 0.5, and Specifications = 10 +/- 1
Cpk = (11 - 10) / 3(0.5) = 1 / 1.5 = 0.67

Note: When both Cp and Cpk are less than 1, the process variation is too great to meet the specification.
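Both answers can be reproduced with one small routine (an illustrative helper, not from the source; it assumes a normally distributed, in-control process):

```python
def capability(mean, sigma, usl, lsl):
    """Return (Cp, Cpk) from the mean, sigma, and specification limits."""
    cp = (usl - lsl) / (6 * sigma)
    cpk = min(usl - mean, mean - lsl) / (3 * sigma)
    return cp, cpk

# Answer #1: X-bar = 7, sigma = 1, specs 10 +/- 4 -> off-center process
answer1 = capability(mean=7, sigma=1.0, usl=14, lsl=6)
# Answer #2: X-bar = 10, sigma = 0.5, specs 10 +/- 1 -> centered but too wide
answer2 = capability(mean=10, sigma=0.5, usl=11, lsl=9)
```

Answer #1 shows Cp > 1 but Cpk well below 1 (a capable spread, badly centered); Answer #2 shows Cp = Cpk < 1 (centered, but the variation itself is too great).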

Pp Index
Pp (process performance) is an index used to determine whether the data points from a data set fit within the customer's specification limits. As an estimate used to measure the performance of a system in meeting customer demands or needs, Pp does not take into account whether the process is centered at its mean. There is a computational difference between Cp and Pp: Cp uses the short-term (within-subgroup) standard deviation, estimated from control chart data as R-bar/d2, while Pp uses the long-term (overall) standard deviation, calculated from all actual data points using the sample standard deviation formula.
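The two sigma estimates can be sketched side by side (a minimal illustration; the function names are ours, and the example assumes subgroups of size 5, whose d2 control chart constant is 2.326):

```python
import statistics

D2_N5 = 2.326  # control chart constant d2 for subgroup size n = 5

def sigma_within(subgroup_ranges, d2=D2_N5):
    # Short-term (within-subgroup) estimate used for Cp/Cpk: R-bar / d2
    return statistics.mean(subgroup_ranges) / d2

def sigma_overall(data):
    # Long-term (overall) estimate used for Pp/Ppk: sample standard deviation
    return statistics.stdev(data)
```

For a drifting process, `sigma_overall` will exceed `sigma_within`, which is why Pp can be noticeably worse than Cp for the same data.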

Sigma and Process Capability

When means and variances wander over time, a standard deviation (symbolized by the Greek letter σ) is the most common way to describe how data in a sample varies from its mean. A Six Sigma goal is to have 99.99966% error-free work (reducing the defects to 3.4 per million). By computing sigma and relating it to a process capability index such as Ppk, you can determine the number of nonconformances (or failure rate) produced by the process. To compute sigma (σ) for a population, use the following equation:

σ = √( Σ(xi - x̄)² / N )

Where: N = the number of items in the population, x̄ = the mean of the population data, and xi = each data point.

To use the equation:
- For each value xi, calculate the difference between x̄ (the mean) and xi
- Calculate the squares of these differences
- Find the average of the squared differences (by dividing by N); this equals the variance σ²
- Compute the square root of the variance to obtain sigma
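These four steps translate directly into code (a sketch of the population formula, which divides by N rather than N - 1):

```python
import math

def population_sigma(data):
    n = len(data)
    mean = sum(data) / n                         # x-bar
    sq_diffs = [(x - mean) ** 2 for x in data]   # squared differences
    variance = sum(sq_diffs) / n                 # divide by N -> variance
    return math.sqrt(variance)                   # square root -> sigma

# For [2, 4, 4, 4, 5, 5, 7, 9]: mean = 5, variance = 4, sigma = 2.0
sigma = population_sigma([2, 4, 4, 4, 5, 5, 7, 9])
```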

Ppk Index
Ppk is a process capability index that determines the position of the data points relative to the USL and LSL. The Ppk index is similar to its Cpk counterpart: it combines the spread and the centering of the process into a single measure, and it penalizes the user if the center (mean) is too close to one of the specifications.
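Ppk takes the same min-of-two-distances form as Cpk, but substitutes the overall (long-term) standard deviation computed from all the data (a sketch; the helper name is ours):

```python
import statistics

def ppk(data, usl, lsl):
    mean = statistics.mean(data)
    s = statistics.stdev(data)   # overall sample standard deviation
    # Same structure as Cpk: nearest-spec distance over 3 standard deviations
    return min(usl - mean, mean - lsl) / (3 * s)

# Centered data with sigma = 1 and specs 10 +/- 3 gives Ppk = 1.0
example = ppk([9, 10, 11], usl=13, lsl=7)
```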

Data for Capability

Since Six Sigma initially focused on manufacturing, process capability refers to a machine's ability to complete a specific task and achieve measurable results to customer specifications. Process capability studies use two data types: attribute and variable. Attribute data (also called discrete data) is countable data used for two purposes:
- Computing proportions to show the problem's extent
- Qualifying quantities to show the nature of the problem

Variable data (also called continuous data) is measurable, more accurate, and more costly.

Short-term Capability
Process capability may be examined as both short-term and long-term capability. Short-term capability is measured over a very short time period, since it focuses on the machine's ability based on its design and quality of construction. By focusing on one machine with one operator during one shift, you limit the influence of outside long-term factors, including:
- operator
- environmental conditions such as temperature and humidity
- machine wear
- different material lots

Thus, short-term capability measures the machine's ability to produce parts with a specific variability based on the customer's requirements. Short-term capability uses a limited amount of data, relative to a short time and number of pieces produced, to remove the effects of long-term components. If the machine itself is not capable of meeting the customer's requirements, process changes may have only a limited impact on its ability to produce acceptable parts. Remember, though, that short-term capability only provides a snapshot of the situation. Since short-term data does not contain any special cause variation (such as that found in long-term data), short-term capability is typically rated higher.

Measure Module: Lesson Summary

Measure is about measuring what is measurable. Philip Crosby stated, "Quality measurement is effective only when it is done in a manner that produces information that people understand and use."

Process analysis and documentation: Measuring the existing process establishes valid and reliable data for monitoring progress by completing a SIPOC, process maps, and fishbone diagrams. By validating work instructions and procedures, you can identify processes, information, suppliers, and customers.

Probability and statistics: Statistics are built upon knowledge about populations, sampling, probability, and drawing conclusions.

Collecting and summarizing data: Measure is about data: data to be identified, collected, described, and displayed. Different types of data (variable and attribute) undergo different analyses. A sound sampling technique assures data accuracy and integrity. Tools such as stem-and-leaf plots, box-and-whisker plots, run charts, scatter diagrams, Pareto charts, histograms, and probability plots depict relationships within the data.

Probability distributions: Different probability distributions (normal, Poisson, binomial, and chi-square) describe the data and lead the team down the hypothesis testing roadmap during Analyze.

Measurement system analysis: Repeatability and reproducibility (Gauge R&R), correlation, linearity, percent agreement, and precision/tolerance are tools that measure the capability of the people doing the process. By discovering whether these people are performing and consistently repeating the standards and expectations, these tools pinpoint training issues.

Process capability and performance: Process capability studies link the voice of the customer to the voice of the process. The customer sets the target and specification limits, while the provider must measure the process's results and compare them to the customer's expectations. Process performance indices such as Cp, Cpk, Pp, Ppk, and Cpm are numerical values indicating where the process lies in terms of targets, specifications, and sigma levels.

Analyze, the third phase of DMAIC, will further explain how to determine and analyze the root cause(s) of the defects.