SQC refers to the use of statistical methods to improve quality and thereby enhance customer satisfaction. It is a strategy for reducing variability, the root cause of many quality problems. However, this task is seldom trivial because real-world processes are affected by numerous uncontrolled factors. For example, within every factory, conditions fluctuate with time: variations occur in incoming materials, in machine conditions, in the environment, and in operator performance. Many of these variations cannot be predicted with certainty, though it is sometimes possible to trace unusual patterns of variation to their root causes. If we have collected sufficient data on these variations, we can tell, in terms of probability, what is most likely to occur next if no action is taken. Knowing what is likely to occur under the given conditions, we can take suitable action to maintain or improve the acceptability of the output. This is the rationale of statistical quality control.
On a control chart, a certain characteristic of the product is plotted. Under normal conditions, the plotted points are expected to vary in a usual way on the chart. When abnormal points or patterns appear on the chart, it is a statistical indication that the process parameters or production conditions may have changed undesirably. At this point an investigation is conducted to discover the unusual or abnormal condition (e.g., tool breakdown, use of wrong raw material, temperature controller failure). Corrective actions are then taken to remove the abnormality. In summary, SPC aims at maintaining a stable, capable and predictable process. Example: Consider a detergent filling machine. If we make n observations of the working of the machine and note the filled weight of each packet, we can compute the mean and standard deviation of these observations. We can also take several such samples of n observations and construct a distribution of the sample means. By the central limit theorem, this distribution of sample means is approximately normal.
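The detergent-filling example above can be sketched in a few lines of Python. The data here are simulated: the target weight of 500 g, the standard deviation, and the sample sizes are illustrative assumptions, not values from the text; on a real line the observations would come from weighing filled packets.

```python
import random
import statistics

random.seed(42)

# Hypothetical detergent filling machine: target 500 g with random variation.
# These parameters are assumed for illustration only.
def draw_sample(n, mu=500.0, sigma=2.0):
    """Simulate n filled-packet weights (one sample of size n)."""
    return [random.gauss(mu, sigma) for _ in range(n)]

n = 5             # observations per sample
num_samples = 50  # number of samples of size n

# Distribution of sample means, as described in the text
sample_means = [statistics.mean(draw_sample(n)) for _ in range(num_samples)]

grand_mean = statistics.mean(sample_means)
mean_sd = statistics.stdev(sample_means)  # roughly sigma / sqrt(n) by the CLT

print(f"Grand mean of sample means: {grand_mean:.2f} g")
print(f"Std dev of sample means:    {mean_sd:.2f} g")
```

The standard deviation of the sample means comes out close to sigma/sqrt(n), which is why averaging samples narrows the distribution relative to individual observations.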
Variability due to assignable causes: This refers to variation that can be linked to specific or special causes that disturb the process. Examples are tool failure, power supply interruption, process controller malfunction, and adding wrong ingredients or wrong quantities. Assignable causes are few in number and are usually identifiable through investigation on the shop floor. The variation caused by an assignable factor is usually large and detectable when compared with the inherent variability of the process. From a statistical point of view, assignable causes affect the distribution of the process in more ways than one: the average may shift away from the desired level, or the process may become skewed to one side. How do we know whether the observed changes are due to random variation or to assignable causes?
Normal Distribution
A normal distribution is characterized by the location of its average, the spread of the distribution and its shape. It is symmetric about the centre, and its spread is a direct function of the standard deviation: if the standard deviation is large, the spread is large, and vice versa. It is well known that the area within three standard deviations on either side of the mean of a normal distribution is 99.7%. Therefore, if one gathers observations from a process and plots the means of the samples, the result approximates a normal distribution curve, which helps one judge whether the observed variations are due to random fluctuations or otherwise. If the observed sample means fall within ±3σ of the centre, one can conclude that with 99.7% probability the variations are due to random causes. Conversely, if observed values fall outside these limits, they may be due to assignable causes.
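The ±3σ rule described above can be sketched as a simple classification of plotted points. The sample-mean values below are hypothetical, and the centre line and control limits are estimated from the data themselves, as is common in practice.

```python
import statistics

# Hypothetical sample means of filled weights (grams), assumed for illustration
sample_means = [500.1, 499.8, 500.4, 499.6, 500.2, 499.9, 500.3, 500.0]

center = statistics.mean(sample_means)
sigma = statistics.stdev(sample_means)

ucl = center + 3 * sigma  # upper control limit
lcl = center - 3 * sigma  # lower control limit

def classify(x):
    # Inside +/-3 sigma: treat as random (common-cause) variation;
    # outside: a statistical signal of a possible assignable cause.
    return "random" if lcl <= x <= ucl else "assignable?"

# Classify the observed means plus one deliberately out-of-control point
for x in sample_means + [503.5]:
    print(f"{x:6.1f} -> {classify(x)}")
```

A point flagged "assignable?" is only a signal to investigate; the chart itself does not identify the cause, as the text notes.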
Important Terms related to Acceptance Sampling: Acceptance sampling is used to accept or reject a lot of items based on the observed number of defects in a sample drawn from the lot. The key assumption behind the approach is that the sample is random, i.e. every item in the lot has an equal probability of being sampled.
A sampling plan describes the lot size (N), the size of the sample to be assessed (n), the acceptance number (c), i.e. the number of defects permissible in the observed sample, and whether it is a single sampling plan or a multiple sampling plan. In the case of a multiple sampling plan, the sample size (n1, n2 and so on) and the acceptance number (c1, c2 and so on) need to be specified for each stage. The actual number of defects detected in the sample is denoted by d. Single Sampling Plan: It is denoted by (N, n, c) and works as follows. From a lot of N items, sample n items for quality assessment. If the number of defects d observed in the sample is less than or equal to c, the lot is accepted. On the other hand, if d > c, the lot is rejected and is subjected to 100% quality assessment. Acceptable Quality Level: AQL is the percent defective that the buyer is willing to tolerate in lots delivered by the supplier. For instance, the buyer may say that 1% defective is tolerable in the long run.
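The single sampling plan rule above can be sketched as follows. The lot, its 2% defective rate, and the function name are illustrative assumptions; items are coded 1 for defective and 0 for good.

```python
import random

def single_sampling_decision(lot, n, c):
    """Draw a random sample of n items from the lot and accept it
    if the number of defects d is at most the acceptance number c."""
    sample = random.sample(lot, n)
    d = sum(sample)  # items coded 1 = defective, 0 = good
    return ("accept" if d <= c else "reject"), d

random.seed(1)
N = 1000
# Hypothetical lot containing 2% defectives
lot = [1] * 20 + [0] * (N - 20)
random.shuffle(lot)

decision, d = single_sampling_decision(lot, n=50, c=2)
print(f"defects in sample d = {d}, decision: {decision}")
```

If the decision is "reject", the text's rule would then send the entire lot for 100% inspection rather than discarding it outright.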
Lot Tolerance Percent Defective: LTPD is the worst quality beyond which the manufacturer is not willing to accept an incoming lot. For example, the manufacturer may say that, in the worst case, he or she will not accept a lot with more than 10% defectives. However, because acceptance sampling makes a decision on the entire lot from the imperfect information contained in only a random sample, there is an element of risk for both the supplier and the buyer. Producer's risk: the risk the supplier faces of having a lot rejected even though it conforms to the agreed terms; this is known as producer's risk, or a type I error in statistical terms, and is denoted by α. Consumer's risk: the risk the manufacturer faces of accepting a lot that is beyond his or her tolerance level; this is known as consumer's risk, or a type II error in statistical terms, and is denoted by β.
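The two risks can be quantified with the binomial distribution: the probability of accepting a lot whose true fraction defective is p is Pa(p) = sum over d = 0..c of C(n, d) p^d (1 - p)^(n - d). The producer's risk is then alpha = 1 - Pa(AQL) and the consumer's risk is beta = Pa(LTPD). The plan parameters below (n = 50, c = 2) are illustrative assumptions; the AQL and LTPD values are the 1% and 10% figures mentioned in the text.

```python
from math import comb

def prob_accept(p, n, c):
    """Probability of accepting a lot with true fraction defective p,
    under a binomial model of the (n, c) single sampling plan."""
    return sum(comb(n, d) * p**d * (1 - p)**(n - d) for d in range(c + 1))

n, c = 50, 2
aql, ltpd = 0.01, 0.10   # 1% and 10% defective, from the text

alpha = 1 - prob_accept(aql, n, c)   # producer's risk (type I error)
beta = prob_accept(ltpd, n, c)       # consumer's risk (type II error)

print(f"alpha (producer's risk) = {alpha:.4f}")
print(f"beta  (consumer's risk) = {beta:.4f}")
```

Plotting Pa(p) against p gives the operating characteristic (OC) curve of the plan; choosing n and c is a trade-off that moves alpha and beta in opposite directions.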