
Review: Acceptance Sampling

Abstract

Standards for quality control rise as technology is miniaturized and advances. Traditional sampling plans require relatively large sample sizes that are not economical, so advanced sampling plans are needed to meet these higher standards. Several approaches have been proposed to reduce the sample size while controlling the correctness of a test. This paper reviews basic approaches and their implications, including some principles of sample size reduction. By reading it, one can understand the basic concepts of acceptance sampling plans and how to calculate the corresponding probabilities. It is intended for readers who are interested in sampling plans but do not know where to start.

Introduction

There are numerous ways to check the quality of a single product. To test the quality of a lot, however, there are two main approaches: testing every product one by one, or testing a sample drawn from the lot. The former persuades a buyer easily but is expensive; the latter is cheaper but harder to use to persuade a buyer.

Producers make a large number of products and prefer to test a sample, because testing the entire population is too expensive. Testing only a sample also lowers the price of the product, which benefits buyers. Testing only a sample is therefore the prevalent approach to quality control.

The question is the appropriate sample size \(n\). Producers want the sample size to be as small as possible, because a smaller sample costs less to test. However, sampling a single product is not acceptable for every test, because buyers will not trust the result. One must therefore choose a sample size that satisfies both producer and consumer.

The decision-making process of both producers and consumers needs appropriate standards. The usual standards are the producer risk (\(\alpha\)) and the consumer risk (\(\beta\)), which are the probabilities of each agent making a wrong decision. Producers vary the sample size while keeping the producer risk below some level, and consumers are satisfied if the suggested sample size keeps the consumer risk below their own threshold.

Several studies calculate appropriate sample sizes using various sampling models. Samohyl [1] used the hypergeometric model, and Suresh [2] used the binomial and Poisson models. However, the binomial and Poisson models give only approximate results. Moreover, these discrete probabilistic models apply only to rejection rules with integer-valued thresholds.

In this paper, two decision-making processes are reviewed. The first is the single sampling plan, which makes decisions based only on the current sample. The second is the chain sampling plan, which uses the current sampling result together with previous sampling results; a Bayesian variant of the chain sampling plan is also covered. The concepts needed to make a decision are introduced along the way.


Collecting references

The main purpose of this review is to understand the decision-making process of the agents and to introduce some simple models (e.g., single sampling plans, chain sampling plans, Bayesian sampling plans) used to make a decision. Since more complex models are derived from these simple ones, readers will be able to understand the complex models more easily. Readers will also notice some principles of sample size reduction, which are of great importance for developing models.

The paper by Samohyl [1] was recommended to me by my thesis advisor. It is appropriate for beginners since it considers the simplest case while explaining essential concepts such as LTPD, AQL, consumer risk, producer risk, and the true defective rate \(p\). It showed that the sample size \(n\) can be reduced by using the hypergeometric distribution instead of the binomial or Poisson distribution, so readers can observe one reason for sample size reduction.

After reading [1], one may wonder whether there is a method that uses several previous test results to make a decision. The paper by Dodge [4], found in the references of [1], suggests a partial answer. Through the characteristic function, the main analyzing tool of that paper, readers can observe that using previous information can lead to sample size reduction. The paper also has historical value, since the author is a pioneer of quality control and the introduced method is the first attempt to integrate previous test results into decision-making.

For prediction problems, Bayesian approaches perform well since they reduce variance substantially while giving up some bias, and calculating the risks of the agents is a kind of prediction problem. The paper by Suresh [2] was found by a Google search with the keyword "Bayesian chain sampling plan". Several papers match this keyword, but [2] was selected because it shows a specific example of the Bayesian approach without complex models. It introduces a Bayesian approach to the decision-making process by imposing a gamma distribution on the true defective rate \(p\).

The book by Edward G. Schilling and Dean V. Neubauer [3] was found by a Google search with the keyword "acceptance sampling textbook". It covers the decision-making process comprehensively, but its content is too broad for beginners. The book is therefore selected (not reviewed) because it serves well as a dictionary and as a further reference. In particular, it can help readers who are not familiar with probability distributions.

Reviews

Single sampling plan

The paper published by Samohyl [1] introduced the single sampling plan based on the hypergeometric distribution and the relevant essential concepts, such as consumer risk and Lot Tolerance Percent Defective. The author selected the hypergeometric distribution because it is superior in the sense of suggesting a smaller \(n\) than the binomial and Poisson distributions.

An easy way for consumers to evaluate the quality of a lot is to decide the maximum acceptable number of defective items. The Lot Tolerance Percent Defective (LTPD) is this maximum number divided by the lot size \(N\). If the defective rate of a lot \(p\) is smaller than the LTPD, consumers accept the lot. The Acceptable Quality Level (AQL) is the analogous concept for producers: producers reject a lot if \(p\) is larger than the AQL. Thus AQL and LTPD can be thought of as subjective lot quality standards of producers and consumers, respectively. AQL and LTPD can differ, since producers generally hold stricter standards than consumers. Note that nobody can know the true \(p\), since determining it would require inspecting the entire lot.

Since the true \(p\) is unknown, accepting or rejecting a lot by comparing \(p\) directly with the LTPD (or AQL) is impossible. Instead, the agents can make a decision as follows: draw a sample of size \(n\) from the lot and accept the lot if the number of defective items \(d\) in the sample does not exceed a predetermined cutoff value \(c\). The triple \((N,n,c)\) is called a sampling plan [1]. Each agent wants to settle on a common sampling plan that satisfies its needs. Consumers prefer a smaller \(c\) and a larger \(n\), since such a pair means many tests with a tight standard; producers want a smaller \(n\) and a larger \(c\) for the opposite reason. There is therefore a conflict between the stakeholders when choosing an appropriate sampling plan.

After determining the acceptance rule for a lot, each agent can evaluate its risk. The producer risk (\(\alpha\)) is the probability of rejecting a good-quality lot, i.e. \(Pr(\text{reject} \mid p \le AQL)\). The consumer risk (\(\beta\)) is the probability of accepting a bad-quality lot, \(Pr(\text{accept} \mid p \ge LTPD)\). Both agents must decide on a sampling plan \((N,n,c)\) while keeping their risks below certain levels. Note that \(\alpha\) and \(\beta\) correspond to type I and type II errors, respectively, when the null hypothesis is that the lot quality is good.

By varying \(p\) and using an appropriate probabilistic model (e.g., hypergeometric, binomial, or Poisson), \(\alpha\) and \(\beta\) can be calculated numerically for a given sampling plan \((N,n,c)\). Since \(p\) is unknown, setting \(p=AQL\) when computing \(\alpha\) and \(p=LTPD\) when computing \(\beta\) yields the most conservative risks.

Figure 1 plots the producer risk and consumer risk for the sampling plan \((3000,1600,c)\), where \(c\) varies from 15 down to 0. The LTPD and AQL are set to \(1\%\) and \(0.5\%\), respectively. The horizontal and vertical lines mark \(0.05\), the predetermined threshold for the consumer and producer risks. The sampling plan \((3000,1600,11)\) satisfies both thresholds and is therefore an appropriate plan for both agents. In general, when both agents consider a single sampling plan and the thresholds are given, they can determine an appropriate \(n\) and \(c\) by plotting \(\alpha\) and \(\beta\).

Figure 1. The plot of producer risk and consumer risk for (3000, 1600, c).
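The calculation behind Figure 1 can be sketched with the hypergeometric distribution from scipy. This is a minimal sketch, assuming the acceptance rule \(d \le c\) and the lot size, sample size, AQL, LTPD, and 0.05 thresholds quoted above.

```python
# A minimal sketch of the Figure 1 risk calculation (hypergeometric model).
from scipy.stats import hypergeom

N, n = 3000, 1600            # lot size and sample size used in Figure 1
AQL, LTPD = 0.005, 0.01      # producer's and consumer's quality standards
threshold = 0.05             # risk threshold assumed for both agents

D_good = round(N * AQL)      # defectives in a borderline "good" lot (p = AQL)
D_bad = round(N * LTPD)      # defectives in a borderline "bad" lot (p = LTPD)

for c in range(16):
    alpha = hypergeom.sf(c, N, D_good, n)   # producer risk: reject although p = AQL
    beta = hypergeom.cdf(c, N, D_bad, n)    # consumer risk: accept although p = LTPD
    ok = "both risks <= 0.05" if max(alpha, beta) <= threshold else ""
    print(f"c = {c:2d}  alpha = {alpha:.4f}  beta = {beta:.4f}  {ok}")
```

The printed table should roughly reproduce Figure 1; per the review, \(c = 11\) is the acceptance number that satisfies both thresholds.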

However, the same sampling plan evaluated with the binomial and Poisson models does not satisfy the thresholds: these models are asymptotic approximations of the hypergeometric distribution and therefore require a larger sample size. Thus the hypergeometric model is appropriate for the exact calculation of risks. Notice that reducing approximation error is one of the principles of sample size reduction.
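To see how much the approximations inflate the risks, the same plan can be re-evaluated under the binomial and Poisson models. This is a sketch under the Figure 1 settings with \(c = 11\).

```python
# Compare exact hypergeometric risks with binomial and Poisson approximations.
from scipy.stats import hypergeom, binom, poisson

N, n, c = 3000, 1600, 11
AQL, LTPD = 0.005, 0.01

models = {
    "hypergeometric": (hypergeom.sf(c, N, round(N * AQL), n),
                       hypergeom.cdf(c, N, round(N * LTPD), n)),
    "binomial":       (binom.sf(c, n, AQL),
                       binom.cdf(c, n, LTPD)),
    "poisson":        (poisson.sf(c, n * AQL),
                       poisson.cdf(c, n * LTPD)),
}

for name, (alpha, beta) in models.items():
    print(f"{name:15s} alpha = {alpha:.4f}  beta = {beta:.4f}")
```

Under the binomial and Poisson approximations, the computed risks should exceed the 0.05 threshold, which is why a larger sample size is needed when those models are used.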

Chain sampling plan

For producers, it is necessary to reduce the sample size for economic reasons. The single sampling plan described above uses only the result of the current test. However, stakeholders' decision-making can also take previous test results into account, and this additional information can lead to a smaller sample size. The group of sampling plans that take previous results into account is called cumulative results plans [3].

The primary example of a cumulative results plan is the chain sampling plan (ChSP-1) proposed by Dodge [4]. The author improved the single sampling plan with \(c = 0\) by considering several previous test results together. The paper also explains the notion of a characteristic function, which makes specifying the risks easier. ChSP-1 is specified by \((n,i)\), where \(n\) is the sample size and \(i\) is the number of previous samples. Decision-makers accept a lot if no defective items are observed in the current sample, or if exactly one defective item is observed and the preceding \(i\) samples contained no defective items. The probability of accepting a lot (i.e., the characteristic function) is \(p(0)+p(1)[p(0)]^i\), where \(p(j)\) is the probability of observing \(j\) defective items in a sample. Note that ChSP-1 with \(i = \infty\) reduces to the single sampling plan \((N,n,0)\) under the binomial distribution.
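A minimal sketch of the ChSP-1 characteristic function follows, assuming a binomial model for \(p(j)\); the values \((n,i)=(10,1)\) are the Figure 2 setting, and the grid of \(p\) values is only illustrative.

```python
# ChSP-1 acceptance probability under a binomial sampling model.
from scipy.stats import binom

def chsp1_pa(p, n, i):
    """Characteristic function P_a = p(0) + p(1) * p(0)**i,
    with p(j) the binomial probability of j defectives in a sample of size n."""
    p0 = binom.pmf(0, n, p)
    p1 = binom.pmf(1, n, p)
    return p0 + p1 * p0 ** i

n, i = 10, 1                       # Figure 2 setting
for p in (0.01, 0.02, 0.05, 0.10, 0.20):
    print(f"p = {p:.2f}  Pa = {chsp1_pa(p, n, i):.4f}")
```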

Figure 2 plots the characteristic function against \(p\) from 0 to 0.2 for \((n,i)=(10,1)\) under the binomial sampling model. The producer risk is the vertical distance from \((AQL,f(AQL))\) to the line \(y=1\), and the consumer risk is the vertical distance from \((LTPD,f(LTPD))\) to the line \(y=0\).

Figure 2. Graph of characteristic function with (n, i) = (10, 1), based on the binomial distribution.

As \(i\) grows, the characteristic function declines faster in the region where \(p\) is relatively small, and more slowly where \(p\) is large. Because \(LTPD > AQL\) in general, this means the producer risk can be controlled by the choice of \(i\) without a significant loss of consumer risk. Since the binomial distribution is an approximation of the hypergeometric distribution, one may calculate more accurate characteristic functions when the lot size \(N\) is specified. Note that using previous test results is another principle of sample size reduction.
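To inspect the effect of \(i\), one can evaluate the same characteristic function at a small and a larger \(p\) while increasing \(i\). The specific values of \(n\), \(p\), and \(i\) below are illustrative assumptions only.

```python
# How the ChSP-1 acceptance probability changes as the number of chained lots i grows.
from scipy.stats import binom

def chsp1_pa(p, n, i):
    p0, p1 = binom.pmf(0, n, p), binom.pmf(1, n, p)
    return p0 + p1 * p0 ** i

n = 10
for i in (1, 2, 5, 20):
    small_p = chsp1_pa(0.01, n, i)   # acceptance probability at a small p (AQL-like)
    large_p = chsp1_pa(0.10, n, i)   # acceptance probability at a larger p (LTPD-like)
    print(f"i = {i:2d}  Pa(p=0.01) = {small_p:.4f}  Pa(p=0.10) = {large_p:.4f}")
# As i grows, both values approach the c = 0 single sampling plan, binom.pmf(0, n, p).
```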

Bayesian chain sampling plan (BChSP-1)

Since producers have recorded a large number of test results, one may use a Bayesian approach by imposing an appropriate prior distribution (e.g., beta, gamma) on \(p\) based on the accumulated records. For example, the approach proposed by Suresh [2] used the Poisson sampling distribution with a gamma prior distribution.

The characteristic function of ChSP-1 based on the Poisson distribution is \(Pr(n, i \mid p) = e^{-np} + np\, e^{-np(1+i)}\), when there are \(i\) immediately preceding tests with sample size \(n\). By choosing an appropriate shape parameter \(s\) and rate parameter \(t\), the histogram of observed \(p\) can be approximated by the gamma density \(Pr(p) = \frac{t^s p^{s-1} e^{-tp}}{\Gamma(s)}\). The Poisson distribution and the gamma prior are chosen because they are conjugate and because the gamma distribution is flexible enough to fit a wide range of histograms.
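A small sketch of these two ingredients, the Poisson-based characteristic function and the gamma prior density, is given below; the parameter values \(n\), \(i\), \(s\), and \(t\) are illustrative assumptions, not values from [2].

```python
# Poisson-based ChSP-1 characteristic function and a gamma prior on p.
import numpy as np
from scipy.stats import gamma

def oc_poisson(p, n, i):
    """Pr(n, i | p) = exp(-n p) + n p * exp(-n p (1 + i))."""
    return np.exp(-n * p) + n * p * np.exp(-n * p * (1 + i))

s, t = 2.0, 400.0                     # illustrative gamma shape and rate (prior mean 0.5%)
prior = gamma(a=s, scale=1.0 / t)     # scipy parameterizes the gamma by scale = 1 / rate

for p in (0.001, 0.005, 0.01, 0.02):
    print(f"p = {p:.3f}  OC = {oc_poisson(p, n=50, i=2):.4f}  prior density = {prior.pdf(p):7.2f}")
```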

The average probability of acceptance is \(\bar{P}(\mu) = \int_0^\infty Pr(n, i \mid p)\,Pr(p)\, dp = \frac{s^s}{(s+n\mu)^s} + \frac{n\mu\, s^{s+1}}{(s + n\mu + in\mu )^{s+1}}\), where \(\mu = s/t\) is the prior mean of \(p\). Note that \(\bar{P}\) is a weighted mean of the characteristic function. Since \(p\) is integrated out, the decision to accept a lot must be based on \(\mu\). The paper [2] describes decision rules based on quantiles of \(\bar{P}(\mu)\), called quality decision regions.
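The closed form above can be checked numerically by integrating the characteristic function against the gamma prior. The sketch below again uses illustrative parameter values.

```python
# Check the averaged acceptance probability P_bar(mu) against numerical integration.
import numpy as np
from scipy.integrate import quad
from scipy.stats import gamma

n, i = 50, 2                 # illustrative sample size and number of chained lots
s, mu = 2.0, 0.005           # illustrative gamma shape and prior mean mu = s / t
t = s / mu                   # gamma rate parameter

def oc_poisson(p):
    return np.exp(-n * p) + n * p * np.exp(-n * p * (1 + i))

# Closed form: s^s / (s + n mu)^s + n mu s^(s+1) / (s + n mu + i n mu)^(s+1)
closed = s**s / (s + n * mu)**s + n * mu * s**(s + 1) / (s + n * mu + i * n * mu)**(s + 1)

# Numerical check: expectation of the characteristic function under the gamma(s, rate=t) prior.
numeric, _ = quad(lambda p: oc_poisson(p) * gamma.pdf(p, a=s, scale=1.0 / t), 0.0, np.inf)

print(f"closed form = {closed:.6f}   numerical integral = {numeric:.6f}")
```

The two printed values should agree up to numerical integration error.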

Calculating the posterior distribution of \(p\) is the prevalent Bayesian approach. Instead, Suresh calculated a weighted average of the characteristic function for a specified \((s, i)\) to obtain a "Bayesian characteristic function". The author did not provide evidence of sample size reduction, because BChSP-1 does not use the same lot rejection rule as ChSP-1. However, the paper suggested a new scheme of decision-making.

Discussion and Conclusion

Limitations

Nowadays, producers and consumers want defective rates far below \(0.5\%\) as technology advances. Applying ChSP-1 or the single sampling plan with the hypergeometric distribution to \(p\) around 1 ppm while controlling the risks requires a very large \((N, n, i)\). If \(p = 10^{-6}\) and \(N = 10^4\), the hypergeometric approach cannot even be used, because the expected number of defective items in the lot, \(Np = 0.01\), is less than one, so the model effectively contains no defective items. The binomial and Poisson models tend to exaggerate the risks, so they require a large sample size, which is not preferred.

If products are sold to multiple consumers, producers will need multiple standards. For example, industries making semiconductor wafers sell their product to companies making SoCs, RAM, or SSDs. In this case, producers need more complex sampling plans to reduce cost while assuring each consumer. However, ChSP-1 and the single sampling plan cannot be applied to this case.

The models introduced in this paper are based on the hypothesis of independent and identically distributed sampling. However, the status of products may be influenced by the condition of the factory, so this hypothesis may not hold in a real situation. In this case, one may impose a correlation between samples to improve performance.

Conclusion

Reducing the sample size is an important issue in quality control. In this paper, single sampling plans, chain sampling plans, Bayesian chain sampling plans, and the corresponding basic concepts (e.g., consumer and producer risk, LTPD, AQL) have been reviewed. Some principles of sample size reduction are revealed by investigating the ideas behind these models.

In single sampling plans, producers were able to reduce the sample size by using a more accurate sampling distribution. In chain sampling plans, using previous test results reduced the sample size. Bayesian chain sampling plans were worth reviewing because they suggest a new method of decision-making based on quality decision regions, although they did not result in sample size reduction.

However, complex situations make these models less useful, as described in the limitations. To find a more accurate model that reduces the sample size, it is necessary to correct unrealistic hypotheses, integrate the principles of sample size reduction, and develop new methods of decision-making. Adapting to complex situations will be the next problem of quality control.

References

  1. Samohyl, R.W. (2018). Acceptance sampling for attributes via hypothesis testing and the hypergeometric distribution. Journal of Industrial Engineering International, 14, 395-414.
  2. Suresh, K., & Sangeetha, V. (2011). Construction and selection of Bayesian chain sampling plan (BChSP-1) using quality regions. Modern Applied Science, 5(2), 226-234. doi:10.5539/mas.v5n2p226
  3. Schilling, E.G., & Neubauer, D.V. (2017). Acceptance Sampling in Quality Control. Chapman and Hall/CRC Press.
  4. Dodge, H.F. (1955). Chain sampling inspection plan. Industrial Quality Control, 11(4), 10-13.