MSA Cpk Ppk SPC CI
Category: Information. Description: The MSA Cpk Ppk SPC CI app was created using Appy Pie and belongs to the Information category.
About Us
This application presents (in the "MSA Cpk Ppk SPC CI" tab feature): 1. Process capability measures typically used, with examples, computation procedures, and their interpretation. 2. Variable Gauge R&R / Measurement System Analysis (MSA) designed tests, with computation procedures and their interpretation. 3. Confidence interval measures typically used (including sample size), with examples, computation procedures, and their interpretation. 4. Statistical Process Control (SPC) / statistical quality control: how to calculate control limits, rational subgroups, reaction plans, and sampling guidelines for control charts/SPC, among other SPC topics.
The developer hopes that you enjoy this mobile application and that you are able to use it on a regular basis. The developer looks forward to continuing to add chapters on statistical topics to this application, as well as more tab features on other topics for users.
Founded: 2016
About the Users: Message from the Developer
Mission: To be a knowledge and skills enabler for users in any location in the world, and to serve as a mobile consulting application for statistical topics, starting with Process Capability, Measurement System Analysis (Variable Gauge R&R), Confidence Intervals, and Statistical Process Control (SPC).
Web URL: https://www.linkedin.com/in/sixsigmaleanqualitycontrolqualityassurance115188128/
Videos
Minitab Videos
How to Assess Process Capability Using Minitab
Minitab Introduction and Overview
Control
Click on the home button at the end of each section (after the Chapter Content List) to return here.

Chapter Content List by Section Number (click the link next to the number on the chapter content list below to see the specific section immediately):

Chapter 1. Statistical Process Control (SPC) for Variable and Attribute Data Content List:
1. Purpose
2. SPC History
3. History of Ages
4. Terminology/Definitions
5. Notation
6. What is Statistical Process Control (SPC) and Why Use It?
7. Types of Control Charts and How to Select a Control Chart
8. Elements of a Control Chart
9. Control Limits versus Specification Limits
10. Control Charts for Continuous (Variable) Data
11. Control Limits Formulas for Continuous (Variable) Data
12. Factors (Constants) for Continuous (Variable) Control Charts Formulas
13. Control Chart Exhibit for Continuous (Variable) Data
14. Selecting a Control Chart for Continuous (Variable) Data
15. Normal Distribution / Graphical View of Variation and Six Sigma Performance
16. Non-Normal Distributions and the Central Limit Theorem
17. Control Limits Formulas for Attribute Data
18. Rational Subgroups
19. Sample Size and Frequency Guidelines
20. SPC Tests / Rules for Detecting Special Cause Variation
21. Shewhart's Approach to Interpreting Data
22. Risks and SPC Tests/Rules Errors/Mistakes
23. Reaction Plan
24. Steps for Creating ImR Chart
25. Steps for Creating Xbar and R Charts / S Chart
26. Control Charts for Attribute Data
27. Binomial Data
28. Poisson Data
29. Tips for Converting Attribute Data to Continuous (Variable) Data
30. Steps for Creating pChart, npChart, cChart, and uChart
31. Example of ImR Control Chart in Minitab
32. Example of Xbar & R Charts in Minitab
33. Example of pChart in Minitab
34. Example of npChart
35. Example of uChart
36. Example of cChart in Minitab
37. References

Section 1 (SPC).
Purpose
To present the variable statistical process control (SPC) types of control charts, the tests/rules for detecting and analyzing special (assignable) cause variation, applications/examples of control charts and reaction plans, and to demonstrate procedures for the computation and interpretation of control limits in a control chart.

Section 2. SPC History
Concern over variation in manufactured products produced by the Western Electric Company, and studies of sampling results, led Dr. Walter A. Shewhart of Bell Laboratories to develop the control chart as early as 1924. The first applications of the control chart by Shewhart were on fuses, heat controls, and station apparatus at the Hawthorne Works of the Western Electric Company. It was also applied in the production of weapons during WWII. Training courses on quality control principles (including the use of control charts and acceptance sampling plans) were sponsored by the Office of Production Research and Development of the War Production Board during World War II as a means of maintaining quality and promoting continual improvement [a]. SPC was extensively implemented in Japanese industry during the postwar rebuilding effort, with successful results that are reflected in the modern Japanese economy. These achievements in Japan are principally attributed to SPC, along with other quality tools and improvement methods.

Section 3 (SPC). History of Ages
Where are we now?
- Knowledge-based economy: Information Age
- Digital age
- Wireless age
- Resources such as know-how are more critical than other economic resources
- Wisdom invokes questions of judgment, ethics, experience, and intuition
What is a Knowledge Economy? A knowledge economy is either an economy of knowledge, focused on the production and management of knowledge, or a knowledge-based economy. In the second, more frequently used meaning, the phrase refers to the use of knowledge to produce economic benefits.
The phrase was popularized, if not invented, by Peter Drucker as the heading of chapter 12 in his book The Age of Discontinuity [b]. So, what is the problem/opportunity of the Information Age? Daniel Boorstin stated it as: "Information is random and miscellaneous, but knowledge is orderly and cumulative." Before information can be useful it must be analyzed, interpreted, and assimilated.
Figure 1. Understanding Principles Pyramid [c].

Section 4 (SPC). Terminology/Definitions
Common causes: Factors, generally many in number but each of relatively small importance, contributing to variation, that have not necessarily been identified. Common cause variation is due to random small variations in factors that are always present in the process. [Note: Common causes are sometimes referred to as chance causes of variation.]
Control chart: Chart with upper and/or lower control limits on which values of some statistical measure for a series of samples or subgroups are plotted, usually in time or sample-number order. The chart frequently shows a central line to assist detection of a shift, trend, or pattern of plotted values toward either control limit, and of out-of-control-limit values.
Defect: A failure to meet one of the acceptance criteria. A defective unit might have multiple defects.
Defective: An entire unit that fails to meet acceptance criteria, regardless of the number of defects on the unit.
How is a sample different from a population? A population includes all of the elements from a set of data. A sample consists of one or more observations drawn from the population.
How is a statistic different from a parameter? A measurable characteristic of a sample, such as a mean or standard deviation, is called a statistic. A measurable characteristic of a population, such as a mean or standard deviation, is called a parameter.
Process: Set of interrelated resources and activities that transform inputs into outputs.
[Note: Resources may include personnel, finance, facilities, equipment, techniques, and methods.]
Process in control, stable process: Process in which each of the quality measures (e.g., the average and variability, or fraction nonconforming, or average number of nonconformities of the product or service) is in a state of statistical control.
Special causes: Factors (usually systematic) that can be detected and identified as contributing to a change in a quality characteristic or process level. Special causes are identified when variation above and beyond common cause variation arises from factors that are not always present in the process. [Note: Special causes are sometimes referred to as assignable causes of variation.]
State of statistical control: State in which the variations among the observed sampling results can be attributed to a system of chance causes that does not appear to change with time. [Note: Such a system of chance causes generally will behave as though the results are simple random samples from the same population.]

Section 5 (SPC). Notation
The standard statistical notation will be used wherever possible. That is, a measurement of a quality characteristic will be denoted by an x, parameters will be denoted by Greek letters, and statistics by Roman letters. An overbar will denote an average, and a double overbar will indicate an average of averages. Other symbols will be defined as they are used.
Figure 2. Example of Notation depicting Sample Statistics versus Population Parameters.

Section 6. What is Statistical Process Control (SPC) and Why Use It?
Control charts are used to monitor, control, and improve process performance over time by studying variation and its sources. A control chart:
- Focuses attention on detecting and monitoring process variation over time.
- Distinguishes special from common causes of variation, as a guide to local or management action.
- Serves as a tool for ongoing control of a process.
- Helps improve a process to perform consistently and predictably, for higher quality, lower cost, and higher effective capacity.
- Provides a common language for discussing process performance [d].
A control chart can also be used to establish a measurement baseline, and to confirm/quantify the impact of continuous process improvement activities.

Section 7. Types of Control Charts and How to Select a Control Chart
The flow chart in Figure 3 below depicts the process for selecting an appropriate control chart based on the type of data, defect/defective classification, defect type, and sample subgroup size.
Figure 3. Selection of the Appropriate Control Chart.

Section 8. Elements of a Control Chart
- An average or centerline for the data: the sum of all the input data divided by the total number of data points. This central line (X) is added as a visual reference for detecting shifts or trends; it is also referred to as the process location.
- Upper and lower control limits (UCL and LCL): computed from the available data and placed equidistant from the central line. These reflect the process dispersion.
- The upper control limit (UCL) is three process standard deviations above the average; the lower control limit (LCL) is three process standard deviations below the average.
Figure 4. Elements of a Control Chart.

Section 9. Control Limits versus Specification Limits
Control limits (voice of the process) are not specification limits (voice of the customer). Control limits are based on sample data (natural process variation) and indicate how a process is actually performing. Specification limits are based on customer requirements and indicate how the process should perform to satisfy the customer.
Figure 5. Control Limits versus Specification Limits.
Although controlling a process will reduce variation, control does not mean capable.
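As a minimal sketch of the centerline and 3-standard-deviation limits described in Section 8, the following Python fragment computes them from a set of readings. The data are made up for illustration, and the sample standard deviation is used as a simple stand-in for the process standard deviation (textbook charts estimate it from ranges instead, as in the formulas of Sections 11 and 12):

```python
import statistics

def control_limits(data):
    """Centerline and 3-sigma control limits (Section 8 sketch).

    Note: this uses the sample standard deviation for simplicity;
    Shewhart charts estimate process sigma from subgroup ranges.
    """
    center = statistics.mean(data)
    sigma = statistics.stdev(data)
    return center - 3 * sigma, center, center + 3 * sigma

# Hypothetical individual measurements:
lcl, cl, ucl = control_limits([9.8, 10.1, 10.0, 9.9, 10.2, 10.0, 9.7, 10.3])
```

Any plotted point falling above `ucl` or below `lcl` would signal a possible special cause.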
Given that there is no relation between control limits and specification limits, it is possible to be in control and out of specification. It is also possible to be in specification and out of control. A process should not be adjusted based on specification limits or instinct: purposely moving a process that has not changed and is in its optimum condition will actually make things worse. This is called tampering with the process.

Section 10. Control Charts for Continuous (Variable) Data
Two charts are typically created for each set of continuous data: the first chart shows the actual data points or averages, the second chart shows the ranges or standard deviations. Why use both?
The data (I or Xbar) chart:
- Shows changes in the average value of the process.
- Is a visualization of the longer-term variation.
- For an Xbar chart the key question is: "Is the variation between the averages of the subgroups more than that predicted by the variation within the subgroups?"
The range (mR or R) chart:
- Reflects short-term variation.
- The R charts used with Xbar charts depict the ranges within subgroups of data; the key question is: "Is the variation within subgroups consistent?" [e]

Section 11. Control Limits Formulas for Continuous (Variable) Data [e]
The factors (constants) D4, D3, A2, A3, B4, and B3 can be found in Table 1 in Section 12 below.

Section 12. Factors (Constants) for Continuous (Variable) Control Charts Formulas [e]
Table 1. Factors (Constants) for Control Charts Formulas.

Section 13. Control Chart Exhibit for Continuous (Variable) Data
Figure 6 below shows a depiction of Individual Value and Moving Range control charts. Section 31 below shows a detailed example of Individual Value and Moving Range control charts.
Figure 6. Individual Value and Moving Range Control Charts Exhibit [e].

Section 14.
Selecting a Control Chart for Continuous (Variable) Data
ImR chart (Individuals, moving Range): Plots individual data points (I) on one chart and moving ranges (mR, the differences between each two adjacent points) on a second chart. Use when the best subgroup size is one, which will happen when:
- There are very few units produced (a low output rate) relative to how often process variables (sources of variation) may change.
- There is little choice due to data scarcity.
- A process drifts over time and needs to be monitored.
- Sampling is very expensive or involves destructive testing.
ImR is a good chart to start with when evaluating continuous data [e].
Xbar&R chart (Average + Range): Plots averages of subgroups (Xbar) on one chart and the ranges (R) within the subgroups on the other chart. The Xbar&R chart is used with a sampling plan to monitor repetitive processes. Subgroup sizes typically range from 3 to 9 items; practitioners frequently choose subgroups of 5. The Xbar chart will highlight changes to the average ("between subgroups," or process accuracy). The R chart will detect changes to "within subgroup" dispersion (process precision). The Xbar&R chart is the most commonly used control chart because it relies on the Central Limit Theorem to normalize the data, meaning the underlying distribution of the data matters less. It is also more sensitive than the ImR chart to process shifts [e]. Note: It has been demonstrated that continuous data control charts are robust for detecting out-of-control points even if the distribution is not normal [c].
Xbar&S chart (Average + Standard Deviation): Plots subgroup averages (Xbar) plus the standard deviations of the subgroups (S). Similar in use to Xbar&R charts, except these can be used only when you have sample sizes of at least 10 units (statisticians believe that the standard deviation is reliable only when sample sizes are 9 or larger).
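As a sketch of the Xbar&R control-limit formulas referenced in Section 11 (Xbar-chart limits at the grand average plus or minus A2 times the average range; R-chart limits at D4 and D3 times the average range), the following uses the standard table constants for subgroups of n = 5 (A2 = 0.577, D3 = 0, D4 = 2.114). The subgroup data are made up for illustration; a real chart would use at least 25 subgroups (see Section 19):

```python
# Standard control chart constants for subgroup size n = 5 (Table 1).
A2, D3, D4 = 0.577, 0.0, 2.114

subgroups = [  # hypothetical measurements, 5 readings per subgroup
    [10.2, 9.9, 10.1, 10.0, 9.8],
    [10.0, 10.3, 9.7, 10.1, 10.0],
    [9.9, 10.0, 10.2, 9.8, 10.1],
]

xbars = [sum(s) / len(s) for s in subgroups]    # subgroup averages
ranges = [max(s) - min(s) for s in subgroups]   # subgroup ranges
xbarbar = sum(xbars) / len(xbars)               # grand average (double overbar)
rbar = sum(ranges) / len(ranges)                # average range

ucl_x, lcl_x = xbarbar + A2 * rbar, xbarbar - A2 * rbar  # Xbar chart limits
ucl_r, lcl_r = D4 * rbar, D3 * rbar                      # R chart limits
```

Evaluate the R chart first; only if it is in control are the Xbar limits meaningful (see Section 25).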
It is far more common to use smaller sample sizes (9 or fewer), so in most cases an Xbar&R chart will be a better choice [e].

Section 15. Normal Distribution / Graphical View of Variation and Six Sigma Performance
In many situations, data follow a normal distribution (bell-shaped curve). One of the key properties of the normal distribution is the relationship between the shape of the curve and the standard deviation (σ for a population; s for a sample). 99.73% of the area under the curve of the normal distribution is contained between −3 standard deviations and +3 standard deviations from the mean (as shown in Figure 7 below). Another way of expressing this is that 0.27% of the data lie more than 3 standard deviations from the mean: 0.135% fall below −3 standard deviations and 0.135% above +3 standard deviations. To use these probabilities, your data must be random, independent, and normally distributed.
Figure 7. Normal Distribution.
Figure 8 below presents graphically the concepts of variation (pieces/parts vary from each other) and distribution.
Figure 8. Graphical View of Variation and Six Sigma Performance [e].

Section 16. Non-Normal Distributions and the Central Limit Theorem
Central Limit Theorem: The distribution of an average tends to be normal, even when the distribution from which the average is computed is decidedly non-normal (refer to Figures 9 through 12 below). The Central Limit Theorem is thus the foundation for many statistical procedures, including statistical process control charts, because the distribution of the phenomenon under study does not have to be normal: its average will be. Furthermore, this normal distribution will have the same mean as the parent distribution, and a variance equal to the variance of the parent divided by the sample size.
Note: It has been demonstrated that continuous data control charts are robust for detecting out-of-control points even if the distribution is not normal [c].
Figure 9. Central Limit Theorem.
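The Central Limit Theorem statement above (the averages keep the parent mean, and their variance equals the parent variance divided by the sample size) can be checked with a small simulation. The exponential parent distribution, sample size, and number of samples below are arbitrary choices for illustration:

```python
import random
import statistics

random.seed(1)
n = 25             # subgroup (sample) size
num_means = 10000  # number of sample averages to draw

# Parent: exponential with mean 1 and variance 1 -- decidedly non-normal.
means = [statistics.mean(random.expovariate(1.0) for _ in range(n))
         for _ in range(num_means)]

# Per the CLT: the averages' mean stays near 1,
# and their variance is near 1/n (so variance * n is near 1).
print(round(statistics.mean(means), 2))
print(round(statistics.variance(means) * n, 1))
```

A histogram of `means` would also look close to the bell curve of Figure 7, even though the parent is skewed.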
Figure 10. Normal Distribution and the Central Limit Theorem.
Figure 11. Non-Normal Distribution and the Central Limit Theorem.
Figure 12. Non-Normal Distribution and the Central Limit Theorem.

Section 17. Control Limits Formulas for Attribute Data [e]

Section 18. Rational Subgroups
For both Xbar&R and Xbar&S charts, it is necessary to collect data in sets of points called subgroups, then calculate and plot the averages of those subgroups. Rational subgrouping is the process of selecting a subgroup based upon "logical" grouping criteria or statistical considerations. Often, you can use natural breakpoints to determine subgroups. Example: if you have 3 shifts operating per day, collect 1 data point per shift and calculate the average of those 3 data points (you'll plot one "average" reading per day). Or, if you want to look for differences between shifts, collect, say, 5 data points per shift (you'll plot 3 average readings every day, 1 per shift). If the data are not normally distributed, you may use the Central Limit Theorem guidelines (refer to Section 16 above) and rational subgrouping guidelines to determine the proper subgroup size [e]. Note: It has been demonstrated that continuous data control charts are robust for detecting out-of-control points even if the distribution is not normal [c].
Subgroup size selection can also be used to address the following data problems:
1) Trends and patterns: Use subgrouping to "average out" special cause patterns caused by logical grouping or time cycles. Examples:
- A predictable difference in size for the diameters from different mold cavities grouped together into one shot in an injection molding process.
- A predictable difference in the output of 3 shifts grouped into 1 day.
- A predictable difference in incoming calls per day (M–F) grouped into 1 week.
2) Too much data: Sometimes it is necessary to use subgrouping to reduce the number of data points plotted on a chart, which can make it easier to spot trends and other types of special cause variation [e].

Section 19. Sample Size and Frequency Guidelines
Data requirements for control limits calculation:
- A minimum of 25 consecutive subgroups, and
- A minimum of 100 to 125 consecutive observations,
- which must be in time-series order [e].
Subgroup size (to be established in the sampling plan) guideline: For attribute charts the suggested subgroup sample size is at least fifty; for variable data charts a suggested minimum subgroup sample size is three to five. For a c or u chart, the subgroup sample size needs to be large enough to average five or more defects per lot.
Sampling frequency: The frequency of sampling will depend on the ability to discern patterns in the data. Consider hourly, daily, by shift, monthly, annually, by lot, and so on. Note: You can also use 100% automated data entry (of all data) into statistical process control software.

Section 20. SPC Tests / Rules for Detecting Special Cause Variation
Control charts indicate whether a process is running in control or out of control. For example, any point outside of the control limits is an indication that the process is not in control. SPC tests/rules have been established for recognizing when a process is out of control, and they apply to both attribute and variable data control charts. Many of these tests/rules relate to "zones," which mark off the standard deviations from the mean: Zone C is within ±1 standard deviation; Zone B is between 1 and 2 standard deviations; and Zone A is between 2 and 3 standard deviations [e].
Typically required SPC tests/rules (1st to 4th) [e]:
1) Mean Shift (M): 9 points in a row on one side of the average, in Zone C or beyond. Detects a shift in the process mean.
Figure 13. Control Chart Displaying: Mean Shift.
2) Out of Control Limits (O): 1 point beyond Zone A. Detects a shift in the mean, an increase in the standard deviation, or a single aberration in the process. Check your R chart to rule out increases in variation.
Figure 14. Control Chart Displaying: Out of Control Limits.
3) Repeat Pattern (R): 14 points in a row alternating up and down. Detects systematic effects, such as two alternately used machines, vendors, or operators.
Figure 15. Control Chart Displaying: Repeat Pattern (14 points in a row alternating up and down).
4) Trend (T): 6 points in a row steadily increasing or decreasing. Detects a trend or drift in the process mean. Small trends will be signaled by this test before the first test.
Figure 16. Control Chart Displaying: Trends.
Note: These typically required SPC tests/rules (the first 4 above) are known by the acronym "the MORT tests/rules" (Mean Shift, Out of Control Limits, Repeat Pattern, Trend) and can be used in statistical process control to ensure that adequate real-time reaction measures (refer to the Reaction Plan section below) are taken to correct out-of-control processes. The 5th to 8th additional SPC tests/rules below can also be added for statistical process control.
Typical additional SPC tests/rules (5th to 8th) [e]:
5) 2 out of 3 points in a row in Zone A or beyond: Detects a shift in the process average or an increase in the standard deviation. Any two out of three points provide a positive test.
Figure 17. Control Chart Displaying: 2 out of 3 Points in a Row in Zone A or Beyond.
6) 4 out of 5 points in Zone B or beyond: Detects a shift in the process mean. Any four out of five points provide a positive test.
Figure 18. Control Chart Displaying: 4 out of 5 Points in Zone B or Beyond.
7) 15 points in a row in Zone C, above and below the centerline: Detects stratification of subgroups; appears when observations in a subgroup come from sources with different means.
Figure 19. Control Chart Displaying: 15 Points in a Row in Zone C.
8) 8 points in a row on both sides of the centerline with none in Zone C: Detects stratification of subgroups when the observations in one subgroup come from a single source, but subgroups come from different sources with different means.
Figure 20. Control Chart Displaying: 8 Points in a Row on Both Sides of the Centerline with None in Zone C.

Section 21. Shewhart's Approach to Interpreting Data
"A process is said to be in control when, through the use of past experience, we can predict, at least within limits, how the process will behave in the future." The essence of statistical control is predictability (refer to Figure 21 below).
In-control process: A process that has displayed a reasonable degree of statistical control in the past is likely to continue to do so in the future.
Out-of-control process: A process that has failed to display a reasonable degree of statistical control in the past is unlikely to begin to do so in the future.
Figure 21. Stable versus Unstable Process.

Section 22. Risks and SPC Tests/Rules Mistakes
Risk 1 / Error 1 / Mistake 1: Producer's risk: the probability of rejecting a good lot. Type I error: identifying a stable process as unstable. Mistake one: interpreting noise (common causes of variation) as if it were a signal (special causes of variation), also known as a "false alarm" [c].
Risk 2 / Error 2 / Mistake 2: Consumer's risk: the probability of accepting a bad lot. Type II error: identifying an unstable process as stable. Mistake two: failing to detect a signal, also known as a "failure to catch an opportunity" [c].
The control chart approach strikes a balance between Error 1/Mistake 1 and Error 2/Mistake 2.

Section 23. Reaction Plan
When the process is found to be out of control, the reaction plan should be followed. The reaction plan indicates, when out-of-control point(s) are found, what remediation actions should be taken to bring the process back in control (refer to Table 2 below).
Table 2. Reaction Plan Example.

Section 24. Steps for Creating ImR Chart [e]
1. Determine the sampling plan.
2. Take a sample at each specified time or production interval.
3. Calculate the moving ranges for the sample: subtract each measurement from the previous one (for example, subtract Observation 2 from Observation 1, or Observation 15 from Observation 14), and treat all ranges as positive even if the difference is negative (e.g., 10 − 15 = −5 but is recorded as a range of +5). There will be no moving range for the first observation on the chart (because no data value preceded it).
4. Plot the data (the original data values on one chart and the moving ranges on another).
5. After your sample measurements are completed, calculate control limits for the moving range chart.
6. If the range chart is not in control, take appropriate action.
7. If the range chart is in control, calculate control limits for the individuals chart.
8. If the individuals chart is not in control, take appropriate action.

Section 25. Steps for Creating Xbar and R Charts / S Chart [e]
1. Determine an appropriate subgroup size and sampling plan.
2. Collect the samples at specified intervals of time or production.
3. Calculate the mean and range (or standard deviation) for each subgroup.
4. Plot the data: the subgroup means go on one chart and the subgroup ranges or standard deviations on another.
5. After your sample measurements are completed, calculate control limits for the range chart.
6. If the range chart is not in control, take appropriate action.
7. If the range chart is in control, calculate control limits for the Xbar chart.
8. If the Xbar chart is not in control, take appropriate action.

Section 26. Control Charts for Attribute Data
Attribute control charts are similar to variable control charts, except that they plot proportion or count data rather than variable measurements.
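The ImR steps in Section 24 can be sketched as follows. The constants 2.66 (for the individuals limits) and 3.267 (for the moving range UCL) are the standard ImR chart factors for two-point moving ranges, and the batch weights are made up for illustration (a real chart would need the observation counts of Section 19):

```python
def imr_limits(data):
    """Individuals and moving range control limits (Section 24, steps 3-7 sketch)."""
    # Step 3: moving ranges between adjacent points, recorded as positive.
    mr = [abs(curr - prev) for prev, curr in zip(data, data[1:])]
    mr_bar = sum(mr) / len(mr)
    x_bar = sum(data) / len(data)
    # Moving range chart limits (D4 = 3.267, D3 = 0 for ranges of 2 points).
    mr_lcl, mr_ucl = 0.0, 3.267 * mr_bar
    # Individuals chart limits: average +/- 2.66 * average moving range.
    i_lcl, i_ucl = x_bar - 2.66 * mr_bar, x_bar + 2.66 * mr_bar
    return (i_lcl, i_ucl), (mr_lcl, mr_ucl)

weights = [930, 927, 934, 931, 928, 933, 929, 930]  # hypothetical batch weights
(i_lcl, i_ucl), (mr_lcl, mr_ucl) = imr_limits(weights)
# Flag any point beyond the individuals limits (SPC Test 1 in Section 20).
out_of_control = [x for x in weights if not i_lcl <= x <= i_ucl]
```

Per steps 6 and 7, the moving range limits should be checked first; the individuals limits are only trustworthy once the range chart is in control.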
Attribute control charts have only one chart, which tracks proportions or counts over time (there is no range chart or standard deviation chart as there is with continuous data).

Section 27. Binomial Data
When data points can have only one of two values, such as when comparing a product or service to a standard and classifying it as acceptable or not (pass/fail), the data are called binomial. Use one of the following control charts for binomial data:
pChart: Charts the proportion of defectives in each subgroup.
npChart: Charts the number of defectives in each subgroup (must have the same sample size each time).
Figure 22 below shows a pChart. Note how the control limits change (in Figure 22 below) as the subgroup size changes; this pChart has variable subgroup sizes [e].
Figure 22. Example of a pChart for Defective Pizzas [e].

Section 28. Poisson Data
A Poisson (pronounced "pwa-sahn") distribution describes count data where you can easily count the number of occurrences (e.g., errors on a form, dents on a part) but not the number of non-occurrences (there is no such thing as a "non-dent"). These data are best charted on either:
cChart: Charts the defect count per sample (must have the same sample size each time).
uChart: Charts the number of defects per unit sampled in each subgroup (uses a proportion, so it is OK if the sample size varies).
Figure 23 below shows a cChart. If the sample size is always about the same (up to 10% variation in sample size is OK), use a cChart; if the sample size varies, use a uChart [e].
Figure 23. Example of a cChart for Blemishes [e].

Section 29. Tips for Converting Attribute Data to Continuous (Variable) Data
In general, much more information is contained in continuous data than in attribute data, so control charts for continuous data are preferred. Possible alternatives to attribute charting for different situations [e]:
Situation: Infrequent failures. Possible solution: Plot the time between failures on an ImR chart.
Situation: Similar subgroup size. Possible solution: Plot the failure rate on an ImR chart.
Table 3. Possible alternatives to attribute charting for different situations [e].

Section 30. Steps for Creating pChart, npChart, cChart, and uChart
When charting continuous data, you normally create two charts, one for the data and one for the ranges (ImR, Xbar&R, etc.). In contrast, charts for attribute data use only the chart of the count or percentage.
1. Determine an appropriate sampling plan.
2. Collect the sample data: take a set of readings at each specified interval of time.
3. Calculate the relevant metric (p, np, c, or u).
4. Calculate the appropriate centerline.
5. Plot the data.
6. After your sample measurements are completed, calculate control limits.
7. If the chart is not in control, take appropriate action [e].

Section 31. Example of ImR Control Chart in Minitab
As the distribution manager at a limestone quarry, you want to monitor the weight (in pounds) of, and the variation in, the 45 batches of limestone that are shipped weekly to an important client. Each batch should weigh approximately 930 pounds [f]. To create a control chart in the Minitab Statistical Software [f], follow the steps listed below.
1. Open the worksheet EXH_QC.MTW.
2. Choose ‘Stat > Control Charts > Variables Charts for Individuals > I-MR’.
Figure 24 below shows step 2 in the Minitab Statistical Software.
Figure 24. Selecting Control Chart Type in Minitab Statistical Software [f].
3. In ‘Variables’, enter ‘Weight’.
4. Click ‘I-MR Options’, then click the ‘Tests’ tab.
5. Choose ‘Perform selected tests for special causes’.
6. Click ‘OK’ in each dialog box.
Figure 25 below shows steps 3–6 in the Minitab Statistical Software.
Figure 25. Selection of Variables for Charting and SPC Tests/Rules in Minitab Statistical Software [f].
Figure 26. Example of ImR Control Charts in Minitab [f].
Test results for the I chart of Weight in Figure 26 above:
TEST 1. One point more than 3.00 standard deviations from center line.
Test Failed at points: 14, 23, 30, 31, 44, 45.
TEST 2. 9 points in a row on same side of center line.
Test Failed at points: 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 33, 34, 35, 36.
Interpreting the results for Figure 26 above (the Individual and Moving Range control charts): The individuals chart shows 6 points outside the control limits and multiple points inside the control limits exhibiting a nonrandom pattern (mean shifts), suggesting the presence of special causes (an unstable, out-of-control process). The moving range chart shows one point above the control limit. The manufacturing processes should be examined to improve control over the weight of limestone shipments.

Section 32. Example of Xbar & R Charts in Minitab
You work at an automobile engine assembly plant. One of the parts, a camshaft, must be 600 mm ± 2 mm long to meet engineering specifications. There has been a chronic problem with camshaft length being out of specification, which causes poor-fitting assemblies, resulting in high scrap and rework rates. Your supervisor wants to run Xbar and R charts to monitor this characteristic, so for a month you collect a total of 100 observations (20 samples of 5 camshafts each) from all the camshafts used at the plant, and 100 observations from each of your suppliers. First you will look at camshafts produced by Supplier 2.
1. Open the worksheet CAMSHAFT.MTW.
2. Choose ‘Stat > Control Charts > Variables Charts for Subgroups > Xbar-R’.
3. Choose ‘All observations for a chart are in one column’, then enter ‘Supp2’.
4. In ‘Subgroup sizes’, enter ‘5’. Click ‘OK’.
Figure 27. Example of Xbar & R Control Charts in Minitab [f].
Test results for the Xbar chart of Supp2 in Figure 27 above:
TEST 1. One point more than 3.00 standard deviations from center line.
Test Failed at points: 2, 14
Interpreting the results for Figure 27 above (which shows the Xbar & Range control charts): The center line on the Xbar chart is at 600.23, implying that your process is falling within the specification limits, but two of the points fall outside the control limits, implying an unstable process. The upper control limit on the R chart, 7.866, is also quite large considering the maximum allowable variation is ± 2 mm. There may be excess variability in your process.
Section 33. Example of p-Chart in Minitab
Suppose you work in a plant that manufactures picture tubes for televisions. For each lot, you pull some of the tubes and do a visual inspection. If a tube has scratches on the inside, you reject it. If a lot has too many rejects, you do a 100% inspection on that lot. A p-chart can define when you need to inspect the whole lot.
1. Open the worksheet EXH_QC.MTW.
2. Choose ‘Stat > Control Charts > Attributes Charts > P’.
3. In ‘Variables’, enter ‘Rejects’.
4. In ‘Subgroup sizes’, enter ‘Sampled’. Click ‘OK’.
Figure 28. Example of P-Chart in Minitab [f].
Test Results for P Chart of Rejects (for Figure 28 above):
TEST 1. One point more than 3.00 standard deviations from center line. Test Failed at points: 6
Interpreting the results for Figure 28 above (which shows the P-chart): Sample 6 is outside the upper control limit. Consider inspecting the lot.
Section 34. Example of np-Chart
You work in a toy manufacturing company and your job is to inspect the number of defective bicycle tires. You inspect 200 samples in each lot and then decide to create an NP chart to monitor the number of defectives. To make the NP chart easier to present at the next staff meeting, you decide to split the chart by every 10 inspection lots.
1. Open the worksheet TOYS.MTW.
2. Choose ‘Stat > Control Charts > Attributes Charts > NP’.
3. In ‘Variables’, enter ‘Rejects’.
4. In ‘Subgroup sizes’, enter ‘Inspected’.
5. Click ‘NP Chart Options’, then click the ‘Display’ tab.
6.
Under ‘Split chart into a series of segments for display purposes’, choose ‘Number of Subgroups in each Segment’ and enter ‘10’.
7. Click ‘OK’ in each dialog box.
Figure 29. Example of NP-Chart in Minitab [f] for 30 inspection lots.
Test Results for NP Chart of Rejects for Figure 29 above:
TEST 1. One point more than 3.00 standard deviations from center line. Test Failed at points: 9, 20
Interpreting the results for Figure 29 above (which shows the NP-chart): Inspection lots 9 and 20 fall above the upper control limit, indicating that special causes may have affected the number of defectives for these lots. You should investigate what special causes may have influenced the out-of-control number of bicycle tire defectives for inspection lots 9 and 20.
Section 35. Example of u-Chart
As production manager of a toy manufacturing company, you want to monitor the number of defects per unit of motorized toy cars. You inspect 20 units of toys and create a U chart to examine the number of defects in each unit of toys. You want the U chart to feature straight control limits, so you fix a subgroup size of 102 (the average number of toys per unit).
1. Open the worksheet TOYS.MTW.
2. Choose ‘Stat > Control Charts > Attributes Charts > U’.
3. In ‘Variables’, enter ‘Defects’.
4. In ‘Subgroup sizes’, enter ‘Sample’.
5. Click ‘U Chart Options’, then click the ‘Limits’ tab.
6. Under ‘When subgroup sizes are unequal, calculate control limits’, choose ‘Assuming all subgroups have size’, then enter ‘102’.
7. Click ‘OK’ in each dialog box.
Figure 30. Example of U-Chart in Minitab [f].
Test Results for U Chart of Defects for Figure 30 above:
TEST 1. One point more than 3.00 standard deviations from center line. Test Failed at points: 5, 6
TEST 2. 9 points in a row on same side of center line.
Test Failed at points: 17, 18, 19
Interpreting the results for Figure 30 above (which shows the U-chart): Units 5 and 6 are above the upper control limit line, and point 19 completes a run of 9 points below the center line (a mean shift), indicating that special causes may have affected the number of defects in these units. You should investigate what special causes may have influenced the out-of-control number of motorized toy car defects for these units.
Section 36. Example of c-Chart in Minitab
Suppose you work for a linen manufacturer. Each 100 square yards of fabric can contain a certain number of blemishes before it is rejected. For quality purposes, you want to track the number of blemishes per 100 square yards over a period of several days, to see if your process is behaving predictably. You want the control chart to show control limits at 1, 2, and 3 standard deviations above and below the center line.
1. Open the worksheet EXH_QC.MTW.
2. Choose ‘Stat > Control Charts > Attributes Charts > C’.
3. In ‘Variables’, enter ‘Blemish’.
4. Click ‘C Chart Options’, then click the ‘Limits’ tab.
5. Under ‘Display additional sigma limits at’, enter ‘1 2’ in ‘These multiples of the standard deviation’.
6. Under ‘Place bounds on control limits’, check ‘Lower standard deviation limit bound’ and enter ‘0’.
7. Click ‘OK’ in each dialog box.
Figure 31. Example of C-Chart in Minitab [f].
Interpreting the results for Figure 31 above (which shows the C-chart): Because the points fall in a random pattern, within the bounds of the 3σ control limits, you conclude the process is behaving predictably and is in control.
Section 37 (SPC). References
Note: If you wish to purchase any of the books listed below in this "References" section, please go to the "E-Commerce" tab feature on this phone/tablet application for the preferred booksellers and click directly on the links provided on the E-Commerce tab feature.
[a] “Juran’s Quality Handbook”, 5th Edition, by Joseph M. Juran; A. Blanton Godfrey.
Published by McGraw-Hill, 1999.
[b] “The Age of Discontinuity: Guidelines to Our Changing Society”, by Peter Drucker. Published by Transaction Publishers, 1992.
[c] “Understanding Statistical Process Control”, 2nd Edition, by Donald J. Wheeler; David S. Chambers. Published by SPC Press, 1992.
[d] Sheehy, Paul; Daniel Navarro; Robert Silvers; Victoria Keyes. “The Black Belt Memory Jogger: A Pocket Guide for Six Sigma Success”. Published by GOAL/QPC and Six Sigma Academy, January 2002.
[e] George, Michael; Maxey, John; Rowlands, David; Price, Mark. “The Lean Six Sigma Pocket Toolbook: A Quick Reference Guide to Nearly 100 Tools for Improving Quality and Speed”. Published by GOAL/QPC and Six Sigma Academy, 2004-10-13.
[f] Smith, Duane R. “The Ultimate Roll and Web Defect Troubleshooting Guide”. Published by TAPPI Press, 2013.
[g] Minitab® 17.1.0. © 2013 Minitab Inc.
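As a supplement to the Minitab examples above, the limit calculations behind the I-MR and p-charts can be sketched in Python. This is a minimal illustration, not the Minitab implementation; 1.128 and 3.267 are the standard d2 and D4 control-chart constants for moving ranges of size 2, and the input data below are illustrative.

```python
import math

def imr_limits(data):
    """Individuals and moving-range chart limits (moving ranges of size 2)."""
    mrs = [abs(b - a) for a, b in zip(data, data[1:])]  # moving ranges
    mr_bar = sum(mrs) / len(mrs)                        # average moving range
    x_bar = sum(data) / len(data)                       # process center line
    sigma = mr_bar / 1.128                              # short-term sigma estimate (d2 = 1.128)
    return {"I": (x_bar - 3 * sigma, x_bar, x_bar + 3 * sigma),
            "MR": (0.0, mr_bar, 3.267 * mr_bar)}        # D4 = 3.267 for n = 2

def p_chart_limits(defectives, sample_sizes):
    """p-chart center line and per-subgroup 3-sigma limits."""
    p_bar = sum(defectives) / sum(sample_sizes)
    limits = []
    for n in sample_sizes:
        half = 3 * math.sqrt(p_bar * (1 - p_bar) / n)
        limits.append((max(0.0, p_bar - half), p_bar, min(1.0, p_bar + half)))
    return limits
```

Points falling outside these limits, or runs matching the SPC tests/rules listed earlier in the chapter, signal possible special cause variation.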
Analyze
Click on the home button at the end of the Chapter Content List / Section (after the Chapter Content List) to return here. Chapter Content List by Section Number (click on the link next to the number on the chapter content list below to see the specific section immediately): Chapter 1. Confidence Intervals Content List: Purpose Terminology/Definitions Notation What are Confidence Intervals and Why Use Them? Sample Size for Confidence Interval How to Do Confidence Intervals Confidence Interval for the Mean (includes Example) Confidence Interval for the Standard Deviation (includes Example) Confidence Interval for the Proportion Defective (includes Example) Confidence Intervals Example using Minitab Statistical Software References
Section 1 (Confidence Intervals). Purpose
To present some of the confidence interval measures typically used, demonstrate procedures for their computation, and provide examples and interpretation.
Section 2 (Confidence Intervals). Terminology/Definitions
Alpha (α) Risk: The error that can be made in statistical inference, referred to as a Type I error. If the α risk is 0.05, any determination from a statistical test that the population has changed runs a 5% risk that it really has not changed.
Alternate Hypothesis (H1, also abbreviated as Ha): The position that will be embraced if the null hypothesis is rejected [b]. In a statistical test the alternate hypothesis is stated as: there is a significant difference between specified populations (tested through samples of the population), and the observed difference is not attributable to sampling or experimental error.
Average: Also called the "mean" of a set of values (the values are obtained from the natural variation of the process). The average or mean is calculated by dividing the sum of the values in the set by the total number of values in the set.
Null Hypothesis (H0, also abbreviated as Ho): The proposition to be tested directly in a hypothesis test [b].
In a statistical test the null hypothesis is stated as: there is no significant difference between specified populations (tested through samples of the population), any observed difference being due to sampling or experimental error.
Overall Standard Deviation: The overall standard deviation (represented by the Greek letter sigma σ or the Latin letter s) is a measure used to quantify the amount of variation or dispersion of the data values for the entire sample. This is also called the global, or long-term, standard deviation.
Parameter: A measurable characteristic of a population, such as a mean or standard deviation.
Population: The entire group or process characteristic from which a statistical sample is drawn. The information obtained from the sample allows statisticians to infer conclusions regarding the population. Information is collected from a sample because of the difficulty of studying the entire population.
P-value: The P-value, or calculated probability, is the probability, assuming the null hypothesis (Ho) is true, of obtaining a result at least as extreme as the one observed. Typically, if it is lower than 0.05 (when the significance alpha level is 0.05), the null hypothesis is rejected.
Randomization: Exists when each individual in the population has an equal opportunity to be selected for the sample.
Sample: A representative set of values selected from the population. A sample reflects the characteristics of the population; hence, the sample findings can be generalized to the population.
Statistic: A measurable characteristic of a sample, such as a mean or standard deviation.
Z-table: A mathematical table for the values of the cumulative distribution function of the normal distribution. The Z-table is also called the "standard normal table" or "unit normal table".
Z-value (or Z-score): A measure of how many standard deviations below or above the population mean a data point is. A Z-score is also known as a standard score, and it can be placed on a normal distribution curve.
Section 3 (Confidence Intervals). Notation
The standard statistical notation will be used wherever possible. That is, a measurement of a quality characteristic will be denoted by an x, parameters will be denoted by Greek letters, and statistics by Roman letters. An overbar will denote an average. Other symbols will be defined as they are used.
Section 4. What are Confidence Intervals and Why Use Them?
In many processes, it is very costly and inefficient to measure every unit of product, service, or information produced. In these instances, a sampling plan is implemented and statistics such as the average, standard deviation, and proportion are calculated and used to make inferences about the population parameters. When a known population is sampled many times, the calculated sample averages can be different even though the population is stable (as shown in Figure 1 below).
Figure 1. Sample Averages.
The differences in these sample averages are simply due to the nature of random sampling. Given that these differences exist, the key is to estimate the true population parameter. The confidence interval allows the organization to estimate the true population parameter with a known degree of certainty. The confidence interval is bounded by a lower limit and an upper limit that are determined by the risk associated with making a wrong conclusion about the parameter of interest. For example, if the 95% confidence interval is calculated for a subgroup of data of sample size n, and the lower confidence limit and the upper confidence limit are determined to be 85.2 and 89.3, respectively, it can be stated with 95% certainty that the true population average lies between these values.
Conversely, there is a 5% risk (alpha (α) = 0.05) that this interval does not contain the true population average. The 95% confidence interval could also show that: ninety-five of 100 subgroups collected with the same sample size n would produce intervals that contain the true population average; if another 100 subgroups were collected, ninety-five of the subgroups' intervals would capture the true average within their upper and lower confidence limits.
Note: When sampling from a process, the samples are assumed to be randomly chosen and the subgroups are assumed to be independent. Whether the true population average lies within the upper and lower confidence limits that were calculated cannot be known. Thus, confidence intervals use the alpha risk (α), which is the risk that it does not. For a 95% confidence interval, the α risk is always 5% [a].
Section 5. Sample Size for Confidence Interval [b]
The purpose of calculating a confidence interval from sample data to estimate a population parameter is to provide the information needed to make a data-based decision about the management of the process. The width of a confidence interval decreases as the number of observations increases. It would seem that collecting lots of data to make the resulting confidence interval very narrow would be desirable, but we generally can't afford to waste time and resources collecting superfluous data. So how wide should a confidence interval be? In general, a confidence interval should be just narrow enough so that a single unique management action is indicated from one end of the confidence interval to the other. Sample sizes that give intervals that are tighter than necessary waste time and resources. Sample sizes that give intervals that are too wide cannot be used to determine the single appropriate management action that should be taken on the process.
In fact, a confidence interval calculated from too few data will be so wide that two, three, or more management actions might be indicated across the span of the interval. It should be apparent that if we can identify the confidence interval width that just barely indicates a unique management action, then we should be able to determine the sample size required to obtain an interval of just that width. It is the purpose of this section to introduce the sample size calculations necessary to determine a confidence interval with a specified width. This provides protection from the risks associated with oversampling and undersampling.
Confidence Interval for One Population Mean
The (1 – α) 100% confidence interval for the population mean µ has the form:
P(x̄ − δ < µ < x̄ + δ) = 1 – α
where δ = Zα/2 (σ/√n) is called the maximum error of the estimate. Note that this interval is centered on the sample mean, which can be determined from sample data, and provides upper and lower bounds for the true but possibly unknown population mean. By design, the range indicated by these bounds has probability 1 – α of containing the true mean of the population, so (1 – α) 100% of the intervals constructed from sample means using the confidence interval equation above should contain the true population mean. Consequently, the interval given by the confidence interval equation above is called a (1 – α) 100% confidence interval for µ. For a specified value of δ, the smallest required sample size must meet the condition:
n ≥ (Zα/2 σ/δ)²
In order to make use of the equation above for n, the population standard deviation σ must be known. If σ is not known, then it will be estimated by the sample standard deviation (s) of the experimental data. In this case, the t distribution with n – 1 degrees of freedom should be used instead of the Z distribution in the equation above for n, giving the condition n ≥ (tα/2,n−1 s/δ)². This equation is transcendental (that is, both sides depend on n) and will have to be solved for n by iteration.
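The iteration just described can be sketched in Python. This is a minimal illustration: the hard-coded critical values are two-sided t-table values t(0.025, df) for α = 0.05, and only a few degrees of freedom are included.

```python
# Two-sided critical values t_{0.025, df} from a standard t-table (illustrative subset).
T_025 = {1: 12.706, 2: 4.303, 3: 3.182, 4: 2.776, 5: 2.571,
         6: 2.447, 7: 2.365, 8: 2.306, 9: 2.262, 10: 2.228}

def sample_size_t(delta, s, max_n=11):
    """Smallest n satisfying n >= (t_{0.025, n-1} * s / delta)^2, found by iteration."""
    for n in range(2, max_n + 1):
        t = T_025[n - 1]
        if n >= (t * s / delta) ** 2:
            return n
    raise ValueError("no solution in table range; extend T_025 or raise max_n")

# Example 5.2 in the text: delta = 100, s = 80
# sample_size_t(100, 80) -> 5, since t_{0.025,4} = 2.776 gives (2.776*80/100)^2 = 4.93 <= 5
```

Each candidate n changes the degrees of freedom, which changes the t value, which is why the condition must be re-checked at every step rather than solved in closed form.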
It will still be necessary to estimate σ to complete the sample size calculation. Of course, the validity of the sample size obtained will depend on the accuracy of the σ estimate (s).
Example 5.1: Find the sample size required to estimate the population mean to within ±0.8 with 95 percent confidence if measurements are normally distributed with standard deviation σ = 2.3.
Solution: We have δ = 0.8, σ = 2.3, and α = 0.05. The sample size must meet the condition:
n ≥ (Z0.025 σ/δ)² = (1.96 × 2.3/0.8)² = 31.8
which indicates that n = 32 is the smallest sample size that will deliver a confidence interval with the desired width.
Example 5.2: Find the sample size required to estimate the population mean to within ±100 with 95 percent confidence if measurements are normally distributed. The population standard deviation is unknown, but from knowledge of similar processes it is expected to be about s = 80.
Solution: We have δ = 100, s = 80, and α = 0.05. The sample size required must meet the condition:
n ≥ (t0.025,n−1 s/δ)²
If the sample size is very large then t0.025 ≈ Z0.025 = 1.96 and the sample size would be:
n ≥ (1.96 × 80/100)² = 2.5, or n = 3
Obviously n = 3 does not meet the large sample size condition. By trial and error, when n = 5 then t0.025,4 = 2.776 and:
n = 5 ≥ (2.776 × 80/100)² = 4.9
This calculation indicates that the smallest sample size that will deliver a confidence interval with the required width is n = 5, although the accuracy of this sample size depends on the accuracy of the σ estimate (s).
Confidence Interval for the Difference between Two Population Means
The (1 – α) 100% confidence interval for the difference between two population means ∆µ = µ1 − µ2 has the form:
P(∆x̄ − δ < ∆µ < ∆x̄ + δ) = 1 – α
where ∆x̄ = x̄1 – x̄2. For a specified value of δ, the required sample size for each of the two samples must meet the condition:
n ≥ 2(Zα/2 σ/δ)²
In order to make use of this interval, the population standard deviations σ1 and σ2 must be known and equal for the distributions of x1 and x2.
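When σ is known, the Z-based sample-size conditions above can be evaluated directly. A minimal sketch using the standard normal quantile from Python's statistics module:

```python
import math
from statistics import NormalDist

def n_one_mean(delta, sigma, alpha=0.05):
    """Smallest n with n >= (Z_{alpha/2} * sigma / delta)^2 (one population mean)."""
    z = NormalDist().inv_cdf(1 - alpha / 2)
    return math.ceil((z * sigma / delta) ** 2)

def n_two_means(delta, sigma, alpha=0.05):
    """Per-group n for estimating mu1 - mu2 to within +/- delta (equal known sigma)."""
    z = NormalDist().inv_cdf(1 - alpha / 2)
    return math.ceil(2 * (z * sigma / delta) ** 2)

# Example 5.1: n_one_mean(0.8, 2.3) -> 32
# Example 5.3: n_two_means(6, 12.5, alpha=0.01) -> 58
```

Rounding up with `math.ceil` enforces the "smallest n meeting the condition" requirement; rounding down would give an interval wider than specified.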
Example 5.3: What sample size should be used to determine the difference between two population means to within ±6 of the estimated difference with 99 percent confidence? The populations are normal and both have standard deviation σ = 12.5.
Solution: We have δ = 6, σ = 12.5, and α = 0.01. The required sample size is:
n ≥ 2(Z0.005 σ/δ)² = 2(2.576 × 12.5/6)² = 57.6, so n = 58.
Section 6. How to Do Confidence Intervals
Depending on the population parameter of interest, the sample statistics that are used to calculate the confidence interval subscribe to different distributions. Aspects of these distributions are used in the calculation of the confidence intervals. Listed below (in Sections 7, 8, and 9) are the different confidence intervals, the distribution each sample statistic subscribes to, the formulas to calculate the intervals, and an example of each. Notice how these confidence intervals are affected by the sample size, n. Larger sample sizes result in tighter confidence intervals, as expected from the Central Limit Theorem [a]. For Central Limit Theorem theory refer to "Statistical Process Control for Variable and Attribute Data", Section 16 in the Control Phase on this Mobile Phone Application.
Section 7. Confidence Interval for the Mean (includes Example)
The confidence interval for the mean utilizes a t-distribution and can be calculated using the following formula:
x̄ − tα/2,n−1 (s/√n) ≤ µ ≤ x̄ + tα/2,n−1 (s/√n)
Example 7.1 [a]: A manufacturer of inserts for an automotive engine application was interested in knowing, with 90% certainty, the average strength of the inserts currently being manufactured. A sample of twenty inserts was selected and tested on a tensile tester. The average strength and standard deviation of these samples were determined to be 167,950 and 3,590 psi, respectively. The confidence interval for the mean µ would be:
167,950 ± t0.05,19 (3,590/√20) = 167,950 ± 1.729 × 802.8, that is, 166,562 ≤ µ ≤ 169,338 psi.
Section 8.
Confidence Interval for the Standard Deviation (includes Example)
The confidence interval for the standard deviation subscribes to a chi-square distribution (refer to Figure 2 below) and can be calculated as follows:
√((n−1)s²/χ²α/2,n−1) ≤ σ ≤ √((n−1)s²/χ²1−α/2,n−1)
Figure 2. Chi-square Distribution.
Example 8.1 [a]: A manufacturer of nylon fiber is interested in knowing, with 95% certainty, the amount of variability in the tenacity (a measure of strength) of a specific yarn fiber they are producing. A sample of fourteen tubes of yarn was collected, and the average tenacity and standard deviation were determined to be 2.830 and 0.341 g/denier, respectively. To calculate the 95% confidence interval for the standard deviation:
√(13(0.341)²/24.74) ≤ σ ≤ √(13(0.341)²/5.009), that is, 0.247 ≤ σ ≤ 0.549 g/denier.
Caution: Some software and texts will reverse the direction of reading the chi-square table; therefore, confirm which tail each tabulated value refers to before substituting it into the formula.
Section 9. Confidence Interval for the Proportion Defective (includes Example)
The exact solution for the proportion defective (p) utilizes the binomial distribution; however, in this example the normal approximation will be used. The normal approximation to the binomial may be used when np and n(1−p) are greater than or equal to five. A statistical software package will use the binomial distribution. The interval is:
p̂ ± Zα/2 √(p̂(1−p̂)/n)
(This formula is best used when np and n(1−p) > 5.)
Example 9.1: A financial company has been receiving customer phone calls indicating that their month-end financial statements are incorrect. The company would like to know, with 95% certainty, the current proportion defective for these statements. Twelve hundred statements were sampled and fourteen of these were deemed to be defective. The 95% confidence interval for the proportion defective would be:
0.012 ± 1.96 √(0.012(0.988)/1200) = 0.012 ± 0.006, that is, 0.006 ≤ p ≤ 0.018.
Note: np = 1200 (0.012) = 14.4, which is > 5, and n(1−p) = 1200 (1−0.012) = 1200 (0.988) = 1185.6, which is > 5, so the normal approximation to the binomial may be used.
Section 10. Confidence Intervals Example using Minitab Statistical Software [c]
Students in an introductory statistics course participated in a simple experiment.
Each student recorded his or her resting pulse. Then they all flipped coins, and those whose coins came up heads ran in place for one minute. Then the entire class recorded their pulses. You want to examine the students' resting pulse rates.
1. Open the worksheet PULSE.MTW.
2. Choose ‘Stat > Basic Statistics > Graphical Summary’.
3. In ‘Variables’, enter ‘Pulse1’. Click ‘OK’.
Figure 3. Graphical Summary (Graph window output).
Interpreting the results: Figure 3 above shows that the mean of the students' resting pulse is 72.870 (with a 95% confidence interval of 70.590 to 75.149). The standard deviation is 11.009 (with a 95% confidence interval of 9.615 to 12.878). Using a significance level of 0.05, the Anderson-Darling normality test (A-Squared = 0.98, P-Value = 0.013) indicates that the resting pulse data does not follow a normal distribution.
Section 11. References
[a] Sheehy, Paul; Daniel Navarro; Robert Silvers; Victoria Keyes. “The Black Belt Memory Jogger: A Pocket Guide for Six Sigma Success”. Published by GOAL/QPC and Six Sigma Academy, January 2002.
[b] Matthews, Paul G. “Design of Experiments with MINITAB”. Published by American Society for Quality (ASQ), October 2004.
[c] Minitab® 17.1.0. © 2013 Minitab Inc.
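The interval formulas from Sections 7 and 9 above can be sketched in Python. This is a minimal illustration: the t-based interval takes its critical value as an argument (the 1.729 used below is t0.05,19 from a standard t-table, matching Example 7.1), and the proportion interval uses the normal approximation as in Section 9.

```python
import math
from statistics import NormalDist

def ci_mean(xbar, s, n, t_crit):
    """t-based confidence interval for the mean: xbar +/- t * s / sqrt(n)."""
    half = t_crit * s / math.sqrt(n)
    return xbar - half, xbar + half

def ci_proportion(defects, n, conf=0.95):
    """Normal-approximation interval for a proportion (use when np and n(1-p) >= 5)."""
    p = defects / n
    z = NormalDist().inv_cdf(0.5 + conf / 2)   # two-sided critical value
    half = z * math.sqrt(p * (1 - p) / n)
    return p - half, p + half

# Example 7.1: ci_mean(167950, 3590, 20, 1.729) -> roughly (166562, 169338) psi
# Example 9.1: ci_proportion(14, 1200) -> roughly (0.0056, 0.0177)
```

As the text notes, a statistical package would normally use the exact binomial interval for proportions; the normal approximation is shown here only because it matches the worked example.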
Measure
Click on the home button at the end of each Chapter Content List / Section (after the Chapter Content Lists) to return here. Chapters List / Chapter Content Lists by Section Number (click on the link next to the number on each chapter/chapter content list below to see the specific section immediately):
Chapters List: Chapter 1. Process Capability Chapter 2. Measurement System Analysis (Gauge R&R) for Variable Data
Chapter 1. Process Capability Content List: Purpose Terminology Notation What is Process Capability and Why Use It? Short-Term Process Capability Indices Definitions Long-Term Process Capability Indices Definitions What are good (industry standard) or excellent (Six Sigma) Cpk and Ppk values? Process Capability Formulas Assessing Process Capability Graphically Numerical Measure of Process Capability Process Capability Using a Histogram (includes Example) Process Capability Using Control Charts (includes Examples) Assessing Process Capability Using Capability Indexes (includes Examples) Process Capability Using a Statistical Package (includes Example) Summary Appendix A: Constants for Control Charts Appendix B: Standard Normal Table Reference
Chapter 2. Measurement System Analysis (Gauge R&R) for Variable Data Content List: Purpose Terminology/Definitions Notation Methods of Variable Gauge R&R / Variable Measurement System (MSA) Analysis What is Variable Gauge R&R / Variable Measurement System Analysis (MSA) and Why Use It? Acceptance Criteria for Variable Gauge R&R and Gauge Number of Distinct Categories Formulas for Variable Gauge R&R Preparation for a Variable Measurement System Study How to Set Up a Gauge R&R Study in Minitab How to Determine the Variable MSA Number of Runs Conducting a Variable Measurement System Study Examples of Non-Destructive Gauge R&R Analysis using Minitab References
Chapter 1. Process Capability Section 1 (Process Capability).
Purpose
To present some of the process capability measures typically used, demonstrate procedures for their computation, and interpret them.
Section 2 (Process Capability). Terminology
Average: Also called the "mean" of a set of values (the values are obtained from the natural variation of the process). The average or mean is calculated by dividing the sum of the values in the set by the total number of values in the set.
Between-Subgroup Standard Deviation: A measure used to quantify the amount of variation or dispersion between adjacent subgroups in a sample. This short-term between-subgroup standard deviation is represented by the Greek letter sigma σ or the Latin letter s (the Overall Standard Deviation and Within-Subgroup Standard Deviation definitions are listed below).
Common Cause Variation: An oscillation caused by unknown factors resulting in a stable but random distribution of the individual data values around the average of the data. It is a measure of the process potential, that is, how well the process can perform when special cause variation is removed. Common cause variation is also called chance cause variation.
Confidence Interval: Two set values defined such that there is a stated probability (expressed as a percentage, for example a 95% confidence interval) that a population parameter will fall between these two set values.
Defect: A characteristic that is outside the acceptable levels of variation.
DPM: Defects per million (DPM) is the average number of defects per unit observed on the product under study, normalized to one million. This is also called "parts per million" (PPM).
DPMO: Defects per million opportunities (DPMO) is the average number of defects per unit observed, divided by the number of opportunities to make a defect on the product under study, normalized to one million.
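The DPM and DPMO definitions above reduce to simple arithmetic; a minimal sketch (the defect and unit counts used in the comments are illustrative):

```python
def dpm(defects, units):
    """Defects per million units (also called PPM)."""
    return defects / units * 1_000_000

def dpmo(defects, units, opportunities_per_unit):
    """Defects per million opportunities."""
    return defects / (units * opportunities_per_unit) * 1_000_000

# e.g. 7 defects found in 500 units, each with 4 defect opportunities:
# dpm(7, 500) is about 14,000 PPM; dpmo(7, 500, 4) is about 3,500 DPMO
```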
Lower Specification Limit: A value that designates a lower limit below which the process characteristic performance is unacceptable (the Upper Specification Limit definition is listed below).
Normal Distribution: A function that represents the distribution of many random variables as a symmetrical bell-shaped graph.
Overall Standard Deviation: The overall standard deviation (represented by the Greek letter sigma σ or the Latin letter s) is a measure used to quantify the amount of variation or dispersion of the data values for the entire sample. This is also called the global, or long-term, standard deviation (the Within-Subgroup Standard Deviation definition is listed below; the Between-Subgroup Standard Deviation definition is listed above).
Parameter: A measurable characteristic of a population, such as a mean or standard deviation.
Population: The entire group or process characteristic from which a statistical sample is drawn. The information obtained from the sample allows statisticians to infer conclusions regarding the population. Information is collected from a sample because of the difficulty of studying the entire population.
Process in Control, Stable Process: A process in which each of the process characteristics (that is, the average and variability, or fraction nonconforming, or average number of nonconformities of the product or service) is in a state of statistical control.
Process Sigma: In Six Sigma, the process sigma metric is derived using the same method as a Z-value (Z-score). However, in Six Sigma what is measured is the distance of a sample mean from the nearest specification limit (there can be both an upper and a lower specification limit that the sample must fall between). As with the Z-value, the same normal deviates from the Z-table are used to approximate the area under the curve. The process sigma metric is in principle a Z equivalent.
Randomization: Exists when each individual in the population has an equal opportunity to be selected for the sample.
Rational Subgroup: A subset of the data that is defined by a specific factor such as a stratifying factor (for example, the hourly thickness of 4 different cavities of an injection molding process), a time period, or different raw material batches. The rationale for creating a subgroup is based on identifying samples within a rational subgroup that are as homogeneous as possible, resulting in minimum variation within the subgroup, deemed natural cause variation (also called common cause variation). Therefore, rational subgrouping identifies and separates special cause variation (variation between subgroups caused by specific, identifiable factors).
Representativeness: Although variation is present in a set of values, the sample must be as similar to the population as possible.
Sample: A representative set of values selected from the population. A sample reflects the characteristics of the population; hence, the sample findings can be generalized to the population.
Six Sigma: A disciplined, data-driven approach and methodology for eliminating defects (driving toward six standard deviations between the mean and the nearest specification limit) and improving any process (a manufacturing process or a business process, a product or a service).
Special Cause Variation: A shift in the process output variable data caused by a specific factor such as an unusual extreme environmental condition or a change in the quality of a raw material batch. It can be identified and assigned as a directly contributing factor to the unusual change in a process characteristic. An identified special cause of variation can potentially be removed and is a measure of process control. Special cause variation is also called assignable cause variation.
State of Statistical Control: A state in which the variations among the observed sampling results can be attributed to a system of chance causes that does not appear to change with time. Such a system of chance causes generally will behave as though the results are simple random samples from the same population.
Statistic: A measurable characteristic of a sample, such as a mean or standard deviation.
Upper Specification Limit: A value that designates an upper limit above which the process characteristic performance is unacceptable (the Lower Specification Limit definition is listed above).
Variation: The difference between individual values in a process.
Within-Subgroup Standard Deviation: The subgroup standard deviation (represented by the Greek letter sigma σ or the Latin letter s) is a measure used to quantify the amount of variation or dispersion of a single subgroup in a sample. This is also called the local, or short-term, standard deviation (the Overall Standard Deviation and Between-Subgroup Standard Deviation definitions are listed above).
Z-table: A mathematical table for the values of the cumulative distribution function of the normal distribution. The Z-table is also called the "standard normal table" or "unit normal table".
Z-value (or Z-score): A measure of how many standard deviations below or above the population mean a data point is. A Z-score is also known as a standard score, and it can be placed on a normal distribution curve.
Section 3 (Process Capability). Notation
A measurement of a process characteristic will be denoted by an x, parameters will be denoted by Greek letters, and statistics by Roman letters. An overbar will denote an average, and double overbars will indicate an average of subgroup averages. Other symbols will be defined as they are used.
Section 4. What is Process Capability and Why Use It?
Process capability refers to the capability of a process to consistently make a product that meets a customer-specified specification range (tolerance). Capability indices are used to predict the performance of a process by comparing the width of the process variation to the width of the specified tolerance. It is used extensively in many industries and only has meaning if the process being studied is stable (in statistical control) [a]. Process capability analysis is a statistical technique used in [b]: Assessing process variability; Establishing specification limits (or setting up realistic tolerances); Determining how well the process will hold the tolerances (the difference between specifications); Analyzing the process variability relative to the specifications; Reducing or eliminating the variability in products and processes to a great extent. The capability of a process should be constantly measured and analyzed. Capability analysis can help us answer the following questions [b]: Is the process meeting customer specifications? How will the process perform in the future? Are improvements needed in the process? Have we sustained these improvements, or has the process regressed to its previous unimproved state? Section 5. Short-Term Process Capability Indices Definitions The short-term capability indices Cp and Cpk are measures calculated using the short-term process standard deviation. Because the short-term process variation is used, these measures are free of subgroup drift in the data and take into account only the within-subgroup variation. Cp is a ratio of the customer-specified tolerance to six standard deviations of the short-term process variation. Cp is calculated without regard to the location of the data mean within the tolerance, so it gives an indication of what the process could achieve if the mean of the data were centered between the specification limits. Because of this assumption, Cp is sometimes referred to as the process potential. 
Cpk is a ratio of the distance between the process average and the closest specification limit to three standard deviations of the short-term process variation. Because Cpk takes into account the location of the data mean within the tolerance, it is a more realistic measure of the process capability. Cpk is sometimes referred to as the process performance [a]. The Cp and Cpk indices are typically calculated from a sample, which means they are sample statistics and not population parameters; therefore, a confidence interval can be calculated for the Cp and Cpk statistics. Section 6. Long-Term Process Capability Indices Definitions The long-term capability indices Pp and Ppk are measures calculated using the long-term process standard deviation. Because the long-term process variation is used, these measures take into account subgroup drift in the data as well as the within-subgroup variation. Pp is a ratio of the customer-specified tolerance to six standard deviations of the long-term process variation. Like Cp, Pp is calculated without regard to the location of the data mean within the tolerance. Ppk is a ratio of the distance between the process average and the closest specification limit to three standard deviations of the long-term process variation. Like Cpk, Ppk takes into account the location of the data mean within the tolerance. Because Ppk uses the long-term variation in the process and takes into account the process centering within the specified tolerance, it is a good indicator of the process performance the customer is seeing [a]. The Pp and Ppk indices are typically calculated from a sample, which means they are sample statistics and not population parameters; therefore, a confidence interval can be calculated for the Pp and Ppk statistics. Another index, not so commonly used, is the Cpm. The Cpm is an overall capability index that measures whether the process meets specification and is on target. 
Cpm compares the specification spread to the spread of the data, considering the data's deviation from the target value instead of its deviation from the process mean. Large distances between the target and the observations result in a small Cpm value. As the process improves and approaches the target, the value of the Cpm index increases. The Cpm value will be at its best when the process spread is within the specification limits and on target. Section 7 (Process Capability). What are good (industry standard) or excellent (six sigma) Cpk and Ppk values? Because both Cp and Cpk are ratios of the tolerance width to the process variation, larger values of Cp and Cpk are better. The larger the Cp and Cpk, the wider the tolerance width relative to the process variation. The same also applies to Pp and Ppk. What determines a “good” or an “excellent” value depends on the definitions of “good” and “excellent”. An industry standard is a Cp/Cpk of 1.33, which is approximately equivalent to a short-term Z of 4. A Pp/Ppk of 1.33 is approximately equivalent to a long-term Z of 4. A Six Sigma process typically has a short-term Z of 6 or a long-term Z of 4.5. However, standards for considering a process to be capable sometimes differ from industry to industry and organization to organization. A general rule is that a Cp/Cpk index value of 1.33 is a minimum acceptable standard for capability. As Table 1 shows, this corresponds to 63 parts per million defective (ppmd), assuming a normal distribution, which is a 4 sigma level of quality (see the Table 1 footnote). In this context, 4 sigma refers to ±4 standard deviations from the process mean, a total spread of 8 standard deviations. The larger the value of the capability ratio, the larger the magnitude of an assignable cause event that can be tolerated without generating large amounts of out-of-specification material [d]. 
For a comprehensive Sigma Conversion Table (from the ASQ American Society for Quality) refer to the "Website" tab feature on this Phone/Tablet Application.

Table 1. Quality level, Cp, and ppm defective
Quality level    Cp      ppm defective
3 sigma          1.00    2,700
4 sigma          1.33    63
5 sigma          1.67    0.57
6 sigma*         2.00    0.002
* The Six Sigma method allows the distribution mean to drift by ±1.5 standard deviations. Six sigma quality without the drift equates to 0.002 ppm defective. Six Sigma quality with the drift allowed equates to the often quoted 3.4 ppm defective, or 3.4 defective parts per million opportunities [d].

Section 8. Process Capability Formulas Long-Term (Overall) Standard Deviation Formula: s = sqrt[ Σ(xi − x̄)² / (n − 1) ], where s is the sample long-term standard deviation (the Greek letter σ would be used for the population long-term standard deviation). Short-Term (Within-Subgroup) Standard Deviation Formula: σst = Rbar/d2 or σst = sbar/c4, where Rbar is the average of the ranges for the subgroups and sbar is the average of the standard deviations for the subgroups. A subgroup range (R) is the maximum individual value minus the minimum individual value in the subgroup. The c4 and d2 values are statistical constants; for the c4 and d2 constants refer to Appendix A. A subgroup standard deviation is computed with the same sample standard deviation formula given above, using only the values in that subgroup. Note: The average of the standard deviations s for the subgroups has to be divided by c4 to obtain the short-term standard deviation σst for all the subgroups. Alternatively, the standard deviation of a single subgroup can be divided by c4 to obtain the short-term standard deviation for that single subgroup, and then the average short-term standard deviation σst for all the subgroups can be calculated. Capability Indices Formulas: Cp = (USL − LSL) / 6σst, where σst is the short-term pooled standard deviation. Pp = (USL − LSL) / 6σlt, where σlt is the long-term standard deviation. 
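The short-term standard deviation and Cp formulas above are straightforward to sketch in code. Below is a minimal Python illustration, not from any particular package: the subgroup data are made up, the function names are my own, and the constants are the Appendix A values for subgroups of size five (d2 = 2.326, c4 = 0.9400).

```python
# Sketch: estimating the short-term (within-subgroup) standard deviation
# via Rbar/d2 or sbar/c4, then computing Cp. Subgroup data are invented
# for illustration only.
from statistics import stdev

D2_N5 = 2.326   # d2 constant for subgroup size n = 5 (Appendix A)
C4_N5 = 0.9400  # c4 constant for subgroup size n = 5 (Appendix A)

def sigma_st_from_ranges(subgroups, d2):
    """Short-term sigma estimate: average subgroup range divided by d2."""
    rbar = sum(max(s) - min(s) for s in subgroups) / len(subgroups)
    return rbar / d2

def sigma_st_from_stdevs(subgroups, c4):
    """Short-term sigma estimate: average subgroup std dev divided by c4."""
    sbar = sum(stdev(s) for s in subgroups) / len(subgroups)
    return sbar / c4

def cp(usl, lsl, sigma_st):
    """Cp: specified tolerance width over six short-term standard deviations."""
    return (usl - lsl) / (6 * sigma_st)
```

For well-behaved data the two estimators give similar (not identical) values of σst; either can then be plugged into the Cp and Pp formulas above.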
Cpl = (Mean − LSL) / 3σst; Cpu = (USL − Mean) / 3σst; Cpk = Min (Cpl, Cpu). Ppl = (Mean − LSL) / 3σlt; Ppu = (USL − Mean) / 3σlt; Ppk = Min (Ppl, Ppu). Z-Value (Sigma Level) Formulas: Zst = (x − mean) / σst, where Zst is the short-term Z-value; Zlt = (x − mean) / σlt, where Zlt is the long-term Z-value (also called Z-score). Defects per Million (also called Parts per Million) Formula: DPM = (Defects × 1,000,000) / Sample Size. Defects per Million Opportunities Formula: DPMO = (Defects × 1,000,000) / (Number of Units × Opportunities per Unit). Section 9. Assessing Process Capability Graphically Process capability analysis uses a histogram of the data distribution and compares the natural process variation (spread) to the specification limit(s), as shown in Figure 1. Figure 1. Six Sigma Spread as the Process Capability [b] One way of determining the capability of a process is to construct a frequency histogram when the process is believed to be in control. Such a histogram should be created using a large sample of individual measurements, preferably 50 or more. Next, the specification limits and the target value of the process characteristic being studied are plotted on the histogram. The plots in Figures 2 through 8 are explained subsequently [b]. Figure 2. Products are outside the upper specification limit [b] Figure 3. Products are outside the upper and lower specification limits [b] Figure 4. Products are outside the lower specification limit [b] Figure 5. Products are outside the upper specification limit [b] Figure 6. Process variation is small but off-centered or shifted from the target [b] Figure 7. Process variation is small but off-centered or shifted from the target [b] Figure 8. The process is centered, and there is reduced variation [b] Figures 2, 3, 4, and 5 show that a significant percentage of products are outside the upper or lower specification limit. In Figure 2, the process is not centered, meaning it is off target, and the process variation is large. 
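To make the Z-value and defect-rate formulas above concrete, here is a small Python sketch. The function and argument names are my own, and the last function assumes a centered normal process with no mean drift; under that assumption its output reproduces the ppm column of Table 1.

```python
# Sketch of the Section 8 Z-value and DPM/DPMO formulas, plus a
# Cp-to-ppm conversion for a centered normal process (no 1.5-sigma drift).
from math import erf, sqrt

def phi(z):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def z_value(x, mean, sigma):
    """How many standard deviations x lies from the mean."""
    return (x - mean) / sigma

def dpm(defects, sample_size):
    """Defects per million units."""
    return defects * 1_000_000 / sample_size

def dpmo(defects, units, opportunities_per_unit):
    """Defects per million opportunities."""
    return defects * 1_000_000 / (units * opportunities_per_unit)

def cp_to_ppm(cp):
    """Two-sided ppm defective for a centered normal process with a given Cp."""
    z = 3.0 * cp  # distance from mean to either spec limit, in sigmas
    return 2.0 * (1.0 - phi(z)) * 1_000_000
```

For example, cp_to_ppm(1.00) gives about 2,700 ppm and cp_to_ppm(1.33) about 63 ppm, matching Table 1's 3 sigma and 4 sigma rows.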
Due to this, a large percentage of the products are outside the upper specification limit (USL). In Figure 3, the process is centered but the process variation is large, resulting in a large percentage of the products above and below the specification limits. None of these processes is capable, as they do not meet the specification requirements. Figure 4 shows that the process variation is small but shifted to the left; as a result, a large percentage is below the lower specification limit (LSL). Figure 5 shows that the process is off-center and has a large process variation; the process is not meeting the USL. In Figures 6 and 7, the process variation is small but off-centered or shifted from the target. These processes are within the specification limits and are meeting the customer requirements. The processes are capable, but any change in the process over time or in the specification requirements will make the process incapable, requiring process improvement initiatives to restore the process. The process in Figure 8 is centered with much less variation. The process is meeting the specification limits and is capable of satisfying the customer requirements. This is the most desirable of all the cases described earlier. A graphical representation of Cp, or the process capability ratio, is shown in Figures 9 and 10 below. Figure 9. Cp or process capability ratio [b] Figure 10. Graphical representation of process capability for Cp < 1.0, Cp = 1.0, and Cp > 1.0 [b] Section 10. Numerical Measure of Process Capability There are several methods of quantifying and determining the process capability. We discuss the following methods: Process capability using a histogram and normal distribution (by finding the number or percentage of the products outside of the specification limits); Process capability using control charts; Process capability using capability indexes; Process capability using a statistical package (most commonly used in the industry). Section 11. 
Process Capability Using a Histogram (includes Example) One way of expressing process capability is to determine the percentage of the products outside of the specification limits, or the percent of nonconforming products. In cases where the process data follow a normal distribution, the nonconformance percentage can be estimated even if the specification limits are not known. In the example presented here, we will use a histogram to estimate the nonconformance rate. When using a histogram to assess the nonconformance rate, it is suggested that at least 100 observations are available and the process is stable, so that a reasonable estimate of process capability can be obtained. Example 1: Calculating Process Capability Using Histograms [b] We will consider the length of 150 measurements (in cm) of a machined part. Using the distribution of the length data, we will determine the process capability. Suppose that the specification limits on the length are 6.00 ± 0.05. We would like to determine the percentage of the parts outside of the specification limits. Since the measurements are very close to normal, we can use the normal distribution to calculate the nonconforming percentage. Figure 11 shows the histogram of the length data with the target value and specification limits. Figure 12 shows that the data appear to be normally distributed and the process is operating very close to the target. From Figures 11 and 12, it is evident that the process is producing a small percentage of nonconforming products above the upper and below the lower specification limits (see the calculations below). The percentages above and below the specification limits can be calculated using the normal distribution, with Z1 = (LSL − x̄)/σ̂ and Z2 = (USL − x̄)/σ̂, where the mean and standard deviation are estimated from the data. The estimated values are shown in Figures 11 and 12. 
From the standard Normal Table (in Appendix B), Z1 = −2.50 corresponds to 0.4938, and Z2 = 2.56 corresponds to 0.4948. Therefore, the proportion of products within the specification limits = 0.4938 + 0.4948 = 0.9886, or 98.86 percent. The proportion falling outside of the specification limits is (1 − 0.9886) = 0.0114 (1.14 percent). In parts per million (PPM) this translates to 0.0114 × 10^6, or 11,400 parts outside the specification limits. Figure 11. Histogram of the length data with specification limits and target [b] Figure 12. Fitted normal curve with reference line for the length data [b] Section 12. Process Capability Using Control Charts (includes Examples) It is a common practice to take the six sigma spread of a process's inherent variation as a measure of process capability when the process is stable. Thus, the process capability is the process spread, which is equal to six standard deviations. This concept is illustrated using an example here. Example 2: Calculating Process Capability Using Control Charts [b] A chemical company manufactures and markets 50 lb. bags of nitrogen fertilizer for lawns. Due to some recent problems in their production process, overfilling and underfilling of the bags of fertilizer have been reported. The problem was investigated and appropriate adjustments were made to the machines that were used to fill the fertilizer bags. When the process was believed to be stable, the quality supervisor collected 30 samples, each of size five. The control charts for x̄ and R were constructed, and the tests were conducted for the special or assignable causes. The process was found to be in control and no special (assignable) causes of variation were present. The x̄ and R control charts for the process are shown in Figure 13. Determine the process capability for this process based on the average range value, or Rbar, reported on the R chart. Note that Rbar (the average of all subgroup ranges) can be used to estimate the process standard deviation. 
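Example 1 can be cross-checked with the normal CDF directly, rather than a four-decimal table. The short Python sketch below uses the Z values given above; at full CDF precision the answer is roughly 11,440 ppm, agreeing with the table-based 11,400 to rounding.

```python
# Cross-check of Example 1: fraction within spec is Phi(Z2) - Phi(Z1)
# with Z1 = -2.50 at the LSL and Z2 = 2.56 at the USL.
from math import erf, sqrt

def phi(z):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

within = phi(2.56) - phi(-2.50)          # ~0.9886, as in the text
ppm_outside = (1.0 - within) * 1_000_000  # ~11,400 ppm nonconforming
```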
Solution: (a) First, estimate the standard deviation σ̂ (the short-term standard deviation, also called the within-subgroup standard deviation) from the given information using σ̂ = Rbar/d2. Note: σ̂ (σhat) is the estimate of σ for the subgroups. The value of Rbar is reported in the chart for range in Figure 13, and d2 is obtained from the table of constants for control charts in Appendix A. The value of d2 from the table for a subgroup size of five (n = 5) is 2.326, which gives σ̂ = 0.842. The process capability: 6σ̂ = 6(0.842) = 5.052. Figure 13. The x̄ and R control charts for Example 2 [b] The process capability here is a function of the estimated standard deviation. A reduction in the value of process capability means reduced process variability and improved capability. The process capability obtained here is more meaningful when compared to the ongoing process at a later stage. (b) The process capability can also be determined by estimating σ̂ using the average of the standard deviations, s̄, of the subgroups instead of the average of the subgroup ranges. Here we demonstrate the calculation of process capability using the standard deviation. Figure 14 shows the x̄–S control charts (the control charts for the average and standard deviation). Determine the process capability using the value of s̄ in the control chart. First, estimate the standard deviation σ̂ from the given information using σ̂ = s̄/c4. Note: σ̂ (σhat) is the estimate of σ. The value of s̄ is reported in the chart for the standard deviation s (the bottom chart in Figure 14), and c4 is obtained from the table of constants for control charts in Appendix A. The value of c4 from the table for a subgroup size of five (n = 5) is 0.9400, which gives σ̂ = 0.841. The process capability: 6σ̂ = 6(0.841) = 5.046 Figure 14. 
The x̄ and S control charts [b] Example 3: Improvement in Process Capability Using Control Charts [b] In an effort to continuously improve and refine the fertilizer bag filling process of Example 2, the process was further refined and precise adjustments were made to the machines used to fill the fertilizer bags. This led to a further reduction in the process variation, and the Six Sigma team was able to reduce the process standard deviation. The x̄ and R charts for the improved process are shown in Figure 15. The process was stable, and the control chart tests for the special (assignable) causes of variation showed no problem. Calculate the standard deviation using the R chart, calculate the process capability for the improved process, and determine any improvement in the process capability. First, estimate the standard deviation σ̂ from the information in the R chart in Figure 15 using σ̂ = Rbar/d2. Note: σ̂ (σhat) is the estimate of σ for the subgroups. The value of Rbar is reported in the chart for range in Figure 15, and d2 is obtained from the table of constants for control charts in Appendix A. The value of d2 from the table for a subgroup size of five (n = 5) is 2.326, which gives σ̂ = 0.713. The process capability: 6σ̂ = 6(0.713) = 4.278. Comparing this process capability to the process capability of the initial process in Example 2, we find approximately a 15 percent improvement: the process capability of the initial process in Example 2 is 5.052, the process capability of the improved process in Example 3 is 4.278, and (5.052 − 4.278)/5.052 ≈ 15.3 percent. Figure 15. The x̄ and R control charts for the improved process [b] Section 13. Assessing Process Capability Using Capability Indexes (includes Examples) The calculation of Cp assumes that the process has both upper and lower specifications. For a one-sided specification limit, the Cpk is calculated as Cpl or Cpu (for the LSL and USL, respectively). 
Note that all of the indexes (Cp, Cpl, Cpu, and Cpk) use an estimate of the process standard deviation, and the results obtained from these indexes are very sensitive to the estimated value of the standard deviation. The estimate of the standard deviation also differs depending on whether the long-term or short-term process capability is being assessed. Using the appropriate estimate of the standard deviation is critical to assessing the correct process capability. Example 4 [b]: Calculate the capability indexes Cp, Cpl, Cpu, and Cpk for the process for which the data are given here. Interpret their meaning. Explain the difference between Cp and Cpk. USL = 10.050, LSL = 9.950, with Mean = 9.999 and σhat = 0.0165 as the estimates of the mean and standard deviation. Solution: (Note: σhat is the estimate of the standard deviation for the subgroups.) Cp = (USL − LSL)/6σhat = (10.050 − 9.950)/(6 × 0.0165) = 1.01; Cpl = (Mean − LSL)/3σhat = (9.999 − 9.950)/(3 × 0.0165) = 0.99; Cpu = (USL − Mean)/3σhat = (10.050 − 9.999)/(3 × 0.0165) = 1.03; Cpk = Min(Cpl, Cpu) = 0.99. Cp = 1.01 means that the process is marginally capable (just able to meet the specifications). Cp = Cpk would mean that the process is centered; for this process, these values are not equal (1.01 versus 0.99), therefore the process is slightly off-centered. Difference between Cp and Cpk: The process capability ratio, or Cp, does not take into account the shift in the process mean. It does not consider where the mean is relative to the specifications; Cp measures only the spread of the specifications relative to the six sigma spread, or process spread. Cpk, on the other hand, takes into account the shift in the process mean. Example 5 [b] a. Given Xbar = 70 and σhat = 2 (σhat is an estimate of σ for the within-subgroup variation), LSL = 58, USL = 82, calculate the process capability indexes Cp, Cpl, Cpu, and Cpk. Solution: Cp = (82 − 58)/(6 × 2) = 2.0; Cpl = (70 − 58)/(3 × 2) = 2.0; Cpu = (82 − 70)/(3 × 2) = 2.0; Cpk = Min(Cpl, Cpu) = 2.0. The problem is shown visually in Figure 16. b. Calculate the capability indexes for the data in part (a) if the mean has shifted from 70 to 73 (all the other values are the same as in part (a)). Solution: Cp = 2.0 (unchanged); Cpl = (73 − 58)/(3 × 2) = 2.5; Cpu = (82 − 73)/(3 × 2) = 1.5; Cpk = Min(Cpl, Cpu) = 1.5. Figure 17 shows the original mean and the shift. Figure 16. LSL and USL for Example 5 [b] Figure 17. Shift in the mean from 70 to 73 [b] Section 14. 
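The arithmetic in Examples 4 and 5 is easy to reproduce in a few lines of Python. This is a sketch using the index formulas from Section 8; the function name is my own.

```python
# Sketch: Cp, Cpl, Cpu, and Cpk from a mean, a sigma estimate, and
# two-sided specification limits, as in Examples 4 and 5.
def capability(mean, sigma, lsl, usl):
    """Return (Cp, Cpl, Cpu, Cpk) for a two-sided specification."""
    cp = (usl - lsl) / (6 * sigma)
    cpl = (mean - lsl) / (3 * sigma)
    cpu = (usl - mean) / (3 * sigma)
    return cp, cpl, cpu, min(cpl, cpu)

# Example 4: USL = 10.050, LSL = 9.950, mean = 9.999, sigma-hat = 0.0165
cp4, cpl4, cpu4, cpk4 = capability(9.999, 0.0165, 9.950, 10.050)

# Example 5(a): mean 70, sigma 2 -> every index is 2.0
# Example 5(b): mean shifted to 73 -> Cp stays 2.0, Cpk drops to 1.5
```

Running it confirms the values above: Cp ≈ 1.01 and Cpk ≈ 0.99 for Example 4, and Cpk falling from 2.0 to 1.5 when the Example 5 mean shifts to 73.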
Process Capability Using a Statistical Package (includes Example) [b] Process capability can be assessed easily using computer packages. Here an example is shown of assessing process capability using the MINITAB statistical package. MINITAB uses both a graphical and a numerical approach to process capability. The example demonstrates the capability of a production process that produces a certain PVC pipe. The diameter of the pipe is of concern. The specification limits on the pipes are 7.000 ± 0.025 cm. There has been a consistent problem with meeting the specification limits, and the process produces a high percentage of rejects. Data on the diameter of the pipes were collected to determine the process capability of the current process. This would also provide an idea about the improvement for future production runs. A random sample of 150 pipes was selected and the diameters measured. The process capability report shown in Figure 18 was generated using MINITAB. Figure 18. Process capability report of pipe diameter: Run 1 [b] The process capability report in Figure 18 shows that the process producing the pipes is stable. The histogram of the data shows that the measurements appear to follow a normal distribution. Since the process is stable and the measurements appear to be normally distributed, the normal distribution option of process capability analysis can be used to assess the process capability. Interpreting the Results Refer to the process capability report in Figure 18. This figure provides a detailed process capability of the pipe manufacturing process, divided into different sections using boxes. We have labeled the boxes using numbers. The entries in each box are explained in the following: The upper left box (Box 1) reports the process data, including the LSL, target, and USL. These values are input to the program and are reported as a part of the capability report. 
Based on the sample data, the program calculates the sample mean and the estimates of the within and overall standard deviations. The StDev(Within) is the standard deviation within the subgroup. The potential, or within, capability indexes (explained below) are calculated based on the estimate of the within-subgroup σhat, or the variation within each subgroup. If the data are in one column and the subgroup size is one (as in this example, where a sample of 150 diameters is entered in one column), this standard deviation is calculated based on the moving range (the adjacent observations are treated as subgroups). If the subgroup size is greater than one, the within standard deviation is calculated using the range or standard deviation control chart. (In MINITAB, you can specify the method you want.) The other standard deviation, StDev(Overall) or the overall variation, is the variation of the entire data set in the study. The overall capability indexes are calculated based on this estimate. These indexes are explained below. Box 2 in Figure 18 shows the histogram of the data along with two normal curves overlaid on the histogram. One normal curve (with a solid line) is generated using the mean and the estimate of the within-subgroup standard deviation, while the other normal curve (the one with a dotted line) is plotted using the mean and the overall estimate of the standard deviation. The histogram and the normal curves can be used to check visually whether the process data are normally distributed. The histogram and the normal curves show that the process appears to be normally distributed. There is a deviation of the process mean (7.010) from the target value of 7.000. Since the process mean is greater than the target value, the pipes produced by this process exceed the USL. A significant percentage of the pipes are outside of the USL. Box 3 reports the potential, or within, process capability and the overall capability of the process (see the right-hand side of Figure 18). 
The potential capability of the process tells us what the process would be capable of producing if it did not have shifts and drifts, or how the process could perform relative to the specification limits if the shifts in the process mean could be eliminated. The overall capability of the process tells us how the process is actually performing relative to the specification limits. If there is a substantial difference between the within and overall variation, it may be an indication that the process is out of control, or that other sources of variation are not estimated by the within capability (MINITAB). In Box 3, the value of Cp = 0.86 indicates that the process is not capable (Cp < 1). Also, Cpk = 0.50 is less than Cp = 0.86; this means that the process is off-centered. Note that when Cpk = Cp, the process is centered midway between the specification limits. The Cpk index provides information about how close the process is to the specification limits. The value Cpk = 0.50 (less than 1) is an indication that an improvement in the process is warranted. The process can be improved by centering the process and by reducing the variation. In Box 3, the overall capability indexes, or the process performance indexes Pp, PPL, PPU, Ppk, and Cpm, are also calculated and reported in the capability report (Figure 18). Note that these indexes are based on the estimate of the overall standard deviation, and they determine the overall, or long-term, capability of the process. Note that Ppk is the index for the whole process. Pp and Ppk are interpreted similarly to Cp and Cpk. For this example, the Cp and Cpk values (0.86 and 0.50, respectively) are very close to Pp and Ppk (0.88 and 0.51). When Cpk equals Ppk, the within-subgroup standard deviation is the same as the overall process standard deviation. For this process, the within and overall standard deviations are close. The index Cpm is calculated for the specified target value. 
If no target value is specified, Cpm is not calculated. The Cpm tells us whether the process is off-center or deviates from the target. A higher Cpm index value indicates a better process. The process is centered if the Cpm, Ppk, and Pp values are the same. For this process, Pp = 0.88, Ppk = 0.51, and Cpm = 0.59; a comparison of these values indicates that the process is off-center. The bottom three boxes (Boxes 4, 5, and 6) in Figure 18 report the observed performance, expected within performance, and expected overall process performance in PPM. The observed performance (Box 4) in Figure 18 shows the values in Table 2 below. This means that the number of pipes below the LSL is zero; that is, the process is able to meet the LSL. The number of pipes (out of a million) above the USL is 53,333.33. The total number of nonconforming products produced by this process is 53,333 out of a million. These are actual process performances. The values in the expected within performance (Box 5 in Figure 18) are based on the estimate of the within-subgroup standard deviation. These are the average numbers of parts below and above the specification limits in PPM. For this process, the expected within performance measures are shown in Table 3 below. The values in Table 3 show the average, or expected, performance based on the estimate of the within-process standard deviation; these may be interpreted as the short-term process performance. Box 6, the expected overall performance, is calculated using formulas similar to those used for the within performance, except that the estimate of the standard deviation is based on the overall data. For this process, the expected overall performance measures are shown in Table 4. These values are based on the estimate of the overall standard deviation and may be interpreted as the long-term performance of the process. As can be seen from the earlier analysis, the process is producing a large number of products that do not meet the specifications. 
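The "expected performance" figures in Boxes 5 and 6 are normal tail areas computed from a sigma estimate. The Python sketch below is a hedged reconstruction of that idea, not MINITAB's actual output: the within standard deviation of roughly 0.010 is inferred here from the reported Cpk ≈ 0.50 with mean 7.010 and USL 7.025 (it is not given directly in the text), so the resulting ppm values are approximate.

```python
# Hedged sketch of "expected performance" (Boxes 5-6): ppm below the LSL
# and above the USL for a normal process with a given mean and sigma.
# sigma = 0.010 is inferred from Cpk = (USL - mean)/(3*sigma) ~ 0.50.
from math import erf, sqrt

def phi(z):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def expected_ppm(mean, sigma, lsl, usl):
    """Expected parts per million below the LSL and above the USL."""
    ppm_below = phi((lsl - mean) / sigma) * 1_000_000
    ppm_above = (1.0 - phi((usl - mean) / sigma)) * 1_000_000
    return ppm_below, ppm_above

ppm_below, ppm_above = expected_ppm(7.010, 0.010, 6.975, 7.025)
```

With these assumed inputs the expected ppm above the USL comes out in the tens of thousands, the same order as the observed 53,333 ppm in Box 4, which is the kind of agreement the report is checking.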
An improvement in the process is warranted to reduce the nonconforming products. Table 2. Observed Performance [b] Table 3. Expected Within Performance [b] Table 4. Expected Overall Performance [b] Section 15 (Process Capability). Summary Process capability is the ability of a process to meet specifications and is one of the important aspects of overall quality improvement. To attain superior quality, the capability of a process should be constantly measured and analyzed. The process capability tells us: (a) whether the process is meeting customer specifications, (b) how the process will perform in the future, (c) whether the process needs improvement, and (d) whether we have sustained these improvements, or the process has regressed to its previous unimproved state. The process capability also gives us an overall state of the quality by telling us the number of products in a million that do not conform to the specifications. Different ways of assessing process capability were demonstrated with examples: (1) the graphical method, (2) capability using histograms and the normal distribution, (3) process capability using control charts, (4) process capability using capability indexes, and (5) the computer method for assessing process capability using the MINITAB software. A detailed process capability report using MINITAB was presented with this example. Section 16 (Process Capability). 
Appendix A: Constants for Control Charts [c]

N    c4        c5     d2     d3      d4
1    *         *      1      0.82    1
2    0.797885  0.603  1.128  0.8525  0.954
3    0.886227  0.463  1.693  0.8884  1.588
4    0.921318  0.389  2.059  0.8794  1.978
5    0.939986  0.341  2.326  0.8641  2.257
6    0.951533  0.308  2.534  0.8480  2.472
7    0.959369  0.282  2.704  0.8332  2.645
8    0.965030  0.262  2.847  0.8198  2.791
9    0.969311  0.246  2.970  0.8078  2.915
10   0.972659  0.232  3.078  0.7971  3.024
11   0.975350  0.220  3.173  0.7873  3.121
12   0.977559  0.210  3.258  0.7785  3.207
13   0.979406  0.202  3.336  0.7704  3.285
14   0.980971  0.194  3.407  0.7630  3.356
15   0.982316  0.187  3.472  0.7562  3.422
16   0.983484  0.181  3.532  0.7499  3.482
17   0.984506  0.175  3.588  0.7441  3.538
18   0.985410  0.170  3.640  0.7386  3.591
19   0.986214  0.166  3.689  0.7335  3.640
20   0.986934  0.161  3.735  0.7287  3.686
21   0.987583  0.157  3.778  0.7242  3.730
22   0.988170  0.153  3.819  0.7199  3.771
23   0.988705  0.150  3.858  0.7159  3.811
24   0.989193  0.147  3.895  0.7121  3.847
25   0.989640  0.144  3.931  0.7084  3.883

Section 17 (Process Capability). Appendix B: Standard Normal Distribution Table [b] Note: If you cannot read the values in the table below, the Standard Normal Distribution Table can be found at this link: http://www.itl.nist.gov/div898//handbook/eda/section3/eda3671.htm You can open this link directly on this phone application by going to the "Websites" tab feature on this phone/tablet application and clicking on "Normal Table". Section 18 (Process Capability). References Note: If you wish to purchase any of the books listed below in this "References" section, please go to the "ECommerce" tab feature on this phone/tablet application for the preferred booksellers and click directly on the links provided on the ECommerce tab feature. [a] Sheehy, Paul; Daniel Navarro; Robert Silvers; Victoria Keyes. "The Black Belt Memory Jogger: A Pocket Guide for Six Sigma Success". Published by GOAL/QPC and Six Sigma Academy, January 2002. [b] Sahay, Amar. "Managing and Improving Quality". Published by Business Expert Press, December 2015. 
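As a cross-check on the Appendix A table, the c4 column has a known closed form, c4(n) = sqrt(2/(n−1)) · Γ(n/2) / Γ((n−1)/2), which a few lines of Python can verify against the tabulated values.

```python
# Closed-form c4 bias-correction constant for the sample standard
# deviation of a normal sample of size n; compare to Appendix A.
from math import gamma, sqrt

def c4(n):
    """c4 constant for subgroup size n (n >= 2)."""
    return sqrt(2.0 / (n - 1)) * gamma(n / 2.0) / gamma((n - 1) / 2.0)
```

For instance, c4(2) ≈ 0.797885 and c4(5) ≈ 0.939986, matching the table's rows for n = 2 and n = 5. (The d2, d3, and d4 range constants have no similarly simple closed form; they come from the distribution of the range and are normally read from tables such as this one.)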
[c] Wheeler, Donald J.; Chambers, David S. "Understanding Statistical Process Control", Second Edition. Published by SPC Press, Inc., 1992.
[d] Sower, Victor E. "Statistical Process Control for Managers". Published by Business Expert Press, 2014.

Chapter 2. Measurement System Analysis (Gauge R&R) for Variable Data

Section 1 (Gauge R&R). Purpose
To present the variable gauge measurement system analysis (MSA) designed tests, demonstrate the procedures for their computation, and interpret the results.

Section 2 (Gauge R&R). Terminology/Definitions
Variable (gauge) Measurement System Analysis (MSA): An experimental and mathematical method of determining how much the variation within the measurement process contributes to overall process variability. There are five statistics/parameters to investigate in a variable MSA: bias, linearity, stability, repeatability, and reproducibility (refer to Figure 1 below).
Gage R&R: Stands for gage repeatability and reproducibility. It is a statistical tool that measures the amount of variation in the measurement system arising from the measurement device and the people taking the measurement: the gage system's Repeatability (how likely it is that the same person using the same method will get the same measurement) and Reproducibility (how likely it is that different people using the same method/tools will get the same measurement). A Gauge R&R measures the variability in the response minus the variation due to differences in parts. It takes into account variability due to the gage, the operators, and the operator-by-part interaction [b].
Figure 1. Sources of Variation.
Gauge Accuracy: The extent to which the averages of the measurements deviate from the true value [b]. In other words, accuracy is the difference between the average measurement and a standard. Accuracy is a qualitative term referring to whether there is agreement between a measurement made on an object and its true (target or reference) value. 
Gauge Bias: The distance between the observed average measurement and the true value, or "right" answer [b]. Bias is a quantitative term that reflects the difference between the observed average of measurements on the same object and its "true" value obtained from a master or gold standard, or from a different measurement technique known to produce accurate values. If bias exists in the average measured value, the measurement system may require calibration [a]. In statistical terms, bias is identified when the averages of measurements differ by a fixed amount from the "true" value (refer to Figure 2 below). Bias effects include:
Operator bias - Different operators get detectably different averages for the same value. This can be evaluated using the Gage R&R graphs.
Instrument bias - Different instruments get detectably different averages for the same measurement on the same part. If instrument bias is suspected, set up a specific test where one operator uses multiple devices to measure the same parts under otherwise identical conditions. Create a "by instrument" chart similar to the "by part" and "by operator" charts.
Figure 2. Bias (accuracy) concept.
Gauge Precision: A measure of the inherent variability in the measurement system.
Accuracy (bias) versus Precision: Accuracy (bias) refers to the ability of the instrument to measure the true value correctly on average, whereas precision is a measure of the inherent variability in the measurement system (refer to Figures 3 and 4).
Gauge Repeatability: "Within the gage" - the amount of difference that a single data collector/inspector obtains when measuring the same item over and over again. It is also called "test/retest error."
Gauge Reproducibility: The amount of difference that occurs when different people measure the same item. It is also called "between operators" error.
Part-to-Part: An estimate of the variation between the parts being measured.
Figure 3. 
The concepts of accuracy (bias) and precision (repeatability and reproducibility).
Figure 4. The concepts of accuracy and precision. (a) The gauge is accurate and precise. (b) The gauge is accurate but not precise. (c) The gauge is not accurate but it is precise. (d) The gauge is neither accurate nor precise [c].
Gauge Stability: Variation in the measurement over an extended period of time when measurements are made by the same person with the same equipment. Stability, or different levels of variability in different operating regimes, can result from warm-up effects, environmental factors, wear, inconsistent operator performance, and inadequate standard operating procedures. Stability can be assessed as the change in bias over time, a drift. A stable measurement process is in statistical control with respect to location. The stability, or drift, is the total variation in measurements obtained with the same measurement equipment on the same part while measuring a single characteristic over an extended period of time. Stability determines the measurement system's ability to measure consistently over time such that the measurement system does not drift [a]. Refer to Figure 5 below.
Figure 5. Gauge stability over time concept.
Gauge Linearity: Consistency of the measurement system across the entire range of the measurement system. The linearity of a measurement system reflects the differences in observed accuracy and/or precision experienced over the range of measurements made by the system. A simple linear regression model is often used to describe this feature. Problems with linearity are often the result of calibration and maintenance issues. Linearity can be assessed as the change in bias over the normal operating range, a systematic error component of the measurement system. Linearity determines whether a bias exists in the measurement system over its operating range. 
For example, thermometers or scales may be biased when measuring at the low end of the scale if the instruments are intended for larger values of measure [a].
Figure 6. Linearity concept [d].
Interaction: In statistics, an interaction may arise when considering the relationship among three or more variables, and describes a situation in which the simultaneous influence of two variables on a third is not additive. The effect of one independent variable on the dependent variable of interest may not be the same at all levels of the other independent variable. Another way to put this is that the effect of one independent variable may depend on the level of the other independent variable.
P-value: The P-value, or calculated probability, is the probability of obtaining results at least as extreme as those observed, assuming the null hypothesis (H0) is true. Typically, if it is lower than 0.05 (when the significance level alpha is 0.05), the null hypothesis is rejected.
Null Hypothesis (H0, also abbreviated as Ho): In a statistical test, the null hypothesis states that there is no significant difference between the specified populations (tested through samples of the populations), any observed difference being due to sampling or experimental error.
Alternate Hypothesis (H1, also abbreviated as Ha): In a statistical test, the alternate hypothesis states that there is a significant difference between the specified populations (tested through samples of the populations), and that the observed difference is not attributable to sampling or experimental error.
Gauge Discriminating Power: The ability of the gauge to distinguish between units of product.
Discrimination: The measurement system's ability to detect small changes in the characteristic being measured. This is also referred to as readability or resolution [a]. A general rule of thumb is that the measuring instrument's discrimination ought to be at least one-tenth of the range to be measured. 
Traditionally this range has been taken to be the product specification. More recently, the 10-to-1 rule has been interpreted to mean that the measuring equipment should be able to discriminate to at least one-tenth of the process variation (refer to Figure 7 below). The measure of this ability is typically the value of the smallest graduation on the scale of the instrument. If the instrument has "coarse" graduations, then a half-graduation can be used (refer to Figure 8 below).
Figure 7: Measurement system (variable gauge) Discrimination [a].
Figure 8: Smallest graduation on the scale of an instrument [d].

Section 3 (Gauge R&R). Notation
A measurement of a process characteristic will be denoted by an x, parameters will be denoted by Greek letters, and statistics by Roman letters. An overbar will denote an average, and a double overbar will indicate an average of subgroup averages. Other symbols will be defined as they are used.

Section 4. Methods of Variable Gauge R&R / Variable Measurement System (MSA) Analysis
Figure 9 below shows a chart with the methods of measurement system (variable gage) analysis.
Figure 9: Methods of Measurement System (variable Gage) Analysis.

Section 5. What is Variable Gauge R&R / Variable Measurement System Analysis (MSA) and Why Use It?
Measurement systems, if not functioning properly, can be a source of variability that negatively impacts capability. If measurement is a source of variability, organizations can be rejecting good units and/or accepting bad units. Therefore, it must be determined whether the measurement system is reliable before the baseline capability can be determined. Doing so allows the organization to properly accept good units and properly reject bad units, thus establishing the true quality level [a]. The gauge measurement system analysis is used for the following:
To ensure that the differences in the data are due to actual differences in what is being measured and not to variation in measurement methods [b]. 
To determine how much of the total observed variability is due to the gauge or instrument. Total process variation typically comprises variation from two sources: variation inherent in the process and variation due to measurement. Mathematically, this may be represented as: σ²(total) = σ²(process) + σ²(measurement).
To isolate the components of variability in the measurement system.
To assess whether the instrument or gauge is capable (that is, whether it is suitable for the intended application).

Section 6. Acceptance Criteria for Variable Gauge R&R and Gauge Number of Distinct Categories
Gauge R&R Acceptance Criteria: Common standards (such as AIAG) for %Study Var are shown in Table 1 and Figure 10.
G R&R under 10 percent: Generally considered to be an acceptable measurement system. Recommended, especially when trying to sort or classify parts or when tightened process control is required [d]. It means little variation is due to the measurement system; most of it is true variation [b].
G R&R 10 percent to 30 percent: May be acceptable for some applications. The decision should be based upon, for example, the importance of the application measurement, the cost of the measurement device, and the cost of rework or repair [d]. May be acceptable depending on the application (30% is the maximum acceptable for any process improvement effort) [b].
G R&R over 30 percent: Considered to be unacceptable. Every effort should be made to improve the measurement system because it is too unpredictable [b]. This condition may be addressed by the use of an appropriate measurement strategy; for example, using the average of several readings of the same part characteristic in order to reduce final measurement variation [d].
Table 1. Gauge R&R Acceptance Criteria.
Figure 10. Gauge R&R Acceptance Criteria [a].
The recommended number of distinct categories is 5 or more, as shown in Figure 11 below.
Figure 11. Impact of Number of Distinct Categories (ndc) of the Process Distribution on Control and Analysis Activities [d]. 
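The %Study Var acceptance bands in Table 1 and the number-of-distinct-categories rule can be sketched as a small Python helper. The function names are illustrative; the example standard deviations are back-calculated from the %Contribution figures reported for Example 1A later in this chapter (Total Gage R&R 7.76%, Part-to-Part 92.24%), so the output lands near that example's reported 27.86% study variation and 4 distinct categories:

```python
import math

def pct_study_var(sd_grr, sd_total):
    """%Study Var: Gage R&R standard deviation as a percent of total."""
    return 100 * sd_grr / sd_total

def number_of_distinct_categories(sd_part, sd_grr):
    """ndc = truncate(1.41 * sd_part / sd_grr); 5 or more is recommended."""
    return int(1.41 * sd_part / sd_grr)

def classify_grr(pct):
    """Common (AIAG-style) acceptance bands for %Study Var (Table 1)."""
    if pct < 10:
        return "acceptable"
    if pct <= 30:
        return "marginal"
    return "unacceptable"

# Component standard deviations implied by Example 1A's %Contribution
# figures (sd = square root of each variance share of a unit total variance)
sd_grr = math.sqrt(0.0776)    # Total Gage R&R contributes 7.76% of variance
sd_part = math.sqrt(0.9224)   # Part-to-Part contributes 92.24% of variance
sd_total = math.sqrt(sd_grr**2 + sd_part**2)

pct = pct_study_var(sd_grr, sd_total)                 # about 27.9%
ndc = number_of_distinct_categories(sd_part, sd_grr)  # 4 distinct categories
```

Note how %Contribution (based on variances) and %Study Var (based on standard deviations) differ for the same data: 7.76% of the variance corresponds to roughly 27.9% of the standard deviation, which is why the two criteria use different thresholds.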
Four Acceptance Criteria Summary in Variable Gage R&R: 1) % Study Variation (%R&R) is based on standard deviation. 2) % Tolerance (P/T ratio) is based on the USL and LSL. 3) % Contribution is based on variance. 4) The number of distinct categories is based on process variation. Ideally, all four categories should be in the GREEN zone. The visual aids below show commonly used evaluation criteria for each category.
Figure 12. Four Acceptance Criteria Summary in Variable Gage R&R.

Section 7. Formulas for Variable Gauge R&R
The P/T ratio is commonly defined in terms of an interval that contains 99 percent (5.15σ) of the variation of the theoretical distribution. Some organizations use different definitions: some use a 99.7 percent (6σ) interval, while others use a 95 percent (4σ) interval for defining the P/T ratio.

Section 8. Preparation for a Variable Measurement System Study
1) The approach to be used should be planned. For instance, determine by using engineering evaluation, visual observation, or a gage study whether there is an appraiser influence in calibrating or using the instrument. There are some measurement systems where the effect of reproducibility can be considered negligible; for example, when a button is pushed and a number is printed out. Such cases can be treated as if there were a single operator.
2) The number of appraisers, number of sample parts, and number of repeat readings should be determined in advance. Some factors to be considered in this selection are: (a) Criticality of dimension: Critical dimensions require more parts and/or trials because of the degree of confidence desired for the gage study estimates. (b) Part configuration: Bulky or heavy parts may dictate fewer samples and more trials. (c) Customer requirements.
3) Since the purpose is to evaluate the total measurement system, the appraisers chosen should be selected randomly from those who normally operate the instrument. 
4) Selection of the sample parts is critical for proper analysis and depends entirely upon the design of the MSA study, the purpose of the measurement system, and the availability of part samples that represent the variation of the production process. The availability of samples over the entire operating range becomes very important. The sample parts must be selected from the process and represent the entire production operating range.
5) The instrument should have a discrimination that allows at least one-tenth of the expected process variation of the characteristic to be read directly. For example, if the characteristic's variation is 0.001, the equipment should be able to "read" a change of 0.0001.
6) Assure that the measuring method (i.e., appraiser and instrument) is measuring the dimension of the characteristic and is following the defined measurement procedure.
7) Calibration can affect the R&R study. Ideally, the device should be calibrated before the study begins and not recalibrated until the study has ended. If this is not possible, variation due to calibration may appear in the study.
8) Plan to minimize the variation within each sample for the R&R study. In the case of destructive tests, it is often possible to minimize within-sample effects by selecting test pieces for each sample from a homogeneous and compact area of a master sample. In other cases, such as surface texture measurements with a profilometer, it may be necessary to direct the operators to test at exactly the same point, as long as the test does not change the characteristic being measured.
9) Determine whether individual measurements or averages will be used. Generate the measurement in the same manner that it is produced in the standard operating procedure for product evaluation. If a single measurement is typically used, then do that in the R&R study. If more than one measurement is made and averaged for the official measurement, then do that for the R&R. 
Make sure all operators report the measurement in the same way. However, when averages are reported, record the individual measurements. The individual measurements may be useful for diagnostics [d].

Section 9. How to Set Up a Gauge R&R Study in Minitab
Go to: Stat > Quality Tools > Gage Study > Create Gage R&R Study Worksheet. Enter the number of parts, the number of operators, and the number of replicates. Click OK. Refer to Figure 13 below.
Figure 13. Creating a Gage R&R Worksheet in Minitab Statistical Software [f].
Gage R&R Study Worksheet (refer to Figure 14 below): Parts: 10; Operators: 3; Replicates: 2; Total runs: 60.
Figure 14. Gage R&R Worksheet in Minitab Statistical Software [f].

Section 10. How to Determine the Variable MSA Number of Runs
If practical, use 10 samples. However, select enough samples so that (# of samples) x (# of operators) is greater than 15. If this is not possible or practical, increase the number of trials [e]. The recommended number of trials for a given value of ([# of samples] x [# of operators]) assures that the estimates of σ (for EV: Equipment Variation) are based on at least 14 degrees of freedom, the minimum for the case where (# of samples) x (# of operators) is > 15 and there are two trials. If one does the standard study with 10 samples, two trials are sufficient if at least two operators are used. Note: The Gauge R&R study must always have at least 2 replicates (2 trials). When there is a single operator (if there is no operator, assume a single operator), the rule concerning (# of samples) x (# of operators) > 15 becomes simply (# of samples) > 15. If the number of samples is 15 or less, then the number of trials should be increased accordingly. The parts used in the study should cover the range of measurements expected. However, there is also a requirement that the range of measurements used in the study be defined such that the average range on each part is expected to estimate a common σ (for EV: Equipment Variation). 
If this homogeneity is not met, the results become erratic. In other words, the between-part variation is expected to be large (or representative of the process variation), but the within-part variation is expected to be minimal.

Section 11. Conducting a Variable Measurement System Study
1) The measurements should be made in a random order to ensure that any drift or changes that could occur will be spread randomly throughout the study. The appraisers should be unaware of which numbered part is being checked in order to avoid any possible knowledge bias. However, the person conducting the study should know which numbered part is being checked and record the data accordingly, that is, Appraiser A, Part 1, first trial; Appraiser B, Part 4, second trial; etc.
2) In reading the equipment, measurement values should be recorded to the practical limit of the instrument's discrimination. Mechanical devices must be read and recorded to the smallest unit of scale discrimination. For electronic readouts, the measurement plan must establish a common policy for recording the rightmost significant digit of the display. Analog devices should be recorded to one-half the smallest graduation or the limit of sensitivity and resolution. For example, if the smallest scale graduation of an analog device is 0.0001", then the measurement results should be recorded to 0.00005".
3) The study should be managed and observed by a person who understands the importance of conducting a reliable study [d].

Section 12. Examples of Non-Destructive Gauge R&R Analysis using Minitab
In these examples, a gage R&R study will be performed on two data sets: one in which the measurement system variation contributes little to the overall observed variation (Example 1A: GAGEAIAG), and one in which the measurement system variation contributes a lot to the overall observed variation (Example 1B: GAGE2). For comparison, one can analyze the data using both the ANOVA method (below) and the Xbar and R method. 
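As a rough sketch of the "Xbar and R" (average-and-range) calculation mentioned above, here is a minimal Python version. It assumes the AIAG-style K constants (reciprocals of the d2* bias-correction factors) for the study sizes used below; the readings themselves are made up for illustration, and this is not Minitab's ANOVA method:

```python
import math

# AIAG-style average-and-range constants (K = 1/d2*), keyed by count;
# only the study sizes used in this sketch are included.
K1 = {2: 0.8862, 3: 0.5908}   # by number of trials
K2 = {2: 0.7071, 3: 0.5231}   # by number of operators
K3 = {5: 0.4030, 10: 0.3146}  # by number of parts

def gage_rr_avg_range(data):
    """data[operator][part] -> list of trial readings.
    Returns (EV, AV, GRR, PV, %GRR of total variation)."""
    ops = list(data)
    parts = list(data[ops[0]])
    trials = len(data[ops[0]][parts[0]])
    # Repeatability (EV): average within-cell range times K1
    rbar = sum(max(data[o][p]) - min(data[o][p])
               for o in ops for p in parts) / (len(ops) * len(parts))
    ev = rbar * K1[trials]
    # Reproducibility (AV): spread of operator averages with gage error removed
    op_avg = [sum(sum(data[o][p]) for p in parts) / (len(parts) * trials)
              for o in ops]
    xdiff = max(op_avg) - min(op_avg)
    av = math.sqrt(max((xdiff * K2[len(ops)]) ** 2
                       - ev ** 2 / (len(parts) * trials), 0.0))
    grr = math.hypot(ev, av)
    # Part-to-part variation (PV): range of part averages times K3
    part_avg = [sum(sum(data[o][p]) for o in ops) / (len(ops) * trials)
                for p in parts]
    pv = (max(part_avg) - min(part_avg)) * K3[len(parts)]
    tv = math.hypot(grr, pv)
    return ev, av, grr, pv, 100 * grr / tv

# Made-up readings: 2 operators x 5 parts x 2 trials
data = {
    "A": {1: [2.0, 2.1], 2: [4.0, 3.9], 3: [6.1, 6.0], 4: [8.0, 8.1], 5: [10.0, 9.9]},
    "B": {1: [2.1, 2.0], 2: [4.1, 4.0], 3: [6.0, 6.1], 4: [8.1, 8.0], 5: [10.1, 10.0]},
}
ev, av, grr, pv, pct_grr = gage_rr_avg_range(data)  # pct_grr well under 10%
```

The subtraction of the gage-error term inside the AV formula is the standard adjustment that keeps repeatability from being double-counted in reproducibility; the `max(..., 0.0)` guard handles the case where that difference comes out negative, in which case AV is taken as zero.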
One can also look at the same data plotted on a Gage Run Chart. The GAGEAIAG (Example 1A) data was taken from the Measurement Systems Analysis Reference Manual, 3rd edition [4]. Ten parts were selected that represent the expected range of the process variation. Three operators measured the ten parts, three times per part, in random order. For the GAGE2 (Example 1B) data, three parts were selected that represent the expected range of the process variation. Three operators measured the three parts, three times per part, in random order.
Gage R&R Study Worksheet for Example 1A (GAGEAIAG): Parts: 10; Operators: 3; Replicates: 3; Total runs: 90.
Gage R&R Study Worksheet for Example 1B (GAGE2): Parts: 3; Operators: 3; Replicates: 3; Total runs: 27.
Example 1A (GAGEAIAG): Using the ANOVA method with the GAGEAIAG data in the Minitab statistical software [f].
a. Open the worksheet GAGEAIAG.MTW.
b. Choose Stat > Quality Tools > Gage Study > Gage R&R Study (Crossed). Refer to Figure 15 below.
c. In Part numbers, enter Part. Refer to Figure 16 below.
d. In Operators, enter Operator. Refer to Figure 16 below.
e. In Measurement data, enter Measurement. Refer to Figure 16 below.
f. Under Method of Analysis, choose ANOVA. Refer to Figure 16 below.
g. Click Options. Under Process tolerance, choose Upper spec - Lower spec and enter 8. Refer to Figure 16 below.
h. Click OK in each dialog box. Refer to Figure 16 below.
Figure 15. Choosing Stat > Quality Tools > Gage Study > Gage R&R Study (Crossed).
Figure 16. Gauge R&R Study (Crossed) Dialog Boxes.
Table 2. Example 1A Minitab Numerical Data Analysis (Session Window Output).
Table 2 above can be analyzed as follows:
Part: H0: The part means are not different. H1: The part means are different. The p-value for the source of variation Part is 0.000 (which is lower than the alpha significance level of 0.05). Hence, the null hypothesis (H0) is rejected: the parts chosen for this study are statistically different. 
Operator: H0: The operator means are not different. H1: The operator means are different. The p-value for Operator is 0.000 (which is lower than the alpha significance level of 0.05). Hence, the null hypothesis (H0) is rejected: the operators are measuring differently.
Part*Operator: H0: The Part*Operator interaction means are not different. H1: The Part*Operator interaction means are different. The p-value for the Part*Operator interaction is 0.974 (which is higher than the alpha significance level of 0.05). Hence, there is no evidence to reject the null hypothesis (H0): there is no evidence that the Part*Operator interaction means in this study are statistically different. Note: When the p-value for Operator by Part is > 0.05, Minitab omits this term from the full model. Notice there is an ANOVA table without the interaction (Table 3 below) because the p-value for the Operator by Part interaction is 0.974.
Table 3. Example 1A Minitab Numerical Data Analysis (Session Window Output, continued from Table 2 above).
Table 3 above can be analyzed as follows:
Operator*Part interaction: When the p-value for Operator by Part is > 0.05, Minitab fits the model without the interaction and uses the reduced model to define the Gage R&R statistics.
Part: H0: The part means are not different. H1: The part means are different. The p-value for the source of variation Part is 0.000 (which is lower than the alpha significance level of 0.05). Hence, the null hypothesis (H0) is rejected: the parts chosen for this study are statistically different.
Operator: H0: The operator means are not different. H1: The operator means are different. The p-value for Operator is 0.000 (which is lower than the alpha significance level of 0.05). Hence, the null hypothesis (H0) is rejected: the operators are measuring differently.
Table 4. Example 1A Minitab Numerical Data Analysis (Session Window Output, continued from Table 3 above). 
Table 4 above can be analyzed as follows: The %Contribution from Part-To-Part (92.24) is larger than that of Total Gage R&R (7.76). This shows that much of the variation is due to differences between parts.
Table 5. Example 1A Minitab Numerical Data Analysis (Session Window Output, continued from Table 4 above).
Table 5 above can be analyzed as follows:
%Study Var column: The Total Gage R&R equals 27.86% of the study variation. While this is within the marginally acceptable band, there is room for improvement.
%Tolerance column: The Total P/T ratio is 22.68%, which can be marginally acceptable depending on the application; there is room for improvement.
A Number of Distinct Categories equal to 4 indicates that the measuring system is not adequate for estimating process parameters and indices, since it only provides coarse estimates.
Figure 17. Example 1A Minitab Gauge R&R (ANOVA) graphical analysis.
Figure 17 above shows the graphical analysis for Example 1A. Each of the 6 charts in the report shown in Figure 17 is explained below.
Figure 18. Example 1A Gauge R&R Components of Variation.
In Figure 18 (above), what you are looking for is the following [b]: You want the part-to-part bars to be much taller than the others, because that means most of the variation comes from true differences in the items being measured. If the Gage R&R, Repeat, and Reprod bars are tall, the measurement system is unreliable (Repeat + Reprod = Gage R&R). Focus on the %Study Var bars: this is the amount of variation (expressed as a percentage) attributed to measurement error. Specifically, this is the calculation that divides the standard deviation of the gage component by the total observed standard deviation and then multiplies by 100. Common standards (such as AIAG) for %Study Var are: Less than 10% is good; it means little variation is due to your measurement system, and most of it is true variation. 
10% to 30% may be acceptable depending on the application (30% is the maximum acceptable for any process improvement effort). More than 30% is unacceptable (your measurement system is too unpredictable).
Example 1A Explanation: In Figure 18, the %Contribution from Part-To-Part is larger than that of Total Gage R&R; this shows that much of the variation is due to differences between parts.
Figure 19. Example 1A Gauge R Chart by Operators.
Repeatability is checked using the Range chart (shown in Figure 19 above) of the Gage R&R control charts. This chart shows the variation in the measurements made by each operator on each part. In Figure 19 (above), what you are looking for is the following [b]: Is the range chart in control? Any points that fall above the UCL need to be investigated. If the difference between the largest and smallest value of the same part does not exceed the UCL, then that gage and operator may be considered repeatable.
Example 1A Explanation: In Figure 19, Operator B measures parts inconsistently.
Figure 20. Example 1A Xbar Chart by Operators.
Reproducibility is graphically represented by looking for significant differences between the patterns of data generated by each operator measuring the same items (refer to Figure 20 above). In Figure 20 (above), what you are looking for is the following [b]: This is one instance where you want the points to consistently go outside the upper and lower control limits (LCL, UCL). The control limits are determined by gage variance, and these plots should show that gage variance is much smaller than the variability between the parts.
Example 1A Explanation: In Figure 20, most of the points in the Xbar chart are outside the control limits, indicating that the variation is mainly due to differences between parts.
Figure 21. Example 1A Xbar Chart by Operators, Desired versus Unacceptable [b].
In the Xbar Chart by Operators shown in Figure 20 (above), also compare the patterns between operators. 
If they are not similar, there may be significant operator/part or operator/equipment interactions (meaning different operators are using the equipment differently or measuring parts differently). Note: If the samples do not represent the total variability of the process, the gage (repeatability) variance may be larger than the part variance and invalidate the results (refer to Figure 21 above for a graphical depiction of this note) [b].
Figure 22. Example 1A Response by Parts.
Figure 22 (above) shows the data for the parts for all operators plotted together. It displays the raw data and highlights the average of those measurements. This chart shows the measurements (taken by three different operators) for each of the 10 parts. In Figure 22 (above), what you are looking for is the following [b]: The chart should show different central tendency values for different parts (between parts: from part to part), representing the process variation. The chart should also show a consistent range of variation (smallest to largest dimensions) for the same part (within the subgroup). You want the range of readings for each part to be consistent with the range for the other parts. If the spread between the biggest and smallest values varies a lot between different sets of points, that may mean that the parts chosen for the calibration were not truly representative of the variation within the process. Note: If a part shows a large spread, it may be a poor candidate for the test because the feature may not be clear or it may be difficult to measure that characteristic the same way every time.
Example 1A Explanation: In Figure 22, there are large differences between parts, as shown by the non-level line, and the ranges within a subgroup are not too different.
Figure 23. Example 1A Response by Operators. 
Figure 23 (above) groups the data by who was collecting it ("running the process") rather than by part, so it will help you identify operator issues (such as inconsistent use of operational definitions or of measuring devices). In Figure 23 (above), what you are looking for is the following [b]: The line connecting the averages (of all parts measured by an operator) should be flat or almost flat. Any significant slope indicates that at least one operator has a bias to measure larger or smaller than the other operators.
Example 1A Explanation: In Figure 23, the differences between operators are small compared to the differences between parts, but they are significant (p-value = 0.00). Operator C appears to measure slightly lower than the others.
Figure 24. Example 1A Parts*Operators Interaction.
Figure 24 (above) shows the data for each operator involved in the study. It is the best chart for exposing operator-and-part interaction (meaning differences in how different people measure different parts). In Figure 24 (above), what you are looking for is the following [b]: If the lines connecting the plotted averages diverge significantly, then there is a relationship between the operator making the measurements and the part being measured. This is not good and needs to be investigated.
Example 1A Explanation: Figure 24 is a visualization of the p-value for Operators*Parts (0.974 in this case), indicating no significant interaction between each Part and Operator.
Example 1B (GAGE2): Use the ANOVA method with the GAGE2 data.
a. Open the file GAGE2.MTW.
b. Choose Stat > Quality Tools > Gage Study > Gage R&R Study (Crossed). Refer to Figure 15 above.
c. In Part numbers, enter Part. Refer to Figure 16 above.
d. In Operators, enter Operator. Refer to Figure 16 above.
e. In Measurement data, enter Response. Refer to Figure 16 above.
f. Under Method of Analysis, choose ANOVA. Refer to Figure 16 above.
g. Click OK. Refer to Figure 16 above.
Table 6. 
Example 1B Minitab Numerical Data Analysis (Session Window Output).
Table 6 above can be analyzed as follows:
Part: H0: The part means are not different. H1: The part means are different. The p-value for the source of variation Part is 0.166 (which is higher than the alpha significance level of 0.05). Hence, there is no evidence to reject the null hypothesis (H0): there is no evidence that the parts chosen for this study are statistically different.
Operator: H0: The operator means are not different. H1: The operator means are different. The p-value for Operator is 0.962 (which is higher than the alpha significance level of 0.05). Hence, there is no evidence to reject the null hypothesis (H0): there is no evidence that the operators are measuring differently.
Part*Operator: H0: The Part*Operator interaction means are not different. H1: The Part*Operator interaction means are different. The p-value for the Part*Operator interaction is 0.484 (which is higher than the alpha significance level of 0.05). Hence, there is no evidence to reject the null hypothesis (H0): there is no evidence that the Part*Operator interaction means in this study are statistically different. Note: When the p-value for Operator by Part is > 0.05, Minitab omits this term from the full model. Notice there is an ANOVA table without the interaction (Table 7 below) because the p-value for the Operator by Part interaction is 0.484.
Table 7. Example 1B Minitab Numerical Data Analysis (Session Window Output, continued from Table 6 above).
Table 7 above can be analyzed as follows:
Operator*Part interaction: When the p-value for Operator by Part is > 0.05, Minitab fits the model without the interaction and uses the reduced model to define the Gage R&R statistics.
Operator: H0: The operator means are not different. H1: The operator means are different. The p-value for Operator is 0.965 (which is higher than the alpha significance level of 0.05). 
Hence, there is no evidence to reject the null hypothesis (H0): there is no evidence that the operators are measuring differently.

Part: H0: The Part means are not different. H1: The Part means are different. The p-value for the source of variation Part is 0.092 (which is higher than the alpha level of significance = 0.05). Hence, there is no evidence to reject the null hypothesis (H0): there is no evidence that the parts chosen for this study are statistically different.

Table 8. Example 1B Minitab Numerical Data Analysis (Session Window Output, continued from Table 7 above). Table 8 above can be analyzed as follows: The %Contribution column: the percent contribution from Total Gage R&R (84.36) is larger than that of Part-to-Part (15.64). Thus, most of the variation arises from the measuring system; very little is due to differences between parts.

Table 9. Example 1B Minitab Numerical Data Analysis (Session Window Output, continued from Table 8 above). Table 9 above can be analyzed as follows: The %Study Var column: Total Gage R&R equals 91.85% of the study variation. The measurement system is unacceptable and must be improved. A Number of Distinct Categories equal to 1 indicates that the measurement system is poor; it cannot distinguish differences between parts.

Figure 25. Example 1B Minitab Gauge R&R (ANOVA) graphical analysis. Figure 25 (above) Explanations: In the Components of Variation chart (upper left corner), the percent contribution from Total Gage R&R is larger than that of Part-to-Part; this indicates that most of the variation is due to the measurement system (primarily repeatability), and little is due to differences between parts. In the By Part chart (upper right corner), there is little difference between parts, as shown by the nearly level line. The ranges between the subgroups show some differences. 
The R Chart by Operator (middle of the left column) shows no out-of-control points (no points fall above the UCL). This R Chart by Operator shows no considerable differences between operators. In the By Operator chart (middle of the right column), there are no differences between operators, as shown by the level line. In the Xbar Chart by Operator (lower left corner), most of the points in the Xbar and R charts are inside the control limits, indicating that the observed variation is mainly due to the measurement system. The Operator*Part Interaction chart (lower right corner) is a visualization of the p-value for Oper*Part (0.484 in this case), indicating that the differences between each operator/part combination are insignificant compared to the total amount of variation. Section 13 (Gauge R&R).

References

Note: If you wish to purchase any of the books listed below in this "References" section, please go to the "ECommerce" tab feature on this phone/tablet application for the preferred booksellers and click directly on the links provided on the ECommerce tab feature.

[a] Sheehy, Paul; Navarro, Daniel; Silvers, Robert; Keyes, Victoria. "The Black Belt Memory Jogger: A Pocket Guide for Six Sigma Success". Published by GOAL/QPC and Six Sigma Academy, January 2002.
[b] George, Michael; Maxey, John; Rowlands, David; Price, Mark. "The Lean Six Sigma Pocket Toolbook: A Quick Reference Guide to Nearly 100 Tools for Improving Quality and Speed". Published by GOAL/QPC and Six Sigma Academy, October 2004.
[c] Montgomery, Douglas C. "Statistical Quality Control", 7th Edition. Published by John Wiley & Sons, 2012.
[d] Down, Michael; Czubak, Frederick; Gruska, Gregory; Stahley, Steve; Benham, David. "Measurement Systems Analysis Reference Manual", Fourth Edition. Published by Chrysler Group LLC, Ford Motor Company, General Motors Corporation, June 2010.
[e] Barrentine, Larry B. "Concepts for R&R Studies", Second Edition. Published by ASQ, 2003. 
[f] Minitab® 17.1.0. © 2013 Minitab Inc.
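The Gage R&R statistics discussed in Tables 6 through 9 (variance components, %Contribution, %Study Var, and Number of Distinct Categories) can also be reproduced outside Minitab from the ANOVA mean squares. The Python sketch below uses the standard expected-mean-squares estimators for a crossed random-effects study with interaction; the mean-square values passed in are hypothetical for illustration, not the GAGE2 session output.

```python
from math import sqrt

def gage_rr(ms_part, ms_oper, ms_inter, ms_error, n_parts, n_opers, n_reps):
    """Crossed Gage R&R variance components from two-way ANOVA mean squares.

    Standard expected-mean-squares estimators for a random-effects model
    with interaction; negative estimates are truncated to zero, as Minitab does.
    """
    repeat = ms_error                                            # repeatability (equipment)
    inter = max((ms_inter - ms_error) / n_reps, 0.0)             # Operator*Part
    oper = max((ms_oper - ms_inter) / (n_parts * n_reps), 0.0)   # operator
    part = max((ms_part - ms_inter) / (n_opers * n_reps), 0.0)   # part-to-part
    reprod = oper + inter                                        # reproducibility
    grr = repeat + reprod                                        # total Gage R&R
    total = grr + part
    # %Contribution compares variances; %Study Var compares standard deviations
    pct_contrib = {"Total Gage R&R": 100 * grr / total,
                   "Part-to-Part": 100 * part / total}
    pct_study = {"Total Gage R&R": 100 * sqrt(grr / total),
                 "Part-to-Part": 100 * sqrt(part / total)}
    # Number of Distinct Categories: 1.41 * (part SD / gage SD), truncated, min 1
    ndc = max(int(sqrt(2) * sqrt(part / grr)), 1)
    return pct_contrib, pct_study, ndc

# Hypothetical mean squares for a 10 parts x 3 operators x 2 replicates study
contrib, study, ndc = gage_rr(5.0, 1.0, 0.9, 0.8, n_parts=10, n_opers=3, n_reps=2)
print(contrib, study, ndc)
```

With these hypothetical inputs the Number of Distinct Categories comes out as 1, matching the interpretation given for Table 9: such a measurement system cannot distinguish between parts.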
Dynamic Library Reads
Strategy for Preparing Software Organizations for Statistical Process Control
Abstract: Software organizations have shown increased interest in software process improvement (SPI). Nowadays, there are several frameworks that support SPI implementation. Some of them, such as CMMI (Capability Maturity Model Integration), propose implementing SPI in levels. At high maturity levels, such as CMMI levels 4 and 5, SPI involves carrying out statistical process control (SPC), which requires measures and data suitable for this context. However, measurement problems have been pointed out in the literature as one of the main obstacles to a successful implementation of SPC in SPI efforts. With this scenario in mind, we developed a strategy to help software organizations prepare themselves, regarding measurement aspects, to implement SPC. The strategy is made up of three components: a Reference Software Measurement Ontology, an Instrument for Evaluating the Suitability of a Measurement Repository for SPC, and a Body of Recommendations for Software Measurement Suitable for SPC. In this paper we present the strategy as a whole and describe each of its components. Keywords: Software measurement · Statistical process control · High maturity · Software measurement ontology. To Read Full Article Click on PDF Icon below.
A DMAIC approach for process capability improvement of an engine crankshaft manufacturing process
Abstract: The define-measure-analyze-improve-control (DMAIC) approach is a five-strata, scientific approach for reducing deviations and improving the capability levels of manufacturing processes. The present work elaborates on the DMAIC approach as applied to reducing the process variation of the stub-end-hole boring operation in the manufacture of a crankshaft. This statistical process control study starts with the selection of the critical-to-quality (CTQ) characteristic in the define stratum. The next stratum constitutes the collection of dimensional measurement data for the identified CTQ characteristic. This is followed by the analysis and improvement strata, where various quality control tools, such as the Ishikawa diagram, physical mechanism analysis, failure modes and effects analysis, and analysis of variance, are applied. Finally, process monitoring charts are deployed at the workplace for regular monitoring and control of the concerned CTQ characteristic. By adopting the DMAIC approach, the standard deviation is reduced from 0.003 to 0.002, the process potential capability index (Cp) improves from 1.29 to 2.02, and the process performance capability index (Cpk) improves from 0.32 to 1.45. Keywords: Critical-to-quality (CTQ) characteristic, Cause-and-effect diagram, Statistical process control (SPC), Process monitoring charts (PMC), Failure modes and effects analysis (FMEA), Analysis of variance (ANOVA), Physical mechanism (PM) analysis. To Read Full Article Click on PDF Icon below.
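The capability indices quoted in the abstract follow the standard definitions Cp = (USL - LSL) / 6σ and Cpk = min(USL - μ, μ - LSL) / 3σ. A minimal Python sketch, using hypothetical specification limits rather than the paper's crankshaft data:

```python
def cp_cpk(mean, std, lsl, usl):
    """Process potential (Cp) and performance (Cpk) capability indices."""
    cp = (usl - lsl) / (6 * std)                   # tolerance width vs. 6-sigma spread
    cpk = min(usl - mean, mean - lsl) / (3 * std)  # penalizes an off-center mean
    return cp, cpk

# Hypothetical bore diameter with specification 10.00 +/- 0.06 mm
print(cp_cpk(mean=10.00, std=0.010, lsl=9.94, usl=10.06))  # centered: Cp == Cpk == 2.0
print(cp_cpk(mean=10.03, std=0.010, lsl=9.94, usl=10.06))  # off-center: Cpk falls to 1.0
```

Note that because Cp scales as 1/σ, reducing σ from 0.003 to 0.002 at a fixed tolerance multiplies Cp by 1.5, broadly consistent with the abstract's reported improvement from 1.29 to about 2 (the small discrepancy is plausibly rounding in the reported σ values).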
Process capability improvement of an engine connecting rod machining process
Abstract: Statistical process control (SPC) is an excellent quality assurance tool that improves the quality of manufacture and ultimately improves end-customer satisfaction. SPC uses process monitoring charts to record the key quality characteristics (KQCs) of the component being manufactured. This paper elaborates on one such KQC of the manufacturing of a connecting rod of an internal combustion engine. Here, the journey to attain process potential capability index (Cp) and process performance capability index (Cpk) values greater than 1.33 is elaborated, by identifying the root cause through quality control tools like the cause-and-effect diagram and examining each cause one after another. In this paper, the define-measure-analyze-improve-control (DMAIC) approach is employed. The define phase starts with process mapping and identifying the KQC. The next phase is the measure phase, comprising the cause-and-effect diagram and data collection of KQC measurements. Then follows the analysis phase, where the process potential and performance capability indices are calculated, followed by analysis of variance (ANOVA) of the mean values. Finally, process monitoring charts are used to control the process and prevent any deviations. By using this DMAIC approach, the standard deviation is reduced from 0.48 to 0.048, the Cp value improves from 0.12 to 1.72, and the Cpk value improves from 0.12 to 1.37. Keywords: Key quality characteristic; Cause-and-effect diagram; Statistical process control; Process monitoring charts; Failure modes and effects analysis; Analysis of variance. To Read Full Article Click on PDF Icon below.
On the Exponentiated Weibull Distribution for Modeling Wind Speed in South Western Nigeria
Abstract: One of the bases for assessing the wind energy potential of a specified region is the probability distribution of wind speed. Thus, appropriate and adequate specification of the probability distribution of wind speed becomes increasingly important. Several distributions have been proposed for describing wind speed; among the most popular is the Weibull, whose choice is due to its flexibility. An exponentiated Weibull distribution is proposed as an alternative for modeling wind speed data, with a view to comparing it with the existing Weibull distribution. Results indicate that the proposed distribution outperforms the existing Weibull distribution for modeling wind speed data in terms of the minimum Akaike information criterion (AIC) and the likelihood function. Thus, the exponentiated Weibull can be used as an alternative distribution that adequately describes wind speed and thereby provides a better representation of wind energy potential. Keywords: Wind power, Weibull, exponentiated Weibull, model selection criteria, maximum likelihood estimation. To Read Full Article Click on PDF Icon below.
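The model comparison the abstract describes (Weibull vs. exponentiated Weibull selected by minimum AIC) can be sketched with SciPy, which provides both distributions as `weibull_min` and `exponweib`. The sample below is synthetic, standing in for the Nigerian wind-speed data, which is not available here.

```python
import numpy as np
from scipy import stats

# Synthetic "wind speed" sample (m/s); a stand-in for the paper's data
speeds = stats.weibull_min.rvs(2.0, scale=4.0, size=500, random_state=1)

def aic(dist, data, n_free, **fit_kw):
    """AIC = 2k - 2*log-likelihood at the maximum-likelihood fit."""
    params = dist.fit(data, **fit_kw)
    loglik = np.sum(dist.logpdf(data, *params))
    return 2 * n_free - 2 * loglik

# Location is fixed at 0 in both fits, so 2 vs. 3 free parameters
aic_weibull = aic(stats.weibull_min, speeds, n_free=2, floc=0)
aic_exp_weibull = aic(stats.exponweib, speeds, n_free=3, floc=0)
print(aic_weibull, aic_exp_weibull)  # the smaller AIC is preferred
```

Since the exponentiated Weibull nests the ordinary Weibull (its extra shape parameter equal to 1 recovers it), AIC's complexity penalty is what decides between them when the fits are close.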
Characterizations of Continuous Distributions by Truncated Moment
Abstract: A probability distribution can be characterized through various methods. In this paper, some new characterizations of continuous distributions by truncated moments are established. We consider the standard normal, Student's t, exponentiated exponential, power function, Pareto, and Weibull distributions and characterize them by truncated moments. Keywords: Characterization, exponentiated exponential distribution, power function distribution, standard normal distribution, Student's t distribution, Pareto distribution, truncated moment. To Read Full Article Click on PDF Icon below.
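As a small numerical illustration of a truncated moment for one of the distributions the abstract mentions (the standard normal; this is a well-known identity, not the paper's own derivation): the mean of a standard normal truncated below at t satisfies E[X | X > t] = φ(t) / (1 - Φ(t)), the inverse Mills ratio. The sketch below checks this by direct integration.

```python
import numpy as np
from scipy import stats
from scipy.integrate import quad

def truncated_mean_numeric(t):
    """E[X | X > t] for X ~ N(0, 1), computed by numerical integration."""
    num, _ = quad(lambda x: x * stats.norm.pdf(x), t, np.inf)
    return num / stats.norm.sf(t)

def truncated_mean_closed(t):
    """Closed form: phi(t) / (1 - Phi(t)), the inverse Mills ratio."""
    return stats.norm.pdf(t) / stats.norm.sf(t)

# The two agree across several truncation points
for t in (-1.0, 0.0, 1.5):
    assert abs(truncated_mean_numeric(t) - truncated_mean_closed(t)) < 1e-6
```

At t = 0 this gives 2φ(0) = sqrt(2/π) ≈ 0.798, the mean of the half-normal distribution, a quick sanity check on the identity.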