
Assignable Cause

Last updated by Jeff Hajek on December 22, 2020

An assignable cause is a type of variation in which a specific activity or event can be linked to inconsistency in a system. In effect, it is a special cause that has been identified.

As a refresher, common cause variation is the natural fluctuation within a system. It comes from the inherent randomness in the world. The impact of this form of variation can be predicted by statistical means. Special cause variation, on the other hand, falls outside of statistical expectations. It shows up as outliers in the data.

Lean Terms Discussion

Variation is the bane of continuous improvement. It decreases productivity and increases lead time. It makes it harder to manage processes.

While we can do something about common cause variation, typically there is far more bang for the buck in attacking special causes. Reducing common cause variation, for example, might require replacing a machine to eliminate a few seconds of variation in cutting time. A special cause of variation on the same machine might be the result of weld spatter from a previous process. The irregularities in a surface might make a part fit into a fixture incorrectly and require time-consuming rework. Common causes tend to be systemic and require large overhauls. Special causes tend to be isolated to a single process step.

The first step in removing special causes is identifying them. In effect, you turn them into assignable causes. Once a source of variation is identified, it simply becomes a matter of devoting resources to resolve the problem.


Lean Terms Leader Notes

One of the problems with continuous improvement is that the language can be murky at times. You may find that some people use special causes and assignable causes interchangeably. Special cause is a far more common term, though.

I prefer assignable cause, as it creates an important mental distinction. It implies that you…

Extended Content for this Section is available at academy.Velaction.com


Control Chart Rules and Interpretation


Control charts are a valuable tool for monitoring process performance.  However, you have to be able to interpret the control chart for it to be of any value to you.  Is communication important in your life?  Of course it is – both at work and at home.  Here is the key to effectively using control charts – the control chart is the way the process communicates with you.  Through the control chart, the process will let you know if everything is “under control” or if there is a problem present.  Potential problems include large or small shifts, upward or downward trends, points alternating up or down over time and the presence of mixtures. 

This month’s publication examines 8 rules that you can use to help you interpret what your control chart is communicating to you.  These rules help you identify when the variation on your control chart is no longer random, but forms a pattern that is described by one or more of these eight rules.   These patterns give you insights into what may be causing the “special causes” – the problem in your process. 

In this issue:

  • Variation Review
  • Control Chart Review
  • The 8 Control Chart Rules
  • Possible Causes by Pattern
  • Video: Interpreting Control Charts


We have covered variation in 11 publications over the years.  Here is an excerpt from one:


Variation comes from two sources, common and special causes. Think about how long it takes you to get to work in the morning. Maybe it takes you 30 minutes on average. Some days it may take a little longer, some days a little shorter. But as long as you are within a certain range, you are not concerned. The range may be from 25 to 35 minutes. This variation represents common cause variation — it is the variation that is always present in the process. And this type of variation is consistent and predictable. You don’t know how long it will take to get to work tomorrow, but you know that it will be between 25 and 35 minutes as long as the process remains the same.

Now, suppose you have a flat tire when driving to work. How long will it take you to get to work? Definitely longer than the 25 to 35 minutes in your “normal” variation. Maybe it takes you an hour longer. This is a special cause of variation. Something is different. Something happened that was not supposed to happen. It is not part of the normal process. Special causes are not predictable and are sporadic in nature.


It has been estimated that 94% of the problems a company faces are due to common causes. Only 6% are due to special causes (which may or may not be people related). So, if you always blame problems on people, you will be wrong at least 94% of the time. It is the process, most of the time, that needs to be changed. Management must set up the system to allow the processes to be changed.

The only effective way to separate common causes from special causes of variation is through the use of control charts. A control chart monitors a process variable over time – e.g., the time to get to work. The average is calculated after you have sufficient data. The control limits are then calculated – an upper control limit (UCL) and a lower control limit (LCL). The UCL is the largest value you would expect from a process with just common causes of variation present. The LCL is the smallest value you would expect with just common causes of variation present. As long as all the points are within the limits and there are no patterns, only common causes of variation are present. The process is said to be “in control.”
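To make the calculation concrete, here is a minimal Python sketch (my addition, not from the original publication) that computes the average and control limits for an individuals (X) chart from the average moving range. The commute times are made-up numbers, and the 2.66 factor is the standard E2 constant for a moving range of size 2.

```python
# Minimal sketch: individuals (X) chart center line and control limits.
# Hypothetical commute times in minutes (not real data).
times = [31, 28, 33, 30, 27, 35, 29, 32, 26, 34, 30, 31]

average = sum(times) / len(times)

# Moving ranges: absolute differences between consecutive points.
moving_ranges = [abs(b - a) for a, b in zip(times, times[1:])]
mr_bar = sum(moving_ranges) / len(moving_ranges)

# 2.66 is the standard E2 factor for a moving range of size 2.
ucl = average + 2.66 * mr_bar
lcl = average - 2.66 * mr_bar

print(f"Average: {average:.1f}, UCL: {ucl:.1f}, LCL: {lcl:.1f}")
```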


Figure 1: Control Chart Example


There is one point beyond the UCL in Figure 1. This is the first pattern that signifies an out-of-control point – a special cause of variation. One possible cause is the flat tire. There are many other possible causes as well – a car breakdown, bad weather, etc.


Some of these patterns depend on “zones” in a control chart. To see if these patterns exist, a control chart is divided into three equal zones above and below the average. This is shown in Figure 2.

Figure 2: Control Chart Divided into Zones


Zone C is the zone closest to the average.  It represents the area from the average to one sigma above the average.  There is a corresponding zone C below the average.  Zone B is the zone from one sigma to two sigma above the average.  Again, there is a corresponding Zone B below the average. Zone A is the zone from two sigma to three sigma above the average – as well as below the average.   
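As an illustration (my own sketch, not part of the original article), the zone boundaries can be expressed as distance from the average in sigma units; `average` and `sigma` here are assumed inputs from your control chart calculations.

```python
# Sketch: label a point with its zone based on its distance from the average.
def zone(value, average, sigma):
    distance = abs(value - average) / sigma
    if distance < 1:
        return "C"       # within one sigma of the average
    if distance < 2:
        return "B"       # between one and two sigma
    if distance < 3:
        return "A"       # between two and three sigma
    return "beyond"      # outside the three-sigma control limits

# Example with an assumed average of 30 and sigma of 2:
for x in [30.5, 33.1, 35.2, 36.4]:
    print(x, "-> zone", zone(x, average=30, sigma=2))
```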

If a process is in statistical control, most of the points will be near the average, some will be closer to the control limits and no points will be beyond the control limits.  The 8 control chart rules listed in Table 1 give you indications that there are special causes of variation present.    Again, these represent patterns.

Table 1: Control Chart Rules

Rule 1 – Beyond Limits: one or more points beyond the control limits
Rule 2 – Zone A: 2 out of 3 consecutive points in Zone A or beyond
Rule 3 – Zone B: 4 out of 5 consecutive points in Zone B or beyond
Rule 4 – Zone C: 7 or more consecutive points on one side of the average
Rule 5 – Trend: 7 consecutive points trending up or trending down
Rule 6 – Mixture: 8 consecutive points with no points in Zone C
Rule 7 – Stratification: 15 consecutive points in Zone C
Rule 8 – Over-control: 14 consecutive points alternating up and down

Our SPC for Excel software handles all these out of control tests.
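For readers who want to experiment outside the software, here is a hedged Python sketch (my own, covering only three of the eight rules as an approximation) that flags points beyond the limits, a run on one side of the average, and points alternating up and down. The run lengths are parameters, since, as noted below, sources differ on the exact numbers.

```python
# Sketch: three of the eight rules, with configurable run lengths.
def rule1_beyond_limits(points, ucl, lcl):
    """Rule 1: indexes of points beyond either control limit."""
    return [i for i, x in enumerate(points) if x > ucl or x < lcl]

def rule4_run_one_side(points, average, run_length=7):
    """Rule 4 (zone C test): indexes ending a run on one side of the average."""
    signals, run, side = [], 0, 0
    for i, x in enumerate(points):
        s = 1 if x > average else -1 if x < average else 0
        run = run + 1 if (s != 0 and s == side) else (1 if s != 0 else 0)
        side = s
        if run >= run_length:
            signals.append(i)
    return signals

def rule8_alternating(points, run_length=14):
    """Rule 8 (over-control): indexes ending a run of alternating points."""
    signals, run, prev = [], 1, 0
    for i in range(1, len(points)):
        d = 1 if points[i] > points[i - 1] else -1 if points[i] < points[i - 1] else 0
        run = run + 1 if (d != 0 and d == -prev) else (2 if d != 0 else 1)
        prev = d
        if run >= run_length:
            signals.append(i)
    return signals
```

The remaining rules follow the same pattern; for production use, rely on vetted SPC software rather than a sketch like this.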

It should be noted that the numbers can be different depending upon the source.  For example, some sources will use 8 consecutive points on one side of the average (Zone C test) instead of the 7 shown in the table above.  But they are all very similar.  Figures 3 through 5 illustrate the patterns.  Figure 3 shows the patterns for Rules 1 to 4. 

Figure 3: Zone Tests (Rules 1 to 4)


Rules 1 (points beyond the control limits) and 2 (zone A test) represent sudden, large shifts from the average.  These are often fleeting – a one-time occurrence of a special cause – like the flat tire when driving to work.   

Rules 3 (zone B) and 4 (Zone C) represent smaller shifts that are maintained over time.  A change in raw material could cause these smaller shifts.  The key is that the shifts are maintained over time – at least over a longer time frame than Rules 1 and 2. 

Figure 4 shows Rules 5 and 6.  Rule 5 (trending up or trending down) represents a process that is trending in one direction.  For example, tool wearing could cause this type of trend.  Rule 6 (mixture) occurs when you have more than one process present and are sampling each process by itself.   Hence the mixture term.   For example, you might be taking data from four different shifts.  Shifts 1 and 2 operate at a different average than shifts 3 and 4.  The control chart could have shifts 1 and 2 in zone B or beyond above the average and shifts 3 and 4 in zone B below the average – with nothing in zone C.

Figure 4: Rules 5 and 6


Figure 5 shows Rules 7 and 8. Rule 7 (stratification) also occurs when you have multiple processes, but you are including all the processes in a subgroup. This can lead to the data “hugging” the average – all the points in zone C with no points beyond zone C. Rule 8 (over-control) is often due to overadjustment. This is often called “tampering” with the process. Adjusting a process that is in statistical control actually increases the process variation. For example, an operator is trying to hit a certain value. If the result is above that value, the operator makes an adjustment to lower the value. If the result is below that value, the operator makes an adjustment to raise the value. This results in a saw-tooth pattern.

Figure 5: Rules 7 and 8


Rules 6 and 7, in particular, often occur because of the way the data are subgrouped.  Rational subgrouping is an important part of setting up an effective control chart.   A previous publication demonstrates how mixture and stratification can occur based on the subgrouping selected.

These rules represent different situations – patterns – on a control chart. It should be noted that not all rules apply to all types of control charts. Table 2 summarizes the rules by the type of pattern.

Table 2: Rules by Type of Pattern

It is difficult to list possible causes for each pattern because special causes (just like common causes) are very dependent on the type of process. Manufacturing processes have different issues than service processes. Different types of control charts look at different sources of variation. Still, it is helpful to show some possible causes by pattern description. Table 3 attempts to do this based on the type of pattern.

Table 3: Possible Causes by Pattern

Table 3 provides some guidance on what you should be thinking about as you try to find the reasons for special causes. For example, if Rule 1 or Rule 2 is violated, you should be asking, “What in this process could cause a large shift from the average?” Or if Rule 6 occurs, you should be asking, “What in this process could cause there to be more than one process present?” These types of questions can help guide brainstorming sessions to find the reasons for the special cause of variation. The type of pattern can guide your analysis of the out of control point.

This publication took a look at the 8 control chart rules for identifying the presence of a special cause of variation. The rules describe certain patterns of variation that will give you insights on where to look for the special cause of variation. No one table can give you the reasons for out of control points in your process. You have to use your own knowledge (and that of those closest to the process) to discover the reason. Our SPC for Excel software handles all these out of control tests.

Video: Interpreting Control Charts


Thanks so much for reading our publication. We hope you find it informative and useful. Happy charting and may the data always support your position.

Dr. Bill McNeese BPI Consulting, LLC



Comments (64)


Hi!  Your page has been significantly helpful.  Can you tell me how these rules would apply for an individuals-moving range chart?  Can these zones still be created?  Thanks in advance!


The zone tests can be applied to the individuals chart, not the moving range chart. I probably need to do an article on which rules apply to which charts. But all of them apply to the individuals chart. On the moving range chart, use points beyond the limits, a run below or above the average (twice as long as on the individuals chart, since each data point is reused in the moving range), over-control, and seven points trending up or down.

Hi Bill – useful stuff. However, I'm struggling to understand which Control Chart rules I should apply. For example, do I use Westgard, Nelson, WECO etc. – none of which seem to be the rules you've listed above. Are you able to shed any light on which rules to use on an individuals chart? Thanks.

Of course, points beyond the control limits always apply. With the X chart for individuals, you apply all the rules listed in the article. However, with the moving range chart, you only use points beyond the control limits, and long runs above or below the average range or trending up or down. This is because you are reusing the data. I will do the next publication on which tests apply to which charts. Software, like SPC for Excel, will automatically select the appropriate tests for the control chart, although you can change those options.

Sorry…I suppose what I was really trying to say is that there are slight variations to the available sets of rules. As I’m only just entering the world of SPC charts, my understanding is that WECO is the original set of rules (pretty much a cornerstone for all rule sets) and since then, newer iterations such as Nelson and Westgard have been developed. Therefore, I’m confused on which set of rules I should use. In Rule 5 above, you state the need to observe at least 7 consecutive points whereas Nelson rules (rule 3) state the requirement to observe at least 6. Is there a “correct” choice, or does it come down to how long you wish to observe a trend for before determining it to be out of control? Thanks.

Yes, there are slight variations in the rules. Some have 7, others 6, others 8. There is not a correct choice as such. You are correct – it is how “sure” you want to be that there is a signal. Suppose we were tossing a coin and you paid me a dollar each time it was heads and I paid you a dollar each time it was tails. If I got six heads in a row, you would start wondering about the coin. After 7 times in a row, you would wonder even more. By 8 times, I am sure you would think the coin is not a true coin. For example, consider a run above the average. What is the probability of getting 6 points in a row above the average? It is 1.56% (simply 0.5^6). For 7 points, it is 0.78%. For 8 points, it is 0.39%. It is really your choice. The probability of getting a point beyond the control limits for a true normal distribution (which doesn't exist in practice) is 0.27%. So, picking something around there for the other tests is a good way to approach this – 7 or 8 points looks good to me.

Hi Bill, thanks for your page. It is indeed very useful. Tell me, when is it possible for a control chart which is in control to be actually out of control? Regards, John

Thanks, John. Not sure I fully understand your question. There is no way to assign a probability to a point being a special cause or not. A point beyond the control limits could just be common cause variation. And just because a point is within the control limits does not mean there is not a special cause of variation present. The rules simply give a way of reacting to certain conditions that most likely are out of control points.

Your explanation in this article is really quite good, with one exception. Nowhere in the article do you mention that the rules you are applying are intended only for use with averages, usually of n = 2 to 5 individual points. This is vitally important. Grouped means (histograms) are always normal distributions, whereas grouped individuals are totally unpredictable. They can result in a wide variety of distributions, usually not normally distributed. This makes control charting of individuals very risky, because the distribution is not normal most of the time. The Shewhart control chart was derived solely for averages, because they are always normal distributions and thereby predictable.

Hi! I work with a pharmaceutical compressing process to create tablets, and I have some doubts about our control chart. From time to time we take tablet samples and we analyze some parameters like weight. The problem is: my samples have 30 tablets each, and I can't take the individual tablets at the exact moment they leave the machine. So, how can I analyze events like shifts if I don't have the time precision for each tablet? I'm from Brazil and we don't have enough information here about the topic. I really could use some help. Could you contact me? Kind regards!

Thanks for the great explanation. Would you help calculate the probability that an in-control process will yield the “Simplified” Runs Rule violation of having 2 consecutive points at 1.5 sigma or beyond?

If you have Excel, you can use the NORMSDIST(z) function (or NORM.S.DIST for Excel 2010 and later) to determine this. For example, the probability of getting a point below -1.5 sigma is NORMSDIST(-1.5) = 0.0668, or about 6.68%. The probability of getting two points beyond 1.5 sigma on the same side of the average is 0.0668^2, or 0.0045.
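For anyone working outside Excel, an equivalent calculation in Python (my addition, assuming scipy is available):

```python
# Same calculation with scipy: one point at or beyond 1.5 sigma on one side,
# then two in a row on the same side of the average.
from scipy.stats import norm

p_one = norm.cdf(-1.5)   # ~0.0668
p_two = p_one ** 2       # ~0.0045
print(f"{p_one:.4f}, {p_two:.4f}")
```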

Thanks for this article, it's really helpful. I wonder, is there a standard to define when a process is back in control? How many points “under control” would we need to observe after a special cause event to think it was back in control? I am trying to develop a simple “in control? Yes/No” indicator to sit alongside our SPC charts. I don't want to be continually alerting that there was a single blip 8 months ago, for example. Any advice? Thanks

It is back in control, in my opinion, if the next point is back within the control limits – if it is a fleeting special cause of variation that comes and goes. But suppose that out of control condition stays around. You have a point above the upper control limit. The next point is back within the limits, but it stays above the average. If it stays above the average for a run and you can't find out why, then you have to recalculate the control limits or adjust the process to bring it back into control. This link has more details:

/knowledge/control-chart-basics/when-calculate-lock-and-recalculate-control-limits

Dear Bill, thank you for the nice and clear explanation. I have one question: a Shewhart control chart can still be created if the data are not normal, right? What about these interpretations – can they only be used if the data are normal, or can some of them be applied in the case of non-normality of the available data? Thank you.

Thank you. The data do not have to be normally distributed to use a control chart. Most Xbar data are symmetrical, assuming the subgroup size is large enough. The zone tests require some symmetry about the average, but basically, you should not worry about normality. You know your process, and you will most likely know if a control chart is signaling a special cause.

The method of calculation and the underlying statistical basis for establishing the UCL and LCL are not clear in your article. What are the calculations, and on what are they based? Thanks.

Hello, the calculations vary based on the type of control chart. Please see this link for the various variable control charts: /spc-for-excel-publications-category#variable This link explains in general where they come from: /knowledge/control-chart-basics/control-limits

Hi Dr. Bill. Your info is really helpful. I just started working with control charts, which is why I have some basic questions. We have a Rule #4 trend for almost 2 years. I checked everything – the samples, the technician, the data collection process, and the machine are OK. I just keep an eye on it. I have questions: 1. If we have to comment on this trend as “in control” or “out of control,” can we say “Our control chart is IN CONTROL, we need to keep an eye on it and react whenever we get an outlier”? 2. If all conditions are the same but the Rule #4 pattern persists for a long time, do we need to recalculate the control limits? What can I say to convince others to recalculate the control limits? Thx, Mike Nguyen

If you have a long run above the average (or below), it means that something has changed to cause the average to move up or down. It is “out of control.” If you can't find what happened – and it doesn't basically change the product – then you can recalculate the control limits starting from when the shift occurred, and use those for the future.

Texts over the years have allowed, e.g., 1 in 25 or 2 in ~50 points outside control limits without stating “out of control.” In your experience with data or reference texts, have you encountered any rule regarding the percentage of points beyond limits? At times I deal with more than 50 or 100 control chart points. Thanks…

A rough rule I have used over the years is that a process is pretty stable if less than 5% of the points are out of control. That is close to what you reference.

Is there a hierarchy for these rules? In other words, how would they be ranked in order of statistical significance?

You can theoretically put a statistical probability on each rule assuming a normal distribution – they are all about the same probability. In practical terms, start with points beyond the control limits, then add the test for zone C later, and then zones A and B after that. This approach seems to work well.

Dr. McNeese: My background is in electromagnetic fields and measurements of such for safety purposes. The issue of how often the instruments used for these measurements should be recalibrated is a common question. A presentation available on the web at http://aashtoresource.org/docs/default-source/newsletter/calibrationintervalspresentation.pdf suggests the use of control charts as one possible approach to assessing the need for recalibrating an instrument. Being totally unfamiliar with control charts, I am confused and hope you can shed some light on this matter. For instruments that are typically recalibrated once per year, how would control charts be used to suggest that either a longer or shorter recalibration interval might be acceptable? The primary objective is to determine an appropriate recalibration interval. If I follow the suggestion, it would seem that long-term experience from repetitive calibrations would be required to accumulate sufficient data before one could deduce whether shorter or longer recalibration intervals were appropriate. Thank you for your insight.

Hello Richard, you are correct that it takes experience to judge how often to check the calibration of an instrument. If it is critical to production, you should check it more frequently. For example, when I ran a QC lab years ago, we checked each critical test at the start of each shift. There are probably instruments that don't drift much, or don't drift in a way that impacts production – you might check those for calibration monthly or at longer intervals. It all depends on the situation. Your knowledge of the process is key in deciding.

If all the observations are within control limits, does that guarantee that the process variation contains only randomness?

No, it does not. It is possible there are special causes of variation present even if the point is within the control limits, just as it is possible that an out of control point could be due to common causes. The control limits provide an economical way of being fairly sure there is a special cause of variation before you spend time and money looking for it.

Hi Bill, can you help me answer this question? Thank you so much. Control charts are used to monitor and control a process. They use control limits to define the range of natural variation in a process. If a sample is taken and the plot point falls outside of the control limits, what does this signify? (a) The process is in control. (b) The process is out of control and should be checked for natural variation. (c) The process should be monitored for future results. (d) The process is out of control and should be checked for assignable variation.

Am I taking a test for you? The process is out of control and should be checked for assignable cause variation. Please read this article

Nicely presented

Hi Bill, I learned that we need to interpret control charts based on the 68-95-99 rule, and I would like to know your opinion: if there are no points outside the 3 sigma limits (all points within 3 sigma on each side), is a process still considered in control if, for example, only 1 of 3 consecutive points falls within 1 sigma of the average – meaning two of the three are in the 2 or 3 sigma zones? If we have 100 points of data, we would expect 68 of them to be within 1 sigma of the average. If this is not true, but the process has no data point outside 3 sigma, is the process considered “not in control”? Thank you.

I would not worry too much about probabilities – like 68 points out of 100 being within one sigma of the average. That is true for a perfect normal distribution, but there are no perfect normal distributions in real-life processes. If there are no points beyond the limits and none of the zone tests have been violated, then the process is in statistical control.

Hi. Why aren't these rules applicable for the CUSUM and EWMA charts?

Because the CUSUM and EWMA charts are only looking for a signal that goes beyond the limits – the values are not symmetrical.

Hi Sir, I have gone through the eight rules of control charts. But in that context, what is the ideal control chart, or is there any picture of one?

I am assuming you mean a control chart that is in control. A control chart like that will have most points near the middle, a few near the control limits, none beyond the control limits, and no patterns.

Hi there, thank you for this really great article; I have returned to it so many times since I became aware of run charts. Given that Covid had such an impact on data all over the world, would you consider this to be a “fleeting” change and control for it with process shifts, or is it “the new normal,” so you would leave the data as is? I work in the world of crime data, so shops closing and people staying at home impacted theft from shops and burglary. TIA

Well, I wish I had a crystal ball to see into the future. I think for now it is the new normal – at least until a vaccine is found and administered, or better treatment is found, and people get back to work like they were before the virus. I think it is good that you are applying control charts to crime data.

Thanks and here's hoping vaccines bring about normality again!

If I am plotting a c chart for customer complaints, with 0 being my lower control limit, and I have 4 consecutive points touching the LCL, should I assume my process is in control?

If the calculated LCL is below zero, then there really is not a lower control limit – I don't set it to 0. Yes, 4 points in a row at zero is in statistical control. You need 7 to 9 below the average for an out of control situation.

If there are 5 trending days in the same direction, and then the 6th day goes in the opposite direction, slightly less than or equal to the value of the 5th day, what should I do: exclude this point and continue counting from the 7th day as number 6 in the trend, or restart counting from day 7? Thanks in advance

Once a trend is broken, you start over with one point.

Also, if the trend line zigzags – up, then down, then further up, then down – how will I count the 7 trending points? Thanks

Not sure I understand, but if it zigzags, it is not a trend; each point must be above the last one for an upward trend.

Hi Bill, thanks for the great posting! I have several questions: Is it possible for a single point to trigger several rules at the same time? If so, how can I tell which rule was triggered first? In other words, is there any hierarchy or ranking among these eight rules?

Hello, thanks for the comment. The only hierarchy I really pay attention to is points beyond the control limits. If that occurs, you work to find out what caused it. A point beyond the limits can change the location of the average and sigma lines, making the other tests not really valid. After that, I would probably look at runs above the average (zone C) if I had to pick another one.

A u chart can be used both when we have the same sample size and when we have different sample sizes. Why do we still use a c chart when we have the same sample size?

It is easier to explain and you don't have to select an inspection unit.  But there is no need to use it since the u chart works also.

Hello Sirs and all. Can a high or increasing yield be a problem in SPC? Does it make sense to make a control chart for high yield? Example: LCL = 85%, UCL = 95%, CL = 90%, and the yield became higher than the UCL. Is this considered out of control?

Yes, you can have a control chart on high yield. If the result is above the UCL, it is out of control – but on the good side. If you can find out what happened and make it part of the process, then you have improved the process.

Are these rules meant to only be used for Xbar charts or can they be used for range and standard deviation control charts as well?

Please see table 3 in this article:   /knowledge/control-chart-basics/applying-out-of-control-tests

Hi Bill, nice article; it clarified some finer points for me. I have a question: how do you arrive at 2 out of 3 or 4 out of 5 points for the different zones to conclude that they are likely assignable causes? How do you assign probabilities of occurrence to these two cases?

You can estimate the probabilities using a normal distribution. The tests for zone A and zone B give about the same probability as a point beyond the control limits. The probability of getting a point beyond the upper control limit is 0.00135. The probability of getting one point in zone A or beyond is 0.0228. The probability of getting two points in a row in zone A or beyond is then (0.0228)(0.0228) = 0.00052. Note that this probability is smaller than the probability of getting one point beyond one of the control limits. Thus, if two points in a row fall in zone A or beyond, it is a stronger indication of an out of control situation than a point beyond the control limits. Since this probability is so small, the requirement can be loosened somewhat by saying two out of three consecutive points in zone A or beyond. The probability of getting a point somewhere else on the chart besides zone A or beyond is 1 – 0.0228 = 0.9772. The probability of getting two out of three consecutive points in zone A or beyond is then (0.0228)(0.0228)(0.9772)(3) = 0.00152 (or about one out of 655). You multiply by 3 because the point not in zone A could be the first, second, or third point. The probability of obtaining this pattern for a process that is in control is then about 0.00152, a small number. A similar approach can be used for zone B.

If I plot a control chart that has only an upper limit, is my process in control? How should I summarize it in the trend reporting?

Many Thanks for the content!!!

Really useful. I'm having difficulty, though: for nearly every chart I create (usually to be used for assurance rather than improvement), the process limits are hugely wide. It's normally small numbers being used, and there is no baseline being applied.

Please send me an example.  [email protected]

Hello, Sir. Will your center line always depend on your data mean? What about moving data – will the mean or center line also change? Thanks.

Your average only “changes” if the control chart shows that there has been a significant change in the process. Then you recalculate the average and limits and interpret the process against those.


Taking a course on SPC, and they didn’t explain why stratification is undesirable. Having read this page, now I understand why. Thanks!



Encyclopedia of Production and Manufacturing Management, p. 50

ASSIGNABLE CAUSES OF VARIATIONS


Assignable causes of variation are present in most production processes. These causes of variability are also called special causes of variation ( Deming, 1982 ). The sources of assignable variation can usually be identified (assigned to a specific cause) leading to their elimination. Tool wear, equipment that needs adjustment, defective materials, or operator error are typical sources of assignable variation. If assignable causes are present, the process cannot operate at its best. A process that is operating in the presence of assignable causes is said to be “out of statistical control.” Walter A. Shewhart (1931) suggested that assignable causes, or local sources of trouble, must be eliminated before managerial innovations leading to improved productivity can be achieved.

Assignable causes of variability can be detected leading to their correction through the use of control charts.

See Quality: The implications of W. Edwards Deming's approach; Statistical process control; Statistical...


References

Deming, W. Edwards (1982). Out of the Crisis. Center for Advanced Engineering Study, Massachusetts Institute of Technology, Cambridge, Massachusetts.

Shewhart, W. A. (1939). Statistical Method from the Viewpoint of Quality Control. Graduate School, Department of Agriculture, Washington.


Source: (2000). ASSIGNABLE CAUSES OF VARIATIONS. In: Swamidass, P.M. (ed.) Encyclopedia of Production and Manufacturing Management. Springer, Boston, MA. https://doi.org/10.1007/1-4020-0612-8_57



Leaving Out-of-control Points Out of Control Chart Calculations Looks Hard, but It Isn't

Topics: Control Charts , Lean Six Sigma , Six Sigma , Quality Improvement

Control charts are excellent tools for looking at data points that seem unusual and for deciding whether they're worthy of investigation. If you use control charts frequently, then you're used to the idea that if certain subgroups reflect temporary abnormalities, you can leave them out when you calculate your center line and control limits. If you include points that you already know are different because of an assignable cause, you reduce the sensitivity of your control chart to other, unknown causes that you would want to investigate. Fortunately, Minitab Statistical Software makes it fast and easy to leave points out when you calculate your center line and control limits. And because Minitab’s so powerful, you have the flexibility to decide if and how the omitted points appear on your chart.

Here’s an example with some environmental data taken from the Meyer Park ozone detector in Houston, Texas . The data are the readings at midnight from January 1, 2014 to November 9, 2014. (My knowledge of ozone is too limited to properly chart these data, but they’re going to make a nice illustration. Please forgive my scientific deficiencies.) If you plot these on an individuals chart with all of the data, you get this:

The I-chart shows seven out-of-control points between May 3rd and May 17th.

Beginning on May 3, a two-week period contains 7 out of 14 days where the ozone measurements are higher than you would expect based on the amount that they normally vary. If we know the reason that these days have higher measurements, then we could exclude them from the calculation of the center line and control limits. Here are the three options for what to do with the points:

Three ways to show or hide omitted points

Like it never happened

One way to handle points that you don't want to use to calculate the center line and control limits is to act like they never happened. The points neither appear on the chart, nor are there gaps that show where omitted points were. The fastest way to do this is by brushing:

  1. On the Graph Editing toolbar, click the paintbrush.

  The paintbrush is between the arrow and the crosshairs.

  2. Click and drag a square that surrounds the 7 out-of-control points.
  3. Press CTRL + E to recall the Individuals chart dialog box.
  4. Click Data Options.
  5. Select Specify which rows to exclude.
  6. Select Brushed Rows.
  7. Click OK twice.

On the resulting chart, the upper control limit changes from 41.94 parts per billion to 40.79 parts per billion. The new limits indicate that April 11 was also a measurement that's larger than expected based on the variation typical of the rest of the data. These two facts will be true on the control chart no matter how you treat the omitted points. What's special about this chart is that there's no suggestion that any other data exists. The focus of the chart is on the new out-of-control point:

The line between the data is unbroken, even though other data exists.

Guilty by omission

A display that only shows the data used to calculate the center line and control limits might be exactly what you want, but you might also want to acknowledge that you didn't use all of the data in the data set. In this case, after step 6, you would check the box labeled Leave gaps for excluded points. The resulting gaps look like this:

Gaps in the control limits and data connect lines show where points were omitted.

In this case, the spaces are most obvious in the control limit line, but the gaps also exist in the lines that connect the data points. The chart shows that some data was left out.

Hide nothing

In many cases, not showing data that wasn't in the calculations for the center line and control limits is effective. However, we might want to show all of the points that were out-of-control in the original data. In this case, we would still brush the points, but not use the Data Options. Starting from the chart that calculated the center line and control limits from all of the data, these would be the steps:

  • Press CTRL + E to recall the Individuals chart dialog box. Arrange the dialog box so that you can see the list of brushed points.
  • Click I Chart Options .
  • Select the Estimate tab.
  • Under Omit the following subgroups when estimating parameters , enter the row numbers from the list of brushed points.

This chart still shows the new center line, control limits, and out-of-control point, but also includes the points that were omitted from the calculations.

Points not in the calculations are still on the chart.

Control charts help you to identify when some of your data are different than the rest so that you can examine the cause more closely. Developing control limits that exclude data points with an assignable cause is easy in Minitab and you also have the flexibility to decide how to display these points to convey the most important information. The only thing better than getting the best information from your data? Getting the best information from your data faster!
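Outside Minitab, the same idea can be sketched in a few lines of Python (my own illustration with made-up readings, not Minitab's algorithm): compute individuals-chart limits from all the data, then recompute them with the assignable-cause rows left out.

```python
# Sketch: recompute individuals-chart limits after excluding rows that have
# a known assignable cause. Excluding rows naively splices the series, so the
# moving ranges across the gap are approximate.
def i_chart_limits(values):
    """Center line and limits from the average moving range (E2 = 2.66)."""
    mrs = [abs(b - a) for a, b in zip(values, values[1:])]
    center = sum(values) / len(values)
    width = 2.66 * (sum(mrs) / len(mrs))
    return center - width, center, center + width

data = [38.1, 35.4, 36.2, 44.0, 45.3, 43.8, 36.0, 35.1]  # hypothetical readings
excluded_rows = {3, 4, 5}  # rows already explained by an assignable cause

kept = [x for i, x in enumerate(data) if i not in excluded_rows]
print("All data:      ", i_chart_limits(data))
print("Rows excluded: ", i_chart_limits(kept))
```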



How to Deal with Assignable Causes?

Across the many training sessions I have conducted, one question keeps raging on: “How do we deal with special causes of variation, or assignable causes?” Although a lot of trainers have found a theoretical way of answering this, in the real world – and especially in Six Sigma projects – it often remains an open question. Through this article, I try to address it from a practical standpoint.

Any data you see on any of your charts will have a cause associated with it. Try telling me that the points on your X-MR, I-MR, or Xbar-R charts dropped from the sky, and I will tell you that you are not shooting down the right ducks. Causes like the following seem possible for any data point that appears on the chart:

  • A new operator was running the process at the time.
  • The raw material was near the edge of its specification.
  • There was a long time since the last equipment maintenance.
  • The equipment maintenance was just performed prior to the processing.

The moment any of our data points appears due to one of the causes mentioned above, a slew of steps is triggered. Yeah – panic! Worse still, the actions below, which may be the result of knee-jerk reactions backed by an absolute lack of data, result in more panic!

  • Operators get retraining.
  • Incoming material specifications are tightened.
  • Maintenance schedules change.
  • New procedures are written.

My question is: do you really have to do all of this before you have determined whether the cause is a common or a special cause of variation? Most Six Sigma trainers will tell you that a control chart will help you identify special causes of variation. True – but did you know there is a way to validate your finding? (A small code sketch follows the list below.)

  • Check the distribution first. If the data is not normal, transform the data to make it reasonably normal. See if it still has extreme points. Compare both the charts before and after transformation. If they are the same, you can be more or less sure it has common causes of variation.
  • Plot all of the data, with the event on a control chart.  If the point does not exceed the control limits, it is probably a common-cause event.  Use the transformed data if used in step 1.
  • Using a probability plot, estimate the probability of receiving the extreme value.  Consider the probability plot confidence intervals to be like a confidence interval of the data by examining the vertical uncertainty in the plot at the extreme value.   If the lower confidence boundary is within the 99% range, the point may be a common-cause event.  If the lower CI bound is well outside of the 99% range, it may be a special cause.  Of course the same concept works for lower extreme values.
  • Finally, turn back the pages of history. See how frequently these causes have occurred. If they have occurred rather frequently, you may want to think they are common causes of variation. Why? Did you forget that special causes don't really repeat themselves?
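As a rough sketch of steps 1 and 2 (my own illustration: made-up data, and a simple log transform standing in for whatever transformation actually fits your data):

```python
# Sketch: transform skewed data, compute individuals-chart limits, and flag
# points that are still extreme after the transformation.
import math

data = [12.1, 9.8, 14.0, 11.2, 10.5, 13.1, 9.9, 80.0, 12.7, 11.8]  # hypothetical
logged = [math.log(x) for x in data]  # step 1: a simple normalizing transform

def i_chart_limits(values):
    """Limits from the average moving range (E2 = 2.66)."""
    mrs = [abs(b - a) for a, b in zip(values, values[1:])]
    center = sum(values) / len(values)
    width = 2.66 * (sum(mrs) / len(mrs))
    return center - width, center + width

lcl, ucl = i_chart_limits(logged)
# Step 2: points beyond the limits even after transformation are candidates
# for special-cause variation; the rest are probably common cause.
flagged = [x for x, y in zip(data, logged) if y < lcl or y > ucl]
print("Candidate special-cause points:", flagged)
```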

The four-step approach you have taken may still not be enough for you to conclude whether it is a common or a special cause of variation. Note – an RCA approach may not be good enough to reduce or eliminate common causes; RCA only works with special causes in the truest sense.

So, what does that leave us with? A simple lesson: an RCA activity has to be conducted whenever you think, even with some degree of probability, that a point could be a special cause of variation. To ascertain whether the cause genuinely was a special cause, all you have to do is look back into the history and see if it repeated. If it did, I don't think you would even be tempted to think of it as a special cause of variation.

Remember one thing – while eliminating special causes is considered goal one for most Six Sigma projects, reducing common causes is another story you will have to consider. The biggest benefit of dealing with common causes is that you can address them over the long run, provided the process remains controlled in the meantime and the common causes are not producing serious effects.

Merely by looking at a chart, I don't think I have ever been able to say whether a point has a special cause attached to it or not. Yes, this even applies to a control chart, which is by far considered the best special-cause identification tool. The best way out is a diligently applied RCA and the simple act of going back and checking whether the cause repeated or not.



When Assignable Cause Masquerades as Common Cause

Deciding whether you need CAPA or a bigger boat

Published: Wednesday, September 27, 2023 - 11:03


The difference between common (or random) cause and special (or assignable) cause variation is the foundation of statistical process control (SPC). An SPC chart prevents tampering or overadjustment by assuming that the process is in control, i.e., special or assignable causes are absent unless a point goes outside the control limits. An out-of-control signal is strong evidence that there has been a change in the process mean or variation. An out-of-control signal on an attribute control chart is similarly evidence of an increase in the defect or nonconformance rate.

The question arises, however, whether events like workplace injuries, medical mistakes, hospital-acquired infections, and so on are in fact due to random or common cause variation, even if their rates follow binomial or Poisson distributions. Addison's disease and syphilis have both been called “the Great Pretender” because their symptoms resemble those of other diseases. Special or assignable cause problems can similarly masquerade as random or common cause if their metrics fit the usual np (number nonconforming) or c (defect count) control charts.

The exponential distribution is used as a model for rare events, and the metric is the time between occurrences such as days between lost-worktime injuries. Sufficiently infrequent workplace injuries could conform to this distribution and convince the chart users that they are, in fact, random variation.

We also know it’s important to not try to track more than one process, or in the case of attribute data, more than one kind of defect or nonconformance, on a single control chart. The latter probably makes an out-of-control signal less likely if one of the attributes does begin to cause trouble; if we do get an out-of-control signal, the chart won’t show which attribute is responsible. It’s similarly futile to have a single control chart for an aggregate of safety incidents with wide arrays of underlying causes and effects.

Are control charts applicable to safety incidents or medical mistakes?

Some very authoritative sources recommend using control charts for workplace injuries, medical mistakes, and so on. According to a 2014 public health report , “Statistical process control charts have recently been used for public health monitoring, predominantly in healthcare and hospital applications, such as the surveillance of patient wait times or the frequency of surgical failures [e.g., 1–10]. Because the frequency of safety incidents like industrial accidents and motor vehicle crashes will follow a similar probability distribution, the use of control charts for their surveillance has also been recommended [11–15]. These control chart uses can be extended to military applications, such as monitoring active-duty Army injuries.” 1

This reference includes control charts for “injuries per 1,000 soldiers,” and the points are all inside the control limits. The reference does cite a decrease in the injury rate, and this could well be due to corrective and preventive action (CAPA) that removed the root causes of the incidents in question to prevent recurrence. That is, CAPA for special or assignable cause problems will make them less frequent, so their aggregated count will exhibit a decrease. The presence of control limits could, however, have the unintended consequence of implying that these incidents result from random variation rather than assignable causes.

Another reference claims, “Deming estimated that common causes may be responsible for as much as 99% of all the accidents in work systems, not the unsafe actions or at-risk behaviors of workers.” 2  Although one might be reluctant to challenge W. Edwards Deming, the truth is that almost all safety incidents have assignable causes. I've yet to see the Occupational Safety and Health Administration or the Chemical Safety Board write one off to random variation. When OSHA fines somebody for an unsafe workplace, it's always for an assignable cause, because OSHA cites a rule and how it was violated (e.g., no fall protection). If Deming contended that 99% of all incidents are due to management-controllable factors, that's another matter entirely. But these factors are ultimately special or assignable causes. If a problem has an identifiable root cause, it's a special or assignable cause by definition.

Rethinking common vs. assignable cause

Quality practitioners equate common cause and random cause variation. Random is exactly what it says because process and quality characteristics always experience some variation. Common cause relates to factors that aren’t controllable by the workers. Deming’s Red Bead demonstration shows why it’s worse than useless to reward or penalize workers for them. If these factors are correctable by management, it might be better to not equate them to random variation.

The Ford Motor Co. presented an outstanding example of this more than 100 years ago. 3 “Even the simple little sewing machine, of which there are 150 in one department, did not escape the watchful eyes of the safety department. Every now and then the needle of one of these high speed machines would run through an operator’s finger. Sometimes the needle would break after perforating a finger, and a minor operation would become necessary. When such accidents began to occur at the rate of three and four a day, the safety department looked into the matter and devised a little 75-cent guard which makes it impossible for the operator to get his finger in the way of the needle.”

The reference says the accidents took place at a rate of three and four a day; let’s assume an average of 3.5 per day. It’s quite likely that the daily count would have fit a Poisson distribution for undesirable random arrivals, and would have probably served as a textbook example for a c (defect count) control chart. If we view common or random cause as something inherent to the system in which people must work, in this case an unguarded moving sharp object, then this was a common cause problem. The fact that it was possible to put a finger under the needle shows, however, that the root cause was in the machine (equipment) category of the cause-and-effect diagram. The fact that installation of the guards (figure 1) eliminated the problem completely underscores the fact that they were dealing with special, assignable, or correctable cause variation.
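To see why such a chart would have looked reassuring, here is a quick hedged calculation (my own, using the article's assumed average of 3.5 injuries per day) of the c-chart limits:

```python
# Sketch: c-chart limits for an assumed average of 3.5 injuries per day.
import math

c_bar = 3.5
ucl = c_bar + 3 * math.sqrt(c_bar)            # about 9.1 per day
lcl = max(0.0, c_bar - 3 * math.sqrt(c_bar))  # negative, so effectively no LCL
print(f"UCL: {ucl:.1f}, LCL: {lcl:.1f}")
# Daily counts of three or four injuries sit comfortably inside these limits,
# so the chart would have called a correctable hazard "in control" -- which is
# exactly the masquerade the article describes.
```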

Make no mistake: CAPA is, or at least should be, mandatory for every safety incident or near miss, regardless of the frequency of occurrence, because it almost certainly has a correctable cause.


Shigeo Shingo offered several case studies that involved workers forgetting to install or include parts. 4 It’s quite conceivable that these nonconformances might have followed a binomial or Poisson distribution, and their counts could have been tracked on an np (number nonconforming) or c (defect count) chart. This might convince many process owners that this was random or common cause variation, especially if no points were above the upper control limit. Shingo determined, however, that the root cause was machine and/or method (as opposed to manpower) because the job design permitted the mistakes to happen. Installing simple error-proofing controls that made it impossible to forget to do something fixed these problems entirely.

If we accept the premise that something management-controllable, like a job design that allows mistakes, is common cause variation, then these problems were common cause variation. The fact that specific, assignable causes were found and removed, however, argues otherwise.

Is a known cause always a special cause?

Does the fact that we know a problem’s root cause always make it a special or assignable cause? Suppose a 19th-century army recognizes that a musketeer is unlikely to hit his target from beyond 50–100 yards because muskets are inherently incapable of precise fire, as shown in figure 2. The only way to improve the situation is to rearm the entire army with rifles, which everybody eventually did.


The prevailing variation in musket fire, however, had to be classified as common cause because the tool was simply not capable of better performance. There was no adjustment a soldier could make to improve this performance, and adjustment in response to common or random cause variation (i.e., tampering) actually makes matters worse. If, however, the shot group from a firearm was centered elsewhere than the bull’s-eye, this was special or assignable cause because the back sight could be adjusted to correct the problem the same way a machine tool that is operating off nominal can be adjusted to bring it back to center.

Another example involves particle-inflicted defects on semiconductor devices. These devices are so small that even microscopic particles will damage or destroy them during fabrication. Thus the cause is known, but the only way to improve the situation is to get a better clean room with an air filtration system that will reduce the particle count, or get better process equipment and chemicals; the latter also must be relatively particle-free.

The takeaway from these examples is that if the problem’s root cause is known but we can solve it only with a large capital investment, retooling, or whatever, we can construe it as common cause variation. This is emphatically not true, however, of safety incidents and medical mistakes.

Joseph Juran and Frank Gryna reinforce this perception. 5 “Random in this sense means of unknown and insignificant cause, as distinguished from the mathematical definition of random—without cause.” If a root cause analysis (RCA) in the course of corrective and preventive action can find a cause, it’s assignable and not random.

The fact that nonconformance data—and safety incidents and medical mistakes are obviously nonconformances—may fit an attribute distribution and behave in the expected manner on an attribute control chart doesn’t make them random or common cause variation that we must accept in the absence of major capital investments or other overhauls. We must recognize upfront that the aggregate of multiple special-cause incidents can masquerade as binomial or Poisson data. We also need to realize that OSHA violations involve failures to conform to a very specific regulation or standard (such as fall protection), which are special or assignable causes by definition.

Medical regulatory agencies such as Medicare do not, meanwhile, deny payment for things that “just happen,” like surgery on the wrong body part, surgery on the wrong patient, medication errors, and so on. 6 These are “never events” that should never happen, so common or random cause variation is not an acceptable explanation.

This underscores the conclusion that any accident or near miss requires corrective and preventive action regardless of whether the count or frequency of these events falls inside traditional control limits, and even raises questions as to whether control limits (which imply the presence of a random underlying distribution) should be used at all.

In summary: If the only way to improve the situation involves extensive retooling, capital investments, and so on, as in "You're going to need a bigger boat" from the movie Jaws, it's common cause variation. The issue isn't urgent, because it's not practical to take immediate action on it. But it is important: if a competitor gets a bigger boat, a superior rifle, a better cleanroom, or a tool with less variation, we will eventually be in trouble.

If the issue has an identifiable root cause that can be removed with corrective and preventive action, it's special or assignable cause variation, regardless of whether the metric is inside control limits. CAPA is mandatory when the issue involves worker or customer safety, and highly advisable when it involves basic quality.

References
1. Schuh, Anna, and Canham-Chervak, Michelle. "Statistical Process Control Charts for Public Health Monitoring." U.S. Army Public Health Command, Public Health Report, 2014.
2. Smith, Thomas. "Variation and Its Impact on Safety Management." EHS Today, 2010.
3. Resnick, Louis. "How Henry Ford Saves Men and Money." National Safety News, 1920.
4. Shingo, Shigeo. Zero Quality Control: Source Inspection and the Poka-Yoke System. Routledge, 1986.
5. Juran, Joseph, and Gryna, Frank. Juran's Quality Control Handbook, Fourth Edition. McGraw-Hill, 1988.
6. Centers for Medicare & Medicaid Services. "Eliminating Serious, Preventable, And Costly Medical Errors—Never Events." 2006.


About The Author

William A. Levinson

William A. Levinson, P.E., FASQ, CQE, CMQOE, is the principal of Levinson Productivity Systems P.C. and the author of the book The Expanded and Annotated My Life and Work: Henry Ford’s Universal Code for World-Class Success (Productivity Press, 2013).


Operations Management: An Integrated Approach, 5th Edition

SOURCES OF VARIATION: COMMON AND ASSIGNABLE CAUSES

If you look at bottles of a soft drink in a grocery store, you will notice that no two bottles are filled to exactly the same level. Some are filled slightly higher and some slightly lower. Similarly, if you look at blueberry muffins in a bakery, you will notice that some are slightly larger than others and some have more blueberries than others. These types of differences are completely normal. No two products are exactly alike because of slight differences in materials, workers, machines, tools, and other factors. These are called common , or random, causes of variation . Common causes of variation are based on random causes that we cannot identify. These types of variation are unavoidable and are due to slight differences in processing.

Common causes of variation: random causes that cannot be identified.

An important task in quality control is to find out the range of natural random variation in a process. For example, if the average bottle of a soft drink called Cocoa Fizz contains 16 ounces of liquid, we may determine that the amount of natural variation is between 15.8 and 16.2 ounces. If this were the case, we would monitor the production process to make sure that the amount stays within this range. If production goes out of this range—bottles are found to contain on average 15.6 ounces—this would lead us to believe that there ...
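The monitoring logic the excerpt describes is simple to sketch; the 15.8-16.2 ounce band comes from the excerpt, while the sample averages below are illustrative:

```python
# Flag a sample mean that drifts outside the natural variation band.
NATURAL_LOW, NATURAL_HIGH = 15.8, 16.2  # ounces

def check_fill(sample_means):
    for mean in sample_means:
        status = "OK" if NATURAL_LOW <= mean <= NATURAL_HIGH else "INVESTIGATE"
        print(f"average fill {mean:.2f} oz -> {status}")

check_fill([16.05, 15.93, 15.60])
# average fill 16.05 oz -> OK
# average fill 15.93 oz -> OK
# average fill 15.60 oz -> INVESTIGATE  (the out-of-range case in the excerpt)
```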



The Power of Special Cause Variation: Learning from Process Changes

Updated: July 28, 2023 by Marilyn Monda


I love to see special cause variation! That’s because I know I’m about to learn something important about my process. A special cause is a signal that the process outcome is changing — and not always for the better.  

Overview: What is special cause variation? 

A control chart can show two different types of variation:   common cause variation (random variation from the various process components) and special cause variation.

Special cause variation is present when the control chart of a process measure shows either plotted point(s) outside the control limits or a non-random pattern of variation.

When a control chart shows special cause variation, a process measure is said to be out-of-control or unstable. Common types of special cause variation signals include:

  •   A point outside of the upper control limit or lower control limit
  •   A trend: 6 or 7 points increasing or decreasing
  •   A cycle or repeating pattern
  •   A run: 8 or more points on either side of the average

  A special cause of variation is assignable to a defect, fault, mistake, delay, breakdown, accident, and/or shortage in the process. When special causes are present, process quality is unpredictable.

Special causes are a signal for you to act to make the process improvements necessary to bring the process measure back into control.
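A minimal Python sketch of three of these signal checks follows; the cycle check is omitted, and the run/trend lengths follow the list above, though exact lengths vary by rulebook:

```python
# Detect common special cause signals in a series of plotted points.
def special_cause_signals(points, ucl, lcl, mean):
    signals = []
    for i, x in enumerate(points):
        if x > ucl or x < lcl:
            signals.append((i, "beyond control limits"))
    # Trend: 7 consecutive points steadily increasing or decreasing
    for i in range(len(points) - 6):
        window = points[i:i + 7]
        diffs = [b - a for a, b in zip(window, window[1:])]
        if all(d > 0 for d in diffs) or all(d < 0 for d in diffs):
            signals.append((i, "trend of 7"))
    # Run: 8 or more consecutive points on one side of the average
    for i in range(len(points) - 7):
        window = points[i:i + 8]
        if all(x > mean for x in window) or all(x < mean for x in window):
            signals.append((i, "run of 8 on one side of the average"))
    return signals

data = [5.1, 5.0, 5.2, 5.3, 5.4, 5.5, 5.6, 5.7, 6.9, 5.2]
print(special_cause_signals(data, ucl=6.0, lcl=4.0, mean=5.0))
# Flags the point at index 8 (beyond limits) plus trend and run signals.
```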


Drawbacks of special cause variation

The source of a special cause can be difficult to find if you are not plotting the control chart in real time.  Unless you have annotated data or a good memory, control charts made from historical data won’t aid your investigation into the source of the special cause. 

If a process measure has never been charted, it is almost certain that it will be out of control.  When you first start studying a process with a control chart, you will usually see a variety of special causes. To find the sources, begin a study of the status of critical process components.  

When a special cause source cannot be found, it will become common to the process.  As time goes on, the special causes repeat and cease being special. They then increase the natural or common cause variation in the process.  

Why is special cause variation important to understand? 

Let’s define quality as minimum variation around an appropriate target. The study of variation using a control chart is one way to tell if the process variation is increasing or if the center is moving away from the desired target over time.  

A special cause is assignable to a process component that has changed or is changing. Investigation into the source of a special cause will:

  • Let you know when to act to adjust or improve the process.
  • Keep you from making the mistake of missing an opportunity to improve a process. If the ignored special cause repeats, you still don’t know how to fix it.
  • Provide data to suggest or evaluate a process improvement.

If no special cause variation exists, that is, the process is in control, you should leave the process alone! Making process changes when there is no special cause present is called Tampering and can increase the variation of the process, lowering its quality.

An industry example of special cause variation 

In this example, a control chart was used to monitor the number of data entry errors on job applications. Each day a sample of applications was reviewed. The number of errors found were plotted on a control chart. 

One day, a point was plotted outside the control limit. Upon investigation, the manager noticed it occurred when a new worker started. It was found the worker wasn’t trained.

The newly trained worker continued data entry. A downward trend of errors followed, indicating the training was a source for the special cause! 

The manager issued guidelines for new worker training. Since then, there have been three new workers without the error count spiking. 

3 best practices when thinking about special cause variation 

Special causes are signals that you need to act to move your process measure back into control.  

Identify the source

When a special cause of variation exists, make a timely effort to identify its source.  A good starting point is to check if any process component changed near to the time the special cause was seen. Also, you could ask process experts to brainstorm why the special cause samples were out of control.

For example, a trend up in screw thickness could be caused by a gage going out of calibration.

Make improvements at the source

Implement improvements to the source of special cause variation.  Once you make improvements to the source of the special cause (like re-calibrating that gage), watch what happens as the next thickness samples are plotted.  If the plot moves back toward stability, you know you found the issue!  

Document everything

As you identify recurring special causes and their sources, document them on a control plan so process operators know what to do if they see the special cause again.

For our gage, the control plan could direct a worker to recalibrate the next time the screw thickness trends up, sending the process back to stability.  

Frequently Asked Questions (FAQ) about special cause variation

  • Are special causes always bad news? 

No. A special cause can indicate either an increase or decrease in the quality of the process measure.

If the special cause shows increased process quality (for example, a decrease in cycle time), then you should make its source common to the process.  

  • If a process is in control (no special causes) is it also capable? 

Not always. Control and capability are two different assessments. Your process measure can be stable (in control) and still fail to meet the customer specification (not capable).

Once a process measure is in control, you can then assess its capability against the customer target and specification limits. If the data is within customer limits and on target, the process is considered both in control and capable.
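A short illustrative sketch of that distinction, using hypothetical fill data and the usual Cp/Cpk formulas (the numbers are not from this article):

```python
# A stable process running off target: in control, but not capable.
import statistics

data = [16.3, 16.1, 16.4, 16.2, 16.5, 16.3, 16.2, 16.4]  # stable, but high
usl, lsl = 16.2, 15.8  # hypothetical customer specification limits

mu = statistics.mean(data)
sigma = statistics.stdev(data)
cp = (usl - lsl) / (6 * sigma)                    # potential capability
cpk = min(usl - mu, mu - lsl) / (3 * sigma)       # actual capability
print(f"mean = {mu:.3f}, Cp = {cp:.2f}, Cpk = {cpk:.2f}")
# mean = 16.300, Cp ~ 0.51, Cpk ~ -0.25
# The negative Cpk shows the stable mean sits above the upper spec limit.
```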

Final thoughts on special causes 

Every process measure will show variation, you will never attain zero variability. Still, it is important to understand the nature of variability so that you can use it to better improve and control your process outcomes. 

The special cause variation signal is the key to finding those critical process components that are the sources of variation needing improvement. Use special cause variation to unlock the path to process control.

About the Author


Marilyn Monda

Six Sigma Study Guide

X Bar R Control Charts

Posted by Ted Hessing

What are X Bar R Control Charts?

X Bar R charts are widely used control charts for variable data, used to examine process stability in many industries (for example, hospital patients' blood pressure over time, customer call handle times, or the length of a part in a production process).

Selecting the appropriate control chart is very important; otherwise, you will end up with inaccurate control limits for the data.


The X bar R chart is used to monitor the process performance of continuous data collected from subgroups at set time periods. It is actually two plots that monitor the process mean and the process variation over time, and it is an example of statistical process control. Together, these charts help you understand the stability of a process and detect special cause variation.

The cumulative sum (CUSUM) and exponentially weighted moving average (EWMA) charts also monitor the process mean, but unlike the X bar chart, they take the previous values into account at each plotted point.

X Bar R Control Chart Definitions

X bar chart: Plots the mean (average) of each subgroup over time. The center line and control limits are derived from the subgroup means.

R chart: Plots the range of each subgroup over time, monitoring the spread of the process.

Use X Bar R Control Charts When:

  • Even a very stable process has some minor variation; an X-bar R chart helps identify whether that variation changes over time.
  • When the data is assumed to be normally distributed.
  • When the subgroup size is more than one (for an I-MR chart the subgroup size is one); it is generally used when measurements are rationally collected in subgroups of two to ten observations.
  • The X Bar S Control chart is to be considered when the subgroup size is more than 10.
  • When the collected data is continuous (e.g., length, weight) and captured in time order.

How to Interpret the X Bar R Control Charts

  • To correctly interpret an X-bar R chart, always examine the R chart first.
  • The X bar chart's control limits are derived from R bar (the average range), so if the R chart is out of control, the X bar chart control limits are inaccurate.
  • If points are out of control on the R chart, stop the process, identify the special cause, and address the issue. Remove those subgroups from the calculations.
  • Once the R chart is in control, review the X bar chart and interpret the points against the control limits.
  • Interpret all points against the control limits, not the specification limits: specification limits are provided by the customer or management, whereas control limits are derived from the averages and ranges of the subgroups.
  • Process capability studies can only be performed after both the X bar and R chart values are within the control limits; there is no need to perform a capability study on an unstable process.

Steps to follow for X bar R chart

The objective of the chart and subgroup size.

  • Determine the objective of the chart and choose the important variables
  • Choose the appropriate subgroup size and the sampling frequency
  • Collect a minimum of 20 to 25 sets of samples in the time sequence

Example: In the manufacturing industry, plate thickness is one of the important CTQ factors. During the Measure phase, the project team performed a process capability study and found that the process was not capable (less than 2 sigma). In the Analyze phase, they collected 20 sets of plate thickness samples with a subgroup size of 4.


Compute X bar and R values

  • Measure the average of each subgroup (X-bar), then compute the grand average of all X-bar values; this is the center line for the X-bar chart.
  • Compute the range of each subgroup (largest minus smallest value), then average all the range values to get R bar; this is the center line for the R chart.


Determine the Control Limits

  • The first set of subgroups determines the process mean and standard deviation, which are used to create the control limits for both the range and the mean of each subgroup.


  • The process should be in control in this early phase of production. If any points are out of control during the initial phase, identify the special causes and remove those subgroups before recalculating the limits.
  • A few out-of-control points on the X-bar portion in the initial phase can actually be informative. If every value sits comfortably within the control limits, it may be because variation in the measurement system is swamping the process variation, and the team won't detect real shifts; in that case, perform an appropriate Measurement System Evaluation (MSE).

X bar chart: UCL = X-double-bar + (A2 × R-bar); LCL = X-double-bar − (A2 × R-bar); center line = X-double-bar
R chart: UCL = D4 × R-bar; LCL = D3 × R-bar; center line = R-bar

where:

  • X is the individual value (data)
  • n is the sample size (subgroup size)
  • X bar is the average reading in a sample
  • R is the Range, in other words, the difference between the largest and smallest value in each sample.
  • R-bar is the average of all the ranges.
  • UCL is the Upper control limit
  • LCL is the Lower control limit

The control chart constants below are approximate values used to compute the control limits for the X-bar R chart and other control charts, based on subgroup size.

n    A2      D3      D4
2    1.880   0       3.267
3    1.023   0       2.574
4    0.729   0       2.282
5    0.577   0       2.114
6    0.483   0       2.004
7    0.419   0.076   1.924
8    0.373   0.136   1.864
9    0.337   0.184   1.816
10   0.308   0.223   1.777

Refer to common factors for various control charts.

Example cont: In the above example n=4

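As a worked check, the sketch below computes the limits from the 20 subgroups of size 4 that are quoted in the comment thread further down, using the n = 4 constants; it reproduces the UCL of roughly 38.5 discussed there:

```python
# X-bar and R chart limits for the n=4 example data.
subgroups = [
    [44, 26, 24, 34], [50, 48, 51, 43], [32, 28, 26, 22], [52, 55, 56, 44],
    [16, 16, 21, 26], [36, 36, 35, 31], [21, 22, 18, 21], [29, 21, 23, 22],
    [26, 46, 44, 14], [24, 22, 22, 44], [18, 24, 24, 49], [24, 20, 26, 23],
    [19, 21, 27, 28], [8, 11, 12, 12],  [24, 18, 27, 24], [56, 52, 56, 50],
    [32, 22, 18, 25], [8, 12, 11, 17],  [51, 54, 52, 49], [30, 28, 35, 22],
]
A2, D3, D4 = 0.729, 0.0, 2.282  # constants for subgroup size n = 4

xbars = [sum(s) / len(s) for s in subgroups]    # subgroup means
ranges = [max(s) - min(s) for s in subgroups]   # subgroup ranges
x_dbar = sum(xbars) / len(xbars)                # grand average (center line)
r_bar = sum(ranges) / len(ranges)               # average range (center line)

print(f"X-double-bar = {x_dbar:.3f}, R-bar = {r_bar:.2f}")
print(f"X-bar chart: UCL = {x_dbar + A2 * r_bar:.2f}, LCL = {x_dbar - A2 * r_bar:.2f}")
print(f"R chart:     UCL = {D4 * r_bar:.2f}, LCL = {D3 * r_bar:.2f}")
# X-double-bar = 29.875, R-bar = 11.85 -> X-bar UCL ~ 38.5, matching the
# ~38.51 limit worked out in the comment thread below.
```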

Interpret X bar and R chart

  • Plot both the X bar and R chart and identify the assignable causes

Example Cont: Use the above values and plot the X bar and Range chart

[X bar and R charts plotted from the 20 subgroups]

From both the X-bar and R charts, it is clearly evident that most of the values are out of control; hence the process is not stable.

Monitor the process after improvement

  • Once the process stabilizes and control limits are in place, monitor the process performance over a set time period.

Example cont: Control Phase. Once the process is improved and matured, the team identifies the X bar R chart as one of the control methods in the control plan, used to monitor the process performance over time.

The following are the measurement values in the Control phase of the project.


Compute X-bar and Range

[X bar and R charts for the control phase data]

From both the X-bar and R charts, it is clearly evident that the process is almost stable. One value is out of control during the initial phase, and the team has to perform a root cause analysis for that special cause. The process also seems to be smoothing out from data set number 16 onward; if that continues, the chart will need new control limits from that point.

  • After the process is stabilized, even if any point goes out of the control limits, it indicates an assignable cause exists in the process that needs to be addressed. This is an ongoing process to monitor the process performance.

Important notes

  • A process that is "in control" means that the process is stable and predictable.
  • Just because a process is stable does not mean it is a zero-defect process.
  • Remember to NEVER put specification limits on any kind of control chart.
  • The points on the chart are averages, not individual values; specification limits are based on individuals, not averages.
  • An operator might tend not to react to an out-of-control point when that point is within the specification limits.
  • The X bar R chart helps to avoid unnecessary adjustments in the process.

  X Bar R Control Chart Videos

Comments (29)

Can you provide a location where all the constants for the different control charts can be found?

Here’s a good reference, Henry: http://web.mit.edu/2.810/www/files/readings/ControlChartConstantsAndFormulae.pdf

You state you want most of the points to be out of control on the X bar chart? Can you please tell me why that is?

Updated the article for greater clarity. Thanks for pointing that out!

Hi, if one or two points are outside the control limits, does it mean the process is not capable?

I like the following quote:

Capability (Cp) and performance (Cpk) charts illustrate a process’s ability to meet specifications. Although SPC control charts can reveal whether a process is stable, they do not indicate whether the process is capable of producing acceptable output—and whether it is performing to capability. (source)

Try this article for calculating process capability.

Applying the method from the first video gives totally different results.

Can you share what you’re doing so I can see?

Sorry for the late response. I took the data set from your first example:

measure1 measure2 measure3 measure4
44 26 24 34
50 48 51 43
32 28 26 22
52 55 56 44
16 16 21 26
36 36 35 31
21 22 18 21
29 21 23 22
26 46 44 14
24 22 22 44
18 24 24 49
24 20 26 23
19 21 27 28
8 11 12 12
24 18 27 24
56 52 56 50
32 22 18 25
8 12 11 17
51 54 52 49
30 28 35 22

Applying the instructions in the first video ( https://www.youtube.com/watch?v=krowVMzxecI ), I calculated the averages of each subgroup:

32, 48, 27, 51.75, 19.75, 34.5, 20.5, 23.75, 32.5, 28, 28.75, 23.25, 23.75, 10.75, 23.25, 53.5, 24.25, 12, 51.5, 28.75

then estimated the standard deviation of the above 20 subgroup averages (in Excel, use the formula STDEV.S(...)): I got 12.4605957870569. Then I calculated the standard error of X-bar = 12.46/sqrt(n) = 12.46/sqrt(4) = 6.23, and then UCL = X-double-bar + 3 * standard error of X-bar = 29.875 + 3 * 6.23 = 48.566. Your UCL by R-bar above is 38.5. Quite a big gap, right?

Hung, It looks like you are trying to calculate UCL just using the standard deviation. You need to use the values from the control chart constants table.

X Double Bar + A2 * R Bar = 29.875 + 0.729(11.89) = 38.51

Yes, I calculated the UCL using the standard deviation; that's the instruction from the first video, which Ramana PV attached to the main topic. My point is that the two instructions from the two videos in the main topic, one calculating with the standard deviation and one with the control chart constants table, lead to two different results.

Hung, you make a good point, but I think it is important to remember that the writers of the Six Sigma exams are not out to trick you with multiple valid solution options. If there are multiple valid ways to solve a problem, only one will match an answer option you are given.

Question: I do not see it mentioned above, but why would an Xbar-R chart have gaps in the sample range section? I have not seen this before and am looking for a simple explanation to help me understand. Thank you

I’m not sure what you mean here, can you elaborate?

Are you trying to interpret a control chart that has breaks in data? If so, perhaps the measurements were stopped for some period of time?

Can you help me with how to show the USL and LSL on a control chart in Minitab software? Thanks so much. Trung Nguyen, Vietnam.

Try this article here, Trung.

Hi, there were 3 questions in the module test about which charts test for within- versus between-group variation, and I didn't see any content regarding this. I found this article, which may help others:

https://www.spcforexcel.com/knowledge/variable-control-charts/xbar-mr-r-betweenwithin-control-chart

Thanks for letting us know, Andy. And thank you for finding and sharing an applicable resource!

We’ll add expanding our articles to cover this concept to our improvements list.

Hello sir, can you please clarify the subgroup size for control charts like the X bar R chart? You mention 2-9 in some places and 2-10 in others. Which one is correct?

Hello Sunil Kumar,

A subgroup is a group of units that are created under the same set of conditions. Subgroups (or rational subgroups) represent a “snapshot” of the process. Therefore, the measurements within a subgroup must be taken close together in time but still be independent of each other.

In the X bar S chart, the S chart shows the standard deviation of the subgroups and provides a better understanding of the spread of subgroup data than the range. Often, the subgroup size is selected without much thought. If the subgroup size is not large enough, meaningful process shifts may go undetected. A large total number of observations is clearly advantageous because you can learn more about process performance, but a large subgroup size is not necessarily better: you have to consider the period of time over which those observations are obtained. Hence, if the subgroup size is less than 10, use the Xbar-R chart.

I appreciate your kind consideration in sharing the study material; it's a great value-add for us.

We have 2 bending fixtures (BF) and multiple part mounting fixtures (MF). Each mounting fixture has 50 parts mounted for bending in one of the bending fixtures for production. The Quality inspector randomly picks an MF twice a day from each BF and randomly selects 5 parts from that MF for inspection of 2 parameters (bend length and angle).

We have done this for 5 days and have 9 samples (9 × 5 = 45 observations each for bend length and angle) of data for each BF. In this case, should we plot an I-MR control chart, an X bar control chart, or an X bar S control chart?

The reason I am asking is that we need to review and understand the root cause of the dimensional (bend length and angle) variation in production after bending, and ultimately control it.

Will be happy to share further details if needed, if you can share email.

Looking forward to your guidance.

Cheers, Digvijay Singh Jodha Founder, The Bicycle Monk

Hi Digvijay,

I appreciate the kind words about the site. I’m not set up to do consulting engagements at this time. However, if you would like, I would be happy to pass your contact information to Six Sigma consultants who may be able to assist.

When does the R chart not apply?

Correct Answer: A & B.

All possible answers: A. Subgroup size > 10. B. Between-group variation. C. Subgroup size < 10. D. A & B.

Explanation: Answer: D (A & B). R charts apply when the subgroup size is < 10 and within-group variation is observed.

The explanation refers to "within-group variation", answer option B however refers to "between group variation"

Hello Wiebke Zuch,

The answer option and explanation are both accurate. The question asks when the R chart does not apply:

  • The range chart examines the variation within a subgroup
  • The X chart examines the variation between subgroups

The X-bar chart measures between-sample variation (signal), while the R chart measures within-sample variation (noise).

Hence Answer D (A & B) is correct: R charts apply when the subgroup size is < 10 and within-group variation is observed. Thanks

How can I create the R/Xbar chart if my subgroups have different sizes?

If a new batch/study comes in, how can I add it to the R/Xbar chart of varying sizes?

Hello Matthew Cserhati

For varying subgroup sizes, the I-MR chart is the best option. However, "Standardization of Shewhart Control Charts" (Nelson, Lloyd S., 1989, ASQC) provides some guidance on X bar R charts with varying subgroup sizes.

When subgroup sizes differ there are three approaches usually recommended.

1. Draw the actual control limits for each subgroup separately.
2. Use the average of the subgroup sizes and calculate limits based on this average size, and calculate the exact limit whenever doubt exists.
3. Standardize the statistic to be plotted and plot the results on a chart with a centerline of zero and limits at ±3.
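A brief sketch of approach 1 with illustrative numbers; sigma is estimated as R-bar/d2, and the d2 values are the standard constants:

```python
# Exact X-bar limits per subgroup when subgroup sizes differ:
# limits = X-double-bar +/- 3 * sigma / sqrt(n_i)
import math

d2 = {2: 1.128, 3: 1.693, 4: 2.059, 5: 2.326}  # standard d2 constants

x_dbar = 29.9           # grand average (illustrative)
sigma = 11.85 / d2[4]   # e.g., R-bar/d2 from historical size-4 subgroups

for n_i in [3, 4, 5]:
    half_width = 3 * sigma / math.sqrt(n_i)
    print(f"n={n_i}: UCL={x_dbar + half_width:.2f}, LCL={x_dbar - half_width:.2f}")
# The n=4 limits agree with the A2-based limits, since A2 = 3/(d2*sqrt(n)).
```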

If the average range of 10 samples of size 4 is 40, then find the UCL and LCL for the R chart.

Thanks for sharing the problem you’re working on. What have you tried so far?

Hello dear Ted, thank you for this very useful article. I have a question: can an X bar R control chart be implemented as an on-line inspection? If it is on-line on the screen of the machine, then after each period (e.g., 10 minutes) one more subgroup sample is added, and the UCL and LCL lines, the average X bar, R bar, etc. on the chart need to be recalculated and updated accordingly as the number of subgroups increases. Am I right? Many thanks in anticipation. Best regards,


Assignable cause

Assignable causes of variation often dominate the many known causes of routine variability. For this reason, it is worth identifying the assignable cause of variation so that its impact on the process can be eliminated, assuming, of course, that project managers and team members are fully aware of it. Assignable causes of variation result from events that are not part of the normal process. Examples of assignable causes of variability are (T. Kasse, p. 237):

  • incorrectly trained people
  • broken tools
  • failure to comply with the process

Identify data of assignable causes

The first step in planning data collection for assignable causes is to identify them and state your goals. This ensures that the assignable-cause data the project team gathers provides the answers needed to carry out the process improvement project efficiently and successfully. The most desirable and relevant characteristics of assignable-cause data are that they be relevant, representative, and sufficient. During planning, the project team should sketch and label the chart that will present the findings before actual data collection begins; this gives the team an indication of what assignable-cause data are needed (A. van Aartsengel, S. Kurtoglu, p. 464).

Types of data for assignable causes

There are two types of data for assignable causes: qualitative and quantitative. Qualitative data come from descriptions resulting from observations or measurements of process-result characteristics, expressed in narrative words and statements. Quantitative data, by contrast, describe observations or measurements of process-result characteristics as measurable quantities, expressed in numerical values (A. van Aartsengel, S. Kurtoglu, p. 464).

Determining the source of assignable causes of variation in an unstable process

When a process is unstable, the analyst must identify the sources of assignable cause variation. The source and the cause itself must be investigated and, in most cases, eliminated. Until all such causes are removed, the actual capability of the process cannot be determined and the process will not work as planned. In some cases, however, assignable cause variability can improve the result; the process must then be redesigned to incorporate it (W. S. Davis, D. C. Yen, p. 76). There are two ways to make a wrong decision about the appearance of assignable cause variation: concluding that such a cause exists when it does not (or assessing it incorrectly), or failing to detect one that does exist (N. Möller, S. O. Hansson, J. E. Holmberg, C. Rollenhagen, p. 339).

Examples of Assignable cause

  • Poorly designed process : A poorly designed process can lead to variation due to the inconsistency in the way the process is operated. For example, if a process requires a certain step to be done in a specific order, but that order is not followed, this can lead to variation in the results of the process.
  • Human error : Human error is another common cause of variation. Examples include incorrect data entry, incorrect calculations, incorrect measurements, incorrect assembly, and incorrect operation of machinery.
  • Poor quality materials : Poor quality materials can also lead to variation. For example, if a process requires a certain grade of material that is not provided, this can lead to variation in the results of the process.
  • Changes in external conditions : Changes in external conditions, such as temperature or humidity, can also cause variation in the results of a process.
  • Equipment malfunctions : Equipment malfunctions can also lead to variation. Examples include mechanical problems, electrical problems, and computer software problems.

Advantages of Assignable cause

One advantage of identifying the assignable causes of variation is that it can help to eliminate their impact on the process. Some of these advantages include:

  • Improved product quality : By identifying and eliminating the assignable cause of variation, product quality will be improved, as it eliminates the source of variability.
  • Increased process efficiency : When the assignable cause of variation is identified and removed, the process will run more efficiently, as it will no longer be hampered by the source of variability.
  • Reduced costs : By eliminating the assignable cause of variation, the cost associated with the process can be reduced, as it eliminates the need for additional resources and labour.
  • Reduced waste : When the assignable cause of variation is identified and removed, the amount of waste produced in the process can be reduced, as there will be less variability in the output.
  • Improved customer satisfaction : By improving product quality and reducing waste, customer satisfaction will be increased, as they will receive a higher quality product with less waste.

Limitations of Assignable cause

Despite the advantages of assigning causes of variation, there are also a number of limitations that should be taken into account. These limitations include:

  • The difficulty of identifying the exact cause of variation, as there are often multiple potential causes and it is not always clear which is the most significant.
  • The fact that some assignable causes of variation are difficult to eliminate or control, such as machine malfunction or human error.
  • The costs associated with implementing changes to eliminate assignable causes of variation, such as purchasing new equipment or hiring more personnel.
  • The fact that some assignable causes of variation may be outside the scope of the project, such as economic or political factors.

Other approaches related to Assignable cause

One of the approaches related to assignable cause is to identify the sources of variability that could potentially affect the process. These can include changes in the raw material, the process parameters, the environment, the equipment, and the operators.

  • Process improvement : By improving the process, the variability caused by the assignable cause can be reduced.
  • Control charts : Using control charts to monitor the process performance can help in identifying the assignable causes of variation.
  • Design of experiments : Design of experiments (DOE) can be used to identify and quantify the impact of certain parameters on the process performance.
  • Statistical Process Control (SPC) : Statistical Process Control (SPC) is a tool used to identify, analyze and control process variation.

In summary, there are several approaches related to assignable cause that can be used to reduce variability in a process. These include process improvement, control charts, design of experiments and Statistical Process Control (SPC). By utilizing these approaches, project managers and members can identify and eliminate the assignable cause of variation in a process.

References

  • Davis, W. S., and Yen, D. C. (2019). The Information System Consultant's Handbook: Systems Analysis and Design. CRC Press, New York.
  • Kasse, T. (2004). Practical Insight Into CMMI. Artech House, London.
  • Möller, N., Hansson, S. O., Holmberg, J. E., and Rollenhagen, C. (2018). Handbook of Safety Principles. John Wiley & Sons, Hoboken.
  • Van Aartsengel, A., and Kurtoglu, S. (2013). Handbook on Continuous Improvement Transformation: The Lean Six Sigma Framework and Systematic Methodology for Implementation. Springer Science & Business Media, New York.

Author: Anna Jędrzejczyk


Are You Invalidating Out-of-Specification (OOS) Results into Compliance?

LCGC North America


  • Phase 1 is the laboratory investigation which is to determine if there is an assignable cause for the analytical failure. This is conducted under the auspices of Quality Control, and should be split into two parts. First, the analyst checks their work to identify any gross errors that have occurred, and correct them with appropriate documentation. If this does not identify the cause, the analyst and their supervisor initiate the OOS investigation procedure looking in more detail and determining whether the cause is within the subject of the FDA OOS guidance. If a root cause cannot be identified, then the investigation is escalated to Phase 2.
  • Phase 2 is under the control of Quality Assurance to coordinate the work of both production and the laboratory; there are two elements here: Phase 2a and 2b.
  • In Phase 2a, if no assignable cause is found in the laboratory, then the investigation looks to see if there is a failure in production. If there is no root cause in production, then the investigation moves back to the laboratory.
  • In Phase 2b, different hypotheses are formulated to try to identify an assignable cause, and a protocol is generated before any laboratory work is undertaken. Here, resampling can be undertaken if required.

Owing to space, we will only consider Phase 1 laboratory investigations in this article.

OOS Definitions

We have been talking about OOS, but we have not defined this and any associated terms, so let us see what definitions we have. Now here is where it gets interesting. You would think that, in an FDA guidance focused on OOS investigations, the term would be defined early in the document. I mean, logic would dictate this, would it not? Not a chance! We must wait until page 10 to find the definition, and then it is found, not in the main body of text, but in a small font footnote! Your tax dollars at work. Not only that, it is totally separated from the discussion about the individual results from an analysis that is found floating in the middle of page 10. Your tax dollars at work, again. There are the following definitions used in the FDA OOS guidance document:

  • Reportable Result : The term refers to a final analytical result. This result is appropriately defined in the written approved test method, and derived from one full execution of that method, starting from the sample. It is comparable to the specification to determine pass/fail of a test (16). This is easy to understand; it is a one-for-one comparison of the analytical result with the specification and the outcome is either pass or fail. Maybes are not allowed.
  • Individual Result : To reduce variability, two or more aliquots are often analyzed with one or two injections each, and all the results are averaged to calculate the reportable result. It may be appropriate to specify in the test method that the average of these multiple assays is considered one test, and represents one reportable result. In this case, limits on acceptable variability among the individual assay results should be based on the known variability of the method, and should also be specified in the test methodology. A set of assay results not meeting these limits should not be used (16).
  • Therefore, the individual results must have their own, larger limits, owing to the variance associated with a single determination. These are in addition to the product specification limits for the reportable result discussed above. Note that individual results are NOT compared with the product specification.
  • Out-of-Specification (OOS) Result : A reportable result outside of specification or acceptance criteria limits. As we are dealing with specifications, OOS results can apply to test of raw materials, starting materials, active pharmaceutical ingredients and finished products, and in-process testing. However, if a system suitability test fails, this will not generate an OOS result, as the whole run would be invalidated; however, there needs to be an investigation into the failure (16).
  • Out-of-Trend (OOT) Result : Not an out-of-specification result, but rather a result that does not fit the expected distribution of results. An alternative definition is a time-dependent result that falls outside a prediction interval or fails a statistical process control criterion (17). This can include a single result outside of acceptance limits for a replicate used to calculate a reportable result. If investigated, the same rules as for OOS investigations apply. Think not of regulatory burden but of good analytical science here: is it better to investigate and find the reason for an OOT result, or to wait until you have an OOS result that might initiate a batch recall?
  • Out of Expectation (OOE) Result : An atypical, aberrant, or anomalous result within a series of results obtained over a short period of time, but is still within the acceptable range specification.

OOS of Individual Values and Reportable Results

To understand the relationship between the reportable result and the individual values, some examples are shown in Figure 2, courtesy of Chris Burgess. The upper and lower specification and individual limits for this procedure are shown as horizontal lines. You'll see that the individual limits are wider than the specification limits, as the variance of a single value is greater than that of a mean result. There are six examples shown in Figure 2; from left to right, we have the following:


  • Close individual replicates and a mean in the middle of the specification range: an ideal result!
  • The individual results are closely spread and, although one replicate is outside the specification limit, it is inside the individual limit; this is therefore a good result.
  • The individual values are relatively close, and all are within the individual limits, but the reportable result is out of specification.
  • One of the individual results is outside of the individual result limit, which means there is an OOS result, although the reportable result is within specification.

Examples 5 and 6 would be OOT or OOE results respectively, but are not OOS. Here, the variance of the individual results is wider than expected, and may indicate that there are problems with the procedure. It would therefore be prudent to investigate what are the reasons for this rather than ignore them. We will focus on OOS result only here.
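A hypothetical sketch of that two-level comparison; the specification limits, individual limits, and replicate values below are illustrative, not from the article:

```python
# Compare the reportable result (mean) with the specification limits, and
# each individual replicate with the wider individual limits.
def classify(replicates, spec=(95.0, 105.0), individual=(93.0, 107.0)):
    mean = sum(replicates) / len(replicates)
    mean_oos = not (spec[0] <= mean <= spec[1])
    individual_oos = any(not (individual[0] <= x <= individual[1])
                         for x in replicates)
    if mean_oos or individual_oos:
        return f"mean={mean:.1f}: OOS -> start a Phase 1 investigation"
    return f"mean={mean:.1f}: reportable result within specification"

print(classify([99.8, 100.4, 100.1]))  # like example 1: ideal
print(classify([94.2, 96.5, 96.1]))    # like example 2: one replicate below
                                       # spec but inside individual limits
print(classify([93.5, 94.0, 94.4]))    # like example 3: replicates close,
                                       # but the mean is out of specification
```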

Process Capability of an Analytical Procedure

Do you know how well your analytical procedures perform? If not, why not? This information provides valuable evidence that you can use in OOS investigations, and it is also a regulatory requirement, as mentioned earlier (12). For chromatographic methods, individual calculated values and results should be plotted over time, with the aim of showing how a specific method performs and its variability. There are two main types of plot that can be used:

  • Shewhart plots, with the upper and lower specification and individual limits and the results plotted over time, as illustrated in Figure 2. Both the individual values and the reportable results need to be plotted; if only the latter are used, the true performance can be missed, as variance is lost in averaging. This shows the process capability over time.
  • Cusum, or cumulative sum, is a control chart that is sensitive in identifying changes in the performance of a method, often before OOS results are generated. When the plot changes direction, this often indicates that a change influencing the procedure has occurred, and the reason should be investigated. This may be as subtle as a new batch of solvent or a change of column.
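The article gives no formulas, but a minimal sketch of the standard one-sided tabular cusum illustrates the second bullet; the target, slack (k), and decision limit (h) are illustrative:

```python
# One-sided tabular CUSUM for drift detection. Typical choices are
# k = 0.5 * sigma and h = 4 to 5 * sigma; the values here are illustrative.
def cusum(values, target, k, h):
    hi = lo = 0.0
    for i, x in enumerate(values):
        hi = max(0.0, hi + (x - target - k))   # accumulates upward drift
        lo = max(0.0, lo + (target - x - k))   # accumulates downward drift
        if hi > h or lo > h:
            print(f"point {i}: CUSUM signal (hi={hi:.2f}, lo={lo:.2f})")

# A slow upward drift that a Shewhart chart might miss:
cusum([100.1, 100.2, 100.4, 100.5, 100.7, 100.8, 101.0],
      target=100.0, k=0.25, h=1.5)
# point 6: CUSUM signal (hi=2.15, lo=0.00)
```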

If your analytical data are similar to Examples 5 and 6 in Figure 2, then you could have a non-robust analytical procedure that reflects poorly on method development and validation (7); either that, or you have poorly trained staff. Prevention of OOS results is better than the investigation of them!

This Never Happens in Your Laboratory…

Even with all the automation and computerization in the world, there is still the human factor to consider. Consider the following situation. The analytical balance is qualified and has been calibrated, the reference standard is within expiry, the weight taken is within limits, and the vessel is transferred to a volumetric flask. One of three things could happen:

  • The material is transferred to the flask correctly, and the solution is made up to volume as required. All is well with the world.
  • During transfer, some material is dropped outside the flask, but the analyst still dissolves the material and makes the solution up to volume.
  • All material is transferred to the flask correctly, but the flask is overfilled past the meniscus.

At this point, only the analyst preparing the reference solution stands between your organization and a data integrity disaster. The analyst is the only person who knows that options 2 and 3 are wrong. What happens next depends on several factors:

  • Corporate data integrity policies and training
  • The open culture of the laboratory that allows an individual to admit their mistakes
  •  The honesty of the individual analyst
  • Laboratory metrics; for example, Turn Around Time (TAT) targets that can influence the actions of individuals
  • The attitude of the supervisor and laboratory management to such errors.

STOP! This is the correct and only action by the analyst. Document the mistake contemporaneously and repeat the work from a suitable point (in this case, repeat the weighing). It is simpler and easier to repeat now than to investigate later.

But what actually happens depends on those factors described above. Preparation of reference standard is one area where the actions of an individual analyst can compromise the integrity of data generated for one or more analytical runs. If the mistake is ignored, the possible outcomes could be an out-of-specification result or the release of an under- or over-strength batch. In the subsequent investigation, unless the mistake is mentioned, it may not be possible to have an assignable cause.

What is the FDA’s View of Analyst Mistakes?

Hidden in the Responsibilities of the Analyst section in the FDA’s Guidance for Industry on Investigating OOS Results is the following statement (16):

If errors are obvious, such as the spilling of a sample solution or the incomplete transfer of a sample composite, the analyst should immediately document what happened.

Analysts should not knowingly continue an analysis they expect to invalidate at a later time for an assignable cause (i.e., analyses should not be completed for the sole purpose of seeing what results can be obtained when obvious errors are known).

The only ethical option open to an analyst is to stop the work, document the error, and repeat the work from a suitable point.

Do You Know Your Laboratory OOS Rate?

According to the FDA:

Laboratory error should be relatively rare. Frequent errors suggest a problem that might be due to inadequate training of analysts, poorly maintained or improperly calibrated equipment, or careless work (16).

This brings me to the first of two metrics: Do you know the percentage of OOS results across all tests that are performed in your laboratory? If not, why not?

Remember that there are three main groups of analytical procedure that can be used for release testing, ranging from:

  • Observation (including, but not limited to, appearance, color, and odor, for example), which is relatively simple to perform. If there is an OOS result, it is more likely to be a manufacturing issue than a laboratory one (such as particles in the sample or change in expected color). However, laboratory errors in analyses of this type will be extremely rare.
  • Classical wet chemistry (including, but not limited to, melting point, titration, and loss on drying) involving a variety of analytical techniques often with manual data recording unless automated (autotitration). There is more likelihood of an error but many mistakes, such as transcription or calculation errors, should be identified and corrected during second person review.
  • Instrumental analyses (including, but not limited to, identity, assay, potency, and impurity) using spectrometers and chromatographs for example. Here, the procedures can be more complex, and data analysis requires trained analysts.

As you move down this list, the likelihood of an OOS result increases with the complexity of the analytical procedure and the amount of human data interpretation involved. Hence the emphasis on instrumental methods, such as chromatography (1) and spectroscopy (18), in inspections.

SST Failure Does Not Require an OOS Investigation

Under analyst responsibilities, there is an FDA get out of jail free card for chromatographic runs where the SST injections fail to meet their predefined acceptance criteria:

Certain analytical methods have system suitability requirements, and systems not meeting these requirements should not be used. …. in chromatographic systems, reference standard solutions may be injected at intervals throughout chromatographic runs to measure drift, noise, and repeatability.

If reference standard responses indicate that the system is not functioning properly, all of the data collected during the suspect time period should be properly identified and should not be used. The cause of the malfunction should be identified and, if possible, corrected before a decision is made whether to use any data prior to the suspect period (16).

Here's where technology can help. Some CDS applications allow users to define acceptance criteria for SST injections. If one or more SST injections fail these criteria, the run stops automatically. This saves the analyst from trying to determine which data from the run could still be used, because if the run is stopped before samples are injected, no sample data are generated. If this function is used, it must be validated to show that it works: specified in the user requirements specification (URS) and verified in user acceptance testing (UAT). There will also be corresponding entries in the instrument log book documenting the initial problem and the steps taken to resolve the issue (11,19,20).

What is an OOS Investigation?

An OOS investigation is triggered by an analyst when the reportable result is outside of the specification limits, as shown in Figure 2. The analyst informs their supervisor, and this should begin the laboratory investigation, following the laboratory OOS procedure. The responsibilities of both individuals are presented in Table II, which is copied from the FDA OOS guidance document. The FDA is very specific in listing the responsibilities of both the analyst who performs the analysis and the laboratory supervisor who conducts the investigation. The guidance document will be the source for a laboratory SOP detailing what is to be done in a laboratory OOS investigation.


The investigation begins by checking the chromatographer's knowledge of the analytical procedure, and that the right procedure was used. This is followed by ensuring that the sampling was performed correctly, that the right sample was analyzed, and so on throughout the analysis. Areas to check for potential errors and assignable causes are shown in Figure 4 and Table III, which have been derived from the FDA OOS guidance.


Don’t Do This in Your Laboratory!

The Lupin plant in Nagpur is the source of this example of how not to undertake an OOS laboratory investigation, quoted from the 483 form issued in January 2020 (21). Citation 2 is an observation for failing to follow the investigation SOP, but we will look at citation 1, where the details of inadequate laboratory investigations are documented. Some parts of the citation are heavily redacted, so this is a best attempt at reconstructing the investigation that was carried out:

  • There was an OOS result from a dissolution test.
  • The OOS was hypothesized as being due to an analyst transposing samples from different time points.
  • The original sample solutions were remeasured, but the results appeared to be similar to the original.
  • A comment was added to the record that is redacted in the 483 that appears to document an analyst comment that some of the solution was spilled, and that this would account for the discrepancy in results. This is a very convenient spillage.
  • However, the comment was not made by the analyst who performed the original work. A second analyst was told by his supervisor that the first analyst had said he had made a mistake, and the second analyst documented this, as directed by the supervisor.
  • There was no documentation or corroborating evidence provided to support this, or that an interview with the original analyst ever occurred.
  • A substitution of a solution was made which, when retested, passed. Well, what a surprise!
  • The original test results were invalidated, and the passing results used to release the product.
  • QC and QA personnel signed off the investigation, even though they knew of the substitution of the solution and potential manipulation.
  • The original analyst was unavailable during the inspection.
  • No deviation or CAPA was instigated.
  • No definitive root cause of the OOS result was ever determined.

Is it any wonder that a 483 observation was raised?

FDA Guidance on Quality Metrics

The second metric that is important in OOS investigations comes from the FDA Draft Guidance on Quality Metrics (22), which emphasizes the importance of correct OOS investigations. The guidance defines three metrics covering manufacturing and quality control, but only one for QC: the invalidated OOS rate, defined as follows:

Invalidated Out-of-Specification (OOS) Rate (IOOSR) as an indicator of the operation of a laboratory. IOOSR = the number of OOS test results for lot release and long-term stability testing invalidated by the covered establishment due to an aberration of the measurement process divided by the total number of lot release and long-term stability OOS test results in the current reporting timeframe (22).

What is important is that the rate covers not only batch release but also stability testing. The rationale for using the invalidated OOS rate can be seen in Table I and the corresponding 483 observations and warning letters (4,5). One aim is for the FDA to conduct risk-based inspections: if a firm presents a low regulatory risk, the agency will rely on these quality metrics to extend the time between inspections. Woe betide a firm that massages these metrics.
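As a worked illustration of this definition, the short sketch below computes the IOOSR from hypothetical counts for a reporting period; the numbers are invented and the function is not part of the guidance itself.

```python
# Worked illustration of the IOOSR metric: invalidated OOS results
# divided by total OOS results for lot release and long-term stability
# testing in the reporting period. Counts are invented for the example.

def ioosr(invalidated_oos: int, total_oos: int) -> float:
    """Invalidated OOS Rate, as a percentage of all OOS results."""
    if total_oos == 0:
        return 0.0
    return 100.0 * invalidated_oos / total_oos

# Hypothetical reporting period: 40 OOS results in total, 6 of them
# invalidated after a documented assignable cause was found in the
# measurement process.
print(f"IOOSR = {ioosr(6, 40):.1f}%")  # -> IOOSR = 15.0%
```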

Outsourced Analysis?

If your organization outsources manufacturing and QC analysis, how should you monitor the work? From a QC perspective, the contract facility should notify your organization of any OOS result. You must have oversight of any laboratory investigation involving your products or analyses to ensure that the FDA criteria outlined above are met. In addition, if the FDA is interested in the percentage of invalidated OOS results, so should you be. You should have these figures not only for your own work but also for the outsourced laboratory as a whole. Therefore, you should review all OOS investigations on your products, either by video conference or on site during supplier audits. In the words of that great data integrity expert, Ronald Reagan: trust, but verify.

Scientifically sound OOS laboratory investigations are an essential part of ensuring data integrity. Outlined here are the key requirements for an OOS investigation to find an assignable or root cause so that a result can be invalidated. Note that the FDA and other regulatory authorities take a very keen interest in invalidated OOS results, especially where analyst error is repeatedly cited as the cause of the OOS. Your laboratory should know its OOS rate as well as the percentage of OOS results invalidated.

Acknowledgment

I would like to thank Chris Burgess for permission to use Figure 2 in this article and for comments made in preparation of this article.

References

1. R.D. McDowall, Data Integrity and Data Governance: Practical Implementation in Regulated Laboratories (Royal Society of Chemistry, Cambridge, United Kingdom, 2019).
2. FDA Warning Letter, Tismore Health and Wellness Pty Limited (Food and Drug Administration, Silver Spring, Maryland, 2019).
3. FDA Warning Letter, Shriram Institute for Industrial Research (Food and Drug Administration, Silver Spring, Maryland, 2020).
4. FDA Warning Letter, Lupin Ltd. (Food and Drug Administration, Silver Spring, Maryland, 2017).
5. FDA 483 Observations, Lupin Ltd. (Food and Drug Administration, Silver Spring, Maryland, 2017).
6. FDA Warning Letter, Lupin Ltd. (Food and Drug Administration, Silver Spring, Maryland, 2019).
7. R.D. McDowall, LCGC North Am. 38(4), 233–240 (2020).
8. R.J. Davis, "Judge Wolin’s Interpretation of Current Good Manufacturing Practice Issues Contained in the Court’s Ruling United States vs. Barr Laboratories," in Development and Validation of Analytical Methods, C.L. Riley and T.W. Rosanske, Eds. (Pergamon Press, Oxford, United Kingdom, 1996), p. 252.
9. Barr Laboratories: "Court Decision Strengthens FDA’s Regulatory Power" (1993). Available from: https://www.fda.gov/Drugs/DevelopmentApprovalProcess/Manufacturing/ucm212214.htm.
10. USP General Chapter <1010>, Outlier Testing (United States Pharmacopeial Convention, Rockville, Maryland, 2012).
11. 21 CFR 211, Current Good Manufacturing Practice for Finished Pharmaceutical Products (Food and Drug Administration, Silver Spring, Maryland, 2008).
12. EudraLex, Volume 4: Good Manufacturing Practice (GMP) Guidelines, Chapter 6, Quality Control (European Commission, Brussels, Belgium, 2014).
13. Inspection of Pharmaceutical Quality Control Laboratories (Food and Drug Administration, Rockville, Maryland, 1993).
14. FDA Compliance Program Guide CPG 7346.832, Pre-Approval Inspections (Food and Drug Administration, Silver Spring, Maryland, 2019).
15. R.D. McDowall, Spectroscopy 34(12), 14–19 (2019).
16. FDA Guidance for Industry, Out-of-Specification Results (Food and Drug Administration, Silver Spring, Maryland, 2006).
17. C. Burgess, personal communication.
18. P.A. Smith and R.D. McDowall, Spectroscopy 34(9), 22–28 (2019).
19. EudraLex, Volume 4: Good Manufacturing Practice (GMP) Guidelines, Chapter 4, Documentation (European Commission, Brussels, Belgium, 2011).
20. R.D. McDowall, Spectroscopy 32(12), 8–12 (2017).
21. FDA 483 Observations, Lupin Ltd. (Food and Drug Administration, Silver Spring, Maryland, 2019).
22. FDA Guidance for Industry, Submission of Quality Metrics Data, Revision 1 (Food and Drug Administration, Rockville, Maryland, 2016).

R.D. McDowall is the director of R.D. McDowall Limited in the UK. Direct correspondence to: [email protected]

Chance and Assignable Causes of Variation

Variation in the quality of a manufactured product is inherent and inevitable in any industrial process. These variations are broadly classified as (i) chance (non-assignable) causes and (ii) assignable causes.

i) Chance causes: In any manufacturing process it is not possible to produce goods of exactly the same quality; variation is inevitable. A certain small variation is natural to the process, being due to chance causes, and cannot be prevented. This variation is therefore called allowable variation.

ii) Assignable causes: Variation attributed to non-random, so-called assignable causes is termed preventable variation. Assignable causes may creep in at any stage of the process, from the arrival of the raw materials to the final delivery of goods. Some important sources of assignable variation are:

  • substandard or defective raw materials
  • new techniques or operations
  • negligence of the operators
  • wrong or improper handling of machines
  • faulty equipment
  • unskilled or inexperienced technical staff

These causes can be identified and eliminated, and should be discovered in a production process before the output becomes defective. SQC is a productivity-enhancing and regulating technique (PERT) with three factors: management, methods, and mathematics. Here, control is two-fold: controlling the process (process control) and controlling the finished products (product control).
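To make the distinction concrete, here is a minimal sketch of how a Shewhart individuals chart separates the two kinds of cause: the process sigma is estimated from the average moving range (divided by the d2 constant of 1.128 for subgroups of two), and any point beyond the 3-sigma control limits is flagged as a potential assignable cause, while points inside the limits are treated as chance variation. The measurement data are invented.

```python
# Minimal individuals-chart sketch: estimate sigma from the average
# moving range and flag points beyond the 3-sigma control limits as
# potential assignable causes. Data are invented for illustration.

measurements = [10.1, 9.8, 10.0, 10.2, 9.9, 10.1, 12.9, 10.0, 9.7, 10.1]

mean = sum(measurements) / len(measurements)
moving_ranges = [abs(b - a) for a, b in zip(measurements, measurements[1:])]
sigma_hat = (sum(moving_ranges) / len(moving_ranges)) / 1.128  # d2 for n=2

ucl = mean + 3 * sigma_hat  # upper control limit
lcl = mean - 3 * sigma_hat  # lower control limit

for i, x in enumerate(measurements, start=1):
    if not (lcl <= x <= ucl):
        # Outside the limits: a signal to look for an assignable cause.
        print(f"Point {i} ({x}) is outside [{lcl:.2f}, {ucl:.2f}]: "
              "investigate for an assignable cause")
    # Points inside the limits are treated as chance (common cause)
    # variation and left alone to avoid tampering with the process.
```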

