Sixsigma DSI

Assignable Cause


An assignable cause refers to a specific, identifiable factor or reason that contributes to a variation or deviation in a process or system’s output. In statistical process control and quality management, assignable causes are distinct from random or common causes, as they are usually identifiable and controllable.

A control chart can identify two types of variation: assignable cause (also known as special cause) and common cause. Let’s look at what assignable cause variation looks like and compare it to common cause variation. This article will also explain how to recognize when a control chart signals an assignable cause and how to respond when it does.

A control chart displays both types of variation. Common cause variation is random variation that results from the process components, or 6Ms. Special cause variation, by contrast, can be assigned to a specific source.

What is an Assignable Cause?

  • Specificity: Assignable causes are particular factors or events that can be pinpointed as the reason behind a change or anomaly in the process. They are not part of the regular or expected variation within the system.
  • Controllability: These causes are typically within the control of management or those overseeing the process. Once identified, efforts can be made to address or eliminate them to improve the process.
  • Impact on Variation: Assignable causes have a significant impact on process variation, leading to deviations or irregularities in the output. They can result in non-conformance or substandard performance.
  • Corrective Action: Recognizing and addressing them is crucial in quality management. Corrective actions are taken to eliminate or mitigate the effects of these causes to restore the process to its intended performance level.

The identification and management of assignable causes in a process or system offer several benefits:

  • Improved Quality Control: Assignable causes pinpoint specific factors leading to variations or issues within a process. Addressing these causes allows for better control over the process, resulting in improved product or service quality.
  • Enhanced Problem-Solving: Identifying assignable causes helps in understanding the root reasons behind deviations or anomalies. This facilitates more effective problem-solving and decision-making to address underlying issues, rather than merely treating symptoms.
  • Preventive Action: By recognizing and eliminating assignable causes, organizations can proactively prevent the recurrence of issues. This proactive approach minimizes the likelihood of future problems, reducing waste, rework, and associated costs.
  • Process Optimization: Managing assignable causes leads to process optimization. Continuous improvement efforts can focus on eliminating these causes, thereby streamlining operations, increasing efficiency, and optimizing resources.
  • Increased Productivity: Addressing assignable causes can lead to smoother operations and fewer disruptions. This, in turn, can enhance productivity by reducing downtime and improving overall process flow.
  • Customer Satisfaction: Consistent product or service quality resulting from the elimination of assignable causes can lead to higher customer satisfaction. Meeting or exceeding customer expectations fosters loyalty and a positive brand reputation.
  • Data-Driven Decision-Making: The identification of assignable causes relies on data analysis and systematic problem-solving methods. This promotes a data-driven approach to decision-making, ensuring that actions are based on empirical evidence rather than assumptions.

Overall, effectively managing assignable causes contributes to a more robust quality management system, fosters a culture of continuous improvement, and supports the organization in delivering higher quality products or services while minimizing disruptions and costs.


Assignable Cause

Last updated by Jeff Hajek on December 22, 2020

An assignable cause is a type of variation in which a specific activity or event can be linked to inconsistency in a system. In effect, it is a special cause that has been identified.

As a refresher, common cause variation is the natural fluctuation within a system. It comes from the inherent randomness in the world. The impact of this form of variation can be predicted by statistical means. Special cause variation, on the other hand, falls outside of statistical expectations. It shows up as outliers in the data.

Lean Terms Discussion

Variation is the bane of continuous improvement. It decreases productivity and increases lead time. It makes it harder to manage processes.

While we can do something about common cause variation, typically there is far more bang for the buck by attacking special causes. Reducing common cause variation, for example, might require replacing a machine to eliminate a few seconds of variation in cutting time. A special cause variation on the same machine might be the result of weld spatter from a previous process. The irregularities in a surface might make a part fit into a fixture incorrectly and require some time-consuming rework. Common causes tend to be systemic and require large overhauls. Special causes tend to be more isolated to a single process step.

The first step in removing special causes is identifying them. In effect, you turn them into assignable causes. Once a source of variation is identified, it simply becomes a matter of devoting resources to resolve the problem.


Lean Terms Leader Notes

One of the problems with continuous improvement is that the language can be murky at times. You may find that some people use special causes and assignable causes interchangeably. Special cause is a far more common term, though.

I prefer assignable cause, as it creates an important mental distinction. It implies that you…

Extended Content for this Section is available at academy.Velaction.com


Common Cause vs. Special Cause Variation: What’s the Difference?

Updated: January 14, 2024 by iSixSigma Staff


What is Common Cause Variation?

Common cause variation is the kind of variation that is part of a stable process. These are variations that are natural to a system and are quantifiable and expected. Common cause variations are those that are predictable, ongoing, and consistent. Major changes would typically have to be made in order to change the common cause variations.

One example of a common cause variation would be when a task takes slightly longer or shorter to accomplish than the mean time. Other examples could be normal wear and tear, computer lag time, and measurement errors.

The Benefits of Common Cause Variations

Since common cause variations are always present, they can be measured with statistical techniques to establish a baseline of the normal variation. These types of variations also fit easily within the control limits of a control chart.

How to Identify Common Cause Variation

You can identify common cause variation points on the control chart of a process measure by its random pattern of variation and its adherence to the control limits.

What is Special Cause Variation?

Special cause variation consists of unexpected glitches that significantly affect a process; it is also known as “assignable cause” variation. These variations are unusual, unquantifiable, and have not been observed previously, so they cannot be planned or accounted for.

These causes are typically the result of a specific change that has occurred in the process, with the result being a chaotic problem.

One example of a special cause variation would be a task taking exorbitantly longer than typical due to an unexpected crisis. Other examples would be power outages, computer crashes, and machine malfunctions.

The Benefits of Special Cause Variation

One benefit of special cause variations is that they are typically connected to a defect in the system or process that is addressable. Changes to components, methods, or processes can help prevent the special cause variation from occurring again.

How to Identify Special Cause Variation

You can identify special cause variation on a control chart by its non-random patterns and out-of-control points.
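
To make these identification rules concrete, here is a minimal Python sketch, not tied to any particular SPC package, of an individuals (I) chart check: the limits are estimated from the average moving range, points inside the limits are treated as common cause variation, and points outside are flagged as candidate special (assignable) causes. The measurement values are invented for illustration.

```python
import numpy as np

def i_chart_limits(x):
    """Center line and 3-sigma limits for an individuals (I) chart."""
    x = np.asarray(x, dtype=float)
    moving_range = np.abs(np.diff(x))          # ranges of successive points
    sigma_hat = moving_range.mean() / 1.128    # d2 constant for subgroups of 2
    center = x.mean()
    return center, center - 3 * sigma_hat, center + 3 * sigma_hat

# ten process measurements; the seventh is deliberately unusual
measurements = [24.9, 25.1, 25.0, 24.8, 25.2, 25.1, 27.9, 25.0, 24.9, 25.1]
center, lcl, ucl = i_chart_limits(measurements)

for i, value in enumerate(measurements, start=1):
    label = "special cause?" if not (lcl <= value <= ucl) else "common cause"
    print(f"point {i:2d}: {value:5.1f} -> {label}")
```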

Common Cause vs. Special Cause: What’s the Difference?

Common cause variation and special cause variation are related in that they can both be present in the performance of a process. The difference between these two types of variation lies in how common cause variations are normal and expected variations that do not deviate from the natural order of a process. With common cause variations, a process remains stable. With special cause variations, however, a process is dramatically affected and becomes unstable. In short, common cause variations reflect a stable process, while special cause variations reflect an unstable process.

Common Cause vs. Special Cause: Who Uses These Concepts?

Understanding both of these types of variation is important in project management. You can keep track of a project’s health by observing control charts and spotting the differences between common cause variations and special cause variations. The ability to spot the differences lets you know whether a process is stable, whether there are variations that need to be addressed by making changes, or whether they can likely be left alone.

Choosing Between Common Cause and Special Cause: Real World Scenarios

A project manager has been tasked with looking at the performance of a project during the previous quarter. A control chart is drafted that shows any variance that occurred during that quarter. With an understanding of how common cause and special cause variance are displayed on a control chart, the project manager looks for points on the chart that appear non-random and that fall outside the control limits of the chart.

Upon inspection, the project manager finds a group of points that fall well outside the parameters of what is typical. A few of the workers are called, and it is determined that at the time those points were recorded, a flood prevented the necessary work from being done.

This adequately explains the presence of special cause variation on the control chart.

Summary/Conclusion

Variation in a process is normal and expected. Over a given period of time, it is essentially unavoidable. Nevertheless, by understanding control charts and being able to recognize variances that are typical for the process and those that are atypical, we can make changes to processes to prevent or safeguard against the same special cause variation in the future.



Diagnosis of Assignable Cause in Statistical Process Control

International Journal of Quality & Reliability Management

ISSN : 0265-671X

Article publication date: 1 July 1996

Explains that the shifts of a process may be classified into a set of modes (or classifications), each of which is incurred by an assignable cause. Presents an algorithm to determine the process shift mode and estimate the run length when an out-of-control status is signalled by the x̄ or s chart in statistical process control. The information regarding the process shift mode and run length is very useful for diagnosing the assignable cause correctly and promptly. The algorithm includes two stages. First, the process shift modes are established using the sample data acquired during an explorative run. Afterwards, whenever an out-of-control case is detected, Bayes’ rule is employed to determine the active process shift mode and estimate the run length. In simulation tests, the proposed algorithm attains a fairly high probability (around 0.85) of correctly determining the active process shift mode and estimating the run length.

  • Bayesian statistics
  • Control charts
  • Shewhart charts
  • Statistical process control

Wu, Z. (1996), "Diagnosis of assignable cause in statistical process control", International Journal of Quality & Reliability Management , Vol. 13 No. 5, pp. 61-76. https://doi.org/10.1108/02656719610118124

Copyright © 1996, MCB UP Limited



Leaving Out-of-control Points Out of Control Chart Calculations Looks Hard, but It Isn't

Topics: Control Charts, Lean Six Sigma, Six Sigma, Quality Improvement

Control charts are excellent tools for looking at data points that seem unusual and for deciding whether they're worthy of investigation. If you use control charts frequently, then you're used to the idea that if certain subgroups reflect temporary abnormalities, you can leave them out when you calculate your center line and control limits. If you include points that you already know are different because of an assignable cause, you reduce the sensitivity of your control chart to other, unknown causes that you would want to investigate. Fortunately, Minitab Statistical Software makes it fast and easy to leave points out when you calculate your center line and control limits. And because Minitab’s so powerful, you have the flexibility to decide if and how the omitted points appear on your chart.

Here’s an example with some environmental data taken from the Meyer Park ozone detector in Houston, Texas. The data are the readings at midnight from January 1, 2014 to November 9, 2014. (My knowledge of ozone is too limited to properly chart these data, but they’re going to make a nice illustration. Please forgive my scientific deficiencies.) If you plot these on an individuals chart with all of the data, you get this:

The I-chart shows seven out-of-control points between May 3rd and May 17th.

Beginning on May 3, a two-week period contains 7 out of 14 days where the ozone measurements are higher than you would expect based on the amount that they normally vary. If we know the reason that these days have higher measurements, then we could exclude them from the calculation of the center line and control limits. Here are the three options for what to do with the points:

Three ways to show or hide omitted points

Like it never happened

One way to handle points that you don't want to use to calculate the center line and control limits is to act like they never happened. The points neither appear on the chart, nor are there gaps that show where omitted points were. The fastest way to do this is by brushing:

  • On the Graph Editing toolbar, click the paintbrush.

The paintbrush is between the arrow and the crosshairs.

  • Click and drag a square that surrounds the 7 out-of-control points.
  • Press CTRL + E to recall the Individuals chart dialog box.
  • Click Data Options.
  • Select Specify which rows to exclude.
  • Select Brushed Rows.
  • Click OK twice.

On the resulting chart, the upper control limit changes from 41.94 parts per billion to 40.79 parts per billion. The new limits indicate that April 11 was also a measurement that's larger than expected based on the variation typical of the rest of the data. These two facts will be true on the control chart no matter how you treat the omitted points. What's special about this chart is that there's no suggestion that any other data exists. The focus of the chart is on the new out-of-control point:

The line between the data is unbroken, even though other data exists.

Guilty by omission

A display that only shows the data used to calculate the center line and control limits might be exactly what you want, but you might also want to acknowledge that you didn't use all of the data in the data set. In this case, after step 6, you would check the box labeled Leave gaps for excluded points. The resulting gaps look like this:

Gaps in the control limits and data connect lines show where points were omitted.

In this case, the spaces are most obvious in the control limit line, but the gaps also exist in the lines that connect the data points. The chart shows that some data was left out.

Hide nothing

In many cases, not showing data that wasn't in the calculations for the center line and control limits is effective. However, we might want to show all of the points that were out-of-control in the original data. In this case, we would still brush the points, but not use the Data Options. Starting from the chart that calculated the center line and control limits from all of the data, these would be the steps:

  • Press CTRL + E to recall the Individuals chart dialog box. Arrange the dialog box so that you can see the list of brushed points.
  • Click I Chart Options.
  • Select the Estimate tab.
  • Under Omit the following subgroups when estimating parameters, enter the row numbers from the list of brushed points.

This chart still shows the new center line, control limits, and out-of-control point, but also includes the points that were omitted from the calculations.

Points not in the calculations are still on the chart.

Control charts help you to identify when some of your data are different than the rest so that you can examine the cause more closely. Developing control limits that exclude data points with an assignable cause is easy in Minitab and you also have the flexibility to decide how to display these points to convey the most important information. The only thing better than getting the best information from your data? Getting the best information from your data faster!
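
For readers who want to reproduce the idea outside Minitab, the sketch below shows the same workflow in plain Python: the center line and limits of an individuals chart are recalculated with the known assignable-cause rows left out, and every observation is still reported against the new limits. The values and excluded row indices are placeholders rather than the Meyer Park data, and the moving-range estimate of sigma is a common textbook convention, not necessarily Minitab's exact procedure.

```python
import numpy as np

def i_chart_limits(x):
    """Moving-range estimate of the center line and 3-sigma limits."""
    x = np.asarray(x, dtype=float)
    sigma_hat = np.abs(np.diff(x)).mean() / 1.128
    return x.mean(), x.mean() - 3 * sigma_hat, x.mean() + 3 * sigma_hat

ozone = np.array([31.0, 33.5, 30.2, 45.1, 47.3, 44.0, 32.8, 29.9, 46.5, 31.7])
assignable_rows = {3, 4, 5, 8}      # 0-based rows we can already explain

kept = np.array([v for i, v in enumerate(ozone) if i not in assignable_rows])
center, lcl, ucl = i_chart_limits(kept)

# every observation is still shown, but judged against the recalculated limits
for i, value in enumerate(ozone):
    status = "in control" if lcl <= value <= ucl else "out of control"
    note = "  (excluded from limit calculation)" if i in assignable_rows else ""
    print(f"day {i + 1:2d}: {value:5.1f}  {status}{note}")
```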



How to Deal with Assignable Causes?

Across the many training sessions I have conducted, one question that keeps coming up is “How do we deal with special causes of variation, or assignable causes?” Although many trainers have found a theoretical way of answering this, in the real world, and especially in Six Sigma projects, the question often remains open. Through this article, I try to address it from a practical standpoint.

Any data point you see on any of your charts has a cause associated with it. Try telling me that the points on your XmR, ImR, or Xbar-R charts have dropped from the sky and I will tell you that you are not shooting down the right ducks. Causes such as the following seem possible for any data point that appears on the chart:

  • A new operator was running the process at the time.
  • The raw material was near the edge of its specification.
  • There was a long time since the last equipment maintenance.
  • The equipment maintenance was just performed prior to the processing.

The moment any of our data points appears to be due to one of the causes mentioned above, a slew of steps is triggered. Yeah – panic! Worse still, the actions below, often a knee-jerk reaction backed by an absolute lack of data, result in even more panic:

  • Operators get retraining.
  • Incoming material specifications are tightened.
  • Maintenance schedules change.
  • New procedures are written.

My question is: do you really have to do all of this before you have determined whether the cause is a common or a special cause of variation? Most Six Sigma trainers will tell you that a control chart will help you identify special causes of variation. True – but did you know there is a way you can validate your finding?

  • Check the distribution first. If the data is not normal, transform the data to make it reasonably normal. See if it still has extreme points. Compare the charts before and after transformation. If they are the same, you can be more or less sure it has common causes of variation.
  • Plot all of the data, with the event, on a control chart. If the point does not exceed the control limits, it is probably a common-cause event. Use the transformed data if a transformation was applied in step 1.
  • Using a probability plot, estimate the probability of receiving the extreme value. Consider the probability plot confidence intervals to be like a confidence interval of the data by examining the vertical uncertainty in the plot at the extreme value. If the lower confidence boundary is within the 99% range, the point may be a common-cause event. If the lower CI bound is well outside of the 99% range, it may be a special cause. Of course, the same concept works for lower extreme values.
  • Finally, turn back the pages of the history. See how frequently these causes have occurred. If they have occurred rather frequently, you may want to think these are common causes of variation. Why? Did you forget that special causes don’t really repeat themselves?
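
As one concrete way to run the first three checks, here is a hedged Python sketch using SciPy. The sample data, the Box-Cox transform, and the two-sided tail probability are illustrative choices, not a prescribed recipe, and the fourth step, checking history for recurrence, has to be done against your own records.

```python
import numpy as np
from scipy import stats

data = np.array([12.1, 11.8, 12.4, 12.0, 11.9, 12.2, 12.3, 11.7, 12.0, 14.9])

# Step 1: check the distribution; transform if it is clearly non-normal
_, p_normal = stats.shapiro(data)
if p_normal < 0.05:
    data_t, _ = stats.boxcox(data)      # Box-Cox needs strictly positive data
else:
    data_t = data

# Step 2: compare the suspect (last) point with 3-sigma individuals-chart limits
sigma_hat = np.abs(np.diff(data_t)).mean() / 1.128
outside_limits = abs(data_t[-1] - data_t.mean()) > 3 * sigma_hat

# Step 3: estimate how improbable the value is under a fitted normal model
z = (data_t[-1] - data_t.mean()) / data_t.std(ddof=1)
tail_probability = 2 * stats.norm.sf(abs(z))

print(f"Shapiro-Wilk p-value: {p_normal:.3f}")
print(f"suspect point outside 3-sigma limits: {outside_limits}")
print(f"two-sided tail probability of the suspect value: {tail_probability:.4f}")
# Step 4 is a records check: has this cause appeared before? If it recurs
# often, treat it as common cause variation rather than a special cause.
```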

The four-step approach you have taken may still not be enough to conclude whether it is a common or a special cause of variation. Note – an RCA approach is not really suited to reducing or eliminating common causes; in the truest sense, RCA only works on special causes.

So, what does that leave us with? A simple lesson: an RCA activity has to be conducted whenever you think, even with some degree of probability, that a point could be due to a special cause of variation. To ascertain whether the cause genuinely was a special cause, all you have to do is look back into the history and see whether the cause has repeated. If it has, I don’t think you would even be tempted to call it a special cause of variation.

Remember one thing – while eliminating special causes is considered goal one for most Six Sigma projects, reducing common causes is another story you will have to consider. The advantage of dealing with common causes is that you can address them over the long run, provided the process stays in control and the common causes do not produce unacceptable effects.

Merely by looking at a chart, I don’t think I have ever been able to say whether a point has a special cause attached to it or not. Yes – this even applies to a control chart, which is widely considered the best tool for identifying special causes. The best way out is a diligently applied RCA and the simple act of going back and checking whether the cause repeated.



Encyclopedia of Systems and Control, pp. 2142–2150

Statistical Process Control in Manufacturing

  • O. Arda Vanli
  • Enrique Del Castillo
  • Reference work entry
  • First Online: 01 January 2021

Statistical process control (SPC) has been successfully utilized for process monitoring and variation reduction in manufacturing applications. This entry aims to review some of the important monitoring methods. We discuss fundamental process monitoring topics including Shewhart’s model, \(\bar X\) and R control charts, EWMA and CUSUM charts for monitoring small process shifts, process monitoring for autocorrelated data, and integration of statistical and engineering (or automatic) control techniques. We also illustrate the application of SPC in the more recently emerging areas including monitoring profiles, surfaces, and point cloud data sets in manufacturing. The goal is to provide readers from control theory, mechanical engineering, and electrical engineering an expository overview of the key topics in statistical process control.

  • Statistical process control
  • Profile monitoring
  • Point clouds
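
As a small illustration of one of the monitoring methods the entry reviews, the sketch below computes an EWMA chart with its time-varying control limits. The smoothing constant, target, sigma, and observations are assumptions invented for the example, not values taken from the entry.

```python
import numpy as np

def ewma_chart(x, target, sigma, lam=0.2, L=3.0):
    """EWMA statistics z_t and their time-varying control limits."""
    x = np.asarray(x, dtype=float)
    z = np.empty_like(x)
    previous = target
    for i, xi in enumerate(x):
        previous = lam * xi + (1 - lam) * previous      # EWMA recursion
        z[i] = previous
    t = np.arange(1, len(x) + 1)
    half_width = L * sigma * np.sqrt(lam / (2 - lam) * (1 - (1 - lam) ** (2 * t)))
    return z, target - half_width, target + half_width

observations = np.array([10.1, 9.9, 10.0, 10.2, 10.1, 10.4, 10.5, 10.6, 10.7, 10.8])
z, lcl, ucl = ewma_chart(observations, target=10.0, sigma=0.2)

for i in range(len(observations)):
    signal = "  <- signal" if not (lcl[i] <= z[i] <= ucl[i]) else ""
    print(f"t={i + 1:2d}  EWMA={z[i]:.3f}  limits=({lcl[i]:.3f}, {ucl[i]:.3f}){signal}")
```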




© 2021 Springer Nature Switzerland AG

Vanli, O.A., Castillo, E.D. (2021). Statistical Process Control in Manufacturing. In: Baillieul, J., Samad, T. (eds) Encyclopedia of Systems and Control. Springer, Cham. https://doi.org/10.1007/978-3-030-44184-5_258



Assignable Cause

Assignable causes of variation account for a large, often dominant, share of the known causes of routine variability. For this reason, it is worth trying to identify the assignable cause of variation so that its impact on the process can be eliminated, assuming of course that project managers and team members are fully aware of it. Assignable causes of variation are the result of events that are not part of the normal process. Examples of assignable causes of variability are (T. Kasse, p. 237):

  • incorrectly trained people
  • broken tools
  • failure to comply with the process

Identify data of assignable causes

The first step you need to take when planning data collection for assignable causes is to identify them and state your goals. This step ensures that the assignable cause data the project team gathers provides the answers needed to carry out the process improvement project efficiently and successfully. The characteristics that are desirable and most relevant for assignable cause data are, for example: relevant, representative, and sufficient. In the planning process for collecting data on assignable causes, the project team should draw and mark the chart that will present the findings before actual data collection begins. This step gives the project team an indication of what assignable cause data is needed (A. van Aartsengel, S. Kurtoglu, p. 464).

Types of data for assignable causes

There are two types of data for assignable causes: qualitative and quantitative. Qualitative data comes from descriptions of observations or measures of characteristics of the process results in narrative words and statements. Quantitative data on assignable causes, by contrast, is derived from observations or measures of process result characteristics expressed as measurable, numerical quantities (A. van Aartsengel, S. Kurtoglu, p. 464).

Determining the source of assignable causes of variation in an unstable process

If an unstable process occurs, the analyst must identify the sources of assignable cause variation. The source and the cause itself must be investigated and, in most cases, eliminated. Until all such causes are removed, the actual capability of the process cannot be determined and the process will not work as planned. In some cases, however, assignable cause variability can improve the result; the process must then be redesigned (W. S. Davis, D. C. Yen, p. 76). There are two ways to make a wrong decision about assignable cause variation: concluding that such a cause exists when it does not (or assessing it incorrectly), or failing to detect it when it does (N. Möller, S. O. Hansson, J. E. Holmberg, C. Rollenhagen, p. 339).

Examples of Assignable cause

  • Poorly designed process : A poorly designed process can lead to variation due to the inconsistency in the way the process is operated. For example, if a process requires a certain step to be done in a specific order, but that order is not followed, this can lead to variation in the results of the process.
  • Human error : Human error is another common cause of variation. Examples include incorrect data entry, incorrect calculations, incorrect measurements, incorrect assembly, and incorrect operation of machinery.
  • Poor quality materials : Poor quality materials can also lead to variation. For example, if a process requires a certain grade of material that is not provided, this can lead to variation in the results of the process.
  • Changes in external conditions : Changes in external conditions, such as temperature or humidity, can also cause variation in the results of a process.
  • Equipment malfunctions : Equipment malfunctions can also lead to variation. Examples include mechanical problems, electrical problems, and computer software problems.

Advantages of Assignable cause

One advantage of identifying the assignable causes of variation is that it can help to eliminate their impact on the process. Some of these advantages include:

  • Improved product quality : By identifying and eliminating the assignable cause of variation, product quality will be improved, as it eliminates the source of variability.
  • Increased process efficiency : When the assignable cause of variation is identified and removed, the process will run more efficiently, as it will no longer be hampered by the source of variability.
  • Reduced costs : By eliminating the assignable cause of variation, the cost associated with the process can be reduced, as it eliminates the need for additional resources and labour.
  • Reduced waste : When the assignable cause of variation is identified and removed, the amount of waste produced in the process can be reduced, as there will be less variability in the output.
  • Improved customer satisfaction : By improving product quality and reducing waste, customer satisfaction will be increased, as they will receive a higher quality product with less waste.

Limitations of Assignable cause

Despite the advantages of assigning causes of variation, there are also a number of limitations that should be taken into account. These limitations include:

  • The difficulty of identifying the exact cause of variation, as there are often multiple potential causes and it is not always clear which is the most significant.
  • The fact that some assignable causes of variation are difficult to eliminate or control, such as machine malfunction or human error.
  • The costs associated with implementing changes to eliminate assignable causes of variation, such as purchasing new equipment or hiring more personnel.
  • The fact that some assignable causes of variation may be outside the scope of the project, such as economic or political factors.

Other approaches related to Assignable cause

One of the approaches related to assignable cause is to identify the sources of variability that could potentially affect the process. These can include changes in the raw material, the process parameters, the environment, the equipment, and the operators.

  • Process improvement : By improving the process, the variability caused by the assignable cause can be reduced.
  • Control charts : Using control charts to monitor the process performance can help in identifying the assignable causes of variation.
  • Design of experiments : Design of experiments (DOE) can be used to identify and quantify the impact of certain parameters on the process performance.
  • Statistical Process Control (SPC) : Statistical Process Control (SPC) is a tool used to identify, analyze and control process variation.

In summary, there are several approaches related to assignable cause that can be used to reduce variability in a process. These include process improvement, control charts, design of experiments and Statistical Process Control (SPC). By utilizing these approaches, project managers and members can identify and eliminate the assignable cause of variation in a process.
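
As a toy illustration of the design-of-experiments approach listed above, the sketch below estimates the main effect of two process parameters from a 2x2 factorial experiment. The factor names and response values are invented for the example; a large estimated effect simply points to a parameter worth investigating as a potential assignable cause.

```python
import numpy as np

# coded factor levels (-1 / +1) for temperature and raw-material supplier
temperature = np.array([-1, +1, -1, +1])
supplier    = np.array([-1, -1, +1, +1])
response    = np.array([71.0, 74.5, 70.5, 78.0])   # e.g. process yield

# main effect = mean response at the high level minus mean at the low level
effect_temperature = response[temperature == 1].mean() - response[temperature == -1].mean()
effect_supplier    = response[supplier == 1].mean() - response[supplier == -1].mean()

print(f"temperature effect: {effect_temperature:+.2f}")
print(f"supplier effect:    {effect_supplier:+.2f}")
```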

  • Davis W. S., Yen D. C. (2019), The Information System Consultant's Handbook: Systems Analysis and Design, CRC Press, New York
  • Kasse T. (2004), Practical Insight Into CMMI, Artech House, London
  • Möller N., Hansson S. O., Holmberg J. E., Rollenhagen C. (2018), Handbook of Safety Principles, John Wiley & Sons, Hoboken
  • Van Aartsengel A., Kurtoglu S. (2013), Handbook on Continuous Improvement Transformation: The Lean Six Sigma Framework and Systematic Methodology for Implementation, Springer Science & Business Media, New York

Author: Anna Jędrzejczyk


Investigating for Failures


The FDA and other regulatory agencies consider the integrity of laboratory data to be an integral part of the drug manufacturing process. 1,2 Deficiencies of out-of-specification (OOS) investigations continue to be the major cause of warning letters in the pharmaceutical industry. The regulatory agencies require that OOS, out-of-trend (OOT), or aberrant results be investigated.3 An effective and compliant quality management system will ensure thorough, timely, unbiased, well-documented, scientifically sound investigations for OOS, OOT, and aberrant results, which will ensure, if a root cause can be assigned, the implementation of appropriate corrective and preventative actions. The challenge for many firms is having a clearly outlined and well-organized process that is well understood by analysts, supervisors, and manufacturing personnel and that provides for clear, concise, complete documentation. A lack of consistency in the approaches to investigations and root-cause analyses also leads to weak, inconclusive investigations.

The firm’s procedure for failure investigations should discuss the types of errors that may arise and how to deal with them, describe how to investigate failures, and cover timeliness of assessments, including the following: scope, roles and responsibilities, definitions, investigation procedure (phases of the investigation), documentation, corrective and preventative action, and trend analysis.


The focus of this article is an OOS investigation; however, the principles are applicable to all analytical laboratory investigations.

The scope of the investigation procedure should clearly state when the investigation is required, and define OOS, OOT, and aberrant results.

OOS results are most often generated due to laboratory or manufacturing-related errors, the setting of inappropriate specifications,4 or poor method development.5,6 OOT results may be within specification but show significant variation from historical results. Aberrant results include unexpected variability in analytical results and system suitability failures. For example, a % impurity value of 0.3 for early-phase API lots did not meet the criterion of 0.2, which was set based on research lots of 100 g quantities; a % assay value of 96.0 met the specification of 95.0-105.0 but was lower than historical values of 99.7, 99.4, 99.5, 98.9, and 99.5; and the solvent standard weight in a gas chromatographic headspace analysis was very small, such that the % RSD criteria for the standards were not met.
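
Using the % assay example above (a result of 96.0 against historical values of 99.7, 99.4, 99.5, 98.9, and 99.5), the sketch below shows one simple way an OOT check could be automated: compare the new result both to the specification and to a band derived from the historical results. The three-standard-deviation band is an illustrative choice, not a regulatory requirement.

```python
import statistics

historical = [99.7, 99.4, 99.5, 98.9, 99.5]     # prior % assay results
new_result = 96.0
spec_low, spec_high = 95.0, 105.0

mean = statistics.mean(historical)
s = statistics.stdev(historical)
trend_low, trend_high = mean - 3 * s, mean + 3 * s

in_spec = spec_low <= new_result <= spec_high
in_trend = trend_low <= new_result <= trend_high
print(f"historical mean = {mean:.1f}, s = {s:.2f}")
print(f"in specification: {in_spec}, within historical trend band: {in_trend}")
# here the result is in specification but out of trend, so it should be investigated
```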

The scope should also indicate the analytical results to which the procedure is applicable: release test results for components, intermediates, drug substance, and drug products; stability test results; reference standards; method transfers; and method validations. Research samples, in-process checks, and method development are not within the scope.

Roles and responsibilities

The roles and responsibilities of the testing unit—the analyst and supervisor in Quality Control and Quality Assurance (QA)—should be outlined. The analysts should ensure that they are trained on the test method, are aware of the potential problems that can occur during the testing process, and watch for problems that could contribute to inaccurate results. Process flow charts and chromatographic profiles unique to the material facilitate the analyst’s understanding of the testing. The supervisor is responsible for the objective, timely assessment of the investigation to determine if the results might be attributed to laboratory error or indicate problems in manufacturing, a poorly developed or poorly written test method, or inappropriate specifications. QA is responsible for the review, approval, and tracking of the investigation.

Definitions

Terms used in the investigation procedure, such as OOS, OOT, and aberrant result, should be clearly defined. The differences between retest and repeat testing, and corrective and preventative action (CAPA), should be described.

Investigation procedure

The investigation procedure should describe the phases of the investigation and recommended timing for completion of each phase. The investigation consists of the initial assessment, laboratory supervisor’s assessment, practical laboratory investigation, retesting, and conclusion of the investigation. Refer to the OOS investigation flowchart.

The initial assessment should focus on determining the assignable cause, so that laboratory error is confirmed or ruled out. The analyst should confirm the accuracy of test results, identify known errors, and carefully consider observations made during testing, to provide possible insight into the failures. The analyst should also check the data for compliance with test specifications before discarding test preparations. Checklists can be used to aid in identification of these errors (e.g., verification of identity of samples, standards, reagents, and correct preparation of samples) and have the advantage of maintaining consistency in initial assessments. The analyst is responsible for initiating and documenting the investigation, and reporting the occurrence to the laboratory supervisor and QA within a specified time frame.

The laboratory supervisor’s assessment should be objective and timely and include a review of the supporting documentation and a discussion with the analyst to confirm the analyst’s knowledge of and performance of the correct test method. Potential causes of the suspect result should be identified and a plan documented to identify and confirm or rule out a potential cause by conducting a practical laboratory investigation.

QA is responsible for assigning a unique identifier to the investigation at the outset, reviewing and tracking the investigation, and approving the completed investigation and CAPA.

Conducting the practical laboratory investigation

The purpose is to confirm or determine the assignable cause through additional laboratory work. The documented plan should be executed and the results evaluated. It must be noted that the results obtained from the practical investigation are not “reportable results” and are for the purpose of the investigation only. Examination of the retained standard and sample solutions should be performed as part of the investigation.

If an assignable cause is identified, then the original suspect result is invalidated. The error is corrected, results from all affected samples are assessed, and the test is repeated. The result from the repeat test is reported and the investigation concluded. When evidence of laboratory error remains unclear, a full-scale investigation should be conducted.

Expansion of investigations

When the initial assessment does not determine that laboratory error caused the failure and test results appear to be accurate, a full-scale investigation should be conducted. Dependent on the specifics of the failure investigation, the investigation might consist of review of the manufacturing process, and stability results of the lot for previous time points and of other lots, if any. Results of other tests performed on the lot should also be assessed. The investigation might also include additional laboratory testing. The goal is to determine the root cause, followed by implementation of corrective actions prior to any retests of the lot. The longterm action should be a preventative action to decrease the incidence of the error or failure.

Review of manufacturing process or process external to the originator laboratory should involve affected departments, and an evaluation by the multidisciplinary team should be coordinated by QA. If this part of the investigation confirms the OOS result and identifies the root cause, the investigation may be completed.

Retesting is performed to confirm or not confirm the test result. A full-scale investigation may include additional laboratory testing when the initial assessment and practical laboratory investigation fail to clearly identify the cause of the suspect results. The firm’s procedure should clearly state the number of samples and replicates to be used in the retest, prior to start of the retest. The criteria for evaluating the results should also be predefined in the plan. This provides an unbiased approach and preempts the perception of testing into compliance.

The procedure should state what results are to be reported. If an assignable cause has been identified, the original results should be invalidated and the retest results reported. If an assignable cause is not identified, suspect results should not be invalidated. There is no justification for rejecting a suspect result and accepting a passing result. All test results, both passing and suspect, should be reported.

Conclusion of investigation is the final step after an assessment of all the supporting information. QA then dispositions the material.

Documentation

The investigation procedure should describe what information needs to be documented: the reason for the investigation, including what happened, when, and where; initial assessment including checklists; the laboratory supervisor’s assessment; details of the investigation plan; and executed practical investigation, retests, and conclusion of the investigation. The procedure should clearly state where the information is to be recorded and delineate at what stage reviews and approvals by the laboratory supervisor and QA are required.

Root causes and corrective and preventative action

Many firms will note the root cause as “analyst error” without drilling down to the actual root cause, thus missing the opportunity to implement a more relevant preventative action and build a robust, quality laboratory system.

The goal of the investigation is to determine a root cause. This will in turn trigger corrective actions to address the immediate issue, and preventative actions that are aimed at reducing the frequency of failures and/or errors in the long term; for example, the failure of an assay is tracked to an incorrect amount of material weighed. Was the weighing of the incorrect amount due to poor analytical technique? Was the analyst not trained in basic laboratory skills? The corrective action would be to ascertain that the analyst was proficient in pipette use, prior to reweighing the sample. In this case, the preventative action might be to evaluate the training program for laboratory personnel. Or was the pipette out of tolerance? What then was the frequency of calibration? Was the pipette subjected to heavy daily use? The corrective action to address the immediate issue would be to use another pipette that is in calibrated status. The preventative action would be to determine if the pipette has heavy daily use, and to increase the frequency of calibration to every six or three months, to better ensure that the pipette was “within tolerance.”

Trend analysis

A periodic review of trend analysis provides invaluable information for improvements to the laboratory system. It highlights trends in failure investigations by instrumentation, method, analyst, and product.

In conclusion, the best practice, undoubtedly, is to minimize the generation of failures. Careful description of test methods and reportable values, including appropriate system suitability parameters, can help prevent test result failures and anomalies. Scientifically sound test method development and validation approaches, a well-designed instrument/equipment qualification, and a robust metrology program, combined with qualification and training of analysts in basic laboratory skills and analytical techniques—and unambiguous, well-written test methods along with a clear and comprehensive investigation procedure—will help minimize errors and failures.

  • 21 Code of Federal Regulations Part 211 - Current Good Manufacturing Practice for Finished Pharmaceuticals, April 1996.
  • 21 Code of Federal Regulations Part 58 - Good Laboratory Practice for Non- Clinical Laboratory Studies, April 2010.
  • Guidance for Industry Investigating Out of Specification (OOS) Test Results for Pharmaceutical Production, October 2006.
  • ICH Q6A Specifications: Test Procedures and Acceptance Criteria for New Drug Substances and New Drug Products: Chemical Substances, December 2000.
  • ICH Q2A Text on Validation of Analytical Procedures, March 1995.
  • ICH Q2B Validation of Analytical Procedures: Methodology, May 1997.

Violet M. Carvalho , director, Quality Control, Arena Pharmaceuticals, can be reached at [email protected] or by phone at 858-453-7200 ext 1734.


Are You Invalidating Out-of-Specification (OOS) Results into Compliance?

LCGC North America


  • Phase 1 is the laboratory investigation which is to determine if there is an assignable cause for the analytical failure. This is conducted under the auspices of Quality Control, and should be split into two parts. First, the analyst checks their work to identify any gross errors that have occurred, and correct them with appropriate documentation. If this does not identify the cause, the analyst and their supervisor initiate the OOS investigation procedure looking in more detail and determining whether the cause is within the subject of the FDA OOS guidance. If a root cause cannot be identified, then the investigation is escalated to Phase 2.
  • Phase 2 is under the control of Quality Assurance to coordinate the work of both production and the laboratory; there are two elements here: Phase 2a and 2b.
  • In Phase 2a, if no assignable cause is found in the laboratory, then the investigation looks to see if there is a failure in production. If there is no root cause in production, then the investigation moves back to the laboratory.
  • In Phase 2b, different hypotheses are formulated to try to identify an assignable cause, and a protocol is generated before any laboratory work is undertaken. Here, resampling can be undertaken if required.

Owing to space, we will only consider Phase 1 laboratory investigations in this article.
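
Here is a minimal sketch of the escalation logic described above, written as a simple decision function. The phase names follow the article, but the boolean inputs are hypothetical flags an investigator would supply after each stage of the review.

```python
# escalation logic for an OOS investigation, expressed as a decision function;
# the inputs are hypothetical flags, not part of any regulatory procedure
def next_step(gross_error_found: bool,
              phase1_cause_found: bool,
              production_cause_found: bool) -> str:
    if gross_error_found:
        return "Analyst corrects and documents the gross error"
    if phase1_cause_found:
        return "Phase 1 closes: assignable cause documented by the laboratory"
    if production_cause_found:
        return "Phase 2a closes: root cause found in production"
    return ("Phase 2b: formulate hypotheses, write a protocol before any "
            "laboratory work, resample if required")

# example: no laboratory or production cause was found, so escalate to Phase 2b
print(next_step(gross_error_found=False,
                phase1_cause_found=False,
                production_cause_found=False))
```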

OOS Definitions

We have been talking about OOS, but we have not defined this and any associated terms, so let us see what definitions we have. Now here is where it gets interesting. You would think that, in an FDA guidance focused on OOS investigations, the term would be defined early in the document. I mean, logic would dictate this, would it not? Not a chance! We must wait until page 10 to find the definition, and then it is found, not in the main body of text, but in a small font footnote! Your tax dollars at work. Not only that, it is totally separated from the discussion about the individual results from an analysis that is found floating in the middle of page 10. Your tax dollars at work, again. There are the following definitions used in the FDA OOS guidance document:

  • Reportable Result : The term refers to a final analytical result. This result is appropriately defined in the written, approved test method and derived from one full execution of that method, starting from the sample. It is compared with the specification to determine the pass/fail status of a test (16). This is easy to understand; it is a one-for-one comparison of the analytical result with the specification, and the outcome is either pass or fail. Maybes are not allowed.
  • Individual Result : To reduce variability, two or more aliquots are often analyzed, with one or two injections each, and all the results are averaged to calculate the reportable result. It may be appropriate to specify in the test method that the average of these multiple assays is considered one test and represents one reportable result. In this case, limits on acceptable variability among the individual assay results should be based on the known variability of the method and should also be specified in the test methodology. A set of assay results not meeting these limits should not be used (16).
  • Therefore, the individual results must have their own, wider limits, owing to the variance associated with a single determination. These are in addition to the product specification limits for the reportable result discussed above. Note that individual results are NOT compared with the product specification.
  • Out-of-Specification (OOS) Result : A reportable result outside of specification or acceptance criteria limits. As we are dealing with specifications, OOS results can apply to testing of raw materials, starting materials, active pharmaceutical ingredients, finished products, and in-process testing. However, if a system suitability test fails, this will not generate an OOS result, as the whole run would be invalidated; however, there needs to be an investigation into the failure (16).
  • Out-of-Trend (OOT) Result : Not an out-of-specification result, but rather a result that does not fit the expected distribution of results. An alternative definition is a time-dependent result that falls outside a prediction interval or fails a statistical process control criterion (17). This can include a single result outside of acceptance limits for a replicate used to calculate a reportable result. If investigated, the same rules as for OOS investigations apply. Think not of regulatory burden but of good analytical science here. Is it better to investigate and find the reason for an OOE result, or to wait until you have an OOS result that might initiate a batch recall?
  • Out of Expectation (OOE) Result : An atypical, aberrant, or anomalous result within a series of results obtained over a short period of time that is still within the acceptable range of the specification.

OOS of Individual Values and Reportable Results

To understand the relationship between the reportable result and the individual values, some examples are shown in Figure 2, courtesy of Chris Burgess. The upper and lower specification and individual limits for this procedure are shown as horizontal lines. You’ll see that the individual limits are wider than the specification limits, as the variance of a single value is greater than that of a mean result. There are six examples shown in Figure 2, and, from left to right, we have the following:

an assignable cause

  • Close individual replicates with the mean in the middle of the specification range: an ideal result!
  • The individual results are closely spread, and, although one replicate is outside the specification limit, it is inside the individual limit, and therefore this is a good result.
  • The individual values are relatively close, and all are within the individual limits, but the reportable result is out of specification.
  • One of the individual results is outside of the individual result limit, which means that there is an OOS result, although the reportable result is within specification.

Examples 5 and 6 would be OOT or OOE results respectively, but they are not OOS. Here, the variance of the individual results is wider than expected and may indicate that there are problems with the procedure. It would therefore be prudent to investigate the reasons for this rather than ignore them. We will focus on OOS results only here.
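
To make the pass/fail logic behind these examples concrete, here is a minimal Python sketch. The specification limits, individual limits, and replicate values are illustrative assumptions, not numbers taken from Figure 2.

```python
# Minimal illustrative sketch (not code from the article): classify one analysis
# against hypothetical specification and individual-result limits. The reportable
# result (the mean of the individual values) is compared with the specification;
# each individual value is compared only with the wider individual limits.

from statistics import mean

SPEC_LOW, SPEC_HIGH = 95.0, 105.0   # illustrative specification limits (% of label claim)
IND_LOW, IND_HIGH = 93.0, 107.0     # illustrative, wider limits for individual values

def classify(replicates):
    reportable = mean(replicates)
    if not SPEC_LOW <= reportable <= SPEC_HIGH:
        return "OOS: reportable result outside specification"      # cf. Example 3
    if any(not IND_LOW <= r <= IND_HIGH for r in replicates):
        return "OOS: individual value outside individual limits"   # cf. Example 4
    return "pass"

print(classify([99.8, 100.2, 100.1]))   # Example 1: ideal result -> pass
print(classify([101.0, 104.9, 106.2]))  # Example 2: one value above spec but inside individual limits -> pass
print(classify([104.0, 105.5, 106.4]))  # Example 3 -> OOS (reportable result)
print(classify([92.0, 96.0, 100.0]))    # Example 4 -> OOS (individual value)
```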

Process Capability of an Analytical Procedure

Do you know how well your analytical procedures perform? If not, why not? This information provides you with valuable evidence that you can use in OOS investigations, and it is also a regulatory requirement, as mentioned earlier (12). For chromatographic methods, individual calculated values and reportable results should be plotted over time, with the aim of showing how a specific method performs and how variable it is. There are two main types of plot that can be used:

  • Shewhart plots, with the upper and lower specification and individual limits and the results plotted over time, as illustrated in Figure 2. Both the individual values and reportable results need to be plotted; if only the latter are used, then the true performance can be missed, as the variance is lost when averaging. This shows the process capability over time.
  • Cusum, or cumulative sum, is a control chart that is sensitive to changes in the performance of a method, often flagging them before OOS results are generated. When the direction of the plot alters, this often indicates that a change influencing the procedure has occurred, and the reason should be investigated; it may be as subtle as a new batch of solvent or a change of column. A minimal sketch of the idea follows this list.
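
As a rough illustration of how a cumulative sum chart accumulates a small, sustained shift before any single result fails, here is a minimal tabular CUSUM sketch. The target value, slack value (k), decision interval (h), and results are hypothetical values chosen for illustration, not data from the article.

```python
# Minimal illustrative CUSUM sketch (hypothetical data, not from the article).
# A tabular CUSUM accumulates deviations from the target; a small sustained
# shift makes one of the sums grow steadily, often before any single result
# is out of specification.

target = 100.0   # expected (target) assay value, hypothetical
k = 0.25         # slack value (allowance), hypothetical
h = 2.0          # decision interval, hypothetical

results = [100.1, 99.8, 100.2, 99.9,           # in-control results
           100.6, 100.7, 100.5, 100.8, 100.9]  # small sustained upward shift

c_plus, c_minus = 0.0, 0.0
for i, x in enumerate(results, start=1):
    c_plus = max(0.0, c_plus + (x - target) - k)    # accumulates upward drift
    c_minus = max(0.0, c_minus + (target - x) - k)  # accumulates downward drift
    flag = "  <-- investigate" if c_plus > h or c_minus > h else ""
    print(f"run {i}: result={x:.1f}  C+={c_plus:.2f}  C-={c_minus:.2f}{flag}")
```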

If your analytical data are similar to Examples 5 and 6 in Figure 2, then you could have a non-robust analytical procedure that reflects poorly on method development and validation (7); either that, or you have poorly trained staff. Prevention of OOS results is better than the investigation of them!

This Never Happens in Your Laboratory…

Even with all the automation and computerization in the world, there is still the human factor to consider. Consider the following situation. The analytical balance is qualified and has been calibrated, the reference standard is within expiry, the weight taken is within limits, and the contents of the weighing vessel are transferred to a volumetric flask. One of three things could happen:

  • The material is transferred to the flask correctly, and the solution is made up to volume as required. All is well with the world.
  • During transfer, some material is dropped outside the flask, but the analyst still dissolves the material and makes the solution up to volume.
  • All material is transferred to the flask correctly, but the flask is overfilled past the graduation mark.

At this point, only the analyst preparing the reference solution stands between your organization and a data integrity disaster. The analyst is the only person who knows that options 2 and 3 are wrong. What happens next depends on several factors:

  • Corporate data integrity policies and training
  • The open culture of the laboratory that allows an individual to admit their mistakes
  •  The honesty of the individual analyst
  • Laboratory metrics; for example, Turn Around Time (TAT) targets that can influence the actions of individuals
  • The attitude of the supervisor and laboratory management to such errors.

STOP! This is the correct and only action by the analyst. Document the mistake contemporaneously and repeat the work from a suitable point (in this case, repeat the weighing). It is simpler and easier to repeat now than investigate later.

But what actually happens depends on the factors described above. Preparation of a reference standard is one area where the actions of an individual analyst can compromise the integrity of the data generated for one or more analytical runs. If the mistake is ignored, the possible outcomes could be an out-of-specification result or the release of an under- or over-strength batch. In the subsequent investigation, unless the mistake is mentioned, it may not be possible to identify an assignable cause.

What is the FDA’s View of Analyst Mistakes?

Hidden in the Responsibilities of the Analyst section in the FDA’s Guidance for Industry on Investigating OOS Results is the following statement (16):

If errors are obvious, such as the spilling of a sample solution or the incomplete transfer of a sample composite, the analyst should immediately document what happened.

Analysts should not knowingly continue an analysis they expect to invalidate at a later time for an assignable cause (i.e., analyses should not be completed for the sole purpose of seeing what results can be obtained when obvious errors are known).

The only ethical option open to an analyst is to stop the work, document the error, and repeat the work from a suitable point.

Do You Know Your Laboratory OOS Rate?

According to the FDA:

Laboratory error should be relatively rare. Frequent errors suggest a problem that might be due to inadequate training of analysts, poorly maintained or improperly calibrated equipment, or careless work (16).

This brings me to the first of two metrics: Do you know the percentage of OOS results across all tests that are performed in your laboratory? If not, why not?

Remember that there are three main groups of analytical procedures that can be used for release testing, ranging in complexity as follows:

  • Observation (including, but not limited to, appearance, color, and odor, for example), which is relatively simple to perform. If there is an OOS result, it is more likely to be a manufacturing issue than a laboratory one (such as particles in the sample or change in expected color). However, laboratory errors in analyses of this type will be extremely rare.
  • Classical wet chemistry (including, but not limited to, melting point, titration, and loss on drying), involving a variety of analytical techniques, often with manual data recording unless automated (autotitration). There is more likelihood of an error, but many mistakes, such as transcription or calculation errors, should be identified and corrected during second-person review.
  • Instrumental analyses (including, but not limited to, identity, assay, potency, and impurity) using spectrometers and chromatographs for example. Here, the procedures can be more complex, and data analysis requires trained analysts.

As you move down this list, the likelihood of an OOS result increases with the complexity of the analytical procedure and the amount of human data interpretation involved. Hence the emphasis on instrumental methods, such as chromatography (1) and spectroscopy (18), in inspections.

SST Failure Does Not Require an OOS Investigation

Under analyst responsibilities, there is an FDA “get out of jail free” card for chromatographic runs where the SST injections fail to meet their predefined acceptance criteria:

Certain analytical methods have system suitability requirements, and systems not meeting these requirements should not be used. …. in chromatographic systems, reference standard solutions may be injected at intervals throughout chromatographic runs to measure drift, noise, and repeatability.

If reference standard responses indicate that the system is not functioning properly, all of the data collected during the suspect time period should be properly identified and should not be used. The cause of the malfunction should be identified and, if possible, corrected before a decision is made whether to use any data prior to the suspect period (16).

Here’s where technology can help. Some CDS applications allow users to define acceptance criteria for SST injections. If one or more SST injections fail these criteria, then the run stops automatically. This saves the analyst from trying to determine which data from the run could be used, because if the run is stopped before the samples are injected, then there are no sample data available. If this function is used, it must be validated to show that it works: specified in the user requirements specification (URS) and verified during user acceptance testing (UAT). There will also be corresponding entries in the instrument log book documenting the initial problem and the steps taken to resolve the issue (11,19,20).
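
The exact mechanism is vendor-specific, so the following is only a conceptual sketch of such SST gating, with hypothetical acceptance criteria and function names; it is not the API or configuration of any real CDS.

```python
# Conceptual sketch only: hypothetical SST gating logic, not the API of any real
# chromatography data system. If the system suitability injections fail their
# acceptance criteria, no sample injections are run, so no sample data exist
# that could later be selected from a suspect run.

# Hypothetical SST acceptance criteria
SST_CRITERIA = {
    "rsd_percent_max": 2.0,     # repeatability of replicate standard injections
    "tailing_factor_max": 2.0,
    "plate_count_min": 2000,
}

def sst_passes(sst_results: dict) -> bool:
    return (sst_results["rsd_percent"] <= SST_CRITERIA["rsd_percent_max"]
            and sst_results["tailing_factor"] <= SST_CRITERIA["tailing_factor_max"]
            and sst_results["plate_count"] >= SST_CRITERIA["plate_count_min"])

def run_sequence(sst_results: dict, sample_queue: list) -> list:
    """Stop the sequence before any samples are injected if the SST fails."""
    if not sst_passes(sst_results):
        print("SST failed: sequence stopped, no samples injected; log and investigate.")
        return []
    print("SST passed: injecting samples.")
    return sample_queue  # in a real system, the samples would now be injected

# Example: a failing repeatability result stops the run before any samples
run_sequence({"rsd_percent": 3.4, "tailing_factor": 1.2, "plate_count": 5200},
             ["sample-01", "sample-02"])
```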

What is an OOS Investigation?

An OOS investigation is triggered by an analyst when the reportable result is outside of the specification limits, as shown in Figure 2. The analyst informs their supervisor, and this should begin the laboratory investigation, following the laboratory OOS procedure. The responsibilities of both individuals are presented in Table II, which is copied from the FDA OOS guidance document. The FDA is very specific in listing the responsibilities of both the analyst who performs the analysis and the laboratory supervisor who conducts the investigation. The guidance document should be the source for a laboratory SOP detailing what will be done in a laboratory OOS investigation.


The investigation begins by checking the chromatographer’s knowledge of the analytical procedure and that the right procedure was used. This is followed by ensuring that the sampling was performed correctly, that the right sample was analyzed, and so on throughout the analysis. Areas to check for potential errors and assignable causes are shown in Figure 4 and Table III, and have been derived from the FDA OOS guidance.


Don’t Do This in Your Laboratory!

The Lupin plant in Nagpur is the source of this example of how not to undertake an OOS laboratory investigation, quoted from the 483 form that was issued in January 2020 (21). Citation 2 is an observation for failing to follow the investigation SOP, but we will look at citation 1, where the details of inadequate laboratory investigations are documented. Some parts of the citation are heavily redacted, so this is a best attempt at reconstructing the investigation that was carried out:

  • There was an OOS result from a dissolution test.
  • The OOS was hypothesized as being due to an analyst transposing samples from different time points.
  • The original sample solutions were remeasured, but the results appeared to be similar to the original.
  • A comment, redacted in the 483, was added to the record; it appears to document an analyst’s statement that some of the solution was spilled and that this would account for the discrepancy in results. This is a very convenient spillage.
  • However, the comment was not made by the analyst who performed the original work. A second analyst was told by his supervisor that the first analyst had said he had made a mistake, and the second analyst documented this, as directed by the supervisor.
  • There was no documentation or corroborating evidence provided to support this, or that an interview with the original analyst ever occurred.
  • A substitution of a solution was made which, when retested, passed. Well, what a surprise!
  • The original test results were invalidated, and the passing results used to release the product.
  • QC and QA personnel signed off the investigation, even though they knew of the substitution of the solution and potential manipulation.
  • The original analyst was unavailable during the inspection.
  • No deviation or CAPA was instigated.
  • No definitive root cause of the OOS result was ever determined.

Any wonder that a 483 observation was raised?

FDA Guidance on Quality Metrics

The second metric that is important in OOS investigations is a topic in the FDA Draft Guidance on Quality Metrics (22), which emphasizes the importance of correct OOS investigations. There are three metrics covering manufacturing and quality control, but there is only one metric for QC: the invalidated OOS rate, defined as follows:

Invalidated Out-of-Specification (OOS) Rate (IOOSR) as an indicator of the operation of a laboratory. IOOSR = the number of OOS test results for lot release and long-term stability testing invalidated by the covered establishment due to an aberration of the measurement process divided by the total number of lot release and long-term stability OOS test results in the current reporting timeframe (22).

What is important is that the rate covers not only batch release but also stability testing. The rationale for using the invalidated OOS rate can be seen in Table I and the corresponding 483 observations and warning letters (4,5). One aim is for the FDA to conduct risk-based inspections; if a firm presents a low regulatory risk, the agency will rely on these quality metrics to extend the time between inspections. Woe betide a firm that massages these metrics.
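
As a hypothetical worked example of the metric (the counts are invented purely for illustration):

```python
# Hypothetical worked example of the IOOSR metric; the counts are invented.
invalidated_oos = 4   # OOS results invalidated as aberrations of the measurement process
total_oos = 25        # all lot-release and long-term stability OOS results in the period

ioosr = invalidated_oos / total_oos
print(f"IOOSR = {invalidated_oos}/{total_oos} = {ioosr:.0%}")  # IOOSR = 4/25 = 16%
```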

Outsourced Analysis?

If your organization outsources manufacturing and QC analysis, how should you monitor the work? From a QC perspective, the contract facility should notify your organization of any OOS result. You must have oversight of any laboratory investigation on your products or analyses to ensure that the FDA criteria for an investigation, as outlined above, are met. In addition, if the FDA is interested in a metric of the percentage of invalidated OOS results, so are you. You should have these figures for your own work, but also across the whole of the outsourced laboratory. Therefore, you should review all OOS investigations on your products, either via video conference or on site during any supplier audits. In the words of that great data integrity expert, Ronald Reagan: trust but verify.

Scientifically sound OOS laboratory investigations are an essential part of ensuring data integrity. Outlined here are the key requirements for an OOS investigation to find an assignable or root cause so that a result can be invalidated. Note that the FDA and other regulatory authorities take a very keen interest in invalidated OOS results, especially where analyst error is continually cited as the cause of the OOS. Your laboratory should know its OOS rate as well as the percentage of OOS results invalidated.

Acknowledgment

I would like to thank Chris Burgess for permission to use Figure 2 in this article and for comments made in preparation of this article.

References

(1) R.D. McDowall, Data Integrity and Data Governance: Practical Implementation in Regulated Laboratories (Royal Society of Chemistry, Cambridge, United Kingdom, 2019).
(2) FDA Warning Letter, Tismore Health and Wellness Pty Limited (Food and Drug Administration, Silver Spring, Maryland, 2019).
(3) FDA Warning Letter, Shriram Institute for Industrial Research (Food and Drug Administration, Silver Spring, Maryland, 2020).
(4) FDA Warning Letter, Lupin Ltd. (Food and Drug Administration, Silver Spring, Maryland, 2017).
(5) FDA 483 Observations: Lupin Ltd. (Food and Drug Administration, Silver Spring, Maryland, 2017).
(6) FDA Warning Letter, Lupin Ltd. (Food and Drug Administration, Silver Spring, Maryland, 2019).
(7) R.D. McDowall, LCGC North Am. 38(4), 233–240 (2020).
(8) R.J. Davis, “Judge Wolin’s Interpretation of Current Good Manufacturing Practice Issues Contained in the Court’s Ruling United States vs. Barr Laboratories,” in Development and Validation of Analytical Methods, C.L. Riley and T.W. Rosanske, Eds. (Pergamon Press, Oxford, United Kingdom, 1996), p. 252.
(9) Barr Laboratories: “Court Decision Strengthens FDA’s Regulatory Power” (1993). Available from: https://www.fda.gov/Drugs/DevelopmentApprovalProcess/Manufacturing/ucm212214.htm .
(10) USP General Chapter <1010> Outlier Testing (United States Pharmacopoeia Convention Inc., Rockville, Maryland, 2012).
(11) 21 CFR 211, Current Good Manufacturing Practice for Finished Pharmaceutical Products (Food and Drug Administration, Silver Spring, Maryland, 2008).
(12) EudraLex, Volume 4: Good Manufacturing Practice (GMP) Guidelines, Chapter 6: Quality Control (European Commission, Brussels, Belgium, 2014).
(13) Inspection of Pharmaceutical Quality Control Laboratories (Food and Drug Administration, Rockville, Maryland, 1993).
(14) FDA Compliance Program Guide CPG 7346.832, Pre-Approval Inspections (Food and Drug Administration, Silver Spring, Maryland, 2019).
(15) R.D. McDowall, Spectroscopy 34(12), 14–19 (2019).
(16) FDA Guidance for Industry, Out-of-Specification Results (Food and Drug Administration, Silver Spring, Maryland, 2006).
(17) C. Burgess, personal communication.
(18) P.A. Smith and R.D. McDowall, Spectroscopy 34(9), 22–28 (2019).
(19) EudraLex, Volume 4: Good Manufacturing Practice (GMP) Guidelines, Chapter 4: Documentation (European Commission, Brussels, Belgium, 2011).
(20) R.D. McDowall, Spectroscopy 32(12), 8–12 (2017).
(21) FDA 483 Observations: Lupin Ltd. (Food and Drug Administration, Silver Spring, Maryland, 2019).
(22) FDA Guidance for Industry, Submission of Quality Metrics Data, Revision 1 (Food and Drug Administration, Rockville, Maryland, 2016).

R.D. McDowall is the director of R.D. McDowall Limited in the UK. Direct correspondence to: [email protected]


OOS (Out of Specification)


One of the important aspects of drug and drug product development is providing specifications that the product must conform to in order to be considered acceptable for its intended use. According to the definition by the International Conference on Harmonization (ICH), specifications refer to “a list of tests, references to analytical procedures, and appropriate acceptance criteria, which are numerical limits, ranges, or other criteria for the tests described.” This is a critical quality standard: the manufacturer proposes and justifies the conditions that form the basis for approval by regulatory authorities.

The specifications are detailed during the product design stage, clearly defining the objectives the product is expected to achieve using specific components, containers, closures, in-process materials, and finished products.

To ensure that the product conforms to the specifications, laboratory testing is mandated by the CGMP regulations (§§ 211.160 and 211.165). It ensures not only that the product performs as required but also that all components and other aspects mentioned above meet the specification criteria.

When the results of the laboratory test show the product values to be outside the specifications or acceptance criteria, it is said to be Out-of-Specification, or OOS. Further action, based on a root cause analysis, needs to be taken to prevent rejection of the batch or a warning letter from the FDA.

What is OOS (Out of Specification)?

During the course of a drug or drug product’s development, and as the product nears completion, it needs to be tested to ensure that it performs as expected within the specified limits mentioned in the compendia, drug master file, or drug application. When it does not, and falls outside the specified limits, it is said to be OOS, or out of specification. If this happens often, it is an indication that the manufacturing and analytical procedures are not in control. It can lead to frequent customer complaints and the rejection of commercial batches. As a result, the pharmaceutical business will face heavy inventory losses. It may also compromise the safety of patients and their handlers. Therefore, any occurrence of an OOS result must be investigated and the root cause addressed.

Typically, the specified limits are detailed in documents such as the compendia, drug master file, or drug application. This happens right at the product design stage after the product has been conceptualized and the details of the nature of the product, its goals, and the raw materials to be used are specified. The testing criteria are also laid down at this stage, explaining how the product should be tested and who will be testing it.

From the time of design to the time of manufacturing, the design undergoes several changes. Due to these changes, or due to process errors, deviations may be introduced at the time of manufacturing too. This can cause Out of Specification results. Some of the common factors behind Out of Specification (OOS) results include:

  • Deviations during the product manufacturing process
  • Errors during testing due to incorrect procedure
  • Errors caused by malfunctioning analytical equipment

Therefore, to understand whether the error is due to the product not conforming to the specifications or due to other factors, a root cause analysis must be performed to identify the true cause. The OOS causes can be classified as:

  • Assignable: the error is identified.
  • Non-assignable: the error is not identified.

On observing an OOS result, a preliminary laboratory investigation (Phase I) is recommended to identify the assignable cause. Based on the findings, further investigations will be conducted. Even if no error is found, the batch may still need to be assessed by QA.

When an OOS is reported, the QC team communicates these decisions to the designated personnel, who are responsible for classifying the OOS cause as assignable or non-assignable.


Origin of OOS

Statistically speaking, nearly five percent of lots and tests are likely to fall outside the accepted limits, even if the product does not deviate from specifications. Often, manufacturers also apply retesting procedures and averages incorrectly. This could be due to a lack of skill in the application of statistical methods, or an attempt to avoid discarding lots for unethical reasons. The result is repeated testing of lots until a sample is shown to conform to the specification range, with one passing result considered enough to accept the lot. This approach is called ‘testing into compliance’.
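
As a purely hypothetical illustration of how repeated testing inflates the chance of a statistical failure, consider the sketch below. The per-test probability and the number of tests are assumptions chosen for illustration; this is not necessarily how the five percent figure above was derived.

```python
# Hypothetical illustration: even a small per-test chance of a purely statistical
# failure adds up when a lot is released against several tests.
# Both values below are assumed for illustration only.

per_test_false_failure = 0.01   # assumed probability a conforming result falls outside limits
tests_per_lot = 5               # assumed number of release tests per lot

p_at_least_one_oos = 1 - (1 - per_test_false_failure) ** tests_per_lot
print(f"P(at least one chance OOS per lot) = {p_at_least_one_oos:.1%}")  # about 4.9%
```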

The turning point in the history of Out of Specification (OOS) came in 1993, when Barr Laboratories lost a lawsuit and prompted the FDA to interpret the rules differently. The FDA had initiated a regulatory lawsuit against the company, asking it to recall millions of its tablets and other drug products for not meeting quality requirements. Called the ‘Barr Decision’, this was a precedent-setting ruling that allowed the FDA to act against products that did not meet quality requirements, and it later informed the FDA Guidance on Investigating OOS Results.

The FDA Guidance tries to address the common misunderstanding of important statistical principles and requires an investigation to be conducted before testing a replicate sample.


FDA’s Latest Guidance for OOS Results

FDA regulation (§ 211.192) requires an investigation to be conducted every time there is an OOS test result. The investigation should aim to identify the root cause of the OOS result, which could be an aberration of the measurement process or of the manufacturing process. Even if a batch is rejected because of an Out of Specification (OOS) result, it should still be investigated to understand whether other batches of the same drug product, or other products, are similarly affected. Further, the investigation, conclusions, and follow-up need to be documented (§ 211.192).

For the investigation to be meaningful, it should be:

  • Well-documented
  • Scientifically sound

The investigation is conducted in two parts. The first part is Phase I, in which it is determined whether the correct methodology was followed as per the standard test procedure (STP), calibrated instruments were used, the analysts were trained, and so on. If the Out of Specification (OOS) result stems from a laboratory error, the analysis should be repeated as defined in the SOP. This typically includes a minimum of not less than six replicates by two different analysts: one set by the original analyst, followed by a set by an experienced analyst.

If the OOS is not because of a laboratory error, then a full-scale, Phase II investigation is recommended. In this phase, all critical aspects of the operations, including manufacturing, packaging, and sampling/re-sampling, should be covered, as defined in the SOP.

If Phase I reveals an assignable cause, a repeat analysis, as established in the SOP, should be conducted. If no assignable cause is identified, QA should recommend further investigation and batch disposition.

Phase I: Laboratory Investigation to Identify and Assess OOS Test Results

In Phase-I, the accuracy of laboratory data should be assessed before test preparations are discarded. This helps eliminate possible laboratory errors or instrument malfunctions as the cause of OOS. If no meaningful errors are identified in the analytical method used to arrive at the data, a full-scale out-of-spec investigation should be initiated. In the case of contract laboratories, the manufacturing company’s quality control unit (QCU) should be informed of the data, findings, and supporting documentation by the laboratory so that they can then initiate the full-scale OOS investigation.

Phase II: Full-Scale OOS Investigation

When the initial assessment establishes that laboratory error is not responsible for the Out of Specification (OOS) result and that the test results are accurate, a full-scale, Phase-II OOS investigation should be conducted. The investigation should conform to a predefined procedure and include a review of the production process and/or additional laboratory work. The aim is to determine the root cause of the OOS result and initiate corrective and preventative action (CAPA) .

Some of the steps involved in the investigation include:

  • A review of production and sampling procedures
  • Additional laboratory testing
  • Evaluation of the impact of OOS result(s) on batches that have been distributed already.

How to Handle Out of Specification (OOS) Results?


Once the investigation is complete and the results are available, the QCU should evaluate them, determine the quality of the batch, and make a decision about its release, using the SOPs as its guideline.

Even if a batch is rejected, further tests should be performed to identify the cause of the failure. This will help take corrective and preventive action to avoid similar OOS results in the future.

Interpretation of the investigation results is a critical aspect of an OOS investigation. A batch with an OOS result need not be rejected outright, but the findings need to be investigated further. The findings of the initial and subsequent investigations should be analyzed and the batch evaluated before deciding on releasing or rejecting it (§ 211.165).

Where a cause has been identified and the result invalidated, the quality of the batch should not be assessed based on that result. However, a discrete test result may be invalidated only on observing and documenting a test event that is considered a reasonably sure cause of the OOS result.

Where the cause of the OOS result is identified as a factor that affects the batch quality, the result may be used to evaluate the batch or lot quality. A confirmed OOS result is a clear indication of not meeting the established standards or specifications. This is sufficient ground to reject the batch (§ 211.165(f)).

If the OOS cause is confirmed, the OOS investigation transforms into a batch failure investigation that must encompass other batches or products with possible association with the specific failure (§ 211.192).

If the cause is inconclusive, the QCU might want to release the batch. But this decision should be made only after a thorough investigation proving the OOS result is not a reflection of the quality of the batch. It is better to err on the side of caution.

Drawbacks in Manual Out of Specification Tracing


Traditionally, Out of Specification (OOS) tracing was done using manual procedures. Even today, businesses may continue to use manual processes or legacy systems for OOS investigations. Considering the number of dependencies associated with OOS testing, this can be quite a challenging task and is likely to be fraught with the risk of errors and missed key indicators.

It also involves much paperwork. When the QA team initiates an investigation, it has to note down the findings manually. These are then reviewed by a reviewer, who has to go through all the steps again and recommend the next step, which could be to conduct a fresh investigation into the causes of the Out of Specification (OOS) result.

The reviewer’s findings may be forwarded to an analyst, who will also receive a sample for further investigation. The testing steps will have to be repeated and the findings recorded manually. All of this leaves ample scope for human error and noncompliance.

There have been cases of the production team sending a batch of raw materials or finished goods for production or sale even before the investigation has been completed. This can cause problems during the audit process, when all the OOS investigation documents will have to be shared with the auditor. It may be deemed noncompliance with FDA regulations.

What is Out of Specification (OOS) Investigation?

A drug or a drug product is part of a treatment therapy and aims to alleviate an illness in some form or other, whether as a direct treatment, as a mitigative solution, or as a diagnostic. It can be invasive (inserted into the body), consumed as a drug, or applied or connected to the surface of the body. In all of these situations, it is essential for the product to perform as expected; any anomaly can have life-threatening or injurious repercussions. To mitigate this risk, regulatory bodies introduce checks and balances to enhance the safety of these products. It has been realized that risk mitigation must begin from the word go, right from the design stage. Not only the end purpose but also the input materials have an impact on the safety and performance of these products. Therefore, design controls need to be put in place to ensure that the end product is aligned with the vision set forth during design.

At every stage from design to development and manufacturing, the product needs to be tested for its performance as well as conformance to the specifications established at the beginning. Any variance, referred to as Out-of-Specification, needs to be investigated.

But an OOS result may be due to laboratory conditions or to a genuine problem. Therefore, it needs to be investigated thoroughly to see whether there is an assignable laboratory cause or whether the result is genuine. To establish that, manufacturers need to find out whether the entire batch is at variance, only the sample, or whether there is no variation at all and a faulty test is producing the deviation.

Therefore, the first stage of investigation is to eliminate the possibility of a laboratory error. It should be performed before discarding the test samples and reagents to validate the original data.

Factors that can cause Out of Specification (OOS) results in a laboratory include:

  • Incorrect handling of the sample
  • Use of the wrong reagents
  • Equipment not calibrated
  • Sample not incubated for the correct time and temperature
  • Possible contamination of the sample
  • Incorrect reading of the result
  • Not reporting in the correct units of measurement

The solutions and reagents should be retained until a second person has verified all the data and confirmed that they meet the defined acceptance criteria.

Some of the factors to be considered during an Out of Specification (OOS) investigation include:

  • Analyzing historical data to capture past trends and errors due to the assay, equipment, environment, or analyst
  • Whether the test methodology followed the SOP
  • Whether the test was conducted by a trained analyst
  • Reviewing other tests conducted by the same analyst on the same day, or the same test conducted by different analysts
  • Factoring in any other OOS results obtained on the batch of material being tested
  • Ensuring that the analyst understands the procedure
  • Ensuring the correct sample(s) were tested
If the error is not laboratory-based, then further investigations should be conducted to identify the root cause across all operations. All laboratory investigations must be documented and trends identified and monitored.

A typical OOS investigation process covers the following:

  • Definition of the problem or event
  • Examining trends and history
  • Risk assessment to understand the impact of the Out of Specification (OOS) result on the final product
  • Preventing a recurrence of the problem
  • Identifying the root cause through a thorough analysis
  • Initiating CAPA to take corrective action and prevent future recurrence

Corrective and preventative actions are the consequences of the investigation and describe the actions taken which are designed to lead to an improvement of process and environmental quality.



Requirements for Out of Specification Investigations

Regulatory requirements, such as the CGMP regulations enforced by the FDA, are essential for all drug and device manufacturing companies to comply with. Some of these include:


Establishing specifications, standards, sampling plans, test procedures, and other laboratory control mechanisms (§ 211.160).


A written record of the investigation should be maintained and must include details about the investigations conducted, the conclusions, and the follow-up action taken. Root cause analysis and CAPA form an essential part of this requirement.


For some analytical methods, system suitability requirements must be met. For instance, chromatographic systems require reference standard solutions to be injected at periodic intervals throughout the chromatographic runs to measure drift, noise, and repeatability. If the reference standard responses indicate that the system is not functioning properly, it is essential to identify the data collected during the suspect time period and not use them. The reason for the malfunction should be determined and, if possible, corrected before deciding whether to use any data from before the suspect period.


According to the CGMP regulations, statistically valid quality control criteria must be determined and include appropriate acceptance and/or rejection levels (§ 211.165(d)).


Investigations to be conducted every time an Out of Specification (OOS) test result is obtained (§ 211.192).


CGMP regulations § 211.160 (b)(4) require the analyst to use only those instruments that meet the established performance specifications and are properly calibrated.


A threshold for deviation from acceptance limits among the replicates must also be established. In case of an unexpected variation in replicate determinations, remedial action must be triggered, as required by § 211.160(b)(4).


For products that are the subject of approved full and abbreviated new drug applications, if a distributed batch fails to meet any of the specifications established in the application (21 CFR 314.81(b)(1)(ii)), a field alert report (FAR) must be submitted within 3 working days.

ComplianceQuest delivers!

I’ve been using ComplianceQuest (CQ) for about 9 months and am extremely pleased with the product, the implementation team and ongoing support. I selected CQ for a number of reasons. Functionality and a simple user interface were key requirements. CQ has all the functionality needed in support of a global QMS. Implementation includes; Document control, change order, personnel training, NC/CAPA, equipment management, supplier management, audit management, and customer complaints all on a single platform. As a small biologics company it was critical to find a single solution to meet our GMP quality system requirements. We wanted a cloud-based system, that would be quick to implement, that could be expanded globally and in other languages, all for a reasonable price. The user interface – it is exactly what I was hoping for. I constantly hear the staff saying, “I love CQ, it’s so straightforward to use”.

The implementation team has met all my expectations. The CQ product ‘out of the box’ meets the majority of my requirements. With some minor configurations we have developed QMS workflows that are highly efficient and robust. A risk-based validation plan also allowed us to move quickly through the foundational functions; Controlled Documents, Change Order, and Personnel. The implementation team offer many industry best practices as solutions to questions and are also highly competent in listening to your requirements and doing whatever is necessary to fulfill them. Customer Support is incredibly timely in their responsiveness; addressing most questions or concerns within 24-48 hours, meanwhile communicating along the way. We are extremely happy with our selection and highly recommend ComplianceQuest.

Donna Matuizek, Sr. Director Quality


How Does ComplianceQuest Support OOS Investigation Success?

ComplianceQuest is a cloud-based, end-to-end workflow management solution that integrates quality, safety, clinical trial, product design, and lab investigation solutions on one platform. This helps pharmaceutical and medical device manufacturers improve the quality and safety of their products while enhancing compliance .

The CQ Product Design Management Solution helps companies implement a comprehensive design development process to mitigate product risks by providing complete visibility at every stage of the product lifecycle. The solution provides manufacturers with 100% traceability over design controls, establishing specifications, facilitating review at every stage to capture deviations, and minimizing the risk of Out of Specification (OOS) results. A unified repository for all design documentation and advanced collaboration tools let engineering and quality teams work together to improve conformance and compliance.

The Design Development solution embeds product risk management, essential for OOS investigation, helping identify and document product-related risks and link them to specific parts or requirements. The Design Control solution also features capabilities for storing design history files and complete design-related documents that help QA teams ensure that the project is progressing as expected and meeting specifications.

CQ’s Lab Investigation solution automates the investigations process, ensuring a systematic, efficient, and comprehensive approach. It facilitates collaboration and guided workflows that empower QC and QA teams to identify the assignable or root cause of every Out of Specification (OOS) test result, and then act on it while staying compliant due to thorough documentation. Laboratory teams can initiate a Phase-II investigation whenever necessary, conduct a full production review, then escalate to a CAPA if needed to ensure high-quality products. Built-in checklists help QA and QC conduct comprehensive investigations consistently and identify the correct assignable or root cause every time.

OOS Lab Investigation features

The pre-built workflows simplify the process for QC and QA teams. It is also possible to create customizable checklists to guide investigators during lab investigations and even production reviews to make sure no step is missed.

Documents and investigation records are easily accessible online, from anywhere, anytime, with sufficient security. Built-in collaboration tools make it easy to communicate and share progress and allow QC and QA analysts and manufacturing investigators to conduct comprehensive full-scale investigations.

By automating the investigation process, the solution also makes Out of Specification (OOS) investigations time- and cost-effective. It is aligned with regulatory standards such as GLP, ISO, and FDA requirements, speeding up the investigation process and automating document management for reporting and compliance purposes.

As not all OOS results require a full-scale investigation or a Non-Conformance record, lab supervisors and QA can escalate Phase I investigations to a full-scale Phase II investigation based on need and specify whether a production review is required. Indicating whether a Field Alert Assessment or Customer Notification is required, and directly launching a Non-Conformance at the end of an investigation, are also possible.

The solution is integrated with the 5 Whys root cause analysis solution, helping teams collaborate on identifying the true root cause. Once the cause has been identified, CAPA and management review are just a button click away. All current and historical data are within easy reach to help make informed decisions on the correct action to take to prevent future recurrences.

Automatic notifications and action requests allow lab supervisors to assign actions to the members of the investigation team and track the progress effortlessly.

The key features of the CQ Lab Investigation Solution include:

Phased Investigation

Lab supervisors and QA can escalate from phase 1 to phase 2 or close the investigation based on findings.

NCs and actions can be launched directly from the investigation record, and records can be linked together.

Automated Alerts and Notifications

Automatic notifications and action requests to people involved in the investigation.

Configurable Queues

To configure the workflow and approval matrix queue to fit your needs and processes.

Formatted Printouts

With the ability to assign a parent/child document structure and customize printouts of investigations as needed.

Reports and Dashboards

Users can generate any report or dashboard they need to keep track of lab investigations and identify trends.

Pre-built Checklists

Preliminary lab investigations and production reviews can be populated with pre-built and customizable checklists to guide the investigation and ensure consistency and thoroughness.

Field Alert Assessment

A field alert assessment will automatically appear in the lab investigation if the OOS test result record relates to stability testing, reminding quality teams to send the alert if needed.

Embedded Risk Assessment

Risk assessment is embedded in the lab investigation so that QA can determine the level of risk.

Customer Notification

After a lab investigation (phases 1 and 2), quality teams can indicate if notifying customers is needed.

Impacted Material/Batches

Impacted materials and batches can be referenced in the lab investigation record.

Some of the other features that support the lab investigation system include:

  • Audit management
  • Change management
  • Management review
  • Documentation management
  • Training management

Together, they make ComplianceQuest a comprehensive solution to improve design control, reduce deviations, and minimize OOS results. To know more about the CQ Lab Investigation Solution, visit https://www.compliancequest.com/lab-investigations/

Visit www.compliancequest.com for more information on the company and its solutions

Request a demo: https://www.compliancequest.com/online-demo/

Or contact: https://www.compliancequest.com/contact-us/

Minimize the drawbacks of manual OOS testing with ComplianceQuest’s AI-powered, pre-built OOS checklists to find the root cause and act on it.

Related Checklists

Identification & Assessment of Out-Of-Specification (OOS) Test Results – Laboratory Investigation checklist

Investigating OOS: Do I Understand My Responsibilities as a “Laboratory Analyst” – Let’s Evaluate!

Investigating OOS: Do I Understand My Responsibilities as a “Laboratory Supervisor” – Let’s Evaluate!

A checklist on Full scale OOS Investigation – Production (Part A)

A Checklist on Full-scale OOS Investigation – Retesting & Resampling (Part B)

A Checklist on Full scale OOS Investigation – Reporting Test Results

A Checklist to Conclude OOS Investigation


Frequently Asked Questions

Out-of-Specification or OOS refers to a product that does not meet the specifications defined for it. This indicates a deviation and could have occurred at any stage of the product lifecycle, needing a thorough investigation. The OOS result could also be a laboratory-based error. This should be eliminated before deciding to reject or retain a batch.

While this is an FDA requirement, a deviation can also be a risk for the patients and their handlers. Releasing a product with OOS can cause it to be recalled, which will prove costly for the company, affect brand image, and require going back to the drawing board to manufacture the product.

The purpose of the Out of Specification (OOS) investigation is to determine the true cause of the OOS result. This could be due to lab conditions or a genuine problem due to errors in manufacturing or other operations. The cause needs to be investigated to initiate CAPA to prevent future recurrence while taking corrective action for the current error.

Pharmaceutical manufacturing companies and contract laboratories are required to perform OOS investigations.

The OOS investigation can be considered adequate when it is thorough, timely, unbiased, scientifically defensible, and well-documented.

Before discarding the test solution for the out-of-specification investigation, the accuracy of the laboratory data should be established following an initial assessment. This allows any hypothesis inferring laboratory error or instrument malfunction to be tested with the same solutions.

If no errors are identified in the initial assessment of the analytical process used to obtain the data, it should be followed by a complete failure investigation.

Yes, you must, to ensure that other batches of the same drug product or other products also do not have deviations. Identifying the root cause is also important to prevent its recurrence.

After the accuracy of the laboratory’s data has been assessed and established. This will help test the hypothesis regarding a laboratory error or instrument malfunction using the same solutions.

If the initial assessment shows the analytical process to be accurate and the problem is not a laboratory error, then a full-scale investigation must be initiated.

The analyst is primarily responsible for conducting out-of-specification investigations and ensuring accurate laboratory test results. They should be aware of the potential problems likely to occur when conducting the tests and be alert at the time of testing to identify the problems responsible for out-of-specification results.

CGMP § 211.160(b)(4) holds the analyst responsible for ensuring that the instruments used meet the established performance specifications. It requires the analyst to use only properly calibrated instruments.

If you are facing frequent laboratory errors, you should be concerned. There are four possible reasons for frequent laboratory errors:

  • Analysts not being trained adequately
  • Equipment being maintained poorly
  • Equipment not being calibrated properly
  • Careless work

The practices used include:

  • Retesting a portion of the original sample
  • Testing a specimen from the collection of a new sample from the batch
  • Resampling test data
  • Using outlier testing

A retest is required when the investigation identifies a testing instrument malfunction or possible integrity problems during sample handling.

A supervisor must assess the data promptly to determine whether laboratory error is responsible for the results or whether there is a manufacturing process problem. The supervisor should be objective and timely during the assessment and should not have any preconceived notions about the cause of Out of Specification (OOS) results.



  16. PDF Statistical Process Control: Assignable Causes and Data Forecasting

    allowed when assignable causes have been shown by history or study to be indifferent to hardware quality. The method is a novel but simple way to correct assignable causes, to perform data forecasting, and to analyze process randomness. Background Table 1 shows the equations used to correct assignable causes and evaluate process randomness.

  17. 6.1.4. What to do if the process is "Out of Control"?

    If the process is out-of-control, the process engineer looks for an assignable cause by following the out-of-control action plan (OCAP) associated with the control chart. Out-of-control refers to rejecting the assumption that the current data are from the same population as the data used to create the initial control chart limits.

  18. Assignable cause

    Assignable causes of variation have an advantage (high proportion, domination) in many known causes of routine variability.

  19. Assignable Causes of Variation and Statistical Models ...

    According to the classification proposed by Adler, Shper, and Maksimova (2011), such change is caused by the so-called extrinsic assignable cause, as opposed to intrinsic assignable cause, which ...

  20. PDF The Importance of Understanding Type I Statistical Process Control

    Assignable vs. Common Cause Variation nDr. Walter Shewhart developed Statistical Process Control (SPC) during the 1920s. Dr. W. Edwards Deming promoted SPC during WWII and after. nPremise is that there are three types of variation nCommon Cause Variation -Caused by the system nAssignable (or Special Cause) variation - Controllable by the work group

  21. Investigating for Failures

    If an assignable cause has been identified, the original results should be invalidated and the retest results reported. If an assignable cause is not identified, suspect results should not be invalidated. There is no justification for rejecting a suspect result and accepting a passing result. All test results, both passing and suspect, should ...

  22. Are You Invalidating Out-of-Specification (OOS) Results into Compliance?

    The guidance outlines a three-part, two-phase strategy for investigating an OOS chemical analysis result, as shown in Figure 1. Figure 1: Flow chart of OOS results investigations. Phase 1 is the laboratory investigation which is to determine if there is an assignable cause for the analytical failure.

  23. What is OOS (Out of Specification)? -OOS Investigation and OOS Results

    It facilitates collaboration and guided workflows that empower QC and QA teams to identify the assignable or root cause of every Out of Specification (OOS) test result, and then act on it while staying compliant due to thorough documentation. Laboratory teams can initiate a Phase-II investigation whenever necessary, conduct a full production ...