Encyclopedia of Production and Manufacturing Management
ASSIGNABLE CAUSES OF VARIATIONS


Assignable causes of variation are present in most production processes. These causes of variability are also called special causes of variation (Deming, 1982). The sources of assignable variation can usually be identified (assigned to a specific cause), leading to their elimination. Tool wear, equipment that needs adjustment, defective materials, or operator error are typical sources of assignable variation. If assignable causes are present, the process cannot operate at its best. A process that is operating in the presence of assignable causes is said to be "out of statistical control." Walter A. Shewhart (1931) suggested that assignable causes, or local sources of trouble, must be eliminated before managerial innovations leading to improved productivity can be achieved.

Assignable causes of variability can be detected with control charts, which signal their presence so that they can be corrected.
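To make this concrete, here is a minimal Python sketch of the control-chart logic described above: estimate a center line and three-sigma limits from in-control history, then flag points outside the limits as candidate assignable causes. The fill-weight numbers are invented for illustration, and the use of the sample standard deviation (rather than the moving-range estimate preferred in many SPC texts) is a simplifying assumption.

    # Minimal sketch: flag candidate assignable causes as points outside
    # 3-sigma control limits estimated from in-control history.
    from statistics import mean, stdev

    def control_limits(baseline, k=3.0):
        """Center line and +/- k-sigma limits from in-control history."""
        center = mean(baseline)
        sigma = stdev(baseline)  # simplification; SPC texts often estimate sigma from moving ranges
        return center - k * sigma, center, center + k * sigma

    def flag_assignable(observations, lcl, ucl):
        """Indices of points outside the limits: candidates to investigate."""
        return [i for i, x in enumerate(observations) if x < lcl or x > ucl]

    baseline = [16.0, 15.9, 16.1, 16.0, 16.2, 15.8, 16.0, 16.1, 15.9, 16.0]
    lcl, center, ucl = control_limits(baseline)
    print(flag_assignable([16.0, 16.1, 15.5, 16.0], lcl, ucl))  # -> [2]

A point flagged this way is not yet an assignable cause; it marks where to look for one (tool wear, a machine needing adjustment, defective material, or operator error).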

See Quality: The implications of W. Edwards Deming's approach; Statistical process control; Statistical...

References

Deming, W. Edwards (1982). Out of the Crisis. Center for Advanced Engineering Study, Massachusetts Institute of Technology, Cambridge, Massachusetts.

Shewhart, W. A. (1939). Statistical Method from the Viewpoint of Quality Control. Graduate School, Department of Agriculture, Washington.


(2000). ASSIGNABLE CAUSES OF VARIATIONS. In: Swamidass, P.M. (ed.) Encyclopedia of Production and Manufacturing Management. Springer, Boston, MA. https://doi.org/10.1007/1-4020-0612-8_57


Volume 8 Supplement 1

Proceedings of Advancing the Methods in Health Quality Improvement Research 2012 Conference

  • Proceedings
  • Open access
  • Published: 19 April 2013

Understanding and managing variation: three different perspectives

Michael E Bowen and Duncan Neuhauser

Implementation Science volume 8, Article number: S1 (2013)


Presentation

Managing variation is essential to quality improvement. Quality improvement is primarily concerned with two types of variation: common-cause variation and special-cause variation. Common-cause variation is random variation present in stable healthcare processes. Special-cause variation is an unpredictable deviation resulting from a cause that is not an intrinsic part of a process. Careful and systematic measurement makes it easier to detect changes that are not merely random variation.

The approach to managing variation depends on the priorities and perspectives of the improvement leader and the intended generalizability of the results of the improvement effort. Clinical researchers, healthcare managers, and individual patients each have different goals, time horizons, and methodological approaches to managing variation; however, in all cases, the research question should drive study design, data collection, and evaluation. To advance the field of quality improvement, greater understanding of these perspectives and methodologies is needed [ 1 ].

Clinical researcher perspective

The primary goal of traditional randomized controlled trials (RCTs) (i.e., a comparison of treatment A versus placebo) is to determine treatment or intervention efficacy in a specified population when all else is equal. In this approach, researchers seek to maximize internal validity. By randomizing patients, clinicians, or organizations to experimental and control groups, researchers seek to balance variation in baseline factors. Researchers may also increase understanding of variation within a specific study using approaches such as stratification to examine for effect modification. Although the generalizability of outcomes in all research designs is limited by the study population and setting, this can be particularly challenging in traditional RCTs. When inclusion criteria are strict, study populations are not representative of "real world" patients, and the applicability of study findings to clinical practice may be unclear. Traditional RCTs are limited in their ability to evaluate complex processes that are purposefully and continually changing over time because they evaluate interventions in rigorously controlled conditions over fixed time frames [2]. However, using alternative designs such as the hybrid effectiveness studies discussed in these proceedings or pragmatic RCTs, researchers can rigorously answer a broader range of research questions [3].

Healthcare manager perspective

Healthcare managers seek to understand and reduce variation in patient populations by monitoring process and outcome measures. They utilize real-time data to learn from and manage variation over time. By comparing past, present, and desired performance, they seek to reduce undesired variation and reinforce desired variation. Additionally, managers often implement best practices and benchmark performance against them. In this process, efficient, time-sensitive evaluations are important. Run charts and Statistical Process Control (SPC) methods leverage the power of repeated measures over time to detect small changes in process stability and increase the statistical power and rapidity with which effects can be detected [ 1 ].
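To illustrate how repeated measures over time add detection power, here is a minimal Python sketch of one widely used run-chart rule: six or more consecutive points on the same side of the median suggest a nonrandom shift. The rule and the run length of six are common conventions rather than anything specified in this article, and the data are invented.

    # Run-chart "shift" rule sketch: >= 6 consecutive points on one side
    # of the center line suggest special cause variation.
    from statistics import median

    def detect_shift(values, run_length=6):
        center = median(values)
        side, run = 0, 0
        for i, v in enumerate(values):
            s = (v > center) - (v < center)  # +1 above, -1 below, 0 on the line
            if s == 0:
                continue  # points on the median neither extend nor reset the run
            if s == side:
                run += 1
            else:
                side, run = s, 1
            if run >= run_length:
                return i  # index at which the rule fires
        return None

    # invented data: stable for 14 weeks, then a sustained drop
    weekly_waits = [10, 8, 11, 7, 10, 8, 11, 7, 10, 8, 11, 7, 10, 8, 6, 5, 6, 5, 6, 5]
    print(detect_shift(weekly_waits))  # -> 19: the final six points all fall below the median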

Patient perspective

While the clinical researcher and healthcare manager are interested in understanding and managing variation at a population level, the individual patient wants to know if a particular treatment will allow them to achieve health outcomes similar to those observed in study populations. Although the findings of RCTs help form the foundation of evidence-based practice and managers utilize these findings in population management, they provide less guidance about the likelihood of an individual patient achieving the average benefits observed across a population of patients. Even when RCT findings are statistically significant, many trial participants receive no benefit. In order to understand if group RCT results can be achieved with individual patients, a different methodological approach is needed. "N-of-1 trials" and the longitudinal factorial design of experiments allow patients and providers to systematically evaluate the independent and combined effects of multiple disease management variables on individual health outcomes [4]. This offers patients and providers the opportunity to collect, analyze, and understand data in real time to improve individual patient outcomes.

Advancing the field of improvement science and increasing our ability to understand and manage variation requires an appreciation of the complementary perspectives held and methodologies utilized by clinical researchers, healthcare managers, and patients. To accomplish this, clinical researchers, healthcare managers, and individual patients each face key challenges.

Recommendations

Clinical researchers are challenged to design studies that yield generalizable outcomes across studies and over time. One potential approach is to anchor research questions in theoretical frameworks to better understand the research problem and relationships among key variables. Additionally, researchers should expand methodological and analytical approaches to leverage the statistical power of multiple observations collected over time. SPC is one such approach. Incorporation of qualitative research and mixed methods can also increase our ability to understand context and the key determinants of variation.

Healthcare managers are challenged to identify best practices and benchmark their processes against them. However, the details of best practices and implementation strategies are rarely described in sufficient detail to allow identification of the key drivers of process improvement and adaptation of best practices to local context. By advocating for transparency in process improvement and urging publication of improvement and implementation efforts, healthcare managers can enhance the spread of best practices, facilitate improved benchmarking, and drive continuous healthcare improvement.

Individual patients and providers are challenged to develop the skills needed to understand and manage individual processes and outcomes. As an example, patients with hypertension are often advised to take and titrate medications, modify dietary intake, and increase activity levels in a non-systematic manner. The longitudinal factorial design offers an opportunity to rigorously evaluate the impact of these recommendations, both in isolation and in combination, on disease outcomes [1]. Patients can utilize paper, smartphone applications, or even electronic health record portals to sequentially record their blood pressure readings. Patients and providers can then apply simple SPC rules to better understand variation in blood pressure readings and manage their disease [5].
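For a single patient's readings, one simple SPC approach is an individuals/moving-range (XmR) chart; the Python sketch below is in the spirit of the patient-generated charts cited above [5], not code from the article. The blood pressure values are invented, and 2.66 is the standard XmR scaling constant (3/1.128).

    # XmR (individuals / moving range) sketch for one patient's daily
    # systolic blood pressure; readings outside the limits would prompt
    # a closer look rather than a medication change on their own.
    def xmr_limits(readings):
        moving_ranges = [abs(b - a) for a, b in zip(readings, readings[1:])]
        center = sum(readings) / len(readings)
        mr_bar = sum(moving_ranges) / len(moving_ranges)
        return center - 2.66 * mr_bar, center, center + 2.66 * mr_bar

    systolic = [138, 142, 135, 140, 137, 141, 139, 136, 143, 138]  # hypothetical
    lcl, center, ucl = xmr_limits(systolic)
    print(f"center {center:.1f} mmHg, limits ({lcl:.1f}, {ucl:.1f})")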

As clinical researchers, healthcare managers, and individual patients strive to improve healthcare processes and outcomes, each stakeholder brings a different perspective and set of methodological tools to the improvement team. These perspectives and methods are often complementary such that it is not which methodological approach is “best” but rather which approach is best suited to answer the specific research question. By combining these perspectives and developing partnerships with organizational managers, improvement leaders can demonstrate process improvement to key decision makers in the healthcare organization. It is through such partnerships that the future of quality improvement research is likely to find financial support and ultimate sustainability.

References

1. Neuhauser D, Provost L, Bergman B: The meaning of variation to healthcare managers, clinical and health-services researchers, and individual patients. BMJ Qual Saf. 2011, 20 (Suppl 1): i36-40. 10.1136/bmjqs.2010.046334.

2. Neuhauser D, Diaz M: Quality improvement research: are randomised trials necessary? Qual Saf Health Care. 2007, 16: 77-80. 10.1136/qshc.2006.021584.

3. Eccles M, Grimshaw J, Campbell M, Ramsay C: Research designs for studies evaluating the effectiveness of change and improvement strategies. Qual Saf Health Care. 2003, 12: 47-52. 10.1136/qhc.12.1.47.

4. Olsson J, Terris D, Elg M, Lundberg J, Lindblad S: The one-person randomized controlled trial. Qual Manag Health Care. 2005, 14: 206-216.

5. Hebert C, Neuhauser D: Improving hypertension care with patient-generated run charts: physician, patient, and management perspectives. Qual Manag Health Care. 2004, 13: 174-177.

Author information

Authors and affiliations

VA National Quality Scholars Fellowship, Tennessee Valley Healthcare System, Nashville, Tennessee, 37212, USA

Michael E Bowen

Division of General Internal Medicine, Department of Medicine, University of Texas Southwestern Medical Center, Dallas, Texas, 75390, USA

Division of Outcomes and Health Services Research, Department of Clinical Sciences, University of Texas Southwestern Medical Center, Dallas, Texas, 75390, USA

Department of Epidemiology and Biostatistics, Case Western Reserve University, Cleveland, Ohio, 44106, USA

Duncan Neuhauser


Corresponding author

Correspondence to Michael E Bowen .

This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Cite this article

Bowen, M.E., Neuhauser, D. Understanding and managing variation: three different perspectives. Implementation Sci 8 (Suppl 1), S1 (2013). https://doi.org/10.1186/1748-5908-8-S1-S1


Keywords: Statistical Process Control; Clinical Researcher; Healthcare Manager; Healthcare Process; Quality Improvement Research



Operations Management: An Integrated Approach, 5th Edition

SOURCES OF VARIATION: COMMON AND ASSIGNABLE CAUSES

If you look at bottles of a soft drink in a grocery store, you will notice that no two bottles are filled to exactly the same level. Some are filled slightly higher and some slightly lower. Similarly, if you look at blueberry muffins in a bakery, you will notice that some are slightly larger than others and some have more blueberries than others. These types of differences are completely normal. No two products are exactly alike because of slight differences in materials, workers, machines, tools, and other factors. These are called common , or random, causes of variation . Common causes of variation are based on random causes that we cannot identify. These types of variation are unavoidable and are due to slight differences in processing.


An important task in quality control is to find out the range of natural random variation in a process. For example, if the average bottle of a soft drink called Cocoa Fizz contains 16 ounces of liquid, we may determine that the amount of natural variation is between 15.8 and 16.2 ounces. If this were the case, we would monitor the production process to make sure that the amount stays within this range. If production goes out of this range—bottles are found to contain on average 15.6 ounces—this would lead us to believe that there ...
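The excerpt breaks off here, but a toy check against the quoted natural range (15.8 to 16.2 ounces) might look like the Python below. The batch means are invented, and a real implementation would compute the limits from data rather than hard-coding them.

    # Toy monitor for the "Cocoa Fizz" example: sample means outside the
    # natural variation band suggest an assignable cause worth investigating.
    NATURAL_RANGE = (15.8, 16.2)  # ounces, as quoted in the text

    def within_natural_variation(sample_mean):
        low, high = NATURAL_RANGE
        return low <= sample_mean <= high

    for batch_mean in [16.1, 15.9, 15.6]:
        status = "OK" if within_natural_variation(batch_mean) else "investigate"
        print(batch_mean, status)  # 15.6 falls below the range -> investigate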


Variations in Care

Figure 16-1. County-level risk-standardized 30-day heart failure readmission rates (%) in Medicare patients by performance quintile for July 2009 to June 2012. (Data from Centers for Medicare & Medicaid Services; available at https://data.medicare.gov/data/hospital-compare.)

HISTORY AND DEFINITIONS

Variation in clinical care, and what it reveals about that care, is a topic of great interest to researchers and clinicians. It can be divided broadly into outcome variation, which occurs when the same process produces different results in different patients, and process variation, which refers to different usage of a therapeutic or diagnostic procedure among organizations, geographic areas, or other groupings of health care providers. Studies of outcome variation can provide insight into patient characteristics and care delivery that predispose patients to either a successful or an adverse outcome and help identify patients for whom a particular treatment is likely to be effective (or ineffective). Process variation, in contrast, can provide insight into such things as the underuse of effective therapies or procedures and the overuse of ineffective therapies or procedures.

Study of the variation in clinical care dates back to 1938, when Dr. J. Allison Glover published a study revealing geographic variation in the incidence of tonsillectomy in school children in England and Wales that could not be explained by anything other than variation in medical opinion on the indications for surgery. Since then, research has revealed variation among countries and across a range of medical conditions and procedures, including prostatectomy, knee replacement, arteriovenous fistula dialysis, and invasive cardiac procedures. Actual rates of use of procedures, variability in the supply of health care services, and the system of health care organization and financing (health maintenance organizations [HMOs], fee-for-service [FFS], and national universal health care) do not necessarily determine or even greatly affect the degree of variation in a particular clinical practice. Rather, the degree of variation in use relates more to the characteristics of the procedure. Important characteristics include:

  • The degree of professional uncertainty about the diagnosis and treatment of the condition the procedure addresses
  • The availability of alternative treatments
  • Controversy versus consensus regarding the appropriate use of the procedure
  • Differences among physicians in diagnosis style and in belief in the efficacy of a treatment

When studying variation in medical practice, or interpreting the results of someone else's study of variation, it is important to distinguish between warranted variation, which is based on differences in patient preference, disease prevalence, or other patient- or population-related factors, and unwarranted variation, which cannot be explained by patient preference or condition or the practice of evidence-based medicine. Whereas warranted variation is the product of providing appropriate and personalized evidence-based patient care, unwarranted variation typically indicates an opportunity to improve some aspect of the quality of care provided, including inefficiencies and disparities in care.

John E. Wennberg, MD, MPH, founding editor of the Dartmouth Atlas of Health Care and a leading scholar in clinical practice variation, defines three categories of care and the implications of unwarranted variation within each of them:

  1. Effective care is that for which the evidence establishes that the benefits outweigh the risks and the "right rate" of use is 100% of the patients defined by evidence-based guidelines as needing such treatment. In this category, variation in the rate of use within that patient population indicates underuse.

  2. Preference-sensitive care consists of those areas of care in which there is more than one generally accepted diagnostic or therapeutic option available, so the "right rate" of each depends on patient preference.

  3. Supply-sensitive care is care for which the frequency of use relates to the capacity of the local health care system. Typically, this is viewed in the context of the delivery of care to patients who are unlikely to benefit from it or whose benefit is uncertain; in areas with high capacity for that care (e.g., high numbers of hospital beds per capita) more of these patients receive the care than in areas with low capacity, where the resources have to be reserved for (and are operating at full capacity with) patients whose benefits are more certain. Because studies have repeatedly shown that regions with high use of supply-sensitive care do not perform better on mortality rates or quality of life indicators than regions with low use, variation in such care may indicate overuse.

Local health care system capacity can influence frequency of use in other ways, too. For example, the county-level association between fewer primary care physicians and higher 30-day hospital readmission rates suggests that inadequate primary care capacity may result in preventable hospitalizations. Table 16-1 provides examples of warranted and unwarranted variation in each of these categories of care.

Table 16-1. Examples of warranted and unwarranted variations in heart failure care.

A second important distinction that must be made when considering variation in care is between common cause and special cause variation. Common cause variation (also referred to as "expected" or "random" variation) cannot be traced to a root cause and as such may not be worth studying in detail. Special cause variation (or "assignable" variation) arises from a single or small set of causes that can be traced and identified and then addressed or eliminated through targeted quality improvement initiatives. Statisticians have a broad range of tests and criteria to determine whether variation is assignable or random and, with the increasing sensitivity and power of numerical analysis, can measure assignable variation relatively easily. The need for statistical expertise in such endeavors must be emphasized, however; the complexity of the study designs and interpretation of results (particularly in distinguishing true variation from artifact or statistical error) carries a high risk of misinterpretation in its absence.

LOCAL VARIATION

Although variation in care processes and outcomes frequently is examined and discussed in terms of large-scale geography (among countries, states, or hospital referral regions, as, for example, was shown in the heart failure readmissions national map in Figure 16-1), it can be examined and provide equally useful information on a much smaller scale. For example, Figure 16-2 shows variation in 30-day risk-adjusted heart failure readmission rates for hospitals within a single county (Dallas, Texas), ranging from 20% below to 25% above the national average and with three hospitals showing readmission rates that were statistically significantly lower than the national average.
Although no hospitals had readmission rates that were statistically significantly higher than the national rate, the poorer performing hospitals might nevertheless be interested in improving. Cooperation among the quality and clinical leaders of the hospitals within Dallas County would enable investigation of differences in practices and resources among the hospitals, which might identify areas to be targeted for improvement for those hospitals with higher readmission rates.

Figure 16-2. Forest plot showing variation in heart failure 30-day risk-standardized readmission rates (HF 30-day RSRR, %) in Medicare patients for hospitals in Dallas County, Texas for July 2009 to June 2012. Hospitals were assigned random number identifiers in place of using names. (Data from Centers for Medicare & Medicaid Services; available at https://data.medicare.gov/data/hospital-compare.)

Local between-provider variation is often encountered in the form of quality reports or scorecards. Such tools seek to identify high versus low performers among hospitals, practices, or physicians to create incentives for high performance, either by invoking providers' competitive spirit or by placing a portion of their compensation at risk according to their performance through value-based purchasing or pay-for-performance programs. In other words, they show unwarranted variation in the delivery of care.

Care must be taken in presenting and interpreting such variation data, however. For example, league tables (or their graphical equivalent, caterpillar charts), which order providers from the lowest to highest performers on a chosen measure and use CIs to identify providers with performance that is statistically significantly different from the overall average, are both commonly used to compare provider performance on quality measures and easily misinterpreted. One's instinct on encountering such tables or figures is to focus on the numeric ordering of the providers and assume, for example, that a provider ranked in the 75th percentile provides much higher quality care than one in the 25th percentile. This, however, is not necessarily the case: league tables do not capture the degree of uncertainty around each provider's point estimate, so much of the ordering in the league table reflects random variation, and the order may vary substantially from one measurement period to another, without providers making any meaningful changes in the quality of care they provide. As such, there may not be any statistically significant or clinically meaningful difference among providers even widely separated in the ranking.

Forest plots, such as Figure 16-2 for hospitals in Dallas County, are a better, although still imperfect, way of comparing provider performance. Forest plots show both the point estimate for the measure of interest (e.g., risk-adjusted heart failure 30-day readmission rates) and its CI (represented by a horizontal line) for each provider, as well as a preselected norm or standard (e.g., national average, represented by a vertical line). By looking for providers for whom not only the point estimate but the entire CI falls to either the left or right of the vertical line, readers can identify those whose performance was either significantly better or significantly worse than the preselected standard. Although forest plots may be ordered so that hospitals are ranked according to the point estimates, that ranking is vulnerable to the same misinterpretation as in league tables.

An easy way to avoid this problem is to order the providers according to something other than the point estimate, for example, alphabetically by name. Because forest plots are easy to produce without extensive statistical knowledge or programming skills, such an approach can be very useful in situations in which experienced statisticians are not available to assist with the performance comparisons.

The funnel plot is probably the best approach for presenting comparative performance data, but it does require more sophisticated statistical knowledge to produce. In a funnel plot, the rate or measure of interest is plotted on the y axis against the number of patients treated on the x axis; close to the origin, where the numbers of patients are small, the CI bands drawn on the plot are wide, and they narrow as the numbers of patients increase. The resulting funnel shape gives the plot its name. Providers with performance falling outside the CI bands are outliers, with performance that may be statistically significantly better or worse than the overall average. Those that excel can be examined as role models to guide others' improvement. Those that lag behind their peers can be considered as opportunities for improvement, which might benefit from targeted interventions. And because the funnel plot does not attempt to rank providers (beyond identifying the outliers), it is less open to misinterpretation by readers who fail to consider the influence of random variation.

Control charts (discussed later in detail in the context of examining variation over time) can be used in a manner similar to funnel plots to compare provider performance. In such control charts, the CI bands of the funnel plot are replaced with upper and lower control limits (typically calculated as ±3 standard deviations [SDs] from the mean [or other measure of central tendency]), and providers need not be ordered according to decreasing number of patients in the denominator of the measure of interest. As in the funnel plot, however, the providers whose performance is statistically significantly higher (or lower) than the mean are identified as those for whom the point estimate falls above the upper (or below the lower) control limit. Figure 16-3 shows an example of such a control chart for the risk-adjusted 30-day heart failure readmission rates for the hospitals in Dallas County, Texas. Unlike the forest plot in Figure 16-2, which compares each hospital's performance with the national average, Figure 16-3 considers only the variation among the hospitals located in Dallas County. As can be seen, no data points fall outside the control limits. Interpretation of control charts is discussed in greater detail later, but this suggests that all the variation in the readmission rates among these hospitals is explained by common cause variation (not attributable to any specific cause) rather than by any specific difference in the hospitals' characteristics or practices. This is interesting in light of the Figure 16-2 results, which show that three hospitals' readmission rates differed significantly from the national average. However, it should be kept in mind, first, that the CIs used to make this determination in Figure 16-2 are set at 95% compared with the control limits in Figure 16-3, which are set at 3 SDs (corresponding to 99.73%) for reasons explained in the following section.
Second, Figure 16-3 draws only on the data for 18 hospitals, which is a much smaller sample than the national data, and the smaller number of observations results in relatively wide control limits.

Figure 16-3. Control chart showing variation in heart failure 30-day risk-standardized readmission rates (HF 30-day RSRR, %) in Medicare patients for hospitals in Dallas County for July 2009 to June 2012. Hospitals were assigned random number identifiers in place of using names. LCL, lower control limit; UCL, upper control limit. (Data from Centers for Medicare & Medicaid Services; available at https://data.medicare.gov/data/hospital-compare.)

Finally, variation can be studied at the most local level: within a provider, even within a single physician, over time. Such variation is best examined using control charts, discussed in detail in the next section.

QUANTITATIVE METHODS OF STUDYING VARIATION

Data-driven practice-variation research is an important diagnostic tool for health care policymakers and clinicians, revealing areas of care where best practices may need to be identified or, if already identified, implemented. It compares utilization rates in a given setting or by a given provider with an average utilization rate; in this it differs from appropriateness of use and patient safety studies, which compare utilization rates with an identified "right rate" and serve as ongoing performance management tools.

A good framework to investigate unwarranted variation should provide:

  1. A scientific basis for including or excluding each influencing factor, and for determining when the factor is applicable or not applicable
  2. A clear definition and explanation of each factor suggested as a cause
  3. An explanation of how the factor is operationalized, measured, and integrated with other factors

Statistical Process Control and Control Charts

Statistical process control (SPC), similar to continuous quality improvement, is an approach originally developed in the context of industrial manufacturing for the improvement of systems processes and outcomes, and it was adopted into health care contexts only relatively recently. The basic principles of SPC are summarized in Table 16-2. Particularly in the United States, SPC has been enthusiastically embraced for quality improvement and applied in a wide range of health care settings and specialties and at all levels of health care delivery, from individual patients and providers to entire hospitals and health care systems. Its appeal and value lie in its integration of the power of statistical significance tests with chronological analyses of graphs of summary data as the data are produced. This enables insights similar to those that classical tests of significance provide, but with the time sensitivity so important to pragmatic improvement. Moreover, the relatively simple formulae and graphical displays used in SPC are generally easily understood and applied by nonstatistician decision makers, making this a powerful tool in communicating with patients, other clinicians, and administrative leaders and policymakers. Table 16-3 summarizes important benefits and limitations of SPC in health care contexts.

Table 16-2. Basic principles of statistical process control.

  1. Individual measurements of any process or outcome will show variation.
  2. If the process or outcome is stable (i.e., subject only to common cause variation), the variation is predictable and will be described by one of several statistical distributions (e.g., normal [or bell-shaped], exponential, or Poisson distribution).
  3. Special cause variation will result in measured values that deviate from these models in some observable way (e.g., fall outside the predicted range of variation).
  4. When the process or outcome is in control, statistical limits and tests for values that deviate from predictions can be established, providing statistical evidence of change.

Table 16-3. Benefits and limitations of statistical process control in health care.

Tools used in SPC include control charts, run charts, frequency plots, histograms, Pareto analysis, scatter diagrams, and flow diagrams, but control charts are the primary and dominant tools.

Control charts are time series plots that show not only the plotted values but also upper and lower reference thresholds (calculated using historical data) that define the range of the common cause variation for the process or outcome of interest. When all the data points fall between these thresholds (i.e., only common cause variation is present), the process is said to be "in control." Points that fall outside the reference thresholds may indicate special cause variation due to events or changes in circumstances that were not typical before. Such events or changes may be positive or negative, making control charts useful both as a warning tool in a system that usually performs well and as a tool to test or verify the effectiveness of a quality improvement intervention deliberately introduced in a system with historically poor performance.

The specific type of control chart needed for a particular measure depends on the type of data being analyzed, as well as the behavior and assumed underlying statistical distribution. The choice of the correct control chart is essential to obtaining meaningful results. Table 16-4 matches the most common data types and characteristics with the appropriate control chart(s).

Table 16-4. Appropriate control charts according to data type and distribution.

After the appropriate control chart has been determined, further issues include (1) how the upper and lower control limit thresholds will be set, (2) what statistical rules will be applied to separate special cause variation from common cause variation, and (3) how many data points need to be plotted and at what time intervals.

Broadly speaking, the width of the control limit interval must balance the risk between falsely identifying special cause variation where it does not exist (type I statistical error) and missing it where it does (type II statistical error). Typically, the upper and lower control limits are set at ±3 SDs from the estimated mean of the measure of interest. This range is expected to capture 99.73% of all plotted data, compared with the 95% captured by the 2 SD criterion typically used in traditional hypothesis testing. This difference is important because, unlike in the traditional hypothesis test, in which the risk of type I error (false positive) applies only once, in a control chart the risk applies to each plotted point. Thus, in a control chart with 25 plotted points, the cumulative risk of a false positive is 1 - (0.9973)^25 = 6.5% when 3 SD control limits are used, compared with 1 - (0.95)^25 = 72.3% when 2 SD limits are used.
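The cumulative-risk arithmetic quoted above is easy to reproduce; this short Python check merely verifies the chapter's numbers:

    # Cumulative false-positive (type I) risk over 25 plotted points.
    for p_single, label in [(0.9973, "3 SD"), (0.95, "2 SD")]:
        print(f"{label} limits: 1 - {p_single}**25 = {1 - p_single**25:.1%}")
    # -> 3 SD limits: 6.5%   2 SD limits: 72.3%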
The primary test for special cause variation, then, is a data point that falls outside the upper or lower control limit. Other common tests are listed in Table 16-5. Although applying these additional tests does slightly increase the false-positive rate from that inherent in the control limit settings, they greatly increase the control chart's sensitivity to improvements or deteriorations in the measure. The statistical "trick" here lies in observing special cause patterns and accumulating information while waiting for the total sample size to increase to the point where it has the power to detect a statistically significant difference.

Table 16-5. Common control chart tests for special cause variation.

The volume of data needed for a control chart depends on:
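Table 16-5 itself is not reproduced in this excerpt, so as an illustration only, here are two Western Electric-style tests of the kind such tables list, sketched in Python over z-scores (each point's distance from the center line in SD units). These are not necessarily the chapter's exact rules.

    # Illustrative supplementary tests (Western Electric style).
    def beyond_3_sigma(z):
        return [i for i, v in enumerate(z) if abs(v) > 3]

    def two_of_three_beyond_2_sigma(z):
        hits = []
        for i in range(2, len(z)):
            window = z[i - 2:i + 1]
            if sum(v > 2 for v in window) >= 2 or sum(v < -2 for v in window) >= 2:
                hits.append(i)
        return hits

    z_scores = [0.2, -1.1, 2.3, 2.6, 0.4, -3.2, 0.1]
    print(beyond_3_sigma(z_scores))              # -> [5]
    print(two_of_three_beyond_2_sigma(z_scores)) # -> [3, 4]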


Assignable Cause

Last updated by Jeff Hajek on December 22, 2020

An assignable cause is a type of variation in which a specific activity or event can be linked to inconsistency in a system. In effect, it is a special cause that has been identified.

As a refresher, common cause variation is the natural fluctuation within a system. It comes from the inherent randomness in the world. The impact of this form of variation can be predicted by statistical means. Special cause variation, on the other hand, falls outside of statistical expectations; it shows up as outliers in the data.

Lean Terms Discussion

Variation is the bane of continuous improvement . It decreases productivity and increases lead time . It makes it harder to manage processes.

While we can do something about common cause variation, typically there is far more bang for the buck by attacking special causes. Reducing common cause variation, for example, might require replacing a machine to eliminate a few seconds of variation in cutting time. A special cause variation on the same machine might be the result of weld spatter from a previous process. The irregularities in a surface might make a part fit into a fixture incorrectly and require some time-consuming rework. Common causes tend to be systemic and require large overhauls. Special causes tend to be more isolated to a single process step .

The first step in removing special causes is identifying them. In effect, you turn them into assignable causes. Once a source of variation is identified, it simply becomes a matter of devoting resources to resolve the problem.


Lean Terms Leader Notes

One of the problems with continuous improvement is that the language can be murky at times. You may find that some people use special causes and assignable causes interchangeably. Special cause is a far more common term, though.

I prefer assignable cause, as it creates an important mental distinction. It implies that you…



Common Cause Variation Vs. Special Cause Variation

Every piece of data that is measured will show some degree of variation: no matter how much we try, we can never attain identical results for two different situations; each result will be different, even if the difference is slight. Variation may be defined as "the numerical value used to indicate how widely individuals in a group vary."

In other words, variance gives us an idea of how data is distributed about an expected value or the mean. A variance of zero indicates that your results are identical, an uncommon condition. A high variance shows that the data points are spread out from each other and from the mean, while a smaller variance indicates that the data points are closer to the mean. Variance is always nonnegative.
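A quick numerical illustration of these statements, in Python with arbitrary data:

    # Variance as the average squared deviation from the mean (population form).
    def variance(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    print(variance([10, 10, 10, 10]))  # 0.0  -> identical results
    print(variance([9, 10, 11, 10]))   # 0.5  -> points close to the mean
    print(variance([2, 10, 18, 10]))   # 32.0 -> points spread out; never negative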


Change is inevitable, even in statistics. You'll need to know what kind of variation affects your process, because the course of action you take will depend on the type of variance. There are two types: common cause variation and special cause variation. You'll need to know about common cause variation vs. special cause variation because they are two subjects tested on the PMP Certification and CAPM Certification exams.


Common cause variation, also referred to as "natural problems," "noise," or "random cause" variation, was a term coined by Harry Alpert in 1947. Common causes of variance are the usual, quantifiable, and historical variations in a system that are natural. Though variance is a problem, it is an inherent part of a process; variance will eventually creep in, and there is not much you can do about it. Specific actions cannot be taken to prevent this failure from occurring. It is ongoing, consistent, and predictable.

Characteristics of common causes variation are:

  • Variation that is predictable probabilistically
  • Phenomena that are active within the system
  • Irregular variation within a historical experience base
  • Lack of significance in individual high and low values

This variation usually lies within three standard deviations from the mean, where 99.73% of values are expected to be found. On a control chart, common causes are indicated by random points within the control limits. This kind of variation requires management action, since no immediate process fix can rectify it; you will have to make a fundamental change to reduce the number of common causes of variation. If there are only common causes of variation on your chart, your process is said to be "statistically stable."

When this is true of your chart, the process itself is fairly stable; your project will see no major surprises, and you will be able to continue process execution hassle-free.


Consider an employee who takes a little longer than usual to complete a specific task: he is given two days to do it and instead takes two and a half days. This is considered common cause variation; his completion time did not deviate very far from the mean, since you would have had to allow for the possibility that he could submit it a little late.

Here’s another example: you estimate 20 minutes to get ready and ten minutes to get to work. Instead, you take five minutes extra to get ready because you had to pack lunch and 15 additional minutes to get to work because of traffic. 

Other examples that relate to projects are inappropriate procedures, such as the lack of clearly defined standard procedures, poor working conditions, measurement errors, normal wear and tear, computer response times, etc. These are all examples of common cause variation.

Special Cause Variation, on the other hand, refers to unexpected glitches that affect a process. The term Special Cause Variation was coined by W. Edwards Deming and is also known as an “Assignable Cause.” These are variations that were not observed previously and are unusual, non-quantifiable variations.

These causes are sporadic, and they are a result of a specific change that is brought about in a process resulting in a chaotic problem. It is not usually part of your normal process and occurs out of the blue. Causes are usually related to some defect in the system or method. However, this failure can be corrected by making changes to affected methods, components, or processes.

Characteristics of special cause variation are:

  • New and unanticipated or previously neglected episode within the system
  • This kind of variation is usually unpredictable and even problematic
  • The variation has never happened before and is thus outside the historical experience base

On a control chart, special causes appear as points beyond the control limits, or as nonrandom patterns within them. Once identified on a chart, this type of problem needs to be found and addressed immediately so you can prevent it from recurring.


Let’s say you are driving to work, and you estimate arrival in 10 minutes every day. One day, it took you 20 minutes to arrive at work because you were caught in the traffic from an accident zone and were held up.

Examples relating to project management include machine malfunctions, computer crashes, power cuts, and so on. These kinds of random events that can happen during a project are examples of special cause variation.

One way to evaluate a project's health is to track the difference between the original project plan and what is actually happening. The use of control charts helps to differentiate between common cause variation and special cause variation, making it easier to decide when changes are needed.


This article has explained common cause variation and special cause variation, two important concepts in project management when it comes to data validation.


Module 8. Statistical quality control

BASIC CONCEPTS OF STATISTICAL QUALITY CONTROL

26.1  Introduction

From the early days of industrial production, the emphasis has been on turning out products of uniform quality by ensuring the use of similar raw materials, identical machines, and proper training of the operators. In spite of these efforts, causes of irregularity often crept in inadvertently. Besides, men and machines are not infallible, and they give rise to variation in the quality of the product. For keeping this variation within limits, the method used in earlier days was 100 per cent inspection at various stages of manufacturing.

It was in 1924 that Dr. W.A. Shewhart of Bell Telephone Laboratories, USA, developed a method based on statistical principles for controlling the quality of products during manufacturing, thus eliminating the need for 100 per cent inspection. This technique, which is meant to be an integral part of any production process, does not provide an automatic corrective action but acts as a sensor and signal for variation in quality. Therefore, the effectiveness of this method depends on the promptness with which a necessary corrective action is carried out on the process. The technique has since been developed by adding more and more charts to its armoury, as a result of its extensive use in industry during and after the Second World War. In this lesson, various terms used in the context of Statistical Quality Control (SQC) are illustrated.

26.2  Definitions of Various Terms Involved in Statistical Quality Control

The following terms are used to understand the concept of Statistical Quality Control

26.2.1  Quality

The most important word in the term 'Statistical Quality Control' is quality. By 'quality' we mean an attribute of the product that determines its fitness for use. Quality can be further defined as "composite product characteristics of engineering and manufacture that determine the degree to which the product in use will meet the expectations of the customer at reasonable cost." Quality means conformity with certain prescribed standards in terms of size, weight, strength, colour, taste, package, etc.

26.2.2  Quality characteristics

Quality of a product (or service) depends upon the various characteristics that the product possesses. For example, the Kulfi we buy should have the following characteristics:

(a) TS  (b) Sugar  (c) Flavour  (d) Body & Texture

All these individual characteristics constitute the quality of Kulfi. Of course, some of them are critical, and without them the Kulfi is not acceptable: for example, minimum TS, Sugar, and Body and Texture score are important, whereas other characteristics such as Colour and Flavour may not be. The quality characteristics may be defined as the "distinguishing" factors of the product in appearance, performance, length of life, dependability, reliability, durability, maintainability, taste, colour, usefulness, etc. Control of these quality characteristics in turn means control of the quality of the product.

26.2.3  Types of characteristics

There are two types of characteristics viz., variable characteristics and attribute characteristics.

26.2.3.1  Variable characteristic

Whenever a record is made of an actual measured quality characteristic, such as a dimension expressed in mm, cm, etc., quality is said to be expressed by variables. This type of quality characteristic includes, e.g., dimension (length, height, thickness, etc.), hardness, temperature, tensile strength, weight, moisture per cent, yield per cent, fat per cent, etc.

26.2.3.2  Attribute characteristic

Whenever a record shows only the number of articles conforming and the number of articles failing to conform to any specified requirements, it is said to be a record of data by 'attributes'; a sketch of a control chart for such attribute data follows the list below. These include:

  • Things judged by visual examination
  • Conformance judged by gauges
  • Number of defects in a given surface area, etc.
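Attribute data of this kind are commonly monitored with a p-chart (fraction nonconforming). The Python sketch below assumes equal sample sizes and uses invented inspection counts; it is an illustration, not part of the module text.

    # p-chart limits sketch for attribute data (fraction nonconforming),
    # assuming a constant sample size per inspection lot.
    def p_chart_limits(defectives, sample_size):
        p_bar = sum(defectives) / (len(defectives) * sample_size)
        half_width = 3 * (p_bar * (1 - p_bar) / sample_size) ** 0.5
        return max(0.0, p_bar - half_width), p_bar, p_bar + half_width

    defectives = [4, 2, 5, 3, 6, 2, 4, 3]  # nonconforming items per 100 inspected
    lcl, p_bar, ucl = p_chart_limits(defectives, 100)
    print(f"p-bar {p_bar:.3f}, limits ({lcl:.3f}, {ucl:.3f})")
    # -> p-bar 0.036, limits (0.000, 0.092)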

26.2.4  Control

Control means organizing the following steps:

  • Setting up standards of performance.
  • Comparing the actual observations against the standards.
  • Taking corrective action whenever necessary.
  • Modifying the standards if necessary.

26.2.5  Quality control

Quality control is a powerful productivity technique for effective diagnosis of lack of quality (or nonconformity to set standards) in any of the materials, processes, machines, or end products. It is essential that the end products possess the qualities that the consumer expects of them, for the progress of the industry depends on the successful marketing of products. Quality control ensures this by insisting on quality specifications all along the line, from the arrival of materials through each stage of processing to the final delivery of goods. Quality control, therefore, covers all the factors and processes of production, which may be broadly classified as follows:

  • Quality of materials: Material of good quality will result in smooth processing, thereby reducing waste and increasing output. It will also give a better finish to end products.
  • Quality of manpower: Trained and qualified personnel will give increased efficiency due to better quality production through the application of skill, and will also reduce production cost and waste.
  • Quality of machines: Better quality equipment will result in efficient working due to the scarcity of breakdowns, thus reducing the cost of defectives.
  • Quality of management: Good management is imperative for increased efficiency, harmony in relations, and growth of business and markets.

26.2.6  Chance and assignable causes of variation

Variation in the quality of the manufactured product in the repetitive process in industry is inherent and inevitable. These variations are broadly classified as being due to two causes: (i) chance causes, and (ii) assignable causes.

26.2.6.1  Chance causes

Some "stable pattern of variation" or "constant cause system" is inherent in any particular scheme of production and inspection. This pattern results from many minor causes that behave in a random manner. The variation due to these causes is beyond human control and cannot be prevented or eliminated under any circumstance. Such variation has to be allowed for within the stable pattern and is usually termed Allowable Variation. The range of such variation is known as the natural tolerance of the process.

26.2.6.2  Assignable causes

The second type of variation attributed to any production process is due to non-random, or so-called assignable, causes and is termed Preventable Variation. The assignable causes may creep in at any stage of the process, right from the arrival of raw materials to the final delivery of the goods.

Important assignable causes of variation include substandard or defective raw material, new techniques or operations, negligence of the operators, wrong or improper handling of machines, faulty equipment, and unskilled or inexperienced technical staff. These causes can be identified and eliminated, and they should be discovered before the process goes wrong, i.e., before the output becomes defective.

26.3  Statistical Quality Control

By Statistical Quality Control (SQC) we mean the various statistical methods used to maintain the quality of a continuous flow of manufactured goods. The main purpose of SQC is to devise statistical techniques that separate assignable causes from chance causes of variation, enabling remedial action to be taken wherever assignable causes are present. The elimination of assignable causes of erratic fluctuation is described as bringing a process under control: a production process is said to be in a state of statistical control if it is governed by chance causes alone, in the absence of assignable causes of variation.

One aim is to control the manufacturing process so that the proportion of defective items is not excessively large; this is known as process control. Another is to ensure that lots of manufactured goods do not contain an excessively large proportion of defective items; this is known as product (or lot) control. These are distinct problems, because even when the process is in control, so that the proportion of defectives in the entire output over a long period is not large, an individual lot of items may still be of unsatisfactory quality. Process control is achieved mainly through the technique of control charts, whereas product control is achieved through sampling inspection.

26.4  Stages of Production Process

Before production starts, a decision is necessary as to what is to be made. Next comes the actual manufacture of the product. Finally, it must be determined whether the product manufactured is what was intended. The quality of a manufactured product may therefore be viewed in terms of three functions: specification, production, and inspection.

26.4.1  Specification

This tells us what is to be produced and to what specification: it gives the dimensions and the limits within which they may vary. These specifications are laid down by the manufacturer.

26.4.2  Production

Here we compare what has actually been manufactured with what was intended.

26.4.3  Inspection

Here we examine, with the help of SQC techniques, whether the manufactured goods are within the specified limits, or whether there is any need to widen the specifications. SQC thus tells us the capabilities of the production process.

Statistical quality control is therefore a kit of tools that may influence decisions related to the functions of specification, production, or inspection. The effective use of SQC generally requires cooperation among those responsible for these three functions, and decisions at a level higher than any one of them. For this reason, the techniques should be understood at a management level that encompasses all three functions.

Assignable cause

Assignable causes of variation dominate among the known causes of routine variability. For this reason it is worth identifying the assignable cause of variation so that its impact on the process can be eliminated, assuming that project managers and members are fully aware of it. Assignable causes of variation are the result of events that are not part of the normal process. Examples of assignable causes of variability are (T. Kasse, p. 237):

  • incorrectly trained people
  • broken tools
  • failure to comply with the process

Identify data of assignable causes

The first step in planning data collection for assignable causes is to identify the causes and state the goals. This ensures that the assignable-cause data the project team gathers provide the answers needed to carry out the process improvement project efficiently and successfully. The data collected should be relevant, representative, and sufficient. When planning the data collection, the project team should also sketch and label the chart that will present the findings before actual data collection begins; this indicates what data need to be gathered (A. van Aartsengel, S. Kurtoglu, p. 464).

Types of data for assignable causes

There are two types of data for assignable causes: qualitative and quantitative. Qualitative data are derived from descriptions of observations or measures of characteristics of process results, expressed in narrative words and statements. Quantitative data on assignable causes are derived from observations or measures of process result characteristics expressed as measurable, numerical values (A. van Aartsengel, S. Kurtoglu, p. 464).

Determining the source of assignable causes of variation in an unstable process

If a process is unstable, the analyst must identify the sources of assignable cause variation. The source and the cause itself must be investigated and, in most cases, eliminated. Until all such causes are removed, the actual capability of the process cannot be determined, and the process will not work as planned. In some cases, however, assignable cause variability improves the result; the process must then be redesigned to build the change in (W. S. Davis, D. C. Yen, p. 76). Two wrong decisions are possible concerning the appearance of assignable cause variation: declaring a cause where none exists (or assessing it incorrectly), or failing to detect a real one (N. Möller, S. O. Hansson, J. E. Holmberg, C. Rollenhagen, p. 339).

Examples of Assignable cause

  • Poorly designed process: A poorly designed process can lead to variation due to inconsistency in the way the process is operated. For example, if a process requires a certain step to be done in a specific order, but that order is not followed, this can lead to variation in the results of the process.
  • Human error: Human error is another common cause of variation. Examples include incorrect data entry, incorrect calculations, incorrect measurements, incorrect assembly, and incorrect operation of machinery.
  • Poor quality materials: Poor quality materials can also lead to variation. For example, if a process requires a certain grade of material that is not provided, this can lead to variation in the results of the process.
  • Changes in external conditions: Changes in external conditions, such as temperature or humidity, can also cause variation in the results of a process.
  • Equipment malfunctions: Equipment malfunctions can also lead to variation. Examples include mechanical problems, electrical problems, and computer software problems.

Advantages of Assignable cause

Identifying the assignable causes of variation makes it possible to eliminate their impact on the process. The advantages include:

  • Improved product quality: By identifying and eliminating the assignable cause of variation, product quality will be improved, as the source of variability is removed.
  • Increased process efficiency: When the assignable cause of variation is identified and removed, the process will run more efficiently, as it will no longer be hampered by the source of variability.
  • Reduced costs: By eliminating the assignable cause of variation, the cost associated with the process can be reduced, as the need for additional resources and labour is eliminated.
  • Reduced waste: When the assignable cause of variation is identified and removed, the amount of waste produced in the process can be reduced, as there will be less variability in the output.
  • Improved customer satisfaction: By improving product quality and reducing waste, customer satisfaction will be increased, as customers will receive a higher quality product with less waste.

Limitations of Assignable cause

Despite the advantages of identifying assignable causes of variation, there are also a number of limitations that should be taken into account:

  • The difficulty of identifying the exact cause of variation, as there are often multiple potential causes and it is not always clear which is the most significant.
  • The fact that some assignable causes of variation are difficult to eliminate or control, such as machine malfunction or human error.
  • The costs associated with implementing changes to eliminate assignable causes of variation, such as purchasing new equipment or hiring more personnel.
  • The fact that some assignable causes of variation may be outside the scope of the project, such as economic or political factors.

Other approaches related to Assignable cause

One approach related to assignable causes is to identify the sources of variability that could potentially affect the process. These can include changes in the raw material, the process parameters, the environment, the equipment, and the operators.

  • Process improvement: By improving the process, the variability caused by the assignable cause can be reduced.
  • Control charts: Using control charts to monitor process performance can help in identifying the assignable causes of variation.
  • Design of experiments: Design of experiments (DOE) can be used to identify and quantify the impact of certain parameters on process performance.
  • Statistical Process Control (SPC): Statistical Process Control (SPC) is a tool used to identify, analyze, and control process variation.

In summary, there are several approaches related to assignable cause that can be used to reduce variability in a process. These include process improvement, control charts, design of experiments and Statistical Process Control (SPC). By utilizing these approaches, project managers and members can identify and eliminate the assignable cause of variation in a process.

  • Davis W. S., Yen D. C. (2019). The Information System Consultant's Handbook: Systems Analysis and Design, CRC Press, New York
  • Kasse T. (2004). Practical Insight Into CMMI, Artech House, London
  • Möller N., Hansson S. O., Holmberg J. E., Rollenhagen C. (2018). Handbook of Safety Principles, John Wiley & Sons, Hoboken
  • Van Aartsengel A., Kurtoglu S. (2013). Handbook on Continuous Improvement Transformation: The Lean Six Sigma Framework and Systematic Methodology for Implementation, Springer Science & Business Media, New York

Author: Anna Jędrzejczyk


Monday, August 17, 2015

Chance & Assignable Causes of Variation

Variation in the quality of a manufactured product in a repetitive industrial process is inherent and inevitable. These variations are broadly classified as i) chance (non-assignable) causes and ii) assignable causes.

i) Chance causes: In any manufacturing process it is not possible to produce goods of exactly the same quality; variation is inevitable. Certain small variation is natural to the process, being due to chance causes, and cannot be prevented. This variation is therefore called allowable.

ii) Assignable causes: This type of variation is due to non-random, so-called assignable causes and is termed preventable variation. Assignable causes may creep in at any stage of the process, right from the arrival of the raw materials to the final delivery of goods. Some of the important factors are: i) substandard or defective raw materials, ii) new techniques or operations, iii) negligence of the operators, iv) wrong or improper handling of machines, v) faulty equipment, vi) unskilled or inexperienced technical staff, and so on. These causes can be identified and eliminated, and are to be discovered before the production becomes defective.

SQC is a productivity enhancing and regulating technique (PERT) with three factors: i) management, ii) methods, iii) mathematics. Here, control is two-fold: controlling the process (process control) and controlling the finished products (product control).



Six Sigma Control Charts: An Ultimate Guide

  • Written by Contributing Writer
  • Updated on March 10, 2023


Welcome to the ultimate guide to Six Sigma control charts, where we explore the power of statistical process control and how it can help organizations improve quality, reduce defects, and increase profitability. Control charts are essential tools in the Six Sigma methodology, visually representing process performance over time and highlighting when a process is out of control.

In this comprehensive guide, we’ll delve into the different types of control charts, how to interpret them, how to use them to make data-driven decisions, and how to become a Lean Six Sigma expert.

Let’s get started on the journey to discover the transformative potential of Six Sigma control charts.

What is a Control Chart?

A control chart is a statistical tool used in quality control to monitor and analyze process variation. No process is free from variation, and it is vital to understand and manage this variation to ensure consistent and high-quality output. The control chart is designed to help visualize this variation over time and identify when a process is out of control.

The chart typically includes a central line, which represents the average or mean of the process data, and upper and lower control limits, which are set at a certain number of standard deviations from the mean. The control limits are usually set at three standard deviations from the mean, encompassing about 99.7 percent of the process data. If the process data falls within these control limits, the process is considered in control, and variation is deemed to be coming from common causes. If the data points fall outside these control limits, this indicates that there is a special cause of variation, and the process needs to be investigated and improved.

Control charts are commonly used in manufacturing processes to ensure that products meet quality standards, but they can be used in any process where variation needs to be controlled. They can be used to track various types of process data, such as measurements of product dimensions, defect rates, or cycle times.


Significance of Control Charts in Six Sigma

Control charts are an essential tool in the Six Sigma methodology to monitor and control process variation. Six Sigma is a data-driven approach to process improvement that aims to minimize defects and improve quality by identifying and eliminating the sources of variation in a process. The control chart helps to achieve this by providing a visual representation of the process data over time and highlighting any special causes of variation that may be present.

The Objective of Six Sigma Control Charts

The primary objective of using a control chart in Six Sigma is to ensure that a process is in a state of statistical control. This means that the process is stable and predictable, and any variation is due to common causes inherent in the process. The control chart helps to achieve this by providing a graphical representation of the process data that shows the process mean and the upper and lower control limits. The process data points should fall within these limits if the process is in control.

Detecting Special Cause Variation

One of the critical features of a Six Sigma control chart is its ability to detect special cause variation, also known as assignable cause variation. Special cause variation is due to factors not inherent in the process and can be eliminated by taking corrective action. The control chart helps detect special cause variation by highlighting data points outside control limits.

Estimating Process Average and Variation

Another objective of a control chart is to estimate the process average and variation. The central line represents the process average on the chart, and the spread of the data points around the central line represents the variation. By monitoring the process over time and analyzing the control chart, process improvement teams can gain a deeper understanding of the process and identify areas for improvement.

Measuring Process Capability with Cp and Cpk

Process capability indices, such as Cpk and Cp, help to measure how well a process can meet the customer’s requirements. Here are some details on how to check process capability using Cp and Cpk:

  • Cp measures a process’s potential capability by comparing the data’s spread with the process specification limits.
  • If Cp is greater than 1, it indicates that the process can meet the customer’s requirements.
  • However, Cp doesn’t account for any process shift or centering, so it may not accurately reflect the process’s actual performance.
  • Cpk measures the actual capability of a process by considering both the spread of the data and the process’s centering or shift.
  • Cpk is a more accurate measure of a process’s performance than Cp because it accounts for both the spread and centering.
  • A Cpk value of at least 1.33 is typically considered acceptable, indicating that the process can meet the customer’s requirements.

It’s important to note that while Cp and Cpk provide valuable information about a process’s capability, they don’t replace the need for Six Sigma charts and other statistical tools to monitor and improve process performance.
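
To make the formulas concrete, here is a minimal Python sketch that computes Cp and Cpk under the usual assumption of approximately normal data; the shaft-diameter readings and specification limits are invented for illustration.

```python
import statistics

def process_capability(data, lsl, usl):
    """Estimate Cp and Cpk from sample data and specification limits."""
    mean = statistics.mean(data)
    sigma = statistics.stdev(data)  # sample standard deviation
    cp = (usl - lsl) / (6 * sigma)                   # potential capability
    cpk = min(usl - mean, mean - lsl) / (3 * sigma)  # actual capability
    return cp, cpk

# Hypothetical shaft diameters (mm) with spec limits 9.90-10.10 mm
diameters = [10.01, 9.98, 10.03, 9.99, 10.00, 10.02, 9.97, 10.01, 10.00, 9.99]
cp, cpk = process_capability(diameters, lsl=9.90, usl=10.10)
print(f"Cp = {cp:.2f}, Cpk = {cpk:.2f}")
```

Because Cpk uses the distance from the mean to the nearer specification limit, it equals Cp only when the process is perfectly centred.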


Steps to Create a Six Sigma Control Chart

To create a Six Sigma chart, you can follow these general steps:

  • Gather Data: Collect data related to the process or product you want to monitor and improve.
  • Determine Data Type: Identify the type of data you have, whether it is continuous, discrete, attribute, or variable.
  • Calculate Statistical Measures: Calculate basic statistical measures like mean, standard deviation, range, etc., depending on the data type.
  • Set Control Limits: Determine the Upper Control Limit (UCL) and Lower Control Limit (LCL) using statistical formulas and tools.
  • Plot Data : Plot the data points on the control chart, and draw the control limits.
  • Analyze the Chart: Analyze the chart to identify any special or common causes of variation, and take corrective actions if necessary.
  • Update the Chart: Continuously monitor the process and update the chart with new data points.

You can use software tools like Minitab, Excel, or other statistical software packages to create a control chart. These tools will automate most of the above steps and help you easily create a control chart.
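
As a rough illustration of these steps, the Python sketch below builds an individuals chart from hypothetical data. For simplicity it estimates sigma from the overall sample standard deviation; textbook I-MR charts instead estimate sigma from the average moving range, so treat this as a sketch rather than a production implementation.

```python
import statistics
import matplotlib.pyplot as plt

# Step 1: gather data (hypothetical daily measurements)
values = [4.2, 4.5, 4.1, 4.8, 4.4, 4.6, 4.3, 4.7, 4.2, 4.5,
          4.4, 4.9, 4.3, 4.6, 4.5, 4.1, 4.4, 4.8, 4.2, 4.6]

# Steps 3-4: statistical measures and 3-sigma control limits
mean = statistics.mean(values)
sigma = statistics.stdev(values)
ucl, lcl = mean + 3 * sigma, mean - 3 * sigma

# Step 5: plot the points, centre line, and control limits
plt.plot(values, marker="o")
plt.axhline(mean, color="green", label="centre line")
plt.axhline(ucl, color="red", linestyle="--", label="UCL")
plt.axhline(lcl, color="red", linestyle="--", label="LCL")
plt.legend()
plt.title("Individuals control chart (sketch)")
plt.show()

# Step 6: flag any points outside the limits for investigation
out_of_control = [(i, v) for i, v in enumerate(values) if not lcl <= v <= ucl]
print("Out-of-control points:", out_of_control)
```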

Know When to Use Control Charts

A Six Sigma control chart can be used to analyze the Voice of the Process (VoP) at the beginning of a project to determine whether the process is stable and predictable. This helps to identify any issues or potential problems that may arise during the project, allowing for corrective action to be taken early on. By analyzing the process data using a control chart, we can also identify the cause of any variation and address the root cause of the issue.

Here are some specific scenarios when you may want to use a control chart:

  • At the start of a project: A control chart can help you establish a baseline for the process performance and identify potential areas for improvement.
  • During process improvement: A control chart can be used to track the effectiveness of changes made to the process and identify any unintended consequences.
  • To monitor process stability: A control chart can be used to verify whether the process is stable. If the process is unstable, you may need to investigate and make necessary improvements.
  • To identify the source of variability: A control chart can help you identify the source of variation in the process, allowing you to take corrective actions.

Four Process States in a Six Sigma Chart

Control charts can be used to identify four process states:

  • The Ideal state: The process is in control, and all data points fall within the control limits.
  • The Threshold state: Although data points are in control, there are some non-conformances over time.
  • The Brink of Chaos state: The process is in control but is on the edge of committing errors.
  • Out of Control state: The process is unstable, and unpredictable non-conformances happen. In this state, it is necessary to investigate and take corrective actions.


What are the Different Types of Control Charts in Six Sigma?

Control charts are an essential tool in statistical process control, and the appropriate chart depends on the type of data being analyzed.

The seven Six Sigma chart types include: I-MR Chart, X Bar R Chart, X Bar S Chart, P Chart, NP Chart, C Chart, and U Chart. Each chart has its specific use and is suitable for analyzing different data types.

I-MR Chart

The I-MR Chart, or Individual-Moving Range Chart, analyzes one process variable at a time. It is suitable for continuous data and is used when the sample size is one. It consists of two charts: one for individual values (I Chart) and another for the moving range (MR Chart).

X Bar R Chart

The X Bar R Chart is used to analyze process data when the sample size is more than one. It consists of two charts: one for the sample averages (X Bar Chart) and another for the sample ranges (R Chart). It is suitable for continuous data types.

X Bar S Chart

The X Bar S Chart is similar to the X Bar R Chart but uses the sample standard deviation instead of the range. It is suitable for continuous data types. It is used when the process data is normally distributed, and the sample size is more than one.

P Chart

The P Chart, or Proportion Chart, is used to analyze the proportion of nonconforming units in a sample. It is used when the data is binary (conforming or nonconforming) and the sample size is large.

NP Chart

The NP Chart is similar to the P Chart but is used when the sample size is fixed. It monitors the number of nonconforming units in a sample.

C Chart

The C Chart, also known as the Count Chart, is used to analyze the number of defects in a sample. It is used when the data is discrete (count data) and the sample size is large.

U Chart

The U Chart, or Unit Chart, is used to analyze the number of defects per unit in a sample. It is used when the sample size is variable and the data is discrete.

Factors to Consider while Selecting the Right Six Sigma Chart Type

Selecting the proper Six Sigma control chart requires careful consideration of the specific characteristics of the data and the intended use of the chart. One must consider the type of data being collected, the frequency of data collection, and the purpose of the chart.

Continuous data requires different charts than attribute data. If the sample size is small, an individuals chart may be more appropriate than an X-Bar chart; if the data is measured in subgroups, an X-Bar chart may be more appropriate than an individuals chart. The purpose, whether monitoring an existing process or evaluating a new one, also affects the selection of the appropriate control chart.

How and Why a Six Sigma Chart is Used as a Tool for Analysis

Control charts help to focus on detecting and monitoring the process variation over time. They help to keep an eye on the pattern over a period of time, identify when some special events interrupt normal operations, and reflect the improvement in the process while running the project. Six Sigma control charts are considered one of the best tools for analysis because they allow us to:

  • Monitor progress and learn continuously
  • Quantify the capability of the process
  • Evaluate the special causes happening in the process
  • Separate the difference between the common causes and special causes

Benefits of Using Control Charts

  • Early warning system: Control charts serve as an early warning system that helps detect potential issues before they become major problems.
  • Reduces errors: By monitoring the process variation over time, control charts help identify and reduce errors, improving process performance and quality.
  • Process improvement: Control charts allow for continuous monitoring of the process and identifying areas for improvement, resulting in better process performance and increased efficiency.
  • Data-driven decisions: Control charts provide data-driven insights that help to make informed decisions about the process, leading to better outcomes.
  • Saves time and resources: Six Sigma control charts can help to save time and resources by detecting issues early on, reducing the need for rework, and minimizing waste.

Who Can Benefit from Using Six Sigma Charts

  • Manufacturers: Control charts are widely used in manufacturing to monitor and control process performance, leading to improved quality, increased efficiency, and reduced waste.
  • Service providers: Service providers can use control charts to monitor and improve service delivery processes, leading to better customer satisfaction and increased efficiency.
  • Healthcare providers: Control charts can be used in healthcare to monitor and improve patient outcomes and reduce medical errors.
  • Project managers: Project managers can use control charts to monitor and improve project performance, leading to better project outcomes and increased efficiency.


Some Six Sigma Control Chart Tips to Remember

Here are some tips to keep in mind when using Six Sigma charts:

  • Never include specification lines on a control chart.
  • Collect data in the order of production, not from inspection records.
  • Prioritize data collection related to critical product or process parameters rather than ease of collection.
  • Use at least 6 points in the range of a control chart to ensure adequate discrimination.
  • Control limits are different from specification limits.
  • Points outside the control limits indicate special causes, such as shifts and trends.
  • Patterns inside the limits, such as runs, trends, or shifts, can also indicate instability.
  • A control chart serves as an early warning system, signalling when a process is heading out of control so that preventive action can be taken.
  • Assume LCL as 0 if it is negative.
  • Use two charts for continuous data and a single chart for discrete data.
  • Don’t recalculate control limits if a special cause is removed and the process is not changing.
  • Consistent performance doesn’t necessarily mean meeting customer expectations.

What are Control Limits?

Control limits are an essential aspect of statistical process control (SPC) and are used to analyze the performance of a process. Control limits represent the typical range of variation in a process and are determined by analyzing data collected over time.

Control limits act as a guide for process improvement by showing what the process is currently doing and what it should be doing. They provide a standard of comparison to identify when the process is out of control and needs attention. Control limits also indicate that a process event or measurement is likely to fall within that limit, which helps to identify common causes of variation. By distinguishing between common causes and special causes of variation, control limits help organizations to take appropriate action to improve the process.

Calculating Control Limits

The 3-sigma method is the most commonly used method to calculate control limits.

Step 1: Determine the Standard Deviation

The standard deviation of the data is used to calculate the control limits. Calculate the standard deviation of the data set.

Step 2: Calculate the Mean

Calculate the mean of the data set.

Step 3: Find the Upper Control Limit

Add three standard deviations to the mean to find the Upper Control Limit. This is the upper limit beyond which a process is considered out of control.

Step 4: Find the Lower Control Limit

To find the Lower Control Limit, subtract three standard deviations from the mean. This is the lower limit beyond which a process is considered out of control.
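
Putting the four steps together, here is a minimal Python sketch, assuming the individual measurements are already collected in a list:

```python
import statistics

def three_sigma_limits(data):
    sigma = statistics.stdev(data)  # Step 1: standard deviation
    mean = statistics.mean(data)    # Step 2: mean
    ucl = mean + 3 * sigma          # Step 3: upper control limit
    lcl = mean - 3 * sigma          # Step 4: lower control limit
    return lcl, mean, ucl

lcl, mean, ucl = three_sigma_limits([4.2, 4.5, 4.1, 4.8, 4.4, 4.6, 4.3, 4.7])
print(f"LCL = {lcl:.2f}, mean = {mean:.2f}, UCL = {ucl:.2f}")
```

Recall the earlier tip: if the computed LCL is negative for a quantity that cannot be negative, such as a defect count, it is conventionally set to 0.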

Importance of Statistical Process Control Charts

Statistical process control charts play a significant role in the Six Sigma methodology as they enable measuring and tracking process performance, identifying potential issues, and determining corrective actions.

Six Sigma control charts allow organizations to monitor process stability and make informed decisions to improve product quality. Understanding how these charts work is crucial in using them effectively. Control charts are used to plot data against time, allowing organizations to detect variations in process performance. By analyzing these variations, businesses can identify the root causes of problems and implement corrective actions to improve the overall process and product quality.

How to Interpret Control Charts?

Interpreting control charts involves analyzing the data points for patterns such as trends, spikes, outliers, and shifts.

These patterns can indicate potential problems with the process that require corrective actions. The expected behavior of an in-control process on a Six Sigma chart is data points fluctuating randomly around the mean, with roughly equal numbers above and below; this is common cause variation. If the process is in control, all data points should also fall within the upper and lower control limits of the chart. By monitoring and analyzing the trends and outliers in the data, control charts can provide valuable insights into the performance of a process and identify areas for improvement.
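
Patterns inside the limits can be screened automatically with run rules. The sketch below implements one common rule, flagging eight successive points on the same side of the centre line as a probable shift; the exact run length varies between references, so the threshold here is illustrative.

```python
def detect_shift(values, center, run_length=8):
    """Return the index where `run_length` consecutive points fall on
    the same side of the centre line, or None if no such run exists."""
    run, side = 0, 0
    for i, value in enumerate(values):
        s = 1 if value > center else -1 if value < center else 0
        run = run + 1 if s != 0 and s == side else (1 if s != 0 else 0)
        side = s
        if run >= run_length:
            return i
    return None

# A shift upward after the first five points triggers the rule
data = [5.0, 4.9, 5.1, 4.8, 5.0, 5.3, 5.2, 5.4, 5.3, 5.2, 5.5, 5.3, 5.4]
print(detect_shift(data, center=5.0))
```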

Elements of a Control Chart

Six Sigma control charts consist of three key elements.

  • A centerline representing the average value of the process output is established.
  • Upper and lower control limits (UCL and LCL) are set to indicate the acceptable range of variation for the process.
  • Data points representing the actual output of the process over time are plotted on the chart.

By comparing the data points to the control limits and analyzing any trends or patterns, organizations can identify when a process is going out of control and take corrective actions to improve the process quality.

What is Subgrouping in Control Charts?

Subgrouping is a method of using Six Sigma control charts to analyze data from a process. It involves organizing data into subgroups that have the greatest similarity within them and the greatest difference between them. Subgrouping aims to reduce the number of potential variables and determine where to expend improvement efforts.

Within-Subgroup Variation

  • The range represents the within-subgroup variation.
  • The R chart displays changes in the within-subgroup dispersion of the process.
  • The R chart determines if the variation within subgroups is consistent.
  • If the range chart is out of control, the system is not stable, and the source of the instability must be identified.

Between-Subgroup Variation

  • The difference in subgroup averages represents between-subgroup variation.
  • The X Bar chart shows any changes in the average value of the process.
  • The X Bar chart determines if the variation between subgroup averages is greater than the variation within the subgroup.

X Bar Chart Analysis

  • If the X Bar chart is in control, the variation “between” is lower than the variation “within.”
  • If the X Bar chart is not in control, the variation “between” is greater than the variation “within.”
  • The X Bar chart analysis is similar to the graphical analysis of variance (ANOVA) and provides a helpful visual representation to assess stability.
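
A small Python sketch of these X Bar and R calculations, using the standard control-chart constants for subgroups of size five (A2 = 0.577, D3 = 0, D4 = 2.114); the measurements are invented for illustration:

```python
import statistics

# Hypothetical measurements collected in subgroups of size 5
subgroups = [
    [5.02, 4.98, 5.01, 5.00, 4.99],
    [5.03, 5.01, 4.97, 5.00, 5.02],
    [4.99, 5.00, 5.01, 4.98, 5.03],
    [5.01, 5.02, 5.00, 4.99, 5.01],
]

A2, D3, D4 = 0.577, 0.0, 2.114  # constants for subgroup size n = 5

xbars = [statistics.mean(g) for g in subgroups]   # subgroup averages
ranges = [max(g) - min(g) for g in subgroups]     # within-subgroup ranges
xbarbar, rbar = statistics.mean(xbars), statistics.mean(ranges)

# X Bar chart limits reflect between-subgroup variation
xbar_ucl, xbar_lcl = xbarbar + A2 * rbar, xbarbar - A2 * rbar
# R chart limits reflect within-subgroup variation
r_ucl, r_lcl = D4 * rbar, D3 * rbar

print(f"X Bar chart: LCL={xbar_lcl:.3f}, CL={xbarbar:.3f}, UCL={xbar_ucl:.3f}")
print(f"R chart:     LCL={r_lcl:.3f}, CL={rbar:.3f}, UCL={r_ucl:.3f}")
```

In practice the R chart is read first: if the within-subgroup variation is unstable, the X Bar limits are not trustworthy.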

Benefits of Subgrouping in Six Sigma Charts

  • Subgrouping helps identify the sources of variation in the process.
  • It reduces the number of potential variables.
  • It helps determine where to expend improvement efforts.
  • Subgrouping ensures consistency in the within-subgroup variation.
  • It provides a graphical representation of variation and stability in the process.


Master the Knowledge of Control Charts For a Successful Career in Quality Management

Control charts are a powerful tool for process improvement in the Six Sigma methodology. By monitoring process performance over time, identifying patterns and trends, and taking corrective action when necessary, organizations can improve their processes and increase efficiency, productivity, and quality. Understanding the different types of control charts, their components, and their applications is essential for successful implementation.

A crystal-clear understanding of Six Sigma control charts is essential for aspiring Lean Six Sigma experts because it allows them to understand how to monitor process performance and identify areas of improvement. By understanding when and how to use control charts, Lean Six Sigma experts can effectively identify and track issues within a process and improve it for better performance.

Becoming Six Sigma-certified is an excellent way for an aspiring Lean Six Sigma Expert to gain the necessary skills and knowledge to excel in the field. Additionally, Six Sigma certification can provide you with the tools you need to stay on top of the latest developments in the field, which can help you stay ahead of the competition.

The meaning of variation to healthcare managers, clinical and health-services researchers, and individual patients

  • Duncan Neuhauser 1 ,
  • Lloyd Provost 2 ,
  • Bo Bergman 3
  • 1 Department of Epidemiology and Biostatistics, Case Western Reserve University, Cleveland, Ohio, USA
  • 2 Associates in Process Improvement, Austin, Texas, USA
  • 3 Centre for Health Improvement, Chalmers University of Technology, Gothenburg, Sweden
  • Correspondence to Charles Elton Blanchard Professor Duncan Neuhauser, Department of Epidemiology and Biostatistics, Medical School, Case Western Reserve University, 10900 Euclid Ave, Cleveland Ohio 44106-4249, USA; dvn{at}case.edu

Healthcare managers, clinical researchers and individual patients (and their physicians) manage variation differently to achieve different ends. First, managers are primarily concerned with the performance of care processes over time. Their time horizon is relatively short, and the improvements they are concerned with are pragmatic and ‘holistic.’ Their goal is to create processes that are stable and effective. The analytical techniques of statistical process control effectively reflect these concerns. Second, clinical and health-services researchers are interested in the effectiveness of care and the generalisability of findings. They seek to control variation by their study design methods. Their primary question is: ‘Does A cause B, everything else being equal?’ Consequently, randomised controlled trials and regression models are the research methods of choice. The focus of this reductionist approach is on the ‘average patient’ in the group being observed rather than the individual patient working with the individual care provider. Third, individual patients are primarily concerned with the nature and quality of their own care and clinical outcomes. They and their care providers are not primarily seeking to generalise beyond the unique individual. We propose that the gold standard for helping individual patients with chronic conditions should be longitudinal factorial design of trials with individual patients. Understanding how these three groups deal differently with variation can help appreciate these three approaches.

  • Control charts
  • evidence-based medicine
  • quality of care
  • statistical process control

This is an open-access article distributed under the terms of the Creative Commons Attribution Non-commercial License, which permits use, distribution, and reproduction in any medium, provided the original work is properly cited, the use is non commercial and is otherwise in compliance with the license. See: http://creativecommons.org/licenses/by-nc/2.0/ and http://creativecommons.org/licenses/by-nc/2.0/legalcode .

https://doi.org/10.1136/bmjqs.2010.046334


Introduction

Health managers, clinical researchers, and individual patients need to understand and manage variation in healthcare processes in different time frames and in different ways. In short, they ask different questions about why and how healthcare processes and outcomes change (table 1). Confusing the needs of these three stakeholders results in misunderstanding.

Table 1. Meaning of variation to managers, researchers and individual patients: questions, methods and time frames

Health managers

Our extensive experience in working with healthcare managers has taught us that their primary goal is to maintain and improve the quality of care processes and outcomes for groups of patients. Ongoing care and its improvement are temporal, so in their situation, learning from variation over time is essential. Data are organised over time to answer the fundamental management question: is care today as good as or better than it was in the past, and how likely is it to be better tomorrow? In answering that question, it becomes crucial to understand the difference between common-cause and special-cause variation (as will be discussed later). Common-cause variation appears as random variation in all measures from healthcare processes. 1 Special-cause variation appears as the effect of causes outside the core processes of the work. Management can reduce this variation by enabling the easy recognition of special-cause variation and by changing healthcare processes—by supporting the use of clinical practice guidelines, for example—but common-cause variation can never be eliminated.

The magnitude of common-cause variation creates the upper and lower control limits in Shewhart control charts. 2–5 Such charts summarise the work of health managers well. Figure 1 shows a Shewhart control chart (p-chart) developed by a quality-improvement team whose aim was to increase compliance with a new care protocol. The clinical records of eligible patients discharged (45–75 patients) were evaluated each week by the team, and records indicating that the complete protocol was followed were identified. The baseline control chart showed a stable process with a centre line (average performance) of 38% compliance. The team analysed the aspects of the protocol that were not followed and developed process changes to make it easier to complete these particular tasks. After successfully adapting the changes to the local environment (indicated by weekly points above the upper control limit in the ‘Implementing Changes’ period), the team formally implemented the changes in each unit. The team continued to monitor the process and eventually developed updated limits for the chart. The updated chart indicated a stable process averaging 83%.

Figure 1. Annotated Shewhart control chart—using protocol.

This control chart makes it clear that a stable but inferior process was operating for the first 11 weeks and, by inference, probably before that. The annotated changes (testing, adapting and implementing new processes of care) are linked to designed tests of change which are special (assignable) causes of variation, in this case, to improvement after week 15, after which a new better stable process has taken hold. Note that there is common-cause (random) variation in both the old and improved processes.

After updating the control limits, the chart reveals a new stable process with no special-cause variation, which is to say, no points above or below the control limits (the dotted lines). Note that the change after week 15 cannot easily be explained by chance (random, or common-cause, variation), since the probability of 13 points in a row occurring by chance above the baseline centre line is one divided by 2 to the 13th power. This is the same likelihood that in flipping a coin 13 times, it will come up heads every time. This level of statistical power to exclude randomness as an explanation is not to be found in randomised controlled trials (RCTs). Although there is no hard-and-fast rule about the number of observations over time needed to demonstrate process stability and establish change, we believe a persuasive control chart requires 20–30 or more observations.
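
To spell out the arithmetic: in a stable process each point independently has probability 1/2 of falling above the centre line, so the chance of 13 consecutive points above it is (1/2)^13 = 1/8192, or roughly 0.012%.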

The manager's task demonstrates several important characteristics. First is the need to define the key quality characteristics, and choose among them for focused improvement efforts. The choice should be made based on the needs of patients and families. The importance of these quality characteristics to those being served means that speed in learning and improvement is important. Indeed, for the healthcare manager, information for improvement must be as rapid as possible (in real time). Year-old research data are not very helpful here; just-in-time performance data in the hands of the decision-makers provide a potent opportunity for rapid improvement. 6

Second, managerial change is holistic; that is, every element of an intervention that might help to improve and can be done is put to use, sometimes incrementally, but simultaneously if need be. Healthcare managers are actively working to promote measurement of process and clinical outcomes, take problems in organisational performance seriously, consider the root causes of those problems, encourage the formation of problem solving clinical micro-system teams and promote the use of multiple, evolving Plan–Do–Study–Act (PDSA) tests of change.

This kind of improvement reasoning can be applied to a wide range of care processes, large and small. For example, good surgery is the appropriate combination of hundreds of individual tasks, many of which could be improved in small ways. Aggregating these many smaller changes may result in important, observable improvement over time. The protocol-driven, randomised trial research approach is a powerful tool for establishing efficacy but has limitations for evaluating and improving such complex processes as surgery, which are continually and purposefully changing over time. The realities of clinical improvement call for a move from after-the-fact quality inspection to building quality measures into medical information systems, thereby creating real-time quality data for providers to act upon. Caring for populations of similar patients in similar ways (economies of scale) can be of particular value, because the resulting large numbers and process stability can help rapidly demonstrate variation in care processes 7 ; very tight control limits (minimal common-cause variation) allow special-cause variation to be detected more quickly.

Clinical and health-services researchers

While quality-management thinking tends towards the use of data plotted over time in control-chart format, clinical researchers think in terms of true experimental methods, such as RCTs. Health-services researchers, in contrast, think in terms of regression analysis as their principal tool for discovering explainable variation in processes and outcomes of care. The data that both communities of researchers use are generally collected during fixed periods of time, or combined across time periods; neither is usually concerned with the analysis of data over time.

Take, for example, the question of whether age and sex are associated with the ability to undertake early ambulation after hip surgery. Clinical researchers try to control for such variables through the use of entry criteria into a trial, and random assignment of patients to experimental or control group. The usual health-services research approach would be to use a regression model to predict the outcome (early ambulation), over hundreds of patients using age and sex as independent variables. Such research could show that age and sex predict outcomes and are statistically significant, and that perhaps 10% of the variance is explained by these two independent variables. In contrast, quality-improvement thinking is likely to conclude that 90% of the variance is unexplained and could be common-cause variation. The health-services researcher is therefore likely to conclude that if we measured more variables, we could explain more of this variance, while improvement scientists are more likely to conclude that this unexplained variance is a reflection of common-cause variation in a good process that is under control.

The entry criteria into RCTs are carefully defined, which makes it a challenge to generalise the results beyond the kinds of patients included in such studies. Restricted patient entry criteria are imposed to reduce variation in outcomes unrelated to the experimental intervention. RCTs focus on the difference between point estimates of outcomes for entire groups (control and experimental), using statistical tests of significance to show that differences between the two arms of a trial are not likely to be due to chance.

Individual patients and their healthcare providers

The question an individual patient asks is different from those asked by manager and researcher, namely ‘How can I get better?’ The answer is unique to each patient; the question does not focus on generalising results beyond this person. At the same time, the question the patient's physician is asking is whether the group results from the best clinical trials will apply in this patient's case. This question calls for a different inferential approach. 8–10 The cost of projecting general findings to individual patients could be substantial, as described below.

Consider the implications of a drug trial in which 100 patients taking a new drug and 100 patients taking a placebo are reported as successful because 25 drug takers improved compared with 10 controls. This difference is shown as not likely to be due to chance. (The drug company undertakes a multimillion dollar advertising campaign to promote this breakthrough.) However, on closer examination, the meaning of these results for individual patients is not so clear. To begin with, 75 of the patients who took the drug did not benefit. And among those 25 who benefited, some, perhaps 15, responded extremely well, while the size of the benefit in the other 10 was much smaller. To have only the 15 ‘maximum responders’ take this drug instead of all 100 could save the healthcare system 85% of the drug's costs (as well as reduce the chance of unnecessary adverse drug effects); those ‘savings’ would, of course, also reduce the drug company's sales proportionally. These considerations make it clear that looking at more than group results could potentially make an enormous difference in the value of research studies, particularly from the point of view of individual patients and their providers.

In light of the above concerns, we propose that the longitudinal factorial study design should be the gold standard of evidence for efficacy, particularly for assessing whether interventions whose efficacy has been established through controlled trials are effective in individual patients for whom they might be appropriate ( box 1 ). Take the case of a patient with hypertension who measures her blood pressure at least twice every day and plots these numbers on a run chart. Through this informal observation, she has learnt about several factors that result in the variation in her blood pressure readings: time of day, the three different hypertension medicines she takes (not always regularly), her stress level, eating salty French fries, exercise, meditation (and, in her case, saying the rosary), and whether she slept well the night before. Some of these factors she can control; some are out of her control.

Longitudinal factorial design of experiments for individual patients

The six individual components of this approach are not new, but in combination they are new 8 9

One patient with a chronic health condition; sometimes referred to as an ‘N-of-1 trial.’

Care processes and health status are measured over time. These could include daily measures over 20 or more days, with the patient day as the unit of analysis.

Whenever possible, data are numerical rather than simple clinical observation and classification.

The patient is directly involved in making therapeutic changes and collecting data.

Two or more inputs (factors) are experimentally and concurrently changed in a predetermined fashion.

Therapeutic inputs are added or deleted in a predetermined, systematic way. For example: on day 1, drug A is taken; on day 2, drug B; on day 3, drug A and B; day 4, neither. For the next 4 days, this sequence could be randomly reordered.

Since she is accustomed to monitoring her blood pressure over time, she is in an excellent position to carry out an experiment that would help her optimise the effects of these various influences on her hypertension. Working with her primary care provider, she could, for example, set up a table of randomly chosen dates to make each of several of these changes each day, thereby creating a systematically predetermined mix of these controllable factors over time. This factorial design allows her to measure the effects of individual inputs on her blood pressure, and even interactions among them. After an appropriate number of days (perhaps 30 days, depending on the trade-off between urgency and statistical power), she might conclude that one of her three medications has no effect on her hypertension, and she can stop using it. She might also find that the combination of exercise and consistently low salt intake is as effective as either of the other two drugs. Her answers could well be unique to her. Planned experimental interventions involving single patients are known as ‘N-of-1’ trials, and hundreds have been reported. 10 Although longitudinal factorial design of experiments has long been used in quality engineering, as of 2005 there appears to have been only one published example of its use for an individual patient. 8 9 This method of investigation could potentially become widely used in the future to establish the efficacy of specific drugs for individual patients, 11 and perhaps even required, particularly for very expensive drug therapies for chronic conditions. Such individual trial results could be combined to obtain generalised knowledge.

This method can be used to show (1) the independent effect of each input on the outcome, (2) the interaction effect between the inputs (perhaps neither drug A or B is effective on its own, but in combination they work well), (3) the effect of different drug dosages and (4) the lag time between treatment and outcome. This approach will not be practical if the outcome of interest occurs years later. This method will be more practical with patient access to their medical record where they could monitor all five of Bergman's core health processes. 12
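
To make the mechanics concrete, here is a hypothetical Python sketch of such a trial: two binary factors are assigned in a randomly ordered full-factorial schedule over 20 patient-days, and main effects are estimated as simple differences of means. The factor names, simulated effect sizes, and analysis are invented for illustration and are not drawn from the trials cited above.

```python
import itertools
import random
import statistics

random.seed(1)

# Full-factorial schedule over two binary factors, repeated and shuffled
conditions = list(itertools.product([0, 1], repeat=2))  # (drug_a, exercise)
schedule = conditions * 5                               # 20 patient-days
random.shuffle(schedule)

def simulated_bp(drug_a, exercise):
    """Stand-in for the patient's measured systolic BP on one day."""
    return 150 - 8 * drug_a - 5 * exercise + random.gauss(0, 3)

readings = [(a, e, simulated_bp(a, e)) for a, e in schedule]

def main_effect(readings, factor_index):
    """Difference in mean outcome with the factor on versus off."""
    on = [bp for *factors, bp in readings if factors[factor_index] == 1]
    off = [bp for *factors, bp in readings if factors[factor_index] == 0]
    return statistics.mean(on) - statistics.mean(off)

print("Drug A main effect:", round(main_effect(readings, 0), 1))
print("Exercise main effect:", round(main_effect(readings, 1), 1))
```

The same bookkeeping extends to interaction effects by comparing the four cell means.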

Understanding variation is one of the cornerstones of the science of improvement

This broad understanding of variation, which is based on the work of Walter Shewhart in the 1920s, goes well beyond such simple issues as making an intended departure from a guideline or recognising a meaningful change in the outcome of care. It encompasses more than good or bad variation (meeting a target). It is concerned with more than the variation found by researchers in random samples from large populations.

Everything we observe or measure varies. Some variation in healthcare is desirable, even essential, since each patient is different and should be cared for uniquely. New and better treatments, and improvements in care processes result in beneficial variation. Special-cause variation should lead to learning. The ‘Plan–Do–Study’ portion of the Shewhart PDSA cycle can promote valuable change.

The ‘act’ step in the PDSA cycle represents the arrival of stability after a successful improvement has been made. Reducing unintended, and particularly harmful, variation is therefore a key improvement strategy. The more variation is controlled, the easier it is to detect changes that are not explained by chance. Stated differently, narrow limits on a Shewhart control chart make it easier and quicker to detect, and therefore respond to, special-cause variation.

The goal of statistical thinking in quality improvement is to make the available statistical tools as simple and useful as possible in meeting the primary goal, which is not mathematical correctness, but improvement in both the processes and outcomes of care. It is not fruitful to ask whether statistical process control, RCTs, regression equations or longitudinal factorial design of experiments is best in some absolute sense. Each is appropriate for answering different questions.

Forces driving this new way of thinking

The idea of reducing unwanted variation in healthcare represents a major shift in thinking, and it will take time to be accepted. Forces for this change include the computerisation of medical records leading to public reporting of care and outcome comparisons between providers and around the world. This in turn will promote pay for performance, and preferred provider contracting based on guideline use and good outcomes. This way of thinking about variation could spread across all five core systems of health, 12 including self-care and processes of healthy living.

  • Bergman B ,
  • Lifvergren S ,
  • Gremyer I ,
  • Hellstrom A ,
  • Neuhauser D
  • Neuhauser D ,

Competing interests None.

Provenance and peer review Not commissioned; externally peer reviewed.

Read the full text or download the PDF:

ProjectPractical.com

Common Cause & Special Cause Variation Explained with Examples

Editorial Team


In any business operation, it is important to ensure consistent products and repeatable results. Managers and workers alike need to understand their processes and methods well enough to produce consistent outcomes. Even so, producing exactly identical products or results is practically impossible: variation always exists. Variation is not necessarily a bad thing, as long as it stays within the specification limits of the critical-to-quality characteristics (CTQs).

Process variation occurs when a system deviates from its established pattern and produces a result that differs from the usual ones. This matters because it affects the consistency of both the transactional and the manufacturing systems of a business. Variation should be evaluated because it reflects the reliability of the business for customers and stakeholders. Variation also costs money, so it is important to keep it in check rather than pay for its consequences. It is crucial to distinguish the types of variation that occur in a business process, since the type determines the appropriate course of action. Choosing the wrong reaction plan for a given variation can make the business's processes worse.

There are two types of process variation, which are elaborated below: common cause variation and special cause variation.

Common Cause Variation Definition

Common cause variation refers to the natural, measurable fluctuation that occurs within a system or business process. It exists naturally within the system. While variation may harm business operations, this component cannot be escaped: it is inherent and always will be. In most cases, common cause variation is constant, regular, and predictable within the business operations. It is also described as natural variation, noise, or random cause. Common cause variation can be presented and analysed using a histogram.

What is Common Cause Variation

There are several distinguishing characteristics of common cause variation. First, its pattern is predictable. It is also an ever-present feature of normal operations: it is controlled, and it does not differ significantly from the usual behaviour of the process.

There are many factors behind common cause variation, and it is quite difficult to pinpoint and eliminate them. Some common cause variation is accepted within the business process and operations as long as it stays within a tolerable level. Eradicating it is an arduous effort unless a drastic change is made to the process itself.

Common Cause Variation Examples

There is a wide range of examples of common cause variation. Take driving as an example. A driver is usually well aware of the destination and the condition of the route. Having regularly used the same road, they treat defects and disturbances such as bumps, road conditions, and ordinary traffic as normal. Because of these common causes, the journey will not take precisely the same time every day, but the duration will not differ greatly from day to day.

In terms of project-related variation, examples include technical issues, human errors, downtime, high traffic, poor computer response times, and mistakes in standard procedures. Other examples of common causes include poor product design, outdated systems, and poor maintenance. Unconducive working conditions, such as poor ventilation, temperature, humidity, noise, lighting, and dirt, can also produce common cause variation. Errors in quality control and measurement can be counted as common cause variation as well.

Special Cause Variation Definition

Special cause variation, on the other hand, refers to unforeseen anomalies or variation that occurs within business operations. As the name suggests, this variation is special in that it is rare, follows no quantifiable pattern, and may not have been observed before. It is also known as assignable cause variation. Some authors add that special cause variation is not only variation happening for the first time; a previously overlooked or ignored problem can also be considered a special cause.

What is Special Cause Variation

Special cause variation is an irregular occurrence and usually happens because of changes introduced into the business operations. It is not a mundane defect and can be very unpredictable. Most of the time, special cause variation arises from flaws in the business process or mechanism. While it may sound serious and taxing, it can be fixed by modifying the affected procedures or materials.

A defining characteristic of special cause variation is that it is uncontrolled and hard to predict. Its outcome differs significantly from the usual phenomenon. Because the issues are not predictable, they are usually problematic and may not even be recorded in the historical experience base.

Special Cause Variation Examples

As mentioned earlier, special cause variation is an unexpected variation arising from factors outside the usual business system or operations. Consider a special cause in the same driving scenario used for the common cause example. The defects mentioned there were common. Now imagine an unexpected accident on the road the driver usually takes. Because of the accident, reaching the same destination takes much longer than normal. The accident is a special cause variation: it is unexpected and produces a significantly different outcome, in this case a longer travel time.

In the manufacturing sector, sources of special cause variation include the environment, materials, manpower, technology, and equipment. In terms of manpower, imagine a new employee who joins the team and still lacks experience; coaching and instructions should account for the extra training this person needs to perform their tasks efficiently. Having to find a new supplier on short notice because of issues faced by the existing supplier is likewise unforeseen, and hence a special cause. Natural hazards beyond prediction also fall into this category, as do irregular traffic, fraud attacks, and unexpected computer crashes or component malfunctions.

Common Cause and Special Cause Variation Detection

Control chart

One way to keep track of common cause and special cause variation is to implement control charts. When using a control chart, the first important step is to establish the average of the measurements. Next, establish the control limits, which are conventionally set three standard deviations above and below the average. The last step is to determine which points exceed the upper and lower control limits; points beyond the limits indicate special cause variation.
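As a minimal sketch of those three steps, here is how the limits might be computed for a series of individual measurements. The numbers are illustrative only, and a production individuals chart would normally estimate sigma from the moving range rather than the overall sample standard deviation:

```python
import numpy as np

# Hypothetical daily measurements (illustrative data, not from any real process)
data = np.array([10.1, 9.8, 10.3, 10.0, 9.9, 10.2, 9.7, 10.1,
                 10.0, 9.9, 10.4, 9.8, 11.9, 10.0, 10.1])

centre = data.mean()                      # step 1: average point
sigma = data.std(ddof=1)                  # simplified sigma estimate
ucl = centre + 3 * sigma                  # step 2: upper control limit
lcl = centre - 3 * sigma                  #         lower control limit

# Step 3: points beyond either limit are candidate special causes
special = [(i, x) for i, x in enumerate(data) if x > ucl or x < lcl]
print(f"centre={centre:.2f}, LCL={lcl:.2f}, UCL={ucl:.2f}")
print("possible special-cause points:", special)   # flags the 11.9 reading
```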

Before looking at how common cause and special cause variation appear on a control chart, let's review the eight control chart rules, commonly given as the Nelson rules. If a process is stable, the points on the chart will cluster near the average and will not exceed the control limits. The eight rules are usually stated as follows:

  • One point more than three standard deviations from the centre line.
  • Nine (or more) points in a row on the same side of the centre line.
  • Six (or more) points in a row steadily increasing or decreasing.
  • Fourteen (or more) points in a row alternating up and down.
  • Two out of three consecutive points more than two standard deviations from the centre line, on the same side.
  • Four out of five consecutive points more than one standard deviation from the centre line, on the same side.
  • Fifteen points in a row all within one standard deviation of the centre line, on either side.
  • Eight points in a row all more than one standard deviation from the centre line, in either direction.

Note, however, that not all rules apply to every type of control chart. It can also be hard to identify the cause behind a given pattern, since special cause variation may be tied to the specific type of process. These rules are general guidance that applies in most cases, but they are subject to adjustment, and reading the chart should be combined with knowledge and experience to pinpoint the reasons for the patterns or variation.
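Pattern rules like these are easy to automate. Below is a hedged sketch of the second rule, nine points in a row on the same side of the centre line; the function name and data are our own inventions, not part of any standard library:

```python
def nine_in_a_row(values, centre):
    """Nelson rule 2: nine consecutive points on the same side of the
    centre line suggest a sustained shift rather than random noise."""
    run, prev = 0, 0
    for i, v in enumerate(values):
        side = 1 if v > centre else -1 if v < centre else 0
        run = run + 1 if (side == prev and side != 0) else (1 if side != 0 else 0)
        prev = side
        if run >= 9:
            return i          # index at which the rule fires
    return None               # rule never fired

# Ten points drifting above the centre line: the rule fires at the ninth
print(nine_in_a_row([10.2] * 10, centre=10.0))   # -> 8
```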

A process is considered stable if no special cause variation is present, even if common cause variation exists. A stable operation is important before a process can be assessed or improved. The stability or instability of a process can be seen in control charts or run charts.

[Control chart: example of a stable process]

The points displayed in the chart above are randomly distributed and do not violate any of the eight rules listed earlier. This indicates that the process is stable.

[Control chart: example of an unstable process]

The chart presented above is an example of an unstable process. This is because some of the rules for control chart tests mentioned earlier are violated.

Simply put, if the points are randomly distributed and stay within the limits, they may be considered common cause variation. However, if there is a drastic irregularity or points exceed the limits, you should analyse further to determine whether special cause variation is present.

A histogram is a type of bar graph that can be used to present the distribution of data. It is easy to understand and analyse. A histogram summarises the history of a process and can help forecast its future performance. To ensure the reliability of the data presented in a histogram, the process must be stable. As mentioned earlier, a process affected only by common cause variation is still considered stable, so a histogram can be used in that situation, especially if the process undergoes regular measurement and assessment.

Data are considered normally distributed if they form a "bell" shape in the histogram, grouped around the central value; the spread of the data around that value is the variation. More complicated patterns sometimes appear, such as several peaks or a truncated histogram. Whenever such complex structures show up, it is essential to look more deeply into the data and the underlying operations.
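As a quick sketch of how such a histogram might be produced, the snippet below simulates measurements from a stable process and plots them; matplotlib and all parameter values are our illustrative choices:

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(1)
samples = rng.normal(loc=50.0, scale=2.0, size=500)   # stable process, noise only

plt.hist(samples, bins=25, edgecolor="black")
plt.xlabel("Measured value")
plt.ylabel("Frequency")
plt.title("Roughly bell-shaped histogram from a stable process")
plt.show()
```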

[Histogram: an example bar graph with a "bell" shape]

However, a "bell"-shaped distribution in the histogram does not by itself mean that the process is experiencing only common cause variation. A deeper analysis should be done to investigate whether other underlying factors or causes led to the distribution displayed in the histogram.

Countering common cause and special cause variation

Once the causes of variation have been pinpointed, the next step is to resolve them. Different measures are used to counter the two types of variation. Common cause variation is tough to eliminate completely; countering it calls for a drastic, long-term modification of the process itself. A new method should be introduced and applied consistently to pursue the long-term goal of reducing common cause variation; the operation may feel some side effects at first, but over time the cause can gradually be resolved. Special cause variation, by contrast, is countered with contingency plans; usually, additional steps are built into the normal operation to deal with it.


The meaning of variation to healthcare managers, clinical and health-services researchers, and individual patients

Duncan Neuhauser

1 Department of Epidemiology and Biostatistics, Case Western Reserve University, Cleveland, Ohio, USA

Lloyd Provost

2 Associates in Process Improvement, Austin, Texas, USA

Bo Bergman

3 Centre for Health Improvement, Chalmers University of Technology, Gothenburg, Sweden

Healthcare managers, clinical researchers and individual patients (and their physicians) manage variation differently to achieve different ends. First, managers are primarily concerned with the performance of care processes over time. Their time horizon is relatively short, and the improvements they are concerned with are pragmatic and 'holistic.' Their goal is to create processes that are stable and effective. The analytical techniques of statistical process control effectively reflect these concerns. Second, clinical and health-services researchers are interested in the effectiveness of care and the generalisability of findings. They seek to control variation by their study design methods. Their primary question is: 'Does A cause B, everything else being equal?' Consequently, randomised controlled trials and regression models are the research methods of choice. The focus of this reductionist approach is on the 'average patient' in the group being observed rather than the individual patient working with the individual care provider. Third, individual patients are primarily concerned with the nature and quality of their own care and clinical outcomes. They and their care providers are not primarily seeking to generalise beyond the unique individual. We propose that the gold standard for helping individual patients with chronic conditions should be longitudinal factorial design of trials with individual patients. Understanding how these three groups deal differently with variation can help in appreciating all three approaches.

Introduction

Health managers, clinical researchers, and individual patients need to understand and manage variation in healthcare processes in different time frames and in different ways. In short, they ask different questions about why and how healthcare processes and outcomes change ( table 1 ). Confusing the needs of these three stakeholders results in misunderstanding.

[Table 1. Meaning of variation to managers, researchers and individual patients: questions, methods and time frames]

Health managers

Our extensive experience in working with healthcare managers has taught us that their primary goal is to maintain and improve the quality of care processes and outcomes for groups of patients. Ongoing care and its improvement are temporal, so in their situation, learning from variation over time is essential. Data are organised over time to answer the fundamental management question: is care today as good as or better than it was in the past, and how likely is it to be better tomorrow? In answering that question, it becomes crucial to understand the difference between common-cause and special-cause variation (as will be discussed later). Common-cause variation appears as random variation in all measures from healthcare processes. 1 Special-cause variation appears as the effect of causes outside the core processes of the work. Management can reduce this variation by enabling the easy recognition of special-cause variation and by changing healthcare processes—by supporting the use of clinical practice guidelines, for example—but common-cause variation can never be eliminated.

The magnitude of common-cause variation creates the upper and lower control limits in Shewhart control charts. 2–5 Such charts summarise the work of health managers well. Figure 1 shows a Shewhart control chart (p-chart) developed by a quality-improvement team whose aim was to increase compliance with a new care protocol. The clinical records of eligible patients discharged (45–75 patients) were evaluated each week by the team, and records indicating that the complete protocol was followed were identified. The baseline control chart showed a stable process with a centre line (average performance) of 38% compliance. The team analysed the aspects of the protocol that were not followed and developed process changes to make it easier to complete these particular tasks. After successfully adapting the changes to the local environment (indicated by weekly points above the upper control limit in the ‘Implementing Changes’ period), the team formally implemented the changes in each unit. The team continued to monitor the process and eventually developed updated limits for the chart. The updated chart indicated a stable process averaging 83%.
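As a hedged illustration of the arithmetic behind such a p-chart, the sketch below computes a centre line and variable control limits from weekly audit counts. The counts are invented to match the 38% baseline described in the text; nothing here is data from the actual study:

```python
import numpy as np

# Hypothetical weekly audits: records reviewed (n) and records where the
# full protocol was followed (x); invented numbers near 38% compliance
n = np.array([52, 61, 45, 70, 66, 48, 75, 58, 63, 55, 60])
x = np.array([20, 24, 17, 26, 25, 18, 29, 22, 24, 21, 23])

p_bar = x.sum() / n.sum()                 # centre line (about 0.38)
se = np.sqrt(p_bar * (1 - p_bar) / n)     # limits vary with weekly n
ucl = np.minimum(p_bar + 3 * se, 1.0)
lcl = np.maximum(p_bar - 3 * se, 0.0)

for week, (pi, lo, hi) in enumerate(zip(x / n, lcl, ucl), start=1):
    flag = "  <-- special cause?" if (pi < lo or pi > hi) else ""
    print(f"week {week:2d}: p={pi:.2f}  limits=({lo:.2f}, {hi:.2f}){flag}")
```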

[Figure 1: Annotated Shewhart control chart—using protocol]

This control chart makes it clear that a stable but inferior process was operating for the first 11 weeks and, by inference, probably before that. The annotated changes (testing, adapting and implementing new processes of care) are linked to designed tests of change which are special (assignable) causes of variation, in this case, to improvement after week 15, after which a new better stable process has taken hold. Note that there is common-cause (random) variation in both the old and improved processes.

After updating the control limits, the chart reveals a new stable process with no special-cause variation, which is to say, no points above or below the control limits (the dotted lines). Note that the change after week 15 cannot easily be explained by chance (random, or common-cause, variation), since the probability of 13 points in a row occurring by chance above the baseline average is one divided by 2 to the 13th power. This is the same likelihood that in flipping a coin 13 times, it will come up heads every time. This level of statistical power to exclude randomness as an explanation is not to be found in randomised controlled trials (RCTs). Although there is no hard-and-fast rule about the number of observations over time needed to demonstrate process stability and establish change, we believe a persuasive control chart requires 20–30 or more observations.
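The arithmetic behind that claim is a one-line calculation:

```python
p = 0.5 ** 13
print(p, 1 / p)   # 0.0001220703125, i.e. about 1 chance in 8192
```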

The manager's task demonstrates several important characteristics. First is the need to define the key quality characteristics, and choose among them for focused improvement efforts. The choice should be made based on the needs of patients and families. The importance of these quality characteristics to those being served means that speed in learning and improvement is important. Indeed, for the healthcare manager, information for improvement must be as rapid as possible (in real time). Year-old research data are not very helpful here; just-in-time performance data in the hands of the decision-makers provide a potent opportunity for rapid improvement. 6

Second, managerial change is holistic; that is, every element of an intervention that might help to improve and can be done is put to use, sometimes incrementally, but simultaneously if need be. Healthcare managers are actively working to promote measurement of process and clinical outcomes, take problems in organisational performance seriously, consider the root causes of those problems, encourage the formation of problem solving clinical micro-system teams and promote the use of multiple, evolving Plan–Do–Study–Act (PDSA) tests of change.

This kind of improvement reasoning can be applied to a wide range of care processes, large and small. For example, good surgery is the appropriate combination of hundreds of individual tasks, many of which could be improved in small ways. Aggregating these many smaller changes may result in important, observable improvement over time. The protocol-driven, randomised trial research approach is a powerful tool for establishing efficacy but has limitations for evaluating and improving such complex processes as surgery, which are continually and purposefully changing over time. The realities of clinical improvement call for a move from after-the-fact quality inspection to building quality measures into medical information systems, thereby creating real-time quality data for providers to act upon. Caring for populations of similar patients in similar ways (economies of scale) can be of particular value, because the resulting large numbers and process stability can help rapidly demonstrate variation in care processes 7 ; very tight control limits (minimal common-cause variation) allow special-cause variation to be detected more quickly.

Clinical and health-services researchers

While quality-management thinking tends towards the use of data plotted over time in control-chart format, clinical researchers think in terms of true experimental methods, such as RCTs. Health-services researchers, in contrast, think in terms of regression analysis as their principal tool for discovering explainable variation in processes and outcomes of care. The data that both communities of researchers use are generally collected during fixed periods of time, or combined across time periods; neither is usually concerned with the analysis of data over time.

Take, for example, the question of whether age and sex are associated with the ability to undertake early ambulation after hip surgery. Clinical researchers try to control for such variables through the use of entry criteria into a trial, and random assignment of patients to experimental or control group. The usual health-services research approach would be to use a regression model to predict the outcome (early ambulation), over hundreds of patients using age and sex as independent variables. Such research could show that age and sex predict outcomes and are statistically significant, and that perhaps 10% of the variance is explained by these two independent variables. In contrast, quality-improvement thinking is likely to conclude that 90% of the variance is unexplained and could be common-cause variation. The health-services researcher is therefore likely to conclude that if we measured more variables, we could explain more of this variance, while improvement scientists are more likely to conclude that this unexplained variance is a reflection of common-cause variation in a good process that is under control.
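As a hedged sketch of this contrast, the simulation below builds a dataset in which age and sex genuinely influence days to first ambulation, yet the fitted regression still leaves most of the variance unexplained. All variable names, coefficients, and data are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 300
age = rng.uniform(60, 90, n)
sex = rng.integers(0, 2, n)                    # 0 = male, 1 = female
days = 2.0 + 0.04 * (age - 75) + 0.3 * sex + rng.normal(0, 1.0, n)

X = np.column_stack([np.ones(n), age, sex])    # intercept, age, sex
beta, *_ = np.linalg.lstsq(X, days, rcond=None)
fitted = X @ beta
r2 = 1 - ((days - fitted) ** 2).sum() / ((days - days.mean()) ** 2).sum()
print(f"R^2 = {r2:.2f}")   # on the order of 0.1: most variance 'unexplained'
```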

The entry criteria into RCTs are carefully defined, which makes it a challenge to generalise the results beyond the kinds of patients included in such studies. Restricted patient entry criteria are imposed to reduce variation in outcomes unrelated to the experimental intervention. RCTs focus on the difference between point estimates of outcomes for entire groups (control and experimental), using statistical tests of significance to show that differences between the two arms of a trial are not likely to be due to chance.

Individual patients and their healthcare providers

The question an individual patient asks is different from those asked by manager and researcher, namely ‘How can I get better?’ The answer is unique to each patient; the question does not focus on generalising results beyond this person. At the same time, the question the patient's physician is asking is whether the group results from the best clinical trials will apply in this patient's case. This question calls for a different inferential approach. 8–10 The cost of projecting general findings to individual patients could be substantial, as described below.

Consider the implications of a drug trial in which 100 patients taking a new drug and 100 patients taking a placebo are reported as successful because 25 drug takers improved compared with 10 controls. This difference is shown as not likely to be due to chance. (The drug company undertakes a multimillion dollar advertising campaign to promote this breakthrough.) However, on closer examination, the meaning of these results for individual patients is not so clear. To begin with, 75 of the patients who took the drug did not benefit. And among those 25 who benefited, some, perhaps 15, responded extremely well, while the size of the benefit in the other 10 was much smaller. To have only the 15 ‘maximum responders’ take this drug instead of all 100 could save the healthcare system 85% of the drug's costs (as well as reduce the chance of unnecessary adverse drug effects); those ‘savings’ would, of course, also reduce the drug company's sales proportionally. These considerations make it clear that looking at more than group results could potentially make an enormous difference in the value of research studies, particularly from the point of view of individual patients and their providers.
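For the curious, the significance claim in this hypothetical trial can be checked with a standard chi-squared test on the 2x2 table; the counts come from the text, and everything else is an assumption:

```python
from scipy.stats import chi2_contingency

# improved vs not improved: 25/100 on the drug, 10/100 on placebo
table = [[25, 75],
         [10, 90]]
chi2, p_value, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p_value:.3f}")   # p is about 0.01
```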

In light of the above concerns, we propose that the longitudinal factorial study design should be the gold standard of evidence for efficacy, particularly for assessing whether interventions whose efficacy has been established through controlled trials are effective in individual patients for whom they might be appropriate ( box 1 ). Take the case of a patient with hypertension who measures her blood pressure at least twice every day and plots these numbers on a run chart. Through this informal observation, she has learnt about several factors that result in the variation in her blood pressure readings: time of day, the three different hypertension medicines she takes (not always regularly), her stress level, eating salty French fries, exercise, meditation (and, in her case, saying the rosary), and whether she slept well the night before. Some of these factors she can control; some are out of her control.

Longitudinal factorial design of experiments for individual patients

The six individual components of this approach are not new, but in combination they are new 8 9

  • One patient with a chronic health condition; sometimes referred to as an ‘N-of-1 trial.’
  • Care processes and health status are measured over time. These could include daily measures over 20 or more days, with the patient day as the unit of analysis.
  • Whenever possible, data are numerical rather than simple clinical observation and classification.
  • The patient is directly involved in making therapeutic changes and collecting data.
  • Two or more inputs (factors) are experimentally and concurrently changed in a predetermined fashion.
  • Therapeutic inputs are added or deleted in a predetermined, systematic way. For example: on day 1, drug A is taken; on day 2, drug B; on day 3, drug A and B; day 4, neither. For the next 4 days, this sequence could be randomly reordered.
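A minimal sketch of how such a predetermined schedule might be generated, assuming two on/off inputs (drugs A and B) arranged in randomly reordered 4-day blocks, as in the final bullet above; the seed and the number of blocks are arbitrary choices for illustration:

```python
import random

random.seed(42)   # reproducible schedule to review with the care provider

# Full 2x2 factorial block: every combination of drug A and drug B
block = [(a, b) for a in (0, 1) for b in (0, 1)]

days = []
for _ in range(5):            # five 4-day blocks give a 20-day plan
    random.shuffle(block)     # reorder each block at random
    days.extend(block)

for day, (a, b) in enumerate(days, start=1):
    print(f"day {day:2d}: drug A {'on ' if a else 'off'}  drug B {'on' if b else 'off'}")
```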

Since she is accustomed to monitoring her blood pressure over time, she is in an excellent position to carry out an experiment that would help her optimise the effects of these various influences on her hypertension. Working with her primary care provider, she could, for example, set up a table of randomly chosen dates to make each of several of these changes each day, thereby creating a systematically predetermined mix of these controllable factors over time. This factorial design allows her to measure the effects of individual inputs on her blood pressure, and even interactions among them. After an appropriate number of days (perhaps 30 days, depending on the trade-off between urgency and statistical power), she might conclude that one of her three medications has no effect on her hypertension, and she can stop using it. She might also find that the combination of exercise and consistently low salt intake is as effective as either of the other two drugs. Her answers could well be unique to her. Planned experimental interventions involving single patients are known as ‘N-of-1’ trials, and hundreds have been reported. 10 Although longitudinal factorial design of experiments has long been used in quality engineering, as of 2005 there appears to have been only one published example of its use for an individual patient. 8 9 This method of investigation could potentially become widely used in the future to establish the efficacy of specific drugs for individual patients, 11 and perhaps even required, particularly for very expensive drug therapies for chronic conditions. Such individual trial results could be combined to obtain generalised knowledge.

This method can be used to show (1) the independent effect of each input on the outcome, (2) the interaction effect between the inputs (perhaps neither drug A nor B is effective on its own, but in combination they work well), (3) the effect of different drug dosages and (4) the lag time between treatment and outcome. This approach will not be practical if the outcome of interest occurs years later. This method will be more practical with patient access to their medical records, where they could monitor all five of Bergman's core health processes. 12
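Continuing that sketch, effects (1) and (2) can be estimated from the resulting daily data with an ordinary least-squares fit that includes an interaction term. The blood-pressure values below are simulated, not clinical data, and the true effects are chosen arbitrarily:

```python
import numpy as np

rng = np.random.default_rng(7)
a = np.tile([0, 1, 0, 1], 5)      # drug A on/off over 20 days
b = np.tile([0, 0, 1, 1], 5)      # drug B on/off (randomised order in practice)
bp = 150 - 8 * a - 6 * b - 4 * a * b + rng.normal(0, 3, 20)   # simulated BP

X = np.column_stack([np.ones(20), a, b, a * b])
coef, *_ = np.linalg.lstsq(X, bp, rcond=None)
print(f"baseline={coef[0]:.1f}  A={coef[1]:.1f}  B={coef[2]:.1f}  "
      f"A x B interaction={coef[3]:.1f}")
```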

Understanding variation is one of the cornerstones of the science of improvement

This broad understanding of variation, which is based on the work of Walter Shewhart in the 1920s, goes well beyond such simple issues as making an intended departure from a guideline or recognising a meaningful change in the outcome of care. It encompasses more than good or bad variation (meeting a target). It is concerned with more than the variation found by researchers in random samples from large populations.

Everything we observe or measure varies. Some variation in healthcare is desirable, even essential, since each patient is different and should be cared for uniquely. New and better treatments, and improvements in care processes result in beneficial variation. Special-cause variation should lead to learning. The ‘Plan–Do–Study’ portion of the Shewhart PDSA cycle can promote valuable change.

The ‘act’ step in the PDSA cycle represents the arrival of stability after a successful improvement has been made. Reducing unintended, and particularly harmful, variation is therefore a key improvement strategy. The more variation is controlled, the easier it is to detect changes that are not explained by chance. Stated differently, narrow limits on a Shewhart control chart make it easier and quicker to detect, and therefore respond to, special-cause variation.

The goal of statistical thinking in quality improvement is to make the available statistical tools as simple and useful as possible in meeting the primary goal, which is not mathematical correctness, but improvement in both the processes and outcomes of care. It is not fruitful to ask whether statistical process control, RCTs, regression equations or longitudinal factorial design of experiments is best in some absolute sense. Each is appropriate for answering different questions.

Forces driving this new way of thinking

The idea of reducing unwanted variation in healthcare represents a major shift in thinking, and it will take time to be accepted. Forces for this change include the computerisation of medical records leading to public reporting of care and outcome comparisons between providers and around the world. This in turn will promote pay for performance, and preferred provider contracting based on guideline use and good outcomes. This way of thinking about variation could spread across all five core systems of health, 12 including self-care and processes of healthy living.

Competing interests: None.

Provenance and peer review: Not commissioned; externally peer reviewed.


Random Variation

Published: November 7, 2018 by Ken Feldman


If you’re interested in the statistical concepts surrounding random variation, we will provide the statistical definition and explore how it might apply to your organization. 

For those more interested in what it means in practical terms, we will explore the definition and application in terms of its benefits and how it can be used to better manage your organization.

Overview: What is random variation?  

One of the best definitions for random variation appears in the dictionary of the iSixSigma.com website: 

The tendency for the estimated magnitude of a parameter (e.g., based upon the average of a sample of observations of a treatment effect) to deviate randomly from the true magnitude of that parameter. Random variation is independent of the effects of systematic biases. In general, the larger the sample size is, the lower the random variation is of the estimate of a parameter. As random variation decreases, precision increases.

In other words, everything varies, whether it be the dimensions of your product, your personal weight, your manufacturing processing time, the time to get to work, or your blood pressure. Over time, you would expect the variation of those measurements to form some kind of statistical distribution that would approximate the underlying population of whatever you are measuring. 

That underlying distribution will have a calculated central tendency, variation, and shape. At any point in time, the measurement you take will vary and can come from any place in that distribution. If there is random variation, you will not be able to predict the exact value of the next measurement. You might, however, be able to calculate the probability of what the next value might be, or even a range of values within which the next measurement might fall. We can call that a confidence interval.
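A small sketch of that idea, assuming a hypothetical measurement process: as the sample size grows, the random variation of the estimated mean shrinks, and the confidence interval narrows. All numbers are invented:

```python
import numpy as np

rng = np.random.default_rng(3)
true_mean, true_sd = 120.0, 10.0        # e.g. a systolic blood pressure

for n in (5, 50, 500):
    sample = rng.normal(true_mean, true_sd, n)
    se = sample.std(ddof=1) / np.sqrt(n)              # standard error
    lo, hi = sample.mean() - 1.96 * se, sample.mean() + 1.96 * se
    print(f"n={n:3d}: mean={sample.mean():6.2f}  95% CI=({lo:.2f}, {hi:.2f})")
```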

How random variation affects your processes

While the statistical properties are interesting, what might be more important for you is how the concept of random variation impacts your ability to manage your process. If your process is exhibiting random variation, or what Dr. W. Edwards Deming called common cause variation, then your process is predictable and in what might be called a steady state. Deming distinguished common cause from special cause. Special cause variation is unpredictable and a function of some unexpected intervention in your process.

For example, the fill level of your bottle will have some variation as a function of the variation in your fill equipment, liquid, temperature, and run speed. That is the steady state given the combined effects of the variation in your process elements. It is expected and, over time, will form some distribution. 

However, if one of your fill nozzles starts to clog up, there will be variation in fill that is a function of a specific and assignable cause. That would not be expected or predicted until after its occurrence. That would be non-random variation — or special cause variation.

You can use a control chart to distinguish between random (common cause, predictable, noise) variation and non-random (special cause, unpredictable, signal) variation.
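Here is a hedged simulation of exactly that scenario: a stable fill process, a clogging nozzle introduced partway through, and 3-sigma limits (estimated from the stable period) doing the signalling. All numbers are invented:

```python
import numpy as np

rng = np.random.default_rng(11)

fills = rng.normal(500.0, 1.5, 60)   # common-cause noise around a 500 ml target
fills[40:] -= 6.0                    # nozzle starts to clog: a special cause

baseline = fills[:30]                # estimate limits from the stable period
centre = baseline.mean()
sigma = baseline.std(ddof=1)
lcl, ucl = centre - 3 * sigma, centre + 3 * sigma

signals = np.where((fills < lcl) | (fills > ucl))[0]
print(f"limits = ({lcl:.1f}, {ucl:.1f}) ml; signals at bottles {signals}")
```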

3 benefits of paying attention to your variation 

Knowing whether your process is exhibiting random or non-random variation will help you properly respond to the signal you receive from your control chart.

1. Proper response

If your process is exhibiting random variation, then any improvement will require a fundamental change in the process. If it is exhibiting non-random variation, you will need to identify the assignable cause and then take action: either incorporate the change to maintain an improved state, or eliminate it to remove a negative impact.

2. Prediction

If you are taking sample measurements and the process is demonstrating random variation, you'll be able to do some level of prediction of future values.

3. Assess changes

If your process is demonstrating random variation and you make a change, you will have confidence that, if you see an impact due to your change, it will be real and believable.

Why is random variation important to understand?

The concept of random variation, or noise, is a central concept in statistics. You will want to understand what random variation is and its implications for taking the appropriate actions on your process.

Underlying assumption

Most statistical tests will have an underlying assumption that the data you’re analyzing was created by a random process. If not, your results may be inaccurate because of the influence of non-random variation.

Desired state

You should strive to achieve random variation in your processes. Random variation does not imply that everything is OK or good, but merely that the process is predictable and steady state. From there, you will want to evaluate whether that steady state is satisfactory or needs to be improved.

For example, why do you think your doctor wants you to fast before a blood test? Is it to be mean (especially if your appointment is in the afternoon)? No, your doctor wants you to exhibit only random variation in your body processes, without the influence of special cause variation, so that your test results can be considered representative of your true steady state. That doesn't mean an elevated reading is good, but at least your doctor knows that it exists. From there, he or she can have the proper response.

Improper response

Unless you have a good understanding of random variation, you may inadvertently believe you have non-random variation when you don’t. This would cause you to try and find an assignable cause when none exists, or make changes as a result of an individual observation that would be tantamount to tampering with the process.

An industry example of random variation 

Unfortunately, many managers don’t understand or appreciate the concept of random variation. For example, a manager in the finance department of a B2B online business was getting complaints from the CFO that invoices were slow getting out to the customer, and thus cash flow was being negatively impacted.

The LSS Master Black Belt (MBB) investigated and found out that, as a result of their LSS training, the manager was control charting the invoice processing time. That was a good thing. When the MBB started questioning the manager how he uses the control chart, he realized what the problem was. The control chart had all of the points within the upper and lower control limits so the process was demonstrating random variation.

The manager was reacting to high and low points without appreciating whether the process was exhibiting common or special cause variation. It turned out that when the manager saw a “high” point on the control chart he initiated a search for the root cause. And when he was happy with a “low” point, he didn’t do anything except to say “Great job!” 

[Chart: an example showing variation in process time]

The manager should have realized that the process was stable and showing random variation so the appropriate response should have been to change the process to reduce the overall variation — and if desired, to lower the average processing time.

3 best practices when thinking about random variation 

To manage your process by properly using the concept of random variation, you should consider the following best practices.

1. Collect your data in a random manner

To get a picture of the true random variation of your process, you should collect your data in a random manner. Introducing any bias in your data collection will impact the randomness of your variation.

2. Use the appropriate statistical tools to determine if you have random variation

As has been explained before, the statistical control chart is the best tool for determining whether your process is generating data in a random pattern or not. 

3. Provide a proper response

You should react to random variation by seeking to improve your process if it's not capable of meeting your specs, targets, or expectations. If you have non-random variation, you will need to investigate why it occurred and then take the appropriate steps to either incorporate or eliminate its causes.

Frequently Asked Questions (FAQ) about random variation

What is an example of random variation vs. non-random variation?

Let’s use a pair of fair dice as an example. If we throw our dice many times, we will experience variation in the numbers we throw. If we threw them even more times, we would get a distribution with an average of 7, a range of 10 (12-2) and a shape that is triangular. That is the hypothetical distribution.

But what if we started to see throws of 8, 7, 9, 10, 9, 12, 11, and 10? They are all above the average. We might suspect that this is not random variation. We would investigate and possibly find that the dice are loaded. We would then correct the situation if we wanted the dice to represent random variation.
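Those claims are easy to check with a quick simulation (a sketch; the seed and the number of rolls are arbitrary):

```python
import random
from collections import Counter

random.seed(1)
rolls = [random.randint(1, 6) + random.randint(1, 6) for _ in range(100_000)]

print(f"mean = {sum(rolls) / len(rolls):.2f}")   # close to 7
counts = Counter(rolls)
for total in range(2, 13):            # rough triangular shape, peaking at 7
    print(f"{total:2d}: {'#' * (counts[total] // 500)}")
```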

What is the best way to know if we are seeing random variation?

The statistical control chart is the best tool for distinguishing between random and non-random variation.

Must I always react to random variation?

If your process is showing random variation and is operating at a desired level, there is no need for you to react. But if you wish to improve your process, you’ll want your process to be in a steady state of random variation. That way, when you observe a change, you can attribute it to what you did rather than some unknown source.

Random variation in a nutshell

Random variation is the desired state for your process. It is predictable and consistent. But, it does not mean your process is operating at its best, only that it is steady state. 

The control chart is the best tool for distinguishing between random and non-random variation. If you want to improve your process, first make sure you are seeing only random variation. If you have non-random variation, find out why and deal with the root cause(s). Then you should have a process showing only random variation.
