Statistical QC and Risk Management

The combination can improve the overall quality of patient results.

By Curtis A. Parvin, PhD, John Yundt-Pacheco and Andy Quintenz

Quality Assurance Series

Recently, the Clinical and Laboratory Standards Institute (CLSI) published a guideline titled "EP23-A Laboratory Quality Control Based on Risk Management."1 It is the latest in a series of documents that provide risk management guidance to laboratories and medical device manufacturers.2-4 These efforts reflect an ongoing trend toward placing a greater focus on the patient throughout all areas of the healthcare enterprise, including the clinical laboratory.5

Risk management provides a formal approach to identify potential failure modes in the lab, rank those modes in terms of their risk, and establish policies and procedures to prevent or reduce (mitigate) the risks. The concept of risk has different definitions depending on the area of application. In the risk management arena, risk is a concept comprising two components:

  1. the likelihood of occurrence of an unwanted outcome and
  2. the severity of the unwanted outcome.

In laboratory medicine, the unwanted outcome is generally defined as patient harm.

The sequence of events that can lead to patient harm is depicted in Fig. 1.

This figure is similar to one that appears in EP23-A. The occurrence of a failure mode creates an out-of-control condition. Depending on the type, size and duration of the out-of-control condition, some number of incorrect patient results are produced.

An incorrect patient result is one that fails to meet the requirements for its intended medical use; this is generally defined as a result whose measurement error exceeds an allowable total error specification.5 Depending on how and when the laboratory reports results, some or all of the incorrect patient results produced during the out-of-control state will be reported. The likelihood that an incorrect result reported to a healthcare provider leads to an incorrect action, and the probability that the incorrect action causes patient harm, are outside the primary control of the laboratory; both depend on the nature of the analyte and the characteristics of the patient population the lab serves.
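As a minimal sketch of the definition above, a result can be classified as incorrect when its measurement error exceeds the allowable total error (TEa) specification. The analyte, TEa value and result values below are hypothetical examples, not values from the guideline:

```python
# Hedged sketch: flag a result as "incorrect" when its measurement error
# exceeds an allowable total error (TEa) specification.
# The TEa value and the example results are hypothetical.

def exceeds_allowable_error(measured, true_value, tea_percent):
    """Return True if the measurement error exceeds TEa (as % of true value)."""
    error_percent = abs(measured - true_value) / true_value * 100.0
    return error_percent > tea_percent

# Hypothetical glucose result with an assumed TEa of 10%
print(exceeds_allowable_error(measured=112.0, true_value=100.0, tea_percent=10.0))  # True
print(exceeds_allowable_error(measured=104.0, true_value=100.0, tea_percent=10.0))  # False
```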

Identifying Potential Failure Modes

The first task is to map the total testing process and identify potential failure modes that could lead to patient harm. For each mode, the lab should estimate its rate of occurrence. While quantitative estimates of expected failure rates are desirable, it is recognized that these may be difficult to obtain. As an alternative, a descriptive semiquantitative approach is often employed. An example given in EP23-A suggests a five-level categorization for rate of occurrence:

Frequent = once per week

Probable = once per month

Occasional = once per year

Remote = once every few years

Improbable = once in the life of the measuring system

The laboratory may identify many potential failure modes in the total testing process that could lead to an out-of-control condition. The lab must then develop a strategy to control the number of incorrect patient results reported due to any of the potential failure modes.

Detecting Out-of-Control Conditions

It is advantageous to devise specific control procedures that address each potential failure mode at the point where failure can occur. However, there will always be potential failure modes that are never identified or that cannot be adequately controlled at the point of failure. A lab should strive to minimize the number of out-of-control conditions created but plan for their eventual presence; sooner or later, something unexpected will happen that causes an out-of-control condition.

Statistical quality control based on the periodic measurement of stable QC materials is the approach that has been employed successfully for decades to detect out-of-control conditions. Defining a QC strategy based on the periodic measurement of stable QC materials involves answering three questions:

  1. when to schedule QC evaluations,
  2. how many QC samples to measure and
  3. what QC rule(s) to apply to the QC sample results to decide the in-control or out-of-control status of the testing process.6,7
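The third question, deciding in-control or out-of-control status from QC sample results, can be illustrated with two common statistical QC rules applied to control measurements expressed as z-scores (deviations from the target in standard deviations). The rule names follow the familiar Westgard shorthand; the run data below are made up for illustration:

```python
# Hedged sketch of two common statistical QC rules applied to control
# measurements expressed as z-scores. The example run data are hypothetical.

def rule_1_3s(z_scores):
    """Reject if any single control observation exceeds +/-3 SD."""
    return any(abs(z) > 3.0 for z in z_scores)

def rule_2_2s(z_scores):
    """Reject if two consecutive observations exceed +/-2 SD on the same side."""
    return any((z_scores[i] > 2.0 and z_scores[i + 1] > 2.0)
               or (z_scores[i] < -2.0 and z_scores[i + 1] < -2.0)
               for i in range(len(z_scores) - 1))

run = [0.4, 2.3, 2.6, -0.8]   # hypothetical QC results for one run
print(rule_1_3s(run))         # False: no single point beyond 3 SD
print(rule_2_2s(run))         # True: 2.3 and 2.6 both exceed +2 SD
```

Which rules to combine, and at what control limits, is part of the QC strategy design the cited articles discuss.6,7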

Given the answers to these questions, the performance characteristics of the QC strategy can be quantitatively assessed. Different outcome metrics can be computed, but the outcome metric that best fits into the overall model of the sequence of events that can lead to patient harm depicted in Fig. 1 is the expected number of incorrect patient results produced and reported due to an out-of-control condition of a given type and magnitude.5
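A deliberately simplified model can convey how this outcome metric behaves. Assume an out-of-control condition persists until a QC event detects it: with detection probability p at each QC event, the expected number of QC events until detection is 1/p (a geometric distribution), so roughly (samples between QC events)/p patient samples are run while out of control. All inputs below are hypothetical, and the published metric is more detailed than this sketch:

```python
# Hedged, simplified model of the expected number of incorrect patient
# results produced during an out-of-control condition. All inputs are
# hypothetical; this is an illustration, not the published computation.

def expected_incorrect_results(p_detect_per_qc, samples_between_qc,
                               p_incorrect_per_sample):
    """Expected incorrect results before the condition is detected.

    Expected QC events until detection is 1/p (geometric), so about
    samples_between_qc / p patient samples are run while out of control,
    each with probability p_incorrect_per_sample of exceeding TEa.
    """
    expected_qc_events = 1.0 / p_detect_per_qc
    return expected_qc_events * samples_between_qc * p_incorrect_per_sample

# Hypothetical: 50% detection per QC event, 100 samples between QC events,
# 20% of samples affected enough to exceed the allowable total error.
print(expected_incorrect_results(0.5, 100, 0.2))  # 40.0
```

The model makes the trade-offs visible: raising the detection probability (better rules, more QC samples) or scheduling QC events more frequently (fewer samples between evaluations) both reduce the expected number of incorrect results.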

Assessing Likelihood, Severity

Even though the lab has little control over the probabilities leading to patient harm after an incorrect result is reported, the lab should make its best attempt at estimating these probabilities based on the analyte, patient population and medical judgment. Likewise, the severity of harm to a patient resulting from an incorrect lab result will depend on the analyte and the patient population. The severity of harm requires assessment of the various ways the results may be used. If multiple scenarios leading to different degrees of severity are possible, the lab should consider the most likely and most harmful scenarios. EP23-A provides an example of a severity scale using five descriptive categories:

Negligible = inconvenience or temporary discomfort

Minor = temporary injury or impairment not requiring professional medical intervention

Serious = injury or impairment requiring professional medical intervention

Critical = permanent impairment or life-threatening injury

Catastrophic = patient death
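The two five-level scales, rate of occurrence and severity of harm, can be combined into a semiquantitative risk ranking for each failure mode. The numeric scores and the acceptability threshold below are illustrative assumptions, not values from EP23-A:

```python
# Hedged sketch of a semiquantitative risk ranking combining the
# occurrence and severity categories described above. The numeric scores
# and the acceptability threshold are illustrative assumptions.

OCCURRENCE = {"Improbable": 1, "Remote": 2, "Occasional": 3,
              "Probable": 4, "Frequent": 5}
SEVERITY = {"Negligible": 1, "Minor": 2, "Serious": 3,
            "Critical": 4, "Catastrophic": 5}

def risk_score(occurrence, severity):
    """Combine the two category scores into a single semiquantitative rank."""
    return OCCURRENCE[occurrence] * SEVERITY[severity]

def acceptable(occurrence, severity, threshold=8):
    """Flag failure modes whose combined score exceeds a hypothetical threshold."""
    return risk_score(occurrence, severity) <= threshold

print(risk_score("Occasional", "Serious"))  # 9
print(acceptable("Occasional", "Serious"))  # False -> warrants mitigation
```

Failure modes ranked unacceptable would be the first candidates for the mitigation policies and procedures discussed earlier.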

Risk Management, Statistical QC

As demonstrated in Fig. 2, risk management activities that identify failure modes and estimate the likelihood and severity of patient harm from an incorrect reported patient result (areas shaded in blue) complement statistical QC planning and implementation that control the number of incorrect patient results produced and reported in the event of an out-of-control condition (areas shaded in green). In combination, they address all aspects of the sequence of events that can lead to patient harm.

Risk management should be used to minimize the number of out-of-control conditions occurring in the analysis process. Statistical QC should be used to mitigate the impact of the eventual out-of-control conditions that will inevitably arise. Protocols based on statistical QC can reduce the probability that an out-of-control condition will lead to an incorrect result being reported. The combination of risk management mitigation activities and QC practices based on statistical QC can significantly reduce the number of incorrect results reported.

In summary, recent guidelines such as EP23-A introduce risk management principles that may not be familiar to many in the laboratory community. In combination with statistical QC, risk management principles and activities can help the laboratory estimate and control the chance that failures in the laboratory lead to incorrect patient results that cause patient harm.

Dr. Parvin is manager of Advanced Statistical Research; John Yundt-Pacheco is Scientific Fellow; and Andy Quintenz is Global Scientific and Professional Affairs Manager, Bio-Rad.


1. CLSI. Laboratory Quality Control Based on Risk Management; Approved Guideline. CLSI document EP23-A. Wayne PA: Clinical and Laboratory Standards Institute; 2011.

2. CLSI. Risk Management Techniques to Identify and Control Laboratory Error Sources; Approved Guideline - Second Edition. CLSI document EP18-A2. Wayne PA: Clinical and Laboratory Standards Institute; 2009.

3. ISO. Medical devices - Application of risk management to medical devices. ISO 14971. Geneva, Switzerland: International Organization for Standardization; 2007.

4. ISO. Medical laboratories - Reduction of error through risk management and continual improvement. ISO 22367. Geneva, Switzerland: International Organization for Standardization; 2008.

5. Parvin CA, Yundt-Pacheco J, Williams M. The focus of laboratory quality control: Why QC strategies should be designed around the patient, not the instrument. ADVANCE for Administrators of the Laboratory 2011;20(3):48-9.

6. Parvin CA, Yundt-Pacheco J, Williams M. Designing a quality control strategy: In the modern laboratory three questions must be answered. ADVANCE for Administrators of the Laboratory 2011;20(5):53-4.

7. Parvin CA, Yundt-Pacheco J, Williams M. The frequency of quality control testing. QC testing by time or number of patient specimens and the implications for patient risk are explored. ADVANCE for Administrators of the Laboratory 2011;20(7):66-9.

Copyright 2015 Merion Matters. All rights reserved.

