
Looking Ahead to Patient Risk Management

We're now living in a time when sophisticated automated systems continuously produce patient test results. Yet typical QC practices are built around a batch of patient samples, or are set by default to a once-daily regulatory minimum. Take your laboratory into the era of patient risk management, with Bio-Rad as your partner.

In this article you will learn about building a QC system based around patient risk management.  Related articles provide more detail on key concepts.  There are links to useful references and resources.  You also have links to: a catalog of independent third party Bio-Rad control materials; Product Inserts with Levels, Mean Values and Ranges; and the Unity™ Interlaboratory Program.  

Regulation is changing from “One-size-fits-all QC” to doing the “Right QC”

Regulation is changing. The Centers for Medicare & Medicaid Services (CMS) has recognized that "One-size-fits-all QC" is no longer appropriate given the newer technologies now available in the laboratory. What is needed is "design of an appropriate and effective QCP (QC Plan) for each laboratory and each specific test; that is the 'Right' QC!" "This new QC protocol will not necessarily reduce QC requirements, but instead, will be the 'right' QC for [each] laboratory, its environment, patients, personnel, test systems, etc."1 For more information on Right QC see the related article One-size-fits-all QC vs. Right QC.

1. DHHS, CMS, Office of Clinical Standards and Quality/Survey and Certification Group, Ref: S&C: 12-03-CLIA, November 4, 2011.

One-size-fits-all QC vs. Right QC

The concept of moving from One-size-fits-all QC to Right QC was introduced in a memorandum from the Director of the Survey & Certification Group to State Survey Agency Directors dated November 4, 2011: “Initial Plans and Policy Implementation for Clinical and Laboratory Standards Institute (CLSI) Evaluation Protocol-23 (EP23), ‘Laboratory Quality Control Based on Risk Management’, as Clinical Laboratory Improvement (CLIA) Quality Control (QC) Policy.” The memorandum notes: “This new QC protocol will not necessarily reduce QC requirements, but instead, will be the ‘right’ QC for this laboratory, its environment, patients, personnel, test systems, etc.” A summary of the memorandum, the memorandum itself, and answers to Frequently Asked Questions on EP23 are available at:
https://www.cms.gov/Medicare/Provider-Enrollment-and-Certification/SurveyCertificationGenInfo/Policy-and-Memos-to-States-and-Regions-Items/CMS1253857.html.

 
Expected Number of Patients Compromised by Failure

With respect to the window of vulnerability that is opened whenever patient results are reported between evaluations of QC materials, Parvin and colleagues state:

  • if we consider that a test system failure can begin at any specimen with equal probability, then the expectation is that half the number of patient specimens tested between QC evaluations will be affected in the event of an undetected test system failure.1

Parvin CA, Yundt-Pacheco J, Williams M. Designing a quality control strategy: In the modern laboratory three questions must be answered. ADVANCE for Administrators of the Laboratory 2011;(5):53-54.

Here is the proof. We begin by assuming that a failure is equally likely to occur before any patient specimen between QC events. If there are n patient specimens between QC events, there are n+1 possible failure locations: before each of the n specimens, plus the space after the last specimen but before the next QC event.

The expected number of patients compromised by failure – or the average number of patients compromised by failure - can be computed as the sum of the patients compromised divided by the number of possible failure locations (there are n+1 failure locations).

If the failure occurs prior to the first patient specimen (but after the QC), then all n patients would be compromised.

If the failure occurs prior to the second patient specimen (but after the first patient specimen), then n-1 patients would be compromised.

If the failure occurs prior to the last patient specimen (but after the second to the last patient specimen), then 1 patient would be compromised.

If the failure occurs after the last patient specimen but before the next QC, then none of the patients would be compromised.

So the sum of the patients compromised is n + (n−1) + … + 1 + 0. This is the sum of the first n integers:

n + (n−1) + … + 1 = n(n+1)/2

To compute the expected number of patients compromised by a failure, divide this sum by the number of possible failure locations, n+1:

[n(n+1)/2] / (n+1) = n/2

The expected number of patients compromised by a failure is therefore ½ the number of patients between QC events.
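The result can also be checked numerically. Here is a short Python sketch (the function name `expected_compromised` is ours, not from the source) that enumerates the n+1 equally likely failure locations and averages the number of compromised patients:

```python
# Verify that the expected number of patients compromised by an
# undetected failure is n/2 when n specimens run between QC events.
def expected_compromised(n):
    """Average patients compromised over the n+1 equally likely
    failure locations between two QC events."""
    # A failure at location i (just before the (i+1)-th specimen)
    # compromises n - i patients; location n compromises none.
    return sum(n - i for i in range(n + 1)) / (n + 1)

print(expected_compromised(100))  # 50.0
print(expected_compromised(7))    # 3.5
```

For any n the enumeration returns exactly half the number of patients between QC events, matching the closed-form result above.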


 
Concentrations of Control Materials

In Guideline C24, CLSI states: “The number of levels and concentration of quality control materials should be sufficient to determine proper method performance over the measuring range of interest.” They continue:

For most analyte-method combinations, a minimum of two levels (concentrations) of control materials is recommended. Where possible, analyte concentrations should be at clinically relevant levels to reflect values encountered in patient specimens.

And:

Control materials may be selected to cover the measuring range. Routine testing of these materials may be helpful in confirming the expected range of the procedure.

Finally, note that:

. . . the control materials specified are separate external specimens to be analyzed repeatedly by the measurement procedure, the quality control materials should be different from the calibrator materials to ensure the QC procedure provides an independent assessment of the measurement procedure’s performance in its entirety, including the procedure for calibration of the measurement.

Clinical and Laboratory Standards Institute. C24: Statistical Quality Control for Quantitative Measurement Procedures: Principles and Definitions. Wayne, PA. Sections 6.2.1 (Relation to Calibrators) and 6.2.2 (Concentrations of Analytes in Control Materials).

While for most analyte-method combinations a minimum of two levels (concentrations) of control materials is recommended, consider the analyte-method combinations that require single-point calibration and have a linear detector response. Two levels of control material may not be sufficient to demonstrate linearity over the analytical measurement range (AMR). Next, consider analyte-method combinations that require multipoint calibration and have a non-linear response. Again, just two levels of control material may not be sufficient to demonstrate appropriate results over the entire calibration curve. Clearly, then, there are situations requiring more than a minimum of two levels (concentrations) of control materials.

For some analyte-method combinations, measuring ranges and clinical situations, adequate monitoring may require more than one control material. For example, depending on the clinical situation (gestational stage; ectopic pregnancy; spontaneous abortion; trophoblastic disease), more than one control material may be needed to verify hCG performance at relevant concentrations.

 

 
Six Sigma Background

Six Sigma is a quality management strategy that categorizes process capability by evaluating how many process standard deviations can fit within the tolerance limits of the process. The more standard deviations, or sigmas, that fit between the mean of the process and the tolerance limits, the more robust the process will be.

As variation is removed from a process, the standard deviations become smaller and more of them can fit within the process tolerance limits. A process that can fit six standard deviations within the tolerance limits (a six sigma process) will only produce 3.4 defects per million opportunities even in the presence of a 1.5 standard deviation shift, and is considered "world class quality."

Sigma metrics can be computed directly from the standard deviation and the tolerance limits or they can be estimated by the defect rate. See the side bar Sigma Values in the Laboratory for details on computing laboratory sigma metrics. For estimating a sigma metric from a defect rate, determine how many defects are produced from a million opportunities. The number of defects per million opportunities can be converted into a sigma metric by comparing it to the normal distribution in terms of standard deviations.

Using the definition of the normal distribution, 68.2689% of the distribution falls within one standard deviation of the mean, which means 31.7310% of the distribution is expected to fall outside the one standard deviation range. Multiplying the fraction 0.317310 by 1,000,000 gives 317,310. If a process produces 317,310 defects per 1,000,000 opportunities, this corresponds to tolerance limits of one standard deviation, or a 1 σ process. A table can be constructed for sigma metrics 1 – 6:

This Sigma Defects table assumes that a process is perfectly centered on the mean, which is usually an optimistic assumption. A more pragmatic view estimates sigma metrics assuming a 1.5 SD shift has taken place. Most sigma metric defect tables build a 1.5 SD shift into the table, like this:
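Both the centered and the 1.5 SD-shifted defects-per-million values can be reproduced with a short stdlib Python sketch (the function names are ours); it builds the standard normal CDF from `math.erf`:

```python
import math

def phi(x):
    """Standard normal CDF, built from the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def dpmo_centered(sigma):
    """Defects per million for a perfectly centered process:
    both tails fall outside +/- sigma."""
    return 2.0 * (1.0 - phi(sigma)) * 1_000_000

def dpmo_shifted(sigma, shift=1.5):
    """Defects per million after the process mean drifts by
    `shift` standard deviations toward one tolerance limit."""
    return (1.0 - phi(sigma - shift) + phi(-sigma - shift)) * 1_000_000

for k in range(1, 7):
    print(f"{k} sigma: centered {dpmo_centered(k):12.2f}  "
          f"shifted {dpmo_shifted(k):12.2f}")
```

At 1σ the centered case gives the 317,310 figure derived above, and at 6σ the shifted case gives the famous 3.4 defects per million opportunities.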

 
Sigma Values in the Laboratory

Laboratory quality specifications are often defined in terms of allowable total error limits (TEa). If the difference between the true concentration of an analyte and the reported concentration in a patient's specimen exceeds TEa, the result is considered unreliable. The sigma metric expresses the number of analytical standard deviations of the test system process that fit within the specified allowable total error limits. That is,

Sigma = (TEa − |Bias|) / SD

Bias is the systematic difference between the expected results obtained by the laboratory's test method and the results that would be obtained from an accepted reference. The reference may be another test method, a standard, or a consensus reference such as a proficiency program or an inter-laboratory peer-comparison program. SD is the total analytical standard deviation of the test method. Equivalently, the quantities can be given as percents:

Sigma = (TEa% − |Bias%|) / %CV

where %CV is the analytical coefficient of variation of the test method. The figure below gives a graphical example of a test method with 1% bias, 2.5% coefficient of variation, and a specified allowable total error of 10%.

In this case the sigma value is

Sigma = (10% − 1%) / 2.5% = 3.6

That is, 3.6 analytical standard deviations fit within the 10% quality specification.
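The arithmetic can be captured in a tiny Python sketch (the function name `sigma_metric` is our own, for illustration):

```python
def sigma_metric(tea_pct, bias_pct, cv_pct):
    """Sigma metric: number of analytical SDs that fit within the
    allowable total error limit, (TEa - |Bias|) / CV, all in percent."""
    return (tea_pct - abs(bias_pct)) / cv_pct

# The worked example from the text: TEa 10%, bias 1%, CV 2.5%
print(sigma_metric(10.0, 1.0, 2.5))  # 3.6
```

The same function applies whether the inputs are in percent or in concentration units, as long as all three use the same units.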

Bias can have a significant impact on analytical quality and should usually be removed from the laboratory test system when it is identified. However, eliminating bias below a certain threshold can be difficult and attempts to do so are more likely to increase the overall imprecision of the test method. In general, the value for bias used in sigma computations should be the minimum threshold at which bias is actionable (an attempt to remove it will be made).

Source: “Sigma Metrics, Total Error Budgets, and Quality Control”, http://laboratory-manager.advanceweb.com/Archives/Article-Archives/Sigma-Metrics-Total-Error-Budgets-QC.aspx, accessed on September 26, 2012

 
Sigma Values calculated from the Unity™ Interlaboratory Program

Analytical imprecision was computed using QC data reported by laboratories participating in the Bio-Rad Unity™ Interlaboratory Program. Thirty-six analytes were evaluated: 20 had QC data reported at two concentration levels and 16 at three concentration levels. QC results submitted over a 12-month period from a single lot were used to compute the mean and within-laboratory standard deviation at each concentration level of the control for every submitting laboratory.

The laboratory mean and SD at each concentration level that represented the 50th percentile (median) of the means and SDs computed for all of the individual laboratories were used in the analyses. The sigma values are representative of the QC data reported because each laboratory's sigma value was computed with respect to that laboratory's test method and then the median of all sigma values was used.

Total allowable error (TEa) specifications were obtained from published tables that define allowable error in terms of biological variability.

When reviewing the following table, recall that the Sigma Value expresses the number of analytical standard deviations that fit within the specified allowable total error limits (QC Sigma = TEa%/CV%): the higher the QC Sigma, the more robust the analyte; the lower the QC Sigma (σ), the less robust the analyte. For example, among the electrolytes, potassium is a very robust analyte (5.51σ at Level 1 and 7.34σ at Level 2) whereas sodium and chloride are much less robust (sodium: 1.24σ at Level 1 and 1.38σ at Level 2; chloride: 1.60σ at Level 1 and 1.86σ at Level 2).

The main reason for such a difference between the sigma values for potassium, sodium, and chloride is the total allowable error (TEa) specifications used. While the CVs are relatively close, the TEa specifications are not: sodium has a TEa of 1.12, chloride 1.88, and potassium 7.44. The large, and perhaps surprising, differences in the performance of these electrolytes demonstrate the utility of calculating sigma values. The large differences in sigma values also suggest that patient results for sodium and chloride would benefit from running more levels of QC more frequently than for potassium. See related article Sigma Values and QC Strategy Design.

Turning from chemistry to immunoassay, the following table clearly identifies less robust analytes where patient results would likewise benefit from running more levels of QC more frequently. For example, Total T3, Free T4, Total T4, and Testosterone have Sigma Values less than 2. Again, see related article Sigma Values and QC Strategy Design.

Adapted from: Kuchipudi L, Yundt-Pacheco J, Parvin CA. Computing a patient-based sigma metric. Clin Chem 2010;56(6), Supplement:A35.

A table showing values for Imprecision, Bias and Total Error can be found here.

The values are derived from: Ricós C, Alvarez V, Cava F, García-Lario JV, Hernández A, Jiménez CV, Minchinela J, Perich C, Simón M. “Current databases on biologic variation: pros, cons and progress.” Scandinavian Journal of Clinical and Laboratory Investigation 1999;59:491-500. These values were updated/modified with the most recent specifications made available in 2012.

 
Sigma Values and QC Strategy Design

Sigma values are useful for guiding QC strategy design. For a high sigma process it is relatively easy for the laboratory to design a QC procedure to detect any out-of-control condition that could pose a significant risk of producing unreliable results. This is because a relatively large out-of-control condition would have to occur before there was much chance of producing results with errors exceeding the allowable total error specification, and it is easy to design QC procedures that can detect large out-of-control conditions. On the other hand, for a low sigma process, a relatively small out-of-control condition may pose an unacceptably high risk of producing unreliable patient results. It can be challenging to design QC procedures that are good at detecting small out-of-control conditions.

How many QC samples should be run?

Simple guidelines for choosing the number of QC samples to run and appropriate quality control rules based on sigma values have been proposed (Westgard JO. Six sigma quality design & control, 2nd ed. Madison WI: Westgard QC Inc., 2006). An example of one such guideline is shown in the table below:

For lower sigma values, more QC samples and more powerful QC rules are recommended. Note that a 1:3s QC rule rejects if any QC result differs from its target concentration by more than 3 standard deviations. Multirules are combinations of individual QC rules that tend to be more powerful than simple rules such as the 1:3s rule. In general, for large sigma value processes (≥6σ), simple QC rules with low false rejection rates are adequate. For intermediate sigma value processes (sigma values between 3.5 and 6), quality goals can be met, but more elaborate QC strategies may be required. For low sigma values (<3.5σ) it will be difficult to meet the laboratory's quality goals without finding ways to further reduce the test system's analytical bias or its analytical imprecision.

See Sigma Metrics, Total Error Budgets, and Quality Control here

How often should QC samples be run?

As well as guiding how many QC samples to run and which QC rules to apply to the results, the sigma metric has also been proposed as the starting point for deciding how often controls should be run:

  • >6σ (excellent tests) – evaluate with one QC per day (alternating levels between days) and a 1:3.5s rule
  • 4σ–6σ (suited for purpose) – evaluate with two levels of QC per day and the 1:2.5s rule
  • 3σ–4σ (poor performers) – use a combination of rules with two levels of QC twice per day
  • <3σ (problems) – maximum QC: three levels, three times a day; consider testing specimens in duplicate

Cooper et al. Collective opinion paper on findings of the 2010 convocation of experts on laboratory quality, Clin Chem Lab Med. 2011; 49(5):793-802.

The basic methodology is to group analytes by their sigma metric and then make frequency decisions based on patient volume and risk assessment with the low sigma analytes controlled the most frequently and high sigma analytes controlled the least frequently. See related article Sigma Values calculated from the Unity™ Interlaboratory Program and Expected Number of Patients Compromised by Failure.
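The grouping step can be sketched in a few lines of Python. This is an illustrative mapping of the Cooper et al. sigma bands quoted above to their suggested QC strategies (the function name and return strings are ours, not a Bio-Rad or CLSI algorithm):

```python
def qc_strategy(sigma):
    """Suggested QC frequency and rules by sigma band, following
    the Cooper et al. groupings (illustrative sketch only)."""
    if sigma > 6:
        return "one QC per day, alternating levels; 1:3.5s rule"
    if sigma >= 4:
        return "two levels of QC per day; 1:2.5s rule"
    if sigma >= 3:
        return "two levels twice per day; multirule combination"
    return "three levels three times per day; consider duplicate testing"

# Examples using sigma values from the Unity data discussed earlier
print(qc_strategy(5.51))  # potassium, Level 1
print(qc_strategy(1.24))  # sodium, Level 1
```

In practice the band boundaries and frequencies would be adjusted for patient volume and the laboratory's own risk assessment, as the text describes.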

 
