## Six Sigma Background

Six Sigma is a quality management strategy that categorizes process capability by evaluating how many process standard deviations fit within the tolerance limits of the process. The more standard deviations (or sigmas) that fit between the mean of the process and the tolerance limits, the more robust the process.

As variation is removed from a process, the standard deviations become smaller and more of them can fit within the process tolerance limits. A process that can fit six standard deviations within the tolerance limits (a six sigma process) will only produce 3.4 defects per million opportunities even in the presence of a 1.5 standard deviation shift, and is considered “world class quality.”

Sigma metrics can be computed directly from the standard deviation and the tolerance limits, or they can be estimated from the defect rate. See the section "Sigma Values in the Clinical Laboratory" below for details on computing laboratory sigma metrics. To estimate a sigma metric from a defect rate, determine how many defects are produced per million opportunities. The number of defects per million opportunities can then be converted into a sigma metric by comparing it to the normal distribution in terms of standard deviations.

By the definition of the normal distribution, 68.27% of the distribution falls within one standard deviation of the mean, which means 31.73% of the distribution is expected to fall outside the one standard deviation range. Multiplying by 1,000,000 gives 317,310. If a process produces 317,310 defects per 1,000,000 opportunities, this corresponds to tolerance limits of one standard deviation, i.e. a 1σ process. A table can be constructed for sigma metrics 1–6:

| Sigma | Defects per 1,000,000 |
|------:|----------------------:|
| 1 | 317,310 |
| 2 | 45,500 |
| 3 | 2,700 |
| 4 | 63 |
| 5 | 0.57 |
| 6 | 0.002 |

This Sigma Defects table assumes that a process is perfectly centered on the mean – usually an optimistic assumption. A more pragmatic view is taken by estimating sigma metrics assuming a 1.5 SD shift has taken place.

Most sigma metric defect tables build a 1.5 SD shift into the table, like this:

| Sigma | Defects per 1,000,000 (with 1.5 SD shift) |
|------:|------------------------------------------:|
| 1 | 691,462 |
| 2 | 308,538 |
| 3 | 66,807 |
| 4 | 6,210 |
| 5 | 233 |
| 6 | 3.4 |
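These defect rates can be regenerated from the normal distribution using only the Python standard library. The function names below are illustrative; the shifted calculation counts both tails, although the far tail is negligible:

```python
from math import erfc, sqrt

def phi(x):
    """Standard normal cumulative distribution function."""
    return 0.5 * erfc(-x / sqrt(2))

def dpmo_centered(sigma):
    """Defects per million opportunities for a perfectly centered
    process: the area in both tails beyond +/- sigma."""
    return 1e6 * 2 * phi(-sigma)

def dpmo_shifted(sigma, shift=1.5):
    """Defects per million after the process mean shifts by `shift` SD
    toward one tolerance limit (the far tail is counted but negligible)."""
    return 1e6 * (phi(-(sigma - shift)) + phi(-(sigma + shift)))

for k in range(1, 7):
    print(f"{k} sigma: centered {dpmo_centered(k):12.3f},"
          f" shifted {dpmo_shifted(k):12.1f}")
```

For a 6σ process the shifted calculation reproduces the familiar 3.4 defects per million, and the centered calculation for 1σ reproduces the 317,310 figure derived above.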

## Sigma Values in the Clinical Laboratory

Laboratory specifications are often defined in terms of allowable total error limits (TEa). If the difference between the true concentration of an analyte and the reported concentration in a patient’s specimen exceeds TEa, the result is considered unreliable. The sigma metric expresses the number of analytical standard deviations of the test system process that fit within the specified allowable total error limits. That is,

Sigma = (TEa - |Bias|) / SD

Bias is the systematic difference between the expected results obtained by the laboratory’s test method and the results that would be obtained from an accepted reference. The reference may be another test method, a standard, or a consensus reference such as a proficiency program or an inter-laboratory peer-comparison program. SD is the total analytical standard deviation of the test method. Equivalently, the quantities can be given as percents:

Sigma = (TEa% - |Bias%|) / %CV

where %CV is the analytical coefficient of variation of the test method. Consider, for example, a test method with a 1% bias, a 2.5% coefficient of variation, and a specified allowable total error of 10%.

In this case the sigma value is

Sigma = (10% - 1%) / 2.5% = 3.6

That is, 3.6 analytical standard deviations fit within the 10% quality specification. Bias can have a significant impact on analytical quality and should usually be removed from the laboratory test system when it is identified. However, eliminating bias below a certain threshold can be difficult, and attempts to do so are more likely to increase the overall imprecision of the test method. In general, the value for bias used in sigma computations should be the minimum threshold at which bias is actionable (i.e., an attempt to remove it will be made).
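As a minimal sketch, the percent form of the sigma metric reduces to a one-line function (the name `sigma_metric` is illustrative, not from any particular library). The worked example above reproduces the 3.6σ result:

```python
def sigma_metric(tea_pct, bias_pct, cv_pct):
    """Sigma = (TEa% - |Bias%|) / CV%: the number of analytical SDs that
    fit within the allowable total error after accounting for bias."""
    return (tea_pct - abs(bias_pct)) / cv_pct

# Worked example from the text: TEa = 10%, bias = 1%, CV = 2.5%
print(sigma_metric(10, 1, 2.5))  # 3.6
```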

Source: “Sigma Metrics, Total Error Budgets, and Quality Control”, http://laboratory-manager.advanceweb.com/Archives/Article-Archives/Sigma-Metrics-Total-Error-Budgets-QC.aspx, accessed on October 24, 2012

## Sigma Values calculated from the Unity Interlaboratory Program

Analytical imprecision was computed using QC data reported by laboratories participating in the Bio-Rad Unity™ Interlaboratory Program. Thirty-six analytes were evaluated: 20 had QC data reported at two concentration levels and 16 at three concentration levels. QC results submitted over a 12-month period from a single control lot were used to compute the mean and within-laboratory standard deviation at each concentration level of the control for every submitting laboratory.

The laboratory mean and SD at each concentration level that represented the 50th percentile (median) of the means and SDs computed for all of the individual laboratories were used in the analyses. The sigma values are representative of the QC data reported because each laboratory’s sigma value was computed with respect to that laboratory’s test method, and then the median of all sigma values was used.

Total allowable error (TEa) specifications were obtained from published tables that define allowable error in terms of biological variability.

When reviewing the following table, recall that the Sigma Value expresses the number of analytical standard deviations that fit within the specified allowable total error limits (QC Sigma = TEa% / CV%): the higher the QC Sigma, the more robust the analyte; the lower the QC Sigma (σ), the less robust the analyte. For example, among the electrolytes, potassium is a very robust analyte (5.51σ at Level 1 and 7.34σ at Level 2), whereas sodium (1.24σ at Level 1 and 1.38σ at Level 2) and chloride (1.60σ at Level 1 and 1.86σ at Level 2) are much less robust.

The main reason for such a difference between the sigma values for potassium, sodium, and chloride is the total allowable error (TEa) specifications used. While the CVs are relatively close, the TEa specifications are not: sodium has a TEa of 1.12, chloride 1.88, and potassium 7.44. The large, and probably surprising, differences in the performance of these electrolytes demonstrate the utility of calculating sigma values. The large differences in sigma values also suggest that patient results for sodium and chloride would benefit from running more levels of QC more frequently than for potassium.
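To illustrate the point, the CVs implied by the quoted TEa and Level 1 sigma values can be back-calculated by rearranging QC Sigma = TEa% / CV%. This is a rough sketch using only the figures stated above:

```python
# TEa and Level 1 sigma values as quoted in the text
electrolytes = {
    "Sodium":    {"tea_pct": 1.12, "sigma_l1": 1.24},
    "Chloride":  {"tea_pct": 1.88, "sigma_l1": 1.60},
    "Potassium": {"tea_pct": 7.44, "sigma_l1": 5.51},
}

for name, v in electrolytes.items():
    # Rearranged from QC Sigma = TEa% / CV%  =>  CV% = TEa% / QC Sigma
    implied_cv = v["tea_pct"] / v["sigma_l1"]
    print(f"{name:10s} TEa {v['tea_pct']:5.2f}  implied CV {implied_cv:.2f}")
```

The implied CVs span roughly 0.9 to 1.4, while the TEa specifications span 1.12 to 7.44, consistent with the observation that the TEa specifications, not the CVs, drive the sigma differences.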

Turning from chemistry to immunoassay, the following table clearly identifies less robust analytes where patient results would likewise benefit from running more levels of QC more frequently. For example, Total T3, Free T4, Total T4, and Testosterone have Sigma Values less than 2.

Adapted from “Computing a Patient-Based Sigma Metric” Kuchipudi, L., Yundt-Pacheco, J., Parvin, C.A., Clinical Chemistry. 2010, 56(6), Supplement:A35.

A table showing values for Imprecision, Bias and Total Error can be found at https://www.qcnet.com/Portals/0/PDFs/BVValues1Final.pdf

The values are derived from: Ricos, C., Alvarez, V., Cava, F., Garcia-Lario, J.V., Hernandez, A., Jimenez, C.V., Minchinela, J., Perich, C., Simon, M., “Current databases on biologic variation: pros, cons and progress”, Scandinavian Journal of Clinical and Laboratory Investigation, 1999;59:491-500. These values are updated/modified with the most recent specifications made available in 2012.

## Sigma Values and QC Strategy Design

Sigma values are useful for guiding QC strategy design. For a high sigma process, it is relatively easy for the laboratory to design a QC procedure that detects any out-of-control condition posing a significant risk of producing unreliable results. This is because a relatively large out-of-control condition would have to occur before there was much chance of producing results with errors exceeding the allowable total error specification, and it is easy to design QC procedures that detect large out-of-control conditions. On the other hand, for a low sigma process, even a relatively small out-of-control condition may pose an unacceptably high risk of producing unreliable patient results, and it can be challenging to design QC procedures that are good at detecting small out-of-control conditions.

### How Many QC Samples Should be Run?

Simple guidelines for choosing the number of QC samples to run and appropriate quality control rules based on sigma values have been proposed (Westgard JO. Six sigma quality design & control, 2nd ed. Madison WI: Westgard QC Inc., 2006). An example of one such guideline is shown in the table below:

For lower sigma values, more QC samples and more powerful QC rules are recommended. Note that a 1:3S QC rule rejects if any of the QC results differ from their target concentration by more than 3 standard deviations. Multirules are combinations of individual QC rules that tend to be more powerful than simple rules such as the 1:3S rule. In general, for high sigma value processes (≥6σ), simple QC rules with low false rejection rates are adequate. For intermediate sigma value processes (sigma values between 3.5 and 6), quality goals can be met, but more elaborate QC strategies may be required. For low sigma values (<3.5σ), it will be difficult to meet the laboratory’s quality goals without finding ways to further reduce the test system’s analytical imprecision.
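As a rough sketch of how such rules can be evaluated in software, the functions below assume QC results are expressed as z-scores, i.e. (result - target) / SD. The 2:2S rule shown is one common multirule component; a full Westgard multirule set includes additional rules:

```python
def rule_1_3s(z_scores):
    """1:3S - reject if any QC result is more than 3 SD from its target."""
    return any(abs(z) > 3 for z in z_scores)

def rule_2_2s(z_scores):
    """2:2S - reject if two consecutive QC results exceed 2 SD
    on the same side of the target."""
    return any((a > 2 and b > 2) or (a < -2 and b < -2)
               for a, b in zip(z_scores, z_scores[1:]))

def multirule(z_scores):
    """Reject if either component rule is violated
    (a simple two-rule combination for illustration)."""
    return rule_1_3s(z_scores) or rule_2_2s(z_scores)

# Two consecutive results beyond +2 SD trigger the 2:2S rule
print(multirule([0.5, 2.3, 2.4]))  # True
```

A multirule like this gains error-detection power over 1:3S alone because sustained shifts of 2-3 SD, which 1:3S misses, are caught by 2:2S, while the false rejection rate stays low.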