Appendix D - Quality Control
Precision: Precision refers to the reproducibility of replicate results about a mean, which is not necessarily the true value. Replicate analysis is the primary means of evaluating data variability, or precision. Two commonly used measures of variability that adjust for the magnitude of the analyte concentration are the coefficient of variation and the relative percent difference.
The coefficient of variation is used most often when the size of the standard deviation (s) changes with the magnitude of the mean. Coefficient of variation (CV), also called relative standard deviation (RSD), is defined:
CV = RSD = (s / ȳ) x 100%
where ȳ is the mean of the replicate results.
Sample standard deviation and coefficient of variation are used when there are at least three replicate measurements.
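As a minimal sketch in Python, the coefficient of variation can be computed from replicate results using the sample standard deviation; the triplicate values below are hypothetical:

```python
import statistics

def coefficient_of_variation(replicates):
    """CV (= RSD) as a percentage: sample standard deviation over the mean.
    Requires at least three replicate measurements."""
    if len(replicates) < 3:
        raise ValueError("CV requires at least three replicates")
    mean = statistics.mean(replicates)
    s = statistics.stdev(replicates)  # sample standard deviation (n - 1 denominator)
    return (s / mean) * 100.0

# Hypothetical triplicate crude-protein results (% of dry matter):
print(round(coefficient_of_variation([17.2, 17.5, 17.1]), 2))
```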
The second measure of variability which adjusts for the magnitude of the analyte is relative percent difference (RPD), or relative range (RR). This measure is used when duplicate measurements are made and is defined:
RPD = (|D1 - D2| / ((D1 + D2)/2)) x 100%
where D1 and D2 are the duplicate results.
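Relative percent difference takes the absolute difference between the two duplicates over their mean; a short Python sketch with hypothetical duplicate values:

```python
def relative_percent_difference(x1, x2):
    """RPD: absolute difference between duplicates over their mean, as a percent."""
    mean = (x1 + x2) / 2.0
    return abs(x1 - x2) / mean * 100.0

# Hypothetical duplicate moisture results (%):
print(round(relative_percent_difference(10.4, 10.0), 2))
```

Because the absolute value is taken, the order of the duplicates does not matter.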
Bias: Bias is the nearness of a result to the true value and is often described as systematic error. Bias estimates are frequently based on the recovery of the analyte of interest from certified reference materials (such as NIST reference materials) or from matrix or surrogate spikes when reference materials are not available. The percent recovery is calculated as:
For certified reference materials:
%R = (M / K) x 100%
where:
  %R = percent recovery (of known materials)
  M = measured concentration
  K = known concentration
For matrix or surrogate spikes:
%R = ((S - U) / Csa) x 100%
where:
  %R = percent recovery (of matrix or surrogate spikes)
  S = measured concentration in spiked sample
  U = measured concentration in unspiked sample
  Csa = actual concentration of spike added
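The two recovery calculations (reference material and matrix/surrogate spike) can be sketched in Python; the concentrations below are hypothetical:

```python
def percent_recovery_reference(measured, known):
    """%R for a certified reference material: measured over known concentration."""
    return measured / known * 100.0

def percent_recovery_spike(spiked, unspiked, spike_added):
    """%R for a matrix or surrogate spike: recovered spike over spike added."""
    return (spiked - unspiked) / spike_added * 100.0

# Hypothetical concentrations (mg/kg):
print(round(percent_recovery_reference(9.6, 10.0), 1))   # reference material
print(round(percent_recovery_spike(14.3, 9.6, 5.0), 1))  # matrix spike
```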
Control Chart: Precision and bias are monitored by plotting control charts to determine whether the measurement system is in control. The standard deviation (s) is calculated from repeated analysis of in-house quality control samples or from the recoveries of known or spiked materials. The ±2s value is used as an "alert" or warning marker on the control chart, and the ±3s value serves as the outer bound of control. Once control charts have been established, they are easily used to determine whether the analysis is "in control" or "out of control." If the system is determined to be "out of control," all analytical work must be stopped until an "in control" situation is re-established.
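The ±2s warning and ±3s control limits described above can be computed from a history of QC-sample results and used to classify each new result; a minimal sketch, with a hypothetical history of 15 results:

```python
import statistics

def control_limits(qc_results):
    """Warning (±2s) and control (±3s) limits from repeated QC-sample results."""
    m = statistics.mean(qc_results)
    s = statistics.stdev(qc_results)
    return {"mean": m,
            "warning": (m - 2 * s, m + 2 * s),
            "control": (m - 3 * s, m + 3 * s)}

def classify(value, limits):
    """Label a new QC result as in control, a warning, or out of control."""
    lo3, hi3 = limits["control"]
    lo2, hi2 = limits["warning"]
    if value < lo3 or value > hi3:
        return "out of control"
    if value < lo2 or value > hi2:
        return "warning"
    return "in control"

# Hypothetical history of 15 QC-sample results:
history = [10.1, 9.9, 10.0, 10.2, 9.8, 10.1, 10.0, 9.9, 10.3, 9.7,
           10.0, 10.1, 9.9, 10.2, 9.8]
limits = control_limits(history)
print(classify(10.05, limits))
```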
Control Charting: Three types of control charts are used in laboratories: the X-chart, the spiked-sample chart, and the range or R-chart.
X-Chart: A standard reference material or control sample is selected and analyzed with each batch of unknowns or, if a large number of unknowns is run in a batch, one control sample is analyzed for each 10 or 20 unknowns. After 15 to 20 analyses, the mean and standard deviation of the data are calculated and a control chart is constructed. An example of the chart is shown below:
The center line represents the mean. The two outer lines represent the upper (UCL) and lower (LCL) control limits, or 99 percent confidence limits, and the two lines closest to the mean line are the upper (UWL) and lower (LWL) warning limits, or 95 percent confidence limits. One analysis outside the 95 percent confidence limits is not cause for alarm; however, two consecutive analyses falling on one side of the mean line between the 95 and 99 percent limits would certainly be cause for an investigation. Control charts are also very useful for visualizing trends. The chart below is an example of a chart showing drift.
The X-chart would be appropriate for monitoring nitrogen (crude protein), acid detergent fiber, neutral detergent fiber and acid detergent insoluble nitrogen determinations in forages.
Spiked-sample control chart:
R-chart: It is common practice in analytical laboratories to run duplicate analyses at frequent intervals as a means of monitoring the precision of analyses and detecting out-of-control situations. This is often done for analyses for which no suitable control samples or reference materials are available, such as moisture determinations in forages. Usually the mean of the duplicates is reported, and the difference between the duplicates, or range, is examined for acceptability. Frequently, there is no quantitative criterion for acceptability. The use of the duplicate range in a control chart is one system for deciding the acceptability of individual ranges. Youden and Steiner (Statistical Manual of the Association of Official Analytical Chemists, Washington, DC: Association of Official Analytical Chemists, 1975) have published a table for the variation of duplicate differences, or ranges. Using the factors they published, a control chart can be set up for the differences between duplicate analyses, or ranges of the duplicates. Based on this table it can be shown that 50 percent of the ranges are below the 0.845R value, 95 percent are below 2.45R and 99 percent are below 3.27R, where R is the average range for a set of duplicate analyses.
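Applying the Youden and Steiner factors for duplicates (0.845, 2.45, and 3.27 times the average range, as given above) can be sketched in Python; the set of 15 duplicate ranges below is hypothetical:

```python
def r_chart_limits(ranges):
    """R-chart lines from a set of duplicate ranges, using the Youden-Steiner
    factors for duplicates: 50% of ranges fall below 0.845 * Rbar, 95% below
    2.45 * Rbar, and 99% below 3.27 * Rbar, where Rbar is the average range."""
    rbar = sum(ranges) / len(ranges)
    return {"median_line": 0.845 * rbar,
            "warning_95": 2.45 * rbar,
            "control_99": 3.27 * rbar}

# Hypothetical absolute differences for 15 duplicate moisture analyses (%):
ranges = [0.2, 0.1, 0.3, 0.2, 0.1, 0.4, 0.2, 0.3,
          0.1, 0.2, 0.3, 0.2, 0.1, 0.2, 0.1]
lines = r_chart_limits(ranges)
print({k: round(v, 3) for k, v in lines.items()})
```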
After 15 or 20 duplicates are run, the average range is calculated, and the 50 percent line and the 95 and 99 percent confidence limits are drawn on the chart. The resulting chart is used to monitor the precision of the analysis.
A typical control chart for a moisture determination is shown below. The solid line on the chart represents the 50 percent line, and the two dashed lines are the 95 and 99 percent confidence limits.
In interpreting duplicate control charts, the 50 percent line plays the same role as the mean or average line in X-charts. If more than five or six points in succession fall on one side of the line, it is a strong indication that something has changed and should be investigated. If the points are on the high side, precision has deteriorated; if they are on the low side, it has improved.
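The run rule described above (several successive points on one side of the 50 percent line) can be checked programmatically; a sketch with hypothetical duplicate ranges and a hypothetical 0.17 center line:

```python
def run_on_one_side(points, center, run_length=6):
    """Flag when `run_length` or more consecutive points fall on the same
    side of the center line (the 50 percent line on a duplicate-range chart)."""
    run, last_side = 0, 0
    for p in points:
        side = 1 if p > center else (-1 if p < center else 0)
        if side != 0 and side == last_side:
            run += 1          # run continues on the same side
        else:
            run = 1 if side != 0 else 0  # run restarts (points on the line reset it)
        last_side = side
        if run >= run_length:
            return True
    return False

# Hypothetical duplicate ranges sitting above a 0.17 center line:
print(run_on_one_side([0.20, 0.22, 0.19, 0.21, 0.23, 0.20], 0.17))
```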
Duplicate control charts are extremely useful for monitoring the precision of analyses for which there are no acceptable reference check materials. Since they are based on the difference between two results, errors due to bias effectively cancel, so no conclusions can be drawn regarding the bias of the analysis.