Quality Assurance in clinical laboratory part 04 (Method comparison).
Method comparison
A method comparison study is conducted to evaluate the accuracy of a new method by comparing its measurements to those of a reference method. The goal is to determine any systematic errors, known as inaccuracy, in the new method. The decision to accept the new method relies on assessing whether the observed measurement errors would impact the medical usefulness of the test.
For the study, a minimum of 40 different patient samples are
tested. These samples are carefully selected to cover a range of low, normal,
and high concentrations. The measurements of these samples are obtained using
both the new test method or instrument being validated and the comparative
method, which serves as the reference. Typically, the comparative method is the
existing method or instrument used by the laboratory.
It is recommended that the samples be analyzed by both the new and comparative methods within two hours of each other. Ideally, the results obtained from the two methods would be identical; in practice, each sample will show some variation within a certain range.
Software capable of generating scatter plots, tables, and
performing statistical calculations is utilized to compile the results from
both the new and reference methods in a method comparison study. The data
points from each method are plotted on a "comparison" plot, with the
new test method on the Y-axis and the reference method on the X-axis. Various
statistical calculations are then applied.
To assess the relationship between the two methods, linear
regression analysis is conducted. This involves fitting a best-fit line through
the data points, and the slope (m) and Y-intercept (b) of the line are
calculated using the equation:
Y = mX + b
A perfect correlation between the two methods would be
represented by all data points lying on a line at a 45° angle. In this case,
the Y-intercept would be 0, and the slope would be 1. A perfect positive
correlation is denoted by a correlation coefficient (r) of 1.00, while a
perfect negative correlation is indicated by r = -1.00. A lack of correlation
is represented by r = 0.00.
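As a concrete illustration, the regression statistics described above can be computed with a short script. This is a minimal pure-Python sketch; the paired glucose values are made-up illustration data, not results from an actual study.

```python
# Least-squares slope, intercept, and correlation coefficient
# for a method comparison data set (illustration values only).

def linear_regression(xs, ys):
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    sxx = sum((x - mean_x) ** 2 for x in xs)
    syy = sum((y - mean_y) ** 2 for y in ys)
    sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    m = sxy / sxx                    # slope
    b = mean_y - m * mean_x          # Y-intercept
    r = sxy / (sxx * syy) ** 0.5     # correlation coefficient
    return m, b, r

# Reference method (X) vs. new method (Y), hypothetical paired results
x = [50, 80, 100, 120, 200]
y = [52.5, 84.0, 105.0, 126.0, 210.0]
m, b, r = linear_regression(x, y)
print(f"slope={m:.3f}, intercept={b:.3f}, r={r:.4f}")
```

Because the fabricated Y values here are exactly 5% above the X values, the slope comes out at 1.05, the intercept at 0, and r at 1.00.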
In the context of the study, an acceptable correlation coefficient is commonly taken as r > 0.95 (by definition, r cannot exceed 1.00). A value this close to 1.00 indicates a strong positive correlation between the new and reference methods, and thus close agreement between the two sets of measurements.
Systematic error (SE)
Systematic error (SE), also known as bias, is determined by the sum of constant and proportional errors. The formula to calculate SE is:
SE = bias = | Y - X |
Here, X represents the concentration measured by the
reference or existing method, while Y represents the concentration measured by
the new method.
To evaluate SE accurately, it is crucial to obtain estimates at concentrations that are significant for medical decision-making; the regression line can be used to estimate the new-method result Yc = mXc + b expected at each decision concentration Xc. When multiple medical decision concentrations are involved, the correlation coefficient (r) is used to judge whether the range of the data is wide enough for the regression statistics to be reliable.
In the context of method comparison, systematic error (SE)
can manifest as proportional error, constant error, or a combination of both.
Minimizing SE is essential for improving the accuracy of the new method.
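A minimal sketch of estimating SE at a medical decision level from the regression statistics follows; the slope, intercept, and decision concentration below are assumed illustration values, not results from a real validation.

```python
# Estimate the bias (systematic error) at a medical decision level Xc
# from the regression line Yc = m*Xc + b (illustration values only).

def systematic_error(m, b, xc):
    yc = m * xc + b          # value the new method is expected to report at Xc
    return abs(yc - xc)      # bias at the decision level

# e.g. slope 1.05, intercept 10 mg/dL, glucose decision level 126 mg/dL
print(round(systematic_error(1.05, 10.0, 126.0), 2))   # -> 16.3
```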
Constant systematic error
Constant systematic error in the new method indicates a
consistent deviation from the comparable results obtained by the existing
method. This means that the new method consistently provides measurements that
are either higher or lower by a fixed amount. For instance, if the new method
for measuring serum glucose has a constant systematic error of 10 mg/dL,
samples with glucose values of 80 and 120 mg/dL measured by the existing method
would yield values of 90 and 130 mg/dL, respectively, when measured by the new
method.
The constant error is given by the Y-intercept of the best-fit line in the method comparison study. When there is no constant error, the Y-intercept is 0.00 and the line passes through the origin. When there is a constant error, as in the example above with a Y-intercept of 10 mg/dL, every Y value differs from its corresponding X value by 10 mg/dL.
Constant error can sometimes arise due to issues with the
blank or background matrix used in the analysis. By adjusting the blank
appropriately, it is possible to correct for this constant error and improve
the accuracy of the new method.
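A pure constant error is easy to picture in code. This sketch reuses the +10 mg/dL glucose example above; `new_method` is a hypothetical stand-in, not a real assay.

```python
# A pure constant systematic error: every new-method result is shifted
# by the same fixed amount regardless of concentration.
CONSTANT_ERROR = 10.0  # mg/dL, assumed illustration value

def new_method(existing_result):
    return existing_result + CONSTANT_ERROR

for x in (80.0, 120.0):
    print(x, "->", new_method(x))   # 80.0 -> 90.0, 120.0 -> 130.0
```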
Proportional systematic error
In method comparison, proportional systematic error refers
to a consistent variation in the results obtained by the new method compared to
the existing method, expressed as a fixed percentage. This percentage remains
the same regardless of the analyte concentration being measured. For example,
if there is a 5% proportional error and the glucose values obtained using the
existing method are 80 and 120 mg/dL, the values obtained by the new method for
the same samples would be 84 mg/dL and 126 mg/dL, respectively.
To calculate the percentage of proportional error, the results from the two methods are used in the equation Y = mX + b, where Y is the result obtained by the new method, m is the slope, X is the result obtained by the existing method, and b is the Y-intercept. To express the proportional error in percentage units, subtract 1 from the slope (m) and multiply the difference by 100. The proportional error can be either positive or negative, indicating that the values obtained by the new method are, respectively, greater or less than the values obtained by the existing method.
In the given example, where Y is 84 mg/dL, X is 80 mg/dL, and b is 0, we can use the equation Y = mX + b to find the value of m. Substituting the values into the equation, we have:
84 = m * 80 + 0
Simplifying the equation, we get:
84 = 80m
Dividing both sides by 80, we find:
m = 84/80 = 1.05
Therefore, the slope (m) is determined to be 1.05.
To calculate the proportional error, we subtract 1 from the
slope and multiply the difference by 100:
(1.05 - 1) * 100 = 0.05 * 100 = 5%
Hence, the proportional error in this case is +5%.
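The same arithmetic can be expressed as a small helper. This is an illustrative sketch assuming, as in the example above, that the Y-intercept b is 0.

```python
# Derive the slope from one paired result and express the
# proportional error in percent (assumes b = 0, as in the example).

def proportional_error_pct(y, x, b=0.0):
    m = (y - b) / x              # solve Y = mX + b for m
    return (m - 1.0) * 100.0     # express the error in percent

print(round(proportional_error_pct(84.0, 80.0), 2))   # -> 5.0
```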
Proportional error can indeed result from improper
calibration and can be rectified by recalibrating the method to ensure more
accurate and consistent results.
In the majority of method comparison studies, it is common
to observe both proportional and constant errors. These errors contribute to
the overall deviation between the measurements obtained by the new method and
those obtained by the existing method.
The slope (m) in the linear regression analysis represents
the proportional error. It indicates the relationship between the measurements
from the new method (Y) and the existing method (X). A slope greater than 1
suggests a positive proportional error, where the values obtained by the new
method tend to be higher than the values obtained by the existing method.
Conversely, a slope less than 1 indicates a negative proportional error, where
the values obtained by the new method tend to be lower.
The Y-intercept (b) in the linear regression analysis represents the constant error. It reflects the deviation between the measurements of the new method (Y) and the existing method (X) when X is zero. A non-zero Y-intercept indicates a constant error: the measurements by the new method consistently differ from those of the existing method by a fixed amount, regardless of the analyte concentration.
Therefore, the combination of the slope (m) and the Y-intercept (b) in the method comparison study provides insight into both the proportional and constant errors associated with the new method when compared to the existing method.
Total Analytical Error (TE)
Total Analytical Error (TE) is a comprehensive measure that combines both random error (RE) and systematic error (SE) to assess the acceptability of a method and determine the potential magnitude of error in a single measurement. TE takes into account the imprecision (RE) and bias (SE) of a method, providing a holistic view of the overall error associated with the measurement.
The standard deviation (SD) is used to quantify the imprecision or random variation in the method. It is determined through reproducibility studies, which assess the variability of results obtained by repeating measurements on the same sample.
Systematic error (SE) is evaluated through method comparison studies, where the same sample is measured using both the new method and a reference method. The bias represents the difference between the results obtained by the two methods for the same sample.
To express the total error (TE), a formula is commonly used:
TE = Z * SD + | Yc - Xc |
Here, Z is a multiplier reflecting the confidence level chosen by the laboratory. The Z-value typically ranges from 2 to 6, with 2 or 3 most often used; a higher Z-value gives a more conservative (higher-confidence) estimate of the possible error in a single measurement.
By calculating TE using the method standard deviation (SD) and the method bias (| Yc - Xc |), laboratories can evaluate the acceptability of a method and assess the potential error associated with a single measurement. The reference [1], authored by Westgard JO, provides further insights into basic method validation.
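The TE formula above can be sketched directly; the SD, bias, and Z values below are assumed illustration numbers, not data from an actual validation study.

```python
# Total analytical error: TE = Z * SD + |Yc - Xc|
# (illustration values only).

def total_error(z, sd, yc, xc):
    return z * sd + abs(yc - xc)

# e.g. SD 2 mg/dL from a replication study, new-method estimate
# Yc = 129 mg/dL at the decision level Xc = 126 mg/dL, Z = 2
print(total_error(2, 2.0, 129.0, 126.0))   # -> 7.0
```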
Total allowable error (TEa)
Total allowable error (TEa) refers to the maximum
permissible level of analytical error that can occur without rendering the
analytical measurement medically unreliable or unusable. TEa represents the
desired quality or accuracy of the test results and is determined based on
several factors, including medical decision levels, the best available analytical
methods, and proficiency testing expectations.
Method validation plays a crucial role in determining the
total error associated with a specific analytical method. The objective within
a laboratory setting is to ensure that the total error of the method remains
within the limits defined by the TEa. If the observed errors, including random error (RE) and systematic error (SE), exceed the medically allowable error, the new method is deemed unacceptable.
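The acceptability decision reduces to a simple comparison of observed total error against the allowable limit. In this sketch, the TEa value used (10% of a 126 mg/dL glucose decision level) is an assumed illustration figure, not a quoted regulatory limit.

```python
# Judge method acceptability: total observed error must not
# exceed the total allowable error (TEa). Illustration values only.

def is_acceptable(random_error, systematic_error, tea):
    return (random_error + systematic_error) <= tea

# e.g. RE 4 mg/dL + SE 3 mg/dL against an assumed TEa of 12.6 mg/dL
print(is_acceptable(4.0, 3.0, 12.6))    # True
print(is_acceptable(10.0, 5.0, 12.6))   # False
```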
Laboratory directors and staff rely on various resources to
determine what constitutes a medically allowable error. One such resource is
the Clinical Laboratory Improvement Amendments (CLIA) of 1988, which provides
guidelines and acceptable performance criteria for analytes in clinical
laboratory testing.
By adhering to the established TEa and striving to keep
total errors within acceptable limits, laboratories can ensure the reliability
and usefulness of analytical measurements in medical decision-making processes.






