Quality Assurance in the Clinical Laboratory, Part 02

Instrument Selection
Selecting an appropriate instrument and validating a testing method are one-time processes, while quality control and proficiency testing are ongoing procedures.
Method validation, in this context, involves verifying the manufacturer's performance specifications through studies performed in the clinical laboratory.
Before purchasing an instrument for the clinical laboratory, several factors are considered, such as cost, required laboratory space, and necessary staff training. Specific instrument requirements are also taken into account, including the sample throughput rate, turnaround time, sample volume required for testing, specimen and tube type, and the available test menu. Once a shortlist of candidate instruments has been drawn up, the laboratory team may contact or visit a clinical lab where the instruments are already in use to observe their day-to-day performance before making a purchase decision.
Method validation
Overview
Under the Clinical Laboratory Improvement Amendments (CLIA), laboratories are required to validate all moderate- and high-complexity methods before reporting patient results. To validate a method, four analytical performance characteristics must be established:
- Precision refers to the level of consistency in producing the same result repeatedly.
- Accuracy is the agreement between the measured result and the true value of the analyte in the sample.
- The Reportable Range (RR) is the concentration range of the analyte where the instrument signal is proportional to the analyte concentration, and results within this range can be reported reliably.
- The Reference Interval (RI) is the range of the analyte concentration observed in a healthy population and is used to interpret patient results.
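To make the first two criteria concrete, precision and accuracy can be estimated from replicate measurements of a control material. The following sketch uses invented glucose QC values and an invented target value purely for illustration:

```python
import statistics

# Hypothetical replicate measurements of a glucose control (mg/dL)
replicates = [98.2, 101.5, 99.8, 100.4, 97.9, 102.1, 100.0, 99.3]
target = 100.0  # assigned value of the control material

# Precision: consistency of repeated measurements, commonly
# reported as the standard deviation (SD) and coefficient of
# variation (CV%)
mean = statistics.mean(replicates)
sd = statistics.stdev(replicates)
cv_percent = 100 * sd / mean

# Accuracy: agreement between the mean result and the true
# (target) value, expressed here as a simple bias
bias = mean - target

print(f"mean = {mean:.2f} mg/dL")
print(f"SD = {sd:.2f}, CV = {cv_percent:.2f}%")
print(f"bias = {bias:+.2f} mg/dL")
```

A lower CV% indicates better precision, while a bias close to zero indicates good agreement with the assigned value.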
Errors
Laboratory methods inherently contain some level of error. Method validation helps identify the type, extent, and medical relevance of these errors: a method is suitable only if the measurement errors detected are too small to affect the medical usefulness of the test. Most errors are minor and do not compromise the clinical utility of the method, but validation is the only way to establish whether the errors in a method are large enough to matter.

Analytical errors affect both the accuracy and the precision of a test method. Because evaluating a method's accuracy and precision is an integral part of method validation, carrying out validation allows us to identify any analytical errors present in the method.
There are two types of analytical error:
- Random errors
- Systematic errors
Random errors
Random Error (RE) is an unpredictable and inconsistent error that occurs in a laboratory method and does not impact all samples uniformly. Temperature fluctuations are an example of a source of RE that is impossible to forecast and occurs frequently in laboratories. RE can cause a measured result to either increase or decrease in value, and is thus normally distributed on both sides of the mean value.
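This two-sided scatter around the mean is easy to see in a simulation. The sketch below (all numbers are hypothetical) adds normally distributed random error to a fixed true value and counts how many results fall on each side of it:

```python
import random
import statistics

random.seed(42)  # reproducible illustration

true_value = 100.0  # hypothetical true analyte concentration
re_sd = 2.0         # hypothetical spread of the random error

# Simulate 1000 measurements affected only by random error
results = [true_value + random.gauss(0, re_sd) for _ in range(1000)]

above = sum(r > true_value for r in results)
below = sum(r < true_value for r in results)

# Random error scatters results on BOTH sides of the true value,
# so the mean stays close to it even though individual results vary
print(f"mean = {statistics.mean(results):.2f}")
print(f"above/below true value: {above}/{below}")
```

Roughly half the results land above the true value and half below, which is why RE widens the spread of results without shifting their average.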
Systematic Error
Systematic Error (SE) is a consistent error in a laboratory method that arises from a predictable source and affects all samples uniformly. Examples of SE include:
- Bad calibrators
- Poorly calibrated pipettes
- Bad reagents
- Changes in the incubation temperature of an assay
SE shifts the measured result in a predictable direction, moving results toward one side of the mean, and therefore affects the accuracy of the method. Its impact is measured as method bias: the difference between results obtained with the local method or instrument and those obtained with a reference method or instrument. SE produces a characteristic bias in the test results, and once its source is identified it can usually be eliminated.
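A simple way to estimate method bias is to run the same samples on both the local method and a reference method and average the paired differences. The values below are invented for illustration:

```python
import statistics

# Hypothetical paired results (same samples run on both methods)
reference = [4.1, 5.6, 6.2, 7.8, 9.0, 10.3]   # reference method
local = [4.4, 5.9, 6.4, 8.1, 9.2, 10.7]       # local method under validation

# Method bias: mean difference between local and reference results.
# A consistent positive difference points to a systematic error
# shifting all results in the same direction.
differences = [l - r for l, r in zip(local, reference)]
bias = statistics.mean(differences)

print(f"estimated bias = {bias:+.2f}")
```

Here every local result is slightly higher than its reference counterpart, the hallmark of a systematic error rather than random scatter.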
Detection of Errors
(continued...)