Test Verification & Validation
In the highly regulated environment of the Blood Bank, a laboratory cannot simply purchase a new test kit or analyzer and begin using it on patients immediately. Before any new method, instrument, or reagent is placed into service, it must undergo a rigorous evaluation to prove that it performs as expected and produces accurate results. This process is divided into Validation (proving it works in theory/design) and Verification (proving it works in your hands).
Definitions: Validation vs. Verification
While often used interchangeably, these terms have distinct regulatory meanings under CLIA (Clinical Laboratory Improvement Amendments) and CAP (College of American Pathologists).
Validation (The “Design” Phase)
- Definition: The process of establishing the performance specifications of a test method. It answers the question: “Does this test measure what it is supposed to measure, and how well?”
- Who Does It?
- FDA-Approved Tests: The Manufacturer performs the validation. They run thousands of samples to determine accuracy, precision, and reference ranges before selling it
- Laboratory-Developed Tests (LDTs): If the lab modifies an FDA-approved test (e.g., using a different sample type or diluting a reagent differently), the Laboratory acts as the manufacturer and must perform the full validation study itself
Verification (The “Implementation” Phase)
- Definition: The one-time process performed by the laboratory to confirm that they can replicate the manufacturer’s performance specifications in their own facility. It answers the question: “Does this test work correctly in OUR lab, with OUR staff, on OUR equipment?”
- Who Does It?: The Laboratory performs verification on every new FDA-approved assay or instrument before reporting patient results
The Verification Process (Performance Specifications)
When bringing on a new analyzer (e.g., switching from Manual Tube to Automated Gel), CLIA requires the verification of four key performance characteristics:
1. Accuracy
- Goal: To determine how close the test result is to the “true” value
- Method: Correlation Study
- Test a set of samples (typically 20-50) on the new method and compare the results to an established “Gold Standard” (the old method or a reference lab method)
- Example: Run 20 known Antibody-Positive samples and 20 known Negative samples on the new analyzer
- Analysis: Calculate Concordance. Ideally, the new method should agree 100% with the old method. If discrepancies exist (e.g., Old = Negative, New = Positive), they must be investigated to determine whether the new method is simply more sensitive or whether the new result is a false positive
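The concordance arithmetic above can be sketched in a few lines of Python. The `concordance` helper and the sample lists are illustrative placeholders, not part of any vendor or regulatory software:

```python
# Hypothetical correlation-study worksheet: compares qualitative results
# (Pos/Neg) from the new method against the established "gold standard"
# and reports percent concordance plus any discrepant samples.
def concordance(old_results, new_results):
    """Return (% agreement, list of discrepant sample indices)."""
    assert len(old_results) == len(new_results)
    discrepant = [i for i, (old, new) in enumerate(zip(old_results, new_results))
                  if old != new]
    pct = 100.0 * (len(old_results) - len(discrepant)) / len(old_results)
    return pct, discrepant

# 20 known positives and 20 known negatives, as in the example above;
# one simulated discrepancy that would have to be investigated.
old = ["Pos"] * 20 + ["Neg"] * 20
new = ["Pos"] * 20 + ["Neg"] * 19 + ["Pos"]

pct, flags = concordance(old, new)
print(f"Concordance: {pct:.1f}% ({len(flags)} discrepant sample(s))")  # 97.5%, 1 discrepant
```

Each discrepant index points back to a specific specimen, so the study worksheet can document the investigation of every disagreement rather than reporting only a summary percentage.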
2. Precision (Reproducibility)
- Goal: To determine if the test gives the same result when repeated multiple times
- Method:
- Intra-run Precision: Run the same sample 10 times within a single run; all 10 results should be identical
- Inter-run Precision: Run the same QC material once a day for 10-20 days
- Blood Bank Context: Since results are qualitative (Pos/Neg) or semi-quantitative (1+ to 4+), precision means getting the same reaction grade consistently. A sample shouldn’t be 4+ today and 1+ tomorrow
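Because Blood Bank results are graded on an ordinal scale rather than measured numerically, a precision check reduces to comparing reaction grades across replicates. The sketch below assumes a common (but not universal) acceptance rule that replicates must agree within one grade; the grade scale and rule are illustrative:

```python
# Ordinal reaction-grade scale, weakest to strongest (w+ = weak positive).
GRADE_SCALE = ["0", "w+", "1+", "2+", "3+", "4+"]

def within_one_grade(replicates):
    """True if all replicate grades fall within one step of each other."""
    ranks = [GRADE_SCALE.index(g) for g in replicates]
    return max(ranks) - min(ranks) <= 1

# Intra-run precision: same sample tested 10 times in one run.
run = ["3+", "3+", "2+", "3+", "3+", "3+", "3+", "2+", "3+", "3+"]
print("Precision acceptable:", within_one_grade(run))  # True: spread is one grade
```

A sample that graded 4+ in one replicate and 1+ in another would fail this rule, which is exactly the "4+ today, 1+ tomorrow" failure mode described above.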
3. Analytical Sensitivity (Lower Limit of Detection)
- Goal: To determine the smallest amount of antibody/antigen the test can detect
- Method: Dilution Studies
- Take a weak antibody (e.g., Anti-D) and perform serial dilutions (1:2, 1:4, 1:8…). Test each dilution on the new system
- Comparison: If the manual tube method detects it out to 1:16 and the new analyzer detects it out to 1:32, the new method is more sensitive
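The endpoint comparison can be expressed as a titer: the reciprocal of the highest dilution still giving a positive reaction. This is a minimal sketch with illustrative data matching the tube-vs.-analyzer example above:

```python
# Dilution-study sketch: titer = reciprocal of the last reactive dilution.
def titer(results):
    """results: list of (reciprocal_dilution, reactive?) pairs."""
    reactive = [d for d, positive in results if positive]
    return max(reactive) if reactive else 0

# Serial dilutions of a weak Anti-D tested by both methods (illustrative data).
tube = [(2, True), (4, True), (8, True), (16, True), (32, False)]
gel  = [(2, True), (4, True), (8, True), (16, True), (32, True), (64, False)]

print("Tube titer:", titer(tube))  # 16
print("Gel titer:",  titer(gel))   # 32 -> one dilution more sensitive
```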
4. Analytical Specificity (Interfering Substances)
- Goal: To ensure the test targets only the specific analyte and is not fooled by interfering substances
- Method: Test samples containing common interferences:
- Hemolysis: Does a lysed sample cause a false positive?
- Lipemia: Does fatty blood block the camera?
- Icterus: Does high bilirubin affect the reading?
- Outcome: This establishes the “Specimen Rejection Criteria.” If the verification shows that lipemia causes indeterminate results, the procedure manual must state: “Reject lipemic specimens.”
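Rejection criteria derived from a specificity study can be encoded as simple threshold rules. The index names and cutoff values below are hypothetical placeholders, not vendor or regulatory limits:

```python
# Hypothetical specimen rejection rules derived from a specificity study.
# Each threshold represents the interference level at which verification
# showed unreliable results (values are illustrative only).
REJECT_RULES = {
    "hemolysis_index": 100,  # free hemoglobin can mimic a positive reaction
    "lipemia_index":   150,  # lipid turbidity obscures the optical reading
    "icterus_index":   200,  # bilirubin interferes with interpretation
}

def screen_specimen(indices):
    """Return list of rejection reasons (empty list = acceptable specimen)."""
    return [name for name, limit in REJECT_RULES.items()
            if indices.get(name, 0) >= limit]

print(screen_specimen({"hemolysis_index": 20, "lipemia_index": 400}))  # ['lipemia_index']
```

Encoding the criteria this way mirrors the procedure-manual outcome: a specimen flagged by any rule is rejected before testing, exactly as the written policy ("Reject lipemic specimens") requires.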
Reference Intervals (Normal Ranges)
For quantitative tests (rare in Blood Bank, but applicable to things like pH of Platelets or Fetal Screen %HbF), the lab must verify the “Normal Range.”
- Transfusion Medicine: Most tests are qualitative (Compatible/Incompatible). The “Reference Interval” is essentially “Negative.”
- Verification: Confirming that “normal” healthy donors consistently test negative
The Validation Plan & Report
This process must be formally documented to satisfy inspectors.
- The Plan: A written document approved by the Medical Director before testing begins. It outlines: “We will test 50 samples; we define acceptable performance as 95% concordance.”
- The Data: Raw printouts of every test run during the study
- The Summary/Approval: A final report signed by the Medical Director stating: “The verification met criteria. This test is approved for patient use as of [Date].”
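The pass/fail decision written into the plan is a simple threshold check. This sketch uses the 50-sample, 95%-concordance criterion from the example above; the helper name is illustrative:

```python
# Acceptance-criterion check as stated in the written plan:
# "We will test 50 samples; acceptable performance is 95% concordance."
def meets_criteria(n_tested, n_concordant, threshold_pct=95.0):
    """Return (pass/fail, observed concordance %)."""
    observed = 100.0 * n_concordant / n_tested
    return observed >= threshold_pct, observed

ok, pct = meets_criteria(50, 49)
print(f"{pct:.0f}% concordance -> {'APPROVED' if ok else 'FAILED'}")  # 98% -> APPROVED
```

The point of fixing the threshold in the plan, before any testing, is that the Medical Director's final approval is a comparison against a pre-committed number rather than a judgment made after seeing the data.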
Ongoing Verification (Calibrations & QC)
Verification is not just a one-time event at purchase; aspects of it continue throughout the life of the instrument.
- Calibration Verification: Every 6 months (for quantitative tests), the lab must prove the instrument is still accurate across the reportable range
- Lot-to-Lot Verification: Every time a new lot number of reagent (e.g., Anti-A) arrives, it must be verified against the old lot (with controls) to ensure the new batch is not defective before it is used on patients
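A lot-to-lot check reduces to confirming that the new lot matches the old lot's expected reactions on a panel of known controls. The control names and results below are illustrative:

```python
# Lot-to-lot verification sketch: a new reagent lot must reproduce the
# expected reactions on known controls before it is released for patient use.
def lot_acceptable(expected, new_lot):
    """expected/new_lot: dicts of control name -> 'Pos'/'Neg'."""
    failures = [c for c in expected if new_lot.get(c) != expected[c]]
    return len(failures) == 0, failures

# Illustrative QC panel for a new lot of Anti-A reagent:
# A1 cells must react (Pos); B cells must not (Neg).
expected = {"A1 cells": "Pos", "B cells": "Neg"}
new_lot  = {"A1 cells": "Pos", "B cells": "Neg"}

ok, fails = lot_acceptable(expected, new_lot)
print("New Anti-A lot released:", ok)  # True when every control matches
```

Any control that fails blocks release of the lot, so a defective batch is caught by the QC panel rather than by a patient result.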