Antibody validation standards would help ensure the reproducibility of research studies and improve the consistency of medical laboratory test results
As science and industry get better at measuring things and assessing quality, the accepted standard often comes into question. This seems to be happening with antibodies, the most common reagents used in clinical laboratory diagnostic tests and medical research. In many cases, the end result is that companies and their suppliers must use new technologies and quality methods to revise the “old way” and create products of measurably better quality.
The techniques currently used to validate antibodies are the topic of a recently published scientific paper. As the authors of a paper published in the March 2010 issue of BioTechniques pointed out, antibody validation and standardization ensure study reproducibility, which is critical to accuracy. And yet, no standard guidelines define how these important biological tools should be validated prior to use.
Thus, researchers participating in a recent webinar presented by The Scientist expressed concern that, without improved antibody validation and standardization, the accuracy of published research is in question and diagnostic test results, such as those produced by medical laboratories, will continue to be inconsistent.
Shocking Results from a Survey of 53 Published Studies
Webinar participant C. Glenn Begley, Ph.D., Chief Scientific Officer and Senior Vice President of Research & Development at TetraLogic Pharmaceuticals, noted that a reproducibility study, which he led in 2012 while Vice President and Global Head of Hematology and Oncology Research at Amgen, Inc., examined 53 published research studies and found that only six had reproducible scientific findings. “Even knowing the limitations of preclinical research, this was a shocking result,” wrote Begley’s Amgen group in a commentary in the March 29, 2012, issue of the journal Nature.
This study identified “red flags for suspect research.” The most common of these red flags were:
• non-blinded studies;
• cherry-picked results not representative of all experiments;
• lack of controls; and,
• use of non-validated reagents.
In several cases, analysis determined that the failure to reproduce experimental data involved nonspecific or poorly validated antibodies. “The papers that we were unable to reproduce had a number of things in common,” said Begley.
“We have a systemic problem: our system tolerates and even encourages these behaviors,” he warned, noting that results of novel discoveries in top-tier journals drive promotions, grants, and the stature and respect of scientists. But that pressure to publish may result in inclusion of faulty data. “We get what we incentivize,” Begley stressed, pointing out that publication bias is “often unspoken and unacknowledged.”
Do Clinical Pathology Laboratories Produce Inconsistent Results?
Another webinar participant was David Rimm, M.D., Ph.D., who is a Professor of Pathology and Medicine and Director of Pathology Tissue Services and Translational Pathology at Yale School of Medicine. Rimm noted that non-validated antibodies used in companion diagnostic tests to predict a patient’s response to a specific drug might provide faulty results.
He cited the example of a survey of 70 clinical laboratories by the College of American Pathologists (CAP). Begun in 2001, this survey was intended to measure the success of an antibody-based diagnostic test for the EGFR (epidermal growth factor receptor) protein in predicting a patient’s response to the cancer drug Erbitux (cetuximab). The survey found that in 2004 only four in 10 cases had 90% consistent results, and in one case in 2005, half the clinical pathology laboratories performing the test reported it as positive and the other half reported it as negative.
Rimm and colleagues in his lab analyzed five antibodies used across the 70 labs. Almost none of them produced comparable results on tissue microarrays or quantified levels of EGFR expression, according to an article in The Scientist that highlighted findings from the webinar.
Rimm attributed the differences to three possibilities:
• antibodies binding to different areas of an antigen;
• conditions under which antigens are retrieved; or,
• differences in lab protocols.
He suggested testing antibodies for:
• sensitivity;
• specificity;
• reproducibility; and,
• function in formalin-fixed, paraffin-embedded tissues.
Rimm suggested that such testing could improve the consistency of results. “If all of these tests are met, then you can have some degree of confidence that the epitope you’re detecting in an immunohistochemistry or immunofluorescence assay is likely to be representative of the actual scientific facts.”
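To see how the first three of those criteria might be tabulated in practice, the short Python sketch below scores one antibody lot against characterized positive and negative control samples. The function names, counts, and the 15% variation threshold are illustrative assumptions, not part of Rimm’s protocol.

```python
# Illustrative sketch (not Rimm's published protocol): scoring one
# antibody lot against characterized positive and negative controls.

def sensitivity(true_pos: int, false_neg: int) -> float:
    """Fraction of known-positive controls the antibody detected."""
    return true_pos / (true_pos + false_neg)

def specificity(true_neg: int, false_pos: int) -> float:
    """Fraction of known-negative controls the antibody correctly ignored."""
    return true_neg / (true_neg + false_pos)

def is_reproducible(scores: list, max_cv: float = 0.15) -> bool:
    """Treat repeat runs as reproducible when their coefficient of
    variation stays under an assumed 15% threshold."""
    mean = sum(scores) / len(scores)
    sd = (sum((s - mean) ** 2 for s in scores) / (len(scores) - 1)) ** 0.5
    return sd / mean <= max_cv

# Hypothetical validation run on control tissue microarrays:
print(f"Sensitivity: {sensitivity(true_pos=46, false_neg=4):.2f}")   # 0.92
print(f"Specificity: {specificity(true_neg=48, false_pos=2):.2f}")   # 0.96
print(f"Reproducible: {is_reproducible([0.91, 0.89, 0.93, 0.90])}")  # True
```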
Rimm also noted that commercial suppliers frequently sell antibodies without detailed information about the correct dilution for a particular test. Additionally, different lots of the same antibody from the same vendor can produce remarkably different results, he said.
“Validation may be lab-specific and experiment-specific with respect to controls; however, wouldn’t it be nice if we could buy reagents that we knew were validated?” Rimm continued. “The hope is that there might be, in the future, some sort of standardization agency that can give a certain level of validation. You’d still need to do it in your own lab, but you’d have a high likelihood of at least having good starting material,” he added.
Even a Robot Cannot Track Down Antibodies
A few years ago the National Institutes of Health (NIH) initiated a project to determine if antibodies mentioned in a study could be tracked down electronically. This project was assigned to a team of researchers affiliated with the Neuroscience Information Framework (NIF), a web-based neurosciences resource that contains data, materials and research tools. The idea was to build a robot search engine to perform this task, noted webinar participant Anita Bandrowski, Ph.D., a Neurophysiologist at University of California San Diego Health Sciences who was part of this team.
By manually reviewing one volume of the Journal of Neuroscience, which included eight studies that used antibody methods, the NIF team realized this would be an impossible task, because few of the citations contained clone or catalog numbers for the antibodies used in the study. And while most listed the antibody suppliers and locations, none included the lot numbers. So, the system didn’t get built, Bandrowski said, because “if we can’t do it, neither can a robot.”
As a result, several researchers, funding institutions, and journals launched a pilot project called The Antibody Registry. Their goal is to assign unique identifiers and catalog every single antibody product offered by every supplier. The project is consolidating different identifiers for the same antibody and requesting that authors and journals cite each unique antibody identifier in papers.
“Resources should be identifiable in such a way that they are machine readable, available outside of the pay-wall, and uniform across publishers and journals,” Bandrowski emphasized. “We really need to move forward as a discipline and make this better.”
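As a rough illustration of what a machine-readable antibody citation could enable, the Python sketch below scans a methods-section excerpt for Antibody Registry-style identifiers (the registry’s RRID syntax: “RRID:AB_” followed by digits). The sample text and identifier numbers are hypothetical.

```python
import re

# Illustrative sketch: extract Antibody Registry-style identifiers
# (RRID:AB_<digits>) from a methods section -- the kind of machine-
# readable citation the NIF "robot" would have needed.

RRID_PATTERN = re.compile(r"RRID:\s?(AB_\d+)")

# Hypothetical methods-section text with made-up identifier numbers:
methods_text = (
    "Sections were stained with a mouse anti-EGFR antibody "
    "(RRID:AB_1234567) and a rabbit polyclonal (RRID: AB_7654321)."
)

for match in RRID_PATTERN.finditer(methods_text):
    print(match.group(1))  # AB_1234567, then AB_7654321
```

With identifiers in this form cited consistently across journals, a script could resolve every antibody in a paper to a single registry record, regardless of which vendor sold it.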
In vitro diagnostic companies that manufacture and sell medical laboratory test kits are familiar with most of the problems and challenges described by the various researchers in this intelligence briefing. Similarly, most pathologists and clinical laboratory managers performing immunoassay testing recognize that reagent lot variability is an ongoing issue that makes it tough to ensure that the quality and reproducibility of lab test results remain consistent over time. This is one reason why different groups are advocating improved quality and a uniform system for identifying and tracking antibodies.
— Patricia Kirk
Related Information:
An Urgent Need for Validating and Characterizing Antibodies
Drug Development: Raise Standards for Preclinical Cancer Research