News, Analysis, Trends, Management Innovations for
Clinical Laboratories and Pathology Groups

Hosted by Robert Michel


AMA Issues Proposal to Help Circumvent False and Misleading Information When Using Artificial Intelligence in Medicine

Pathologists and clinical laboratory managers will want to stay alert to the concerns voiced by tech experts about the need to exercise caution when using generative AI to assist medical diagnoses

Even as many companies push to introduce GPT-powered (generative pre-trained transformer) solutions into various healthcare services, the American Medical Association (AMA), the World Health Organization (WHO), and healthcare professionals alike are urging caution regarding the use of AI-powered technologies in the practice of medicine.

In June, the AMA House of Delegates adopted a proposal introduced by the American Society for Surgery of the Hand (ASSH) and the American Association for Hand Surgery (AAHS) titled, “Regulating Misleading AI Generated Advice to Patients.” The proposal is intended to help protect patients from false and misleading medical information derived from artificial intelligence (AI) tools such as GPTs.

GPTs are a core component of generative artificial intelligence, which creates text, images, and other media using generative models. These neural network models learn the patterns and structure of input data and then generate new content with similar characteristics.

Through the proposal, the AMA has developed principles and recommendations addressing the benefits and potentially harmful consequences of relying on AI-generated medical advice and content to inform diagnoses.

Alexander Ding, MD

“We’re trying to look around the corner for our patients to understand the promise and limitations of AI,” said Alexander Ding, MD (above), AMA Trustee and Associate Vice President for Physician Strategy and Medical Affairs at Humana, in a press release. “There is a lot of uncertainty about the direction and regulatory framework for this use of AI that has found its way into the day-to-day practice of medicine.” Clinical laboratory professionals following advances in AI may want to remain informed on the use of generative AI solutions in healthcare. (Photo copyright: American Medical Association.)

Preventing Spread of Mis/Disinformation

GPTs are “a family of neural network models that uses the transformer architecture and is a key advancement in artificial intelligence (AI) powering generative AI applications such as ChatGPT,” according to Amazon Web Services.

In addition to creating human-like text and content, GPTs have the ability to answer questions in a conversational manner. They can analyze language queries and then predict high-quality responses based on their understanding of the language. GPTs can perform this task after being trained with billions of parameters on massive language datasets and then generate long responses, not just the next word in a sequence. 
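The core mechanic behind that text generation can be illustrated with a toy sketch. The tiny lookup table of transition probabilities below stands in for the billions of learned parameters in a real GPT; every token and probability here is invented for illustration, and real models sample from a distribution over an entire vocabulary rather than choosing greedily from a handful of options.

```python
# Toy illustration of next-token generation. The TRANSITIONS table is a
# hypothetical stand-in for a trained model's learned parameters; all
# tokens and probabilities are invented for this sketch.
TRANSITIONS = {
    "elevated": {"glucose": 0.6, "enzymes": 0.4},
    "glucose":  {"suggests": 1.0},
    "enzymes":  {"suggest": 1.0},
    "suggests": {"diabetes": 0.7, "prediabetes": 0.3},
    "suggest":  {"hepatitis": 1.0},
}

def generate(prompt_token, max_tokens=3):
    """Greedy decoding: repeatedly append the most probable next token."""
    tokens = [prompt_token]
    for _ in range(max_tokens):
        options = TRANSITIONS.get(tokens[-1])
        if not options:
            break  # no continuation known for this token
        # Greedy choice; real GPTs sample from the distribution instead.
        tokens.append(max(options, key=options.get))
    return " ".join(tokens)

print(generate("elevated"))  # prints: elevated glucose suggests diabetes
```

The loop makes the point in the paragraph above concrete: a long response is simply the next-word prediction step repeated many times over.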

“AI holds the promise of transforming medicine,” said diagnostic and interventional radiologist Alexander Ding, MD, AMA Trustee and Associate Vice President for Physician Strategy and Medical Affairs at Humana, in an AMA press release.

“We don’t want to be chasing technology. Rather, as scientists, we want to use our expertise to structure guidelines, and guardrails to prevent unintended consequences, such as baking in bias and widening disparities, dissemination of incorrect medical advice, or spread of misinformation or disinformation,” he added.

The AMA plans to work with the federal government and other appropriate organizations to advise policymakers on the optimal ways to use AI in healthcare to protect patients from misleading AI-generated data that may or may not be validated, accurate, or relevant.

Advantages and Risks of AI in Medicine

The AMA’s proposal was prompted by AMA-affiliated organizations that raised concerns about the lack of regulatory oversight for GPTs. They are encouraging healthcare professionals to educate patients about the advantages and risks of AI in medicine. 

“AI took a huge leap with large language model tool[s] and generative models, so all of the work that has been done up to this point in terms of regulatory and governance frameworks will have to be treated or at least reviewed with this new lens,” Sha Edathumparampil, Corporate Vice President, Digital and Data, Baptist Health South Florida, told Healthcare Brew.

According to the AMA press release, “the current limitations create potential risks for physicians and patients and should be used with appropriate caution at this time. AI-generated fabrications, errors, or inaccuracies can harm patients, and physicians need to be acutely aware of these risks and added liability before they rely on unregulated machine-learning algorithms and tools.”

The press release also notes that the organization will propose state and federal regulations for AI tools at next year’s annual meeting in Chicago.

In a July AMA podcast, AMA’s President, Jesse Ehrenfeld, MD, stressed that more must be done through regulation and development to bolster trust in these new technologies.

“There’s a lot of discomfort around the use of these tools among Americans with the idea of AI being used in their own healthcare,” Ehrenfeld said. “There was a 2023 Pew Research Center poll [that said] 60% of Americans would feel uncomfortable if their own healthcare provider relied on AI to do things like diagnose disease or recommend a treatment.”

WHO Issues Cautions about Use of AI in Healthcare

In May, the World Health Organization (WHO) issued a statement advocating caution when implementing generative AI GPT models in healthcare.

A current example of such a GPT is ChatGPT, a chatbot built on a large language model (LLM) that enables users to refine and steer conversations toward a desired length, format, style, level of detail, and language. Organizations across industries now use GPT models for customer question-and-answer bots, text summarization, content generation, and search features. 

“Precipitous adoption of untested systems could lead to errors by healthcare workers, cause harm to patients, erode trust in AI, and thereby undermine (or delay) the potential long-term benefits and uses of such technologies around the world,” commented WHO in the statement.

WHO’s concerns regarding the need for prudence and oversight in the use of AI technologies include:

  • Data used to train AI may be biased, which could pose risks to health, equity, and inclusiveness.
  • LLMs generate responses that can appear authoritative and plausible, but which may be completely incorrect or contain serious errors.
  • LLMs may be trained on data for which consent may not have been given.
  • LLMs may not be able to protect sensitive data that is provided to an application to generate a response.
  • LLMs can be misused to generate and disseminate highly convincing disinformation in the form of text, audio, or video that may be difficult for people to differentiate from reliable health content.

Tech Experts Recommend Caution

Generative AI will continue to evolve. Therefore, clinical laboratory professionals may want to keep a keen eye on advances in AI technology and GPTs in healthcare diagnosis.

“While generative AI holds tremendous potential to transform various industries, it also presents significant challenges and risks that should not be ignored,” wrote Edathumparampil in an article he penned for CXOTECH Magazine. “With the right strategy and approach, generative AI can be a powerful tool for innovation and differentiation, helping businesses to stay ahead of the competition and better serve their customers.”

GPTs may eventually be a boon to healthcare providers, including clinical laboratories and pathology groups. But for the moment, caution is recommended.

JP Schlingman

Related Information:

AMA Adopts Proposal to Protect Patients from False and Misleading AI-generated Medical Advice

Regulating Misleading AI Generated Advice to Patients

AMA to Develop Recommendations for Augmented Intelligence

What is GPT?

60% of Americans Would Be Uncomfortable with Provider Relying on AI in Their Own Health Care

Navigating the Risks of Generative AI: A Guide for Businesses

Contributed: Top 10 Use Cases for AI in Healthcare

Anatomic Pathology at the Tipping Point? The Economic Case for Adopting Digital Technology and AI Applications Now

ChatGPT, AI in Healthcare and the future of Medicine with AMA President Jesse Ehrenfeld, MD, MPH

What is Generative AI? Everything You Need to Know

WHO Calls for Safe and Ethical AI for Health

GPT-3

Researchers Create Artificial Intelligence Tool That Accurately Predicts Outcomes for 14 Types of Cancer

Proof-of-concept study ‘highlights that using AI to integrate different types of clinically informed data to predict disease outcomes is feasible’ researchers say

Artificial intelligence (AI) and machine learning are—in stepwise fashion—making progress in demonstrating value in the world of pathology diagnostics. But human anatomic pathologists are generally required for a prognosis. Now, in a proof-of-concept study, researchers at Brigham and Women’s Hospital in Boston have developed a method that uses AI models to integrate multiple types of data from disparate sources to accurately predict patient outcomes for 14 different types of cancer.

The process also uncovered “the predictive bases of features used to predict patient risk—a property that could be used to uncover new biomarkers,” according to Genetic Engineering and Biotechnology News (GEN).

Should these research findings become clinically viable, anatomic pathologists may gain powerful new AI tools specifically designed to help them predict what type of outcome a cancer patient can expect.

The Brigham scientists published their findings in the journal Cancer Cell, titled, “Pan-cancer Integrative Histology-genomic Analysis via Multimodal Deep Learning.”

Faisal Mahmood, PhD

“Experts analyze many pieces of evidence to predict how well a patient may do. These early examinations become the basis of making decisions about enrolling in a clinical trial or specific treatment regimens,” said Faisal Mahmood, PhD (above), in a Brigham press release. “But that means that this multimodal prediction happens at the level of the expert. We’re trying to address the problem computationally,” he added. Should they be proven clinically viable through additional studies, these findings could lead to useful tools that help anatomic pathologists and clinical laboratory scientists more accurately predict the outcomes a cancer patient may experience. (Photo copyright: Harvard.)

AI-based Prognostics in Pathology and Clinical Laboratory Medicine

The team at Brigham constructed their AI model using The Cancer Genome Atlas (TCGA), a publicly available resource which contains data on many types of cancer. They then created a deep learning-based algorithm that examines information from different data sources.

Pathologists traditionally depend on several distinct sources of data, such as pathology images, genomic sequencing, and patient history to diagnose various cancers and help develop prognoses.

For their research, Mahmood and his colleagues trained and validated their AI algorithm on 6,592 H&E (hematoxylin and eosin) whole slide images (WSIs) from 5,720 cancer patients. Molecular profile features, including mutation status, copy-number variation, and RNA sequencing expression, were also fed into the model to measure and explain relative risk of cancer death. 

The scientists “evaluated the model’s efficacy by feeding it data sets from 14 cancer types as well as patient histology and genomic data. Results demonstrated that the models yielded more accurate patient outcome predictions than those incorporating only single sources of information,” states a Brigham press release.
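The general shape of such a multimodal model can be sketched in a few lines. To be clear, this is not the Brigham team's architecture; it is a minimal, hypothetical late-fusion example in which a randomly generated histology embedding and a molecular feature vector are concatenated and passed through a linear risk head. All dimensions and weights below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for learned encoders: a pooled embedding of whole-slide image
# patches, and a vector of molecular features (mutation status, copy-number
# variation, RNA-seq expression). Dimensions are arbitrary for this sketch.
histology_embedding = rng.normal(size=32)   # hypothetical WSI features
molecular_features  = rng.normal(size=16)   # hypothetical scaled omics values

# Late fusion: concatenate the two modalities, then apply a linear risk head.
fused = np.concatenate([histology_embedding, molecular_features])
w = rng.normal(size=fused.size)             # would be learned during training
risk_logit = float(fused @ w)
risk = 1.0 / (1.0 + np.exp(-risk_logit))    # squash to a (0, 1) risk score

print(f"relative risk score: {risk:.3f}")
```

The intuition matches the study's finding: a head that sees both modalities at once can weigh histology evidence against molecular evidence, which a single-source model cannot do.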

“This work sets the stage for larger healthcare AI studies that combine data from multiple sources,” said Faisal Mahmood, PhD, Associate Professor, Division of Computational Pathology, Brigham and Women’s Hospital; and Associate Member, Cancer Program, Broad Institute of MIT and Harvard, in the press release. “In a broader sense, our findings emphasize a need for building computational pathology prognostic models with much larger datasets and downstream clinical trials to establish utility.”

Future Prognostics Based on Multiple Data Sources

The Brigham researchers also generated a research tool they dubbed the Pathology-omics Research Platform for Integrative Survival Estimation (PORPOISE). This tool serves as an interactive platform that can yield prognostic markers detected by the algorithm for thousands of patients across various cancer types.  

The researchers believe their algorithm reveals another role for AI technology in medical care, but that more research is needed before their model can be implemented clinically. Larger data sets will have to be examined and the researchers plan to use more types of patient information, such as radiology scans, family histories, and electronic medical records in future tests of their AI technology.

“Future work will focus on developing more focused prognostic models by curating larger multimodal datasets for individual disease models, adapting models to large independent multimodal test cohorts, and using multimodal deep learning for predicting response and resistance to treatment,” the Cancer Cell paper states.

“As research advances in sequencing technologies, such as single-cell RNA-seq, mass cytometry, and spatial transcriptomics, these technologies continue to mature and gain clinical penetrance, in combination with whole-slide imaging, and our approach to understanding molecular biology will become increasingly spatially resolved and multimodal,” the researchers concluded.  

Anatomic pathologists may find the Brigham and Women’s Hospital research team’s findings intriguing. An AI tool that integrates data from disparate sources, analyzes that information, and provides useful insights, could one day help them provide more accurate cancer prognoses and improve the care of their patients.   

JP Schlingman

Related Information:

AI Integrates Multiple Data Types to Predict Cancer Outcomes

Pan-cancer Integrative Histology-genomic Analysis via Multimodal Deep Learning

New AI Technology Integrates Multiple Data Types to Predict Cancer Outcomes

Artificial Intelligence in Digital Pathology Developments Lean Toward Practical Tools

Florida Hospital Utilizes Machine Learning Artificial Intelligence Platform to Reduce Clinical Variation in Its Healthcare, with Implications for Medical Laboratories

Artificial Intelligence and Computational Pathology

Artificial Intelligence in Digital Pathology Developments Lean Toward Practical Tools

Patient care gaps can be addressed by machine learning algorithms, Labcorp vice president explains

Is there hype for artificial intelligence (AI)? As it turns out, yes, there is. Keynote speakers acknowledged as much at the 2022 Executive War College Conference on Laboratory and Pathology Management. Nevertheless, leading clinical laboratory companies are taking real steps with the technology that showcase AI developments in digital pathology and patient care.

Labcorp, the commercial laboratory giant headquartered in Burlington, N.C., has billions of diagnostic test results archived. It takes samplings of those results and runs them through a machine learning algorithm that compares the data against a condition of interest, such as chronic kidney disease (CKD). Machine learning is a subdiscipline of AI.

Based on patterns it identifies, the machine learning algorithm can predict future test results for CKD based on patients’ testing histories, explained Stan Letovsky, PhD, Vice President for AI, Data Sciences, and Bioinformatics at Labcorp. Labcorp has found the accuracy of those predictions to be better than 90%, he added.
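To make the idea concrete, here is a minimal, hypothetical sketch of that kind of prediction: a hand-rolled logistic regression trained on synthetic patient testing histories (invented eGFR trends) to flag likely chronic kidney disease progression. This is illustrative only and does not reflect Labcorp's actual models, features, or data.

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic testing histories: each row holds a patient's last three eGFR
# results (mL/min/1.73m^2). A declining eGFR trend is the classic CKD signal.
# All values are invented for this sketch.
n = 200
healthy = rng.normal(95, 8, size=(n, 3))                  # stable, normal eGFR
declining = np.cumsum(rng.normal(-8, 2, size=(n, 3)), axis=1) + 75
X = np.vstack([healthy, declining])
y = np.concatenate([np.zeros(n), np.ones(n)])             # 1 = CKD progression

# Simple logistic regression fitted with gradient descent.
Xb = np.hstack([X / 100.0, np.ones((2 * n, 1))])          # scale + bias column
w = np.zeros(Xb.shape[1])
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-Xb @ w))                     # predicted risk
    w -= 0.1 * Xb.T @ (p - y) / len(y)                    # gradient step

accuracy = np.mean((1.0 / (1.0 + np.exp(-Xb @ w)) >= 0.5) == y)
print(f"training accuracy: {accuracy:.2f}")
```

On cleanly separated synthetic trends like these the model classifies nearly every patient correctly; the hard part in practice is the messy, irregular real-world testing history, which is where archives of billions of results become valuable.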

In “Keynote Speakers at the Executive War College Describe the Divergent Paths of Clinical Laboratory Testing as New Players Offer Point-of-Care Tests and More Consumers Want Access to Home Tests,” Robert Michel, Editor-in-Chief of Dark Daily, reported on how AI in digital pathology was one of several “powerful economic forces [that] are about to be unleashed on the traditional market for clinical laboratory testing.”

Labcorp also has created an AI-powered dashboard that—once layered over an electronic health record (EHR) system—allows physicians to configure views of an individual patient’s existing health data and add a predictive view based on the machine learning results.

For anatomic pathologists, this type of setup can quickly bring a trove of data into their hands, allowing them to be more efficient with patient diagnoses. The long-term implications of using this technology are significant for pathology groups’ bottom line.

Stan Letovsky, PhD
Stan Letovsky, PhD (above), Vice President for AI, Data Sciences, and Bioinformatics at Labcorp, discussed AI developments in digital pathology during his keynote address at the 2022 Executive War College in New Orleans. “The best thing as a community that we can do for patients and their physicians with AI is to identify care gaps early on,” he said, adding, “If pathologists want to grow and improve their revenue, they have to be more productive.” (Photo copyright: Dark Intelligence Group). 

Mayo Clinic Plans to Digitize 25 Million Glass Slides

In other AI developments, Mayo Clinic in Rochester, Minn., has started a project to digitally scan 25 million tissue samples on glass slides—some more than 100 years old. As part of the initiative, Mayo wants to digitize five million of those slides within three years and put them on the cloud, said pathologist and physician scientist Jason Hipp, MD, PhD, Chair of Computational Pathology and AI at Mayo Clinic.

“We want to be a hub within Mayo Clinic for digital pathology,” Hipp told Executive War College attendees during his keynote address.

Hipp views his team as the bridge between pathologists and the data science engineers who develop AI algorithms. Both sides must collaborate to move AI forward, he commented, yet most clinical laboratories and pathology groups have not yet developed those relationships.

“We want to embed both sides,” Hipp added. “We need the data scientists working with the pathologists side by side. That practical part is missing today.”

The future medical laboratory at Mayo Clinic will feature an intersection of pathology, computer technology, and patient data. Cloud storage is a big part of that vision.

“AI requires storage and lots of data to be practical,” Hipp said. 

Scott Wallask

Related Information:

Keynote Speakers at the Executive War College Describe the Divergent Paths of Clinical Laboratory Testing

COVID-19 Testing Reimbursement Scrutiny is Coming for Clinical Laboratories, Attorneys Predict at Executive War College

What is Machine Learning?

Data Scientist Overview

Former FDA Commissioner Scott Gottlieb to Headline Artificial Intelligence in Healthcare and Diagnostics Conference

Gottlieb will speak about the state of AI in healthcare at the event May 11-12

Medical technicians in clinical laboratories and pathology groups may worry that artificial intelligence (AI) will eventually put them out of their jobs.

However, that’s not likely to be the case, according to former Food and Drug Administration (FDA) Commissioner Scott Gottlieb. He was just announced as a top speaker at the Artificial Intelligence in Healthcare and Diagnostics (AIHD) Conference, which takes place May 10-11 in San Jose, Calif.

Instead, expect AI in healthcare to help labs better aggregate and analyze an ever-growing repository of clinical data.

“As we start to digitize more of this information, build out bigger repositories, and correlate more of this information with experimental evidence that’s also captured digitally, it’s going to become an immensely powerful tool,” Gottlieb said during a 2021 webinar hosted by Proscia, which develops pathology software embedded with AI.

Scott Gottlieb, former FDA commissioner
Former FDA Commissioner Scott Gottlieb said AI in healthcare will “become an immensely powerful tool.” (Photo courtesy of: Worldwide Speakers Group)

“[AI is] going to be a predictive tool,” he continued. “So, now you start to think about digital data from traditional pathology, digital data from characterizing tumors to sequencing, alongside digital data capture through electronic health records. And you start to have a really powerful, robust set of information.”

Writing for MobiHealthNews last year, Liz Kwo, MD, also noted the potential of AI to deal with unstructured data—in other words, information that is not in a pre-set data model and thus difficult to analyze.

“In many cases, health data and medical records of patients are stored as complicated unstructured data, which makes it difficult to interpret and access,” wrote Kwo, who is Deputy Chief Clinical Officer at insurer Anthem and Faculty Lecturer at Harvard Medical School.

“AI can seek, collect, store, and standardize medical data regardless of the format, assisting repetitive tasks and supporting clinicians with fast, accurate, tailored treatment plans and medicine for their patients instead of being buried under the weight of searching, identifying, collecting, and transcribing the solutions they need from piles of paper formatted EHRs,” she added.

AIHD conference to explore the state of artificial intelligence in healthcare

At AIHD, Gottlieb will take part in a fireside chat and also contribute to a panel discussion with other keynote speakers.

“There’s no better individual than Dr. Gottlieb to address AIHD participants about the state of artificial intelligence, where it’s going, how its regulatory oversight will unfold, and what’s likely to be the most surprising contribution of AI in patient care,” said Robert Michel, founder of AIHD, Executive Director of the Precision Medicine Institute, and Editor-in-Chief of clinical lab intelligence publication The Dark Report.

The event will bring together senior-level representatives from AI companies, hospitals, physician offices, and diagnostic providers.

Gottlieb promoted greater use of digital tools for clinicians

Gottlieb is a well-known advocate for digital tools in healthcare, including AI. In 2019, he outlined a framework the FDA would start using to promote the development of safe medical devices that use advanced AI algorithms.

“I can envision a world where, one day, artificial intelligence can help detect and treat challenging health problems, for example by recognizing the signs of disease well in advance of what we can do today,” Gottlieb stated at the time. “These tools can provide more time for intervention, identifying effective therapies and ultimately saving lives.”

During and after his tenure at the FDA, he has been a prolific commentator about the SARS-CoV-2 pandemic and steps public health agencies have taken to curb COVID-19.

As Dark Daily previously reported, Gottlieb has also shown interest in technologies used to combat COVID-19, such as laboratory-developed tests created under emergency use authorizations.

Gottlieb is currently a Senior Fellow at the American Enterprise Institute, a public policy think tank. He is also a partner at venture capital firm New Enterprise Associates and serves on the boards of Pfizer and Illumina.

—Scott Wallask

Related Resources:

Artificial Intelligence in Healthcare and Diagnostics Conference

Future Ready Pathology by Proscia

What is unstructured data?

Top 10 Use Cases for AI in Healthcare

Statement from FDA Commissioner Scott Gottlieb, M.D. on steps toward a new, tailored review framework for artificial intelligence-based medical devices

FDA Issues its First Emergency Use Authorization for an Antigen-based Diagnostic as Top IVD Manufacturers Race to Supply Medical Laboratories with COVID-19 Tests

Webinar: Clinical-Grade Artificial Intelligence (AI) for Your Pathology Lab

Webinar: Clinical-Grade Artificial Intelligence (AI) for Your Pathology Lab

PRESS RELEASE FOR IMMEDIATE RELEASE

THE DARK REPORT
21806 Briarcliff Dr.
Spicewood, TX 78669
512-264-7103 o
512-264-0969 f
Media Contact: Kristen Noonan
info@darkreport.com

AUSTIN, Texas (June 14, 2021) – DARK Daily today announced “Clinical-Grade Artificial Intelligence (AI) for Your Pathology Lab: What’s Ready Now, What’s Coming Soon, and How Pathologists Can Profit from Its Use,” a premium webinar to guide...