WASE-COVID Study also found that use of artificial intelligence technology minimized variability among echocardiogram scan results
Many physicians—including anatomic pathologists—are watching the development of artificial intelligence (AI)-powered diagnostic tools intended to analyze medical images and interpret the data with accuracy comparable to that of trained doctors. Now comes news of a recent study that demonstrated the ability of an AI tool to analyze echocardiogram images and deliver analyses equal to or better than those of trained physicians.
Conducted by researchers from the World Alliance Societies of Echocardiography and presented at a recent annual scientific session of the American College of Cardiology (ACC), the WASE-COVID Study assessed the ability of an AI platform to analyze digital echocardiogram images with the goal of predicting mortality in patients with severe cases of COVID-19.
To complete their research, the WASE-COVID Study scientists examined 870 patients with acute COVID-19 infection at 13 medical centers in nine countries across Asia, Europe, Latin America, and the United States.
Human versus Artificial Intelligence Analysis
Echocardiograms were analyzed with automated, machine learning-derived algorithms to calculate various data points and identify echocardiographic parameters that would be prognostic of clinical outcomes in hospitalized patients. The results were then compared to human analysis.
All patients in the study had previously tested positive for COVID-19 using a polymerase chain reaction (PCR) or rapid antigen test (RAT) and received a clinically indicated echocardiogram upon admission. For those patients ultimately discharged from the hospital, a follow-up echocardiogram was performed after three months.
“What we learned was that the manual tracings were not able to predict mortality,” Federico Asch, MD, FACC, FASE, Director of the Echocardiography Core Lab at MedStar Health Research Institute in Washington, DC, told US Cardiology Review in a video interview describing the WASE-COVID Study findings.
Asch is also Associate Professor of Medicine (Cardiology) at Georgetown University. He added, “But on the same echoes, if the analysis was done by machine—Ultromics EchoGo Core, a software that is commercially available—when we used the measurements obtained through this platform, we were able to predict in-hospital and out-of-hospital mortality both with ejection fraction and left ventricular longitudinal strain.”
Nearly half of the 870 hospitalized patients were admitted to intensive care units, 27% were placed on ventilators, 188 patients died in the hospital, and 50 additional patients died within three to six months after being released from the hospital.
Ten of the 13 medical centers performed limited cardiac exams as their primary COVID-19 in-patient practice; the other three performed comprehensive exams.
In-hospital mortality rates varied by region: 11% in Asia, 19% in Europe, 26% in the US, and 27% in Latin America.
Left ventricular longitudinal strain (LVLS), right ventricular free wall strain (RVFWS), patient age, lactate dehydrogenase levels, and history of lung disease were independently associated with mortality. Left ventricular ejection fraction (LVEF) was not.
Fully automated quantification of LVEF and LVLS using AI minimized variability.
AI-based left ventricular analyses, but not manual, were significant predictors of in-hospital and follow-up mortality.
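As a rough illustration of how AI-derived measurements such as LVEF and LVLS might feed a mortality model of the kind described above, here is a minimal logistic-regression sketch on synthetic data. The variables, effect sizes, and fitting code are all invented for demonstration; this is not the study's actual model.

```python
# Illustrative sketch only: fit a logistic regression of in-hospital death on
# synthetic, AI-derived echocardiographic measurements (LVEF, LVLS) plus age.
# All data and coefficients are invented for demonstration.
import numpy as np

rng = np.random.default_rng(0)
n = 500
age = rng.normal(60, 12, n)
lvef = rng.normal(55, 8, n)    # left ventricular ejection fraction, %
lvls = rng.normal(-18, 3, n)   # longitudinal strain, % (more negative = better)

# Synthetic ground truth: worse strain and older age raise mortality risk.
logit = -5 + 0.04 * age + 0.25 * (lvls + 18)
p = 1 / (1 + np.exp(-logit))
died = rng.random(n) < p

X = np.column_stack([np.ones(n), age, lvef, lvls])

# Simple Newton-Raphson fit of the logistic model.
beta = np.zeros(X.shape[1])
for _ in range(25):
    mu = 1 / (1 + np.exp(-X @ beta))
    W = mu * (1 - mu)
    grad = X.T @ (died - mu)
    hess = (X * W[:, None]).T @ X
    beta += np.linalg.solve(hess, grad)

print("fitted coefficients (intercept, age, LVEF, LVLS):", beta)
```

On this synthetic data the fitted strain coefficient comes out positive (less negative strain, higher risk), mirroring the direction of the study's reported association.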
The WASE-COVID Study also revealed the varying international use of cardiac ultrasound (echocardiography) on COVID-19 patients.
“By using machines, we reduce variability. By reducing variability, we have a better capacity to compare our results with other outcomes, whether that outcome in this case is mortality or it could be changes over time,” Asch stated in the US Cardiology Review video. “What this really means is that we may be able to show associations and comparisons by using AI that we cannot do with manual [readings] because manual has more variation and is less reliable.”
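Asch's point about variability can be illustrated with a toy simulation: adding measurement noise to a predictor attenuates the association you can observe between it and an outcome, so a lower-variability (AI) reading preserves more of the true correlation than a higher-variability (manual) one. All numbers below are synthetic assumptions, not study data.

```python
# Toy simulation of attenuation from measurement variability. A noisier
# reading of the same underlying echo parameter shows a weaker observable
# correlation with the outcome. Noise levels are invented for illustration.
import numpy as np

rng = np.random.default_rng(1)
n = 2000
true_strain = rng.normal(-18, 3, n)
outcome = true_strain + rng.normal(0, 3, n)   # outcome tied to the true value

ai_reading = true_strain + rng.normal(0, 0.5, n)    # low measurement noise
manual_reading = true_strain + rng.normal(0, 3, n)  # high measurement noise

r_ai = np.corrcoef(ai_reading, outcome)[0, 1]
r_manual = np.corrcoef(manual_reading, outcome)[0, 1]
print(f"correlation with outcome: AI {r_ai:.2f}, manual {r_manual:.2f}")
```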
He said the next steps will be to see if the findings hold true when AI is used in other populations of cardiac patients.
COVID-19 Pandemic Increased Need for Swift Analyses
An earlier WASE Study in 2016 set out to answer whether normal left ventricular heart chamber quantifications vary across countries, geographical regions, and cultures. However, the data produced by that study took years to review. Asch said the COVID-19 pandemic created a need for such analysis to be done more quickly.
“When the pandemic began, we knew that the clinical urgency to learn as much as possible about the cardiovascular connection to COVID-19 was incredibly high, and that we had to find a better way of securely and consistently reviewing all of this information in a timely manner,” he said in the Ultromics news release.
Coronary artery disease (CAD) is the most common form of heart disease and affects more than 16.5 million people over the age of 20. By 2035, the economic burden of CAD will reach an estimated $749 billion in the US alone, according to the Ultromics website.
“COVID-19 has placed an even greater pressure on cardiac care and looks likely to have lasting implications in terms of its impact on the heart,” said Ross Upton, PhD, Founder and CEO of Oxford, UK-based Ultromics, in a news release announcing the US Food and Drug Administration’s 510(k) clearance for the EchoGo Pro, which supports clinicians’ diagnosing of CAD. “The healthcare industry needs to quickly pivot towards AI-powered automation to reduce the time to diagnosis and improve patient care.”
Use of AI to analyze digital pathology images is expected to be a fast-growing element in the anatomic pathology profession, particularly in the diagnosis of cancer. As Dark Daily outlined in this free white paper, “Anatomic Pathology at the Tipping Point? The Economic Case for Adopting Digital Technology and AI Applications Now,” anatomic pathology laboratories can expect adoption of AI and digital technology to gain in popularity among pathologists in coming years.
Researchers find that a machine learning system could have saved more than one million dollars and prevented hundreds, if not thousands, of adverse drug events
Support for artificial intelligence (AI) and machine learning (ML) in healthcare has been mixed among anatomic pathologists and clinical laboratory leaders. Nevertheless, there’s increasing evidence that diagnostic systems based on AI and ML can be as accurate or more accurate at detecting disease than systems without them.
Dark Daily has covered the development of artificial intelligence and machine learning systems and their ability to accurately detect disease in many e-briefings over the years. Now, a recent study conducted at Brigham and Women’s Hospital (BWH) and Massachusetts General Hospital (MGH) suggests machine learning can be more accurate than existing clinical decision support (CDS) systems at detecting prescription medication errors as well.
The study was partially retrospective in that the researchers compiled past alerts generated by the CDS systems at BWH and MGH between 2009 and 2011 and added them to alerts generated during the active part of the study, which took place from January 1, 2012, to December 31, 2013, for a total of five years’ worth of CDS alerts.
They then sent the same patient-encounter data that generated those CDS alerts to a machine learning platform called MedAware, an AI-enabled software system developed in Ra’anana, Israel.
MedAware was created for the “identification and prevention of prescription errors and adverse drug effects,” notes the study, which goes on to state, “This system identifies medication issues based on machine learning using a set of algorithms with different complexity levels, ranging from statistical analysis to deep learning with neural networks. Different algorithms are used for different types of medication errors. The data elements used by the algorithms include demographics, encounters, lab test results, vital signs, medications, diagnosis, and procedures.”
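As a hypothetical sketch of the simplest end of that spectrum, the statistical-analysis level, a clinical-outlier check might flag a dose that falls far outside the distribution seen in comparable patients. MedAware's actual algorithms are proprietary; the drug, doses, and threshold below are invented for illustration.

```python
# Hypothetical statistical outlier check for prescribed doses. Flags a dose
# when it lies more than z_threshold standard deviations from the mean dose
# observed in comparable past patients. All data here is synthetic.
import statistics

# (drug -> doses in mg) from comparable past patients (synthetic)
history = {"metformin": [500, 500, 850, 1000, 850, 500, 1000, 850, 500, 1000]}

def is_outlier(drug, dose_mg, z_threshold=3.0):
    doses = history[drug]
    mean = statistics.fmean(doses)
    sd = statistics.pstdev(doses)
    return abs(dose_mg - mean) > z_threshold * sd

print(is_outlier("metformin", 850))    # → False: a typical dose
print(is_outlier("metformin", 8500))   # → True: likely a data-entry error
```

Note how this style of check needs no hand-written rule for each drug; the alert logic falls out of the historical data itself.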
The researchers then compared the alerts produced by MedAware to the existing CDS alerts from that five-year period. The results were astonishing.
According to the study:

“68.2% of the alerts generated were unique to the MedAware system and not generated by the institutions’ CDS alerting system.

“Clinical outlier alerts were the type least likely to be generated by the institutions’ CDS—99.2% of these alerts were unique to the MedAware system.

“The largest overlap was with dosage alerts, with only 10.6% unique to the MedAware system.

“68% of the time-dependent alerts were unique to the MedAware system.”
Perhaps even more important were the results of the cost analysis, which found:

“The average cost of an adverse event potentially prevented by an alert was $60.67 (range: $5.95–$115.40).

“The average adverse event cost per type of alert varied from $14.58 (range: $2.99–$26.18) for dosage outliers to $19.14 (range: $1.86–$36.41) for clinical outliers and $66.47 (range: $6.47–$126.47) for time-dependent alerts.”
The researchers concluded that, “Potential savings of $60.67 per alert was mainly derived from the prevention of ADEs [adverse drug events]. The prevention of ADEs could result in savings of $60.63 per alert, representing 99.93% of the total potential savings. Potential savings related to averted calls between pharmacists and clinicians could save an average of $0.047 per alert, representing 0.08% of the total potential savings.
“Extrapolating the results of the analysis to the 747,985 BWH and MGH patients who had at least one outpatient encounter during the two-year study period from 2012 to 2013, the alerts that would have been fired over five years of their clinical care by the machine learning medication errors identification system could have resulted in potential savings of $1,294,457.”
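That extrapolation can be sanity-checked with simple arithmetic: dividing the total potential savings by the average savings per alert gives the implied number of alerts. The implied count is our derivation from the quoted figures, not a number the study itself reports.

```python
# Back-of-the-envelope check of the study's extrapolation, using only the
# figures quoted above. The implied alert count is derived, not reported.
savings_per_alert = 60.67      # USD, average potential saving per alert
total_savings = 1_294_457      # USD, extrapolated over five years of care

implied_alerts = total_savings / savings_per_alert
print(f"implied alerts over five years: {implied_alerts:,.0f}")  # ≈ 21,336
```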
Savings of more than one million dollars, plus the prevention of potential patient harm or deaths caused by thousands of adverse drug events, is a strong argument for machine learning platforms in diagnostics and prescription drug monitoring.
Researchers Say Current Clinical Decision Support Systems Are Limited
Machine learning is not the same as artificial intelligence. According to Tech Differences, ML is a “discipline of AI” that aims at “enhancing accuracy,” while AI’s broader objective is “increasing probability of success.”
Healthcare needs the help. Prescription medication errors cause patient harm or deaths that cost more than $20 billion annually, states a Joint Commission news release.
CDS alerting systems are widely used to improve patient safety and quality of care. However, the BWH-MGH researchers say the current CDS systems “have a variety of limitations.” According to the study:
“One limitation is that current CDS systems are rule-based and can thus identify only the medication errors that have been previously identified and programmed into their alerting logic.
“Further, most have high alerting rates with many false positives, resulting in alert fatigue.”
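The rule-based limitation the researchers describe can be sketched in a few lines: a CDS rule table only fires on errors someone has already encoded, so an order for an unprogrammed drug passes silently no matter how extreme it is. The rule table and doses below are invented for illustration.

```python
# Minimal illustration of the rule-based CDS limitation described above:
# alerts fire only for errors already programmed into the rule table.
MAX_DAILY_DOSE_MG = {"warfarin": 10, "metformin": 2550}  # hypothetical rules

def rule_based_alert(drug, daily_dose_mg):
    limit = MAX_DAILY_DOSE_MG.get(drug)
    if limit is None:
        return None              # no rule programmed: error passes silently
    return daily_dose_mg > limit

print(rule_based_alert("warfarin", 25))    # → True: rule exists, dose excessive
print(rule_based_alert("newdrug", 9999))   # → None: no rule, so no alert
```

A statistical or ML layer, as the researchers note, can complement such rules precisely because it does not depend on an error type having been anticipated in advance.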
Commenting on the value of adding machine learning medication alerts software to existing CDS hospital systems, the BWH-MGH researchers wrote, “This kind of approach can complement traditional rule-based decision support, because it is likely to find additional errors that would not be identified by usual rule-based approaches.”
However, they concluded, “The true value of such alerts is highly contingent on whether and how clinicians respond to such alerts and their potential to prevent actual patient harm.”
Future research based on real-time data is needed before machine learning systems will be ready for use in clinical settings, HealthITAnalytics noted.
However, medical laboratory leaders and pathologists will want to keep an eye on developments in machine learning and artificial intelligence that help physicians reduce medication errors and adverse drug events. Implementation of AI-ML systems in healthcare will certainly affect clinical laboratory workflows.