MIT’s deep learning artificial intelligence algorithm demonstrates how similar new technologies and smartphones can be combined to give dermatologists and dermatopathologists valuable new ways to diagnose skin cancer from digital images
According to an MIT press release, “The paper describes the development of an SPL [Suspicious Pigmented Lesion] analysis system using DCNNs [Deep Convolutional Neural Networks] to more quickly and efficiently identify skin lesions that require more investigation, screenings that can be done during routine primary care visits, or even by the patients themselves. The system utilized DCNNs to optimize the identification and classification of SPLs in wide-field images.”
The MIT scientists believe their AI analysis system could help dermatologists, dermatopathologists, and clinical laboratories detect melanoma, a deadly form of skin cancer, in its early stages using smartphones at the point of care.
Improving Melanoma Treatment and Patient Outcomes
Melanoma develops when pigment-producing cells called melanocytes start to grow out of control. The cancer has traditionally been diagnosed through visual inspection of SPLs by physicians in medical settings. Early-stage identification of SPLs can drastically improve the prognosis for patients and significantly reduce treatment costs. It is common to biopsy many lesions to ensure that every case of melanoma can be diagnosed as early as possible, thus contributing to better patient outcomes.
“Early detection of SPLs can save lives. However, the current capacity of medical systems to provide comprehensive skin screenings at scale are still lacking,” said Luis Soenksen, PhD, Venture Builder in Artificial Intelligence and Healthcare at MIT and first author of the study in the MIT press release.
The researchers trained their AI system by using 20,388 wide-field images from 133 patients at the Gregorio Marañón General University Hospital in Madrid, as well as publicly available images. The collected photographs were taken with a variety of ordinary smartphone cameras that are easily obtainable by consumers.
They taught the deep learning algorithm to examine various features of skin lesions such as size, circularity, and intensity. Dermatologists working with the researchers also visually classified the lesions for comparison.
“Our system achieved more than 90.3% sensitivity (95% confidence interval, 90 to 90.6) and 89.9% specificity (89.6 to 90.2%) in distinguishing SPLs from nonsuspicious lesions, skin, and complex backgrounds, avoiding the need for cumbersome individual lesion imaging,” the MIT researchers noted in their Science Translational Medicine paper.
In addition, the algorithm agreed with the consensus of experienced dermatologists 88% of the time and concurred with the opinions of individual dermatologists 86% of the time, Medgadget reported.
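Sensitivity and specificity are simple functions of confusion-matrix counts. The sketch below is illustrative only: the counts are hypothetical, chosen to reproduce the quoted percentages, and are not the study's actual data.

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Compute sensitivity (true-positive rate) and specificity
    (true-negative rate) from confusion-matrix counts."""
    sensitivity = tp / (tp + fn)  # fraction of true SPLs flagged
    specificity = tn / (tn + fp)  # fraction of benign findings cleared
    return sensitivity, specificity

# Hypothetical counts chosen only to illustrate the arithmetic:
sens, spec = sensitivity_specificity(tp=903, fn=97, tn=899, fp=101)
print(f"sensitivity={sens:.1%} specificity={spec:.1%}")
# prints: sensitivity=90.3% specificity=89.9%
```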
Modern Imaging Technologies Will Advance Diagnosis of Disease
According to the American Cancer Society, about 106,110 new cases of melanoma will be diagnosed in the United States in 2021. Approximately 7,180 people are expected to die of the disease this year. Melanoma is less common than other types of skin cancer but more dangerous as it’s more likely to spread to other parts of the body if not detected and treated early.
More research is needed to substantiate the effectiveness and accuracy of this new tool before it could be used in clinical settings. However, the early research looks promising and smartphone camera technology is constantly improving. Higher resolutions would further advance development of this type of diagnostic tool.
In addition, MIT’s algorithm enables in situ examination and possible diagnosis of cancer. Therefore, a smartphone so equipped could enable a dermatologist to diagnose and excise cancerous tissue in a single visit, without the need for biopsies to be sent to a dermatopathologist.
Currently, dermatologists refer many skin biopsies to dermatopathologists and anatomic pathology laboratories. An accurate diagnostic tool that uses modern smartphones to characterize suspicious skin lesions could become quite popular with dermatologists and affect the flow of referrals to medical laboratories.
DeepMind hopes its unrivaled collection of data, enabled by artificial intelligence, may advance development of precision medicines, new medical laboratory tests, and therapeutic treatments
‘Tis the season for giving, and one United Kingdom-based artificial intelligence (AI) research laboratory is making a sizeable gift. After using AI and machine learning to create “the most comprehensive map of human proteins” in existence, DeepMind, a subsidiary of Alphabet Inc. (NASDAQ:GOOGL), parent company of Google, plans to give away for free its database of millions of protein structure predictions to the global scientific community and to all of humanity, The Verge reported.
Pathologists and clinical laboratory scientists developing proteomic assays understand the significance of this gesture. They know how difficult and expensive it is to determine a protein’s structure from its sequence of amino acids. That’s because interactions among the various types of amino acids cause the amino acid string to “fold.” Thus, the availability of this data may accelerate the development of more diagnostic tests based on proteomics.
“For decades, scientists have been trying to find a method to reliably determine a protein’s structure just from its sequence of amino acids. Attraction and repulsion between the 20 different types of amino acids cause the string to fold in a feat of ‘spontaneous origami,’ forming the intricate curls, loops, and pleats of a protein’s 3D structure. This grand scientific challenge is known as the protein-folding problem,” a DeepMind statement noted.
Enter DeepMind’s AlphaFold AI platform to help iron things out. “Experimental techniques for determining structures are painstakingly laborious and time consuming (sometimes taking years and millions of dollars). Our latest version [of AlphaFold] can now predict the shape of a protein, at scale and in minutes, down to atomic accuracy. This is a significant breakthrough and highlights the impact AI can have on science,” DeepMind stated.
Release of Data Will Be ‘Transformative’
In July, DeepMind announced it would begin releasing data from its AlphaFold Protein Structure Database which contains “predictions for the structure of some 350,000 proteins across 20 different organisms,” The Verge reported, adding, “Most significantly, the release includes predictions for 98% of all human proteins, around 20,000 different structures, which are collectively known as the human proteome. By the end of the year, DeepMind hopes to release predictions for 100 million protein structures.”
According to Edith Heard, PhD, Director General of the European Molecular Biology Laboratory (EMBL), the open release of such a dataset will be “transformative for our understanding of how life works,” The Verge reported.
Free Data about Proteins Will Accelerate Research on Diseases, Treatments
Research into how proteins fold and, thereby, function could have implications for fighting diseases and developing new medicines, according to DeepMind.
“This will be one of the most important datasets since the mapping of the human genome,” said Ewan Birney, PhD, Deputy Director General of the EMBL, in the DeepMind statement. EMBL worked with DeepMind on the dataset.
DeepMind protein prediction data are already being used by scientists in medical research. “Anyone can use it for anything. They just need to credit the people involved in the citation,” said Demis Hassabis, DeepMind CEO and Co-founder, in The Verge.
In a blog article, Hassabis listed several projects and organizations already using AlphaFold.
“As researchers seek cures for diseases and pursue solutions to other big problems facing humankind—including antibiotic resistance, microplastic pollution, and climate change—they will benefit from fresh insights in the structure of proteins,” Hassabis wrote.
Because of the deep financial backing that Alphabet/Google can provide, it is reasonable to predict that DeepMind will continue advancing its AI technology, regularly adding capabilities and accuracy and making AlphaFold useful for many more applications.
This will be particularly true for the development of new diagnostic assays that will give clinical laboratories better tools for diagnosing disease earlier and more accurately.
WASE-COVID Study also found that use of artificial intelligence technology minimized variability among echocardiogram scan results
Many physicians—including anatomic pathologists—are watching the development of artificial intelligence (AI)-powered diagnostic tools intended to analyze medical images with accuracy comparable to that of trained doctors. Now comes news of a recent study that demonstrated the ability of an AI tool to analyze echocardiogram images and deliver analyses equal to or better than those of trained physicians.
Conducted by researchers from the World Alliance Societies of Echocardiography and presented at the latest annual scientific sessions of the American College of Cardiology (ACC), the WASE-COVID Study assessed the ability of an AI platform to analyze digital echocardiogram images with the goal of predicting mortality in patients with severe cases of COVID-19.
To complete their research, the WASE-COVID Study scientists examined 870 patients with acute COVID-19 infection from 13 medical centers in nine countries across Asia, Europe, the United States, and Latin America.
Human versus Artificial Intelligence Analysis
Echocardiograms were analyzed with automated, machine learning-derived algorithms to calculate various data points and identify echocardiographic parameters that would be prognostic of clinical outcomes in hospitalized patients. The results were then compared to human analysis.
All patients in the study had previously tested positive for COVID-19 infection using a polymerase chain reaction (PCR) or rapid antigen test (RAT) and received a clinically-indicated echocardiogram upon admission. For those patients ultimately discharged from the hospital, a follow-up echocardiogram was performed after three months.
“What we learned was that the manual tracings were not able to predict mortality,” Federico Asch, MD, FACC, FASE, Director of the Echocardiography Core Lab at MedStar Health Research Institute in Washington, DC, told US Cardiology Review in a video interview describing the WASE-COVID Study findings.
Asch is also Associate Professor of Medicine (Cardiology) at Georgetown University. He added, “But on the same echoes, if the analysis was done by machine—Ultromics EchoGo Core, a software that is commercially available—when we used the measurements obtained through this platform, we were able to predict in-hospital and out-of-hospital mortality both with ejection fraction and left ventricular longitudinal strain.”
Nearly half of the 870 hospitalized patients were admitted to intensive care units, 27% were placed on ventilators, 188 patients died in the hospital, and 50 additional patients died within three to six months after being released from the hospital.
Ten of the 13 medical centers performed limited cardiac exams as their primary COVID in-patient practice, while the other three performed comprehensive exams.
In-hospital mortality rates ranged from 11% in Asia to 19% in Europe, 26% in the US, and 27% in Latin America.
Left ventricular longitudinal strain (LVLS), right ventricle free wall strain (RVFWS), as well as a patient’s age, lactic dehydrogenase levels and history of lung disease, were independently associated with mortality. Left ventricle ejection fraction (LVEF) was not.
Fully automated quantification of LVEF and LVLS using AI minimized variability.
AI-based left ventricular analyses, but not manual, were significant predictors of in-hospital and follow-up mortality.
The WASE-COVID Study also revealed the varying international use of cardiac ultrasound (echocardiography) on COVID-19 patients.
“By using machines, we reduce variability. By reducing variability, we have a better capacity to compare our results with other outcomes, whether that outcome in this case is mortality or it could be changes over time,” Asch stated in the US Cardiology Review video. “What this really means is that we may be able to show associations and comparisons by using AI that we cannot do with manual [readings] because manual has more variation and is less reliable.”
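The variability Asch describes is commonly quantified as a coefficient of variation (standard deviation divided by mean). A minimal sketch, using hypothetical repeated readings rather than the study's data, illustrates why lower variability makes results easier to compare:

```python
import statistics

def coefficient_of_variation(values):
    """CV = sample standard deviation / mean, expressed as a percentage."""
    return 100 * statistics.stdev(values) / statistics.mean(values)

# Hypothetical ejection-fraction readings of the same scan:
manual = [52, 58, 47, 61, 50]   # five readers tracing by hand
ai     = [54, 55, 54, 55, 54]   # five runs of an automated pipeline

print(round(coefficient_of_variation(manual), 1))  # prints: 10.8
print(round(coefficient_of_variation(ai), 1))      # prints: 1.0
```

The tighter spread of the automated readings is what allows smaller true differences (here, in mortality risk) to reach statistical significance.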
He said the next steps will be to see if the findings hold true when AI is used in other populations of cardiac patients.
COVID-19 Pandemic Increased Need for Swift Analyses
An earlier WASE Study in 2016 set out to answer whether normal left ventricular heart chamber quantifications vary across countries, geographical regions, and cultures. However, the data produced by that study took years to review. Asch said the COVID-19 pandemic created a need for such analysis to be done more quickly.
“When the pandemic began, we knew that the clinical urgency to learn as much as possible about the cardiovascular connection to COVID-19 was incredibly high, and that we had to find a better way of securely and consistently reviewing all of this information in a timely manner,” he said in an Ultromics news release.
Coronary artery disease (CAD) is the most common form of heart disease and affects more than 16.5 million people over the age of 20. By 2035, the economic burden of CAD will reach an estimated $749 billion in the US alone, according to the Ultromics website.
“COVID-19 has placed an even greater pressure on cardiac care and looks likely to have lasting implications in terms of its impact on the heart,” said Ross Upton, PhD, Founder and CEO of Oxford, UK-based Ultromics, in a news release announcing the US Food and Drug Administration’s 510(k) clearance for the EchoGo Pro, which supports clinicians’ diagnosing of CAD. “The healthcare industry needs to quickly pivot towards AI-powered automation to reduce the time to diagnosis and improve patient care.”
Use of AI to analyze digital pathology images is expected to be a fast-growing element in the anatomic pathology profession, particularly in the diagnosis of cancer. As Dark Daily outlined in its free white paper, “Anatomic Pathology at the Tipping Point? The Economic Case for Adopting Digital Technology and AI Applications Now,” anatomic pathology laboratories can expect adoption of AI and digital technology to gain in popularity among pathologists in coming years.
Determining how dogs do this may lead to biomarkers for new clinical laboratory diagnostics tests
Development of new diagnostic olfactory tools for prostate and other cancers is expected to result from research now being conducted by a consortium of researchers at different universities and institutes. To identify new biomarkers, these scientists are studying how dogs can detect the presence of prostate cancer by sniffing urine specimens.
Funded by a grant from the Prostate Cancer Foundation, the pilot study demonstrated that dogs could identify samples containing prostate cancer and discern between cancer-positive and cancer-negative samples.
Canine Olfactory Combined with Artificial Intelligence Analysis Approach
The part of a canine brain that controls smell is, proportionally, about 40 times greater than that of humans. Some dog breeds have 300 to 350 million sensory receptors, compared to about five million in humans. With their keen sense of smell, dogs are proving to be vital resources in the detection of some diseases.
The pilot study examined how dogs could be trained to detect prostate cancer in human urine samples.
To perform the study, the researchers trained two dogs to sniff urine samples from men with high-grade prostate cancer and from men without the cancer. The two dogs used in the study were a four-year-old female Labrador Retriever named Florin, and a seven-year-old female wirehaired Hungarian Vizsla named Midas. The dogs were trained to respond to cancer-related chemicals, known as volatile organic compounds (VOCs), that the researchers added to the urine samples, and to not respond to samples without the VOCs.
Both dogs performed well in their cancer detection roles, and both successfully identified five of seven urine samples from men with prostate cancer, corresponding to a 71.4% accuracy rate. In addition, Florin correctly identified 16 of 21 non-aggressive or no-cancer samples for an accuracy rate of 76.2%, while Midas did the same with a 66.7% accuracy rate.
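The accuracy rates quoted above are simple proportions of correctly classified samples. The arithmetic can be sketched as follows (the 14-of-21 figure for Midas is inferred from the reported 66.7% rate, not stated directly in the study):

```python
def accuracy_pct(correct, total):
    """Percentage of samples classified correctly, to one decimal place."""
    return round(100 * correct / total, 1)

print(accuracy_pct(5, 7))    # both dogs, cancer samples   -> 71.4
print(accuracy_pct(16, 21))  # Florin, negative samples    -> 76.2
print(accuracy_pct(14, 21))  # Midas, negative samples     -> 66.7
```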
“We wondered if having the dogs detect the chemicals, combined with analysis by GC-MS, bacterial profiling, and an artificial intelligence (AI) neural network trained to emulate the canine cancer detection ability, could significantly improve the diagnosis of high-grade prostate cancer,” Alan Partin, MD, PhD, Professor of Urology, Pathology, and Oncology at Johns Hopkins University School of Medicine and one of the authors of the study, told Futurity.
The researchers determined that canine olfaction was able to distinguish between prostate cancer-positive and cancer-negative samples, and the VOC and microbiota profiling analyses showed a qualitative difference between the two groups. The multisystem approach demonstrated a more sensitive and specific way of detecting prostate cancer than any of the methods used on its own.
In their paper, the researchers concluded that “this study demonstrated feasibility and identified the challenges of a multiparametric approach as a first step towards creating a more effective, non-invasive early urine diagnostic method for the highly aggressive histology of prostate cancer.”
Can Man’s Best Friend be Trained to Detect Cancer and Save Lives?
Prostate cancer is the second leading cause of cancer deaths among men in the developed world. And, according to data from the National Cancer Institute, standard clinical laboratory blood tests, such as the prostate-specific antigen (PSA) test for early detection, sometimes miss the presence of cancer.
Establishing an accurate, non-invasive method of sensing the disease could help detect the disease sooner when it is more treatable and save lives.
The American Cancer Society estimates that there will be about 248,530 new cases of prostate cancer diagnosed in 2021 and that there will be approximately 34,130 deaths resulting from the disease during the same year.
Of course, more testing will be needed before Man’s best friend can be put to work detecting cancer in medical environments. But if canines can be trained to detect the disease early, and in a non-invasive way, more timely diagnosis and treatment could result in higher survival rates.
Meanwhile, as researchers identify the elements dogs use to detect cancer and other diseases, this knowledge can result in the creation of new biomarkers that can be used in clinical laboratory tests.
Molecular probes designed to spot minute amounts of pathogens in biological samples may aid clinical laboratories’ speed-to-answer
Driven to find a better way to isolate minute samples of pathogens from among high-volumes of other biological organisms, researchers at Canada’s McMaster University in Hamilton, Ontario, have unveiled a bioinformatics algorithm which they claim shortens time-to-answer and speeds diagnosis of deadly diseases.
The researchers specifically targeted pathogens responsible for sepsis, as well as SARS-CoV-2, the coronavirus that causes COVID-19. Clinical laboratories would welcome a technology that both shortens time-to-answer and improves diagnostic accuracy for such pathogens.
Their design of molecular probes that target the genomic sequences of specific pathogens can enable diagnosticians and clinical laboratories to spot extremely small amounts of viral and bacterial pathogens in patients’ biological samples, as well as in the environment and wildlife.
“There are thousands of bacterial pathogens and being able to determine which one is present in a patient’s blood sample could lead to the correct treatment faster when time is very important,” Zachery Dickson, a lead author of the study, told Brighter World. Dickson is a bioinformatics PhD candidate in the Department of Biology at McMaster University. “The probe makes identification much faster, meaning we could potentially save people who might otherwise die,” he added.
Sepsis is a life-threatening response to infection that leads to organ failure, tissue damage, and death in hospitals worldwide. According to Sepsis Alliance, about 30% of people diagnosed with severe sepsis will die without quick and proper treatment. Thus, a “shortcut” to identifying sepsis in its early stages may well save many lives, the McMaster researchers noted.
And COVID-19 has killed millions. A tool that identifies sepsis pathogens and SARS-CoV-2 in minute biological samples would be a boon to hospital medical laboratories worldwide.
Is Bioinformatics ‘Shortcut’ Faster than PCR Testing?
The researchers say their probes enable a shortcut to detection—even in an infection’s early stages—by “targeting, isolating, and identifying the DNA sequences specifically and simultaneously.”
The probes’ design makes possible simultaneous targeted capture of diverse metagenomics targets, Biocompare explained.
But is it faster than PCR (polymerase chain reaction) testing?
The McMaster scientists were motivated by the “challenges of low signal, high background, and uncertain targets that plague many metagenomic sequencing efforts,” they noted in their paper.
They pointed to challenges posed by PCR testing, a popular technique used for detection of sepsis pathogens as well as, more recently, for SARS-CoV-2, the coronavirus causing COVID-19.
“The (PCR) technique relies on primers that bind to nucleic acid sequences specific to an organism or group of organisms. Although capable of sensitive, rapid detection and quantification of a particular target, PCR is limited when multiple loci are targeted by primers,” the researchers wrote in Cell Reports Methods.
According to LabMedica, “A wide array of metagenomic study efforts are hampered by the same challenge: low concentrations of targets of interest combined with overwhelming amounts of background signal. Although PCR or naive DNA capture can be used when there are a small number of organisms of interest, design challenges become untenable for large numbers of targets.”
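HUBDesign's actual probe-design algorithm is considerably more involved, but the underlying question it must answer repeatedly (does a candidate probe sequence occur in a target genome, on either strand?) can be sketched simply. The function names and toy sequences below are illustrative assumptions, not the researchers' code:

```python
def reverse_complement(seq):
    """Return the reverse complement of a DNA sequence."""
    comp = {"A": "T", "T": "A", "G": "C", "C": "G"}
    return "".join(comp[base] for base in reversed(seq))

def probe_hits(probe, genomes):
    """Return names of genomes containing the probe on either strand."""
    rc = reverse_complement(probe)
    return [name for name, seq in genomes.items()
            if probe in seq or rc in seq]

# Toy sequences, far shorter than real genomes, for illustration only:
genomes = {
    "target_A": "ATGGCGTACGTTAGC",
    "target_B": "TTTTCCCCGGGGAAAA",
}
print(probe_hits("GCGTACG", genomes))  # prints: ['target_A']
```

A real design pipeline must additionally handle inexact matches, melting-temperature constraints, and cross-hybridization against background DNA, which is where the challenges the researchers describe arise.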
Detecting Pathogens Faster, Cheaper, and More Accurately
As part of their study, researchers tested two probe sets:
one to target bacterial pathogens linked to sepsis, and
another to detect coronaviruses including SARS-CoV-2.
They were successful in using the probes to capture a variety of pathogens linked to sepsis and SARS-CoV-2.
“We validated HUBDesign by generating probe sets targeting the breadth of coronavirus diversity, as well as a suite of bacterial pathogens often underlying sepsis. In separate experiments demonstrating significant, simultaneous enrichment, we captured SARS-CoV-2 and HCoV-NL63 [Human coronavirus NL 63] in a human RNA background and seven bacterial strains in human blood. HUBDesign has broad applicability wherever there are multiple organisms of interest,” the researchers wrote in Cell Reports Methods.
The findings also have implications for the environment and wildlife, the researchers noted.
Of course, more research is needed to validate the tool’s usefulness in medical diagnostics. The McMaster University researchers intend to improve HUBDesign’s efficiency but note that probes cannot be designed for unknown targets.
Nevertheless, the application of novel technologies to the diagnosis of sepsis, which causes 250,000 deaths in the US each year, according to the federal Centers for Disease Control and Prevention, is a positive development worth watching.
The McMaster scientists’ discoveries—confirmed by future research and clinical studies—could go a long way toward ending the dire effects of sepsis as well as COVID-19. That would be a welcome development, particularly for hospital-based laboratories.