The Center for Advanced Medical Simulation at Mount Sinai West Hosts Annual Tristate Regional Simulation Symposium May 17

The Center for Advanced Medical Simulation (CAMS) at Mount Sinai West is hosting its annual Tristate Regional Simulation Symposium on Friday, May 17, from 11 am to 2 pm, in a live online format.

The theme for this eighth annual symposium is “Embracing Change: How Artificial Intelligence (AI) Can Influence Health Care Simulation.” The symposium will include plenary talks, data-driven presentations, and panel discussions.

“Together, we will explore AI possibilities to enhance patient safety, team performance, and outcomes in simulation-based education and powerfully affirm everything that is most striking about simulation that we do at our institutions and worldwide,” said Priscilla V. Loanzon, EdD, RN, CHSE, Director of Simulation Education, Center for Advanced Medical Simulation, and Assistant Professor of Medicine (Pulmonary, Critical Care, and Sleep Medicine) at the Icahn School of Medicine at Mount Sinai.

Since the pandemic, the symposium has changed from a full-day, in-person conference to a three-hour live online event. Its target audience has expanded over the years from regional to national and international. Attendees can earn continuing medical education credits and continuing education units.

CAMS is one of the Mount Sinai Health System’s outstanding simulation centers, all dedicated to improving patient safety, communication, and medical education. It provides health care training opportunities to professionals in the safe learning environment of a lab setting, offering courses that include case-based simulation, in-situ simulation, and procedural training such as point of care ultrasonography, central line training, blood culture competency, medical code response, managing mechanically ventilated patients, and advanced airway management. The Center includes three simulation laboratories, a virtual-reality training arcade, and two conference rooms. All areas of CAMS are equipped with audiovisual and video-recording equipment to facilitate education, training, debriefing, and research and quality improvement projects.

The Center, accredited by the Society for Simulation in Healthcare (SSH), is working with the Continuing Medical Education Department and with Mount Sinai’s Office of Corporate Compliance and Office of Development.

To learn more about the symposium, contact Dr. Loanzon at priscilla.loanzon@mountsinai.org or call 212-523-8698.

The Society for Simulation in Healthcare declared September 11-15, 2017, as an inaugural simulation week with a focus on celebrating the professionals who work in health care simulation to improve the safety, effectiveness, and efficiency of health care.

“CAMS invited the simulation centers in the tristate area to a joint celebration through a symposium,” said Dr. Loanzon. “This inaugural celebration was intended to powerfully affirm the tristate region’s successes, opportunities, and myriad possibilities to be the best in what we do so well individually and collectively.”

AI Spotlight: Predicting Risk of Death in Dementia Patients

Kuan-lin Huang, PhD, Assistant Professor of Genetics and Genomic Sciences at the Icahn School of Medicine at Mount Sinai

Dementia is a neurodegenerative disorder that affects cognitive function, including memory and reasoning. It is also a contributing factor in death. According to the Centers for Disease Control and Prevention, dementia is currently the seventh leading cause of death in the United States. Alzheimer’s disease is the most common form of dementia, accounting for approximately 70 percent of cases.

Researchers have used artificial intelligence and machine learning to help diagnose and classify dementia. But less effort has been put into understanding mortality among patients with dementia.

A group of researchers at the Icahn School of Medicine at Mount Sinai seeks to tackle this problem by developing a machine learning model to predict risks of death for a patient within 1-, 3-, 5-, and 10-year thresholds of a dementia diagnosis.

“We really want to call attention to how Alzheimer’s disease is actually a major cause of death,” says Kuan-lin Huang, PhD, Assistant Professor of Genetics and Genomic Sciences and Principal Investigator of the Precision Omics Lab at Icahn Mount Sinai.

“When people think of dementia, they think of patients losing their memory, as opposed to when people think about cardiovascular disease or cancer, they think about mortality,” says Dr. Huang. “As someone who has a family member who unfortunately passed away from Alzheimer’s disease, I’ve seen how the late stage of the disease—because you lose certain bodily functions—can become quite lethal.” In late-stage dementia, the disease destroys neurons and other brain cells, which could inhibit swallowing, breathing, or heart rate regulation, or cause deadly associated complications such as urinary tract infections or falls.

In the study, the team focused on this question: given a person’s age, specific type of dementia, and other factors, what is the risk that the person will die within a certain number of years?

For its model, the team used XGBoost, a machine learning algorithm that relies on “gradient boosting.” The algorithm combines many simple decision trees (simple “if this, then that” rules); each new tree learns from the errors made by the previous ones, and collectively the ensemble can make strong predictions.
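The boosting idea can be sketched with a toy example. This is a minimal illustration, not the study’s actual XGBoost pipeline: each “stump” (a one-split decision tree) fits the residual errors left by the ensemble so far, and the summed stumps form the final predictor.

```python
# Minimal sketch of gradient boosting with decision stumps (hypothetical toy).
# Each stump splits on a single threshold; every new stump fits the residual
# errors left by the ensemble built so far.

def fit_stump(xs, residuals):
    """Find the single threshold split that best reduces squared error."""
    best = None
    for t in sorted(set(xs)):
        left = [r for x, r in zip(xs, residuals) if x <= t]
        right = [r for x, r in zip(xs, residuals) if x > t]
        if not left or not right:
            continue
        lv, rv = sum(left) / len(left), sum(right) / len(right)
        err = sum((r - lv) ** 2 for r in left) + sum((r - rv) ** 2 for r in right)
        if best is None or err < best[0]:
            best = (err, t, lv, rv)
    _, t, lv, rv = best
    return lambda x: lv if x <= t else rv

def gradient_boost(xs, ys, n_trees=20, lr=0.5):
    """Build an ensemble in which each stump corrects the previous ones' errors."""
    ensemble = []
    preds = [0.0] * len(xs)
    for _ in range(n_trees):
        residuals = [y - p for y, p in zip(ys, preds)]
        stump = fit_stump(xs, residuals)
        ensemble.append(stump)
        preds = [p + lr * stump(x) for p, x in zip(preds, xs)]
    return lambda x: sum(lr * s(x) for s in ensemble)

# Toy data: "risk" rises sharply past a threshold.
xs = [1, 2, 3, 4, 5, 6, 7, 8]
ys = [0.1, 0.1, 0.2, 0.2, 0.8, 0.9, 0.9, 1.0]
model = gradient_boost(xs, ys)
print(model(2), model(7))  # low risk below the threshold, high risk above it
```

Production libraries such as XGBoost add regularization, second-order gradients, and efficient tree construction on top of this basic loop.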

Here’s how the study’s lead authors, Jimmy Zhang and Luo Song in Dr. Huang’s research team, leveraged machine learning to shed light on mortality in dementia.

The study used data from more than 40,000 unique patients from the National Alzheimer’s Coordinating Center, a database spanning about 40 Alzheimer’s disease centers across the United States. The model achieved an area under the receiver operating characteristic curve (AUC-ROC) score of more than 0.82 across the 1-, 3-, 5-, and 10-year thresholds. Compared with an AUC-ROC of 0.5, which corresponds to random guessing, the model performed reasonably well in predicting a dementia patient’s mortality, but it still has room for improvement. By conducting stratified analyses within each dementia type, the researchers also identified distinct predictors of mortality across eight dementia types.
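The AUC-ROC metric can be made concrete with a small sketch. This is an illustrative toy, not the study’s code: AUC-ROC is the probability that the model ranks a randomly chosen positive case (here, death within the window) above a randomly chosen negative case, so 0.5 is chance and 1.0 is perfect.

```python
# Illustrative sketch: compute AUC-ROC by comparing every positive/negative
# pair of risk scores. Ties count as half a correct ranking.

def auc_roc(scores, labels):
    """Fraction of positive/negative score pairs the model ranks correctly."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical risk scores for six patients (label 1 = died within the window).
scores = [0.9, 0.8, 0.7, 0.4, 0.3, 0.2]
labels = [1,   1,   0,   1,   0,   0]
print(auc_roc(scores, labels))  # 8 of 9 pairs ranked correctly (~0.89)
```

An AUC-ROC above 0.82, as reported in the study, means the model orders most such pairs correctly.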

Findings were published in Communications Medicine on February 28.

In this Q&A, Dr. Huang discusses the team’s research.

What was the motivation for your study?

We wanted to address the challenges in dementia care: namely, to identify patients with dementia at high risk of near-term mortality, and to understand the factors contributing to mortality risk across different types of dementia.

What are the implications?

Clinically, it supports the early identification of high-risk patients, enabling targeted care strategies and personalized care. On a research level, it underscores the value of machine learning in understanding complex diseases like dementia and paves the way for future studies to explore predictive modeling in other aspects of dementia care.

What are the limitations of the study?

While our study includes nationwide data, the model still needs to be adapted to different research and clinical settings to make it more generalizable.

How might these findings be put to use?

These findings could enhance the care of dementia patients by identifying those at high risk of mortality for more personalized management strategies. On a broader scale, the study’s methodologies and insights could influence future research in predictive modeling for dementia, potentially leading to improved patient outcomes and more efficient health care systems.

What is your plan for following up on this study?

We plan to refine our dementia models by including treatment effects and genetic data, and exploring advanced deep learning techniques for more accurate predictions.


Learn more about how Mount Sinai researchers and clinicians are leveraging machine learning to improve patient lives

AI Spotlight: Mapping Out Links Between Drugs and Birth Defects

AI Spotlight: Guiding Heart Disease Diagnosis Through Transformer Models

How Mount Sinai is Using Artificial Intelligence to Improve the Diagnosis of Breast Cancer

Laurie Margolies, MD, a radiologist who is Chief of Breast Imaging at the Dubin Breast Center and Vice Chair, Breast Imaging, Mount Sinai Health System

More and more people are getting mammograms as the population ages, as more younger people are choosing to get screened, and as the benefits of accurate screening and early detection of breast cancer remain clear.

Breast cancer is the most common cancer among women in the United States, except for skin cancer. Each year, about 240,000 cases of breast cancer are diagnosed in women (and about 2,100 in men), according to the U.S. Centers for Disease Control and Prevention.

In response to this growing need, Mount Sinai has expanded its network of breast imaging sites and has deployed a new tool: artificial intelligence.

In this Q&A, Laurie Margolies, MD, a radiologist who is Chief of Breast Imaging at the Dubin Breast Center and Vice Chair, Breast Imaging, Mount Sinai Health System, explains how radiologists at the Mount Sinai Center of Excellence for Breast Cancer are leveraging the power of artificial intelligence to achieve a more precise diagnosis, which allows surgeons and oncologists to start the right treatment sooner and gives patients the best possible outcome.

How does AI help patients in the diagnosis of breast cancer?

AI is a new tool that gives a second opinion on a mammogram. It assists the radiologist; it does not replace the radiologist. It’s like having a very well-trained senior fellow sitting next to you. Multiple studies have shown that when radiologists work with AI, they find more breast cancers, and often smaller ones. What’s great about AI is that it never gets tired and can’t get distracted. But there’s no substitute for the experience of the radiologist.

How does it help with “call backs”?

This additional review can help radiologists determine instances where there is a very low probability of cancer. This helps to reduce the number of times that patients will be asked to return for another procedure to get a closer look at an area of possible concern, which many know as a “call back.” Fewer than 10 percent of women who are asked to return are typically found to have cancer. But these extra screenings make people anxious, they cost money, and they fill our breast centers with people who don’t need to be there.

How does AI work? What does the patient see?

Patients will not see any difference in the process. As your radiologist reads your mammogram or sonogram on their computer, they can access a special program that also reviews the scan. It takes a few extra minutes. In many cases, AI reviews the scan before the radiologist does and highlights areas to which the radiologist should pay extra attention.

Who can access this service?

Anyone who receives a mammogram or breast ultrasound performed at Mount Sinai will have access to this AI capability. There is no extra cost to patients.

AI Spotlight: Mapping Out Links Between Drugs and Birth Defects

Avi Ma’ayan, PhD, Director of the Mount Sinai Center for Bioinformatics at the Icahn School of Medicine at Mount Sinai

Birth defects can be linked to many factors—genetic, environmental, even pure chance. Characterizing the links of any factor to congenital abnormalities is a daunting task, given the vastness of the problem.

In the face of this challenge, a team of researchers at the Icahn School of Medicine at Mount Sinai tapped artificial intelligence (AI) methods to shed light on associations between existing medications and their potential to induce specific birth abnormalities.

“We wanted to improve our understanding of reproductive health and fetal development, and importantly, warn about the potential of new drugs to cause birth defects before these drugs are widely marketed and distributed,” says Avi Ma’ayan, PhD, Professor of Pharmacological Sciences and Director of the Mount Sinai Center for Bioinformatics at Icahn Mount Sinai.

The team developed a knowledge graph—a descriptive model that maps out the relationships between entities and concepts—called ReproTox-KG to integrate data about small-molecule drugs, birth defects, and genes. In addition to constructing the knowledge graph, the team also used machine learning, specifically semi-supervised learning, to illuminate unexplored links between some drugs and birth defects.

Here’s how ReproTox-KG works as a knowledge graph to predict birth defects.

The study examined more than 30,000 preclinical small-molecule drugs for their potential to cross the placenta and induce birth defects, and identified more than 500 “cliques”—interlinked clusters between birth defects, genes, and drugs—that can be used to explain molecular mechanisms for drug-induced birth defects. Findings were published in Communications Medicine on July 17, and the platform has been made available on a web-based user interface.
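The clique idea can be sketched with a toy example. This is hypothetical illustrative data, not ReproTox-KG itself: the knowledge graph stores edges among drugs, genes, and birth defects, and a “clique” here is a drug-gene-defect triangle in which all three pairwise links exist, suggesting a molecular mechanism.

```python
# Toy knowledge-graph sketch (hypothetical entities, not ReproTox-KG data).
# Edges are stored as pairs; a clique is a (drug, gene, defect) triangle
# supported by all three edge sets.

drug_gene = {("drugA", "GENE1"), ("drugB", "GENE2")}
gene_defect = {("GENE1", "cleft_palate"), ("GENE2", "heart_defect")}
drug_defect = {("drugA", "cleft_palate")}

def find_cliques(drug_gene, gene_defect, drug_defect):
    """Return (drug, gene, defect) triangles where all three edges exist."""
    cliques = []
    for drug, gene in drug_gene:
        for g, defect in gene_defect:
            if g == gene and (drug, defect) in drug_defect:
                cliques.append((drug, gene, defect))
    return cliques

print(find_cliques(drug_gene, gene_defect, drug_defect))
# Only drugA forms a full triangle, offering a mechanistic explanation:
# drugA targets GENE1, and GENE1 is associated with cleft palate.
```

The study’s more than 500 cliques play this explanatory role at scale, linking a drug’s gene targets to the birth defects associated with those genes.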

In this Q&A, Dr. Ma’ayan, senior author of the paper, discusses ReproTox-KG and its potential impacts.

What was the motivation for your study?

The motivation for the study was to find a use case that combines several datasets produced by National Institutes of Health (NIH) Common Fund programs to demonstrate how integrating data from these resources can lead to synergistic discoveries, particularly in the context of reproductive health.

The study maps relationships between approved drugs and birth defects to flag existing drugs that are not currently classified as harmful but may pose risks to fetal development. It also provides a new global framework to assess the potential toxicity of new drugs and to explain the biological mechanisms by which some drugs known to cause birth defects may operate.

What are the implications?

Identifying the causes of birth defects is complicated and difficult. But we hope that through complex data analysis integrating evidence from multiple sources, we can improve our understanding of reproductive health and fetal development, and also warn about the potential of new drugs to cause birth defects before these drugs are widely marketed and distributed.

What are the limitations of the study?

We have not yet experimentally validated any of the predictions. There are currently no considerations of tissue and cell type, and the knowledge graph representation omits some detail from the original datasets for the sake of standardization. The website that supports the study may not be appealing to a large audience.

How might these findings be put to use?

Regulatory agencies such as the U.S. Environmental Protection Agency or the Food and Drug Administration may use the approach to evaluate the risk of applications for new drugs or other chemicals. Manufacturers of drugs, cosmetics, supplements, and foods may consider the approach to evaluate the compounds they include in products.

What is your plan for following up on this study?

We plan to use a similar graph-based approach for other projects focusing on the relationship between genes, drugs, and diseases. We also aim to use the processed dataset as training materials for courses and workshops on bioinformatics analysis. Additionally, we plan to extend the study to consider more complex data, such as gene expression from specific tissues and cell types collected at multiple stages of development.


Learn more about how Mount Sinai researchers and clinicians are leveraging machine learning to improve patient lives

AI Spotlight: Guiding Heart Disease Diagnosis Through Transformer Models

AI Spotlight: Forecasting ICU Patient States for Improved Outcomes

AI Spotlight: Guiding Heart Disease Diagnosis Through Transformer Models

Akhil Vaid, MD, left, and Girish Nadkarni, MD, MPH, right, are working to make artificial intelligence models more feasible for reading electrocardiograms, using a novel transformer neural network approach.

Electrocardiograms (ECGs) are often used by health providers to diagnose heart disease. At times, irregularities in the recordings are too subtle to be detected by human eyes but can be identified by artificial intelligence (AI).

However, most AI models for ECG analysis use a particular deep learning method called convolutional neural networks (CNNs). CNNs require large training datasets to make diagnoses, which is a limitation for rare heart diseases that lack a wealth of data.

Researchers at the Icahn School of Medicine at Mount Sinai have developed an AI model, called HeartBEiT, for ECG analysis, which works by interpreting ECGs as language.

The model uses a transformer-based neural network, a class of network that differs from conventional CNNs and serves as the basis for popular generative language models such as ChatGPT.

Here’s how HeartBEiT works as an artificial intelligence deep-learning model, and how it compares to CNNs.

HeartBEiT outperformed conventional approaches in terms of diagnostic accuracy, especially at lower sample sizes. Study findings were published in npj Digital Medicine on June 6. Akhil Vaid, MD, Instructor of Data-Driven and Digital Medicine, was lead author, and Girish Nadkarni, MD, MPH, Irene and Dr. Arthur Fishberg Professor of Medicine, was senior author.

In this Q&A, Dr. Vaid discusses the impact of this new AI model on reading ECGs.

What was the motivation for your study?

Deep learning as applied to ECGs has had much success, but most deep learning studies for ECGs use convolutional neural networks, which have limitations.

Recently, the transformer class of models has assumed a position of importance. These models function by establishing relationships between parts of the data they see. Generative transformer models such as the popular ChatGPT utilize this understanding to generate plain-language text.

Building on a generative image model, HeartBEiT creates representations of the ECG that may be considered “words,” while the whole ECG may be considered a single “document.” HeartBEiT learns the relationships between these words within the context of the document and uses this understanding to perform diagnostic tasks better.
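The “ECG as words” idea can be sketched with a toy quantizer. This is an illustrative assumption, not HeartBEiT’s actual tokenizer: continuous samples are mapped into a small discrete vocabulary, so a waveform becomes a sequence of token “words” that a transformer can model the way it models text.

```python
# Toy sketch of turning a continuous signal into discrete "words"
# (hypothetical quantizer, not HeartBEiT's actual tokenization).

def tokenize(signal, n_tokens=8, lo=-1.0, hi=1.0):
    """Quantize each sample into one of n_tokens evenly spaced bins."""
    step = (hi - lo) / n_tokens
    return [min(int((s - lo) / step), n_tokens - 1) for s in signal]

waveform = [0.0, 0.1, 0.9, -0.2, -0.8, 0.0]  # toy "ECG" samples
print(tokenize(waveform))  # e.g., [4, 4, 7, 3, 0, 4]
```

Once the signal is a token sequence, a transformer can learn which “words” tend to co-occur within a “document,” which is what lets it work with far less labeled data than a CNN trained from scratch.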

What are the implications?

Our model forms a universal starting point for any ECG-based study. When compared with popular CNN architectures on diagnostic tasks, HeartBEiT achieved equivalent performance, and better explanations of its choices, using as little as a tenth of the data required by other approaches.

Additionally, HeartBEiT generates very specific explanations of which parts of an ECG were most responsible for pushing a model towards making a diagnosis.

What are the limitations of the study?

Pre-training the model takes a fair amount of time. However, fine-tuning it for a specific diagnosis is a very quick process that can be accomplished in a few minutes.

HeartBEiT was compared against other conventional AI methods on diagnostic measures, including left ventricular ejection fraction ≤40%, hypertrophic cardiomyopathy, and ST-elevation myocardial infarction, and was found to perform better.

How might these findings be put to use?

Deployment of this model and its derivatives into clinical practice can greatly enhance the manner in which clinicians interact with ECGs. We are no longer limited to models for commonly seen conditions, since the paradigm can be extended to nearly any pathology.

What is your plan for following up on this study?

We intend to scale up the model so that it can capture even more detail. We also intend to validate this approach externally, in places outside Mount Sinai.


Learn more about how Mount Sinai researchers and clinicians are leveraging machine learning to improve patient lives

AI Spotlight: Forecasting ICU Patient States for Improved Outcomes

When Can a Patient Come Off a Ventilator? This AI Can Help Decide

AI Spotlight: Forecasting ICU Patient States for Improved Outcomes

Girish Nadkarni, MD, MPH, and Faris Gulamali

Artificial intelligence (AI) and machine learning (ML) have seen increasing use in health care, from guiding clinicians in diagnosis to helping them decide the best course of treatment. However, AI still has much unrealized potential in various health care settings.

Mount Sinai researchers are exploring bringing AI into intensive care and have developed Spatially Resolved Temporal Networks (SpaRTeN), a model that assesses high-frequency patient data and generates representations of a patient’s state in real time.

The work was presented at the Time Series Representation Learning for Health workshop on Friday, May 5, hosted by the International Conference on Learning Representations, a premier gathering dedicated to machine learning.

Hear from Girish Nadkarni, MD, MPH, Irene and Dr. Arthur Fishberg Professor of Medicine at the Icahn School of Medicine at Mount Sinai and the leader of the SpaRTeN research, and Faris Gulamali, a medical student at Icahn Mount Sinai and member of the Augmented Intelligence in Medicine and Science lab, on what lay behind creating the model and what it could achieve for patients.

What was the motivation for your study?

A growing amount of research is indicating the need to redefine critical illness by biological state rather than a non-specific illness syndrome. Advances in genomics, data science, and machine learning have generated evidence of different underlying etiologies for common ICU syndromes. As a result, patients with the exact same diagnosis can have entirely different outcomes.

What are the implications?

In the ICU, representations of a patient can be used to guide personalized treatments based on personalized diagnoses rather than generic treatments with empirical diagnoses.

What are the limitations of the study?

In this study, we only looked at using one type of data at a time in real time. For example, we looked primarily at measures of intracranial pressure. However, the ICU has many types of data being output simultaneously. Future work hopes to integrate all the different types of data such as electrocardiograms, blood pressure, and imaging to improve patient representations.

How might these findings be put to use?

These patient representations are being combined with data on medications and procedures to determine how to optimize patient treatment based on underlying state rather than common illness syndromes.

What is your plan for following up on this study?

In this study, we focused primarily on creating the algorithm and showing that it works for the case of intracranial hypertension. In future studies, we would like to integrate multiple data modalities such as imaging, electrocardiograms, and blood pressure as well as intervention-based data such as medications and procedures to determine precise empirical interventions that lead to improvements in short-term and long-term patient outcomes.


Learn more about how Mount Sinai researchers and clinicians are leveraging machine learning to improve patient lives

Computational Neuroscientist Opens Doors for New Ideas and Talent to Thrive

When Can a Patient Come Off a Ventilator? This AI Can Help Decide
