
Alvira Tyagi is a first-year medical student at the Icahn School of Medicine at Mount Sinai. She was part of a research team that examined the limitations of ChatGPT Health in a study published in Nature Medicine.
Most first-year medical students spend their time mastering anatomy, memorizing biochemical pathways, and adjusting to the pace of clinical training. For Alvira Tyagi, that first year coincided with an opportunity to understand the rapid transformation in how patients seek health information with AI tools.
“In January, OpenAI launched ChatGPT Health, and I was immediately curious as to how people were using it,” she says. ChatGPT Health is a service dedicated to answering health and wellness questions, with options to connect to medical records and wellness apps.
“We set out to test how well ChatGPT Health handles clinical urgency—specifically, whether it steers users with serious symptoms toward emergency care,” she says.
The research team, comprising several physicians and members of Mount Sinai’s Windreich Department of AI and Human Health (AIHH), conducted a study in which they posed clinical scenarios to ChatGPT Health and compared its triage recommendations against gold-standard decisions made by physicians following medical society guidelines.
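The evaluation the team describes, posing scenarios and scoring the chatbot's triage level against a physician gold standard, can be sketched in a few lines. This is a hypothetical illustration, not the study's actual code: the urgency scale, function names, and sample data below are all assumptions for the sake of the example.

```python
# Hypothetical sketch of a triage-evaluation harness: pose scenarios,
# record the model's triage level, and compare against a physician
# gold standard. The urgency scale and data here are illustrative.
from collections import Counter

# Assumed ordered urgency levels, least to most urgent.
LEVELS = ["self_care", "primary_care", "urgent_care", "emergency"]
RANK = {level: i for i, level in enumerate(LEVELS)}

def classify(model_level: str, gold_level: str) -> str:
    """Label one scenario as correct, under-triaged, or over-triaged."""
    diff = RANK[model_level] - RANK[gold_level]
    if diff == 0:
        return "correct"
    return "under_triage" if diff < 0 else "over_triage"

def summarize(results):
    """results: list of (model_level, gold_level) pairs -> rate per outcome."""
    counts = Counter(classify(m, g) for m, g in results)
    n = len(results)
    return {k: counts.get(k, 0) / n
            for k in ("correct", "under_triage", "over_triage")}

# Illustrative run: two of the four scenarios are under-triaged emergencies.
pairs = [
    ("emergency", "emergency"),      # correct
    ("urgent_care", "emergency"),    # under-triaged
    ("primary_care", "emergency"),   # under-triaged
    ("self_care", "self_care"),      # correct
]
print(summarize(pairs))  # under_triage rate here is 0.5
```

A design note: ranking the levels on an ordinal scale lets the same comparison distinguish under-triage (the dangerous direction, sending an emergency to self-care) from over-triage (the costly one), which is exactly the asymmetry the study's findings hinge on.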

ChatGPT Health, launched in January 2026, is a service on ChatGPT that lets users ask questions about health and wellness. Beyond asking the chatbot questions, users can sync wearables to the service or upload lab results and ask it to explain them.
The team found that the service correctly triaged textbook emergencies. However, it under-triaged more than half of true emergencies, and its suicide-crisis safety alerts were inconsistent and incomplete. The full findings, with Ms. Tyagi as second author, were published as “ChatGPT Health performance in a structured test of triage recommendations” in Nature Medicine in February.
“I did not expect to be involved in AI-driven health care research so early as a student,” says Ms. Tyagi. “Being part of work that could directly impact patient outcomes has been incredibly meaningful.” Read on to learn how she began working at the intersection of AI and health care, and why it is important for students to become familiar with this rapidly evolving field.
How did this research project get started?
It started with me shadowing Dr. Ramaswamy in the Urology Department. Between surgeries, we talked about our shared interest in AI in health care, and I learned that Mount Sinai had a robust department focused on AI research.
We continued having conversations about AI in health care, and when OpenAI released ChatGPT Health, the discussions intensified. Immediately, we were texting about the implications of this tool, which coalesced into the idea of a study to examine it.
The project started with just the two of us, but with help from AIHH leadership and other physicians, we found collaborators and were able to begin the study quickly.
What was it like being on the research project as a student?
At first, I was intimidated. I was a first-year student working alongside physicians with far more experience in AI and clinical medicine than I had. It took some time to realize that I didn’t need to match their background to contribute meaningfully. I brought a different perspective. I could think through how someone my age would realistically use a tool like ChatGPT Health—how we’d phrase questions, what we might take at face value, and where misunderstandings could happen. That lens helped us step outside a purely clinical viewpoint.
We knew we needed to move quickly. ChatGPT Health was widely used from the moment of its release, and we felt a responsibility to evaluate it while people were actively relying on it. We completed the data collection within two weeks because we wanted to understand its safety profile and identify any limitations as early as possible. Our goal was not to diminish the value of AI in health care, but to approach it thoughtfully by examining where it performs well and where caution may be warranted.
Was it hard balancing school work and being on this project?
My school work always came first, and I was careful to keep that as my priority. Because of that, much of the research work happened in the evenings. It could be demanding at times, but I truly enjoyed it. Being part of a project that was unfolding in real time, and working alongside people who made the process engaging and collaborative, felt energizing rather than exhausting.
What also made this project so meaningful was that it never felt disconnected from my education. It was a different kind of learning: hands-on, fast-paced, and collaborative. There was constant progress and discussion, and that experience offered something you simply cannot replicate in a classroom.
The structure of the medical education program at the Icahn School of Medicine also helped tremendously. The flexibility and autonomy built into our curriculum made it possible to take on a project like this while staying on track academically. In the end, it was demanding, but it resulted in work I am genuinely proud of.
Should students be thinking about AI more?
As medical students, we’re trained to understand clinical systems and patient care. It can be easy to view AI as something reserved for computer science experts and engineers, and to assume it’s separate from us and the work we do as clinicians. But that is becoming less and less true by the day.
Patients now have direct access to AI tools, and many will arrive at doctor appointments having already used them to research symptoms or interpret medical information. At the same time, in our current health care system, patients may wait months to see a physician. In that gap, AI tools can function as a kind of interim resource—offering information, reassurance, or sometimes misinformation—before a patient ever steps into a clinic.
Because of this, it falls on us as future doctors to understand these AI health care technologies before patients come to see us. Understanding and discussing the AI-generated information a patient has already seen may soon become a routine part of taking a patient history. We cannot effectively counsel patients about tools they are using if we do not understand how those tools work, what their limitations are, and where they may fall short.
As part of a generation of physicians training alongside these technologies, we have a responsibility not only to react to AI’s presence in medicine, but to engage with it thoughtfully and proactively.
What advice do you have for students who are interested in AI research?
For students who are not sure whether they can even get started, you absolutely can. You don’t need to be an engineer or have years of technical experience to contribute meaningfully. AI research, especially in health care, needs people who can think critically, ask good questions, and communicate clearly.
For those who aren’t sure how to get started, start having conversations—with classmates, professors, and doctors. A simple conversation in between patient cases is what transformed my shadowing experience in the Urology Department into this research project. There are so many talented scientists and faculty at Mount Sinai, and simply engaging with them by asking questions, sharing your interests, and expressing curiosity can open doors. Sometimes all it takes is one thoughtful conversation to set something much larger in motion.
Being open to opportunities and willing to learn really makes a difference. I had never done AI research before this project, so stepping into it required me to get comfortable with not knowing everything. But I came to understand that AI is developing so quickly that no one has it completely figured out. Even people with years of experience are still asking questions and adjusting as the field evolves.
That realization made it feel less about being an expert and more about being engaged. You don’t have to start with deep technical knowledge; you just have to be willing to listen, learn, and contribute where you can. In a space that’s changing this fast, humility and curiosity go a long way.