Isaac Kohane, MD, PhD, Chair of the Department of Biomedical Informatics, Harvard Medical School, was the keynote speaker at the December 2025 session of AIHH Grand Rounds, hosted by the Icahn School of Medicine at Mount Sinai’s Windreich Department of AI and Human Health (AIHH).

Health care systems across the country have been increasingly using artificial intelligence (AI) systems to assist and augment what clinicians and researchers can achieve. As adoption of machine learning accelerates, thought leaders have been scrutinizing how AI is being embraced.

“Many doctors are already using these tools, such as OpenEvidence, but without visibility or oversight by health care systems,” says Isaac Kohane, MD, PhD, Chair of the Department of Biomedical Informatics, Harvard Medical School. OpenEvidence is an AI-powered clinical decision support and medical search engine.

Dr. Kohane is a prominent researcher in biomedical informatics and AI whose nearly 400 papers have been cited more than 95,000 times, according to Google Scholar. He wants to see not just more use of AI, but more responsible use. His keynote lecture, titled “A Tipping Point for Clinicians’ Influence Upon AI-Driven Clinical Decisions,” focused on where the opportunities lie for the health care industry to use AI more, but in a thoughtful way that accounts for human values and ethics.
The AIHH Grand Rounds draws clinicians and researchers who work extensively with AI, including Girish N. Nadkarni, MD, MPH, CPH, Chair of AIHH (left), and David L. Reich, MD, President of The Mount Sinai Hospital (right), who attend to learn about and discuss the latest developments in the field. A highlight of the series is not merely the lectures presented, but the discussions that follow, which help foster collaboration between researchers as they share ideas.

“I chose these topics for Grand Rounds because I view the Icahn School of Medicine and its leadership as among the most forward-looking in the country,” says Dr. Kohane, “and therefore they should be truly focused on setting an example in terms of accelerating adoption options that are both safe, and also enabling patients and clinicians to benefit from the complementarity of AI to human expertise, as well as changing the promotion process to reflect greater engagement with reproducibility and robust research.”

The AIHH Grand Rounds is a monthly seminar series that showcases developments in how AI, science, and medicine intersect, and features an open discussion to foster collaboration. The inaugural session launched in September 2025.

How should health systems think about engaging with AI as it pertains to patients, clinicians, and researchers in a way that is beneficial to all parties? Dr. Kohane discussed the following themes during the seminar.

Transforming the institution with AI

By their nature, large health care systems in the United States are high-revenue, low-margin businesses, and as a result they find it difficult to change rapidly without risking disruption.

Institutionally, AI adoption has proved more comfortable and scalable on the administrative side of operations, including reimbursement and corporate functions. AI is a critical lever, but not presently a priority for health care system spending, according to Dr. Kohane.

However, the application of AI on the clinical side, including continuity of care, clinical operations, and quality and safety, remains nascent or in pilot stages.

“It’s actually the doctors who are leading [with AI adoption], even when their own institutions are not supporting them directly,” says Dr. Kohane.

That landscape is slowly changing as health care leaders begin to engage their clinicians with AI support where it is needed now, rather than deferring it through extended, effortful multi-year governance conversations, Dr. Kohane pointed out. The incentives for using AI in the practice of medicine must be focused on improving care rather than maximizing revenue.

“And so, I anticipate that the future first adoptions will happen in specialized high-end services like concierge services, primary care, or cancer care,” he says. “But eventually, it would become a requisite for the safe practice of medicine, and for meeting the expectation of our patients, that ultimately our health care systems will be propelled into more significant engagement [with AI].”

Transforming publishing and literature review with AI

“Every part of the scientific publication process—that is, the generation of manuscripts and review of manuscripts—is going to be augmented by AI,” says Dr. Kohane. “That is going to present, or is already presenting, challenges that the whole peer-review publishing industry is not well equipped to handle.”

Dr. Kohane discussed a case study in which he started from a hypothesis he knew to be incorrect and, with AI tools, generated supporting data that were not only fictional, but designed by AI to evade the majority of fake-data detectors.

“We’re going to really have to address, first and foremost, the incentives that drive perverse behaviors,” he says. An industry that prizes publication volume, or publishing in high-profile venues, over producing work with actual scientific impact, such as important but unglamorous replication studies, is only going to encourage bad actors.

In the right hands, AI will increase the efficiency and quality of scholarly scientific review. AI can serve as a prism that allows clinical and laboratory experience to be distilled into new knowledge, forming a substrate for truly lifelong medical education. “However, we have to reset the culture and incentives,” Dr. Kohane says.

Transforming AI with human values

In an industry where urgency and time matter, AI presents a strong value proposition: it can process large datasets and execute large volumes of actions in the blink of an eye. Time-consuming tasks can be automated by AI, but when decisions about the care of individuals with unique needs are left to a normative model that adheres to overarching policies, those needs might not be met.

The solution is not to turn away from AI, but to develop personal models that account for the needs of not just the patient at hand, but also their caregivers, doctors, and any other relevant stakeholders, says Dr. Kohane. It is about building human values into an AI model, which can then flag when an individual case does not align with the normative model.

The work to develop such models falls to health care systems, says Dr. Kohane. He introduced the Human Values Project, an international initiative led by Harvard Medical School’s Department of Biomedical Informatics that aims to characterize how AI models respond to ethical dilemmas in medicine, measuring both their default behaviors and their capacity for alignment. And he proposed that researchers at the Icahn School of Medicine have the potential to develop their own human values-based AI models.

“My takeaway from presenting and participating in the AIHH Grand Rounds really stemmed not from the presentation itself, but from discussions I had afterwards with various leaders of the AI efforts,” says Dr. Kohane. “My sense was that more than most institutions, [Mount Sinai’s] leadership was willing to invest and take a chance on pilots of deployments of these technologies to learn fast and adapt fast. And at the same time, everybody recognized that this is very challenging, given our current regulatory environments and incentives.”

Dr. Kohane ended his presentation with a line of wisdom for participants to consider: “There is no one to lead this in the direction we want, other than us.”
