Mount Sinai radiologists are comparing findings extracted by machine from patient discharge summaries and radiology reports with the original, human-read reports.

A patient’s electronic health record typically contains a trove of information that can help predict and manage their future health needs. But much of that information consists of unstructured or fragmented data that must first be translated into a form physicians can readily use.

A new partnership between the Mount Sinai Health System’s Department of Radiology and an Israel-based start-up, Maverick Medical AI, is exploring how to accomplish that task through the use of artificial intelligence. In a proof-of-concept study, Maverick’s deep learning and natural language processing (NLP) algorithms are being used to accurately identify co-morbidities in 1.5 million patient discharge summaries and radiology reports. If the study is successful, Maverick’s program could open the door to its use in an array of medical, research, and business applications at Mount Sinai.


David Mendelson, MD, Vice Chair of Radiology Information Technology at the Icahn School of Medicine at Mount Sinai, is playing a key role in the research. He says one of Maverick’s strengths is its ability to report secondary abnormalities in nearby organ systems that are sometimes only partially visualized, or might otherwise be overlooked, in radiological screenings.

“If someone is screened for lung cancer and the findings are negative, that’s great news for the patient,” says Dr. Mendelson. “But if natural language processing could identify secondary indications like coronary artery calcification or abnormal density of the liver, which might suggest non-alcoholic fatty liver disease, that information could prove very useful to physicians and patients. Physicians might be able to take preventive steps to improve outcomes for patients and ultimately lower health care costs downstream.”

Determining whether Maverick’s proprietary algorithm can provide that important information is the responsibility of Pamela Argiriadi, MD, Assistant Professor of Diagnostic, Molecular and Interventional Radiology at Mount Sinai. Dr. Argiriadi and a team of residents are spot-checking secondary co-morbidities extracted by the algorithm from an ocean of radiology reports and discharge summaries to determine how they compare to the original, human-read reports.

“Radiology reports contain a wealth of information, and we hope our study will shed light on how keyword phrases in those documents can be mined to provide insight into the well-being of patients,” Dr. Argiriadi says. “A major goal of ours is to improve communication with primary care providers by reporting secondary findings to them, which can result in follow-up treatment and preventive medicine.” The software can recognize these findings within the report, extract them, and flag them for the provider.
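
For illustration only, the Python sketch below shows how simple phrase matching could surface and flag secondary findings in a report. The phrase list, implications, and sample report are hypothetical placeholders; Maverick’s proprietary deep learning and NLP models are far more sophisticated than keyword matching.

```python
import re

# Hypothetical phrase-to-implication map; Maverick's actual vocabulary and models
# are proprietary and far richer than this illustration.
SECONDARY_FINDING_PHRASES = {
    "coronary artery calcification": "possible coronary artery disease",
    "hepatic steatosis": "possible non-alcoholic fatty liver disease",
    "low attenuation of the liver": "possible non-alcoholic fatty liver disease",
}


def flag_secondary_findings(report_text: str) -> list:
    """Return secondary-finding phrases found in a radiology report, for provider review."""
    lowered = report_text.lower()
    flags = []
    for phrase, implication in SECONDARY_FINDING_PHRASES.items():
        if re.search(re.escape(phrase), lowered):
            flags.append({"phrase": phrase, "implication": implication})
    return flags


# Hypothetical report text used only to demonstrate the flagging step.
sample_report = (
    "Low-dose CT chest: No suspicious pulmonary nodules. "
    "Incidental moderate coronary artery calcification. "
    "Low attenuation of the liver, suggestive of hepatic steatosis."
)

for flag in flag_secondary_findings(sample_report):
    print(f"Flag for provider: '{flag['phrase']}' -> {flag['implication']}")
```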

Yossi Shahak, Co-founder and Chief Executive Officer of Maverick Medical AI, estimates that as much as 80 percent of a patient’s health information remains untapped due to its unstructured format. Translating that raw, fragmented data into medical coding language would provide physicians with actionable clinical insights.

“We are starting with radiology and hope to expand the vocabularies across many medical subspecialties, like cardiology and gastroenterology,” says Mr. Shahak. “That expansion of our data sets could provide Mount Sinai physicians with significant value when they mine it for often overlooked chronic conditions and risk factors. In addition, the conversion from unstructured data into medical coding will help Mount Sinai improve their financial capabilities.”
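
As a rough illustration of the coding step Mr. Shahak describes, the sketch below maps extracted findings to ICD-10-CM codes through a simple lookup table. The table and code choices are assumptions made for illustration; Maverick’s actual conversion pipeline is proprietary and would involve trained models and clinician review.

```python
# Illustrative mapping from extracted findings to ICD-10-CM codes, assuming a simple
# in-house lookup table. The codes are common examples shown for illustration only.
FINDING_TO_ICD10 = {
    "coronary artery calcification": ("I25.10", "Atherosclerotic heart disease of native coronary artery"),
    "hepatic steatosis": ("K76.0", "Fatty (change of) liver, not elsewhere classified"),
}


def to_structured_codes(flagged_phrases):
    """Convert flagged free-text phrases into coded entries for downstream analytics and billing."""
    coded = []
    for phrase in flagged_phrases:
        if phrase in FINDING_TO_ICD10:
            code, description = FINDING_TO_ICD10[phrase]
            coded.append({"phrase": phrase, "icd10": code, "description": description})
    return coded


print(to_structured_codes(["coronary artery calcification", "hepatic steatosis"]))
```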
