Tag: clinical AI

MarketWatch: She survived breast cancer. Now her AI tool could help you skip annual mammograms.

As an MIT computer-science professor, Regina Barzilay was used to living on the bleeding edge of innovation, teaching computers to understand words in the nascent field of natural language processing. But when she was diagnosed with breast cancer in 2014, she was thrust into a different and, as she describes it, “really backwards” technological world. Learn more

AI steps in to detect the world’s deadliest infectious disease

As a professor and computer scientist at MIT, Regina Barzilay has spent years building AI models to detect breast cancer and lung cancer. Then, when a hospital in Sri Lanka told her it couldn't afford to buy off-the-shelf AI models for TB screenings, she agreed to build one for them.

As she got to work this past year, she says, she immediately understood why TB is at the vanguard of global health challenges with AI solutions.

"You can see TB. TB is visual. You have an x-ray. You have a label which says whether they have it or not — and you just train the model," Barzilay says, adding that it only took her a few months and less than $50,000 to make her model. "It's straightforward, very cheap, very fast to develop." Learn more
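The workflow Barzilay describes is supervised image classification: labeled chest x-rays in, a trained screening model out. As a rough sketch only, the pipeline can be shown with synthetic arrays and scikit-learn logistic regression standing in for real DICOM images and the deep networks production systems actually use; the data, dimensions, and "opacity" signal below are all invented for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-ins for 32x32 grayscale x-rays. Simulated
# "TB-positive" images get a brighter upper region so there
# is a signal for the model to learn.
n = 400
images = rng.normal(0.5, 0.1, size=(n, 32, 32))
labels = rng.integers(0, 2, size=n)      # 1 = TB, 0 = clear
images[labels == 1, :8, :] += 0.2        # simulated opacity

X = images.reshape(n, -1)                # flatten pixels into features
X_train, X_test, y_train, y_test = train_test_split(
    X, labels, test_size=0.25, random_state=0)

# "You have a label which says whether they have it or not --
# and you just train the model."
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
accuracy = model.score(X_test, y_test)
print(f"held-out accuracy: {accuracy:.2f}")
```

The point of the sketch is the shape of the problem, not the model choice: because the task reduces to images plus binary labels, the development loop stays simple, which is why Barzilay could build her model quickly and cheaply.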

AI medical tools downplay symptoms in women and ethnic minorities

Research by MIT’s Jameel Clinic in June found that AI models, such as OpenAI’s GPT-4, Meta’s Llama 3 and Palmyra-Med — a healthcare-focused LLM — recommended a much lower level of care for female patients, and suggested some patients self-treat at home instead of seeking help.

A separate study by the MIT team showed that OpenAI’s GPT-4 and other models also gave answers with less compassion towards Black and Asian people seeking support for mental health problems.

That suggested “some patients could receive much less supportive guidance based purely on their perceived race by the model”, said Marzyeh Ghassemi, associate professor at MIT’s Jameel Clinic.  Learn more

TIME100 AI 2025

Regina Barzilay is in the business of patient future-telling. That is, using machine learning AI models to predict disease—including when and how it will strike, along with how it may behave. Barzilay began pursuing this after being diagnosed with breast cancer in 2014. As a patient, she experienced the frustrating uncertainty surrounding individual prognoses. Her questions about treatments were often answered in reference to what happened to the participants of clinical trials, but she felt those answers gave her little information about her individual situation.

As an AI researcher, she knew how to address that uncertainty. “To me it was quite clear,” she says, “That's what machine learning is about.” A decade later, the AI model she and her team built, named MIRAI, can predict a patient’s risk of developing breast cancer within five years. By 2025, MIRAI had been validated on over 2 million mammograms in 48 hospitals across 22 countries.

And her future-telling continues. In 2024, Barzilay worked on an AI model that estimates the expected effectiveness of candidate flu vaccines by predicting which versions of the flu virus are likely to spread next season. She’s now applying the same concept to cancer, aiming to predict how patients, particularly those with advanced cancers, will respond to a specific treatment. “We are constantly running behind the disease,” she says. “The idea here is to be able to predict it.” Learn more

Can artificial intelligence cause medical errors? This MIT researcher shows it can.

Could a misspelled word cause a medical crisis? Maybe, if your medical records are being analyzed by an artificial intelligence system. One little typo, or even the use of an unusual word, can cause a medical AI to conclude there’s nothing wrong with somebody who might actually be quite sick. It’s a real danger, now that hospitals worldwide are deploying systems that use AI software like ChatGPT to assist in diagnosing illnesses. The potential benefits are huge; AIs can be excellent at spotting potential health problems that a human physician might miss. But new research from Marzyeh Ghassemi, a professor at the Massachusetts Institute of Technology and principal investigator at MIT Jameel Clinic, also finds that these AI tools are often remarkably easy to mislead, in ways that could do serious harm. Learn more