
AI Developers Should Understand the Risks of Deploying Their Clinical Tools, MIT Expert Says

AI applications for health care should be designed to function well in different settings and across different populations, says Marzyeh Ghassemi, PhD, whose work at the Massachusetts Institute of Technology (MIT) focuses on creating “healthy” machine learning (ML) models that are “robust, private, and fair.” How AI-generated clinical advice is presented to physicians also matters for reducing harm, according to Ghassemi, an assistant professor in MIT’s Department of Electrical Engineering and Computer Science and its Institute for Medical Engineering and Science. Developers, she adds, should recognize that they have a responsibility to the clinicians and patients who could one day be affected by their tools.
Video: “AI and Responsible Clinical Implementation” (JAMA)