Tag: ai bias

Illustration in a 64-bit style depicting a prescription pill bottle, a pill, and a mouse pointer with various circles with Xs.

Say Hello to Your Addiction Risk Score — Courtesy of the Tech Industry

MIT Assistant Professor of EECS and Jameel Clinic Principal Investigator Marzyeh Ghassemi spoke with New York Times Opinion contributor Maia Szalavitz about how the task of addiction prediction and prevention could perpetuate biases in medical decision-making. Learn more
Screenshot of a video with a title that reads "AI and Responsible Clinical Implementation"

AI Developers Should Understand the Risks of Deploying Their Clinical Tools, MIT Expert Says

AI applications for health care should be designed to function well in different settings and across different populations, says Marzyeh Ghassemi, PhD (Video), whose work at the Massachusetts Institute of Technology (MIT) focuses on creating “healthy” machine learning (ML) models that are “robust, private, and fair.” The way AI-generated clinical advice is presented to physicians also matters for reducing harms, according to Ghassemi, an assistant professor in MIT’s Department of Electrical Engineering and Computer Science and its Institute for Medical Engineering and Science. And, she says, developers should be aware that they have a responsibility to the clinicians and patients who could one day be affected by their tools. Learn more
Marzyeh Ghassemi seated on a bench.

ChatGPT one year on: who is using it, how and why?

On 30 November 2022, the technology company OpenAI released ChatGPT — a chatbot built to respond to prompts in a human-like manner. It has taken the scientific community and the public by storm, attracting one million users in the first five days alone; that number now totals more than 180 million. Seven researchers told Nature how it has changed their approach. Learn more