

Study: When allocating scarce resources with AI, randomization can improve fairness

Organizations increasingly use machine-learning models to allocate scarce resources or opportunities. For instance, such models can help companies screen resumes to select job interview candidates, or help hospitals rank kidney transplant patients by their likelihood of survival.

When deploying a model, users typically strive to ensure its predictions are fair by reducing bias. This often involves techniques like adjusting the features a model uses to make decisions or calibrating the scores it generates.

However, researchers from MIT and Northeastern University argue that these fairness methods are not sufficient to address structural injustices and inherent uncertainties. In a new paper, they show how randomizing a model’s decisions in a structured way can improve fairness in certain situations. Learn more
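One way to randomize decisions in a structured fashion is a score-weighted lottery: rather than a hard top-k cutoff, candidates are drawn with probability proportional to their model scores, so marginal candidates are never deterministically excluded. The sketch below is a minimal, hypothetical illustration of that idea, not the specific method from the paper; the function name and scoring scheme are assumptions.

```python
import random

def weighted_lottery(scores, k, seed=None):
    """Allocate k slots by a score-weighted lottery instead of a strict top-k cutoff.

    `scores` maps each candidate to a nonnegative model score. Higher-scoring
    candidates are more likely to be chosen, but no candidate with a positive
    score is guaranteed exclusion -- a structured form of randomization.
    This is an illustrative sketch, not the paper's algorithm.
    """
    rng = random.Random(seed)
    pool = dict(scores)  # remaining candidate -> score
    selected = []
    for _ in range(min(k, len(pool))):
        names = list(pool)
        weights = [pool[n] for n in names]
        # Draw one candidate with probability proportional to its score,
        # then remove it from the pool so it cannot be drawn twice.
        pick = rng.choices(names, weights=weights, k=1)[0]
        selected.append(pick)
        del pool[pick]
    return selected
```

With a fixed seed the lottery is reproducible for auditing, while different seeds yield different (but equally legitimate) allocations.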

Reverse Engineering Dementia With Human Computer Interaction

With millions of Americans suffering from Alzheimer's disease and other dementias, cognitive decline is a major public health issue. The cost of dementia care is a rude awakening for many families, and patients experiencing these troubling symptoms may despair when they learn that there is no cure, only treatment. One complicating factor is that dementia can resemble other forms of cognitive decline, such as mild cognitive impairment, so diagnosis is a critical part of the process.

We may not be able to cure dementia, but tools based on HCI can at least help us figure out how to diagnose it. What is HCI? It stands for "human-computer interaction." In some ways, it is exactly what it sounds like: the study of users and their behavior when using computers. But it is also a form of cognitive engineering, and it may give us a window into the human mind.

Looking at stylus-based interaction tasks, scientists are examining a range of metrics that reveal details about what people are thinking, such as eye fixation, blink rate, and pupil size. These measures, in turn, can help address the growing prevalence of cognitive impairment as we age, as Randall Davis described in this presentation, where he also showed some of the new technology coming down the pike. Learn more

Sybil FAQ

Learn more

A smarter way to streamline drug discovery

The use of AI to streamline drug discovery is exploding. Researchers are deploying machine-learning models to help them identify molecules, among billions of options, that might have the properties they are seeking to develop new medicines.

But there are so many variables to consider — from the price of materials to the risk of something going wrong — that even when scientists use AI, weighing the costs of synthesizing the best candidates is no easy task.

The myriad challenges involved in identifying the best and most cost-efficient molecules to test are one reason new medicines take so long to develop, and a key driver of high prescription drug prices.

To help scientists make cost-aware choices, MIT researchers developed an algorithmic framework that automatically identifies optimal molecular candidates, minimizing synthesis cost while maximizing the likelihood that candidates have the desired properties. The algorithm also identifies the materials and experimental steps needed to synthesize these molecules. Learn more
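The core trade-off described above can be sketched as a simple utility score: each candidate's estimated probability of having the desired property, penalized by its estimated synthesis cost. This toy ranking is an assumption for illustration only; the researchers' actual framework is far more sophisticated (it also plans synthesis steps), and all names and numbers here are hypothetical.

```python
def rank_candidates(candidates, cost_weight=1.0):
    """Rank molecular candidates by a cost-aware utility score.

    `candidates` maps a candidate name to a tuple
    (prob_desired_property, synthesis_cost). Utility is the property
    probability minus a tunable penalty on cost. Illustrative sketch only.
    """
    def utility(item):
        _name, (prob, cost) = item
        return prob - cost_weight * cost

    # Highest utility first: cheap, promising candidates rise to the top.
    return [name for name, _ in sorted(candidates.items(), key=utility, reverse=True)]
```

Raising `cost_weight` shifts the ranking toward cheaper molecules even when their predicted property scores are lower, which is the essence of making candidate selection cost-aware.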

Mirai FAQ

Learn more

Say Hello to Your Addiction Risk Score — Courtesy of the Tech Industry

MIT Assistant Professor of EECS and Jameel Clinic Principal Investigator Marzyeh Ghassemi spoke with New York Times Opinion Contributor Maia Szalavitz on how the task of addiction prediction and prevention could potentially perpetuate biases in medical decision making. Learn more

Explainable AI for Rational Antibiotic Discovery

Researchers now employ artificial intelligence (AI) models based on deep learning to make functional predictions about large datasets. While the concepts behind these networks are well established, their inner workings are often opaque to the user. The emerging field of explainable AI (xAI) provides model-interpretation techniques that empower life science researchers to uncover the underlying basis on which AI models make such predictions.

In this month’s episode, Deanna MacNeil from The Scientist spoke with Jim Collins of the Massachusetts Institute of Technology to learn how researchers are using explainable AI and artificial neural networks to gain mechanistic insights for large-scale antibiotic discovery. Learn more
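One common family of model-interpretation techniques the xAI field provides is occlusion: replace one input feature at a time with a baseline value and measure how much the model's output changes. The sketch below is a generic, minimal illustration of that idea, assuming a black-box `predict` function; it is not the specific method used in the antibiotic-discovery work.

```python
def occlusion_importance(predict, x, baseline=0.0):
    """Estimate per-feature importance for a black-box model by occlusion.

    Each feature of input `x` is replaced in turn with `baseline`, and the
    drop in the model's output is recorded as that feature's importance.
    A minimal sketch of one standard xAI technique, for illustration only.
    """
    base_pred = predict(x)
    importances = []
    for i in range(len(x)):
        occluded = list(x)
        occluded[i] = baseline  # mask out feature i
        importances.append(base_pred - predict(occluded))
    return importances
```

For a model that truly relies on a feature, masking it shifts the prediction substantially; features the model ignores get near-zero importance, giving researchers a first mechanistic clue about what the network has learned.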