At the turn of the 20th century, W.E.B. Du Bois wrote about the conditions and culture of Black people in Philadelphia, documenting also the racist attitudes and beliefs that pervaded the white society around them. He described how unequal outcomes in domains like health could be attributed not only to racist ideas, but to racism embedded in American institutions.
Almost 125 years later, the concept of “systemic racism” is central to the study of race. Centuries of data collection and analysis, like the work of Du Bois, document the mechanisms of racial inequity in law and institutions, and attempt to measure their impact.
“There’s extensive research showing racial discrimination and systemic inequity in essentially all sectors of American society,” explains Fotini Christia, the Ford International Professor of Social Sciences in the Department of Political Science, who directs the MIT Institute for Data, Systems, and Society (IDSS), where she also co-leads the Initiative on Combatting Systemic Racism (ICSR). “Newer research demonstrates how computational technologies, typically trained or reliant on historical data, can further entrench racial bias. But these same tools can also help to identify racially inequitable outcomes, to understand their causes and impacts, and even contribute to proposing solutions.” Learn more
Despite their impressive capabilities, large language models are far from perfect. These artificial intelligence models sometimes “hallucinate” by generating incorrect or unsupported information in response to a query.
Due to this hallucination problem, an LLM’s responses are often verified by human fact-checkers, especially if a model is deployed in a high-stakes setting like health care or finance. However, validation processes typically require people to read through long documents cited by the model, a task so onerous and error-prone it may prevent some users from deploying generative AI models in the first place.
To help human validators, MIT researchers created a user-friendly system that enables people to verify an LLM’s responses much more quickly. With this tool, called SymGen, an LLM generates responses with citations that point directly to the place in a source document, such as a given cell in a database. Learn more
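To illustrate the underlying idea, here is a minimal Python sketch of citation-grounded generation: the model emits symbolic placeholders that reference fields in the source data, and the system resolves them while recording where each value came from. The record, template, and helper below are hypothetical and are not SymGen's actual interface:

```python
import re

# Hypothetical source data the model is asked to summarize.
source_record = {
    "player": "J. Smith",
    "points": 24,
    "rebounds": 11,
}

# Hypothetical model output: prose with symbolic references into the record,
# rather than free-form (and possibly hallucinated) values.
model_output = "{player} finished with {points} points and {rebounds} rebounds."

def resolve_with_citations(template, record):
    """Replace each {field} placeholder with its value, tagged with its source."""
    def substitute(match):
        key = match.group(1)
        if key not in record:
            # An unresolvable reference is flagged, not silently invented.
            return f"[UNVERIFIED: {key}]"
        return f"{record[key]} [source: {key}]"
    return re.sub(r"\{(\w+)\}", substitute, template)

print(resolve_with_citations(model_output, source_record))
# J. Smith [source: player] finished with 24 [source: points] points
# and 11 [source: rebounds] rebounds.
```

Because every concrete value carries a pointer back to its source field, a human checker can verify each claim at a glance instead of rereading the entire cited document.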
BOSTON (WHDH) - Dr. Regina Barzilay is a cancer survivor who is using her experience to help others.
The Massachusetts Institute of Technology professor helped develop a way for artificial intelligence to analyze mammograms, using an algorithm that can detect small changes that might otherwise go unnoticed. That means it may be possible to predict a developing cancer up to five years earlier.
“Once we finished developing the system we looked back at my own mammograms and it’s quite clear that my cancer was seen by machines at least two years before my diagnosis,” Barzilay said.
Now, 10 years cancer-free, Barzilay said her cancer battle led her to study ways to use AI predictive technology. Learn more
Today, the U.S. Department of Health and Human Services (HHS), through the Advanced Research Projects Agency for Health (ARPA-H), announced funding for the Transforming Antibiotic R&D with Generative AI to stop Emerging Threats (TARGET) project, which will use AI to speed the discovery and development of new classes of antibiotics. This program is another action in support of the United States' longstanding commitment to combating antimicrobial resistance (AMR), from groundbreaking innovation to international collaboration. The U.S. is a global leader in the fight against AMR, with a demonstrated track record of protecting people, animals, and the environment from this threat both domestically and abroad.
“Antibiotic resistance is a real and urgent threat affecting millions of people. We need to prevent infections and conserve the antibiotics we have. We also urgently need new drugs to treat these increasingly resistant infections. This project will use AI to speed this needed innovation and help ensure we have the medicines we need to keep people alive,” said Secretary Xavier Becerra. Learn more
When I first became a doctor, I cared for an older man whom I’ll call Ted. He was so sick with pneumonia that he was struggling to breathe. His primary-care physician had prescribed one antibiotic after another, but his symptoms had only worsened; by the time I saw him in the hospital, he had a high fever and was coughing up blood. His lungs seemed to be infected with methicillin-resistant Staphylococcus aureus (MRSA), a bacterium so hardy that few drugs can kill it. I placed an oxygen tube in his nostrils, and one of my colleagues inserted an I.V. into his arm. We decided to give him vancomycin, a last line of defense against otherwise untreatable infections.
Ted recovered with astonishing speed. When I stopped by the next morning, he smiled and removed the oxygen tube, letting it dangle near his neck like a pendant. Then he pointed to the I.V. pole near his bed, where a clear liquid was dripping from a bag and into his veins.
“Where did that stuff come from?” Ted asked.
“The pharmacy,” I said.
“No, I mean, where did it come from?”
At the time, I could barely pronounce the names of medications, let alone hold forth on their provenance. “I’ll have to get back to you,” I told Ted. He was discharged before I could. But, in the years that followed, I often thought about his question. Every day, I administer medicines whose origins are a mystery to me. I occasionally meet a patient for whom I have no effective treatment to offer, and Ted’s inquiry starts to seem existential. Where do drugs come from, and how can we get more of them? Learn more
Organizations are increasingly utilizing machine-learning models to allocate scarce resources or opportunities. For instance, such models can help companies screen resumes to choose job interview candidates or aid hospitals in ranking kidney transplant patients based on their likelihood of survival.
When deploying a model, users typically strive to ensure its predictions are fair by reducing bias. This often involves techniques like adjusting the features a model uses to make decisions or calibrating the scores it generates.
However, researchers from MIT and Northeastern University argue that these fairness methods are not sufficient to address structural injustices and inherent uncertainties. In a new paper, they show how randomizing a model’s decisions in a structured way can improve fairness in certain situations. Learn more
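To illustrate the flavor of such an approach, here is a minimal Python sketch of one simple form of randomized allocation, a score-weighted lottery. This is an illustrative stand-in under assumed inputs, not the authors' exact method; the candidate scores and function name are invented:

```python
import random

# Hypothetical model scores for four candidates competing for two slots.
candidates = {"A": 0.91, "B": 0.90, "C": 0.88, "D": 0.55}
k = 2  # number of slots to allocate

def weighted_lottery(scores, k, seed=None):
    """Sample k winners without replacement, weighted by model score."""
    rng = random.Random(seed)
    pool = dict(scores)
    winners = []
    for _ in range(min(k, len(pool))):
        names = list(pool)
        weights = [pool[name] for name in names]
        pick = rng.choices(names, weights=weights, k=1)[0]
        winners.append(pick)
        del pool[pick]  # remove the winner so no one is picked twice
    return winners

print(weighted_lottery(candidates, k, seed=42))  # two winners, weighted by score
```

With near-tied scores like those of A, B, and C, the lottery gives each candidate a meaningful chance at the two slots instead of letting a 0.01 score gap, which may be mostly noise, decide the outcome deterministically, while the clearly weaker D still rarely wins.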
With many millions of Americans suffering from Alzheimer’s disease and other dementias, cognitive decline is a major public health issue. The cost of dementia care is a rude awakening for many families, and patients experiencing its troubling symptoms may despair when they hear that there’s really no “cure,” just treatment.
One of the problems is that dementia can look a lot like other forms of cognitive decline, such as milder age-related impairment. So a key part of the process is diagnosis.
We may not be able to cure dementia. But we can at least get help figuring out how to diagnose it with tools based on something called HCI.
What is HCI? It stands for ‘human-computer interaction’. In some ways, it’s pretty much what it sounds like: the study of users and how they behave when using computers. But it’s also a form of cognitive engineering, and it may give us a window into the human mind.
In stylus-based interaction tasks, scientists are examining quite a few metrics that reveal details about what a person is thinking: eye fixation, blink rate, pupil size, and more.
That in turn can help address the growing epidemic of cognitive impairment as we age (as presented by Randall Davis in this presentation, where Davis also showed some of the new technology coming down the pike). Learn more
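To make those metrics concrete, here is a minimal Python sketch of how a few basic stylus features might be computed from digitized pen strokes. The data format, feature set, and function name are assumptions for illustration, not the actual pipeline behind Davis's work:

```python
import math

# Hypothetical digitized pen data: each stroke is a list of (x, y, t_seconds).
strokes = [
    [(0.0, 0.0, 0.00), (1.0, 0.5, 0.20), (2.0, 1.0, 0.45)],
    [(2.1, 1.0, 1.30), (3.0, 1.2, 1.55)],
]

def stylus_metrics(strokes):
    """Compute simple pen features: ink time, pause time, average speed."""
    # Time the pen spends on the surface, summed across strokes.
    ink_time = sum(s[-1][2] - s[0][2] for s in strokes)
    # Hesitation: pen-up gaps between consecutive strokes.
    pause_time = sum(strokes[i + 1][0][2] - strokes[i][-1][2]
                     for i in range(len(strokes) - 1))
    # Total ink distance, from consecutive sample points within each stroke.
    distance = sum(math.dist(p[:2], q[:2])
                   for s in strokes for p, q in zip(s, s[1:]))
    speed = distance / ink_time if ink_time else 0.0
    return {"ink_time_s": ink_time, "pause_time_s": pause_time, "avg_speed": speed}

print(stylus_metrics(strokes))
```

Features like these, aggregated over a drawing task, are the sort of low-level signals an HCI-based screening tool could feed into a downstream classifier.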