When Regina Barzilay was diagnosed with breast cancer in 2014, it upended her life and shifted the direction of her research. Already an accomplished computer scientist specializing in natural language processing, Barzilay found that her experience as a patient revealed both new possible applications for machine learning and a stark disconnect between the technology’s promise and its implementation in health care. “It was upsetting to see that all these great technologies are not translated into patient care,” she recalls. “I wanted to change it.” After going through her own treatment, Barzilay’s work took on an urgent new focus: could the very technologies she used in her research predict who might be at risk for breast cancer?
Researchers from the Massachusetts Institute of Technology (MIT) Jameel Clinic for Machine Learning in Health have announced the open-source release of Boltz-2, a model that predicts molecular binding affinity with unprecedented speed and accuracy, a step toward democratizing commercial drug discovery. The model is available under the highly permissive MIT license, which allows commercial drug developers to use it internally and apply their own proprietary data.
The 2024 Nobel Prize in Chemistry was awarded in part to DeepMind’s Demis Hassabis and John Jumper for the development of AlphaFold, an AI model that predicts the structure of proteins, the complex chemicals essential to making our bodies work. Since its inception, this model and others like it have been put to use in laboratories around the world, enabling new biological discoveries.
Now a team from MIT and pharmaceutical company Recursion, with support from Cancer Grand Challenges, has developed a tool that takes these principles further and may help researchers find new medicines more quickly. Called Boltz-2, this open-source generative AI model can not only predict the structure of proteins, it can also predict a protein’s binding affinity, that is, how well a potential drug is able to interact with that protein. This is crucial in the early stages of developing a new medicine.
Toddlers may swiftly master the meaning of the word “no”, but many artificial intelligence models struggle to do so. They show a high failure rate when it comes to understanding commands that contain negation words such as “no” and “not”.
That could mean medical AI models failing to realise that there is a big difference between an X-ray image labelled as showing “signs of pneumonia” and one labelled as showing “no signs of pneumonia” – with potentially catastrophic consequences if physicians rely on AI assistance to classify images when making diagnoses or prioritising treatment for certain patients.
It might seem surprising that today’s sophisticated AI models would struggle with something so fundamental. But, says Kumail Alhamoud at the Massachusetts Institute of Technology, “they’re all bad [at it] in some sense”.
Imagine a radiologist examining a chest X-ray from a new patient. She notices the patient has swelling in the tissue but does not have an enlarged heart. Looking to speed up diagnosis, she might use a vision-language machine-learning model to search for reports from similar patients.
But if the model mistakenly retrieves reports of patients with both conditions, the most likely diagnosis could be quite different: tissue swelling combined with an enlarged heart very likely points to a cardiac cause, whereas swelling without an enlarged heart could have several underlying causes.
In a new study, MIT researchers have found that vision-language models are extremely likely to make such a mistake in real-world situations because they don’t understand negation — words like “no” and “doesn’t” that specify what is false or absent.