
What AI Means for Science in 2025


This post was originally published by Jacob Marks on the ML @ Voxel51 blog.

On October 8th, 2024, the Royal Swedish Academy of Sciences announced that the 2024 Nobel Prize in Physics had been awarded to John J. Hopfield and Geoffrey Hinton for “foundational discoveries and inventions that enable machine learning with artificial neural networks”. This announcement caused quite a stir, as many in the physics community felt that the most prestigious honor in their field had been awarded for breakthroughs in artificial intelligence rather than for a breakthrough in physics itself.

The very next day — before the world had time to fully process the curious case of the AI physics prize — one half of the 2024 Nobel Prize in Chemistry was awarded to Demis Hassabis and John Jumper of Google DeepMind for their work on “protein structure prediction,” AKA AlphaFold. The commotion quickly escalated, with AI apologists declaring that deep learning had overthrown traditional scientific exploration and natural scientists arguing that AI attracts enough attention already without usurping the crown jewels of scientific excellence.

AI taking center stage in not one but two Nobel Prize awards raises the question: what does AI mean for science in the coming years? In this blog post, I give my attempt at an answer. I’ll explain why I believe AI played distinct roles in the 2024 Chemistry and Physics awards, and how these awards reflect the character of AI’s impact thus far on the natural sciences.

👋 Who am I? I’m a machine learning engineer/researcher at Voxel51. In 2022, I completed my Ph.D. in theoretical physics. If you’re curious, see the bio at the end of this article.

🤖 What is AI? Definitions differ, but roughly speaking, artificial intelligence is the umbrella term for research and applications that enable computers to reason, make decisions, and intelligently interact with the natural world. Machine learning refers to the subset of AI concerned with the algorithms and models that underlie many AI systems. Much of what we now call AI would not have been called AI even a few years ago.

Chapter 1: Why Do I Care? And Why Should You?

In 2014, I spent the summer conducting research at the Large Hadron Collider (LHC) at CERN in Switzerland. Greenhorn rising college sophomore that I was, I had little idea what I had signed myself up for. I expected to draw Feynman diagrams and solve equations by hand. In reality, my summer involved engineering and pruning features that were fed into random forest classifiers and other machine learning models.

What I failed to realize was that while the physics of proposing candidate particles and processes to explain natural phenomena is incredibly challenging, the detection of said particles hinges on our ability to parse and filter unfathomable quantities of data. When the LHC is running, it generates approximately 1 petabyte of collision data per second, and a single experimental run can generate collision data continuously for up to 20 hours. For reference, at a fast write-to-disk speed of 520 MB/s, it would take almost a month to write a single petabyte to disk, not to mention the exorbitant storage costs.
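If you want to sanity-check that arithmetic yourself, the back-of-the-envelope calculation fits in a few lines of Python:

```python
# Back-of-the-envelope check of the LHC storage numbers quoted above.
PETABYTE = 1e15          # bytes
WRITE_SPEED = 520e6      # bytes/second -- the fast disk speed cited above

seconds = PETABYTE / WRITE_SPEED     # ~1.9 million seconds
days = seconds / (60 * 60 * 24)      # convert to days

print(f"{days:.1f} days to write one petabyte at 520 MB/s")
# -> 22.3 days, i.e. almost a month -- for one second of raw collision data
```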

To overcome these logistical challenges, researchers at CERN use machine learning models to rapidly decide whether to keep or discard a given data point. This on-the-fly filtering makes it possible to achieve the level of statistical significance needed to verify the existence of the Higgs boson, given practical storage, time, and cost constraints. In other words, AI enabled this scientific discovery and many others! Note that in 2014, decision trees would not have been considered “AI”, but rather data science or statistical learning.
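To make the idea concrete, here’s a minimal, purely illustrative sketch of such a keep/discard filter built on a random forest. The features, data, and threshold are all invented for illustration — this is not CERN’s actual trigger pipeline:

```python
# Minimal sketch of an ML-based keep/discard filter, in the spirit of the
# trigger systems described above. Features and threshold are hypothetical.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Pretend each event is summarized by a few engineered features
# (e.g. total transverse energy, jet count, missing energy).
X_train = rng.normal(size=(10_000, 3))
y_train = (X_train.sum(axis=1) > 1.5).astype(int)  # stand-in for "interesting"

clf = RandomForestClassifier(n_estimators=50, random_state=0)
clf.fit(X_train, y_train)

# On-the-fly filtering: keep only events the model scores above a threshold.
incoming_events = rng.normal(size=(1_000, 3))
keep_mask = clf.predict_proba(incoming_events)[:, 1] > 0.9
kept = incoming_events[keep_mask]
print(f"kept {len(kept)} of {len(incoming_events)} events")
```

In production such a filter has to run under severe latency constraints, which is why simple, fast models like decision-tree ensembles were the tool of choice.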

Over the subsequent decade, I’ve had a number of experiences applying AI for scientific research: As a physics Ph.D. student in 2019, I presented at the APS March Meeting about using AI to accelerate the convergence of a certain kind of pesky computation used to simulate quantum systems on classical computers. During a residency at Google X, I developed a machine learning method for constructing physical mixtures known as thermal states by jointly leveraging quantum and classical computers.

But my experiences as a scientist applying machine learning in my research are far from unique. According to a report by Australia’s National Science Agency, 7.2% of all published research papers in physics and astronomy in 2022 were related to artificial intelligence, along with 3.6% in chemistry and 4.8% in biochemistry, genetics, and molecular biology. What does this actually look like? A few examples:

  • A friend in materials science spent their Ph.D. building adaptive AI-driven materials discovery laboratories;

  • A grad-school roommate in bioengineering spent his Ph.D. designing graph neural networks for drug discovery;

  • Collaborators have left academia to start AI-based computing companies.

Since finishing my Ph.D. and joining Voxel51, however, I’ve been in a rather unique position to see the impact that AI is having on science. Because so many of our community members are building in the open, over the past two years I’ve had the distinct privilege of engaging with scientists conducting cutting-edge AI-enabled research in just about every domain you can imagine. These experiences have led me to believe that AI is already transforming biology, chemistry, and medicine, with similar advances in the physical sciences soon to come.

These experiences have deeply informed my views. Still, it bears stating: all thoughts and opinions are my own and do not reflect the beliefs of any organization or institution.

Chapter 2: The AI Revolution in Biology and Chemistry

In January of 2021, shortly after Google DeepMind released AlphaFold2, I wrote a blog post on their “solution” to the protein folding problem. While I praised the DeepMind team on their progress and acknowledged that “even if AlphaFold never improves beyond its current state, it will still prove useful in medical research,” this praise was couched in caution: skepticism about both the generalizability and interpretability of the model, and about the philosophical notion of a machine learning model “solving” a scientific problem. What does it really mean for a machine learning model to solve a scientific problem?

My 2021 blog post ended with:

"AlphaFold is not a solution to the protein folding problem, but it is absolutely a breakthrough. Any machine learning based approach to science will need to address practical and philosophical challenges. For now, we should appreciate DeepMind’s colossal step forward, and we should prepare for unprecedented progress in the near future. This is only the beginning.”

It is safe to say that we are seeing the first inklings of that unprecedented progress.

According to a November 2024 blog post from Google, AlphaFold2 has already been cited more than 20,000 times and has been used to make discoveries in malaria vaccines and cancer treatments. This technology has also entered the commercial phase, as companies like Isomorphic Labs, Cradle Bio, and Etcembly are now pioneering the discovery and development of proteins, antibodies, and therapeutics using similar generative AI models.

At a high level, AlphaFold works on the same principles as large language models (LLMs) like GPT-4. The model is generative, meaning that given an input sequence of molecules, AlphaFold will generate a joint 3D structure in much the same way that LLMs generate textual responses to user prompts.

For AlphaFold as for LLMs, the inputs and outputs are known as tokens — the discrete entities that make up the model’s vocabulary. LLM tokens are text characters, words, and punctuation marks. If you’re curious about how tokenization works, copy and paste this paragraph here to get a glimpse, and check out this video tutorial by Andrej Karpathy if you really want to go down the rabbit hole.
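If you’d rather poke at tokenization programmatically, here’s a tiny example using OpenAI’s open-source tiktoken library — one tokenizer among many, and the exact token splits vary by model:

```python
# Quick look at how an LLM tokenizer splits text into discrete tokens.
# Requires: pip install tiktoken
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")  # the GPT-4-family tokenizer
text = "AlphaFold reads in amino acids, not words."
token_ids = enc.encode(text)

print(token_ids)                             # integer IDs
print([enc.decode([t]) for t in token_ids])  # the text piece behind each ID
```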

Whereas LLMs work with text tokens, AlphaFold reads in and generates tokens representing amino acids, nucleotides, and atoms: the model operates directly on (representations of) biological and chemical inputs. In the parlance of AI, the protein folding algorithms of old had very strong [inductive biases](https://en.wikipedia.org/wiki/Inductive_bias): they baked assumptions about physics and chemistry directly into the algorithm. AlphaFold instead employs self-attention to learn the relationships between these tokens from vast quantities of data.
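For the curious, here is what a single head of self-attention boils down to, in a bare-bones NumPy sketch. This is a pedagogical toy, not AlphaFold’s actual architecture, which stacks many elaborated variants of this operation:

```python
# Bare-bones single-head self-attention: every token builds its output as a
# data-dependent weighted average over all tokens. Pedagogical sketch only.
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    """X: (n_tokens, d_model) token embeddings; W*: learned projections."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])          # pairwise relevance
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over tokens
    return weights @ V                               # mix token information

rng = np.random.default_rng(0)
d = 16
X = rng.normal(size=(5, d))                 # 5 tokens (e.g. amino acids)
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))
print(self_attention(X, Wq, Wk, Wv).shape)  # (5, 16)
```

The key point: nothing in this operation hard-codes chemistry. Which tokens attend to which is learned entirely from data, which is exactly the trade the field made — weaker inductive biases in exchange for far greater flexibility.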

In 2021, I wrote, “It’s quite possible that artificial intelligence helps us to… find the right language” to describe protein folding. Hassabis and Jumper won the 2024 Nobel Prize in Chemistry because it appears that AlphaFold has done precisely that. In May of 2024, DeepMind and Isomorphic Labs released AlphaFold3, extending their scope from protein folding to predicting “the structure and interactions of all of life's molecules.”

We’re still in the early days, and we still need to take appropriate caution when working with the outputs of these models. But biology and chemistry research have undeniably entered a new era — one that was unthinkable just five years ago.

Chapter 3: Where Physics Meets AI

The story in physics is quite different. When the 2024 Physics Nobel Prize announcement made the rounds on October 8th, my friends in the physics community had a wide range of reactions, to say the least. Some of my more cynical friends saw the announcement as a ploy to attract more funding and attention to a field that some view as in crisis. On the opposite end of the spectrum, physics maximalists took the award as confirmation that “everything is physics.” As 2004 Physics Nobel Laureate David Gross writes,

“Physicists like to say that, if you look deeply into any branch of science, you’ll find physics at its core. Not every chemist, biologist or psychologist may agree with that notion, but the physicists do have a point” - David Gross, Everything Is Physics

It’s no wonder that the 2024 Physics Prize already finds itself on the Nobel Prize controversies Wikipedia page.

Physical scientists have been using machine learning methods for decades. The first instance of the term “neural network” in an astronomy publication, for example, occurred all the way back in 1986. And AI has been employed to great effect! Some of the most impactful AI applications in the physical sciences over the past year or so (in my opinion) include making particle accelerators more efficient, searching for high-energy neutrinos, controlling fusion reactions, and decoding errors on quantum computers.

The distinction between AI’s role in the physical sciences and biological/chemical sciences is clear if we adopt the taxonomy introduced in a 2023 Nature survey that asked 1600 scientists how they see the impacts of AI in research. AI advances in the physical sciences fall into the first four buckets: “faster data processing,” “accelerated computations,” “saving time and money,” and “automating data acquisition.” In some of the biological and chemical sciences, we’re seeing machine learning models generate new research hypotheses and make new discoveries.

Drawing out the comparison with AlphaFold, I believe we’re not seeing these kinds of advances yet in physics because we have yet to train a foundation model that is “physics-native” — one that “speaks” the language of physics in the same way that AlphaFold speaks the language of biology and chemistry. I’m not talking about pseudo–world models like Sora, which learn causally sensible dynamics on the scale of people, places, and things. At its core, physics is about the fundamental laws of nature; the atomic elements of our universe and their interactions; the theoretical and mathematical frameworks underpinning reality. A physics foundation model must concern itself with these same ideas.

The efforts that I’ve seen come closest to this are DeepMind’s AlphaGeometry and AlphaProof; AlphaGeometry alone solved 25 of the 30 Olympiad-level geometry problems on which it was evaluated. These models still operate on text strings, in the hopes that, after training and with appropriate test-time prompting, the model will generate mathematically valid, human-readable proofs. I don’t know exactly what a physics-native generative model would look like, but the successes of AlphaFold and AlphaGeometry give me hope that with the right vocabulary and the right dataset, physics will one day benefit from the same variety of AI-enabled discovery.

Why, then, was the 2024 Nobel Prize in Physics awarded for developments in AI?

It makes sense when contributions to machine learning are viewed as a technological export of sorts from physics to other fields. It’s no secret that physics has informed and inspired many of the crucial components of neural networks and machine learning systems. To name just two examples: the physical concept of momentum is an essential element of the most popular neural network optimization routine, Adam, and the physical process of diffusion inspired some of today’s most powerful image and video generation models. Hopfield was a statistical and condensed matter physicist by training and primary departmental affiliation, and neural networks are categorized under the subheading of condensed matter physics on the preprint server arXiv.
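To see the momentum analogy concretely, here is the textbook momentum update in a few lines of Python — a simplified sketch; Adam layers adaptive, per-parameter scaling on top of this same accumulated-velocity idea:

```python
# The physics-inspired momentum update: the parameter "velocity" accumulates
# gradients, so optimization coasts through noisy or flat regions.
# Textbook rule only; Adam adds per-parameter adaptive scaling on top.
def momentum_step(params, velocity, grad, lr=0.01, beta=0.9):
    velocity = beta * velocity + grad    # accumulate a running "momentum"
    params = params - lr * velocity      # move along the smoothed direction
    return params, velocity

# Toy usage: minimize f(x) = x^2, whose gradient is 2x.
x, v = 5.0, 0.0
for _ in range(300):
    x, v = momentum_step(x, v, grad=2 * x)
print(round(x, 4))  # ~0.0
```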

2024 is not the first time the Nobel Prize in Physics has been awarded for a technological export. In 2000, Jack Kilby was awarded a portion of the prize “for his part in the invention of the integrated circuit,” and back in 1909, Guglielmo Marconi and Ferdinand Braun received the award “in recognition of their contributions to the development of wireless telegraphy.” Contributions to neural networks represent a slightly more abstract export than these, but there is solid precedent. It’s also worth noting that in the 118 years that the prize has been awarded, only twice has the committee used the word “enabled” in its citation: once in 2014 (“for the invention of efficient blue light-emitting diodes which has enabled bright and energy-saving white light sources”) and once in 2024.

By no means does this mean that AI belongs under the umbrella of physics or that physics is solely responsible for AI. If anything, the 2024 Nobel Prize in Physics is an acknowledgment of just how deeply physics and AI are in conversation with each other.

Chapter 4: Where Are We Headed?

Artificial intelligence is rapidly evolving: 2023 saw 220,000 new AI publications, and the total number of AI projects on GitHub increased by 59.3% in just one year. At the same time, the scientific discourse is constantly changing. Something that seems impossible today may be reality next year.

As we enter 2025, it is clear that science and AI will each play an even greater role in the other’s future. The Trillion Parameter Consortium launched in late 2023 to bring together leading organizations in advancing AI for science, the Schmidt Sciences Foundation recently introduced an AI in Science Postdoctoral Fellowship, and Google just announced a $20M fund for AI and Science. 2024 was the first year that Nobel Prizes were awarded for AI innovations, but it certainly won’t be the last.

With machine learning models becoming embedded irrevocably into scientific research, it is more important than ever that we use these models safely and responsibly, maintain strong experimental hygiene, and strive for integrity at every turn. Both open-source AI and open science will be key to ensuring that applications of AI in science benefit all.

What This Post Does Not Cover

For the sake of brevity, this blog post maintains a (relatively) narrow focus on the natural sciences, drawing out a dichotomy between biology and chemistry on the one hand and the physical sciences on the other. The lines between disciplines are murkier than ever, and AI is also being deployed to great effect in neuroscience, as well as in applied settings in medicine, battery design, and climate modeling.

Beyond the direct impacts outlined in this post, the Cambrian explosion in AI and its associated compute demands are already spurring unprecedented investment into alternative computing platforms. 2024 has seen $1.5B in venture funding directed toward quantum computing startups, and companies like Normal Computing, Lightmatter, and FinalSpark (among others) are pioneering thermodynamic computing, photonic computing, and biological computing, respectively. These efforts will push the scientific world forward as well, just as the semiconductor revolution drove innovation in materials science and condensed matter physics.

Acknowledgments

Thank you to Dan Gural and Amara McCune for their feedback and suggestions on this blog!

Biography

Jacob Marks is a Senior Machine Learning Engineer and Researcher at Voxel51, where he conducts research in representation learning, interpretability, and data-centric AI. He also leads open-source efforts in search and generative AI for the FiftyOne data-centric AI toolkit, including building VoxelGPT and integrations with Hugging Face, vector databases, and more.

Prior to joining Voxel51, Jacob worked at Google X, Samsung Research, and Wolfram Research. In a past life, he was a theoretical physicist: in 2022, he completed his Ph.D. at Stanford, where he investigated quantum phases of matter.
