Monday, July 28, 2025

How Physics Made Modern AI Possible

Math and Science News from Quanta Magazine
Each week Quanta Magazine explains one of the most important ideas driving modern research. This week, computer science staff writer Ben Brubaker explains how ideas from physics shaped the development of AI, and how they're still inspiring progress today.

How Physics Made Modern AI Possible
By BEN BRUBAKER

Physicists are notorious for barging into other scientific fields with big ideas that don't always pan out. (Take it from a recovering physicist.) But sometimes their insights really do prove revolutionary — perhaps nowhere more than in the study of artificial intelligence.

At first glance, physics and AI make an unlikely pairing. Physicists usually study natural phenomena that have nothing to do with "intelligence," however defined. And most early work in AI was dominated by a "symbolic" approach, in which researchers focused on building AI systems that combined predefined concepts in novel ways. The symbolic approach drew on research in psychology and mathematical logic, with nary a physicist in sight.

Then in the 1980s, a few maverick researchers revived an alternative approach to AI based on mathematical structures called neural networks — webs of artificial "neurons" loosely inspired by the structure of the human brain. Instead of starting with predefined concepts, neural network researchers wanted to understand how their systems could "learn" concepts from scratch by forming connections between neighboring neurons.

That's where physics comes in. There's a well-established branch of physics, called statistical mechanics, that studies the collective behavior arising from interactions between many simple systems, like atoms in a magnetic material. Neurons in a network are also simple systems with complicated collective behavior. In 1982, inspired by this analogy, the physicist John Hopfield created a type of neural network based on a mathematical model of unusual magnetic materials. Hopfield's network could learn to store patterns of neuronal activity and reproduce them later, giving it a simple sort of memory. It was a new and elegant solution to a problem that had vexed many AI researchers.
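
To make the magnetic analogy concrete, here is a minimal sketch of a Hopfield-style network in Python. It's an illustrative toy, not Hopfield's original formulation: neurons are treated like magnetic "spins" that flip to lower the network's energy, and a stored memory is a stable spin pattern the network settles into.

```python
import numpy as np

def train(patterns):
    """Store patterns with the Hebbian rule: strengthen the connection
    between any two neurons that are active together."""
    n = patterns.shape[1]
    W = np.zeros((n, n))
    for p in patterns:
        W += np.outer(p, p)
    np.fill_diagonal(W, 0)  # no neuron connects to itself
    return W / n

def recall(W, state, steps=10):
    """Repeatedly flip neurons toward lower energy; the network settles
    into the stored pattern nearest the starting state."""
    for _ in range(steps):
        state = np.sign(W @ state)
        state[state == 0] = 1  # break ties consistently
    return state

# Store one 8-neuron pattern of +1/-1 "spins", then recover it from a
# corrupted cue with one spin flipped.
pattern = np.array([1, 1, -1, -1, 1, -1, 1, -1])
W = train(pattern[None, :])
noisy = pattern.copy()
noisy[0] = -noisy[0]
print(recall(W, noisy))  # prints the original stored pattern
```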

A few years later, the computer scientist Geoffrey Hinton and others built on Hopfield's results and devised methods that are still used to train AI systems today. In 2024, the pair were awarded the Nobel Prize in Physics — a testament to how strongly that field has influenced the study of AI. Elise Cutts explored the impact of their work in April as part of Quanta's series on science in the age of AI.

What's New and Noteworthy

Since Hopfield and Hinton's pioneering work, AI researchers have found many new ways to repurpose ideas from physics. The computer scientist Lenka Zdeborová has analyzed AI models through the lens of phase transitions — sharp changes in the behavior of physical systems that occur at specific temperatures, such as the transformation of water from liquid to steam at its boiling point. John Pavlus interviewed Zdeborová last October about her work on phase transitions in language models like ChatGPT.
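
For a feel of what a phase transition looks like mathematically, here is a short sketch using the textbook mean-field Ising magnet (an illustrative stand-in, not Zdeborová's actual calculations). Solving the self-consistency equation m = tanh(m/T) shows the magnetization m switching on abruptly below the critical temperature T = 1.

```python
import numpy as np

# Mean-field Ising model: iterate the self-consistency equation
# m = tanh(m / T). Below the critical temperature T = 1, a nonzero
# magnetization appears; above it, m collapses to zero.
def magnetization(T, iters=2000):
    m = 0.5  # small nonzero starting guess
    for _ in range(iters):
        m = np.tanh(m / T)
    return m

for T in (0.5, 0.9, 0.99, 1.01, 1.5):
    print(f"T = {T:>4}: m = {magnetization(T):.3f}")
```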

Other physical phenomena have influenced AI systems designed to generate images. The most widely used AI image generators, called diffusion models, are based on the equations that describe how a drop of milk spreads through a cup of coffee. In January 2023, Anil Ananthaswamy explored how diffusion models work and recounted the series of breakthroughs that led to their widespread adoption. In September of that year, Steve Nadis wrote about a new approach to AI image generation based on an equation describing the flow of electric charge. And as Webb Wright reported last month, researchers have recently argued that the apparent creativity shown by diffusion models can be explained by their physics-inspired architecture.
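
Under the hood, a diffusion model pairs two processes: a fixed "forward" process that gradually drowns an image in noise, and a learned "reverse" process that undoes it. Here is a minimal sketch of the forward step; the noise schedule and toy data are assumptions for illustration, and real systems train a neural network to run this process backward.

```python
import numpy as np

def forward_diffusion(x0, betas, rng):
    """Mix the data with a little Gaussian noise at each step, the way
    a drop of milk gradually disperses through coffee."""
    x = x0.copy()
    for beta in betas:
        noise = rng.normal(size=x.shape)
        x = np.sqrt(1.0 - beta) * x + np.sqrt(beta) * noise
    return x

rng = np.random.default_rng(0)
x0 = np.ones(4)                      # stand-in for an image's pixels
betas = np.linspace(1e-3, 0.2, 100)  # noise schedule (illustrative)
xT = forward_diffusion(x0, betas, rng)
print(xT)  # nearly pure noise; generating an image means reversing this
```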

AI researchers have also drawn inspiration from more abstract areas of physics, far removed from everyday life. In January 2020, Pavlus wrote about a new kind of image-recognition network based on the mathematical symmetries of fundamental particles. "I have always had this sense that machine learning and physics are doing very similar things," the AI researcher Taco Cohen told Pavlus. "As we started improving our systems, we gradually unraveled more and more connections."
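
The core idea behind such symmetry-based networks is equivariance: transform the input, and the network's output transforms in lockstep. Here is a toy check for the simplest case, translations and an ordinary circular convolution; Cohen's group-equivariant networks extend this to rotations and other symmetries.

```python
import numpy as np

def circular_conv(signal, kernel):
    """Cross-correlate a signal with a kernel, wrapping around the ends."""
    n = len(signal)
    return np.array([sum(kernel[j] * signal[(i + j) % n]
                         for j in range(len(kernel)))
                     for i in range(n)])

signal = np.array([0., 1., 2., 3., 0., 0., 0., 0.])
kernel = np.array([1., -1.])

# Shifting the input, then convolving, gives the same result as
# convolving, then shifting the output: the operation is equivariant.
a = circular_conv(np.roll(signal, 2), kernel)
b = np.roll(circular_conv(signal, kernel), 2)
print(np.allclose(a, b))  # True
```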

AROUND THE WEB
In his Nobel Prize lecture, Hopfield argued that physics is less a specific set of subjects and more a distinctive approach to research.
In the provocatively titled 2021 essay "Why Is AI Hard and Physics Simple?" the physicist and AI researcher Daniel A. Roberts explored how insights from theoretical physics can inform the study of neural networks.
The Polo Club of Data Science, a data visualization research group at Georgia Tech, has published an excellent interactive explainer about the inner workings of diffusion models.
Simons Foundation

160 5th Avenue, 7th Floor
New York, NY 10010

Copyright © 2025 Quanta Magazine, an editorially independent division of Simons Foundation
