AI in Physics
Unlike mathematics — where AI climbs a single difficulty ladder from textbook to Millennium Problems — physics is a sprawling, experiment-driven science. AI is advancing simultaneously across many sub-fields: from predicting weather to controlling fusion plasma to discovering new materials. This page maps the key domains and highlights the most significant, sourced milestones.
Automation Progress
In 2024, the Nobel Prize in Physics was awarded for the foundational work behind neural networks, a historic acknowledgment that AI's roots are deeply intertwined with physics.
Nobel Prize in Physics awarded to Hopfield & Hinton
John Hopfield and Geoffrey Hinton received the 2024 Nobel Prize in Physics "for foundational discoveries and inventions that enable machine learning with artificial neural networks." Hopfield's associative memory network (1982) was inspired by spin-glass physics, and Hinton's Boltzmann machine used statistical mechanics to learn. The committee's choice underscored that modern AI was born from physics.
AI weather models have gone from research curiosity to outperforming the world's best numerical weather prediction (NWP) systems. This is arguably where AI has had its most immediate, large-scale impact on physics — models run in minutes on a single GPU that previously required supercomputer-hours.
Huawei's Pangu-Weather: first AI model to beat ECMWF operational forecasts
Pangu-Weather, a 3D vision transformer trained on 39 years of ERA5 reanalysis data, produced medium-range global forecasts that outperformed ECMWF's HRES (the gold standard) across multiple lead times, while running roughly 10,000× faster than conventional numerical methods. Published in Nature.
DeepMind's GraphCast: 10-day global weather in under a minute
GraphCast used graph neural networks to produce 10-day global weather forecasts more accurately than ECMWF's HRES on 90% of test variables. The model runs in under 60 seconds on a single TPU. Published in Science.
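The core operation behind GNN forecasters like GraphCast is message passing over a mesh of grid nodes. The sketch below is a toy illustration of one such step, not GraphCast's actual architecture: the 4-node ring, feature sizes, and random weights are all invented for demonstration.

```python
import numpy as np

# Toy message-passing step in the spirit of GNN weather models:
# each edge computes a message from its endpoint features, messages are
# summed at the receiving node, and node states get a residual update.
rng = np.random.default_rng(0)

num_nodes, feat_dim = 4, 8
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]      # tiny ring-shaped "mesh"
h = rng.normal(size=(num_nodes, feat_dim))    # per-node atmospheric state features

W_msg = rng.normal(size=(2 * feat_dim, feat_dim)) * 0.1   # message weights (untrained)
W_upd = rng.normal(size=(2 * feat_dim, feat_dim)) * 0.1   # node-update weights (untrained)

def message_passing_step(h):
    agg = np.zeros_like(h)
    for sender, receiver in edges:
        # message depends on both endpoint states
        msg = np.tanh(np.concatenate([h[sender], h[receiver]]) @ W_msg)
        agg[receiver] += msg                   # sum incoming messages
    # residual update from (current state, aggregated messages)
    return h + np.tanh(np.concatenate([h, agg], axis=1) @ W_upd)

h_next = message_passing_step(h)
print(h_next.shape)  # updated node states, same shape as input
```

A real model stacks many such steps on a multi-scale mesh and is trained to map the current atmospheric state to the state six hours later, then rolled out autoregressively.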
GenCast: probabilistic forecasting surpasses operational ensembles
DeepMind's GenCast introduced probabilistic (ensemble) AI forecasting, generating calibrated uncertainty estimates. It outperformed ECMWF's ENS on 97.2% of targets across 1–15 day lead times. This allows not just "what will happen" but "how confident are we." Published in Nature.
AI is accelerating the discovery of new materials by orders of magnitude — predicting crystal stability, simulating molecular dynamics at quantum accuracy, and designing molecules with desired properties. This domain saw a Nobel Prize (Chemistry 2024) for AlphaFold's protein structure prediction, which rests on the same underlying physics of molecular interactions.
GNoME discovers 2.2 million new crystal structures
DeepMind's Graph Networks for Materials Exploration (GNoME) predicted 2.2 million new crystal structures, including roughly 380,000 assessed as stable, the equivalent of about 800 years of experimental discovery. Of these, 736 were independently synthesized in labs by collaborators at Lawrence Berkeley National Laboratory. Published in Nature.
Nobel Prize in Chemistry for AlphaFold (protein structure prediction)
Demis Hassabis and John Jumper (DeepMind) shared the 2024 Nobel Prize in Chemistry with David Baker for computational protein design. AlphaFold2 solved the 50-year protein folding problem, predicting 3D structures of 200+ million proteins. While formally a chemistry prize, the underlying physics of molecular interactions is central.
AI²BMD: ab initio protein dynamics with quantum accuracy
Microsoft Research's AI²BMD used machine-learning force fields to run all-atom protein molecular dynamics at near-DFT (density functional theory) accuracy — something that would be computationally impossible with traditional quantum chemistry. Published in Nature.
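The division of labor in ML-driven molecular dynamics is simple: a neural network replaces the expensive quantum force calculation, while the time integrator stays classical. The sketch below uses a harmonic potential as a stand-in for the learned force model (`ml_force` is a placeholder, not AI²BMD's network) inside a standard velocity Verlet loop.

```python
# Minimal MD loop driven by a "learned" force field. In real ML-MD, ml_force
# would be a trained neural network evaluated at near-DFT accuracy; here it
# is a harmonic surrogate F = -k x with k = 1, purely for illustration.

def ml_force(x):
    # placeholder for a neural network predicting force from coordinates
    return -1.0 * x

def velocity_verlet(x, v, dt=0.01, steps=1000):
    # standard symplectic integrator; only the force provider changes in ML-MD
    f = ml_force(x)
    for _ in range(steps):
        x = x + v * dt + 0.5 * f * dt**2
        f_new = ml_force(x)
        v = v + 0.5 * (f + f_new) * dt
        f = f_new
    return x, v

x, v = velocity_verlet(1.0, 0.0)
# energy should be approximately conserved by the symplectic integrator
E0 = 0.5 * 0.0**2 + 0.5 * 1.0**2
E = 0.5 * v**2 + 0.5 * x**2
print(abs(E - E0))  # small energy drift
```

The speedup comes entirely from `ml_force`: a neural evaluation costs microseconds, whereas a DFT force call on a protein-sized system is computationally prohibitive.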
At CERN's LHC, AI is now embedded in every stage — from real-time triggering (deciding which of 40 million collisions per second to keep) to jet tagging, anomaly detection, and the search for physics beyond the Standard Model. No major new particle has been found yet, but AI is fundamentally reshaping how physicists look.
ML-based jet tagging becomes standard at LHC
Deep learning classifiers for identifying jets from b quarks, top quarks, and W/Z/H bosons are now integral to the ATLAS and CMS experiments. ParticleNet and other GNN-based taggers achieve 2–3× improvements in background rejection over traditional methods, enhancing sensitivity to rare processes.
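A key design constraint for these taggers is permutation invariance: a jet is an unordered set of particles, so the score must not depend on constituent order. The toy tagger below illustrates the idea with a deep-sets-style embed/pool/classify pipeline; the feature choices and random weights are inventions, not ParticleNet's architecture.

```python
import numpy as np

# Toy permutation-invariant jet tagger: embed each constituent particle,
# sum-pool over the jet, classify the pooled representation.
# Weights are random placeholders; a real tagger is trained on simulation.
rng = np.random.default_rng(1)
W_embed = rng.normal(size=(4, 16)) * 0.1   # per-particle embedding weights
w_out = rng.normal(size=16) * 0.1          # classifier weights on pooled jet

def tag_score(constituents):
    """constituents: (n_particles, 4) array, e.g. (pt, eta, phi, energy)."""
    embedded = np.tanh(constituents @ W_embed)       # per-particle embedding
    pooled = embedded.sum(axis=0)                    # order-independent pooling
    return 1.0 / (1.0 + np.exp(-(pooled @ w_out)))   # signal probability

jet = rng.normal(size=(30, 4))                  # toy jet with 30 constituents
score = tag_score(jet)
shuffled = tag_score(jet[rng.permutation(30)])  # same jet, reordered particles
print(score, abs(score - shuffled) < 1e-9)      # score unchanged by ordering
```

GNN taggers like ParticleNet add learned particle-particle interactions on top of this set structure, which is where the 2–3× background-rejection gains come from.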
Real-time ML on FPGAs for LHC triggering
Neural networks are now running on FPGAs directly attached to LHC detectors, making microsecond-level decisions about which collision events to save. This transforms AI from a post-hoc analysis tool into part of the scientific instrument itself.
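To fit a microsecond latency budget, trigger networks are quantized to small fixed-point integers before being compiled onto the FPGA (the hls4ml toolflow automates this). The sketch below shows only the quantization idea with invented values; real trigger models, bit widths, and inputs differ.

```python
import numpy as np

# Illustrative weight quantization for FPGA trigger inference:
# float-trained weights become int8, and the decision uses integer-only
# multiply-accumulates, which map efficiently onto FPGA logic.
rng = np.random.default_rng(7)
W = rng.normal(size=(8, 2))                # float weights (toy "trained" model)

scale = 127 / np.abs(W).max()
W_q = np.round(W * scale).astype(np.int8)  # 8-bit fixed-point weights

def trigger_decision(x_q):
    """x_q: int8 detector features; returns True if the event is kept."""
    # cast up to int32 so the accumulate cannot overflow
    logits = x_q.astype(np.int32) @ W_q.astype(np.int32)
    return bool(logits[1] > logits[0])     # keep if "interesting" beats "background"

x = np.array([5, -3, 8, 0, 2, -1, 4, 7], dtype=np.int8)
print(trigger_decision(x))
```

The scaling factor is chosen so the largest weight maps to 127, the int8 maximum; finer-grained schemes quantize per layer or per channel.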
Anomaly detection: AI hunting for "unknown unknowns"
Unsupervised ML methods are now being deployed at the LHC to flag collision events that don't match any known physics model — a paradigm shift from testing specific theories to open-ended discovery. No breakthrough signal yet, but the approach could detect new physics that human-designed searches would miss.
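The common pattern behind these searches is reconstruction-based anomaly detection: learn a compressed model of known-physics events, then flag events the model fails to reconstruct. LHC analyses typically use autoencoders; the sketch below substitutes linear PCA, which plays the same role in far fewer lines, on entirely synthetic "events."

```python
import numpy as np

# Reconstruction-error anomaly detection on toy data: fit a low-dimensional
# linear model (PCA) to background-only events, then score new events by how
# badly the model reconstructs them.
rng = np.random.default_rng(3)
background = rng.normal(size=(5000, 10))   # toy Standard-Model-like events

# fit a 3-component linear "autoencoder" on background only
mu = background.mean(axis=0)
_, _, Vt = np.linalg.svd(background - mu, full_matrices=False)
V = Vt[:3].T                               # encoder/decoder basis (10 -> 3 -> 10)

def anomaly_score(event):
    recon = mu + (event - mu) @ V @ V.T    # compress, then reconstruct
    return float(np.sum((event - recon) ** 2))

normal_event = rng.normal(size=10)
weird_event = normal_event + 8.0           # shifted into unseen territory
print(anomaly_score(normal_event), anomaly_score(weird_event))
```

Because the score is trained only on background, it needs no hypothesis about what new physics looks like, which is exactly the "unknown unknowns" property described above.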
AI is amplifying astronomers' abilities to detect faint signals, classify transient events in real time, and even design next-generation detectors. Gravitational wave astronomy, born in 2015, is particularly data-hungry and a natural fit for ML.
AI denoises LIGO data, revealing hidden gravitational wave signals
Deep learning models trained on LIGO detector noise can now separate genuine gravitational wave signals from instrumental artifacts with unprecedented sensitivity, effectively increasing the observatory's reach. Multiple teams reported finding previously missed merger events in archival data.
AI designs next-gen gravitational wave detectors
Researchers used AI-driven optimization to discover novel interferometer configurations that surpass current LIGO design plans in sensitivity. The AI found approaches human engineers hadn't considered. Published in Physical Review X.
Controlling the superheated plasma inside a fusion reactor is one of the hardest real-time control problems in physics. AI — especially reinforcement learning — is proving uniquely suited to this challenge, bringing commercial fusion energy closer to reality.
DeepMind's RL agent controls tokamak plasma
A deep reinforcement learning system learned to control the magnetic coils of the TCV tokamak at EPFL, maintaining various plasma configurations including an elongated shape and a "droplet" form — all in real time. This was the first demonstration of RL controlling a real fusion reactor. Published in Nature.
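The control loop itself is conceptually simple: at each tick, the policy maps plasma measurements to coil commands, and the plasma responds. The sketch below is a toy stand-in (a two-variable unstable linear "plasma" and a fixed linear feedback gain in place of the trained deep policy) showing that loop stabilizing a drifting state.

```python
import numpy as np

# Schematic RL-style control loop: observe plasma state, apply policy,
# actuate coils, repeat. Environment and policy are toy stand-ins.
rng = np.random.default_rng(5)

class ToyPlasma:
    """Stand-in plasma: the state drifts away from target unless actuated."""
    def __init__(self):
        # e.g. (vertical position error, elongation error)
        self.state = np.array([1.0, -0.5])

    def step(self, coil_voltages):
        # mildly unstable drift plus actuation response
        self.state = 1.01 * self.state + 0.1 * coil_voltages
        return self.state

# "policy": fixed feedback gains standing in for a trained neural network
K = np.array([[-0.5, 0.0], [0.0, -0.5]])

env = ToyPlasma()
for _ in range(200):
    obs = env.state
    action = K @ obs          # policy maps observation -> coil commands
    env.step(action)

print(np.linalg.norm(env.state))  # error driven toward zero
```

The hard part, which RL solved at TCV, is that the real mapping from 19 coil voltages to plasma shape is nonlinear, coupled, and must run at kilohertz rates; a learned policy replaces a hand-tuned cascade of independent controllers.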
Chinese team develops data-driven tokamak plasma control
Researchers from the Chinese Academy of Sciences developed a fully data-driven AI control system for tokamak plasmas, demonstrated on the EAST reactor. The system learns control strategies directly from experimental data, bypassing the need for complex physics-based models.
DeepMind partners with CFS to build AI plasma control for commercial fusion
Google DeepMind and Commonwealth Fusion Systems (CFS) announced a partnership to develop AI-powered plasma control systems for CFS's SPARC tokamak — a commercial-scale device under construction. This marks AI's transition from lab demos to industrial fusion applications.
The quantum many-body problem — computing the behavior of systems with many interacting quantum particles — is exponentially hard. Neural network wave functions and ML-enhanced quantum Monte Carlo are opening new doors to simulating strongly correlated materials.
Neural quantum states: from proof-of-concept to real materials
Since Carleo & Troyer's pioneering 2017 work (published in Science), neural network representations of quantum wave functions have advanced to tackle real materials. By 2024, neural-network variational Monte Carlo methods achieve state-of-the-art accuracy for frustrated magnets and strongly correlated electron systems.
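The basic machinery follows Carleo & Troyer's original recipe: a restricted-Boltzmann-machine ansatz assigns an amplitude to each spin configuration, and Metropolis sampling from the Born distribution estimates observables. The sketch below shows sampling only, with random (unoptimized) couplings; real VMC additionally minimizes the variational energy over the network parameters.

```python
import numpy as np

# Minimal neural-quantum-state sketch: RBM log-amplitude + Metropolis
# sampling from |psi|^2 for a 6-spin system. Parameters are random,
# so this samples an arbitrary (but valid) variational state.
rng = np.random.default_rng(11)
n_spins, n_hidden = 6, 4
a = rng.normal(size=n_spins) * 0.1              # visible biases
b = rng.normal(size=n_hidden) * 0.1             # hidden biases
W = rng.normal(size=(n_spins, n_hidden)) * 0.1  # spin-hidden couplings

def log_psi(s):
    """Log-amplitude of an RBM wavefunction (real-valued for simplicity)."""
    theta = b + s @ W
    return a @ s + np.sum(np.log(2 * np.cosh(theta)))

def metropolis_sample(n_samples=2000, n_burn=500):
    s = rng.choice([-1.0, 1.0], size=n_spins)
    samples = []
    for step in range(n_samples + n_burn):
        s_new = s.copy()
        s_new[rng.integers(n_spins)] *= -1     # propose a single spin flip
        # accept with the |psi|^2 ratio = exp(2 * delta log psi)
        if rng.random() < np.exp(2 * (log_psi(s_new) - log_psi(s))):
            s = s_new
        if step >= n_burn:
            samples.append(s.copy())
    return np.array(samples)

samples = metropolis_sample()
m = np.abs(samples.mean())   # magnetization estimate under |psi|^2
print(samples.shape, m)
```

The payoff is that the RBM stores exponentially many amplitudes in a polynomial number of parameters; modern work swaps the RBM for deeper architectures while keeping this sampling loop.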
ML on quantum experimental data solves many-body problems efficiently
A Nature Communications study showed that classical ML methods trained on quantum simulation data (from quantum processors or cold-atom experiments) can efficiently solve quantum many-body problems that would be intractable with classical data alone — bridging quantum experiments and classical computation.
Physics lives in differential equations — from fluid flow (Navier-Stokes) to electromagnetic waves (Maxwell) to quantum mechanics (Schrödinger). Neural operators and physics-informed neural networks are learning to solve these equations 100–1000× faster than traditional numerical methods.
Fourier Neural Operator (FNO): learning solution operators for PDEs
Caltech's FNO learns to map between infinite-dimensional function spaces, solving families of PDEs (turbulent flow, wave propagation) in seconds rather than hours. It generalizes across different initial conditions and geometries — a key advantage over re-running simulations. Foundational work in ICLR 2021, extended through 2023.
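The defining operation of an FNO is its spectral convolution: transform the field to Fourier space, act with learned weights on a truncated set of low modes, and transform back. The single layer below illustrates that pattern with random placeholder weights (a trained FNO also carries pointwise linear paths and nonlinearities between layers).

```python
import numpy as np

# One Fourier layer in the FNO spirit: FFT, keep the lowest `modes`
# frequencies, multiply each retained mode by a learned complex weight,
# inverse FFT. Weights are random placeholders, not a trained operator.
rng = np.random.default_rng(9)
n, modes = 64, 8
weights = rng.normal(size=modes) + 1j * rng.normal(size=modes)

def fourier_layer(u):
    u_hat = np.fft.rfft(u)                     # to frequency space
    out_hat = np.zeros_like(u_hat)
    out_hat[:modes] = u_hat[:modes] * weights  # act on low modes, drop the rest
    return np.fft.irfft(out_hat, n=len(u))     # back to physical space

x = np.linspace(0, 2 * np.pi, n, endpoint=False)
u = np.sin(x) + 0.5 * np.sin(3 * x)            # toy input field
v = fourier_layer(u)
print(v.shape)
```

Because the layer is defined on frequencies rather than grid points, the same weights apply at any resolution, which is the source of FNO's discretization-invariant generalization.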
Hybrid neural–classical PDE solvers outperform both worlds
A Nature Machine Intelligence study combined neural operators with classical relaxation methods, showing that the hybrid approach solves PDEs more accurately than either method alone, especially for high-frequency features that pure neural approaches struggle with.
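The complementarity is easy to demonstrate on a 1D Poisson problem: a learned operator supplies a cheap global guess, and a few classical relaxation sweeps scrub out the high-frequency error the network misses. In the sketch below, `neural_guess` is a stub that deliberately returns the right smooth solution plus a spurious high-frequency wiggle.

```python
import numpy as np

# Hybrid solve of -u'' = f on [0,1] with u(0) = u(1) = 0:
# (stubbed) neural guess + classical Jacobi relaxation cleanup.
n = 65
h = 1.0 / (n - 1)
x = np.linspace(0, 1, n)
f = np.sin(np.pi * x)                    # right-hand side
u_exact = np.sin(np.pi * x) / np.pi**2   # exact solution

def neural_guess(f):
    # stand-in for a learned solution operator: correct smooth part,
    # contaminated by a high-frequency mode the "network" got wrong
    return (f / np.pi**2) + 0.02 * np.sin(20 * np.pi * x)

def jacobi_sweeps(u, f, n_sweeps=200):
    # Jacobi relaxation damps high-frequency error very quickly
    u = u.copy()
    for _ in range(n_sweeps):
        u[1:-1] = 0.5 * (u[:-2] + u[2:] + h**2 * f[1:-1])
    return u

u0 = neural_guess(f)
u1 = jacobi_sweeps(u0, f)
err0 = np.max(np.abs(u0 - u_exact))
err1 = np.max(np.abs(u1 - u_exact))
print(err0, err1)   # relaxation shrinks the error by orders of magnitude
```

Run the roles in reverse and the pairing still makes sense: relaxation alone converges very slowly on smooth error, which is exactly the component the neural guess supplies for free.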
AI reaches fundamental optical precision limits
Scientists demonstrated how close AI-driven approaches can come to the fundamental physical limits of optical measurement precision, limits set by the wave nature of light itself. This defines the theoretical ceiling for AI-assisted optical physics. The work was carried out in collaboration with TU Wien.