Can AI Chips Handle Complex Science? SandboxAQ and Nvidia Show What’s Possible

October 20, 2025 by Jaime Hampton


When researchers talk about “AI for science,” they often mean applying machine learning to accelerate discovery. But a new collaboration between SandboxAQ and Nvidia shows how AI-optimized hardware can be adapted for high-performance computing. The project, which includes researchers from Pacific Northwest National Laboratory (PNNL) and Hungary’s Wigner Research Centre for Physics, uses mixed-precision methods to achieve the accuracy required for scientific simulation.

In a paper released this month, the scientists demonstrated that quantum chemistry simulations, which traditionally demand extreme numerical precision, can run accurately on Nvidia’s Blackwell GPUs. The work sets a new bar for emulated double-precision arithmetic on AI-optimized GPUs, achieving chemical accuracy on challenging quantum chemistry tasks.

The research shows that the same GPU architectures driving modern AI workloads can also accelerate physics-based simulations, provided their low-precision hardware is used in smarter ways. At the heart of this work is a method known as mixed-precision arithmetic, which blends fast, low-precision computation with selective high-precision steps to maintain scientific accuracy.
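To make the idea concrete, here is a minimal, illustrative sketch of one common mixed-precision pattern, iterative refinement, written in Python with NumPy. It is not the team’s algorithm: it simply solves a linear system mostly in float32 and uses a few float64 residual corrections to recover near double-precision accuracy, the same blend of cheap low-precision work and selective high-precision steps described above.

```python
import numpy as np

def mixed_precision_solve(A, b, iterations=3):
    """Solve Ax = b mostly in float32, with float64 residual corrections."""
    A32 = A.astype(np.float32)
    # Initial solve entirely in low precision.
    x = np.linalg.solve(A32, b.astype(np.float32)).astype(np.float64)
    for _ in range(iterations):
        r = b - A @ x                                    # residual in full float64
        dx = np.linalg.solve(A32, r.astype(np.float32))  # cheap low-precision correction
        x += dx.astype(np.float64)
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((500, 500)) + 500 * np.eye(500)  # well-conditioned test matrix
b = rng.standard_normal(500)
x = mixed_precision_solve(A, b)
print("residual norm:", np.linalg.norm(b - A @ x))
```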

“These Nvidia hardware advances were developed to run extremely quickly for specialized machine learning computations, but applying them to simulation tasks is not straightforward,” said Adam Lewis, a physicist and head of innovation for AI simulation at SandboxAQ, in a briefing with AIwire. “What this paper shows is how to use those same techniques to get the exact results you need for quantum chemistry while efficiently exploiting the machine learning hardware.”

Adapting AI Hardware for Science

At the center of the study is Nvidia’s Blackwell GPU architecture, the same platform built to train and run inference for large language models. Designed for efficiency in low-precision arithmetic, the chips can perform vast numbers of calculations per second, but until now, that speed came at the cost of accuracy. The researchers have found a way to bridge that gap.

Nvidia's Blackwell Ultra chip (Source: Nvidia)

Their approach relies on a technique called FP64 emulation, based on a scheme first proposed by Japanese computer scientist Katsuhisa Ozaki, which reconstructs double-precision accuracy from operations carried out at much lower precision. In practice, this means representing 64-bit numbers as a set of smaller, fixed-point slices that can be processed quickly by Blackwell’s tensor cores. The results are then recombined to produce accuracy comparable to native FP64 calculations.
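The NumPy sketch below illustrates the split-and-recombine idea; it is not the paper’s implementation. Each double-precision matrix is split into a few lower-precision slices, and the slice-by-slice products are recombined in float64. On Blackwell the slice products would run as exact fixed-point tensor-core GEMMs; here ordinary float64 matmuls stand in for those exact low-precision products, so the example only shows how splitting and recombining recovers double-precision accuracy.

```python
import numpy as np

def split_into_slices(A, num_slices=2):
    """Split a float64 matrix into float32 slices whose sum approximates A."""
    slices, residual = [], A.copy()
    for _ in range(num_slices):
        s = residual.astype(np.float32)   # capture the leading bits
        slices.append(s)
        residual = residual - s           # keep the remainder for the next slice
    return slices

def emulated_matmul(A, B):
    """Rebuild a double-precision matmul from products of low-precision slices."""
    A_slices, B_slices = split_into_slices(A), split_into_slices(B)
    C = np.zeros((A.shape[0], B.shape[1]))
    for a in A_slices:
        for b in B_slices:
            # On Blackwell this product would be an exact low-precision
            # tensor-core GEMM; float64 NumPy stands in for that exactness here.
            C += a.astype(np.float64) @ b.astype(np.float64)
    return C

rng = np.random.default_rng(1)
A, B = rng.standard_normal((256, 256)), rng.standard_normal((256, 256))
c32 = (A.astype(np.float32) @ B.astype(np.float32)).astype(np.float64)
print("plain float32 error:   ", np.abs(c32 - A @ B).max())
print("sliced emulation error:", np.abs(emulated_matmul(A, B) - A @ B).max())
```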

The researchers applied the approach to molecular systems that push the limits of current scientific computing methods, including FeMoco, a complex metal cluster central to natural nitrogen fixation, and cytochrome P450, a key enzyme in drug metabolism. Both are large, multi-electron systems with heavy metal centers that make them notoriously difficult to model. Yet, using FP64 emulation on Blackwell GPUs, the team reproduced results that closely matched those from conventional high-precision calculations.

The underlying algorithm is a tensor-network method known as the Density Matrix Renormalization Group (DMRG), which represents quantum wavefunctions as networks of interconnected tensors. Its structure makes it well-suited to GPUs, where similar tensor operations drive modern neural networks. By combining this structure with emulated-precision math, the team achieved performance levels rarely seen in scientific codes, reaching 90 to 95% GPU utilization even for complex enzyme systems. That result, Lewis noted, is “pretty wild,” and suggests that features of Blackwell’s architecture may be unexpectedly compatible with quantum chemistry.
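For a sense of why the fit is so good, the sketch below (illustrative shapes only, not the paper’s code) shows the kind of contraction that dominates a DMRG sweep: an environment tensor and a Hamiltonian MPO tensor applied to an MPS site tensor. The contraction reduces to large dense matrix multiplications, exactly the operation GPU tensor cores are built to accelerate.

```python
import numpy as np

bond, phys, mpo = 256, 4, 10   # assumed bond, physical, and MPO-bond dimensions
rng = np.random.default_rng(2)

site = rng.standard_normal((bond, phys, bond))      # MPS site tensor A[m, s, r]
ham  = rng.standard_normal((mpo, phys, phys, mpo))  # MPO tensor W[a, p, s, b]
env  = rng.standard_normal((bond, mpo, bond))       # left environment L[l, a, m]

# One building block of a DMRG sweep: contract environment, MPO, and site
# tensor. With optimize=True, einsum lowers this to a pair of large GEMMs,
# the same dense matrix multiplies that drive neural-network training.
theta = np.einsum("lam,apsb,msr->lpbr", env, ham, site, optimize=True)
print(theta.shape)   # (256, 4, 10, 256)
```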

“Even in LLM applications, which these [GPUs] are precisely tuned for, that would be very impressive numbers,” Lewis told AIwire. “The high utilization shows that they’re actually weirdly optimized for performing these quantum chemistry calculations that no one had in mind when they made them in the first place.”

Bridging AI and HPC


The results highlight a growing overlap between AI and HPC. As GPU architectures evolve to support expanding AI models, they are also becoming powerful engines for scientific simulation. The same hardware advances that handle AI models with billions (or trillions) of parameters can now model the behavior of atoms and molecules, with the right algorithms in place.

Lewis framed the result as part of a larger industry pattern: as AI dominates more of the computing landscape, hardware innovations developed for machine learning workloads are starting to spill over into simulation. “The trend in the past few generations towards reduced precision arithmetic is likely to continue as AI takes an ever-higher slice of the computing pie,” he said.

The implications extend beyond chemistry. Efficient emulation of double-precision math could make AI accelerators viable for a range of physics and materials science workloads, many of which have been constrained by the need for specialized HPC infrastructure. By leveraging GPUs already optimized for AI training and inference, researchers could dramatically expand access to scientific computing power without waiting for specialty systems.

The collaboration also reflects Nvidia’s expanding interest in AI for science, a theme the company has been vocal about in recent years. Lewis described SandboxAQ’s collaboration with the company as “very fruitful,” noting that Nvidia’s internal research teams were deeply engaged in the project. “They're always interested in people who are using their hardware in new ways to do good science. They've been super helpful,” he said.

Next Steps

Lewis said the next step is to make the approach faster and more robust across a wider range of molecular systems. While the method achieves high accuracy, it still relies on substantial GPU resources. “If you have to use an entire DGX cluster for every data point, that’s not necessarily attractive from an ROI perspective,” he said. “The next step is to make more efficient approximations and speed-ups. And making the algorithm more stable and robust so we can use it for arbitrary systems without any handholding.”


Another direction, he added, is combining these physics-based simulations with machine learning. “One major application would be to train models on this data,” he said. In practice, that could mean using emulated-precision quantum chemistry calculations to generate training data for foundation models in materials discovery or molecular design, creating a feedback loop between AI and simulation that strengthens both.

For SandboxAQ, the research reinforces its mission to develop large quantitative AI models that combine the rigor of physics-based simulation with the adaptability of AI. It also hints at a future where the same chips that train LLMs might also model chemical reactions, discover new drugs, or design new materials.
