Making Manufacturing Simulation Better, Not Just Faster
AI physics simulation has the potential to make simulations 10,000x faster.
As I wrote about previously here and here, simulation has always been expensive, slow, and confined to a relatively small set of experts. So the idea that AI could accelerate it – and perhaps even democratize it – is compelling.
However, I recently spoke with Ben Savinson, a PhD student at ETH Zürich who is applying physics-based ML methods to manufacturing. “Speed is valuable, but it’s not always the bottleneck,” he told me. “Often the real value is unlocked by improvements in accuracy.”
In a production environment, simulation doesn’t happen in isolation. It sits inside a much larger process with physical steps, machine setup, tool changes, parallel operations, and sometimes long feedback loops. “If the manufacturing process is not compute-limited, companies really don’t care how long the simulation takes,” he told me. In other words, if the rest of the workflow is gated by days or weeks of physical production time, making one piece of software faster may not materially change the economic outcome.
Simulation speed in manufacturing
In many manufacturing settings, simulation is already fast enough to fit inside production cycle times. Speeding it up further often does not make a material difference.
That does not mean speed is irrelevant. In some cases, it matters a lot. Ben pointed to semiconductor manufacturing, where the problem of correcting masks for wafer printing has become genuinely compute-constrained. In a context like that, reducing simulation time can unlock enormous value because the factory is literally waiting on the computation.
But those cases are somewhat narrow. “The assumption that if you make something faster, it always provides a lot of value is not necessarily true,” Ben said.
Manufacturing companies don’t care about simulation time on its own. They care about throughput, yield, scrap, downtime, and output. As Ben put it, “They care about the metric of whatever they’re producing.”
Modeling failures at the tail
If AI in manufacturing simulation is not just about making existing simulations faster, the question becomes what it means to make them better. And in manufacturing, the hardest version of “better” is also the most valuable: modeling failures at the tail. The failures that matter most are usually not the average cases but the rare ones.
“In manufacturing, you mainly care about tail and extreme events,” Ben told me.
Most ML is very good at interpolation – it learns patterns from historical examples and makes predictions on cases that look similar to what it’s seen before. But in manufacturing, the most economically important outcomes are usually the anomalies – the one-in-a-thousand failures, the edge cases, the defects that destroy yield or cause catastrophic downstream consequences.
“The events you really want to capture are exactly the events that boilerplate AI is pretty poor at capturing,” he said.
This makes the opportunity both exciting and challenging. On the one hand, there is clearly value in better modeling these processes. On the other hand, the specific cases customers care about most are precisely the ones that data-driven systems struggle with because they are underrepresented in the training distribution.
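Ben’s interpolation point is easy to see in a toy sketch. The example below is purely illustrative (the “process response” and every number in it are invented, not drawn from his work): a data-driven fit trained on a normal operating range tracks the truth inside that range, then fails badly at an input outside it.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a process response; historical data covers [0, 1] only
x_train = rng.uniform(0.0, 1.0, 200)
y_train = np.sin(2 * np.pi * x_train)  # hypothetical "true" behavior

# A purely data-driven fit: high-degree polynomial regression
coeffs = np.polyfit(x_train, y_train, deg=9)

# Inside the training distribution, the model interpolates well
y_interp = np.polyval(coeffs, 0.5)   # true value: sin(pi) = 0

# At an "extreme event" outside the data, the prediction blows up
y_extrap = np.polyval(coeffs, 2.0)   # true value: sin(4*pi) = 0

print(abs(y_interp))  # small error
print(abs(y_extrap))  # large error
```

Nothing about the polynomial tells it how the system behaves beyond the data it saw, so the one-in-a-thousand input is exactly where it is least trustworthy.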
This is why Ben is skeptical that AI will simply replace the physics stack.
“If anyone tells you we’re going to completely replace the physics stack, I don’t think it’s going to work,” he said.
When you need to extrapolate, physics still matters. “Physical laws, by definition, extrapolate beyond the observed distribution,” he said. Maxwell’s equations do not stop working when conditions become extreme. Navier-Stokes does not cease to apply because the environment gets more difficult. The equations are useful precisely because they encode structure that generalizes beyond the available dataset.
This doesn’t mean the physics model is sufficient on its own. Ben’s point is that it often is not. But it does mean that the best systems are likely to be hybrid.
“You need to bake as much physics into the model as possible,” he said, “to make sure the model has a prior to capture these extreme events. And it’s not just interpolating.”
The gap between simulation and reality
So if AI is not going to replace the physics stack, where does it add the most value?
Ben’s view is that it might be in closing the gap between simulation and reality. “It isn’t just making physics solvers much faster,” he said, “but enhancing them and closing the gap to real-world data.”
In many industrial processes, the physics model is principled but incomplete. It captures the known structure of the system, but not everything that actually happens on the factory floor. Materials behave differently than expected. Machines wear down. Environmental conditions shift. Unknown interactions emerge between tools and processes. There is usually a gap between what the simulator predicts and what you actually see.
“Oftentimes the physical simulation is quite far away from what you see in experiment,” Ben said. The opportunity is therefore to use real process data to bring simulations closer to reality – not just to run faster but to capture what purely physics-based models miss. “By combining data-driven approaches with the underlying physical framework, you can actually have a better simulation as opposed to only a faster one,” Ben said.
Ben pointed to weather as a useful analogy. One reason AI has worked so well there is that you have two things at once: a physics-based framework and a large amount of real-world data. “There’s a physical model that is principled, but it doesn’t really capture everything in the accuracy you would want,” he said. “And you have a huge amount of data.” The governing equations provide structure, but the system itself is chaotic, only partially observable, and difficult to model perfectly. That is where AI helps because it can learn the gap between theory and reality.
Certain manufacturing processes may eventually look similar. You have an underlying physics stack, but also a large amount of measured output data from the process itself. You can observe what comes out the other side. You can see the defects, the drift, the variability, the errors, etc. In the right settings, this creates the conditions for AI to sit on top of the physics layer and improve the model.
This is also a much stronger value proposition. Manufacturers do not buy on solver speed alone; they buy when it moves a production metric.
“The metrics you want to push upwards are throughput and yield,” Ben said. If you can model the process more accurately, you can tune it more effectively. You can reduce defects, tighten variability, and throw away fewer parts. In semiconductor manufacturing, that means fewer broken transistors and fewer unusable chips. In other industrial settings, it could mean fewer failed runs, fewer rejected components, or more stable performance across production lines. Either way, the value accrues because the model improves the process, not just the speed of the simulation.
This is why the opportunity for AI in manufacturing simulation may be less about replacing physics than extending it. In manufacturing, value comes from using AI to close the gap between first-principles models and real-world production data — and in doing so, improve outcomes on the factory floor. After all, the goal is not faster simulation — it’s better manufacturing.
Author’s note: An LLM was used for light copy editing only (spelling, grammar, and clarity). Content, meaning, tone, and structure remain unchanged.


