AI-Physics Simulation Opportunities
This is part two of a two-part series (though I may expand it further eventually). You can find part one here. I'm still early in my understanding of this space, so consider this an exploration of recent conversations with experts in the field, along with what I've been reading and listening to.
In Part 1, I focused on the what: PINNs, neural operators, and the future of simulation, meaning the idea that AI can learn mappings from geometry, materials, and boundary conditions to predict system behavior.
This post is mostly about the where and the how. Where the near-term opportunities are and how to actually move these methods into the hands of engineers.
Before diving in, though: a few people reached out after I posted Part 1 to say that I missed some things, so I want to address that first.
What I Missed in Part 1
After publishing Part 1, I got some helpful pushback from a few people who work in this space. Specifically, I implied it was a “short jump” from neural operators to foundation models for physics. That undersells how hard the jump actually is, and I want to correct that here.
The first thing I underappreciated is how sensitive some physical systems can be to small changes in setup. Chaos theory tells us that you can specify your initial conditions to five decimal places, evolve the system forward, and get one answer; specify them to six decimal places, and the solution diverges completely. Everything is deterministic, but tiny differences in precision cascade over time. This shows up across fluid dynamics (Navier-Stokes, for example), structural mechanics, and many other domains engineers care about. It means that even a perfect model can produce wildly different outputs based on very small changes in how the problem is set up, which makes generalization hard.
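A toy illustration of this sensitivity, using the chaotic logistic map as a stand-in for a real deterministic physical system (the map and constants here are purely illustrative, not from any real solver):

```python
# Sensitive dependence on initial conditions: two trajectories of the
# logistic map x_{n+1} = 4x(1 - x), identical except in the sixth decimal.

def logistic_trajectory(x0, steps):
    """Iterate the logistic map at r = 4 (the fully chaotic regime)."""
    xs = [x0]
    for _ in range(steps):
        xs.append(4.0 * xs[-1] * (1.0 - xs[-1]))
    return xs

a = logistic_trajectory(0.12345, 50)   # initial condition to five decimals
b = logistic_trajectory(0.123456, 50)  # one more decimal of precision

early_gap = abs(a[3] - b[3])  # after a few steps, still tiny
late_gap = max(abs(x - y) for x, y in zip(a[30:], b[30:]))  # later: order one
```

The early gap stays near the initial ~1e-6 difference, while the late gap grows to the size of the system itself, even though every step is exactly deterministic.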
The second thing I over-simplified was how I framed the Fourier Neural Operator (FNO). I presented it as a promising early step toward generalization – which it is, in narrow settings. But I didn’t adequately convey how narrow those settings are. FNO works best when the underlying problem has clean, regular structure. Once the boundary conditions get messy or the geometry loses that regularity, the approach starts to break down. For example, airflow through a smooth, rectangular channel is where FNO does well. But airflow through a machine part with holes, bends, sharp corners, and different materials is much less clean. The geometry is irregular, and what happens at the edges matters much more. FNO has a harder time here.
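To make the "clean, regular structure" point concrete, here is a minimal numpy sketch of the core FNO building block, a spectral convolution (with made-up random weights; a real FNO learns them during training). The FFT at its heart assumes a uniform, periodic grid, which is exactly the assumption that holes, bends, and sharp corners break:

```python
import numpy as np

rng = np.random.default_rng(0)

def spectral_layer(u, weights, n_modes):
    """Core FNO operation on a regular periodic grid:
    FFT -> scale the lowest n_modes frequencies -> inverse FFT.
    `weights` is a complex per-mode multiplier (learned in a real FNO)."""
    u_hat = np.fft.rfft(u)
    out_hat = np.zeros_like(u_hat)
    out_hat[:n_modes] = u_hat[:n_modes] * weights[:n_modes]
    return np.fft.irfft(out_hat, n=len(u))

n = 64
x = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
u = np.sin(x) + 0.3 * np.sin(5.0 * x)  # a smooth field on a uniform grid
w = rng.normal(size=n // 2 + 1) + 1j * rng.normal(size=n // 2 + 1)
v = spectral_layer(u, w, n_modes=8)
```

Every step relies on the sample points being evenly spaced on a periodic domain; on an irregular mesh around a machined part there is no such grid to FFT, which is one reason the vanilla approach struggles there.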
More broadly, I underestimated the degree to which physics modeling is an art, not just a science. The same governing equations, implemented on different computer architectures with operations ordered slightly differently, can produce different numerical results. One person I spoke with pointed to jet engine design as an example: some of the code still in use dates back to the 1970s, and firms are extremely reluctant to touch it. The reason isn’t that no one can rewrite Fortran 77 in C or Python. It’s that the full stack (the models, the numerics, the implementation, and the validation history around them) has already been vetted by the FAA. As a result, design changes made using that code are often approved without having to rerun physical experiments. Reimplementing this would mean reopening lots of questions that would require FAA approval each time: did anything change in the physics, the numerics, the code path, or the approval basis built around them? In that sense, some of the hardest barriers here are not purely technical. They’re also about trust and how deeply a tool is embedded in an existing engineering workflow.
None of this means the work on PINNs or neural operators isn’t an important step for the field. But it does mean the path to deployment is narrower and more domain-constrained than I implied in Part 1. At least in the near term, the opportunity is unlikely to be a single model that can handle all physics across all settings. More likely, progress will come from systems that start in specific domains. In some cases, that may look like specialized surrogate models built on top of expensive simulations. In other cases, that may look like a narrow initial wedge that expands into solver acceleration or a broader physics intelligence layer. That is where I want to focus the rest of this post.
Where the Opportunities Are Today
There seem to be at least two paths emerging in AI-driven simulation. One uses AI to learn surrogate models that approximate expensive simulations and accelerate design exploration. The other focuses on accelerating the physics computation itself while remaining grounded in the governing equations.
The opportunities I’m most excited about do not all sit neatly in one camp, but they tend to share the same traits: physics that is constrained enough to model today, relatively structured workflows, and some path to verification against either trusted simulations or real-world test data.
Thermal simulation as an early wedge
Thermal is one of the cleanest early wedges because the physics is relatively well understood, the governing equations are well established, and validation data exists across many hardware domains. In many hardware systems – whether it be semiconductors, batteries, power electronics, or industrial equipment – heat is an important constraint to understand. Teams need to know where and when temperatures rise, how materials behave under thermal stress, and how this impacts performance and manufacturability.
It’s also a useful place to start for workflow reasons. In many organizations, thermal analysis is still done by specialized people using specialized tools, and it often happens relatively late in the process, after much of the design has already been set. If you can make thermal analysis faster, easier to access, and easier to use without giving up fidelity, you can change when physics shows up in the design process and who can realistically use it.
A close friend, Dr. Hardik Kabaria, founded Vinci, which started with thermal problems in semiconductor electronics. That narrow starting point meant that they could build and ship a product fast. “We did not want to spend eight years building a model before shipping a product,” Hardik told me.
The broader ambition, though, is not to build a different model for every industry. Vinci is already working with customers across semiconductors, electronics, batteries, defense, and other hardware-heavy domains. As Hardik put it, “What gives us confidence in the breadth is that the underlying physics is the same. Heat transfer in a robotic arm, a PCB, or a semiconductor is still governed by the same equations, so the ambition is one model, not a different model for every industry.”
From there, the plan is to keep adding more physical phenomena to the same foundation model, starting with thermo-mechanical coupling. So even though the entry point is narrow, the system itself is unified (one model, one codebase) and over time a broader set of physics capabilities can be added on top.
AI-powered surrogate modeling
In the physics world, engineers already work with a hierarchy of models: there are simple, fast ones for early design, and expensive, granular ones (think full finite element simulations) for detailed analysis. The process of taking those granular models and building cheaper simplified versions takes deep expertise, and the results vary a lot depending on who does it.
AI could automate a meaningful chunk of this by running the expensive simulations, collecting the outputs, and then using that data (along with physics constraints) to build faster surrogate models. By integrating these surrogate models into their workflows, engineers wouldn’t have to query the expensive model as often, and they’d get a more accurate simplified model than what they’d typically build by hand.
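A minimal sketch of that loop, with a hypothetical one-parameter "expensive" model and a plain polynomial standing in for the learned surrogate (in practice the expensive step would be a full simulation and the surrogate a trained network):

```python
import numpy as np

def expensive_sim(x):
    """Stand-in for a costly solver call, e.g. steady-state temperature
    as a function of one design parameter. (Invented closed form.)"""
    return 300.0 + 40.0 * np.sin(x) + 5.0 * x

# 1. Run the expensive model at a small number of design points.
x_train = np.linspace(0.0, 2.0, 12)
y_train = expensive_sim(x_train)

# 2. Fit a cheap surrogate to those outputs (here a cubic polynomial;
#    PINNs or neural operators would play this role for harder physics).
surrogate = np.poly1d(np.polyfit(x_train, y_train, deg=3))

# 3. Query the surrogate instead of the solver during design exploration.
x_test = np.linspace(0.0, 2.0, 200)
err = np.max(np.abs(surrogate(x_test) - expensive_sim(x_test)))
```

The structure is the point: a handful of expensive runs up front buys thousands of near-free queries afterward, with `err` quantifying how much fidelity the shortcut costs.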
This is one place where techniques like PINNs and neural operators could be helpful today: a lot of the academic work around them naturally lends itself to surrogate construction, since these models learn mappings from inputs to outputs based on training data. But surrogate modeling is not the only path. There is also a line of work focused on accelerating the physics computation itself, rather than learning an approximation of it.
In those systems, the goal isn’t to replace simulation with a trained model, but to build architectures that remain rooted in the governing physics while also dramatically reducing computational cost. This matters because many engineering organizations still require solver-grade determinism and traceability for design decisions, which can make purely learned surrogates harder to deploy in production workflows.
Surrogates will likely play an important role in early design exploration, but there may also be a parallel path where AI helps make full-fidelity physics computation much more accessible and scalable.
Reducing setup time and lowering the expertise barrier
In a lot of multi-physics work, the bottleneck is the pre-solve. Engineers spend hours cleaning up geometry, figuring out how to mesh it, choosing boundary conditions, and chasing missing material properties. This is basically all the work required to turn a CAD file into something you can actually simulate.
If you can shrink that setup time, simulation becomes something teams can use while they're designing, and that changes who can realistically use it. When setup is automated or significantly simplified, simulation stops being something only specialized analysts can run; design engineers and small hardware teams can use it during iteration, while they're still exploring the design.
Initially, you still have to show up where engineers already work: inside CAD/CAE, or at least one click away from it. If it’s a separate tool with a new file format and a whole new workflow, it won’t become a habit. But if AI can remove enough of the pre-solve burden, physics can start showing up earlier in the design loop rather than only during validation.
There is an even bigger opportunity here, too: lowering the expertise threshold required to ask a physics question in the first place, without lowering the quality of the answer. As Hardik put it to me, “the goal is to reduce the bar for access, but not at the cost of reducing accuracy.”
A lot of AI tooling can make experts somewhat faster. The bigger unlock would be making solver-grade physics accessible to design engineers, manufacturing teams, and other adjacent functions that currently have to wait on specialized analysts.
Material property prediction
Every time something is manufactured, the material properties shift slightly, and it’s very hard to know exactly what changed. Engineers deal with this through uncertainty analysis: they physically test samples, use those measurements to estimate a realistic range for the material properties, and then run analyses across that range. You can almost never know the material properties at every point in the part with complete precision.
For non-safety-critical parts with high tolerance for variation, rough bounds are often fine. But for a turbine blade or a chip going onto a satellite, the simulation has to be reliable, which means the material inputs have to be reliable too.
AI could help here in two ways. One is by improving material prediction itself: narrowing the uncertainty, flagging anomalies, or learning patterns across manufacturing runs. The other is by making uncertainty analysis more tractable. In many workflows today, engineers run parameter sweeps across ranges of material properties to understand how sensitive a design is to variation, but those analyses can be computationally expensive, which limits how thoroughly teams can explore the uncertainty space.
If simulation becomes cheaper or easier to run, engineers can evaluate far more combinations of parameters and get a better picture of the reliability envelope of a design. In that sense, AI may help not just by predicting material properties more accurately, but by helping teams reason about the consequences of uncertainty much more efficiently.
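A sketch of what such a sweep looks like once each evaluation is cheap. The thermal model, distribution, and limits here are invented for illustration; the pattern (sample the uncertain input, push samples through the model, read off the reliability envelope) is the real content:

```python
import numpy as np

rng = np.random.default_rng(42)

def peak_temperature(conductivity):
    """Stand-in for one cheap surrogate/solver run: the part runs hotter
    when the material conducts heat poorly. (Invented closed form.)"""
    return 350.0 + 20.0 / conductivity

# Physical tests give a range for thermal conductivity, not a point value.
k_samples = rng.normal(4.0, 0.6, size=5000)
k_samples = np.clip(k_samples, 2.5, 5.5)  # stay within tested bounds

temps = peak_temperature(k_samples)

# The reliability question: how often does the design exceed its limit?
exceed_frac = np.mean(temps > 357.0)
p95 = np.percentile(temps, 95)
```

With a hand-built workflow, an engineer might afford a dozen such runs; with cheap evaluation, five thousand samples of the uncertainty space cost essentially nothing.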
Experimental validation and parameter sweeps
Between simulation and manufacturing, there is usually a validation phase where engineers test whether the simulated results hold up in the real world. For example: does this design still work when the temperature is higher or lower? What if the material comes in at the low end of tolerance? To build confidence in the design, teams build prototypes or use test rigs, run parameter sweeps across different conditions, and iterate as needed.
This process is usually slow and expensive because teams have to test many different configurations to understand where the design holds and where it breaks.
AI could be useful here as an experiment-planning layer: helping teams decide which tests to run first, which parameter combinations are most informative, and how to reduce the total number of physical experiments needed to reach confidence. If you can cut a validation campaign from 200 runs to 50 without missing the important failure modes, the ROI is obvious.
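One very simple version of "pick the most informative tests" is a space-filling design: choose the subset of candidate configurations that covers the test space as evenly as possible. This greedy farthest-point sketch (random candidates in two made-up normalized dimensions) is only a stand-in; real experiment planners would also weigh model uncertainty and known failure modes:

```python
import numpy as np

rng = np.random.default_rng(7)

# 200 candidate test configurations (e.g. temperature, load), scaled to [0, 1].
candidates = rng.random((200, 2))

def farthest_point_subset(points, k):
    """Greedy space-filling design: repeatedly pick the candidate that is
    farthest from every test already selected."""
    chosen = [0]  # seed with an arbitrary first test
    dists = np.linalg.norm(points - points[0], axis=1)
    while len(chosen) < k:
        nxt = int(np.argmax(dists))
        chosen.append(nxt)
        dists = np.minimum(dists, np.linalg.norm(points - points[nxt], axis=1))
    return chosen

plan = farthest_point_subset(candidates, 50)  # 50 runs instead of 200
```

Even this naive heuristic guarantees the 50 chosen runs are spread across the space rather than clustered, which is the basic property a validation campaign needs when it cannot afford to test everything.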
AI may also be helpful here by shifting more of the exploration into simulation before experiments begin. In many engineering workflows today, teams run a relatively small number of simulations and then rely heavily on physical testing to explore the parameter space. If simulation becomes significantly cheaper or easier to run, engineers can evaluate far more conditions computationally before committing to physical prototypes.
In that world, the role of experiments changes slightly. Instead of mapping out the full parameter space, experiments become a way to validate the most critical regions of a design space that has already been explored in simulation. AI-driven experiment planning is clearly promising, but the combination of large-scale simulation exploration plus targeted validation experiments may be just as important for reducing overall development time.
And unlike simulation itself, this is often a less established workflow, with fewer incumbent software vendors to displace.
The Business Model Problem & How it Evolves
Because the CAD/CAE stack is already so entrenched, one seemingly obvious path for a startup is integration. Ship a plugin and live inside the existing ecosystem.
But being a plugin is a trap. The same incumbents you rely on for distribution control the chokepoints. They can see your downloads, pricing, and usage. As Matthew told me last week: “You build a plugin and distribute it through their app store, and in return, they get perfect visibility into your pricing and revenue. If it starts working, they show up with an ‘acquisition offer’ that’s really a veiled threat: nice business you’ve got… would be a shame if anything happened to your app store access.”
Then there’s the consulting trap. If every deployment requires building bespoke models for a specific customer or domain, it becomes difficult to scale the business beyond services. That challenge is made worse by the fact that domain specialization still seems hard to avoid, at least for now. You can’t take a model built for HVAC systems and use it to design a jet engine. The governing equations may be similar at a high level, but as discussed above, that is not enough. In practice, this market is still a collection of narrow verticals.
However, there may be a third path emerging between those two extremes. Instead of positioning as a plugin or bespoke modeling shop, some companies may try to build independent physics computation platforms that sit alongside existing design tools and can be called from multiple workflows (design, verification, manufacturing, or reliability analysis) without being tightly coupled to a single CAD or CAE system. Here, the product becomes a kind of physics infrastructure that can serve multiple parts of the engineering stack.
Vinci is an example of this idea. They’re not just focused on faster simulation, but rather on broader access: can physics become accessible, usable, and easy for a much wider set of people in the design workflow? As Hardik put it to me, “the product is a physics intelligence layer that can be available to everybody. It’s not just for the thermal engineer. If I’m doing a manufacturability check, physics is accessible to me too.”
That shift in who can use the product also shifts the economic model around it. If the goal is to put physics into the hands of more people, traditional seat-based pricing is probably not the right model. Hardik believes that once physics becomes broadly accessible across an engineering organization, the bottleneck shifts from expert labor to system throughput. “Where a thermal engineer might previously run 10 to 100 simulations a day at best, Vinci’s system is already enabling thousands,” he told me. In that world, pricing starts to look more like compute- or usage-based.
Then, if simulation becomes cheaper and easier to run, the value may shift away from selling individual solver runs and toward providing reliable physics computation as a service across the engineering stack. That approach may help avoid both the plugin dependency problem and the consulting trap, though it introduces its own challenges around integration, trust, and workflow adoption.
From Simulation to Continuous Physics
It seems unlikely that AI will broadly replace engineering simulation in the near term, especially for the hardest multi-physics problems with complex geometries and uncertain inputs. Inference is too expensive for always-on use in many settings. The talent pool is small. And the data requirements are enormous: as I mentioned last week, Matthew’s back-of-the-envelope is ~2 million points per time step in 3D, multiplied across thousands of time steps.
But the gap between what is possible in AI-physics research and what’s getting deployed in engineering workflows is closing.
“The impact of AI is less about replacing the solver and more about making physics computation cheaper and more accessible so that it becomes a continuous part of engineering decision-making,” Hardik told me toward the end of our conversation. That shift could move physics simulation from a specialized validation step into a continuous layer of engineering decision-making across the design process. And that may ultimately be the bigger opportunity for physics foundation models.
Author’s note: An LLM was used for light copy editing only (spelling, grammar, and clarity). Content, meaning, tone, and structure remain unchanged.


