From Meshes to Neural Operators: The Future of Physics Simulation

By Prahlad Menon


Traditional numerical simulation methods have a dirty secret: you spend 80% of your time on the mesh.

Complex geometries, moving boundaries, adaptive refinement—the grid dominates your workflow. Finite Element Analysis (FEA), Finite Volume Methods (FVM), and Direct Numerical Simulation (DNS) are powerful, but they’re fundamentally grid-bound.

Physics-Informed Neural Networks (PINNs) promised a mesh-free revolution. And they delivered—sort of. You still have to retrain for every new simulation. Change your initial conditions? Retrain. Different boundary conditions? Retrain.

Physics-Informed Neural Operators (PINOs) fix this. Train once, solve for any parameters. Here’s how we got here and where it’s going.

The Traditional Methods

Finite Difference Method (FDM)

The simplest discretization. Replace derivatives with difference quotients on a structured grid.

Pros: Easy to implement, fast for simple geometries
Cons: Struggles with complex boundaries, requires structured grids
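
A minimal sketch, assuming the 1D heat equation u_t = α u_xx with Dirichlet boundaries (parameter values are illustrative):

```python
import numpy as np

# 1D heat equation (u_t = alpha * u_xx) on a structured grid, using the
# classic second-order central-difference stencil and explicit Euler in time.
alpha, dx, dt = 0.01, 0.05, 0.01       # diffusivity, grid spacing, time step
x = np.arange(0.0, 1.0 + dx, dx)
u = np.exp(-100 * (x - 0.5) ** 2)      # initial temperature bump

for _ in range(100):
    # Replace u_xx with the difference quotient (u[i-1] - 2u[i] + u[i+1]) / dx^2
    u[1:-1] += dt * alpha * (u[:-2] - 2 * u[1:-1] + u[2:]) / dx**2
    u[0] = u[-1] = 0.0                 # Dirichlet boundary conditions
```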

Finite Volume Method (FVM)

Discretize the integral form of conservation laws. Flux goes in, flux goes out.

Pros: Naturally conservative, handles discontinuities well
Cons: Still mesh-dependent, accuracy depends on mesh quality
Used in: CFD (OpenFOAM, ANSYS Fluent)
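
A minimal finite-volume step for 1D linear advection makes the flux bookkeeping explicit (assuming a first-order upwind flux and periodic boundaries; all values are illustrative):

```python
import numpy as np

# 1D advection (u_t + a*u_x = 0) in finite-volume form: each cell average
# changes by (flux in - flux out) / dx.
a, dx, dt = 1.0, 0.02, 0.01                    # wave speed, cell width, time step (CFL = 0.5)
x = np.arange(0.0, 1.0, dx) + dx / 2           # cell centers
u = np.where((x > 0.3) & (x < 0.5), 1.0, 0.0)  # square pulse (a discontinuity)

for _ in range(50):
    flux = a * np.roll(u, 1)                   # upwind flux at each cell's left face
    u += dt / dx * (flux - np.roll(flux, -1))  # flux in minus flux out
```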

Finite Element Method (FEM)

Discretize the weak form of PDEs using basis functions. The workhorse of structural analysis.

Pros: Handles complex geometries, well-understood error bounds
Cons: Mesh generation is painful, remeshing for moving domains
Used in: Structural analysis, electromagnetics, acoustics
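
A toy assembly for the 1D Poisson problem -u'' = f with u(0) = u(1) = 0 shows the weak-form workflow (a sketch with linear hat functions and f = 1; real FEM codes generalize every piece of this):

```python
import numpy as np

# Assemble the stiffness matrix and load vector element by element, then
# solve for the interior nodal values.
n = 20                                   # number of elements
h = 1.0 / n
K = np.zeros((n + 1, n + 1))             # global stiffness matrix
F = np.zeros(n + 1)                      # global load vector

for e in range(n):
    ke = np.array([[1, -1], [-1, 1]]) / h    # local stiffness, linear elements
    fe = h / 2 * np.ones(2)                  # local load for f(x) = 1
    idx = [e, e + 1]
    K[np.ix_(idx, idx)] += ke
    F[idx] += fe

u = np.zeros(n + 1)                      # Dirichlet BCs: solve interior only
u[1:-1] = np.linalg.solve(K[1:-1, 1:-1], F[1:-1])
# The exact solution is u(x) = x(1 - x)/2; this matches it at the nodes.
```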

Direct Numerical Simulation (DNS)

Resolve every scale of turbulence without modeling. The gold standard for accuracy.

Pros: No turbulence modeling assumptions
Cons: Computationally brutal—scales as Re³, impractical for most engineering flows
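
The Re³ scaling is easy to feel with a back-of-envelope calculation (assuming grid points grow as Re^(9/4) and time steps as Re^(3/4); the numbers are purely illustrative):

```python
# Total DNS work ~ (grid points) * (time steps) ~ Re^(9/4) * Re^(3/4) = Re^3.
for Re in (1e3, 1e4, 1e5):
    cost = (Re / 1e3) ** 3               # cost relative to the Re = 1000 case
    print(f"Re = {Re:.0e}  ->  ~{cost:,.0f}x the work")
```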

Immersed Boundary Method (IBM)

The Immersed Boundary Method, pioneered by Charles Peskin in 1972 for simulating blood flow in the heart, offers a clever middle ground.

Instead of fitting the mesh to complex geometries, IBM:

  • Uses a simple Cartesian background grid (easy to generate)
  • Represents boundaries as immersed structures within the grid
  • Applies forcing functions to enforce boundary conditions

Pros: No body-fitted mesh needed, handles moving boundaries naturally, existing solvers can be adapted
Cons: Accuracy near boundaries can suffer, still fundamentally grid-based
Used in: Biological flows, fluid-structure interaction, flows with moving bodies
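
Here is a heavily simplified direct-forcing sketch, assuming a stationary cylinder and a single velocity component; real IBM codes spread boundary forces with discrete delta functions, which this omits:

```python
import numpy as np

# Immersed cylinder on a plain Cartesian grid: no body-fitted mesh, just a
# mask plus a forcing term that drives the fluid velocity to zero in the body.
nx = ny = 64
x, y = np.meshgrid(np.linspace(0, 1, nx), np.linspace(0, 1, ny))
solid = (x - 0.5) ** 2 + (y - 0.5) ** 2 < 0.15 ** 2   # the immersed boundary

u = np.ones((ny, nx))                   # uniform incoming flow
dt = 0.01
force = np.where(solid, -u / dt, 0.0)   # direct forcing: cancel velocity in the body
u += dt * force                         # u is now zero inside the cylinder
```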

IBM was a precursor to truly mesh-free thinking—it showed you don’t need the mesh to conform to geometry. But you still need a mesh.

The Common Problem

All traditional methods require a mesh:

  1. Generate geometry → 2. Create mesh → 3. Solve → 4. Geometry changes → 5. Remesh → 6. Solve again

For complex geometries or moving boundaries, steps 2 and 5 dominate. Engineers joke that CFD stands for “Colors For Directors” because you spend more time meshing than analyzing.

PINNs: The Mesh-Free Revolution

Physics-Informed Neural Networks, pioneered by George Karniadakis and his collaborators at Brown University, flip the paradigm.

How PINNs Work

Instead of discretizing on a mesh, PINNs:

  1. Sample collocation points randomly anywhere in the domain
  2. Evaluate the PDE residual at each point using automatic differentiation
  3. Minimize the residual via neural network training

Loss = L_PDE + L_BC + L_IC

L_PDE = Σ |N(u) - f|²     # PDE residual at collocation points
L_BC  = Σ |u - g|²        # Boundary condition error
L_IC  = Σ |u(t=0) - h|²   # Initial condition error

The neural network becomes a continuous approximation to the solution field.
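
To make that loss concrete, here is a minimal PyTorch sketch for the 1D heat equation u_t = α u_xx (the network size, point count, and α are illustrative choices, not values from any PINN paper):

```python
import torch

# Minimal PINN ingredients for u_t = alpha * u_xx; `net` maps (x, t) -> u.
net = torch.nn.Sequential(
    torch.nn.Linear(2, 64), torch.nn.Tanh(), torch.nn.Linear(64, 1))
alpha = 0.01

def pde_residual(xt):
    xt = xt.requires_grad_(True)
    u = net(xt)
    du = torch.autograd.grad(u.sum(), xt, create_graph=True)[0]
    u_x, u_t = du[:, 0:1], du[:, 1:2]
    u_xx = torch.autograd.grad(u_x.sum(), xt, create_graph=True)[0][:, 0:1]
    return u_t - alpha * u_xx                    # N(u) - f, with f = 0

xt = torch.rand(1024, 2)                         # random collocation points
loss_pde = pde_residual(xt).pow(2).mean()        # L_PDE; add L_BC and L_IC similarly
loss_pde.backward()                              # gradients flow to the weights
```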

Why Mesh-Free Matters

Collocation points anywhere: No predefined grid. Sample via Latin Hypercube Sampling, random sampling, or adaptive refinement—wherever the physics needs resolution.
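
For example, scipy's quasi-Monte Carlo module hands you Latin Hypercube collocation points in a few lines (the domain bounds here are illustrative):

```python
from scipy.stats import qmc

# Latin Hypercube samples over a 2D (x, t) domain: space-filling by
# construction, with no grid anywhere in sight.
sampler = qmc.LatinHypercube(d=2, seed=0)
points = sampler.random(n=1024)                       # samples in the unit square
points = qmc.scale(points, [0.0, 0.0], [1.0, 0.3])    # map to x in [0,1], t in [0,0.3]
```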

Complex geometries: Irregular boundaries? Just sample points along the edge. No mesh fitting required.

Moving domains: The domain changed? Sample new points. No remeshing.

Automatic differentiation: Computing derivatives (for Navier-Stokes, heat equation, etc.) is trivial with autodiff. No finite difference stencils.

The PINN Problem

PINNs have a fundamental limitation: you train a new network for every simulation.

  • New initial conditions → retrain
  • Different boundary conditions → retrain
  • Changed parameters (Reynolds number, etc.) → retrain

This is unsatisfying from an ML perspective. We’re not learning the physics—we’re just curve-fitting each individual problem.

For a single complex simulation, PINNs can be faster than traditional solvers. But for parametric studies (optimize a design across 1000 configurations), you’re running 1000 training jobs.

Neural Operators: Train Once, Solve Forever

The insight: instead of learning a solution, learn the solution operator.

What’s an Operator?

In PDE-speak, an operator maps inputs (initial conditions, boundary conditions, parameters) to outputs (the solution field):

G: (IC, BC, parameters) → u(x, t)

Traditional solvers approximate G by discretizing the PDE. Neural operators learn G directly from data or physics constraints.

Fourier Neural Operator (FNO)

The Fourier Neural Operator by Zongyi Li et al. (2020) was the breakthrough:

  • Operates in Fourier space: Learns global features efficiently
  • Resolution-invariant: Train at one resolution, predict at another
  • Fast: 3 orders of magnitude faster than traditional PDE solvers
  • Zero-shot super-resolution: Predict beyond training resolution

FNO learns the mapping from input functions to output functions, not just point values.
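
A stripped-down 1D Fourier layer shows the mechanics (a sketch in the spirit of FNO, not the reference implementation; keeping only the lowest `modes` frequencies is what buys resolution invariance):

```python
import torch

# FFT -> multiply the lowest Fourier modes by learned complex weights -> inverse FFT.
class SpectralConv1d(torch.nn.Module):
    def __init__(self, channels: int, modes: int):
        super().__init__()
        self.modes = modes
        scale = 1.0 / channels
        self.weights = torch.nn.Parameter(
            scale * torch.randn(channels, channels, modes, dtype=torch.cfloat))

    def forward(self, x):                           # x: (batch, channels, grid)
        x_ft = torch.fft.rfft(x)                    # to Fourier space
        out_ft = torch.zeros_like(x_ft)
        out_ft[:, :, :self.modes] = torch.einsum(   # mix channels, mode by mode
            "bim,iom->bom", x_ft[:, :, :self.modes], self.weights)
        return torch.fft.irfft(out_ft, n=x.size(-1))  # back to physical space

layer = SpectralConv1d(channels=8, modes=12)
coarse = layer(torch.randn(4, 8, 64))               # train at 64 grid points...
fine = layer(torch.randn(4, 8, 256))                # ...evaluate at 256: same weights
```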

Physics-Informed Neural Operators (PINO)

PINO combines the best of both worlds:

  • Data-driven: Learn from simulation data at coarse resolution
  • Physics-constrained: Enforce PDE at fine resolution
  • Hybrid training: Cheap data + expensive physics

The key innovation: use coarse training data to initialize, then refine with physics constraints at higher resolution. You get FNO’s efficiency with PINN’s physics fidelity.
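
Schematically, the hybrid objective might look like the sketch below; every name is illustrative, and the loss weights are a design choice rather than a prescription from the PINO paper.

```python
import torch

# Hypothetical PINO-style training objective: supervised fit to coarse solver
# data plus a PDE-residual penalty evaluated on a finer, unlabeled grid.
def pino_loss(model, a_coarse, u_coarse, a_fine, pde_residual,
              w_data=1.0, w_pde=0.5):
    data_loss = torch.mean((model(a_coarse) - u_coarse) ** 2)   # cheap data term
    phys_loss = torch.mean(pde_residual(model(a_fine)) ** 2)    # physics term
    return w_data * data_loss + w_pde * phys_loss
```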

Results:

  • Solves Kolmogorov flow where PINN fails (optimization challenges)
  • Handles long temporal transients accurately
  • Zero-shot generalization to new parameters

The Comparison

| Method | Mesh Required | Train Per Problem | Parametric | Speed |
| --- | --- | --- | --- | --- |
| FDM/FVM/FEM | ✅ Body-fitted | N/A | Solve each | Baseline |
| DNS | ✅ Fine mesh | N/A | Solve each | Slow |
| IBM | ✅ Cartesian (simple) | N/A | Solve each | ~Baseline |
| PINN | ❌ No | ✅ Yes | Train each | ~1x |
| FNO | ❌ No | ❌ Once | ✅ Yes | 1000x |
| PINO | ❌ No | ❌ Once | ✅ Yes | 1000x+ |

When to Use What

Use traditional methods (FEM/FVM/DNS) when:

  • You have well-established validated solvers
  • Regulatory certification requires specific methods
  • Single high-fidelity simulation (not parametric)

Use PINNs when:

  • Complex geometry that’s hard to mesh
  • Inverse problems (inferring parameters from data)
  • You only need a few simulations
  • Integrating sparse experimental data

Use Neural Operators (FNO/PINO) when:

  • Parametric studies (1000s of configurations)
  • Real-time simulation (digital twins, control)
  • Surrogate models for optimization
  • Data is available for training

The Practical Reality

Neural operators aren’t replacing FEM tomorrow. Here’s why:

  1. Training data: FNO/PINO need training data from somewhere—often from traditional solvers
  2. Validation: Engineering codes have decades of validation. Neural operators are new.
  3. Extrapolation: ML models can fail catastrophically outside training distribution
  4. Interpretability: Regulators want to understand why the simulation says what it says

But for inner-loop applications—optimization, uncertainty quantification, real-time control—neural operators are already winning. Run 10 FEM simulations to train, then evaluate 10,000 configurations in seconds.

Key Researchers

Charles Peskin (NYU) — Invented the Immersed Boundary Method (1972), foundational work on computational cardiology

George Karniadakis (Brown University) — Pioneer of PINNs, coined the term physics-informed neural networks.

Zongyi Li (Caltech/NVIDIA) — Lead author on the FNO and PINO papers.

Anima Anandkumar (Caltech/NVIDIA) — Neural operator theory and applications

Lu Lu (Yale) — DeepXDE framework, PINN extensions

Getting Started

Frameworks:

  • DeepXDE: Lu Lu's library for PINNs and operator learning
  • neuraloperator: PyTorch implementations of FNO from the original authors
  • NVIDIA Modulus: physics-ML framework covering PINNs and neural operators

Papers:

  • Raissi, Perdikaris & Karniadakis (2019), "Physics-informed neural networks", Journal of Computational Physics
  • Li et al. (2020), "Fourier Neural Operator for Parametric Partial Differential Equations"
  • Li et al. (2021), "Physics-Informed Neural Operator for Learning Partial Differential Equations"

The mesh problem isn’t solved—but it’s becoming optional. And that changes everything.


The Menon Lab covers emerging technologies at the intersection of AI and engineering. For more on scientific computing and simulation, get in touch.