News

Beyond language: Why world models could be the next frontier for enterprise AI

Database Expert
March 26, 2026
5 min read
#Artificial Intelligence #Emerging technology

IBM researchers have spent years building AI systems that simulate physical reality rather than just describing it. Now, some of the biggest names in the field are converging around that same idea, with billions of dollars riding on its potential.

When Turing Award winner Yann LeCun left Meta late last year to launch AMI Labs, a Paris-based startup devoted to building what he calls “world models,” he put a name and a funding round behind a critique that has been simmering in AI research circles for years. His core argument: language models predict text, not physical reality, and that gap limits what they can do for industries that run on physics, not prose.

“LeCun’s idea of world models is that systems should learn the latent structure and dynamics of reality, not just patterns in text,” Anuradha Bhamidipaty, an IBM Distinguished Engineer and Master Inventor working on a new initiative to build world models for physical assets, told IBM Think in an interview. “This view aligns with IBM’s long-standing focus on physics-aware, simulation-driven and scientifically grounded AI.”

The investment flowing into the field has been substantial. AMI Labs raised USD 1.03 billion at a USD 3.5 billion pre-money valuation. Backers include Bezos Expeditions, NVIDIA, Toyota Ventures and Samsung. World Labs, founded by AI pioneer Fei-Fei Li, separately raised USD 1 billion from investors including AMD, Autodesk, NVIDIA and Fidelity. Google DeepMind has also committed significant resources to world-model research, including its SIMA, Genie and Veo programs.

At the center of LeCun’s approach is a specific technical framework. JEPA, or Joint Embedding Predictive Architecture, is a learning method he proposed in 2022 that trains AI systems to develop abstract representations of their environment rather than generating outputs word by word. He has been explicit about why. “The world is unpredictable,” he told MIT Technology Review. “If you try to build a generative model that predicts every detail of the future, it will fail.”

AMI’s stated target customers include organizations that operate complex physical systems, such as manufacturers, aerospace companies, biomedical firms and pharmaceutical groups. LeCun has acknowledged that the timeline is long, describing AMI as a project that starts with fundamental research and could take years to reach commercial applications.
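LeCun’s contrast between generative prediction and JEPA-style latent prediction can be sketched in a few lines of NumPy. This is an illustrative toy, not AMI’s implementation: the two random 64-dimensional vectors stand in for consecutive observations, and `W_enc`, `W_pred` and `W_dec` are hypothetical untrained weights. The point is only where each objective measures error: a generative model is penalized in raw observation space, while JEPA is penalized in an abstract embedding space.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "observations": current state x and future state y,
# each a flat vector of raw sensor values (e.g. pixels).
x = rng.normal(size=64)
y = rng.normal(size=64)

# Hypothetical tiny networks: shared encoder, latent predictor, generative decoder.
W_enc = rng.normal(size=(8, 64)) * 0.1   # encoder: raw -> 8-dim embedding
W_pred = rng.normal(size=(8, 8)) * 0.1   # predictor: embedding -> embedding
W_dec = rng.normal(size=(64, 8)) * 0.1   # decoder: embedding -> raw space

def encode(v):
    return np.tanh(W_enc @ v)

# Generative objective: predict every raw detail of the future observation.
# The model is also penalized for details it could never know.
y_hat = W_dec @ encode(x)
generative_loss = np.mean((y_hat - y) ** 2)

# JEPA objective: predict the future only in embedding space, where
# unpredictable surface detail has been abstracted away.
s_y_pred = np.tanh(W_pred @ encode(x))
jepa_loss = np.mean((s_y_pred - encode(y)) ** 2)

print(f"raw-space loss:    {generative_loss:.3f}")
print(f"latent-space loss: {jepa_loss:.3f}")
```

In a real system both losses would drive training; the sketch only shows that the JEPA error is computed between 8-dimensional embeddings rather than 64-dimensional raw observations.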

IBM researchers have been developing what Bhamidipaty describes as “asset-agnostic simulation frameworks”: systems that generate thousands of trajectories to learn how physical assets transition between states, then use those learned dynamics to evaluate interventions before they happen. The frameworks connect to the IBM Maximo Application Suite, an asset management platform, which links AI outputs to real-world work orders, parts inventories and maintenance policies.

For enterprise settings, that matters in ways a language model cannot address, Bhamidipaty said. A language model can describe what typically happens when a piece of industrial equipment fails. What it cannot reliably do is simulate whether a specific maintenance decision will cause a specific asset to fail, estimate the cost of that outcome or recommend an intervention. “Such reasoning is backed by an evolving representation of the asset, process or supply network,” she said.

One documented case involves Sund & Bælt, a Danish company that manages major infrastructure, including the Øresund Bridge. “Partnering with IBM, they created an AI, IoT and digital twin-powered system to help prolong the lifespan of aging infrastructure,” Bhamidipaty said. She added that the system streamlined inspections and shifted the organization toward predictive rather than reactive maintenance.

Meanwhile, IBM’s collaboration with NASA on weather and climate forecasting has produced what Bhamidipaty describes as large-scale spatiotemporal models, systems designed to learn how atmospheric conditions evolve across space and time, an architecture she said is “conceptually similar to world-model architectures.”

In a more recent example, IBM researchers used quantum computing simulations to model the electronic dynamics of a half-Möbius molecule, a structure that had not previously been physically observed, and validated its existence through simulation before synthesis. Bhamidipaty described the work as an example of the “deep, physics-grounded predictive modeling that world-model research aspires to.”

How AI systems should be structured internally has emerged as one of the more consequential practical questions in the world-model discussion, particularly for enterprise deployments where the cost of a wrong recommendation is high.

“Architecturally, IBM separates the roles,” Bhamidipaty said. Large language models (LLMs) in her team’s frameworks handle configuration and explanation: parsing equipment manuals, harmonizing maintenance records and generating human-readable summaries. The actual forecasting, counterfactual reasoning and policy optimization fall to what she calls “predictive/dynamical models,” systems trained on operational data to represent physical behavior.

The separation matters, Bhamidipaty said, because the failure modes are different. A language model that hallucinates a fictional reference in a summary is an inconvenience. A model that hallucinates a fictional equipment state and triggers a real-world intervention is a different problem category. Researchers working on world models for science and education have noted that faithfulness to real-world dynamics is more important than surface realism, because a simulation of a cell or a surgical procedure is only useful to the extent that it accurately reflects underlying physics.
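The loop Bhamidipaty describes for physical assets, generate many trajectories, learn the transition dynamics from them, then score an intervention before acting, can be illustrated with a toy Markov model of a single asset. Everything here is an assumption for illustration: the three states, the transition probabilities and the modeled effect of preventive maintenance are invented, and the real frameworks operate at far larger scale and connect to Maximo work orders.

```python
import numpy as np

rng = np.random.default_rng(42)
STATES = ["healthy", "degraded", "failed"]

# Hypothetical ground-truth dynamics of one asset (unknown to the model).
TRUE_P = np.array([[0.90, 0.09, 0.01],
                   [0.00, 0.80, 0.20],
                   [0.00, 0.00, 1.00]])

def simulate(P, steps=20):
    """One trajectory of state indices under transition matrix P."""
    s, traj = 0, [0]
    for _ in range(steps):
        s = rng.choice(3, p=P[s])
        traj.append(s)
    return traj

# 1) Generate thousands of trajectories and learn the dynamics from them.
counts = np.zeros((3, 3))
for _ in range(2000):
    t = simulate(TRUE_P)
    for a, b in zip(t, t[1:]):
        counts[a, b] += 1
P_learned = counts / counts.sum(axis=1, keepdims=True)

# 2) Evaluate an intervention *before* acting: preventive maintenance that
# (hypothetically) behaves like a 70% chance of restoring a degraded asset.
P_maint = P_learned.copy()
P_maint[1] = 0.7 * P_learned[0] + 0.3 * P_learned[1]

def failure_prob(P, horizon=20):
    """Probability the asset has failed within `horizon` steps, starting healthy."""
    dist = np.array([1.0, 0.0, 0.0])
    for _ in range(horizon):
        dist = dist @ P
    return dist[2]

print(f"failure risk, no action:        {failure_prob(P_learned):.2f}")
print(f"failure risk, with maintenance: {failure_prob(P_maint):.2f}")
```

The design choice mirrors the article's point: the decision is evaluated against learned dynamics, in simulation, before any real-world work order is issued.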

The practical implications of world models, Bhamidipaty said, are straightforward. Today, a supply chain manager gets a warning when a disruption is coming. A world model would instead simulate the outcomes of different responses, such as rerouting shipments, switching suppliers, or adjusting inventory, before a decision is made.

The scientific applications could extend further. “World models move beyond data mining to accelerating hypothesis testing and design exploration across materials, energy,” Bhamidipaty said.

For enterprise applications, the foundation is already being laid in narrow domains, from bridge inspection to climate modeling to molecular simulation. Whether the underlying principles can scale into something more general is the question researchers on all sides are now working to answer.

“The key change,” Bhamidipaty said, “is that with AI grounded in the physics of a process or a business, enterprises gain a tool that doesn’t just predict the next word, but enables reasoning about ‘What will happen if we change X?’”
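The supply-chain scenario described above, choosing among responses by simulating their outcomes first, can be sketched as a toy Monte-Carlo comparison. The response options, delay distributions and cost figures below are all invented for illustration; a real world model would learn such dynamics from operational data rather than take them as hard-coded assumptions.

```python
import random

random.seed(7)

# Hypothetical responses to a forecast disruption, each with an assumed
# fixed cost and an assumed distribution of remaining delay (in days).
RESPONSES = {
    "do nothing":       {"fixed_cost": 0,     "delay_days": lambda: random.gauss(10, 3)},
    "reroute shipment": {"fixed_cost": 20000, "delay_days": lambda: random.gauss(3, 1)},
    "switch supplier":  {"fixed_cost": 35000, "delay_days": lambda: random.gauss(1, 0.5)},
}
COST_PER_DELAY_DAY = 5000  # assumed stockout/penalty cost per day of delay

def expected_cost(option, n_sims=5000):
    """Monte-Carlo estimate of total cost for one response option."""
    total = 0.0
    for _ in range(n_sims):
        delay = max(0.0, option["delay_days"]())
        total += option["fixed_cost"] + delay * COST_PER_DELAY_DAY
    return total / n_sims

# Simulate every option before committing to any of them.
ranked = sorted(RESPONSES.items(), key=lambda kv: expected_cost(kv[1]))
for name, opt in ranked:
    print(f"{name:>16}: ~${expected_cost(opt):,.0f} expected")
```

Under these invented numbers, rerouting wins because its fixed cost is outweighed by the delay it avoids; the structure of the comparison, not the specific answer, is what a world model would supply.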
