World models have become the defining structural shift in artificial intelligence in 2025. Nvidia's Cosmos, OpenAI's Sora, Meta's V-JEPA 2, and rival systems from Google DeepMind and World Labs mark a move away from predicting language and towards simulating how the real world behaves in space, time and physics, with the $190bn global video games industry, where triple-A production budgets now exceed $1bn, serving as the first proving ground. Instead of guessing the next word, these systems reconstruct scenes, anticipate motion and calculate the consequences of actions, a capability that is already beginning to reshape gaming, robotics and industrial AI. This is reported by The WP Times, citing the Financial Times.
For more than a decade, progress in artificial intelligence was measured by how convincingly systems could generate text, images or software code. That progress was genuine, but also narrow. Large language models learn statistical patterns in language; they do not understand gravity, movement or physical causality. They know what is usually written next in a sentence, but not what happens next in a room, on a road or inside a virtual world.

World models operate on a different level. They are trained on video, motion and spatial data so that they learn how objects move, how light changes and how environments respond when something acts within them. This allows AI to run internal simulations of possible futures before taking a single step in the real one: the basis of planning, prediction and intelligent control rather than mere imitation.
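The idea of "imagining before acting" can be sketched in a few lines of code. The example below is a deliberately simplified illustration, not how Cosmos, Sora or V-JEPA 2 actually work: the learned dynamics are replaced by a hand-written toy model of a ball pushed along a line, and the planner simply rolls out random candidate action sequences inside that model and keeps the one whose imagined outcome lands closest to a goal. All names and parameters here are invented for illustration.

```python
import random

# Toy stand-in for a world model: predicts the next state of a ball on a
# line from its current state and an applied force. A real world model
# would learn this mapping from video and interaction data; here simple
# friction physics is hard-coded purely for illustration.
def predict(state, action, dt=0.1, friction=0.05):
    pos, vel = state
    vel = vel + action * dt - friction * vel
    pos = pos + vel * dt
    return (pos, vel)

# Planning by imagination: roll out candidate action sequences inside the
# model and keep the one whose final position lands nearest the goal, so
# no action is ever taken in the "real" world during the search.
def plan(state, goal, horizon=20, candidates=200):
    best_seq, best_err = None, float("inf")
    for _ in range(candidates):
        seq = [random.uniform(-1.0, 1.0) for _ in range(horizon)]
        s = state
        for a in seq:
            s = predict(s, a)
        err = abs(s[0] - goal)
        if err < best_err:
            best_seq, best_err = seq, err
    return best_seq, best_err

random.seed(0)
actions, err = plan(state=(0.0, 0.0), goal=1.0)
print(f"best imagined plan ends within {err:.3f} of the goal")
```

The point of the sketch is the loop structure: prediction replaces trial and error, which is exactly why the same pattern is attractive for robots and vehicles, where real-world mistakes are expensive.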
Why the video games industry is the first battlefield
Gaming has become the first large-scale commercial test of world models because it already operates inside simulated worlds — and because the money is large enough to justify a new technological stack. The global games market is projected to generate nearly $190bn this year, according to Newzoo, while the biggest “triple-A” releases can take years to develop and cost more than $1bn per title.
World models allow studios to generate entire 3D environments from prompts, not just individual assets. Instead of building levels manually, developers can explore, edit and iterate inside AI-generated worlds that obey physical rules. That changes production economics: teams can prototype dozens of ideas before committing budget, and smaller studios can reach visual and mechanical complexity that previously required hundreds of specialists.
The technology is already moving into live games. AI-driven characters and dynamic environments are being deployed not as background tools but as part of the player experience. Studios report dramatic increases in development speed, turning months of work into days. In an industry under constant pressure to release on time, this ability to simulate, test and polish inside a world model is becoming a competitive advantage.
Why Nvidia, Meta and Google are racing for physical AI
The real prize is not gaming. It is physical intelligence. Nvidia’s Cosmos platform is designed to train robots, industrial machines and autonomous vehicles inside realistic simulations before they ever touch the real world. Meta’s V-JEPA 2 uses vast volumes of video and interaction data to teach AI how gravity, motion and object permanence work. OpenAI’s Sora shows how coherent, physics-consistent video can become training material for future agents. Google DeepMind and World Labs are building similar systems to generate interactive, navigable environments.

Together, these platforms represent a shift from digital pattern recognition to world-scale simulation. An AI that can model how a factory floor changes when a robot moves, or how traffic reacts when a vehicle turns, can replace costly real-world testing with virtual experimentation. That has direct implications for logistics, manufacturing, healthcare, climate modelling and defence — all sectors where mistakes are expensive and slow.
The economic tension is unavoidable. World models reduce the need for large teams to build, test and refine physical and digital systems. They also create new forms of creative and technical work around simulation, control and design. The balance between productivity and displacement will define the next phase of AI policy in Britain and across Europe.
What is clear in 2025 is that artificial intelligence is no longer about sounding human. It is about understanding the world well enough to operate inside it. World models are the bridge between code and reality — and the industries that control them will shape how work, creativity and technology evolve for the next generation.